Dissertations / Theses on the topic 'Markov chain'

To see the other types of publications on this topic, follow the link: Markov chain.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Markov chain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, their performance is unreliable when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. In this thesis we propose a new Monte Carlo framework in which we build efficient high-dimensional proposal distributions using SMC methods. This allows us to design effective MCMC algorithms in complex scenarios where standard strategies fail. We demonstrate these algorithms on a number of example problems, including simulated tempering, nonlinear non-Gaussian state-space models, and protein folding.
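
As context for the particle MCMC framework described above, the random-walk Metropolis-Hastings kernel it builds on fits in a few lines. The Python below is a minimal sketch, not the thesis's algorithm; the Gaussian target, step size, and seed are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis: the baseline kernel that particle MCMC extends."""
    rng = rng or np.random.default_rng(0)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step_size * rng.standard_normal()
        # Accept with probability min(1, pi(proposal) / pi(x))
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Example: sample from a standard normal (log density known up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=10_000)
print(draws.mean(), draws.std())
```
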
APA, Harvard, Vancouver, ISO, and other styles
3

Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.

Full text
Abstract:
The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference and as a means for estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be presented as an image, and involve using various means of dividing the image into subimages that can be processed with some degree of independence. Where possible the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using our technique under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
APA, Harvard, Vancouver, ISO, and other styles
4

Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.

Full text
Abstract:
This work is an extension of the classical Cox-Ross-Rubinstein discrete-time market model, in which only one risky asset is considered. We introduce another risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain. The interest rate then varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea about the use of the model.
APA, Harvard, Vancouver, ISO, and other styles
5

Martin, Russell Andrew. "Paths, sampling, and Markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Estandia, Gonzalez Luna Antonio. "Stable approximations for Markov-chain filters." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) is one of the most popular statistical inference methods in machine learning. Recent work shows that a significant improvement of the statistical efficiency of MCMC on complex distributions can be achieved by exploiting geometric properties of the target distribution. This is known as geometric MCMC. However, many such methods, like Riemannian manifold Hamiltonian Monte Carlo (RMHMC), are computationally challenging to scale up to high dimensional distributions. The primary goal of this thesis is to develop novel geometric MCMC methods applicable to large-scale problems. To overcome the computational bottleneck of computing second order derivatives in geometric MCMC, I propose an adaptive MCMC algorithm using an efficient approximation based on Limited memory BFGS. I also propose a simplified variant of RMHMC that is able to work effectively on a larger scale than previous methods. Finally, I address an important limitation of geometric MCMC, namely that it is only available for continuous distributions. I investigate a relaxation of discrete variables to continuous variables that allows us to apply the geometric methods. This is a new direction of MCMC research which is of potential interest to many applications. The effectiveness of the proposed methods is demonstrated on a wide range of popular models, including generalised linear models, conditional random fields (CRFs), hierarchical models and Boltzmann machines.
APA, Harvard, Vancouver, ISO, and other styles
8

Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.

Full text
Abstract:

Generating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task, if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties in sampling with MCMC methods include, but are not limited to, dealing with high-dimensional and multimodal problems, high computational cost due to extremely large datasets in Bayesian machine learning models, and the lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.

One contribution of this dissertation is generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, and an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving—meaning that in the state space, the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume-preserving dynamics for proposing MCMC moves is proposed. Examples, including isokinetic dynamics and variable-mass Hamiltonian dynamics with an explicit integrator, are designed with fewer restrictions based on the general theory. Experiments show improvement in efficiency for sampling high-dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided. Two methods to reduce the bias in the distribution of samples are proposed. One is to correct the dynamics by using an estimated noise based on subsampled data, and the other is to introduce additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. We propose a more reliable indicator—the longest integrated autocorrelation time over all functions in the state space—for detecting convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.

Minor contributions include a general framework of changing variables, and a numerical integrator for the Hamiltonian dynamics with fourth order accuracy. The idea of changing variables is to transform the potential energy function as a function of the original variable to a function of the new variable, such that undesired properties can be removed. Two examples are provided and preliminary experimental results are obtained for supporting this idea. The fourth order integrator is constructed by combining the idea of the simplified Takahashi-Imada method and a two-stage Hessian-based integrator. The proposed method, called two-stage simplified Takahashi-Imada method, shows outstanding performance over existing methods in high-dimensional sampling problems.
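
For reference, the standard volume-preserving HMC step that the dissertation generalizes can be sketched with a leapfrog integrator. This is a minimal illustration assuming a Gaussian target; it implements none of the non-volume-preserving dynamics or fourth-order integrators proposed in the thesis.

```python
import numpy as np

def hmc_step(log_p, grad_log_p, x, step=0.1, n_leapfrog=20, rng=None):
    """One standard (volume-preserving) HMC transition using a leapfrog integrator."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)              # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_p(x_new)       # half kick
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                     # drift
        p_new += step * grad_log_p(x_new)         # full kick
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_p(x_new)       # final half kick
    # Metropolis test on the Hamiltonian (negative log joint density)
    h_old = -log_p(x) + 0.5 * p @ p
    h_new = -log_p(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.random()) < h_old - h_new else x

# Example: standard 2-D Gaussian target.
log_p = lambda x: -0.5 * x @ x
grad = lambda x: -x
x = np.zeros(2)
for _ in range(1000):
    x = hmc_step(log_p, grad, x)
```
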

APA, Harvard, Vancouver, ISO, and other styles
9

Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.

Full text
Abstract:
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function with or without constraint are essential components for the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools that can be applied to a wider range of problems are developed. These tools allow us to easily prove specific Markov chain properties (irreducibility, aperiodicity and the fact that compact sets are small sets for the Markov chain) on the Markov chains studied; obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult. Then different ESs are analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate the dependence of the convergence or divergence rate of the algorithm on parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, and that this implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.
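
As a rough illustration of the setting analyzed, the sketch below runs a (1,\lambda)-ES with constant step-size on the linear function f(x) = x_1. The dimension, population size, and step-size are arbitrary assumptions, and cumulative step-size adaptation is omitted; on an unbounded linear function, successful optimization shows up as divergence of f.

```python
import numpy as np

def one_comma_lambda_es(dim=10, lam=8, sigma=1.0, n_iter=200, seed=1):
    """(1,lambda)-ES with constant step-size on f(x) = x[0] (a linear function)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    history = []
    for _ in range(n_iter):
        # Sample lambda offspring around the parent and keep the best one.
        offspring = x + sigma * rng.standard_normal((lam, dim))
        x = offspring[np.argmin(offspring[:, 0])]
        history.append(x[0])
    return np.array(history)

# f is unbounded below, so the ES should drive x[0] steadily toward -infinity.
print(one_comma_lambda_es()[-5:])
```
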
APA, Harvard, Vancouver, ISO, and other styles
10

Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.

Full text
Abstract:
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time series modeling perspective.
APA, Harvard, Vancouver, ISO, and other styles
11

Murray, Iain Andrew. "Advances in Markov chain Monte Carlo methods." Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487199.

Full text
Abstract:
Probability distributions over many variables occur frequently in Bayesian inference, statistical physics and simulation studies. Samples from distributions give insight into their typical behavior and can allow approximation of any quantity of interest, such as expectations or normalizing constants. Markov chain Monte Carlo (MCMC), introduced by Metropolis et al. (1953), allows sampling from distributions with intractable normalization, and remains one of the most important tools for approximate computation with probability distributions. While not needed by MCMC, normalizers are key quantities: in Bayesian statistics marginal likelihoods are needed for model comparison; in statistical physics many physical quantities relate to the partition function. In this thesis we propose and investigate several new Monte Carlo algorithms, both for evaluating normalizing constants and for improved sampling of distributions. Many MCMC correctness proofs rely on using reversible transition operators; this can lead to chains exploring by slow random walks. After reviewing existing MCMC algorithms, we develop a new framework for constructing non-reversible transition operators from existing reversible ones. Next we explore and extend MCMC-based algorithms for computing normalizing constants. In particular we develop a new MCMC operator and Nested Sampling approach for the Potts model. Our results demonstrate that these approaches can be superior to finding normalizing constants by annealing methods and can obtain better posterior samples. Finally we consider 'doubly-intractable' distributions with extra unknown normalizer terms that do not cancel in standard MCMC algorithms. We propose using several deterministic approximations for the unknown terms, and investigate their interaction with sampling algorithms. We then develop novel exact-sampling-based MCMC methods, the Exchange Algorithm and Latent Histories. For the first time these algorithms do not require separate approximation before sampling begins. Moreover, the Exchange Algorithm outperforms the only alternative sampling algorithm for doubly intractable distributions.
APA, Harvard, Vancouver, ISO, and other styles
12

Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Brooks, Stephen Peter. "Convergence diagnostics for Markov Chain Monte Carlo." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking’ behaviour where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data. Although making the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify and standard algorithms tend to perform poorly when conditioning on high dimensional observations. This often requires further approximation by reducing the observations to lower dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods such as slice sampling and Hamiltonian Monte Carlo (HMC) within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values when standard ABC methods require reduction to lower dimensional summaries for tractability. Further we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
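
One building block mentioned above, slice sampling, is compact enough to sketch in the univariate case: the slice height acts as an auxiliary variable. This is a generic stepping-out sampler under an assumed Gaussian target, not the pseudo-marginal or ABC variants developed in the thesis.

```python
import numpy as np

def slice_sample(log_p, x0, n_steps, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out; the slice height is auxiliary."""
    rng = rng or np.random.default_rng(0)
    x, out = x0, []
    for _ in range(n_steps):
        log_y = log_p(x) + np.log(rng.random())  # auxiliary height under the density
        lo = x - w * rng.random()                # random initial bracket containing x
        hi = lo + w
        while log_p(lo) > log_y:                 # step out to the left
            lo -= w
        while log_p(hi) > log_y:                 # step out to the right
            hi += w
        while True:                              # sample the bracket, shrinking on rejects
            x_new = rng.uniform(lo, hi)
            if log_p(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        out.append(x)
    return np.array(out)

# Example: draws from N(0, 1); the sample standard deviation should be near 1.
print(slice_sample(lambda x: -0.5 * x**2, 0.0, 5000).std())
```
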
APA, Harvard, Vancouver, ISO, and other styles
16

Hua, Zhili. "Markov Chain Modeling for Multi-Server Clusters." W&M ScholarWorks, 2005. https://scholarworks.wm.edu/etd/1539626843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.

Full text
Abstract:
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space, and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general and more restricted classes of graphs, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally, b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which obtains better bounds on the mixing time in this case than existing theorems, in the case where b = 1 and the probability of a change in distance is proportional to the distance between the two states. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2D-colourings.
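
The single-site chains referred to above can be made concrete. The sketch below is the classic add/remove-an-edge chain on matchings of a small graph; by symmetry of the moves its stationary distribution is uniform over matchings. The graph and step count are assumptions, and this is not a specific chain from the thesis.

```python
import random

def matching_chain(edges, n_steps, seed=0):
    """Single-site Markov chain on matchings: pick a random edge, toggle it if legal."""
    rng = random.Random(seed)
    matching = set()
    matched = set()                      # vertices covered by the current matching
    for _ in range(n_steps):
        u, v = rng.choice(edges)
        if (u, v) in matching:           # remove the edge
            matching.discard((u, v))
            matched -= {u, v}
        elif u not in matched and v not in matched:
            matching.add((u, v))         # add the edge if both endpoints are free
            matched |= {u, v}
        # otherwise do nothing (a self-loop), which keeps the chain aperiodic
    return matching

# 4-cycle: matchings are {}, the four single edges, and two perfect matchings.
print(matching_chain([(0, 1), (1, 2), (2, 3), (3, 0)], 1000))
```
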
APA, Harvard, Vancouver, ISO, and other styles
18

Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.

Full text
Abstract:
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, to catch bugs as early in the development process as possible. The research was executed at Axis Communications AB with their products and systems in mind. The approach utilizes user data to dynamically generate a Markov chain model and with a Markov chain Monte Carlo method, strengthen that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model for test coverage or relevancy. The model is generated generically and can therefore be implemented in other API-driven systems. The model was designed with scalability in mind and further implementations can be made to increase the complexity and further specialize the model for individual needs.
APA, Harvard, Vancouver, ISO, and other styles
19

郭慈安 and Chi-on Michael Kwok. "Some results on higher order Markov Chain models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B31208654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kwok, Chi-on Michael. "Some results on higher order Markov Chain models /." [Hong Kong] : University of Hong Kong, 1988. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12432076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

rdlyons@indiana.edu. "Markov Chain Intersections and the Loop-Erased Walk." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1058.ps.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Stormark, Kristian. "Multiple Proposal Strategies for Markov Chain Monte Carlo." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9330.

Full text
Abstract:

The multiple proposal methods represent a recent simulation technique for Markov Chain Monte Carlo that allows several proposals to be considered at each step of transition. Motivated by the ideas of Quasi Monte Carlo integration, we examine how strongly correlated proposals can be employed to construct Markov chains with improved mixing properties. We proceed by giving a concise introduction to the Monte Carlo and Markov Chain Monte Carlo theory, and we supply a short discussion of the standard simulation algorithms and the difficulties of efficient sampling. We then examine two multiple proposal methods suggested in the literature, and we indicate the possibility of a unified formulation of the two methods. More essentially, we report some systematic exploration strategies for the two multiple proposal methods. In particular, we present schemes for the utilization of well-distributed point sets and maximally spread search directions. We also include a simple construction procedure for the latter type of point set. A numerical examination of the multiple proposal methods is performed on two simple test problems. We find that the systematic exploration approach may provide a significant improvement of the mixing, especially when the probability mass of the target distribution is ``easy to miss'' by independent sampling. For both test problems, we find that the best results are obtained with the QMC schemes. In particular, we find that the gain is most pronounced for a relatively moderate number of proposals. With fewer proposals, the properties of the well-distributed point sets will not be as relevant. For a large number of proposals, the independent sampling approach will be more competitive, since the coverage of the local neighborhood will then be better.

APA, Harvard, Vancouver, ISO, and other styles
23

Backåker, Fredrik. "The Google Markov Chain: convergence speed and eigenvalues." Thesis, Uppsala universitet, Matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sanborn, Adam N. "Uncovering mental representations with Markov chain Monte Carlo." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278468.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychological and Brain Sciences and Program in Neuroscience, 2007.
Source: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6994. Adviser: Richard M. Shiffrin. Title from dissertation home page (viewed May 21, 2008).
APA, Harvard, Vancouver, ISO, and other styles
25

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Full text
Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables where the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic behaviour of the sum and the maximum of a heavy-tailed random sum is identical. We therefore compare the random sum and the maximum value for various distributions, to investigate from which point one can use the asymptotic approximation. Furthermore, we propose a new method to estimate quantiles, and the estimator is shown to be efficient.
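
To make the objects concrete, here is a plain Monte Carlo sketch (not the MCMC estimator of the thesis) comparing the tail of a random sum against the tail of its maximum, assuming Pareto increments and a Poisson number of summands; for subexponential increments the two tail probabilities should approach each other at high thresholds.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_sum_and_max(n_samples=100_000, rate=5.0, alpha=1.5):
    """Random sums S = Y1 + ... + YN with Pareto(alpha) increments, N ~ Poisson(rate)."""
    sums, maxima = np.zeros(n_samples), np.zeros(n_samples)
    for i in range(n_samples):
        n = rng.poisson(rate)
        # Pareto via inverse CDF: P(Y > y) = y**(-alpha) for y >= 1.
        y = (1.0 - rng.random(n)) ** (-1.0 / alpha) if n > 0 else np.array([0.0])
        sums[i], maxima[i] = y.sum(), y.max()
    return sums, maxima

s, m = random_sum_and_max()
u = 50.0  # an assumed high threshold
# For heavy (subexponential) increments, P(S > u) ~ P(max > u) as u grows.
print((s > u).mean(), (m > u).mean())
```
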
APA, Harvard, Vancouver, ISO, and other styles
26

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Full text
Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology a Markov chain is simulated, with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probability in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y_1 + ... + Y_n exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y_1 + ... + Y_N exceeding a high threshold, where the number of increments N is random and independent of Y_1, Y_2, .... The third problem considers the solution X_m to a stochastic recurrence equation, X_m = A_m X_{m-1} + B_m, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments. In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y_1 + ... + Y_n exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers discrete-time Markov chains and the computation of general expectations of sample paths related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first passage probabilities. An unbiased estimator of the reciprocal probability for each corresponding problem is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.

APA, Harvard, Vancouver, ISO, and other styles
27

Jindasawat, Jutaporn. "Testing the order of a Markov chain model." Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hastie, David. "Towards automatic reversible jump Markov Chain Monte Carlo." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Groff, Jeffrey R. "Markov chain models of calcium puffs and sparks." W&M ScholarWorks, 2008. https://scholarworks.wm.edu/etd/1539623333.

Full text
Abstract:
Localized cytosolic Ca2+ elevations known as puffs and sparks are important regulators of cellular function that arise due to the cooperative activity of Ca2+-regulated inositol 1,4,5-trisphosphate receptors (IP3Rs) or ryanodine receptors (RyRs) co-localized at Ca2+ release sites on the surface of the endoplasmic reticulum or sarcoplasmic reticulum. Theoretical studies have demonstrated that the cooperative gating of a cluster of Ca2+-regulated Ca2+ channels modeled as a continuous-time discrete-state Markov chain may result in dynamics reminiscent of Ca2+ puffs and sparks. In such simulations, individual Ca2+-release channels are coupled via a mathematical representation of the local [Ca2+] and exhibit "stochastic Ca2+ excitability" where channels open and close in a concerted fashion. This dissertation uses Markov chain models of Ca2+ release sites to advance our understanding of the biophysics connecting the microscopic parameters of IP3R and RyR gating to the collective phenomenon of puffs and sparks.

The dynamics of puffs and sparks exhibited by release site models that include both Ca2+ coupling and nearest-neighbor allosteric coupling are studied. Allosteric interactions are included in a manner that promotes the synchronous gating of channels by stabilizing neighboring closed-closed and/or open-open channel pairs. When the strength of Ca2+-mediated channel coupling is systematically varied, simulations that include allosteric interactions often exhibit more robust Ca2+ puffs and sparks. Interestingly, the changes in puff/spark duration, inter-event interval, and frequency observed upon the random removal of allosteric couplings that stabilize closed-closed channel pairs are qualitatively different from the changes observed when open-open channel pairs, or both open-open and closed-closed channel pairs, are stabilized. The validity of a computationally efficient mean-field reduction applicable to the dynamics of a cluster of Ca2+-release Ca2+ channels coupled via the local [Ca2+] and allosteric interactions is also investigated.

Markov chain models of Ca2+ release sites composed of channels that are both activated and inactivated by Ca2+ are used to clarify the role of Ca2+ inactivation in the generation and termination of puffs and sparks. It is found that when the average fraction of inactivated channels is significant, puffs and sparks are often less sensitive to variations in the number of channels at release sites and the strength of Ca2+ coupling. While excessively fast Ca2+ inactivation can preclude puffs and sparks, moderately fast Ca2+ inactivation often leads to time-irreversible puffs/sparks whose termination is facilitated by the recruitment of inactivated channels throughout the duration of the puff/spark event. On the other hand, Ca2+ inactivation may be an important negative feedback mechanism even when its time constant is much greater than the duration of puffs and sparks. In fact, slow Ca2+ inactivation can lead to release sites with a substantial fraction of inactivated channels that exhibit nearly time-reversible puffs and sparks that terminate without additional recruitment of inactivated channels.
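
The continuous-time discrete-state simulations described above are typically run with a Gillespie-style algorithm. The sketch below simulates a toy cluster of two-state channels whose opening rate grows with the number of open channels, a crude stand-in for Ca2+-mediated coupling; all rates and sizes are invented for illustration and this is not the dissertation's release-site model.

```python
import numpy as np

def simulate_cluster(n_channels=20, t_end=5.0, k_open=1.0, k_close=2.0,
                     coupling=0.3, seed=7):
    """Gillespie simulation of a cluster of two-state channels; the opening rate
    increases with the number of open neighbours (toy coupling assumption)."""
    rng = np.random.default_rng(seed)
    t, n_open = 0.0, 0
    times, opens = [0.0], [0]
    while t < t_end:
        rate_open = (n_channels - n_open) * k_open * (1 + coupling * n_open)
        rate_close = n_open * k_close
        total = rate_open + rate_close
        t += rng.exponential(1.0 / total)         # exponential waiting time
        n_open += 1 if rng.random() < rate_open / total else -1
        times.append(t)
        opens.append(n_open)
    return np.array(times), np.array(opens)

times, opens = simulate_cluster()
print(opens.max(), opens.mean())  # puff/spark-like bursts appear as excursions
```
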
APA, Harvard, Vancouver, ISO, and other styles
30

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Shuying. "Phylogenetic tree construction using Markov chain Monte Carlo /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182323916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Xu, Jason Qian. "Markov Chain Monte Carlo and Non-Reversible Methods." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244823.

Full text
Abstract:
The bulk of Markov chain Monte Carlo applications make use of reversible chains, relying on the Metropolis-Hastings algorithm or similar methods. While reversible chains have the advantage of being relatively easy to analyze, it has been shown that non-reversible chains may outperform them in various scenarios. Neal proposes an algorithm that transforms a general reversible chain into a non-reversible chain with a construction that does not increase the asymptotic variance. These modified chains work to avoid diffusive backtracking behavior which causes Markov chains to be trapped in one position for too long. In this paper, we provide an introduction to MCMC, and discuss the Metropolis algorithm and Neal’s algorithm. We introduce a decaying memory algorithm inspired by Neal’s idea, and then analyze and compare the performance of these chains on several examples.
APA, Harvard, Vancouver, ISO, and other styles
33

Levitz, Michael. "Separation, completeness, and Markov properties for AMP chain graph models /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zhu, Dongmei, and 朱冬梅. "Construction of non-standard Markov chain models with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/202358.

Full text
Abstract:
In this thesis, the properties of some non-standard Markov chain models and their corresponding parameter estimation methods are investigated. Several practical applications and extensions are also discussed. The estimation of model parameters plays a key role in the real-world applications of Markov chain models. Some widely used estimation methods for Markov chain models are based on the existence of stationary vectors. In this thesis, some weaker sufficient conditions for the existence of stationary vectors for high-order Markov chain models, multivariate Markov chain models and high-order multivariate Markov chain models are proposed. Furthermore, for multivariate Markov chain models, a new estimation method based on minimizing the prediction error is proposed. Numerical experiments are conducted to demonstrate the efficiency of the proposed estimation methods with an application in demand prediction. A Hidden Markov Model (HMM) is a bivariate stochastic process in which one of the processes is hidden and the other is observable. The distribution of the observable sequence depends on the hidden sequence. In a traditional HMM, the hidden states directly affect the observable states but not vice versa. However, in reality, the observable sequence may also have an effect on the hidden sequence. For this reason, the concept of the Interactive Hidden Markov Model (IHMM) is introduced, whose key idea is that the transitions of the hidden states depend on the observable states too. In this thesis, efforts are devoted to building a high-order IHMM where the probability laws governing both observable and hidden states can be written as a pair of high-order stochastic difference equations. We also propose a new model by capturing the effect of the observable sequence on the hidden sequence through the threshold principle. In this case, reference probability methods are adopted in estimating the optimal model parameters, while for the unknown threshold parameter, the Akaike Information Criterion (AIC) is used. We explore asset allocation problems from both domestic and foreign perspectives where the asset price dynamics follows an autoregressive HMM. The objective of an investor is not only to maximize the expected utility of the terminal wealth, but also to ensure that the risk of the portfolio described by the Value-at-Risk (VaR) does not exceed a specified level. In many decision processes, fuzziness is a major source of imprecision. As a generalization of usual Markov chains, the definition of fuzzy Markov chains is introduced. Compared to traditional Markov chain models, fuzzy Markov chains are relatively new and many of their properties are still unknown. Due to the potential applications of fuzzy Markov chains, we provide some characterizations to ensure the ergodicity of these chains under both max-min and max-product compositions.
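
Two ingredients of the abstract can be illustrated compactly: the count-based maximum-likelihood estimate of a transition matrix, and the max-min composition used for fuzzy Markov chains. The data are toy values, and the uniform fallback for states with no observed transitions is an assumed convention, not a method from the thesis.

```python
import numpy as np

def estimate_transition_matrix(seq, n_states):
    """Maximum-likelihood estimate of a first-order transition matrix from counts."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Rows with no observations fall back to uniform (an assumed convention).
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

def max_min_compose(P, Q):
    """Max-min composition for fuzzy Markov chains: (P o Q)_ij = max_k min(P_ik, Q_kj)."""
    return np.array([[np.max(np.minimum(P[i, :], Q[:, j]))
                      for j in range(Q.shape[1])] for i in range(P.shape[0])])

seq = [0, 1, 1, 2, 0, 1, 2, 2, 0]
P = estimate_transition_matrix(seq, 3)
print(P)
print(max_min_compose(P, P))  # a two-step fuzzy transition
```
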
APA, Harvard, Vancouver, ISO, and other styles
35

Möllering, Karin. "Inventory rationing : a new modeling approach using Markov chain theory /." Köln : Kölner Wiss.-Verl, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2942052&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Frühwirth-Schnatter, Sylvia, Stefan Pittner, Andrea Weber, and Rudolf Winter-Ebmer. "Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering." Institute of Mathematical Statistics, 2018. http://dx.doi.org/10.1214/17-AOAS1132.

Full text
Abstract:
In this paper we study data on discrete labor market transitions from Austria. In particular, we follow the careers of workers who experience a job displacement due to plant closure and observe - over a period of 40 quarters - whether these workers manage to return to a steady career path. To analyse these discrete-valued panel data, we apply a new method of Bayesian Markov chain clustering analysis based on inhomogeneous first order Markov transition processes with time-varying transition matrices. In addition, a mixture-of-experts approach allows us to model the probability of belonging to a certain cluster as depending on a set of covariates via a multinomial logit model. Our cluster analysis identifies five career patterns after plant closure and reveals that some workers cope quite easily with a job loss whereas others suffer large losses over extended periods of time.
APA, Harvard, Vancouver, ISO, and other styles
37

Banisch, Sven [Verfasser]. "Markov chain aggregation for agent-based models / Sven Banisch." Bielefeld : Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1057957089/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Bentley, Jason Phillip. "Exact Markov chain Monte Carlo and Bayesian linear regression." Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2534.

Full text
Abstract:
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical, due to large backwards coupling times. We demonstrate large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
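
The modified Gram-Schmidt procedure mentioned in the abstract is short enough to sketch. This is a generic orthonormalization of an assumed random predictor matrix, not the thesis's full transformation pipeline.

```python
import numpy as np

def modified_gram_schmidt(X):
    """Orthonormalize the columns of X with modified Gram-Schmidt
    (numerically more stable than the classical variant)."""
    Q = X.astype(float).copy()
    n_cols = Q.shape[1]
    for j in range(n_cols):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        # Remove the j-th direction from all later columns immediately.
        for k in range(j + 1, n_cols):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

X = np.random.default_rng(0).normal(size=(50, 4))  # assumed toy predictor matrix
Q = modified_gram_schmidt(X)
print(np.allclose(Q.T @ Q, np.eye(4)))  # True: columns are orthonormal
```
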
APA, Harvard, Vancouver, ISO, and other styles
39

Pooley, James P. "Exploring phonetic category structure with Markov chain Monte Carlo." Connect to resource, 2008. http://hdl.handle.net/1811/32221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Josefsson, Marcus, and Erik Rasmusson. "A Markov Chain Approach to Monetary Policy Decision Making." Thesis, KTH, Matematik (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103872.

Full text
Abstract:
Through monetary policy, central banks aim to prevent societal costs associated with high or unstable inflation. Forecasts and several other tools are used to provide guidance to this end, as outcomes of interest rate decisions are not fully predictable. This report presents a statistical approach, viewing the development of the economy as a Markov chain. The economy is thus represented by a finite number of states, composed of inflation and short-term variations in GDP. The Markov property is assumed to hold, that is, the economy moves between states over an appropriately chosen time period and the transition probabilities depend only on the initial state. Using the Markov Decision Process (MDP) framework, the transition probabilities between such states are evaluated using historical data, distinguished by the interest rate decision preceding the transition. Completing the model, a cost of inflation is defined for each state as the deviation from a set target. An optimal policy is then determined as a fixed decision for each state, minimizing the expected average cost incurred while using the model. The model is evaluated on data from Sweden and the U.S., for the periods 1994-2007 and 1954-2007 respectively. The results are assessed by the estimated transition probabilities as well as by the optimal policy suggested. While the Swedish observations are concluded to be too few in number to render valuable results, outcomes using the U.S. data agree in several aspects with what would have been expected from macroeconomic theory. In conclusion, the results suggest that the model might be applied to the problem, granted sufficient data is available for reliable transition probabilities to be estimated and that this estimation can be performed in an unbiased way. Presently, this appears to be a difficult task.
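
A toy version of the model class described above, with all numbers assumed: per-action transition matrices, a per-state cost for deviating from the inflation target, and a brute-force search for the fixed decision per state that minimizes long-run average cost via the stationary distribution (assuming each policy induces a single recurrent class).

```python
import itertools
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability vector."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def best_fixed_policy(P_by_action, cost):
    """Enumerate fixed decisions per state; minimize long-run average cost."""
    n_states = len(cost)
    n_actions = len(P_by_action)
    best = (None, np.inf)
    for policy in itertools.product(range(n_actions), repeat=n_states):
        # Row s of the induced chain comes from the matrix of the action chosen in s.
        P = np.array([P_by_action[policy[s]][s] for s in range(n_states)])
        avg_cost = stationary_distribution(P) @ cost
        if avg_cost < best[1]:
            best = (policy, avg_cost)
    return best

# Toy example: 2 states (low/high inflation), 2 actions (hold/raise rates).
P_by_action = [np.array([[0.9, 0.1], [0.5, 0.5]]),   # hold
               np.array([[0.8, 0.2], [0.7, 0.3]])]   # raise
cost = np.array([0.0, 1.0])                          # deviation from the target
print(best_fixed_policy(P_by_action, cost))
```
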
APA, Harvard, Vancouver, ISO, and other styles
41

Meddin, Mona. "Genetic algorithms : a Markov chain and detail balance approach." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/29196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Nathan, Shaoul. "Derivatives pricing in a Markov chain jump-diffusion setting." Thesis, London School of Economics and Political Science (University of London), 2005. http://etheses.lse.ac.uk/1789/.

Full text
Abstract:
In this work we develop a Markov Chain Jump-Diffusion (MCJD) model, where we have a financial market in which there are several possible states. Asset prices in the market follow a generalised geometric Brownian motion, with drift and volatility depending on the state of the market. So, for example, one state may represent a bull market where drifts are high, whilst another state may represent a bear market where drifts are low. The state the market is in is governed by a continuous time Markov chain. We add to this diffusion process jumps in the asset price which occur when the market changes state, and the jump sizes are dependent on the states the market is transiting to and transiting from. We also allow the market to transit to the same state, which corresponds to a jump in the asset price with no change to the drift or volatility. We will develop conditions of no arbitrage in such a market, and methods for pricing derivatives of assets whose prices follow MCJD processes. We will also consider Term-Structure models where the short rate (or forward rate) follows an MCJD process.
APA, Harvard, Vancouver, ISO, and other styles
43

Fung, Siu-leung, and 馮紹樑. "Higher-order Markov chain models for categorical data sequences." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B26666224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Angelino, Elaine Lee. "Accelerating Markov chain Monte Carlo via parallel predictive prefetching." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13070022.

Full text
Abstract:
We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms. This dissertation demonstrates that MCMC inference can be accelerated in a model of parallel computation that uses speculation to predict and complete computational work ahead of when it is known to be useful. By exploiting fast, iterative approximations to the target density, we can speculatively evaluate many potential future steps of the chain in parallel. In Bayesian inference problems, this approach can accelerate sampling from the target distribution, without compromising exactness, by exploiting subsets of data. It takes advantage of whatever parallel resources are available, but produces results exactly equivalent to standard serial execution. In the initial burn-in phase of chain evaluation, it achieves speedup over serial evaluation that is close to linear in the number of available cores.
APA, Harvard, Vancouver, ISO, and other styles
45

Sagir, Yavuz. "Dynamic bandwidth provisioning using Markov chain based on RSVP." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37708.

Full text
Abstract:
An important aspect of wireless communication is efficiency. Efficient network resource management and quality of service (QoS) are goals that need to be achieved, especially when considering network delays. The cooperative nature of unmanned ground vehicle (UGV) networks requires that bandwidth allocation be shared fairly between individual UGV nodes, depending on necessity. In this thesis, we study the problem of dynamic bandwidth provisioning in a UGV network. Specifically, we integrate the use of a basic statistical model, known as the Markov chain, with a widely known network bandwidth reservation protocol, known as the Resource Reservation Protocol (RSVP). The Markov chain results are used with RSVP to identify specific bandwidth allocation requirements along a path such that data transmission along that path is successful. Using a wireless simulation program known as Qualnet, we analyze the bandwidth efficiency and show that this algorithm provides higher bandwidth guarantees and better overall QoS when compared with solely using RSVP in wireless communication networks.
APA, Harvard, Vancouver, ISO, and other styles
46

Vaičiulytė, Ingrida. "Study and application of Markov chain Monte Carlo method." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112440-55390.

Full text
Abstract:
This dissertation analyzes adaptive Markov chain Monte Carlo (MCMC) methods for building computationally efficient data-analysis algorithms whose decisions meet a prescribed accuracy. Parameter-estimation problems are formulated and solved for hierarchically constructed multivariate distributions: the skew t distribution, the Poisson-Gaussian model, and the stable symmetric vector law. To build the adaptive MCMC procedure, a sequential Monte Carlo sample-generation method is applied, introducing a statistical stopping rule and regulation of the Markov chain sample size. The statistical problems solved by this method reveal characteristic computational difficulties of MCMC. The effectiveness of the MCMC algorithms is studied with a statistical modeling method constructed in the dissertation. Experiments with athletes' data and with financial data of enterprises in the health-care industry confirmed that the numerical properties of the method correspond to the theoretical model. The methods and algorithms developed are also applied to build a model for the analysis of sociological data. These tests showed that the adaptive MCMC algorithm yields estimators of the examined distribution parameters with fewer chain links and roughly halves the volume of computation. The algorithms constructed in this dissertation can be used to study systems of a stochastic nature and to solve other statistical problems by the MCMC method.
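The flavour of such a sequential stopping rule can be sketched as follows (a generic illustration, not the dissertation's algorithm): the chain is extended batch by batch, and generation stops once the batch-means standard error of the running estimate falls below a prescribed tolerance. The target, batch size, and tolerance are placeholders.

```python
import numpy as np

def mh_batch(x, n, log_target, step, rng):
    """Extend the chain by n random-walk Metropolis steps from state x."""
    out = np.empty(n)
    lx = log_target(x)
    for i in range(n):
        y = x + step * rng.standard_normal()
        ly = log_target(y)
        if np.log(rng.random()) < ly - lx:
            x, lx = y, ly
        out[i] = x
    return out, x

def sequential_mcmc(log_target, tol=0.01, batch=2000, max_batches=100, seed=0):
    """Grow the chain batch by batch; stop once the batch-means standard
    error of the running mean estimate drops below tol."""
    rng = np.random.default_rng(seed)
    x, means, se = 0.0, [], float("inf")
    for _ in range(max_batches):
        draws, x = mh_batch(x, batch, log_target, 1.0, rng)
        means.append(draws.mean())
        if len(means) >= 5:                  # need several batches first
            se = np.std(means, ddof=1) / np.sqrt(len(means))
            if se < tol:
                break
    return float(np.mean(means)), se

est, se = sequential_mcmc(lambda x: -0.5 * x * x)
print(f"estimated mean: {est:.3f} +/- {se:.3f}")
```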
APA, Harvard, Vancouver, ISO, and other styles
47

Pereira, Fernanda Chaves. "Bayesian Markov chain Monte Carlo methods in general insurance." Thesis, City University London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mangoubi, Oren (Oren Rami). "Integral geometry, Hamiltonian dynamics, and Markov Chain Monte Carlo." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104583.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-101).
This thesis presents applications of differential geometry and graph theory to the design and analysis of Markov chain Monte Carlo (MCMC) algorithms. MCMC algorithms are used to generate samples from an arbitrary probability density π in computationally demanding situations, since their mixing times need not grow exponentially with the dimension of π. However, if π has many modes, MCMC algorithms may still have very long mixing times. It is therefore crucial to understand and reduce MCMC mixing times, and there is currently a need for global mixing-time bounds as well as algorithms that mix quickly for multi-modal densities. In the Gibbs sampling MCMC algorithm, the variance in the size of the modes intersected by the algorithm's search subspaces can grow exponentially in the dimension, greatly increasing the mixing time. We use integral geometry, together with the Hessian of π and the Chern-Gauss-Bonnet theorem, to correct these distortions and avoid this exponential increase in the mixing time. Towards this end, we prove a generalization of the classical Crofton formula in integral geometry that can greatly reduce the variance of Crofton's formula without introducing a bias. Hamiltonian Monte Carlo (HMC) algorithms are some of the most widely used MCMC algorithms. We use the symplectic properties of Hamiltonians to prove global Cheeger-type lower bounds for the mixing times of HMC algorithms, including Riemannian manifold HMC as well as No-U-Turn HMC, the workhorse of the popular Bayesian software package Stan. One consequence of our work is that energy-conserving Hamiltonian Markov chains cannot find far-apart sub-Gaussian modes in polynomial time. We then prove another generalization of Crofton's formula that applies to Hamiltonian trajectories, and use our generalized Crofton formula to improve the convergence speed of HMC-based integration on manifolds. We also present a generalization of the Hopf fibration acting on arbitrary β-ghost-valued random variables. For β = 4, the geometry of the Hopf fibration is encoded by the quaternions; we investigate the extent to which the elegant properties of this encoding are preserved when one replaces quaternions with general β > 0 ghosts.
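For readers unfamiliar with HMC, the following generic leapfrog sketch (standard textbook material, not code from the thesis) shows the symplectic integration step whose near-conservation of energy is what the Cheeger-type bounds above exploit; the step size and trajectory length are illustrative.

```python
import numpy as np

def hmc_step(x, log_pi, grad_log_pi, eps=0.1, n_leapfrog=20, rng=None):
    """One HMC step: resample momentum, integrate a leapfrog trajectory,
    then accept or reject with a Metropolis correction.

    The leapfrog scheme is symplectic, so the Hamiltonian
    H(x, p) = -log pi(x) + |p|^2 / 2 is nearly conserved along the
    trajectory, which keeps the acceptance rate high."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_pi(x_new)           # initial half kick
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                          # drift
        p_new += eps * grad_log_pi(x_new)             # full kick
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_pi(x_new)           # final half kick
    # Metropolis correction for the residual energy error.
    h_old = -log_pi(x) + 0.5 * (p @ p)
    h_new = -log_pi(x_new) + 0.5 * (p_new @ p_new)
    return x_new if np.log(rng.random()) < h_old - h_new else x

# Example: sample a 2-D standard normal.
x = np.zeros(2)
for _ in range(1000):
    x = hmc_step(x, lambda z: -0.5 * (z @ z), lambda z: -z)
```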
by Oren Mangoubi.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Doerschuk, Peter Charles. "A Markov chain approach to electrocardiogram modeling and analysis." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15224.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1985.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 393-401.
by Peter Charles Doerschuk.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
50

Persing, Adam. "Some contributions to particle Markov chain Monte Carlo algorithms." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23277.

Full text
Abstract:
Hidden Markov models (HMMs) (Cappe et al., 2005) and discrete-time stopped Markov processes (Del Moral, 2004, Section 2.2.3) are used to model phenomena in a wide range of fields. However, as practitioners develop more intricate models, analytical Bayesian inference becomes very difficult. In light of this issue, this work focuses on sampling from the posteriors of HMMs and stopped Markov processes using sequential Monte Carlo (SMC) (Doucet et al. 2008, Doucet et al. 2001, Gordon et al. 1993) and, more importantly, particle Markov chain Monte Carlo (PMCMC) (Andrieu et al., 2010). The thesis consists of three major contributions, which enhance the performance of PMCMC. The first focuses on HMMs and begins by introducing a new SMC smoothing (Briers et al. 2010, Fearnhead et al. 2010) estimate of the HMM's normalising constant; we prove the estimate's unbiasedness and a central limit theorem. We use this estimate to develop new PMCMC algorithms that, under certain algorithmic settings, require less computational time than the algorithms of Andrieu et al. (2010). Our new estimate also leads to the discovery of an optimal setting for the smoothers of Briers et al. (2010) and Fearnhead et al. (2010). As this setting is not available for the general class of HMMs, we develop three algorithms for approximating it. The second major work builds on Jasra et al. (2013) and Whiteley et al. (2012) to develop new SMC and PMCMC algorithms that sample from HMMs whose observations have intractable density functions. While these types of algorithms have appeared before (see Jasra et al. 2013, Jasra et al. 2012, and Martin et al. 2012), this work uses twisted proposals as in Whiteley et al. (2012) to reduce the variance of SMC estimates of the normalising constant, improving the convergence of PMCMC in some scenarios. Finally, the third project is concerned with inferring the unknown parameters of stopped Markov processes that are observed only upon reaching their terminal sets. Bayesian inference has not previously been attempted on this class of problems. The parameters are inferred through two new adaptive and non-adaptive PMCMC algorithms.
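The normalising-constant estimate at the heart of PMCMC can be illustrated with a generic bootstrap particle filter (a textbook sketch, not the thesis's smoothing or twisted-proposal estimators) for an invented linear-Gaussian HMM; PMCMC algorithms embed such an unbiased estimate of p(y_{1:T}) in their Metropolis-Hastings acceptance ratio. All model parameters below are placeholders.

```python
import numpy as np

def bootstrap_log_z(y, n_particles=500, a=0.9, sx=1.0, sy=0.5, seed=0):
    """Bootstrap particle filter for the toy HMM
        x_t = a * x_{t-1} + N(0, sx^2),   y_t = x_t + N(0, sy^2),
    returning an unbiased estimate of log p(y_{1:T})."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)      # illustrative initial draw
    log_z = 0.0
    for y_t in y:
        # Weight particles by the observation density N(y_t; x, sy^2).
        logw = -0.5 * ((y_t - x) / sy) ** 2 - np.log(sy * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        log_z += m + np.log(w.mean())         # running log-normaliser
        # Multinomial resampling, then propagation through the dynamics.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = a * x[idx] + sx * rng.standard_normal(n_particles)
    return log_z

y = 0.3 * np.cumsum(np.random.default_rng(1).standard_normal(50))
print(bootstrap_log_z(y))
```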
APA, Harvard, Vancouver, ISO, and other styles