To see the other types of publications on this topic, follow the link: Markov chain.

Dissertations / Theses on the topic 'Markov chain'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Markov chain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Yi ting. "Random generation of executions of concurrent systems." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS071.pdf.

Full text
Abstract:
Concurrency plays an important role in modern systems and programming. It refers to the phenomenon of multiple computations running simultaneously. These interleaved executions cause the so-called "state explosion problem". In this thesis, we aim at constructing a probabilistic framework on the executions of concurrent systems for the purpose of random generation. The uniform measure of executions is inspired by trace monoids defined on infinite traces. Trace theory has a strong combinatorial foundation around the Möbius polynomial. The irreducibility of trace monoids implies the strong connectivity of the digraph of cliques. Hence, a dominant eigenvalue exists and determines the growth rate of trace monoids. In our work, we view abstract concurrent systems as monoid actions on a finite set of states. This setting encompasses 1-bounded Petri nets. We give two interpretations to a uniform measure of executions for concurrent systems. One is constructed from the elementary cylinders in trace monoids. This uniform measure is realized by a Markov chain of states-and-cliques. The other is to study the Parry measure on the digraph of states-and-cliques. The difficulty in extending to concurrent systems is that the Perron-Frobenius theorem is not applicable. To resolve this problem, we establish a spectral property of irreducible concurrent systems. This allows us to distinguish the main components which determine the characteristic root of the system. We also prove the uniqueness of this uniform measure. The transition matrix can be obtained either from the Markov chain of states-and-cliques or from the Parry measure with the spectral radius of the dominant components.
APA, Harvard, Vancouver, ISO, and other styles
2

Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.

Full text
Abstract:
This work is an extension of the classical Cox-Ross-Rubinstein discrete time market model in which only one risky asset is considered. We introduce another risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain. Then, the interest rate varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea about the use of the model.
APA, Harvard, Vancouver, ISO, and other styles
3

Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.

Full text
Abstract:
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, to catch bugs as early in the development process as possible. The research was executed at Axis Communications AB with their products and systems in mind. The approach utilizes user data to dynamically generate a Markov chain model and with a Markov chain Monte Carlo method, strengthen that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model for test coverage or relevancy. The model is generated generically and can therefore be implemented in other API-driven systems. The model was designed with scalability in mind and further implementations can be made to increase the complexity and further specialize the model for individual needs.
APA, Harvard, Vancouver, ISO, and other styles
4

Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, their performance is unreliable when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. In this thesis we propose a new Monte Carlo framework in which we build efficient high-dimensional proposal distributions using SMC methods. This allows us to design effective MCMC algorithms in complex scenarios where standard strategies fail. We demonstrate these algorithms on a number of example problems, including simulated tempering, nonlinear non-Gaussian state-space models, and protein folding.
APA, Harvard, Vancouver, ISO, and other styles
6

Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.

Full text
Abstract:
The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference and as a means for estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be presented as an image, and involve using various means of dividing the image into subimages that can be processed with some degree of independence. Where possible the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using our techniques under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
APA, Harvard, Vancouver, ISO, and other styles
7

Frühwirth-Schnatter, Sylvia, Stefan Pittner, Andrea Weber, and Rudolf Winter-Ebmer. "Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering." Institute of Mathematical Statistics, 2018. http://dx.doi.org/10.1214/17-AOAS1132.

Full text
Abstract:
In this paper we study data on discrete labor market transitions from Austria. In particular, we follow the careers of workers who experience a job displacement due to plant closure and observe - over a period of 40 quarters - whether these workers manage to return to a steady career path. To analyse these discrete-valued panel data, we apply a new method of Bayesian Markov chain clustering analysis based on inhomogeneous first-order Markov transition processes with time-varying transition matrices. In addition, a mixture-of-experts approach allows us to model the probability of belonging to a certain cluster as depending on a set of covariates via a multinomial logit model. Our cluster analysis identifies five career patterns after plant closure and reveals that some workers cope quite easily with a job loss whereas others suffer large losses over extended periods of time.
APA, Harvard, Vancouver, ISO, and other styles
8

Michalaros, Anastasios. "Engagement of Individual Performance in the Application of Markov Chains Models in Hellenic Navy's Chain of Command." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/6835.

Full text
Abstract:
The recent financial crisis that Greece (Hellas) is suffering has restricted and reduced the budgets of many organizations. Among those, the Hellenic Ministry of Defense has begun examining ways to reduce costs while maintaining operational readiness. Retirement legislation is the first area the Hellenic Ministry of Defense is examining. Variables such as years of service required to receive a pension, years of service by pay grade, and the skills officers should possess for promotion were examined and recorded in ordinances (directives) issued by the president of the Hellenic Republic. However, these ordinances are expected to expand the number of officers in the middle pay grades. In an attempt to deal with potential increases in the middle and higher pay grades of the officer inventory, the Hellenic Ministry of Defense is examining an alternative plan of two parallel officer force structures: war and auxiliary. The primary structure will consist of war officers. These officers are considered top performers whose careers stop at the pay grade of flag officer. The auxiliary inventory includes those officers exhibiting lower performance, with the terminal pay grade of captain. The purpose of these parallel paths is to ensure all officers serve 35 years in order to receive full pensions. This thesis analyzed job performance from the perspective of experience, ability, motivation, and accomplishment of advanced degrees. It concluded that experience should be combined with education level as a reliable evaluation field. Through the use of weighting priorities, the Hellenic Navy should establish job performance as a single number, or officer ranking. Thus, top performers are distinguished from officers with lower performance on periodic evaluations. Using Markov-chain models and officer scores on job performance, the war and auxiliary inventories were examined. The war inventory was then adjusted to corresponding billets at every pay grade during a five-year period. The auxiliary officers were examined for future vacancies in the war inventory.
APA, Harvard, Vancouver, ISO, and other styles
9

Planting, Ralf. "The use of the DWV3 classification system in manufacturing companies for evaluating a market-specific supply chain strategy - A case study at Atlas Copco Industrial Technique." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103925.

Full text
Abstract:
The research topic of this study is market-specific supply chain strategy, and the research problem is defined as how manufacturing companies can use the DWV3 classification system to evaluate the opportunity for a market-specific supply chain strategy. What has been written about the DWV3 classification system is somewhat general in its nature, and the practitioner is left without detailed instructions on how to proceed with the analysis. Key elements of the DWV3 classification system that are not explicitly described in the literature are (1) how to measure each of the classification variables, (2) how to define a suitable limit for each measure in order to classify the products and (3) how to reason when sequencing the classification variables in the clustering analysis. Hence, the purpose of this thesis is to make the DWV3 classification system more available to practitioners, and thus the aim is to illustrate how to tackle the key elements of the framework by applying it to the Atlas Copco Industrial Technique Business Area (ITBA) product portfolio. A single-case study design was chosen as a suitable research approach for this thesis. The application of the DWV3 system to the ITBA product portfolio was considered the phenomenon under investigation, the case, of this study. Two sets of quantitative data were collected: demand data and product master data. The qualitative data collected related to the ITBA supply chain set-up and the products, as well as the customers' responsiveness requirements for each assortment included in the study. All qualitative data was collected through interviews. The findings of this study are summarized in a number of conclusions that can serve as guidelines for practitioners who are about to apply the DWV3 system. These are: (1) as far as possible, use measures at the single product level; (2) use measures that express each classification variable in a way that is relevant to the matching of demand characteristics and supply chain strategy; (3) be prepared to redefine initial measures in order to describe the studied products' characteristics in the best possible way; (4) develop measures that are based on available data or data that is feasible to attain; (5) adjust the number of codification levels to find the best trade-off between the level of detail in the cluster analysis and the number of populated segments; (6) alter the sequencing and repeat the cluster analysis to gain insight into the demand characteristics of the product portfolio; (7) the final sequencing of the classification variables must produce clusters that are relevant for the chosen production philosophy concepts.
APA, Harvard, Vancouver, ISO, and other styles
10

Martin, Russell Andrew. "Paths, sampling, and Markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Estandia, Gonzalez Luna Antonio. "Stable approximations for Markov-chain filters." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) is one of the most popular statistical inference methods in machine learning. Recent work shows that a significant improvement of the statistical efficiency of MCMC on complex distributions can be achieved by exploiting geometric properties of the target distribution. This is known as geometric MCMC. However, many such methods, like Riemannian manifold Hamiltonian Monte Carlo (RMHMC), are computationally challenging to scale up to high-dimensional distributions. The primary goal of this thesis is to develop novel geometric MCMC methods applicable to large-scale problems. To overcome the computational bottleneck of computing second-order derivatives in geometric MCMC, I propose an adaptive MCMC algorithm using an efficient approximation based on limited-memory BFGS. I also propose a simplified variant of RMHMC that is able to work effectively on a larger scale than previous methods. Finally, I address an important limitation of geometric MCMC, namely that it is only available for continuous distributions. I investigate a relaxation of discrete variables to continuous variables that allows us to apply the geometric methods. This is a new direction of MCMC research which is of potential interest to many applications. The effectiveness of the proposed methods is demonstrated on a wide range of popular models, including generalised linear models, conditional random fields (CRFs), hierarchical models and Boltzmann machines.
APA, Harvard, Vancouver, ISO, and other styles
13

Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.

Full text
Abstract:

Generating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task, if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties in sampling by MCMC methods include, but are not limited to, dealing with high-dimensional and multimodal problems, high computation cost due to extremely large datasets in Bayesian machine learning models, and a lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.

One contribution of this dissertation is generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, and an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving—meaning that in the state space, the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume-preserving dynamics for proposing MCMC moves is proposed. Examples, including isokinetic dynamics and variable-mass Hamiltonian dynamics with an explicit integrator, are designed with fewer restrictions based on the general theory. Experiments show improvement in efficiency for sampling high-dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided. Two methods to reduce the bias in the distribution of samples are proposed. One is to correct the dynamics by using an estimated noise based on subsampled data, and the other is to introduce additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. A more reliable indicator—the longest integrated autocorrelation time over all functions in the state space—is proposed for detecting the convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.

Minor contributions include a general framework of changing variables, and a numerical integrator for the Hamiltonian dynamics with fourth order accuracy. The idea of changing variables is to transform the potential energy function as a function of the original variable to a function of the new variable, such that undesired properties can be removed. Two examples are provided and preliminary experimental results are obtained for supporting this idea. The fourth order integrator is constructed by combining the idea of the simplified Takahashi-Imada method and a two-stage Hessian-based integrator. The proposed method, called two-stage simplified Takahashi-Imada method, shows outstanding performance over existing methods in high-dimensional sampling problems.

APA, Harvard, Vancouver, ISO, and other styles
14

Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.

Full text
Abstract:
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function with or without constraint are essential components for the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already-established links between ESs and Markov chains. The contributions of this thesis are then presented:

o General mathematical tools that can be applied to a wider range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity and the fact that compact sets are small sets for the Markov chain) on the Markov chains studied. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult.

o Then different ESs are analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate the dependence of the convergence or divergence rate of the algorithm on parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, and that this implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint.

Finally, these results are summed up, discussed, and perspectives for future work are explored.
APA, Harvard, Vancouver, ISO, and other styles
15

Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.

Full text
Abstract:
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model, as well as from a pure time series modeling perspective.
APA, Harvard, Vancouver, ISO, and other styles
16

Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.

Full text
Abstract:
In the dissertation a class of Markov chains defined by iterations of functions possessing a property of asymptotic homogeneity is investigated. Two problems are solved: 1) rather general conditions are established under which the chain has a unique stationary distribution; 2) for chains evolving on the real line, conditions are established under which the stationary distribution of the chain is heavy-tailed.
APA, Harvard, Vancouver, ISO, and other styles
17

Bhatnagar, Nayantara. "Annealing and Tempering for Sampling and Counting." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16323.

Full text
Abstract:
The Markov Chain Monte Carlo (MCMC) method has been widely used in practice since the 1950s in areas such as biology, statistics, and physics. However, it is only in the last few decades that powerful techniques for obtaining rigorous performance guarantees with respect to the running time have been developed. Today, with only a few notable exceptions, most known algorithms for approximately uniform sampling and approximate counting rely on the MCMC method. This thesis focuses on algorithms that use MCMC combined with an algorithm from optimization called simulated annealing, for sampling and counting problems. Annealing is a heuristic for finding the global optimum of a function over a large search space. It has recently emerged as a powerful technique used in conjunction with the MCMC method for sampling problems, for example in the estimation of the permanent and in algorithms for computing the volume of a convex body. We examine other applications of annealing to sampling problems as well as scenarios where it fails to converge in polynomial time. We consider the problem of randomly generating 0-1 contingency tables. This is a well-studied problem in statistics, as well as in the theory of random graphs, since it is also equivalent to generating a random bipartite graph with a prescribed degree sequence. Previously, the only algorithm known for all degree sequences was by reduction to approximating the permanent of a 0-1 matrix. We give a direct and more efficient combinatorial algorithm which relies on simulated annealing. Simulated tempering is a variant of annealing used for sampling in which a temperature parameter is randomly raised or lowered during the simulation. The idea is that by extending the state space of the Markov chain to a polynomial number of progressively smoother distributions, parameterized by temperature, the chain could cross bottlenecks in the original space which cause slow mixing. We show that simulated tempering mixes torpidly for the 3-state ferromagnetic Potts model on the complete graph. Moreover, we disprove the conventional belief that tempering can slow fixed-temperature algorithms by at most a polynomial in the number of temperatures and show that it can converge at a rate that is slower by at least an exponential factor.
APA, Harvard, Vancouver, ISO, and other styles
18

Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.

Full text
Abstract:
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space, and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general and in more restricted classes of graphs, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally, b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which obtains better bounds on the mixing time than existing theorems in the case where b = 1 and the probability of a change in distance is proportional to the distance between the two states. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2D-colourings.
APA, Harvard, Vancouver, ISO, and other styles
19

Webb, Jared Anthony. "A Topics Analysis Model for Health Insurance Claims." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3805.

Full text
Abstract:
Mathematical probability has a rich theory and powerful applications. Of particular note is the Markov chain Monte Carlo (MCMC) method for sampling from high-dimensional distributions that may not admit a naive analysis. We develop the theory of the MCMC method from first principles and prove its relevance. We also define a Bayesian hierarchical model for generating data. By understanding how data are generated we may infer hidden structure about these models. We use a specific MCMC method called a Gibbs sampler to discover topic distributions in a hierarchical Bayesian model called Topics Over Time. We propose an innovative use of this model to discover disease and treatment topics in a corpus of health insurance claims data. By representing individuals as mixtures of topics, we are able to consider their future costs on an individual level rather than as part of a large collective.
APA, Harvard, Vancouver, ISO, and other styles
20

Murray, Iain Andrew. "Advances in Markov chain Monte Carlo methods." Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487199.

Full text
Abstract:
Probability distributions over many variables occur frequently in Bayesian inference, statistical physics and simulation studies. Samples from distributions give insight into their typical behavior and can allow approximation of any quantity of interest, such as expectations or normalizing constants. Markov chain Monte Carlo (MCMC), introduced by Metropolis et al. (1953), allows sampling from distributions with intractable normalization, and remains one of the most important tools for approximate computation with probability distributions. While not needed by MCMC, normalizers are key quantities: in Bayesian statistics marginal likelihoods are needed for model comparison; in statistical physics many physical quantities relate to the partition function. In this thesis we propose and investigate several new Monte Carlo algorithms, both for evaluating normalizing constants and for improved sampling of distributions. Many MCMC correctness proofs rely on using reversible transition operators; this can lead to chains exploring by slow random walks. After reviewing existing MCMC algorithms, we develop a new framework for constructing non-reversible transition operators from existing reversible ones. Next we explore and extend MCMC-based algorithms for computing normalizing constants. In particular we develop a new MCMC operator and a Nested Sampling approach for the Potts model. Our results demonstrate that these approaches can be superior to finding normalizing constants by annealing methods and can obtain better posterior samples. Finally we consider 'doubly-intractable' distributions with extra unknown normalizer terms that do not cancel in standard MCMC algorithms. We propose using several deterministic approximations for the unknown terms, and investigate their interaction with sampling algorithms. We then develop novel exact-sampling-based MCMC methods, the Exchange Algorithm and Latent Histories. For the first time these algorithms do not require separate approximation before sampling begins. Moreover, the Exchange Algorithm outperforms the only alternative sampling algorithm for doubly intractable distributions.
APA, Harvard, Vancouver, ISO, and other styles
21

Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Brooks, Stephen Peter. "Convergence diagnostics for Markov Chain Monte Carlo." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show 'sticking' behaviour where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are 'close' rather than exactly equal to observed data. Although making the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods such as slice sampling and Hamiltonian Monte Carlo (HMC) within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values when standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values.
The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within a HMC method.
APA, Harvard, Vancouver, ISO, and other styles
25

Hua, Zhili. "Markov Chain Modeling for Multi-Server Clusters." W&M ScholarWorks, 2005. https://scholarworks.wm.edu/etd/1539626843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.

Full text
Abstract:

A finite, homogeneous, irreducible Markov chain C with transition probability matrix P possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.

The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the first question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).

APA, Harvard, Vancouver, ISO, and other styles
27

Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.

Full text
Abstract:
This thesis is concerned with studying the hitting time of an absorbing state on Markov chain models that have a countable state space. For many models it is challenging to study the hitting time directly; I present a perturbative approach that allows one to uniformly bound the difference between the hitting time moment generating functions of two Markov chains in a neighbourhood of the origin. I demonstrate how this result can be applied to both discrete and continuous time Markov chains. The motivation for this work came from the field of biology, namely DNA damage and repair. Biophysicists have highlighted that the repair process can lead to Double Strand Breaks; due to the serious nature of such an eventuality it is important to understand the hitting time of this event. There is a phase transition in the model that I consider. In the regime of parameters where the process reaches quasi-stationarity before being absorbed I am able to apply my perturbative technique in order to further understand this hitting time.
APA, Harvard, Vancouver, ISO, and other styles
28

Tiozzo, Gobetto Francesca. "Finite state Markov chains and prediction of stock market trends using real data." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19255/.

Full text
Abstract:
In this thesis we discuss finite state Markov chains, which are a special class of stochastic processes. They can be represented either by a graph or by a matrix [P]. The reader is first introduced to Markov chains and is then guided in their classification. Some relevant theorems are discussed. The results are used to explain when [P^n], the matrix obtained by taking the nth power of [P], converges as n approaches infinity. We start by studying the convergence in the case of [P] > 0 and we continue by focusing on two specific kinds of Markov chains: ergodic finite state chains and ergodic unichains. We then cover more general types of chains. In the end we give an example of how these tools can be used in the field of finance. We develop a model that predicts fluctuations in the prices of stocks and we apply it to the FTSE-MIB Index using data from Borsa Italiana.
APA, Harvard, Vancouver, ISO, and other styles
29

郭慈安 and Chi-on Michael Kwok. "Some results on higher order Markov Chain models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B31208654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kwok, Chi-on Michael. "Some results on higher order Markov Chain models /." [Hong Kong] : University of Hong Kong, 1988. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12432076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Di, Cecco Davide <1980&gt. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Di, Cecco Davide <1980&gt. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Levitz, Michael. "Separation, completeness, and Markov properties for AMP chain graph models /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lyons, Russell D. "Markov Chain Intersections and the Loop--Erased Walk." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1058.ps.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Stormark, Kristian. "Multiple Proposal Strategies for Markov Chain Monte Carlo." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9330.

Full text
Abstract:

The multiple proposal methods represent a recent simulation technique for Markov Chain Monte Carlo that allows several proposals to be considered at each step of transition. Motivated by the ideas of Quasi Monte Carlo integration, we examine how strongly correlated proposals can be employed to construct Markov chains with improved mixing properties. We proceed by giving a concise introduction to the Monte Carlo and Markov Chain Monte Carlo theory, and we supply a short discussion of the standard simulation algorithms and the difficulties of efficient sampling. We then examine two multiple proposal methods suggested in the literature, and we indicate the possibility of a unified formulation of the two methods. More essentially, we report some systematic exploration strategies for the two multiple proposal methods. In particular, we present schemes for the utilization of well-distributed point sets and maximally spread search directions. We also include a simple construction procedure for the latter type of point set. A numerical examination of the multiple proposal methods is performed on two simple test problems. We find that the systematic exploration approach may provide a significant improvement of the mixing, especially when the probability mass of the target distribution is "easy to miss" by independent sampling. For both test problems, we find that the best results are obtained with the QMC schemes. In particular, we find that the gain is most pronounced for a relatively moderate number of proposals. With fewer proposals, the properties of the well-distributed point sets will not be as relevant. For a large number of proposals, the independent sampling approach will be more competitive, since the coverage of the local neighborhood will then be better.

APA, Harvard, Vancouver, ISO, and other styles
36

Backåker, Fredrik. "The Google Markov Chain: convergence speed and eigenvalues." Thesis, Uppsala universitet, Matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Sanborn, Adam N. "Uncovering mental representations with Markov chain Monte Carlo." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278468.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychological and Brain Sciences and Program in Neuroscience, 2007.
Source: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6994. Adviser: Richard M. Shiffrin. Title from dissertation home page (viewed May 21, 2008).
APA, Harvard, Vancouver, ISO, and other styles
38

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Full text
Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables where the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic behaviour of the sum and the maximum of heavy-tailed random sums is identical. Therefore we compare the random sum and the maximum value for various distributions, to investigate from which point one can use the asymptotic approximation. Furthermore, we propose a new method to estimate quantiles, and the estimator is shown to be efficient.
APA, Harvard, Vancouver, ISO, and other styles
39

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Full text
Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology a Markov chain is simulated, with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probabilities in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . .. The third problem considers the solution Xm to a stochastic recurrence equation, Xm = Am Xm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments. In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers discrete-time Markov chains and the computation of general expectations of their sample paths related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first passage probabilities. An unbiased estimator of the reciprocal probability for each corresponding problem is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.

APA, Harvard, Vancouver, ISO, and other styles
40

Jindasawat, Jutaporn. "Testing the order of a Markov chain model." Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hastie, David. "Towards automatic reversible jump Markov Chain Monte Carlo." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Groff, Jeffrey R. "Markov chain models of calcium puffs and sparks." W&M ScholarWorks, 2008. https://scholarworks.wm.edu/etd/1539623333.

Full text
Abstract:
Localized cytosolic Ca2+ elevations known as puffs and sparks are important regulators of cellular function that arise due to the cooperative activity of Ca2+-regulated inositol 1,4,5-trisphosphate receptors (IP3Rs) or ryanodine receptors (RyRs) co-localized at Ca2+ release sites on the surface of the endoplasmic reticulum or sarcoplasmic reticulum. Theoretical studies have demonstrated that the cooperative gating of a cluster of Ca2+-regulated Ca2+ channels modeled as a continuous-time discrete-state Markov chain may result in dynamics reminiscent of Ca2+ puffs and sparks. In such simulations, individual Ca2+-release channels are coupled via a mathematical representation of the local [Ca2+] and exhibit "stochastic Ca2+ excitability" where channels open and close in a concerted fashion. This dissertation uses Markov chain models of Ca2+ release sites to advance our understanding of the biophysics connecting the microscopic parameters of IP3R and RyR gating to the collective phenomenon of puffs and sparks.

The dynamics of puffs and sparks exhibited by release site models that include both Ca2+ coupling and nearest-neighbor allosteric coupling are studied. Allosteric interactions are included in a manner that promotes the synchronous gating of channels by stabilizing neighboring closed-closed and/or open-open channel pairs. When the strength of Ca2+-mediated channel coupling is systematically varied, simulations that include allosteric interactions often exhibit more robust Ca2+ puffs and sparks. Interestingly, the changes in puff/spark duration, inter-event interval, and frequency observed upon the random removal of allosteric couplings that stabilize closed-closed channel pairs are qualitatively different from the changes observed when open-open channel pairs, or both open-open and closed-closed channel pairs, are stabilized. The validity of a computationally efficient mean-field reduction applicable to the dynamics of a cluster of Ca2+-regulated Ca2+ channels coupled via the local [Ca2+] and allosteric interactions is also investigated.

Markov chain models of Ca2+ release sites composed of channels that are both activated and inactivated by Ca2+ are used to clarify the role of Ca2+ inactivation in the generation and termination of puffs and sparks. It is found that when the average fraction of inactivated channels is significant, puffs and sparks are often less sensitive to variations in the number of channels at release sites and the strength of Ca2+ coupling. While excessively fast Ca2+ inactivation can preclude puffs and sparks, moderately fast Ca2+ inactivation often leads to time-irreversible puffs/sparks whose termination is facilitated by the recruitment of inactivated channels throughout the duration of the puff/spark event. On the other hand, Ca2+ inactivation may be an important negative feedback mechanism even when its time constant is much greater than the duration of puffs and sparks. In fact, slow Ca2+ inactivation can lead to release sites with a substantial fraction of inactivated channels that exhibit nearly time-reversible puffs and sparks that terminate without additional recruitment of inactivated channels.
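A minimal version of such a continuous-time Markov chain model is straightforward to simulate. The Gillespie-style sketch below couples N two-state channels through a shared local [Ca2+] that rises with the number of open channels; the parameter values are invented placeholders (not fitted to IP3R or RyR data), and the model omits the allosteric coupling and Ca2+ inactivation studied in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy cluster of N two-state (closed <-> open) channels coupled through a shared
# local [Ca2+]. All parameter values are illustrative, not fitted to data.
N = 20
c_rest, c_per_open = 0.05, 0.5      # background and per-open-channel [Ca2+] contribution
k_plus, k_minus, eta = 1.5, 5.0, 2  # opening rate constant, closing rate, Hill exponent

def gillespie(t_end):
    """Exact stochastic simulation of the number of open channels."""
    t, n_open = 0.0, 0
    times, opens = [0.0], [0]
    while t < t_end:
        c = c_rest + c_per_open * n_open              # local [Ca2+] seen by the cluster
        rate_open = (N - n_open) * k_plus * c ** eta  # Ca2+-activated opening
        rate_close = n_open * k_minus
        total = rate_open + rate_close
        t += rng.exponential(1.0 / total)             # time to the next transition
        n_open += 1 if rng.random() < rate_open / total else -1
        times.append(t)
        opens.append(n_open)
    return np.array(times), np.array(opens)

times, opens = gillespie(50.0)
frac = np.sum(np.diff(times) * (opens[:-1] >= N // 2)) / times[-1]
print(f"time-weighted fraction with >= half the cluster open: {frac:.3f}")
```

Because the opening rate grows with the local [Ca2+], which in turn grows with the number of open channels, the cluster exhibits the concerted opening and closing that the abstract calls stochastic Ca2+ excitability; whether the trajectories look like realistic puffs and sparks depends entirely on the (here arbitrary) parameter choices.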
APA, Harvard, Vancouver, ISO, and other styles
43

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Shuying. "Phylogenetic tree construction using markov chain monte carlo /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182323916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, Jason Qian. "Markov Chain Monte Carlo and Non-Reversible Methods." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244823.

Full text
Abstract:
The bulk of Markov chain Monte Carlo applications make use of reversible chains, relying on the Metropolis-Hastings algorithm or similar methods. While reversible chains have the advantage of being relatively easy to analyze, it has been shown that non-reversible chains may outperform them in various scenarios. Neal proposes an algorithm that transforms a general reversible chain into a non-reversible chain with a construction that does not increase the asymptotic variance. These modified chains work to avoid diffusive backtracking behavior which causes Markov chains to be trapped in one position for too long. In this paper, we provide an introduction to MCMC, and discuss the Metropolis algorithm and Neal’s algorithm. We introduce a decaying memory algorithm inspired by Neal’s idea, and then analyze and compare the performance of these chains on several examples.
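For a concrete sense of the contrast, the following sketch compares plain Metropolis on a discrete target with a lifted, non-reversible variant that carries a direction variable and flips it only on rejection. This is not Neal's construction from the thesis, just a closely related standard lifting; the target and all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a discretized Gaussian-shaped distribution on {0, ..., K-1}.
K = 50
weights = np.exp(-0.5 * ((np.arange(K) - 25) / 6.0) ** 2)
pi = weights / weights.sum()

def reversible_chain(T):
    """Plain Metropolis: propose x +/- 1 uniformly, accept with min(1, pi ratio)."""
    x, out = K // 2, np.empty(T, dtype=int)
    for t in range(T):
        x_new = x + rng.choice((-1, 1))
        if 0 <= x_new < K and rng.random() < pi[x_new] / pi[x]:
            x = x_new
        out[t] = x
    return out

def lifted_chain(T):
    """Non-reversible lifting: keep a direction d, flip it only on rejection."""
    x, d, out = K // 2, 1, np.empty(T, dtype=int)
    for t in range(T):
        x_new = x + d
        if 0 <= x_new < K and rng.random() < pi[x_new] / pi[x]:
            x = x_new
        else:
            d = -d                      # persist in one direction until rejected
        out[t] = x
    return out

T = 100_000
for name, chain in (("reversible", reversible_chain(T)), ("lifted", lifted_chain(T))):
    print(f"{name:>10}: sample mean = {chain.mean():.2f} "
          f"(target mean = {np.arange(K) @ pi:.2f})")
```

Both chains leave pi invariant, but the lifted one suppresses the diffusive backtracking described above: it keeps moving in one direction through regions of comparable probability and turns around mainly on rejection, which typically shows up as much faster autocorrelation decay than the printed means alone reveal.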
APA, Harvard, Vancouver, ISO, and other styles
46

Zhu, Dongmei, and 朱冬梅. "Construction of non-standard Markov chain models with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/202358.

Full text
Abstract:
In this thesis, the properties of some non-standard Markov chain models and their corresponding parameter estimation methods are investigated. Several practical applications and extensions are also discussed. The estimation of model parameters plays a key role in the real-world applications of Markov chain models. Some widely used estimation methods for Markov chain models are based on the existence of stationary vectors. In this thesis, some weaker sufficient conditions for the existence of stationary vectors for high-order Markov chain models, multivariate Markov chain models and high-order multivariate Markov chain models are proposed. Furthermore, for multivariate Markov chain models, a new estimation method based on minimizing the prediction error is proposed. Numerical experiments are conducted to demonstrate the efficiency of the proposed estimation methods with an application in demand prediction.

A Hidden Markov Model (HMM) is a bivariate stochastic process in which one of the processes is hidden and the other is observable. The distribution of the observable sequence depends on the hidden sequence. In a traditional HMM, the hidden states directly affect the observable states but not vice versa. However, in reality, the observable sequence may also have an effect on the hidden sequence. For this reason, the concept of the Interactive Hidden Markov Model (IHMM) is introduced, whose key idea is that the transitions of the hidden states depend on the observable states too. In this thesis, efforts are devoted to building a high-order IHMM where the probability laws governing both observable and hidden states can be written as a pair of high-order stochastic difference equations. We also propose a new model that captures the effect of the observable sequence on the hidden sequence through the threshold principle. In this case, reference probability methods are adopted to estimate the optimal model parameters, while for the unknown threshold parameter the Akaike Information Criterion (AIC) is used.

We explore asset allocation problems from both domestic and foreign perspectives, where the asset price dynamics follow an autoregressive HMM. The objective of an investor is not only to maximize the expected utility of the terminal wealth, but also to ensure that the risk of the portfolio, described by the Value-at-Risk (VaR), does not exceed a specified level.

In many decision processes, fuzziness is a major source of imprecision. As a generalization of usual Markov chains, the definition of fuzzy Markov chains is introduced. Compared to traditional Markov chain models, fuzzy Markov chains are relatively new and many of their properties are still unknown. Due to the potential applications of fuzzy Markov chains, we provide some characterizations to ensure the ergodicity of these chains under both max-min and max-product compositions.
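For reference, the baseline against which such estimation methods are usually compared is the simple count-and-normalize maximum-likelihood estimator of a first-order transition matrix. The sketch below (toy data, hypothetical function name) illustrates that baseline; the thesis's minimum-prediction-error method for multivariate chains is more elaborate:

```python
import numpy as np

def estimate_transition_matrix(seq, n_states):
    """Maximum-likelihood estimate of a first-order chain's transition matrix
    from one observed state sequence: count transitions, normalize each row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Rows never visited get a uniform fallback instead of a division by zero.
    return np.divide(counts, rows,
                     out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

seq = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]   # toy demand-category sequence
print(estimate_transition_matrix(seq, 3))
```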
APA, Harvard, Vancouver, ISO, and other styles
47

Wilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.

Full text
Abstract:
Bibliography: leaves 88-91.
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: generating the state space from a state transition model to form the Markov chain, and extracting performance information by solving the steady-state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queueing networks can generate these models.

Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption.

Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady-state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
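The second phase, solving the steady-state equations πP = π, can be illustrated with a single-machine stand-in for the distributed BiCG/BiCGSTAB solvers described above: plain power iteration on a small row-stochastic matrix (the matrix here is made up for the example):

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Power iteration for the stationary vector pi of a row-stochastic matrix P,
    i.e. the solution of pi = pi P with sum(pi) = 1."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform vector
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:         # L1 change below tolerance: done
            return nxt
        pi = nxt
    return pi

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print(stationary_distribution(P))
```

Power iteration converges for irreducible, aperiodic chains, but far too slowly for the huge, ill-conditioned matrices that motivate Krylov-subspace methods such as BiCG and BiCGSTAB and the distributed implementations studied in this thesis.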
APA, Harvard, Vancouver, ISO, and other styles
49

Salzman, Julia. "Spectral analysis with Markov chains /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.

Full text
Abstract:
Infertility affects approximately 15% of couples. Testing and interventions are costly in time, money, and emotional energy. This paper discusses using Markov decision and multi-armed bandit processes to identify a systematic approach to interventions that leads to the desired baby while minimizing costs.
APA, Harvard, Vancouver, ISO, and other styles
