Journal articles on the topic "Markov chain"

For other types of publications on this topic, follow the link: Markov chain.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Consult the top 50 journal articles for your research on the topic "Markov chain".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Guyon, X., and C. Hardouin. « Markov chain Markov field dynamics: models and statistics ». Statistics 35, no. 4 (January 2001): 593–627. http://dx.doi.org/10.1080/02331880108802756.

2

Valenzuela, Mississippi. « Markov chains and applications ». Selecciones Matemáticas 9, no. 01 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.

Abstract:
This work has three main purposes: first, to study Markov chains; second, to show that Markov chains have a variety of applications; and finally, to model a process that behaves in this way. Throughout this work we describe what a Markov chain is, what these processes are used for, and how these chains are classified. We also describe a Markov chain in detail, that is, we analyze the primary elements that make up a Markov chain, among other aspects.
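The defining property described in this abstract — the next state depends only on the current state, through a row-stochastic transition matrix — can be sketched in a few lines of Python. The 3-state matrix below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state transition matrix (each row sums to 1); any such
# row-stochastic matrix defines a discrete-time Markov chain.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

def simulate(P, steps, start=0):
    """Sample a trajectory: the next state depends only on the current one."""
    states = [start]
    for _ in range(steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

# The stationary distribution pi solves pi P = pi; here it is obtained as
# the left eigenvector of P associated with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

traj = simulate(P, 100_000)
empirical = np.bincount(traj, minlength=3) / len(traj)
print(pi, empirical)  # long-run state frequencies approach pi
```

For this chain the empirical occupation frequencies of a long trajectory settle near the stationary vector, which is one way the classification of chains (irreducible, aperiodic) shows up in practice.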
3

Barker, Richard J., and Matthew R. Schofield. « Putting Markov Chains Back into Markov Chain Monte Carlo ». Journal of Applied Mathematics and Decision Sciences 2007 (October 30, 2007): 1–13. http://dx.doi.org/10.1155/2007/98086.

Abstract:
Markov chain theory plays an important role in statistical inference, both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history; however, the use of Markov chain theory in developing algorithms for statistical inference has become popular only recently. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also in developing efficient algorithms for inference. We anticipate that a major area of future research involving mark-recapture data will be the development of hierarchical models that lead to better demographic models that account for all uncertainties in the analysis. A key issue is determining when the chains produced by Markov chain Monte Carlo sampling have converged.
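The interplay the abstract describes — building a Markov chain whose stationary distribution is the target of inference — is what the classic random-walk Metropolis algorithm does. The sketch below uses a standard normal target and arbitrary tuning parameters; it is a generic illustration, not the mark-recapture machinery of the paper:

```python
import math
import random

random.seed(1)

def target(x):
    # Unnormalised density of a standard normal; Metropolis only needs
    # ratios, so the normalising constant can be ignored.
    return math.exp(-0.5 * x * x)

def metropolis(n, step=1.0):
    """Random-walk Metropolis: the resulting Markov chain has the
    target as its stationary distribution."""
    x, out = 0.0, []
    for _ in range(n):
        prop = x + random.uniform(-step, step)
        if random.random() < min(1.0, target(prop) / target(x)):
            x = prop  # accept the proposal
        out.append(x)
    return out

chain = metropolis(50_000)
burned = chain[10_000:]  # discard burn-in before trusting the samples
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
print(mean, var)  # near 0 and 1 for a standard normal target
```

Discarding an initial burn-in segment, as above, is a crude stand-in for the convergence diagnostics the abstract flags as a key issue.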
4

Marcus, Brian, and Selim Tuncel. « The weight-per-symbol polytope and scaffolds of invariants associated with Markov chains ». Ergodic Theory and Dynamical Systems 11, no. 1 (March 1991): 129–80. http://dx.doi.org/10.1017/s0143385700006052.

Abstract:
We study Markov chains via invariants constructed from periodic orbits. Canonical extensions, based on these invariants, are used to establish a constraint on the degree of finite-to-one block homomorphisms from one Markov chain to another. We construct a polytope from the normalized weights of periodic orbits. Using this polytope, we find canonically-defined induced Markov chains inside the original Markov chain. Each of the invariants associated with these Markov chains gives rise to a scaffold of invariants for the original Markov chain. This is used to obtain counterexamples to the finite equivalence conjecture and to a conjecture regarding finitary isomorphism with finite expected coding time. Also included are results related to the problem of minimality (with respect to block homomorphism) of Bernoulli shifts in the class of Markov chains with beta function equal to the beta function of the Bernoulli shift.
5

Xiang, Xuyan, Xiao Zhang, and Xiaoyun Mo. « Statistical Identification of Markov Chain on Trees ». Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/2036248.

Abstract:
The theoretical study of continuous-time homogeneous Markov chains is usually based on the natural assumption of a known transition rate matrix (TRM). However, the TRM of a Markov chain in realistic systems may be unknown and may even need to be identified from partially observable data. Thus, the issue of how to identify the TRM of the underlying Markov chain from partially observable information is of great significance in applications. This is what we call the statistical identification of a Markov chain. The Markov chain inversion approach has been derived for basic Markov chains by partial observation at a few states. In the current letter, a more extensive class of Markov chains on trees is investigated. First, a more operable type of derivative constraint is developed. Then, it is shown that all Markov chains on trees can be identified using only such derivative constraints on the univariate distributions of sojourn time and/or hitting time at a few states. A numerical example is included to demonstrate the correctness of the proposed algorithms.
6

APOSTOLOV, S. S., Z. A. MAYZELIS, O. V. USATENKO, and V. A. YAMPOL'SKII. « HIGH-ORDER CORRELATION FUNCTIONS OF BINARY MULTI-STEP MARKOV CHAINS ». International Journal of Modern Physics B 22, no. 22 (September 10, 2008): 3841–53. http://dx.doi.org/10.1142/s0217979208048589.

Abstract:
Two approaches to studying the correlation functions of binary Markov sequences are considered. The first is based on studying the probabilities of different "words" occurring in the sequence. The other uses recurrence relations for the correlation functions. These methods are applied to two important particular classes of Markov chains: Markov chains with permutative conditional probability functions and additive Markov chains with small memory functions. The striking self-similarity property (discovered in Phys. Rev. Lett. 90, 110601 (2003) for the additive Markov chain with a step-wise memory function) is proved to be an intrinsic property of any permutative Markov chain. The applicability of the correlation functions of additive Markov chains with small memory functions to calculating the thermodynamic characteristics of the classical Ising spin chain with long-range interaction is discussed.
7

Masuyama, Hiroyuki. « Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions ». Advances in Applied Probability 47, no. 1 (March 2015): 83–105. http://dx.doi.org/10.1239/aap/1427814582.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
8

Masuyama, Hiroyuki. « Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions ». Advances in Applied Probability 47, no. 01 (March 2015): 83–105. http://dx.doi.org/10.1017/s0001867800007710.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
9

Verbeken, Brecht, and Marie-Anne Guerry. « Attainability for Markov and Semi-Markov Chains ». Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.

Abstract:
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n_i represents the number of entities in the state S_i, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, are described as the convex hull of their respective vertices, and properties of these regions are investigated.
10

Ledoux, James. « A geometric invariant in weak lumpability of finite Markov chains ». Journal of Applied Probability 34, no. 4 (December 1997): 847–58. http://dx.doi.org/10.2307/3215001.

Abstract:
We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
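Weak lumpability, as studied above, asks when the lumped process is Markov for at least some initial distributions. The stricter classical condition of strong lumpability — every state inside a block has the same total transition probability into each block — is straightforward to check numerically; the matrix and partition below are invented for illustration:

```python
import numpy as np

# Hypothetical 3-state transition matrix; states 1 and 2 are lumped.
P = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
partition = [[0], [1, 2]]

def lump(P, partition):
    """Return the lumped transition matrix, verifying strong lumpability:
    for each block Bj, every state of a block Bi must put the same total
    probability on Bj."""
    k = len(partition)
    Q = np.zeros((k, k))
    for i, Bi in enumerate(partition):
        for j, Bj in enumerate(partition):
            sums = [P[s, Bj].sum() for s in Bi]
            assert np.allclose(sums, sums[0]), "not strongly lumpable"
            Q[i, j] = sums[0]
    return Q

Q = lump(P, partition)
print(Q)  # a 2x2 row-stochastic matrix for the lumped chain
```

Strong lumpability implies weak lumpability; the paper's polyhedral-cone criterion characterises the strictly larger weak case.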
11

Ledoux, James. « A geometric invariant in weak lumpability of finite Markov chains ». Journal of Applied Probability 34, no. 04 (December 1997): 847–58. http://dx.doi.org/10.1017/s0021900200101561.

Abstract:
We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
12

Takemura, Akimichi, and Hisayuki Hara. « Markov chain Monte Carlo test of toric homogeneous Markov chains ». Statistical Methodology 9, no. 3 (May 2012): 392–406. http://dx.doi.org/10.1016/j.stamet.2011.10.004.

13

BOUCHER, THOMAS R., and DAREN B. H. CLINE. « PIGGYBACKING THRESHOLD PROCESSES WITH A FINITE STATE MARKOV CHAIN ». Stochastics and Dynamics 09, no. 02 (June 2009): 187–204. http://dx.doi.org/10.1142/s0219493709002622.

Abstract:
The state-space representations of certain nonlinear autoregressive time series are general state Markov chains. The transitions of a general state Markov chain among regions in its state-space can be modeled with the transitions among states of a finite state Markov chain. Stability of the time series is then informed by the stationary distributions of the finite state Markov chain. This approach generalizes some previous results.
14

Rydén, Tobias. « On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions ». Journal of Applied Probability 33, no. 3 (September 1996): 640–53. http://dx.doi.org/10.2307/3215346.

Abstract:
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
15

Rydén, Tobias. « On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions ». Journal of Applied Probability 33, no. 03 (September 1996): 640–53. http://dx.doi.org/10.1017/s0021900200100087.

Abstract:
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
16

Dixit, Purushottam D. « Introducing User-Prescribed Constraints in Markov Chains for Nonlinear Dimensionality Reduction ». Neural Computation 31, no. 5 (May 2019): 980–97. http://dx.doi.org/10.1162/neco_a_01184.

Abstract:
Stochastic kernel-based dimensionality-reduction approaches have become popular in the past decade. The central component of many of these methods is a symmetric kernel that quantifies the vicinity between pairs of data points and a kernel-induced Markov chain on the data. Typically, the Markov chain is fully specified by the kernel through row normalization. However, in many cases, it is desirable to impose user-specified stationary-state and dynamical constraints on the Markov chain. Unfortunately, no systematic framework exists to impose such user-defined constraints. Here, based on our previous work on inference of Markov models, we introduce a path entropy maximization based approach to derive the transition probabilities of Markov chains using a kernel and additional user-specified constraints. We illustrate the usefulness of these Markov chains with examples.
17

Zhong, Pingping, Weiguo Yang, and Peipei Liang. « THE ASYMPTOTIC EQUIPARTITION PROPERTY FOR ASYMPTOTIC CIRCULAR MARKOV CHAINS ». Probability in the Engineering and Informational Sciences 24, no. 2 (March 18, 2010): 279–88. http://dx.doi.org/10.1017/s0269964809990271.

Abstract:
In this article, we study the asymptotic equipartition property (AEP) for asymptotic circular Markov chains. First, the definition of an asymptotic circular Markov chain is introduced. Then by applying the limit property for the bivariate functions of nonhomogeneous Markov chains, the strong limit theorem on the frequencies of occurrence of states for asymptotic circular Markov chains is established. Next, the strong law of large numbers on the frequencies of occurrence of states for asymptotic circular Markov chains is obtained. Finally, we prove the AEP for asymptotic circular Markov chains.
18

Qi-feng, Yao, Dong Yun, and Wang Zhong-Zhi. « An Entropy Rate Theorem for a Hidden Inhomogeneous Markov Chain ». Open Statistics & Probability Journal 8, no. 1 (September 30, 2017): 19–26. http://dx.doi.org/10.2174/1876527001708010019.

Abstract:
Objective: The main object of our study is to extend some entropy rate theorems to a Hidden Inhomogeneous Markov Chain (HIMC) and establish an entropy rate theorem under some mild conditions. Introduction: A hidden inhomogeneous Markov chain contains two different stochastic processes; one is an inhomogeneous Markov chain whose states are hidden and the other is a stochastic process whose states are observable. Materials and Methods: The proof of the theorem requires some ergodic properties of an inhomogeneous Markov chain, and the flexible application of the properties of norms and the bounded conditions of series are also indispensable. Results: This paper presents an entropy rate theorem for an HIMC under some mild conditions and two corollaries for a hidden Markov chain and an inhomogeneous Markov chain. Conclusion: Under some mild conditions, the entropy rates of an inhomogeneous Markov chain, a hidden Markov chain and an HIMC are similar and easy to calculate.
19

Nuel, Grégory. « Pattern Markov Chains: Optimal Markov Chain Embedding Through Deterministic Finite Automata ». Journal of Applied Probability 45, no. 1 (March 2008): 226–43. http://dx.doi.org/10.1239/jap/1208358964.

Abstract:
In the framework of patterns in random texts, the Markov chain embedding techniques consist of turning the occurrences of a pattern over an order-m Markov sequence into those of a subset of states into an order-1 Markov chain. In this paper we use the theory of language and automata to provide space-optimal Markov chain embedding using the new notion of pattern Markov chains (PMCs), and we give explicit constructive algorithms to build the PMC associated to any given pattern problem. The interest of PMCs is then illustrated through the exact computation of P-values whose complexity is discussed and compared to other classical asymptotic approximations. Finally, we consider two illustrative examples of highly degenerated pattern problems (structured motifs and PROSITE signatures), which further illustrate the usefulness of our approach.
20

Nuel, Grégory. « Pattern Markov Chains: Optimal Markov Chain Embedding Through Deterministic Finite Automata ». Journal of Applied Probability 45, no. 01 (March 2008): 226–43. http://dx.doi.org/10.1017/s0021900200004083.

Abstract:
In the framework of patterns in random texts, the Markov chain embedding techniques consist of turning the occurrences of a pattern over an order-m Markov sequence into those of a subset of states into an order-1 Markov chain. In this paper we use the theory of language and automata to provide space-optimal Markov chain embedding using the new notion of pattern Markov chains (PMCs), and we give explicit constructive algorithms to build the PMC associated to any given pattern problem. The interest of PMCs is then illustrated through the exact computation of P-values whose complexity is discussed and compared to other classical asymptotic approximations. Finally, we consider two illustrative examples of highly degenerated pattern problems (structured motifs and PROSITE signatures), which further illustrate the usefulness of our approach.
21

Glynn, Peter W., and Chang-Han Rhee. « Exact estimation for Markov chain equilibrium expectations ». Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1239/jap/1417528487.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
22

Glynn, Peter W., and Chang-Han Rhee. « Exact estimation for Markov chain equilibrium expectations ». Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1017/s0021900200021392.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
23

Lekgari, Mokaedi V. « Maximal Coupling Procedure and Stability of Continuous-Time Markov Chains ». Bulletin of Mathematical Sciences and Applications 10 (November 2014): 30–37. http://dx.doi.org/10.18052/www.scipress.com/bmsa.10.30.

Abstract:
In this study we first investigate the stability of subsampled discrete Markov chains through the use of the maximal coupling procedure. This is an extension of the available results on Markov chains and is realized through the analysis of the subsampled chain Φ_{Tn}, where {Tn, n ∈ Z+} is an increasing sequence of random stopping times. Similar results are then obtained for the stability of countable-state continuous-time Markov processes by employing the skeleton-chain method.
24

Papastathopoulos, I., K. Strokorb, J. A. Tawn, and A. Butler. « Extreme events of Markov chains ». Advances in Applied Probability 49, no. 1 (March 2017): 134–61. http://dx.doi.org/10.1017/apr.2016.82.

Abstract:
The extremal behaviour of a Markov chain is typically characterised by its tail chain. For asymptotically dependent Markov chains, existing formulations fail to capture the full evolution of the extreme event when the chain moves out of the extreme tail region, and, for asymptotically independent chains, recent results fail to cover well-known asymptotically independent processes, such as Markov processes with a Gaussian copula between consecutive values. We use more sophisticated limiting mechanisms that cover a broader class of asymptotically independent processes than current methods, including an extension of the canonical Heffernan-Tawn normalisation scheme, and reveal features which existing methods reduce to a degenerate form associated with nonextreme states.
25

Choi, Michael C. H., and Pierre Patie. « Analysis of non-reversible Markov chains via similarity orbits ». Combinatorics, Probability and Computing 29, no. 4 (February 18, 2020): 508–36. http://dx.doi.org/10.1017/s0963548320000024.

Abstract:
In this paper we develop an in-depth analysis of non-reversible Markov chains on denumerable state space from a similarity orbit perspective. In particular, we study the class of Markov chains whose transition kernel is in the similarity orbit of a normal transition kernel, such as that of birth–death chains or reversible Markov chains. We start by identifying a set of sufficient conditions for a Markov chain to belong to the similarity orbit of a birth–death chain. As by-products, we obtain a spectral representation in terms of non-self-adjoint resolutions of identity in the sense of Dunford [21] and offer a detailed analysis on the convergence rate, separation cutoff and L2-cutoff of this class of non-reversible Markov chains. We also look into the problem of estimating the integral functionals from discrete observations for this class. In the last part of this paper we investigate a particular similarity orbit of reversible Markov kernels, which we call the pure birth orbit, and analyse various possibly non-reversible variants of classical birth–death processes in this orbit.
26

Zhou, Hua, and Kenneth Lange. « Composition Markov chains of multinomial type ». Advances in Applied Probability 41, no. 1 (March 2009): 270–91. http://dx.doi.org/10.1239/aap/1240319585.

Abstract:
Suppose that n identical particles evolve according to the same marginal Markov chain. In this setting we study chains such as the Ehrenfest chain that move a prescribed number of randomly chosen particles at each epoch. The product chain constructed by this device inherits its eigenstructure from the marginal chain. There is a further chain derived from the product chain called the composition chain that ignores particle labels and tracks the numbers of particles in the various states. The composition chain in turn inherits its eigenstructure and various properties such as reversibility from the product chain. The equilibrium distribution of the composition chain is multinomial. The current paper proves these facts in the well-known framework of state lumping and identifies the column eigenvectors of the composition chain with the multivariate Krawtchouk polynomials of Griffiths. The advantages of knowing the full spectral decomposition of the composition chain include (a) detailed estimates of the rate of convergence to equilibrium, (b) construction of martingales that allow calculation of the moments of the particle counts, and (c) explicit expressions for mean coalescence times in multi-person random walks. These possibilities are illustrated by applications to Ehrenfest chains, the Hoare and Rahman chain, Kimura's continuous-time chain for DNA evolution, a light bulb chain, and random walks on some specific graphs.
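The Ehrenfest chain mentioned in this abstract is simple to simulate: with n particles in two urns, each step moves a uniformly chosen particle to the other urn, and the long-run occupation frequencies of the left-urn count approach the Binomial(n, 1/2) stationary distribution. A minimal sketch (n = 10 chosen arbitrarily):

```python
import math
import random

random.seed(2)

n = 10  # particles; the state is the number of particles in the left urn

def ehrenfest_step(x):
    # a uniformly chosen particle moves to the other urn:
    # with probability x/n it comes from the left urn (count drops by 1)
    return x - 1 if random.random() < x / n else x + 1

x = n // 2
counts = [0] * (n + 1)
for _ in range(200_000):
    x = ehrenfest_step(x)
    counts[x] += 1

total = sum(counts)
empirical = [c / total for c in counts]
binom = [math.comb(n, k) / 2 ** n for k in range(n + 1)]
print(empirical)
print(binom)  # the occupation frequencies track Binomial(n, 1/2)
```

Although this chain is periodic (the count alternates in parity), its time-average occupation frequencies still converge to the stationary distribution, which is why the comparison above works.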
27

Zhou, Hua, and Kenneth Lange. « Composition Markov chains of multinomial type ». Advances in Applied Probability 41, no. 01 (March 2009): 270–91. http://dx.doi.org/10.1017/s0001867800003220.

Abstract:
Suppose that n identical particles evolve according to the same marginal Markov chain. In this setting we study chains such as the Ehrenfest chain that move a prescribed number of randomly chosen particles at each epoch. The product chain constructed by this device inherits its eigenstructure from the marginal chain. There is a further chain derived from the product chain called the composition chain that ignores particle labels and tracks the numbers of particles in the various states. The composition chain in turn inherits its eigenstructure and various properties such as reversibility from the product chain. The equilibrium distribution of the composition chain is multinomial. The current paper proves these facts in the well-known framework of state lumping and identifies the column eigenvectors of the composition chain with the multivariate Krawtchouk polynomials of Griffiths. The advantages of knowing the full spectral decomposition of the composition chain include (a) detailed estimates of the rate of convergence to equilibrium, (b) construction of martingales that allow calculation of the moments of the particle counts, and (c) explicit expressions for mean coalescence times in multi-person random walks. These possibilities are illustrated by applications to Ehrenfest chains, the Hoare and Rahman chain, Kimura's continuous-time chain for DNA evolution, a light bulb chain, and random walks on some specific graphs.
28

Li, Wenxi, and Zhongzhi Wang. « A NOTE ON RÉNYI'S ENTROPY RATE FOR TIME-INHOMOGENEOUS MARKOV CHAINS ». Probability in the Engineering and Informational Sciences 33, no. 4 (December 5, 2018): 579–90. http://dx.doi.org/10.1017/s026996481800044x.

Abstract:
In this note, we use the Perron–Frobenius theorem to obtain the Rényi entropy rate for a time-inhomogeneous Markov chain whose transition matrices converge to a primitive matrix. As direct corollaries, we also obtain the Rényi entropy rate for an asymptotic circular Markov chain and the Rényi divergence rate between two time-inhomogeneous Markov chains.
29

Politis, Dimitris N. « Markov Chains in Many Dimensions ». Advances in Applied Probability 26, no. 3 (September 1994): 756–74. http://dx.doi.org/10.2307/1427819.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
30

Politis, Dimitris N. « Markov Chains in Many Dimensions ». Advances in Applied Probability 26, no. 03 (September 1994): 756–74. http://dx.doi.org/10.1017/s0001867800026537.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
31

Lorek, Paweł. « Antiduality and Möbius monotonicity: generalized coupon collector problem ». ESAIM: Probability and Statistics 23 (2019): 739–69. http://dx.doi.org/10.1051/ps/2019004.

Abstract:
For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal, in distribution, to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on some partial ordering of the state space. We use a theory of strong stationary duality developed recently for Möbius monotone Markov chains. We give several sharp antidual chains for the Markov chain corresponding to a generalized coupon collector problem. As a consequence – utilizing known results on the limiting distribution of the absorption time – we indicate separation cutoffs (with their window sizes) in several chains. We also present a chain which (under some conditions) has a prescribed stationary distribution and whose FSST is distributed as a prescribed mixture of sums of geometric random variables.
Styles: APA, Harvard, Vancouver, ISO, etc.
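In its simplest uniform case, the generalized coupon collector chain above is the classical coupon collector: an absorbing chain whose state counts distinct coupons seen, with mean absorption time n·H_n. A simulation sketch (n = 10 and the trial count are illustrative assumptions):

```python
import random

def coupon_collector_time(n, rng):
    """Steps until absorption for the chain whose state is the number of
    distinct coupons collected; absorption occurs once all n are seen."""
    seen, steps = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        steps += 1
    return steps

rng = random.Random(42)
n = 10
trials = [coupon_collector_time(n, rng) for _ in range(20000)]
mean_time = sum(trials) / len(trials)
expected = n * sum(1.0 / k for k in range(1, n + 1))  # n * H_n, about 29.29
```

The empirical mean should sit close to n·H_n; the paper's antiduality machinery gives the finer information — separation cutoffs and window sizes — that such a plain simulation cannot.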
32

Hsiau, Shoou-Ren. "Compound Poisson limit theorems for Markov chains." Journal of Applied Probability 34, no. 1 (March 1997): 24–34. http://dx.doi.org/10.2307/3215171.

Full text
Abstract:
This paper establishes a compound Poisson limit theorem for the sum of a sequence of multi-state Markov chains. Our theorem generalizes an earlier one by Koopman for the two-state Markov chain. Moreover, a similar approach is used to derive a limit theorem for the sum of the kth-order two-state Markov chain.
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Hsiau, Shoou-Ren. "Compound Poisson limit theorems for Markov chains." Journal of Applied Probability 34, no. 1 (March 1997): 24–34. http://dx.doi.org/10.1017/s002190020010066x.

Full text
Abstract:
This paper establishes a compound Poisson limit theorem for the sum of a sequence of multi-state Markov chains. Our theorem generalizes an earlier one by Koopman for the two-state Markov chain. Moreover, a similar approach is used to derive a limit theorem for the sum of the kth-order two-state Markov chain.
Styles: APA, Harvard, Vancouver, ISO, etc.
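The two-state version of this limit theorem can be explored by simulation: when entries into state 1 are rare (on the order of 1/n), the sum S_n over n steps behaves like a compound Poisson variable built from geometric-length runs of 1s. A sketch; all parameter values are assumptions for illustration:

```python
import random

def chain_sum(n, a, b, rng):
    """Sum of a {0,1}-valued Markov chain over n steps, started in state 0,
    with rare transitions 0 -> 1 (probability a) and exits 1 -> 0
    (probability b)."""
    x, s = 0, 0
    for _ in range(n):
        if x == 0:
            x = 1 if rng.random() < a else 0
        else:
            x = 0 if rng.random() < b else 1
        s += x
    return s

rng = random.Random(0)
n, a, b = 1000, 2.0 / 1000, 0.5
sums = [chain_sum(n, a, b, rng) for _ in range(5000)]
mean_sum = sum(sums) / len(sums)
stationary_mean = n * a / (a + b)  # n * pi_1, about 3.98
```

The empirical mean of S_n should sit near n·π₁ = n·a/(a + b), while the distribution itself is closer to compound Poisson than to Poisson because the 1s arrive in runs.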
34

Boys, R. J., and D. A. Henderson. "On Determining the Order of Markov Dependence of an Observed Process Governed by a Hidden Markov Model." Scientific Programming 10, no. 3 (2002): 241–51. http://dx.doi.org/10.1155/2002/683164.

Full text
Abstract:
This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution for both the order of Markov dependence in the observed sequence and the other governing model parameters. These samples allow coherent inferences to be made straightforwardly in contrast to those which use information criteria. The methods are illustrated by their application to both simulated and real data sets.
Styles: APA, Harvard, Vancouver, ISO, etc.
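The paper's Markov chain Monte Carlo machinery is too involved for a short sketch, but the underlying question — what order of Markov dependence fits an observed sequence — can be illustrated with a much simpler maximum-likelihood/BIC comparison (a stand-in for the authors' Bayesian method; the simulated chain and all parameters are assumptions):

```python
import math
import random

def markov_bic(seq, order):
    """BIC of a binary Markov chain of the given order (0 or 1 here),
    fitted to seq by maximum likelihood from transition counts."""
    counts, totals = {}, {}
    for t in range(order, len(seq)):
        ctx = tuple(seq[t - order:t])
        counts[(ctx, seq[t])] = counts.get((ctx, seq[t]), 0) + 1
        totals[ctx] = totals.get(ctx, 0) + 1
    loglik = sum(c * math.log(c / totals[ctx])
                 for (ctx, _), c in counts.items())
    k = 2 ** order  # one free probability per context
    return -2 * loglik + k * math.log(len(seq) - order)

# Simulate an order-1 chain: P(next = 1 | current state), per state.
rng = random.Random(1)
p_next = {0: 0.1, 1: 0.8}
x, seq = 0, []
for _ in range(2000):
    x = 1 if rng.random() < p_next[x] else 0
    seq.append(x)

bic0, bic1 = markov_bic(seq, 0), markov_bic(seq, 1)
```

BIC should favor order 1 here by a wide margin; the authors' MCMC approach instead yields coherent posterior probabilities over the candidate orders, which is exactly the advantage over information criteria the abstract notes.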
35

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1239/jap/1421763332.

Full text
Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1017/s0001867800012027.

Full text
Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1017/s002190020001202x.

Full text
Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Altman, Eitan, Konstantin E. Avrachenkov, and Rudesindo Núñez-Queija. "Perturbation analysis for denumerable Markov chains with application to queueing models." Advances in Applied Probability 36, no. 3 (September 2004): 839–53. http://dx.doi.org/10.1239/aap/1093962237.

Full text
Abstract:
We study the parametric perturbation of Markov chains with denumerable state spaces. We consider both regular and singular perturbations. By the latter we mean that transition probabilities of a Markov chain, with several ergodic classes, are perturbed such that (rare) transitions among the different ergodic classes of the unperturbed chain are allowed. Singularly perturbed Markov chains have been studied in the literature under more restrictive assumptions such as strong recurrence ergodicity or Doeblin conditions. We relax these conditions so that our results can be applied to queueing models (where the conditions mentioned above typically fail to hold). Assuming ν-geometric ergodicity, we are able to explicitly express the steady-state distribution of the perturbed Markov chain as a Taylor series in the perturbation parameter. We apply our results to quasi-birth-and-death processes and queueing models.
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Altman, Eitan, Konstantin E. Avrachenkov, and Rudesindo Núñez-Queija. "Perturbation analysis for denumerable Markov chains with application to queueing models." Advances in Applied Probability 36, no. 3 (September 2004): 839–53. http://dx.doi.org/10.1017/s0001867800013148.

Full text
Abstract:
We study the parametric perturbation of Markov chains with denumerable state spaces. We consider both regular and singular perturbations. By the latter we mean that transition probabilities of a Markov chain, with several ergodic classes, are perturbed such that (rare) transitions among the different ergodic classes of the unperturbed chain are allowed. Singularly perturbed Markov chains have been studied in the literature under more restrictive assumptions such as strong recurrence ergodicity or Doeblin conditions. We relax these conditions so that our results can be applied to queueing models (where the conditions mentioned above typically fail to hold). Assuming ν-geometric ergodicity, we are able to explicitly express the steady-state distribution of the perturbed Markov chain as a Taylor series in the perturbation parameter. We apply our results to quasi-birth-and-death processes and queueing models.
Styles: APA, Harvard, Vancouver, ISO, etc.
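The Taylor-series expansion of the perturbed steady state can be checked numerically in a finite-state case: finite-difference estimates of the first-order coefficient at two step sizes should agree closely. A sketch; the 3-state chain and the perturbation direction C (rows summing to zero) are illustrative assumptions, not from the paper:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a finite stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P0 = np.array([[0.50, 0.50, 0.00],
               [0.25, 0.50, 0.25],
               [0.00, 0.50, 0.50]])
C = np.array([[-0.10, 0.05, 0.05],
              [ 0.00, 0.00, 0.00],
              [ 0.05, 0.05, -0.10]])

pi0 = stationary(P0)
# Finite-difference estimates of the first-order Taylor coefficient of
# epsilon -> stationary(P0 + epsilon * C), at two step sizes.
d1 = (stationary(P0 + 1e-3 * C) - pi0) / 1e-3
d2 = (stationary(P0 + 1e-4 * C) - pi0) / 1e-4
```

The two estimates agreeing indicates smooth dependence on the parameter; the paper's contribution is obtaining the full series explicitly for denumerable chains under ν-geometric ergodicity, where no such direct computation is available.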
40

Guihenneuc-Jouyaux, Chantal, and Christian P. Robert. "Discretization of Continuous Markov Chains and Markov Chain Monte Carlo Convergence Assessment." Journal of the American Statistical Association 93, no. 443 (September 1998): 1055–67. http://dx.doi.org/10.1080/01621459.1998.10473767.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Zhao, Yiqiang Q., Wei Li, and W. John Braun. "Infinite block-structured transition matrices and their properties." Advances in Applied Probability 30, no. 2 (June 1998): 365–84. http://dx.doi.org/10.1239/aap/1035228074.

Full text
Abstract:
In this paper, we study Markov chains with infinite state block-structured transition matrices, whose states are partitioned into levels according to the block structure, and various associated measures. Roughly speaking, these measures involve first passage times or expected numbers of visits to certain levels without hitting other levels. They are very important and often play a key role in the study of a Markov chain. Necessary and/or sufficient conditions are obtained for a Markov chain to be positive recurrent, recurrent, or transient in terms of these measures. Results are obtained for general irreducible Markov chains as well as those with transition matrices possessing some block structure. We also discuss the decomposition or the factorization of the characteristic equations of these measures. In the scalar case, we locate the zeros of these characteristic functions and therefore use these zeros to characterize a Markov chain. Examples and various remarks are given to illustrate some of the results.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Zhao, Yiqiang Q., Wei Li, and W. John Braun. "Infinite block-structured transition matrices and their properties." Advances in Applied Probability 30, no. 2 (June 1998): 365–84. http://dx.doi.org/10.1017/s0001867800047339.

Full text
Abstract:
In this paper, we study Markov chains with infinite state block-structured transition matrices, whose states are partitioned into levels according to the block structure, and various associated measures. Roughly speaking, these measures involve first passage times or expected numbers of visits to certain levels without hitting other levels. They are very important and often play a key role in the study of a Markov chain. Necessary and/or sufficient conditions are obtained for a Markov chain to be positive recurrent, recurrent, or transient in terms of these measures. Results are obtained for general irreducible Markov chains as well as those with transition matrices possessing some block structure. We also discuss the decomposition or the factorization of the characteristic equations of these measures. In the scalar case, we locate the zeros of these characteristic functions and therefore use these zeros to characterize a Markov chain. Examples and various remarks are given to illustrate some of the results.
Styles: APA, Harvard, Vancouver, ISO, etc.
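The first-passage measures discussed above have a classical finite-state counterpart: with Q the substochastic block of transitions among a set of transient states, the expected numbers of steps to leave that set solve the linear system (I − Q)m = 1. A small numerical sketch (the matrix Q is an illustrative assumption):

```python
import numpy as np

# Substochastic block Q of transitions among three transient states
# (row sums < 1; the deficit is the per-step exit probability).
Q = np.array([[0.2, 0.5, 0.0],
              [0.3, 0.2, 0.4],
              [0.0, 0.5, 0.3]])

# Expected steps to exit the transient set, from each starting state.
m = np.linalg.solve(np.eye(3) - Q, np.ones(3))
```

The same linear-system viewpoint extends, level by level, to the infinite block-structured matrices the paper studies, where these measures characterize positive recurrence, recurrence, or transience.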
43

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains." Advances in Applied Probability 26, no. 3 (September 1994): 728–55. http://dx.doi.org/10.2307/1427818.

Full text
Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probability models. An application to the modelling of populations with semi-Markovian replacements is also presented.
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains." Advances in Applied Probability 26, no. 3 (September 1994): 728–55. http://dx.doi.org/10.1017/s0001867800026525.

Full text
Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probability models. An application to the modelling of populations with semi-Markovian replacements is also presented.
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Bowen, Lewis. "Non-abelian free group actions: Markov processes, the Abramov–Rohlin formula and Yuzvinskii’s formula." Ergodic Theory and Dynamical Systems 30, no. 6 (October 13, 2009): 1629–63. http://dx.doi.org/10.1017/s0143385709000844.

Full text
Abstract:
This paper introduces Markov chains and processes over non-abelian free groups and semigroups. We prove a formula for the f-invariant of a Markov chain over a free group in terms of transition matrices that parallels the classical formula for the entropy of a Markov chain. Applications include free group analogues of the Abramov–Rohlin formula for skew-product actions and Yuzvinskii’s addition formula for algebraic actions.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Golomoziy, Vitaliy, and Olha Moskanova. "Polynomial Recurrence of Time-inhomogeneous Markov Chains." Austrian Journal of Statistics 52, SI (August 15, 2023): 40–53. http://dx.doi.org/10.17713/ajs.v52isi.1752.

Full text
Abstract:
This paper is devoted to establishing conditions that guarantee the existence of the p-th moment of the time it takes for a time-inhomogeneous Markov chain to hit some set C. We modify the well-known drift condition from the theory of homogeneous Markov chains, and demonstrate that an inhomogeneous Markov chain may be polynomially recurrent while exhibiting different dynamics from its homogeneous counterpart.
Styles: APA, Harvard, Vancouver, ISO, etc.
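The quantity controlled by such drift conditions — the p-th moment of a hitting time τ_C — can be estimated by simulation. A homogeneous stand-in for the paper's time-inhomogeneous setting (the walk and all parameters are assumptions): a random walk on the non-negative integers with downward drift, hitting C = {0}, for which E[τ] = start/(2p − 1) is known exactly.

```python
import random

def hitting_time(start, p, rng):
    """Steps for a walk on {0, 1, 2, ...} (down with probability p,
    up otherwise) to hit the set C = {0} from `start`."""
    x, t = start, 0
    while x > 0:
        x += -1 if rng.random() < p else 1
        t += 1
    return t

rng = random.Random(7)
p = 0.7                        # downward drift, so the walk hits 0 a.s.
times = [hitting_time(5, p, rng) for _ in range(4000)]
mean_t = sum(times) / len(times)                        # estimates E[tau]
second_moment = sum(t * t for t in times) / len(times)  # estimates E[tau^2]
# Exact first moment for this walk: start / (2p - 1) = 5 / 0.4 = 12.5.
```

A drift condition of the kind the paper modifies is what guarantees, without simulation, that moments such as E[τ²] are finite.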
47

Huang, Huilin, and Weiguo Yang. "Limit theorems for asymptotic circular mth-order Markov chains indexed by an m-rooted homogeneous tree." Filomat 33, no. 6 (2019): 1817–32. http://dx.doi.org/10.2298/fil1906817h.

Full text
Abstract:
In this paper, we give the definition of an asymptotic circular mth-order Markov chain indexed by an m-rooted homogeneous tree. By applying the limit property for a sequence of multi-variable functions of a nonhomogeneous Markov chain indexed by such a tree, we establish the strong law of large numbers and the asymptotic equipartition property (AEP) for asymptotic circular mth-order finite Markov chains indexed by this homogeneous tree. As a corollary, we obtain the strong law of large numbers and the AEP for the mth-order finite nonhomogeneous Markov chain indexed by the m-rooted homogeneous tree.
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Shi, Zhiyan, Pingping Zhong, and Yan Fan. "The Shannon–McMillan Theorem for Markov Chains Indexed by a Cayley Tree in Random Environment." Probability in the Engineering and Informational Sciences 32, no. 4 (December 29, 2017): 626–39. http://dx.doi.org/10.1017/s0269964817000444.

Full text
Abstract:
In this paper, we give the definition of tree-indexed Markov chains in a random environment with a countable state space, and then study the realization of a Markov chain indexed by a tree in a random environment. Finally, we prove the strong law of large numbers and the Shannon–McMillan theorem for Markov chains indexed by a Cayley tree in a Markovian environment with a countable state space.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Kijima, Masaaki. "Hazard rate and reversed hazard rate monotonicities in continuous-time Markov chains." Journal of Applied Probability 35, no. 3 (September 1998): 545–56. http://dx.doi.org/10.1239/jap/1032265203.

Full text
Abstract:
A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Kijima, Masaaki. "Hazard rate and reversed hazard rate monotonicities in continuous-time Markov chains." Journal of Applied Probability 35, no. 3 (September 1998): 545–56. http://dx.doi.org/10.1017/s002190020001620x.

Full text
Abstract:
A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.
Styles: APA, Harvard, Vancouver, ISO, etc.
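For a discrete-time birth–death chain, the upward first-passage times central to analyses like the one above satisfy the one-step recursion E_i = (1 + q·E_{i−1})/p, where E_i is the expected time to move from state i to i + 1. A sketch with illustrative parameters (constant up/down probabilities p and q are an assumption), cross-checked by Monte Carlo:

```python
import random

N, p, q = 5, 0.4, 0.3   # up / down probabilities; stay with prob 0.3

def expected_up_times(N, p, q):
    """E[i -> i+1 passage time] via E_i = (1 + q * E_{i-1}) / p;
    state 0 has no down-step, so E_0 = 1 / p."""
    E = [1.0 / p]
    for i in range(1, N):
        E.append((1.0 + q * E[i - 1]) / p)
    return E

mean_0_to_N = sum(expected_up_times(N, p, q))  # E[passage 0 -> N], ~27.12

def simulate_passage(rng):
    x, t = 0, 0
    while x < N:
        u = rng.random()
        if u < p:
            x += 1
        elif u < p + q and x > 0:
            x -= 1
        t += 1
    return t

rng = random.Random(3)
sim = sum(simulate_passage(rng) for _ in range(20000)) / 20000
```

The skip-free structure is what makes the passage 0 → N decompose into a sum of independent level-crossing times; the paper characterizes this structure through hazard rate and reversed hazard rate monotonicity.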