Journal articles on the topic "Markov chain"

Click this link to see other types of publications on this topic: Markov chain.

Create a correct citation in APA, MLA, Chicago, Harvard, and many other styles.


Consult the top 50 journal articles on the topic "Markov chain".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.

Browse journal articles from a wide variety of disciplines and compile a proper bibliography.

1

Guyon, X., and C. Hardouin. "Markov chain Markov field dynamics: Models and statistics". Statistics 35, no. 4 (January 2001): 593–627. http://dx.doi.org/10.1080/02331880108802756.
2

Valenzuela, Mississippi. "Markov chains and applications". Selecciones Matemáticas 9, no. 01 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.

Abstract:
This work has three important purposes: the first is the study of Markov chains; the second is to show that Markov chains have many different applications; and the third is to model a process that behaves in this way. Throughout this work we describe what a Markov chain is, what these processes are used for, and how these chains are classified. We characterize a Markov chain, that is, we analyze the primary elements that make up a Markov chain, among other things.
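The basic objects this abstract describes are easy to make concrete. Below is a minimal sketch in Python with NumPy, using a hypothetical three-state chain (the transition matrix is invented for illustration): it simulates a trajectory and checks that the empirical state frequencies approach the stationary distribution π solving π = πP.

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.3, 0.5],
])

rng = np.random.default_rng(0)

def simulate(P, steps, start=0):
    """Sample a trajectory of the Markov chain with transition matrix P."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

# Empirical state frequencies approximate the stationary distribution pi,
# the solution of pi = pi P.
path = simulate(P, 100_000)
print(np.bincount(path, minlength=3) / len(path))
```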
3

Barker, Richard J., and Matthew R. Schofield. "Putting Markov Chains Back into Markov Chain Monte Carlo". Journal of Applied Mathematics and Decision Sciences 2007 (October 30, 2007): 1–13. http://dx.doi.org/10.1155/2007/98086.

Abstract:
Markov chain theory plays an important role in statistical inference both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history, however the use of Markov chain theory in developing algorithms for statistical inference has only become popular recently. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also in developing efficient algorithms for inference. We anticipate that a major area of future research involving mark-recapture data will be the development of hierarchical models that lead to better demographic models that account for all uncertainties in the analysis. A key issue is determining when the chains produced by Markov chain Monte Carlo sampling have converged.
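For readers who have not seen Markov chain Monte Carlo in code, the sketch below shows the generic idea the paper builds on: a random-walk Metropolis sampler whose stationary law is the target distribution. It is a plain illustration with an invented standard-normal target, not the paper's mark-recapture machinery; diagnosing when such a chain has converged is exactly the "key issue" the abstract flags.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized log-density of a standard normal target (illustrative).
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis: the accept/reject rule makes the target
    distribution stationary for the resulting Markov chain."""
    xs = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        prop = x + step * rng.normal()
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        xs[i] = x
    return xs

samples = metropolis(50_000)
print(samples.mean(), samples.var())  # should be near 0 and 1
```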
4

Marcus, Brian, and Selim Tuncel. "The weight-per-symbol polytope and scaffolds of invariants associated with Markov chains". Ergodic Theory and Dynamical Systems 11, no. 1 (March 1991): 129–80. http://dx.doi.org/10.1017/s0143385700006052.

Abstract:
We study Markov chains via invariants constructed from periodic orbits. Canonical extensions, based on these invariants, are used to establish a constraint on the degree of finite-to-one block homomorphisms from one Markov chain to another. We construct a polytope from the normalized weights of periodic orbits. Using this polytope, we find canonically-defined induced Markov chains inside the original Markov chain. Each of the invariants associated with these Markov chains gives rise to a scaffold of invariants for the original Markov chain. This is used to obtain counterexamples to the finite equivalence conjecture and to a conjecture regarding finitary isomorphism with finite expected coding time. Also included are results related to the problem of minimality (with respect to block homomorphism) of Bernoulli shifts in the class of Markov chains with beta function equal to the beta function of the Bernoulli shift.
5

Xiang, Xuyan, Xiao Zhang, and Xiaoyun Mo. "Statistical Identification of Markov Chain on Trees". Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/2036248.

Abstract:
The theoretical study of continuous-time homogeneous Markov chains is usually based on a natural assumption of a known transition rate matrix (TRM). However, the TRM of a Markov chain in realistic systems might be unknown and might even need to be identified by partially observable data. Thus, the problem of identifying the TRM of the underlying Markov chain from partially observable information is of great significance in applications. This is what we call the statistical identification of a Markov chain. The Markov chain inversion approach has been derived for basic Markov chains by partial observation at a few states. In the current letter, a more extensive class of Markov chains on trees is investigated. Firstly, a type of more operable derivative constraint is developed. Then, it is shown that all Markov chains on trees can be identified only by such derivative constraints of univariate distributions of sojourn time and/or hitting time at a few states. A numerical example is included to demonstrate the correctness of the proposed algorithms.
6

APOSTOLOV, S. S., Z. A. MAYZELIS, O. V. USATENKO, and V. A. YAMPOL'SKII. "HIGH-ORDER CORRELATION FUNCTIONS OF BINARY MULTI-STEP MARKOV CHAINS". International Journal of Modern Physics B 22, no. 22 (September 10, 2008): 3841–53. http://dx.doi.org/10.1142/s0217979208048589.

Abstract:
Two approaches to studying the correlation functions of binary Markov sequences are considered. The first of them is based on the study of the probability of occurrence of different "words" in the sequence. The other one uses recurrence relations for correlation functions. These methods are applied to two important particular classes of Markov chains: the Markov chains with permutative conditional probability functions and the additive Markov chains with small memory functions. The exciting property of self-similarity (discovered in Phys. Rev. Lett. 90, 110601 (2003) for the additive Markov chain with the step-wise memory function) is proved to be an intrinsic property of any permutative Markov chain. The applicability of the correlation functions of the additive Markov chains with small memory functions to calculating the thermodynamic characteristics of the classical Ising spin chain with long-range interaction is discussed.
7

Masuyama, Hiroyuki. "Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions". Advances in Applied Probability 47, no. 1 (March 2015): 83–105. http://dx.doi.org/10.1239/aap/1427814582.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
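The operation analyzed here, augmented truncation, is simple to state in code: keep the leading n×n block of the transition matrix and return the lost probability mass so that each row again sums to 1. The sketch below uses last-column augmentation on an invented reflecting random walk; the paper's actual contribution, bounding the total variation distance between the two stationary distributions, is not attempted here.

```python
import numpy as np

def truncate_and_augment(P, n):
    """Keep the leading n x n block of transition matrix P and add the
    truncated probability mass to the last column (last-column augmentation)."""
    Q = P[:n, :n].copy()
    Q[:, -1] += 1.0 - Q.sum(axis=1)  # restore row sums to 1
    return Q

# Invented finite stand-in for an infinite chain: a reflecting random walk.
N = 100
P = np.zeros((N, N))
for i in range(N):
    P[i, max(i - 1, 0)] += 0.6
    P[i, min(i + 1, N - 1)] += 0.4

Q = truncate_and_augment(P, 20)
print(Q.sum(axis=1))  # every row sums to 1 again
```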
8

Masuyama, Hiroyuki. "Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions". Advances in Applied Probability 47, no. 01 (March 2015): 83–105. http://dx.doi.org/10.1017/s0001867800007710.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
9

Verbeken, Brecht, and Marie-Anne Guerry. "Attainability for Markov and Semi-Markov Chains". Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.

Abstract:
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n_i represents the number of entities in the state S_i, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, are described as the convex hull of their respective vertices, and properties of these regions are investigated.
10

Ledoux, James. "A geometric invariant in weak lumpability of finite Markov chains". Journal of Applied Probability 34, no. 4 (December 1997): 847–58. http://dx.doi.org/10.2307/3215001.

Abstract:
We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
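Weak lumpability, as studied here, depends on the initial distribution and is subtle; its well-known strong counterpart (Kemeny and Snell) is purely matrix-algebraic and makes a useful contrast. The sketch below checks strong lumpability only, on an invented three-state chain, and should not be read as implementing the paper's polyhedral-cone criterion.

```python
import numpy as np

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Strong lumpability: within each block of the partition, every state
    must have the same total transition probability into each block. Weak
    lumpability (the paper's subject) is a strictly weaker notion."""
    for block in partition:
        for target in partition:
            mass = P[np.ix_(block, target)].sum(axis=1)
            if np.ptp(mass) > tol:  # rows disagree -> not strongly lumpable
                return False
    return True

# Invented 3-state chain, lumped by merging states 1 and 2.
P = np.array([
    [0.2, 0.4, 0.4],
    [0.5, 0.3, 0.2],
    [0.5, 0.1, 0.4],
])
print(is_strongly_lumpable(P, [[0], [1, 2]]))  # True: rows 1, 2 agree blockwise
```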
11

Ledoux, James. "A geometric invariant in weak lumpability of finite Markov chains". Journal of Applied Probability 34, no. 04 (December 1997): 847–58. http://dx.doi.org/10.1017/s0021900200101561.

Abstract:
We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
12

Takemura, Akimichi, and Hisayuki Hara. "Markov chain Monte Carlo test of toric homogeneous Markov chains". Statistical Methodology 9, no. 3 (May 2012): 392–406. http://dx.doi.org/10.1016/j.stamet.2011.10.004.
13

BOUCHER, THOMAS R., and DAREN B. H. CLINE. "PIGGYBACKING THRESHOLD PROCESSES WITH A FINITE STATE MARKOV CHAIN". Stochastics and Dynamics 09, no. 02 (June 2009): 187–204. http://dx.doi.org/10.1142/s0219493709002622.

Abstract:
The state-space representations of certain nonlinear autoregressive time series are general state Markov chains. The transitions of a general state Markov chain among regions in its state-space can be modeled with the transitions among states of a finite state Markov chain. Stability of the time series is then informed by the stationary distributions of the finite state Markov chain. This approach generalizes some previous results.
14

Rydén, Tobias. "On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions". Journal of Applied Probability 33, no. 3 (September 1996): 640–53. http://dx.doi.org/10.2307/3215346.

Abstract:
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
15

Rydén, Tobias. "On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions". Journal of Applied Probability 33, no. 03 (September 1996): 640–53. http://dx.doi.org/10.1017/s0021900200100087.

Abstract:
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
16

Dixit, Purushottam D. "Introducing User-Prescribed Constraints in Markov Chains for Nonlinear Dimensionality Reduction". Neural Computation 31, no. 5 (May 2019): 980–97. http://dx.doi.org/10.1162/neco_a_01184.

Abstract:
Stochastic kernel-based dimensionality-reduction approaches have become popular in the past decade. The central component of many of these methods is a symmetric kernel that quantifies the vicinity between pairs of data points and a kernel-induced Markov chain on the data. Typically, the Markov chain is fully specified by the kernel through row normalization. However, in many cases, it is desirable to impose user-specified stationary-state and dynamical constraints on the Markov chain. Unfortunately, no systematic framework exists to impose such user-defined constraints. Here, based on our previous work on inference of Markov models, we introduce a path entropy maximization based approach to derive the transition probabilities of Markov chains using a kernel and additional user-specified constraints. We illustrate the usefulness of these Markov chains with examples.
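The row normalization that, per the abstract, fully specifies the kernel-induced Markov chain is a one-liner. The sketch below builds this baseline construction from toy data; the Gaussian kernel and the bandwidth value are arbitrary illustrative choices, and the paper's path-entropy-maximization step is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))  # toy data set

# Symmetric Gaussian kernel measuring vicinity between pairs of data points.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 0.5)  # bandwidth 0.5 chosen arbitrarily

# Row normalization turns the kernel into a Markov transition matrix:
# P[i, j] is the probability of stepping from data point i to data point j.
P = K / K.sum(axis=1, keepdims=True)
assert np.allclose(P.sum(axis=1), 1.0)
```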
17

Zhong, Pingping, Weiguo Yang, and Peipei Liang. "THE ASYMPTOTIC EQUIPARTITION PROPERTY FOR ASYMPTOTIC CIRCULAR MARKOV CHAINS". Probability in the Engineering and Informational Sciences 24, no. 2 (March 18, 2010): 279–88. http://dx.doi.org/10.1017/s0269964809990271.

Abstract:
In this article, we study the asymptotic equipartition property (AEP) for asymptotic circular Markov chains. First, the definition of an asymptotic circular Markov chain is introduced. Then by applying the limit property for the bivariate functions of nonhomogeneous Markov chains, the strong limit theorem on the frequencies of occurrence of states for asymptotic circular Markov chains is established. Next, the strong law of large numbers on the frequencies of occurrence of states for asymptotic circular Markov chains is obtained. Finally, we prove the AEP for asymptotic circular Markov chains.
18

Qi-feng, Yao, Dong Yun, and Wang Zhong-Zhi. "An Entropy Rate Theorem for a Hidden Inhomogeneous Markov Chain". Open Statistics & Probability Journal 8, no. 1 (September 30, 2017): 19–26. http://dx.doi.org/10.2174/1876527001708010019.

Abstract:
Objective: The main object of our study is to extend some entropy rate theorems to a Hidden Inhomogeneous Markov Chain (HIMC) and establish an entropy rate theorem under some mild conditions. Introduction: A hidden inhomogeneous Markov chain contains two different stochastic processes; one is an inhomogeneous Markov chain whose states are hidden and the other is a stochastic process whose states are observable. Materials and Methods: The proof of the theorem requires some ergodic properties of an inhomogeneous Markov chain, and the flexible application of the properties of norm and the bounded conditions of series are also indispensable. Results: This paper presents an entropy rate theorem for an HIMC under some mild conditions and two corollaries for a hidden Markov chain and an inhomogeneous Markov chain. Conclusion: Under some mild conditions, the entropy rates of an inhomogeneous Markov chain, a hidden Markov chain and an HIMC are similar and easy to calculate.
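For the homogeneous special case that the corollaries specialize toward, the entropy rate has the familiar closed form H = -Σ_i π_i Σ_j p_ij log p_ij, with π the stationary distribution. A short sketch computing it for an invented two-state chain:

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate of a homogeneous irreducible Markov chain:
    H = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary law."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)  # 0 log 0 = 0
    return -np.sum(pi[:, None] * P * logP)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])  # invented two-state chain
print(entropy_rate(P))  # bits per step
```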
19

Nuel, Grégory. "Pattern Markov Chains: Optimal Markov Chain Embedding Through Deterministic Finite Automata". Journal of Applied Probability 45, no. 1 (March 2008): 226–43. http://dx.doi.org/10.1239/jap/1208358964.

Abstract:
In the framework of patterns in random texts, the Markov chain embedding techniques consist of turning the occurrences of a pattern over an order-m Markov sequence into those of a subset of states into an order-1 Markov chain. In this paper we use the theory of language and automata to provide space-optimal Markov chain embedding using the new notion of pattern Markov chains (PMCs), and we give explicit constructive algorithms to build the PMC associated to any given pattern problem. The interest of PMCs is then illustrated through the exact computation of P-values whose complexity is discussed and compared to other classical asymptotic approximations. Finally, we consider two illustrative examples of highly degenerated pattern problems (structured motifs and PROSITE signatures), which further illustrate the usefulness of our approach.
20

Nuel, Grégory. "Pattern Markov Chains: Optimal Markov Chain Embedding Through Deterministic Finite Automata". Journal of Applied Probability 45, no. 01 (March 2008): 226–43. http://dx.doi.org/10.1017/s0021900200004083.

Abstract:
In the framework of patterns in random texts, the Markov chain embedding techniques consist of turning the occurrences of a pattern over an order-m Markov sequence into those of a subset of states into an order-1 Markov chain. In this paper we use the theory of language and automata to provide space-optimal Markov chain embedding using the new notion of pattern Markov chains (PMCs), and we give explicit constructive algorithms to build the PMC associated to any given pattern problem. The interest of PMCs is then illustrated through the exact computation of P-values whose complexity is discussed and compared to other classical asymptotic approximations. Finally, we consider two illustrative examples of highly degenerated pattern problems (structured motifs and PROSITE signatures), which further illustrate the usefulness of our approach.
21

Glynn, Peter W., and Chang-Han Rhee. "Exact estimation for Markov chain equilibrium expectations". Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1239/jap/1417528487.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
22

Glynn, Peter W., and Chang-Han Rhee. "Exact estimation for Markov chain equilibrium expectations". Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1017/s0021900200021392.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
23

Lekgari, Mokaedi V. "Maximal Coupling Procedure and Stability of Continuous-Time Markov Chains". Bulletin of Mathematical Sciences and Applications 10 (November 2014): 30–37. http://dx.doi.org/10.18052/www.scipress.com/bmsa.10.30.

Abstract:
In this study we first investigate the stability of subsampled discrete Markov chains through the use of the maximal coupling procedure. This is an extension of the available results on Markov chains and is realized through the analysis of the subsampled chain Φ_{T_n}, where {T_n, n ∈ Z_+} is an increasing sequence of random stopping times. Similar results are then obtained for the stability of countable-state continuous-time Markov processes by employing the skeleton-chain method.
24

Papastathopoulos, I., K. Strokorb, J. A. Tawn, and A. Butler. "Extreme events of Markov chains". Advances in Applied Probability 49, no. 1 (March 2017): 134–61. http://dx.doi.org/10.1017/apr.2016.82.

Abstract:
The extremal behaviour of a Markov chain is typically characterised by its tail chain. For asymptotically dependent Markov chains, existing formulations fail to capture the full evolution of the extreme event when the chain moves out of the extreme tail region, and, for asymptotically independent chains, recent results fail to cover well-known asymptotically independent processes, such as Markov processes with a Gaussian copula between consecutive values. We use more sophisticated limiting mechanisms that cover a broader class of asymptotically independent processes than current methods, including an extension of the canonical Heffernan‒Tawn normalisation scheme, and reveal features which existing methods reduce to a degenerate form associated with nonextreme states.
25

Choi, Michael C. H., and Pierre Patie. "Analysis of non-reversible Markov chains via similarity orbits". Combinatorics, Probability and Computing 29, no. 4 (February 18, 2020): 508–36. http://dx.doi.org/10.1017/s0963548320000024.

Abstract:
In this paper we develop an in-depth analysis of non-reversible Markov chains on denumerable state space from a similarity orbit perspective. In particular, we study the class of Markov chains whose transition kernel is in the similarity orbit of a normal transition kernel, such as that of birth–death chains or reversible Markov chains. We start by identifying a set of sufficient conditions for a Markov chain to belong to the similarity orbit of a birth–death chain. As by-products, we obtain a spectral representation in terms of non-self-adjoint resolutions of identity in the sense of Dunford [21] and offer a detailed analysis on the convergence rate, separation cutoff and L^2-cutoff of this class of non-reversible Markov chains. We also look into the problem of estimating the integral functionals from discrete observations for this class. In the last part of this paper we investigate a particular similarity orbit of reversible Markov kernels, which we call the pure birth orbit, and analyse various possibly non-reversible variants of classical birth–death processes in this orbit.
26

Zhou, Hua, and Kenneth Lange. "Composition Markov chains of multinomial type". Advances in Applied Probability 41, no. 1 (March 2009): 270–91. http://dx.doi.org/10.1239/aap/1240319585.

Abstract:
Suppose that n identical particles evolve according to the same marginal Markov chain. In this setting we study chains such as the Ehrenfest chain that move a prescribed number of randomly chosen particles at each epoch. The product chain constructed by this device inherits its eigenstructure from the marginal chain. There is a further chain derived from the product chain called the composition chain that ignores particle labels and tracks the numbers of particles in the various states. The composition chain in turn inherits its eigenstructure and various properties such as reversibility from the product chain. The equilibrium distribution of the composition chain is multinomial. The current paper proves these facts in the well-known framework of state lumping and identifies the column eigenvectors of the composition chain with the multivariate Krawtchouk polynomials of Griffiths. The advantages of knowing the full spectral decomposition of the composition chain include (a) detailed estimates of the rate of convergence to equilibrium, (b) construction of martingales that allow calculation of the moments of the particle counts, and (c) explicit expressions for mean coalescence times in multi-person random walks. These possibilities are illustrated by applications to Ehrenfest chains, the Hoare and Rahman chain, Kimura's continuous-time chain for DNA evolution, a light bulb chain, and random walks on some specific graphs.
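The Ehrenfest chain named in the abstract is small enough to verify the multinomial equilibrium claim numerically; with two urns the multinomial reduces to a Binomial(n, 1/2) distribution. A minimal sketch (n = 10 particles, chosen arbitrarily):

```python
import math
import numpy as np

n = 10  # number of particles; the state is the count in urn 1

# Ehrenfest chain: at each step a uniformly chosen particle switches urns.
P = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    if k > 0:
        P[k, k - 1] = k / n        # a particle leaves urn 1
    if k < n:
        P[k, k + 1] = (n - k) / n  # a particle enters urn 1

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# The equilibrium is Binomial(n, 1/2), the two-state multinomial case.
binom = np.array([math.comb(n, k) for k in range(n + 1)]) / 2**n
print(np.allclose(pi, binom))  # True
```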
27

Zhou, Hua, and Kenneth Lange. "Composition Markov chains of multinomial type". Advances in Applied Probability 41, no. 01 (March 2009): 270–91. http://dx.doi.org/10.1017/s0001867800003220.

Abstract:
Suppose that n identical particles evolve according to the same marginal Markov chain. In this setting we study chains such as the Ehrenfest chain that move a prescribed number of randomly chosen particles at each epoch. The product chain constructed by this device inherits its eigenstructure from the marginal chain. There is a further chain derived from the product chain called the composition chain that ignores particle labels and tracks the numbers of particles in the various states. The composition chain in turn inherits its eigenstructure and various properties such as reversibility from the product chain. The equilibrium distribution of the composition chain is multinomial. The current paper proves these facts in the well-known framework of state lumping and identifies the column eigenvectors of the composition chain with the multivariate Krawtchouk polynomials of Griffiths. The advantages of knowing the full spectral decomposition of the composition chain include (a) detailed estimates of the rate of convergence to equilibrium, (b) construction of martingales that allow calculation of the moments of the particle counts, and (c) explicit expressions for mean coalescence times in multi-person random walks. These possibilities are illustrated by applications to Ehrenfest chains, the Hoare and Rahman chain, Kimura's continuous-time chain for DNA evolution, a light bulb chain, and random walks on some specific graphs.
28

Li, Wenxi, and Zhongzhi Wang. "A NOTE ON RÉNYI'S ENTROPY RATE FOR TIME-INHOMOGENEOUS MARKOV CHAINS". Probability in the Engineering and Informational Sciences 33, no. 4 (December 5, 2018): 579–90. http://dx.doi.org/10.1017/s026996481800044x.

Abstract:
In this note, we use the Perron–Frobenius theorem to obtain the Rényi entropy rate for a time-inhomogeneous Markov chain whose transition matrices converge to a primitive matrix. As direct corollaries, we also obtain the Rényi entropy rate for an asymptotic circular Markov chain and the Rényi divergence rate between two time-inhomogeneous Markov chains.
29

Politis, Dimitris N. "Markov Chains in Many Dimensions". Advances in Applied Probability 26, no. 3 (September 1994): 756–74. http://dx.doi.org/10.2307/1427819.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
30

Politis, Dimitris N. "Markov Chains in Many Dimensions". Advances in Applied Probability 26, no. 03 (September 1994): 756–74. http://dx.doi.org/10.1017/s0001867800026537.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
31

Lorek, Paweł. "Antiduality and Möbius monotonicity: generalized coupon collector problem". ESAIM: Probability and Statistics 23 (2019): 739–69. http://dx.doi.org/10.1051/ps/2019004.

Abstract:
For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal, in distribution, to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on some partial ordering of the state space. We use a theory of strong stationary duality developed recently for Möbius monotone Markov chains. We give several sharp antidual chains for the Markov chain corresponding to a generalized coupon collector problem. As a consequence – utilizing known results on the limiting distribution of the absorption time – we indicate separation cutoffs (with their window sizes) in several chains. We also present a chain which (under some conditions) has a prescribed stationary distribution and its FSST is distributed as a prescribed mixture of sums of geometric random variables.
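The classical case underlying the generalized coupon collector problem is easy to simulate: the absorption time of the chain tracking the set of collected coupons is the number of draws needed to see every coupon. A sketch with uniform coupon probabilities (the uniform choice is illustrative; the paper treats a generalized version):

```python
import numpy as np

rng = np.random.default_rng(3)

def collect_time(n_coupons):
    """Draws until all coupons are seen: the absorption time of the
    Markov chain that tracks the set of collected coupons."""
    seen, draws = set(), 0
    while len(seen) < n_coupons:
        seen.add(rng.integers(n_coupons))
        draws += 1
    return draws

times = [collect_time(10) for _ in range(20_000)]
# Expected value for 10 uniform coupons: 10 * H_10, roughly 29.29.
print(np.mean(times))
```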
32

Hsiau, Shoou-Ren. "Compound Poisson limit theorems for Markov chains". Journal of Applied Probability 34, no. 1 (March 1997): 24–34. http://dx.doi.org/10.2307/3215171.

Abstract:
This paper establishes a compound Poisson limit theorem for the sum of a sequence of multi-state Markov chains. Our theorem generalizes an earlier one by Koopman for the two-state Markov chain. Moreover, a similar approach is used to derive a limit theorem for the sum of the kth-order two-state Markov chain.
33

Hsiau, Shoou-Ren. "Compound Poisson limit theorems for Markov chains". Journal of Applied Probability 34, no. 01 (March 1997): 24–34. http://dx.doi.org/10.1017/s002190020010066x.

Abstract:
This paper establishes a compound Poisson limit theorem for the sum of a sequence of multi-state Markov chains. Our theorem generalizes an earlier one by Koopman for the two-state Markov chain. Moreover, a similar approach is used to derive a limit theorem for the sum of the kth-order two-state Markov chain.
34

Boys, R. J., and D. A. Henderson. "On Determining the Order of Markov Dependence of an Observed Process Governed by a Hidden Markov Model". Scientific Programming 10, no. 3 (2002): 241–51. http://dx.doi.org/10.1155/2002/683164.

Abstract:
This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution for both the order of Markov dependence in the observed sequence and the other governing model parameters. These samples allow coherent inferences to be made straightforwardly in contrast to those which use information criteria. The methods are illustrated by their application to both simulated and real data sets.
35

Janssen, A., and J. Segers. "Markov Tail Chains". Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1239/jap/1421763332.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
36

Janssen, A., and J. Segers. "Markov Tail Chains". Journal of Applied Probability 51, no. 04 (December 2014): 1133–53. http://dx.doi.org/10.1017/s0001867800012027.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
37

Janssen, A., and J. Segers. "Markov Tail Chains". Journal of Applied Probability 51, no. 04 (December 2014): 1133–53. http://dx.doi.org/10.1017/s002190020001202x.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
38

Altman, Eitan, Konstantin E. Avrachenkov, and Rudesindo Núñez-Queija. "Perturbation analysis for denumerable Markov chains with application to queueing models". Advances in Applied Probability 36, no. 3 (September 2004): 839–53. http://dx.doi.org/10.1239/aap/1093962237.

Abstract:
We study the parametric perturbation of Markov chains with denumerable state spaces. We consider both regular and singular perturbations. By the latter we mean that transition probabilities of a Markov chain, with several ergodic classes, are perturbed such that (rare) transitions among the different ergodic classes of the unperturbed chain are allowed. Singularly perturbed Markov chains have been studied in the literature under more restrictive assumptions such as strong recurrence ergodicity or Doeblin conditions. We relax these conditions so that our results can be applied to queueing models (where the conditions mentioned above typically fail to hold). Assuming ν-geometric ergodicity, we are able to explicitly express the steady-state distribution of the perturbed Markov chain as a Taylor series in the perturbation parameter. We apply our results to quasi-birth-and-death processes and queueing models.
39

Altman, Eitan, Konstantin E. Avrachenkov, and Rudesindo Núñez-Queija. "Perturbation analysis for denumerable Markov chains with application to queueing models". Advances in Applied Probability 36, no. 03 (September 2004): 839–53. http://dx.doi.org/10.1017/s0001867800013148.

Abstract:
We study the parametric perturbation of Markov chains with denumerable state spaces. We consider both regular and singular perturbations. By the latter we mean that transition probabilities of a Markov chain, with several ergodic classes, are perturbed such that (rare) transitions among the different ergodic classes of the unperturbed chain are allowed. Singularly perturbed Markov chains have been studied in the literature under more restrictive assumptions such as strong recurrence ergodicity or Doeblin conditions. We relax these conditions so that our results can be applied to queueing models (where the conditions mentioned above typically fail to hold). Assuming ν-geometric ergodicity, we are able to explicitly express the steady-state distribution of the perturbed Markov chain as a Taylor series in the perturbation parameter. We apply our results to quasi-birth-and-death processes and queueing models.
40

Guihenneuc-Jouyaux, Chantal, and Christian P. Robert. "Discretization of Continuous Markov Chains and Markov Chain Monte Carlo Convergence Assessment". Journal of the American Statistical Association 93, no. 443 (September 1998): 1055–67. http://dx.doi.org/10.1080/01621459.1998.10473767.
41

Zhao, Yiqiang Q., Wei Li, and W. John Braun. "Infinite block-structured transition matrices and their properties". Advances in Applied Probability 30, no. 2 (June 1998): 365–84. http://dx.doi.org/10.1239/aap/1035228074.

Abstract:
In this paper, we study Markov chains with infinite state block-structured transition matrices, whose states are partitioned into levels according to the block structure, and various associated measures. Roughly speaking, these measures involve first passage times or expected numbers of visits to certain levels without hitting other levels. They are very important and often play a key role in the study of a Markov chain. Necessary and/or sufficient conditions are obtained for a Markov chain to be positive recurrent, recurrent, or transient in terms of these measures. Results are obtained for general irreducible Markov chains as well as those with transition matrices possessing some block structure. We also discuss the decomposition or the factorization of the characteristic equations of these measures. In the scalar case, we locate the zeros of these characteristic functions and therefore use these zeros to characterize a Markov chain. Examples and various remarks are given to illustrate some of the results.
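For a finite chain, the first-passage quantities central to this abstract reduce to a linear solve: with Q the restriction of the transition matrix to states outside the target set, the expected hitting times satisfy (I - Q)h = 1. A sketch on an invented four-state chain:

```python
import numpy as np

P = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.3, 0.3, 0.3, 0.1],
    [0.2, 0.2, 0.2, 0.4],
    [0.0, 0.1, 0.4, 0.5],
])  # invented 4-state transition matrix
target = [3]
rest = [s for s in range(len(P)) if s not in target]

# Expected hitting times of `target` solve (I - Q) h = 1, where Q is
# the transition matrix restricted to the non-target states.
Q = P[np.ix_(rest, rest)]
h = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
print(dict(zip(rest, h)))
```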
42

Zhao, Yiqiang Q., Wei Li, and W. John Braun. "Infinite block-structured transition matrices and their properties". Advances in Applied Probability 30, no. 02 (June 1998): 365–84. http://dx.doi.org/10.1017/s0001867800047339.

Abstract:
In this paper, we study Markov chains with infinite state block-structured transition matrices, whose states are partitioned into levels according to the block structure, and various associated measures. Roughly speaking, these measures involve first passage times or expected numbers of visits to certain levels without hitting other levels. They are very important and often play a key role in the study of a Markov chain. Necessary and/or sufficient conditions are obtained for a Markov chain to be positive recurrent, recurrent, or transient in terms of these measures. Results are obtained for general irreducible Markov chains as well as those with transition matrices possessing some block structure. We also discuss the decomposition or the factorization of the characteristic equations of these measures. In the scalar case, we locate the zeros of these characteristic functions and therefore use these zeros to characterize a Markov chain. Examples and various remarks are given to illustrate some of the results.
43

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains". Advances in Applied Probability 26, no. 3 (September 1994): 728–55. http://dx.doi.org/10.2307/1427818.

Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probability models. An application to the modelling of populations with semi-Markovian replacements is also presented.
44

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains". Advances in Applied Probability 26, no. 03 (September 1994): 728–55. http://dx.doi.org/10.1017/s0001867800026525.

Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probability models. An application to the modelling of populations with semi-Markovian replacements is also presented.
45

BOWEN, LEWIS. "Non-abelian free group actions: Markov processes, the Abramov–Rohlin formula and Yuzvinskii's formula". Ergodic Theory and Dynamical Systems 30, no. 6 (October 13, 2009): 1629–63. http://dx.doi.org/10.1017/s0143385709000844.

Abstract:
This paper introduces Markov chains and processes over non-abelian free groups and semigroups. We prove a formula for the f-invariant of a Markov chain over a free group in terms of transition matrices that parallels the classical formula for the entropy of a Markov chain. Applications include free group analogues of the Abramov–Rohlin formula for skew-product actions and Yuzvinskii's addition formula for algebraic actions.
46

Golomoziy, Vitaliy, and Olha Moskanova. "Polynomial Recurrence of Time-inhomogeneous Markov Chains". Austrian Journal of Statistics 52, SI (August 15, 2023): 40–53. http://dx.doi.org/10.17713/ajs.v52isi.1752.

Abstract:
This paper is devoted to establishing conditions that guarantee the existence of the p-th moment of the time it takes for a time-inhomogeneous Markov chain to hit some set C. We modify the well-known drift condition from the theory of homogeneous Markov chains and demonstrate that an inhomogeneous Markov chain may be polynomially recurrent while exhibiting dynamics different from those of its homogeneous counterpart.
47

Huang, Huilin, and Weiguo Yang. "Limit theorems for asymptotic circular mth-order Markov chains indexed by an m-rooted homogeneous tree". Filomat 33, no. 6 (2019): 1817–32. http://dx.doi.org/10.2298/fil1906817h.

Abstract:
In this paper, we give the definition of an asymptotic circular mth-order Markov chain indexed by an m-rooted homogeneous tree. By applying the limit property for a sequence of multi-variable functions of a nonhomogeneous Markov chain indexed by such a tree, we establish the strong law of large numbers and the asymptotic equipartition property (AEP) for asymptotic circular mth-order finite Markov chains indexed by this homogeneous tree. As a corollary, we obtain the strong law of large numbers and the AEP for the mth-order finite nonhomogeneous Markov chain indexed by the m-rooted homogeneous tree.
48

Shi, Zhiyan, Pingping Zhong, and Yan Fan. "THE SHANNON–MCMILLAN THEOREM FOR MARKOV CHAINS INDEXED BY A CAYLEY TREE IN RANDOM ENVIRONMENT". Probability in the Engineering and Informational Sciences 32, no. 4 (December 29, 2017): 626–39. http://dx.doi.org/10.1017/s0269964817000444.

Abstract:
In this paper, we give the definition of tree-indexed Markov chains in random environment with countable state space, and then study the realization of Markov chain indexed by a tree in random environment. Finally, we prove the strong law of large numbers and Shannon–McMillan theorem for Markov chains indexed by a Cayley tree in a Markovian environment with countable state space.
49

Kijima, Masaaki. "Hazard rate and reversed hazard rate monotonicities in continuous-time Markov chains". Journal of Applied Probability 35, no. 3 (September 1998): 545–56. http://dx.doi.org/10.1239/jap/1032265203.

Abstract:
A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.
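The skip-free structure characterized here is visible directly in the generator of a birth–death process: the rate matrix is tridiagonal, with birth rates on the superdiagonal and death rates on the subdiagonal. A small construction with invented rates:

```python
import numpy as np

def birth_death_generator(births, deaths):
    """Tridiagonal generator of a birth-death process: skip-free in both
    directions, so only unit steps up (births) or down (deaths) occur."""
    n = len(births) + 1
    Q = np.zeros((n, n))
    for i, b in enumerate(births):
        Q[i, i + 1] = b          # birth: i -> i+1
    for i, d in enumerate(deaths, start=1):
        Q[i, i - 1] = d          # death: i -> i-1
    Q[np.diag_indices(n)] = -Q.sum(axis=1)  # rows of a generator sum to 0
    return Q

Q = birth_death_generator(births=[1.0, 0.8, 0.5], deaths=[0.3, 0.6, 0.9])
print(Q)
```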
50

Kijima, Masaaki. "Hazard rate and reversed hazard rate monotonicities in continuous-time Markov chains". Journal of Applied Probability 35, no. 03 (September 1998): 545–56. http://dx.doi.org/10.1017/s002190020001620x.

Abstract:
A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.