Journal articles on the topic 'Markov chains'

To see the other types of publications on this topic, follow the link: Markov chains.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Markov chains.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Grewal, Jasleen K., Martin Krzywinski, and Naomi Altman. "Markov models—Markov chains." Nature Methods 16, no. 8 (July 30, 2019): 663–64. http://dx.doi.org/10.1038/s41592-019-0476-x.

2

Valenzuela, Mississippi. "Markov chains and applications." Selecciones Matemáticas 9, no. 01 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.

Abstract:
This work has three main purposes: first, to study Markov chains; second, to show that Markov chains have a variety of applications; and finally, to model a process that exhibits this behavior. Throughout the work we describe what a Markov chain is, what these processes are used for, and how these chains are classified, and we analyze the primary elements that make up a Markov chain.
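The primary elements this abstract refers to (a state set, an initial state, and a transition matrix of conditional probabilities) are easy to see in a short simulation. Below is a minimal Python sketch; the two-state weather chain is invented for illustration and is not taken from the paper.

import numpy as np

# Hypothetical two-state chain; states and matrix are invented for this sketch.
states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],   # row i holds the distribution of the next state given state i
              [0.5, 0.5]])

def simulate(P, start, n_steps, rng):
    # Walk the chain: the next state depends only on the current one.
    path = [start]
    for _ in range(n_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

rng = np.random.default_rng(0)
print([states[i] for i in simulate(P, start=0, n_steps=10, rng=rng)])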
3

Lindley, D. V., and J. R. Norris. "Markov Chains." Mathematical Gazette 83, no. 496 (March 1999): 188. http://dx.doi.org/10.2307/3618756.

4

Lund, Robert B., and J. R. Norris. "Markov Chains." Journal of the American Statistical Association 94, no. 446 (June 1999): 654. http://dx.doi.org/10.2307/2670196.

5

Verbeken, Brecht, and Marie-Anne Guerry. "Attainability for Markov and Semi-Markov Chains." Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.

Abstract:
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n_i represents the number of entities in the state S_i, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, are described as the convex hull of their respective vertices, and properties of these regions are investigated.
6

Barker, Richard J., and Matthew R. Schofield. "Putting Markov Chains Back into Markov Chain Monte Carlo." Journal of Applied Mathematics and Decision Sciences 2007 (October 30, 2007): 1–13. http://dx.doi.org/10.1155/2007/98086.

Abstract:
Markov chain theory plays an important role in statistical inference, both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history; however, the use of Markov chain theory in developing algorithms for statistical inference has only recently become popular. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also in developing efficient algorithms for inference. We anticipate that a major area of future research involving mark-recapture data will be the development of hierarchical models that lead to better demographic models that account for all uncertainties in the analysis. A key issue is determining when the chains produced by Markov chain Monte Carlo sampling have converged.
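For readers unfamiliar with the sampling side discussed here, a minimal random-walk Metropolis sketch (the simplest Markov chain Monte Carlo sampler) follows; the standard-normal target and step size are placeholders chosen only to keep the example self-contained, not the paper's mark-recapture models.

import numpy as np

def log_target(x):
    # Unnormalized log-density of a standard normal; a stand-in for a posterior.
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, seed=1):
    # Random-walk Metropolis: accept a proposed move with probability
    # min(1, target(proposal)/target(current)); the draws form a Markov
    # chain whose stationary distribution is the target.
    rng = np.random.default_rng(seed)
    x, chain = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        chain[i] = x
    return chain

draws = metropolis(10_000)
print(draws.mean(), draws.std())  # should be close to 0 and 1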
7

Zhong, Pingping, Weiguo Yang, and Peipei Liang. "The Asymptotic Equipartition Property for Asymptotic Circular Markov Chains." Probability in the Engineering and Informational Sciences 24, no. 2 (March 18, 2010): 279–88. http://dx.doi.org/10.1017/s0269964809990271.

Abstract:
In this article, we study the asymptotic equipartition property (AEP) for asymptotic circular Markov chains. First, the definition of an asymptotic circular Markov chain is introduced. Then by applying the limit property for the bivariate functions of nonhomogeneous Markov chains, the strong limit theorem on the frequencies of occurrence of states for asymptotic circular Markov chains is established. Next, the strong law of large numbers on the frequencies of occurrence of states for asymptotic circular Markov chains is obtained. Finally, we prove the AEP for asymptotic circular Markov chains.
8

Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 2 (June 2006): 486–99. http://dx.doi.org/10.1239/jap/1152413736.

Abstract:
In this paper we identify a monotonicity in all countable-state-space reversible Markov chains and examine several consequences of this structure. In particular, we show that the return times to every state in a reversible chain have a decreasing hazard rate on the subsequence of even times. This monotonicity is used to develop geometric convergence rate bounds for time-reversible Markov chains. Results relating the radius of convergence of the probability generating function of first return times to the chain's rate of convergence are presented. An effort is made to keep the exposition rudimentary.
9

Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 02 (June 2006): 486–99. http://dx.doi.org/10.1017/s0021900200001777.

Abstract:
In this paper we identify a monotonicity in all countable-state-space reversible Markov chains and examine several consequences of this structure. In particular, we show that the return times to every state in a reversible chain have a decreasing hazard rate on the subsequence of even times. This monotonicity is used to develop geometric convergence rate bounds for time-reversible Markov chains. Results relating the radius of convergence of the probability generating function of first return times to the chain's rate of convergence are presented. An effort is made to keep the exposition rudimentary.
10

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1239/jap/1421763332.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
11

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 04 (December 2014): 1133–53. http://dx.doi.org/10.1017/s0001867800012027.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
12

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 04 (December 2014): 1133–53. http://dx.doi.org/10.1017/s002190020001202x.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
13

Marcus, Brian, and Selim Tuncel. "The weight-per-symbol polytope and scaffolds of invariants associated with Markov chains." Ergodic Theory and Dynamical Systems 11, no. 1 (March 1991): 129–80. http://dx.doi.org/10.1017/s0143385700006052.

Abstract:
We study Markov chains via invariants constructed from periodic orbits. Canonical extensions, based on these invariants, are used to establish a constraint on the degree of finite-to-one block homomorphisms from one Markov chain to another. We construct a polytope from the normalized weights of periodic orbits. Using this polytope, we find canonically-defined induced Markov chains inside the original Markov chain. Each of the invariants associated with these Markov chains gives rise to a scaffold of invariants for the original Markov chain. This is used to obtain counterexamples to the finite equivalence conjecture and to a conjecture regarding finitary isomorphism with finite expected coding time. Also included are results related to the problem of minimality (with respect to block homomorphism) of Bernoulli shifts in the class of Markov chains with beta function equal to the beta function of the Bernoulli shift.
14

Apostolov, S. S., Z. A. Mayzelis, O. V. Usatenko, and V. A. Yampol'skii. "High-Order Correlation Functions of Binary Multi-Step Markov Chains." International Journal of Modern Physics B 22, no. 22 (September 10, 2008): 3841–53. http://dx.doi.org/10.1142/s0217979208048589.

Abstract:
Two approaches to studying the correlation functions of binary Markov sequences are considered. The first is based on studying the probabilities of occurrence of different "words" in the sequence. The other uses recurrence relations for the correlation functions. These methods are applied to two important particular classes of Markov chains: Markov chains with permutative conditional probability functions and additive Markov chains with small memory functions. The striking property of self-similarity (discovered in Phys. Rev. Lett. 90, 110601 (2003) for the additive Markov chain with a step-wise memory function) is proved to be an intrinsic property of any permutative Markov chain. The applicability of the correlation functions of additive Markov chains with small memory functions to calculating the thermodynamic characteristics of the classical Ising spin chain with long-range interaction is discussed.
15

Politis, Dimitris N. "Markov Chains in Many Dimensions." Advances in Applied Probability 26, no. 3 (September 1994): 756–74. http://dx.doi.org/10.2307/1427819.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
16

Politis, Dimitris N. "Markov Chains in Many Dimensions." Advances in Applied Probability 26, no. 03 (September 1994): 756–74. http://dx.doi.org/10.1017/s0001867800026537.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
17

Takemura, Akimichi, and Hisayuki Hara. "Markov chain Monte Carlo test of toric homogeneous Markov chains." Statistical Methodology 9, no. 3 (May 2012): 392–406. http://dx.doi.org/10.1016/j.stamet.2011.10.004.

18

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains." Advances in Applied Probability 26, no. 3 (September 1994): 728–55. http://dx.doi.org/10.2307/1427818.

Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probability models. An application to the modelling of populations with semi-Markovian replacements is also presented.
19

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains." Advances in Applied Probability 26, no. 03 (September 1994): 728–55. http://dx.doi.org/10.1017/s0001867800026525.

Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probability models. An application to the modelling of populations with semi-Markovian replacements is also presented.
20

Rising, William. "Geometric Markov chains." Journal of Applied Probability 32, no. 2 (June 1995): 349–74. http://dx.doi.org/10.2307/3215293.

Abstract:
A generalization of the familiar birth–death chain, called the geometric chain, is introduced and explored. By the introduction of two families of parameters in addition to the infinitesimal birth and death rates, the geometric chain allows transitions beyond the nearest neighbor, but is shown to retain the simple computational formulas of the birth–death chain for the stationary distribution and the expected first-passage times between states. It is also demonstrated that even when not reversible, a reversed geometric chain is again a geometric chain.
21

Evans, Steven, and Adam Jaffe. "Virtual Markov Chains." New Zealand Journal of Mathematics 52 (September 19, 2021): 511–59. http://dx.doi.org/10.53733/147.

Abstract:
We introduce the space of virtual Markov chains (VMCs) as a projective limit of the spaces of all finite state space Markov chains (MCs), in the same way that the space of virtual permutations is the projective limit of the spaces of all permutations of finite sets. We introduce the notions of a virtual initial distribution (VID) and a virtual transition matrix (VTM), and we show that the law of any VMC is uniquely characterized by a pair of a VID and a VTM which have to satisfy a certain compatibility condition. Lastly, we study various properties of compact convex sets associated to the theory of VMCs, including the fact that the Birkhoff-von Neumann theorem fails in the virtual setting.
22

Volchenkov, Dima, and Jean René Dawin. "Musical Markov Chains." International Journal of Modern Physics: Conference Series 16 (January 2012): 116–35. http://dx.doi.org/10.1142/s2010194512007829.

Abstract:
A system for using dice to compose music randomly is known as the musical dice game. Discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First-passage times to notes can be used to resolve tonality and to characterize a composer.
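Mean first-passage times of the kind used here can be computed directly from a transition matrix by solving a small linear system. A sketch with an invented 3-state matrix (in the cited study the states would be MIDI notes):

import numpy as np

# Solve m_i = 1 + sum over k != target of P[i, k] * m_k, i.e. (I - Q) m = 1,
# where Q is P restricted to the non-target states. Matrix invented.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])

def mean_first_passage(P, target):
    n = len(P)
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]          # transitions among non-target states
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)                  # entry for `target` itself stays 0
    out[keep] = m
    return out

print(mean_first_passage(P, target=2))  # expected steps to first hit state 2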
23

Pieczynski, W. "Pairwise Markov chains." IEEE Transactions on Pattern Analysis and Machine Intelligence 25, no. 5 (May 2003): 634–39. http://dx.doi.org/10.1109/tpami.2003.1195998.

24

Sisson, Scott A. "Transdimensional Markov Chains." Journal of the American Statistical Association 100, no. 471 (September 2005): 1077–89. http://dx.doi.org/10.1198/016214505000000664.

25

Solan, Eilon, and Nicolas Vieille. "Perturbed Markov chains." Journal of Applied Probability 40, no. 1 (March 2003): 107–22. http://dx.doi.org/10.1239/jap/1044476830.

Abstract:
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
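The object whose sensitivity is studied here, the stationary distribution, can be computed as the left eigenvector of the transition matrix for eigenvalue 1. A small sketch; the matrix and the perturbation are invented for illustration:

import numpy as np

def stationary(P):
    # Left eigenvector of P for eigenvalue 1, normalized to a distribution.
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return v / v.sum()

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

P_eps = P.copy()                 # perturb one row by a small amount
P_eps[0] = [0.45, 0.55, 0.00]

print(stationary(P))             # original chain
print(stationary(P_eps))         # perturbed chain: pi shifts only slightly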
26

Pollett, P. K. "Similar Markov chains." Journal of Applied Probability 38, A (2001): 53–65. http://dx.doi.org/10.1239/jap/1085496591.

Abstract:
Lenin et al. (2000) recently introduced the idea of similarity in the context of birth-death processes. This paper examines the extent to which their results can be extended to arbitrary Markov chains. It is proved that, under a variety of conditions, similar chains are strongly similar in a sense which is described, and it is shown that minimal chains are strongly similar if and only if the corresponding transition-rate matrices are strongly similar. A general framework is given for constructing families of strongly similar chains; it permits the construction of all such chains in the irreducible case.
27

Rising, William. "Geometric Markov chains." Journal of Applied Probability 32, no. 02 (June 1995): 349–74. http://dx.doi.org/10.1017/s0021900200102839.

Abstract:
A generalization of the familiar birth–death chain, called the geometric chain, is introduced and explored. By the introduction of two families of parameters in addition to the infinitesimal birth and death rates, the geometric chain allows transitions beyond the nearest neighbor, but is shown to retain the simple computational formulas of the birth–death chain for the stationary distribution and the expected first-passage times between states. It is also demonstrated that even when not reversible, a reversed geometric chain is again a geometric chain.
28

Pollett, P. K. "Similar Markov chains." Journal of Applied Probability 38, A (2001): 53–65. http://dx.doi.org/10.1017/s0021900200112677.

Abstract:
Lenin et al. (2000) recently introduced the idea of similarity in the context of birth-death processes. This paper examines the extent to which their results can be extended to arbitrary Markov chains. It is proved that, under a variety of conditions, similar chains are strongly similar in a sense which is described, and it is shown that minimal chains are strongly similar if and only if the corresponding transition-rate matrices are strongly similar. A general framework is given for constructing families of strongly similar chains; it permits the construction of all such chains in the irreducible case.
29

Gudder, Stanley. "Quantum Markov chains." Journal of Mathematical Physics 49, no. 7 (July 2008): 072105. http://dx.doi.org/10.1063/1.2953952.

30

Solan, Eilon, and Nicolas Vieille. "Perturbed Markov chains." Journal of Applied Probability 40, no. 01 (March 2003): 107–22. http://dx.doi.org/10.1017/s0021900200022294.

Abstract:
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
31

Hirscher, Timo, and Anders Martinsson. "Segregating Markov Chains." Journal of Theoretical Probability 31, no. 3 (March 29, 2017): 1512–38. http://dx.doi.org/10.1007/s10959-017-0743-7.

32

Fornasini, Ettore. "2D Markov chains." Linear Algebra and its Applications 140 (October 1990): 101–27. http://dx.doi.org/10.1016/0024-3795(90)90224-z.

33

Caillaud, Benoît, Benoît Delahaye, Kim G. Larsen, Axel Legay, Mikkel L. Pedersen, and Andrzej Wąsowski. "Constraint Markov Chains." Theoretical Computer Science 412, no. 34 (August 2011): 4373–404. http://dx.doi.org/10.1016/j.tcs.2011.05.010.

34

Landim, Claudio. "Metastable Markov chains." Probability Surveys 16 (2019): 143–227. http://dx.doi.org/10.1214/18-ps310.

35

Pincus, S. M. "Approximating Markov chains." Proceedings of the National Academy of Sciences 89, no. 10 (May 15, 1992): 4432–36. http://dx.doi.org/10.1073/pnas.89.10.4432.

36

Accardi, Luigi, and Francesco Fidaleo. "Entangled Markov chains." Annali di Matematica Pura ed Applicata (1923 -) 184, no. 3 (August 2005): 327–46. http://dx.doi.org/10.1007/s10231-004-0118-4.

37

Defant, Colin, Rupert Li, and Evita Nestoridi. "Rowmotion Markov chains." Advances in Applied Mathematics 155 (April 2024): 102666. http://dx.doi.org/10.1016/j.aam.2023.102666.

38

Xiang, Xuyan, Xiao Zhang, and Xiaoyun Mo. "Statistical Identification of Markov Chain on Trees." Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/2036248.

Abstract:
The theoretical study of continuous-time homogeneous Markov chains is usually based on the natural assumption of a known transition rate matrix (TRM). However, the TRM of a Markov chain in realistic systems might be unknown and might even need to be identified from partially observable data. Thus, the question of how to identify the TRM of the underlying Markov chain from partially observable information is of great significance in applications; this is what we call the statistical identification of a Markov chain. The Markov chain inversion approach has been derived for basic Markov chains by partial observation at a few states. In the current letter, a more extensive class of Markov chains on trees is investigated. First, a more operable type of derivative constraint is developed. Then, it is shown that all Markov chains on trees can be identified only by such derivative constraints on the univariate distributions of sojourn time and/or hitting time at a few states. A numerical example is included to demonstrate the correctness of the proposed algorithms.
39

Guédon, Yann. "Hidden hybrid Markov/semi-Markov chains." Computational Statistics & Data Analysis 49, no. 3 (June 2005): 663–88. http://dx.doi.org/10.1016/j.csda.2004.05.033.

40

Lee, Shiowjen, and J. Lynch. "Total Positivity of Markov Chains and the Failure Rate Character of Some First Passage Times." Advances in Applied Probability 29, no. 3 (September 1997): 713–32. http://dx.doi.org/10.2307/1428083.

Abstract:
It is shown that totally positive order 2 (TP2) properties of the infinitesimal generator of a continuous-time Markov chain with totally ordered state space carry over to the chain's transition distribution function. For chains with such properties, failure rate characteristics of the first passage times are established. For Markov chains with partially ordered state space, it is shown that the first passage times have an IFR distribution under a multivariate total positivity condition on the transition function.
41

Lee, Shiowjen, and J. Lynch. "Total Positivity of Markov Chains and the Failure Rate Character of Some First Passage Times." Advances in Applied Probability 29, no. 03 (September 1997): 713–32. http://dx.doi.org/10.1017/s0001867800028317.

Abstract:
It is shown that totally positive order 2 (TP2) properties of the infinitesimal generator of a continuous-time Markov chain with totally ordered state space carry over to the chain's transition distribution function. For chains with such properties, failure rate characteristics of the first passage times are established. For Markov chains with partially ordered state space, it is shown that the first passage times have an IFR distribution under a multivariate total positivity condition on the transition function.
42

Masuyama, Hiroyuki. "Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions." Advances in Applied Probability 47, no. 1 (March 2015): 83–105. http://dx.doi.org/10.1239/aap/1427814582.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
43

Masuyama, Hiroyuki. "Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions." Advances in Applied Probability 47, no. 01 (March 2015): 83–105. http://dx.doi.org/10.1017/s0001867800007710.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
44

Papastathopoulos, I., K. Strokorb, J. A. Tawn, and A. Butler. "Extreme events of Markov chains." Advances in Applied Probability 49, no. 1 (March 2017): 134–61. http://dx.doi.org/10.1017/apr.2016.82.

Abstract:
The extremal behaviour of a Markov chain is typically characterised by its tail chain. For asymptotically dependent Markov chains, existing formulations fail to capture the full evolution of the extreme event when the chain moves out of the extreme tail region, and, for asymptotically independent chains, recent results fail to cover well-known asymptotically independent processes, such as Markov processes with a Gaussian copula between consecutive values. We use more sophisticated limiting mechanisms that cover a broader class of asymptotically independent processes than current methods, including an extension of the canonical Heffernan‒Tawn normalisation scheme, and reveal features which existing methods reduce to a degenerate form associated with nonextreme states.
45

Choi, Michael C. H., and Pierre Patie. "Analysis of non-reversible Markov chains via similarity orbits." Combinatorics, Probability and Computing 29, no. 4 (February 18, 2020): 508–36. http://dx.doi.org/10.1017/s0963548320000024.

Abstract:
In this paper we develop an in-depth analysis of non-reversible Markov chains on denumerable state space from a similarity orbit perspective. In particular, we study the class of Markov chains whose transition kernel is in the similarity orbit of a normal transition kernel, such as that of birth–death chains or reversible Markov chains. We start by identifying a set of sufficient conditions for a Markov chain to belong to the similarity orbit of a birth–death chain. As by-products, we obtain a spectral representation in terms of non-self-adjoint resolutions of identity in the sense of Dunford [21] and offer a detailed analysis on the convergence rate, separation cutoff and L2-cutoff of this class of non-reversible Markov chains. We also look into the problem of estimating the integral functionals from discrete observations for this class. In the last part of this paper we investigate a particular similarity orbit of reversible Markov kernels, which we call the pure birth orbit, and analyse various possibly non-reversible variants of classical birth–death processes in this orbit.
46

Ledoux, James. "A geometric invariant in weak lumpability of finite Markov chains." Journal of Applied Probability 34, no. 4 (December 1997): 847–58. http://dx.doi.org/10.2307/3215001.

Abstract:
We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
47

Ledoux, James. "A geometric invariant in weak lumpability of finite Markov chains." Journal of Applied Probability 34, no. 04 (December 1997): 847–58. http://dx.doi.org/10.1017/s0021900200101561.

Abstract:
We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
48

Ledoux, James, and Laurent Truffet. "Markovian bounds on functions of finite Markov chains." Advances in Applied Probability 33, no. 2 (June 2001): 505–19. http://dx.doi.org/10.1017/s0001867800010910.

Abstract:
In this paper, we obtain Markovian bounds on a function of a homogeneous discrete-time Markov chain. For deriving such bounds, we use well-known results on stochastic majorization of Markov chains and the Rogers–Pitman lumpability criterion. The proposed method of comparison between functions of Markov chains is not equivalent to the generalized coupling method of Markov chains, although we obtain the same kind of majorization. We derive necessary and sufficient conditions for the existence of our Markovian bounds. We also discuss the choice of the geometric invariant related to the lumpability condition that we use.
49

Ross, Sheldon M. "A Markov Chain Choice Problem." Probability in the Engineering and Informational Sciences 27, no. 1 (December 10, 2012): 53–55. http://dx.doi.org/10.1017/s0269964812000290.

Abstract:
Consider two independent Markov chains having states 0 and 1 and identical transition probabilities. At each stage one of the chains is observed, and a reward equal to the observed state is earned. Assuming prior probabilities on the initial states of the chains, it is shown that the myopic policy that always chooses to observe the chain most likely to be in state 1 stochastically maximizes the sequence of rewards earned in each period.
50

Mohamed, Mohamed, Mohamed Bisher Zeina, and Yasin Karmouta. "Classification of States for Literal Neutrosophic and Plithogenic Markov Chains." Journal of Neutrosophic and Fuzzy Systems 08, no. 2 (2024): 49–61. http://dx.doi.org/10.54216/jnfs.080206.

Abstract:
In this paper we present several classifications of the states of neutrosophic and plithogenic Markov chains, including absorbing states, essential and inessential states, recurrent states, and communicating states. We prove that if a state i of a neutrosophic Markov chain with a neutrosophic transition matrix receives any of these classifications, then it receives the same classification, in the classical sense, for the two Markov chains defined by the corresponding transition matrices. Likewise, we prove that if a state i of a plithogenic Markov chain with a plithogenic transition matrix receives any of these classifications, then it receives the same classification, in the classical sense, for the three Markov chains defined by the corresponding transition matrices. Many theorems and solved examples are presented.