Journal articles on the topic 'Chaines de Markov'

Consult the top 50 journal articles for your research on the topic 'Chaines de Markov.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Dies, Jacques-Edouard. "Transience des chaines de Markov lineaires sur les permutations." Journal of Applied Probability 24, no. 4 (December 1987): 899–907. http://dx.doi.org/10.2307/3214214.

Abstract:
Computer scientists have introduced ‘paging algorithms’, which are a special class of Markov chains on permutations known, in probability theory, as ‘libraries’: books being placed on a shelf T (T is an infinite interval of the set Z of the integers) and a policy ρ : T → T such that ρ(t) < t being chosen, a book b placed at t ∊ T is selected with probability pb; it is removed and replaced at ρ(t) prior to the next removal. The different arrangements of books on the shelf are the states of the Markov chain. In this paper we prove that, if the shelf is not bounded on the left, any library (i.e. for any policy ρ and any probability p on the books) is transient.
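To make the ‘library’ dynamics concrete, here is a minimal simulation sketch (ours, not the paper's): a finite shelf, fixed selection probabilities, and a hypothetical transposition policy ρ(t) = max(t − 1, 0). The paper's transience result concerns a shelf unbounded on the left; the finite toy version below only illustrates how frequently requested books drift toward the front.

```python
import random

def simulate_library(weights, steps=10_000, seed=0):
    """Toy 'library' chain: the book at position t is selected with
    probability proportional to its weight, removed, and reinserted at
    rho(t) = max(t - 1, 0) (an illustrative transposition policy)."""
    rng = random.Random(seed)
    shelf = sorted(weights)                        # initial arrangement of book labels
    books, p = list(weights), list(weights.values())
    for _ in range(steps):
        b = rng.choices(books, weights=p)[0]       # draw book b with probability p_b
        t = shelf.index(b)
        shelf.insert(max(t - 1, 0), shelf.pop(t))  # remove and replace at rho(t)
    return shelf

# Heavily requested books drift toward the left end of the shelf.
print(simulate_library({"a": 0.7, "b": 0.2, "c": 0.1}))
```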
2

Dies, Jacques-Edouard. "Transience des chaines de Markov lineaires sur les permutations." Journal of Applied Probability 24, no. 4 (December 1987): 899–907. http://dx.doi.org/10.1017/s0021900200116778.

Abstract:
Computer scientists have introduced ‘paging algorithms’, which are a special class of Markov chains on permutations known, in probability theory, as ‘libraries’: books being placed on a shelf T (T is an infinite interval of the set Z of the integers) and a policy ρ : T → T such that ρ(t) < t being chosen, a book b placed at t ∊ T is selected with probability pb; it is removed and replaced at ρ(t) prior to the next removal. The different arrangements of books on the shelf are the states of the Markov chain. In this paper we prove that, if the shelf is not bounded on the left, any library (i.e. for any policy ρ and any probability p on the books) is transient.
3

Córdoba y Ordóñez, Juan, and Javier Antón Burgos. "Aplicación de cadenas de Markov al análisis dinámico de la centralidad en un sistema de transporte." Estudios Geográficos 51, no. 198 (March 30, 1990): 33–64. http://dx.doi.org/10.3989/egeogr.1990.i198.33.

Abstract:
This article presents the usefulness of Markov chains for analysing certain dynamic aspects of a transport system, such as the hierarchy of centres, centrality, and the cohesion of the network. The study is based on a theoretical scenario for the Spanish air transport network, and particular attention is paid to the methodological process.
4

Dimou, Michel, Gabriel Figueiredo De Oliveira, and Alexandra Schaffar. "Entre convergence et disparité. Les SIDS face aux défis de la transition énergétique." Revue d'économie politique 134, no. 3 (June 21, 2024): 417–41. http://dx.doi.org/10.3917/redp.343.0417.

Abstract:
The aim of this article is to analyse the energy performance of small island developing states (SIDS). In a context of widespread tensions on global energy markets, SIDS prove to be particularly vulnerable territories. Using World Bank data on 183 countries, and drawing on convergence tests and non-parametric methods such as Markov chains, this article compares the energy performance of SIDS with that of other countries. The results indicate that there is no absolute convergence between countries in terms of energy intensity, but rather club convergence. Although most SIDS fall into the lower or intermediate clubs with respect to their energy intensity, their performance in reducing this energy intensity between 2000 and 2019 is among the weakest.
5

Lee, Shiowjen, and J. Lynch. "Total Positivity of Markov Chains and the Failure Rate Character of Some First Passage Times." Advances in Applied Probability 29, no. 3 (September 1997): 713–32. http://dx.doi.org/10.2307/1428083.

Abstract:
It is shown that totally positive order 2 (TP2) properties of the infinitesimal generator of a continuous-time Markov chain with totally ordered state space carry over to the chain's transition distribution function. For chains with such properties, failure rate characteristics of the first passage times are established. For Markov chains with partially ordered state space, it is shown that the first passage times have an IFR distribution under a multivariate total positivity condition on the transition function.
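For orientation: a matrix is totally positive of order 2 (TP2) when every 2 × 2 minor formed from rows i < k and columns j < l is nonnegative. A brute-force check of that definition (a generic sketch, not code from the paper):

```python
import numpy as np

def is_tp2(A, tol=1e-12):
    """True if every 2x2 minor of A is nonnegative, i.e. A is TP2."""
    m, n = A.shape
    for i in range(m - 1):
        for k in range(i + 1, m):
            for j in range(n - 1):
                for l in range(j + 1, n):
                    if A[i, j] * A[k, l] - A[i, l] * A[k, j] < -tol:
                        return False
    return True

# 0.6 * 0.7 - 0.4 * 0.3 = 0.30 >= 0, so this stochastic matrix is TP2.
print(is_tp2(np.array([[0.6, 0.4], [0.3, 0.7]])))
```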
6

Lee, Shiowjen, and J. Lynch. "Total Positivity of Markov Chains and the Failure Rate Character of Some First Passage Times." Advances in Applied Probability 29, no. 3 (September 1997): 713–32. http://dx.doi.org/10.1017/s0001867800028317.

Abstract:
It is shown that totally positive order 2 (TP2) properties of the infinitesimal generator of a continuous-time Markov chain with totally ordered state space carry over to the chain's transition distribution function. For chains with such properties, failure rate characteristics of the first passage times are established. For Markov chains with partially ordered state space, it is shown that the first passage times have an IFR distribution under a multivariate total positivity condition on the transition function.
7

Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 2 (June 2006): 486–99. http://dx.doi.org/10.1239/jap/1152413736.

Abstract:
In this paper we identify a monotonicity in all countable-state-space reversible Markov chains and examine several consequences of this structure. In particular, we show that the return times to every state in a reversible chain have a decreasing hazard rate on the subsequence of even times. This monotonicity is used to develop geometric convergence rate bounds for time-reversible Markov chains. Results relating the radius of convergence of the probability generating function of first return times to the chain's rate of convergence are presented. An effort is made to keep the exposition rudimentary.
8

Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 2 (June 2006): 486–99. http://dx.doi.org/10.1017/s0021900200001777.

Abstract:
In this paper we identify a monotonicity in all countable-state-space reversible Markov chains and examine several consequences of this structure. In particular, we show that the return times to every state in a reversible chain have a decreasing hazard rate on the subsequence of even times. This monotonicity is used to develop geometric convergence rate bounds for time-reversible Markov chains. Results relating the radius of convergence of the probability generating function of first return times to the chain's rate of convergence are presented. An effort is made to keep the exposition rudimentary.
9

Grewal, Jasleen K., Martin Krzywinski, and Naomi Altman. "Markov models—Markov chains." Nature Methods 16, no. 8 (July 30, 2019): 663–64. http://dx.doi.org/10.1038/s41592-019-0476-x.

10

Lindley, D. V., and J. R. Norris. "Markov Chains." Mathematical Gazette 83, no. 496 (March 1999): 188. http://dx.doi.org/10.2307/3618756.

11

Lund, Robert B., and J. R. Norris. "Markov Chains." Journal of the American Statistical Association 94, no. 446 (June 1999): 654. http://dx.doi.org/10.2307/2670196.

12

Hermosilla, Armando, Richard Carmagnola, Carlos Sauer, Eduardo Redondo, and Luis Centurion. "Demand Forecasts for Chronic Cardiovascular Diseases Medication Based on Markov Chains." Latin American Journal of Applied Engineering 4, no. 1 (December 31, 2021): 7–12. http://dx.doi.org/10.69681/lajae.v4i1.20.

Abstract:
In this work, we propose and evaluate models to predict the demand for cardiovascular drugs using Markov chains. The models use transactional data on patient medication delivery to identify consumption levels, and these levels serve as the states of the Markov chain. Four model configurations were evaluated, differing in the arrival/departure behavior of patients and in whether an idle state is included. The models were trained on 12 months of real data and tested over a four-month horizon. The models were also applied sequentially to 18 months of Losartan consumption data, simulating a chained implementation in a real scenario. The MAPE of two-months-ahead forecasts ranged from 3.92% to 5.55% in three of the four evaluated models. Our results also showed that variations in consumption level can be modeled using Markov chains and that, in low-inventory situations, these tools can be used to prioritize patients with higher consumption levels.
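A minimal sketch of the general recipe this abstract describes, under simplifying assumptions of ours (consumption levels 0..n−1 as states, transition probabilities estimated by counting month-to-month moves, and none of the paper's arrival/departure or idle-state mechanics):

```python
import numpy as np

def fit_transition_matrix(level_sequences, n_states):
    """Row-stochastic transition matrix estimated by counting observed
    month-to-month moves between consumption levels 0..n_states-1."""
    counts = np.zeros((n_states, n_states))
    for seq in level_sequences:
        for s, t in zip(seq, seq[1:]):
            counts[s, t] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Unobserved levels fall back to a uniform row.
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

def forecast(P, level_counts, months=2):
    """Expected number of patients per consumption level after `months` steps."""
    return np.asarray(level_counts) @ np.linalg.matrix_power(P, months)

P = fit_transition_matrix([[0, 1, 1, 2, 1], [2, 2, 1, 0, 0]], n_states=3)
print(forecast(P, [50, 30, 20], months=2))
```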
13

Guédon, Yann. "Hidden hybrid Markov/semi-Markov chains." Computational Statistics & Data Analysis 49, no. 3 (June 2005): 663–88. http://dx.doi.org/10.1016/j.csda.2004.05.033.

14

Rising, William. "Geometric Markov chains." Journal of Applied Probability 32, no. 2 (June 1995): 349–74. http://dx.doi.org/10.2307/3215293.

Abstract:
A generalization of the familiar birth–death chain, called the geometric chain, is introduced and explored. By the introduction of two families of parameters in addition to the infinitesimal birth and death rates, the geometric chain allows transitions beyond the nearest neighbor, but is shown to retain the simple computational formulas of the birth–death chain for the stationary distribution and the expected first-passage times between states. It is also demonstrated that even when not reversible, a reversed geometric chain is again a geometric chain.
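For context, the 'simple computational formulas of the birth–death chain' invoked here are the detailed-balance products π_k ∝ (λ_0 ⋯ λ_{k−1}) / (μ_1 ⋯ μ_k). A sketch of that standard formula only, not of the paper's geometric-chain generalization:

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a birth-death chain on states 0..n-1 via
    detailed balance: pi[k] = pi[k-1] * birth[k-1] / death[k-1], where
    birth[k-1] = lambda_{k-1} and death[k-1] = mu_k."""
    n = len(birth) + 1
    pi = np.ones(n)
    for k in range(1, n):
        pi[k] = pi[k - 1] * birth[k - 1] / death[k - 1]
    return pi / pi.sum()

# Birth rates lambda_0..lambda_2 and death rates mu_1..mu_3:
# pi is proportional to (1, 1/2, 1/4, 1/8).
print(birth_death_stationary(birth=[1.0, 1.0, 1.0], death=[2.0, 2.0, 2.0]))
```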
15

Evans, Steven, and Adam Jaffe. "Virtual Markov Chains." New Zealand Journal of Mathematics 52 (September 19, 2021): 511–59. http://dx.doi.org/10.53733/147.

Abstract:
We introduce the space of virtual Markov chains (VMCs) as a projective limit of the spaces of all finite state space Markov chains (MCs), in the same way that the space of virtual permutations is the projective limit of the spaces of all permutations of finite sets. We introduce the notions of a virtual initial distribution (VID) and a virtual transition matrix (VTM), and we show that the law of any VMC is uniquely characterized by a pair of a VID and a VTM which satisfy a certain compatibility condition. Lastly, we study various properties of compact convex sets associated to the theory of VMCs, including the fact that the Birkhoff-von Neumann theorem fails in the virtual setting.
16

Volchenkov, Dima, and Jean René Dawin. "Musical Markov Chains." International Journal of Modern Physics: Conference Series 16 (January 2012): 116–35. http://dx.doi.org/10.1142/s2010194512007829.

Abstract:
A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied by means of Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.
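A toy version of the encoding step described above: a first-order note transition matrix and its entropy rate. The paper's MIDI corpus and block-complexity analysis are not reproduced, and wrapping the note sequence cyclically is an illustrative simplification of ours so that every note has an outgoing transition.

```python
import numpy as np

def note_transition_matrix(notes, alphabet):
    """First-order transition matrix estimated from a note sequence,
    wrapped cyclically so every observed note has an outgoing move."""
    idx = {n: i for i, n in enumerate(alphabet)}
    P = np.zeros((len(alphabet), len(alphabet)))
    for a, b in zip(notes, notes[1:] + notes[:1]):
        P[idx[a], idx[b]] += 1
    return P / P.sum(axis=1, keepdims=True)

def entropy_rate(P):
    """Entropy rate sum_i pi_i H(row i), with pi the stationary distribution
    (left eigenvector of P for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    logs = np.zeros_like(P)
    logs[P > 0] = np.log2(P[P > 0])
    return float(-np.sum(pi[:, None] * P * logs))

P = note_transition_matrix("CDECDEGGA", "ACDEG")
print(entropy_rate(P))   # bits per note for this tiny example
```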
17

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1239/jap/1421763332.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
18

Pieczynski, W. "Pairwise markov chains." IEEE Transactions on Pattern Analysis and Machine Intelligence 25, no. 5 (May 2003): 634–39. http://dx.doi.org/10.1109/tpami.2003.1195998.

19

Sisson, Scott A. "Transdimensional Markov Chains." Journal of the American Statistical Association 100, no. 471 (September 2005): 1077–89. http://dx.doi.org/10.1198/016214505000000664.

20

Solan, Eilon, and Nicolas Vieille. "Perturbed Markov chains." Journal of Applied Probability 40, no. 1 (March 2003): 107–22. http://dx.doi.org/10.1239/jap/1044476830.

Abstract:
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
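A quick numerical illustration of the sensitivity question studied here: the stationary distribution before and after a small perturbation of the transition matrix. The paper's closeness relation and graph-theoretic bounds are not reproduced; this is only a direct computation.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible transition matrix P,
    solved from pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
E = np.array([[-0.05, 0.05, 0.0],   # small perturbation; rows sum to zero
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(stationary(P))                # unperturbed pi
print(stationary(P + E))            # pi after the perturbation
```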
21

Pollett, P. K. "Similar Markov chains." Journal of Applied Probability 38, A (2001): 53–65. http://dx.doi.org/10.1239/jap/1085496591.

Abstract:
Lenin et al. (2000) recently introduced the idea of similarity in the context of birth-death processes. This paper examines the extent to which their results can be extended to arbitrary Markov chains. It is proved that, under a variety of conditions, similar chains are strongly similar in a sense which is described, and it is shown that minimal chains are strongly similar if and only if the corresponding transition-rate matrices are strongly similar. A general framework is given for constructing families of strongly similar chains; it permits the construction of all such chains in the irreducible case.
22

Rising, William. "Geometric Markov chains." Journal of Applied Probability 32, no. 2 (June 1995): 349–74. http://dx.doi.org/10.1017/s0021900200102839.

Abstract:
A generalization of the familiar birth–death chain, called the geometric chain, is introduced and explored. By the introduction of two families of parameters in addition to the infinitesimal birth and death rates, the geometric chain allows transitions beyond the nearest neighbor, but is shown to retain the simple computational formulas of the birth–death chain for the stationary distribution and the expected first-passage times between states. It is also demonstrated that even when not reversible, a reversed geometric chain is again a geometric chain.
23

Pollett, P. K. "Similar Markov chains." Journal of Applied Probability 38, A (2001): 53–65. http://dx.doi.org/10.1017/s0021900200112677.

Abstract:
Lenin et al. (2000) recently introduced the idea of similarity in the context of birth-death processes. This paper examines the extent to which their results can be extended to arbitrary Markov chains. It is proved that, under a variety of conditions, similar chains are strongly similar in a sense which is described, and it is shown that minimal chains are strongly similar if and only if the corresponding transition-rate matrices are strongly similar. A general framework is given for constructing families of strongly similar chains; it permits the construction of all such chains in the irreducible case.
24

Gudder, Stanley. "Quantum Markov chains." Journal of Mathematical Physics 49, no. 7 (July 2008): 072105. http://dx.doi.org/10.1063/1.2953952.

25

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1017/s0001867800012027.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
26

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1017/s002190020001202x.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
27

Solan, Eilon, and Nicolas Vieille. "Perturbed Markov chains." Journal of Applied Probability 40, no. 1 (March 2003): 107–22. http://dx.doi.org/10.1017/s0021900200022294.

Abstract:
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
28

Hirscher, Timo, and Anders Martinsson. "Segregating Markov Chains." Journal of Theoretical Probability 31, no. 3 (March 29, 2017): 1512–38. http://dx.doi.org/10.1007/s10959-017-0743-7.

29

Fornasini, Ettore. "2D Markov chains." Linear Algebra and its Applications 140 (October 1990): 101–27. http://dx.doi.org/10.1016/0024-3795(90)90224-z.

30

Caillaud, Benoît, Benoît Delahaye, Kim G. Larsen, Axel Legay, Mikkel L. Pedersen, and Andrzej Wąsowski. "Constraint Markov Chains." Theoretical Computer Science 412, no. 34 (August 2011): 4373–404. http://dx.doi.org/10.1016/j.tcs.2011.05.010.

31

Landim, Claudio. "Metastable Markov chains." Probability Surveys 16 (2019): 143–227. http://dx.doi.org/10.1214/18-ps310.

32

Pincus, S. M. "Approximating Markov chains." Proceedings of the National Academy of Sciences 89, no. 10 (May 15, 1992): 4432–36. http://dx.doi.org/10.1073/pnas.89.10.4432.

33

Accardi, Luigi, and Francesco Fidaleo. "Entangled Markov chains." Annali di Matematica Pura ed Applicata (1923 -) 184, no. 3 (August 2005): 327–46. http://dx.doi.org/10.1007/s10231-004-0118-4.

34

Defant, Colin, Rupert Li, and Evita Nestoridi. "Rowmotion Markov chains." Advances in Applied Mathematics 155 (April 2024): 102666. http://dx.doi.org/10.1016/j.aam.2023.102666.

35

Andrei, Gurău Marian. "Shaping the Evolution of the Company Products Market by Applying Markov Chains Method." International Journal of Advances in Management and Economics 1, no. 5 (September 2, 2012): 52–56. http://dx.doi.org/10.31270/ijame/01/05/2012/09.

36

Hartfiel, D. J., and E. Seneta. "On the theory of Markov set-chains." Advances in Applied Probability 26, no. 4 (December 1994): 947–64. http://dx.doi.org/10.2307/1427899.

Abstract:
In the theory of homogeneous Markov chains, states are classified according to their connectivity to other states and this classification leads to a classification of the Markov chains themselves. In this paper we classify Markov set-chains analogously, particularly into ergodic, regular, and absorbing Markov set-chains. A weak law of large numbers is developed for regular Markov set-chains. Examples are used to illustrate analysis of behavior of Markov set-chains.
37

Hartfiel, D. J., and E. Seneta. "On the theory of Markov set-chains." Advances in Applied Probability 26, no. 4 (December 1994): 947–64. http://dx.doi.org/10.1017/s0001867800026707.

Abstract:
In the theory of homogeneous Markov chains, states are classified according to their connectivity to other states and this classification leads to a classification of the Markov chains themselves. In this paper we classify Markov set-chains analogously, particularly into ergodic, regular, and absorbing Markov set-chains. A weak law of large numbers is developed for regular Markov set-chains. Examples are used to illustrate analysis of behavior of Markov set-chains.
38

Accardi, Luigi, Hiromichi Ohno, and Farrukh Mukhamedov. "Quantum Markov Fields on Graphs." Infinite Dimensional Analysis, Quantum Probability and Related Topics 13, no. 2 (June 2010): 165–89. http://dx.doi.org/10.1142/s0219025710004000.

Abstract:
We introduce generalized quantum Markov states and generalized d-Markov chains which extend the notion of quantum Markov chains on spin systems to C*-algebras defined by general graphs. As examples of generalized d-Markov chains, we construct entangled Markov fields on tree graphs. Concrete examples of generalized d-Markov chains on Cayley trees are also investigated.
39

Valenzuela, Mississippi. "Markov chains and applications." Selecciones Matemáticas 9, no. 1 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.

Abstract:
This work has three main purposes: first, to study Markov chains; second, to show that Markov chains have a variety of applications; and finally, to model a process that exhibits this behavior. Throughout this work we describe what a Markov chain is, what these processes are used for, and how these chains are classified, and we analyze the primary elements that make up a Markov chain, among other topics.
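The 'primary elements' this abstract alludes to are the state space, the initial distribution, and the transition matrix; a generic simulation sketch built from just those three (nothing here is taken from the paper):

```python
import random

def simulate_chain(states, pi0, P, steps, seed=0):
    """Simulate a finite Markov chain from its primary elements: the state
    space `states`, initial distribution `pi0`, and transition matrix `P`."""
    rng = random.Random(seed)
    i = rng.choices(range(len(states)), weights=pi0)[0]
    path = [states[i]]
    for _ in range(steps):
        i = rng.choices(range(len(states)), weights=P[i])[0]
        path.append(states[i])
    return path

# A two-state weather chain.
print(simulate_chain(["sunny", "rainy"], [0.5, 0.5],
                     [[0.8, 0.2], [0.4, 0.6]], steps=10))
```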
40

Zhong, Pingping, Weiguo Yang, and Peipei Liang. "The Asymptotic Equipartition Property for Asymptotic Circular Markov Chains." Probability in the Engineering and Informational Sciences 24, no. 2 (March 18, 2010): 279–88. http://dx.doi.org/10.1017/s0269964809990271.

Abstract:
In this article, we study the asymptotic equipartition property (AEP) for asymptotic circular Markov chains. First, the definition of an asymptotic circular Markov chain is introduced. Then by applying the limit property for the bivariate functions of nonhomogeneous Markov chains, the strong limit theorem on the frequencies of occurrence of states for asymptotic circular Markov chains is established. Next, the strong law of large numbers on the frequencies of occurrence of states for asymptotic circular Markov chains is obtained. Finally, we prove the AEP for asymptotic circular Markov chains.
41

Katehakis, Michael N., and Laurens C. Smit. "A Successive Lumping Procedure for a Class of Markov Chains." Probability in the Engineering and Informational Sciences 26, no. 4 (July 30, 2012): 483–508. http://dx.doi.org/10.1017/s0269964812000150.

Abstract:
A class of Markov chains we call successively lumpable is specified for which it is shown that the stationary probabilities can be obtained by successively computing the stationary probabilities of a propitiously constructed sequence of Markov chains. Each of the latter chains has a (typically much) smaller state space and this yields significant computational improvements. We discuss how the results for discrete-time Markov chains extend to semi-Markov processes and continuous-time Markov processes. Finally, we will study applications of successively lumpable Markov chains to classical reliability and queueing models.
42

Verbeken, Brecht, and Marie-Anne Guerry. "Attainability for Markov and Semi-Markov Chains." Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.

Abstract:
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component ni represents the number of entities in the state Si, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, are described as the convex hull of their respective vertices, and properties of these regions are investigated.
43

Sproston, Jeremy. "Qualitative reachability for open interval Markov chains." PeerJ Computer Science 9 (August 28, 2023): e1489. http://dx.doi.org/10.7717/peerj-cs.1489.

Abstract:
Interval Markov chains extend classical Markov chains with the possibility to describe transition probabilities using intervals, rather than exact values. While the standard formulation of interval Markov chains features closed intervals, previous work has considered open interval Markov chains, in which the intervals can also be open or half-open. In this article we focus on qualitative reachability problems for open interval Markov chains, which consider whether the optimal (maximum or minimum) probability with which a certain set of states can be reached is equal to 0 or 1. We present polynomial-time algorithms for these problems for both of the standard semantics of interval Markov chains. Our methods do not rely on the closure of open intervals, in contrast to previous approaches for open interval Markov chains, and can address situations in which probability 0 or 1 can be attained not exactly but arbitrarily closely.
44

Pérez, Juan F., and Benny Van Houdt. "The M/G/1-Type Markov Chain with Restricted Transitions and Its Application to Queues with Batch Arrivals." Probability in the Engineering and Informational Sciences 25, no. 4 (July 21, 2011): 487–517. http://dx.doi.org/10.1017/s0269964811000155.

Abstract:
We consider M/G/1-type Markov chains where a transition that decreases the value of the level triggers the phase to a small subset of the phase space. We show how this structure—referred to as restricted downward transitions—can be exploited to speed up the computation of the stationary probability vector of the chain. To this end we define a new M/G/1-type Markov chain with a smaller block size, the G matrix of which is used to find the original chain's G matrix. This approach is then used to analyze the BMAP/PH/1 queue and the BMAP[2]/PH[2]/1 preemptive priority queue, yielding significant reductions in computation time.
45

Politis, Dimitris N. "Markov Chains in Many Dimensions." Advances in Applied Probability 26, no. 3 (September 1994): 756–74. http://dx.doi.org/10.2307/1427819.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
46

Politis, Dimitris N. "Markov Chains in Many Dimensions." Advances in Applied Probability 26, no. 3 (September 1994): 756–74. http://dx.doi.org/10.1017/s0001867800026537.

Abstract:
A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.
47

Rose, Nicholas J. "On Regular Markov Chains." American Mathematical Monthly 92, no. 2 (February 1985): 146. http://dx.doi.org/10.2307/2322653.

48

Burnley, Christine. "Perturbation of Markov Chains." Mathematics Magazine 60, no. 1 (February 1, 1987): 21. http://dx.doi.org/10.2307/2690133.

49

Akhmet, Marat. "Unpredictability in Markov chains." Carpathian Journal of Mathematics 38, no. 1 (November 15, 2021): 13–19. http://dx.doi.org/10.37193/cjm.2022.01.02.

Abstract:
We have formalized realizations of Markov chains as conveniently constructed sequences and explained why the random dynamics admits unpredictability, a concept introduced previously in our papers. The method of domain-structured dynamics (dynamics on labels) has been applied. An illustrative example with a proper numerical simulation is provided.
50

Qin, Liang, Philipp Höllmer, and Werner Krauth. "Direction-sweep Markov chains." Journal of Physics A: Mathematical and Theoretical 55, no. 10 (February 16, 2022): 105003. http://dx.doi.org/10.1088/1751-8121/ac508a.

Abstract:
We discuss a non-reversible, lifted Markov-chain Monte Carlo (MCMC) algorithm for particle systems in which the direction of proposed displacements is changed deterministically. This algorithm sweeps through directions analogously to the popular MCMC sweep methods for particle or spin indices. Direction-sweep MCMC can be applied to a wide range of reversible or non-reversible Markov chains, such as the Metropolis algorithm or the event-chain Monte Carlo algorithm. For a single two-dimensional tethered hard-disk dipole, we consider direction-sweep MCMC in the limit where restricted equilibrium is reached among the accessible configurations for a fixed direction before incrementing it. We show rigorously that direction-sweep MCMC leaves the stationary probability distribution unchanged and that it profoundly modifies the Markov-chain trajectory. Long excursions, with persistent rotation in one direction, alternate with long sequences of rapid zigzags resulting in persistent rotation in the opposite direction in the limit of small direction increments. The mapping to a Langevin equation then yields the exact scaling of excursions while the zigzags are described through a non-linear differential equation that is solved exactly. We show that the direction-sweep algorithm can have shorter mixing times than the algorithms with random updates of directions. We point out possible applications of direction-sweep MCMC in polymer physics and in molecular simulation.
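A heavily simplified sketch of the idea: Metropolis moves for a single particle in a potential, with the proposal direction incremented deterministically at each step rather than drawn at random. The paper's lifted event-chain machinery, tethered hard-disk dipole, and restricted-equilibrium limit are not modeled here.

```python
import math
import random

def direction_sweep_metropolis(energy, x0, n_dirs=16, steps=10_000,
                               step=0.1, beta=1.0, seed=0):
    """Metropolis sampler whose proposal direction sweeps deterministically
    through n_dirs equally spaced angles; the move sign stays random, so
    each fixed-direction proposal remains symmetric."""
    rng = random.Random(seed)
    x, y = x0
    for s in range(steps):
        theta = 2 * math.pi * (s % n_dirs) / n_dirs           # swept direction
        sign = rng.choice((-1.0, 1.0))
        nx = x + sign * step * math.cos(theta)
        ny = y + sign * step * math.sin(theta)
        d_e = energy(nx, ny) - energy(x, y)
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):  # Metropolis rule
            x, y = nx, ny
    return x, y

# Harmonic trap: long runs sample an isotropic 2D Gaussian.
print(direction_sweep_metropolis(lambda x, y: 0.5 * (x * x + y * y), (0.0, 0.0)))
```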