To view the other types of publications on this topic, follow the link: Invariant distribution of Markov processes.

Journal articles on the topic "Invariant distribution of Markov processes"

Consult the top 50 journal articles for your research on the topic "Invariant distribution of Markov processes".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes." Journal of Applied Probability 26, no. 3 (September 1989): 524–31. http://dx.doi.org/10.2307/3214410.

Annotation:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.
2

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes." Journal of Applied Probability 26, no. 03 (September 1989): 524–31. http://dx.doi.org/10.1017/s0021900200038122.

Annotation:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.
3

McDonald, D. "An invariance principle for semi-Markov processes." Advances in Applied Probability 17, no. 1 (March 1985): 100–126. http://dx.doi.org/10.2307/1427055.

Annotation:
Let (I(t))t≥0 be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, a limit theorem holds in which Δ is an invariant probability measure for P and μb is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition, but, as long as the expected sojourn time in b is μb on average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
4

McDonald, D. "An invariance principle for semi-Markov processes." Advances in Applied Probability 17, no. 01 (March 1985): 100–126. http://dx.doi.org/10.1017/s0001867800014683.

Annotation:
Let (I(t))t≥0 be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, a limit theorem holds in which Δ is an invariant probability measure for P and μb is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition, but, as long as the expected sojourn time in b is μb on average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
5

Barnsley, Michael F., and John H. Elton. "A new class of Markov processes for image encoding." Advances in Applied Probability 20, no. 1 (March 1988): 14–32. http://dx.doi.org/10.2307/1427268.

Annotation:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as , are established. Estimates for moments of these measures are obtained. Special conditions are given for existence of the invariant measure in the interesting case of affine maps on . For non-singular affine maps on , the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
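A brief, hedged illustration of the random-iteration idea behind such invariant measures (the classical "chaos game"): the two contractive affine maps, their probabilities, and the moment check below are toy choices of mine, whereas the paper's contribution is precisely to handle maps that need not be contractions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random iteration of an IFS with probabilities: the empirical distribution of
# the orbit approximates the invariant measure of the randomly iterated maps.
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]   # two contractive affine maps
probs = [0.3, 0.7]

x, samples = 0.0, []
for n in range(100_000):
    x = maps[rng.choice(2, p=probs)](x)
    if n > 1_000:                        # discard burn-in
        samples.append(x)

s = np.array(samples)
print(s.mean(), s.var())                 # moment estimates for the invariant measure
```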
6

Barnsley, Michael F., and John H. Elton. "A new class of Markov processes for image encoding." Advances in Applied Probability 20, no. 01 (March 1988): 14–32. http://dx.doi.org/10.1017/s0001867800017924.

Annotation:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as , are established. Estimates for moments of these measures are obtained. Special conditions are given for existence of the invariant measure in the interesting case of affine maps on . For non-singular affine maps on , the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
7

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case." Journal of Applied Probability 30, no. 1 (March 1993): 28–39. http://dx.doi.org/10.2307/3214619.

Annotation:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
8

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case." Journal of Applied Probability 30, no. 01 (March 1993): 28–39. http://dx.doi.org/10.1017/s0021900200043977.

Annotation:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
9

Avrachenkov, Konstantin, Alexey Piunovskiy, and Yi Zhang. "Markov Processes with Restart." Journal of Applied Probability 50, no. 4 (December 2013): 960–68. http://dx.doi.org/10.1239/jap/1389370093.

Annotation:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
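As a hedged finite-state check of the closed-form invariant measure described above (the generator, restart distribution and rate below are my own toy example, and pi = r·nu·(rI − Q)^(−1) is the elementary finite-state analogue, not a formula quoted from the paper):

```python
import numpy as np

# A CTMC with generator Q restarted at Poisson(r) times from distribution nu has
# modified generator Q_r = Q + r(1·nu^T − I); its invariant distribution admits
# the closed form pi = r · nu (rI − Q)^{-1}, which we verify numerically.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.2, 0.3, -0.5]])        # example generator (rows sum to 0)
nu = np.array([1.0, 0.0, 0.0])          # restart distribution
r = 0.7                                 # restart rate

n = Q.shape[0]
pi_closed = r * nu @ np.linalg.inv(r * np.eye(n) - Q)
Q_r = Q + r * (np.ones((n, 1)) @ nu[None, :] - np.eye(n))

print(pi_closed, pi_closed.sum())       # a probability vector
print(np.allclose(pi_closed @ Q_r, 0))  # True: invariant for the restarted process
```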
10

Avrachenkov, Konstantin, Alexey Piunovskiy, and Yi Zhang. "Markov Processes with Restart." Journal of Applied Probability 50, no. 04 (December 2013): 960–68. http://dx.doi.org/10.1017/s0021900200013735.

Annotation:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
11

Pagès, Gilles, and Clément Rey. "Recursive computation of the invariant distributions of Feller processes: Revisited examples and new applications." Monte Carlo Methods and Applications 25, no. 1 (March 1, 2019): 1–36. http://dx.doi.org/10.1515/mcma-2018-2027.

Annotation:
In this paper, we show that the abstract framework developed in [G. Pagès and C. Rey, Recursive computation of the invariant distribution of Markov and Feller processes, preprint 2017, https://arxiv.org/abs/1703.04557] and inspired by [D. Lamberton and G. Pagès, Recursive computation of the invariant distribution of a diffusion, Bernoulli 8 (2002), no. 3, 367–405] can be used to build invariant distributions for Brownian diffusion processes using the Milstein scheme and for diffusion processes with censored jump using the Euler scheme. Both studies rely on a weakly mean-reverting setting. For the Milstein scheme we prove the convergence for test functions with polynomial (Wasserstein convergence) and exponential growth. For the Euler scheme of diffusion processes with censored jump we prove the convergence for test functions with polynomial growth.
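In the spirit of the decreasing-step recursive procedure of Lamberton and Pagès cited in this abstract, a minimal sketch of my own (step and weight choices assumed, with a one-dimensional Ornstein-Uhlenbeck test case rather than the schemes studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler scheme with decreasing steps gamma_n and weighted occupation measure
# nu_n = (sum_k gamma_k delta_{X_{k-1}}) / sum_k gamma_k, applied to
# dX_t = -X_t dt + sigma dW_t, whose invariant law is N(0, sigma^2/2).
sigma, n_steps = 1.0, 200_000
x, acc_w, acc_f = 0.0, 0.0, 0.0
for n in range(1, n_steps + 1):
    gamma = 1.0 / n**0.6                 # decreasing steps with divergent sum
    acc_w += gamma
    acc_f += gamma * x**2                # occupation average of f(x) = x^2
    x += -x * gamma + sigma * np.sqrt(gamma) * rng.standard_normal()

print(acc_f / acc_w, sigma**2 / 2)       # estimate vs. invariant second moment 0.5
```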
12

Fredes, Luis, and Jean-François Marckert. "Invariant measures of interacting particle systems: Algebraic aspects." ESAIM: Probability and Statistics 24 (2020): 526–80. http://dx.doi.org/10.1051/ps/2020008.

Annotation:
Consider a continuous time particle system ηt = (ηt(k), k ∈ 𝕃), indexed by a lattice 𝕃 which will be either ℤ, ℤ∕nℤ, a segment {1, ⋯ , n}, or ℤd, and taking its values in the set Eκ𝕃 where Eκ = {0, ⋯ , κ − 1} for some fixed κ ∈{∞, 2, 3, ⋯ }. Assume that the Markovian evolution of the particle system (PS) is driven by some translation invariant local dynamics with bounded range, encoded by a jump rate matrix ⊤. These are standard settings, satisfied by the TASEP, the voter models, the contact processes. The aim of this paper is to provide some sufficient and/or necessary conditions on the matrix ⊤ so that this Markov process admits some simple invariant distribution, as a product measure (if 𝕃 is any of the spaces mentioned above), the law of a Markov process indexed by ℤ or [1, n] ∩ ℤ (if 𝕃 = ℤ or {1, …, n}), or a Gibbs measure if 𝕃 = ℤ/nℤ. Multiple applications follow: efficient ways to find invariant Markov laws for a given jump rate matrix or to prove that none exists. The voter models and the contact processes are shown not to possess any Markov laws as invariant distribution (for any memory m). (As usual, a random process X indexed by ℤ or ℕ is said to be a Markov chain with memory m ∈ {0, 1, 2, ⋯ } if ℙ(Xk ∈ A | Xk−i, i ≥ 1) = ℙ(Xk ∈ A | Xk−i, 1 ≤ i ≤ m), for any k.) We also prove that some models close to these models do. We exhibit PS admitting hidden Markov chains as invariant distribution and design many PS on ℤ2, with jump rates indexed by 2 × 2 squares, admitting product invariant measures.
13

Pollett, P. K., and P. G. Taylor. "On the Problem of Establishing the Existence of Stationary Distributions for Continuous-Time Markov Chains." Probability in the Engineering and Informational Sciences 7, no. 4 (October 1993): 529–43. http://dx.doi.org/10.1017/s0269964800003119.

Annotation:
We consider the problem of establishing the existence of stationary distributions for continuous-time Markov chains directly from the transition rates Q. Given an invariant probability distribution m for Q, we show that a necessary and sufficient condition for m to be a stationary distribution for the minimal process is that Q be regular. We provide sufficient conditions for the regularity of Q that are simple to verify in practice, thus allowing one to easily identify stationary distributions for a variety of models. To illustrate our results, we shall consider three classes of multidimensional Markov chains, namely, networks of queues with batch movements, semireversible queues, and partially balanced Markov processes.
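For a finite state space the computation alluded to here is elementary; the following sketch (with an example rate matrix of my own) solves m Q = 0 with m summing to 1, which is the starting point before the regularity questions treated in the paper arise.

```python
import numpy as np

# Invariant probability distribution of a finite transition-rate matrix Q:
# solve m Q = 0 together with sum(m) = 1 as an augmented linear system.
Q = np.array([[-2.0, 1.0, 1.0],
              [2.0, -3.0, 1.0],
              [1.0, 2.0, -3.0]])

A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.zeros(Q.shape[0] + 1)
b[-1] = 1.0
m, *_ = np.linalg.lstsq(A, b, rcond=None)
print(m, m @ Q)                          # m @ Q is numerically zero
```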
14

Chen, Anyue, Kai Wang Ng, and Hanjun Zhang. "Uniqueness and Decay Properties of Markov Branching Processes with Disasters." Journal of Applied Probability 51, no. 3 (September 2014): 613–24. http://dx.doi.org/10.1239/jap/1409932662.

Annotation:
In this paper we discuss the decay properties of Markov branching processes with disasters, including the decay parameter, invariant measures, and quasistationary distributions. After showing that the corresponding q-matrix Q is always regular and, thus, that the Feller minimal Q-process is honest, we obtain the exact value of the decay parameter λC. We show that the decay parameter can be easily expressed explicitly. We further show that the Markov branching process with disaster is always λC-positive. The invariant vectors, the invariant measures, and the quasidistributions are given explicitly.
15

Chen, Anyue, Kai Wang Ng, and Hanjun Zhang. "Uniqueness and Decay Properties of Markov Branching Processes with Disasters." Journal of Applied Probability 51, no. 03 (September 2014): 613–24. http://dx.doi.org/10.1017/s0021900200011554.

Annotation:
In this paper we discuss the decay properties of Markov branching processes with disasters, including the decay parameter, invariant measures, and quasistationary distributions. After showing that the corresponding q-matrix Q is always regular and, thus, that the Feller minimal Q-process is honest, we obtain the exact value of the decay parameter λ C . We show that the decay parameter can be easily expressed explicitly. We further show that the Markov branching process with disaster is always λ C -positive. The invariant vectors, the invariant measures, and the quasidistributions are given explicitly.
16

Pollett, P. K. "Reversibility, invariance and μ-invariance." Advances in Applied Probability 20, no. 3 (September 1988): 600–621. http://dx.doi.org/10.2307/1427037.

Annotation:
In this paper we consider a number of questions relating to the problem of determining quasi-stationary distributions for transient Markov processes. First we find conditions under which a measure or vector that is µ-invariant for a matrix of transition rates is also μ-invariant for the family of transition matrices of the minimal process it generates. These provide a means for determining whether or not the so-called stationary conditional quasi-stationary distribution exists in the λ-transient case. The process is not assumed to be regular, nor is it assumed to be uniform or irreducible. In deriving the invariance conditions we reveal a relationship between μ-invariance and the invariance of measures for related processes called the μ-reverse and the μ-dual processes. They play a role analogous to the time-reverse process which arises in the discussion of stationary distributions. Secondly we bring the related notions of detail-balance and reversibility into the realm of quasi-stationary processes. For example, if a process can be identified as being μ-reversible, the problem of determining quasi-stationary distributions is made much simpler. Finally, we consider some practical problems that emerge when calculating quasi-stationary distributions directly from the transition rates of the process. Our results are illustrated with reference to a variety of processes including examples of birth and death processes and the birth, death and catastrophe process.
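A hedged finite-state illustration of the μ-invariance idea (my own two-state example, far simpler than the non-regular, infinite settings treated in the paper): for an absorbing chain, a quasi-stationary distribution is a μ-invariant left eigenvector of the sub-generator on the transient class.

```python
import numpy as np

# Transient class C = {1, 2}; the deficit in each row of Q_C is the rate of
# absorption into state 0.  A QSD m satisfies m Q_C = -mu m with m >= 0, sum 1.
Q_C = np.array([[-3.0, 2.0],
                [1.0, -2.0]])

eigvals, left_vecs = np.linalg.eig(Q_C.T)
i = np.argmax(eigvals.real)              # eigenvalue with largest real part: -mu
m = np.abs(left_vecs[:, i].real)
m /= m.sum()
mu = -eigvals[i].real
print(mu, m, m @ Q_C + mu * m)           # the last vector is numerically zero
```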
17

Pollett, P. K. "Reversibility, invariance and μ-invariance." Advances in Applied Probability 20, no. 03 (September 1988): 600–621. http://dx.doi.org/10.1017/s0001867800018164.

Annotation:
In this paper we consider a number of questions relating to the problem of determining quasi-stationary distributions for transient Markov processes. First we find conditions under which a measure or vector that is µ-invariant for a matrix of transition rates is also μ-invariant for the family of transition matrices of the minimal process it generates. These provide a means for determining whether or not the so-called stationary conditional quasi-stationary distribution exists in the λ-transient case. The process is not assumed to be regular, nor is it assumed to be uniform or irreducible. In deriving the invariance conditions we reveal a relationship between μ-invariance and the invariance of measures for related processes called the μ-reverse and the μ-dual processes. They play a role analogous to the time-reverse process which arises in the discussion of stationary distributions. Secondly we bring the related notions of detail-balance and reversibility into the realm of quasi-stationary processes. For example, if a process can be identified as being μ-reversible, the problem of determining quasi-stationary distributions is made much simpler. Finally, we consider some practical problems that emerge when calculating quasi-stationary distributions directly from the transition rates of the process. Our results are illustrated with reference to a variety of processes including examples of birth and death processes and the birth, death and catastrophe process.
18

Kazakevičius, Vytautas, and Remigijus Leipus. "A new theorem on the existence of invariant distributions with applications to ARCH processes." Journal of Applied Probability 40, no. 1 (March 2003): 147–62. http://dx.doi.org/10.1239/jap/1044476832.

Annotation:
A new theorem on the existence of an invariant initial distribution for a Markov chain evolving on a Polish space is proved. As an application of the theorem, sufficient conditions for the existence of integrated ARCH processes are established. In the case where these conditions are violated, the top Lyapunov exponent is shown to be zero.
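A hedged special-case sketch (ARCH(1) only, my own illustration, not the integrated-ARCH setting of the paper): the squared-volatility recursion is a random-coefficient AR(1), and a Monte Carlo estimate of its top Lyapunov exponent indicates whether an invariant distribution exists.

```python
import numpy as np

rng = np.random.default_rng(5)

# ARCH(1): sigma_t^2 = omega + (alpha * Z_{t-1}^2) * sigma_{t-1}^2.  A stationary
# solution exists when the top Lyapunov exponent E[log(alpha * Z^2)] is negative.
alpha = 1.0
Z = rng.standard_normal(1_000_000)
lyapunov = np.mean(np.log(alpha * Z**2))
print(lyapunov)   # about log(alpha) - 1.27; negative here, so an invariant law exists
```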
19

Kazakevičius, Vytautas, and Remigijus Leipus. "A new theorem on the existence of invariant distributions with applications to ARCH processes." Journal of Applied Probability 40, no. 01 (March 2003): 147–62. http://dx.doi.org/10.1017/s0021900200022312.

Annotation:
A new theorem on the existence of an invariant initial distribution for a Markov chain evolving on a Polish space is proved. As an application of the theorem, sufficient conditions for the existence of integrated ARCH processes are established. In the case where these conditions are violated, the top Lyapunov exponent is shown to be zero.
20

Li, Yangrong, Anthony G. Pakes, Jia Li, and Anhui Gu. "The Limit Behavior of Dual Markov Branching Processes." Journal of Applied Probability 45, no. 1 (March 2008): 176–89. http://dx.doi.org/10.1239/jap/1208358960.

Annotation:
A dual Markov branching process (DMBP) is by definition a Siegmund's predual of some Markov branching process (MBP). Such a process does exist and is uniquely determined by the so-called dual-branching property. Its q-matrix Q is derived and proved to be regular and monotone. Several equivalent definitions for a DMBP are given. The criteria for transience, positive recurrence, strong ergodicity, and the Feller property are established. The invariant distributions are given by a clear formulation with a geometric limit law.
21

Li, Yangrong, Anthony G. Pakes, Jia Li, and Anhui Gu. "The Limit Behavior of Dual Markov Branching Processes." Journal of Applied Probability 45, no. 01 (March 2008): 176–89. http://dx.doi.org/10.1017/s0021900200004046.

Annotation:
A dual Markov branching process (DMBP) is by definition a Siegmund's predual of some Markov branching process (MBP). Such a process does exist and is uniquely determined by the so-called dual-branching property. Its q-matrix Q is derived and proved to be regular and monotone. Several equivalent definitions for a DMBP are given. The criteria for transience, positive recurrence, strong ergodicity, and the Feller property are established. The invariant distributions are given by a clear formulation with a geometric limit law.
22

Nair, M. G., and P. K. Pollett. "On the relationship between µ-invariant measures and quasi-stationary distributions for continuous-time Markov chains." Advances in Applied Probability 25, no. 1 (March 1993): 82–102. http://dx.doi.org/10.2307/1427497.

Annotation:
In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992).In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations' involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit' chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents. The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
23

Nair, M. G., and P. K. Pollett. "On the relationship between µ-invariant measures and quasi-stationary distributions for continuous-time Markov chains." Advances in Applied Probability 25, no. 01 (March 1993): 82–102. http://dx.doi.org/10.1017/s0001867800025180.

Annotation:
In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992). In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations' involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit' chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents. The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
24

Ferrari, Pablo A., and Nancy Lopes Garcia. "One-dimensional loss networks and conditioned M/G/∞ queues." Journal of Applied Probability 35, no. 4 (December 1998): 963–75. http://dx.doi.org/10.1239/jap/1032438391.

Annotation:
We study one-dimensional continuous loss networks with length distribution G and cable capacity C. We prove that the unique stationary distribution ηL of the network for which the restriction on the number of calls to be less than C is imposed only in the segment [−L,L] is the same as the distribution of a stationary M/G/∞ queue conditioned to be less than C in the time interval [−L,L]. For distributions G which are of phase type (= absorbing times of finite state Markov processes) we show that the limit as L → ∞ of ηL exists and is unique. The limiting distribution turns out to be invariant for the infinite loss network. This was conjectured by Kelly (1991).
25

Asselah, Amine, Pablo A. Ferrari, and Pablo Groisman. "Quasistationary Distributions and Fleming-Viot Processes in Finite Spaces." Journal of Applied Probability 48, no. 02 (June 2011): 322–32. http://dx.doi.org/10.1017/s0021900200007907.

Annotation:
Consider a continuous-time Markov process with transition rates matrix Q in the state space Λ ⋃ {0}. In the associated Fleming-Viot process N particles evolve independently in Λ with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to one of the positions of the other particles, chosen uniformly at random. When Λ is finite, we show that the empirical distribution of the particles at a fixed time converges as N → ∞ to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming-Viot process with N particles converges as N → ∞ to the unique quasistationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1/N.
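The particle dynamics described in this abstract are easy to simulate; below is a hedged sketch on a two-state example of my own (states {1, 2} with absorbing state 0), whose empirical profile is compared with the quasi-stationary distribution obtained from the sub-generator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fleming-Viot particle sketch: N particles follow the rates in `rates`; a
# particle that attempts to jump to the absorbing state 0 instead relocates to
# the position of another particle chosen uniformly at random.
rates = {1: {2: 1.0, 0: 0.5}, 2: {1: 2.0, 0: 0.2}}    # out-rates from each state
N, T = 100, 50.0
pos = rng.choice([1, 2], size=N)
t = 0.0
while t < T:
    out = np.array([sum(rates[s].values()) for s in pos])
    t += rng.exponential(1.0 / out.sum())
    i = rng.choice(N, p=out / out.sum())              # particle that jumps next
    targets, w = zip(*rates[pos[i]].items())
    new = rng.choice(targets, p=np.array(w) / out[i])
    if new == 0:                                      # attempted absorption:
        j = rng.choice([k for k in range(N) if k != i])
        pos[i] = pos[j]                               # relocate onto another particle
    else:
        pos[i] = new

# Empirical profile, expected to approximate the quasi-stationary distribution
# of the chain killed at 0 (left Perron eigenvector of the sub-generator).
print(np.bincount(pos, minlength=3)[1:] / N)
Qsub = np.array([[-1.5, 1.0], [2.0, -2.2]])
w_, v_ = np.linalg.eig(Qsub.T)
q = np.abs(v_[:, np.argmax(w_.real)].real)
print(q / q.sum())
```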
26

Asselah, Amine, Pablo A. Ferrari, and Pablo Groisman. "Quasistationary Distributions and Fleming-Viot Processes in Finite Spaces." Journal of Applied Probability 48, no. 2 (June 2011): 322–32. http://dx.doi.org/10.1239/jap/1308662630.

Annotation:
Consider a continuous-time Markov process with transition rates matrix Q in the state space Λ ⋃ {0}. In the associated Fleming-Viot process N particles evolve independently in Λ with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to one of the positions of the other particles, chosen uniformly at random. When Λ is finite, we show that the empirical distribution of the particles at a fixed time converges as N → ∞ to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming-Viot process with N particles converges as N → ∞ to the unique quasistationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1 / N.
27

Ferrari, Pablo A., and Nancy Lopes Garcia. "One-dimensional loss networks and conditioned M/G/∞ queues." Journal of Applied Probability 35, no. 04 (December 1998): 963–75. http://dx.doi.org/10.1017/s0021900200016661.

Annotation:
We study one-dimensional continuous loss networks with length distribution G and cable capacity C. We prove that the unique stationary distribution η L of the network for which the restriction on the number of calls to be less than C is imposed only in the segment [−L,L] is the same as the distribution of a stationary M/G/∞ queue conditioned to be less than C in the time interval [−L,L]. For distributions G which are of phase type (= absorbing times of finite state Markov processes) we show that the limit as L → ∞ of η L exists and is unique. The limiting distribution turns out to be invariant for the infinite loss network. This was conjectured by Kelly (1991).
28

Pollett, P. K., and A. J. Roberts. "A description of the long-term behaviour of absorbing continuous-time Markov chains using a centre manifold." Advances in Applied Probability 22, no. 1 (March 1990): 111–28. http://dx.doi.org/10.2307/1427600.

Annotation:
We use the notion of an invariant manifold to describe the long-term behaviour of absorbing continuous-time Markov processes with a denumerable infinity of states. We show that there exists an invariant manifold for the forward differential equations and we are able to describe the evolution of the state probabilities on this manifold. Our approach gives rise to a new method for calculating conditional limiting distributions, one which is also appropriate for dealing with processes whose transition probabilities satisfy a system of non-linear differential equations.
29

Pollett, P. K., and A. J. Roberts. "A description of the long-term behaviour of absorbing continuous-time Markov chains using a centre manifold." Advances in Applied Probability 22, no. 01 (March 1990): 111–28. http://dx.doi.org/10.1017/s0001867800019364.

Annotation:
We use the notion of an invariant manifold to describe the long-term behaviour of absorbing continuous-time Markov processes with a denumerable infinity of states. We show that there exists an invariant manifold for the forward differential equations and we are able to describe the evolution of the state probabilities on this manifold. Our approach gives rise to a new method for calculating conditional limiting distributions, one which is also appropriate for dealing with processes whose transition probabilities satisfy a system of non-linear differential equations.
30

Daduna, Hans, and Ryszard Szekli. "Correlation formulas for Markovian network processes in a random environment." Advances in Applied Probability 48, no. 1 (March 2016): 176–98. http://dx.doi.org/10.1017/apr.2015.12.

Annotation:
We consider Markov processes, which describe, e.g., queueing network processes, in a random environment which influences the network by determining random breakdowns of nodes and the necessity of repair thereafter. Starting from an explicit steady-state distribution of product form available in the literature, we note that this steady-state distribution does not provide information about the correlation structure in time and space (over nodes). We study this correlation structure via one-step correlations for the queueing-environment process. Although formulas for absolute values of these correlations are complicated, the differences of correlations of related networks are simple and have a nice structure. We therefore compare two networks in a random environment having the same invariant distribution, and focus on the time behaviour of the processes when in such a network the environment changes or the rules for travelling are perturbed. Evaluating the comparison formulas we compare spectral gaps and asymptotic variances of related processes.
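As a small side sketch of one of the quantities compared above (spectral gaps), with an example generator of my own: a symmetric rate matrix is reversible with respect to the uniform distribution, so its spectral gap is simply the distance from zero to its nearest nonzero eigenvalue.

```python
import numpy as np

# Symmetric generator Q: reversible w.r.t. the uniform distribution, all
# eigenvalues real; the spectral gap is -(second largest eigenvalue).
Q = np.array([[-1.0, 0.6, 0.4],
              [0.6, -1.1, 0.5],
              [0.4, 0.5, -0.9]])
ev = np.linalg.eigvalsh(Q)               # ascending order; the largest is 0
print(ev, -ev[-2])                       # eigenvalues and spectral gap
```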
31

Klebaner, F. C., U. Rösler, and S. Sagitov. "Transformations of Galton-Watson processes and linear fractional reproduction." Advances in Applied Probability 39, no. 4 (December 2007): 1036–53. http://dx.doi.org/10.1239/aap/1198177238.

Annotation:
By establishing general relationships between branching transformations (Harris-Sevastyanov, Lamperti-Ney, time reversals, and Asmussen-Sigman) and Markov chain transforms (Doob's h-transform, time reversal, and the cone dual), we discover a deeper connection between these transformations with harmonic functions and invariant measures for the process itself and its space-time process. We give a classification of the duals into Doob's h-transform, pathwise time reversal, and cone reversal. Explicit results are obtained for the linear fractional offspring distribution. Remarkably, for this case, all reversals turn out to be a Galton-Watson process with a dual reproduction law and eternal particle or some kind of immigration. In particular, we generalize a result of Klebaner and Sagitov (2002) in which only a geometric offspring distribution was considered. A new graphical representation in terms of an associated simple random walk on N2 allows for illuminating picture proofs of our main results concerning transformations of the linear fractional Galton-Watson process.
32

Klebaner, F. C., U. Rösler, and S. Sagitov. "Transformations of Galton-Watson processes and linear fractional reproduction." Advances in Applied Probability 39, no. 04 (December 2007): 1036–53. http://dx.doi.org/10.1017/s0001867800002226.

Annotation:
By establishing general relationships between branching transformations (Harris-Sevastyanov, Lamperti-Ney, time reversals, and Asmussen-Sigman) and Markov chain transforms (Doob's h-transform, time reversal, and the cone dual), we discover a deeper connection between these transformations with harmonic functions and invariant measures for the process itself and its space-time process. We give a classification of the duals into Doob's h-transform, pathwise time reversal, and cone reversal. Explicit results are obtained for the linear fractional offspring distribution. Remarkably, for this case, all reversals turn out to be a Galton-Watson process with a dual reproduction law and eternal particle or some kind of immigration. In particular, we generalize a result of Klebaner and Sagitov (2002) in which only a geometric offspring distribution was considered. A new graphical representation in terms of an associated simple random walk on N 2 allows for illuminating picture proofs of our main results concerning transformations of the linear fractional Galton-Watson process.
33

Thierrin, Ferenc Cole, Fady Alajaji, and Tamás Linder. "Rényi Cross-Entropy Measures for Common Distributions and Processes with Memory." Entropy 24, no. 10 (October 4, 2022): 1417. http://dx.doi.org/10.3390/e24101417.

Annotation:
Two Rényi-type generalizations of the Shannon cross-entropy, the Rényi cross-entropy and the Natural Rényi cross-entropy, were recently used as loss functions for the improved design of deep learning generative adversarial networks. In this work, we derive the Rényi and Natural Rényi differential cross-entropy measures in closed form for a wide class of common continuous distributions belonging to the exponential family, and we tabulate the results for ease of reference. We also summarise the Rényi-type cross-entropy rates between stationary Gaussian processes and between finite-alphabet time-invariant Markov sources.
34

Fackeldey, Konstantin, Amir Niknejad, and Marcus Weber. "Finding metastabilities in reversible Markov chains based on incomplete sampling." Special Matrices 5, no. 1 (January 26, 2017): 73–81. http://dx.doi.org/10.1515/spma-2017-0006.

Annotation:
In order to fully characterize the state-transition behaviour of finite Markov chains one needs to provide the corresponding transition matrix P. In many applications such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it is different from matrix completion approaches which try to approximate the transition matrix by using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.
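For reference, the fully sampled version of the first step above is standard; the sketch below (toy matrix of mine, not the partial-sampling method of the paper) recovers the stationary distribution as the leading left eigenvector of P.

```python
import numpy as np

# Stationary distribution of a row-stochastic matrix P via its leading left
# eigenvector (eigenvalue 1 of P^T), normalized to sum to one.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])

w, v = np.linalg.eig(P.T)
pi = np.abs(v[:, np.argmax(w.real)].real)
pi /= pi.sum()
print(pi, np.allclose(pi @ P, pi))       # True: pi is invariant for P
```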
35

Chen, Francis K. C., and Richard Cowan. "Invariant distributions for shapes in sequences of randomly-divided rectangles." Advances in Applied Probability 31, no. 1 (March 1999): 1–14. http://dx.doi.org/10.1239/aap/1029954262.

Annotation:
Interest has been shown in Markovian sequences of geometric shapes. Mostly the equations for invariant probability measures over shape space are extremely complicated and multidimensional. This paper deals with rectangles which have a simple one-dimensional shape descriptor. We explore the invariant distributions of shape under a variety of randomised rules for splitting the rectangle into two sub-rectangles, with numerous methods for selecting the next shape in sequence. Many explicit results emerge. These help to fill a vacant niche in shape theory, whilst contributing at the same time, new distributions on [0,1] and interesting examples of Markov processes or, in the language of another discipline, of stochastic dynamical systems.
36

Chen, Francis K. C., and Richard Cowan. "Invariant distributions for shapes in sequences of randomly-divided rectangles." Advances in Applied Probability 31, no. 01 (March 1999): 1–14. http://dx.doi.org/10.1017/s0001867800008910.

Annotation:
Interest has been shown in Markovian sequences of geometric shapes. Mostly the equations for invariant probability measures over shape space are extremely complicated and multidimensional. This paper deals with rectangles which have a simple one-dimensional shape descriptor. We explore the invariant distributions of shape under a variety of randomised rules for splitting the rectangle into two sub-rectangles, with numerous methods for selecting the next shape in sequence. Many explicit results emerge. These help to fill a vacant niche in shape theory, whilst contributing at the same time, new distributions on [0,1] and interesting examples of Markov processes or, in the language of another discipline, of stochastic dynamical systems.
37

Ball, Frank, Robin K. Milne, Ian D. Tame, and Geoffrey F. Yeo. "Superposition of Interacting Aggregated Continuous-Time Markov Chains." Advances in Applied Probability 29, no. 1 (March 1997): 56–91. http://dx.doi.org/10.2307/1427861.

Annotation:
Consider a system of interacting finite Markov chains in continuous time, where each subsystem is aggregated by a common partitioning of the state space. The interaction is assumed to arise from dependence of some of the transition rates for a given subsystem at a specified time on the states of the other subsystems at that time. With two subsystem classes, labelled 0 and 1, the superposition process arising from a system counts the number of subsystems in the latter class. Key structure and results from the theory of aggregated Markov processes are summarized. These are then applied also to superposition processes. In particular, we consider invariant distributions for the level m entry process, marginal and joint distributions for sojourn-times of the superposition process at its various levels, and moments and correlation functions associated with these distributions. The distributions are obtained mainly by using matrix methods, though an approach based on point process methods and conditional probability arguments is outlined. Conditions under which an interacting aggregated Markov chain is reversible are established. The ideas are illustrated with simple examples for which numerical results are obtained using Matlab. Motivation for this study has come from stochastic modelling of the behaviour of ion channels; another application is in reliability modelling.
38

Ball, Frank, Robin K. Milne, Ian D. Tame, and Geoffrey F. Yeo. "Superposition of Interacting Aggregated Continuous-Time Markov Chains." Advances in Applied Probability 29, no. 01 (March 1997): 56–91. http://dx.doi.org/10.1017/s0001867800027798.

Annotation:
Consider a system of interacting finite Markov chains in continuous time, where each subsystem is aggregated by a common partitioning of the state space. The interaction is assumed to arise from dependence of some of the transition rates for a given subsystem at a specified time on the states of the other subsystems at that time. With two subsystem classes, labelled 0 and 1, the superposition process arising from a system counts the number of subsystems in the latter class. Key structure and results from the theory of aggregated Markov processes are summarized. These are then applied also to superposition processes. In particular, we consider invariant distributions for the level m entry process, marginal and joint distributions for sojourn-times of the superposition process at its various levels, and moments and correlation functions associated with these distributions. The distributions are obtained mainly by using matrix methods, though an approach based on point process methods and conditional probability arguments is outlined. Conditions under which an interacting aggregated Markov chain is reversible are established. The ideas are illustrated with simple examples for which numerical results are obtained using Matlab. Motivation for this study has come from stochastic modelling of the behaviour of ion channels; another application is in reliability modelling.
39

Dshalalow, Jewgeni. "Multichannel queueing systems with infinite waiting room and stochastic control." Journal of Applied Probability 26, no. 2 (June 1989): 345–62. http://dx.doi.org/10.2307/3214040.

Annotation:
A wide class of multichannel queueing models appears to be useful in practice where the input stream of customers can be controlled at the moments preceding the customers' departures from the source (e.g. airports, transportation systems, inventories, tandem queues). In addition, the servicing facility can govern the intensity of the servicing process that further improves flexibility of the system. In such a multichannel queue with infinite waiting room the queueing process {Zt; t ≧ 0} is under investigation. The author obtains explicit formulas for the limiting distribution of (Zt) partly using an approach developed in previous work and based on the theory of semi-regenerative processes. Among other results the limiting distributions of the actual and virtual waiting time are derived. The input stream (which is not recurrent) is investigated, and distribution of the residual time from t to the next arrival is obtained. The author also treats a Markov chain embedded in (Zt) and gives a necessary and sufficient condition for its existence. Under this condition the invariant probability measure is derived.
40

Dshalalow, Jewgeni. "Multichannel queueing systems with infinite waiting room and stochastic control." Journal of Applied Probability 26, no. 02 (June 1989): 345–62. http://dx.doi.org/10.1017/s0021900200027339.

Annotation:
A wide class of multichannel queueing models appears to be useful in practice where the input stream of customers can be controlled at the moments preceding the customers' departures from the source (e.g. airports, transportation systems, inventories, tandem queues). In addition, the servicing facility can govern the intensity of the servicing process that further improves flexibility of the system. In such a multichannel queue with infinite waiting room the queueing process {Zt ; t ≧ 0} is under investigation. The author obtains explicit formulas for the limiting distribution of (Zt ) partly using an approach developed in previous work and based on the theory of semi-regenerative processes. Among other results the limiting distributions of the actual and virtual waiting time are derived. The input stream (which is not recurrent) is investigated, and distribution of the residual time from t to the next arrival is obtained. The author also treats a Markov chain embedded in (Zt ) and gives a necessary and sufficient condition for its existence. Under this condition the invariant probability measure is derived.
41

Robini, Marc C., Yoram Bresler, and Isabelle E. Magnin. "ON THE CONVERGENCE OF METROPOLIS-TYPE RELAXATION AND ANNEALING WITH CONSTRAINTS." Probability in the Engineering and Informational Sciences 16, no. 4 (October 2002): 427–52. http://dx.doi.org/10.1017/s0269964802164035.

Annotation:
We discuss the asymptotic behavior of time-inhomogeneous Metropolis chains for solving constrained sampling and optimization problems. In addition to the usual inverse temperature schedule (βn)n∈ℕ*, the type of Markov processes under consideration is controlled by a divergent sequence (θn)n∈ℕ* of parameters acting as Lagrange multipliers. The associated transition probability matrices (Pβn,θn)n∈ℕ* are defined by Pβ,θ(x, y) = q(x, y) exp(−β(Wθ(y) − Wθ(x))+) for all pairs (x, y) of distinct elements of a finite set Ω, where q is an irreducible and reversible Markov kernel and the energy function Wθ is of the form Wθ = U + θV for some functions U, V : Ω → ℝ. Our approach, which is based on a comparison of the distribution of the chain at time n with the invariant measure of Pβn,θn, requires the computation of an upper bound for the second largest eigenvalue in absolute value of Pβn,θn. We extend the geometric bounds derived by Ingrassia and we give new sufficient conditions on the control sequences for the algorithm to simulate a Gibbs distribution with energy U on the constrained set Ω̃ = {x ∈ Ω : V(x) = minz∈Ω V(z)} and to minimize U over Ω̃.
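A hedged sketch of a single transition of the kernel quoted in this abstract; the energy U, constraint V, proposal kernel q and the cooling schedules are toy choices of mine, whereas the paper's contribution is the conditions such schedules must satisfy.

```python
import numpy as np

rng = np.random.default_rng(2)

# One Metropolis-type step: a move x -> y proposed by a symmetric kernel q is
# accepted with probability exp(-beta * (W_theta(y) - W_theta(x))^+),
# where W_theta = U + theta * V.
states = np.arange(10)
U = rng.uniform(0.0, 1.0, size=10)            # energy to minimize
V = (states % 2).astype(float)                # constraint set: {x : V(x) = 0}

def metropolis_step(x, beta, theta):
    y = int((x + rng.choice([-2, -1, 1, 2])) % 10)    # symmetric proposal q
    dW = (U[y] + theta * V[y]) - (U[x] + theta * V[x])
    return y if rng.random() < np.exp(-beta * max(dW, 0.0)) else x

x = 0
for n in range(1, 20_000):
    beta, theta = 0.5 * np.log(1 + n), np.log(1 + n)  # illustrative slow schedules
    x = metropolis_step(x, beta, theta)
print(x, V[x], U[x])    # typically a low-U state satisfying the constraint V = 0
```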
42

Chong, Siang Yew, Peter Tiňo, Jun He, and Xin Yao. "A New Framework for Analysis of Coevolutionary Systems—Directed Graph Representation and Random Walks." Evolutionary Computation 27, no. 2 (June 2019): 195–228. http://dx.doi.org/10.1162/evco_a_00218.

Annotation:
Studying coevolutionary systems in the context of simplified models (i.e., games with pairwise interactions between coevolving solutions modeled as self plays) remains an open challenge since the rich underlying structures associated with pairwise-comparison-based fitness measures are often not taken fully into account. Although cyclic dynamics have been demonstrated in several contexts (such as intransitivity in coevolutionary problems), there is no complete characterization of cycle structures and their effects on coevolutionary search. We develop a new framework to address this issue. At the core of our approach is the directed graph (digraph) representation of coevolutionary problems that fully captures structures in the relations between candidate solutions. Coevolutionary processes are modeled as a specific type of Markov chains—random walks on digraphs. Using this framework, we show that coevolutionary problems admit a qualitative characterization: a coevolutionary problem is either solvable (there is a subset of solutions that dominates the remaining candidate solutions) or not. This has an implication on coevolutionary search. We further develop our framework that provides the means to construct quantitative tools for analysis of coevolutionary processes and demonstrate their applications through case studies. We show that coevolution of solvable problems corresponds to an absorbing Markov chain for which we can compute the expected hitting time of the absorbing class. Otherwise, coevolution will cycle indefinitely and the quantity of interest will be the limiting invariant distribution of the Markov chain. We also provide an index for characterizing complexity in coevolutionary problems and show how they can be generated in a controlled manner.
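The absorbing-chain quantity mentioned here reduces, for a finite transient class, to the classical fundamental-matrix computation; the two-state example below is mine, not one of the paper's case studies.

```python
import numpy as np

# Expected hitting time of the absorbing class: with Q the transition matrix
# restricted to transient states, the fundamental matrix is N = (I - Q)^{-1}
# and t = N 1 gives the expected number of steps to absorption per start state.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])
N = np.linalg.inv(np.eye(2) - Q)
t = N @ np.ones(2)
print(t)
```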
43

Ruessink, B. G. "Parameter stability and consistency in an alongshore-current model determined with Markov chain Monte Carlo." Journal of Hydroinformatics 10, no. 2 (March 1, 2008): 153–62. http://dx.doi.org/10.2166/hydro.2008.016.

Annotation:
When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
44

Hafouta, Yeor. "Limit theorems for some skew products with mixing base maps." Ergodic Theory and Dynamical Systems 41, no. 1 (August 5, 2019): 241–71. http://dx.doi.org/10.1017/etds.2019.48.

Annotation:
We obtain a central limit theorem, local limit theorems and renewal theorems for stationary processes generated by skew product maps $T(\omega,x)=(\theta\omega,T_{\omega}x)$ together with a $T$-invariant measure whose base map $\theta$ satisfies certain topological and mixing conditions and the maps $T_{\omega}$ on the fibers are certain non-singular distance-expanding maps. Our results hold true when $\theta$ is either a sufficiently fast mixing Markov shift with positive transition densities or a (non-uniform) Young tower with at least one periodic point and polynomial tails. The proofs are based on the random complex Ruelle–Perron–Frobenius theorem from Hafouta and Kifer [Nonconventional Limit Theorems and Random Dynamics. World Scientific, Singapore, 2018] applied with appropriate random transfer operators generated by $T_{\omega}$, together with certain regularity assumptions (as functions of $\omega$) of these operators. Limit theorems for deterministic processes whose distributions on the fibers are generated by Markov chains with transition operators satisfying a random version of the Doeblin condition are also obtained. The main innovation in this paper is that the results hold true even though the spectral theory used in Aimino, Nicol and Vaienti [Annealed and quenched limit theorems for random expanding dynamical systems. Probab. Theory Related Fields 162 (2015), 233–274] does not seem to be applicable, and the dual of the Koopman operator of $T$ (with respect to the invariant measure) does not seem to have a spectral gap.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Steif, Jeffrey E. „d̄-Convergence to equilibrium and space-time bernoullicity for spin systems in the M < ε case“. Ergodic Theory and Dynamical Systems 11, No. 3 (September 1991): 547–75. http://dx.doi.org/10.1017/s0143385700006337.

The full content of the source
Annotation:
Abstract Liggett has proved that for spin systems, Markov processes with state space {0,1}^{ℤ^n}, there is a unique stationary distribution in the M < ε regime and all initial configurations uniformly approach this unique stationary distribution exponentially in the weak topology. Here, M and ε are two parameters of the system. We extend this result to discrete time but strengthen it by proving exponential convergence in the stronger d̄ metric instead of the usual weak topology. This is then used to show that the unique stationary process with state space {0,1}^{ℤ^n} and index set ℤ is isomorphic (in the sense of ergodic theory) to an independent process indexed by ℤ. In the translation-invariant case, we prove the stronger fact that this stationary process, viewed as a {0,1}-valued process with index set ℤ^n × ℤ (space-time), is isomorphic to an independent process also indexed by ℤ^n × ℤ. This shows that this process is in some sense the most random possible. An application of this last result to approximation by an infinite number of finite systems concatenated independently together is also presented. Finally, we extend all of these results to continuous time.
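As a reminder, since the symbol was garbled in the title above: the d̄ distance referred to here is, in its standard form (not quoted from the paper), the minimal expected per-coordinate disagreement over all stationary couplings of two processes on the same finite alphabet,

\[
  \bar{d}(\mu,\nu) \;=\; \inf_{\lambda \in J(\mu,\nu)} \lambda\bigl(\{(x,y) : x_0 \neq y_0\}\bigr),
\]

where $J(\mu,\nu)$ denotes the set of joinings (stationary couplings) of $\mu$ and $\nu$; convergence in $\bar{d}$ implies, and is strictly stronger than, weak convergence.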
APA, Harvard, Vancouver, ISO and other citation styles
46

Macek, Wiesław M., Dariusz Wójcik and James L. Burch. „Magnetospheric Multiscale Observations of Markov Turbulence on Kinetic Scales“. Astrophysical Journal 943, No. 2 (01.02.2023): 152. http://dx.doi.org/10.3847/1538-4357/aca0a0.

The full content of the source
Annotation:
Abstract In our previous studies we have examined solar wind and magnetospheric plasma turbulence, including its Markovian character, on large inertial magnetohydrodynamic scales. Here we present the results of a statistical analysis of magnetic field fluctuations in the Earth’s magnetosheath, based on the Magnetospheric Multiscale mission, at much smaller kinetic scales. Following our spectral-analysis results, which show very steep slopes of about −16/3, we apply a Markov-process approach to turbulence in this kinetic regime. It is shown that the Chapman–Kolmogorov equation is satisfied and that the lowest-order Kramers–Moyal coefficients describing drift and diffusion with a power-law dependence are consistent with a generalized Ornstein–Uhlenbeck process. The solutions of the Fokker–Planck equation agree with the experimental probability density functions, which exhibit a universal global scale invariance throughout the kinetic domain. In particular, at moderate scales we have the kappa distribution, described by various peaked shapes with heavy tails, which, for large values of the kappa parameter, reduces to the Gaussian distribution at large inertial scales. This shows that the turbulence cascade can be described by Markov processes on very small scales as well. The obtained results on kinetic scales may be useful for a better understanding of the physical mechanisms governing turbulence.
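As a concrete illustration of the Kramers–Moyal step described above, the short Python sketch below estimates the drift and diffusion coefficients from the increments of a simulated Ornstein–Uhlenbeck process; the synthetic series stands in for the MMS magnetic-field data, and every parameter value is an assumption made for the example.

# Minimal sketch: estimate first/second Kramers-Moyal coefficients from a
# simulated Ornstein-Uhlenbeck process (synthetic stand-in for MMS data).
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 200_000
theta, sigma = 2.0, 0.5

# Euler-Maruyama simulation of db = -theta*b dt + sigma dW
b = np.empty(n)
b[0] = 0.0
noise = rng.normal(scale=np.sqrt(dt), size=n - 1)
for i in range(n - 1):
    b[i + 1] = b[i] - theta * b[i] * dt + sigma * noise[i]

# Conditional moments of the increments, binned on the current value b
bins = np.linspace(-1.0, 1.0, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
inc = np.diff(b)
idx = np.digitize(b[:-1], bins) - 1
d1 = np.full(len(centers), np.nan)   # drift     D1(b) ~ <db> / dt
d2 = np.full(len(centers), np.nan)   # diffusion D2(b) ~ <db^2> / (2 dt)
for k in range(len(centers)):
    sel = idx == k
    if sel.sum() > 100:
        d1[k] = inc[sel].mean() / dt
        d2[k] = (inc[sel] ** 2).mean() / (2 * dt)

# For an OU process, D1(b) is approximately -theta*b and D2(b) is sigma^2/2.
mask = ~np.isnan(d1)
print(np.polyfit(centers[mask], d1[mask], 1))   # fitted slope should be near -theta
print(np.nanmean(d2), sigma**2 / 2)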
APA, Harvard, Vancouver, ISO and other citation styles
47

Li, Xiu-Juan, and Yu-Peng Yang. „Signatures of the Self-organized Criticality Phenomenon in Precursors of Gamma-Ray Bursts“. Astrophysical Journal Letters 955, No. 2 (29.09.2023): L34. http://dx.doi.org/10.3847/2041-8213/acf12c.

The full content of the source
Annotation:
Abstract Precursors provide important clues to the nature of gamma-ray burst (GRB) central engines and can be used to constrain GRB physical processes. In this Letter, we study the self-organized criticality in precursors of long GRBs in the third Swift/Burst Alert Telescope catalog. We investigate the differential and cumulative size distributions of 100 precursors, including peak flux, duration, rise time, decay time, and quiescent time, with the Markov Chain Monte Carlo technique. It is found that all of the distributions can be well described by power-law models and understood within the physical framework of a self-organized criticality system. In addition, we inspect the cumulative distribution functions of the size differences with a q-Gaussian function. The scale-invariance structures of the precursors further strengthen our findings. Moreover, similar analyses are made for 127 main bursts. The results show that both precursors and main bursts can be attributed to a self-organized criticality system with spatial dimension S = 3 and are driven by a similar magnetically dominated process.
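As a minimal illustration of fitting a power-law size distribution, the sketch below draws a synthetic sample and recovers the index with the standard continuous maximum-likelihood estimator; the Letter itself uses a Markov chain Monte Carlo fit, which would sample the same likelihood rather than maximize it. All numbers here are illustrative assumptions.

# Minimal sketch: maximum-likelihood power-law index for a "size" sample
# (synthetic data; the Letter fits such distributions with MCMC instead).
import numpy as np

rng = np.random.default_rng(2)
alpha_true, x_min, n = 2.0, 1.0, 1000

# Draw from a power law p(x) ~ x^(-alpha) for x >= x_min (inverse transform)
u = rng.uniform(size=n)
x = x_min * (1 - u) ** (-1.0 / (alpha_true - 1.0))

# Continuous-case MLE (Clauset et al. form) and its approximate standard error
alpha_hat = 1.0 + n / np.sum(np.log(x / x_min))
alpha_err = (alpha_hat - 1.0) / np.sqrt(n)
print(f"alpha = {alpha_hat:.2f} +/- {alpha_err:.2f}")

# A Metropolis chain over the same log-likelihood would give the posterior
# used to quote credible intervals, analogous to the MCMC fits in the Letter.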
APA, Harvard, Vancouver, ISO and other citation styles
48

Massoud, Elias C., A. Anthony Bloom, Marcos Longo, John T. Reager, Paul A. Levine and John R. Worden. „Information content of soil hydrology in a west Amazon watershed as informed by GRACE“. Hydrology and Earth System Sciences 26, No. 5 (15.03.2022): 1407–23. http://dx.doi.org/10.5194/hess-26-1407-2022.

The full content of the source
Annotation:
Abstract. The seasonal-to-decadal terrestrial water balance on river basin scales depends on several well-characterized but uncertain soil physical processes, including soil moisture, plant available water, rooting depth, and recharge to lower soil layers. Reducing uncertainties in these quantities using observations is a key step toward improving the data fidelity and skill of land surface models. In this study, we quantitatively characterize the capability of Gravity Recovery and Climate Experiment (NASA-GRACE) measurements – a key constraint on total water storage (TWS) – to inform and constrain these processes. We use a reduced-complexity physically based model capable of simulating the hydrologic cycle, and we apply Bayesian inference on the model parameters using a Markov chain Monte Carlo algorithm, to minimize mismatches between model-simulated and GRACE-observed TWS anomalies. Based on the prior and posterior model parameter distributions, we further quantify information gain with regard to terrestrial water states, associated fluxes, and time-invariant process parameters. We show that the data-constrained terrestrial water storage model can capture basic physics of the hydrologic cycle for a watershed in the western Amazon during the period January 2003 through December 2012, with an r² of 0.98 and a root mean square error of 30.99 mm between observed and simulated TWS. Furthermore, we show a reduction of uncertainty in many of the parameters and state variables, ranging from a 2 % reduction in uncertainty for the porosity parameter to an 85 % reduction for the rooting depth parameter. The annual and interannual variability of the system is also simulated accurately, with the model simulations capturing the impacts of the 2005–2006 and 2010–2011 South American droughts. The results shown here suggest the potential of using gravimetric observations of TWS to identify and constrain key parameters in soil hydrologic models.
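The sketch below illustrates, on a deliberately simplified example, how a storage model can be constrained by TWS anomalies and how the resulting information gain can be quantified as a reduction of parameter uncertainty. A one-parameter bucket model and a grid posterior stand in for the study's physically based model and MCMC sampler; every name and number is an assumption made for illustration.

# Minimal sketch: quantify information gain (prior vs. posterior spread) for
# one parameter of a toy water-balance model fit to synthetic TWS anomalies.
import numpy as np

rng = np.random.default_rng(3)
months = 120
t = np.arange(months)
precip = 150 + 100 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 20, months)
et = 100 + 20 * np.sin(2 * np.pi * t / 12 + 1.0)

def tws_anomaly(k):
    # k: monthly drainage fraction of storage (hypothetical parameter)
    store, s = 500.0, np.empty(months)
    for i in range(months):
        store = max(store + precip[i] - et[i] - k * store, 0.0)
        s[i] = store
    return s - s.mean()                       # GRACE delivers anomalies, not levels

obs = tws_anomaly(0.10) + rng.normal(0, 15, months)   # synthetic "GRACE" record

k_grid = np.linspace(0.02, 0.40, 400)                 # flat prior on a grid
log_like = np.array([-0.5 * np.sum((obs - tws_anomaly(k)) ** 2) / 15.0**2
                     for k in k_grid])
post = np.exp(log_like - log_like.max())
post /= post.sum()

prior_std = k_grid.std()                              # spread of the flat prior
post_mean = np.sum(k_grid * post)
post_std = np.sqrt(np.sum(post * (k_grid - post_mean) ** 2))
print(f"uncertainty reduction: {100 * (1 - post_std / prior_std):.0f}%")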
APA, Harvard, Vancouver, ISO and other citation styles
49

Osada, Hirofumi. „Stochastic geometry and dynamics of infinitely many particle systems—random matrices and interacting Brownian motions in infinite dimensions“. Sugaku Expositions 34, No. 2 (12.10.2021): 141–73. http://dx.doi.org/10.1090/suga/461.

The full content of the source
Annotation:
We explain the general theories involved in solving an infinite-dimensional stochastic differential equation (ISDE) for interacting Brownian motions in infinite dimensions related to random matrices. Typical examples are the stochastic dynamics of infinite particle systems with logarithmic interaction potentials, such as the sine, Airy, Bessel, and also the Ginibre interacting Brownian motions. The first three are infinite-dimensional stochastic dynamics in one-dimensional space related to the random matrices called Gaussian ensembles. Their stationary distributions, the sine, Airy, and Bessel point processes, are given by the limit point processes of the distributions of eigenvalues of these random matrices. The sine, Airy, and Bessel point processes and interacting Brownian motions are thought to be geometrically and dynamically universal as the limits of bulk, soft-edge, and hard-edge scaling. The Ginibre point process is a rotation- and translation-invariant point process on ℝ², and an equilibrium state of the Ginibre interacting Brownian motions. It is the bulk limit of the distributions of eigenvalues of non-Hermitian Gaussian random matrices. When the interacting Brownian motions constitute a one-dimensional system interacting with each other through the logarithmic potential with inverse temperature β = 2, an algebraic construction is known in which the stochastic dynamics are defined by the space-time correlation function. The approach based on stochastic analysis (called the analytic approach) can be applied to an extremely wide class. If we apply the analytic approach to this system, we see that these two constructions give the same stochastic dynamics. From the algebraic construction, despite the system being an infinite interacting particle system, it is possible to represent and calculate various quantities, such as moments, by the correlation functions; we can thus obtain quantitative information. From the analytic construction, it is possible to represent the dynamics as a solution of an ISDE, and to obtain qualitative information such as semi-martingale properties, continuity and non-collision properties of each particle, and the strong Markov property of the infinite particle system as a whole. The Ginibre interacting Brownian motions constitute a two-dimensional infinite particle system related to non-Hermitian Gaussian random matrices. It has a logarithmic interaction potential with β = 2, but no algebraic construction is known; the present result is the only construction.
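As a finite-dimensional toy version of the dynamics discussed above, the sketch below integrates Dyson's Brownian motion (N particles with logarithmic repulsion at β = 2) with an Euler–Maruyama scheme. The ISDEs of the survey concern the infinite-particle limits (sine, Airy, Bessel, Ginibre) and require the analytic machinery described in the text, so this is only meant to convey the form of the interaction; particle number, step size, and initial configuration are arbitrary choices.

# Minimal finite-N sketch: Euler-Maruyama for Dyson's Brownian motion
# (beta = 2 logarithmic interaction); the survey's ISDEs concern the
# infinite-particle limits, not this finite truncation.
import numpy as np

rng = np.random.default_rng(4)
n_particles, dt, n_steps = 20, 1e-4, 5000

x = np.linspace(-1.0, 1.0, n_particles)         # evenly spaced initial configuration
for _ in range(n_steps):
    diff = x[:, None] - x[None, :]              # pairwise differences x_i - x_j
    np.fill_diagonal(diff, np.inf)              # exclude the self-term (1/inf = 0)
    drift = np.sum(1.0 / diff, axis=1)          # sum_j 1/(x_i - x_j)
    x = x + drift * dt + np.sqrt(dt) * rng.normal(size=n_particles)
    x = np.sort(x)                              # re-sort for numerical safety; the exact
                                                # dynamics are non-colliding

print(np.round(x, 3))
# Adding a confining term -0.5 * x * dt to the drift would make the beta = 2
# (GUE-type) joint eigenvalue density the stationary law of this finite system.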
APA, Harvard, Vancouver, ISO and other citation styles
50

Pollett, P. K. „Connecting reversible Markov processes“. Advances in Applied Probability 18, No. 4 (December 1986): 880–900. http://dx.doi.org/10.2307/1427254.

The full content of the source
Annotation:
We provide a framework for interconnecting a collection of reversible Markov processes in such a way that the resulting process has a product-form invariant measure with respect to which the process is reversible. A number of examples are discussed including Kingman's reversible migration process, interconnected random walks and stratified clustering processes.
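For intuition, the sketch below numerically checks detailed balance for a small reversible (birth-death) chain and verifies that two such chains run independently, the simplest conceivable "interconnection", have a product-form invariant distribution. The paper's framework covers genuinely coupled reversible processes, which this toy example does not attempt; all sizes and transition probabilities are illustrative.

# Minimal sketch: detailed balance for a birth-death chain, and the
# product-form invariant law of two chains run independently.
import numpy as np

def birth_death(p_up, n_states=5):
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        if i + 1 < n_states:
            P[i, i + 1] = p_up
        if i - 1 >= 0:
            P[i, i - 1] = 1 - p_up
        P[i, i] = 1 - P[i].sum()        # remaining mass stays put
    return P

def invariant(P):
    w, v = np.linalg.eig(P.T)           # left eigenvector for eigenvalue 1
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

P = birth_death(0.3)
pi = invariant(P)
# Reversibility: pi_i * P[i, j] == pi_j * P[j, i] for all i, j
print(np.allclose(pi[:, None] * P, (pi[:, None] * P).T))

# Two independent copies: the joint transition matrix is the Kronecker product,
# and the invariant distribution factorizes into the product of the marginals.
Q = birth_death(0.6)
pi_q = invariant(Q)
pi_joint = invariant(np.kron(P, Q))
print(np.allclose(pi_joint, np.kron(pi, pi_q)))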
APA, Harvard, Vancouver, ISO and other citation styles