Journal articles on the topic 'Multilevel models (Statistics) Markov processes'

Listed below are journal articles for research on the topic 'Multilevel models (Statistics) Markov processes.' Abstracts are reproduced where available in the publication metadata.

1

Keiding, Niels, and Richard D. Gill. "Random Truncation Models and Markov Processes." Annals of Statistics 18, no. 2 (June 1990): 582–602. http://dx.doi.org/10.1214/aos/1176347617.

2

Huzurbazar, Aparna V. "Multistate Models, Flowgraph Models, and Semi-Markov Processes." Communications in Statistics - Theory and Methods 33, no. 3 (January 5, 2004): 457–74. http://dx.doi.org/10.1081/sta-120028678.

3

Didelez, Vanessa. "Graphical Models for Composable Finite Markov Processes." Scandinavian Journal of Statistics 34, no. 1 (March 2007): 169–85. http://dx.doi.org/10.1111/j.1467-9469.2006.00528.x.

4

Mitrophanov, Alexander Yu, Alexandre Lomsadze, and Mark Borodovsky. "Sensitivity of hidden Markov models." Journal of Applied Probability 42, no. 3 (September 2005): 632–42. http://dx.doi.org/10.1239/jap/1127322017.

Abstract:
We derive a tight perturbation bound for hidden Markov models. Using this bound, we show that, in many cases, the distribution of a hidden Markov model is considerably more sensitive to perturbations in the emission probabilities than to perturbations in the transition probability matrix and the initial distribution of the underlying Markov chain. Our approach can also be used to assess the sensitivity of other stochastic models, such as mixture processes and semi-Markov processes.
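
A minimal numerical illustration of the phenomenon the abstract describes (not the paper's bound itself): for a toy two-state HMM with invented parameters, perturb the transition matrix and the emission matrix by the same amount and compare the total variation distance between the exact distributions of a short observation sequence, computed by the forward recursion. For many parameter choices the emission perturbation moves the distribution further, in line with the paper's qualitative conclusion.

```python
import itertools
import numpy as np

def seq_dist(pi, A, B, T):
    """Exact distribution of the observation sequence of length T (forward recursion)."""
    dist = {}
    for seq in itertools.product(range(B.shape[1]), repeat=T):
        alpha = pi * B[:, seq[0]]
        for o in seq[1:]:
            alpha = (alpha @ A) * B[:, o]
        dist[seq] = alpha.sum()
    return dist

def tv(p, q):
    """Total variation distance between two distributions on the same support."""
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

pi = np.array([0.5, 0.5])                        # initial distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])           # transition probability matrix
B = np.array([[0.8, 0.2], [0.3, 0.7]])           # emission probability matrix
E = 0.05 * np.array([[-1.0, 1.0], [1.0, -1.0]])  # perturbation; rows sum to 0

base = seq_dist(pi, A, B, T=6)
print("TV, perturbed transitions:", tv(base, seq_dist(pi, A + E, B, T=6)))
print("TV, perturbed emissions:  ", tv(base, seq_dist(pi, A, B + E, T=6)))
```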
5

Resnick, Sidney, and Rishin Roy. "Multivariate extremal processes, leader processes and dynamic choice models." Advances in Applied Probability 22, no. 2 (June 1990): 309–31. http://dx.doi.org/10.2307/1427538.

Abstract:
Let (Y(t), t > 0) be a d-dimensional non-homogeneous multivariate extremal process. We suppose the ith component of Y describes time-dependent behaviour of random utilities associated with the ith choice. At time t we choose the ith alternative if the ith component of Y(t) is the largest of all the components. Let J(t) be the index of the largest component at time t, so J has range {1, …, d}, and call {J(t)} the leader process. Let Z(t) be the value of the largest component at time t. Then the bivariate process {(J(t), Z(t))} is Markov. We discuss when J(t) and Z(t) are independent, when {J(s), 0 < s ≦ t} and Z(t) are independent, and when J(t) and {Z(s), 0 < s ≦ t} are independent. In usual circumstances, {J(t)} is Markov and particular properties are given when the underlying distribution is max-stable. In the max-stable time-homogeneous case, {J(e^t)} is a stationary Markov chain with stationary transition probabilities.
6

Bartolucci, Francesco, and Monia Lupparelli. "Pairwise Likelihood Inference for Nested Hidden Markov Chain Models for Multilevel Longitudinal Data." Journal of the American Statistical Association 111, no. 513 (January 2, 2016): 216–28. http://dx.doi.org/10.1080/01621459.2014.998935.

7

Ōsawa, Hideo. "Reversibility of Markov chains with applications to storage models." Journal of Applied Probability 22, no. 1 (March 1985): 123–37. http://dx.doi.org/10.2307/3213752.

Abstract:
This paper studies the reversibility conditions of stationary Markov chains (discrete-time Markov processes) with general state space. In particular, we investigate the Markov chains having atomic points in the state space. Such processes are often seen in storage models, for example waiting time in a queue, insurance risk reserve, dam content and so on. The necessary and sufficient conditions for reversibility of these processes are obtained. Further, we apply these conditions to some storage models and present some interesting results for single-server queues and a finite insurance risk model.
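
For a finite chain, the reversibility criterion reduces to detailed balance, which is easy to check numerically. The sketch below is a crude finite-state stand-in for the storage models in the paper, with invented rates: it verifies pi[i] * P[i, j] == pi[j] * P[j, i] for a reflecting random walk, whose reflecting barrier at 0 plays the role of the atom.

```python
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a distribution."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def is_reversible(P, tol=1e-10):
    pi = stationary(P)
    F = pi[:, None] * P            # F[i, j] = pi[i] * P[i, j]
    return np.allclose(F, F.T, atol=tol)

n, p, q = 10, 0.4, 0.6             # birth and death probabilities, p + q = 1
P = np.zeros((n, n))
for i in range(n):
    P[i, min(i + 1, n - 1)] += p   # arrival
    P[i, max(i - 1, 0)] += q       # service / reflection at the atom 0
print(is_reversible(P))            # True: birth-death chains satisfy detailed balance
```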
8

Lefèvre, Claude, and Matthieu Simon. "SIR-Type Epidemic Models as Block-Structured Markov Processes." Methodology and Computing in Applied Probability 22, no. 2 (April 3, 2019): 433–53. http://dx.doi.org/10.1007/s11009-019-09710-y.

9

Aggoun, Lakhdar, and Robert J. Elliott. "Finite-dimensional models for hidden Markov chains." Advances in Applied Probability 27, no. 1 (March 1995): 146–60. http://dx.doi.org/10.2307/1428101.

Abstract:
A continuous-time, non-linear filtering problem is considered in which both signal and observation processes are Markov chains. New finite-dimensional filters and smoothers are obtained for the state of the signal, for the number of jumps from one state to another, for the occupation time in any state of the signal, and for joint occupation times of the two processes. These estimates are then used in the expectation maximization algorithm to improve the parameters in the model. Consequently, our filters and model are adaptive, or self-tuning.
10

Borisov, A. V. "State Analysis of Hidden Markov Models Governed by Special Jump Processes." Theory of Probability & Its Applications 51, no. 3 (January 2007): 518–28. http://dx.doi.org/10.1137/s0040585x97982542.

11

Lam, C. Y. Teresa. "New better than used in expectation processes." Journal of Applied Probability 29, no. 1 (March 1992): 116–28. http://dx.doi.org/10.2307/3214796.

Abstract:
In this paper, we study the new better than used in expectation (NBUE) and new worse than used in expectation (NWUE) properties of Markov renewal processes. We show that a Markov renewal process belongs to a more general class of stochastic processes encountered in reliability or maintenance applications. We present sufficient conditions such that the first-passage times of these processes are new better than used in expectation. The results are applied to the study of shock and repair models, random repair time processes, inventory, and queueing models.
12

Ball, Frank. "Central limit theorems for multivariate semi-Markov sequences and processes, with applications." Journal of Applied Probability 36, no. 2 (June 1999): 415–32. http://dx.doi.org/10.1239/jap/1032374462.

Abstract:
In this paper, central limit theorems for multivariate semi-Markov sequences and processes are obtained, both as the number of jumps of the associated Markov chain tends to infinity and, if appropriate, as the time for which the process has been running tends to infinity. The theorems are widely applicable since many functions defined on Markov or semi-Markov processes can be analysed by exploiting appropriate embedded multivariate semi-Markov sequences. An application to a problem in ion channel modelling is described in detail. Other applications, including to multivariate stationary reward processes, counting processes associated with Markov renewal processes, the interpretation of Markov chain Monte Carlo runs and statistical inference on semi-Markov models are briefly outlined.
13

Kemp, A. W. "Steady-state Markov chain models for the Heine and Euler distributions." Journal of Applied Probability 29, no. 4 (December 1992): 869–76. http://dx.doi.org/10.2307/3214719.

Abstract:
The paper puts forward steady-state Markov chain models for the Heine and Euler distributions. The models for oil exploration strategies that were discussed by Benkherouf and Bather (1988) are reinterpreted as current-age models for discrete renewal processes. Steady-state success-runs processes with non-zero probabilities that a trial is abandoned, Foster processes, and equilibrium random walks corresponding to elective M/M/1 queues are also examined.
14

Komaki, Fumiyasu. "Homogeneous Gaussian Markov processes on general lattices." Advances in Applied Probability 28, no. 1 (March 1996): 189–206. http://dx.doi.org/10.2307/1427917.

Abstract:
A homogeneous Gaussian Markov lattice-process model has a regression coefficient that determines the extent to which a random variable of a vertex is dependent on those of the neighbors. In many studies, the absolute value of this parameter has been assumed to be less than the reciprocal of the number of neighbors. This condition is shown to be necessary and sufficient for the existence of the Gaussian process satisfying the model equations under some assumptions on lattices using the notion of dual processes. We also give examples of models that neither satisfy the condition imposed on the region for the parameter nor the assumptions on lattices. A formula for autocovariance functions of Gaussian Markov processes on general lattices is derived, and numerical procedures to calculate the autocovariance functions are proposed.
15

Møller, Jesper, and Rasmus Plenge Waagepetersen. "Markov connected component fields." Advances in Applied Probability 30, no. 1 (March 1998): 1–35. http://dx.doi.org/10.1239/aap/1035227989.

Abstract:
A new class of Gibbsian models with potentials associated with the connected components or homogeneous parts of images is introduced. For these models the neighbourhood of a pixel is not fixed as for Markov random fields, but is given by the components which are adjacent to the pixel. The relationship to Markov random fields and marked point processes is explored and spatial Markov properties are established. Extensions to infinite lattices are also studied, and statistical inference problems including geostatistical applications and statistical image analysis are discussed. Finally, simulation studies are presented which show that the models may be appropriate for a variety of interesting patterns, including images exhibiting intermediate degrees of spatial continuity and images of objects against background.
16

Herkenrath, Ulrich. "On the uniform ergodicity of Markov processes of order 2." Journal of Applied Probability 40, no. 2 (June 2003): 455–72. http://dx.doi.org/10.1017/s0021900200019422.

Abstract:
We study the uniform ergodicity of Markov processes (Zn, n ≥ 1) of order 2 with a general state space (Z, 𝒵). Markov processes of order higher than 1 were defined in the literature long ago, but scarcely treated in detail. We take as the basis for our considerations the natural transition probability Q of such a process. A Markov process of order 2 is transformed into one of order 1 by combining two consecutive variables Z2n–1 and Z2n into one variable Yn with values in the Cartesian product space (Z × Z, 𝒵 ⊗ 𝒵). Thus, a Markov process (Yn, n ≥ 1) of order 1 with transition probability R is generated. Uniform ergodicity for the process (Zn, n ≥ 1) is defined in terms of the same property for (Yn, n ≥ 1). We give some conditions on the transition probability Q which transfer to R and thus ensure the uniform ergodicity of (Zn, n ≥ 1). We apply the general results to study the uniform ergodicity of Markov processes of order 2 which arise in some nonlinear time series models and as sequences of smoothed values in sequential smoothing procedures of Markovian observations. As for the time series models, Markovian noise sequences are covered.
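
A toy version of the paper's pairing device, with an invented binary order-2 chain: consecutive values are grouped as Yn = (Z2n−1, Z2n), and the empirical transitions of the paired sequence behave like those of an order-1 chain on pairs.

```python
import random
from collections import Counter

# q[(z_prev2, z_prev1)] = probability that the next state is 1 (invented values)
q = {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.8}

def simulate_order2(n_steps, seed=0):
    """Simulate a binary chain whose next state depends on the last two states."""
    rng = random.Random(seed)
    z = [0, 0]
    for _ in range(n_steps - 2):
        z.append(1 if rng.random() < q[(z[-2], z[-1])] else 0)
    return z

z = simulate_order2(10_000)
y = list(zip(z[0::2], z[1::2]))    # Y_n = (Z_{2n-1}, Z_{2n}): the paired chain

# Empirical transition matrix of (Y_n); the paired process is Markov of order 1
counts = Counter(zip(y[:-1], y[1:]))
pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for a in pairs:
    total = sum(counts[(a, b)] for b in pairs)
    print(a, [round(counts[(a, b)] / total, 3) for b in pairs])
```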
17

Lee, Mei-Ling Ting, and G. Alex Whitmore. "Stochastic processes directed by randomized time." Journal of Applied Probability 30, no. 2 (June 1993): 302–14. http://dx.doi.org/10.2307/3214840.

Abstract:
The paper investigates stochastic processes directed by a randomized time process. A new family of directing processes called Hougaard processes is introduced. Monotonicity properties preserved under subordination, and dependence among processes directed by a common randomized time are studied. Results for processes subordinated to Poisson and stable processes are presented. Potential applications to shock models and threshold models are also discussed. Only Markov processes are considered.
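
A standard concrete example of a process directed by randomized time, sketched under the usual variance-gamma parameterization (the Hougaard family studied in the paper contains the gamma subordinator as a special case; all parameter values here are invented): Brownian motion evaluated along an increasing random time process.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, nu = 100_000, 0.01, 0.2    # steps, time step, variance rate of T

# Gamma directing process T: independent increments with mean dt, variance nu*dt
time_incr = rng.gamma(shape=dt / nu, scale=nu, size=n)
# Brownian motion run over the randomized time: X(t) = B(T(t))
x_incr = rng.normal(loc=0.0, scale=np.sqrt(time_incr))

# Subordination keeps the variance rate but fattens the tails of the increments
print("variance per unit time:", np.var(x_incr) / dt)                  # close to 1
print("excess kurtosis:", np.mean(x_incr**4) / np.var(x_incr)**2 - 3)  # > 0
```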
18

Mena, Ramsés H., and Freddy Palma. "Continuous-time Markov processes, orthogonal polynomials and Lancaster probabilities." ESAIM: Probability and Statistics 24 (2020): 100–112. http://dx.doi.org/10.1051/ps/2020004.

Abstract:
This work links the conditional probability structure of Lancaster probabilities to a construction of reversible continuous-time Markov processes. Such a task is achieved by using the spectral expansion of the corresponding transition probabilities in order to introduce a continuous time dependence in the orthogonal representation inherent to Lancaster probabilities. This relationship provides a novel methodology to build continuous-time Markov processes via Lancaster probabilities. Particular cases of well-known models are seen to fall within this approach. As a byproduct, it also unveils new identities associated to well known orthogonal polynomials.
19

Höpfner, Reinhard. "On statistics of Markov step processes: Representation of log-likelihood ratio processes in filtered local models." Probability Theory and Related Fields 94, no. 3 (September 1993): 375–98. http://dx.doi.org/10.1007/bf01199249.

20

Tüfekçi, Tolga, and Refik Güllü. "An iterative approximation scheme for repetitive Markov processes." Journal of Applied Probability 36, no. 3 (September 1999): 654–67. http://dx.doi.org/10.1017/s0021900200017472.

Abstract:
Repetitive Markov processes form a class of processes where the generator matrix has a particular repeating form. Many queueing models fall in this category such as M/M/1 queues, quasi-birth-and-death processes, and processes with M/G/1 or GI/M/1 generator matrices. In this paper, a new iterative scheme is proposed for computing the stationary probabilities of such processes. An infinite state process is approximated by a finite state process by lumping an infinite number of states into a super-state. What we call the feedback rate, the conditional expected rate of flow from the super-state to the remaining states, given the process is in the super-state, is approximated simultaneously with the steady state probabilities. The method is theoretically developed and numerically tested for quasi-birth-and-death processes. It turns out that the new concept of the feedback rate can be effectively used in computing the stationary probabilities.
21

Diehn, Manuel, Axel Munk, and Daniel Rudolf. "Maximum likelihood estimation in hidden Markov models with inhomogeneous noise." ESAIM: Probability and Statistics 23 (2019): 492–523. http://dx.doi.org/10.1051/ps/2018017.

Abstract:
We consider parameter estimation in finite hidden state space Markov models with time-dependent inhomogeneous noise, where the inhomogeneity vanishes sufficiently fast. Based on the concept of asymptotic mean stationary processes we prove that the maximum likelihood and a quasi-maximum likelihood estimator (QMLE) are strongly consistent. The computation of the QMLE ignores the inhomogeneity, hence, is much simpler and robust. The theory is motivated by an example from biophysics and applied to a Poisson- and linear Gaussian model.
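
A sketch of the quasi-likelihood idea for a two-state Gaussian hidden Markov model (all parameter values are invented and the paper's setting is more general): the data are generated with a noise level whose inhomogeneity vanishes over time, while the quasi-likelihood is evaluated by the forward algorithm as if the noise were homogeneous.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
A = np.array([[0.95, 0.05], [0.10, 0.90]])    # transition matrix
mu = np.array([0.0, 2.0])                     # state-dependent means
T = 2_000
sigma_t = 1.0 + 2.0 / np.sqrt(np.arange(1, T + 1))  # vanishing inhomogeneity

states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])
obs = rng.normal(mu[states], sigma_t)

def log_lik(obs, A, mu, sigma):
    """Forward algorithm under a homogeneous noise level sigma (the QMLE view)."""
    alpha = np.full(2, 0.5) * norm.pdf(obs[0], mu, sigma)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ A) * norm.pdf(y, mu, sigma)
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Crude quasi-MLE of sigma by a grid search, ignoring the inhomogeneity
grid = np.linspace(0.5, 2.0, 31)
print("QMLE of sigma:", grid[np.argmax([log_lik(obs, A, mu, s) for s in grid])])
```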
22

Cha, Ji Hwan, and Sophie Mercier. "Transformed Lévy processes as state-dependent wear models." Advances in Applied Probability 51, no. 2 (June 2019): 468–86. http://dx.doi.org/10.1017/apr.2019.21.

Abstract:
Many wear processes used for modeling accumulative deterioration in a reliability context are nonhomogeneous Lévy processes and, hence, have independent increments, which may not be suitable in an application context. In this work we consider Lévy processes transformed by monotone functions to overcome this restriction, and provide a new state-dependent wear model. These transformed Lévy processes are first observed to remain tractable Markov processes. Some distributional properties are derived. We investigate the impact of the current state on the future increment level and on the overall accumulated level from a stochastic monotonicity point of view. We also study positive dependence properties and stochastic monotonicity of increments.
23

Mao, Yong-Hua, and Tao Wang. "Lyapunov-type conditions for non-strong ergodicity of Markov processes." Journal of Applied Probability 58, no. 1 (February 25, 2021): 238–53. http://dx.doi.org/10.1017/jpr.2020.84.

Abstract:
We present Lyapunov-type conditions for non-strong ergodicity of Markov processes. Some concrete models are discussed, including diffusion processes on Riemannian manifolds and Ornstein–Uhlenbeck processes driven by symmetric $\alpha$-stable processes. In particular, we show that any process of d-dimensional Ornstein–Uhlenbeck type driven by $\alpha$-stable noise is not strongly ergodic for every $\alpha\in (0,2]$.
24

House, Thomas. "Lie Algebra Solution of Population Models Based on Time-Inhomogeneous Markov Chains." Journal of Applied Probability 49, no. 2 (June 2012): 472–81. http://dx.doi.org/10.1017/s0021900200009219.

Abstract:
Many natural populations are well modelled through time-inhomogeneous stochastic processes. Such processes have been analysed in the physical sciences using a method based on Lie algebras, but this methodology is not widely used for models with ecological, medical, and social applications. In this paper we present the Lie algebraic method, and apply it to three biologically well-motivated examples. The result of this is a solution form that is often highly computationally advantageous.
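
For comparison with the closed forms the Lie-algebraic method produces, the master equation of a time-inhomogeneous chain can always be integrated numerically. The sketch below does this for an invented immigration-death model with a seasonal immigration rate, truncated at a population cap.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 20                                     # population cap for truncation

def Q(t):
    """Generator with a seasonally varying immigration rate (made-up rates)."""
    lam = 2.0 + np.sin(t)                  # immigration rate
    mu = 1.0                               # per-capita death rate
    G = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            G[i, i + 1] = lam
        if i > 0:
            G[i, i - 1] = mu * i
        G[i, i] = -G[i].sum()
    return G

# Master (Kolmogorov forward) equation p'(t) = p(t) Q(t), row-vector convention
p0 = np.zeros(N + 1)
p0[0] = 1.0                                # start in state 0
sol = solve_ivp(lambda t, p: p @ Q(t), (0.0, 10.0), p0, rtol=1e-8)
p = sol.y[:, -1]
print("mean population at t = 10:", p @ np.arange(N + 1))
```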
25

Latouche, Guy, and V. Ramaswami. "A logarithmic reduction algorithm for quasi-birth-death processes." Journal of Applied Probability 30, no. 3 (September 1993): 650–74. http://dx.doi.org/10.2307/3214773.

Abstract:
Quasi-birth-death processes are commonly used Markov chain models in queueing theory, computer performance, teletraffic modeling and other areas. We provide a new, simple algorithm for the matrix-geometric rate matrix. We demonstrate that it has quadratic convergence. We show theoretically and through numerical examples that it converges very fast and provides extremely accurate results even for almost unstable models.
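
A compact sketch of the logarithmic reduction recursion for a discrete-time quasi-birth-death process, following the standard textbook formulation of the algorithm (the transition blocks below are invented and chosen so that the chain is positive recurrent): it computes the matrix G, from which the matrix-geometric rate matrix R follows.

```python
import numpy as np

def log_reduction(A0, A1, A2, tol=1e-12, max_iter=50):
    """Logarithmic reduction for a QBD with blocks A0 (up), A1 (same), A2 (down)."""
    I = np.eye(A1.shape[0])
    H = np.linalg.solve(I - A1, A0)
    L = np.linalg.solve(I - A1, A2)
    G, T = L.copy(), H.copy()
    for _ in range(max_iter):
        U = H @ L + L @ H
        H = np.linalg.solve(I - U, H @ H)
        L = np.linalg.solve(I - U, L @ L)
        G += T @ L
        T = T @ H
        if np.abs(1.0 - G.sum(axis=1)).max() < tol:  # G stochastic at convergence
            break
    R = A0 @ np.linalg.inv(I - A1 - A0 @ G)          # rate matrix from G
    return G, R

A0 = np.array([[0.1, 0.1], [0.0, 0.2]])   # up one level
A2 = np.array([[0.3, 0.1], [0.2, 0.3]])   # down one level
A1 = np.array([[0.2, 0.2], [0.1, 0.2]])   # same level; rows of A0+A1+A2 sum to 1
G, R = log_reduction(A0, A1, A2)
print(G.sum(axis=1))                       # each row sums to 1
```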
26

Stefanov, Valeri T. "On the occurrence of composite events and clusters of points." Journal of Applied Probability 36, no. 4 (December 1999): 1012–18. http://dx.doi.org/10.1239/jap/1032374751.

Abstract:
We derive explicit closed expressions for the moment generating functions of whole collections of quantities associated with the waiting time till the occurrence of composite events in either discrete or continuous-time models. The discrete-time models are independent, or Markov-dependent, binary trials and the events of interest are collections of successes with the property that each two consecutive successes are separated by no more than a fixed number of failures. The continuous-time models are renewal processes and the relevant events are clusters of points. We provide a unifying technology for treating both the discrete and continuous-time cases. This is based on first embedding the problems into similar ones for suitably selected Markov chains or Markov renewal processes, and second, applying tools from the exponential family technology.
27

Gani, J., and Sid Yakowitz. "Error bounds for deterministic approximations to Markov processes, with applications to epidemic models." Journal of Applied Probability 32, no. 4 (December 1995): 1063–76. http://dx.doi.org/10.2307/3215220.

Abstract:
The computer age and the phenomenological complexity of the AIDS/HIV epidemic have engendered a rich profusion of deterministic and stochastic time series models for the development of an epidemic. The present study examines the reliability of deterministic approximations of fundamentally random processes. Through numerical analysis and probabilistic considerations, we derive absolute and simultaneous confidence interval bounding techniques, and offer a practical procedure based on these developments. A heartening aspect of the computational study presented at the close of this paper indicates that when the population size is in the thousands, the deterministic version of the classical logistic epidemic is a good approximation.
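
A sketch of the kind of comparison the paper quantifies, with invented rates: a Gillespie-style simulation of the stochastic logistic (SIS-type) epidemic next to its deterministic logistic limit.

```python
import numpy as np

rng = np.random.default_rng(3)
N, beta, gamma = 1000, 1.5, 1.0    # population size, infection and recovery rates

def gillespie(i0, t_end):
    """Simulate the stochastic logistic epidemic until t_end, absorption or saturation."""
    t, i, path = 0.0, i0, [(0.0, i0)]
    while t < t_end and 0 < i < N:
        up = beta * i * (N - i) / N          # infection rate
        down = gamma * i                     # recovery rate
        t += rng.exponential(1.0 / (up + down))
        i += 1 if rng.random() < up / (up + down) else -1
        path.append((t, i))
    return path

def deterministic(i0, t):
    """Solution of i'(t) = beta*i*(1 - i/N) - gamma*i, the logistic approximation."""
    r, K = beta - gamma, N * (1 - gamma / beta)  # growth rate, endemic level
    return K / (1 + (K / i0 - 1) * np.exp(-r * t))

path = gillespie(i0=10, t_end=15.0)
t_end, i_end = path[-1]
print("stochastic endpoint:", i_end, " deterministic:", round(deterministic(10, t_end), 1))
```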
28

Last, Günter. "Ergodicity properties of stress release, repairable system and workload models." Advances in Applied Probability 36, no. 2 (June 2004): 471–98. http://dx.doi.org/10.1239/aap/1086957582.

Abstract:
In this paper we derive some of the main ergodicity properties of a class of Markov renewal processes and the associated marked point processes. This class represents a generic model of applied probability and is of importance in earthquake modeling, reliability theory and queueing.
29

Boshuizen, Frans A., and José M. Gouweleeuw. "General optimal stopping theorems for semi-Markov processes." Advances in Applied Probability 25, no. 4 (December 1993): 825–46. http://dx.doi.org/10.2307/1427794.

Abstract:
In this paper, optimal stopping problems for semi-Markov processes are studied in a fairly general setting. In such a process transitions are made from state to state in accordance with a Markov chain, but the amount of time spent in each state is random. The times spent in each state follow a general renewal process. They may depend on the present state as well as on the state into which the next transition is made. Our goal is to maximize the expected net return, which is given as a function of the state at time t minus some cost function. Discounting may or may not be considered. The main theorems (Theorems 3.5 and 3.11) are expressions for the optimal stopping time in the undiscounted and discounted case. These theorems generalize results of Zuckerman [16] and Boshuizen and Gouweleeuw [3]. Applications are given in various special cases. The results developed in this paper can also be applied to semi-Markov shock models, as considered in Taylor [13], Feldman [6] and Zuckerman [15].
30

Jacobsen, Martin. "The Time to Ruin in Some Additive Risk Models with Random Premium Rates." Journal of Applied Probability 49, no. 4 (December 2012): 915–38. http://dx.doi.org/10.1017/s002190020001278x.

Abstract:
The risk processes considered in this paper are generated by an underlying Markov process with a regenerative structure and an independent sequence of independent and identically distributed claims. Between the arrivals of claims the process increases at a rate which is a nonnegative function of the present value of the Markov process. The intensity for a claim to occur is another nonnegative function of the value of the Markov process. The claim arrival times are the regeneration times for the Markov process. Two-sided claims are allowed, but the distribution of the positive claims is assumed to have a Laplace transform that is a rational function. The main results describe the joint Laplace transform of the time at ruin and the deficit at ruin. The method used consists in finding partial eigenfunctions for the generator of the joint process consisting of the Markov process and the accumulated claims process, a joint process which is also Markov. These partial eigenfunctions are then used to find a martingale that directly leads to an expression for the desired Laplace transform. In the final section, three examples are given involving different types of the underlying Markov process.
31

Shcherbakov, Vadim, and Stanislav Volkov. "Boundary effect in competition processes." Journal of Applied Probability 56, no. 3 (September 2019): 750–68. http://dx.doi.org/10.1017/jpr.2019.46.

Abstract:
This paper is devoted to studying the long-term behaviour of a continuous-time Markov chain that can be interpreted as a pair of linear birth processes which evolve with a competitive interaction; as a special case, they include the famous Lotka–Volterra interaction. Another example of our process is related to urn models with ball removal. We show that, with probability one, the process eventually escapes to infinity by sticking to the boundary in a rather unusual way.
32

D'Amico, Guglielmo, Jacques Janssen, and Raimondo Manca. "Monounireducible Nonhomogeneous Continuous Time Semi-Markov Processes Applied to Rating Migration Models." Advances in Decision Sciences 2012 (October 16, 2012): 1–12. http://dx.doi.org/10.1155/2012/123635.

Abstract:
Monounireducible nonhomogeneous semi-Markov processes are defined and investigated. The monounireducible topological structure is a sufficient condition that guarantees the absorption of the semi-Markov process in a state of the process. This situation is of fundamental importance in the modelling of credit rating migrations because it permits the derivation of the distribution function of the time of default. An application in credit rating modelling is given in order to illustrate the results.
33

Hahn, Ute, Eva B. Vedel Jensen, Marie-Colette van Lieshout, and Linda Stougaard Nielsen. "Inhomogeneous spatial point processes by location-dependent scaling." Advances in Applied Probability 35, no. 2 (June 2003): 319–36. http://dx.doi.org/10.1239/aap/1051201648.

Abstract:
A new class of models for inhomogeneous spatial point processes is introduced. These locally scaled point processes are modifications of homogeneous template point processes, having the property that regions with different intensities differ only by a scale factor. This is achieved by replacing volume measures used in the density with locally scaled analogues defined by a location-dependent scaling function. The new approach is particularly appealing for modelling inhomogeneous Markov point processes. Distance-interaction and shot noise weighted Markov point processes are discussed in detail. It is shown that the locally scaled versions are again Markov and that locally the Papangelou conditional intensity of the new process behaves like that of a global scaling of the homogeneous process. Approximations are suggested that simplify calculation of the density, for example, in simulation. For sequential point processes, an alternative and simpler definition of local scaling is proposed.
34

Kersting, G. "On recurrence and transience of growth models." Journal of Applied Probability 23, no. 3 (September 1986): 614–25. http://dx.doi.org/10.2307/3214001.

Abstract:
Let Xn be non-negative random variables possessing the Markov property. We give criteria for deciding whether Pr(Xn → ∞) is positive or 0. It turns out that essentially this depends on the magnitude of E(Xn+1 | Xn = x) compared to that of E(Xn+1² | Xn = x) for large x. The assumptions are chosen such that, for example, population-dependent branching processes can be treated by our results.
35

Ball, Frank, Robin K. Milne, and Geoffrey F. Yeo. "On the exact distribution of observed open times in single ion channel models." Journal of Applied Probability 30, no. 3 (September 1993): 529–37. http://dx.doi.org/10.2307/3214763.

Abstract:
Continuous-time Markov chain models have been widely considered for the gating behaviour of a single ion channel. In such models the state space is usually partitioned into two classes, designated ‘open’ and ‘closed’, and there is ‘aggregation’ in that it is possible to observe only which class the process is in at any given time. Hawkes et al. (1990) have derived an expression for the density function of the exact distribution of an observed open time in such an aggregated Markov model, where brief sojourns in either the open or the closed class are unobservable. This paper extends their result to single ion channel models based on aggregated semi-Markov processes, giving a more direct derivation which is probabilistic and exhibits clearly the combinatorial content.
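
The classical starting point that this line of work refines is the open-time density of an aggregated Markov model without any correction for unobservable sojourns: with the generator partitioned over the open class O and closed class C, the density is f(t) = φ exp(Q_OO t) Q_OC 1. A sketch with an invented three-state generator and an assumed entry distribution φ:

```python
import numpy as np
from scipy.linalg import expm

# Invented 3-state generator: states 0, 1 open, state 2 closed; rows sum to 0
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  1.0, -3.0]])
open_idx, closed_idx = [0, 1], [2]
Q_OO = Q[np.ix_(open_idx, open_idx)]       # within the open class
Q_OC = Q[np.ix_(open_idx, closed_idx)]     # open-to-closed transitions
phi = np.array([0.5, 0.5])                 # assumed entry distribution into O

def open_time_density(t):
    """f(t) = phi @ expm(Q_OO * t) @ Q_OC @ 1: phase-type open sojourn density."""
    return float(phi @ expm(Q_OO * t) @ Q_OC @ np.ones(len(closed_idx)))

print([round(open_time_density(t), 4) for t in (0.1, 0.5, 1.0)])
```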
36

Poskitt, D. S., and Shin-Ho Chung. "Markov chain models, time series analysis and extreme value theory." Advances in Applied Probability 28, no. 2 (June 1996): 405–25. http://dx.doi.org/10.2307/1428065.

Abstract:
Markov chain processes are becoming increasingly popular as a means of modelling various phenomena in different disciplines. For example, a new approach to the investigation of the electrical activity of molecular structures known as ion channels is to analyse raw digitized current recordings using Markov chain models. An outstanding question which arises with the application of such models is how to determine the number of states required for the Markov chain to characterize the observed process. In this paper we derive a realization theorem showing that observations on a finite state Markov chain embedded in continuous noise can be synthesized as values obtained from an autoregressive moving-average data generating mechanism. We then use this realization result to motivate the construction of a procedure for identifying the state dimension of the hidden Markov chain. The identification technique is based on a new approach to the estimation of the order of an autoregressive moving-average process. Conditions for the method to produce strongly consistent estimates of the state dimension are given. The asymptotic distribution of the statistic underlying the identification process is also presented and shown to yield critical values commensurate with the requirements for strong consistency.
37

Fink, Holger, Claudia Klüppelberg, and Martina Zähle. "Conditional Distributions of Processes Related to Fractional Brownian Motion." Journal of Applied Probability 50, no. 1 (March 2013): 166–83. http://dx.doi.org/10.1239/jap/1363784431.

Abstract:
Conditional distributions for affine Markov processes are at the core of present (defaultable) bond pricing. There is, however, evidence that Markov processes may not be realistic models for short rates. Fractional Brownian motion (FBM) can be introduced by an integral representation with respect to standard Brownian motion. Using a simple prediction formula for the conditional expectation of an FBM and its Gaussianity, we derive the conditional distributions of FBM and related processes. We derive conditional distributions for fractional analogies of prominent affine processes, including important examples like fractional Ornstein–Uhlenbeck or fractional Cox–Ingersoll–Ross processes. As an application, we propose a fractional Vasicek bond market model and compare prices of zero-coupon bonds to those achieved in the classical Vasicek model.
38

Klebaner, F. C. "Asymptotic behaviour of Markov population processes by asymptotically linear rate of change." Journal of Applied Probability 31, no. 3 (September 1994): 614–25. http://dx.doi.org/10.2307/3215142.

Abstract:
Multidimensional Markov processes in continuous time with asymptotically linear mean change per unit of time are studied as randomly perturbed linear differential equations. Conditions for exponential and polynomial growth rates with stable type distribution are given. From these conditions results on branching models of populations with stabilizing reproduction for near-supercritical and near-critical cases follow.
39

Møller, Jesper. "Shot noise Cox processes." Advances in Applied Probability 35, no. 3 (September 2003): 614–40. http://dx.doi.org/10.1239/aap/1059486821.

Abstract:
Shot noise Cox processes constitute a large class of Cox and Poisson cluster processes in ℝd, including Neyman-Scott, Poisson-gamma and shot noise G Cox processes. It is demonstrated that, due to the structure of such models, a number of useful and general results can easily be established. The focus is on the probabilistic aspects with a view to statistical applications, particularly results for summary statistics, reduced Palm distributions, simulation with or without edge effects, conditional simulation of the intensity function and local and spatial Markov properties.
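
A simulation sketch of the simplest member of this class, a (modified) Thomas/Neyman-Scott process on the unit square: Poisson cluster centres each contribute a Gaussian shot to the random intensity, and offspring are generated per centre. Parameters are invented and edge effects are ignored.

```python
import numpy as np

rng = np.random.default_rng(4)
kappa, mu, sigma = 20, 10, 0.03    # centre intensity, mean offspring, cluster spread

centres = rng.uniform(0, 1, size=(rng.poisson(kappa), 2))
points = []
for c in centres:
    n_off = rng.poisson(mu)                       # offspring count per centre
    points.append(c + rng.normal(0, sigma, size=(n_off, 2)))
points = np.vstack(points)
points = points[((points >= 0) & (points <= 1)).all(axis=1)]  # clip to unit square
print(len(points), "points in", len(centres), "clusters")
```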
40

Fredes, Luis, and Jean-François Marckert. "Invariant measures of interacting particle systems: Algebraic aspects." ESAIM: Probability and Statistics 24 (2020): 526–80. http://dx.doi.org/10.1051/ps/2020008.

Abstract:
Consider a continuous time particle system ηt = (ηt(k), k ∈ 𝕃), indexed by a lattice 𝕃 which will be either ℤ, ℤ∕nℤ, a segment {1, ⋯ , n}, or ℤd, and taking its values in the set Eκ𝕃 where Eκ = {0, ⋯ , κ − 1} for some fixed κ ∈{∞, 2, 3, ⋯ }. Assume that the Markovian evolution of the particle system (PS) is driven by some translation invariant local dynamics with bounded range, encoded by a jump rate matrix ⊤. These are standard settings, satisfied by the TASEP, the voter models, the contact processes. The aim of this paper is to provide some sufficient and/or necessary conditions on the matrix ⊤ so that this Markov process admits some simple invariant distribution, as a product measure (if 𝕃 is any of the spaces mentioned above), the law of a Markov process indexed by ℤ or [1, n] ∩ ℤ (if 𝕃 = ℤ or {1, …, n}), or a Gibbs measure if 𝕃 = ℤ/nℤ. Multiple applications follow: efficient ways to find invariant Markov laws for a given jump rate matrix or to prove that none exists. The voter models and the contact processes are shown not to possess any Markov laws as invariant distribution (for any memory m). (As usual, a random process X indexed by ℤ or ℕ is said to be a Markov chain with memory m ∈ {0, 1, 2, ⋯ } if ℙ(Xk ∈ A | Xk−i, i ≥ 1) = ℙ(Xk ∈ A | Xk−i, 1 ≤ i ≤ m), for any k.) We also prove that some models close to these models do. We exhibit PS admitting hidden Markov chains as invariant distribution and design many PS on ℤ2, with jump rates indexed by 2 × 2 squares, admitting product invariant measures.
41

Harel, Michel, and Madan L. Puri. "Universally consistent conditional U-statistics for absolutely regular processes and its applications for hidden Markov models." Annals of the Institute of Statistical Mathematics 56, no. 4 (December 2004): 819–32. http://dx.doi.org/10.1007/bf02506491.

42

Stern, Thomas E., and Anwar I. Elwalid. "Analysis of separable Markov-modulated rate models for information-handling systems." Advances in Applied Probability 23, no. 1 (March 1991): 105–39. http://dx.doi.org/10.2307/1427514.

Abstract:
In many communication and computer systems, information arrives to a multiplexer, switch or information processor at a rate which fluctuates randomly, often with a high degree of correlation in time. The information is buffered for service (the server typically being a communication channel or processing unit) and the service rate may also vary randomly. Accurate capture of the statistical properties of these fluctuations is facilitated by modeling the arrival and service rates as superpositions of a number of independent finite state reversible Markov processes. We call such models separable Markov-modulated rate processes (MMRP). In this work a general mathematical model for separable MMRPs is presented, focusing on Markov-modulated continuous flow models. An efficient procedure for analyzing their performance is derived. It is shown that the ‘state explosion’ problem typical of systems composed of a large number of subsystems can be circumvented because of the separability property, which permits a decomposition of the equations for the equilibrium probabilities of these systems. The decomposition technique (generalizing a method proposed by Kosten) leads to a solution of the equilibrium equations expressed as a sum of terms in Kronecker product form. A key consequence of decomposition is that the computational complexity of the problem is vastly reduced for large systems. Examples are presented to illustrate the power of the solution technique.
43

Henderson, W., and P. Taylor. "Insensitivity of processes with interruptions." Journal of Applied Probability 26, no. 2 (June 1989): 242–58. http://dx.doi.org/10.2307/3214032.

Abstract:
The theory of insensitivity within generalised semi-Markov processes is extended to cover classes of models in which the generally distributed lifetimes can be terminated prematurely by the deaths of negative exponentially distributed lifetimes. As a consequence of this approach it is shown that there exist classes of processes which are insensitive with respect to characteristics of the general distributions other than the mean. Two examples are given. The first is an analysis of networks of queues in which the generally distributed service times can be interrupted with resulting changes in routing probabilities. The second is a model for the effect of disturbances on the evolution of a vegetation community.
44

Adler, Mark, Pierre van Moerbeke, and Dong Wang. "Random Matrix Minor Processes Related to Percolation Theory." Random Matrices: Theory and Applications 2, no. 4 (October 2013): 1350008. http://dx.doi.org/10.1142/s2010326313500081.

Abstract:
This paper studies a number of matrix models of size n and the associated Markov chains for the eigenvalues of the models for consecutive n's. They are consecutive principal minors for two of the models, GUE with external source and the multiple Laguerre matrix model, and merely properly defined consecutive matrices for the third one, the Jacobi–Piñeiro model; nevertheless the eigenvalues of the consecutive models all interlace. We show: (i) For each of those finite models, we give the transition probability of the associated Markov chain and the joint distribution of the entire interlacing set of eigenvalues; we show this is a determinantal point process whose extended kernels share many common features. (ii) To each of these models and their set of eigenvalues, we associate a last-passage percolation model, either finite percolation or percolation along an infinite strip of finite width, yielding a precise relationship between the last-passage times and the eigenvalues. (iii) Finally, it is shown that for appropriate choices of exponential distribution on the percolation, with very small means, the rescaled last-passage times lead to the Pearcey process; this should connect the Pearcey statistics with random directed polymers.