Journal articles on the topic 'Controlled Markov chain'

Consult the top 50 journal articles for your research on the topic 'Controlled Markov chain.'

1

Ray, Anandaroop, David L. Alumbaugh, G. Michael Hoversten, and Kerry Key. "Robust and accelerated Bayesian inversion of marine controlled-source electromagnetic data using parallel tempering." GEOPHYSICS 78, no. 6 (2013): E271–E280. http://dx.doi.org/10.1190/geo2013-0128.1.

Abstract:
Bayesian methods can quantify the model uncertainty that is inherent in inversion of highly nonlinear geophysical problems. In this approach, a model likelihood function based on knowledge of the data noise statistics is used to sample the posterior model distribution, which conveys information on the resolvability of the model parameters. Because these distributions are multidimensional and nonlinear, we used Markov chain Monte Carlo methods for highly efficient sampling. Because a single Markov chain can become stuck in a local probability mode, we run various randomized Markov chains indepe
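The mechanism this abstract describes, several Metropolis chains run at different temperatures with occasional state swaps so the cold chain can escape a local probability mode, can be sketched on a toy bimodal target. The target, temperature ladder, and proposal scale below are illustrative assumptions, not the paper's marine CSEM setup:

```python
import math
import random

random.seed(0)

def log_post(x):
    # Toy bimodal log-posterior; a single Metropolis chain tends to get
    # stuck near one of the two modes at x = -2 and x = +2.
    a, b = -(x - 2.0) ** 2, -(x + 2.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def parallel_tempering(n_steps=5000, temps=(1.0, 2.0, 4.0, 8.0)):
    states = [0.0] * len(temps)
    cold = []
    for _ in range(n_steps):
        # Metropolis update within each tempered chain (target^(1/T)).
        for i, t in enumerate(temps):
            prop = states[i] + random.gauss(0.0, 1.0)
            if math.log(random.random()) < (log_post(prop) - log_post(states[i])) / t:
                states[i] = prop
        # Propose a state swap between a random adjacent temperature pair.
        i = random.randrange(len(temps) - 1)
        log_alpha = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                    (log_post(states[i + 1]) - log_post(states[i]))
        if math.log(random.random()) < log_alpha:
            states[i], states[i + 1] = states[i + 1], states[i]
        cold.append(states[0])  # keep samples from the T = 1 chain only
    return cold

samples = parallel_tempering()
print("cold-chain range: %.2f to %.2f" % (min(samples), max(samples)))
```

The hotter chains flatten the posterior so they cross the probability barrier between modes easily; the swap moves then feed both modes back to the cold chain.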
2

Lefebvre, Mario, and Moussa Kounta. "Discrete homing problems." Archives of Control Sciences 23, no. 1 (2013): 5–18. http://dx.doi.org/10.2478/v10170-011-0039-6.

Abstract:
We consider the so-called homing problem for discrete-time Markov chains. The aim is to optimally control the Markov chain until it hits a given boundary. Depending on a parameter in the cost function, the optimizer either wants to maximize or minimize the time spent by the controlled process in the continuation region. Particular problems are considered and solved explicitly. Both the optimal control and the value function are obtained.
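A minimal discrete analogue of a homing problem, steering a random walk so as to minimize the expected time before it hits a boundary, can be solved by value iteration. The chain, the cost, and the probability p below are illustrative assumptions, not the particular models solved in the paper:

```python
# Hypothetical homing setup: a walker on states 0..N, with 0 and N absorbing.
# Each step the controller picks a direction a in {-1, +1}; the walker moves
# that way with probability p, the opposite way with probability 1 - p.
# Cost 1 per step in the continuation region: minimize expected time to absorption.
N, p = 10, 0.7

V = [0.0] * (N + 1)
for _ in range(500):  # value iteration until (numerical) convergence
    V = [0.0 if s in (0, N) else
         1.0 + min(p * V[s + a] + (1 - p) * V[s - a] for a in (-1, 1))
         for s in range(N + 1)]

# Optimal direction at each interior state: head for the nearest boundary.
policy = {s: min((-1, 1), key=lambda a: p * V[s + a] + (1 - p) * V[s - a])
          for s in range(1, N)}
print([round(v, 2) for v in V])
print(policy)
```

The maximizing variant of the homing problem replaces `min` with `max` over actions (with a suitably bounded cost so the value stays finite).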
3

Andini, Enggartya, Sudarno Sudarno, and Rita Rahmawati. "PENERAPAN METODE PENGENDALIAN KUALITAS MEWMA BERDASARKAN ARL DENGAN PENDEKATAN RANTAI MARKOV (Studi Kasus: Batik Semarang 16, Meteseh)" [Application of the MEWMA Quality Control Method Based on ARL with a Markov Chain Approach (Case Study: Batik Semarang 16, Meteseh)]. Jurnal Gaussian 10, no. 1 (2021): 125–35. http://dx.doi.org/10.14710/j.gauss.v10i1.30939.

Abstract:
An industrial company requires quality control to maintain quality consistency from the production results so that it is able to compete with other companies in the world market. In the industrial sector, most processes are influenced by more than one quality characteristic. One tool that can be used to control more than one quality characteristic is the Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. The graph is used to determine whether the process has been controlled or not, if the process is not yet controlled, the next analysis that can be used is to use the Ave
4

CAI, KAI-YUAN, TSONG YUEH CHEN, YONG-CHAO LI, YUEN TAK YU, and LEI ZHAO. "ON THE ONLINE PARAMETER ESTIMATION PROBLEM IN ADAPTIVE SOFTWARE TESTING." International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.

Abstract:
Software cybernetics is an emerging area that explores the interplay between software and control. The controlled Markov chain (CMC) approach to software testing supports the idea of software cybernetics by treating software testing as a control problem, where the software under test serves as a controlled object modeled by a controlled Markov chain and the software testing strategy serves as the corresponding controller. The software under test and the corresponding software testing strategy form a closed-loop feedback control system. The theory of controlled Markov chains is used to design a
5

Li, Jinzhi, and Shixia Ma. "Pricing Options with Credit Risk in Markovian Regime-Switching Markets." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/621371.

Abstract:
This paper investigates the valuation of European option with credit risk in a reduced form model when the stock price is driven by the so-called Markov-modulated jump-diffusion process, in which the arrival rate of rare events and the volatility rate of stock are controlled by a continuous-time Markov chain. We also assume that the interest rate and the default intensity follow the Vasicek models whose parameters are governed by the same Markov chain. We study the pricing of European option and present numerical illustrations.
6

Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 2 (1985): 408–23. http://dx.doi.org/10.2307/1427148.

Abstract:
In this paper we study a multi-channel queueing model of type with N waiting places and a non-recurrent input flow dependent on queue length at the time of each arrival. The queue length is treated as a basic process. We first determine explicitly the limit distribution of the embedded Markov chain. Then, by introducing an auxiliary Markov process, we find a simple relationship between the limiting distribution of the Markov chain and the limiting distribution of the original process with continuous time parameter. Here we simultaneously combine two methods: solving the corresponding Kolmogoro
7

Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 02 (1985): 408–23. http://dx.doi.org/10.1017/s0001867800015044.

Abstract:
In this paper we study a multi-channel queueing model of type with N waiting places and a non-recurrent input flow dependent on queue length at the time of each arrival. The queue length is treated as a basic process. We first determine explicitly the limit distribution of the embedded Markov chain. Then, by introducing an auxiliary Markov process, we find a simple relationship between the limiting distribution of the Markov chain and the limiting distribution of the original process with continuous time parameter. Here we simultaneously combine two methods: solving the corresponding Kolmogoro
8

Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 2 (1987): 457–65. http://dx.doi.org/10.2307/3214269.

Abstract:
The long-run average cost per unit time of operating a finite dam controlled by a policy (Lam Yeh (1985)) is determined when the cumulative input process is the integral of a Markov chain. A penalty cost which accrues continuously at a rate g(X(t)), where g is a bounded measurable function of the content, is also introduced. An example where the input rate is a two-state Markov chain is considered in detail to illustrate the computations.
9

Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 02 (1987): 457–65. http://dx.doi.org/10.1017/s0021900200031090.

Abstract:
The long-run average cost per unit time of operating a finite dam controlled by a policy (Lam Yeh (1985)) is determined when the cumulative input process is the integral of a Markov chain. A penalty cost which accrues continuously at a rate g(X(t)), where g is a bounded measurable function of the content, is also introduced. An example where the input rate is a two-state Markov chain is considered in detail to illustrate the computations.
10

Fort, Gersende. "Central limit theorems for stochastic approximation with controlled Markov chain dynamics." ESAIM: Probability and Statistics 19 (2015): 60–80. http://dx.doi.org/10.1051/ps/2014013.

11

Zaremba, Piotr. "The stopped distributions of a controlled Markov chain with discrete time." Systems & Control Letters 6, no. 4 (1985): 277–85. http://dx.doi.org/10.1016/0167-6911(85)90080-5.

12

Hooghiemstra, G., and M. Keane. "Calculation of the equilibrium distribution for a solar energy storage model." Journal of Applied Probability 22, no. 4 (1985): 852–64. http://dx.doi.org/10.2307/3213953.

Abstract:
The study of simple solar energy storage models leads to the question of analyzing the equilibrium distribution of Markov chains (Harris chains), for which the state at epoch (n + 1) (i.e. the temperature of the storage tank) depends on the state at epoch n and on a controlled input, acceptance of which entails a further decrease of the temperature level. Here we study the model where the input is exponentially distributed. For all values of the parameters involved an explicit expression for the equilibrium distribution of the Markov chain is derived, and from this we calculate, as one of the
13

Hooghiemstra, G., and M. Keane. "Calculation of the equilibrium distribution for a solar energy storage model." Journal of Applied Probability 22, no. 04 (1985): 852–64. http://dx.doi.org/10.1017/s0021900200108095.

Abstract:
The study of simple solar energy storage models leads to the question of analyzing the equilibrium distribution of Markov chains (Harris chains), for which the state at epoch (n + 1) (i.e. the temperature of the storage tank) depends on the state at epoch n and on a controlled input, acceptance of which entails a further decrease of the temperature level. Here we study the model where the input is exponentially distributed. For all values of the parameters involved an explicit expression for the equilibrium distribution of the Markov chain is derived, and from this we calculate, as one of the
14

Chinyuchin, Yu M., and A. S. Solov'ev. "Application of Markov processes for analysis and control of aircraft maintainability." Civil Aviation High Technologies 23, no. 1 (2020): 71–83. http://dx.doi.org/10.26467/2079-0619-2020-23-1-71-83.

Abstract:
The process of aircraft operation involves constant effects of various factors on its components leading to accidental or systematic changes in their technical condition. Markov processes are a particular case of stochastic processes, which take place during aeronautical equipment operation. The relationship of the reliability characteristics with the cost recovery of the objects allows us to apply the analytic apparatus of Markov processes for the analysis and optimization of maintainability factors. The article describes two methods of the analysis and control of object maintainability based
15

Tanikawa, Akio. "Martingale limit theorem and its application to an ergodic controlled Markov chain." Systems & Control Letters 26, no. 4 (1995): 261–66. http://dx.doi.org/10.1016/0167-6911(95)00020-a.

16

Song, Qingshuo, and Gang George Yin. "Convergence rates of Markov chain approximation methods for controlled diffusions with stopping." Journal of Systems Science and Complexity 23, no. 3 (2010): 600–621. http://dx.doi.org/10.1007/s11424-010-0148-5.

17

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (1987): 644–56. http://dx.doi.org/10.2307/3214096.

Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary poli
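The basic uniformization step that the paper generalizes, replacing a continuous-time chain with generator Q by the discrete-time chain P = I + Q/Λ for any Λ ≥ max |q_ii|, can be checked on a small example. The 3-state generator below is an arbitrary illustration:

```python
# Hypothetical 3-state CTMC generator (rows sum to zero).
Q = [[-3.0, 2.0, 1.0],
     [1.0, -4.0, 3.0],
     [2.0, 2.0, -4.0]]
n = len(Q)

Lam = max(-Q[i][i] for i in range(n))  # uniformization rate
# Discrete-time transition matrix P = I + Q / Lam.
P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)] for i in range(n)]

# Power-iterate to the DTMC's stationary distribution.
pi = [1.0 / n] * n
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# pi * Q ~ 0: the same vector is stationary for the original CTMC.
residual = [sum(pi[i] * Q[i][j] for i in range(n)) for j in range(n)]
print([round(x, 4) for x in pi], [round(x, 10) for x in residual])
```

Since pi P = pi implies pi Q / Λ = 0, the stationary behaviour is preserved; the anomalies the paper analyzes arise only once randomized policies make actions change at the virtual jumps.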
18

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 03 (1987): 644–56. http://dx.doi.org/10.1017/s0021900200031375.

Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary pol
19

Jin, Zhuo, Rebecca Stockbridge, and George Yin. "Some Recent Progress on Numerical Methods for Controlled Regime-Switching Models with Applications to Insurance and Risk Management." Computational Methods in Applied Mathematics 15, no. 3 (2015): 331–51. http://dx.doi.org/10.1515/cmam-2015-0015.

Abstract:
This paper provides a survey on several numerical approximation schemes for stochastic control problems that arise from actuarial science and finance. The problems to be considered include dividend optimization, reinsurance game, and quantile hedging for guaranteed minimum death benefits. To better describe the complicated financial markets and their inherent uncertainty and randomness, the so-called regime-switching models are adopted. Such models are more realistic and versatile, however, far more complicated to handle. Due to the complexity of the construction, the regime-switching
20

González, M., R. Martínez, and M. Mota. "On the geometric growth in a class of homogeneous multitype Markov chain." Journal of Applied Probability 42, no. 4 (2005): 1015–30. http://dx.doi.org/10.1239/jap/1134587813.

Abstract:
In this paper, we investigate the geometric growth of homogeneous multitype Markov chains whose states have nonnegative integer coordinates. Such models are considered in a situation similar to the supercritical case for branching processes. Finally, our general theoretical results are applied to a class of controlled multitype branching process in which the control is random.
21

González, M., R. Martínez, and M. Mota. "On the geometric growth in a class of homogeneous multitype Markov chain." Journal of Applied Probability 42, no. 04 (2005): 1015–30. http://dx.doi.org/10.1017/s0021900200001078.

Abstract:
In this paper, we investigate the geometric growth of homogeneous multitype Markov chains whose states have nonnegative integer coordinates. Such models are considered in a situation similar to the supercritical case for branching processes. Finally, our general theoretical results are applied to a class of controlled multitype branching process in which the control is random.
22

Fernandez-Gaucherand, E., A. Arapostathis, and S. I. Marcus. "Analysis of an adaptive control scheme for a partially observed controlled Markov chain." IEEE Transactions on Automatic Control 38, no. 6 (1993): 987–93. http://dx.doi.org/10.1109/9.222316.

23

Shervashidze, T. "Local Limit Theorems for Conditionally Independent Random Variables Controlled by a Finite Markov Chain." Theory of Probability & Its Applications 44, no. 1 (2000): 131–35. http://dx.doi.org/10.1137/s0040585x97977446.

24

Schouten, Rianne M., Marcos L. P. Bueno, Wouter Duivesteijn, and Mykola Pechenizkiy. "Mining sequences with exceptional transition behaviour of varying order using quality measures based on information-theoretic scoring functions." Data Mining and Knowledge Discovery 36, no. 1 (2021): 379–413. http://dx.doi.org/10.1007/s10618-021-00808-x.

Abstract:
Discrete Markov chains are frequently used to analyse transition behaviour in sequential data. Here, the transition probabilities can be estimated using varying order Markov chains, where order k specifies the length of the sequence history that is used to model these probabilities. Generally, such a model is fitted to the entire dataset, but in practice it is likely that some heterogeneity in the data exists and that some sequences would be better modelled with alternative parameter values, or with a Markov chain of a different order. We use the framework of Exceptional Model Mining (
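Fitting an order-k chain of this kind starts from maximum-likelihood transition estimates: counts of each symbol following every length-k history. The helper and toy sequence below are illustrative, not the paper's Exceptional Model Mining machinery:

```python
from collections import Counter, defaultdict

def fit_markov(seq, k):
    """Maximum-likelihood transition estimates for an order-k Markov chain."""
    counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        counts[tuple(seq[i:i + k])][seq[i + k]] += 1  # history -> next symbol
    return {h: {s: n / sum(c.values()) for s, n in c.items()}
            for h, c in counts.items()}

# Order 1: 'a' is always followed by 'b'; 'b' is followed by 'a' 3 times out of 4.
probs = fit_markov("ababababcb", k=1)
print(probs)
```

Refitting with larger k lengthens the histories (and shrinks the per-history sample size), which is exactly the order-selection trade-off the paper exploits when hunting for exceptional subsequences.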
25

Ghazi, Shahid, and Nigel P. Mountney. "Application of Markov chain analysis to a fining-upward fluvial succession of the Early Permian Warchha Sandstone, Salt Range, Pakistan." Journal of Nepal Geological Society 40 (December 1, 2010): 21–30. http://dx.doi.org/10.3126/jngs.v40i0.23593.

Abstract:
Markov chain analysis is applied to the cyclic properties and degree of ordering of lithofacies in the Early Permian (Artinskian) Warchha Sandstone. The 30 to 155 m-thick Warchha Sandstone is well exposed in the Salt Range, Pakistan and dominantly composed of a sandstone, siltstone and claystone succession. Seven lithofacies have been identified on the basis of geometry, gross lithology and sedimentary structures. Lithofacies are cyclically arranged in a fining-upward pattern. A complete cycle starts with pebbly sandstone accompanied by a thin layer of basal conglomerate and terminates with clays
26

SOMARAJU, RAM, MAZYAR MIRRAHIMI, and PIERRE ROUCHON. "APPROXIMATE STABILIZATION OF AN INFINITE DIMENSIONAL QUANTUM STOCHASTIC SYSTEM." Reviews in Mathematical Physics 25, no. 01 (2013): 1350001. http://dx.doi.org/10.1142/s0129055x13500013.

Abstract:
We study the state feedback stabilization of a quantum harmonic oscillator near a pre-specified Fock state (photon number state). Such a state feedback controller has been recently implemented on a quantized electromagnetic field in an almost lossless cavity. Such open quantum systems are governed by a controlled discrete-time Markov chain in the unit ball of an infinite dimensional Hilbert space. The control design is based on an unbounded Lyapunov function that is minimized at each time-step by feedback. This ensures (weak-*) convergence of probability measures to a final measure concentrate
27

Trainor-Guitton, Whitney, and G. Michael Hoversten. "Stochastic inversion for electromagnetic geophysics: Practical challenges and improving convergence efficiency." GEOPHYSICS 76, no. 6 (2011): F373–F386. http://dx.doi.org/10.1190/geo2010-0223.1.

Abstract:
Traditional deterministic geophysical inversion algorithms are not designed to provide a robust evaluation of uncertainty that reflects the limitations of the geophysical technique. Stochastic inversions, which do provide a sampling-based measure of uncertainty, are computationally expensive and not straightforward to implement for nonexperts (nonstatisticians). Our results include stochastic inversion for magnetotelluric and controlled source electromagnetic data. Two Markov chain sampling algorithms (Metropolis-Hastings and Slice Sampler) can significantly decrease the computational expense
28

Attia, F. A. "Resolvent operators of Markov processes and their applications in the control of a finite dam." Journal of Applied Probability 26, no. 2 (1989): 314–24. http://dx.doi.org/10.2307/3214038.

Abstract:
The resolvent operators of the following two processes are obtained: (a) the bivariate Markov process W = (X, Y), where Y(t) is an irreducible Markov chain and X(t) is its integral, and (b) the geometric Wiener process G(t) = exp{B(t)}, where B(t) is a Wiener process with non-negative drift μ and variance parameter σ². These results are then used via a limiting procedure to determine the long-run average cost per unit time of operating a finite dam where the input process is either X(t) or G(t). The system is controlled by a policy (Attia [1], Lam [6]).
29

Attia, F. A. "Resolvent operators of Markov processes and their applications in the control of a finite dam." Journal of Applied Probability 26, no. 02 (1989): 314–24. http://dx.doi.org/10.1017/s0021900200027315.

Abstract:
The resolvent operators of the following two processes are obtained: (a) the bivariate Markov process W = (X, Y), where Y(t) is an irreducible Markov chain and X(t) is its integral, and (b) the geometric Wiener process G(t) = exp{B(t)}, where B(t) is a Wiener process with non-negative drift μ and variance parameter σ². These results are then used via a limiting procedure to determine the long-run average cost per unit time of operating a finite dam where the input process is either X(t) or G(t). The system is controlled by a policy (Attia [1], Lam [6]).
30

Jin, Zhuo, Ming Qiu, Ky Q. Tran, and George Yin. "A survey of numerical solutions for stochastic control problems: Some recent progress." Numerical Algebra, Control & Optimization 12, no. 2 (2022): 213. http://dx.doi.org/10.3934/naco.2022004.

Abstract:
This paper presents a survey on some of the recent progress on numerical solutions for controlled switching diffusions. We begin by recalling the basics of switching diffusions and controlled switching diffusions. We then present regular controls and singular controls. The main objective of this paper is to provide a survey on some recent advances on Markov chain approximation methods for solving stochastic control problems numerically. A number of applications in insurance, mathematical biology, epidemiology, and economics are presented. Several numerical ex
31

Radaideh, Ashraf, Umesh Vaidya, and Venkataramana Ajjarapu. "Sequential Set-Point Control for Heterogeneous Thermostatically Controlled Loads Through an Extended Markov Chain Abstraction." IEEE Transactions on Smart Grid 10, no. 1 (2019): 116–27. http://dx.doi.org/10.1109/tsg.2017.2732949.

32

Chong, Siang Yew, Peter Tiňo, Jun He, and Xin Yao. "A New Framework for Analysis of Coevolutionary Systems—Directed Graph Representation and Random Walks." Evolutionary Computation 27, no. 2 (2019): 195–228. http://dx.doi.org/10.1162/evco_a_00218.

Abstract:
Studying coevolutionary systems in the context of simplified models (i.e., games with pairwise interactions between coevolving solutions modeled as self plays) remains an open challenge since the rich underlying structures associated with pairwise-comparison-based fitness measures are often not taken fully into account. Although cyclic dynamics have been demonstrated in several contexts (such as intransitivity in coevolutionary problems), there is no complete characterization of cycle structures and their effects on coevolutionary search. We develop a new framework to address this issue. At th
33

Finke, Axel, Arnaud Doucet, and Adam M. Johansen. "Limit theorems for sequential MCMC methods." Advances in Applied Probability 52, no. 2 (2020): 377–403. http://dx.doi.org/10.1017/apr.2020.9.

Abstract:
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as
34

Tollar, Eric S. "On the limit behavior of a multicompartment storage model with an underlying Markov chain." Advances in Applied Probability 20, no. 1 (1988): 208–27. http://dx.doi.org/10.2307/1427276.

Abstract:
The present paper considers a multicompartment storage model with one-way flow. The inputs and outputs for each compartment are controlled by a denumerable-state Markov chain. Assuming finite first and second moments, it is shown that the amounts of material in certain compartments converge in distribution while for others they diverge, based on appropriate first-moment conditions on the inputs and outputs. It is also shown that the diverging compartments under suitable normalization converge to functionals of Brownian motion, independent of those compartments which converge without normalizat
35

Tollar, Eric S. "On the limit behavior of a multicompartment storage model with an underlying Markov chain." Advances in Applied Probability 20, no. 01 (1988): 208–27. http://dx.doi.org/10.1017/s0001867800018000.

Abstract:
The present paper considers a multicompartment storage model with one-way flow. The inputs and outputs for each compartment are controlled by a denumerable-state Markov chain. Assuming finite first and second moments, it is shown that the amounts of material in certain compartments converge in distribution while for others they diverge, based on appropriate first-moment conditions on the inputs and outputs. It is also shown that the diverging compartments under suitable normalization converge to functionals of Brownian motion, independent of those compartments which converge without normalizat
36

Lefebvre, Mario. "Optimal control of jump-diffusion processes with random parameters." Buletinul Academiei de Ştiinţe a Republicii Moldova. Matematica, no. 3(100) (June 2023): 22–29. http://dx.doi.org/10.56415/basm.y2022.i3.p22.

Abstract:
Let X(t) be a controlled jump-diffusion process starting at x ∈ [a, b] and whose infinitesimal parameters vary according to a continuous-time Markov chain. The aim is to minimize the expected value of a cost function with quadratic control costs until X(t) leaves the interval (a, b), and a termination cost that depends on the final value of X(t). Exact and explicit solutions are obtained for important processes.
37

Mezhennaya, N. M. "On the limit distribution of a number of runs in polynomial sequence controlled by Markov chain." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki 26, no. 3 (2016): 324–35. http://dx.doi.org/10.20537/vm160303.

38

Yu, Li Ming, Shou Qiang Wei, Tian Tian Xing, and Hong Liang Liu. "Reliability Analysis of Hybrid Actuation Based on GSPN." Advanced Materials Research 430-432 (January 2012): 1914–17. http://dx.doi.org/10.4028/www.scientific.net/amr.430-432.1914.

Abstract:
Generalized stochastic Petri nets are adopted to develop the reliability models of two operating modes of the hybrid actuation system, which is composed of a SHA (Servo valve controlled Hydraulic Actuator), an EHA (Electro-Hydrostatic Actuator) and an EBHA (Electrical Back-up Hydrostatic Actuator). The dependability of the hybrid actuation is obtained through the Markov chain to which the Petri net state space is isomorphic, together with Monte Carlo simulation. Simulations are conducted to analyze influences of the operating mode and the fault coverage on system reliability of the hybrid actuation system.
39

Hordijk, Arie, and Flos Spieksma. "Constrained admission control to a queueing system." Advances in Applied Probability 21, no. 2 (1989): 409–31. http://dx.doi.org/10.2307/1427167.

Abstract:
We consider an exponential queue with arrival and service rates depending on the number of jobs present in the queue. The queueing system is controlled by restricting arrivals. Typically, a good policy should provide a proper balance between throughput and congestion. A mathematical model for computing such a policy is a Markov decision chain with rewards and a constrained cost function. We give general conditions on the reward and cost function which guarantee the existence of an optimal threshold or thinning policy. An efficient algorithm for computing an optimal policy is constructed.
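The threshold structure described here can be observed numerically in a simplified discounted variant: uniformized value iteration for an M/M/1-type queue with an admission reward R and linear holding cost c. All rates, costs, and the discount factor below are illustrative assumptions:

```python
# Hypothetical parameters: arrival/service rates, admission reward, holding cost, discount.
lam, mu, R, c, beta = 0.5, 1.0, 5.0, 1.0, 0.95
N = 30              # buffer cap for the numerical scheme
tot = lam + mu      # uniformization rate

V = [0.0] * (N + 1)
for _ in range(2000):  # discounted value iteration
    W = [0.0] * (N + 1)
    for n in range(N + 1):
        admit = R + V[n + 1] if n < N else V[n]   # buffer full: forced rejection
        up = max(admit, V[n])                     # admit only when it pays
        down = V[max(n - 1, 0)]
        W[n] = -c * n + beta * (lam * up + mu * down) / tot
    V = W

# Accept while the queue is short, reject beyond a threshold.
policy = [n < N and R + V[n + 1] > V[n] for n in range(N + 1)]
threshold = policy.index(False)
print("admit while queue length <", threshold)
```

Because the value function is convex in the queue length for this model, the marginal cost of one more job grows with congestion, so the accept/reject rule comes out as a single threshold, matching the structure established in the paper for the constrained average-reward setting.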
40

Hordijk, Arie, and Flos Spieksma. "Constrained admission control to a queueing system." Advances in Applied Probability 21, no. 02 (1989): 409–31. http://dx.doi.org/10.1017/s0001867800018619.

Abstract:
We consider an exponential queue with arrival and service rates depending on the number of jobs present in the queue. The queueing system is controlled by restricting arrivals. Typically, a good policy should provide a proper balance between throughput and congestion. A mathematical model for computing such a policy is a Markov decision chain with rewards and a constrained cost function. We give general conditions on the reward and cost function which guarantee the existence of an optimal threshold or thinning policy. An efficient algorithm for computing an optimal policy is constructed.
41

Miyazawa, Masakiyo, and Hiroyuki Takada. "A matrix exponential form for hitting probabilities and its application to a Markov-modulated fluid queue with downward jumps." Journal of Applied Probability 39, no. 3 (2002): 604–18. http://dx.doi.org/10.1239/jap/1034082131.

Abstract:
We consider a fluid queue with downward jumps, where the fluid flow rate and the downward jumps are controlled by a background Markov chain with a finite state space. We show that the stationary distribution of a buffer content has a matrix exponential form, and identify the exponent matrix. We derive these results using time-reversed arguments and the background state distribution at the hitting time concerning the corresponding fluid flow with upward jumps. This distribution was recently studied for a fluid queue with upward jumps under a stability condition. We give an alternative proof for
42

Miyazawa, Masakiyo, and Hiroyuki Takada. "A matrix exponential form for hitting probabilities and its application to a Markov-modulated fluid queue with downward jumps." Journal of Applied Probability 39, no. 03 (2002): 604–18. http://dx.doi.org/10.1017/s0021900200021835.

Abstract:
We consider a fluid queue with downward jumps, where the fluid flow rate and the downward jumps are controlled by a background Markov chain with a finite state space. We show that the stationary distribution of a buffer content has a matrix exponential form, and identify the exponent matrix. We derive these results using time-reversed arguments and the background state distribution at the hitting time concerning the corresponding fluid flow with upward jumps. This distribution was recently studied for a fluid queue with upward jumps under a stability condition. We give an alternative proof for
APA, Harvard, Vancouver, ISO, and other styles
43

Phan, Kevin, Declan Lloyd, Ash Wilson-Smith, Vannessa Leung, and Marko Andric. "Intraocular bleeding in patients managed with novel oral anticoagulation and traditional anticoagulation: a network meta-analysis and systematic review." British Journal of Ophthalmology 103, no. 5 (2018): 641–47. http://dx.doi.org/10.1136/bjophthalmol-2018-312198.

Full text
Abstract:
Background/aim: To clarify the nature of the relationship between novel oral anticoagulants (NOACs) and traditional anticoagulation in respect to intraocular bleeding. Methods: A comprehensive literature search up to October 2017 yielded 12 randomised controlled trials. Bayesian Markov chain Monte Carlo analysis was employed to investigate the relationship across multiple trials with varying NOACs. Random effects (informative priors) ORs were applied for the risk of intraocular bleeding due to various treatment measures. Mantel-Haenszel pairwise analyses were also performed. A total of 102 617 part
APA, Harvard, Vancouver, ISO, and other styles
44

Keery, John, Andrew Binley, Ahmed Elshenawy, and Jeremy Clifford. "Markov-chain Monte Carlo estimation of distributed Debye relaxations in spectral induced polarization." GEOPHYSICS 77, no. 2 (2012): E159–E170. http://dx.doi.org/10.1190/geo2011-0244.1.

Full text
Abstract:
There is growing interest in the link between electrical polarization and physical properties of geologic porous media. In particular, spectral characteristics may be controlled by the same pore geometric properties that influence fluid permeability of such media. Various models have been proposed to describe the spectral-induced-polarization (SIP) response of permeable rocks, and the links between these models and hydraulic properties have been explored, albeit empirically. Computation of the uncertainties in the parameters of such electrical models is essential for effective use of these rel
APA, Harvard, Vancouver, ISO, and other styles
45

Robini, Marc C., Yoram Bresler, and Isabelle E. Magnin. "ON THE CONVERGENCE OF METROPOLIS-TYPE RELAXATION AND ANNEALING WITH CONSTRAINTS." Probability in the Engineering and Informational Sciences 16, no. 4 (2002): 427–52. http://dx.doi.org/10.1017/s0269964802164035.

Full text
Abstract:
We discuss the asymptotic behavior of time-inhomogeneous Metropolis chains for solving constrained sampling and optimization problems. In addition to the usual inverse temperature schedule (β_n)_{n∈ℕ*}, the type of Markov processes under consideration is controlled by a divergent sequence (θ_n)_{n∈ℕ*} of parameters acting as Lagrange multipliers. The associated transition probability matrices (P_{β_n,θ_n})_{n∈ℕ*} are defined by P_{β,θ}(x, y) = q(x, y) exp(−β(W_θ(y) − W_θ(x))⁺) for all pairs (x, y) of distinct elements of a finite set Ω, where q is an irreducible and reversible Markov kernel and
APA, Harvard, Vancouver, ISO, and other styles
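The constrained annealing scheme this abstract describes, a Metropolis kernel driven jointly by a cooling schedule β_n and a divergent multiplier schedule θ_n acting on W_θ = U + θV, can be sketched on a toy problem. The objective U, constraint penalty V, and both schedules below are illustrative choices only; the paper characterises admissible schedules far more carefully:

```python
import math
import random

random.seed(0)

# Toy constrained annealing in the spirit of the kernel
# P_{beta,theta}(x, y) = q(x, y) * exp(-beta * (W_theta(y) - W_theta(x))^+),
# with W_theta = U + theta * V.  U, V, and the schedules are invented.
def U(x):          # objective: quadratic with unconstrained minimum at 7
    return (x - 7) ** 2

def V(x):          # constraint penalty: we want x to be even
    return x % 2

def metropolis_anneal(steps=20_000, states=range(32)):
    x = random.choice(list(states))
    lo, hi = min(states), max(states)
    for n in range(1, steps + 1):
        beta = 0.5 * math.log(1 + n)   # slow (logarithmic) cooling
        theta = math.sqrt(n)           # divergent multiplier schedule
        y = min(hi, max(lo, x + random.choice((-1, 1))))  # proposal kernel q
        dW = (U(y) + theta * V(y)) - (U(x) + theta * V(x))
        if dW <= 0 or random.random() < math.exp(-beta * dW):
            x = y
    return x

x_final = metropolis_anneal()
```

Late in the run the growing θ_n makes infeasible (odd) states effectively unreachable, so the chain settles on a feasible state; how close it lands to the constrained optimum depends on how slowly both schedules are allowed to grow.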
46

Sushchenko, S. P., P. V. Pristupa, P. A. Mikheev, and V. V. Poddubny. "Evaluation of the efficiency of forward error correction of transport protocol data blocks." Proceedings of Tomsk State University of Control Systems and Radioelectronics 23, no. 4 (2020): 35–39. http://dx.doi.org/10.21293/1818-0442-2020-23-4-35-39.

Full text
Abstract:
A model of a transport connection controlled by a transport protocol with the technology of forward error correction in the selective failure mode in the form of a discrete-time Markov chain is proposed. The model takes into account the influence of the protocol parameters, the level of errors in the communication channels, the round-trip delay and the technological parameters of forward error correction on the throughput of the transport connection. The analysis of the dependence of the advantages of the transport protocol with forward error correction over the classical transport protocol is
APA, Harvard, Vancouver, ISO, and other styles
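A back-of-envelope version of the forward-error-correction comparison (independent packet losses, an ideal (n, k) erasure code, no retransmission; this is far simpler than the discrete-time Markov chain model the abstract describes, and the loss rate and code parameters are invented):

```python
from math import comb

def fec_block_success(n, k, p_loss):
    """P(at least k of n packets survive i.i.d. loss): an (n, k) erasure
    code decodes whenever any k of its n packets arrive."""
    p = 1 - p_loss
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def goodput(n, k, p_loss):
    """Useful-data fraction delivered without retransmission: k data packets
    carried by n transmitted packets, counted only when the block decodes."""
    return (k / n) * fec_block_success(n, k, p_loss)

# With 10% loss, adding 2 repair packets to 8 data packets recovers most
# blocks at a 25% transmission overhead:
g_fec   = goodput(10, 8, 0.10)
g_plain = goodput(8, 8, 0.10)   # no redundancy: all 8 packets must arrive
```

Under these (invented) numbers the redundant block delivers substantially more useful data per transmission than the unprotected one, which is the trade-off the transport-connection model quantifies against round-trip delay and protocol parameters.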
47

Mezhennaya, N. M. "ESTIMATOR FOR THE DISTRIBUTION OF THE NUMBERS OF RUNS IN A RANDOM SEQUENCE CONTROLLED BY STATIONARY MARKOV CHAIN." Prikladnaya diskretnaya matematika, no. 35 (March 1, 2017): 14–28. http://dx.doi.org/10.17223/20710410/35/2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Wenzhao. "Discrete-Time Constrained Average Stochastic Games with Independent State Processes." Mathematics 7, no. 11 (2019): 1089. http://dx.doi.org/10.3390/math7111089.

Full text
Abstract:
In this paper, we consider the discrete-time constrained average stochastic games with independent state processes. The state space of each player is denumerable and one-stage cost functions can be unbounded. In these game models, each player chooses an action each time which influences the transition probability of a Markov chain controlled only by this player. Moreover, each player needs to pay some costs which depend on the actions of all the players. First, we give an existence condition of stationary constrained Nash equilibria based on the technique of average occupation measures and the
APA, Harvard, Vancouver, ISO, and other styles
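The average-occupation-measure technique mentioned in the abstract reduces, in the simplest single-agent case, to a linear program over occupation measures ρ(s, a). The two-state MDP below (transition kernel, costs, constraint bound) is invented purely to show the shape of that LP, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny constrained average-cost MDP over occupation measures rho(s, a) >= 0:
# minimise expected stage cost subject to a bound on an expected constraint
# cost.  P, c, d, and D are illustrative inventions.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P[s, a, s']
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 0.5], [2.0, 0.2]])     # c[s, a]: stage cost
d = np.array([[0.0, 1.0], [0.0, 1.0]])     # d[s, a]: constrained cost
D = 0.3                                    # bound on the average of d

S, A = 2, 2
idx = lambda s, a: s * A + a

# equality constraints: flow balance for each state, plus normalisation
A_eq = np.zeros((S + 1, S * A))
for s in range(S):
    for s2 in range(S):
        for a in range(A):
            A_eq[s, idx(s2, a)] = float(s == s2) - P[s2, a, s]
A_eq[S, :] = 1.0
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[D],
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
rho = res.x.reshape(S, A)
avg_cost, avg_d = (rho * c).sum(), (rho * d).sum()
```

An optimal stationary policy is recovered by normalising ρ(s, ·) in each state; in the game setting of the paper, each player solves a coupled program of this kind over the chain that player alone controls.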
49

Abolnikov, Lev, and Alexander Dukhovny. "Complex-analytic and matrix-analytic solutions for a queueing system with group service controlled by arrivals." Journal of Applied Mathematics and Stochastic Analysis 13, no. 4 (2000): 415–27. http://dx.doi.org/10.1155/s1048953300000356.

Full text
Abstract:
A bulk M/G/1 system is considered that responds to large increases (decreases) of the queue during the service act by alternating between two service modes. The switching rule is based on two “up” and “down” thresholds for total arrivals over the service act. A necessary and sufficient condition for the ergodicity of a Markov chain embedded into the main queueing process is found. Both complex-analytic and matrix-analytic solutions are obtained for the steady-state distribution. Under the assumption of the same service time distribution in both modes, a combined complex-matrix-analytic method
APA, Harvard, Vancouver, ISO, and other styles
50

Narwal, Priti, Deepak Kumar, Shailendra Narayan Singh, and Peeyush Tewari. "Stochastic Intrusion Detection Game-Based Arrangement Using Controlled Markov Chain for Prevention of DoS and DDoS Attacks in Cloud." Journal of Information Technology Research 14, no. 4 (2021): 45–57. http://dx.doi.org/10.4018/jitr.2021100104.

Full text
Abstract:
DoS (denial of service) assault is the most prevalent assault these days. It imposes a major risk to cybersecurity. At the point when this assault is propelled by numerous conveyed machines on a solitary server machine, it is called as a DDoS (distributed denial of service) assault. Additionally, DoS bypass on DHCP (dynamic host configuration protocol) server assault is a rising and famous assault in a system. The authors have proposed a stochastic intrusion detection game-based arrangement utilizing controlled Markov chain that figures the transition probabilities starting with one state then
APA, Harvard, Vancouver, ISO, and other styles