Journal articles on the topic 'Controlled Markov chain'

Consult the top 50 journal articles for your research on the topic 'Controlled Markov chain.'

1

Ray, Anandaroop, David L. Alumbaugh, G. Michael Hoversten, and Kerry Key. "Robust and accelerated Bayesian inversion of marine controlled-source electromagnetic data using parallel tempering." GEOPHYSICS 78, no. 6 (November 1, 2013): E271–E280. http://dx.doi.org/10.1190/geo2013-0128.1.

Abstract:
Bayesian methods can quantify the model uncertainty that is inherent in inversion of highly nonlinear geophysical problems. In this approach, a model likelihood function based on knowledge of the data noise statistics is used to sample the posterior model distribution, which conveys information on the resolvability of the model parameters. Because these distributions are multidimensional and nonlinear, we used Markov chain Monte Carlo methods for efficient sampling. A single Markov chain can become stuck in a local probability mode; to some extent, this problem can be mitigated by running several randomized Markov chains independently, but unless a very large number of chains are run, biased results may be obtained. We got around these limitations by running parallel, interacting Markov chains with “annealed” or “tempered” likelihoods, which enable the whole system of chains to effectively escape local probability maxima. We tested this approach using a transdimensional algorithm, in which the number of model parameters as well as the parameters themselves were treated as unknowns during the inversion. This gave us a measure of uncertainty that was independent of any particular parameterization. We then subset the ensemble of inversion models either to reduce uncertainty based on a priori constraints or to examine the probability of various geologic scenarios. We demonstrated our algorithm's fast convergence to the posterior model distribution with a synthetic 1D marine controlled-source electromagnetic data example. The speed-up gained from this new approach will facilitate the practical implementation of future 2D and 3D Bayesian inversions, where the cost of each forward evaluation is significantly more expensive than in the 1D case.
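The tempered-likelihood idea in this abstract can be illustrated with a minimal sketch (a toy example of parallel tempering on a bimodal target, not the authors' transdimensional algorithm): several Metropolis chains explore the target raised to powers 1/T, and occasional state swaps between adjacent temperatures let the cold chain escape local probability modes.

```python
import math
import random

def log_target(x):
    # Bimodal toy posterior: mixture of peaks near -3 and +3.
    return math.log(math.exp(-(x + 3) ** 2) + math.exp(-(x - 3) ** 2))

def parallel_tempering(n_steps=5000, temps=(1.0, 4.0, 16.0), seed=0):
    rng = random.Random(seed)
    x = [0.0] * len(temps)          # one chain per temperature
    cold_samples = []
    for _ in range(n_steps):
        # Metropolis update within each tempered chain (target^(1/T)).
        for i, T in enumerate(temps):
            prop = x[i] + rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < (log_target(prop) - log_target(x[i])) / T:
                x[i] = prop
        # Propose a swap between a random adjacent pair of temperatures.
        i = rng.randrange(len(temps) - 1)
        log_alpha = (1 / temps[i] - 1 / temps[i + 1]) * (
            log_target(x[i + 1]) - log_target(x[i])
        )
        if math.log(rng.random()) < log_alpha:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold_samples.append(x[0])   # record only the T = 1 chain
    return cold_samples

samples = parallel_tempering()
```

A single Metropolis chain with this step size rarely crosses the low-probability region between the modes; with the hot chains feeding swaps downward, the cold chain visits both.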
2

Lefebvre, Mario, and Moussa Kounta. "Discrete homing problems." Archives of Control Sciences 23, no. 1 (March 1, 2013): 5–18. http://dx.doi.org/10.2478/v10170-011-0039-6.

Abstract:
We consider the so-called homing problem for discrete-time Markov chains. The aim is to optimally control the Markov chain until it hits a given boundary. Depending on a parameter in the cost function, the optimizer wants either to maximize or to minimize the time spent by the controlled process in the continuation region. Particular problems are considered and solved explicitly. Both the optimal control and the value function are obtained.
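The flavor of such a problem can be sketched with a toy discrete homing model (the dynamics and costs below are hypothetical, chosen only for illustration): a controlled random walk on {0, ..., N} pays a running cost θ plus a control cost u² per step until it hits either boundary, and value iteration yields the optimal expected cost.

```python
def homing_value_iteration(N=10, theta=1.0, controls=(-0.3, 0.0, 0.3), tol=1e-10):
    # V[k] = optimal expected total cost starting from state k;
    # states 0 and N are absorbing with terminal cost 0.
    V = [0.0] * (N + 1)
    while True:
        V_new = V[:]
        for k in range(1, N):
            V_new[k] = min(
                theta + u * u                  # running + control cost per step
                + (0.5 + u) * V[k + 1]         # control biases the step up...
                + (0.5 - u) * V[k - 1]         # ...or down
                for u in controls
            )
            # (theta > 0 makes leaving the continuation region desirable)
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

V = homing_value_iteration()
```

With θ = 1 the optimizer minimizes the (cost-weighted) exit time; the controlled cost from the middle state should beat the uncontrolled expected hitting time k(N−k) = 25.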
3

Andini, Enggartya, Sudarno Sudarno, and Rita Rahmawati. "PENERAPAN METODE PENGENDALIAN KUALITAS MEWMA BERDASARKAN ARL DENGAN PENDEKATAN RANTAI MARKOV (Studi Kasus: Batik Semarang 16, Meteseh)." Jurnal Gaussian 10, no. 1 (February 28, 2021): 125–35. http://dx.doi.org/10.14710/j.gauss.v10i1.30939.

Abstract:
An industrial company requires quality control to maintain consistent quality in its production output so that it can compete with other companies in the world market. In the industrial sector, most processes are influenced by more than one quality characteristic. One tool that can be used to control more than one quality characteristic is the Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. The chart is used to determine whether the process is in control; if it is not, the next analysis is based on the Average Run Length (ARL) with the Markov chain approach. A Markov chain assumes that the probability of today's event is influenced only by yesterday's event; here, the events in question are the draws of sample data from the batik cloth production process, used to judge whether a product meets the company's standards. The ARL is the average number of sample points drawn before a point indicates an out-of-control state. In this study, 60 data samples were used, covering three quality characteristics of handmade (written) batik production at Batik Semarang 16, Meteseh: the length of the cloth, the width of the cloth, and the production time of the cloth. Based on the results and discussion, the MEWMA control chart uses a weighting λ determined by trial and error. The MEWMA control chart cannot be said to be stable and in control with λ = 0.6: the calculations give an Upper Control Limit (BKA) of 11.3864 and a Lower Control Limit (BKB) of 0, and among the 60 data samples there is a Tj2 value of 15.70871 that falls outside the upper control limit, indicating that the production process is not statistically controlled. The MEWMA control chart can be improved based on the ARL with the Markov chain approach.
In this final project, the ARL value used is 200, the magnitude of the process shift is 1.7, and the value of r is 0.28, where r is a constant obtained from the r-parameter graph. The optimal MEWMA control chart based on the ARL with the Markov chain approach can be said to be stable and in control if no Tj2 value lies outside the upper control limit (BKA). The results show that the process is not statistically capable, because the MCpm value is 0.516797 and the MCpmk value is 0.437807, both capability index values being less than 1.
Keywords: Handmade batik, Multivariate Exponentially Weighted Moving Average (MEWMA), Average Run Length (ARL), Process capability.
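For readers unfamiliar with the chart itself, the monitored MEWMA statistic can be sketched as follows (a generic textbook form with made-up two-characteristic data, not the thesis data; the smoothing weight λ = 0.6 matches the abstract): Z_i = λ(X_i − μ) + (1 − λ)Z_{i−1}, and T²_i = Z_i' Σ_Z⁻¹ Z_i with the exact covariance Σ_Z = λ[1 − (1 − λ)^{2i}]/(2 − λ) Σ.

```python
def mewma_t2(data, mean, cov, lam=0.6):
    # data: list of (x1, x2) observations; cov: 2x2 in-control covariance.
    z = [0.0, 0.0]
    t2_values = []
    for i, x in enumerate(data, start=1):
        # EWMA smoothing of the deviation vector.
        z = [lam * (x[j] - mean[j]) + (1 - lam) * z[j] for j in range(2)]
        # Exact covariance factor of Z_i: lam/(2-lam) * (1 - (1-lam)^(2i)).
        c = lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))
        a, b = cov[0][0] * c, cov[0][1] * c
        bb, d = cov[1][0] * c, cov[1][1] * c
        det = a * d - b * bb
        # T^2 = z' * inv(cov_z) * z, with the 2x2 inverse written out.
        t2 = (d * z[0] ** 2 - (b + bb) * z[0] * z[1] + a * z[1] ** 2) / det
        t2_values.append(t2)
    return t2_values

# Two in-control points followed by a large mean shift (hypothetical numbers).
t2 = mewma_t2([(0.1, 0.0), (-0.1, 0.1), (5.0, 5.0)],
              mean=(0.0, 0.0), cov=[[1.0, 0.0], [0.0, 1.0]])
```

A point is flagged when its T² exceeds the upper control limit; the shifted third observation produces a T² far above that of the in-control points.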
4

CAI, KAI-YUAN, TSONG YUEH CHEN, YONG-CHAO LI, YUEN TAK YU, and LEI ZHAO. "ON THE ONLINE PARAMETER ESTIMATION PROBLEM IN ADAPTIVE SOFTWARE TESTING." International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (May 2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.

Abstract:
Software cybernetics is an emerging area that explores the interplay between software and control. The controlled Markov chain (CMC) approach to software testing supports the idea of software cybernetics by treating software testing as a control problem: the software under test serves as a controlled object modeled by a controlled Markov chain, and the software testing strategy serves as the corresponding controller. The software under test and the corresponding testing strategy form a closed-loop feedback control system. The theory of controlled Markov chains is used to design and optimize the testing strategy in accordance with a testing/reliability goal given explicitly and a priori. Adaptive software testing adjusts and improves the testing strategy online by using the testing data collected in the course of testing, and online parameter estimation plays a key role in doing so. In this paper, we study the effects of the genetic algorithm and the gradient method for online parameter estimation in adaptive software testing. We find that the genetic algorithm is effective and does not require prior knowledge of the software parameters of concern. Although the genetic algorithm is computationally intensive, it leads the adaptive software testing strategy to an optimal testing strategy, determined by optimizing a given testing goal such as minimizing the total cost incurred for removing a given number of defects. The gradient method, on the other hand, is computationally cheap, but requires appropriate initial values of the software parameters of concern; it may lead, or fail to lead, the adaptive testing strategy to an optimal one, depending on whether the given initial parameter values are appropriate. In general, the genetic algorithm should be used instead of the gradient method in adaptive software testing. Simulation results show that adaptive software testing does work and outperforms random testing.
5

Li, Jinzhi, and Shixia Ma. "Pricing Options with Credit Risk in Markovian Regime-Switching Markets." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/621371.

Abstract:
This paper investigates the valuation of European option with credit risk in a reduced form model when the stock price is driven by the so-called Markov-modulated jump-diffusion process, in which the arrival rate of rare events and the volatility rate of stock are controlled by a continuous-time Markov chain. We also assume that the interest rate and the default intensity follow the Vasicek models whose parameters are governed by the same Markov chain. We study the pricing of European option and present numerical illustrations.
6

Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 2 (June 1985): 408–23. http://dx.doi.org/10.2307/1427148.

Abstract:
In this paper we study a multi-channel queueing model with N waiting places and a non-recurrent input flow dependent on the queue length at the time of each arrival. The queue length is treated as a basic process. We first determine explicitly the limit distribution of the embedded Markov chain. Then, by introducing an auxiliary Markov process, we find a simple relationship between the limiting distribution of the Markov chain and the limiting distribution of the original process with continuous time parameter. Here we simultaneously combine two methods: solving the corresponding Kolmogorov system of differential equations, and using an approach based on the theory of semi-regenerative processes. Among various applications of multi-channel queues with state-dependent input stream, we consider a closed single-server system with reserve replacement and state-dependent service, which turns out to be dual (in a certain sense) in relation to our model; an optimization problem is also solved, and an interpretation by means of tandem systems is discussed.
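The limiting distribution of an embedded Markov chain, which the paper derives analytically, can be approximated numerically for any finite chain. The sketch below (an illustrative three-state birth-death chain, not the paper's model) uses simple power iteration on π ← πP.

```python
def stationary_distribution(P, tol=1e-12):
    # Power iteration: repeatedly apply pi <- pi P until convergence.
    n = len(P)
    pi = [1.0 / n] * n
    while True:
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt

# Example: a small birth-death (queue-length-like) chain.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
pi = stationary_distribution(P)   # detailed balance gives [0.25, 0.5, 0.25]
```

For an aperiodic irreducible chain the iteration converges geometrically to the unique stationary distribution.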
7

Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 02 (June 1985): 408–23. http://dx.doi.org/10.1017/s0001867800015044.

Abstract:
In this paper we study a multi-channel queueing model with N waiting places and a non-recurrent input flow dependent on the queue length at the time of each arrival. The queue length is treated as a basic process. We first determine explicitly the limit distribution of the embedded Markov chain. Then, by introducing an auxiliary Markov process, we find a simple relationship between the limiting distribution of the Markov chain and the limiting distribution of the original process with continuous time parameter. Here we simultaneously combine two methods: solving the corresponding Kolmogorov system of differential equations, and using an approach based on the theory of semi-regenerative processes. Among various applications of multi-channel queues with state-dependent input stream, we consider a closed single-server system with reserve replacement and state-dependent service, which turns out to be dual (in a certain sense) in relation to our model; an optimization problem is also solved, and an interpretation by means of tandem systems is discussed.
8

Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 2 (June 1987): 457–65. http://dx.doi.org/10.2307/3214269.

Abstract:
The long-run average cost per unit time of operating a finite dam controlled by a policy (Lam Yeh (1985)) is determined when the cumulative input process is the integral of a Markov chain. A penalty cost which accrues continuously at a rate g(X(t)), where g is a bounded measurable function of the content, is also introduced. An example where the input rate is a two-state Markov chain is considered in detail to illustrate the computations.
9

Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 02 (June 1987): 457–65. http://dx.doi.org/10.1017/s0021900200031090.

Abstract:
The long-run average cost per unit time of operating a finite dam controlled by a policy (Lam Yeh (1985)) is determined when the cumulative input process is the integral of a Markov chain. A penalty cost which accrues continuously at a rate g(X(t)), where g is a bounded measurable function of the content, is also introduced. An example where the input rate is a two-state Markov chain is considered in detail to illustrate the computations.
10

Fort, Gersende. "Central limit theorems for stochastic approximation with controlled Markov chain dynamics." ESAIM: Probability and Statistics 19 (2015): 60–80. http://dx.doi.org/10.1051/ps/2014013.

11

Zaremba, Piotr. "The stopped distributions of a controlled Markov chain with discrete time." Systems & Control Letters 6, no. 4 (October 1985): 277–85. http://dx.doi.org/10.1016/0167-6911(85)90080-5.

12

Hooghiemstra, G., and M. Keane. "Calculation of the equilibrium distribution for a solar energy storage model." Journal of Applied Probability 22, no. 4 (December 1985): 852–64. http://dx.doi.org/10.2307/3213953.

Abstract:
The study of simple solar energy storage models leads to the question of analyzing the equilibrium distribution of Markov chains (Harris chains), for which the state at epoch (n + 1) (i.e. the temperature of the storage tank) depends on the state at epoch n and on a controlled input, acceptance of which entails a further decrease of the temperature level. Here we study the model where the input is exponentially distributed. For all values of the parameters involved an explicit expression for the equilibrium distribution of the Markov chain is derived, and from this we calculate, as one of the possible applications, the exact values of the mean of this equilibrium distribution.
13

Hooghiemstra, G., and M. Keane. "Calculation of the equilibrium distribution for a solar energy storage model." Journal of Applied Probability 22, no. 04 (December 1985): 852–64. http://dx.doi.org/10.1017/s0021900200108095.

Abstract:
The study of simple solar energy storage models leads to the question of analyzing the equilibrium distribution of Markov chains (Harris chains), for which the state at epoch (n + 1) (i.e. the temperature of the storage tank) depends on the state at epoch n and on a controlled input, acceptance of which entails a further decrease of the temperature level. Here we study the model where the input is exponentially distributed. For all values of the parameters involved an explicit expression for the equilibrium distribution of the Markov chain is derived, and from this we calculate, as one of the possible applications, the exact values of the mean of this equilibrium distribution.
14

Chinyuchin, Yu M., and A. S. Solov'ev. "Application of Markov processes for analysis and control of aircraft maintainability." Civil Aviation High Technologies 23, no. 1 (February 26, 2020): 71–83. http://dx.doi.org/10.26467/2079-0619-2020-23-1-71-83.

Abstract:
The process of aircraft operation involves constant effects of various factors on its components, leading to random or systematic changes in their technical condition. Markov processes are a particular case of the stochastic processes that take place during the operation of aeronautical equipment. The relationship between reliability characteristics and the cost of restoring objects allows us to apply the analytic apparatus of Markov processes to the analysis and optimization of maintainability factors. The article describes two methods for the analysis and control of object maintainability, based on stationary and non-stationary Markov chains. The stationary Markov chain model is used for equipment whose event intensities are constant in time; for objects with time-varying event intensities, a non-stationary Markov chain is used. To reduce the number of mathematical operations in the analysis of aeronautical engineering maintainability using non-stationary Markov processes, an optimization algorithm is presented. The suggested methods of analysis by means of Markov chains allow comparative assessment of the expected maintenance and repair costs for one or several objects of the same type, taking into account their initial condition and operating time. Maintainability control using Markov chains involves searching for the optimal maintenance and repair strategy, for each state of an object, under which maintenance costs are minimal. Applying these analysis and control methods to a controlled object yielded a predictive control model in which the expected maintenance and repair costs, as well as the required number of spare parts for each specified operating-time interval, are calculated.
The possibility of using the mathematical apparatus of Markov processes for a large number of objects with differently distributed reliability factors is shown. Software implementation of the described methods, together with the use of adapted spreadsheet software, will help reduce the complexity of the calculations and improve data visualization.
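The kind of cost comparison described in this abstract can be sketched with a toy three-state condition model (all transition probabilities and costs below are hypothetical): compute each maintenance strategy's stationary distribution and compare long-run average costs per step.

```python
def long_run_cost(P, cost, n_iter=500):
    # Long-run average cost per step: pi . cost, pi by power iteration.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(p * c for p, c in zip(pi, cost))

# States: good, degraded, failed (hypothetical rates and costs).
P_run_to_failure = [[0.90, 0.10, 0.00],
                    [0.00, 0.80, 0.20],
                    [1.00, 0.00, 0.00]]   # repair only after failure
P_preventive     = [[0.90, 0.10, 0.00],
                    [0.70, 0.30, 0.00],   # repair already when degraded
                    [1.00, 0.00, 0.00]]
cost_rtf  = [0.0, 1.0, 10.0]   # failure repair is expensive
cost_prev = [0.0, 3.0, 10.0]   # preventive repair costs more per degraded step

c_rtf = long_run_cost(P_run_to_failure, cost_rtf)
c_prev = long_run_cost(P_preventive, cost_prev)
```

With these numbers the preventive strategy avoids the expensive failed state entirely, so its long-run average cost is lower.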
15

Tanikawa, Akio. "Martingale limit theorem and its application to an ergodic controlled Markov chain." Systems & Control Letters 26, no. 4 (November 1995): 261–66. http://dx.doi.org/10.1016/0167-6911(95)00020-a.

16

Song, Qingshuo, and Gang George Yin. "Convergence rates of Markov chain approximation methods for controlled diffusions with stopping." Journal of Systems Science and Complexity 23, no. 3 (June 2010): 600–621. http://dx.doi.org/10.1007/s11424-010-0148-5.

17

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (September 1987): 644–56. http://dx.doi.org/10.2307/3214096.

Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies also. These results are applied to constrained optimization of SMDP, in which stationary (randomized) policies appear naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
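The basic uniformization construction underlying this work is P = I + Q/Λ, where Q is a continuous-time generator and Λ bounds the exit rates; any slack on the diagonal of P becomes the self-loop "virtual jumps" the abstract warns about. A minimal sketch:

```python
def uniformize(Q, Lam=None):
    # Embed a CTMC with generator Q into a DTMC observed at Poisson rate Lam.
    n = len(Q)
    if Lam is None:
        Lam = max(-Q[i][i] for i in range(n))   # must dominate all exit rates
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
         for i in range(n)]
    return P, Lam

# Two-state example: exit rates 2 and 1, so the default Lam is 2.
P, Lam = uniformize([[-2.0, 2.0], [1.0, -1.0]])
```

State 1 exits at rate 1 < Λ = 2, so row 1 of P keeps probability 0.5 on itself, a virtual jump absent from the original process.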
18

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 03 (September 1987): 644–56. http://dx.doi.org/10.1017/s0021900200031375.

Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies also. These results are applied to constrained optimization of SMDP, in which stationary (randomized) policies appear naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
19

Jin, Zhuo, Rebecca Stockbridge, and George Yin. "Some Recent Progress on Numerical Methods for Controlled Regime-Switching Models with Applications to Insurance and Risk Management." Computational Methods in Applied Mathematics 15, no. 3 (July 1, 2015): 331–51. http://dx.doi.org/10.1515/cmam-2015-0015.

Abstract:
This paper provides a survey on several numerical approximation schemes for stochastic control problems that arise from actuarial science and finance. The problems to be considered include dividend optimization, reinsurance game, and quantile hedging for guaranteed minimum death benefits. To better describe the complicated financial markets and their inherent uncertainty and randomness, the so-called regime-switching models are adopted. Such models are more realistic and versatile, however, far more complicated to handle. Due to the complexity of the construction, the regime-switching diffusion systems can only be solved in very special cases. In general, it is virtually impossible to obtain closed-form solutions. We use Markov chain approximation techniques to construct discrete-time controlled Markov chains to approximate the value function and optimal controls. Examples are presented to illustrate the applicability of the numerical methods.
20

González, M., R. Martínez, and M. Mota. "On the geometric growth in a class of homogeneous multitype Markov chain." Journal of Applied Probability 42, no. 4 (December 2005): 1015–30. http://dx.doi.org/10.1239/jap/1134587813.

Abstract:
In this paper, we investigate the geometric growth of homogeneous multitype Markov chains whose states have nonnegative integer coordinates. Such models are considered in a situation similar to the supercritical case for branching processes. Finally, our general theoretical results are applied to a class of controlled multitype branching process in which the control is random.
21

González, M., R. Martínez, and M. Mota. "On the geometric growth in a class of homogeneous multitype Markov chain." Journal of Applied Probability 42, no. 04 (December 2005): 1015–30. http://dx.doi.org/10.1017/s0021900200001078.

Abstract:
In this paper, we investigate the geometric growth of homogeneous multitype Markov chains whose states have nonnegative integer coordinates. Such models are considered in a situation similar to the supercritical case for branching processes. Finally, our general theoretical results are applied to a class of controlled multitype branching process in which the control is random.
22

Fernandez-Gaucherand, E., A. Arapostathis, and S. I. Marcus. "Analysis of an adaptive control scheme for a partially observed controlled Markov chain." IEEE Transactions on Automatic Control 38, no. 6 (June 1993): 987–93. http://dx.doi.org/10.1109/9.222316.

23

Shervashidze, T. "Local Limit Theorems for Conditionally Independent Random Variables Controlled by a Finite Markov Chain." Theory of Probability & Its Applications 44, no. 1 (January 2000): 131–35. http://dx.doi.org/10.1137/s0040585x97977446.

24

Schouten, Rianne M., Marcos L. P. Bueno, Wouter Duivesteijn, and Mykola Pechenizkiy. "Mining sequences with exceptional transition behaviour of varying order using quality measures based on information-theoretic scoring functions." Data Mining and Knowledge Discovery 36, no. 1 (November 24, 2021): 379–413. http://dx.doi.org/10.1007/s10618-021-00808-x.

Abstract:
Discrete Markov chains are frequently used to analyse transition behaviour in sequential data. Here, the transition probabilities can be estimated using varying-order Markov chains, where the order k specifies the length of the sequence history that is used to model these probabilities. Generally, such a model is fitted to the entire dataset, but in practice it is likely that some heterogeneity in the data exists and that some sequences would be better modelled with alternative parameter values, or with a Markov chain of a different order. We use the framework of Exceptional Model Mining (EMM) to discover these exceptionally behaving sequences. In particular, we propose an EMM model class that allows for discovering subgroups with transition behaviour of varying order. To that end, we propose three new quality measures based on information-theoretic scoring functions. Our findings from controlled experiments show that all three quality measures find exceptional transition behaviour of varying order and are reasonably sensitive. The quality measure based on Akaike's Information Criterion is most robust to the number of observations. We furthermore add to existing work by seeking subgroups of sequences, as opposed to subgroups of transitions. Since we use sequence-level descriptive attributes, we form subgroups of entire sequences, which is practically relevant in situations where one wants to identify the originators of exceptional sequences, such as patients. We show this relevance by analysing sequences of blood glucose values of adult persons with type 2 diabetes. In the experiments, we find subgroups of patients based on age and glycated haemoglobin (HbA1c), a measure known to correlate with average blood glucose values. Clinicians and domain experts confirmed the transition behaviour as estimated by the fitted Markov chain models.
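The idea of fitting Markov chains of several orders k and scoring them with an information criterion can be sketched as follows (a bare-bones AIC-based order selector; the paper's quality measures for subgroup discovery are more involved):

```python
import math
from collections import Counter

def fit_loglik(seq, k, alphabet):
    # Maximum-likelihood log-likelihood of an order-k Markov chain on seq.
    ctx = Counter()
    trans = Counter()
    for i in range(k, len(seq)):
        c = tuple(seq[i - k:i])        # length-k history (empty for k = 0)
        ctx[c] += 1
        trans[(c, seq[i])] += 1
    ll = sum(n * math.log(n / ctx[c]) for (c, s), n in trans.items())
    n_params = (len(alphabet) ** k) * (len(alphabet) - 1)
    return ll, n_params

def best_order_by_aic(seq, max_k, alphabet):
    scores = {}
    for k in range(max_k + 1):
        ll, p = fit_loglik(seq, k, alphabet)
        scores[k] = 2 * p - 2 * ll     # AIC: lower is better
    return min(scores, key=scores.get)
```

On a strictly alternating sequence, order 1 already predicts every symbol perfectly, so AIC's parameter penalty rules out higher orders.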
25

Ghazi, Shahid, and Nigel P. Mountney. "Application of Markov chain analysis to a fining-upward fluvial succession of the Early Permian Warchha Sandstone, Salt Range, Pakistan." Journal of Nepal Geological Society 40 (December 1, 2010): 21–30. http://dx.doi.org/10.3126/jngs.v40i0.23593.

Abstract:
Markov chain analysis is applied to the cyclic properties and degree of ordering of lithofacies in the Early Permian (Artinskian) Warchha Sandstone. The 30 to 155 m-thick Warchha Sandstone is well exposed in the Salt Range, Pakistan, and is dominantly composed of a sandstone, siltstone and claystone succession. Seven lithofacies have been identified on the basis of geometry, gross lithology and sedimentary structures. The lithofacies are cyclically arranged in a fining-upward pattern. A complete cycle starts with pebbly sandstone, accompanied by a thin layer of basal conglomerate, and terminates with claystone. Non-stationary first-order Markov chain analysis was applied to the vertical facies transitions using outcrop data from the Warchha Sandstone succession. A chi-square test was then applied to test the dependency between any two facies states in the facies transitions. The results of this study reveal that the sediments of the Warchha succession were controlled by a Markovian mechanism and as a whole represent a fluvial succession deposited in a predictable cyclic arrangement of lithofacies.
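The transition-count side of such an analysis is easy to sketch (illustrative facies codes, not the Warchha data): tally upward facies transitions and test them against random stacking with a chi-square statistic.

```python
def transition_stats(sequence, states):
    # Observed upward-transition counts between facies states.
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    obs = [[0] * n for _ in range(n)]
    for a, b in zip(sequence, sequence[1:]):
        obs[idx[a]][idx[b]] += 1
    total = len(sequence) - 1
    rows = [sum(r) for r in obs]
    cols = [sum(obs[i][j] for i in range(n)) for j in range(n)]
    # Chi-square statistic against the independent (random stacking) model.
    chi2 = sum(
        (obs[i][j] - rows[i] * cols[j] / total) ** 2
        / (rows[i] * cols[j] / total)
        for i in range(n) for j in range(n)
        if rows[i] and cols[j]
    )
    return obs, chi2

# A perfectly cyclic A -> B -> C succession gives a large statistic.
obs, chi2 = transition_stats("ABC" * 10, "ABC")
```

A statistic well above the chi-square critical value (about 9.49 at the 5% level for 4 degrees of freedom here) indicates Markovian, non-random ordering of the facies.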
26

SOMARAJU, RAM, MAZYAR MIRRAHIMI, and PIERRE ROUCHON. "APPROXIMATE STABILIZATION OF AN INFINITE DIMENSIONAL QUANTUM STOCHASTIC SYSTEM." Reviews in Mathematical Physics 25, no. 01 (February 2013): 1350001. http://dx.doi.org/10.1142/s0129055x13500013.

Abstract:
We study the state feedback stabilization of a quantum harmonic oscillator near a pre-specified Fock state (photon number state). Such a state feedback controller has been recently implemented on a quantized electromagnetic field in an almost lossless cavity. Such open quantum systems are governed by a controlled discrete-time Markov chain in the unit ball of an infinite dimensional Hilbert space. The control design is based on an unbounded Lyapunov function that is minimized at each time-step by feedback. This ensures (weak-*) convergence of probability measures to a final measure concentrated on the target Fock state with a pre-specified probability. This probability may be made arbitrarily close to 1 by choosing the feedback parameters and the Lyapunov function. They are chosen so that the stochastic flow that describes the Markov process may be shown to be tight (concentrated on a compact set with probability arbitrarily close to 1). Convergence proof uses Prohorov's theorem and specific properties of this Lyapunov function.
27

Trainor-Guitton, Whitney, and G. Michael Hoversten. "Stochastic inversion for electromagnetic geophysics: Practical challenges and improving convergence efficiency." GEOPHYSICS 76, no. 6 (November 2011): F373–F386. http://dx.doi.org/10.1190/geo2010-0223.1.

Abstract:
Traditional deterministic geophysical inversion algorithms are not designed to provide a robust evaluation of uncertainty that reflects the limitations of the geophysical technique. Stochastic inversions, which do provide a sampling-based measure of uncertainty, are computationally expensive and not straightforward to implement for nonexperts (nonstatisticians). Our results include stochastic inversion for magnetotelluric and controlled-source electromagnetic data. Combining two Markov chain sampling algorithms (Metropolis-Hastings and the slice sampler) can significantly decrease the computational expense compared to using either sampler alone. The statistics of the stochastic inversion allow for (1) variances that better reveal the measurement sensitivities of the two different electromagnetic techniques than traditional techniques and (2) models defined by the median and modes of parameter probability density functions, which produce amplitude and phase data that are consistent with the observed data. In general, parameter error estimates from the covariance matrix significantly underestimate the true parameter error, whereas the parameter variances derived from Markov chains accurately encompass the error.
APA, Harvard, Vancouver, ISO, and other styles
28

Attia, F. A. "Resolvent operators of Markov processes and their applications in the control of a finite dam." Journal of Applied Probability 26, no. 2 (June 1989): 314–24. http://dx.doi.org/10.2307/3214038.

Full text
Abstract:
The resolvent operators of the following two processes are obtained: (a) the bivariate Markov process W = (X, Y), where Y(t) is an irreducible Markov chain and X(t) is its integral, and (b) the geometric Wiener process G(t) = exp{B(t)}, where B(t) is a Wiener process with non-negative drift μ and variance parameter σ². These results are then used via a limiting procedure to determine the long-run average cost per unit time of operating a finite dam where the input process is either X(t) or G(t). The system is controlled by a policy (Attia [1], Lam [6]).
APA, Harvard, Vancouver, ISO, and other styles
29

Attia, F. A. "Resolvent operators of Markov processes and their applications in the control of a finite dam." Journal of Applied Probability 26, no. 02 (June 1989): 314–24. http://dx.doi.org/10.1017/s0021900200027315.

Full text
Abstract:
The resolvent operators of the following two processes are obtained: (a) the bivariate Markov process W = (X, Y), where Y(t) is an irreducible Markov chain and X(t) is its integral, and (b) the geometric Wiener process G(t) = exp{B(t)}, where B(t) is a Wiener process with non-negative drift μ and variance parameter σ². These results are then used via a limiting procedure to determine the long-run average cost per unit time of operating a finite dam where the input process is either X(t) or G(t). The system is controlled by a policy (Attia [1], Lam [6]).
APA, Harvard, Vancouver, ISO, and other styles
30

Jin, Zhuo, Ming Qiu, Ky Q. Tran, and George Yin. "A survey of numerical solutions for stochastic control problems: Some recent progress." Numerical Algebra, Control & Optimization 12, no. 2 (2022): 213. http://dx.doi.org/10.3934/naco.2022004.

Full text
Abstract:
This paper presents a survey of some of the recent progress on numerical solutions for controlled switching diffusions. We begin by recalling the basics of switching diffusions and controlled switching diffusions. We then present regular controls and singular controls. The main objective of this paper is to provide a survey of some recent advances in Markov chain approximation methods for solving stochastic control problems numerically. A number of applications in insurance, mathematical biology, epidemiology, and economics are presented. Several numerical examples are provided for demonstration.
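The Markov chain approximation idea surveyed in this entry can be sketched for a one-dimensional controlled diffusion: build a locally consistent Markov chain on a grid from the drift and diffusion coefficients, then solve the discrete control problem by value iteration. Everything below (dynamics, cost, discount rate, grid, control set) is an illustrative assumption, not taken from the survey.

```python
import numpy as np

# Minimal sketch of a Markov chain approximation for the 1D controlled
# diffusion dx = u dt + sigma dW on [0, 1], minimizing a discounted
# running cost x^2 + u^2 over controls u in {-1, 0, 1}.
h = 0.02
sigma = 0.3
xs = np.arange(0.0, 1.0 + h, h)
controls = (-1.0, 0.0, 1.0)
discount = 0.5

V = np.zeros_like(xs)
for _ in range(2000):  # value iteration on the approximating chain
    Vn = V.copy()
    for i, x in enumerate(xs[1:-1], start=1):
        best = np.inf
        for u in controls:
            denom = sigma**2 + h * abs(u)
            dt = h**2 / denom                         # interpolation interval
            p_up = (sigma**2 / 2 + h * max(u, 0)) / denom
            p_dn = (sigma**2 / 2 + h * max(-u, 0)) / denom
            # p_up + p_dn = 1: the chain is locally consistent with the diffusion
            cost = (x**2 + u**2) * dt
            val = cost + np.exp(-discount * dt) * (p_up * V[i + 1] + p_dn * V[i - 1])
            best = min(best, val)
        Vn[i] = best
    # boundary states are kept absorbing with value 0 for simplicity
    if np.max(np.abs(Vn - V)) < 1e-10:
        V = Vn
        break
    V = Vn
print(V[len(xs) // 2])
```

The transition probabilities and interpolation interval follow the standard finite-difference construction for locally consistent chains; refining h recovers the continuous-time value function under the usual conditions.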
APA, Harvard, Vancouver, ISO, and other styles
31

Radaideh, Ashraf, Umesh Vaidya, and Venkataramana Ajjarapu. "Sequential Set-Point Control for Heterogeneous Thermostatically Controlled Loads Through an Extended Markov Chain Abstraction." IEEE Transactions on Smart Grid 10, no. 1 (January 2019): 116–27. http://dx.doi.org/10.1109/tsg.2017.2732949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chong, Siang Yew, Peter Tiňo, Jun He, and Xin Yao. "A New Framework for Analysis of Coevolutionary Systems—Directed Graph Representation and Random Walks." Evolutionary Computation 27, no. 2 (June 2019): 195–228. http://dx.doi.org/10.1162/evco_a_00218.

Full text
Abstract:
Studying coevolutionary systems in the context of simplified models (i.e., games with pairwise interactions between coevolving solutions modeled as self plays) remains an open challenge since the rich underlying structures associated with pairwise-comparison-based fitness measures are often not taken fully into account. Although cyclic dynamics have been demonstrated in several contexts (such as intransitivity in coevolutionary problems), there is no complete characterization of cycle structures and their effects on coevolutionary search. We develop a new framework to address this issue. At the core of our approach is the directed graph (digraph) representation of coevolutionary problems that fully captures structures in the relations between candidate solutions. Coevolutionary processes are modeled as a specific type of Markov chains—random walks on digraphs. Using this framework, we show that coevolutionary problems admit a qualitative characterization: a coevolutionary problem is either solvable (there is a subset of solutions that dominates the remaining candidate solutions) or not. This has implications for coevolutionary search. We further develop our framework to provide the means to construct quantitative tools for analysis of coevolutionary processes and demonstrate their applications through case studies. We show that coevolution of solvable problems corresponds to an absorbing Markov chain for which we can compute the expected hitting time of the absorbing class. Otherwise, coevolution will cycle indefinitely and the quantity of interest will be the limiting invariant distribution of the Markov chain. We also provide an index for characterizing complexity in coevolutionary problems and show how such problems can be generated in a controlled manner.
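The expected hitting time of the absorbing class mentioned above is computable from the fundamental matrix of an absorbing Markov chain. A small sketch (the transition matrix is hypothetical, not from the paper; the absorbing state stands in for a dominating solution set):

```python
import numpy as np

# Hypothetical 4-state absorbing Markov chain: states 0-2 are transient,
# state 3 is absorbing (the "dominating" class in the coevolution analogy).
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.4, 0.2, 0.2],
    [0.1, 0.1, 0.5, 0.3],
    [0.0, 0.0, 0.0, 1.0],
])

Q = P[:3, :3]                      # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix N = (I - Q)^-1
t = N @ np.ones(3)                 # expected steps to absorption per start state
print(t)
```

Row i of N gives the expected number of visits to each transient state before absorption when starting in state i, so N times the all-ones vector yields the expected hitting times of the absorbing class.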
APA, Harvard, Vancouver, ISO, and other styles
33

Finke, Axel, Arnaud Doucet, and Adam M. Johansen. "Limit theorems for sequential MCMC methods." Advances in Applied Probability 52, no. 2 (June 2020): 377–403. http://dx.doi.org/10.1017/apr.2020.9.

Full text
Abstract:
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an $\mathbb{L}_r$-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
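The contrast drawn above between conditionally independent SMC sampling and MCMC-kernel sampling can be illustrated by moving an entire particle population through a Metropolis-Hastings kernel. This is a generic sketch of an MCMC kernel acting on particles, with an invented standard-normal target; it is not the algorithm of [6] or the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Standard normal log-density (up to a constant) as a stand-in target.
    return -0.5 * x**2

def mh_kernel_step(particles, step=1.0):
    """Advance every particle by one Metropolis-Hastings step.

    Unlike SMC, where particles are drawn conditionally independently and
    reweighted, a sequential MCMC method pushes the whole population
    through an MCMC kernel targeting the current distribution.
    """
    proposals = particles + step * rng.standard_normal(particles.shape)
    log_accept = log_target(proposals) - log_target(particles)
    accept = np.log(rng.uniform(size=particles.shape)) < log_accept
    return np.where(accept, proposals, particles)

particles = rng.standard_normal(5000) + 3.0   # start far from the target mode
for _ in range(200):                          # iterate the kernel
    particles = mh_kernel_step(particles)
print(particles.mean(), particles.std())
```

After enough kernel iterations the population's empirical mean and standard deviation should approach the target's 0 and 1, which is the sense in which the kernel "tracks" the target distribution.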
APA, Harvard, Vancouver, ISO, and other styles
34

Tollar, Eric S. "On the limit behavior of a multicompartment storage model with an underlying Markov chain." Advances in Applied Probability 20, no. 1 (March 1988): 208–27. http://dx.doi.org/10.2307/1427276.

Full text
Abstract:
The present paper considers a multicompartment storage model with one-way flow. The inputs and outputs for each compartment are controlled by a denumerable-state Markov chain. Assuming finite first and second moments, it is shown that the amounts of material in certain compartments converge in distribution while for others they diverge, based on appropriate first-moment conditions on the inputs and outputs. It is also shown that the diverging compartments under suitable normalization converge to functionals of Brownian motion, independent of those compartments which converge without normalization.
APA, Harvard, Vancouver, ISO, and other styles
35

Tollar, Eric S. "On the limit behavior of a multicompartment storage model with an underlying Markov chain." Advances in Applied Probability 20, no. 01 (March 1988): 208–27. http://dx.doi.org/10.1017/s0001867800018000.

Full text
Abstract:
The present paper considers a multicompartment storage model with one-way flow. The inputs and outputs for each compartment are controlled by a denumerable-state Markov chain. Assuming finite first and second moments, it is shown that the amounts of material in certain compartments converge in distribution while for others they diverge, based on appropriate first-moment conditions on the inputs and outputs. It is also shown that the diverging compartments under suitable normalization converge to functionals of Brownian motion, independent of those compartments which converge without normalization.
APA, Harvard, Vancouver, ISO, and other styles
36

Lefebvre, Mario. "Optimal control of jump-diffusion processes with random parameters." Buletinul Academiei de Ştiinţe a Republicii Moldova. Matematica, no. 3(100) (June 2023): 22–29. http://dx.doi.org/10.56415/basm.y2022.i3.p22.

Full text
Abstract:
Let $X(t)$ be a controlled jump-diffusion process starting at $x \in [a,b]$ and whose infinitesimal parameters vary according to a continuous-time Markov chain. The aim is to minimize the expected value of a cost function with quadratic control costs until $X(t)$ leaves the interval $(a,b)$, and a termination cost that depends on the final value of $X(t)$. Exact and explicit solutions are obtained for important processes.
APA, Harvard, Vancouver, ISO, and other styles
37

Mezhennaya, N. M. "On the limit distribution of a number of runs in polynomial sequence controlled by Markov chain." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki 26, no. 3 (September 2016): 324–35. http://dx.doi.org/10.20537/vm160303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yu, Li Ming, Shou Qiang Wei, Tian Tian Xing, and Hong Liang Liu. "Reliability Analysis of Hybrid Actuation Based on GSPN." Advanced Materials Research 430-432 (January 2012): 1914–17. http://dx.doi.org/10.4028/www.scientific.net/amr.430-432.1914.

Full text
Abstract:
Generalized stochastic Petri nets are adopted to develop reliability models for the two operating modes of the hybrid actuation system, which is composed of an SHA (servo-valve-controlled hydraulic actuator), an EHA (electro-hydrostatic actuator) and an EBHA (electrical back-up hydrostatic actuator). The dependability of the hybrid actuation system is obtained through the Markov chain to which the Petri net state space is isomorphic, together with Monte Carlo simulation. Simulations are conducted to analyze the influence of the operating mode and the fault coverage on the reliability of the hybrid actuation system.
APA, Harvard, Vancouver, ISO, and other styles
39

Hordijk, Arie, and Flos Spieksma. "Constrained admission control to a queueing system." Advances in Applied Probability 21, no. 2 (June 1989): 409–31. http://dx.doi.org/10.2307/1427167.

Full text
Abstract:
We consider an exponential queue with arrival and service rates depending on the number of jobs present in the queue. The queueing system is controlled by restricting arrivals. Typically, a good policy should provide a proper balance between throughput and congestion. A mathematical model for computing such a policy is a Markov decision chain with rewards and a constrained cost function. We give general conditions on the reward and cost function which guarantee the existence of an optimal threshold or thinning policy. An efficient algorithm for computing an optimal policy is constructed.
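The kind of threshold policy whose optimality the paper establishes can be illustrated by a direct search over admission thresholds for a simple birth-death queue: under threshold K, arrivals are accepted only when fewer than K jobs are present, and the resulting chain has a geometric stationary distribution. The rates, holding cost, and objective below are invented for illustration; this is not the paper's algorithm, which handles state-dependent rates and a constrained cost.

```python
import numpy as np

lam, mu = 0.9, 1.0          # arrival and service rates (constant here)
hold_cost = 0.1             # congestion cost per job in the system

def stationary(K):
    """Stationary distribution of the birth-death chain under threshold K
    (arrivals accepted only in states 0..K-1)."""
    w = np.array([(lam / mu) ** n for n in range(K + 1)])
    return w / w.sum()

def reward(K):
    pi = stationary(K)
    throughput = lam * (1 - pi[-1])              # accepted arrival rate
    congestion = (np.arange(K + 1) * pi).sum()   # mean number in system
    return throughput - hold_cost * congestion

best_K = max(range(1, 30), key=reward)
print(best_K, reward(best_K))
```

Raising K increases throughput but also congestion; the maximizing threshold makes exactly the throughput-congestion trade-off the abstract describes.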
APA, Harvard, Vancouver, ISO, and other styles
40

Hordijk, Arie, and Flos Spieksma. "Constrained admission control to a queueing system." Advances in Applied Probability 21, no. 02 (June 1989): 409–31. http://dx.doi.org/10.1017/s0001867800018619.

Full text
Abstract:
We consider an exponential queue with arrival and service rates depending on the number of jobs present in the queue. The queueing system is controlled by restricting arrivals. Typically, a good policy should provide a proper balance between throughput and congestion. A mathematical model for computing such a policy is a Markov decision chain with rewards and a constrained cost function. We give general conditions on the reward and cost function which guarantee the existence of an optimal threshold or thinning policy. An efficient algorithm for computing an optimal policy is constructed.
APA, Harvard, Vancouver, ISO, and other styles
41

Miyazawa, Masakiyo, and Hiroyuki Takada. "A matrix exponential form for hitting probabilities and its application to a Markov-modulated fluid queue with downward jumps." Journal of Applied Probability 39, no. 3 (September 2002): 604–18. http://dx.doi.org/10.1239/jap/1034082131.

Full text
Abstract:
We consider a fluid queue with downward jumps, where the fluid flow rate and the downward jumps are controlled by a background Markov chain with a finite state space. We show that the stationary distribution of a buffer content has a matrix exponential form, and identify the exponent matrix. We derive these results using time-reversed arguments and the background state distribution at the hitting time concerning the corresponding fluid flow with upward jumps. This distribution was recently studied for a fluid queue with upward jumps under a stability condition. We give an alternative proof for this result using the rate conservation law. This proof not only simplifies the proof, but also explains an underlying Markov structure and enables us to study more complex cases such that the fluid flow has jumps subject to a nondecreasing Lévy process, a Brownian component, and countably many background states.
APA, Harvard, Vancouver, ISO, and other styles
42

Miyazawa, Masakiyo, and Hiroyuki Takada. "A matrix exponential form for hitting probabilities and its application to a Markov-modulated fluid queue with downward jumps." Journal of Applied Probability 39, no. 03 (September 2002): 604–18. http://dx.doi.org/10.1017/s0021900200021835.

Full text
Abstract:
We consider a fluid queue with downward jumps, where the fluid flow rate and the downward jumps are controlled by a background Markov chain with a finite state space. We show that the stationary distribution of a buffer content has a matrix exponential form, and identify the exponent matrix. We derive these results using time-reversed arguments and the background state distribution at the hitting time concerning the corresponding fluid flow with upward jumps. This distribution was recently studied for a fluid queue with upward jumps under a stability condition. We give an alternative proof for this result using the rate conservation law. This proof not only simplifies the proof, but also explains an underlying Markov structure and enables us to study more complex cases such that the fluid flow has jumps subject to a nondecreasing Lévy process, a Brownian component, and countably many background states.
APA, Harvard, Vancouver, ISO, and other styles
43

Phan, Kevin, Declan Lloyd, Ash Wilson-Smith, Vannessa Leung, and Marko Andric. "Intraocular bleeding in patients managed with novel oral anticoagulation and traditional anticoagulation: a network meta-analysis and systematic review." British Journal of Ophthalmology 103, no. 5 (June 20, 2018): 641–47. http://dx.doi.org/10.1136/bjophthalmol-2018-312198.

Full text
Abstract:
Background/aim: To clarify the nature of the relationship between novel oral anticoagulants (NOACs) and traditional anticoagulation with respect to intraocular bleeding. Methods: A comprehensive literature search up to October 2017 yielded 12 randomised controlled trials. Bayesian Markov chain Monte Carlo analysis was employed to investigate the relationship across multiple trials with varying NOACs. Random-effects (informative priors) ORs were applied for the risk of intraocular bleeding due to various treatment measures. Mantel-Haenszel pairwise analyses were also performed. A total of 102 617 participants from 12 different randomised controlled trials were included: 11 746 received apixaban, 16 074 received dabigatran, 18 132 received edoxaban, 11 893 received rivaroxaban and 44 764 received warfarin. Results: Edoxaban was significantly associated with a reduced risk of intraocular bleeding in comparison to warfarin (OR 0.59; 95% CI 0.34 to 0.98). All other findings were non-significant; however, apixaban was the only NOAC to trend with an increased event rate against warfarin. The Bayesian Markov chain Monte Carlo modelling indicated that edoxaban had the greatest chance of producing the lowest rate of bleeding (surface under the cumulative ranking curve 0.8642). Pooled pairwise analysis supported the network analysis results favouring edoxaban against warfarin (OR 0.59; 95% CI 0.39 to 0.90; p=0.02) as well as subgroup analysis of low-dose edoxaban versus warfarin (OR 0.43; 95% CI 0.24 to 0.78). Conclusion: The analysis suggests that edoxaban may be the paramount agent in reducing intraocular bleeding rates. Given a paucity of reporting data for this rare event, future research and confirmation is strongly recommended.
APA, Harvard, Vancouver, ISO, and other styles
44

Keery, John, Andrew Binley, Ahmed Elshenawy, and Jeremy Clifford. "Markov-chain Monte Carlo estimation of distributed Debye relaxations in spectral induced polarization." GEOPHYSICS 77, no. 2 (March 2012): E159—E170. http://dx.doi.org/10.1190/geo2011-0244.1.

Full text
Abstract:
There is growing interest in the link between electrical polarization and physical properties of geologic porous media. In particular, spectral characteristics may be controlled by the same pore geometric properties that influence fluid permeability of such media. Various models have been proposed to describe the spectral-induced-polarization (SIP) response of permeable rocks, and the links between these models and hydraulic properties have been explored, albeit empirically. Computation of the uncertainties in the parameters of such electrical models is essential for effective use of these relationships. The formulation of an electrical dispersion model in terms of a distribution of relaxation times and associated chargeabilities has been demonstrated to be an effective generalized approach; however, thus far, such an approach has only been considered in a deterministic framework. Here, we formulate a spectral model based on a distribution of polarizations. By using a simple polynomial descriptor of such a distribution, we are able to cast the model in a stochastic manner and solve it using a Markov-chain Monte Carlo (McMC) sampler, thus allowing the computation of model-parameter uncertainties. We apply the model to synthetic data and demonstrate that the stochastic method can provide posterior distributions of model parameters with narrow bounds around the true values when little or no noise is added to the synthetic data, with posterior distributions that broaden with increasing noise. We also apply our model to experimental measurements of six sandstone samples and compare physical properties of a number of samples of porous media with stochastic estimates of characteristic relaxation times. We demonstrate the utility of our method on electrical spectra with different response characteristics and show that a single metric of relaxation time for the SIP response is not sufficient to provide clear insight into the physical characteristics of a sample.
APA, Harvard, Vancouver, ISO, and other styles
45

Robini, Marc C., Yoram Bresler, and Isabelle E. Magnin. "ON THE CONVERGENCE OF METROPOLIS-TYPE RELAXATION AND ANNEALING WITH CONSTRAINTS." Probability in the Engineering and Informational Sciences 16, no. 4 (October 2002): 427–52. http://dx.doi.org/10.1017/s0269964802164035.

Full text
Abstract:
We discuss the asymptotic behavior of time-inhomogeneous Metropolis chains for solving constrained sampling and optimization problems. In addition to the usual inverse temperature schedule (β_n)_{n∈ℕ*}, the type of Markov processes under consideration is controlled by a divergent sequence (θ_n)_{n∈ℕ*} of parameters acting as Lagrange multipliers. The associated transition probability matrices (P_{β_n,θ_n})_{n∈ℕ*} are defined by P_{β,θ}(x, y) = q(x, y) exp(−β(W_θ(y) − W_θ(x))⁺) for all pairs (x, y) of distinct elements of a finite set Ω, where q is an irreducible and reversible Markov kernel and the energy function W_θ is of the form W_θ = U + θV for some functions U, V : Ω → ℝ. Our approach, which is based on a comparison of the distribution of the chain at time n with the invariant measure of P_{β_n,θ_n}, requires the computation of an upper bound for the second largest eigenvalue in absolute value of P_{β_n,θ_n}. We extend the geometric bounds derived by Ingrassia and we give new sufficient conditions on the control sequences for the algorithm to simulate a Gibbs distribution with energy U on the constrained set Ω̃ = {x ∈ Ω : V(x) = min_{z∈Ω} V(z)} and to minimize U over Ω̃.
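The scheme described above (a Metropolis chain with energy W_θ = U + θV, diverging inverse temperature β_n and multiplier θ_n) can be sketched on a toy finite state space. The particular functions U, V and the β_n, θ_n schedules below are invented for illustration and do not satisfy the paper's conditions by construction; they only show the mechanism.

```python
import math
import random

random.seed(1)

# Toy finite state space Omega = {0,...,9}: minimize U subject to V(x) = 0.
U = [9, 4, 7, 1, 8, 2, 6, 3, 5, 0]
V = [0, 1, 0, 0, 1, 1, 0, 1, 1, 1]   # feasible iff V[x] == 0

def metropolis_constrained(n_steps=20000):
    x = 0
    for n in range(1, n_steps + 1):
        beta = 0.5 * math.log(1 + n)      # slow inverse-temperature schedule
        theta = math.sqrt(n)              # divergent Lagrange-multiplier schedule
        y = random.randrange(10)          # symmetric proposal kernel q
        dW = (U[y] + theta * V[y]) - (U[x] + theta * V[x])  # W_theta = U + theta*V
        if dW <= 0 or random.random() < math.exp(-beta * dW):
            x = y
        # As beta and theta grow, the chain concentrates on
        # argmin U over the constrained set {x : V(x) = min V}.
    return x

x_final = metropolis_constrained()
print(x_final)
```

Because θ_n diverges, moves into infeasible states are eventually rejected almost surely, while the growing β_n freezes the chain onto low-U feasible states, mirroring the constrained annealing behavior the paper analyzes.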
APA, Harvard, Vancouver, ISO, and other styles
46

Sushchenko, S. P., P. V. Pristupa, P. A. Mikheev, and V. V. Poddubny. "Evaluation of the efficiency of forward error correction of transport protocol data blocks." Proceedings of Tomsk State University of Control Systems and Radioelectronics 23, no. 4 (December 25, 2020): 35–39. http://dx.doi.org/10.21293/1818-0442-2020-23-4-35-39.

Full text
Abstract:
A model of a transport connection controlled by a transport protocol with forward-error-correction technology in the selective failure mode is proposed in the form of a discrete-time Markov chain. The model takes into account the influence of the protocol parameters, the level of errors in the communication channels, the round-trip delay and the technological parameters of forward error correction on the throughput of the transport connection. We analyze how the advantage of the transport protocol with forward error correction over the classical transport protocol depends on these factors.
APA, Harvard, Vancouver, ISO, and other styles
47

Mezhennaya, N. M. "ESTIMATOR FOR THE DISTRIBUTION OF THE NUMBERS OF RUNS IN A RANDOM SEQUENCE CONTROLLED BY STATIONARY MARKOV CHAIN." Prikladnaya diskretnaya matematika, no. 35 (March 1, 2017): 14–28. http://dx.doi.org/10.17223/20710410/35/2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Wenzhao. "Discrete-Time Constrained Average Stochastic Games with Independent State Processes." Mathematics 7, no. 11 (November 11, 2019): 1089. http://dx.doi.org/10.3390/math7111089.

Full text
Abstract:
In this paper, we consider discrete-time constrained average stochastic games with independent state processes. The state space of each player is denumerable and the one-stage cost functions can be unbounded. In these game models, each player chooses an action at each time step, which influences the transition probability of a Markov chain controlled only by that player. Moreover, each player must pay costs that depend on the actions of all the players. First, we give an existence condition for stationary constrained Nash equilibria based on the technique of average occupation measures and the best-response linear program. Then, combining the best-response linear program and the duality program, we present a non-convex mathematical program and prove that each stationary Nash equilibrium is a global minimizer of this program. Finally, a controlled wireless network is presented to illustrate our main results.
APA, Harvard, Vancouver, ISO, and other styles
49

Abolnikov, Lev, and Alexander Dukhovny. "Complex-analytic and matrix-analytic solutions for a queueing system with group service controlled by arrivals." Journal of Applied Mathematics and Stochastic Analysis 13, no. 4 (January 1, 2000): 415–27. http://dx.doi.org/10.1155/s1048953300000356.

Full text
Abstract:
A bulk M/G/1 system is considered that responds to large increases (decreases) of the queue during the service act by alternating between two service modes. The switching rule is based on two “up” and “down” thresholds for total arrivals over the service act. A necessary and sufficient condition for the ergodicity of a Markov chain embedded into the main queueing process is found. Both complex-analytic and matrix-analytic solutions are obtained for the steady-state distribution. Under the assumption of the same service time distribution in both modes, a combined complex-matrix-analytic method is introduced. The technique of “matrix unfolding” is used, which reduces the problem to a matrix iteration process with the block size much smaller than in the direct application of the matrix-analytic method.
APA, Harvard, Vancouver, ISO, and other styles
50

Narwal, Priti, Deepak Kumar, Shailendra Narayan Singh, and Peeyush Tewari. "Stochastic Intrusion Detection Game-Based Arrangement Using Controlled Markov Chain for Prevention of DoS and DDoS Attacks in Cloud." Journal of Information Technology Research 14, no. 4 (October 2021): 45–57. http://dx.doi.org/10.4018/jitr.2021100104.

Full text
Abstract:
A DoS (denial of service) attack is among the most prevalent attacks today and poses a major risk to cybersecurity. When such an attack is launched by numerous distributed machines against a single server machine, it is called a DDoS (distributed denial of service) attack. Additionally, a DoS bypass attack on a DHCP (dynamic host configuration protocol) server is an emerging and common attack in a network. The authors propose a stochastic intrusion detection game-based arrangement using a controlled Markov chain that computes the transition probabilities from one state to the next in a state transition diagram. The authors first model these attacks and then propose a methodology that uses the concept of master and slave IPS (intrusion prevention system). This approach works well when mapped to the modeled attacks and accordingly helps in the detection and prevention of these attacks in a cloud environment.
APA, Harvard, Vancouver, ISO, and other styles