Academic literature on the topic 'Controlled Markov chain'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Controlled Markov chain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Controlled Markov chain"

1

Ray, Anandaroop, David L. Alumbaugh, G. Michael Hoversten, and Kerry Key. "Robust and accelerated Bayesian inversion of marine controlled-source electromagnetic data using parallel tempering." GEOPHYSICS 78, no. 6 (November 1, 2013): E271–E280. http://dx.doi.org/10.1190/geo2013-0128.1.

Full text
Abstract:
Bayesian methods can quantify the model uncertainty that is inherent in inversion of highly nonlinear geophysical problems. In this approach, a model likelihood function based on knowledge of the data noise statistics is used to sample the posterior model distribution, which conveys information on the resolvability of the model parameters. Because these distributions are multidimensional and nonlinear, we used Markov chain Monte Carlo methods for highly efficient sampling. A single Markov chain can become stuck in a local probability mode; to some extent, this problem can be mitigated by running independent randomized Markov chains, but unless a very large number of chains are run, biased results may be obtained. We got around these limitations by running parallel, interacting Markov chains with “annealed” or “tempered” likelihoods, which enable the whole system of chains to effectively escape local probability maxima. We tested this approach using a transdimensional algorithm, where the number of model parameters as well as the parameters themselves were treated as unknowns during the inversion. This gave us a measure of uncertainty that was independent of any particular parameterization. We then subset the ensemble of inversion models to either reduce uncertainty based on a priori constraints or to examine the probability of various geologic scenarios. We demonstrated our algorithms’ fast convergence to the posterior model distribution with a synthetic 1D marine controlled-source electromagnetic data example. The speed-up gained from this new approach will facilitate the practical implementation of future 2D and 3D Bayesian inversions, where the cost of each forward evaluation is significantly more expensive than in the 1D case.
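For illustration, the tempering idea described in this abstract can be reduced to a few lines of Python. This is a minimal sketch, not the authors' implementation: the bimodal target, temperature ladder, and proposal scale below are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Hypothetical bimodal posterior: equal mixture of N(-3, 1) and N(3, 1).
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]        # ladder of "tempered" likelihoods
chains = np.zeros(len(temps))       # one chain per temperature
samples = []

for it in range(20000):
    # Metropolis update within each tempered chain.
    for i, T in enumerate(temps):
        prop = chains[i] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < (log_post(prop) - log_post(chains[i])) / T:
            chains[i] = prop
    # Propose swapping a random adjacent pair of temperatures.
    j = rng.integers(len(temps) - 1)
    log_a = (1 / temps[j] - 1 / temps[j + 1]) * (log_post(chains[j + 1]) - log_post(chains[j]))
    if np.log(rng.random()) < log_a:
        chains[j], chains[j + 1] = chains[j + 1], chains[j]
    samples.append(chains[0])       # retain only the target-temperature chain

print("fraction of samples in the right-hand mode:", np.mean(np.array(samples) > 0))
```

Swaps between adjacent temperatures let the cold chain escape local modes that would trap a single Metropolis sampler.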
2

Lefebvre, Mario, and Moussa Kounta. "Discrete homing problems." Archives of Control Sciences 23, no. 1 (March 1, 2013): 5–18. http://dx.doi.org/10.2478/v10170-011-0039-6.

Full text
Abstract:
We consider the so-called homing problem for discrete-time Markov chains. The aim is to optimally control the Markov chain until it hits a given boundary. Depending on a parameter in the cost function, the optimizer either wants to maximize or minimize the time spent by the controlled process in the continuation region. Particular problems are considered and solved explicitly. Both the optimal control and the value function are obtained.
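As a hedged illustration of the homing setup, the following value-iteration sketch controls a birth-death chain until it hits the boundary {0, N}; the control set, step probabilities, and cost weights are hypothetical, not taken from the paper. With theta > 0 the controller minimizes time in the continuation region; a negative theta would reward lingering, mirroring the parameter dependence described in the abstract.

```python
import numpy as np

N = 10                        # continuation region {1, ..., N-1}; boundary {0, N}
theta = 1.0                   # positive: time is costly; negative would reward delay
controls = [-0.4, 0.0, 0.4]   # hypothetical drift controls
V = np.zeros(N + 1)           # value is 0 on the boundary

for _ in range(500):          # value iteration to (approximate) convergence
    V_new = V.copy()
    for s in range(1, N):
        q = []
        for u in controls:
            p_up = 0.5 + u / 2.0          # controlled one-step probabilities
            q.append(u ** 2 + theta + p_up * V[s + 1] + (1 - p_up) * V[s - 1])
        V_new[s] = min(q)
    V = V_new

print(np.round(V, 3))         # expected optimal cost-to-boundary from each state
```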
3

Andini, Enggartya, Sudarno Sudarno, and Rita Rahmawati. "PENERAPAN METODE PENGENDALIAN KUALITAS MEWMA BERDASARKAN ARL DENGAN PENDEKATAN RANTAI MARKOV (Studi Kasus: Batik Semarang 16, Meteseh)." Jurnal Gaussian 10, no. 1 (February 28, 2021): 125–35. http://dx.doi.org/10.14710/j.gauss.v10i1.30939.

Full text
Abstract:
An industrial company requires quality control to maintain consistent quality in its production and thus remain competitive in the world market. In the industrial sector, most processes are influenced by more than one quality characteristic. One tool for controlling several quality characteristics simultaneously is the Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. The chart is used to determine whether the process is in statistical control; if it is not, the analysis can proceed with the Average Run Length (ARL) computed by a Markov chain approach. In a Markov chain, the probability of today's event is influenced only by yesterday's event; here the events concern sampling the production process of batik cloth in order to obtain products that meet company standards. The ARL is the average number of sample points drawn before a point signals an out-of-control state. In this study, 60 samples were used, comprising three quality characteristics: cloth length, cloth width, and production time of hand-drawn batik at Batik Semarang 16, Meteseh. The weighting λ of the MEWMA chart was determined by trial and error. With λ = 0.6 the chart cannot be considered stable and in control: the calculations give an upper control limit (BKA) of 11.3864 and a lower control limit (BKB) of 0, and among the 60 samples one Tj² value of 15.70871 exceeds the upper control limit, indicating that the production process is not statistically controlled. The MEWMA chart can be improved based on the ARL with the Markov chain approach; here the ARL value used is 200, the process shift is 1.7, and r = 0.28, where r is a constant read from the r-parameter graph. The optimal MEWMA chart based on the ARL with the Markov chain approach can be considered stable and in control when no Tj² value falls outside the upper control limit (BKA). The results also show that the process is not statistically capable, since MCpm = 0.516797 and MCpmk = 0.437807, both capability indices being less than 1. Keywords: handmade batik, Multivariate Exponentially Weighted Moving Average (MEWMA), Average Run Length (ARL), process capability.
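A minimal sketch of the MEWMA recursion and its T² statistic (in the standard Lowry et al. form) is given below; the data are simulated stand-ins, while λ = 0.6 and the upper control limit 11.3864 are the values quoted in the abstract.

```python
import numpy as np

lam = 0.6                                 # the weighting quoted in the abstract
UCL = 11.3864                             # upper control limit (BKA) from the abstract
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))              # stand-in data: deviations from target mean
Sigma = np.cov(X, rowvar=False)           # in practice, the in-control covariance
Z = np.zeros(3)

for i, x in enumerate(X, start=1):
    Z = lam * x + (1 - lam) * Z                              # MEWMA recursion
    Sigma_Z = lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)) * Sigma
    T2 = Z @ np.linalg.solve(Sigma_Z, Z)                     # T² chart statistic
    if T2 > UCL:
        print(f"sample {i}: T2 = {T2:.3f} exceeds the upper control limit")
```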
4

CAI, KAI-YUAN, TSONG YUEH CHEN, YONG-CHAO LI, YUEN TAK YU, and LEI ZHAO. "ON THE ONLINE PARAMETER ESTIMATION PROBLEM IN ADAPTIVE SOFTWARE TESTING." International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (May 2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.

Full text
Abstract:
Software cybernetics is an emerging area that explores the interplay between software and control. The controlled Markov chain (CMC) approach to software testing supports the idea of software cybernetics by treating software testing as a control problem, where the software under test serves as a controlled object modeled by a controlled Markov chain and the software testing strategy serves as the corresponding controller. The software under test and the corresponding software testing strategy form a closed-loop feedback control system. The theory of controlled Markov chains is used to design and optimize the testing strategy in accordance with the testing/reliability goal given explicitly and a priori. Adaptive software testing adjusts and improves the software testing strategy online by using the testing data collected in the course of software testing. In doing so, the online parameter estimates play a key role. In this paper, we study the effects of a genetic algorithm and the gradient method for online parameter estimation in adaptive software testing. We find that the genetic algorithm is effective and does not require prior knowledge of the software parameters of concern. Although the genetic algorithm is computationally intensive, it leads the adaptive software testing strategy to an optimal software testing strategy that is determined by optimizing a given testing goal, such as minimizing the total cost incurred for removing a given number of defects. On the other hand, the gradient method is computationally favorable, but requires appropriate initial values of the software parameters of concern. It may lead, or fail to lead, the adaptive software testing strategy to an optimal software testing strategy, depending on whether the given initial parameter values are appropriate. In general, the genetic algorithm should be used instead of the gradient method in adaptive software testing. Simulation results show that adaptive software testing does work and outperforms random testing.
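As a loose illustration of online estimation inside an adaptive testing loop (not the paper's CMC formulation), the toy below greedily applies the test action with the smallest estimated cost per detected defect while updating running estimates; all probabilities and costs are hypothetical. A purely greedy scheme can be misled by poor initial estimates, echoing the paper's observation about the gradient method's sensitivity to initial values.

```python
import numpy as np

rng = np.random.default_rng(7)
true_p = [0.02, 0.05]           # unknown per-action defect-detection probabilities
test_cost = [1.0, 2.5]          # hypothetical cost of applying each test action
est_p = np.array([0.5, 0.5])    # initial estimates, cf. the gradient method's initial values
n_used = np.zeros(2)

for step in range(5000):
    # Greedy adaptive strategy: apply the test action with the smallest
    # estimated cost per detected defect under the current estimates.
    a = int(np.argmin(np.array(test_cost) / np.maximum(est_p, 1e-6)))
    detected = rng.random() < true_p[a]
    n_used[a] += 1
    est_p[a] += (detected - est_p[a]) / n_used[a]   # running-average update

print("estimates:", est_p, "usage counts:", n_used)
```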
5

Li, Jinzhi, and Shixia Ma. "Pricing Options with Credit Risk in Markovian Regime-Switching Markets." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/621371.

Full text
Abstract:
This paper investigates the valuation of a European option with credit risk in a reduced-form model when the stock price is driven by the so-called Markov-modulated jump-diffusion process, in which the arrival rate of rare events and the volatility rate of the stock are controlled by a continuous-time Markov chain. We also assume that the interest rate and the default intensity follow Vasicek models whose parameters are governed by the same Markov chain. We study the pricing of the European option and present numerical illustrations.
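A stripped-down Monte Carlo sketch of Markov-modulated volatility is shown below; it keeps only the regime-switching ingredient and ignores the paper's jumps, credit risk, and Vasicek rates, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical two-regime data: volatility modulated by a continuous-time chain.
Q = np.array([[-0.5, 0.5],
              [1.0, -1.0]])            # generator of the modulating Markov chain
sigma = np.array([0.15, 0.35])         # regime-dependent volatility
r, S0, K, T, dt = 0.03, 100.0, 100.0, 1.0, 1 / 252

payoffs = []
for _ in range(5000):
    S, state = S0, 0
    for _ in range(int(T / dt)):
        # Approximate the chain's exponential holding times on the time grid.
        if rng.random() < -Q[state, state] * dt:
            state = 1 - state
        S *= np.exp((r - 0.5 * sigma[state] ** 2) * dt
                    + sigma[state] * np.sqrt(dt) * rng.normal())
    payoffs.append(max(S - K, 0.0))

print("Monte Carlo call price:", np.exp(-r * T) * np.mean(payoffs))
```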
6

Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 2 (June 1985): 408–23. http://dx.doi.org/10.2307/1427148.

Full text
Abstract:
In this paper we study a multi-channel queueing model with N waiting places and a non-recurrent input flow dependent on the queue length at the time of each arrival. The queue length is treated as a basic process. We first determine explicitly the limit distribution of the embedded Markov chain. Then, by introducing an auxiliary Markov process, we find a simple relationship between the limiting distribution of the Markov chain and the limiting distribution of the original process with continuous time parameter. Here we simultaneously combine two methods: solving the corresponding Kolmogorov system of the differential equations, and using an approach based on the theory of semi-regenerative processes. Among various applications of multi-channel queues with state-dependent input stream, we consider a closed single-server system with reserve replacement and state-dependent service, which turns out to be dual (in a certain sense) in relation to our model; an optimization problem is also solved, and an interpretation by means of tandem systems is discussed.
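The embedded-chain step in this abstract amounts to computing a stationary distribution; a minimal sketch with a hypothetical three-state transition matrix:

```python
import numpy as np

# Hypothetical embedded transition matrix of the queue length observed at
# arrival epochs (three states, waiting room capped).
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

w, v = np.linalg.eig(P.T)                       # left eigenvector for eigenvalue 1
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(pi, pi @ P)                               # pi @ P reproduces pi
```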
7

Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 2 (June 1985): 408–23. http://dx.doi.org/10.1017/s0001867800015044.

Full text
Abstract:
In this paper we study a multi-channel queueing model with N waiting places and a non-recurrent input flow dependent on the queue length at the time of each arrival. The queue length is treated as a basic process. We first determine explicitly the limit distribution of the embedded Markov chain. Then, by introducing an auxiliary Markov process, we find a simple relationship between the limiting distribution of the Markov chain and the limiting distribution of the original process with continuous time parameter. Here we simultaneously combine two methods: solving the corresponding Kolmogorov system of the differential equations, and using an approach based on the theory of semi-regenerative processes. Among various applications of multi-channel queues with state-dependent input stream, we consider a closed single-server system with reserve replacement and state-dependent service, which turns out to be dual (in a certain sense) in relation to our model; an optimization problem is also solved, and an interpretation by means of tandem systems is discussed.
8

Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 2 (June 1987): 457–65. http://dx.doi.org/10.2307/3214269.

Full text
Abstract:
The long-run average cost per unit time of operating a finite dam controlled by a policy (Lam Yeh (1985)) is determined when the cumulative input process is the integral of a Markov chain. A penalty cost which accrues continuously at a rate g(X(t)), where g is a bounded measurable function of the content, is also introduced. An example where the input rate is a two-state Markov chain is considered in detail to illustrate the computations.
9

Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 2 (June 1987): 457–65. http://dx.doi.org/10.1017/s0021900200031090.

Full text
Abstract:
The long-run average cost per unit time of operating a finite dam controlled by a policy (Lam Yeh (1985)) is determined when the cumulative input process is the integral of a Markov chain. A penalty cost which accrues continuously at a rate g(X(t)), where g is a bounded measurable function of the content, is also introduced. An example where the input rate is a two-state Markov chain is considered in detail to illustrate the computations.
10

Fort, Gersende. "Central limit theorems for stochastic approximation with controlled Markov chain dynamics." ESAIM: Probability and Statistics 19 (2015): 60–80. http://dx.doi.org/10.1051/ps/2014013.

Full text
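The title's setting can be sketched schematically: a stochastic approximation recursion θ_{n+1} = θ_n + γ_{n+1} H(θ_n, X_{n+1}), where the chain X evolves under a kernel that depends on the running iterate θ. The toy dynamics below are hypothetical, chosen only so the recursion has a clear root.

```python
import numpy as np

rng = np.random.default_rng(3)

def step(x, theta):
    # Controlled Markov dynamics: the kernel of X depends on the iterate theta
    # (hypothetical two-state chain; its stationary law here is uniform).
    p_stay = 1.0 / (1.0 + np.exp(-theta))
    return x if rng.random() < p_stay else 1 - x

def H(theta, x):
    # Noisy field; its stationary mean h(theta) = -0.1 * theta has root 0.
    return (x - 0.5) - 0.1 * theta

theta, x = 2.0, 0
for n in range(1, 100001):
    x = step(x, theta)
    theta += (1.0 / n) * H(theta, x)    # step sizes gamma_n = 1/n

print(theta)  # fluctuations around the root are what the paper's CLTs describe
```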

Dissertations / Theses on the topic "Controlled Markov chain"

1

Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints." Thesis, Indian Institute of Science, 1995. https://etd.iisc.ac.in/handle/2005/162.

Full text
Abstract:
In this thesis, we consider optimal control problems arising in high-speed integrated communication networks with Quality of Service (QOS) constraints. Integrated networks are expected to carry a large variety of traffic sources with widely varying traffic characteristics and performance requirements. Broadly, the traffic sources fall into two categories: (a) real-time sources with specified performance criteria, like small end to end delay and loss probability (sources of this type are referred to as Type 1 sources below), and (b) sources that do not have stringent performance criteria and do not demand performance guarantees from the network - the so-called Best Effort Type sources (these are referred to as Type 2 sources below). From the network's point of view, Type 2 sources are much more "controllable" than Type 1 sources, in the sense that the Type 2 sources can be dynamically slowed down, stopped or speeded up depending on traffic congestion in the network, while for Type 1 sources, the only control action available in case of congestion is packet dropping. Carrying sources of both types in the same network concurrently while meeting the performance objectives of Type 1 sources is a challenge and raises the question of equitable sharing of resources. The objective is to carry as much Type 2 traffic as possible without sacrificing the performance requirements of Type 1 traffic. We consider simple models that capture this situation. Consider a network node through which two connections pass, one each of Types 1 and 2. One would like to maximize the throughput of the Type 2 connection while ensuring that the Type 1 connection's performance objectives are met. This can be set up as a constrained optimization problem that, however, is very hard to solve. We introduce a parameter b that represents the "cost" of buffer occupancy by Type 2 traffic. Since buffer space is limited and shared, a queued Type 2 packet means that a buffer position is not available for storing a Type 1 packet; to discourage the Type 2 connection from hogging the buffer, the cost parameter b is introduced, while a reward for each Type 2 packet coming into the buffer encourages the Type 2 connection to transmit at a high rate. Using standard on-off models for the Type 1 sources, we show how values can be assigned to the parameter b; the value depends on the characteristics of the Type 1 connection passing through the node, i.e., whether it is a Variable Bit Rate (VBR) video connection or a Continuous Bit Rate (CBR) connection etc. Our approach gives concrete networking significance to the parameter b, which has long been considered as an abstract parameter in reward-penalty formulations of flow control problems (for example, [Stidham '85]). Having seen how to assign values to b, we focus on the Type 2 connection next. Since Type 2 connections do not have strict performance requirements, it is possible to defer transmitting a Type 2 packet, if the conditions downstream so warrant. This leads to the question: what is the "best" transmission policy for Type 2 packets? Decisions to transmit or not must be based on congestion conditions downstream; however, the network state that is available at any instant gives information that is old, since feedback latency is an inherent feature of high speed networks. Thus the problem is to identify the best transmission policy under delayed feedback information. We study this problem in the framework of Markov Decision Theory. 
With appropriate assumptions on the arrivals, service times and scheduling discipline at a network node, we formulate our problem as a Partially Observable Controlled Markov Chain (PO-CMC). We then give an equivalent formulation of the problem in terms of a Completely Observable Controlled Markov Chain (CO-CMC) that is easier to deal with. Using Dynamic Programming and Value Iteration, we identify structural properties of an optimal transmission policy when the delay in obtaining feedback information is one time slot. For both discounted and average cost criteria, we show that the optimal policy has a two-threshold structure, with the threshold on the observed queue length depending on whether a Type 2 packet was transmitted in the last slot or not. For an observation delay k ≥ 2, the Value Iteration technique does not yield results. We use the structure of the problem to provide computable upper and lower bounds to the optimal value function. A study of these bounds yields information about the structure of the optimal policy for this problem. We show that for appropriate values of the parameters of the problem, depending on the number of transmissions in the last k steps, there is an "upper cut off" number which is a value such that if the observed queue length is greater than or equal to this number, the optimal action is to not transmit. Since the number of transmissions in the last k steps is between 0 and k, both inclusive, we have a stack of (k + 1) upper cut off values. We conjecture that these (k + 1) values are thresholds and the optimal policy for this problem has a (k + 1)-threshold structure. So far it has been assumed that the parameters of the problem are known at the transmission control point. In reality, this is usually not known and changes over time. Thus, one needs an adaptive transmission policy that keeps track of and adjusts to changing network conditions. We show that the information structure in our problem admits a simple adaptive policy that performs reasonably well in a quasi-static traffic environment. Up to this point, the models we have studied correspond to a single hop in a virtual connection. We consider the multiple hop problem next. A basic matter of interest here is whether one should have end to end or hop by hop controls. We develop a sample path approach to answer this question. It turns out that depending on the relative values of the b parameter in the transmitting node and its downstream neighbour, sometimes end to end controls are preferable while at other times hop by hop controls are preferable. Finally, we consider a routing problem in a high speed network where feedback information is delayed, as usual. As before, we formulate the problem in the framework of Markov Decision Theory and apply Value Iteration to deduce structural properties of an optimal control policy. We show that for both discounted and average cost criteria, the optimal policy for an observation delay of one slot is Join the Shortest Expected Queue (JSEQ) - a natural and intuitively satisfactory extension of the well-known Join the Shortest Queue (JSQ) policy that is optimal when there is no feedback delay (see, for example, [Weber '78]). However, for an observation delay of more than one slot, we show that the JSEQ policy is not optimal. Determining the structure of the optimal policy for a delay k ≥ 2 appears to be very difficult using the Value Iteration approach; we explore some likely policies by simulation.
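A hedged sketch of the JSEQ idea under delayed feedback: route to the queue with the smallest expected length, correcting a stale observation for one's own in-flight packets. The drift correction below is a simplification for illustration, not the thesis's exact conditional expectation.

```python
import numpy as np

def jseq_route(observed_lengths, routed_since_obs, arrival_rate, service_rate, delay):
    """Join the Shortest Expected Queue under feedback delay (illustrative).

    observed_lengths: queue lengths as seen `delay` slots ago;
    routed_since_obs: packets this controller has sent to each queue since then.
    The drift term is a crude stand-in for the conditional expectation.
    """
    expected = (np.asarray(observed_lengths, dtype=float)
                + np.asarray(routed_since_obs, dtype=float)
                + delay * (arrival_rate - service_rate))
    return int(np.argmin(np.maximum(expected, 0.0)))

# Route the next packet given observations that are one slot old.
print(jseq_route([4, 6], [1, 0], arrival_rate=0.3, service_rate=0.5, delay=1))
```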
2

Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints." Thesis, Indian Institute of Science, 1995. http://hdl.handle.net/2005/162.

Full text
Abstract:
In this thesis, we consider optimal control problems arising in high-speed integrated communication networks with Quality of Service (QOS) constraints. Integrated networks are expected to carry a large variety of traffic sources with widely varying traffic characteristics and performance requirements. Broadly, the traffic sources fall into two categories: (a) real-time sources with specified performance criteria, like small end to end delay and loss probability (sources of this type are referred to as Type 1 sources below), and (b) sources that do not have stringent performance criteria and do not demand performance guarantees from the network - the so-called Best Effort Type sources (these are referred to as Type 2 sources below). From the network's point of view, Type 2 sources are much more "controllable" than Type 1 sources, in the sense that the Type 2 sources can be dynamically slowed down, stopped or speeded up depending on traffic congestion in the network, while for Type 1 sources, the only control action available in case of congestion is packet dropping. Carrying sources of both types in the same network concurrently while meeting the performance objectives of Type 1 sources is a challenge and raises the question of equitable sharing of resources. The objective is to carry as much Type 2 traffic as possible without sacrificing the performance requirements of Type 1 traffic. We consider simple models that capture this situation. Consider a network node through which two connections pass, one each of Types 1 and 2. One would like to maximize the throughput of the Type 2 connection while ensuring that the Type 1 connection's performance objectives are met. This can be set up as a constrained optimization problem that, however, is very hard to solve. We introduce a parameter b that represents the "cost" of buffer occupancy by Type 2 traffic. Since buffer space is limited and shared, a queued Type 2 packet means that a buffer position is not available for storing a Type 1 packet; to discourage the Type 2 connection from hogging the buffer, the cost parameter b is introduced, while a reward for each Type 2 packet coming into the buffer encourages the Type 2 connection to transmit at a high rate. Using standard on-off models for the Type 1 sources, we show how values can be assigned to the parameter b; the value depends on the characteristics of the Type 1 connection passing through the node, i.e., whether it is a Variable Bit Rate (VBR) video connection or a Continuous Bit Rate (CBR) connection etc. Our approach gives concrete networking significance to the parameter b, which has long been considered as an abstract parameter in reward-penalty formulations of flow control problems (for example, [Stidham '85]). Having seen how to assign values to b, we focus on the Type 2 connection next. Since Type 2 connections do not have strict performance requirements, it is possible to defer transmitting a Type 2 packet, if the conditions downstream so warrant. This leads to the question: what is the "best" transmission policy for Type 2 packets? Decisions to transmit or not must be based on congestion conditions downstream; however, the network state that is available at any instant gives information that is old, since feedback latency is an inherent feature of high speed networks. Thus the problem is to identify the best transmission policy under delayed feedback information. We study this problem in the framework of Markov Decision Theory. 
With appropriate assumptions on the arrivals, service times and scheduling discipline at a network node, we formulate our problem as a Partially Observable Controlled Markov Chain (PO-CMC). We then give an equivalent formulation of the problem in terms of a Completely Observable Controlled Markov Chain (CO-CMC) that is easier to deal with. Using Dynamic Programming and Value Iteration, we identify structural properties of an optimal transmission policy when the delay in obtaining feedback information is one time slot. For both discounted and average cost criteria, we show that the optimal policy has a two-threshold structure, with the threshold on the observed queue length depending on whether a Type 2 packet was transmitted in the last slot or not. For an observation delay k ≥ 2, the Value Iteration technique does not yield results. We use the structure of the problem to provide computable upper and lower bounds to the optimal value function. A study of these bounds yields information about the structure of the optimal policy for this problem. We show that for appropriate values of the parameters of the problem, depending on the number of transmissions in the last k steps, there is an "upper cut off" number which is a value such that if the observed queue length is greater than or equal to this number, the optimal action is to not transmit. Since the number of transmissions in the last k steps is between 0 and k, both inclusive, we have a stack of (k + 1) upper cut off values. We conjecture that these (k + 1) values are thresholds and the optimal policy for this problem has a (k + 1)-threshold structure. So far it has been assumed that the parameters of the problem are known at the transmission control point. In reality, this is usually not known and changes over time. Thus, one needs an adaptive transmission policy that keeps track of and adjusts to changing network conditions. We show that the information structure in our problem admits a simple adaptive policy that performs reasonably well in a quasi-static traffic environment. Up to this point, the models we have studied correspond to a single hop in a virtual connection. We consider the multiple hop problem next. A basic matter of interest here is whether one should have end to end or hop by hop controls. We develop a sample path approach to answer this question. It turns out that depending on the relative values of the b parameter in the transmitting node and its downstream neighbour, sometimes end to end controls are preferable while at other times hop by hop controls are preferable. Finally, we consider a routing problem in a high speed network where feedback information is delayed, as usual. As before, we formulate the problem in the framework of Markov Decision Theory and apply Value Iteration to deduce structural properties of an optimal control policy. We show that for both discounted and average cost criteria, the optimal policy for an observation delay of one slot is Join the Shortest Expected Queue (JSEQ) - a natural and intuitively satisfactory extension of the well-known Join the Shortest Queue (JSQ) policy that is optimal when there is no feedback delay (see, for example, [Weber '78]). However, for an observation delay of more than one slot, we show that the JSEQ policy is not optimal. Determining the structure of the optimal policy for a delay k ≥ 2 appears to be very difficult using the Value Iteration approach; we explore some likely policies by simulation.
3

Brau, Rojas Agustin. "Controlled Markov chains with risk-sensitive average cost criterion." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284004.

Full text
Abstract:
Discrete controlled Markov chains with finite action space and bounded cost per stage are studied in this dissertation. The performance index function, the exponential average cost (EAC), models risk-sensitivity by means of an exponential (dis)utility function. First, for the finite state space model, the EAC corresponding to a fixed stationary (deterministic) policy is characterized in terms of the spectral radii of matrices associated to irreducible communicating classes of both recurrent and transient states. This result generalizes a well known theorem of Howard and Matheson, which treats the particular case in which the Markov cost chain has only one closed class of states. Then, it is shown that under strong recurrence conditions, the risk-sensitive model approaches the risk-null model when the risk-sensitivity coefficient is small. However, it is proved and also illustrated by means of examples, that in general, fundamental differences arise between both models, e.g., the EAC may depend on the cost structure at the transient states. In particular, the behavior of the EAC for large risk-sensitivity is also analyzed. After showing that an exponential average optimality equation holds for the countable state space model, a proof of the existence of solutions to that equation for the finite model under a simultaneous Doeblin condition is provided, which is simpler than that given in recent work of Cavazos-Cadena and Fernandez-Gaucherand. The adverse impact of "large risk-sensitivity" on recently obtained conditions for the existence of solutions to an optimality inequality is illustrated by means of an example. Finally, a counterexample is included to show that, unlike previous results for finite models, a controlled Markov chain with infinite state space may not have ultimately stationary optimal policies in the risk-sensitive (exponential) discounted cost case, in general. Moreover, a simultaneous Doeblin condition is satisfied in our example, an assumption that enables the vanishing discount approach in the risk-null case, thus further suggesting that more restrictive conditions than those commonly used in the risk neutral context are needed to develop the mentioned approach for risk-sensitive criteria.
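For a fixed stationary policy on an irreducible chain, the Howard-Matheson characterization mentioned above reduces the EAC to a spectral radius; a small numerical sketch with hypothetical costs and transitions (the dissertation's point is precisely that transient states complicate this simple picture):

```python
import numpy as np

# Hypothetical data for a fixed stationary policy on an irreducible chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition matrix under the policy
c = np.array([1.0, 3.0])          # cost per stage
lam = 0.5                         # risk-sensitivity coefficient

# Howard-Matheson style evaluation:
#   EAC = (1/lam) * log(spectral radius of diag(exp(lam * c)) @ P).
M = np.diag(np.exp(lam * c)) @ P
eac = np.log(np.max(np.abs(np.linalg.eigvals(M)))) / lam
print("exponential average cost:", eac)   # tends to the risk-neutral average as lam -> 0
```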
4

Avila, Godoy Micaela Guadalupe. "Controlled Markov chains with exponential risk-sensitive criteria: Modularity, structured policies and applications." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/289049.

Full text
Abstract:
Controlled Markov chains (CMC's) are mathematical models for the control of sequential decision stochastic systems. Starting in the early 1950's with the work of R. Bellman, many basic contributions to CMC's have been made, and numerous applications to engineering, operations research, and economics, among other areas, have been developed. The optimal control problem for CMC's with a countable state space, and with a general action space, is studied for (exponential) total and discounted risk-sensitive cost criteria. General (dynamic programming) results for the finite and the infinite horizon cases are obtained. A set of general conditions is presented to obtain structural properties of the optimal value function and policies. In particular, monotonicity properties of value functions and optimal policies are established. The approach followed is to show the (sub)modularity of certain functions (related to the optimality equations). Four application studies are used to illustrate the general results obtained in this dissertation: equipment replacement, optimal resource allocation, scheduling of uncertain jobs, and inventory control.
5

Figueiredo, Danilo Zucolli. "Discrete-time jump linear systems with Markov chain in a general state space." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-18012017-115659/.

Full text
Abstract:
This thesis deals with discrete-time Markov jump linear systems (MJLS) with Markov chain in a general Borel space S. Several control issues have been addressed for this class of dynamic systems, including stochastic stability (SS), linear quadratic (LQ) optimal control synthesis, filter design and a separation principle. Necessary and sufficient conditions for SS have been derived. It was shown that SS is equivalent to the spectral radius of an operator being less than 1 or to the existence of a solution to a "Lyapunov-like" equation. Based on the SS concept, the finite- and infinite-horizon LQ optimal control problems were tackled. The solution to the finite- (infinite-)horizon LQ optimal control problem was derived from the associated control S-coupled Riccati difference (algebraic) equations. By S-coupled it is meant that the equations are coupled via an integral over a transition probability kernel having a density with respect to a σ-finite measure on the Borel space S. The design of linear Markov jump filters was analyzed and a solution to the finite- (infinite-)horizon filtering problem was obtained based on the associated filtering S-coupled Riccati difference (algebraic) equations. Conditions for the existence and uniqueness of a stabilizing positive semi-definite solution to the control and filtering S-coupled algebraic Riccati equations have also been derived. Finally a separation principle for discrete-time MJLS with Markov chain in a general state space was obtained. It was shown that the optimal controller for a partial information optimal control problem separates the partial information control problem into two problems, one associated with a filtering problem and the other associated with an optimal control problem with complete information. It is expected that the results obtained in this thesis may motivate further research on discrete-time MJLS with Markov chain in a general state space.
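For the finite-mode special case (the thesis itself works in a general Borel space), mean-square stability reduces to a spectral-radius test on a Kronecker-product operator, in the standard Costa-Fragoso-Marques form; a sketch with hypothetical mode dynamics:

```python
import numpy as np

# Finite-mode special case of the thesis's general Borel-space setting:
# x_{k+1} = A_{theta_k} x_k with a 2-state jump chain (hypothetical data).
A = [np.array([[0.8, 0.2], [0.0, 0.7]]),
     np.array([[1.1, 0.0], [0.1, 0.5]])]
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

n = A[0].shape[0]
# Mean-square stability test (Costa-Fragoso-Marques):
# spectral radius of (P^T kron I) @ blockdiag(A_i kron A_i) must be < 1.
Lam = np.zeros((len(A) * n * n, len(A) * n * n))
for i, Ai in enumerate(A):
    Lam[i * n * n:(i + 1) * n * n, i * n * n:(i + 1) * n * n] = np.kron(Ai, Ai)
T = np.kron(P.T, np.eye(n * n)) @ Lam
print("mean-square stable:", np.max(np.abs(np.linalg.eigvals(T))) < 1)
```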
6

Franco, Bruno Chaves [UNESP]. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/93084.

Full text
Abstract:
This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval and the control limit coefficient, using a genetic algorithm to search for the minimum cost. A Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that the autocorrelation has an adverse effect on the parameters of the control chart, increasing the monitoring cost and significantly reducing its efficiency.
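The Markov-chain route to the ARL mentioned here is, in outline, a first-passage-time computation: discretize the chart statistic into transient (in-control) states with sub-stochastic matrix Q, and solve (I − Q)x = 1. A generic sketch with hypothetical transition values:

```python
import numpy as np

# Markov-chain ARL computation in outline: Q holds transitions among the
# in-control (transient) chart states; leaving Q is a signal. Values hypothetical.
Q = np.array([[0.90, 0.08, 0.01],
              [0.10, 0.80, 0.08],
              [0.02, 0.10, 0.80]])

arl = np.linalg.solve(np.eye(3) - Q, np.ones(3))   # ARL = (I - Q)^{-1} 1
print("ARL from each starting state:", arl)
```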
7

Trindade, Anderson Laécio Galindo. "Contribuições para o controle on-line de processos por atributos." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-02062008-132508/.

Full text
Abstract:
The quality control procedure for attributes proposed by Taguchi et al. (1989) consists in inspecting a single item at every m produced items and, based on the result of each inspection, deciding whether the non-conforming fraction has increased or not. If an inspected item is declared non-conforming, the process is stopped and adjusted, assuming that it has changed to the out-of-control condition. Since: i) the inspection system is subject to misclassification and it is possible to carry out repetitive classifications of the inspected item; ii) the non-conforming fraction, when the process is out of control, can be described by y(x); iii) the decision about stopping the process can be based on the last h inspections, a model which considers those points is developed. Using properties of ergodic Markov chains, the average cost expression is calculated and can be minimized by parameters beyond m: the number of repetitive classifications (r); the minimum number of classifications as conforming to declare an item as conforming (s); the number of inspections taken into account (h); and the stopping criterion (u). The results obtained show that: repetitive classifications of the inspected item can be a viable option if only one item is used to decide about the process condition; a finite Markov chain can be used to represent the control procedure in the presence of a function y(x); deciding about the process condition based on the last h inspections has a significant impact on the average cost.
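The ergodic-chain costing used here can be illustrated in a few lines: model the inspection/adjustment cycle as an ergodic chain and weight per-state costs by the stationary distribution. The states, transitions, and costs below are hypothetical.

```python
import numpy as np

# Hypothetical three-state cycle: in control, out of control, under adjustment.
P = np.array([[0.95, 0.04, 0.01],
              [0.00, 0.70, 0.30],
              [0.90, 0.10, 0.00]])
cost = np.array([0.1, 2.0, 5.0])   # hypothetical per-period costs

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print("long-run average cost per period:", pi @ cost)
```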
8

Franco, Bruno Chaves. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados." Guaratinguetá: [s.n.], 2011. http://hdl.handle.net/11449/93084.

Full text
Abstract:
This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval and the control limit coefficient, using a genetic algorithm to search for the minimum cost. A Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that the autocorrelation has an adverse effect on the parameters of the control chart, increasing the monitoring cost and significantly reducing its efficiency.
9

Marcos, Lucas Barbosa. "Controle de sistemas lineares sujeitos a saltos Markovianos aplicado em veículos autônomos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-27042017-085140/.

Full text
Abstract:
In today's society, automobile vehicles are increasingly integrated into people's daily activities, with more than 1 billion of them on the streets around the world. As they are controlled by drivers, vehicles are subject to failures caused by human mistakes that lead to accidents, injuries, deaths and other losses. Autonomous vehicle control has shown itself to be an alternative in the pursuit of damage reduction, and it is applied by different institutions in many countries. Therefore, it is a central subject in the area of control systems. This work, relying on mathematical descriptions of vehicle behavior, aims to develop and apply an efficient autonomous control method based on a state-space formulation. Considering that gear shifts, especially in an autonomous driving scenario, are random events, this goal is pursued through control strategies based on Markovian jump linear systems, which describe the highly nonlinear dynamics of the vehicle at different operating points.
10

Melo, Diogo Henrique de. "Otimização de consumo de combustível em veículos usando um modelo simplificado de trânsito e sistemas com saltos markovianos." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01022017-160814/.

Full text
Abstract:
This dissertation deals with the control of vehicles aiming at fuel consumption optimization, taking into account the interference of traffic. Stochastic interferences like this and other real-world phenomena prevent us from directly applying available results. We propose to employ a relatively simple system with Markov jump parameters as a model for the vehicle subject to traffic interference, and to obtain the transition probabilities from a separate model for the traffic. The dissertation presents the model identification, the solution of the new problem using dynamic programming, and simulations of the obtained control.

Books on the topic "Controlled Markov chain"

1

Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Dordrecht: Kluwer Academic Publishers, 2002.

Find full text
2

Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0.

Full text
3

Borkar, Vivek S. Topics in controlled Markov chains. Harlow, Essex, England: Longman Scientific & Technical, 1991.

Find full text
4

Filar, Jerzy A. Controlled Markov chains, graphs and Hamiltonicity. Hanover, Mass: Now Publishers, 2007.

Find full text
5

Cao, Xi-Ren. Foundations of Average-Cost Nonhomogeneous Controlled Markov Chains. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56678-4.

Full text
6

Hou, Zhenting, Anyue Chen, and Jerzy A. Filar. Markov Processes and Controlled Markov Chains. Springer London, Limited, 2013.

Find full text
7

Markov Processes and Controlled Markov Chains. Springer, 2011.

Find full text
8

Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Springer, 2002.

Find full text
9

Filar, Jerzy A. Controlled Markov chains, graphs and Hamiltonicity. 2007.

Find full text
10

Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games. Imperial College Press, 2012.

Find full text

Book chapters on the topic "Controlled Markov chain"

1

Cao, Yijia, and Lilian Cao. "Controlled Markov Chain Optimization of Genetic Algorithms." In Lecture Notes in Computer Science, 186–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48774-3_22.

Full text
2

Kvatadze, Z. A., and T. L. Shervashidze. "On limit theorems for conditionally independent random variables controlled by a finite Markov chain." In Lecture Notes in Mathematics, 250–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078480.

Full text
3

Kushner, Harold J., and Paul Dupuis. "Controlled Markov Chains." In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–52. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0007-6_3.

Full text
4

Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains." In Introduction to Discrete Event Systems, 523–89. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-4070-7_9.

Full text
5

Kushner, Harold J., and Paul G. Dupuis. "Controlled Markov Chains." In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–51. New York, NY: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4684-0441-8_3.

Full text
6

Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains." In Introduction to Discrete Event Systems, 535–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72274-6_9.

Full text
7

Hou, Zhenting, Zaiming Liu, Jiezhong Zou, and Xuerong Chen. "Markov Skeleton Processes." In Markov Processes and Controlled Markov Chains, 69–92. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_5.

Full text
8

Dynkin, E. B. "Branching Exit Markov Systems and Their Applications to Partial Differential Equations." In Markov Processes and Controlled Markov Chains, 3–13. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_1.

Full text
9

Guo, Xianping, and Weiping Zhu. "Optimality Conditions for CTMDP with Average Cost Criterion." In Markov Processes and Controlled Markov Chains, 167–88. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_10.

Full text
10

Cavazos-Cadena, Rolando, and Raúl Montes-de-Oca. "Optimal and Nearly Optimal Policies in Markov Decision Chains with Nonnegative Rewards and Risk-Sensitive Expected Total-Reward Criterion." In Markov Processes and Controlled Markov Chains, 189–221. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_11.

Full text

Conference papers on the topic "Controlled Markov chain"

1

Radaideh, Ashraf, Umesh Vaidya, and Venkataramana Ajjarapu. "Sensitivity analysis on modeling heterogeneous thermostatically controlled loads using Markov chain abstraction." In 2017 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2017. http://dx.doi.org/10.1109/pesgm.2017.8273971.

Full text
2

Hu, Hai, Chang-Hai Jiang, and Kai-Yuan Cai. "Adaptive Software Testing in the Context of an Improved Controlled Markov Chain Model." In 2008 32nd Annual IEEE International Computer Software and Applications Conference. IEEE, 2008. http://dx.doi.org/10.1109/compsac.2008.186.

Full text
3

Arapostathis, A., E. Fernandez-Gaucherand, and S. I. Marcus. "Analysis of an adaptive control scheme for a partially observed controlled Markov chain." In 29th IEEE Conference on Decision and Control. IEEE, 1990. http://dx.doi.org/10.1109/cdc.1990.203849.

Full text
4

Makara, Arpad Laszlo, and Laszlo Csurgai-Horvath. "Indoor User Movement Simulation with Markov Chain for Deep Learning Controlled Antenna Beam Alignment." In 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET). IEEE, 2021. http://dx.doi.org/10.1109/icecet52533.2021.9698600.

Full text
5

Song, Qingshuo, and G. Yin. "Rates of convergence of Markov chain approximation for controlled regime-switching diffusions with stopping times." In 2010 49th IEEE Conference on Decision and Control (CDC). IEEE, 2010. http://dx.doi.org/10.1109/cdc.2010.5717658.

Full text
6

Malikopoulos, Andreas A. "Convergence Properties of a Computational Learning Model for Unknown Markov Chains." In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2174.

Full text
Abstract:
The increasing complexity of engineering systems has motivated continuing research on computational learning methods towards making autonomous intelligent systems that can learn how to improve their performance over time while interacting with their environment. These systems need not only to be able to sense their environment, but should also integrate information from the environment into all decision making. The evolution of such systems is modeled as an unknown controlled Markov chain. In previous research, the predictive optimal decision-making (POD) model was developed, which aims to learn, in real time, the unknown transition probabilities and associated costs over a varying finite time horizon. In this paper, the convergence of POD to the stationary distribution of a Markov chain is proven, thus establishing POD as a robust model for making autonomous intelligent systems. The paper provides the conditions under which POD is valid, and an interpretation of its underlying structure.
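The limit object in this convergence result is the stationary distribution of an ergodic chain; below is a quick numerical check that empirical occupation frequencies match the eigenvector computation, on a hypothetical chain.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical ergodic chain; POD-style learned quantities settle at values
# determined by its stationary distribution.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

x, counts = 0, np.zeros(3)
for _ in range(200000):
    x = rng.choice(3, p=P[x])
    counts[x] += 1

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(counts / counts.sum(), pi)   # empirical occupation ~ stationary law
```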
7

Malikopoulos, Andreas A. "A Rollout Control Algorithm for Discrete-Time Stochastic Systems." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4047.

Full text
Abstract:
The growing demand for making autonomous intelligent systems that can learn how to improve their performance while interacting with their environment has induced significant research on computational cognitive models. Computational intelligence, or rationality, can be achieved by modeling a system and the interaction with its environment through actions, perceptions, and associated costs. A widely adopted paradigm for modeling this interaction is the controlled Markov chain. In this context, the problem is formulated as a sequential decision-making process in which an intelligent system has to select those control actions in several time steps to achieve long-term goals. This paper presents a rollout control algorithm that aims to build an online decision-making mechanism for a controlled Markov chain. The algorithm yields a lookahead suboptimal control policy. Under certain conditions, a theoretical bound on its performance can be established.
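A minimal rollout sketch in Python: simulate a fixed base policy to estimate its cost-to-go, then act by one-step lookahead on those estimates. The MDP data here are randomly generated placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
nS, nA, gamma, horizon = 4, 2, 0.95, 30
P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # hypothetical transition kernels
C = rng.uniform(0, 1, size=(nA, nS))            # hypothetical stage costs

def base_policy(s):
    return 0                  # fixed heuristic that rollout improves upon

def base_cost(s, n_sims=200):
    # Monte Carlo evaluation of the base policy's discounted cost from s.
    total = 0.0
    for _ in range(n_sims):
        x, disc = s, 1.0
        for _ in range(horizon):
            a = base_policy(x)
            total += disc * C[a, x]
            x = rng.choice(nS, p=P[a, x])
            disc *= gamma
    return total / n_sims

def rollout_action(s):
    # One-step lookahead on top of the simulated base-policy cost-to-go.
    V = [base_cost(t) for t in range(nS)]
    q = [C[a, s] + gamma * P[a, s] @ V for a in range(nA)]
    return int(np.argmin(q))

print("rollout action in state 0:", rollout_action(0))
```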
8

Sovizi, Javad, Suren Kumar, and Venkat Krovi. "Optimal Feedback Control of a Flexible Needle Under Anatomical Motion Uncertainty." In ASME 2015 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/dscc2015-9976.

Full text
Abstract:
Bevel-tip flexible needles allow for reaching remote/inaccessible organs while avoiding obstacles (sensitive organs, bones, etc.). Motion planning and control of such systems is a challenging problem due to the uncertainty induced by needle-tissue interactions, anatomical motions (respiratory and cardiac induced motions), imperfect actuation, etc. In this paper, we use an analogy where steering the needle in a soft tissue subject to uncertain anatomical motions is compared to a Dubins vehicle traveling in a stochastic wind field. Achieving the optimal feedback control policy requires the solution of a dynamic programming problem that is often computationally demanding. Efficiency is not central to many optimal control algorithms, which often need to be computed only once for given system/noise statistics; however, intraoperative policy updates may be required for adaptive or patient-specific models. We use the method of approximating Markov chains to approximate the continuous (and controlled) process with its discrete and locally consistent counterpart. We examine the linear programming method of solving the imposed dynamic programming problem, which significantly improves the computational efficiency in comparison to state-of-the-art approaches. In addition, the probabilities of success and failure are simply variables of the linear optimization problem and can be directly used for different objective definitions. A numerical example of the 2D needle steering problem is considered to investigate the effectiveness of the proposed method.
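The linear-programming route to the dynamic program can be sketched on a small discounted MDP: maximize Σ_s V(s) subject to V(s) ≤ g(s,a) + γ Σ_t P(t|s,a) V(t). The MDP below is a random placeholder, solved with SciPy's linprog; it illustrates the device, not the paper's needle-steering model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
nS, nA, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # hypothetical transition kernels
g = rng.uniform(0, 1, size=(nA, nS))            # hypothetical stage costs

# LP form of the discounted dynamic program:
#   maximize sum_s V(s)  subject to  V(s) <= g(s,a) + gamma * sum_t P[a,s,t] V(t)
A_ub, b_ub = [], []
for a in range(nA):
    for s in range(nS):
        row = -gamma * P[a, s].copy()
        row[s] += 1.0                           # coefficient of V(s)
        A_ub.append(row)
        b_ub.append(g[a, s])

res = linprog(c=-np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nS)
print("optimal cost-to-go:", np.round(res.x, 3))
```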
9

"Performance of mixtures of adaptive controllers based on Markov chains." In Proceedings of the 1999 American Control Conference. IEEE, 1999. http://dx.doi.org/10.1109/acc.1999.782739.

Full text
10

Punčochář, Ivo, and Miroslav Šimandl. "Infinite Horizon Input Signal for Active Fault Detection in Controlled Markov Chains." In Power and Energy. Calgary,AB,Canada: ACTAPRESS, 2013. http://dx.doi.org/10.2316/p.2013.807-028.

Full text

Reports on the topic "Controlled Markov chain"

1

Kim, Tae-Hun, and Jung Won Kang. The clinical evidence of effectiveness and safety of massage chair: a scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2023. http://dx.doi.org/10.37766/inplasy2023.2.0021.

Full text
Abstract:
Review question / Objective: To evaluate the current clinical evidence status of massage chairs and to present an evidence map for future research implications. Background: A massage chair is a furniture-type device, such as a sofa or bed, which provides automated massage using installed rollers and airbags. Although the market is growing and the number of users is increasing, the clinical evidence of its benefits and harms has not been clearly established yet. Because it is treated as furniture rather than a medical device, its use is not supervised by medical personnel as other medical devices are, so there is a need to pay attention to safety issues and effectiveness in terms of individual health promotion. This scoping review will assess the current evidence status of massage chairs and present a clinical research agenda for the future.