Academic literature on the topic 'Controlled Markov chain'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Controlled Markov chain.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Controlled Markov chain"
Ray, Anandaroop, David L. Alumbaugh, G. Michael Hoversten, and Kerry Key. "Robust and accelerated Bayesian inversion of marine controlled-source electromagnetic data using parallel tempering." GEOPHYSICS 78, no. 6 (November 1, 2013): E271–E280. http://dx.doi.org/10.1190/geo2013-0128.1.
Lefebvre, Mario, and Moussa Kounta. "Discrete homing problems." Archives of Control Sciences 23, no. 1 (March 1, 2013): 5–18. http://dx.doi.org/10.2478/v10170-011-0039-6.
Andini, Enggartya, Sudarno Sudarno, and Rita Rahmawati. "PENERAPAN METODE PENGENDALIAN KUALITAS MEWMA BERDASARKAN ARL DENGAN PENDEKATAN RANTAI MARKOV (Studi Kasus: Batik Semarang 16, Meteseh)." Jurnal Gaussian 10, no. 1 (February 28, 2021): 125–35. http://dx.doi.org/10.14710/j.gauss.v10i1.30939.
CAI, KAI-YUAN, TSONG YUEH CHEN, YONG-CHAO LI, YUEN TAK YU, and LEI ZHAO. "ON THE ONLINE PARAMETER ESTIMATION PROBLEM IN ADAPTIVE SOFTWARE TESTING." International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (May 2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.
Li, Jinzhi, and Shixia Ma. "Pricing Options with Credit Risk in Markovian Regime-Switching Markets." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/621371.
Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 2 (June 1985): 408–23. http://dx.doi.org/10.2307/1427148.
Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input." Advances in Applied Probability 17, no. 02 (June 1985): 408–23. http://dx.doi.org/10.1017/s0001867800015044.
Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 2 (June 1987): 457–65. http://dx.doi.org/10.2307/3214269.
Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate." Journal of Applied Probability 24, no. 02 (June 1987): 457–65. http://dx.doi.org/10.1017/s0021900200031090.
Fort, Gersende. "Central limit theorems for stochastic approximation with controlled Markov chain dynamics." ESAIM: Probability and Statistics 19 (2015): 60–80. http://dx.doi.org/10.1051/ps/2014013.
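Several of the articles above study long-run criteria for controlled Markov chains. As a purely illustrative aside (not drawn from any of the cited works; all transition probabilities and rewards below are invented), a minimal value-iteration sketch for a two-state, two-action controlled Markov chain looks like this:

```python
# Minimal value iteration for a two-state, two-action controlled Markov
# chain with discounted reward. Purely illustrative: the numbers in P and
# R are hypothetical, not taken from any cited source.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a][s][s2]: transition probabilities; R[a][s]: expected reward."""
    n = len(P[0])
    V = [0.0] * n
    while True:
        V_new = []
        for s in range(n):
            V_new.append(max(
                R[a][s] + gamma * sum(P[a][s][s2] * V[s2] for s2 in range(n))
                for a in range(len(P))
            ))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

def greedy_policy(P, R, V, gamma=0.9):
    """Action maximizing the one-step lookahead value in each state."""
    n = len(P[0])
    return [max(range(len(P)), key=lambda a: R[a][s] + gamma * sum(
        P[a][s][s2] * V[s2] for s2 in range(n))) for s in range(n)]

# Hypothetical chain: action 1 pays more in state 0 but mixes faster.
P = [[[0.9, 0.1], [0.2, 0.8]],   # action 0
     [[0.5, 0.5], [0.3, 0.7]]]   # action 1
R = [[1.0, 0.0], [2.0, 0.0]]
V = value_iteration(P, R)
pi = greedy_policy(P, R, V)
```

Here the greedy policy chooses action 1 in both states, since the extra immediate reward outweighs the shift of probability mass toward the lower-valued state.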
Dissertations / Theses on the topic "Controlled Markov chain"
Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints." Thesis, Indian Institute of Science, 1995. https://etd.iisc.ac.in/handle/2005/162.
Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints." Thesis, Indian Institute of Science, 1995. http://hdl.handle.net/2005/162.
Brau Rojas, Agustin. "Controlled Markov chains with risk-sensitive average cost criterion." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284004.
Avila Godoy, Micaela Guadalupe. "Controlled Markov chains with exponential risk-sensitive criteria: Modularity, structured policies and applications." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/289049.
Figueiredo, Danilo Zucolli. "Discrete-time jump linear systems with Markov chain in a general state space." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-18012017-115659/.
Full textEsta tese trata de sistemas lineares com saltos markovianos (MJLS) a tempo discreto com cadeia de Markov em um espaço geral de Borel S. Vários problemas de controle foram abordados para esta classe de sistemas dinâmicos, incluindo estabilidade estocástica (SS), síntese de controle ótimo linear quadrático (LQ), projeto de filtros e um princípio da separação. Condições necessárias e suficientes para a SS foram obtidas. Foi demonstrado que SS é equivalente ao raio espectral de um operador ser menor que 1 ou à existência de uma solução para uma equação de Lyapunov. Os problemas de controle ótimo a horizonte finito e infinito foram abordados com base no conceito de SS. A solução para o problema de controle ótimo LQ a horizonte finito (infinito) foi obtida a partir das associadas equações a diferenças (algébricas) de Riccati S-acopladas de controle. Por S-acopladas entende-se que as equações são acopladas por uma integral sobre o kernel estocástico com densidade de transição em relação a uma medida in-finita no espaço de Borel S. O projeto de filtros lineares markovianos foi analisado e uma solução para o problema da filtragem a horizonte finito (infinito) foi obtida com base nas associadas equações a diferenças (algébricas) de Riccati S-acopladas de filtragem. Condições para a existência e unicidade de uma solução positiva semi-definida e estabilizável para as equações algébricas de Riccati S-acopladas associadas aos problemas de controle e filtragem também foram obtidas. Por último, foi estabelecido um princípio da separação para MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral. Foi demonstrado que o controlador ótimo para um problema de controle ótimo com informação parcial separa o problema de controle com informação parcial em dois problemas, um deles associado a um problema de filtragem e o outro associado a um problema de controle ótimo com informação completa. 
Espera-se que os resultados obtidos nesta tese possam motivar futuras pesquisas sobre MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral.
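The abstract above centers on coupled Riccati equations. As a hedged illustration only (the thesis treats a general Borel state space; this sketch restricts to two modes and scalar dynamics, with all coefficients invented), the coupled control Riccati recursion can be iterated as follows:

```python
# Finite-mode, scalar special case of the coupled control Riccati
# recursion for a Markov jump linear system. Two modes, scalar dynamics;
# every coefficient below is hypothetical.

def coupled_riccati(A, B, Q, R, P, iters=500):
    """Iterate X_i <- Q_i + A_i^2 E_i(X) - (A_i B_i E_i(X))^2 / (R_i + B_i^2 E_i(X)),
    where E_i(X) = sum_j P[i][j] X_j couples the equations across modes."""
    n = len(A)
    X = [0.0] * n
    for _ in range(iters):
        E = [sum(P[i][j] * X[j] for j in range(n)) for i in range(n)]
        X = [Q[i] + A[i]**2 * E[i]
             - (A[i] * B[i] * E[i])**2 / (R[i] + B[i]**2 * E[i])
             for i in range(n)]
    return X

A = [1.1, 0.8]     # mode 0 unstable, mode 1 stable
B = [1.0, 1.0]
Q = [1.0, 1.0]
R = [1.0, 1.0]
P = [[0.7, 0.3],   # mode transition probabilities
     [0.4, 0.6]]
X = coupled_riccati(A, B, Q, R, P)
```

The recursion converges to a positive fixed point whenever the jump system is mean-square stabilizable, which holds for these toy numbers.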
Franco, Bruno Chaves [UNESP]. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/93084.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This research proposes the economic design of X̄ control charts for monitoring a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the chart parameters, namely the sample size, the sampling interval, and the control limit coefficient, and a genetic algorithm searches for the minimum monitoring cost. Markov chains are used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, raising the monitoring cost and significantly reducing the chart's efficiency.
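The abstract above uses a Markov chain to obtain the average number of samples until the signal (the average run length, ARL). A generic sketch of that ingredient, with invented zone-transition probabilities rather than anything from the thesis: treat the chart statistic's zones as transient states of an absorbing Markov chain and solve (I − Q)x = 1 for the expected time to absorption.

```python
# Average run length (ARL) of a control chart from a Markov-chain model.
# Q holds transitions among transient zones of the chart statistic; the
# missing probability mass in each row is absorption (the signal).
# The numbers are hypothetical, not taken from the cited thesis.

def arl_from_transients(Q):
    """Solve (I - Q) x = 1 by Gauss-Jordan elimination; x[i] is the
    expected number of samples to signal starting from zone i."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Two transient zones (central / warning); rows sum to less than 1,
# the remainder being the signal probability.
Q = [[0.90, 0.05],
     [0.60, 0.20]]
arl = arl_from_transients(Q)
```

Starting in the central zone, these toy numbers give an in-control ARL of 17 samples.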
Trindade, Anderson Laécio Galindo. "Contribuições para o controle on-line de processos por atributos." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-02062008-132508/.
The quality control procedure for attributes proposed by Taguchi et al. (1989) consists in inspecting a single item at every m produced items and, based on the result of each inspection, deciding whether the non-conforming fraction has increased or not. If an inspected item is declared non-conforming, the process is stopped and adjusted, under the assumption that it has shifted to the out-of-control condition. Given that: i) the inspection system is subject to misclassification, and repeated classifications of the inspected item are possible; ii) the non-conforming fraction, when the process is out of control, can be described by y(x); iii) the decision to stop the process can be based on the last h inspections, a model considering these points is developed. Using properties of ergodic Markov chains, the average cost expression is derived and can be minimized over parameters beyond m: the number of repeated classifications (r); the minimum number of classifications as conforming required to declare an item conforming (s); the number of inspections taken into account (h); and the stopping criterion (u). The results show that: repeated classification of the inspected item can be a viable option if only one item is used to decide about the process condition; a finite Markov chain can represent the control procedure in the presence of a function y(x); and deciding about the process condition based on the last h inspections has a significant impact on the average cost.
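The average-cost computation via an ergodic Markov chain mentioned above can be sketched generically (the chain and per-state costs below are invented, not Taguchi's actual model): find the stationary distribution and average the per-state costs under it.

```python
# Long-run average cost of an ergodic Markov chain: the quantity minimized
# in the attribute-control abstract above. The three-state chain and cost
# vector are hypothetical stand-ins for the procedure's actual states.

def stationary(P, iters=2000):
    """Stationary distribution by power iteration on the row-stochastic P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def average_cost(P, c):
    """Long-run average cost: sum_i pi_i * c_i."""
    pi = stationary(P)
    return sum(p * ci for p, ci in zip(pi, c))

# Hypothetical process conditions (in control / degraded / stopped) with
# per-state operating costs.
P = [[0.8, 0.2, 0.0],
     [0.1, 0.7, 0.2],
     [0.5, 0.0, 0.5]]
c = [1.0, 3.0, 10.0]
cost = average_cost(P, c)
```

For this toy chain the stationary distribution is (15/29, 10/29, 4/29), giving an average cost of 85/29 per period.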
Franco, Bruno Chaves. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados /." Guaratinguetá : [s.n.], 2011. http://hdl.handle.net/11449/93084.
Abstract: This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval and the control limit coefficient, using a genetic algorithm to search for the minimum cost. A Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the parameters of the control chart, increasing the monitoring cost and significantly reducing its efficiency.
Advisor: Marcela Aparecida Guerreiro Machado
Co-advisor: Antonio Fernando Branco Costa
Committee member: Fernando Augusto Silva Marins
Committee member: Anderson Paula de Paiva
Master's degree
Marcos, Lucas Barbosa. "Controle de sistemas lineares sujeitos a saltos Markovianos aplicado em veículos autônomos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-27042017-085140/.
In today's society, automobile vehicles are increasingly integrated into people's daily activities: there are more than 1 billion of them on the streets around the world. Because they are controlled by drivers, vehicles are subject to failures caused by human mistakes that lead to accidents, injuries and other damage. Autonomous vehicle control has shown itself to be an alternative in the pursuit of damage reduction, and it is being pursued by different institutions in many countries; it is therefore a central subject in the area of control systems. This work, relying on mathematical descriptions of vehicle behavior, aims to develop and apply an efficient autonomous control method based on a state-space formulation. This goal is achieved through control strategies based on Markov jump linear systems, which describe the highly nonlinear dynamics of the vehicle at different operating points.
Melo, Diogo Henrique de. "Otimização de consumo de combustível em veículos usando um modelo simplificado de trânsito e sistemas com saltos markovianos." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01022017-160814/.
This dissertation deals with the control of vehicles aiming at fuel consumption optimization, taking into account the interference of traffic. Stochastic interferences like this and other real-world phenomena prevent us from directly applying available results. We propose to employ a relatively simple system with Markov jump parameters as a model for the vehicle subject to traffic interference, and to obtain the transition probabilities from a separate model for the traffic. The dissertation presents the model identification, the solution of the new problem using dynamic programming, and simulation of the obtained control.
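The last two theses model the vehicle as a Markov jump linear system (MJLS). A toy simulation of a scalar MJLS, with invented dynamics and transition matrix, illustrates the model class: the system matrix switches according to a Markov chain, and trajectories can decay on average even though one mode is expanding.

```python
# Sketch of a scalar Markov jump linear system: x_{k+1} = A[mode_k] * x_k,
# with mode_k following a Markov chain. Dynamics and transition matrix are
# invented for illustration, not taken from the cited dissertations.
import random

def simulate_mjls(A, P, x0, steps, seed=0):
    """Simulate the jump system; returns the state trajectory."""
    rng = random.Random(seed)
    x, mode, traj = x0, 0, [x0]
    for _ in range(steps):
        x = A[mode] * x
        # Sample the next mode from row P[mode] by inverse transform.
        u, acc = rng.random(), 0.0
        for m, p in enumerate(P[mode]):
            acc += p
            if u < acc:
                mode = m
                break
        traj.append(x)
    return traj

# Mode 0 contracts, mode 1 expands; the chain spends most of its time in
# mode 0, so trajectories still decay.
A = [0.5, 1.2]
P = [[0.9, 0.1],
     [0.7, 0.3]]
traj = simulate_mjls(A, P, x0=1.0, steps=50)
```

With the stationary distribution putting probability 7/8 on the contracting mode, the average per-step log-growth is negative and the state shrinks toward zero.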
Books on the topic "Controlled Markov chain"
Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov processes and controlled Markov chains. Dordrecht: Kluwer Academic Publishers, 2002.
Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0.
Borkar, Vivek S. Topics in controlled Markov chains. Harlow, Essex, England: Longman Scientific & Technical, 1991.
Filar, Jerzy A. Controlled Markov chains, graphs and hamiltonicity. Hanover, Mass: Now Publishers, 2007.
Cao, Xi-Ren. Foundations of Average-Cost Nonhomogeneous Controlled Markov Chains. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56678-4.
Hou, Zhenting, Anyue Chen, and Jerzy A. Filar. Markov Processes and Controlled Markov Chains. Springer London, Limited, 2013.
Markov Processes and Controlled Markov Chains. Springer, 2011.
Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Springer, 2002.
Filar, Jerzy A. Controlled Markov chains, graphs and hamiltonicity. 2007.
Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games. Imperial College Press, 2012.
Find full textBook chapters on the topic "Controlled Markov chain"
Cao, Yijia, and Lilian Cao. "Controlled Markov Chain Optimization of Genetic Algorithms." In Lecture Notes in Computer Science, 186–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48774-3_22.
Kvatadze, Z. A., and T. L. Shervashidze. "On limit theorems for conditionally independent random variables controlled by a finite Markov chain." In Lecture Notes in Mathematics, 250–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078480.
Kushner, Harold J., and Paul Dupuis. "Controlled Markov Chains." In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–52. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0007-6_3.
Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains." In Introduction to Discrete Event Systems, 523–89. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-4070-7_9.
Kushner, Harold J., and Paul G. Dupuis. "Controlled Markov Chains." In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–51. New York, NY: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4684-0441-8_3.
Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains." In Introduction to Discrete Event Systems, 535–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72274-6_9.
Hou, Zhenting, Zaiming Liu, Jiezhong Zou, and Xuerong Chen. "Markov Skeleton Processes." In Markov Processes and Controlled Markov Chains, 69–92. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_5.
Dynkin, E. B. "Branching Exit Markov Systems and their Applications to Partial Differential Equations." In Markov Processes and Controlled Markov Chains, 3–13. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_1.
Guo, Xianping, and Weiping Zhu. "Optimality Conditions for CTMDP with Average Cost Criterion." In Markov Processes and Controlled Markov Chains, 167–88. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_10.
Cavazos-Cadena, Rolando, and Raúl Montes-de-Oca. "Optimal and Nearly Optimal Policies in Markov Decision Chains with Nonnegative Rewards and Risk-Sensitive Expected Total-Reward Criterion." In Markov Processes and Controlled Markov Chains, 189–221. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_11.
Conference papers on the topic "Controlled Markov chain"
Radaideh, Ashraf, Umesh Vaidya, and Venkataramana Ajjarapu. "Sensitivity analysis on modeling heterogeneous thermostatically controlled loads using Markov chain abstraction." In 2017 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2017. http://dx.doi.org/10.1109/pesgm.2017.8273971.
Hu, Hai, Chang-Hai Jiang, and Kai-Yuan Cai. "Adaptive Software Testing in the Context of an Improved Controlled Markov Chain Model." In 2008 32nd Annual IEEE International Computer Software and Applications Conference. IEEE, 2008. http://dx.doi.org/10.1109/compsac.2008.186.
Arapostathis, A., E. Fernandez-Gaucherand, and S. I. Marcus. "Analysis of an adaptive control scheme for a partially observed controlled Markov chain." In 29th IEEE Conference on Decision and Control. IEEE, 1990. http://dx.doi.org/10.1109/cdc.1990.203849.
Makara, Arpad Laszlo, and Laszlo Csurgai-Horvath. "Indoor User Movement Simulation with Markov Chain for Deep Learning Controlled Antenna Beam Alignment." In 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET). IEEE, 2021. http://dx.doi.org/10.1109/icecet52533.2021.9698600.
Song, Qingshuo, and G. Yin. "Rates of convergence of Markov chain approximation for controlled regime-switching diffusions with stopping times." In 2010 49th IEEE Conference on Decision and Control (CDC). IEEE, 2010. http://dx.doi.org/10.1109/cdc.2010.5717658.
Malikopoulos, Andreas A. "Convergence Properties of a Computational Learning Model for Unknown Markov Chains." In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2174.
Malikopoulos, Andreas A. "A Rollout Control Algorithm for Discrete-Time Stochastic Systems." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4047.
Sovizi, Javad, Suren Kumar, and Venkat Krovi. "Optimal Feedback Control of a Flexible Needle Under Anatomical Motion Uncertainty." In ASME 2015 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/dscc2015-9976.
"Performance of mixtures of adaptive controllers based on Markov chains." In Proceedings of the 1999 American Control Conference. IEEE, 1999. http://dx.doi.org/10.1109/acc.1999.782739.
Punčochář, Ivo, and Miroslav Šimandl. "Infinite Horizon Input Signal for Active Fault Detection in Controlled Markov Chains." In Power and Energy. Calgary, AB, Canada: ACTAPRESS, 2013. http://dx.doi.org/10.2316/p.2013.807-028.
Reports on the topic "Controlled Markov chain"
Kim, Tae-Hun, and Jung Won Kang. The clinical evidence of effectiveness and safety of massage chair: a scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2023. http://dx.doi.org/10.37766/inplasy2023.2.0021.