Academic literature on the topic "Controlled Markov chain"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Controlled Markov chain".
Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Controlled Markov chain"
Ray, Anandaroop, David L. Alumbaugh, G. Michael Hoversten, and Kerry Key. "Robust and accelerated Bayesian inversion of marine controlled-source electromagnetic data using parallel tempering". GEOPHYSICS 78, no. 6 (November 1, 2013): E271–E280. http://dx.doi.org/10.1190/geo2013-0128.1.
Lefebvre, Mario, and Moussa Kounta. "Discrete homing problems". Archives of Control Sciences 23, no. 1 (March 1, 2013): 5–18. http://dx.doi.org/10.2478/v10170-011-0039-6.
Andini, Enggartya, Sudarno Sudarno, and Rita Rahmawati. "PENERAPAN METODE PENGENDALIAN KUALITAS MEWMA BERDASARKAN ARL DENGAN PENDEKATAN RANTAI MARKOV (Studi Kasus: Batik Semarang 16, Meteseh)". Jurnal Gaussian 10, no. 1 (February 28, 2021): 125–35. http://dx.doi.org/10.14710/j.gauss.v10i1.30939.
Cai, Kai-Yuan, Tsong Yueh Chen, Yong-Chao Li, Yuen Tak Yu, and Lei Zhao. "On the Online Parameter Estimation Problem in Adaptive Software Testing". International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (May 2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.
Li, Jinzhi, and Shixia Ma. "Pricing Options with Credit Risk in Markovian Regime-Switching Markets". Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/621371.
Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input". Advances in Applied Probability 17, no. 2 (June 1985): 408–23. http://dx.doi.org/10.2307/1427148.
Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate". Journal of Applied Probability 24, no. 2 (June 1987): 457–65. http://dx.doi.org/10.2307/3214269.
Fort, Gersende. "Central limit theorems for stochastic approximation with controlled Markov chain dynamics". ESAIM: Probability and Statistics 19 (2015): 60–80. http://dx.doi.org/10.1051/ps/2014013.
Theses on the topic "Controlled Markov chain"
Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints". Thesis, Indian Institute of Science, 1995. https://etd.iisc.ac.in/handle/2005/162.
Brau Rojas, Agustin. "Controlled Markov chains with risk-sensitive average cost criterion". Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284004.
Avila Godoy, Micaela Guadalupe. "Controlled Markov chains with exponential risk-sensitive criteria: Modularity, structured policies and applications". Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/289049.
Figueiredo, Danilo Zucolli. "Discrete-time jump linear systems with Markov chain in a general state space". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-18012017-115659/.
Texto completoEsta tese trata de sistemas lineares com saltos markovianos (MJLS) a tempo discreto com cadeia de Markov em um espaço geral de Borel S. Vários problemas de controle foram abordados para esta classe de sistemas dinâmicos, incluindo estabilidade estocástica (SS), síntese de controle ótimo linear quadrático (LQ), projeto de filtros e um princípio da separação. Condições necessárias e suficientes para a SS foram obtidas. Foi demonstrado que SS é equivalente ao raio espectral de um operador ser menor que 1 ou à existência de uma solução para uma equação de Lyapunov. Os problemas de controle ótimo a horizonte finito e infinito foram abordados com base no conceito de SS. A solução para o problema de controle ótimo LQ a horizonte finito (infinito) foi obtida a partir das associadas equações a diferenças (algébricas) de Riccati S-acopladas de controle. Por S-acopladas entende-se que as equações são acopladas por uma integral sobre o kernel estocástico com densidade de transição em relação a uma medida in-finita no espaço de Borel S. O projeto de filtros lineares markovianos foi analisado e uma solução para o problema da filtragem a horizonte finito (infinito) foi obtida com base nas associadas equações a diferenças (algébricas) de Riccati S-acopladas de filtragem. Condições para a existência e unicidade de uma solução positiva semi-definida e estabilizável para as equações algébricas de Riccati S-acopladas associadas aos problemas de controle e filtragem também foram obtidas. Por último, foi estabelecido um princípio da separação para MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral. Foi demonstrado que o controlador ótimo para um problema de controle ótimo com informação parcial separa o problema de controle com informação parcial em dois problemas, um deles associado a um problema de filtragem e o outro associado a um problema de controle ótimo com informação completa. 
Espera-se que os resultados obtidos nesta tese possam motivar futuras pesquisas sobre MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral.
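In the finite-mode special case, S-coupled Riccati equations of this kind reduce to sum-coupled recursions, which makes the structure easy to see in code. The following is a minimal sketch for a scalar two-mode system; all dynamics, weights, and transition probabilities are illustrative assumptions, not data from the thesis:

```python
# Illustrative sketch: coupled Riccati value iteration for a scalar,
# two-mode discrete-time Markov jump linear system (MJLS).  In the
# finite-chain case the integral coupling over the Borel space S
# reduces to the sum E_i = sum_j P[i][j] * X[j].

P = [[0.9, 0.1], [0.2, 0.8]]   # mode transition probabilities (assumed)
A = [1.2, 0.8]                 # x_{k+1} = A[i] x_k + B[i] u_k in mode i
B = [1.0, 0.5]
Qc = [1.0, 1.0]                # state cost weights (assumed)
Rc = [1.0, 1.0]                # control cost weights (assumed)

X = [0.0, 0.0]                 # one Riccati variable per mode
for _ in range(1000):          # iterate the coupled Riccati map to a fixed point
    E = [sum(P[i][j] * X[j] for j in range(2)) for i in range(2)]
    X = [Qc[i] + A[i] ** 2 * E[i]
         - (A[i] * B[i] * E[i]) ** 2 / (Rc[i] + B[i] ** 2 * E[i])
         for i in range(2)]

E = [sum(P[i][j] * X[j] for j in range(2)) for i in range(2)]
K = [A[i] * B[i] * E[i] / (Rc[i] + B[i] ** 2 * E[i]) for i in range(2)]  # u = -K[i] x
```

The fixed point X yields the mode-dependent feedback gains; in the general Borel-space setting the inner sum becomes an integral over the transition kernel.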
Franco, Bruno Chaves [UNESP]. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados". Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/93084.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval, and the control limit coefficient, with a genetic algorithm searching for the minimum cost. A Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, increasing the monitoring cost and significantly reducing the chart's efficiency.
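The Markov-chain step in designs of this kind is, in spirit, the Brook-Evans approach: discretize the chart statistic into transient (in-control) states and compute the average run length (ARL) as the expected time to absorption, i.e. the solution of (I - Q) m = 1. A minimal sketch with hypothetical transition probabilities, not the thesis's actual model:

```python
# Markov-chain ARL computation (Brook-Evans style).  Q holds the
# transition probabilities among the transient states of the
# discretized chart statistic; the ARL vector m solves (I - Q) m = 1.
# The matrix below is a hypothetical example.

Q = [[0.90, 0.05, 0.02],
     [0.10, 0.80, 0.05],
     [0.02, 0.08, 0.70]]

n = len(Q)
# Augmented system (I - Q | 1), solved by Gaussian elimination.
M = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
     for i in range(n)]
for col in range(n):                               # forward elimination
    piv = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]                # partial pivoting
    for r in range(col + 1, n):
        f = M[r][col] / M[col][col]
        for c in range(col, n + 1):
            M[r][c] -= f * M[col][c]

m = [0.0] * n
for i in reversed(range(n)):                       # back substitution
    m[i] = (M[i][n] - sum(M[i][j] * m[j] for j in range(i + 1, n))) / M[i][i]

arl = m[0]   # expected run length starting from state 0 (about 20.8 here)
```

An economic design then embeds this ARL computation inside the cost function that the genetic algorithm minimizes.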
Trindade, Anderson Laécio Galindo. "Contribuições para o controle on-line de processos por atributos". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-02062008-132508/.
The quality control procedure for attributes proposed by Taguchi et al. (1989) consists of inspecting a single item at every m produced items and, based on the result of each inspection, deciding whether the non-conforming fraction has increased or not. If an inspected item is declared non-conforming, the process is stopped and adjusted, on the assumption that it has shifted to an out-of-control condition. Given that: (i) the inspection system is subject to misclassification and repeated classifications of the inspected item are possible; (ii) the non-conforming fraction when the process is out of control can be described by a function y(x); and (iii) the decision to stop the process can be based on the last h inspections, a model that takes these points into account is developed. Using properties of ergodic Markov chains, the average cost expression is derived; it can be minimized over parameters beyond m: the number of repeated classifications (r), the minimum number of conforming classifications required to declare an item conforming (s), the number of inspections taken into account (h), and the stopping criterion (u). The results show that repeated classification of the inspected item can be a viable option when only one item is used to decide on the process condition; that a finite Markov chain can represent the control procedure in the presence of a function y(x); and that deciding on the process condition based on the last h inspections has a significant impact on the average cost.
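The ergodic-chain ingredient of such models is the long-run average cost: the stationary distribution of the chain weighted by the per-state costs. A minimal sketch with illustrative states and numbers (not the thesis's actual chain):

```python
# Long-run average cost of an ergodic Markov chain: stationary
# distribution (via power iteration) weighted by per-state costs.
# The three states and all numbers are illustrative assumptions.

P = [[0.85, 0.10, 0.05],
     [0.30, 0.60, 0.10],
     [0.50, 0.20, 0.30]]       # transition matrix of the control procedure
cost = [1.0, 4.0, 10.0]        # per-period cost incurred in each state

pi = [1.0 / 3] * 3             # power iteration converges for ergodic chains
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

avg_cost = sum(pi[j] * cost[j] for j in range(3))
```

In the actual model the chain's states and costs depend on the design parameters (m, r, s, h, u), and the procedure is tuned by minimizing this average cost over them.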
Franco, Bruno Chaves. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados /". Guaratinguetá : [s.n.], 2011. http://hdl.handle.net/11449/93084.
Advisor: Marcela Aparecida Guerreiro Machado
Co-advisor: Antonio Fernando Branco Costa
Committee member: Fernando Augusto Silva Marins
Committee member: Anderson Paula de Paiva
Master's degree
Marcos, Lucas Barbosa. "Controle de sistemas lineares sujeitos a saltos Markovianos aplicado em veículos autônomos". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-27042017-085140/.
In today's society, automobile vehicles are increasingly integrated into people's daily activities, with more than 1 billion of them on the streets around the world. Because they are controlled by drivers, vehicles are subject to failures caused by human error, leading to accidents and injuries. Autonomous vehicle control has shown itself to be an alternative in the pursuit of damage reduction, and it is being pursued by different institutions in many countries; it is therefore a central subject in the area of control systems. This work, relying on mathematical descriptions of vehicle behavior, aims to develop and apply an efficient autonomous control method based on a state-space formulation. This goal is achieved through control strategies based on Markov jump linear systems that describe the highly nonlinear dynamics of the vehicle at different operating points.
Melo, Diogo Henrique de. "Otimização de consumo de combustível em veículos usando um modelo simplificado de trânsito e sistemas com saltos markovianos". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01022017-160814/.
This dissertation deals with the control of vehicles aiming at fuel consumption optimization, taking into account the interference of traffic. Stochastic interferences like this and other real-world phenomena prevent us from directly applying available results. We propose to employ a relatively simple system with Markov jump parameters as a model for the vehicle subject to traffic interference, and to obtain the transition probabilities from a separate model of the traffic. The dissertation presents the model identification, the solution of the resulting problem using dynamic programming, and simulations of the obtained control.
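The dynamic programming step can be sketched on a toy version of such a problem: a backward value recursion over a discretized speed grid, with the traffic mode evolving as a Markov chain. Everything below (the grid, costs, actions, and transition matrix) is an illustrative assumption, not the dissertation's model:

```python
# Toy backward dynamic programming for a system whose parameter jumps
# with a Markov chain (a stand-in for the traffic mode).

P = [[0.7, 0.3], [0.4, 0.6]]          # traffic-mode transition matrix (assumed)
ACTIONS = [0.0, 1.0, 2.0]             # candidate accelerations

def fuel(u, mode):
    """Stage fuel cost; heavy traffic (mode 1) burns more per unit input."""
    return u * u * (1.0 if mode == 0 else 1.5)

HORIZON = 5
SPEEDS = range(4)                     # crude speed grid 0..3
V = {(v, m): 0.0 for v in SPEEDS for m in range(2)}   # terminal cost

for _ in range(HORIZON):              # backward recursion over the horizon
    V_new = {}
    for v in SPEEDS:
        for m in range(2):
            best = float("inf")
            for u in ACTIONS:
                v_next = min(int(v + u), 3)           # toy dynamics
                stage = fuel(u, m) + (3 - v_next)     # fuel + slowness penalty
                future = sum(P[m][m2] * V[(v_next, m2)] for m2 in range(2))
                best = min(best, stage + future)
            V_new[(v, m)] = best
    V = V_new
```

The expectation over the next traffic mode is exactly where the separately identified transition probabilities enter the recursion.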
Books on the topic "Controlled Markov chain"
Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov processes and controlled Markov chains. Dordrecht: Kluwer Academic Publishers, 2002.
Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0.
Borkar, Vivek S. Topics in controlled Markov chains. Harlow, Essex, England: Longman Scientific & Technical, 1991.
Filar, Jerzy A. Controlled Markov chains, graphs and Hamiltonicity. Hanover, Mass: Now Publishers, 2007.
Buscar texto completoCao, Xi-Ren. Foundations of Average-Cost Nonhomogeneous Controlled Markov Chains. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56678-4.
Hou, Zhenting, Anyue Chen, and Jerzy A. Filar. Markov Processes and Controlled Markov Chains. Springer London, Limited, 2013.
Markov Processes and Controlled Markov Chains. Springer, 2011.
Hou, Zhenting, Jerzy A. Filar, and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Springer, 2002.
Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games. Imperial College Press, 2012.
Buscar texto completoCapítulos de libros sobre el tema "Controlled Markov chain"
Cao, Yijia, and Lilian Cao. "Controlled Markov Chain Optimization of Genetic Algorithms". In Lecture Notes in Computer Science, 186–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48774-3_22.
Kvatadze, Z. A., and T. L. Shervashidze. "On limit theorems for conditionally independent random variables controlled by a finite Markov chain". In Lecture Notes in Mathematics, 250–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078480.
Kushner, Harold J., and Paul Dupuis. "Controlled Markov Chains". In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–52. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0007-6_3.
Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains". In Introduction to Discrete Event Systems, 523–89. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-4070-7_9.
Kushner, Harold J., and Paul G. Dupuis. "Controlled Markov Chains". In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–51. New York, NY: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4684-0441-8_3.
Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains". In Introduction to Discrete Event Systems, 535–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72274-6_9.
Hou, Zhenting, Zaiming Liu, Jiezhong Zou, and Xuerong Chen. "Markov Skeleton Processes". In Markov Processes and Controlled Markov Chains, 69–92. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_5.
Dynkin, E. B. "Branching Exit Markov System and their Applications to Partial Differential Equations". In Markov Processes and Controlled Markov Chains, 3–13. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_1.
Guo, Xianping, and Weiping Zhu. "Optimality Conditions for CTMDP with Average Cost Criterion". In Markov Processes and Controlled Markov Chains, 167–88. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_10.
Cavazos-Cadena, Rolando, and Raúl Montes-de-Oca. "Optimal and Nearly Optimal Policies in Markov Decision Chains with Nonnegative Rewards and Risk-Sensitive Expected Total-Reward Criterion". In Markov Processes and Controlled Markov Chains, 189–221. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_11.
Texto completoActas de conferencias sobre el tema "Controlled Markov chain"
Radaideh, Ashraf, Umesh Vaidya, and Venkataramana Ajjarapu. "Sensitivity analysis on modeling heterogeneous thermostatically controlled loads using Markov chain abstraction". In 2017 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2017. http://dx.doi.org/10.1109/pesgm.2017.8273971.
Hu, Hai, Chang-Hai Jiang, and Kai-Yuan Cai. "Adaptive Software Testing in the Context of an Improved Controlled Markov Chain Model". In 2008 32nd Annual IEEE International Computer Software and Applications Conference. IEEE, 2008. http://dx.doi.org/10.1109/compsac.2008.186.
Arapostathis, A., E. Fernandez-Gaucherand, and S. I. Marcus. "Analysis of an adaptive control scheme for a partially observed controlled Markov chain". In 29th IEEE Conference on Decision and Control. IEEE, 1990. http://dx.doi.org/10.1109/cdc.1990.203849.
Makara, Arpad Laszlo, and Laszlo Csurgai-Horvath. "Indoor User Movement Simulation with Markov Chain for Deep Learning Controlled Antenna Beam Alignment". In 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET). IEEE, 2021. http://dx.doi.org/10.1109/icecet52533.2021.9698600.
Song, Qingshuo, and G. Yin. "Rates of convergence of Markov chain approximation for controlled regime-switching diffusions with stopping times". In 2010 49th IEEE Conference on Decision and Control (CDC). IEEE, 2010. http://dx.doi.org/10.1109/cdc.2010.5717658.
Malikopoulos, Andreas A. "Convergence Properties of a Computational Learning Model for Unknown Markov Chains". In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2174.
Malikopoulos, Andreas A. "A Rollout Control Algorithm for Discrete-Time Stochastic Systems". In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4047.
Sovizi, Javad, Suren Kumar, and Venkat Krovi. "Optimal Feedback Control of a Flexible Needle Under Anatomical Motion Uncertainty". In ASME 2015 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/dscc2015-9976.
"Performance of mixtures of adaptive controllers based on Markov chains". In Proceedings of the 1999 American Control Conference. IEEE, 1999. http://dx.doi.org/10.1109/acc.1999.782739.
Punčochář, Ivo, and Miroslav Šimandl. "Infinite Horizon Input Signal for Active Fault Detection in Controlled Markov Chains". In Power and Energy. Calgary, AB, Canada: ACTAPRESS, 2013. http://dx.doi.org/10.2316/p.2013.807-028.
Texto completoInformes sobre el tema "Controlled Markov chain"
Kim, Tae-Hun y Jung Won Kang. The clinical evidence of effectiveness and safety of massage chair: a scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, febrero de 2023. http://dx.doi.org/10.37766/inplasy2023.2.0021.