A ready-made bibliography on the topic "Controlled Markov chain"
Create an accurate reference in APA, MLA, Chicago, Harvard and many other styles
Browse lists of up-to-date articles, books, dissertations, conference abstracts and other scholarly sources on the topic "Controlled Markov chain".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such details are available in the metadata.
Journal articles on the topic "Controlled Markov chain"
Ray, Anandaroop, David L. Alumbaugh, G. Michael Hoversten and Kerry Key. "Robust and accelerated Bayesian inversion of marine controlled-source electromagnetic data using parallel tempering". GEOPHYSICS 78, no. 6 (1.11.2013): E271–E280. http://dx.doi.org/10.1190/geo2013-0128.1.
Lefebvre, Mario, and Moussa Kounta. "Discrete homing problems". Archives of Control Sciences 23, no. 1 (1.03.2013): 5–18. http://dx.doi.org/10.2478/v10170-011-0039-6.
Andini, Enggartya, Sudarno Sudarno and Rita Rahmawati. "PENERAPAN METODE PENGENDALIAN KUALITAS MEWMA BERDASARKAN ARL DENGAN PENDEKATAN RANTAI MARKOV (Studi Kasus: Batik Semarang 16, Meteseh)". Jurnal Gaussian 10, no. 1 (28.02.2021): 125–35. http://dx.doi.org/10.14710/j.gauss.v10i1.30939.
CAI, KAI-YUAN, TSONG YUEH CHEN, YONG-CHAO LI, YUEN TAK YU and LEI ZHAO. "ON THE ONLINE PARAMETER ESTIMATION PROBLEM IN ADAPTIVE SOFTWARE TESTING". International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (May 2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.
Li, Jinzhi, and Shixia Ma. "Pricing Options with Credit Risk in Markovian Regime-Switching Markets". Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/621371.
Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input". Advances in Applied Probability 17, no. 2 (June 1985): 408–23. http://dx.doi.org/10.2307/1427148.
Dshalalow, Jewgeni. "On the multiserver queue with finite waiting room and controlled input". Advances in Applied Probability 17, no. 02 (June 1985): 408–23. http://dx.doi.org/10.1017/s0001867800015044.
Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate". Journal of Applied Probability 24, no. 2 (June 1987): 457–65. http://dx.doi.org/10.2307/3214269.
Attia, F. A. "The control of a finite dam with penalty cost function: Markov input rate". Journal of Applied Probability 24, no. 02 (June 1987): 457–65. http://dx.doi.org/10.1017/s0021900200031090.
Fort, Gersende. "Central limit theorems for stochastic approximation with controlled Markov chain dynamics". ESAIM: Probability and Statistics 19 (2015): 60–80. http://dx.doi.org/10.1051/ps/2014013.
Doctoral dissertations on the topic "Controlled Markov chain"
Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints". Thesis, Indian Institute of Science, 1995. https://etd.iisc.ac.in/handle/2005/162.
Kuri, Joy. "Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints". Thesis, Indian Institute of Science, 1995. http://hdl.handle.net/2005/162.
Brau, Rojas Agustin. "Controlled Markov chains with risk-sensitive average cost criterion". Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284004.
Avila, Godoy Micaela Guadalupe. "Controlled Markov chains with exponential risk-sensitive criteria: Modularity, structured policies and applications". Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/289049.
Figueiredo, Danilo Zucolli. "Discrete-time jump linear systems with Markov chain in a general state space". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-18012017-115659/.
Pełny tekst źródłaEsta tese trata de sistemas lineares com saltos markovianos (MJLS) a tempo discreto com cadeia de Markov em um espaço geral de Borel S. Vários problemas de controle foram abordados para esta classe de sistemas dinâmicos, incluindo estabilidade estocástica (SS), síntese de controle ótimo linear quadrático (LQ), projeto de filtros e um princípio da separação. Condições necessárias e suficientes para a SS foram obtidas. Foi demonstrado que SS é equivalente ao raio espectral de um operador ser menor que 1 ou à existência de uma solução para uma equação de Lyapunov. Os problemas de controle ótimo a horizonte finito e infinito foram abordados com base no conceito de SS. A solução para o problema de controle ótimo LQ a horizonte finito (infinito) foi obtida a partir das associadas equações a diferenças (algébricas) de Riccati S-acopladas de controle. Por S-acopladas entende-se que as equações são acopladas por uma integral sobre o kernel estocástico com densidade de transição em relação a uma medida in-finita no espaço de Borel S. O projeto de filtros lineares markovianos foi analisado e uma solução para o problema da filtragem a horizonte finito (infinito) foi obtida com base nas associadas equações a diferenças (algébricas) de Riccati S-acopladas de filtragem. Condições para a existência e unicidade de uma solução positiva semi-definida e estabilizável para as equações algébricas de Riccati S-acopladas associadas aos problemas de controle e filtragem também foram obtidas. Por último, foi estabelecido um princípio da separação para MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral. Foi demonstrado que o controlador ótimo para um problema de controle ótimo com informação parcial separa o problema de controle com informação parcial em dois problemas, um deles associado a um problema de filtragem e o outro associado a um problema de controle ótimo com informação completa. Espera-se que os resultados obtidos nesta tese possam motivar futuras pesquisas sobre MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral.
Franco, Bruno Chaves [UNESP]. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados". Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/93084.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the chart parameters, namely the sample size, the sampling interval and the control limit coefficient, and a genetic algorithm searches for the minimum monitoring cost. A Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, increasing the monitoring cost and significantly reducing the chart's efficiency.
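As an aside on the Markov-chain step mentioned in this abstract, the sketch below shows the textbook way a Markov chain yields run-length properties of a control chart: discretize the monitored statistic into in-control cells, build the transition matrix Q among those transient cells, and read the average run length (ARL) off the fundamental matrix (I − Q)^(-1)·1. It is written for the simpler case of an individuals chart applied to AR(1) data, not the X̄-chart-with-additional-error model of the thesis, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def arl_ar1_shewhart(phi=0.5, sigma_e=1.0, L=3.0, mu=0.0, n_states=201):
    """ARL of an individuals chart on AR(1) data via the fundamental matrix (I - Q)^-1 1."""
    sigma_x = sigma_e / np.sqrt(1.0 - phi ** 2)        # stationary standard deviation
    edges = np.linspace(-L * sigma_x, L * sigma_x, n_states + 1)   # in-control cells
    mids = 0.5 * (edges[:-1] + edges[1:])
    cond_mean = mu + phi * (mids - mu)                 # E[X_{t+1} | X_t = mids[i]]
    upper = norm.cdf((edges[1:][None, :] - cond_mean[:, None]) / sigma_e)
    lower = norm.cdf((edges[:-1][None, :] - cond_mean[:, None]) / sigma_e)
    Q = upper - lower                                  # transitions among in-control cells
    arl = np.linalg.solve(np.eye(n_states) - Q, np.ones(n_states))
    start = np.argmin(np.abs(mids - mu))               # chart starts at the target value
    return arl[start]

print(arl_ar1_shewhart(phi=0.0))   # ~370 for independent data with 3-sigma limits
print(arl_ar1_shewhart(phi=0.5))   # autocorrelation changes the run-length behaviour
```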
Trindade, Anderson Laécio Galindo. "Contribuições para o controle on-line de processos por atributos". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-02062008-132508/.
The quality control procedure for attributes proposed by Taguchi et al. (1989) consists in inspecting a single item at every m produced items and, based on the result of each inspection, deciding whether the non-conforming fraction has increased or not. If an inspected item is declared non-conforming, the process is stopped and adjusted, on the assumption that it has shifted to the out-of-control condition. Given that: i) the inspection system is subject to misclassification and repeated classifications of the inspected item are possible; ii) the non-conforming fraction, when the process is out of control, can be described by a function y(x); iii) the decision about stopping the process can be based on the last h inspections, a model that takes these points into account is developed. Using properties of ergodic Markov chains, the average cost expression is derived and can be minimized over parameters beyond m: the number of repeated classifications (r); the minimum number of classifications as conforming required to declare an item conforming (s); the number of inspections taken into account (h); and the stopping criterion (u). The results show that repeated classification of the inspected item can be a viable option if only one item is used to decide about the process condition; that a finite Markov chain can represent the control procedure in the presence of a function y(x); and that basing the decision about the process condition on the last h inspections has a significant impact on the average cost.
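The "properties of ergodic Markov chains" referred to above boil down to a standard computation: for an ergodic chain with transition matrix P and a per-state cost vector c, the long-run average cost per produced item is π·c, where π is the stationary distribution. A minimal sketch follows; the two-state matrix and costs are illustrative placeholders, not the model from the thesis.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 as an overdetermined linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two illustrative states of the monitoring cycle: 0 = in control, 1 = out of control.
P = np.array([[0.90, 0.10],
              [0.30, 0.70]])
c = np.array([1.0, 5.0])             # placeholder cost per produced item in each state
pi = stationary_distribution(P)
print(pi, pi @ c)                    # stationary distribution and long-run average cost
```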
Franco, Bruno Chaves. "Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados /". Guaratinguetá : [s.n.], 2011. http://hdl.handle.net/11449/93084.
Abstract: This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval and the control limit coefficient, using a genetic algorithm to search for the minimum cost. The Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that the autocorrelation has an adverse effect on the parameters of the control chart, increasing the monitoring cost and significantly reducing its efficiency.
Advisor: Marcela Aparecida Guerreiro Machado
Co-advisor: Antonio Fernando Branco Costa
Committee member: Fernando Augusto Silva Marins
Committee member: Anderson Paula de Paiva
Master's degree
Marcos, Lucas Barbosa. "Controle de sistemas lineares sujeitos a saltos Markovianos aplicado em veículos autônomos". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-27042017-085140/.
In today's society, automobile vehicles are becoming more and more integrated into people's daily activities, with more than 1 billion of them on the streets around the world. Because they are controlled by drivers, vehicles are subject to failures caused by human error that lead to accidents, injuries and other losses. Autonomous vehicle control has shown itself to be an alternative in the pursuit of damage reduction, and it is pursued by different institutions in many countries; it is therefore a central subject in the area of control systems. This work, relying on mathematical descriptions of vehicle behavior, aims to develop and apply an efficient autonomous control method based on a state-space formulation. This goal is achieved through control strategies based on Markovian Jump Linear Systems that describe the highly non-linear dynamics of the vehicle at different operating points.
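To make the idea of mode-dependent control concrete, here is a minimal sketch (not the vehicle model from the thesis) of a discrete-time Markov jump linear system x_{k+1} = A_i x_k + B_i u_k whose mode i, representing the current operating point, evolves as a Markov chain, closed under a mode-dependent state feedback u_k = K_i x_k. All matrices, gains and probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),            # mode 0: one operating point (assumed)
     np.array([[1.0, 0.2], [0.0, 1.0]])]            # mode 1: another operating point
B = [np.array([[0.0], [0.1]]),
     np.array([[0.0], [0.2]])]
K = [np.array([[-2.0, -3.0]]),                      # mode-dependent gains (assumed stabilizing)
     np.array([[-1.0, -2.0]])]
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                          # Markov chain over operating points

x, mode = np.array([1.0, 0.0]), 0
for k in range(50):
    u = K[mode] @ x                                 # mode-dependent state feedback
    x = A[mode] @ x + B[mode] @ u                   # jump linear dynamics
    mode = rng.choice(2, p=P[mode])                 # sample the next operating point
print(x)                                            # state after 50 steps of the switched loop
```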
Melo, Diogo Henrique de. "Otimização de consumo de combustível em veículos usando um modelo simplificado de trânsito e sistemas com saltos markovianos". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01022017-160814/.
This dissertation deals with the control of vehicles aiming at fuel consumption optimization while taking the interference of traffic into account. Stochastic interference of this kind, like other real-world phenomena, prevents us from directly applying available results. We propose to employ a relatively simple system with Markov jump parameters as a model for the vehicle subject to traffic interference, and to obtain the transition probabilities from a separate model of the traffic. This dissertation presents the model identification, the solution of the resulting problem using dynamic programming, and simulations of the obtained control.
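The dynamic-programming step described above is, in its simplest form, backward induction over a controlled Markov chain: at every stage, choose the action minimizing the immediate cost plus the expected cost-to-go under that action's transition probabilities. The sketch below illustrates this with made-up traffic states, actions and costs; it is not the identified model from the dissertation.

```python
import numpy as np

n_states, n_actions, horizon = 3, 2, 20              # e.g. {free, moderate, heavy} traffic
# P[a][s, s']: transition probabilities under action a (e.g. hold speed / accelerate)
P = np.array([
    [[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.1, 0.4, 0.5]],
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]],
])
cost = np.array([[1.0, 1.5],                         # cost[s, a]: fuel spent in one stage
                 [1.2, 1.8],
                 [1.5, 2.5]])

V = np.zeros(n_states)                               # terminal cost-to-go
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    Q = cost + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    policy[t] = np.argmin(Q, axis=1)                 # best action per traffic state
    V = Q[np.arange(n_states), policy[t]]            # Bellman backward recursion
print(V)          # optimal expected total cost from each initial traffic state
print(policy[0])  # optimal first-stage action per state
```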
Books on the topic "Controlled Markov chain"
Hou, Zhenting, Jerzy A. Filar and Anyue Chen, eds. Markov processes and controlled Markov chains. Dordrecht: Kluwer Academic Publishers, 2002.
Hou, Zhenting, Jerzy A. Filar and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0.
Borkar, Vivek S. Topics in controlled Markov chains. Harlow, Essex, England: Longman Scientific & Technical, 1991.
Filar, Jerzy A. Controlled Markov chains, graphs and hamiltonicity. Hanover, Mass: Now Publishers, 2007.
Cao, Xi-Ren. Foundations of Average-Cost Nonhomogeneous Controlled Markov Chains. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56678-4.
Hou, Zhenting, Anyue Chen and Jerzy A. Filar. Markov Processes and Controlled Markov Chains. Springer London, Limited, 2013.
Markov Processes and Controlled Markov Chains. Springer, 2011.
Hou, Zhenting, Jerzy A. Filar and Anyue Chen, eds. Markov Processes and Controlled Markov Chains. Springer, 2002.
Filar, Jerzy A. Controlled Markov chains, graphs and hamiltonicity. 2007.
Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games. Imperial College Press, 2012.
Znajdź pełny tekst źródłaCzęści książek na temat "Controlled Markov chain"
Cao, Yijia, and Lilian Cao. "Controlled Markov Chain Optimization of Genetic Algorithms". In Lecture Notes in Computer Science, 186–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48774-3_22.
Kvatadze, Z. A., and T. L. Shervashidze. "On limit theorems for conditionally independent random variables controlled by a finite Markov chain". In Lecture Notes in Mathematics, 250–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078480.
Kushner, Harold J., and Paul Dupuis. "Controlled Markov Chains". In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–52. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0007-6_3.
Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains". In Introduction to Discrete Event Systems, 523–89. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-4070-7_9.
Kushner, Harold J., and Paul G. Dupuis. "Controlled Markov Chains". In Numerical Methods for Stochastic Control Problems in Continuous Time, 35–51. New York, NY: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4684-0441-8_3.
Cassandras, Christos G., and Stéphane Lafortune. "Controlled Markov Chains". In Introduction to Discrete Event Systems, 535–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72274-6_9.
Hou, Zhenting, Zaiming Liu, Jiezhong Zou and Xuerong Chen. "Markov Skeleton Processes". In Markov Processes and Controlled Markov Chains, 69–92. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_5.
Dynkin, E. B. "Branching Exit Markov System and their Applications to Partial Differential Equations". In Markov Processes and Controlled Markov Chains, 3–13. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_1.
Guo, Xianping, and Weiping Zhu. "Optimality Conditions for CTMDP with Average Cost Criterion". In Markov Processes and Controlled Markov Chains, 167–88. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_10.
Cavazos-Cadena, Rolando, and Raúl Montes-de-Oca. "Optimal and Nearly Optimal Policies in Markov Decision Chains with Nonnegative Rewards and Risk-Sensitive Expected Total-Reward Criterion". In Markov Processes and Controlled Markov Chains, 189–221. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_11.
Pełny tekst źródłaStreszczenia konferencji na temat "Controlled Markov chain"
Radaideh, Ashraf, Umesh Vaidya and Venkataramana Ajjarapu. "Sensitivity analysis on modeling heterogeneous thermostatically controlled loads using Markov chain abstraction". In 2017 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2017. http://dx.doi.org/10.1109/pesgm.2017.8273971.
Hu, Hai, Chang-Hai Jiang and Kai-Yuan Cai. "Adaptive Software Testing in the Context of an Improved Controlled Markov Chain Model". In 2008 32nd Annual IEEE International Computer Software and Applications Conference. IEEE, 2008. http://dx.doi.org/10.1109/compsac.2008.186.
Arapostathis, A., E. Fernandez-Gaucherand and S. I. Marcus. "Analysis of an adaptive control scheme for a partially observed controlled Markov chain". In 29th IEEE Conference on Decision and Control. IEEE, 1990. http://dx.doi.org/10.1109/cdc.1990.203849.
Laszlo Makara, Arpad, and Laszlo Csurgai-Horvath. "Indoor User Movement Simulation with Markov Chain for Deep Learning Controlled Antenna Beam Alignment". In 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET). IEEE, 2021. http://dx.doi.org/10.1109/icecet52533.2021.9698600.
Song, Qingshuo, and G. Yin. "Rates of convergence of Markov chain approximation for controlled regime-switching diffusions with stopping times". In 2010 49th IEEE Conference on Decision and Control (CDC). IEEE, 2010. http://dx.doi.org/10.1109/cdc.2010.5717658.
Malikopoulos, Andreas A. "Convergence Properties of a Computational Learning Model for Unknown Markov Chains". In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2174.
Malikopoulos, Andreas A. "A Rollout Control Algorithm for Discrete-Time Stochastic Systems". In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4047.
Sovizi, Javad, Suren Kumar and Venkat Krovi. "Optimal Feedback Control of a Flexible Needle Under Anatomical Motion Uncertainty". In ASME 2015 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/dscc2015-9976.
"Performance of mixtures of adaptive controllers based on Markov chains". In Proceedings of the 1999 American Control Conference. IEEE, 1999. http://dx.doi.org/10.1109/acc.1999.782739.
Punčochář, Ivo, and Miroslav Šimandl. "Infinite Horizon Input Signal for Active Fault Detection in Controlled Markov Chains". In Power and Energy. Calgary, AB, Canada: ACTAPRESS, 2013. http://dx.doi.org/10.2316/p.2013.807-028.
Pełny tekst źródłaRaporty organizacyjne na temat "Controlled Markov chain"
Kim, Tae-Hun, and Jung Won Kang. The clinical evidence of effectiveness and safety of massage chair: a scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2023. http://dx.doi.org/10.37766/inplasy2023.2.0021.