Selected scientific literature on the topic "Convergence of Markov processes"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Convergence of Markov processes".
Journal articles on the topic "Convergence of Markov processes"
Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.
Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.
Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
HWANG, CHII-RUEY. "ACCELERATING MONTE CARLO MARKOV PROCESSES." COSMOS 01, no. 01 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Aldous, David J. "Book Review: Markov processes: Characterization and convergence." Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.
Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes." Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination." Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Theses and dissertations on the topic "Convergence of Markov processes"
Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes: invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs): Which dynamics should be chosen to simulate metastable states efficiently? How can the non-equilibrium nature of PDMPs be exploited directly to study the modeled physical systems? 2. Modeling active systems with PDMPs: What conditions must a system meet to be modeled by a PDMP? In which cases does the system have a stationary distribution? How can dynamic quantities (e.g., transition rates) be calculated in this framework? 3. Improving simulation techniques for equilibrium systems: Can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems? How can topological information be used to adapt the dynamics in real time?
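To illustrate the class of processes this thesis studies, here is a minimal sketch (not taken from the thesis itself; the function name and parameter values are illustrative assumptions) of a one-dimensional run-and-tumble particle as a PDMP: deterministic straight-line motion at fixed speed, interrupted by tumbles at exponentially distributed times, after which the direction is resampled.

```python
import random

def simulate_run_and_tumble(v=1.0, tumble_rate=1.0, t_max=100.0, seed=0):
    """Simulate a 1D run-and-tumble particle: deterministic motion at
    speed +/- v between tumble events, which occur at rate tumble_rate.
    Returns the final position at time t_max."""
    rng = random.Random(seed)
    t, x, direction = 0.0, 0.0, 1
    while t < t_max:
        wait = rng.expovariate(tumble_rate)   # time until the next tumble
        wait = min(wait, t_max - t)           # truncate at the horizon
        x += direction * v * wait             # deterministic flow between jumps
        t += wait
        direction = rng.choice((-1, 1))       # tumble: resample the direction
    return x
```

Over long times such a particle behaves diffusively, which is one way the convergence questions above arise.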
Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.
Series: Forschungsberichte / Institut für Statistik
Drozdenko, Myroslav. "Weak Convergence of First-Rare-Event Times for Semi-Markov Processes." Doctoral thesis, Västerås: Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-394.
Yuen, Wai Kong. "Application of geometric bounds to convergence rates of Markov chains and Markov processes on R^n." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58619.pdf.
Kaijser, Thomas. "Convergence in distribution for filtering processes associated to Hidden Markov Models with densities." Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.
Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Fisher, Diana. "Convergence analysis of MCMC method in the study of genetic linkage with missing data." Huntington, WV: [Marshall University Libraries], 2005. http://www.marshall.edu/etd/descript.asp?ref=568.
Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
Texto completo da fonteThe purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used for modeling many natural phenomena. Precisely, we shall focus on their long time behavior as well as their speed of convergence to equilibrium, whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, which will usually be done through coupling methods. We shall emphasize the link between Markov processes and mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to the introduction of a unified approach to study the long time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories
Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function with or without constraint are essential components for the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools that can be applied to a wider range of problems are developed. These tools allow one to easily prove specific Markov chain properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) on the Markov chains studied. Obtaining these properties without these tools is an ad hoc, tedious, and technical process that can be very difficult. Then different ESs are analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space.
Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate the dependence of the convergence or divergence rate of the algorithm on parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, and that this implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up and discussed, and perspectives for future work are explored.
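The log-linear divergence of the step-size on a linear function described above can be observed numerically. The sketch below is a minimal one-dimensional (1,λ)-ES with cumulative step-size adaptation, under assumed parameter choices (`lam`, `c`, `d` are illustrative values, not the thesis's settings): on f(x) = x the selected step is systematically large, the evolution path accumulates, and log σ grows roughly linearly with the iteration count.

```python
import math
import random

def es_csa_linear(lam=10, n_steps=200, seed=1):
    """(1,lambda)-ES with cumulative step-size adaptation (CSA), minimizing
    the linear function f(x) = x in 1D. Returns the history of log(sigma)."""
    rng = random.Random(seed)
    x, sigma, path = 0.0, 1.0, 0.0
    c, d = 0.2, 1.0                      # cumulation and damping parameters
    chi1 = math.sqrt(2.0 / math.pi)      # E|N(0,1)|, the unbiased path norm
    log_sigmas = []
    for _ in range(n_steps):
        # On f(x) = x the best of lam offspring is the most negative step.
        z = min(rng.gauss(0.0, 1.0) for _ in range(lam))
        x += sigma * z
        # Accumulate the selected step into the evolution path.
        path = (1 - c) * path + math.sqrt(c * (2 - c)) * z
        # Increase sigma when the path is longer than under random selection.
        sigma *= math.exp((c / d) * (abs(path) / chi1 - 1.0))
        log_sigmas.append(math.log(sigma))
    return log_sigmas
```

Plotting the returned values against the iteration index shows the log-linear growth of the step-size that the thesis proves.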
Books on the topic "Convergence of Markov processes"
Ethier, Stewart N., and Thomas G. Kurtz. Markov processes: Characterization and convergence. New York: Wiley, 1986.
Roberts, Gareth O. Convergence of slice sampler Markov chains. [Toronto]: University of Toronto, 1997.
Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.
Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on R^n. Toronto: University of Toronto, Dept. of Statistics, 1997.
Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. [Toronto]: University of Toronto, 1997.
Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.
Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Berlin: Springer, 1998.
Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.
Encontre o texto completo da fonteCapítulos de livros sobre o assunto "Convergence of Markov processes"
Zhang, Hanjun, Qixiang Mei, Xiang Lin e Zhenting Hou. "Convergence Property of Standard Transition Functions". In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.
Texto completo da fonteAltman, Eitan. "Convergence of discounted constrained MDPs". In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.
Texto completo da fonteAltman, Eitan. "Convergence as the horizon tends to infinity". In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.
Texto completo da fonteKersting, G., e F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.
Texto completo da fonteCai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used". In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.
Texto completo da fonteBernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes". In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.
Texto completo da fontePop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes". In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.
Texto completo da fonteFeng, Jin, e Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence". In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.
Texto completo da fonteNegoro, Akira, e Masaaki Tsuchiya. "Convergence and uniqueness theorems for markov processes associated with Lévy operators". In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.
Texto completo da fonteZverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes". In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Convergence of Markov processes"
Majeed, Sultan Javed, e Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Texto completo da fonteAmiri, Mohsen, e Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States". In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.
Texto completo da fonteDing, Dongsheng, Kaiqing Zhang, Tamer Basar e Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes". In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.
Texto completo da fonteShi, Chongyang, Yuheng Bu e Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Texto completo da fonteFerreira Salvador, Paulo J., e Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence". In ITCom 2001: International Symposium on the Convergence of IT and Communications, editado por Robert D. van der Mei e Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.
Texto completo da fonteTakagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto e Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains". In ITCom 2001: International Symposium on the Convergence of IT and Communications, editado por Robert D. van der Mei e Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.
Texto completo da fonteHongbin Liang, Lin X. Cai, Hangguan Shan, Xuemin Shen e Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process". In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.
Texto completo da fonteTayeb, Shahab, Miresmaeil Mirnabibaboli e Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm". In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.
Texto completo da fonteChanron, Vincent, e Kemper Lewis. "A Study of Convergence in Decentralized Design". In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dac-48782.
Texto completo da fonteKuznetsova, Natalia, e Zhanna Pisarenko. "Financial convergence at the world financial market: pension funds and insurance entities prospects: case of China, EU, USA". In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.037.
Texto completo da fonteRelatórios de organizações sobre o assunto "Convergence of Markov processes"
Adler, Robert J., Stamatis Gambanis e Gennady Samorodnitsky. On Stable Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, setembro de 1987. http://dx.doi.org/10.21236/ada192892.
Texto completo da fonteAthreya, Krishna B., Hani Doss e Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, julho de 1992. http://dx.doi.org/10.21236/ada255456.
Texto completo da fonteAbdel-Hameed, M. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes Replacement Policies. Fort Belvoir, VA: Defense Technical Information Center, dezembro de 1985. http://dx.doi.org/10.21236/ada174646.
Texto completo da fonteNewell, Alan. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes and Replacement Policies. Fort Belvoir, VA: Defense Technical Information Center, maio de 1986. http://dx.doi.org/10.21236/ada174995.
Texto completo da fonteCinlar, E. Markov Processes Applied to Control, Reliability and Replacement. Fort Belvoir, VA: Defense Technical Information Center, abril de 1989. http://dx.doi.org/10.21236/ada208634.
Texto completo da fonteRohlicek, J. R., e A. S. Willsky. Structural Decomposition of Multiple Time Scale Markov Processes,. Fort Belvoir, VA: Defense Technical Information Center, outubro de 1987. http://dx.doi.org/10.21236/ada189739.
Texto completo da fonteSerfozo, Richard F. Poisson Functionals of Markov Processes and Queueing Networks. Fort Belvoir, VA: Defense Technical Information Center, dezembro de 1987. http://dx.doi.org/10.21236/ada191217.
Draper, Bruce A., and J. Ross Beveridge. Learning to Populate Geospatial Databases via Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, December 1999. http://dx.doi.org/10.21236/ada374536.
Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.