Academic literature on the topic "Convergence de processus de Markov"
Consult the thematic lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Convergence de processus de Markov".
You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Convergence de processus de Markov"
Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.
Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.
Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Hwang, Chii-Ruey. "Accelerating Monte Carlo Markov Processes." COSMOS 1, no. 1 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Aldous, David J. "Book Review: Markov processes: Characterization and convergence." Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.
Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
Macci, Claudio. "Continuous-time Markov additive processes: Composition of large deviations principles and comparison between exponential rates of convergence." Journal of Applied Probability 38, no. 4 (December 2001): 917–31. http://dx.doi.org/10.1239/jap/1011994182.
Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination." Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes." Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Texto completoTesis sobre el tema "Convergence de processus de Markov"
Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Texto completoHahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
Texto completoThis thesis investigates the long-time behavior of run-and-tumble particles (RTPs), a model for bacteria's moves and interactions in out-of-equilibrium statistical mechanics, using piecewise deterministic Markov processes (PDMPs). The motivation is to improve the particle-level understanding of active phenomena, in particular motility induced phase separation (MIPS). The invariant measure for two jamming RTPs on a 1D torus is determined for general tumbling and jamming, revealing two out-of-equilibrium universality classes. Furthermore, the dependence of the mixing time on model parameters is established using coupling techniques and the continuous PDMP model is rigorously linked to a known on-lattice model. In the case of two jamming RTPs on the real line interacting through an attractive potential, the invariant measure displays qualitative differences based on model parameters, reminiscent of shape transitions and universality classes. Sharp quantitative convergence bounds are again obtained through coupling techniques. Additionally, the explicit invariant measure of three jamming RTPs on the 1D torus is computed. Finally, hypocoercive convergence results are extended to RTPs, achieving sharp \( L^2 \) convergence rates in a general setting that also covers kinetic Langevin and sampling PDMPs
Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
The purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used to model natural phenomena. Precisely, we focus on their long-time behavior as well as their speed of convergence to equilibrium whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, usually achieved through coupling methods. We emphasize the link between Markov processes and the mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to a unified approach for studying the long-time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories.
Rochet, Sophie. "Convergence des algorithmes génétiques : modèles stochastiques et épistasie". Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11032.
Bertoncini, Olivier. "Convergence abrupte et métastabilité." PhD thesis, Rouen, 2007. http://www.theses.fr/2007ROUES038.
The aim of this thesis is to link two phenomena concerning the asymptotic behavior of stochastic processes which had previously been studied separately: the abrupt convergence or cutoff phenomenon on the one hand, and metastability on the other. In the cutoff case, an abrupt convergence towards the equilibrium measure occurs at a time which can be determined, whereas metastability is associated with great uncertainty in the time at which some equilibrium is left. We propose a common framework in which to compare and study both phenomena: discrete-time birth-and-death chains on N with drift towards zero. Under the drift hypothesis, we prove that there is abrupt convergence towards zero, metastability in the other direction, and that the last exit in the metastable regime is the time reverse of a typical cutoff path. We extend our approach to the Ehrenfest model, which allows us to prove abrupt convergence and metastability under a weaker drift hypothesis.
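The drift condition at the heart of this framework is easy to explore numerically. Below is an illustrative sketch (not taken from the thesis) of a birth-and-death chain on the nonnegative integers with drift towards zero; simulating the hitting time of 0 from a high starting state exhibits the concentration of hitting times that produces abrupt convergence.

```python
import random

def hitting_time_of_zero(n, p_up=0.3, seed=0):
    """Birth-and-death chain on {0, 1, 2, ...} started at n, with drift toward
    zero: each step is +1 with probability p_up < 1/2, else -1, and the walk
    stops on reaching 0.  Returns the first hitting time of state 0."""
    rng = random.Random(seed)
    x, t = n, 0
    while x > 0:
        x += 1 if rng.random() < p_up else -1
        t += 1
    return t
```

As n grows, the hitting times concentrate around n / (1 - 2 * p_up), so the distance to equilibrium stays near its maximum and then drops abruptly around that deterministic time — the cutoff picture.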
Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005/document.
Markov Decision Processes (MDPs) are a mathematical formalism underlying many domains of artificial intelligence such as planning, machine learning, and reinforcement learning. Solving an MDP means finding the optimal strategy, or policy, of an agent interacting with a stochastic environment. When the size of this system becomes very large, the problem becomes hard to solve with classical methods. This thesis deals with the resolution of MDPs with large state spaces. It studies resolution methods such as abstraction and projection methods, shows the limits of some approaches, and identifies structures that may be useful for MDP resolution. The thesis also focuses on projection methods, in particular the least-squares temporal difference algorithm LSTD(λ). An estimate of the rate of convergence of this algorithm is derived, with an emphasis on the role played by the parameter λ. This analysis is then generalized to least-squares non-stationary policy iteration, LS(λ)NSPI. We compute a performance bound for LS(λ)NSPI by bounding the error between the value computed at a given iteration and the value computed under the optimal policy, which we aim to determine.
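For readers unfamiliar with the LSTD(λ) algorithm named in this abstract, here is a small textbook-style sketch of policy evaluation with one-hot features on a finite Markov reward process; the function names and the toy setup are illustrative assumptions, not the thesis's formulation, and `solve` is a dependency-free stand-in for a linear solver.

```python
def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with partial
    pivoting (a stand-in for numpy.linalg.solve, to stay dependency-free)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def lstd_lambda(episode, n_states, gamma=0.9, lam=0.5):
    """LSTD(lambda) estimate of the value function from one trajectory.

    episode: list of (state, reward, next_state) transitions.
    With one-hot features, the matrix A accumulates z (e_s - gamma * e_s')^T
    and b accumulates z * r, where z is the eligibility trace; the value
    estimate is the solution of A theta = b."""
    a = [[0.0] * n_states for _ in range(n_states)]
    b = [0.0] * n_states
    z = [0.0] * n_states
    for s, r, s2 in episode:
        z = [gamma * lam * zi for zi in z]  # decay the eligibility trace...
        z[s] += 1.0                         # ...and bump the visited state
        for i in range(n_states):
            a[i][s] += z[i]
            a[i][s2] -= gamma * z[i]
            b[i] += z[i] * r
    return solve(a, b)
```

On the deterministic two-state cycle with unit rewards, every Bellman residual vanishes at the true value function, so the estimate recovers V = 1/(1 - gamma) = 10 in both states regardless of λ.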
Gavra, Iona Alexandra. "Algorithmes stochastiques d'optimisation sous incertitude sur des structures complexes : convergence et applications". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30141/document.
The main topics of this thesis are the development of stochastic algorithms for optimization under uncertainty, the study of their theoretical properties, and applications. The proposed algorithms are modified versions of simulated annealing that use only unbiased estimators of the cost function. We study their convergence using tools from the theory of Markov processes: we use properties of infinitesimal generators and functional inequalities to measure the distance between their probability law and a target one. The first part is concerned with quantum graphs endowed with a probability measure on their vertex set. Quantum graphs are continuous versions of undirected weighted graphs. The starting point of the present work was the question of finding Fréchet means on such a graph. The Fréchet mean extends the Euclidean mean to general metric spaces and is defined as an element that minimizes the sum of weighted squared distances to all vertices. Our method relies on a Langevin formulation of a noisy simulated annealing, dealt with using homogenization. In order to establish the convergence in probability of the process, we study the evolution of the relative entropy of its law with respect to a convenient Gibbs measure. Using functional inequalities (Poincaré and Sobolev) and Gronwall's lemma, we then show that the relative entropy goes to zero. We test our method on some real data sets and propose a heuristic method to adapt the algorithm to very large graphs, using a preliminary clustering. In the same framework, we introduce a definition of principal component analysis for quantum graphs. This again leads to a stochastic optimization problem, this time on the space of the graph's geodesics. We suggest an algorithm for finding the first principal component and conjecture the convergence of the associated Markov process to the desired set.
In the second part, we propose a modified version of the simulated annealing algorithm for solving a stochastic global optimization problem on a finite space. Our approach is inspired by the general field of Monte Carlo methods and relies on a Markov chain whose transition probabilities at each step are defined with the help of mini-batches of increasing (random) size. We prove the algorithm's convergence in probability towards the optimal set, provide a convergence rate, and derive an optimized parametrization that ensures a minimal number of cost evaluations for a given accuracy and a confidence level close to 1. This work is completed with a set of numerical experiments and an assessment of the practical performance on both benchmark test cases and real-world examples.
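The mini-batch idea can be illustrated with a short sketch: a Metropolis-type chain over a finite candidate set where each cost comparison averages a growing number of noisy samples. The batch-growth and cooling schedules below are illustrative guesses, not the optimized parametrization derived in the thesis.

```python
import math
import random

def minibatch_annealing(candidates, noisy_cost, n_steps=2000, seed=0):
    """Simulated annealing on a finite set using only noisy cost samples.

    noisy_cost(x, rng) must return one unbiased sample of the cost at x.
    At step k, costs are averaged over a mini-batch whose size grows with k
    while the temperature decreases; proposals are uniform over the set."""
    rng = random.Random(seed)
    x = rng.choice(candidates)
    for k in range(n_steps):
        batch = 1 + int(math.log(k + 2) ** 2)  # slowly growing batch size
        temp = 1.0 / math.log(k + 2)           # logarithmic cooling schedule
        y = rng.choice(candidates)             # uniform proposal
        fx = sum(noisy_cost(x, rng) for _ in range(batch)) / batch
        fy = sum(noisy_cost(y, rng) for _ in range(batch)) / batch
        # Metropolis acceptance based on the estimated costs
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp):
            x = y
    return x
```

With a quadratic cost plus Gaussian noise on a small integer grid, the chain typically settles near the minimizer once the batches are large enough to beat the noise.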
Books on the topic "Convergence de processus de Markov"
Ethier, Stewart N., and Thomas G. Kurtz. Markov processes: Characterization and convergence. New York: Wiley, 1986.
Roberts, Gareth O. Convergence of slice sampler Markov chains. [Toronto]: University of Toronto, 1997.
Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.
Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. [Toronto]: University of Toronto, 1997.
Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.
Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on Rn. Toronto: University of Toronto, Dept. of Statistics, 1997.
Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Berlin: Springer, 1998.
Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.
Buscar texto completoCapítulos de libros sobre el tema "Convergence de processus de Markov"
Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.
Altman, Eitan. "Convergence of discounted constrained MDPs." In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.
Altman, Eitan. "Convergence as the horizon tends to infinity." In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.
Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.
Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.
Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.
Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence." In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.
Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.
Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for Markov processes associated with Lévy operators." In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.
Zverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes." In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.
Texto completoActas de conferencias sobre el tema "Convergence de processus de Markov"
Shi, Zhengbin. "Volatility Prediction Algorithm in Enterprise Financial Risk Management Based on Markov Chain Algorithm." In 2023 International Conference on Intelligent Computing, Communication & Convergence (ICI3C), 152–56. IEEE, 2023. http://dx.doi.org/10.1109/ici3c60830.2023.00039.
Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.
Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.
Liang, Hongbin, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.
Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.
Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.
Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.
Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Horák, Karel, Branislav Bošanský, and Krishnendu Chatterjee. "Goal-HSVI: Heuristic Search Value Iteration for Goal POMDPs." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/662.
Texto completoInformes sobre el tema "Convergence de processus de Markov"
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.
Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. Easy-to-Apply Results for Establishing Convergence of Markov Chains in Bayesian Analysis. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada264015.
Bledsoe, Keith C. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1234327.
Šiljak, Dženita. The Effects of Institutions on the Transition of the Western Balkans. Külügyi és Külgazdasági Intézet, 2022. http://dx.doi.org/10.47683/kkielemzesek.ke-2022.19.
Quevedo, Fernando, Paolo Giordano, and Mauricio Mesquita Moreira. El tratamiento de las asimetrías en los acuerdos de integración regional. Inter-American Development Bank, August 2004. http://dx.doi.org/10.18235/0009450.
Briones, Roehlano, Ivory Myka Galang, Isabel Espineli, Aniceto Jr. Orbeta, and Marife Ballesteros. Endline Study Report and Policy Study for the ConVERGE Project. Philippine Institute for Development Studies, September 2023. http://dx.doi.org/10.62986/dp2023.13.
Hori, Tsuneki, Sergio Lacambra Ayuso, Ana María Torres, Lina Salazar, Gilberto Romero, Rolando Durán, Ginés Suarez, Lizardo Narváez, and Ernesto Visconti. Índice de Gobernabilidad y Políticas Públicas en Gestión de Riesgo de Desastres (iGOPP): Informe Nacional de Perú. Inter-American Development Bank, October 2015. http://dx.doi.org/10.18235/0010086.
Ocampo, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta Navarro, and Leonardo Villar Gómez. Informe de la Junta Directiva al Congreso de la República - Marzo de 2023. Banco de la República, March 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep.3-2023.
Ocampo-Gaviria, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta-Navarro, and Leonardo Villar Gómez. Report of the Board of Directors to the Congress of Colombia - March 2023. Banco de la República de Colombia, June 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep-eng.03-2023.