Academic literature on the topic 'Convergence de processus de Markov'
Below are lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Convergence de processus de Markov.'
Journal articles on the topic "Convergence de processus de Markov"
Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.
Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.
Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Hwang, Chii-Ruey. "Accelerating Monte Carlo Markov Processes." COSMOS 1, no. 1 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Aldous, David J. "Book Review: Markov processes: Characterization and convergence." Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.
Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1017/s0001867800005851.
Macci, Claudio. "Continuous-time Markov additive processes: Composition of large deviations principles and comparison between exponential rates of convergence." Journal of Applied Probability 38, no. 4 (December 2001): 917–31. http://dx.doi.org/10.1239/jap/1011994182.
Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination." Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes." Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Full textDissertations / Theses on the topic "Convergence de processus de Markov"
Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Full textHahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs):
   - Which dynamics to choose to efficiently simulate metastable states?
   - How to directly exploit the non-equilibrium nature of PDMPs to study the modeled physical systems?
2. Modeling active systems with PDMPs:
   - What conditions must a system meet to be modeled by a PDMP?
   - In which cases does the system have a stationary distribution?
   - How to calculate dynamic quantities (e.g., transition rates) in this framework?
3. Improving simulation techniques for equilibrium systems:
   - Can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems?
   - How to use topological information to adapt the dynamics in real time?
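The run-and-tumble particles studied in this thesis are a textbook instance of a PDMP: deterministic ballistic motion between jump times, with velocity reversals ("tumbles") at exponentially distributed epochs. The following minimal sketch is our own illustration of that structure, not code from the thesis; the parameter names (`speed`, `tumble_rate`) are ours.

```python
import random

def simulate_run_and_tumble(t_max, speed=1.0, tumble_rate=1.0, seed=0):
    """Simulate a 1D run-and-tumble particle: ballistic motion at
    velocity +/- speed, reversed at rate tumble_rate. This is a PDMP
    with trivial deterministic flow and Poisson-driven jumps."""
    rng = random.Random(seed)
    t, x, v = 0.0, 0.0, speed
    path = [(t, x)]
    while t < t_max:
        # waiting time to the next tumble ~ Exp(tumble_rate)
        dt = min(rng.expovariate(tumble_rate), t_max - t)
        x += v * dt          # deterministic flow between jumps
        t += dt
        v = -v               # jump: reverse the velocity
        path.append((t, x))
    return path

path = simulate_run_and_tumble(t_max=10.0)
print(len(path), path[-1])   # number of events and final (time, position)
```

Since the speed is constant, the position always satisfies |x(t)| <= speed * t, a quick sanity check on the output.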
Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
The purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used to model natural phenomena. Specifically, we focus on their long-time behavior as well as their speed of convergence to equilibrium, whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, usually achieved through coupling methods. We emphasize the link between Markov processes and the mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to a unified approach for studying the long-time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories.
Rochet, Sophie. "Convergence des algorithmes génétiques : modèles stochastiques et épistasie." Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11032.
Bertoncini, Olivier. "Convergence abrupte et métastabilité." PhD thesis, Rouen, 2007. http://www.theses.fr/2007ROUES038.
The aim of this thesis is to link two phenomena concerning the asymptotic behavior of stochastic processes which had previously been studied separately: the abrupt convergence or cutoff phenomenon on one hand, and metastability on the other. In the cutoff case, an abrupt convergence towards the equilibrium measure occurs at a time which can be determined, whereas metastability is linked to a great uncertainty in the time at which one leaves some equilibrium. We propose a common framework in which to compare and study both phenomena: that of discrete-time birth and death chains on N with drift towards zero. Under the drift hypothesis, we prove that there is abrupt convergence towards zero, metastability in the other direction, and that the last excursion in the metastable regime is the time reversal of a typical cutoff path. We extend our approach to the Ehrenfest model, which allows us to prove abrupt convergence and metastability under a weaker drift hypothesis.
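The abrupt convergence described in this abstract can be observed numerically: for a birth and death chain with drift towards zero, the hitting time of 0 from a high starting state concentrates around a deterministic value, with fluctuations of smaller order. The sketch below is our own toy illustration of that concentration (the step probabilities are assumptions, not the thesis's construction).

```python
import random

def hitting_time_of_zero(n, p=0.7, seed=None):
    """Hitting time of 0 for a birth and death chain started at n,
    stepping down with probability p and up with probability 1 - p;
    p > 1/2 gives a drift towards zero."""
    rng = random.Random(seed)
    x, t = n, 0
    while x > 0:
        x += -1 if rng.random() < p else 1
        t += 1
    return t

# With drift -(2p - 1) per step, the chain hits 0 around time
# n / (2p - 1): here 200 / 0.4 = 500, with small relative spread.
n, p = 200, 0.7
times = [hitting_time_of_zero(n, p, seed=s) for s in range(200)]
mean = sum(times) / len(times)
print(round(mean))  # close to 500
```

The shrinking ratio of standard deviation to mean as n grows is the signature of the cutoff-type concentration discussed above.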
Bertoncini, Olivier. "Convergence abrupte et métastabilité." PhD thesis, Université de Rouen, 2007. http://tel.archives-ouvertes.fr/tel-00218132.
We show that under the drift hypothesis there is abrupt convergence towards zero and metastability in the other direction. Moreover, the last excursion in the metastable regime is the time reversal of a typical cutoff path.
We extend our approach to the Ehrenfest model, which allows us to prove abrupt convergence and metastability under a weaker drift hypothesis.
Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005/document.
Markov Decision Processes (MDPs) are a mathematical formalism for many domains of artificial intelligence, such as planning, machine learning, and reinforcement learning. Solving an MDP means finding the optimal strategy or policy of an agent interacting in a stochastic environment. When the size of this system becomes very large, it becomes hard to solve this problem with classical methods. This thesis deals with the resolution of MDPs with large state spaces. It studies some resolution methods, such as abstractions and projection methods, shows the limits of some approaches, and identifies structures that may be of interest for MDP resolution. This thesis also focuses on projection methods, in particular the least-squares temporal difference algorithm LSTD(λ). An estimate of the rate of convergence of this algorithm has been derived, with an emphasis on the role played by the parameter λ. This analysis has then been generalized to least-squares non-stationary policy iteration LS(λ)NSPI. We compute a performance bound for LS(λ)NSPI by bounding the error between the value computed at a fixed iteration and the value computed under the optimal policy, which we aim to determine.
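For readers unfamiliar with LSTD(λ), the algorithm analyzed in this thesis estimates a value function from a single trajectory by accumulating eligibility traces into a linear system. The sketch below is the standard textbook form of LSTD(λ) applied to a toy two-state Markov reward process of our own choosing, not code from the thesis.

```python
import numpy as np

def lstd_lambda(trajectory, phi, gamma=0.9, lam=0.5):
    """LSTD(lambda) policy evaluation from one trajectory.
    trajectory: list of (state, reward, next_state); phi: feature map."""
    d = len(phi(trajectory[0][0]))
    A, b, z = np.zeros((d, d)), np.zeros(d), np.zeros(d)
    for s, r, s_next in trajectory:
        z = gamma * lam * z + phi(s)                    # eligibility trace
        A += np.outer(z, phi(s) - gamma * phi(s_next))  # system matrix
        b += z * r
    return np.linalg.solve(A, b)

# Toy Markov reward process with a known transition matrix.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
R = np.array([1.0, 0.0])
gamma = 0.9
v_true = np.linalg.solve(np.eye(2) - gamma * P, R)  # exact values

rng = np.random.default_rng(0)
s, traj = 0, []
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    traj.append((s, R[s], s_next))
    s = s_next

phi = lambda s: np.eye(2)[s]  # tabular features: LSTD recovers v_true
theta = lstd_lambda(traj, phi, gamma, lam=0.5)
print("max error:", float(np.max(np.abs(theta - v_true))))
```

With tabular features the LSTD(λ) fixed point coincides with the true value function for every λ; the parameter matters for the estimation error from finite data, which is exactly the rate question the thesis studies.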
Gavra, Iona Alexandra. "Algorithmes stochastiques d'optimisation sous incertitude sur des structures complexes : convergence et applications." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30141/document.
The main topics of this thesis involve the development of stochastic algorithms for optimization under uncertainty, the study of their theoretical properties, and applications. The proposed algorithms are modified versions of simulated annealing that use only unbiased estimators of the cost function. We study their convergence using the tools developed in the theory of Markov processes: we use properties of infinitesimal generators and functional inequalities to measure the distance between their probability law and a target one. The first part is concerned with quantum graphs endowed with a probability measure on their vertex set. Quantum graphs are continuous versions of undirected weighted graphs. The starting point of the present work was the question of finding Fréchet means on such a graph. The Fréchet mean is an extension of the Euclidean mean to general metric spaces and is defined as an element that minimizes the sum of weighted squared distances to all vertices. Our method relies on a Langevin formulation of a noisy simulated annealing dealt with using homogenization. In order to establish the convergence in probability of the process, we study the evolution of the relative entropy of its law with respect to a convenient Gibbs measure. Using functional inequalities (Poincaré and Sobolev) and Gronwall's lemma, we then show that the relative entropy goes to zero. We test our method on some real data sets and propose a heuristic method to adapt the algorithm to huge graphs, using a preliminary clustering. In the same framework, we introduce a definition of principal component analysis for quantum graphs. This leads, once more, to a stochastic optimization problem, this time on the space of the graph's geodesics. We suggest an algorithm for finding the first principal component and conjecture the convergence of the associated Markov process to the wanted set.
In the second part, we propose a modified version of the simulated annealing algorithm for solving a stochastic global optimization problem on a finite space. Our approach is inspired by the general field of Monte Carlo methods and relies on a Markov chain whose transition probabilities at each step are defined with the help of mini-batches of increasing (random) size. We prove the algorithm's convergence in probability towards the optimal set, provide a convergence rate, and give an optimized parametrization that ensures a minimal number of evaluations for a given accuracy and a confidence level close to 1. This work is completed with a set of numerical experiments and an assessment of the practical performance both on benchmark test cases and on real-world examples.
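The mini-batch idea in this abstract can be sketched as a Metropolis-type chain where each acceptance test uses a cost estimate averaged over a batch whose size grows with the iteration count. The code below is our own simplified illustration; the uniform proposal, batch schedule, and logarithmic cooling are assumptions for the sake of the example, not the parametrization optimized in the thesis.

```python
import math
import random

def minibatch_annealing(states, noisy_cost, n_iter=3000, seed=0):
    """Simulated annealing on a finite state space where the cost is
    only available through noisy unbiased evaluations: at step k, costs
    are estimated by averaging a mini-batch whose size grows with k,
    while the temperature decreases."""
    rng = random.Random(seed)
    x = rng.choice(states)
    for k in range(1, n_iter + 1):
        y = rng.choice(states)                 # uniform proposal
        batch = 1 + k // 100                   # increasing batch size
        est_x = sum(noisy_cost(x, rng) for _ in range(batch)) / batch
        est_y = sum(noisy_cost(y, rng) for _ in range(batch)) / batch
        temp = 1.0 / math.log(k + 1)           # logarithmic cooling
        if est_y <= est_x or rng.random() < math.exp(-(est_y - est_x) / temp):
            x = y
    return x

# Toy problem: minimize f(s) = (s - 3)^2 observed with Gaussian noise.
states = list(range(10))
noisy = lambda s, rng: (s - 3) ** 2 + rng.gauss(0.0, 1.0)
print(minibatch_annealing(states, noisy))  # expected to end near the minimizer 3
```

Growing the batch tightens the cost estimates exactly when the falling temperature makes the acceptance test sensitive to small cost differences, which is the trade-off the thesis quantifies.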
Books on the topic "Convergence de processus de Markov"
Kurtz, Thomas G., ed. Markov processes: Characterization and convergence. New York: Wiley, 1986.
Roberts, Gareth O. Convergence of slice sampler Markov chains. [Toronto]: University of Toronto, 1997.
Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.
Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. [Toronto]: University of Toronto, 1997.
Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.
Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on Rn. Toronto: University of Toronto, Dept. of Statistics, 1997.
Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Berlin: Springer, 1998.
Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.
Find full textBook chapters on the topic "Convergence de processus de Markov"
Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.
Altman, Eitan. "Convergence of discounted constrained MDPs." In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.
Altman, Eitan. "Convergence as the horizon tends to infinity." In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.
Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.
Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.
Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.
Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence." In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.
Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.
Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for Markov processes associated with Lévy operators." In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.
Zverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes." In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.
Full textConference papers on the topic "Convergence de processus de Markov"
Shi, Zhengbin. "Volatility Prediction Algorithm in Enterprise Financial Risk Management Based on Markov Chain Algorithm." In 2023 International Conference on Intelligent Computing, Communication & Convergence (ICI3C), 152–56. IEEE, 2023. http://dx.doi.org/10.1109/ici3c60830.2023.00039.
Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.
Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.
Liang, Hongbin, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.
Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.
Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.
Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.
Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Horák, Karel, Branislav Bošanský, and Krishnendu Chatterjee. "Goal-HSVI: Heuristic Search Value Iteration for Goal POMDPs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/662.
Full textReports on the topic "Convergence de processus de Markov"
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.
Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. Easy-to-Apply Results for Establishing Convergence of Markov Chains in Bayesian Analysis. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada264015.
Bledsoe, Keith C. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1234327.
Šiljak, Dženita. The Effects of Institutions on the Transition of the Western Balkans. Külügyi és Külgazdasági Intézet, 2022. http://dx.doi.org/10.47683/kkielemzesek.ke-2022.19.
Quevedo, Fernando, Paolo Giordano, and Mauricio Mesquita Moreira. El tratamiento de las asimetrías en los acuerdos de integración regional. Inter-American Development Bank, August 2004. http://dx.doi.org/10.18235/0009450.
Briones, Roehlano, Ivory Myka Galang, Isabel Espineli, Aniceto Jr Orbeta, and Marife Ballesteros. Endline Study Report and Policy Study for the ConVERGE Project. Philippine Institute for Development Studies, September 2023. http://dx.doi.org/10.62986/dp2023.13.
Hori, Tsuneki, Sergio Lacambra Ayuso, Ana María Torres, Lina Salazar, Gilberto Romero, Rolando Durán, Ginés Suarez, Lizardo Narváez, and Ernesto Visconti. Índice de Gobernabilidad y Políticas Públicas en Gestión de Riesgo de Desastres (iGOPP): Informe Nacional de Perú. Inter-American Development Bank, October 2015. http://dx.doi.org/10.18235/0010086.
Ocampo, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta Navarro, and Leonardo Villar Gómez. Informe de la Junta Directiva al Congreso de la República - Marzo de 2023. Banco de la República, March 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep.3-2023.
Ocampo-Gaviria, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta-Navarro, and Leonardo Villar Gómez. Report of the Board of Directors to the Congress of Colombia - March 2023. Banco de la República de Colombia, June 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep-eng.03-2023.