Selected scientific literature on the topic "Convergence de processus de Markov"
Below is a list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Convergence de processus de Markov".
Journal articles on the topic "Convergence de processus de Markov"
Abakuks, A., S. N. Ethier and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.
Perkins, Edwin, S. N. Ethier and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.
Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas". International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Hwang, Chii-Ruey. "Accelerating Monte Carlo Markov Processes". COSMOS 01, no. 01 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Aldous, David J. "Book Review: Markov processes: Characterization and convergence". Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.
Franz, Uwe, Volkmar Liebscher and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes". Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
Franz, Uwe, Volkmar Liebscher and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes". Advances in Applied Probability 44, no. 03 (September 2012): 729–48. http://dx.doi.org/10.1017/s0001867800005851.
Macci, Claudio. "Continuous-time Markov additive processes: Composition of large deviations principles and comparison between exponential rates of convergence". Journal of Applied Probability 38, no. 4 (December 2001): 917–31. http://dx.doi.org/10.1239/jap/1011994182.
Deng, Chang-Song, René L. Schilling and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination". Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes". Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Theses / dissertations on the topic "Convergence de processus de Markov"
Lachaud, Béatrice. "Détection de la convergence de processus de Markov". PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov". PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs):
- Which dynamics to choose to efficiently simulate metastable states?
- How to directly exploit the non-equilibrium nature of PDMPs to study the modeled physical systems?
2. Modeling active systems with PDMPs:
- What conditions must a system meet to be modeled by a PDMP?
- In which cases does the system have a stationary distribution?
- How to calculate dynamic quantities (e.g., transition rates) in this framework?
3. Improving simulation techniques for equilibrium systems:
- Can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems?
- How to use topological information to adapt the dynamics in real-time?
Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
The purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used for modeling many natural phenomena. Precisely, we shall focus on their long time behavior as well as their speed of convergence to equilibrium, whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, which will usually be done through coupling methods. We shall emphasize the link between Markov processes and mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to the introduction of a unified approach to study the long time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories.
Rochet, Sophie. "Convergence des algorithmes génétiques : modèles stochastiques et épistasie". Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11032.
Texto completo da fonteBertoncini, Olivier. "Convergence abrupte et métastabilité". Phd thesis, Rouen, 2007. http://www.theses.fr/2007ROUES038.
The aim of this thesis is to link two phenomena concerning the asymptotic behavior of stochastic processes which until now were studied separately: the abrupt convergence or cutoff phenomenon on the one hand, and metastability on the other. In the cutoff case an abrupt convergence towards the equilibrium measure occurs at a time which can be determined, whereas metastability is linked to a great uncertainty in the time at which we leave some equilibrium. We propose a common framework in which to compare and study both phenomena: that of discrete time birth and death chains on N with drift towards zero. Under the drift hypothesis, we prove that there is an abrupt convergence towards zero, metastability in the other direction, and that the last exit in the metastability is the time reverse of a typical cutoff path. We extend our approach to the Ehrenfest model, which allows us to prove abrupt convergence and metastability under a weaker drift hypothesis.
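The drift condition in this abstract lends itself to a quick numerical illustration. The sketch below (with hypothetical parameters, not taken from the thesis) simulates a discrete-time birth and death chain on the nonnegative integers with downward drift and checks that the hitting time of zero concentrates sharply around its mean, which is the signature of abrupt convergence:

```python
import random

def hitting_time_of_zero(x0, p=0.3, rng=random):
    """Time for a birth-death chain started at x0 to first hit 0.

    From x > 0 the chain moves up with probability p < 1/2 and down with
    probability 1 - p, so it drifts towards zero at speed 1 - 2p per step.
    """
    x, t = x0, 0
    while x > 0:
        x += 1 if rng.random() < p else -1
        t += 1
    return t

random.seed(0)
times = [hitting_time_of_zero(200) for _ in range(200)]
mean = sum(times) / len(times)
std = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5
# The hitting time concentrates around x0 / (1 - 2p): fluctuations are of
# order sqrt(x0), small compared to the mean -- an abrupt transition.
print(round(mean), round(std))
```

For x0 = 200 and p = 0.3 the mean is close to x0 / (1 - 2p) = 500 while the standard deviation stays an order of magnitude smaller, so on the scale of the mean the convergence looks like a step function.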
Bertoncini, Olivier. "Convergence abrupte et métastabilité". PhD thesis, Université de Rouen, 2007. http://tel.archives-ouvertes.fr/tel-00218132.
We show that under the drift hypothesis there is abrupt convergence towards zero and metastability in the other direction. Moreover, the last excursion in the metastable regime is the time reversal of a typical cutoff path.
We extend our approach to the Ehrenfest model, which allows us to prove abrupt convergence and metastability under a weaker drift hypothesis.
Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005/document.
Markov Decision Processes (MDPs) are a mathematical formalism used in many domains of artificial intelligence, such as planning, machine learning, and reinforcement learning. Solving an MDP means finding the optimal strategy or policy of an agent interacting with a stochastic environment. When the size of this system becomes very large, it becomes hard to solve this problem with classical methods. This thesis deals with the resolution of MDPs with large state spaces. It studies some resolution methods, such as abstractions and projection methods, shows the limits of some approaches, and identifies some structures that may be of interest for MDP resolution. This thesis also focuses on projection methods, in particular the least-squares temporal difference algorithm LSTD(λ). An estimate of the rate of convergence of this algorithm has been derived, with an emphasis on the role played by the parameter λ. This analysis has then been generalized to the case of least-squares non-stationary policy iteration LS(λ)NSPI. We compute a performance bound for LS(λ)NSPI by bounding the error between the value computed at a fixed iteration and the value computed under the optimal policy, which we aim to determine.
Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille". Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005.
Markov Decision Processes (MDPs) are a mathematical formalism used in many domains of artificial intelligence, such as planning, machine learning, and reinforcement learning. Solving an MDP means finding the optimal strategy or policy of an agent interacting with a stochastic environment. When the size of this system becomes very large, it becomes hard to solve this problem with classical methods. This thesis deals with the resolution of MDPs with large state spaces. It studies some resolution methods, such as abstractions and projection methods, shows the limits of some approaches, and identifies some structures that may be of interest for MDP resolution. This thesis also focuses on projection methods, in particular the least-squares temporal difference algorithm LSTD(λ). An estimate of the rate of convergence of this algorithm has been derived, with an emphasis on the role played by the parameter λ. This analysis has then been generalized to the case of least-squares non-stationary policy iteration LS(λ)NSPI. We compute a performance bound for LS(λ)NSPI by bounding the error between the value computed at a fixed iteration and the value computed under the optimal policy, which we aim to determine.
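As a reading aid, here is a minimal sketch of the LSTD(λ) estimator the abstract analyzes. The two-state chain, rewards, and parameters are my own toy assumptions (not the thesis's experiments); with tabular one-hot features the LSTD(λ) fixed point can be checked against the exact value function:

```python
import numpy as np

def lstd_lambda(transitions, n_states, gamma=0.9, lam=0.5):
    """LSTD(lambda) with one-hot features on a continuing Markov reward process.

    transitions: iterable of (state, reward, next_state).
    Solves A theta = b, with A = sum_t z_t (phi_t - gamma * phi_{t+1})^T and
    b = sum_t z_t r_t, where z_t = gamma * lam * z_{t-1} + phi_t is the
    eligibility trace.
    """
    phi = np.eye(n_states)           # tabular (one-hot) feature map
    A = np.zeros((n_states, n_states))
    b = np.zeros(n_states)
    z = np.zeros(n_states)
    for s, r, s2 in transitions:
        z = gamma * lam * z + phi[s]
        A += np.outer(z, phi[s] - gamma * phi[s2])
        b += z * r
    return np.linalg.solve(A, b)

# Toy 2-state chain with known dynamics, so the exact value function is available.
rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3], [0.4, 0.6]])
R = np.array([1.0, 0.0])
traj, s = [], 0
for _ in range(20_000):
    s2 = rng.choice(2, p=P[s])
    traj.append((s, R[s], s2))
    s = s2
theta = lstd_lambda(traj, 2)
V_exact = np.linalg.solve(np.eye(2) - 0.9 * P, R)  # V = (I - gamma * P)^{-1} R
print(theta, V_exact)
```

With one-hot features the TD(λ) fixed point coincides with the exact discounted value function for any λ, so the sample-based estimate approaches `V_exact` as the trajectory grows.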
Gavra, Iona Alexandra. "Algorithmes stochastiques d'optimisation sous incertitude sur des structures complexes : convergence et applications". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30141/document.
The main topics of this thesis involve the development of stochastic algorithms for optimization under uncertainty, the study of their theoretical properties, and applications. The proposed algorithms are modified versions of simulated annealing that use only unbiased estimators of the cost function. We study their convergence using the tools developed in the theory of Markov processes: we use properties of infinitesimal generators and functional inequalities to measure the distance between their probability law and a target one. The first part is concerned with quantum graphs endowed with a probability measure on their vertex set. Quantum graphs are continuous versions of undirected weighted graphs. The starting point of the present work was the question of finding Fréchet means on such a graph. The Fréchet mean is an extension of the Euclidean mean to general metric spaces and is defined as an element that minimizes the sum of weighted square distances to all vertices. Our method relies on a Langevin formulation of a noisy simulated annealing dealt with using homogenization. In order to establish the convergence in probability of the process, we study the evolution of the relative entropy of its law with respect to a convenient Gibbs measure. Using functional inequalities (Poincaré and Sobolev) and Gronwall's lemma, we then show that the relative entropy goes to zero. We test our method on some real data sets and propose a heuristic method to adapt the algorithm to huge graphs, using a preliminary clustering. In the same framework, we introduce a definition of principal component analysis for quantum graphs. This implies, once more, a stochastic optimization problem, this time on the space of the graph's geodesics. We suggest an algorithm for finding the first principal component and conjecture the convergence of the associated Markov process to the desired set.
In the second part, we propose a modified version of the simulated annealing algorithm for solving a stochastic global optimization problem on a finite space. Our approach is inspired by the general field of Monte Carlo methods and relies on a Markov chain whose transition probability at each step is defined with the help of mini-batches of increasing (random) size. We prove the algorithm's convergence in probability towards the optimal set, provide its convergence rate, and give an optimized parametrization that ensures a minimal number of evaluations for a given accuracy and a confidence level close to 1. This work is completed with a set of numerical experiments and an assessment of the practical performance both on benchmark test cases and on real-world examples.
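The mini-batch mechanism described in this abstract can be sketched in a few lines. The cost function, cooling schedule, and batch-growth rule below are illustrative assumptions rather than the thesis's actual choices: an annealed Metropolis chain on a finite space where each cost comparison uses an unbiased mini-batch estimate whose size grows with the iteration counter:

```python
import math
import random
from collections import Counter

def noisy_cost(x, batch_size, rng):
    # Unbiased mini-batch estimate of a hypothetical true cost f(x) = (x - 3)^2:
    # average batch_size evaluations corrupted by zero-mean Gaussian noise.
    return sum((x - 3) ** 2 + rng.gauss(0.0, 1.0) for _ in range(batch_size)) / batch_size

def minibatch_annealing(states, n_iter=4000, seed=1):
    rng = random.Random(seed)
    x = rng.choice(states)
    visits = Counter()
    for t in range(1, n_iter + 1):
        temperature = 1.0 / math.log(t + 1)   # classical logarithmic cooling
        batch = int(math.sqrt(t)) + 1         # mini-batch size grows with t
        y = rng.choice(states)                # uniform proposal on the finite space
        delta = noisy_cost(y, batch, rng) - noisy_cost(x, batch, rng)
        # Metropolis acceptance based on the noisy cost difference.
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            x = y
        if t > n_iter // 2:                   # record the late-stage occupation
            visits[x] += 1
    return visits.most_common(1)[0][0]

best = minibatch_annealing(list(range(10)))
print(best)
```

Growing the batch shrinks the estimator's variance as the temperature drops, so late acceptance decisions are driven by the true cost and the chain spends almost all of its late-stage time at the true minimizer.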
Books on the topic "Convergence de processus de Markov"
Kurtz, Thomas G., ed. Markov processes: Characterization and convergence. New York: Wiley, 1986.
Roberts, Gareth O. Convergence of slice sampler Markov chains. [Toronto]: University of Toronto, 1997.
Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.
Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. [Toronto]: University of Toronto, 1997.
Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.
Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on Rn. Toronto: University of Toronto, Dept. of Statistics, 1997.
Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Berlin: Springer, 1998.
Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.
Book chapters on the topic "Convergence de processus de Markov"
Zhang, Hanjun, Qixiang Mei, Xiang Lin and Zhenting Hou. "Convergence Property of Standard Transition Functions". In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.
Altman, Eitan. "Convergence of discounted constrained MDPs". In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.
Altman, Eitan. "Convergence as the horizon tends to infinity". In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.
Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.
Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used". In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.
Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes". In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.
Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence". In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.
Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes". In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.
Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for Markov processes associated with Lévy operators". In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.
Zverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes". In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.
Conference papers on the topic "Convergence de processus de Markov"
Shi, Zhengbin. "Volatility Prediction Algorithm in Enterprise Financial Risk Management Based on Markov Chain Algorithm". In 2023 International Conference on Intelligent Computing, Communication & Convergence (ICI3C), 152–56. IEEE, 2023. http://dx.doi.org/10.1109/ici3c60830.2023.00039.
Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States". In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.
Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains". In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.
Hongbin Liang, Lin X. Cai, Hangguan Shan, Xuemin Shen and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process". In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.
Ding, Dongsheng, Kaiqing Zhang, Tamer Basar and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes". In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.
Tayeb, Shahab, Miresmaeil Mirnabibaboli and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm". In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.
Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence". In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.
Shi, Chongyang, Yuheng Bu and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Horák, Karel, Branislav Bošanský and Krishnendu Chatterjee. "Goal-HSVI: Heuristic Search Value Iteration for Goal POMDPs". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/662.
Reports of organizations on the topic "Convergence de processus de Markov"
Athreya, Krishna B., Hani Doss and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.
Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.
Athreya, Krishna B., Hani Doss and Jayaram Sethuraman. Easy-to-Apply Results for Establishing Convergence of Markov Chains in Bayesian Analysis. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada264015.
Bledsoe, Keith C. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1234327.
Šiljak, Dženita. The Effects of Institutions on the Transition of the Western Balkans. Külügyi és Külgazdasági Intézet, 2022. http://dx.doi.org/10.47683/kkielemzesek.ke-2022.19.
Quevedo, Fernando, Paolo Giordano and Mauricio Mesquita Moreira. El tratamiento de las asimetrías en los acuerdos de integración regional. Inter-American Development Bank, August 2004. http://dx.doi.org/10.18235/0009450.
Briones, Roehlano, Ivory Myka Galang, Isabel Espineli, Aniceto Jr. Orbeta and Marife Ballesteros. Endline Study Report and Policy Study for the ConVERGE Project. Philippine Institute for Development Studies, September 2023. http://dx.doi.org/10.62986/dp2023.13.
Hori, Tsuneki, Sergio Lacambra Ayuso, Ana María Torres, Lina Salazar, Gilberto Romero, Rolando Durán, Ginés Suarez, Lizardo Narváez and Ernesto Visconti. Índice de Gobernabilidad y Políticas Públicas en Gestión de Riesgo de Desastres (iGOPP): Informe Nacional de Perú. Inter-American Development Bank, October 2015. http://dx.doi.org/10.18235/0010086.
Ocampo, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta Navarro and Leonardo Villar Gómez. Informe de la Junta Directiva al Congreso de la República - Marzo de 2023. Banco de la República, March 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep.3-2023.
Ocampo-Gaviria, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta-Navarro and Leonardo Villar Gómez. Report of the Board of Directors to the Congress of Colombia - March 2023. Banco de la República de Colombia, June 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep-eng.03-2023.