Contents
A selection of scholarly literature on the topic "Convergence de processus de Markov"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Convergence de processus de Markov".
Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever these are available in the metadata.
Journal articles on the topic "Convergence de processus de Markov"
Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.
Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.
Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Hwang, Chii-Ruey. "Accelerating Monte Carlo Markov Processes." COSMOS 01, no. 01 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Aldous, David J. "Book Review: Markov Processes: Characterization and Convergence." Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.
Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
Macci, Claudio. "Continuous-Time Markov Additive Processes: Composition of Large Deviations Principles and Comparison between Exponential Rates of Convergence." Journal of Applied Probability 38, no. 4 (December 2001): 917–31. http://dx.doi.org/10.1239/jap/1011994182.
Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric Rates of Convergence for Markov Processes under Subordination." Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Crank, Keith N., and Prem S. Puri. "A Method of Approximating Markov Jump Processes." Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Dissertations on the topic "Convergence de processus de Markov"
Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes – Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal – Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes: invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs):
   - Which dynamics to choose to efficiently simulate metastable states?
   - How to directly exploit the non-equilibrium nature of PDMPs to study the modeled physical systems?
2. Modeling active systems with PDMPs:
   - What conditions must a system meet to be modeled by a PDMP?
   - In which cases does the system have a stationary distribution?
   - How to calculate dynamic quantities (e.g., transition rates) in this framework?
3. Improving simulation techniques for equilibrium systems:
   - Can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems?
   - How to use topological information to adapt the dynamics in real-time?
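The run-and-tumble dynamics named in this thesis title can be sketched as a minimal one-dimensional PDMP (an illustrative free particle with assumed parameter values, not the interacting systems studied in the thesis):

```python
import random

def run_and_tumble(t_end=200.0, speed=1.0, tumble_rate=1.0, seed=0):
    """Minimal 1D run-and-tumble particle as a PDMP: between the jump
    times of a Poisson clock (intensity `tumble_rate`) the position
    follows the deterministic flow dx/dt = speed * direction; at each
    jump the direction is reversed (the "tumble")."""
    rng = random.Random(seed)
    t, x, direction = 0.0, 0.0, 1
    while t < t_end:
        wait = rng.expovariate(tumble_rate)  # exponential inter-tumble time
        wait = min(wait, t_end - t)          # do not run past the horizon
        t += wait
        x += direction * speed * wait        # deterministic run phase
        direction = -direction               # tumble: reverse velocity
    return x

final_x = run_and_tumble()  # ballistic on short times, diffusive on long ones
```

On time scales much longer than the tumbling time, the position behaves diffusively with effective diffusivity speed² / (2 · tumble_rate).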
Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
The purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used to model natural phenomena. Specifically, we focus on their long-time behavior as well as their speed of convergence to equilibrium whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, usually achieved through coupling methods. We emphasize the link between Markov processes and the mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to a unified approach for studying the long-time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories.
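A minimal example of the kind of process studied here is the TCP-window-like PDMP sketched below (a standard textbook illustration, not a model taken from the thesis; the flow, jump kernel, and jump rate are assumptions):

```python
import random

def simulate_tcp_pdmp(t_end=200.0, rate=1.0, seed=0):
    """Toy piecewise deterministic Markov process: the state grows
    linearly between jumps (flow dx/dt = 1) and is halved at the jump
    times of a Poisson clock with intensity `rate`. Returns the states
    observed just after each jump."""
    rng = random.Random(seed)
    t, x, samples = 0.0, 0.0, []
    while t < t_end:
        wait = rng.expovariate(rate)  # exponential time to the next jump
        t += wait
        x += wait                     # deterministic linear growth
        x /= 2.0                      # jump: halve the state
        samples.append(x)
    return samples

samples = simulate_tcp_pdmp()
# The post-jump chain satisfies x' = (x + E) / 2 with E ~ Exp(1), whose
# stationary mean m solves m = (m + 1) / 2, i.e. m = 1.
long_run_mean = sum(samples[100:]) / len(samples[100:])
```

The long-run average of the post-jump states illustrates convergence to the stationary regime, the quantitative speed of which is exactly what the thesis bounds via coupling.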
Rochet, Sophie. "Convergence des algorithmes génétiques : modèles stochastiques et épistasie." Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11032.
Bertoncini, Olivier. "Convergence abrupte et métastabilité." PhD thesis, Rouen, 2007. http://www.theses.fr/2007ROUES038.
The aim of this thesis is to link two phenomena concerning the asymptotic behavior of stochastic processes which until now had been studied separately: abrupt convergence, or the cutoff phenomenon, on one hand, and metastability on the other. In the cutoff case, an abrupt convergence towards the equilibrium measure occurs at a time which can be determined, whereas metastability is linked to a great uncertainty in the time at which some equilibrium is left. We propose a common framework in which to compare and study both phenomena: that of discrete-time birth and death chains on N with drift towards zero. Under the drift hypothesis, we prove that there is abrupt convergence towards zero and metastability in the other direction, and that the last excursion in the metastable regime is the time reverse of a typical cutoff path. We extend our approach to the Ehrenfest model, which allows us to prove abrupt convergence and metastability under a weaker drift hypothesis.
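The drift-to-zero birth and death dynamics described above can be sketched as follows (a toy illustration with assumed parameters, not the construction used in the thesis). With up-probability p < 1/2, the hitting time of 0 from state n concentrates around n / (1 - 2p), which is the signature of abrupt, cutoff-like convergence:

```python
import random

def hitting_time_of_zero(start, p=0.3, rng=None, max_steps=10**6):
    """First time a birth-death chain on N with drift towards zero
    (up with probability p < 1/2, down with probability 1 - p),
    started from `start`, hits 0."""
    rng = rng or random.Random()
    x, t = start, 0
    while x > 0 and t < max_steps:
        x += 1 if rng.random() < p else -1  # one birth-death step
        t += 1
    return t

rng = random.Random(0)
# Mean drift per step is 2p - 1 = -0.4, so the hitting time of 0 from
# n = 200 concentrates around 200 / 0.4 = 500 steps: trajectories reach
# equilibrium abruptly at a predictable time.
times = [hitting_time_of_zero(200, p=0.3, rng=rng) for _ in range(50)]
mean_time = sum(times) / len(times)
```

The small spread of `times` around its mean, relative to the mean itself, is what distinguishes cutoff from the exponentially unpredictable exit times of metastability.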
Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005/document.
Markov decision processes (MDPs) are a mathematical formalism for many domains of artificial intelligence, such as planning, machine learning, and reinforcement learning. Solving an MDP means finding the optimal strategy, or policy, of an agent interacting with a stochastic environment. When the size of the system becomes very large, it becomes hard to solve this problem with classical methods. This thesis deals with the resolution of MDPs with large state spaces. It studies resolution methods such as abstractions and projection methods, shows the limits of some approaches, and identifies structures that may be of interest for MDP resolution. The thesis also focuses on projection methods, in particular the least-squares temporal difference algorithm LSTD(λ). An estimate of the convergence rate of this algorithm is derived, with an emphasis on the role played by the parameter λ. The analysis is then generalized to least-squares non-stationary policy iteration, LS(λ)NSPI. We compute a performance bound for LS(λ)NSPI by bounding the error between the value computed at a fixed iteration and the value computed under the optimal policy, which we aim to determine.
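The LSTD(λ) estimator discussed in this abstract can be sketched in a few lines (a minimal single-trajectory version with tabular features; the toy chain, its rewards, and all parameter values are assumptions for illustration):

```python
import numpy as np

def lstd_lambda(transitions, phi, gamma=0.9, lam=0.5):
    """Least-squares temporal difference learning, LSTD(lambda).

    transitions: list of (state, reward, next_state) tuples from one trajectory
    phi:         feature map, state -> 1-D np.ndarray
    Returns theta such that V(s) is approximated by phi(s) @ theta.
    """
    d = phi(transitions[0][0]).shape[0]
    A, b, z = np.zeros((d, d)), np.zeros(d), np.zeros(d)
    for s, r, s_next in transitions:
        z = gamma * lam * z + phi(s)                    # eligibility trace
        A += np.outer(z, phi(s) - gamma * phi(s_next))  # accumulate A
        b += r * z                                      # accumulate b
    return np.linalg.solve(A + 1e-8 * np.eye(d), b)     # theta = A^{-1} b

# Two-state chain with uniform transitions and reward 1 in state 0: the
# true values are V(0) = 5.5 and V(1) = 4.5 (solve V = r + 0.9 * mean(V)).
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=5001)
transitions = [(int(states[i]), 1.0 if states[i] == 0 else 0.0, int(states[i + 1]))
               for i in range(5000)]
theta = lstd_lambda(transitions, lambda s: np.eye(2)[s])
```

With tabular features the estimate recovers the value function of the empirically observed chain; the role of λ, analyzed in the thesis, is to trade bootstrapping bias against trajectory variance.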
Gavra, Iona Alexandra. "Algorithmes stochastiques d'optimisation sous incertitude sur des structures complexes : convergence et applications." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30141/document.
The main topics of this thesis are the development of stochastic algorithms for optimization under uncertainty, the study of their theoretical properties, and their applications. The proposed algorithms are modified versions of simulated annealing that use only unbiased estimators of the cost function. We study their convergence using tools developed in the theory of Markov processes: we use properties of infinitesimal generators and functional inequalities to measure the distance between their probability law and a target one. The first part is concerned with quantum graphs endowed with a probability measure on their vertex set. Quantum graphs are continuous versions of undirected weighted graphs. The starting point of the present work was the question of finding Fréchet means on such a graph. The Fréchet mean is an extension of the Euclidean mean to general metric spaces, defined as an element that minimizes the sum of weighted squared distances to all vertices. Our method relies on a Langevin formulation of a noisy simulated annealing dealt with using homogenization. In order to establish the convergence in probability of the process, we study the evolution of the relative entropy of its law with respect to a convenient Gibbs measure. Using functional inequalities (Poincaré and Sobolev) and Gronwall's lemma, we then show that the relative entropy goes to zero. We test our method on some real data sets and propose a heuristic method to adapt the algorithm to huge graphs, using a preliminary clustering. In the same framework, we introduce a definition of principal component analysis for quantum graphs. This leads, once more, to a stochastic optimization problem, this time on the space of the graph's geodesics. We suggest an algorithm for finding the first principal component and conjecture the convergence of the associated Markov process to the wanted set.
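On a finite metric space the Fréchet mean defined above reduces to a small computation, sketched below (an exhaustive toy version; the thesis treats the continuous quantum-graph case, where simulated annealing replaces enumeration):

```python
def frechet_mean(candidates, dist, weights):
    """Fréchet mean of a weighted point set in a finite metric space:
    the candidate minimizing the weighted sum of squared distances."""
    cost = lambda v: sum(w * dist[v][u] ** 2 for u, w in weights.items())
    return min(candidates, key=cost)

# Toy metric space: the path graph 0 - 1 - 2 with unit-length edges, so
# the shortest-path distance is |u - v|; all vertices weighted equally.
dist = {v: {u: abs(v - u) for u in range(3)} for v in range(3)}
weights = {0: 1.0, 1: 1.0, 2: 1.0}
mean_vertex = frechet_mean(range(3), dist, weights)  # the middle vertex
```

On a quantum graph the candidate set is a continuum (all points on all edges), which is why the thesis resorts to a Langevin-type annealing process instead of this enumeration.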
In the second part, we propose a modified version of the simulated annealing algorithm for solving a stochastic global optimization problem on a finite space. Our approach is inspired by the general field of Monte Carlo methods and relies on a Markov chain whose transition probability at each step is defined with the help of mini-batches of increasing (random) size. We prove the algorithm's convergence in probability towards the optimal set, provide a convergence rate, and give an optimized parametrization that ensures a minimal number of evaluations for a given accuracy and a confidence level close to 1. This work is completed with a set of numerical experiments and an assessment of the practical performance, both on benchmark test cases and on real-world examples.
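The mini-batch mechanism described here can be sketched as follows (an illustrative toy with assumed cooling and batch-growth schedules, not the optimized parametrization derived in the thesis):

```python
import math
import random

def minibatch_annealing(states, noisy_cost, n_iters=3000, seed=1):
    """Simulated annealing on a finite space when only noisy, unbiased
    evaluations of the cost are available: at step k the costs of the
    current and proposed states are estimated by averaging a mini-batch
    whose size grows with k, while the temperature cools logarithmically."""
    rng = random.Random(seed)
    x = rng.choice(states)
    for k in range(1, n_iters + 1):
        batch = 1 + k // 100                   # slowly growing batch size
        temp = 1.0 / math.log(math.e + k)      # logarithmic cooling
        y = rng.choice(states)                 # uniform proposal
        fx = sum(noisy_cost(x, rng) for _ in range(batch)) / batch
        fy = sum(noisy_cost(y, rng) for _ in range(batch)) / batch
        if fy <= fx or rng.random() < math.exp((fx - fy) / temp):
            x = y                              # Metropolis acceptance
    return x

# Toy target: minimize f(s) = (s - 3)^2 observed through Gaussian noise.
noisy_cost = lambda s, rng: (s - 3) ** 2 + rng.gauss(0.0, 1.0)
best = minibatch_annealing(list(range(10)), noisy_cost)
```

Growing the batch shrinks the estimation noise just fast enough for the chain to behave, asymptotically, like annealing on the exact cost; balancing that growth against the total number of cost evaluations is the parametrization question the thesis optimizes.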
Books on the topic "Convergence de processus de Markov"
Kurtz, Thomas G., ed. Markov Processes: Characterization and Convergence. New York: Wiley, 1986.
Roberts, Gareth O. Convergence of Slice Sampler Markov Chains. [Toronto]: University of Toronto, 1997.
Baxter, John Robert. Rates of Convergence for Everywhere-Positive Markov Chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.
Roberts, Gareth O. Quantitative Bounds for Convergence Rates of Continuous Time Markov Processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Roberts, Gareth O. On Convergence Rates of Gibbs Samplers for Uniform Distributions. [Toronto]: University of Toronto, 1997.
Cowles, Mary Kathryn. Possible Biases Induced by MCMC Convergence Diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.
Yuen, Wai Kong. Applications of Cheeger's Constant to the Convergence Rate of Markov Chains on Rn. Toronto: University of Toronto, Dept. of Statistics, 1997.
Cowles, Mary Kathryn. A Simulation Approach to Convergence Rates for Markov Chain Monte Carlo Algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Wirsching, Günther J. The Dynamical System Generated by the 3n + 1 Function. Berlin: Springer, 1998.
Petrone, Sonia. A Note on Convergence Rates of Gibbs Sampling for Nonparametric Mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Convergence de processus de Markov"
Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.
Altman, Eitan. "Convergence of Discounted Constrained MDPs." In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.
Altman, Eitan. "Convergence as the Horizon Tends to Infinity." In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.
Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.
Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.
Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.
Feng, Jin, and Thomas Kurtz. "Large Deviations for Markov Processes and Nonlinear Semigroup Convergence." In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.
Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.
Negoro, Akira, and Masaaki Tsuchiya. "Convergence and Uniqueness Theorems for Markov Processes Associated with Lévy Operators." In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.
Zverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes." In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Convergence de processus de Markov"
Shi, Zhengbin. "Volatility Prediction Algorithm in Enterprise Financial Risk Management Based on Markov Chain Algorithm." In 2023 International Conference on Intelligent Computing, Communication & Convergence (ICI3C), 152–56. IEEE, 2023. http://dx.doi.org/10.1109/ici3c60830.2023.00039.
Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.
Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search Process Evaluation for a Hierarchical Menu System by Markov Chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.
Liang, Hongbin, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive Resource Allocation for Media Services Based on Semi-Markov Decision Process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.
Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and Optimality of Policy Gradient Primal-Dual Method for Constrained Markov Decision Processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.
Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs Using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.
Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework Based on Markov Modulated Poisson Processes for Modeling Traffic with Long-Range Dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.
Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Horák, Karel, Branislav Bošanský, and Krishnendu Chatterjee. "Goal-HSVI: Heuristic Search Value Iteration for Goal POMDPs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/662.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Convergence de processus de Markov"
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.
Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. Easy-to-Apply Results for Establishing Convergence of Markov Chains in Bayesian Analysis. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada264015.
Bledsoe, Keith C. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1234327.
Šiljak, Dženita. The Effects of Institutions on the Transition of the Western Balkans. Külügyi és Külgazdasági Intézet, 2022. http://dx.doi.org/10.47683/kkielemzesek.ke-2022.19.
Quevedo, Fernando, Paolo Giordano, and Mauricio Mesquita Moreira. El tratamiento de las asimetrías en los acuerdos de integración regional. Inter-American Development Bank, August 2004. http://dx.doi.org/10.18235/0009450.
Briones, Roehlano, Ivory Myka Galang, Isabel Espineli, Aniceto Jr. Orbeta, and Marife Ballesteros. Endline Study Report and Policy Study for the ConVERGE Project. Philippine Institute for Development Studies, September 2023. http://dx.doi.org/10.62986/dp2023.13.
Hori, Tsuneki, Sergio Lacambra Ayuso, Ana María Torres, Lina Salazar, Gilberto Romero, Rolando Durán, Ginés Suarez, Lizardo Narváez, and Ernesto Visconti. Índice de Gobernabilidad y Políticas Públicas en Gestión de Riesgo de Desastres (iGOPP): Informe Nacional de Perú. Inter-American Development Bank, October 2015. http://dx.doi.org/10.18235/0010086.
Ocampo, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta Navarro, and Leonardo Villar Gómez. Informe de la Junta Directiva al Congreso de la República - Marzo de 2023. Banco de la República, March 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep.3-2023.
Ocampo-Gaviria, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, Bibiana Taboada Arango, Jaime Jaramillo Vallejo, Olga Lucia Acosta-Navarro, and Leonardo Villar Gómez. Report of the Board of Directors to the Congress of Colombia - March 2023. Banco de la República de Colombia, June 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep-eng.03-2023.