Table of contents
A selection of scholarly literature on the topic "Convergence of Markov processes"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Convergence of Markov processes".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.
Journal articles on the topic "Convergence of Markov processes"
Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.
Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.
Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
Hwang, Chii-Ruey. "Accelerating Monte Carlo Markov Processes." COSMOS 1, no. 1 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Aldous, David J. "Book Review: Markov processes: Characterization and convergence." Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.
Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes." Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination." Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Dissertations on the topic "Convergence of Markov processes"
Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes: invariant distribution and convergence." Electronic thesis or dissertation, Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs): Which dynamics should be chosen to simulate metastable states efficiently? How can the non-equilibrium nature of PDMPs be exploited directly to study the modeled physical systems? 2. Modeling active systems with PDMPs: What conditions must a system satisfy to be modeled by a PDMP? In which cases does the system have a stationary distribution? How can dynamic quantities (e.g., transition rates) be computed in this framework? 3. Improving simulation techniques for equilibrium systems: Can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems? How can topological information be used to adapt the dynamics in real time?
Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.
Series: Forschungsberichte / Institut für Statistik
Drozdenko, Myroslav. "Weak Convergence of First-Rare-Event Times for Semi-Markov Processes." Doctoral thesis, Västerås: Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-394.
Yuen, Wai Kong. "Application of geometric bounds to convergence rates of Markov chains and Markov processes on R^n." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58619.pdf.
Kaijser, Thomas. "Convergence in distribution for filtering processes associated to Hidden Markov Models with densities." Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.
Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Fisher, Diana. "Convergence analysis of MCMC method in the study of genetic linkage with missing data." Huntington, WV: [Marshall University Libraries], 2005. http://www.marshall.edu/etd/descript.asp?ref=568.
Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
Der volle Inhalt der QuelleThe purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used for modeling many natural phenomena. Precisely, we shall focus on their long time behavior as well as their speed of convergence to equilibrium, whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, which will usually be done through coupling methods. We shall emphasize the link between Markov processes and mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to the introduction of a unified approach to study the long time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories
Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
In this dissertation, Evolution Strategies (ESs) are analyzed using the theory of Markov chains: proofs of divergence or convergence of these algorithms are obtained, and tools for establishing such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e., information on the function to be optimized is limited to the values it associates to points; in particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without constraint, are essential components of convergence proofs for ESs on wide classes of functions. The dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents previously established links between ESs and Markov chains. The contributions of this thesis are then presented:
- General mathematical tools applicable to a wide range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) for the chains studied; obtaining these properties without them is an ad hoc, tedious, and technical process that can be very difficult.
- Different ESs are then analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space.
- We then study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle infeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on the parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate depends on the parameters of the problem and of the ES.
- Finally, we study an ES with a possibly non-Gaussian sampling distribution and constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint.
These results are then summed up and discussed, and perspectives for future work are explored.
Books on the topic "Convergence of Markov processes"
Ethier, Stewart N., and Thomas G. Kurtz. Markov processes: Characterization and convergence. New York: Wiley, 1986.
Roberts, Gareth O. Convergence of slice sampler Markov chains. [Toronto]: University of Toronto, 1997.
Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.
Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on R^n. Toronto: University of Toronto, Dept. of Statistics, 1997.
Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. [Toronto]: University of Toronto, 1997.
Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.
Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.
Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Berlin: Springer, 1998.
Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Convergence of Markov processes"
Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.
Altman, Eitan. "Convergence of discounted constrained MDPs." In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.
Altman, Eitan. "Convergence as the horizon tends to infinity." In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.
Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.
Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.
Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.
Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.
Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence." In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.
Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for Markov processes associated with Lévy operators." In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.
Zverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes." In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Convergence of Markov processes"
Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.
Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.
Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.
Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.
Liang, Hongbin, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.
Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.
Chanron, Vincent, and Kemper Lewis. "A Study of Convergence in Decentralized Design." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dac-48782.
Kuznetsova, Natalia, and Zhanna Pisarenko. "Financial convergence at the world financial market: pension funds and insurance entities prospects: case of China, EU, USA." In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.037.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Convergence of Markov processes"
Adler, Robert J., Stamatis Gambanis, and Gennady Samorodnitsky. On Stable Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, September 1987. http://dx.doi.org/10.21236/ada192892.
Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.
Abdel-Hameed, M. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes and Replacement Policies. Fort Belvoir, VA: Defense Technical Information Center, December 1985. http://dx.doi.org/10.21236/ada174646.
Newell, Alan. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes and Replacement Policies. Fort Belvoir, VA: Defense Technical Information Center, May 1986. http://dx.doi.org/10.21236/ada174995.
Cinlar, E. Markov Processes Applied to Control, Reliability and Replacement. Fort Belvoir, VA: Defense Technical Information Center, April 1989. http://dx.doi.org/10.21236/ada208634.
Rohlicek, J. R., and A. S. Willsky. Structural Decomposition of Multiple Time Scale Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada189739.
Serfozo, Richard F. Poisson Functionals of Markov Processes and Queueing Networks. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada191217.
Serfozo, R. F. Poisson Functionals of Markov Processes and Queueing Networks. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada194289.
Draper, Bruce A., and J. Ross Beveridge. Learning to Populate Geospatial Databases via Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, December 1999. http://dx.doi.org/10.21236/ada374536.
Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.