Doctoral dissertations on the topic "Markov chain"
Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations on the topic "Markov chain".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the metadata.
Browse doctoral dissertations from many different fields of study and compile a proper bibliography.
Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo". Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.
Full text source
Holenstein, Roman. "Particle Markov chain Monte Carlo". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.
Full text source
Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo". Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.
Full text source
Yildirak, Sahap Kasirga. "The Identificaton Of A Bivariate Markov Chain Market Model". PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.
Full text source
Martin, Russell Andrew. "Paths, sampling, and markov chain decomposition". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.
Full text source
Estandia, Gonzalez Luna Antonio. "Stable approximations for Markov-chain filters". Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.
Full text source
Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo". Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.
Full text source
Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods". Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.
Pełny tekst źródłaGenerating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task, if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties for sampling by using MCMC methods include, but not exclusively, dealing with high dimensional and multimodal problems, high computation cost due to extremely large datasets in Bayesian machine learning models, and lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.
One contribution of this dissertation is a set of generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, with an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving, meaning that in the state space the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume-preserving dynamics for proposing MCMC moves is proposed. Examples, including isokinetic dynamics and variable-mass Hamiltonian dynamics with an explicit integrator, are designed with fewer restrictions based on the general theory. Experiments show improved efficiency in sampling high-dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided, and two methods to reduce the bias in the distribution of samples are proposed: one corrects the dynamics using a noise estimate based on subsampled data, and the other introduces additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of the effective sample size. A more reliable indicator, the longest integrated autocorrelation time over all functions in the state space, is proposed for detecting convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.
Minor contributions include a general framework for changing variables and a numerical integrator for Hamiltonian dynamics with fourth-order accuracy. The idea of changing variables is to transform the potential energy from a function of the original variable into a function of the new variable, so that undesired properties can be removed. Two examples are provided, and preliminary experimental results support this idea. The fourth-order integrator is constructed by combining the idea of the simplified Takahashi-Imada method with a two-stage Hessian-based integrator. The proposed method, called the two-stage simplified Takahashi-Imada method, outperforms existing methods in high-dimensional sampling problems.
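The HMC scheme described in the abstract above, a discretized Hamiltonian dynamics followed by a Metropolis acceptance test, can be illustrated with a minimal sketch. This is not the dissertation's code; the target density, step size `eps`, and trajectory length `n_leap` are arbitrary choices for illustration:

```python
import math
import random

def leapfrog(q, p, grad_u, eps, n_leap):
    """Volume-preserving leapfrog discretization of Hamiltonian dynamics."""
    q, p = list(q), list(p)
    p = [pi - 0.5 * eps * gi for pi, gi in zip(p, grad_u(q))]    # half step in momentum
    for step in range(n_leap):
        q = [qi + eps * pi for qi, pi in zip(q, p)]              # full step in position
        if step < n_leap - 1:
            p = [pi - eps * gi for pi, gi in zip(p, grad_u(q))]  # full step in momentum
    p = [pi - 0.5 * eps * gi for pi, gi in zip(p, grad_u(q))]    # final half step
    return q, p

def hmc(u, grad_u, q0, n_samples, eps=0.2, n_leap=10, seed=42):
    """Draw samples from the density proportional to exp(-u(q)) via HMC."""
    rng = random.Random(seed)
    q, samples = list(q0), []
    for _ in range(n_samples):
        p = [rng.gauss(0.0, 1.0) for _ in q]                     # resample momentum
        h_old = u(q) + 0.5 * sum(pi * pi for pi in p)
        q_new, p_new = leapfrog(q, p, grad_u, eps, n_leap)
        h_new = u(q_new) + 0.5 * sum(pi * pi for pi in p_new)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):     # Metropolis test
            q = q_new
        samples.append(list(q))
    return samples

# Target known only up to a constant: U(q) = |q|^2 / 2, a standard 2-D Gaussian.
u = lambda q: 0.5 * sum(qi * qi for qi in q)
grad_u = lambda q: list(q)
samples = hmc(u, grad_u, [3.0, -3.0], 3000)
kept = samples[1000:]
mean0 = sum(s[0] for s in kept) / len(kept)
var0 = sum((s[0] - mean0) ** 2 for s in kept) / len(kept)
```

Because the acceptance test compares the total energies before and after the trajectory, discretization error only lowers the acceptance rate; it never biases the stationary distribution.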
Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
Full text source
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points; in particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function, with or without constraint, are essential components of the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented:
o General mathematical tools that can be applied to a wider range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) for the chains studied. Obtaining these properties without these tools is an ad hoc, tedious, and technical process that can be of very high difficulty.
o Different ESs are then analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space.
Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.
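The log-linear step-size divergence on a linear function referred to above is easy to observe numerically. The following is an illustrative sketch of a (1,\lambda)-ES with cumulative step-size adaptation; the population size, cumulation constant `c`, and damping `d` are arbitrary choices, not parameters taken from the thesis:

```python
import math
import random

def csa_es_on_linear(n_iter=200, lam=8, dim=2, seed=1):
    """(1,lambda)-ES with cumulative step-size adaptation (CSA), minimizing
    the linear function f(x) = x[0]; returns the step-size history."""
    rng = random.Random(seed)
    m = [0.0] * dim      # mean of the sampling distribution
    sigma = 1.0          # step-size
    s = [0.0] * dim      # cumulation path
    c, d = 0.3, 1.0      # cumulation constant and damping (arbitrary choices)
    chi_n = math.sqrt(dim) * (1 - 1 / (4 * dim) + 1 / (21 * dim ** 2))  # ~E||N(0,I)||
    history = [sigma]
    for _ in range(n_iter):
        zs = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(lam)]
        z_best = min(zs, key=lambda z: z[0])  # best of lambda offspring on f(x) = x[0]
        m = [mi + sigma * zi for mi, zi in zip(m, z_best)]
        s = [(1 - c) * si + math.sqrt(c * (2 - c)) * zi for si, zi in zip(s, z_best)]
        norm_s = math.sqrt(sum(si * si for si in s))
        sigma *= math.exp((c / d) * (norm_s / chi_n - 1.0))  # CSA update
        history.append(sigma)
    return history

history = csa_es_on_linear()
# Log-linear divergence: log(sigma) grows at a roughly constant positive rate.
log_rate = math.log(history[-1] / history[0]) / (len(history) - 1)
```

On a linear function the selected steps are persistently correlated, so the cumulation path grows longer than it would under random selection and the step-size increases geometrically, which is the divergence behavior the thesis proves.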
Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.
Full text source
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time-series modeling perspective.
Murray, Iain Andrew. "Advances in Markov chain Monte Carlo methods". Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487199.
Full text source
Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency". Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.
Full text source
Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo". Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.
Full text source
Brooks, Stephen Peter. "Convergence diagnostics for Markov Chain Monte Carlo". Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363913.
Full text source
Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.
Full text source
Hua, Zhili. "Markov Chain Modeling for Multi-Server Clusters". W&M ScholarWorks, 2005. https://scholarworks.wm.edu/etd/1539626843.
Full text source
Matthews, James. "Markov chains for sampling matchings". Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.
Full text source
Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.
Full text source
郭慈安 and Chi-on Michael Kwok. "Some results on higher order Markov Chain models". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B31208654.
Full text source
Kwok, Chi-on Michael. "Some results on higher order Markov Chain models /". [Hong Kong] : University of Hong Kong, 1988. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12432076.
Full text source
Lyons, Russell. "Markov Chain Intersections and the Loop-Erased Walk". ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1058.ps.
Full text source
Stormark, Kristian. "Multiple Proposal Strategies for Markov Chain Monte Carlo". Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9330.
Full text source
The multiple proposal methods represent a recent simulation technique for Markov Chain Monte Carlo that allows several proposals to be considered at each transition step. Motivated by the ideas of quasi-Monte Carlo integration, we examine how strongly correlated proposals can be employed to construct Markov chains with improved mixing properties. We begin with a concise introduction to Monte Carlo and Markov Chain Monte Carlo theory, and we supply a short discussion of the standard simulation algorithms and the difficulties of efficient sampling. We then examine two multiple proposal methods suggested in the literature, and we indicate the possibility of a unified formulation of the two. More essentially, we report some systematic exploration strategies for the two multiple proposal methods. In particular, we present schemes for the utilization of well-distributed point sets and maximally spread search directions, and we include a simple construction procedure for the latter type of point set. A numerical examination of the multiple proposal methods is performed on two simple test problems. We find that the systematic exploration approach may provide a significant improvement in mixing, especially when the probability mass of the target distribution is "easy to miss" by independent sampling. For both test problems, the best results are obtained with the quasi-Monte Carlo schemes, and the gain is most pronounced for a relatively moderate number of proposals. With fewer proposals, the properties of the well-distributed point sets are less relevant; for a large number of proposals, the independent sampling approach becomes more competitive, since the coverage of the local neighborhood is then better.
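One concrete instance of a multiple-proposal scheme is the multiple-try Metropolis sampler, in which several candidates are drawn per transition and one is selected in proportion to its target density. This is a generic sketch of that idea, not necessarily the formulation examined in the thesis; the number of proposals `k` and the proposal scale are arbitrary choices:

```python
import math
import random

def multiple_try_metropolis(pi, x0, n_samples, k=5, scale=2.0, seed=7):
    """Multiple-try Metropolis with a symmetric Gaussian proposal: k proposals
    are considered per transition, one is selected with probability proportional
    to pi, and a reference set around the selected point balances the move."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_samples):
        ys = [x + scale * rng.gauss(0.0, 1.0) for _ in range(k)]
        ws = [pi(y) for y in ys]
        total = sum(ws)
        # select one proposal with probability proportional to its weight
        r, cum, y = rng.random() * total, 0.0, ys[-1]
        for yi, wi in zip(ys, ws):
            cum += wi
            if r <= cum:
                y = yi
                break
        # reference points drawn around y, with the current state included
        refs = [y + scale * rng.gauss(0.0, 1.0) for _ in range(k - 1)] + [x]
        total_ref = sum(pi(xr) for xr in refs)
        if total_ref > 0 and rng.random() < min(1.0, total / total_ref):
            x = y
        out.append(x)
    return out

# Unnormalized standard normal target.
pi = lambda x: math.exp(-0.5 * x * x)
chain = multiple_try_metropolis(pi, x0=5.0, n_samples=6000)
kept = chain[1000:]
mean = sum(kept) / len(kept)
var = sum((v - mean) ** 2 for v in kept) / len(kept)
```

Replacing the independent Gaussian draws in `ys` with correlated or systematically spread candidates is exactly the kind of modification the abstract studies.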
Backåker, Fredrik. "The Google Markov Chain: convergence speed and eigenvalues". Thesis, Uppsala universitet, Matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176610.
Pełny tekst źródłaSanborn, Adam N. "Uncovering mental representations with Markov chain Monte Carlo". [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278468.
Pełny tekst źródłaSource: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6994. Adviser: Richard M. Shiffrin. Title from dissertation home page (viewed May 21, 2008).
Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo". Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.
Pełny tekst źródłaGudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo". Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.
Full text source
Jindasawat, Jutaporn. "Testing the order of a Markov chain model". Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446197.
Pełny tekst źródłaHastie, David. "Towards automatic reversible jump Markov Chain Monte Carlo". Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414179.
Pełny tekst źródłaGroff, Jeffrey R. "Markov chain models of calcium puffs and sparks". W&M ScholarWorks, 2008. https://scholarworks.wm.edu/etd/1539623333.
Pełny tekst źródłaGuha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers". The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.
Pełny tekst źródłaLi, Shuying. "Phylogenetic tree construction using markov chain monte carlo /". The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182323916.
Pełny tekst źródłaXu, Jason Qian. "Markov Chain Monte Carlo and Non-Reversible Methods". Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244823.
Pełny tekst źródłaLevitz, Michael. "Separation, completeness, and Markov properties for AMP chain graph models /". Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9564.
Pełny tekst źródłaZhu, Dongmei, i 朱冬梅. "Construction of non-standard Markov chain models with applications". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/202358.
Full text source
Möllering, Karin. "Inventory rationing : a new modeling approach using Markov chain theory /". Köln : Kölner Wiss.-Verl, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2942052&prov=M&dok_var=1&dok_ext=htm.
Full text source
Frühwirth-Schnatter, Sylvia, Stefan Pittner, Andrea Weber and Rudolf Winter-Ebmer. "Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering". Institute of Mathematical Statistics, 2018. http://dx.doi.org/10.1214/17-AOAS1132.
Full text source
Banisch, Sven [Verfasser]. "Markov chain aggregation for agent-based models / Sven Banisch". Bielefeld : Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1057957089/34.
Full text source
Bentley, Jason Phillip. "Exact Markov chain Monte Carlo and Bayesian linear regression". Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2534.
Full text source
Pooley, James P. "Exploring phonetic category structure with Markov chain Monte Carlo". Connect to resource, 2008. http://hdl.handle.net/1811/32221.
Full text source
Josefsson, Marcus, and Erik Rasmusson. "A Markov Chain Approach to Monetary Policy Decision Making". Thesis, KTH, Matematik (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103872.
Full text source
Meddin, Mona. "Genetic algorithms : a markov chain and detail balance approach". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/29196.
Full text source
Nathan, Shaoul. "Derivatives pricing in a Markov chain jump-diffusion setting". Thesis, London School of Economics and Political Science (University of London), 2005. http://etheses.lse.ac.uk/1789/.
Full text source
Fung, Siu-leung, and 馮紹樑. "Higher-order Markov chain models for categorical data sequences". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B26666224.
Full text source
Angelino, Elaine Lee. "Accelerating Markov chain Monte Carlo via parallel predictive prefetching". Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13070022.
Full text source
Sagir, Yavuz. "Dynamic bandwidth provisioning using Markov chain based on RSVP". Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37708.
Full text source
An important aspect of wireless communication is efficiency. Efficient network resource management and quality of service (QoS) are requirements that must be met, especially when network delays are considered. The cooperative nature of unmanned ground vehicle (UGV) networks requires that bandwidth allocation be shared fairly between individual UGV nodes, depending on necessity. In this thesis, we study the problem of dynamic bandwidth provisioning in a UGV network. Specifically, we integrate a basic statistical model, the Markov chain, with a widely known network bandwidth reservation protocol, the Resource Reservation Protocol (RSVP). The Markov chain results are used with RSVP to identify specific bandwidth allocation requirements along a path such that data transmission along that path is successful. Using a wireless simulation program known as QualNet, we analyze the bandwidth efficiency and show that this algorithm provides higher bandwidth guarantees and better overall QoS when compared with solely using RSVP in wireless communication networks.
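To illustrate the kind of quantity a Markov chain contributes in such a provisioning setting, here is a toy sketch; the load states, transition probabilities, and per-state demands below are invented for illustration and are not taken from the thesis. The chain's stationary distribution gives the long-run fraction of time a link spends in each load state, from which an expected bandwidth reservation can be derived:

```python
# Toy 3-state link-load chain: states 0=low, 1=medium, 2=high (hypothetical numbers).
P = [
    [0.80, 0.15, 0.05],
    [0.20, 0.60, 0.20],
    [0.10, 0.30, 0.60],
]
demand_mbps = [2.0, 10.0, 25.0]  # hypothetical bandwidth demand in each state

# Power iteration: repeatedly apply the transition matrix to an initial guess
# until it settles on the stationary distribution.
dist = [1.0 / 3.0] * 3
for _ in range(500):
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

# The stationary distribution satisfies dist = dist * P.
check = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
expected_reservation = sum(p * d for p, d in zip(dist, demand_mbps))
```

A reservation protocol such as RSVP could then be asked for the expected (or a high-percentile) bandwidth along the path rather than the worst case, which is the flavor of trade-off the thesis evaluates.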
Vaičiulytė, Ingrida. "Study and application of Markov chain Monte Carlo method". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112440-55390.
Full text source
The dissertation investigates adaptive Markov chain Monte Carlo (MCMC) methods for constructing efficient numerical algorithms for data-analysis decision making with a prescribed level of reliability. Parameter-estimation problems are formulated and solved for hierarchically constructed multivariate distributions (the skew t distribution, the Poisson-Gaussian model, and the stable symmetric vector law). To build the adaptive MCMC procedure, a sequential Monte Carlo sampling method is applied, introducing a statistical stopping criterion and regulation of the sample size. The statistical problems solved by this method reveal relevant features of the computational implementation of MCMC methods. The efficiency of the MCMC algorithms is investigated using the statistical simulation method developed in the dissertation. Experiments with athletes' data and with financial data of companies in the health-care industry confirmed that the numerical properties of the method correspond to the theoretical model. The methods and algorithms developed were also applied to build a model for the analysis of sociological data. The studies showed that the adaptive MCMC algorithm yields estimates of the parameters of the distributions under consideration with a smaller number of chain steps, roughly halving the amount of computation. The algorithms constructed in the dissertation can be applied to the study of systems of a stochastic nature and to solving other statistical problems by the MCMC method.
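The combination of a statistical stopping criterion with sample-size regulation, drawing Monte Carlo samples in batches until the estimate's standard error is small enough, can be sketched as follows. This is an illustrative sketch only, with an arbitrary threshold and batch size, not the dissertation's procedure:

```python
import math
import random

def mc_estimate_with_stopping(sample, eps=0.02, batch=1000, max_n=200000, seed=3):
    """Accumulate Monte Carlo samples batch by batch and stop once the
    approximate 95% confidence half-width of the mean falls below eps."""
    rng = random.Random(seed)
    n, total, total_sq = 0, 0.0, 0.0
    mean, half_width = 0.0, float("inf")
    while n < max_n:
        for _ in range(batch):
            x = sample(rng)
            total += x
            total_sq += x * x
        n += batch
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        half_width = 1.96 * math.sqrt(var / n)  # statistical stopping criterion
        if half_width < eps:
            break
    return mean, half_width, n

# Estimate E[X] for X ~ Uniform(0, 1); the true value is 0.5.
mean, half_width, n_used = mc_estimate_with_stopping(lambda rng: rng.random())
```

The same pattern extends to MCMC output, where the variance term must additionally account for autocorrelation between successive draws.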
Pereira, Fernanda Chaves. "Bayesian Markov chain Monte Carlo methods in general insurance". Thesis, City University London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342720.
Full text source
Mangoubi, Oren (Oren Rami). "Integral geometry, Hamiltonian dynamics, and Markov Chain Monte Carlo". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104583.
This thesis presents applications of differential geometry and graph theory to the design and analysis of Markov chain Monte Carlo (MCMC) algorithms. MCMC algorithms are used to generate samples from an arbitrary probability density [pi] in computationally demanding situations, since their mixing times need not grow exponentially with the dimension of [pi]. However, if [pi] has many modes, MCMC algorithms may still have very long mixing times. It is therefore crucial to understand and reduce MCMC mixing times, and there is currently a need for global mixing time bounds as well as algorithms that mix quickly for multi-modal densities. In the Gibbs sampling MCMC algorithm, the variance in the size of modes intersected by the algorithm's search-subspaces can grow exponentially in the dimension, greatly increasing the mixing time. We use integral geometry, together with the Hessian of [pi] and the Chern-Gauss-Bonnet theorem, to correct these distortions and avoid this exponential increase in the mixing time. Towards this end, we prove a generalization of the classical Crofton's formula in integral geometry that can allow one to greatly reduce the variance of Crofton's formula without introducing a bias. Hamiltonian Monte Carlo (HMC) algorithms are some of the most widely used MCMC algorithms. We use the symplectic properties of Hamiltonians to prove global Cheeger-type lower bounds for the mixing times of HMC algorithms, including Riemannian Manifold HMC as well as No-U-Turn HMC, the workhorse of the popular Bayesian software package Stan. One consequence of our work is the impossibility of energy-conserving Hamiltonian Markov chains to search for far-apart sub-Gaussian modes in polynomial time. We then prove another generalization of Crofton's formula that applies to Hamiltonian trajectories, and use our generalized Crofton formula to improve the convergence speed of HMC-based integration on manifolds.
We also present a generalization of the Hopf fibration acting on arbitrary ghost-valued random variables. For [beta] = 4, the geometry of the Hopf fibration is encoded by the quaternions; we investigate the extent to which the elegant properties of this encoding are preserved when one replaces quaternions with general [beta] > 0 ghosts.
Doerschuk, Peter Charles. "A Markov chain approach to electrocardiogram modeling and analysis". Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15224.
Full text source
Persing, Adam. "Some contributions to particle Markov chain Monte Carlo algorithms". Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23277.
Full text source