Dissertations on the topic "Markov chain Monte Carlo samplers"

Consult the top 50 dissertations for research on the topic "Markov chain Monte Carlo samplers".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the online abstract of the work, if these are available in the metadata.

Browse dissertations from a variety of disciplines and compile your bibliography correctly.

1

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

2

Sisson, Scott Antony. "Markov chains for genetics and extremes." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391095.

3

Pang, Wan-Kai. "Modelling ordinal categorical data : a Gibbs sampler approach." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323876.

4

Verhelst, Norman D., Reinhold Hatzinger, and Patrick Mair. "The Rasch Sampler." Foundation for Open Access Statistics, 2007. http://dx.doi.org/10.18637/jss.v020.i04.

Abstract:
The Rasch sampler is an efficient algorithm to sample binary matrices with given marginal sums. It is a Markov chain Monte Carlo (MCMC) algorithm. The program can handle matrices of up to 1024 rows and 64 columns. A special option allows sampling square matrices with given marginals and a fixed main diagonal, a problem prominent in social network analysis. In all cases the stationary distribution is uniform. The user has control over the serial dependency. (authors' abstract)
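The margin-preserving move at the heart of samplers of this kind can be illustrated with a simple checkerboard (tetrad) swap chain. The sketch below is a generic illustration of that idea, not the Rasch Sampler's own implementation; the function names and the thinning scheme are assumptions for illustration.

```python
import numpy as np

def checkerboard_swap_step(M, rng):
    """One Markov move: pick two rows and two columns at random and, if the
    induced 2x2 submatrix is a checkerboard, flip it.  Row and column sums are
    preserved, and the move is symmetric, so the stationary distribution on the
    reachable class of margin-preserving matrices is uniform."""
    n_rows, n_cols = M.shape
    r1, r2 = rng.choice(n_rows, size=2, replace=False)
    c1, c2 = rng.choice(n_cols, size=2, replace=False)
    sub = M[np.ix_([r1, r2], [c1, c2])]
    if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
        M[np.ix_([r1, r2], [c1, c2])] = 1 - sub  # swap the checkerboard pattern
    return M

def sample_fixed_margin_matrices(M0, n_samples, thin, seed=0):
    """Run the swap chain from M0 and keep every `thin`-th matrix."""
    rng = np.random.default_rng(seed)
    M = M0.copy()
    samples = []
    for step in range(n_samples * thin):
        M = checkerboard_swap_step(M, rng)
        if (step + 1) % thin == 0:
            samples.append(M.copy())
    return samples

# usage: every retained matrix has the same row and column sums as M0
M0 = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
draws = sample_fixed_margin_matrices(M0, n_samples=5, thin=50)
```

Keeping only every `thin`-th matrix is one crude way to control the serial dependency between retained samples.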
5

Zhu, Qingyun. "Product Deletion and Supply Chain Management." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/527.

Abstract:
One of the most significant changes in the evolution of modern business management is that organizations no longer compete as individual entities in the market, but as interlocking supply chains. Markets are no longer simply trading desks but dynamic ecosystems where people, organizations and the environment interact. Products and associated materials and resources are links that bridge supply chains from upstream (sourcing and manufacturing) to downstream (delivering and consuming). The lifecycle of a product plays a critical role in supply chains. Supply chains may be composed by, designed around, and modified for products. Product-related issues greatly impact supply chains. Existing studies have advanced product management and product lifecycle management literature through dimensions of product innovation, product growth, product line extensions, product efficiencies, and product acquisition. Product deletion, rationalization, or reduction research is limited but is a critical issue for many reasons. Sustainability is an important reason for this managerial decision. This study, grounded from multiple literature streams in both marketing and supply chain fields, identified relations and propositions to form a firm-level analysis on the role of supply chains in organizational product deletion decisions. Interviews, observational and archival data from international companies (i.e.: Australia, China, India, and Iran) contributed to the empirical support as case studies through a grounded theory approach. Bayesian analysis, an underused empirical analysis tool, was utilized to provide insights into this underdeveloped research stream; and its relationship to qualitative research enhances broader methodological understanding. Gibbs sampler and reversible jump Markov chain Monte Carlo (MCMC) simulation were used for Bayesian analysis based on collected data. The integrative findings are exploratory but provide insights for a number of research propositions.
6

Al, Hakmani Rahab. "Bayesian Estimation of Mixture IRT Models using NUTS." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1641.

Abstract:
The No-U-Turn Sampler (NUTS) is a relatively new Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior that common MCMC algorithms such as Gibbs sampling or Metropolis-Hastings usually exhibit. Given the fact that NUTS can efficiently explore the entire space of the target distribution, the sampler converges to high-dimensional target distributions more quickly than other MCMC algorithms and is hence less computationally expensive. The focus of this study is on applying NUTS to one of the complex IRT models, specifically the two-parameter mixture IRT (Mix2PL) model, and further to examine its performance in estimating model parameters when sample size, test length, and number of latent classes are manipulated. The results indicate that overall, NUTS performs well in recovering model parameters. However, the recovery of the class membership of individual persons is not satisfactory for the three-class conditions. Also, the results indicate that WAIC performs better than LOO in recovering the number of latent classes, in terms of the proportion of the time the correct model was selected as the best fitting model. However, when the effective number of parameters was also considered in selecting the best fitting model, both fully Bayesian fit indices perform equally well. In addition, the results suggest that when multiple latent classes exist, using either of the fully Bayesian fit indices (WAIC or LOO) would not select the conventional IRT model. On the other hand, when all examinees came from a single unified population, fitting MixIRT models using NUTS causes problems in convergence.
7

Lu, Pingbo. "Calibrated Bayes factors for model selection and model averaging." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343396705.

8

Deng, Wei. "Multiple imputation for marginal and mixed models in longitudinal data with informative missingness." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126890027.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xiii, 108 p.; also includes graphics. Includes bibliographical references (p. 104-108). Available online via OhioLINK's ETD Center
9

Wu, Yi-Fang. "Accuracy and variability of item parameter estimates from marginal maximum a posteriori estimation and Bayesian inference via Gibbs samplers." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/5879.

Abstract:
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and variability of the item parameter estimates from the marginal maximum a posteriori estimation via an expectation-maximization algorithm (MMAP/EM) and the Markov chain Monte Carlo Gibbs sampling (MCMC/GS) approach. In the study, the various factors which have an impact on the accuracy and variability of the item parameter estimates are discussed, and then further evaluated through a large scale simulation. The factors of interest include the composition and length of tests, the distribution of underlying latent traits, the size of samples, and the prior distributions of discrimination, difficulty, and pseudo-guessing parameters. The results of the two estimation methods are compared to determine the lower limit--in terms of test length, sample size, test characteristics, and prior distributions of item parameters--at which the methods can satisfactorily recover item parameters and efficiently function in reality. For practitioners, the results help to define limits on the appropriate use of the BILOG-MG (which implements MMAP/EM) and also, to assist in deciding the utility of OpenBUGS (which carries out MCMC/GS) for item parameter estimation in practice.
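For reference, the three-parameter logistic model named above gives the probability that examinee i answers item j correctly as (standard textbook form, with discrimination a_j, difficulty b_j and pseudo-guessing parameter c_j):

\[
P(X_{ij} = 1 \mid \theta_i) \;=\; c_j + \frac{1 - c_j}{1 + \exp\{-a_j(\theta_i - b_j)\}} .
\]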
10

Fu, Shuting. "Bayesian Logistic Regression Model with Integrated Multivariate Normal Approximation for Big Data." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/451.

Abstract:
The analysis of big data is of great interest today, and this comes with challenges of improving precision and efficiency in estimation and prediction. We study binary data with covariates from numerous small areas, where direct estimation is not reliable, and there is a need to borrow strength from the ensemble. This is generally done using Bayesian logistic regression, but because there are numerous small areas, the exact computation for the logistic regression model becomes challenging. Therefore, we develop an integrated multivariate normal approximation (IMNA) method for binary data with covariates within the Bayesian paradigm, and this procedure is assisted by the empirical logistic transform. Our main goal is to provide the theory of IMNA and to show that it is many times faster than the exact logistic regression method with almost the same accuracy. We apply the IMNA method to the health status binary data (excellent health or otherwise) from the Nepal Living Standards Survey with more than 60,000 households (small areas). We estimate the proportion of Nepalese in excellent health condition for each household. For these data IMNA gives estimates of the household proportions as precise as those from the logistic regression model and it is more than fifty times faster (20 seconds versus 1,066 seconds), and clearly this gain is transferable to bigger data problems.
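The empirical logistic transform mentioned above has a standard form (whether the thesis uses exactly this variant is an assumption): for y_i successes out of n_i trials in small area i,

\[
z_i = \log\frac{y_i + \tfrac{1}{2}}{n_i - y_i + \tfrac{1}{2}},
\qquad
\widehat{\operatorname{Var}}(z_i) \approx \frac{1}{y_i + \tfrac{1}{2}} + \frac{1}{n_i - y_i + \tfrac{1}{2}},
\]

which motivates treating z_i as approximately normal and hence working with a multivariate normal approximation.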
11

Frühwirth-Schnatter, Sylvia. "Bayesian Model Discrimination and Bayes Factors for Normal Linear State Space Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1993. http://epub.wu.ac.at/108/1/document.pdf.

Abstract:
It is suggested to discriminate between different state space models for a given time series by means of a Bayesian approach which chooses the model that minimizes the expected loss. Practical implementation of this procedure requires a fully Bayesian analysis for both the state vector and the unknown hyperparameters, which is carried out by Markov chain Monte Carlo methods. Application to some non-standard situations such as testing hypotheses on the boundary of the parameter space, discriminating non-nested models and discrimination of more than two models is discussed in detail. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
12

Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.

13

Helali, Amine. "Vitesse de convergence de l'échantillonneur de Gibbs appliqué à des modèles de la physique statistique." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0002/document.

Abstract:
Markov chain Monte Carlo (MCMC) methods are mathematical tools used to simulate probability measures π defined on state spaces of high dimension. The speed of convergence of the Markov chain X to its invariant distribution π is a natural question to study in this context. To measure the convergence rate of a Markov chain we use the total variation distance. It is well known that the convergence rate of a reversible Markov chain depends on its second largest eigenvalue in absolute value, denoted by β*. An important part of estimating β* is estimating the second largest eigenvalue, denoted by β1. Diaconis and Stroock (1991) introduced a method based on the Poincaré inequality to obtain a bound on β1 for general finite-state reversible Markov chains. In this thesis we use the approach of Shiu and Chen (2015) to study the Gibbs sampler for the 1-D Ising model with three or more states, also called the Potts model. Then, we generalize the result of Shiu and Chen (2015) to the case of the 2-D Ising model with two states. The results we obtain improve those of Ingrassia (1994). Finally, we introduce a perturbation of the Gibbs sampler in order to improve its convergence rate to equilibrium.
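A minimal heat-bath (Gibbs) sweep for the one-dimensional q-state Potts model studied above might look as follows; the free boundary conditions, the systematic scan order and the parameter values are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np

def gibbs_sweep_potts_1d(x, beta, q, rng):
    """One systematic-scan heat-bath (Gibbs) sweep for the 1-D q-state Potts model
    with free boundaries: p(x_i = s | rest) is proportional to
    exp(beta * number of neighbours of site i equal to s)."""
    n = len(x)
    for i in range(n):
        neighbours = []
        if i > 0:
            neighbours.append(x[i - 1])
        if i < n - 1:
            neighbours.append(x[i + 1])
        log_w = np.array([beta * sum(nb == s for nb in neighbours) for s in range(q)])
        p = np.exp(log_w - log_w.max())
        x[i] = rng.choice(q, p=p / p.sum())
    return x

# usage: q = 3 states (the Potts case), a short run of sweeps
rng = np.random.default_rng(1)
x = rng.integers(0, 3, size=50)
for _ in range(200):
    x = gibbs_sweep_potts_1d(x, beta=0.8, q=3, rng=rng)
```

The convergence rate of exactly this kind of chain is what the eigenvalue bounds discussed above quantify.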
14

Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.

Abstract:
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, their performance is unreliable when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. In this thesis we propose a new Monte Carlo framework in which we build efficient high-dimensional proposal distributions using SMC methods. This allows us to design effective MCMC algorithms in complex scenarios where standard strategies fail. We demonstrate these algorithms on a number of example problems, including simulated tempering, nonlinear non-Gaussian state-space models, and protein folding.
15

Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.

Abstract:
The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference and as a means for estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be presented as an image, and involve using various means of dividing the image into subimages that can be processed with some degree of independence. Where possible the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using our technique under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
16

Hörmann, Wolfgang, and Josef Leydold. "Improved Perfect Slice Sampling." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2003. http://epub.wu.ac.at/868/1/document.pdf.

Abstract:
Perfect slice sampling is a method to turn Markov Chain Monte Carlo (MCMC) samplers into exact generators for independent random variates. The originally proposed method is rather slow and thus several improvements have been suggested. However, two of them are erroneous. In this article we give a short introduction to perfect slice sampling, point out incorrect methods, and give a new improved version of the original algorithm. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
17

Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.

Abstract:
Markov chain Monte Carlo (MCMC) is one of the most popular statistical inference methods in machine learning. Recent work shows that a significant improvement of the statistical efficiency of MCMC on complex distributions can be achieved by exploiting geometric properties of the target distribution. This is known as geometric MCMC. However, many such methods, like Riemannian manifold Hamiltonian Monte Carlo (RMHMC), are computationally challenging to scale up to high dimensional distributions. The primary goal of this thesis is to develop novel geometric MCMC methods applicable to large-scale problems. To overcome the computational bottleneck of computing second order derivatives in geometric MCMC, I propose an adaptive MCMC algorithm using an efficient approximation based on limited-memory BFGS. I also propose a simplified variant of RMHMC that is able to work effectively on a larger scale than previous methods. Finally, I address an important limitation of geometric MCMC, namely that it is only available for continuous distributions. I investigate a relaxation of discrete variables to continuous variables that allows us to apply the geometric methods. This is a new direction of MCMC research which is of potential interest to many applications. The effectiveness of the proposed methods is demonstrated on a wide range of popular models, including generalised linear models, conditional random fields (CRFs), hierarchical models and Boltzmann machines.
18

Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.

Abstract:

Generating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task, if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties for sampling by using MCMC methods include, but not exclusively, dealing with high dimensional and multimodal problems, high computation cost due to extremely large datasets in Bayesian machine learning models, and lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.

One contribution of this dissertation is generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, and an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving—meaning that in the state space, the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume preserving dynamics for proposing MCMC moves is proposed. Examples including isokinetic dynamics and variable mass Hamiltonian dynamics with an explicit integrator, are all designed with fewer restrictions based on the general theory. Experiments show improvement in efficiency for sampling high dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided. Two methods to reduce the bias in the distribution of samples are proposed. One is to correct the dynamics by using an estimated noise based on subsampled data, and the other is to introduce additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. Proposed is a more reliable indicator—the longest integrated autocorrelation time over all functions in the state space—for detecting the convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.

Minor contributions include a general framework of changing variables, and a numerical integrator for the Hamiltonian dynamics with fourth order accuracy. The idea of changing variables is to transform the potential energy function as a function of the original variable to a function of the new variable, such that undesired properties can be removed. Two examples are provided and preliminary experimental results are obtained for supporting this idea. The fourth order integrator is constructed by combining the idea of the simplified Takahashi-Imada method and a two-stage Hessian-based integrator. The proposed method, called two-stage simplified Takahashi-Imada method, shows outstanding performance over existing methods in high-dimensional sampling problems.
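As a point of reference for the generalizations described above, a plain volume-preserving HMC step (leapfrog integration of the Hamiltonian dynamics followed by a Metropolis accept/reject) can be sketched as follows; the Gaussian target and the tuning constants are illustrative assumptions, not settings from the dissertation.

```python
import numpy as np

def leapfrog(q, p, grad_logpi, eps, n_steps):
    """Volume-preserving leapfrog integration of H(q, p) = -log pi(q) + p.p/2."""
    q, p = q.copy(), p.copy()
    p += 0.5 * eps * grad_logpi(q)
    for _ in range(n_steps - 1):
        q += eps * p
        p += eps * grad_logpi(q)
    q += eps * p
    p += 0.5 * eps * grad_logpi(q)
    return q, p

def hmc_step(q, logpi, grad_logpi, eps, n_steps, rng):
    """One HMC transition: simulate the dynamics, then apply the Metropolis criterion."""
    p0 = rng.standard_normal(q.shape)
    q_new, p_new = leapfrog(q, p0, grad_logpi, eps, n_steps)
    h_old = -logpi(q) + 0.5 * p0 @ p0
    h_new = -logpi(q_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:   # accept with prob min(1, exp(h_old - h_new))
        return q_new
    return q

# usage: standard normal target in 10 dimensions
logpi = lambda q: -0.5 * q @ q
grad_logpi = lambda q: -q
rng = np.random.default_rng(0)
q = np.zeros(10)
draws = []
for _ in range(1000):
    q = hmc_step(q, logpi, grad_logpi, eps=0.2, n_steps=20, rng=rng)
    draws.append(q.copy())
```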

19

Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.

Abstract:
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time series modeling perspective.
20

Andersson, Lovisa. "An application of Bayesian Hidden Markov Models to explore traffic flow conditions in an urban area." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385187.

Abstract:
This study employs Bayesian Hidden Markov Models as a method to explore vehicle traffic flow conditions in an urban area in Stockholm, based on sensor data from separate road positions. Inter-arrival times are used as the observed sequences. These sequences of inter-arrival times are assumed to be generated from the distributions of four different (and hidden) traffic flow states: nightly free flow, free flow, mixture and congestion. The filtered and smoothed probability distributions of the hidden states and the most probable state sequences are obtained by using the forward, forward-backward and Viterbi algorithms. The No-U-Turn sampler is used to sample from the posterior distributions of all unknown parameters. The obtained results show in a satisfactory way that the Hidden Markov Models can detect different traffic flow conditions. Some of the models have problems with divergence, but the obtained results from those models still show satisfactory results. In fact, two of the models that converged seemed to overestimate the presence of congested traffic, and all the models that did not converge seem to give adequate estimates of the probability of being in a congested state. Since the interest of this study lies in estimating the current traffic flow condition, and not in doing parameter inference, the model choice of Bayesian Hidden Markov Models is satisfactory. Due to the unsupervised nature of the problematization of this study, it is difficult to evaluate the accuracy of the results. However, a model with simulated data and known states was also implemented, which resulted in a high classification accuracy. This indicates that Hidden Markov Models are a good model choice for estimating traffic flow conditions.
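The filtered state probabilities referred to above come from the standard forward recursion; writing A for the transition matrix, f_j for the emission density of state j and y_{1:t} for the observations (generic HMM notation, not necessarily the thesis's own):

\[
\alpha_t(j) \propto f_j(y_t) \sum_i \alpha_{t-1}(i)\, A_{ij},
\qquad
P(s_t = j \mid y_{1:t}) = \frac{\alpha_t(j)}{\sum_k \alpha_t(k)} .
\]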
21

Luo, Yuqun. "Incorporation of Genetic Marker Information in Estimating Modelparameters for Complex Traits with Data From Large Complex Pedigrees." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1039109696.

22

Murray, Iain Andrew. "Advances in Markov chain Monte Carlo methods." Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487199.

Abstract:
Probability distributions over many variables occur frequently in Bayesian inference, statistical physics and simulation studies. Samples from distributions give insight into their typical behavior and can allow approximation of any quantity of interest, such as expectations or normalizing constants. Markov chain Monte Carlo (MCMC), introduced by Metropolis et al. (1953), allows sampling from distributions with intractable normalization, and remains one of the most important tools for approximate computation with probability distributions. While not needed by MCMC, normalizers are key quantities: in Bayesian statistics marginal likelihoods are needed for model comparison; in statistical physics many physical quantities relate to the partition function. In this thesis we propose and investigate several new Monte Carlo algorithms, both for evaluating normalizing constants and for improved sampling of distributions. Many MCMC correctness proofs rely on using reversible transition operators; this can lead to chains exploring by slow random walks. After reviewing existing MCMC algorithms, we develop a new framework for constructing non-reversible transition operators from existing reversible ones. Next we explore and extend MCMC-based algorithms for computing normalizing constants. In particular we develop a new MCMC operator and Nested Sampling approach for the Potts model. Our results demonstrate that these approaches can be superior to finding normalizing constants by annealing methods and can obtain better posterior samples. Finally we consider 'doubly-intractable' distributions with extra unknown normalizer terms that do not cancel in standard MCMC algorithms. We propose using several deterministic approximations for the unknown terms, and investigate their interaction with sampling algorithms. We then develop novel exact-sampling-based MCMC methods, the Exchange Algorithm and Latent Histories. For the first time these algorithms do not require separate approximation before sampling begins. Moreover, the Exchange Algorithm outperforms the only alternative sampling algorithm for doubly intractable distributions.
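The exchange-algorithm move mentioned above can be sketched generically: propose a new parameter, simulate an exact auxiliary data set from the model at the proposed value, and accept with a ratio in which the unknown normalizers cancel. The code below is a minimal sketch of that standard move; the toy Bernoulli model in the usage lines is purely an assumption for illustration, not an example from the thesis.

```python
import numpy as np

def exchange_step(theta, y, log_f, log_prior, simulate, prop_sd, rng):
    """One exchange-algorithm move for a target
    p(theta | y) proportional to prior(theta) * f(y | theta) / Z(theta)
    with intractable Z(theta).
    log_f    : unnormalised log-likelihood log f(data | theta)
    simulate : draws an exact sample from the model given theta (required)
    The auxiliary draw w ~ f(. | theta') makes Z(theta) and Z(theta') cancel."""
    theta_prop = theta + prop_sd * rng.standard_normal()   # symmetric random-walk proposal
    w = simulate(theta_prop, rng)
    log_a = (log_prior(theta_prop) - log_prior(theta)
             + log_f(y, theta_prop) - log_f(y, theta)
             + log_f(w, theta) - log_f(w, theta_prop))
    if np.log(rng.uniform()) < log_a:
        return theta_prop
    return theta

# usage: toy Bernoulli exponential family, pretending its normaliser is unknown
n = 200
rng = np.random.default_rng(0)
y = (rng.uniform(size=n) < 0.7).astype(float)
log_f = lambda data, th: th * np.sum(data)                  # unnormalised log-likelihood
log_prior = lambda th: -0.5 * th ** 2                       # N(0, 1) prior, up to a constant
simulate = lambda th, rng: (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-th))).astype(float)
theta = 0.0
for _ in range(2000):
    theta = exchange_step(theta, y, log_f, log_prior, simulate, prop_sd=0.3, rng=rng)
```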
23

Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.

24

Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.

25

Brooks, Stephen Peter. "Convergence diagnostics for Markov Chain Monte Carlo." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363913.

26

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking’ behaviour where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data. Although making the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify and standard algorithms tend to perform poorly when conditioning on high dimensional observations. This often requires further approximation by reducing the observations to lower dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods such as slice sampling and Hamiltonian Monte Carlo (HMC) within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values when standard ABC methods require reduction to lower dimensional summaries for tractability. Further we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values.
The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within a HMC method.
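The pseudo-marginal mechanism the first contribution builds on can be sketched in a few lines: replace the exact likelihood by a non-negative unbiased estimate and recycle the estimate attached to the current state until a proposal is accepted. The sketch below, including the latent-variable toy model in the usage lines, is an illustrative assumption rather than the thesis's own construction.

```python
import numpy as np

def pseudo_marginal_mh(log_lik_hat, log_prior, theta0, n_iter, prop_sd, rng):
    """Pseudo-marginal Metropolis-Hastings: the exact likelihood is replaced by a
    non-negative unbiased estimate (unbiased on the likelihood scale), and the
    estimate attached to the current state is carried forward until a proposal
    is accepted; this is what keeps the chain targeting the exact posterior."""
    theta = theta0
    log_l = log_lik_hat(theta, rng)
    chain = []
    for _ in range(n_iter):
        theta_prop = theta + prop_sd * rng.standard_normal()
        log_l_prop = log_lik_hat(theta_prop, rng)
        log_a = (log_l_prop + log_prior(theta_prop)) - (log_l + log_prior(theta))
        if np.log(rng.uniform()) < log_a:
            theta, log_l = theta_prop, log_l_prop   # keep the accepted estimate
        chain.append(theta)
    return np.array(chain)

# usage: latent-variable toy where the likelihood is estimated by simple Monte Carlo
y_obs = 1.3
def log_lik_hat(theta, rng, n_particles=32):
    x = theta + rng.standard_normal(n_particles)               # x ~ N(theta, 1)
    w = np.exp(-0.5 * (y_obs - x) ** 2) / np.sqrt(2 * np.pi)   # N(y_obs; x, 1) density
    return np.log(np.mean(w))
log_prior = lambda th: -0.5 * th ** 2
rng = np.random.default_rng(2)
chain = pseudo_marginal_mh(log_lik_hat, log_prior, theta0=0.0, n_iter=2000, prop_sd=0.5, rng=rng)
```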
27

Stormark, Kristian. "Multiple Proposal Strategies for Markov Chain Monte Carlo." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9330.

Abstract:

The multiple proposal methods represent a recent simulation technique for Markov Chain Monte Carlo that allows several proposals to be considered at each step of transition. Motivated by the ideas of Quasi Monte Carlo integration, we examine how strongly correlated proposals can be employed to construct Markov chains with improved mixing properties. We proceed by giving a concise introduction to the Monte Carlo and Markov Chain Monte Carlo theory, and we supply a short discussion of the standard simulation algorithms and the difficulties of efficient sampling. We then examine two multiple proposal methods suggested in the literature, and we indicate the possibility of a unified formulation of the two methods. More essentially, we report some systematic exploration strategies for the two multiple proposal methods. In particular, we present schemes for the utilization of well-distributed point sets and maximally spread search directions. We also include a simple construction procedure for the latter type of point set. A numerical examination of the multiple proposal methods is performed on two simple test problems. We find that the systematic exploration approach may provide a significant improvement of the mixing, especially when the probability mass of the target distribution is "easy to miss" by independent sampling. For both test problems, we find that the best results are obtained with the QMC schemes. In particular, we find that the gain is most pronounced for a relatively moderate number of proposals. With fewer proposals, the properties of the well-distributed point sets will not be that relevant. For a large number of proposals, the independent sampling approach will be more competitive, since the coverage of the local neighborhood will then be better.

28

Sanborn, Adam N. "Uncovering mental representations with Markov chain Monte Carlo." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278468.

Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychological and Brain Sciences and Program in Neuroscience, 2007.
Source: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6994. Adviser: Richard M. Shiffrin. Title from dissertation home page (viewed May 21, 2008).
29

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables where the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic behaviour of the sum and the maximum of heavy-tailed random sums is identical. Therefore we compare the random sum and the maximum value for various distributions, to investigate from which point one can use the asymptotic approximation. Furthermore, we propose a new method to estimate quantiles and the estimator is shown to be efficient.
30

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology a Markov chain is simulated, with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probability in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . .. The third problem considers the solution Xm to a stochastic recurrence equation, Xm = AmXm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments. In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers a discrete-time Markov chain and the computation of general expectations of its sample path related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first passage probabilities. An unbiased estimator of the reciprocal probability for each corresponding problem is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.
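One standard way to extract information about the normalising constant from the trajectory is the following identity (whether this is the exact estimator used in the thesis is an assumption): if the chain has invariant density \pi(x) = f(x)\,\mathbf{1}_A(x)/p with p = P(X \in A), and v is any probability density supported on A, then

\[
\mathbb{E}_{\pi}\!\left[\frac{v(X)}{f(X)}\right]
= \int_A \frac{v(x)}{f(x)}\,\frac{f(x)}{p}\,dx
= \frac{1}{p},
\]

so the trajectory average of v(X_t)/f(X_t) estimates the reciprocal probability 1/p.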


31

Hastie, David. "Towards automatic reversible jump Markov Chain Monte Carlo." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414179.

32

Li, Shuying. "Phylogenetic tree construction using markov chain monte carlo /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182323916.

33

Xu, Jason Qian. "Markov Chain Monte Carlo and Non-Reversible Methods." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244823.

Abstract:
The bulk of Markov chain Monte Carlo applications make use of reversible chains, relying on the Metropolis-Hastings algorithm or similar methods. While reversible chains have the advantage of being relatively easy to analyze, it has been shown that non-reversible chains may outperform them in various scenarios. Neal proposes an algorithm that transforms a general reversible chain into a non-reversible chain with a construction that does not increase the asymptotic variance. These modified chains work to avoid diffusive backtracking behavior which causes Markov chains to be trapped in one position for too long. In this paper, we provide an introduction to MCMC, and discuss the Metropolis algorithm and Neal’s algorithm. We introduce a decaying memory algorithm inspired by Neal’s idea, and then analyze and compare the performance of these chains on several examples.
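For contrast with the non-reversible constructions discussed above, a baseline reversible random-walk Metropolis sampler is only a few lines; the bimodal toy target below is an assumption chosen to make the diffusive backtracking behaviour easy to see.

```python
import numpy as np

def random_walk_metropolis(log_pi, x0, n_iter, step, rng):
    """Baseline (reversible) random-walk Metropolis sampler, the starting point
    for the non-reversible constructions discussed above."""
    x = x0
    chain = []
    for _ in range(n_iter):
        x_prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_pi(x_prop) - log_pi(x):
            x = x_prop
        chain.append(x)
    return np.array(chain)

# usage: a bimodal 1-D target with modes near -3 and +3
log_pi = lambda x: np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)
rng = np.random.default_rng(3)
chain = random_walk_metropolis(log_pi, x0=0.0, n_iter=5000, step=1.0, rng=rng)
```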
34

Bentley, Jason Phillip. "Exact Markov chain Monte Carlo and Bayesian linear regression." Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2534.

Abstract:
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical, due to large backwards coupling times. We demonstrate large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
35

Pooley, James P. "Exploring phonetic category structure with Markov chain Monte Carlo." Connect to resource, 2008. http://hdl.handle.net/1811/32221.

36

Angelino, Elaine Lee. "Accelerating Markov chain Monte Carlo via parallel predictive prefetching." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13070022.

Abstract:
We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms. This dissertation demonstrates that MCMC inference can be accelerated in a model of parallel computation that uses speculation to predict and complete computational work ahead of when it is known to be useful. By exploiting fast, iterative approximations to the target density, we can speculatively evaluate many potential future steps of the chain in parallel. In Bayesian inference problems, this approach can accelerate sampling from the target distribution, without compromising exactness, by exploiting subsets of data. It takes advantage of whatever parallel resources are available, but produces results exactly equivalent to standard serial execution. In the initial burn-in phase of chain evaluation, it achieves speedup over serial evaluation that is close to linear in the number of available cores.
Engineering and Applied Sciences
37

Vaičiulytė, Ingrida. "Study and application of Markov chain Monte Carlo method." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112440-55390.

Abstract:
Markov chain Monte Carlo (MCMC) adaptive methods are analyzed in this dissertation, with the aim of creating computationally effective algorithms for data-analysis decision-making with a given accuracy. The tasks of estimating the parameters of multivariate distributions constructed in a hierarchical way (the skew t distribution, the Poisson-Gaussian model, the stable symmetric vector law) are described and solved in this research. To create the adaptive MCMC procedure, a sequential generating method is applied to the Monte Carlo samples, introducing rules for statistical termination and for sample-size regulation of the Markov chains. The statistical tasks solved by this method reveal characteristics of the relevant computational problems of the MCMC method. The effectiveness of the MCMC algorithms is analyzed using a statistical modeling method constructed in the dissertation. Tests made with sportsmen data and with financial data of enterprises belonging to the health-care industry confirmed that the numerical properties of the method correspond to the theoretical model. The methods and algorithms created are also applied to construct a model for sociological data analysis. Tests of the algorithms have shown that the adaptive MCMC algorithm allows one to obtain estimators of the examined distribution parameters in a lower number of chains, reducing the volume of calculations approximately two times. The algorithms created in this dissertation can be used to test systems of stochastic type and to solve other statistical... [to full text]
38

Pereira, Fernanda Chaves. "Bayesian Markov chain Monte Carlo methods in general insurance." Thesis, City University London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342720.

39

Mangoubi, Oren (Oren Rami). "Integral geometry, Hamiltonian dynamics, and Markov Chain Monte Carlo." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104583.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-101).
This thesis presents applications of differential geometry and graph theory to the design and analysis of Markov chain Monte Carlo (MCMC) algorithms. MCMC algorithms are used to generate samples from an arbitrary probability density [pi] in computationally demanding situations, since their mixing times need not grow exponentially with the dimension of [pi]. However, if [pi] has many modes, MCMC algorithms may still have very long mixing times. It is therefore crucial to understand and reduce MCMC mixing times, and there is currently a need for global mixing time bounds as well as algorithms that mix quickly for multi-modal densities. In the Gibbs sampling MCMC algorithm, the variance in the size of modes intersected by the algorithm's search-subspaces can grow exponentially in the dimension, greatly increasing the mixing time. We use integral geometry, together with the Hessian of [pi] and the Chern-Gauss-Bonnet theorem, to correct these distortions and avoid this exponential increase in the mixing time. Towards this end, we prove a generalization of the classical Crofton's formula in integral geometry that can allow one to greatly reduce the variance of Crofton's formula without introducing a bias. Hamiltonian Monte Carlo (HMC) algorithms are some of the most widely-used MCMC algorithms. We use the symplectic properties of Hamiltonians to prove global Cheeger-type lower bounds for the mixing times of HMC algorithms, including Riemannian Manifold HMC as well as No-U-Turn HMC, the workhorse of the popular Bayesian software package Stan. One consequence of our work is the impossibility of energy-conserving Hamiltonian Markov chains to search for far-apart sub-Gaussian modes in polynomial time. We then prove another generalization of Crofton's formula that applies to Hamiltonian trajectories, and use our generalized Crofton formula to improve the convergence speed of HMC-based integration on manifolds. We also present a generalization of the Hopf fibration acting on arbitrary ghost-valued random variables. For [beta] = 4, the geometry of the Hopf fibration is encoded by the quaternions; we investigate the extent to which the elegant properties of this encoding are preserved when one replaces quaternions with general [beta] > 0 ghosts.
by Oren Mangoubi.
Ph. D.
40

Persing, Adam. "Some contributions to particle Markov chain Monte Carlo algorithms." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23277.

Abstract:
Hidden Markov models (HMMs) (Cappe et al., 2005) and discrete time stopped Markov processes (Del Moral, 2004, Section 2.2.3) are used to model phenomena in a wide range of fields. However, as practitioners develop more intricate models, analytical Bayesian inference becomes very difficult. In light of this issue, this work focuses on sampling from the posteriors of HMMs and stopped Markov processes using sequential Monte Carlo (SMC) (Doucet et al. 2008, Doucet et al. 2001, Gordon et al. 1993) and, more importantly, particle Markov chain Monte Carlo (PMCMC) (Andrieu et al., 2010). The thesis consists of three major contributions, which enhance the performance of PMCMC. The first work focuses on HMMs, and it begins by introducing a new SMC smoothing (Briers et al. 2010, Fearnhead et al. 2010) estimate of the HMM's normalising constant; we prove the estimate's unbiasedness and a central limit theorem. We use this estimate to develop new PMCMC algorithms that, under certain algorithmic settings, require less computational time than the algorithms of Andrieu et al. (2010). Our new estimate also leads to the discovery of an optimal setting for the smoothers of Briers et al. (2010) and Fearnhead et al. (2010). As this setting is not available for the general class of HMMs, we develop three algorithms for approximating it. The second major work builds from Jasra et al. (2013) and Whiteley et al. (2012) to develop new SMC and PMCMC algorithms that draw from HMMs whose observations have intractable density functions. While these types of algorithms have appeared before (see Jasra et al. 2013, Jasra et al. 2012, and Martin et al. 2012), this work uses twisted proposals as in Whiteley et al. (2012) to reduce the variance of SMC estimates of the normalising constant to improve the convergence of PMCMC in some scenarios. Finally, the third project is concerned with inferring the unknown parameters of stopped Markov processes that are only observed upon reaching their terminal sets. Bayesian inference has not been attempted on this class of problems before. The parameters are inferred through two new adaptive and non-adaptive PMCMC algorithms.
41

Tu, Zhuowen. "Image Parsing by Data-Driven Markov Chain Monte Carlo." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1038347031.

42

Paul, Rajib. "Theoretical And Algorithmic Developments In Markov Chain Monte Carlo." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218184168.

43

Cheal, Ryan. "Markov Chain Monte Carlo methods for simulation in pedigrees." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362254.

Full text of source
APA, Harvard, Vancouver, ISO and other styles
44

BALDIOTI, HUGO RIBEIRO. "MARKOV CHAIN MONTE CARLO FOR NATURAL INFLOW ENERGY SCENARIOS SIMULATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36058@1.

Full text of source
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Brazil has a predominantly hydroelectric electro-energetic matrix and a territory of continental proportions, which makes it possible to exploit the abundant water resources available nationally: approximately 65 percent of electricity generation capacity comes from hydropower and 28 percent from thermoelectric plants. Hydrological regimes of natural streamflows are stochastic in nature and must be modelled accordingly to plan the operation of the system; the hydrothermal dispatch problem is therefore of central importance and characterized by this stochastic dependence. From the natural streamflows it is possible to calculate the Natural Inflow Energy (NIE), which is used directly to simulate synthetic series; these series in turn feed the optimization process responsible for computing the optimal policy that minimizes the system's operating costs. Studies on the simulation of synthetic NIE scenarios have produced new methodological proposals over the years. Such developments often presuppose Gaussian data, so that a parametric distribution can be fitted to them. In most real cases in the context of the Brazilian Electric Sector, however, the data cannot be treated this way, since their densities exhibit relevant tail behavior and marked skewness. For the operational planning of the National Interconnected System (SIN), this intrinsic skewness must be reproducible. This work therefore proposes two non-parametric approaches to scenario simulation. The first samples the residuals of the NIE series using a Markov Chain Monte Carlo (MCMC) technique together with Kernel Density Estimation. The second applies MCMC directly to the NIE series to simulate synthetic scenarios, using a novel approach for the transitions between matrices and periods. The results, assessed graphically and through statistical goodness-of-fit tests against the historical data, indicate that the proposals reproduce the asymmetric characteristics more accurately without losing the ability to reproduce basic statistics. The proposed models are thus good alternatives to the model currently used by the Brazilian Electric Sector.
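One ingredient named above, kernel-density-based residual sampling, can be sketched as a smoothed bootstrap: pick a historical residual at random and perturb it with the kernel bandwidth, so skewed and heavy-tailed shapes are preserved without assuming Gaussianity. The snippet is only an illustrative approximation under assumed choices (the function name and rule-of-thumb bandwidth are not from the thesis, and the MCMC transition structure between periods is omitted).

    import numpy as np

    def sample_residuals_kde(residuals, n_samples, rng=None):
        # Smoothed bootstrap from a Gaussian kernel density estimate of the
        # historical residuals of the NIE series.
        rng = np.random.default_rng() if rng is None else rng
        r = np.asarray(residuals, dtype=float)
        h = 1.06 * r.std(ddof=1) * r.size ** (-0.2)    # rule-of-thumb bandwidth
        picks = rng.choice(r, size=n_samples, replace=True)
        return picks + h * rng.standard_normal(n_samples)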
APA, Harvard, Vancouver, ISO and other styles
45

Wu, Miaodan. "Markov chain Monte Carlo methods applied to Bayesian data analysis." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625087.

Full text of source
APA, Harvard, Vancouver, ISO and other styles
46

Rao, V. A. P. "Markov chain Monte Carlo for continuous-time discrete-state systems." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1349490/.

Full text of source
Abstract:
A variety of phenomena are best described using dynamical models which operate on a discrete state space and in continuous time. Examples include Markov (and semi-Markov) jump processes, continuous-time Bayesian networks, renewal processes and other point processes. These continuous-time, discrete-state models are ideal building blocks for Bayesian models in fields such as systems biology, genetics, chemistry, computing networks, human-computer interactions etc. However, a challenge towards their more widespread use is the computational burden of posterior inference; this typically involves approximations like time discretization and can be computationally intensive. In this thesis, we describe a new class of Markov chain Monte Carlo methods that allow efficient computation while still being exact. The core idea is an auxiliary variable Gibbs sampler that alternately resamples a random discretization of time given the state-trajectory of the system, and then samples a new trajectory given this discretization. We introduce this idea by relating it to a classical idea called uniformization, and use it to develop algorithms that outperform the state-of-the-art for models based on the Markov jump process. We then extend the scope of these samplers to a wider class of models such as nonstationary renewal processes, and semi-Markov jump processes. By developing a more general framework beyond uniformization, we remedy various limitations of the original algorithms, allowing us to develop MCMC samplers for systems with infinite state spaces, unbounded rates, as well as systems indexed by more general continuous spaces than time.
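To illustrate the uniformization construction that these samplers build on, the sketch below forward-simulates a Markov jump process by drawing candidate event times from a dominating Poisson process and thinning them with a subordinated discrete-time kernel. It is a minimal sketch of the classical forward simulation under assumed inputs, not the auxiliary-variable Gibbs sampler developed in the thesis.

    import numpy as np

    def sample_mjp_uniformization(Q, x0, T, rng=None):
        # Forward-simulate a Markov jump process with generator Q on [0, T]:
        # candidate times come from a Poisson process with dominating rate omega,
        # and the state moves with B = I + Q/omega (self-transitions act as thinning).
        rng = np.random.default_rng() if rng is None else rng
        Q = np.asarray(Q, dtype=float)
        omega = 2.0 * np.max(-np.diag(Q))              # any omega >= max leaving rate works
        B = np.eye(Q.shape[0]) + Q / omega
        times = np.sort(rng.uniform(0.0, T, rng.poisson(omega * T)))
        states, x = [x0], x0
        for _ in times:
            x = rng.choice(Q.shape[0], p=B[x])
            states.append(x)
        return times, np.array(states)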
APA, Harvard, Vancouver, ISO and other styles
47

Higdon, David. "Spatial applications of Markov chain Monte Carlo for Bayesian inference /." Thesis, Connect to this title online; UW restricted, 1994. http://hdl.handle.net/1773/8942.

Full text of source
APA, Harvard, Vancouver, ISO and other styles
48

Wu, Chang-Ye. "Acceleration Strategies of Markov Chain Monte Carlo for Bayesian Computation." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED019/document.

Full text of source
Abstract:
MCMC algorithms are difficult to scale, since they need to sweep over the whole data set at each iteration, which prohibits their application in big-data settings. Roughly speaking, all scalable MCMC algorithms can be divided into two categories: divide-and-conquer methods and subsampling methods. The aim of this project is to reduce the computing time induced by complex or large likelihood functions.
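As a hedged illustration of the subsampling category, the sketch below performs one Metropolis-Hastings step using a minibatch log-likelihood estimate scaled up to the full data size. This naive estimator gives only an approximately correct chain; the names and parameters are placeholders, not the thesis's own algorithms.

    import numpy as np

    def subsampled_mh_step(theta, data, log_prior, log_lik_one, batch_size, step, rng=None):
        # Random-walk MH step where the full-data log-likelihood is replaced by a
        # scaled estimate computed on a random subsample of size batch_size.
        rng = np.random.default_rng() if rng is None else rng
        idx = rng.choice(len(data), size=batch_size, replace=False)
        scale = len(data) / batch_size
        def log_post(th):
            return log_prior(th) + scale * sum(log_lik_one(th, data[i]) for i in idx)
        prop = theta + step * rng.standard_normal(np.shape(theta))
        return prop if np.log(rng.uniform()) < log_post(prop) - log_post(theta) else theta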
APA, Harvard, Vancouver, ISO and other styles
49

Karawatzki, Roman, and Josef Leydold. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/294/1/document.pdf.

Full text of source
Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, which include all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle is required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
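A minimal sketch of a hit-and-run style move is given below: a uniformly random direction is drawn on the unit sphere and a point is proposed along that line. For simplicity the sketch uses a symmetric Metropolis test along the line instead of the exact Ratio-of-Uniforms construction described in the abstract, so it should be read as an assumed simplification rather than the authors' algorithm.

    import numpy as np

    def random_direction_step(x, log_density, step=1.0, rng=None):
        # Propose a move of random length along a uniformly random direction and
        # accept it with the usual Metropolis ratio for a symmetric proposal.
        rng = np.random.default_rng() if rng is None else rng
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)                         # uniform direction on the unit sphere
        prop = x + step * rng.standard_normal() * d
        return prop if np.log(rng.uniform()) < log_density(prop) - log_density(x) else x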
APA, Harvard, Vancouver, ISO and other styles
50

Cui, Tiangang. "Bayesian calibration of geothermal reservoir models via Markov Chain Monte Carlo." Thesis, University of Auckland, 2010. http://hdl.handle.net/2292/5944.

Full text of source
Abstract:
The aim of the research described in this thesis is the development of methods for solving computationally intensive computer model calibration problems by sample-based inference. Although our primary focus is calibrating computer models of geothermal reservoirs, the methodology we have developed can be applied to a wide range of computer model calibration problems. In this study, the Bayesian framework is employed to construct the posterior distribution over all model parameters consistent with the measured data, accounting for various uncertainties in the calibration process. To construct the posterior distribution for computer model calibration problems, several methods such as the additive bias framework of Kennedy and O'Hagan (2001) and the enhanced error model (Kaipio and Somersalo, 2007) are investigated. The solutions of computer model calibration problems are then given by estimating the expected value of statistics of interest over the posterior distribution. Markov chain Monte Carlo (MCMC) sampling, the Metropolis-Hastings (MH) algorithm (Metropolis et al., 1953; Hastings, 1970) in particular, is employed to explore the posterior distribution, and Monte Carlo integration is used to calculate the expected values.
Whole document restricted until September 2011, but available on request; use the feedback form to request access.
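For context, the sketch below is a generic random-walk Metropolis-Hastings loop of the kind the abstract refers to, followed by a Monte Carlo estimate of a posterior expectation. The log_posterior callable, step size and iteration count are placeholder assumptions, not the thesis's geothermal calibration setup.

    import numpy as np

    def calibrate_rwm(log_posterior, theta0, n_iter=5000, step=0.1, rng=None):
        # Random-walk Metropolis-Hastings: propose a Gaussian perturbation of the
        # current parameters and accept it with the usual ratio; the stored chain
        # is then used for Monte Carlo estimates of posterior expectations.
        rng = np.random.default_rng() if rng is None else rng
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        chain = np.empty((n_iter, theta.size))
        lp = log_posterior(theta)
        for i in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    # Posterior means such as chain.mean(axis=0) then give Monte Carlo estimates
    # of the expected values of the statistics of interest.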
APA, Harvard, Vancouver, ISO and other styles
