
Journal articles on the topic 'Markov chain Monte Carlo samplers'


Consult the top 50 journal articles for your research on the topic 'Markov chain Monte Carlo samplers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

South, L. F., A. N. Pettitt, and C. C. Drovandi. "Sequential Monte Carlo Samplers with Independent Markov Chain Monte Carlo Proposals." Bayesian Analysis 14, no. 3 (September 2019): 753–76. http://dx.doi.org/10.1214/18-ba1129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Everitt, Richard G., Richard Culliford, Felipe Medina-Aguayo, and Daniel J. Wilson. "Sequential Monte Carlo with transformations." Statistics and Computing 30, no. 3 (November 17, 2019): 663–76. http://dx.doi.org/10.1007/s11222-019-09903-y.

Full text
Abstract:
This paper examines methodology for performing Bayesian inference sequentially on a sequence of posteriors on spaces of different dimensions. For this, we use sequential Monte Carlo samplers, introducing the innovation of using deterministic transformations to move particles effectively between target distributions with different dimensions. This approach, combined with adaptive methods, yields an extremely flexible and general algorithm for Bayesian model comparison that is suitable for use in applications where the acceptance rate in reversible jump Markov chain Monte Carlo is low. We use this approach on model comparison for mixture models, and for inferring coalescent trees sequentially, as data arrives.
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Xiaopeng, Chuancai Liu, Hongji Yang, and Xiaochun Zhang. "A Multi-Trajectory Monte Carlo Sampler." 網際網路技術學刊 23, no. 5 (September 2022): 1117–28. http://dx.doi.org/10.53106/160792642022092305020.

Full text
Abstract:
Markov Chain Monte Carlo techniques based on Hamiltonian dynamics can sample the first or last principal components of multivariate probability models using simulated trajectories. However, when components' scales span orders of magnitude, these approaches may be unable to access all components adequately. While it is possible to reconcile the first and last components by alternating between two different types of trajectories, the sampling of intermediate components may be imprecise. In this paper, a function generalizing the kinetic energies of Hamiltonian Monte Carlo and Riemannian Manifold Hamiltonian Monte Carlo is proposed, and it is found that the methods based on a specific form of the function can more accurately sample normal distributions. Additionally, the multi-particle algorithm's reasoning is given after a review of some statistical ideas.
APA, Harvard, Vancouver, ISO, and other styles
4

Dellaportas, Petros, and Ioannis Kontoyiannis. "Control variates for estimation based on reversible Markov chain Monte Carlo samplers." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74, no. 1 (November 3, 2011): 133–61. http://dx.doi.org/10.1111/j.1467-9868.2011.01000.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jones, Galin L., Gareth O. Roberts, and Jeffrey S. Rosenthal. "Convergence of Conditional Metropolis-Hastings Samplers." Advances in Applied Probability 46, no. 2 (June 2014): 422–45. http://dx.doi.org/10.1239/aap/1401369701.

Full text
Abstract:
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
APA, Harvard, Vancouver, ISO, and other styles
6

Jones, Galin L., Gareth O. Roberts, and Jeffrey S. Rosenthal. "Convergence of Conditional Metropolis-Hastings Samplers." Advances in Applied Probability 46, no. 02 (June 2014): 422–45. http://dx.doi.org/10.1017/s0001867800007151.

Full text
Abstract:
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
APA, Harvard, Vancouver, ISO, and other styles
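The two entries above study samplers that mix Gibbs updates with Metropolis-Hastings updates. As a rough illustration of that combination (not the authors' diffusion-path application), here is a minimal Metropolis-within-Gibbs sketch in Python for a toy bivariate Gaussian target; the target density and proposal scale are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x, y):
    # Toy unnormalised log-density (an assumption): a correlated bivariate Gaussian.
    return -0.5 * (x**2 + y**2 + 1.5 * x * y)

x, y = 0.0, 0.0
samples = []
for _ in range(5000):
    # Gibbs-style exact update of x given y: for this target, x | y ~ N(-0.75*y, 1).
    x = rng.normal(-0.75 * y, 1.0)
    # Metropolis-Hastings update of y given x with a symmetric random-walk proposal.
    y_prop = y + rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < log_target(x, y_prop) - log_target(x, y):
        y = y_prop
    samples.append((x, y))

print(np.mean(samples, axis=0))  # both coordinate means should be near 0
```

Whether such a combined chain remains geometrically or uniformly ergodic is exactly the kind of question the cited paper addresses.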
7

Levy, Roy. "The Rise of Markov Chain Monte Carlo Estimation for Psychometric Modeling." Journal of Probability and Statistics 2009 (2009): 1–18. http://dx.doi.org/10.1155/2009/537139.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) estimation strategies represent a powerful approach to estimation in psychometric models. Popular MCMC samplers and their alignment with Bayesian approaches to modeling are discussed. Key historical and current developments of MCMC are surveyed, emphasizing how MCMC allows the researcher to overcome the limitations of other estimation paradigms, facilitates the estimation of models that might otherwise be intractable, and frees the researcher from certain possible misconceptions about the models.
APA, Harvard, Vancouver, ISO, and other styles
8

Kilic, Zeliha, Max Schweiger, Camille Moyer, and Steve Pressé. "Monte Carlo samplers for efficient network inference." PLOS Computational Biology 19, no. 7 (July 18, 2023): e1011256. http://dx.doi.org/10.1371/journal.pcbi.1011256.

Full text
Abstract:
Accessing information on an underlying network driving a biological process often involves interrupting the process and collecting snapshot data. When snapshot data are stochastic, the data’s structure necessitates a probabilistic description to infer underlying reaction networks. As an example, we may imagine wanting to learn gene state networks from the type of data collected in single molecule RNA fluorescence in situ hybridization (RNA-FISH). In the networks we consider, nodes represent network states, and edges represent biochemical reaction rates linking states. Simultaneously estimating the number of nodes and constituent parameters from snapshot data remains a challenging task in part on account of data uncertainty and timescale separations between kinetic parameters mediating the network. While parametric Bayesian methods learn parameters given a network structure (with known node numbers) with rigorously propagated measurement uncertainty, learning the number of nodes and parameters with potentially large timescale separations remain open questions. Here, we propose a Bayesian nonparametric framework and describe a hybrid Bayesian Markov Chain Monte Carlo (MCMC) sampler directly addressing these challenges. In particular, in our hybrid method, Hamiltonian Monte Carlo (HMC) leverages local posterior geometries in inference to explore the parameter space; Adaptive Metropolis Hastings (AMH) learns correlations between plausible parameter sets to efficiently propose probable models; and Parallel Tempering takes into account multiple models simultaneously with tempered information content to augment sampling efficiency. We apply our method to synthetic data mimicking single molecule RNA-FISH, a popular snapshot method in probing transcriptional networks to illustrate the identified challenges inherent to learning dynamical models from these snapshots and how our method addresses them.
APA, Harvard, Vancouver, ISO, and other styles
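The hybrid sampler described above combines several standard ingredients. One of them, parallel tempering, is easy to illustrate in isolation: a minimal sketch, assuming a made-up bimodal one-dimensional target and a hand-picked temperature ladder, with random-walk Metropolis moves within each tempered chain and occasional state swaps between adjacent temperatures.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Bimodal toy target (an assumption): equal mixture of N(-3, 1) and N(3, 1).
    return np.logaddexp(-0.5 * (x + 3.0)**2, -0.5 * (x - 3.0)**2)

betas = [1.0, 0.5, 0.25, 0.1]      # inverse temperatures, coldest chain first
states = [0.0] * len(betas)
cold_samples = []

for _ in range(20000):
    # Random-walk Metropolis update within each tempered chain.
    for k, beta in enumerate(betas):
        prop = states[k] + rng.normal(0.0, 1.0)
        if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(states[k])):
            states[k] = prop
    # Propose swapping the states of a random adjacent pair of temperatures.
    k = rng.integers(len(betas) - 1)
    log_acc = (betas[k] - betas[k + 1]) * (log_target(states[k + 1]) - log_target(states[k]))
    if np.log(rng.uniform()) < log_acc:
        states[k], states[k + 1] = states[k + 1], states[k]
    cold_samples.append(states[0])

print(np.mean(np.array(cold_samples) > 0.0))  # near 0.5 if both modes are visited
```

The hot chains cross the barrier between the modes easily, and swaps pass that mobility down to the cold chain, which is why tempering helps with the multi-modal posteriors described in the abstract.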
9

Guha, Subharup, Steven N. MacEachern, and Mario Peruggia. "Benchmark Estimation for Markov chain Monte Carlo Samples." Journal of Computational and Graphical Statistics 13, no. 3 (September 2004): 683–701. http://dx.doi.org/10.1198/106186004x2598.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Siems, Tobias. "Markov Chain Monte Carlo on finite state spaces." Mathematical Gazette 104, no. 560 (June 18, 2020): 281–87. http://dx.doi.org/10.1017/mag.2020.51.

Full text
Abstract:
We elaborate the idea behind Markov chain Monte Carlo (MCMC) methods in a mathematically coherent, yet simple and understandable way. To this end, we prove a pivotal convergence theorem for finite Markov chains and a minimal version of the Perron-Frobenius theorem. Subsequently, we briefly discuss two fundamental MCMC methods, the Gibbs and Metropolis-Hastings sampler. Only very basic knowledge about matrices, convergence of real sequences and probability theory is required.
APA, Harvard, Vancouver, ISO, and other styles
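Since the entry above treats MCMC on finite state spaces, a concrete Metropolis-Hastings sampler for that setting is easy to write down. This is a minimal sketch with a made-up four-state target and a uniform (symmetric) proposal, not code from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalised target weights on the finite state space {0, 1, 2, 3} (an assumption).
weights = np.array([1.0, 4.0, 2.0, 3.0])

state = 0
counts = np.zeros(len(weights))
for _ in range(100000):
    proposal = rng.integers(len(weights))            # uniform, hence symmetric, proposal
    if rng.uniform() < min(1.0, weights[proposal] / weights[state]):
        state = proposal                             # accept; otherwise stay put
    counts[state] += 1

print(counts / counts.sum())       # empirical visit frequencies
print(weights / weights.sum())     # target probabilities they should approach
```

The convergence of the empirical frequencies to the target probabilities is precisely what the finite-chain convergence theorem discussed in the paper guarantees.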
11

Tian, Lu, Jun S. Liu, and L. J. Wei. "Implementation of Estimating Function-Based Inference Procedures With Markov Chain Monte Carlo Samplers." Journal of the American Statistical Association 102, no. 479 (September 2007): 881–88. http://dx.doi.org/10.1198/016214506000000122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Heckman, Jonathan J., Jeffrey G. Bernstein, and Ben Vigoda. "MCMC with strings and branes: The suburban algorithm (Extended Version)." International Journal of Modern Physics A 32, no. 22 (August 10, 2017): 1750133. http://dx.doi.org/10.1142/s0217751x17501330.

Full text
Abstract:
Motivated by the physics of strings and branes, we develop a class of Markov chain Monte Carlo (MCMC) algorithms involving extended objects. Starting from a collection of parallel Metropolis–Hastings (MH) samplers, we place them on an auxiliary grid, and couple them together via nearest neighbor interactions. This leads to a class of “suburban samplers” (i.e. spread out Metropolis). Coupling the samplers in this way modifies the mixing rate and speed of convergence for the Markov chain, and can in many cases allow a sampler to more easily overcome free energy barriers in a target distribution. We test these general theoretical considerations by performing several numerical experiments. For suburban samplers with a fluctuating grid topology, performance is strongly correlated with the average number of neighbors. Increasing the average number of neighbors above zero initially leads to an increase in performance, though there is a critical connectivity with effective dimension [Formula: see text], above which “groupthink” takes over, and the performance of the sampler declines.
APA, Harvard, Vancouver, ISO, and other styles
13

Vihola, Matti, and Jordan Franks. "On the use of approximate Bayesian computation Markov chain Monte Carlo with inflated tolerance and post-correction." Biometrika 107, no. 2 (February 3, 2020): 381–95. http://dx.doi.org/10.1093/biomet/asz078.

Full text
Abstract:
Approximate Bayesian computation enables inference for complicated probabilistic models with intractable likelihoods using model simulations. The Markov chain Monte Carlo implementation of approximate Bayesian computation is often sensitive to the tolerance parameter: low tolerance leads to poor mixing and large tolerance entails excess bias. We propose an approach that involves using a relatively large tolerance for the Markov chain Monte Carlo sampler to ensure sufficient mixing and post-processing the output, leading to estimators for a range of finer tolerances. We introduce an approximate confidence interval for the related post-corrected estimators and propose an adaptive approximate Bayesian computation Markov chain Monte Carlo algorithm, which finds a balanced tolerance level automatically based on acceptance rate optimization. Our experiments show that post-processing-based estimators can perform better than direct Markov chain Monte Carlo targeting a fine tolerance, that our confidence intervals are reliable, and that our adaptive algorithm leads to reliable inference with little user specification.
APA, Harvard, Vancouver, ISO, and other styles
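The idea of running the ABC chain at an inflated tolerance and then post-processing to finer tolerances can be sketched on a toy problem. The setup below (a normal mean with a normal prior, a simple cut-based post-correction rather than the authors' estimators, and all numerical settings) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumptions): data from N(theta, 1); prior theta ~ N(0, 10^2).
obs = rng.normal(2.0, 1.0, size=50)
s_obs = obs.mean()

def distance(theta):
    sim = rng.normal(theta, 1.0, size=50)          # one model simulation
    return abs(sim.mean() - s_obs)

eps_big = 0.5                                       # inflated tolerance inside the sampler
theta, d = 0.0, distance(0.0)
thetas, dists = [], []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.5)
    d_prop = distance(prop)
    log_prior_ratio = (theta**2 - prop**2) / (2.0 * 10.0**2)
    if d_prop < eps_big and np.log(rng.uniform()) < log_prior_ratio:
        theta, d = prop, d_prop
    thetas.append(theta)
    dists.append(d)

thetas, dists = np.array(thetas), np.array(dists)
# Post-processing: reuse the same chain and its stored distances at finer tolerances.
for eps in (0.5, 0.25, 0.1):
    keep = dists < eps
    print(eps, round(thetas[keep].mean(), 3), round(keep.mean(), 3))
```

The single chain mixes well because of the large tolerance, while the stored distances allow estimates at smaller tolerances to be read off afterwards, which is the trade-off the paper formalises.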
14

Chib, Siddhartha, and Edward Greenberg. "Markov Chain Monte Carlo Simulation Methods in Econometrics." Econometric Theory 12, no. 3 (August 1996): 409–31. http://dx.doi.org/10.1017/s0266466600006794.

Full text
Abstract:
We present several Markov chain Monte Carlo simulation methods that have been widely used in recent years in econometrics and statistics. Among these is the Gibbs sampler, which has been of particular interest to econometricians. Although the paper summarizes some of the relevant theoretical literature, its emphasis is on the presentation and explanation of applications to important models that are studied in econometrics. We include a discussion of some implementation issues, the use of the methods in connection with the EM algorithm, and how the methods can be helpful in model specification questions. Many of the applications of these methods are of particular interest to Bayesians, but we also point out ways in which frequentist statisticians may find the techniques useful.
APA, Harvard, Vancouver, ISO, and other styles
15

Sun, Shiliang, Jing Zhao, Minghao Gu, and Shanhu Wang. "Variational Hybrid Monte Carlo for Efficient Multi-Modal Data Sampling." Entropy 25, no. 4 (March 24, 2023): 560. http://dx.doi.org/10.3390/e25040560.

Full text
Abstract:
The Hamiltonian Monte Carlo (HMC) sampling algorithm exploits Hamiltonian dynamics to construct efficient Markov Chain Monte Carlo (MCMC), which has become increasingly popular in machine learning and statistics. Since HMC uses the gradient information of the target distribution, it can explore the state space much more efficiently than random-walk proposals, but may suffer from high autocorrelation. In this paper, we propose Langevin Hamiltonian Monte Carlo (LHMC) to reduce the autocorrelation of the samples. Probabilistic inference involving multi-modal distributions is very difficult for dynamics-based MCMC samplers, which are easily trapped in a mode far away from the other modes. To tackle this issue, we further propose a variational hybrid Monte Carlo (VHMC) method which uses a variational distribution to explore the phase space and find new modes, and which is capable of sampling from multi-modal distributions effectively. A formal proof is provided that shows that the proposed method converges to the target distribution. Both synthetic and real datasets are used to evaluate its properties and performance. The experimental results verify the theory and show superior performance in multi-modal sampling.
APA, Harvard, Vancouver, ISO, and other styles
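Since the entry above builds on Hamiltonian Monte Carlo, a bare-bones HMC step is worth seeing. This is a minimal sketch for a one-dimensional standard-normal target with hand-picked step size and trajectory length; it is not the LHMC or VHMC method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target (an assumption): standard normal, so U(q) = 0.5*q^2 and grad U(q) = q.
def grad_U(q):
    return q

def hmc_step(q, eps=0.2, n_leapfrog=20):
    p = rng.normal()                          # resample the momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)        # leapfrog: initial half momentum step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)        # final half momentum step
    # Metropolis accept/reject on the total energy (potential plus kinetic).
    h_old = 0.5 * q**2 + 0.5 * p**2
    h_new = 0.5 * q_new**2 + 0.5 * p_new**2
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

q, draws = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    draws.append(q)
print(np.mean(draws), np.var(draws))   # should be close to 0 and 1
```

The gradient-driven trajectories give distant, high-acceptance proposals; the multi-modality problem the paper targets arises because such trajectories rarely carry the sampler across low-probability regions between modes.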
16

Koike, Takaaki, and Marius Hofert. "Markov Chain Monte Carlo Methods for Estimating Systemic Risk Allocations." Risks 8, no. 1 (January 15, 2020): 6. http://dx.doi.org/10.3390/risks8010006.

Full text
Abstract:
In this paper, we propose a novel framework for estimating systemic risk measures and risk allocations based on Markov Chain Monte Carlo (MCMC) methods. We consider a class of allocations whose jth component can be written as some risk measure of the jth conditional marginal loss distribution given the so-called crisis event. By considering a crisis event as an intersection of linear constraints, this class of allocations covers, for example, conditional Value-at-Risk (CoVaR), conditional expected shortfall (CoES), VaR contributions, and range VaR (RVaR) contributions as special cases. For this class of allocations, analytical calculations are rarely available, and numerical computations based on Monte Carlo (MC) methods often provide inefficient estimates due to the rare-event character of the crisis events. We propose an MCMC estimator constructed from a sample path of a Markov chain whose stationary distribution is the conditional distribution given the crisis event. Efficient constructions of Markov chains, such as the Hamiltonian Monte Carlo and Gibbs sampler, are suggested and studied depending on the crisis event and the underlying loss distribution. The efficiency of the MCMC estimators is demonstrated in a series of numerical experiments.
APA, Harvard, Vancouver, ISO, and other styles
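The core computational device in the entry above is a Markov chain whose stationary distribution is the loss distribution conditioned on a crisis event defined by linear constraints. A minimal sketch of that idea, assuming a made-up bivariate normal loss vector and the crisis event {X1 + X2 >= 4}, with a random-walk Metropolis chain restricted to the event:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy losses (an assumption): (X1, X2) bivariate normal with correlation 0.5.
cov_inv = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))

def log_density(x):
    return -0.5 * x @ cov_inv @ x

def in_crisis(x):
    return x[0] + x[1] >= 4.0          # crisis event as a linear constraint

x = np.array([2.5, 2.5])               # start inside the crisis event
samples = []
for _ in range(50000):
    prop = x + rng.normal(0.0, 0.5, size=2)
    # Random-walk Metropolis restricted to the event: proposals outside it are rejected.
    if in_crisis(prop) and np.log(rng.uniform()) < log_density(prop) - log_density(x):
        x = prop
    samples.append(x.copy())

samples = np.array(samples)
print(samples[:, 0].mean())   # a CoES-style allocation: E[X1 | X1 + X2 >= 4]
```

Plain Monte Carlo would waste almost all of its draws outside the rare crisis event, whereas the restricted chain spends every iteration inside it, which is the efficiency gain the paper exploits.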
17

Cappé, Olivier, Christian P. Robert, and Tobias Rydén. "Reversible jump, birth-and-death and more general continuous time Markov chain Monte Carlo samplers." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 65, no. 3 (July 8, 2003): 679–700. http://dx.doi.org/10.1111/1467-9868.00409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chaudhary, A. K. "Bayesian Analysis of Two Parameter Complementary Exponential Power Distribution." NCC Journal 3, no. 1 (June 14, 2018): 1–23. http://dx.doi.org/10.3126/nccj.v3i1.20244.

Full text
Abstract:
In this paper, the Markov chain Monte Carlo (MCMC) method is used to estimate the parameters of the CEP distribution based on a complete sample. A procedure is developed to obtain Bayes estimates of the parameters of the CEP distribution using MCMC simulation in OpenBUGS, an established software package for Bayesian analysis using MCMC methods. The MCMC methods have been shown to be easier to implement computationally, the estimates always exist and are statistically consistent, and their probability intervals are convenient to construct. R functions are developed to study the statistical properties, model validation and comparison tools of the distribution, and to analyse the output of the MCMC samples generated from OpenBUGS. A real data set is considered for illustration under uniform and gamma sets of priors.
APA, Harvard, Vancouver, ISO, and other styles
19

Boys, R. J., and D. A. Henderson. "On Determining the Order of Markov Dependence of an Observed Process Governed by a Hidden Markov Model." Scientific Programming 10, no. 3 (2002): 241–51. http://dx.doi.org/10.1155/2002/683164.

Full text
Abstract:
This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution for both the order of Markov dependence in the observed sequence and the other governing model parameters. These samples allow coherent inferences to be made straightforwardly in contrast to those which use information criteria. The methods are illustrated by their application to both simulated and real data sets.
APA, Harvard, Vancouver, ISO, and other styles
20

Raveendran, Nishanthi, and Georgy Sofronov. "A Markov Chain Monte Carlo Algorithm for Spatial Segmentation." Information 12, no. 2 (January 30, 2021): 58. http://dx.doi.org/10.3390/info12020058.

Full text
Abstract:
Spatial data are very often heterogeneous, which indicates that there may not be a unique simple statistical model describing the data. To overcome this issue, the data can be segmented into a number of homogeneous regions (or domains). Identifying these domains is one of the important problems in spatial data analysis. Spatial segmentation is used in many different fields including epidemiology, criminology, ecology, and economics. To solve this clustering problem, we propose to use the change-point methodology. In this paper, we develop a new spatial segmentation algorithm within the framework of the generalized Gibbs sampler. We estimate the average surface profile of binary spatial data observed over a two-dimensional regular lattice. We illustrate the performance of the proposed algorithm with examples using artificially generated and real data sets.
APA, Harvard, Vancouver, ISO, and other styles
21

Shafii, Mahyar, Bryan Tolson, and L. Shawn Matott. "Improving the efficiency of Monte Carlo Bayesian calibration of hydrologic models via model pre-emption." Journal of Hydroinformatics 17, no. 5 (February 23, 2015): 763–70. http://dx.doi.org/10.2166/hydro.2015.043.

Full text
Abstract:
Bayesian inference via Markov Chain Monte Carlo (MCMC) sampling and sequential Monte Carlo (SMC) sampling are popular methods for uncertainty analysis in hydrological modelling. However, application of these methodologies can incur significant computational costs. This study investigated using model pre-emption for improving the computational efficiency of MCMC and SMC samplers in the context of hydrological modelling. The proposed pre-emption strategy facilitates early termination of low-likelihood simulations and results in reduction of unnecessary simulation time steps. The proposed approach is incorporated into two samplers and applied to the calibration of three rainfall–runoff models. Results show that overall pre-emption savings range from 5 to 21%. Furthermore, results indicate that pre-emption savings are greatest during the pre-convergence ‘burn-in’ period (i.e., between 8 and 39%) and decrease as the algorithms converge towards high likelihood regions of parameter space. The observed savings are achieved with absolutely no change in the posterior set of parameters.
APA, Harvard, Vancouver, ISO, and other styles
22

McClintock, Thomas, and Eduardo Rozo. "Reconstructing probability distributions with Gaussian processes." Monthly Notices of the Royal Astronomical Society 489, no. 3 (September 2, 2019): 4155–60. http://dx.doi.org/10.1093/mnras/stz2426.

Full text
Abstract:
Modern cosmological analyses constrain physical parameters using Markov Chain Monte Carlo (MCMC) or similar sampling techniques. Oftentimes, these techniques are computationally expensive to run and require up to thousands of CPU hours to complete. Here we present a method for reconstructing the log-probability distributions of completed experiments from an existing chain (or any set of posterior samples). The reconstruction is performed using Gaussian process regression for interpolating the log-probability. This allows for easy resampling, importance sampling, marginalization, testing different samplers, investigating chain convergence, and other operations. As an example use case, we reconstruct the posterior distribution of the most recent Planck 2018 analysis. We then resample the posterior, and generate a new chain with 40 times as many points in only 30 min. Our likelihood reconstruction tool is made publicly available online.
APA, Harvard, Vancouver, ISO, and other styles
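The reconstruction step described above, interpolating stored log-probability values with a Gaussian process, can be sketched compactly. The one-dimensional "chain", the RBF kernel, its length scale and the noise-free fit below are all assumptions for illustration, not the authors' tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for stored posterior samples with known log-probabilities (assumptions).
theta = np.linspace(-3.0, 3.0, 25)
logp = -0.5 * theta**2                       # pretend this came from an expensive likelihood

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

# Gaussian-process regression: fit on the stored (theta, logp) pairs...
K = rbf(theta, theta) + 1e-8 * np.eye(len(theta))
alpha = np.linalg.solve(K, logp)

# ...then evaluate the interpolated log-probability anywhere, cheaply.
grid = np.linspace(-3.0, 3.0, 200)
logp_hat = rbf(grid, theta) @ alpha

print(np.max(np.abs(logp_hat - (-0.5 * grid**2))))   # worst interpolation error on the grid
```

Once the surrogate is in place, resampling, importance sampling or testing another sampler only touches the cheap interpolant rather than the original expensive likelihood.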
23

Holmes, C. C., and B. K. Mallick. "Bayesian Radial Basis Functions of Variable Dimension." Neural Computation 10, no. 5 (July 1, 1998): 1217–33. http://dx.doi.org/10.1162/089976698300017421.

Full text
Abstract:
A Bayesian framework for the analysis of radial basis functions (RBF) is proposed that accommodates uncertainty in the dimension of the model. A distribution is defined over the space of all RBF models of a given basis function, and posterior densities are computed using reversible jump Markov chain Monte Carlo samplers (Green, 1995). This alleviates the need to select the architecture during the modeling process. The resulting networks are shown to adjust their size to the complexity of the data.
APA, Harvard, Vancouver, ISO, and other styles
24

Efendiev, Yalchin, Bangti Jin, Presho Michael, and Xiaosi Tan. "Multilevel Markov Chain Monte Carlo Method for High-Contrast Single-Phase Flow Problems." Communications in Computational Physics 17, no. 1 (December 19, 2014): 259–86. http://dx.doi.org/10.4208/cicp.021013.260614a.

Full text
Abstract:
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates.
APA, Harvard, Vancouver, ISO, and other styles
25

SETIAWANI, PUTU AMANDA, KOMANG DHARMAWAN, and I. WAYAN SUMARJAYA. "IMPLEMENTASI METODE MARKOV CHAIN MONTE CARLO DALAM PENENTUAN HARGA KONTRAK BERJANGKA KOMODITAS." E-Jurnal Matematika 4, no. 3 (August 30, 2015): 122. http://dx.doi.org/10.24843/mtk.2015.v04.i03.p099.

Full text
Abstract:
The aim of this research is to implement the Markov Chain Monte Carlo (MCMC) simulation method to price futures contracts on cocoa commodities. The results show that MCMC is more flexible than the standard Monte Carlo (SMC) simulation method because MCMC uses a hit-and-run sampler algorithm to generate proposal moves that are subsequently accepted or rejected with a probability that depends on the target distribution we want to achieve. This research shows that the MCMC method is suitable for simulating the model of cocoa commodity price movements. The result of this research is a simulation of futures contract prices for the next three months and the futures contract price that must be paid when the contract expires. Pricing the futures contract using the MCMC method produces a cheaper contract price than standard Monte Carlo simulation.
APA, Harvard, Vancouver, ISO, and other styles
26

Campbell, Edward P., and Bryson C. Bates. "Regionalization of rainfall-runoff model parameters using Markov Chain Monte Carlo samples." Water Resources Research 37, no. 3 (March 2001): 731–39. http://dx.doi.org/10.1029/2000wr900349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Ao, Zhibing Zhao, Chao Liao, Pinyan Lu, and Lirong Xia. "Learning Plackett-Luce Mixtures from Partial Preferences." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4328–35. http://dx.doi.org/10.1609/aaai.v33i01.33014328.

Full text
Abstract:
We propose an EM-based framework for learning the Plackett-Luce model and its mixtures from partial orders. The core of our framework is the efficient sampling of linear extensions of partial orders under the Plackett-Luce model. We propose two Markov Chain Monte Carlo (MCMC) samplers: a Gibbs sampler and the generalized repeated insertion method tuned by MCMC (GRIM-MCMC), and prove the efficiency of GRIM-MCMC for a large class of preferences. Experiments on synthetic data show that the algorithm with the Gibbs sampler outperforms that with GRIM-MCMC. Experiments on real-world data show that the likelihood of the test dataset increases when (i) partial orders provide more information; or (ii) the number of components in the mixture of Plackett-Luce models increases.
APA, Harvard, Vancouver, ISO, and other styles
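The samplers in the entry above target linear extensions under the Plackett-Luce model; the generative model itself is simple enough to sketch. The item weights and the check below are assumptions for illustration, and this shows the model the paper's Gibbs and GRIM-MCMC samplers build on, not those samplers themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ranking(weights):
    """Draw one full ranking from a Plackett-Luce model: repeatedly pick the next
    item with probability proportional to its weight among the remaining items."""
    remaining = list(range(len(weights)))
    ranking = []
    while remaining:
        w = np.array([weights[i] for i in remaining])
        pick = rng.choice(len(remaining), p=w / w.sum())
        ranking.append(remaining.pop(pick))
    return ranking

weights = np.array([4.0, 2.0, 1.0, 1.0])       # item utilities (an assumption)
draws = [sample_ranking(weights) for _ in range(10000)]
top = np.array([r[0] for r in draws])
print([round(float(np.mean(top == i)), 3) for i in range(4)])  # close to weights / weights.sum()
```

A partial order only constrains some of these sequential choices, which is why sampling its linear extensions under the same weights, the task addressed by the paper's MCMC samplers, is the hard part.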
28

Baele, Guy, Mandev S. Gill, Philippe Lemey, and Marc A. Suchard. "Hamiltonian Monte Carlo sampling to estimate past population dynamics using the skygrid coalescent model in a Bayesian phylogenetics framework." Wellcome Open Research 5 (March 30, 2020): 53. http://dx.doi.org/10.12688/wellcomeopenres.15770.1.

Full text
Abstract:
Nonparametric coalescent-based models are often employed to infer past population dynamics over time. Several of these models, such as the skyride and skygrid models, are equipped with a block-updating Markov chain Monte Carlo sampling scheme to efficiently estimate model parameters. The advent of powerful computational hardware along with the use of high-performance libraries for statistical phylogenetics has, however, made the development of alternative estimation methods feasible. We here present the implementation and performance assessment of a Hamiltonian Monte Carlo gradient-based sampler to infer the parameters of the skygrid model. The skygrid is a popular and flexible coalescent-based model for estimating population dynamics over time and is available in BEAST 1.10.5, a widely-used software package for Bayesian phylogenetic and phylodynamic analysis. Taking into account the increased computational cost of gradient evaluation, we report substantial increases in effective sample size per time unit compared to the established block-updating sampler. We expect gradient-based samplers to assume an increasingly important role for different classes of parameters typically estimated in Bayesian phylogenetic and phylodynamic analyses.
APA, Harvard, Vancouver, ISO, and other styles
29

Koblents, Eugenia, Inés P. Mariño, and Joaquín Míguez. "Bayesian Computation Methods for Inference in Stochastic Kinetic Models." Complexity 2019 (January 20, 2019): 1–15. http://dx.doi.org/10.1155/2019/7160934.

Full text
Abstract:
In this paper we investigate Monte Carlo methods for the approximation of the posterior probability distributions in stochastic kinetic models (SKMs). SKMs are multivariate Markov jump processes that model the interactions among species in biological systems according to a set of usually unknown parameters. The tracking of the species populations together with the estimation of the interaction parameters is a Bayesian inference problem for which Markov chain Monte Carlo (MCMC) methods have been a typical computational tool. Specifically, the particle MCMC (pMCMC) method has been shown to be an effective, though computationally demanding, method applicable to this problem. Recently, it has been shown that an alternative approach to Bayesian computation, namely, the class of adaptive importance samplers, may be more efficient than classical MCMC-like schemes, at least for certain applications. For example, the nonlinear population Monte Carlo (NPMC) algorithm has yielded promising results with a low-dimensional SKM (the classical predator-prey model). In this paper we explore the application of both pMCMC and NPMC to analyze complex autoregulatory feedback networks modelled by SKMs. We demonstrate numerically how the populations of the relevant species in the network can be tracked and their interaction rates estimated, even in scenarios with partial observations. NPMC schemes attain an appealing trade-off between accuracy and computational cost that can make them advantageous in many practical applications.
APA, Harvard, Vancouver, ISO, and other styles
30

ZHAO, DI, and SHENGHUA NI. "PARALLEL MULTI-PROPOSAL AND MULTI-CHAIN MCMC FOR CALCULATING P-VALUE OF GENOME-WIDE ASSOCIATION STUDY." Parallel Processing Letters 23, no. 03 (September 2013): 1350008. http://dx.doi.org/10.1142/s0129626413500084.

Full text
Abstract:
In this paper, by the novel idea of integrating the multiple-proposal algorithm and the multiple-chain algorithm through parallel computing, we develop a highly efficient sampler for approximating statistical distributions: parallel Multi-proposal and Multi-chain Markov Chain Monte Carlo (pMPMC3), and we illustrate the high performance of this sampler by calculating the P-value (odds ratio significance) for a Genome Wide Association Study (GWAS). Computational results show that, with the convergence condition set as the standard deviation of the P-value being less than $10^{-3}$, pMPMC3 with 4 proposals and 4 chains obtains a convergent P-value within $10^6$ iterations, while conventional Monte Carlo simulation does not obtain convergent P-values even in $10^7$ iterations. We also test pMPMC3 by changing the number of chains, the number of proposals and the size of the dataset on a cluster with up to 600 processes; the algorithm scales well.
APA, Harvard, Vancouver, ISO, and other styles
31

Vaikundamoorthy, K. "Diagnosis of blood cancer using Markov chain Monte Carlo trace model." International Journal of Biomathematics 10, no. 03 (February 20, 2017): 1750034. http://dx.doi.org/10.1142/s1793524517500346.

Full text
Abstract:
Cancer in the human body is normally diagnosed very late, making it highly difficult for physicians to cure. Reliable prediction of cancer at an early stage is therefore needed, so that treatment and medical recovery are possible. In this paper, an investigation was made to diagnose the presence of blood cancer using a Markov Chain Monte Carlo (MCMC) trace model, which is efficient over a wide range of complex Bayesian statistical models. The analysis was carried out using version 18 of the SPSS AMOS software. In total, 19 components were considered from the blood samples of 750 patients. Various factors such as class, age, lymphatics, block of affarc, block of lymph c, block of lymph s, bypass, extravasate, regeneration of, early uptake in, lym nodes dimin, lym nodes enlar, change in lym, defect in node, changes in node, changes in strue special forms, dislocation, exclusion of node, and number of nodes in blood cancer are analyzed. The maximum likelihood estimators of the parameters were derived and their performance assessed through a Monte Carlo simulation study. The convergence of the prior and posterior distributions takes an irregular position in the diagrams, and thus blood cancer is diagnosed through this model.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, P. J., D. W. Xu, and J. Zhang. "Probability-Based Structural Health Monitoring Through Markov Chain Monte Carlo Sampling." International Journal of Structural Stability and Dynamics 16, no. 07 (August 3, 2016): 1550039. http://dx.doi.org/10.1142/s021945541550039x.

Full text
Abstract:
The classical nonuniqueness problem exists due to uncertainty in the finite element (FE) calibration field. Namely, multiple models with different intrinsic parameters may all fit the observed data well, thus the selected single “best” model probably is not the truly best model to reflect the structural intrinsic property. A probability-based method using a population of FE models, not the single “best” method, is proposed to deal with the nonuniqueness problem. In this method, the Markov Chain Monte Carlo (MCMC) technique is first performed to sample the key structural parameters representing the main sources of uncertainty. Then a FE model population is generated using the samples, and the posterior probability of each model is evaluated by calculating the correlation between the simulation results and measurements through the Bayesian theorem. Finally, all the FE models from the stochastic sampling with their posterior probabilities are used for structural identification (St-Id) and performance evaluation. The advantage of the proposed method is that it not only identifies the magnitudes of structural parameters, but also generates their probability distributions for subsequent probability-based reliability analysis and risk evaluation. The feature provided by the stochastic sampling and statistical techniques makes the proposed method suitable for dealing with uncertainty. The example of the Phase I IASC-ASCE benchmark structure investigated demonstrates the effectiveness of the proposed method for probability-based structural health monitoring.
APA, Harvard, Vancouver, ISO, and other styles
33

Dosso, Stan. "Efficient reversible-jump Markov-chain Monte Carlo sampling in trans-dimensional Bayesian geoacoustic inversion." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A158. http://dx.doi.org/10.1121/10.0015880.

Full text
Abstract:
This paper considers efficient computational approaches to estimate the posterior probability density (PPD) of seabed geoacoustic profiles in the Bayesian inversion of ocean acoustic data, a numerically intensive problem. Trans-dimensional (trans-D) inversion is applied, which samples probabilistically over an unknown number of seabed layers as well as the layer geoacoustic properties and parameters of the error model (variances and autoregressive coefficients). Sampling is based on the reversible-jump Markov-chain Monte Carlo algorithm, the efficiency of which depends strongly on the formulation of the proposal density by which new candidate models are generated for probabilistic acceptance/rejection. A highly efficient proposal density is presented which combines principal-component (PC) reparameterization with parallel tempering. PC reparameterization applies an adaptive linearized approximation to the PPD as the proposal density, which provides effective directions and length scales for model perturbations in high-dimensional parameter spaces. Parallel tempering considers a series of interacting Markov chains with successively relaxed likelihood functions, which greatly improves the sampling of multi-modal parameter spaces and trans-D transitions. These approaches are combined by computing different PC reparameterizations for each Markov chain in the parallel tempering formulation. Inversion results are presented as marginal probability profiles for geoacoustic properties, marginalized over the number of layers.
APA, Harvard, Vancouver, ISO, and other styles
34

de Figueiredo, Leandro Passos, Dario Grana, Mauro Roisenberg, and Bruno B. Rodrigues. "Multimodal Markov chain Monte Carlo method for nonlinear petrophysical seismic inversion." GEOPHYSICS 84, no. 5 (September 1, 2019): M1—M13. http://dx.doi.org/10.1190/geo2018-0839.1.

Full text
Abstract:
One of the main objectives in reservoir characterization is estimating the rock properties based on seismic measurements. We have developed a stochastic sampling method for the joint prediction of facies and petrophysical properties, assuming a nonparametric mixture prior distribution and a nonlinear forward model. The proposed methodology is based on a Markov chain Monte Carlo (MCMC) method specifically designed for multimodal distributions for nonlinear problems. The vector of model parameters includes the facies sequence along the seismic trace as well as the continuous petrophysical properties, such as porosity, mineral fractions, and fluid saturations. At each location, the distribution of petrophysical properties is assumed to be multimodal and nonparametric with as many modes as the number of facies; therefore, along the seismic trace, the distribution is multimodal with the number of modes equal to the number of facies raised to the power of the number of samples. Because of the nonlinear forward model, the large number of modes, and, as a consequence, the large dimension of the model space, the analytical computation of the full posterior distribution is not feasible. We then numerically evaluate the posterior distribution by using an MCMC method in which we iteratively sample the facies, by moving from one mode to another, and the petrophysical properties, by sampling within the same mode. The method is extended to multiple seismic traces by applying a first-order Markov chain that accounts for the lateral continuity of the model properties. We first validate the method using a synthetic 2D reservoir model and then we apply the method to a real data set acquired in a carbonate field.
APA, Harvard, Vancouver, ISO, and other styles
35

Gu, Minghao, Shiliang Sun, and Yan Liu. "Dynamical Sampling with Langevin Normalization Flows." Entropy 21, no. 11 (November 10, 2019): 1096. http://dx.doi.org/10.3390/e21111096.

Full text
Abstract:
In Bayesian machine learning, sampling methods provide asymptotically unbiased estimation for the inference of complex probability distributions, where Markov chain Monte Carlo (MCMC) is one of the most popular sampling methods. However, MCMC can lead to high autocorrelation of samples or poor performance on some complex distributions. In this paper, we introduce Langevin diffusions to normalization flows to construct a brand-new dynamical sampling method. We propose the modified Kullback-Leibler divergence as the loss function to train the sampler, which ensures that the samples generated from the proposed method converge to the target distribution. Since the gradient of the target distribution is used when calculating the modified Kullback-Leibler divergence, its integral is intractable, and we utilize a Monte Carlo estimator to approximate it. We also discuss the situation when the target distribution is unnormalized. We illustrate the properties and performance of the proposed method on a variety of complex distributions and real datasets. The experiments indicate that the proposed method not only takes advantage of the flexibility of neural networks but also utilizes the rapid convergence to the target distribution of the dynamical system, and it demonstrates performance superior to competing dynamics-based MCMC samplers.
APA, Harvard, Vancouver, ISO, and other styles
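The entry above couples Langevin diffusions with normalizing flows. The Langevin ingredient on its own is short enough to sketch: a Metropolis-adjusted Langevin (MALA) step, assuming a one-dimensional standard-normal target and a hand-picked step size; the flow-based training of the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target (an assumption): standard normal, so grad log p(x) = -x.
def log_p(x):
    return -0.5 * x**2

def grad_log_p(x):
    return -x

def mala_step(x, step=0.5):
    # Langevin proposal: a gradient drift plus Gaussian noise.
    mean_fwd = x + 0.5 * step * grad_log_p(x)
    prop = mean_fwd + np.sqrt(step) * rng.normal()
    # Metropolis correction accounting for the asymmetric proposal densities.
    mean_bwd = prop + 0.5 * step * grad_log_p(prop)
    log_q_fwd = -0.5 * (prop - mean_fwd)**2 / step
    log_q_bwd = -0.5 * (x - mean_bwd)**2 / step
    log_acc = log_p(prop) - log_p(x) + log_q_bwd - log_q_fwd
    return prop if np.log(rng.uniform()) < log_acc else x

x, draws = 0.0, []
for _ in range(5000):
    x = mala_step(x)
    draws.append(x)
print(np.mean(draws), np.var(draws))   # should be close to 0 and 1
```

The gradient drift pushes proposals toward high-probability regions, which is the property the paper's flow-augmented sampler inherits while adding flexibility from the learned transformation.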
36

LeSage, James P., Yao-Yu Chih, and Colin Vance. "Markov Chain Monte Carlo estimation of spatial dynamic panel models for large samples." Computational Statistics & Data Analysis 138 (October 2019): 107–25. http://dx.doi.org/10.1016/j.csda.2019.04.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Bouchard-Côté, Alexandre, Sebastian J. Vollmer, and Arnaud Doucet. "The Bouncy Particle Sampler: A Nonreversible Rejection-Free Markov Chain Monte Carlo Method." Journal of the American Statistical Association 113, no. 522 (April 3, 2018): 855–67. http://dx.doi.org/10.1080/01621459.2017.1294075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, A. F. M., and G. O. Roberts. "Bayesian Computation Via the Gibbs Sampler and Related Markov Chain Monte Carlo Methods." Journal of the Royal Statistical Society: Series B (Methodological) 55, no. 1 (September 1993): 3–23. http://dx.doi.org/10.1111/j.2517-6161.1993.tb01466.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Atchadé, Yves, and Yizao Wang. "On the convergence rates of some adaptive Markov chain Monte Carlo algorithms." Journal of Applied Probability 52, no. 3 (September 2015): 811–25. http://dx.doi.org/10.1239/jap/1445543848.

Full text
Abstract:
In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^{-1}). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^{-1/2}).
APA, Harvard, Vancouver, ISO, and other styles
40

Atchadé, Yves, and Yizao Wang. "On the convergence rates of some adaptive Markov chain Monte Carlo algorithms." Journal of Applied Probability 52, no. 03 (September 2015): 811–25. http://dx.doi.org/10.1017/s0021900200113452.

Full text
Abstract:
In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^{-1}). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^{-1/2}).
APA, Harvard, Vancouver, ISO, and other styles
41

Keery, John, Andrew Binley, Ahmed Elshenawy, and Jeremy Clifford. "Markov-chain Monte Carlo estimation of distributed Debye relaxations in spectral induced polarization." GEOPHYSICS 77, no. 2 (March 2012): E159—E170. http://dx.doi.org/10.1190/geo2011-0244.1.

Full text
Abstract:
There is growing interest in the link between electrical polarization and physical properties of geologic porous media. In particular, spectral characteristics may be controlled by the same pore geometric properties that influence fluid permeability of such media. Various models have been proposed to describe the spectral-induced-polarization (SIP) response of permeable rocks, and the links between these models and hydraulic properties have been explored, albeit empirically. Computation of the uncertainties in the parameters of such electrical models is essential for effective use of these relationships. The formulation of an electrical dispersion model in terms of a distribution of relaxation times and associated chargeabilities has been demonstrated to be an effective generalized approach; however, thus far, such an approach has only been considered in a deterministic framework. Here, we formulate a spectral model based on a distribution of polarizations. By using a simple polynomial descriptor of such a distribution, we are able to cast the model in a stochastic manner and solve it using a Markov-chain Monte Carlo (McMC) sampler, thus allowing the computation of model-parameter uncertainties. We apply the model to synthetic data and demonstrate that the stochastic method can provide posterior distributions of model parameters with narrow bounds around the true values when little or no noise is added to the synthetic data, with posterior distributions that broaden with increasing noise. We also apply our model to experimental measurements of six sandstone samples and compare physical properties of a number of samples of porous media with stochastic estimates of characteristic relaxation times. We demonstrate the utility of our method on electrical spectra with different response characteristics and show that a single metric of relaxation time for the SIP response is not sufficient to provide clear insight into the physical characteristics of a sample.
APA, Harvard, Vancouver, ISO, and other styles
42

Ayekple, Yao Elikem, Charles Kofi Tetteh, and Prince Kwaku Fefemwole. "Markov Chain Monte Carlo Method for Estimating Implied Volatility in Option Pricing." Journal of Mathematics Research 10, no. 6 (November 29, 2018): 108. http://dx.doi.org/10.5539/jmr.v10n6p108.

Full text
Abstract:
Using market prices of covered European call options, an Independence Metropolis-Hastings sampler algorithm for estimating implied volatility in option pricing is proposed. The algorithm has an acceptance criterion that facilitates accurate approximation of this volatility from an independent path in the Black-Scholes model, using a finite set of observations from the stock market. Assuming the underlying asset indeed follows geometric Brownian motion, an inverted version of the Black-Scholes model is used to approximate the implied volatility, which is not directly observed in the real market and which the Black-Scholes model assumes to be constant. Moreover, it is demonstrated that the implied volatility from the options market tends to overstate or understate the actual expectation of the market. In addition, three months of market data on covered European call options from 30 different stock companies were acquired from Optionistic.Com and used to estimate the implied volatility. This accurately approximates the actual expectation of the market, with low standard errors ranging between 0.0035 and 0.0275.
APA, Harvard, Vancouver, ISO, and other styles
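The Independence Metropolis-Hastings idea above, proposing implied-volatility values from a fixed distribution and accepting them based on how well the Black-Scholes price matches the observed option price, can be sketched directly. The market inputs, the flat prior on (0.01, 2), and the Gaussian pricing-error model below are assumptions for illustration, not the paper's data or exact setup.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    # Black-Scholes price of a European call option.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Assumed market setup: spot 100, strike 100, rate 2%, 3 months, observed price 4.0.
S, K, r, T, c_obs, obs_sd = 100.0, 100.0, 0.02, 0.25, 4.0, 0.1

def log_post(sigma):
    if not (0.01 < sigma < 2.0):                 # flat prior on (0.01, 2)
        return -np.inf
    return -0.5 * ((bs_call(S, K, r, T, sigma) - c_obs) / obs_sd) ** 2

# Independence Metropolis-Hastings: proposals come from a fixed distribution
# (here uniform over the prior support), independent of the current state.
sigma, draws = 0.2, []
for _ in range(20000):
    prop = rng.uniform(0.01, 2.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(sigma):
        sigma = prop
    draws.append(sigma)

print(np.mean(draws))   # posterior mean of the implied volatility
```

Because the proposal is uniform, the acceptance ratio reduces to the ratio of posterior densities, so proposals whose Black-Scholes price better matches the observed market price are accepted more often.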
43

Li, Jun, Philippe Vignal, Shuyu Sun, and Victor M. Calo. "On Stochastic Error and Computational Efficiency of the Markov Chain Monte Carlo Method." Communications in Computational Physics 16, no. 2 (August 2014): 467–90. http://dx.doi.org/10.4208/cicp.110613.280214a.

Full text
Abstract:
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them.
APA, Harvard, Vancouver, ISO, and other styles
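The dependence of the ensemble-average variance on the sampling interval described above is easy to observe numerically. The sketch below uses an AR(1) process as a stand-in for correlated MCMC output (an assumption), fixes the number of retained samples, and estimates the variance of the ensemble average for several sampling intervals by replicating independent chains.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_chain(n, rho=0.95):
    # AR(1) process standing in for correlated MCMC output; its stationary law is N(0, 1).
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return x

sample_size = 200                  # number of samples kept in every ensemble average
for interval in (1, 5, 20, 50):
    means = []
    for _ in range(300):           # replicate chains to measure the estimator's variance
        chain = correlated_chain(sample_size * interval)
        means.append(chain[::interval].mean())
    print(interval, round(float(np.var(means)), 4))
```

At a fixed sample size the variance falls as the interval grows, because the retained samples decorrelate; for a fixed CPU budget this is the trade-off between sample size and sampling interval that the paper's variance rules quantify.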
44

Lam, Heung F., Jia H. Yang, Qin Hu, and Ching T. Ng. "Railway ballast damage detection by Markov chain Monte Carlo-based Bayesian method." Structural Health Monitoring 17, no. 3 (July 10, 2017): 706–24. http://dx.doi.org/10.1177/1475921717717106.

Full text
Abstract:
This article reports the development of a Bayesian method for assessing the damage status of railway ballast under a concrete sleeper based on vibration data of the in situ sleeper. One of the important contributions of the proposed method is to describe the variation of the stiffness distribution of ballast using a Lagrange polynomial, the order of which is decided by the Bayesian approach. The probability of various orders of polynomial conditional on a given set of measured vibration data is calculated. The order of polynomial with the highest probability is selected as the most plausible order and used for updating the ballast stiffness distribution. Due to the uncertain nature of railway ballast, the corresponding model updating problem is usually unidentifiable. To ensure the applicability of the proposed method even in unidentifiable cases, a computationally efficient Markov chain Monte Carlo–based Bayesian method was employed for generating a set of samples in the important region of parameter space to approximate the posterior (updated) probability density function of ballast stiffness. The proposed ballast damage detection method was verified with roving hammer test data from a segment of full-scale ballasted track. The experimental verification results positively show the potential of the proposed method in ballast damage detection.
APA, Harvard, Vancouver, ISO, and other styles
45

Tolba, Ahlam, Ehab Almetwally, Neveen Sayed-Ahmed, Taghreed Jawa, Nagla Yehia, and Dina Ramadan. "Bayesian and non-Bayesian estimation methods to independent competing risks models with type II half logistic weibull sub-distributions with application to an automatic life test." Thermal Science 26, Spec. issue 1 (2022): 285–302. http://dx.doi.org/10.2298/tsci22s1285t.

Full text
Abstract:
In survival data analysis, competing risks are commonly overlooked, and conventional statistical methods are used to analyze the event of interest. There may be more than one cause of death or failure in many experimental investigations of survival analysis. A competing risks model is derived statistically by applying Type-II half-logistic Weibull sub-distributions, giving a Type-II half-logistic Weibull lifetime failure model with independent causes. It is possible to estimate parameters and parametric functions using Bayesian and classical methods. A Bayes estimate is obtained by the Markov chain Monte Carlo method. The posterior density function and the Metropolis-Hastings algorithm are used to calculate the Markov chain Monte Carlo samples. Simulated data are used to evaluate the performance of the two methods under the Type-II censoring scheme. As a test of the discussed model, a real data set is provided.
APA, Harvard, Vancouver, ISO, and other styles
46

Lye, Adolphus, Alice Cicirello, and Edoardo Patelli. "An efficient and robust sampler for Bayesian inference: Transitional Ensemble Markov Chain Monte Carlo." Mechanical Systems and Signal Processing 167 (March 2022): 108471. http://dx.doi.org/10.1016/j.ymssp.2021.108471.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tsang, K. P., B. C. M. Wang, and L. Garrison. "PRM34 Estimating Markov Chain Transition Matrices in Limited Data Samples: A Monte Carlo Experiment." Value in Health 15, no. 4 (June 2012): A164—A165. http://dx.doi.org/10.1016/j.jval.2012.03.890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mijatović, Aleksandar, and Jure Vogrinc. "Asymptotic variance for random walk Metropolis chains in high dimensions: logarithmic growth via the Poisson equation." Advances in Applied Probability 51, no. 4 (November 15, 2019): 994–1026. http://dx.doi.org/10.1017/apr.2019.40.

Full text
Abstract:
There are two ways of speeding up Markov chain Monte Carlo algorithms: (a) construct more complex samplers that use gradient and higher-order information about the target and (b) design a control variate to reduce the asymptotic variance. While the efficiency of (a) as a function of dimension has been studied extensively, this paper provides the first results linking the efficiency of (b) with dimension. Specifically, we construct a control variate for a d-dimensional random walk Metropolis chain with an independent, identically distributed target using the solution of the Poisson equation for the scaling limit in [30]. We prove that the asymptotic variance of the corresponding estimator is bounded above by a multiple of $\log(d)/d$ over the spectral gap of the chain. The proof hinges on large deviations theory, optimal Young’s inequality and Berry–Esseen-type bounds. Extensions of the result to non-product targets are discussed.
APA, Harvard, Vancouver, ISO, and other styles
49

Felbermair, Samuel, Florian Lammer, Eva Trausinger-Binder, and Cornelia Hebenstreit. "Generation of a synthetic population for agent-based transport modelling with small sample travel survey data using statistical raster census data." International Journal of Traffic and Transportation Management 02, no. 02 (October 10, 2020): 09–17. http://dx.doi.org/10.5383/jttm.02.02.002.

Full text
Abstract:
This paper presents a step-by-step method to generate a synthetic population for agent-based transport modelling as input to the MATSim software, which requires an activity chain for each agent. We make use of high spatial resolution statistical raster (250 m) census data, applying all calculations at this scale. Due to the small sample size of the travel survey data, an Iterative Proportional Fitting method is not suitable. Therefore, we devise a method utilizing Bayesian networks, maximum likelihood and Markov Chain Monte Carlo simulation to reproduce attribute distributions and fit to raster margins. Stratified sampling along households is employed to generate activity chains for the synthetic population.
APA, Harvard, Vancouver, ISO, and other styles
50

Sen, Deborshee, Matthias Sachs, Jianfeng Lu, and David B. Dunson. "Efficient posterior sampling for high-dimensional imbalanced logistic regression." Biometrika 107, no. 4 (June 17, 2020): 1005–12. http://dx.doi.org/10.1093/biomet/asaa035.

Full text
Abstract:
Classification with high-dimensional data is of widespread interest and often involves dealing with imbalanced data. Bayesian classification approaches are hampered by the fact that current Markov chain Monte Carlo algorithms for posterior computation become inefficient as the number $p$ of predictors or the number $n$ of subjects to classify gets large, because of the increasing computational time per step and worsening mixing rates. One strategy is to employ a gradient-based sampler to improve mixing while using data subsamples to reduce the per-step computational complexity. However, the usual subsampling breaks down when applied to imbalanced data. Instead, we generalize piecewise-deterministic Markov chain Monte Carlo algorithms to include importance-weighted and mini-batch subsampling. These maintain the correct stationary distribution with arbitrarily small subsamples and substantially outperform current competitors. We provide theoretical support for the proposed approach and demonstrate its performance gains in simulated data examples and an application to cancer data.
APA, Harvard, Vancouver, ISO, and other styles