Follow this link to see other types of publications on the topic: MCMC optimization.

Journal articles on the topic "MCMC optimization"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose source type:

Consult the top 50 journal articles for your research on the topic "MCMC optimization".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Rong, Teng Zhong, and Zhi Xiao. "MCMC Sampling Statistical Method to Solve the Optimization". Applied Mechanics and Materials 121-126 (October 2011): 937–41. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.937.

Full text
Abstract
This paper designs a class of generalized density functions and, from it, proposes a solution method for multivariable nonlinear optimization problems based on MCMC statistical sampling. Theoretical analysis proves that the maximum statistic converges to the maximum point of the probability density, which establishes the link between optimization and MCMC sampling. The statistical computation algorithm exhibits the convergence of the maximum statistic in large samples, and its global search design avoids being trapped in local optima. The MCMC optimization algorithm keeps few iterate variables, so its computing speed is relatively high. Finally, the MCMC sampling optimization algorithm is applied to the TSP and compared with genetic algorithms.
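
As a rough illustration of the general idea summarised above (a sketch, not the authors' exact construction): to maximise a function f, one can run a Metropolis random walk whose target density is proportional to exp(beta*f(x)) and keep the best state visited. The inverse temperature beta, the step size, and the two-bump test function below are illustrative assumptions.

    import numpy as np

    def mcmc_maximize(f, x0, beta=5.0, step=0.5, n_iter=20000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        best_x, best_f = x.copy(), fx
        for _ in range(n_iter):
            y = x + step * rng.normal(size=x.shape)       # random-walk proposal
            fy = f(y)
            if np.log(rng.uniform()) < beta * (fy - fx):  # Metropolis acceptance for target exp(beta*f)
                x, fx = y, fy
                if fx > best_f:
                    best_x, best_f = x.copy(), fx
        return best_x, best_f

    # Two-bump test function; the global maximum is at the origin.
    f = lambda x: np.exp(-np.sum(x**2)) + 0.5 * np.exp(-np.sum((x - 2.0)**2))
    print(mcmc_maximize(f, x0=np.array([1.0, -1.0])))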
2

Zhang, Lihao, Zeyang Ye, and Yuefan Deng. "Parallel MCMC methods for global optimization". Monte Carlo Methods and Applications 25, no. 3 (September 1, 2019): 227–37. http://dx.doi.org/10.1515/mcma-2019-2043.

Full text
Abstract
We introduce a parallel scheme for simulated annealing, a widely used Markov chain Monte Carlo (MCMC) method for optimization. Our method is constructed and analyzed under the classical framework of MCMC. The benchmark function for optimization is used for validation and verification of the parallel scheme. The experimental results, along with the proof based on statistical theory, provide us with insights into the mechanics of the parallelization of simulated annealing for high parallel efficiency or scalability for large parallel computers.
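
A minimal sketch of one common way to parallelise simulated annealing, assuming independent annealing chains on separate worker processes and a Rastrigin benchmark function; the paper's scheme and its statistical analysis are more involved than this.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def anneal(seed, dim=5, n_iter=50000, t0=5.0, t_end=1e-3, step=0.5):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.12, 5.12, size=dim)
        fx = rastrigin(x)
        best_x, best_f = x.copy(), fx
        for k in range(n_iter):
            t = t0 * (t_end / t0) ** (k / n_iter)         # geometric cooling schedule
            y = x + step * rng.normal(size=dim)
            fy = rastrigin(y)
            if fy < fx or rng.uniform() < np.exp((fx - fy) / t):   # Metropolis/annealing acceptance
                x, fx = y, fy
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
        return best_f, best_x

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(anneal, range(4)))    # one independent chain per worker
        print(min(results, key=lambda r: r[0]))           # best minimum found by any chain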
3

Martino, L., V. Elvira, D. Luengo, J. Corander, and F. Louzada. "Orthogonal parallel MCMC methods for sampling and optimization". Digital Signal Processing 58 (November 2016): 64–84. http://dx.doi.org/10.1016/j.dsp.2016.07.013.

Full text
4

Yin, Long, Sheng Zhang, Kun Xiang, Yongqiang Ma, Yongzhen Ji, Ke Chen, and Dongyu Zheng. "A New Stochastic Process of Prestack Inversion for Rock Property Estimation". Applied Sciences 12, no. 5 (February 25, 2022): 2392. http://dx.doi.org/10.3390/app12052392.

Full text
Abstract
In order to enrich current prestack stochastic inversion theory, we propose a prestack stochastic inversion method based on adaptive particle swarm optimization (APSO) combined with Markov chain Monte Carlo (MCMC). MCMC provides a stochastic optimization approach, and combined with APSO it performs better than other global optimization methods. The method uses logging data to define a preprocessed model space. It also uses Bayesian statistics and Markov chains with a state transition matrix to update and evolve each generation of the population in the data domain; adaptive particle swarm optimization is then used to find the global optimum in the finite model space. The method overcomes the over-fitting problem of deterministic inversion and improves the efficiency of stochastic inversion. Meanwhile, the fusion of multiple sources of information can reduce the non-uniqueness of solutions and improve the inversion accuracy. We derive the APSO algorithm in detail, give the specific workflow of prestack stochastic inversion, and verify the validity of the inversion theory through an inversion test on two-dimensional prestack data from a real area.
5

Yang, Fan, and Jianwei Ren. "Reliability Analysis Based on Optimization Random Forest Model and MCMC". Computer Modeling in Engineering & Sciences 125, no. 2 (2020): 801–14. http://dx.doi.org/10.32604/cmes.2020.08889.

Full text
6

Glynn, Peter W., Andrey Dolgin, Reuven Y. Rubinstein, and Radislav Vaisman. "How to Generate Uniform Samples on Discrete Sets Using the Splitting Method". Probability in the Engineering and Informational Sciences 24, no. 3 (April 23, 2010): 405–22. http://dx.doi.org/10.1017/s0269964810000057.

Full text
Abstract
The goal of this work is twofold. We show the following: 1. In spite of the common consensus on the classic Markov chain Monte Carlo (MCMC) as a universal tool for generating samples on complex sets, it fails to generate points uniformly distributed on discrete ones, such as those defined by the constraints of integer programming. In fact, we demonstrate empirically that not only does it fail to generate uniform points on the desired set, but typically it misses some of the points of the set. 2. The splitting, also called the cloning method – originally designed for combinatorial optimization and for counting on discrete sets, and presenting a combination of MCMC, like the Gibbs sampler, with a specially designed splitting mechanism – can also be efficiently used for generating uniform samples on these sets. Without introducing the appropriate splitting mechanism, MCMC fails. Although we do not have a formal proof, we conjecture that the main reason the classic MCMC does not work is that its resulting chain is not irreducible. We provide valid statistical tests supporting the uniformity of samples generated by the splitting method and present supportive numerical results.
7

Li, Chunyuan, Changyou Chen, Yunchen Pu, Ricardo Henao, and Lawrence Carin. "Communication-Efficient Stochastic Gradient MCMC for Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4173–80. http://dx.doi.org/10.1609/aaai.v33i01.33014173.

Full text
Abstract
Learning probability distributions on the weights of neural networks has recently proven beneficial in many applications. Bayesian methods such as Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) offer an elegant framework to reason about model uncertainty in neural networks. However, these advantages usually come with a high computational cost. We propose accelerating SG-MCMC under the master-worker framework: workers asynchronously and in parallel share responsibility for gradient computations, while the master collects the final samples. To reduce communication overhead, two protocols (downpour and elastic) are developed to allow periodic interaction between the master and workers. We provide a theoretical analysis on the finite-time estimation consistency of posterior expectations, and establish connections to sample thinning. Our experiments on various neural networks demonstrate that the proposed algorithms can greatly reduce training time while achieving comparable (or better) test accuracy/log-likelihood levels, relative to traditional SG-MCMC. When applied to reinforcement learning, it naturally provides exploration for asynchronous policy optimization, with encouraging performance improvement.
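
For context, the sketch below shows the basic stochastic gradient Langevin dynamics (SGLD) update that SG-MCMC methods build on, applied to a toy Bayesian linear regression with a standard-normal prior on the weights; the distributed master/worker protocols proposed in the paper are not reproduced here, and all settings are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 1000, 3
    X = rng.normal(size=(N, d))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=N)             # synthetic data, noise std 0.1

    def grad_log_post(w, xb, yb, scale):
        # gradient of the log prior N(0, I) plus the minibatch-rescaled log likelihood
        return -w + scale * (xb.T @ (yb - xb @ w)) / 0.1**2

    w, eps, batch, samples = np.zeros(d), 1e-5, 50, []
    for _ in range(5000):
        idx = rng.choice(N, size=batch, replace=False)
        g = grad_log_post(w, X[idx], y[idx], scale=N / batch)
        w = w + 0.5 * eps * g + np.sqrt(eps) * rng.normal(size=d)   # SGLD step: drift + injected noise
        samples.append(w.copy())
    print(np.mean(samples[2500:], axis=0))                # posterior-mean estimate of the weights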
8

Yamaguchi, Kazuhiro, and Kensuke Okada. "Variational Bayes Inference for the DINA Model". Journal of Educational and Behavioral Statistics 45, no. 5 (March 31, 2020): 569–97. http://dx.doi.org/10.3102/1076998620911934.

Full text
Abstract
In this article, we propose a variational Bayes (VB) inference method for the deterministic input noisy AND gate model of cognitive diagnostic assessment. The proposed method, which applies the iterative algorithm for optimization, is derived based on the optimal variational posteriors of the model parameters. The proposed VB inference enables much faster computation than the existing Markov chain Monte Carlo (MCMC) method, while still offering the benefits of a full Bayesian framework. A simulation study revealed that the proposed VB estimation adequately recovered the parameter values. Moreover, an example using real data revealed that the proposed VB inference method provided similar estimates to MCMC estimation with much faster computation.
9

Xu, Haoyu, Tao Zhang, Yiqi Luo, Xin Huang, and Wei Xue. "Parameter calibration in global soil carbon models using surrogate-based optimization". Geoscientific Model Development 11, no. 7 (July 27, 2018): 3027–44. http://dx.doi.org/10.5194/gmd-11-3027-2018.

Full text
Abstract
Soil organic carbon (SOC) has a significant effect on carbon emissions and climate change. However, the current SOC prediction accuracy of most models is very low. Most evaluation studies indicate that the prediction error mainly comes from parameter uncertainties, which can be improved by parameter calibration. Data assimilation techniques have been successfully employed for the parameter calibration of SOC models. However, data assimilation algorithms, such as the sampling-based Bayesian Markov chain Monte Carlo (MCMC), generally have high computation costs and are not appropriate for complex global land models. This study proposes a new parameter calibration method based on surrogate optimization techniques to improve the prediction accuracy of SOC. Experiments on three types of soil carbon cycle models, including the Community Land Model with the Carnegie–Ames–Stanford Approach biogeochemistry submodel (CLM-CASA') and two microbial models, show that the surrogate-based optimization method is effective and efficient in terms of both accuracy and cost. Compared to predictions using the parameter values tuned through Bayesian MCMC, the root mean squared errors (RMSEs) between the predictions using the parameter values calibrated with surrogate-based optimization and the observations could be reduced by up to 12% for different SOC models. Meanwhile, the corresponding computational cost is lower than that of other global optimization algorithms.
10

Kitchen, James L., Jonathan D. Moore, Sarah A. Palmer, and Robin G. Allaby. "MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling". BMC Bioinformatics 13, no. 1 (2012): 287. http://dx.doi.org/10.1186/1471-2105-13-287.

Full text
11

Li, Xin, and Albert C. Reynolds. "A Gaussian Mixture Model as a Proposal Distribution for Efficient Markov-Chain Monte Carlo Characterization of Uncertainty in Reservoir Description and Forecasting". SPE Journal 25, no. 01 (September 23, 2019): 001–36. http://dx.doi.org/10.2118/182684-pa.

Full text
Abstract
Summary Generating an estimate of uncertainty in production forecasts has become nearly standard in the oil industry, but is often performed with procedures that yield at best a highly approximate uncertainty quantification. Formally, the uncertainty quantification of a production forecast can be achieved by generating a correct characterization of the posterior probability-density function (PDF) of reservoir-model parameters conditional to dynamic data and then sampling this PDF correctly. Although Markov-chain Monte Carlo (MCMC) provides a theoretically rigorous method for sampling any target PDF that is known up to a normalizing constant, in reservoir-engineering applications, researchers have found that it might require extraordinarily long chains containing millions to hundreds of millions of states to obtain a correct characterization of the target PDF. When the target PDF has a single mode or has multiple modes concentrated in a small region, it might be possible to implement a proposal distribution dependent on a random walk so that the resulting MCMC algorithm derived from the Metropolis-Hastings acceptance probability can yield a good characterization of the posterior PDF with a computationally feasible chain length. However, for a high-dimensional multimodal PDF with modes separated by large regions of low or zero probability, characterizing the PDF with MCMC using a random walk is not computationally feasible. Although methods such as population MCMC exist for characterizing a multimodal PDF, their computational cost generally makes the application of these algorithms far too costly for field application. In this paper, we design a new proposal distribution using a Gaussian mixture PDF for use in MCMC where the posterior PDF can be multimodal with the modes spread far apart. Simply put, the method generates modes using a gradient-based optimization method and constructs a Gaussian mixture model (GMM) to use as the basic proposal distribution. Tests on three simple problems are presented to establish the validity of the method. The performance of the new MCMC algorithm is compared with that of random-walk MCMC and is also compared with that of population MCMC for a target PDF that is multimodal.
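
A minimal sketch of the core idea described in the abstract, under simplifying assumptions: the modes are taken as known (in the paper they are located with gradient-based optimization), a Gaussian-mixture proposal is built around them, and it is used as an independence proposal in Metropolis-Hastings on a toy bimodal target rather than a reservoir model.

    import numpy as np
    from scipy.stats import multivariate_normal as mvn

    def log_target(x):                                    # bimodal 2-D target, known up to a constant
        return np.logaddexp(mvn.logpdf(x, mean=[-4, -4]), mvn.logpdf(x, mean=[4, 4]))

    modes = [np.array([-4.0, -4.0]), np.array([4.0, 4.0])]
    weights, cov = np.array([0.5, 0.5]), 1.5 * np.eye(2)  # one inflated Gaussian component per mode

    def sample_proposal(rng):
        k = rng.choice(len(modes), p=weights)
        return rng.multivariate_normal(modes[k], cov)

    def log_proposal(x):
        comps = [np.log(w) + mvn.logpdf(x, mean=m, cov=cov) for w, m in zip(weights, modes)]
        return np.logaddexp.reduce(comps)

    rng = np.random.default_rng(1)
    x, samples, accepted = modes[0].copy(), [], 0
    for _ in range(20000):
        y = sample_proposal(rng)
        log_alpha = (log_target(y) - log_target(x)) + (log_proposal(x) - log_proposal(y))
        if np.log(rng.uniform()) < log_alpha:             # independence Metropolis-Hastings acceptance
            x, accepted = y, accepted + 1
        samples.append(x)
    samples = np.array(samples)
    print("acceptance rate:", accepted / len(samples))
    print("fraction of samples in the mode at (4, 4):", np.mean(samples[:, 0] > 0))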
12

Pasani, Satwik, and Shruthi Viswanath. "A Framework for Stochastic Optimization of Parameters for Integrative Modeling of Macromolecular Assemblies". Life 11, no. 11 (November 5, 2021): 1183. http://dx.doi.org/10.3390/life11111183.

Full text
Abstract
Integrative modeling of macromolecular assemblies requires stochastic sampling, for example, via MCMC (Markov Chain Monte Carlo), since exhaustively enumerating all structural degrees of freedom is infeasible. MCMC-based methods usually require tuning several parameters, such as the move sizes for coarse-grained beads and rigid bodies, for sampling to be efficient and accurate. Currently, these parameters are tuned manually. To automate this process, we developed a general heuristic for derivative-free, global, stochastic, parallel, multiobjective optimization, termed StOP (Stochastic Optimization of Parameters) and applied it to optimize sampling-related parameters for the Integrative Modeling Platform (IMP). Given an integrative modeling setup, list of parameters to optimize, their domains, metrics that they influence, and the target ranges of these metrics, StOP produces the optimal values of these parameters. StOP is adaptable to the available computing capacity and converges quickly, allowing for the simultaneous optimization of a large number of parameters. However, it is not efficient at high dimensions and not guaranteed to find optima in complex landscapes. We demonstrate its performance on several examples of random functions, as well as on two integrative modeling examples, showing that StOP enhances the efficiency of sampling the posterior distribution, resulting in more good-scoring models and better sampling precision.
13

Solonen, Antti, Heikki Haario, Jean Michel Tchuenche, and Herieth Rwezaura. "Studying the Identifiability of Epidemiological Models Using MCMC". International Journal of Biomathematics 06, no. 02 (March 2013): 1350008. http://dx.doi.org/10.1142/s1793524513500083.

Full text
Abstract
Studying different theoretical properties of epidemiological models has been widely addressed, while numerical studies and especially the calibration of models, which are often complicated and loaded with a high number of unknown parameters, against measured data have received less attention. In this paper, we describe how a combination of simulated data and Markov Chain Monte Carlo (MCMC) methods can be used to study the identifiability of model parameters with different type of measurements. Three known models are used as case studies to illustrate the importance of parameter identifiability: a basic SIR model, an influenza model with vaccination and treatment and a HIV–Malaria co-infection model. The analysis reveals that calibration of complex models commonly studied in mathematical epidemiology, such as the HIV–Malaria co-dynamics model, can be difficult or impossible, even if the system would be fully observed. The presented approach provides a tool for design and optimization of real-life field campaigns of collecting data, as well as for model selection.
14

Nugroho, Widyo, Christiono Utomo, and Nur Iriawan. "A Bayesian Pipe Failure Prediction for Optimizing Pipe Renewal Time in Water Distribution Networks". Infrastructures 7, no. 10 (October 13, 2022): 136. http://dx.doi.org/10.3390/infrastructures7100136.

Full text
Abstract
The sustainable management of the water supply system requires methodologies to monitor, repair, or replace the aging infrastructure, but more importantly, it must be able to assess the condition of the networks and predict their behavior over time. Among other infrastructure systems, the water distribution network is one of the essential civil infrastructure systems; therefore, the effective maintenance and renewal of the infrastructure’s physical assets are essential. This article aims to determine pipe failure prediction to optimize pipe renewal time. This research methodology investigates the most appropriate parameters for predicting pipe failure in the optimization. In particular, the non-homogeneous Poisson process (NHPP) with the Markov chain Monte Carlo (MCMC) approach is presented for Bayesian inference, while maximum likelihood (ML) is applied for frequentist inference as a comparison method. It is concluded that the two estimations are relatively appropriate for predicting failures, but MCMC estimation is closer to the total observed data. Based on life-cycle cost (LCC) analysis, the MCMC estimation generates flatter LCC curves and lower LCC values than the ML estimation, which affects the decision making of optimum pipe renewal in water distribution networks.
15

Ahmadian, Yashar, Jonathan W. Pillow, and Liam Paninski. "Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains". Neural Computation 23, no. 1 (January 2011): 46–96. http://dx.doi.org/10.1162/neco_a_00059.

Full text
Abstract
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the “hit-and-run” algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators.
16

López-Santiago, J., L. Martino, M. A. Vázquez, and J. Miguez. "A Bayesian inference and model selection algorithm with an optimization scheme to infer the model noise power". Monthly Notices of the Royal Astronomical Society 507, no. 3 (August 10, 2021): 3351–61. http://dx.doi.org/10.1093/mnras/stab2303.

Full text
Abstract
ABSTRACT Model fitting is possibly the most extended problem in science. Classical approaches include the use of least-squares fitting procedures and maximum likelihood methods to estimate the value of the parameters in the model. However, in recent years, Bayesian inference tools have gained traction. Usually, Markov chain Monte Carlo (MCMC) methods are applied to inference problems, but they present some disadvantages, particularly when comparing different models fitted to the same data set. Other Bayesian methods can deal with this issue in a natural and effective way. We have implemented an importance sampling (IS) algorithm adapted to Bayesian inference problems in which the power of the noise in the observations is not known a priori. The main advantage of IS is that the model evidence can be derived directly from the so-called importance weights – while MCMC methods demand considerable postprocessing. The use of our adaptive target adaptive importance sampling (ATAIS) method is shown by inferring, on the one hand, the parameters of a simulated flaring event that includes a damped oscillation and, on the other hand, real data from the Kepler mission. ATAIS includes a novel automatic adaptation of the target distribution. It automatically estimates the variance of the noise in the model. ATAIS admits parallelization, which decreases the computational run-times notably. We compare our method against a nested sampling method within a model selection problem.
17

Liu, Chenjian, Xiaoman Zheng, and Yin Ren. "Parameter Optimization of the 3PG Model Based on Sensitivity Analysis and a Bayesian Method". Forests 11, no. 12 (December 21, 2020): 1369. http://dx.doi.org/10.3390/f11121369.

Full text
Abstract
Sensitivity analysis and parameter optimization of stand models can improve their efficiency and accuracy, and increase their applicability. In this study, the sensitivity analysis, screening, and optimization of 63 model parameters of the Physiological Principles in Predicting Growth (3PG) model were performed by combining a sensitivity analysis method and the Markov chain Monte Carlo (MCMC) method of Bayesian posterior estimation theory. Additionally, a nine-year observational dataset of Chinese fir trees felled in the Shunchang Forest Farm, Nanping, was used to analyze, screen, and optimize the 63 model parameters of the 3PG model. The results showed the following: (1) The parameters that are most sensitive to stand stocking and diameter at breast height (DBH) are nWs (power in stem mass vs. diameter relationship), aWs (constant in stem mass vs. diameter relationship), alphaCx (maximum canopy quantum efficiency), k (extinction coefficient for PAR absorption by canopy), pRx (maximum fraction of NPP to roots), pRn (minimum fraction of NPP to roots), and CoeffCond (defines stomatal response to VPD); (2) MCMC can be used to optimize the parameters of the 3PG model, in which the posterior probability distributions of nWs, aWs, alphaCx, pRx, pRn, and CoeffCond conform to approximately normal or skewed distributions, and the peak value is prominent; and (3) compared with the accuracy before sensitivity analysis and a Bayesian method, the biomass simulation accuracy of the stand model was increased by 13.92%, and all indicators show that the accuracy of the improved model is superior. This method can be used to calibrate the parameters and analyze the uncertainty of multi-parameter complex stand growth models, which are important for the improvement of parameter estimation and simulation accuracy.
18

Grana, Dario, Leandro de Figueiredo, and Klaus Mosegaard. "Markov chain Monte Carlo for petrophysical inversion". GEOPHYSICS 87, no. 1 (November 12, 2021): M13–M24. http://dx.doi.org/10.1190/geo2021-0177.1.

Full text
Abstract
Stochastic petrophysical inversion is a method used to predict reservoir properties from seismic data. Recent advances in stochastic optimization allow generating multiple realizations of rock and fluid properties conditioned on seismic data. To match the measured data and represent the uncertainty of the model variables, many realizations are generally required. Stochastic sampling and optimization of spatially correlated models are computationally demanding. Monte Carlo methods allow quantifying the uncertainty of the model variables but are impractical for high-dimensional models with spatially correlated variables. We have developed a Bayesian approach based on an efficient implementation of the Markov chain Monte Carlo (MCMC) method for the inversion of seismic data for the prediction of reservoir properties. Our Bayesian approach includes an explicit vertical correlation model in the proposal distribution. It is applied trace by trace, and the lateral continuity model is imposed by using the previously simulated values at the adjacent traces as conditioning data for simulating the initial model at the current trace. The methodology is first presented for a 1D problem to test the vertical correlation, and it is extended to 2D problems by including the lateral correlation and comparing two novel implementations based on sequential sampling. Our method is applied to synthetic data to estimate the posterior distribution of the petrophysical properties conditioned on the measured seismic data. The results are compared with an MCMC implementation without lateral correlation and demonstrate the advantage of integrating a spatial correlation model.
19

Librado, Pablo, and Ludovic Orlando. "Struct-f4: a Rcpp package for ancestry profile and population structure inference from f4-statistics". Bioinformatics 38, no. 7 (January 26, 2022): 2070–71. http://dx.doi.org/10.1093/bioinformatics/btac046.

Full text
Abstract
Summary: Visualization and inference of population structure is increasingly important for fundamental and applied research. Here, we present Struct-f4, providing automated solutions to characterize and summarize the genetic ancestry profile of individuals, assess their genetic affinities, identify admixture sources and quantify admixture levels. Availability and implementation: Struct-f4 is written in Rcpp and relies on f4-statistics and Markov Chain Monte Carlo (MCMC) optimization. It is freely available under the GNU General Public License in Bitbucket (https://bitbucket.org/plibradosanz/structf4/). Supplementary information: Supplementary data are available at Bioinformatics online.
20

Vrugt, J. A., and C. J. F. Ter Braak. "DREAM(D): an adaptive Markov Chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, and combinatorial posterior parameter estimation problems". Hydrology and Earth System Sciences 15, no. 12 (December 13, 2011): 3701–13. http://dx.doi.org/10.5194/hess-15-3701-2011.

Full text
Abstract
Abstract. Formal and informal Bayesian approaches have found widespread implementation and use in environmental modeling to summarize parameter and predictive uncertainty. Successful implementation of these methods relies heavily on the availability of efficient sampling methods that approximate, as closely and consistently as possible the (evolving) posterior target distribution. Much of this work has focused on continuous variables that can take on any value within their prior defined ranges. Here, we introduce theory and concepts of a discrete sampling method that resolves the parameter space at fixed points. This new code, entitled DREAM(D) uses the recently developed DREAM algorithm (Vrugt et al., 2008, 2009a, b) as its main building block but implements two novel proposal distributions to help solve discrete and combinatorial optimization problems. This novel MCMC sampler maintains detailed balance and ergodicity, and is especially designed to resolve the emerging class of optimal experimental design problems. Three different case studies involving a Sudoku puzzle, soil water retention curve, and rainfall – runoff model calibration problem are used to benchmark the performance of DREAM(D). The theory and concepts developed herein can be easily integrated into other (adaptive) MCMC algorithms.
21

Wang, Shengchao, Liguo Han, Xiangbo Gong, Shaoyue Zhang, Xingguo Huang, and Pan Zhang. "Full-Waveform Inversion of Time-Lapse Crosshole GPR Data Using Markov Chain Monte Carlo Method". Remote Sensing 13, no. 22 (November 11, 2021): 4530. http://dx.doi.org/10.3390/rs13224530.

Full text
Abstract
Crosshole ground-penetrating radar (GPR) is an important tool for a wide range of geoscientific and engineering investigations, and the Markov chain Monte Carlo (MCMC) method is a heuristic global optimization method that can be used to solve the inversion problem. In this paper, we use time-lapse GPR full-waveform data to invert the dielectric permittivity. An inversion based on the MCMC method does not rely on an accurate initial model and can introduce any complex prior information. Time-lapse ground-penetrating radar has great potential to monitor the properties of the subsurface. For the time-lapse inversion, we used the double-difference method on the full-waveform data to invert the time-lapse target area accurately. We propose a local sampling strategy that takes advantage of the a priori information in the Monte Carlo method and can sample only the target area with a sequential Gibbs sampler. This method reduces the computation and improves the inversion accuracy of the target area. We provide inversion results for synthetic time-lapse waveform data that show that the proposed method significantly improves accuracy in the target area.
22

Łatuszyński, Krzysztof, and Wojciech Niemiro. "Rigorous confidence bounds for MCMC under a geometric drift condition". Journal of Complexity 27, no. 1 (February 2011): 23–38. http://dx.doi.org/10.1016/j.jco.2010.07.003.

Full text
23

Garg, Renu, Madhulika Dube, and Hare Krishna. "Estimation of Parameters and Reliability Characteristics in Lindley Distribution Using Randomly Censored Data". Statistics, Optimization & Information Computing 8, no. 1 (February 17, 2020): 80–97. http://dx.doi.org/10.19139/soic-2310-5070-692.

Full text
Abstract
This article deals with the estimation of parameters and reliability characteristics of the Lindley distribution under random censoring. The expected time on test based on randomly censored data is obtained. The maximum likelihood estimators of the unknown parameters and reliability characteristics are derived. The asymptotic, bootstrap p, and bootstrap t confidence intervals of the parameters are constructed. The Bayes estimators of the parameters and reliability characteristics under the squared error loss function using non-informative and gamma informative priors are obtained. For computing Bayes estimates, the Lindley approximation and MCMC methods are considered. Highest posterior density (HPD) credible intervals of the parameters are obtained using the MCMC method. Various estimation procedures are compared using a Monte Carlo simulation study. Finally, a real data set is analyzed for illustration purposes.
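
As a small illustration of the post-processing step mentioned above: given MCMC draws of a parameter, the Bayes estimate under squared-error loss is the posterior mean, and a 95% HPD interval can be taken as the shortest interval containing 95% of the sorted draws. The draws below are stand-ins; the article's sampler for the Lindley model under random censoring is not shown.

    import numpy as np

    def hpd_interval(draws, prob=0.95):
        draws = np.sort(np.asarray(draws))
        n = len(draws)
        k = int(np.floor(prob * n))
        widths = draws[k:] - draws[:n - k]    # every interval containing a fraction prob of the draws
        i = np.argmin(widths)                 # the shortest such interval is the HPD interval
        return draws[i], draws[i + k]

    rng = np.random.default_rng(0)
    theta_draws = rng.gamma(shape=20.0, scale=0.1, size=10000)   # stand-in for MCMC output
    print("Bayes estimate (posterior mean):", theta_draws.mean())
    print("95% HPD interval:", hpd_interval(theta_draws))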
24

Gaucherel, C., F. Campillo, L. Misson, J. Guiot, and J. J. Boreux. "Parameterization of a process-based tree-growth model: Comparison of optimization, MCMC and Particle Filtering algorithms". Environmental Modelling & Software 23, no. 10-11 (October 2008): 1280–88. http://dx.doi.org/10.1016/j.envsoft.2008.03.003.

Full text
25

Wang, Ji, Ru Zhang, Yuting Yan, Xiaoqiang Dong, and Jun Ming Li. "Locating hazardous gas leaks in the atmosphere via modified genetic, MCMC and particle swarm optimization algorithms". Atmospheric Environment 157 (May 2017): 27–37. http://dx.doi.org/10.1016/j.atmosenv.2017.03.009.

Full text
26

Fassina, A., D. Abate, and P. Franz. "Bayesian inference applied to electron temperature data: computational performances and diagnostics integration". Journal of Instrumentation 17, no. 09 (September 1, 2022): C09012. http://dx.doi.org/10.1088/1748-0221/17/09/c09012.

Full text
Abstract
Bayesian inference proves to be a robust tool for fitting parametric models to experimental datasets. In the case of electron kinetics, it can help identify non-thermal components in the electron population and their relation with plasma parameters and dynamics. We present here a tool for electron distribution reconstruction based on MCMC (Markov chain Monte Carlo) Bayesian inference on Thomson Scattering data, discussing the computational performance of different algorithms and information metrics. In addition, a possible integration between Soft X-ray spectroscopy and Thomson Scattering is presented, focusing on the parametric optimization of diagnostic spectral channels in different plasma regimes.
27

Adam, Abuzar B. M., Xiaoyu Wan, and Zhengqiang Wang. "Clustering and Auction-Based Power Allocation Algorithm for Energy Efficiency Maximization in Multi-Cell Multi-Carrier NOMA Networks". Applied Sciences 9, no. 23 (November 21, 2019): 5034. http://dx.doi.org/10.3390/app9235034.

Full text
Abstract
In this paper, we investigate energy efficiency (EE) maximization in multi-cell multi-carrier non-orthogonal multiple access (MCMC-NOMA) networks. To achieve this goal, an optimization problem is formulated, and the solution is divided into two parts. First, we investigate inter-cell interference mitigation, and then we propose an auction-based non-cooperative game for power allocation among base stations. Finally, to guarantee the rate requirements of users, power is allocated fairly to users. The simulation results show that the proposed scheme has the best performance compared with the existing NOMA-based fractional transmit power allocation (FTPA) and conventional orthogonal frequency division multiple access (OFDMA).
28

Sasidharan, Balu Krishna, Saif Aljabab, Jatinder Saini, Tony Wong, George Laramore, Jay Liao, Upendra Parvathaneni, and Stephen R. Bowen. "Clinical Monte Carlo versus Pencil Beam Treatment Planning in Nasopharyngeal Patients Receiving IMPT". International Journal of Particle Therapy 5, no. 4 (March 1, 2019): 32–40. http://dx.doi.org/10.14338/ijpt-18-00039.1.

Full text
Abstract
Purpose: Pencil beam (PB) analytical algorithms have been the standard of care for proton therapy dose calculations. The introduction of Monte Carlo (MC) algorithms may provide more robust and accurate planning and can improve therapeutic benefit. We conducted a dosimetric analysis to quantify the differences between MC and PB algorithms in the clinical setting of dose-painted nasopharyngeal cancer intensity-modulated proton radiotherapy. Patients and Methods: Plans of 14 patients treated with the PB analytical algorithm optimized and calculated (PBPB) were retrospectively analyzed. The PBPB plans were recalculated using MC to generate PBMC plans and, finally, reoptimized and recalculated with MC to generate MCMC plans. The plans were compared across several dosimetric endpoints and correlated with documented toxicity. Robustness of the planning scenarios (PBPB, PBMC, MCMC) in the presence of setup and range uncertainties was compared. Results: A median decrease of up to 5 Gy (P < .05) was observed in coverage of planning target volume high-risk, intermediate-risk, and low-risk volumes when PB plans were recalculated using the MC algorithm. This loss in coverage was regained by reoptimizing with MC, albeit with a slightly higher dose to normal tissues but within the standard tolerance limits. The robustness of both PB and MC plans remained similar in the presence of setup and range uncertainties. The MC-calculated mean dose to the oral avoidance structure, along with changes in global maximum dose between PB and MC dosimetry, may be associated with acute toxicity-related events. Conclusion: Retrospective analyses of plan dosimetry quantified a loss of coverage with PB that could be recovered under MC optimization. MC optimization should be performed for the complex dosimetry in patients with nasopharyngeal carcinoma before plan acceptance and should also be used in correlative studies of proton dosimetry with clinical endpoints.
29

Sugiura, Masayuki, Kohji Tanaka, and Hiroki Tsujikura. "Proposal of the Optimization Method of Parameters in the Water Level Prediction Model by Using MCMC Estimation". Journal of Japan Society of Civil Engineers, Ser. B1 (Hydraulic Engineering) 74, no. 4 (2018): I_1021–I_1026. http://dx.doi.org/10.2208/jscejhe.74.i_1021.

Full text
30

Speagle, Joshua S., Peter L. Capak, Daniel J. Eisenstein, Daniel C. Masters, and Charles L. Steinhardt. "Exploring photometric redshifts as an optimization problem: an ensemble MCMC and simulated annealing-driven template-fitting approach". Monthly Notices of the Royal Astronomical Society 461, no. 4 (June 24, 2016): 3432–42. http://dx.doi.org/10.1093/mnras/stw1503.

Full text
31

Wilson, Aaron, Alan Fern, and Prasad Tadepalli. "Bayesian Policy Search for Multi-Agent Role Discovery". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 624–29. http://dx.doi.org/10.1609/aaai.v24i1.7679.

Full text
Abstract
Bayesian inference is an appealing approach for leveraging prior knowledge in reinforcement learning (RL). In this paper we describe an algorithm for discovering different classes of roles for agents via Bayesian inference. In particular, we develop a Bayesian policy search approach for Multi-Agent RL (MARL), which is model-free and allows for priors on policy parameters. We present a novel optimization algorithm based on hybrid MCMC, which leverages both the prior and gradient information estimated from trajectories. Our experiments in a complex real-time strategy game demonstrate the effective discovery of roles from supervised trajectories, the use of discovered roles for successful transfer to similar tasks, and the discovery of roles through reinforcement learning.
32

Adam, Abuzar B. M., Xiaoyu Wan, and Zhengqiang Wang. "Energy Efficiency Maximization for Multi-Cell Multi-Carrier NOMA Networks". Sensors 20, no. 22 (November 20, 2020): 6642. http://dx.doi.org/10.3390/s20226642.

Full text
Abstract
As energy efficiency (EE) is a key performance indicator for the future wireless network, it has become a significant research field in communication networks. In this paper, we consider multi-cell multi-carrier non-orthogonal multiple access (MCMC-NOMA) networks and investigate the EE maximization problem. As the EE maximization is a mixed-integer nonlinear programming NP-hard problem, it is difficult to solve directly by traditional optimization such as convex optimization. To handle the EE maximization problem, we decouple it into two subproblems. The first subproblem is user association, where we design a matching-based framework to perform the user association and the subcarriers’ assignment. The second subproblem is the power allocation problem for each user to maximize the EE of the systems. Since the EE maximization problem is still non-convex with respect to the power domain, we propose a two stage quadratic transform with both a single ratio quadratic and multidimensional quadratic transform to convert it into an equivalent convex optimization problem. The power allocation is obtained by iteratively solving the convex problem. Finally, the numerical results demonstrate that the proposed method could achieve better EE compared to existing approaches for non-orthogonal multiple access (NOMA) and considerably outperforms the fractional transmit power control (FTPC) scheme for orthogonal multiple access (OMA).
33

Meng, Xiao-Kai, Yan-Bing Jia, Zhi-Heng Liu, Zhi-Qiang Yu, Pei-Jie Han, Zhu-Mao Lu, and Tao Jin. "High-Voltage Cable Condition Assessment Method Based on Multi-Source Data Analysis". Energies 15, no. 4 (February 14, 2022): 1369. http://dx.doi.org/10.3390/en15041369.

Full text
Abstract
To address the problem that the weight values used in previous state evaluation methods are fixed and single, so that the influence of weight vector deviations on the evaluation result cannot be analyzed, a method based on Markov chain Monte Carlo (MCMC) sampling of the weight space is proposed for evaluating the condition of high-voltage cables. The weight vector set obtained by MCMC sampling and the comprehensive degradation degree of the high-voltage cable sample are weighted, summed, and then compared in pairs to obtain the comprehensive degradation result. The status probability value and overall priority ranking probability of the object to be evaluated are obtained from probability statistics, and the order of maintenance is determined according to the status probability value and the ranking result. The cable lines that require follow-up defect identification are thus clarified according to the evaluation result. This helps operation and maintenance personnel implement the maintenance plan for the cable more accurately and improve operation and maintenance efficiency.
34

Husaini, Noor Aida, Rozaida Ghazali, Nureize Arbaiy, and Ayodele Lasisi. "MCS-MCMC for Optimising Architectures and Weights of Higher Order Neural Networks". International Journal of Intelligent Systems and Applications 12, no. 5 (October 8, 2020): 52–72. http://dx.doi.org/10.5815/ijisa.2020.05.05.

Full text
Abstract
The standard method to train Higher Order Neural Networks (HONN) is the well-known Backpropagation (BP) algorithm. Yet, the current BP algorithm has several limitations, including getting easily stuck in local minima, particularly when dealing with highly non-linear problems, and the use of computationally intensive training algorithms. The current BP algorithm also relies heavily on the initial weight values and other parameters picked. Therefore, in an attempt to overcome the BP drawbacks, we investigate a method called Modified Cuckoo Search-Markov chain Monte Carlo for optimising the weights in HONN and boosting the learning process. This method, which lies in the Swarm Intelligence area, is notably successful in optimisation tasks. We compared the performance with several HONN-based network models and the standard Multilayer Perceptron on four (4) time series datasets: Temperature, Ozone, Gold Close Price and Bitcoin Closing Price from various repositories. Simulation results indicate that this swarm-based algorithm outperformed or was at least on par with the network models using the current BP algorithm in terms of lower error rate.
35

Guo, Yanbing, Lingjuan Miao, and Yusen Lin. "A Novel EM Implementation for Initial Alignment of SINS Based on Particle Filter and Particle Swarm Optimization". Mathematical Problems in Engineering 2019 (February 20, 2019): 1–12. http://dx.doi.org/10.1155/2019/6793175.

Full text
Abstract
For nonlinear systems in which the measurement noise parameters vary over time, adaptive nonlinear filters can be applied to precisely estimate the states of systems. The expectation maximization (EM) algorithm, which alternately takes an expectation- (E-) step and a maximization- (M-) step, has been proposed to construct a theoretical framework for the adaptive nonlinear filters. Previous adaptive nonlinear filters based on the EM employ analytical algorithms to develop the two steps, but they cannot achieve high filtering accuracy because the strong nonlinearity of systems may invalidate the Gaussian assumption of the state distribution. In this paper, we propose an EM-based adaptive nonlinear filter APF to solve this problem. In the E-step, an improved particle filter PF_new is proposed based on the Gaussian sum approximation (GSA) and the Monte Carlo Markov chain (MCMC) to achieve the state estimation. In the M-step, the particle swarm optimization (PSO) is applied to estimate the measurement noise parameters. The performances of the proposed algorithm are illustrated in the simulations with Lorenz 63 model and in a semiphysical experiment of the initial alignment of the strapdown inertial navigation system (SINS) in large misalignment angles.
36

Shrestha, Ashish, Bishal Ghimire, and Francisco Gonzalez-Longatt. "A Bayesian Model to Forecast the Time Series Kinetic Energy Data for a Power System". Energies 14, no. 11 (June 4, 2021): 3299. http://dx.doi.org/10.3390/en14113299.

Full text
Abstract
With the massive penetration of electronic power converter (EPC)-based technologies, numerous issues are being noticed in the modern power system that may directly affect system dynamics and operational security. The estimation of system performance parameters is especially important for transmission system operators (TSOs) in order to operate a power system securely. This paper presents a Bayesian model to forecast short-term kinetic energy time series data for a power system, which can thus help TSOs to operate the respective power system securely. A Markov chain Monte Carlo (MCMC) method, the No-U-Turn sampler, is used for sampling, and Stan's limited-memory Broyden–Fletcher–Goldfarb–Shanno (LM-BFGS) algorithm is used as the optimization method here. The concept of decomposable time series modeling is adopted to analyze the seasonal characteristics of the datasets, and numerous performance measurement metrics are used for model validation. In addition, an autoregressive integrated moving average (ARIMA) model is used to compare the results of the presented model. Finally, the optimal size of the training dataset required to forecast the 30-min values of the kinetic energy with a low error is identified. In this study, one-year univariate data (1-min resolution) for the integrated Nordic power system (INPS) are used to forecast the kinetic energy for sequences of 30 min (i.e., short-term sequences). Performance evaluation metrics such as the root-mean-square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and mean absolute scaled error (MASE) of the proposed model are calculated here to be 4.67, 3.865, 0.048, and 8.15, respectively. In addition, these performance metrics can be improved to 3.28, 2.67, 0.034, and 5.62, respectively, by increasing the MCMC sampling. Similarly, 180.5 h of historic data is sufficient to forecast short-term results for the case study here with an accuracy of 1.54504 for the RMSE.
37

Gao, Guohua, Jeroen Vink, Fredrik Saaf, and Terence Wells. "Strategies to Enhance the Performance of Gaussian Mixture Model Fitting for Uncertainty Quantification". SPE Journal 27, no. 01 (November 18, 2021): 329–48. http://dx.doi.org/10.2118/204008-pa.

Full text
Abstract
Summary When formulating history matching within the Bayesian framework, we may quantify the uncertainty of model parameters and production forecasts using conditional realizations sampled from the posterior probability density function (PDF). It is quite challenging to sample such a posterior PDF. Some methods [e.g., Markov chain Monte Carlo (MCMC)] are very expensive, whereas other methods are cheaper but may generate biased samples. In this paper, we propose an unconstrained Gaussian mixture model (GMM) fitting method to approximate the posterior PDF and investigate new strategies to further enhance its performance. To reduce the central processing unit (CPU) time of handling bound constraints, we reformulate the GMM fitting formulation such that an unconstrained optimization algorithm can be applied to find the optimal solution of unknown GMM parameters. To obtain a sufficiently accurate GMM approximation with the lowest number of Gaussian components, we generate random initial guesses, remove components with very small or very large mixture weights after each GMM fitting iteration, and prevent their reappearance using a dedicated filter. To prevent overfitting, we add a new Gaussian component only if the quality of the GMM approximation on a (large) set of blind-test data sufficiently improves. The unconstrained GMM fitting method with the new strategies proposed in this paper is validated using nonlinear toy problems and then applied to a synthetic history-matching example. It can construct a GMM approximation of the posterior PDF that is comparable to the MCMC method, and it is significantly more efficient than the constrained GMM fitting formulation (e.g., reducing the CPU time by a factor of 800 to 7,300 for problems we tested), which makes it quite attractive for large-scale history-matching problems. NOTE: This paper is also published as part of the 2021 SPE Reservoir Simulation Conference Special Issue.
38

Sengupta, P., and S. Chakraborty. "Model reduction technique for Bayesian model updating of structural parameters using simulated modal data". Proceedings of the 12th Structural Engineering Convention, SEC 2022: Themes 1-2 1, no. 1 (December 19, 2022): 1403–12. http://dx.doi.org/10.38208/acp.v1.670.

Full text
Abstract
An attempt has been made to study the effectiveness of model reduction technique for Bayesian approach of model updating with incomplete modal data sets. The inverse problems in system identification require the solution of a family of plausible values of model parameters based on available data. Specifically, an iterative model reduction algorithm is proposed based on a non-linear optimization method to solve the transformation parameter such that no prior choices of response parameters are required. The modal ordinates synthesized at the unmeasured degrees of freedom (DOF) from the reduced order model are used for a better estimate of likelihood functions. The reduced-order model is subsequently implemented for updating of unknown structural parameters. The present study also synthesizes the mode shape ordinates at unmeasured DOF from the reduced order model. The efficiency of the proposed model reduction algorithm is further studied by adding noises of varying percentages to the measured modal data sets. The proposed methodology is illustrated numerically to update the stiffness parameters of an eight-story shear building model considering simulated datasets contaminated by Gaussian error as evidence. The capability of the proposed model reduction algorithm coupled with Markov Chain Monte Carlo (MCMC) algorithm is compared with the case where only MCMC algorithm is used to investigate their effectiveness in updating model parameters. The numerical study focuses on the effect of reduced number of measurements for various measurement configurations in estimating the variation of errors in determining the modal data. Subsequently, its effects in reducing the uncertainty of model updating parameters are investigated. The effectiveness of the proposed model reduction algorithm is tested for number of modes equal to the number of master DOFs and gradually decrease of mode numbers from the number of master DOFs.
39

Nielsen, Svend V., Andrew H. Vaughn, Kalle Leppälä, Michael J. Landis, Thomas Mailund, and Rasmus Nielsen. "Bayesian inference of admixture graphs on Native American and Arctic populations". PLOS Genetics 19, no. 2 (February 13, 2023): e1010410. http://dx.doi.org/10.1371/journal.pgen.1010410.

Full text
Abstract
Admixture graphs are mathematical structures that describe the ancestry of populations in terms of divergence and merging (admixing) of ancestral populations as a graph. An admixture graph consists of a graph topology, branch lengths, and admixture proportions. The branch lengths and admixture proportions can be estimated using numerous numerical optimization methods, but inferring the topology involves a combinatorial search for which no polynomial algorithm is known. In this paper, we present a reversible jump MCMC algorithm for sampling high-probability admixture graphs and show that this approach works well both as a heuristic search for a single best-fitting graph and for summarizing shared features extracted from posterior samples of graphs. We apply the method to 11 Native American and Siberian populations and exploit the shared structure of high-probability graphs to characterize the relationship between Saqqaq, Inuit, Koryaks, and Athabascans. Our analyses show that the Saqqaq is not a good proxy for the previously identified gene flow from Arctic people into the Na-Dene speaking Athabascans.
40

Huang, Jiangfeng, Zhiliang Deng, and Liwei Xu. "A Bayesian level set method for an inverse medium scattering problem in acoustics". Inverse Problems & Imaging 15, no. 5 (2021): 1077. http://dx.doi.org/10.3934/ipi.2021029.

Full text
Abstract
In this work, we are interested in the determination of the shape of the scatterer for the two dimensional time harmonic inverse medium scattering problems in acoustics. The scatterer is assumed to be a piecewise constant function with a known value inside inhomogeneities, and its shape is represented by the level set functions for which we investigate the information using the Bayesian method. In the Bayesian framework, the solution of the geometric inverse problem is defined as a posterior probability distribution. The well-posedness of the posterior distribution is discussed, and the Markov chain Monte Carlo (MCMC) method is applied to generate samples from the posterior distribution. Numerical experiments are presented to demonstrate the effectiveness of the proposed method.
41

Noh, Yoojeong, K. K. Choi, and Ikjin Lee. "Comparison study between MCMC-based and weight-based Bayesian methods for identification of joint distribution". Structural and Multidisciplinary Optimization 42, no. 6 (July 27, 2010): 823–33. http://dx.doi.org/10.1007/s00158-010-0539-1.

Full text
42

Wakeland, Wayne y Jack Homer. "Addressing Parameter Uncertainty in a Health Policy Simulation Model Using Monte Carlo Sensitivity Methods". Systems 10, n.º 6 (18 de noviembre de 2022): 225. http://dx.doi.org/10.3390/systems10060225.

Texto completo
Resumen
We present a practical guide and step-by-step flowchart for establishing uncertainty in-tervals for key model outcomes in a simulation model in the face of uncertain parameters. The process start with Powell optimization to find a set of uncertain parameters (the optimum param-eter set or OPS) that minimize the model fitness error relative to historical data. Optimization also help in refinement of parameter uncertainty ranges. Next, traditional Monte Carlo (TMC) ran-domization or Markov Chain Monte Carlo (MCMC) is used to create a sample of parameter sets that fit the reference behavior data nearly as well as the OPS. Under the TMC method, the entire pa-rameter space is explored broadly with a large number of runs, and the results are sorted for se-lection of qualifying parameter sets (QPS) to ensure good fit and parameter distributions that are centrally located within the uncertainty ranges. In addition, the QPS outputs are graphed as sen-sitivity graphs or box-and-whisker plots for comparison with the historical data. Finally, alternative policies and scenarios are run against the OPS and all QPS, and uncertainty intervals are found for projected model outcomes. We illustrate the full parameter uncertainty approach with a (previ-ously published) system dynamics model of the U.S. opioid epidemic, and demonstrate how it can enrich policy modeling results.
43

Martelli, Saulo, Daniela Calvetti, Erkki Somersalo, and Marco Viceconti. "Stochastic modelling of muscle recruitment during activity". Interface Focus 5, no. 2 (April 6, 2015): 20140094. http://dx.doi.org/10.1098/rsfs.2014.0094.

Full text
Abstract
Muscle forces can be selected from a space of muscle recruitment strategies that produce stable motion and variable muscle and joint forces. However, current optimization methods provide only a single muscle recruitment strategy. We modelled the spectrum of muscle recruitment strategies while walking. The equilibrium equations at the joints, muscle constraints, static optimization solutions and 15-channel electromyography (EMG) recordings for seven walking cycles were taken from earlier studies. The spectrum of muscle forces was calculated using Bayesian statistics and Markov chain Monte Carlo (MCMC) methods, whereas EMG-driven muscle forces were calculated using EMG-driven modelling. We calculated the differences between the spectrum and EMG-driven muscle force for 1–15 input EMGs, and we identified the muscle strategy that best matched the recorded EMG pattern. The best-fit strategy, static optimization solution and EMG-driven force data were compared using correlation analysis. Possible and plausible muscle forces were defined as within physiological boundaries and within EMG boundaries. Possible muscle and joint forces were calculated by constraining the muscle forces between zero and the peak muscle force. Plausible muscle forces were constrained within six selected EMG boundaries. The spectrum to EMG-driven force difference increased from 40 to 108 N for 1–15 EMG inputs. The best-fit muscle strategy better described the EMG-driven pattern (R² = 0.94; RMSE = 19 N) than the static optimization solution (R² = 0.38; RMSE = 61 N). Possible forces for 27 of 34 muscles varied between zero and the peak muscle force, inducing a peak hip force of 11.3 body-weights. Plausible muscle forces closely matched the selected EMG patterns; no effect of the EMG constraint was observed on the remaining muscle force ranges. The model can be used to study alternative muscle recruitment strategies in both physiological and pathophysiological neuromotor conditions.
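The sketch below conveys the core idea of sampling a "spectrum" of forces rather than a single solution: a Metropolis chain explores muscle-force vectors bounded between zero and an assumed peak force, with a single joint equilibrium equation enforced as a soft constraint. It is not the authors' model; the moment arms, target joint moment, peak forces, tolerance, and step size are all made-up illustrative values.

    # Minimal sketch: Metropolis sampling of muscle forces under bounds and a soft equilibrium constraint.
    import numpy as np

    rng = np.random.default_rng(2)
    moment_arms = np.array([0.04, 0.05, 0.03])    # m, hypothetical, one per muscle
    target_moment = 60.0                          # N*m, hypothetical joint moment
    f_max = np.array([2000.0, 1500.0, 1800.0])    # N, hypothetical peak muscle forces
    tol = 1.0                                     # N*m, tolerance on the equilibrium residual

    def log_post(f):
        if np.any(f < 0) or np.any(f > f_max):    # physiological bounds: 0 <= f <= peak force
            return -np.inf
        residual = moment_arms @ f - target_moment
        return -0.5 * (residual / tol) ** 2       # soft equilibrium constraint

    f = f_max / 2                                 # start mid-range
    lp = log_post(f)
    samples = []
    for it in range(20000):
        prop = f + rng.normal(scale=25.0, size=3) # random-walk proposal (step size is a guess)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            f, lp = prop, lp_prop
        if it > 5000 and it % 10 == 0:
            samples.append(f.copy())

    samples = np.array(samples)
    print("force ranges (N):", samples.min(axis=0).round(1), samples.max(axis=0).round(1))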
44

Singleton, Colin, and Peter Grindrod. "Forecasting for Battery Storage: Choosing the Error Metric". Energies 14, no. 19 (October 1, 2021): 6274. http://dx.doi.org/10.3390/en14196274.

Full text
Abstract
We describe our approach to the Western Power Distribution (WPD) Presumed Open Data (POD) 6 MWh battery storage capacity forecasting competition, in which we finished second. The competition entails two distinct forecasting aims: maximising the daily evening peak reduction and using as much solar photovoltaic energy as possible. For the latter, we combine a Bayesian (MCMC) linear regression model with an average generation distribution. For the former, we introduce a new error metric that allows even a simple weighted average combined with a simple linear regression model to score very well on the competition performance metric.
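For readers unfamiliar with MCMC-based regression, the following is a generic stand-in (not the authors' model): Bayesian linear regression on a toy "irradiance vs. generation" dataset, sampled with random-walk Metropolis. The synthetic data, weak Gaussian priors, step size, and burn-in length are all assumptions.

    # Minimal sketch of Bayesian linear regression sampled with random-walk Metropolis.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    x = rng.uniform(0, 1, n)                          # e.g., irradiance proxy (made up)
    y = 1.5 + 3.0 * x + rng.normal(0, 0.5, n)         # synthetic solar generation

    def log_post(theta):
        a, b, log_s = theta
        s = np.exp(log_s)
        resid = y - (a + b * x)
        loglik = -n * log_s - 0.5 * np.sum(resid**2) / s**2
        logprior = -0.5 * (a**2 + b**2) / 100.0       # weak Gaussian priors (assumption)
        return loglik + logprior

    theta = np.array([y.mean(), 0.0, 0.0])            # crude starting point
    lp = log_post(theta)
    draws = []
    for it in range(30000):
        prop = theta + rng.normal(scale=0.1, size=3)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if it > 10000:
            draws.append(theta.copy())

    draws = np.array(draws)
    print("posterior means (intercept, slope, log sigma):", draws.mean(axis=0).round(2))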
45

Cemgil, A. T., and B. Kappen. "Monte Carlo Methods for Tempo Tracking and Rhythm Quantization". Journal of Artificial Intelligence Research 18 (January 1, 2003): 45–81. http://dx.doi.org/10.1613/jair.1121.

Full text
Abstract
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well-known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulations suggest that the sequential methods give better results. The methods can be applied in both online and batch scenarios such as tempo tracking and transcription and are thus potentially useful in a number of music applications such as adaptive automatic accompaniment, score typesetting and music information retrieval.
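To make the sequential Monte Carlo side concrete, here is a minimal bootstrap particle filter for a toy one-dimensional state-space model, a stand-in for the much richer switching tempo model of the paper. The process and observation noise levels, particle count, and the drifting "tempo" state are all assumptions.

    # Minimal bootstrap particle filter sketch (illustrative toy model only).
    import numpy as np

    rng = np.random.default_rng(4)
    T, N = 50, 500                       # time steps, particles
    q, r = 0.05, 0.2                     # process / observation noise std (assumptions)

    # Simulate a slowly drifting "tempo" state and noisy observations of it.
    true_state = np.cumsum(rng.normal(0, q, T)) + 1.0
    obs = true_state + rng.normal(0, r, T)

    particles = rng.normal(1.0, 0.2, N)
    estimates = []
    for t in range(T):
        particles = particles + rng.normal(0, q, N)        # propagate through the dynamics
        logw = -0.5 * ((obs[t] - particles) / r) ** 2        # weight by the observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))              # filtered mean
        idx = rng.choice(N, size=N, p=w)                      # multinomial resampling
        particles = particles[idx]

    print("final true state %.3f, filtered estimate %.3f" % (true_state[-1], estimates[-1]))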
46

Davis, Andrew D., Stefanie Hassel, Stephen R. Arnott, Geoffrey B. Hall, Jacqueline K. Harris, Mojdeh Zamyadi, Jonathan Downar et al. "Biophysical compartment models for single-shell diffusion MRI in the human brain: a model fitting comparison". Physics in Medicine & Biology 67, no. 5 (February 28, 2022): 055009. http://dx.doi.org/10.1088/1361-6560/ac46de.

Full text
Abstract
Clinically oriented studies commonly acquire diffusion MRI (dMRI) data with a single non-zero b-value (i.e. single-shell) and diffusion weighting of b = 1000 s mm⁻². To produce microstructural parameter maps, the tensor model is usually used, despite known limitations. Although compartment models have demonstrated improved fits in multi-shell dMRI data, they are rarely used for single-shell parameter maps, where their effectiveness is unclear from the literature. Here, various compartment models combining isotropic balls and symmetric tensors were fitted to single-shell dMRI data to investigate model fitting optimization and extract the most information possible. Full testing was performed in 5 subjects, and 3 subjects with multi-shell data were included for comparison. The results were tested and confirmed in a further 50 subjects. The Markov chain Monte Carlo (MCMC) model fitting technique outperformed non-linear least squares. Using MCMC, the 2-fibre-orientation mono-exponential ball and stick model (BSME2) provided artifact-free, stable results, in little processing time. The analogous ball and zeppelin model (BZ2) also produced stable, low-noise parameter maps, though it required much greater computing resources (50 000 burn-in steps). In single-shell data, the gamma-distributed diffusivity ball and stick model (BSGD2) underperformed relative to other models, despite being an often-used software default. It produced artifacts in the diffusivity maps even with extremely long processing times. Neither increased diffusion weighting nor a greater number of gradient orientations improved BSGD2 fits. In white matter (WM), the tensor produced the best fit as measured by Bayesian information criterion. This result contrasts with studies using multi-shell data. However, in crossing fibre regions the tensor confounded geometric effects with fractional anisotropy (FA): the planar/linear WM FA ratio was 49%, while BZ2 and BSME2 retained 76% and 83% of restricted fraction, respectively. As a result, the BZ2 and BSME2 models are strong candidates to optimize information extraction from single-shell dMRI studies.
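The following sketch shows the flavour of MCMC model fitting for a compartment model: a single-fibre, mono-exponential ball and stick signal model fitted to synthetic single-shell data with a Metropolis sampler. It is a toy illustration, not the cited pipeline; the fibre direction is held fixed and known, and the b-value, noise level, priors, and step sizes are assumptions.

    # Minimal sketch: Metropolis fitting of a toy ball-and-stick model to synthetic single-shell dMRI data.
    import numpy as np

    rng = np.random.default_rng(5)
    b = 1000.0                                      # s/mm^2, single-shell diffusion weighting
    G = rng.standard_normal((64, 3))
    G /= np.linalg.norm(G, axis=1, keepdims=True)   # 64 unit gradient directions
    v = np.array([1.0, 0.0, 0.0])                   # known fibre direction (simplification)
    f_true, d_true = 0.6, 7e-4                      # stick fraction, diffusivity in mm^2/s

    def model(f, d):
        ang = G @ v
        return (1 - f) * np.exp(-b * d) + f * np.exp(-b * d * ang**2)

    signal = model(f_true, d_true) + rng.normal(0, 0.02, len(G))   # noisy measurements
    sigma = 0.02

    def log_post(f, d):
        if not (0.0 < f < 1.0) or not (1e-4 < d < 3e-3):           # flat priors on a plausible range
            return -np.inf
        return -0.5 * np.sum((signal - model(f, d)) ** 2) / sigma**2

    f, d = 0.5, 1e-3
    lp = log_post(f, d)
    keep = []
    for it in range(20000):
        f_p, d_p = f + rng.normal(0, 0.02), d + rng.normal(0, 3e-5)
        lp_p = log_post(f_p, d_p)
        if np.log(rng.random()) < lp_p - lp:
            f, d, lp = f_p, d_p, lp_p
        if it > 5000:
            keep.append((f, d))

    keep = np.array(keep)
    print("posterior mean f = %.2f, d = %.2e" % (keep[:, 0].mean(), keep[:, 1].mean()))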
47

Mohamed, Linah, Mike Christie, and Vasily Demyanov. "Comparison of Stochastic Sampling Algorithms for Uncertainty Quantification". SPE Journal 15, no. 01 (November 17, 2009): 31–38. http://dx.doi.org/10.2118/119139-pa.

Full text
Abstract
History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian approach, we start with prior information about a reservoir (e.g., from analog outcrop data) and update our reservoir models with observations (e.g., from production data or time-lapse seismic). The goal of this activity is often to generate multiple models that match the history and use the models to quantify uncertainties in predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm used to generate the models. Algorithms that have been studied include gradient methods, genetic algorithms, and the ensemble Kalman filter (EnKF). This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Neighbourhood Algorithm (NA). HMC is a Markov chain Monte Carlo (MCMC) technique that uses Hamiltonian dynamics to achieve larger jumps than are possible with other MCMC techniques. PSO is a swarm intelligence algorithm that uses similar dynamics to HMC to guide the search but incorporates acceleration and damping parameters to provide rapid convergence to possible multiple minima. NA is a sampling technique that uses the properties of Voronoi cells in high dimensions to achieve multiple history-matched models. The algorithms are compared by generating multiple history-matched reservoir models and comparing the Bayesian credible intervals (p10-p50-p90) produced by each algorithm. We show that all the algorithms are able to find equivalent match qualities for this example, but that some algorithms are able to find good-fitting models quickly, whereas others are able to find a more diverse set of models in parameter space. The effects of the different sampling of model parameter space are compared in terms of the p10-p50-p90 uncertainty envelopes in forecast oil rate. These results show that algorithms based on Hamiltonian dynamics and swarm intelligence concepts have the potential to be effective tools in uncertainty quantification in the oil industry.
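The "larger jumps" of HMC come from simulating Hamiltonian dynamics with a leapfrog integrator before the accept/reject step. The sketch below runs HMC on a correlated two-dimensional Gaussian target, which is enough to show the mechanics; it is not the reservoir history-matching setup of the paper, and the step size and path length are arbitrary choices.

    # Minimal Hamiltonian Monte Carlo sketch for a 2-D Gaussian target.
    import numpy as np

    rng = np.random.default_rng(6)
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    prec = np.linalg.inv(cov)

    def neg_log_p(q):
        return 0.5 * q @ prec @ q

    def grad_neg_log_p(q):
        return prec @ q

    def hmc_step(q, eps=0.15, n_leapfrog=20):
        p = rng.standard_normal(2)                        # resample momentum
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * eps * grad_neg_log_p(q_new)        # half step in momentum
        for _ in range(n_leapfrog):
            q_new += eps * p_new                           # full step in position
            p_new -= eps * grad_neg_log_p(q_new)           # full step in momentum
        p_new += 0.5 * eps * grad_neg_log_p(q_new)         # undo half of the last momentum step
        h_old = neg_log_p(q) + 0.5 * p @ p
        h_new = neg_log_p(q_new) + 0.5 * p_new @ p_new
        return q_new if np.log(rng.random()) < h_old - h_new else q

    q = np.zeros(2)
    draws = []
    for _ in range(2000):
        q = hmc_step(q)
        draws.append(q)
    draws = np.array(draws)
    print("sample covariance:\n", np.cov(draws.T).round(2))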
48

Guan, Shufeng, Lingling Wang, and Chuanwen Jiang. "Optimal scheduling of regional integrated energy system considering multiple uncertainties". E3S Web of Conferences 256 (2021): 02027. http://dx.doi.org/10.1051/e3sconf/202125602027.

Full text
Abstract
Integrated energy systems (IES) are an effective way to achieve efficient energy utilization. Under a deregulated electricity market, the IES operator earns profit by providing customers with energy services, including electricity, heating, or cooling. With deepening market reform and a higher penetration of renewable energy, economic risks are embedded in the IES. On this basis, an optimal scheduling model of a regional IES considering uncertainties is proposed, aiming to maximize profit. A scenario analysis method is adopted to model the uncertainties: Markov chain Monte Carlo (MCMC) sampling, which performs well in fitting the probability distribution, is used to generate scenarios, and K-means clustering is applied to narrow down the sampling sets. By replacing the parameters in the deterministic model with the sampled sets, a series of optimal results can be obtained. The case study shows that the cooling storage tank can improve the economic benefit by about 4.97% by converting electricity to cooling energy during low-price periods and releasing energy at peak hours. Moreover, through the proposed optimization model, operators can gain a clear understanding of the risk introduced by the uncertainties, and a more reliable scheduling result is obtained for reference.
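The scenario workflow (MCMC sampling followed by K-means reduction) can be sketched as below: a random-walk Metropolis sampler draws scenario samples of two uncertain factors, and K-means collapses them into a few representative scenarios with empirical probabilities. The two-dimensional target (wind and load factors), its parameters, the cluster count, and the use of scikit-learn are all illustrative assumptions.

    # Minimal sketch: MCMC scenario generation + K-means scenario reduction.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    mu = np.array([0.4, 0.7])                       # mean wind factor, mean load factor (made up)
    cov = np.array([[0.02, -0.005], [-0.005, 0.01]])
    prec = np.linalg.inv(cov)

    def log_target(x):
        if np.any(x < 0) or np.any(x > 1):           # factors restricted to [0, 1]
            return -np.inf
        d = x - mu
        return -0.5 * d @ prec @ d

    x = mu.copy()
    lp = log_target(x)
    samples = []
    for it in range(30000):
        prop = x + rng.normal(0, 0.05, 2)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        if it > 5000 and it % 5 == 0:
            samples.append(x.copy())
    samples = np.array(samples)

    # K-means reduction to 10 representative scenarios with empirical probabilities.
    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(samples)
    probs = np.bincount(km.labels_, minlength=10) / len(samples)
    for centre, p in zip(km.cluster_centers_, probs):
        print("scenario (wind, load) = (%.2f, %.2f), probability = %.3f" % (centre[0], centre[1], p))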
49

Aboutaleb, Youssef M., Mazen Danaf, Yifei Xie, and Moshe E. Ben-Akiva. "Sparse covariance estimation in logit mixture models". Econometrics Journal 24, no. 3 (March 19, 2021): 377–98. http://dx.doi.org/10.1093/ectj/utab008.

Full text
Abstract
This paper introduces a new data-driven methodology for estimating sparse covariance matrices of the random coefficients in logit mixture models. Researchers typically specify covariance matrices in logit mixture models under one of two extreme assumptions: either an unrestricted full covariance matrix (allowing correlations between all random coefficients), or a restricted diagonal matrix (allowing no correlations at all). Our objective is to find optimal subsets of correlated coefficients for which we estimate covariances. We propose a new estimator, called MISC (mixed integer sparse covariance), that uses a mixed-integer optimization (MIO) program to find an optimal block diagonal structure specification for the covariance matrix, corresponding to subsets of correlated coefficients, for any desired sparsity level using Markov Chain Monte Carlo (MCMC) posterior draws from the unrestricted full covariance matrix. The optimal sparsity level of the covariance matrix is determined using out-of-sample validation. We demonstrate the ability of MISC to correctly recover the true covariance structure from synthetic data. In an empirical illustration using a stated preference survey on modes of transportation, we use MISC to obtain a sparse covariance matrix indicating how preferences for attributes are related to one another.
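In the spirit of the approach above, the sketch below takes posterior draws of a full covariance matrix and derives a block-diagonal structure by thresholding average absolute correlations and grouping coefficients into connected components. This greedy stand-in is NOT the mixed-integer MISC estimator; the fake posterior draws, the number of coefficients, and the 0.3 threshold are all illustrative assumptions.

    # Minimal sketch: block structure from posterior covariance draws via thresholding (not MISC).
    import numpy as np

    rng = np.random.default_rng(8)
    K = 5                                            # number of random coefficients
    true_cov = np.eye(K)
    true_cov[0, 1] = true_cov[1, 0] = 0.6            # one correlated pair

    # Fake "MCMC posterior draws" of the covariance matrix.
    draws = []
    for _ in range(500):
        A = rng.multivariate_normal(np.zeros(K), true_cov, size=200)
        draws.append(np.cov(A.T))
    mean_cov = np.mean(draws, axis=0)

    sd = np.sqrt(np.diag(mean_cov))
    corr = mean_cov / np.outer(sd, sd)
    adj = (np.abs(corr) > 0.3) & ~np.eye(K, dtype=bool)   # threshold acts as the sparsity knob

    # Connected components of the thresholded correlation graph = diagonal blocks.
    blocks, unseen = [], set(range(K))
    while unseen:
        stack, comp = [unseen.pop()], set()
        while stack:
            i = stack.pop()
            comp.add(i)
            for j in np.flatnonzero(adj[i]):
                if j in unseen:
                    unseen.remove(j)
                    stack.append(j)
        blocks.append(sorted(comp))

    print("estimated block structure:", blocks)      # expect a {0, 1} block plus singletons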
50

Vrugt, J. A. "DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems". Hydrology and Earth System Sciences Discussions 8, no. 2 (April 26, 2011): 4025–52. http://dx.doi.org/10.5194/hessd-8-4025-2011.

Full text
Abstract
Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and to summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems and the emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a sudoku puzzle and a rainfall–runoff model calibration problem are used to illustrate DREAM(D).
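A minimal sketch inspired by (but much simpler than) DREAM(D) is given below: several chains evolve in parallel on an integer lattice, and each chain proposes a jump built from the rounded difference of two other randomly chosen chains plus a small symmetric integer jitter. The discretized Gaussian target and all tuning constants are illustrative assumptions, not the algorithm or test problems of the paper.

    # Minimal multi-chain, differential-evolution-style Metropolis sketch on an integer lattice.
    import numpy as np

    rng = np.random.default_rng(9)
    dim, n_chains, gamma = 2, 8, 0.8
    mu = np.array([3, -2])

    def log_p(x):                                    # discretized Gaussian target on the integer lattice
        return -0.5 * np.sum((x - mu) ** 2) / 4.0

    X = rng.integers(-10, 11, size=(n_chains, dim)).astype(float)
    lp = np.array([log_p(x) for x in X])
    history = []
    for it in range(3000):
        for i in range(n_chains):
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
            jump = np.round(gamma * (X[r1] - X[r2])) + rng.integers(-1, 2, size=dim)
            prop = X[i] + jump                        # proposal stays on the integer lattice
            lp_prop = log_p(prop)
            if np.log(rng.random()) < lp_prop - lp[i]:   # Metropolis acceptance (symmetric proposal)
                X[i], lp[i] = prop, lp_prop
        if it > 500:
            history.append(X.copy())

    history = np.concatenate(history)
    print("posterior mean (should be near [3, -2]):", history.mean(axis=0).round(2))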
