Journal articles on the topic 'Bayesian Simulated Inference'

Consult the top 50 journal articles for your research on the topic 'Bayesian Simulated Inference.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Beaumont, Mark A., Wenyang Zhang, and David J. Balding. "Approximate Bayesian Computation in Population Genetics." Genetics 162, no. 4 (December 1, 2002): 2025–35. http://dx.doi.org/10.1093/genetics/162.4.2025.

Abstract:
We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations. This is achieved by fitting a local-linear regression of simulated parameter values on simulated summary statistics, and then substituting the observed summary statistics into the regression equation. The method combines many of the advantages of Bayesian statistical inference with the computational efficiency of methods based on summary statistics. A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without difficulty. Simulation results indicate computational and statistical efficiency that compares favorably with those of alternative methods previously proposed in the literature. We also compare the relative efficiency of inferences obtained using methods based on summary statistics with those obtained directly from the data using MCMC.
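The accept-then-adjust logic described in this abstract can be illustrated with a short, self-contained sketch. The simulator, prior, observed summary value, and tolerance below are toy placeholders rather than the paper's population-genetics setting; only the rejection step followed by the local-linear regression adjustment follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(theta, n=100):
    # Placeholder simulator: the summary statistic is the mean of n draws.
    return rng.normal(theta, 1.0, size=n).mean()

s_obs = 0.3                                   # observed summary statistic (toy value)
theta = rng.uniform(-5, 5, size=50_000)       # parameter draws from a flat prior
s_sim = np.array([simulate_summary(t) for t in theta])

# Rejection step: keep the closest simulations, weighted by an Epanechnikov kernel.
d = np.abs(s_sim - s_obs)
eps = np.quantile(d, 0.01)
keep = d <= eps
w = 1.0 - (d[keep] / eps) ** 2

# Local-linear regression of accepted parameter values on accepted summaries,
# then shift the accepted draws to the observed summary (regression adjustment).
X = np.column_stack([np.ones(keep.sum()), s_sim[keep] - s_obs])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta[keep]))
theta_adj = theta[keep] - beta[1] * (s_sim[keep] - s_obs)

print("approximate posterior mean:", np.average(theta_adj, weights=w))
```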
2

Creel, Michael. "Inference Using Simulated Neural Moments." Econometrics 9, no. 4 (September 24, 2021): 35. http://dx.doi.org/10.3390/econometrics9040035.

Abstract:
This paper studies method of simulated moments (MSM) estimators that are implemented using Bayesian methods, specifically Markov chain Monte Carlo (MCMC). Motivation and theory for the methods are provided by Chernozhukov and Hong (2003). The paper shows, experimentally, that confidence intervals using these methods may have coverage which is far from the nominal level, a result which has parallels in the literature that studies overidentified GMM estimators. A neural network may be used to reduce the dimension of an initial set of moments to the minimum number that maintains identification, as in Creel (2017). When MSM-MCMC estimation and inference are based on such moments, and a continuously updating criterion function is used, confidence intervals have statistically correct coverage in all cases studied. The methods are illustrated by application to several test models, including a small DSGE model, and to a jump-diffusion model for returns of the S&P 500 index.
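As a rough illustration of the Chernozhukov–Hong-style MSM-MCMC approach the abstract builds on, the sketch below samples from a quasi-posterior defined by a continuously updated moment criterion. The moment conditions, data, prior, and tuning constants are invented for the example; the paper's neural-network moment reduction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(1.0, 2.0, size=500)              # toy data with mean 1 and sd 2

def moments(theta, data):
    mu, sigma = theta
    return np.column_stack([data - mu, (data - mu) ** 2 - sigma ** 2])

def cu_criterion(theta, data):
    # Continuously updated criterion: the weight matrix is re-estimated at each theta.
    g = moments(theta, data)
    gbar = g.mean(axis=0)
    W = np.linalg.inv(np.cov(g, rowvar=False) / len(data))
    return -0.5 * gbar @ W @ gbar

def log_quasi_post(theta, data):
    if theta[1] <= 0:                            # flat prior subject to sigma > 0
        return -np.inf
    return cu_criterion(theta, data)

# Random-walk Metropolis on the quasi-posterior (Laplace-type estimation).
theta, samples = np.array([0.0, 1.0]), []
lp = log_quasi_post(theta, y)
for _ in range(20_000):
    prop = theta + rng.normal(scale=0.05, size=2)
    lp_prop = log_quasi_post(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print("quasi-posterior means:", np.mean(samples[5_000:], axis=0))
```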
3

Flury, Thomas, and Neil Shephard. "BAYESIAN INFERENCE BASED ONLY ON SIMULATED LIKELIHOOD: PARTICLE FILTER ANALYSIS OF DYNAMIC ECONOMIC MODELS." Econometric Theory 27, no. 5 (May 17, 2011): 933–56. http://dx.doi.org/10.1017/s0266466610000599.

Abstract:
We note that likelihood inference can be based on an unbiased simulation-based estimator of the likelihood when it is used inside a Metropolis–Hastings algorithm. This result has recently been introduced in the statistics literature by Andrieu, Doucet, and Holenstein (2010, Journal of the Royal Statistical Society, Series B, 72, 269–342) and is perhaps surprising given the results on maximum simulated likelihood estimation. Bayesian inference based on simulated likelihood can be widely applied in microeconomics, macroeconomics, and financial econometrics. One way of generating unbiased estimates of the likelihood is through a particle filter. We illustrate these methods on four problems, producing rather generic methods. Taken together, these methods imply that if we can simulate from an economic model, we can carry out likelihood-based inference using its simulations.
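A bare-bones pseudo-marginal Metropolis–Hastings loop conveys the key mechanism the abstract describes: an unbiased, noisy likelihood estimate can replace the exact likelihood inside the accept/reject step. Here the particle filter is replaced by a much simpler Monte Carlo estimator for a toy latent-variable model, so treat this purely as a sketch of the construction, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.2, size=100)             # toy observations

def loglik_hat(theta, data, n_particles=30):
    # Unbiased Monte Carlo estimate of the likelihood for a toy model in which
    # y_i | x_i ~ N(x_i, 1) and x_i ~ N(theta, 0.5^2); a particle filter would
    # play this role in a dynamic model.
    x = rng.normal(theta, 0.5, size=(len(data), n_particles))
    like = np.exp(-0.5 * (data[:, None] - x) ** 2) / np.sqrt(2 * np.pi)
    return np.log(like.mean(axis=1)).sum()

def logprior(theta):
    return -0.5 * theta ** 2                   # N(0, 1) prior, up to a constant

theta, samples = 0.0, []
lp = loglik_hat(theta, y) + logprior(theta)
for _ in range(5_000):
    prop = theta + rng.normal(scale=0.15)
    lp_prop = loglik_hat(prop, y) + logprior(prop)
    # The noisy estimate at the current point is kept, not recomputed: this is
    # what makes the chain target the exact posterior despite the noisy likelihood.
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1_000:]))
```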
4

Hu, Zheng Dong, Liu Xin Zhang, Fei Yue Zhou, and Zhi Jun Li. "Statistic Inference for Inertial Instrumentation Error Model Using Bayesian Network." Applied Mechanics and Materials 392 (September 2013): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amm.392.719.

Abstract:
For the parameter estimation problem of inertial instrumentation error models, a Bayesian network is constructed in this paper to fuse the calibration data and draw statistical inferences about the error coefficients. First, the fundamentals of Bayesian networks are stated, and then the construction of a network for a typical case of inertial instrumentation error coefficient estimation is illustrated. Since the difficult high-dimensional integration over the model parameters can be avoided, the WinBUGS software, which is based on the MCMC method, is used for calculation and inference. The simulation results show that using a Bayesian network for statistical inference on inertial instrumentation error models is reasonable and effective.
5

Jeffrey, Niall, and Filipe B. Abdalla. "Parameter inference and model comparison using theoretical predictions from noisy simulations." Monthly Notices of the Royal Astronomical Society 490, no. 4 (October 18, 2019): 5749–56. http://dx.doi.org/10.1093/mnras/stz2930.

Abstract:
When inferring unknown parameters or comparing different models, data must be compared to underlying theory. Even if a model has no closed-form solution to derive summary statistics, it is often still possible to simulate mock data in order to generate theoretical predictions. For realistic simulations of noisy data, this is identical to drawing realizations of the data from a likelihood distribution. Though the estimated summary statistic from simulated data vectors may be unbiased, the estimator has variance that should be accounted for. We show how to correct the likelihood in the presence of an estimated summary statistic by marginalizing over the true summary statistic in the framework of a Bayesian hierarchical model. For Gaussian likelihoods where the covariance must also be estimated from simulations, we present an alteration to the Sellentin–Heavens corrected likelihood. We show that excluding the proposed correction leads to an incorrect estimate of the Bayesian evidence with Joint Light-Curve Analysis data. The correction is highly relevant for cosmological inference that relies on simulated data for theory (e.g. weak lensing peak statistics and simulated power spectra) and can reduce the number of simulations required.
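One simple way to see the effect of a noisy, simulation-estimated theory prediction on a Gaussian likelihood is sketched below: if the theory mean is estimated by averaging simulated data vectors, the residual covariance is inflated by the covariance of that estimated mean. This is only a hedged, Gaussian-analytic illustration of the general point in the abstract; it is not the paper's full hierarchical treatment nor the modified Sellentin–Heavens likelihood.

```python
import numpy as np

def loglike_estimated_theory(d, mu_sims, data_cov):
    """Gaussian log-likelihood of data d when the theory mean is the average of
    n_s simulated data vectors (mu_sims has shape (n_s, p)).

    Marginalising over the true mean under Gaussian assumptions inflates the
    covariance by the covariance of the simulation-estimated mean."""
    n_s, p = mu_sims.shape
    mu_hat = mu_sims.mean(axis=0)
    sim_cov_of_mean = np.cov(mu_sims, rowvar=False) / n_s
    cov = data_cov + sim_cov_of_mean
    r = d - mu_hat
    sign, logdet = np.linalg.slogdet(2 * np.pi * cov)
    return -0.5 * (r @ np.linalg.solve(cov, r) + logdet)

# Toy usage: 5-dimensional data vector, theory predicted from 40 noisy simulations.
rng = np.random.default_rng(3)
d = rng.normal(1.0, 0.2, size=5)
mu_sims = rng.normal(1.0, 0.2, size=(40, 5))
print(loglike_estimated_theory(d, mu_sims, data_cov=0.04 * np.eye(5)))
```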
6

de Campos, Luis M., José A. Gámez, and Serafín Moral. "Partial abductive inference in Bayesian belief networks by simulated annealing." International Journal of Approximate Reasoning 27, no. 3 (September 2001): 263–83. http://dx.doi.org/10.1016/s0888-613x(01)00043-3.

7

Cardial, Marcílio Ramos Pereira, Juliana Betini Fachini-Gomes, and Eduardo Yoshio Nakano. "Exponentiated Discrete Weibull Distribution for Censored Data." Revista Brasileira de Biometria 38, no. 1 (March 28, 2020): 35. http://dx.doi.org/10.28951/rbb.v38i1.425.

Abstract:
This paper further develops the statistical inference procedure of the exponentiated discrete Weibull (EDW) distribution for censored data. This generalization of the discrete Weibull distribution has the advantage of being suitable for modeling non-monotone failure rates, such as bathtub-shaped and unimodal rates. Inferences about the EDW distribution are presented using both frequentist and Bayesian approaches. In addition, the classical Likelihood Ratio Test and a Full Bayesian Significance Test (FBST) were performed to test the parameters of the EDW distribution. The method presented is applied to simulated data and illustrated with a real dataset regarding patients diagnosed with head and neck cancer.
8

Üstündağ, Dursun, and Mehmet Cevri. "Recovering Sinusoids from Noisy Data Using Bayesian Inference with Simulated Annealing." Mathematical and Computational Applications 16, no. 2 (August 1, 2011): 382–91. http://dx.doi.org/10.3390/mca16020382.

9

Pascual-Izarra, C., and G. García. "Simulated annealing and Bayesian inference applied to experimental stopping force determination." Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 228, no. 1-4 (January 2005): 388–91. http://dx.doi.org/10.1016/j.nimb.2004.10.076.

10

Sha, Naijun, and Hao Yang Teng. "A Bayes Inference for Step-Stress Accelerated Life Testing." International Journal of Statistics and Probability 6, no. 6 (September 15, 2017): 1. http://dx.doi.org/10.5539/ijsp.v6n6p1.

Abstract:
In this article, we present a Bayesian analysis with convex tent priors for step-stress accelerated life testing (SSALT) using a proportional hazard (PH) model. Being as flexible as the cumulative exposure (CE) model in fitting step-stress data, and having attractive mathematical properties, the PH model makes Bayesian inference much more accessible than the CE model. Two sampling methods based on Markov chain Monte Carlo algorithms are employed for posterior inference of parameters. The performance of the methodology is investigated using both simulated and real data sets.
11

Achcar, Jorge Alberto, and Fernando Antonio Moala. "Use of copula functions for the reliability of series systems." International Journal of Quality & Reliability Management 32, no. 6 (June 1, 2015): 617–34. http://dx.doi.org/10.1108/ijqrm-10-2013-0161.

Abstract:
Purpose – The purpose of this paper is to provide a new method to estimate the reliability of series systems by using copula functions. This problem is of great interest in industrial and engineering applications. Design/methodology/approach – The authors introduce copula functions and consider a Bayesian analysis for the proposed models with application to simulated data. Findings – The use of copula functions for modeling the bivariate distribution could be a good alternative to estimate the reliability of a two-component series system. From the results of this study, the authors observe that they obtain accurate Bayesian inferences for the reliability function for large sample sizes. The Bayesian parametric models proposed also allow the assessment of system reliability for multicomponent systems simultaneously. Originality/value – Studies in systems reliability engineering usually assume independence among the component lifetimes; in this approach the authors consider a dependence structure. Using standard classical inference methods based on the asymptotic normality of the maximum likelihood estimators, one could face great computational difficulties and possibly inaccurate inference results, problems that do not arise in the proposed approach.
12

Azzolina, Danila, Giulia Lorenzoni, Silvia Bressan, Liviana Da Dalt, Ileana Baldi, and Dario Gregori. "Handling Poor Accrual in Pediatric Trials: A Simulation Study Using a Bayesian Approach." International Journal of Environmental Research and Public Health 18, no. 4 (February 21, 2021): 2095. http://dx.doi.org/10.3390/ijerph18042095.

Abstract:
In the conduct of trials, difficulties in recruiting the sample size planned in the study design are common. A Bayesian analysis of such trials might provide a framework to combine prior evidence with current evidence, and it is an approach accepted by regulatory agencies. However, especially for small trials, the Bayesian inference may be severely conditioned by the prior choices. The Renal Scarring Urinary Infection (RESCUE) trial, a pediatric trial that was a candidate for early termination due to underrecruitment, served as a motivating example to investigate the effects of the prior choices on small trial inference. The trial outcomes were simulated by assuming 50 scenarios combining different sample sizes and true absolute risk reduction (ARR). The simulated data were analyzed via the Bayesian approach using 0%, 50%, and 100% discounting factors on the beta power prior. An informative inference (0% discounting) on small samples could generate data-insensitive results. Instead, the 50% discounting factor ensured that the probability of confirming the trial outcome was higher than 80%, but only for an ARR higher than 0.17. A suitable option to keep the data relevant to the trial inference is to define a discounting factor based on the prior parameters. Nevertheless, a sensitivity analysis of the prior choices is highly recommended.
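A conjugate sketch of a beta power prior with a discounting factor for a binary endpoint, as described in the abstract, is shown below: historical successes and failures are down-weighted by delta before being combined with the current trial data. The numbers are illustrative, not the RESCUE trial values.

```python
from scipy import stats

def power_prior_posterior(x_hist, n_hist, x_new, n_new, delta, a0=1.0, b0=1.0):
    """Beta posterior for a response probability when historical data are
    discounted by 0 <= delta <= 1 (delta=0: ignore history, delta=1: pool fully)."""
    a = a0 + delta * x_hist + x_new
    b = b0 + delta * (n_hist - x_hist) + (n_new - x_new)
    return stats.beta(a, b)

# Illustrative counts (hypothetical historical and current trial data):
for delta in (0.0, 0.5, 1.0):
    post = power_prior_posterior(x_hist=30, n_hist=100, x_new=8, n_new=40, delta=delta)
    print(f"delta={delta:3.1f}  posterior mean={post.mean():.3f}  "
          f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```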
13

Dutta, Ritabrata, Antonietta Mira, and Jukka-Pekka Onnela. "Bayesian inference of spreading processes on networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2215 (July 2018): 20180129. http://dx.doi.org/10.1098/rspa.2018.0129.

Abstract:
Infectious diseases are studied to understand their spreading mechanisms, to evaluate control strategies and to predict the risk and course of future outbreaks. Because people only interact with few other individuals, and the structure of these interactions influence spreading processes, the pairwise relationships between individuals can be usefully represented by a network. Although the underlying transmission processes are different, the network approach can be used to study the spread of pathogens in a contact network or the spread of rumours in a social network. We study simulated simple and complex epidemics on synthetic networks and on two empirical networks, a social/contact network in an Indian village and an online social network. Our goal is to learn simultaneously the spreading process parameters and the first infected node, given a fixed network structure and the observed state of nodes at several time points. Our inference scheme is based on approximate Bayesian computation, a likelihood-free inference technique. Our method is agnostic about the network topology and the spreading process. It generally performs well and, somewhat counter-intuitively, the inference problem appears to be easier on more heterogeneous network topologies, which enhances its future applicability to real-world settings where few networks have homogeneous topologies.
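The likelihood-free scheme sketched below mirrors the structure described in the abstract: draw the spreading rate and the seed node from their priors, simulate an outbreak on the fixed network, and accept draws whose simulated state is close to the observed one. The simple discrete-time SI simulator, priors, distance, and tolerance are all placeholder choices, not the paper's models or its ABC algorithm.

```python
import random
import networkx as nx

def simulate_si(G, beta, source, steps=10, seed=None):
    # Very simple discrete-time SI process: each infected node infects each
    # susceptible neighbour with probability beta per step.
    rng = random.Random(seed)
    infected = {source}
    for _ in range(steps):
        new = {v for u in infected for v in G.neighbors(u)
               if v not in infected and rng.random() < beta}
        infected |= new
    return infected

def abc_rejection(G, observed, n_draws=5000, eps=5, seed=0):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        beta = rng.uniform(0, 1)                 # prior on the infection rate
        source = rng.choice(list(G.nodes))       # prior on the first infected node
        sim = simulate_si(G, beta, source)
        # Distance: size of the symmetric difference between infected sets.
        if len(sim ^ observed) <= eps:
            accepted.append((beta, source))
    return accepted

G = nx.gnp_random_graph(100, 0.05, seed=1)
observed = simulate_si(G, beta=0.3, source=0, seed=2)   # pseudo-observed outbreak
posterior_draws = abc_rejection(G, observed)
print(len(posterior_draws), "accepted (beta, source) pairs")
```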
14

Gosky, Ross M., and Joel Sanqui. "A Simulation Study on Increasing Capture Periods in Bayesian Closed Population Capture-Recapture Models with Heterogeneity." Journal of Modern Applied Statistical Methods 18, no. 1 (April 28, 2020): 2–23. http://dx.doi.org/10.22237/jmasm/1556668920.

Abstract:
Capture-recapture models are useful in estimating unknown population sizes. A common modeling challenge for closed population models involves modeling unequal animal catchability in each capture period, referred to as animal heterogeneity. Inference about population size N depends on the assumed distribution of animal capture probabilities in the population, and different models can fit a data set equally well yet provide contradictory inferences about N. Three common Bayesian capture-recapture heterogeneity models are studied with simulated data to examine how prevalent contradictory inferences are for different population sizes with relatively low capture probabilities, specifically at different numbers of capture periods in the study.
15

Imai, Kosuke, Ying Lu, and Aaron Strauss. "Bayesian and Likelihood Inference for 2 × 2 Ecological Tables: An Incomplete-Data Approach." Political Analysis 16, no. 1 (August 13, 2007): 41–69. http://dx.doi.org/10.1093/pan/mpm017.

Abstract:
Ecological inference is a statistical problem where aggregate-level data are used to make inferences about individual-level behavior. In this article, we conduct a theoretical and empirical study of Bayesian and likelihood inference for 2 × 2 ecological tables by applying the general statistical framework of incomplete data. We first show that the ecological inference problem can be decomposed into three factors: distributional effects, which address the possible misspecification of parametric modeling assumptions about the unknown distribution of missing data; contextual effects, which represent the possible correlation between missing data and observed variables; and aggregation effects, which are directly related to the loss of information caused by data aggregation. We then examine how these three factors affect inference and offer new statistical methods to address each of them. To deal with distributional effects, we propose a nonparametric Bayesian model based on a Dirichlet process prior, which relaxes common parametric assumptions. We also identify the statistical adjustments necessary to account for contextual effects. Finally, although little can be done to cope with aggregation effects, we offer a method to quantify the magnitude of such effects in order to formally assess its severity. We use simulated and real data sets to empirically investigate the consequences of these three factors and to evaluate the performance of our proposed methods. C code, along with an easy-to-use R interface, is publicly available for implementing our proposed methods (Imai, Lu, and Strauss, forthcoming).
16

Navascués, Miguel, Raphaël Leblois, and Concetta Burgarella. "Demographic inference through approximate-Bayesian-computation skyline plots." PeerJ 5 (July 18, 2017): e3530. http://dx.doi.org/10.7717/peerj.3530.

Abstract:
The skyline plot is a graphical representation of historical effective population sizes as a function of time. Past population sizes for these plots are estimated from genetic data, without a priori assumptions on the mathematical function defining the shape of the demographic trajectory. Because of this flexibility in shape, skyline plots can, in principle, provide realistic descriptions of the complex demographic scenarios that occur in natural populations. Currently, demographic estimates needed for skyline plots are estimated using coalescent samplers or a composite likelihood approach. Here, we provide a way to estimate historical effective population sizes using an Approximate Bayesian Computation (ABC) framework. We assess its performance using simulated and actual microsatellite datasets. Our method correctly retrieves the signal of contracting, constant and expanding populations, although the graphical shape of the plot is not always an accurate representation of the true demographic trajectory, particularly for recent changes in size and contracting populations. Because of the flexibility of ABC, similar approaches can be extended to other types of data, to multiple populations, or to other parameters that can change through time, such as the migration rate.
17

Wang, Yaxuan, Huw A. Ogilvie, and Luay Nakhleh. "Practical Speedup of Bayesian Inference of Species Phylogenies by Restricting the Space of Gene Trees." Molecular Biology and Evolution 37, no. 6 (February 20, 2020): 1809–18. http://dx.doi.org/10.1093/molbev/msaa045.

Abstract:
Species tree inference from multilocus data has emerged as a powerful paradigm in the postgenomic era, both in terms of the accuracy of the species tree it produces as well as in terms of elucidating the processes that shaped the evolutionary history. Bayesian methods for species tree inference are desirable in this area as they have been shown not only to yield accurate estimates, but also to naturally provide measures of confidence in those estimates. However, the heavy computational requirements of Bayesian inference have limited the applicability of such methods to very small data sets. In this article, we show that the computational efficiency of Bayesian inference under the multispecies coalescent can be improved in practice by restricting the space of the gene trees explored during the random walk, without sacrificing accuracy as measured by various metrics. The idea is to first infer constraints on the trees of the individual loci in the form of unresolved gene trees, and then to restrict the sampler to consider only resolutions of the constrained trees. We demonstrate the improvements gained by such an approach on both simulated and biological data.
18

Luo, Ruikun, Yifan Wang, Yifan Weng, Victor Paul, Mark J. Brudnak, Paramsothy Jayakumar, Matt Reed, Jeffrey L. Stein, Tulga Ersal, and X. Jessie Yang. "Toward Real-time Assessment of Workload: A Bayesian Inference Approach." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 196–200. http://dx.doi.org/10.1177/1071181319631293.

Abstract:
Workload management is of critical concern in teleoperation of unmanned vehicles, because high workload can lead to sub-optimal task performance and can harm human operators’ long-term well-being. In the present study, we conducted a human-in-the-loop experiment, where the human operator teleoperated a simulated High Mobility Multipurpose Wheeled Vehicle (HMMWV) and performed a secondary visual search task. We measured participants’ gaze trajectory and pupil size, based on which their workload level was estimated. We proposed and tested a Bayesian inference (BI) model for assessing workload in real time. Results show that the BI model can achieve an encouraging 0.69 F1 score, 0.70 precision, and 0.69 recall.
19

Abdo, Ammar, Naomie Salim, and Ali Ahmed. "Implementing Relevance Feedback in Ligand-Based Virtual Screening Using Bayesian Inference Network." Journal of Biomolecular Screening 16, no. 9 (August 23, 2011): 1081–88. http://dx.doi.org/10.1177/1087057111416658.

Abstract:
Recently, the use of the Bayesian network as an alternative to existing tools for similarity-based virtual screening has received noticeable attention from researchers in the chemoinformatics field. The main aim of the Bayesian network model is to improve the retrieval effectiveness of similarity-based virtual screening. To this end, different models of the Bayesian network have been developed. In our previous works, the retrieval performance of the Bayesian network was observed to improve significantly when multiple reference structures or fragment weightings were used. In this article, the authors enhance the Bayesian inference network (BIN) using the relevance feedback information. In this approach, a few high-ranking structures of unknown activity were filtered from the outputs of BIN, based on a single active reference structure, to form a set of active reference structures. This set of active reference structures was used in two distinct techniques for carrying out such BIN searching: reweighting the fragments in the reference structures and group fusion techniques. Simulated virtual screening experiments with three MDL Drug Data Report data sets showed that the proposed techniques provide simple ways of enhancing the cost-effectiveness of ligand-based virtual screening searches, especially for higher diversity data sets.
20

Ekmekci, Canberk, and Mujdat Cetin. "Model-Based Bayesian Deep Learning Architecture for Linear Inverse Problems in Computational Imaging." Electronic Imaging 2021, no. 15 (January 18, 2021): 201–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.15.coimg-201.

Abstract:
We propose a neural network architecture combined with specific training and inference procedures for linear inverse problems arising in computational imaging to reconstruct the underlying image and to represent the uncertainty about the reconstruction. The proposed architecture is built from the model-based reconstruction perspective, which enforces data consistency and eliminates the artifacts in an alternating manner. The training and the inference procedures are based on performing approximate Bayesian analysis on the weights of the proposed network using a variational inference method. The proposed architecture with the associated inference procedure is capable of characterizing uncertainty while performing reconstruction with a model-based approach. We tested the proposed method on a simulated magnetic resonance imaging experiment. We showed that the proposed method achieved an adequate reconstruction capability and provided reliable uncertainty estimates in the sense that the regions having high uncertainty provided by the proposed method are likely to be the regions where reconstruction errors occur.
21

Park, Seongyu, Samudrajit Thapa, Yeongjin Kim, Michael A. Lomholt, and Jae-Hyung Jeon. "Bayesian inference of Lévy walks via hidden Markov models." Journal of Physics A: Mathematical and Theoretical 54, no. 48 (November 10, 2021): 484001. http://dx.doi.org/10.1088/1751-8121/ac31a1.

Abstract:
The Lévy walk (LW) is a non-Brownian random walk model that has been found to describe anomalous dynamic phenomena in diverse fields ranging from biology over quantum physics to ecology. Recurring problems are to examine whether observed data are successfully quantified by a model classified as an LW and to extract the best model parameters in accordance with the data. Motivated by such needs, we propose a hidden Markov model for LWs and computationally realize and test the corresponding Bayesian inference method. We introduce a Markovian decomposition scheme to approximate a renewal process governed by a power-law waiting time distribution. Using this, we construct the likelihood function of LWs based on a hidden Markov model and the forward algorithm. With the LW trajectories simulated at various conditions, we perform the Bayesian inference for parameter estimation and model classification. We show that the power-law exponent of the flight-time distribution can be successfully extracted even at the condition that the mean-squared displacement does not display the expected scaling exponent due to the noise or insufficient trajectory length. It is also demonstrated that the Bayesian method performs remarkably well in inferring LW trajectories from a given unclassified trajectory data set if the noise level is moderate.
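The abstract's likelihood construction rests on the forward algorithm for a hidden Markov model. Below is a generic, hedged log-space forward pass for a K-state HMM with Gaussian emissions; the two-state parameters are made up, and the paper's Markovian decomposition of power-law waiting times is not reproduced here.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def forward_loglik(obs, log_pi, log_A, means, sds):
    """Log-likelihood of a 1-D observation sequence under a K-state HMM
    with Gaussian emissions, computed with the forward algorithm."""
    log_b = norm.logpdf(obs[:, None], loc=means, scale=sds)   # (T, K) emission terms
    alpha = log_pi + log_b[0]
    for t in range(1, len(obs)):
        # alpha[j] at time t sums over previous states i of alpha[i] + log A[i, j].
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_b[t]
    return logsumexp(alpha)

# Toy two-state example with made-up parameters.
rng = np.random.default_rng(4)
obs = rng.normal(0.0, 1.0, size=200)
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.95, 0.05], [0.10, 0.90]])
print(forward_loglik(obs, log_pi, log_A, np.array([0.0, 2.0]), np.array([1.0, 1.0])))
```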
22

González Burgos, Jorge. "Bayesian methods in psychological research: the case of IRT." International Journal of Psychological Research 3, no. 1 (June 30, 2010): 163–75. http://dx.doi.org/10.21500/20112084.861.

Abstract:
Bayesian methods have become increasingly popular in the social sciences due to their flexibility in accommodating numerous models from different fields. The domain of item response theory is a good example of fruitful research, incorporating new developments and models in recent years, which are being estimated using the Bayesian approach. This is partly because of the availability of free software such as WinBUGS and R, which have permitted researchers to explore new possibilities. In this paper we outline the Bayesian inference for some IRT models. It is briefly explained how the Bayesian method works. The implementation of Bayesian estimation in conventional software is discussed and sets of code for running the analyses are provided. All the applications are exemplified using simulated and real data sets.
23

Goodwin, Thomas, Christian Evenhuis, Stephen Woodcock, and Matias Quiroz. "Bayesian Inference on the Keller–Segel Model." ANZIAM Journal 61 (August 10, 2020): C181–C196. http://dx.doi.org/10.21914/anziamj.v61i0.15185.

Abstract:
The Keller–Segel (KS) model is a system of partial differential equations that describe chemotaxis, the way cells move in response to chemical stimulus. Simulated data in the form of cell counts are used to carry out Bayesian inference on the KS model. A Bayesian analysis of the KS model is performed on three sets of initial conditions. First, the KS model is solved numerically using a finite difference method and Bayesian inference is performed on parameters of the model such as the cell diffusion and chemical sensitivity. We investigate the predictive posterior distribution of future data and the convergence of the 95% credible interval of cell diffusion at different grid sizes using the three different initial conditions.
24

Godsey, Brian. "Discovery of miR-mRNA interactions via simultaneous Bayesian inference of gene networks and clusters using sequence-based predictions and expression data." Journal of Integrative Bioinformatics 10, no. 1 (March 1, 2013): 33–45. http://dx.doi.org/10.1515/jib-2013-227.

Abstract:
MicroRNAs (miRs) are known to interfere with mRNA expression, and much work has been put into predicting and inferring miR-mRNA interactions. Both sequence-based interaction predictions as well as interaction inference based on expression data have been proven somewhat successful; furthermore, models that combine the two methods have had even more success. In this paper, I further refine and enrich the methods of miR-mRNA interaction discovery by integrating a Bayesian clustering algorithm into a model of prediction-enhanced miR-mRNA target inference, creating an algorithm called PEACOAT, which is written in the R language. I show that PEACOAT improves the inference of miR-mRNA target interactions using both simulated data and a data set of microarrays from samples of multiple myeloma patients. In simulated networks of 25 miRs and mRNAs, our methods using clustering can improve inference in roughly two-thirds of cases, and in the multiple myeloma data set, KEGG pathway enrichment was found to be more significant with clustering than without. Our findings are consistent with previous work in clustering of non-miR genetic networks and indicate that there could be a significant advantage to clustering of miR and mRNA expression data as a part of interaction inference.
25

Gu, A., X. Huang, W. Sheu, G. Aldering, A. S. Bolton, K. Boone, A. Dey, et al. "GIGA-Lens: Fast Bayesian Inference for Strong Gravitational Lens Modeling." Astrophysical Journal 935, no. 1 (August 1, 2022): 49. http://dx.doi.org/10.3847/1538-4357/ac6de4.

Abstract:
We present GIGA-Lens: a gradient-informed, GPU-accelerated Bayesian framework for modeling strong gravitational lensing systems, implemented in TensorFlow and JAX. The three components, optimization using multistart gradient descent, posterior covariance estimation with variational inference, and sampling via Hamiltonian Monte Carlo, all take advantage of gradient information through automatic differentiation and massive parallelization on graphics processing units (GPUs). We test our pipeline on a large set of simulated systems and demonstrate in detail its high level of performance. The average time to model a single system on four Nvidia A100 GPUs is 105 s. The robustness, speed, and scalability offered by this framework make it possible to model the large number of strong lenses found in current surveys and present a very promising prospect for the modeling of O(10^5) lensing systems expected to be discovered in the era of the Vera C. Rubin Observatory, Euclid, and the Nancy Grace Roman Space Telescope.
26

Lin, En-Tzu, Fergus Hayes, Gavin P. Lamb, Ik Siong Heng, Albert K. H. Kong, Michael J. Williams, Surojit Saha, and John Veitch. "A Bayesian Inference Framework for Gamma-ray Burst Afterglow Properties." Universe 7, no. 9 (September 17, 2021): 349. http://dx.doi.org/10.3390/universe7090349.

Abstract:
In the field of multi-messenger astronomy, Bayesian inference is commonly adopted to compare the compatibility of models given the observed data. However, to describe a physical system like neutron star mergers and their associated gamma-ray burst (GRB) events, usually more than ten physical parameters are incorporated in the model. With such a complex model, likelihood evaluation for each Monte Carlo sampling point becomes a massive task and requires a significant amount of computational power. In this work, we perform quick parameter estimation on simulated GRB X-ray light curves using an interpolated physical GRB model. This is achieved by generating a grid of GRB afterglow light curves across the parameter space and replacing the likelihood with a simple interpolation function in the high-dimensional grid that stores all light curves. This framework, compared to the original method, leads to a ∼90× speedup per likelihood estimation. It will allow us to explore different jet models and enable fast model comparison in the future.
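The speedup idea in the abstract, precompute model light curves on a parameter grid once and replace the expensive model call inside the likelihood with interpolation, can be sketched as follows. The "afterglow model", grid ranges, and Gaussian noise level are placeholders; only the precompute-then-interpolate pattern reflects the abstract.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

t = np.linspace(0.1, 10, 50)                      # observation times

def expensive_model(log_E, p):
    # Placeholder for a physical afterglow model (a simple power-law decay).
    return log_E - p * np.log10(t)

# Precompute the model on a coarse parameter grid (done once, offline).
log_E_grid = np.linspace(50, 54, 41)
p_grid = np.linspace(1.0, 3.0, 21)
table = np.array([[expensive_model(e, p) for p in p_grid] for e in log_E_grid])
interp = RegularGridInterpolator((log_E_grid, p_grid), table)

def fast_loglike(theta, data, sigma=0.1):
    # Likelihood built from the interpolated light curve instead of the full model.
    model = interp(theta.reshape(1, -1))[0]
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

data = expensive_model(52.0, 2.2) + np.random.default_rng(5).normal(0, 0.1, t.size)
print(fast_loglike(np.array([52.0, 2.2]), data))
```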
27

Maniatis, G., N. Demiris, A. Kranis, G. Banos, and A. Kominakis. "Comparison of inference methods of genetic parameters with an application to body weight in broilers." Archives Animal Breeding 58, no. 2 (July 27, 2015): 277–86. http://dx.doi.org/10.5194/aab-58-277-2015.

Abstract:
REML (restricted maximum likelihood) has become the standard method of variance component estimation in animal breeding. Inference in Bayesian animal models is typically based upon Markov chain Monte Carlo (MCMC) methods, which are generally flexible but time-consuming. Recently, a new Bayesian computational method, integrated nested Laplace approximation (INLA), has been introduced for making fast non-sampling-based Bayesian inference for hierarchical latent Gaussian models. This paper is concerned with the comparison of estimates provided by three representative programs (ASReml, WinBUGS and the R package AnimalINLA) of the corresponding methods (REML, MCMC and INLA), with a view to their applicability for the typical animal breeder. Gaussian and binary as well as simulated data were used to assess the relative efficiency of the methods. Analysis of 2319 records of body weight at 35 days of age from a broiler line suggested a purely additive animal model, in which the heritability estimates ranged from 0.31 to 0.34 for the Gaussian trait and from 0.19 to 0.36 for the binary trait, depending on the estimation method. Although in need of further development, AnimalINLA seems to be a fast program for Bayesian modeling, particularly suitable for the inference of Gaussian traits, while WinBUGS appeared to successfully accommodate a complicated structure between the random effects. However, ASReml remains the best practical choice for the serious animal breeder.
28

Kumar, Sudhir, Antonia Chroni, Koichiro Tamura, Maxwell Sanderford, Olumide Oladeinde, Vivian Aly, Tracy Vu, and Sayaka Miura. "PathFinder: Bayesian inference of clone migration histories in cancer." Bioinformatics 36, Supplement_2 (December 2020): i675–i683. http://dx.doi.org/10.1093/bioinformatics/btaa795.

Abstract:
Metastases cause a vast majority of cancer morbidity and mortality. Metastatic clones are formed by dispersal of cancer cells to secondary tissues, and are not medically detected or visible until later stages of cancer development. Clone phylogenies within patients provide a means of tracing the otherwise inaccessible dynamic history of migrations of cancer cells. Here, we present a new Bayesian approach, PathFinder, for reconstructing the routes of cancer cell migrations. PathFinder uses the clone phylogeny, the number of mutational differences among clones, and the information on the presence and absence of observed clones in primary and metastatic tumors. By analyzing simulated datasets, we found that PathFinder performs well in reconstructing clone migrations from the primary tumor to new metastases as well as between metastases. It was more challenging to trace migrations from metastases back to primary tumors. We found that a vast majority of errors can be corrected by sampling more clones per tumor, and by increasing the number of genetic variants assayed per clone. We also identified situations in which phylogenetic approaches alone are not sufficient to reconstruct migration routes. In conclusion, we anticipate that the use of PathFinder will enable a more reliable inference of migration histories and their posterior probabilities, which is required to assess the relative preponderance of seeding of new metastasis by clones from primary tumors and/or existing metastases. Availability and implementation: PathFinder is available on the web at https://github.com/SayakaMiura/PathFinder.
29

Moreno, Elías, Francisco-José Vázquez-Polo, and Miguel A. Negrín. "Bayesian meta-analysis: The role of the between-sample heterogeneity." Statistical Methods in Medical Research 27, no. 12 (May 16, 2017): 3643–57. http://dx.doi.org/10.1177/0962280217709837.

Abstract:
The random effect approach for meta-analysis was motivated by a lack of consistent assessment of homogeneity of treatment effect before pooling. The random effect model assumes that the distribution of the treatment effect is fully heterogeneous across the experiments. However, other models arising by grouping some of the experiments are plausible. We illustrate on simulated binary experiments that the fully heterogeneous model gives a poor meta-inference when full heterogeneity is not the true model and that knowledge of the true cluster model considerably improves the inference. We propose the use of a Bayesian model selection procedure for estimating the true cluster model, and Bayesian model averaging to incorporate the clustering estimation into the meta-analysis. A well-known meta-analysis of six major multicentre trials to assess the efficacy of a given dose of aspirin in post-myocardial infarction patients is reanalysed.
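A toy version of the selection-plus-averaging idea in the abstract is sketched below for binomial experiments with conjugate Beta-binomial marginal likelihoods: each candidate clustering of the experiments receives a posterior probability, which could then be used to average any quantity of interest. The data, the two candidate partitions, and the uniform model prior are invented for illustration and do not reflect the paper's model space.

```python
import numpy as np
from scipy.special import betaln

def log_marglik(successes, trials, a=1.0, b=1.0):
    # Beta-binomial marginal likelihood for experiments assumed to share one rate
    # (binomial coefficients omitted; they are identical across cluster models).
    x, n = sum(successes), sum(trials)
    return betaln(a + x, b + n - x) - betaln(a, b)

# Toy data: four binary experiments (successes, trials).
x = [12, 14, 30, 33]
n = [50, 50, 60, 60]

# Candidate cluster models: all experiments homogeneous, or two groups of two.
models = {
    "one cluster": [[0, 1, 2, 3]],
    "two clusters": [[0, 1], [2, 3]],
}
logml = {name: sum(log_marglik([x[i] for i in g], [n[i] for i in g]) for g in groups)
         for name, groups in models.items()}

# Posterior model probabilities under equal prior model probabilities.
vals = np.array(list(logml.values()))
post = np.exp(vals - vals.max())
post /= post.sum()
for name, p in zip(logml, post):
    print(f"P({name} | data) = {p:.3f}")
```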
30

Stoica, R. S., M. Deaconu, A. Philippe, and L. Hurtado-Gil. "Shadow Simulated Annealing: A new algorithm for approximate Bayesian inference of Gibbs point processes." Spatial Statistics 43 (June 2021): 100505. http://dx.doi.org/10.1016/j.spasta.2021.100505.

31

He, Qiao-Le, and Liming Zhao. "Bayesian inference based process design and uncertainty analysis of simulated moving bed chromatographic systems." Separation and Purification Technology 246 (September 2020): 116856. http://dx.doi.org/10.1016/j.seppur.2020.116856.

32

Wang, Ying, and Bruce Rannala. "Bayesian inference of fine-scale recombination rates using population genomic data." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1512 (October 7, 2008): 3921–30. http://dx.doi.org/10.1098/rstb.2008.0172.

Abstract:
Recently, several statistical methods for estimating fine-scale recombination rates using population samples have been developed. However, currently available methods that can be applied to large-scale data are limited to approximated likelihoods. Here, we developed a full-likelihood Markov chain Monte Carlo method for estimating recombination rate under a Bayesian framework. Genealogies underlying a sampling of chromosomes are effectively modelled by using marginal individual single nucleotide polymorphism genealogies related through an ancestral recombination graph. The method is compared with two existing composite-likelihood methods using simulated data. Simulation studies show that our method performs well for different simulation scenarios. The method is applied to two human population genetic variation datasets that have been studied by sperm typing. Our results are consistent with the estimates from sperm crossover analysis.
33

Yang, Lijuan, Zheng Tian, Jinhuan Wen, and Weidong Yan. "Adaptive Non-Rigid Point Set Registration Based on Variational Bayesian." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 5 (October 2018): 942–48. http://dx.doi.org/10.1051/jnwpu/20183650942.

Abstract:
To handle the presence of outliers in non-rigid point set registration, a method based on a Bayesian Student's t mixture model (SMM) is proposed. Under the variational Bayesian framework, the point set registration problem is converted to maximizing the variational lower bound of the log-likelihood, where the transformation parameters are found through variational inference. Through the prior model, a spatial regularization constraint is incorporated into the Bayesian SMM and can be determined adaptively for different data sets. Compared with the Gaussian distribution, the Student's t distribution is more robust to outliers. Experimental comparative analysis on simulated points and real images verifies the effectiveness of the proposed method for non-rigid point set registration with outliers.
34

Friston, Karl, and Ivan Herreros. "Active Inference and Learning in the Cerebellum." Neural Computation 28, no. 9 (September 2016): 1812–39. http://dx.doi.org/10.1162/neco_a_00863.

Abstract:
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme’s anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry—and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
35

Farine, Damien R., and Ariana Strandburg-Peshkin. "Estimating uncertainty and reliability of social network data using Bayesian inference." Royal Society Open Science 2, no. 9 (September 2015): 150367. http://dx.doi.org/10.1098/rsos.150367.

Abstract:
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
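A conjugate toy version of the idea described above is to treat each dyad's association rate as a Beta-distributed quantity given x joint sightings in n sampling periods, and to propagate that uncertainty by drawing many plausible networks from the per-edge posteriors. The dyads, counts, and flat Beta(1, 1) prior here are illustrative assumptions; the authors' actual model may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(6)

# Observed association data for three dyads: (times seen together, sampling periods).
dyads = {("a", "b"): (9, 20), ("a", "c"): (2, 20), ("b", "c"): (15, 20)}

def posterior_networks(dyads, n_draws=1000, a0=1.0, b0=1.0):
    """Draw edge weights from independent Beta posteriors, one per dyad."""
    return {dyad: rng.beta(a0 + x, b0 + n - x, size=n_draws)
            for dyad, (x, n) in dyads.items()}

draws = posterior_networks(dyads)
for dyad, w in draws.items():
    lo, hi = np.percentile(w, [2.5, 97.5])
    print(f"{dyad}: posterior mean weight {w.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```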
36

Hafych, V., A. Caldwell, R. Agnello, C. C. Ahdida, M. Aladi, M. C. Amoedo Goncalves, Y. Andrebe, et al. "Analysis of proton bunch parameters in the AWAKE experiment." Journal of Instrumentation 16, no. 11 (November 1, 2021): P11031. http://dx.doi.org/10.1088/1748-0221/16/11/p11031.

Abstract:
A precise characterization of the incoming proton bunch parameters is required to accurately simulate the self-modulation process in the Advanced Wakefield Experiment (AWAKE). This paper presents an analysis of the parameters of the incoming proton bunches used in the later stages of the AWAKE Run 1 data-taking period. The transverse structure of the bunch is observed at multiple positions along the beamline using scintillating or optical transition radiation screens. The parameters of a model that describes the bunch transverse dimensions and divergence are fitted to represent the observed data using Bayesian inference. The analysis is tested on simulated data and then applied to the experimental data.
37

Ahmed, Ali, Ammar Abdo, and Naomie Salim. "Ligand-Based Virtual Screening Using Bayesian Inference Network and Reweighted Fragments." Scientific World Journal 2012 (2012): 1–7. http://dx.doi.org/10.1100/2012/410914.

Abstract:
Many of the similarity-based virtual screening approaches assume that molecular fragments that are not related to the biological activity carry the same weight as the important ones. This was the reason that led to the use of Bayesian networks as an alternative to existing tools for similarity-based virtual screening. In our recent work, the retrieval performance of the Bayesian inference network (BIN) was observed to improve significantly when molecular fragments were reweighted using the relevance feedback information. In this paper, a set of active reference structures were used to reweight the fragments in the reference structure. In this approach, higher weights were assigned to those fragments that occur more frequently in the set of active reference structures while others were penalized. Simulated virtual screening experiments with MDL Drug Data Report datasets showed that the proposed approach significantly improved the retrieval effectiveness of ligand-based virtual screening, especially when the active molecules being sought had a high degree of structural heterogeneity.
38

Zeng, Fan Guang, Guang Min Wu, John D. Mai, and Jian Ming Chen. "Bayesian MRF Modeling and Graph Cuts for Phase Unwrapping with Discontinuity Phase Flaws: A Comparative Study." Applied Mechanics and Materials 496-500 (January 2014): 1915–18. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1915.

Abstract:
Phase unwrapping (PU) is a difficult task commonly found in applications involving interferometric synthetic aperture radar (InSAR), magnetic resonance imaging (MRI) and optical surface profile measurements, all of which involve mathematically ill-posed problems. Conventional algorithms exhibit strong shortcomings in PU when phase discontinuity flaws exist. To simulate these situations, we custom-designed test data with a phase discontinuity flaw. The simulated data are a 3D Gaussian distribution with an arc-shaped notch as a phase flaw. PU is carried out by Bayesian inference and MRF (Markov Random Field) modeling. A graph cut algorithm is employed for optimization with respect to energy minimization. Three other conventional algorithms are also employed and their PU performance is compared. The results show the good performance and effectiveness of the Bayesian MRF modeling method. These experimental results are important references for phase unwrapping problems when phase discontinuities exist.
39

Yang, Lingli, Balgobin Nandram, and Jai Won Choi. "Bayesian Predictive Inference Under Nine Methods for Incorporating Survey Weights." International Journal of Statistics and Probability 12, no. 1 (January 31, 2023): 33. http://dx.doi.org/10.5539/ijsp.v12n1p33.

Abstract:
Sample surveys play a significant role in obtaining reliable estimators of finite population quantities, and survey weights are used to deal with selection bias and non-response bias. The main idea of this research is to compare the performance of nine methods with differently constructed survey weights, and these methods can also be used for non-probability sampling after weights are estimated (e.g. quasi-randomization). The original survey weights are calibrated to the population size. In particular, the base model does not include survey weights or design weights. We incorporate original survey weights, adjusted survey weights, trimmed survey weights, and adjusted trimmed survey weights into the pseudo-likelihood function to build unnormalized or normalized posterior distributions. In this research, we focus on binary data, which occur in many different situations. A simulation study is performed and we analyze the simulated data using the average posterior mean, average posterior standard deviation, average relative bias, average posterior root mean squared error, and the coverage rate of 95% credible intervals. We also carried out an application to body mass index data to further understand these nine methods. The results show that methods with trimmed weights are preferred over methods with untrimmed weights, and methods with adjusted weights have higher variability than methods with unadjusted weights.
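A hedged sketch of how survey weights can enter a pseudo-likelihood for binary data is shown below: each unit's Bernoulli contribution is raised to its (normalized) weight, which with a Beta prior keeps the pseudo-posterior conjugate. The normalization and trimming rules shown are illustrative choices, not the paper's nine specific methods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
y = rng.binomial(1, 0.3, size=200)                 # binary responses
w = rng.lognormal(0.0, 0.5, size=200)              # raw survey weights

def pseudo_posterior(y, w, a0=1.0, b0=1.0, trim=None, normalize=True):
    """Beta pseudo-posterior for a proportion under a weighted Bernoulli likelihood."""
    w = np.minimum(w, trim) if trim is not None else w.copy()
    if normalize:                                   # rescale weights to sum to n
        w = w * len(y) / w.sum()
    return stats.beta(a0 + np.sum(w * y), b0 + np.sum(w * (1 - y)))

for label, kwargs in {"unadjusted": {}, "trimmed": {"trim": np.quantile(w, 0.95)}}.items():
    post = pseudo_posterior(y, w, **kwargs)
    print(f"{label:10s} mean={post.mean():.3f} "
          f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```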
40

Sajid, Noor, Karl Friston, Justyna Ekert, Cathy Price, and David Green. "Neuromodulatory Control and Language Recovery in Bilingual Aphasia: An Active Inference Approach." Behavioral Sciences 10, no. 10 (October 21, 2020): 161. http://dx.doi.org/10.3390/bs10100161.

Abstract:
Understanding the aetiology of the diverse recovery patterns in bilingual aphasia is a theoretical challenge with implications for treatment. Loss of control over intact language networks provides a parsimonious starting point that can be tested using in-silico lesions. We simulated a complex recovery pattern (alternate antagonism and paradoxical translation) to test the hypothesis—from an established hierarchical control model—that loss of control was mediated by constraints on neuromodulatory resources. We used active (Bayesian) inference to simulate a selective loss of sensory precision; i.e., confidence in the causes of sensations. This in-silico lesion altered the precision of beliefs about task relevant states, including appropriate actions, and reproduced exactly the recovery pattern of interest. As sensory precision has been linked to acetylcholine release, these simulations endorse the conjecture that loss of neuromodulatory control can explain this atypical recovery pattern. We discuss the relevance of this finding for other recovery patterns.
41

López-Santiago, J., L. Martino, M. A. Vázquez, and J. Miguez. "A Bayesian inference and model selection algorithm with an optimization scheme to infer the model noise power." Monthly Notices of the Royal Astronomical Society 507, no. 3 (August 10, 2021): 3351–61. http://dx.doi.org/10.1093/mnras/stab2303.

Abstract:
Model fitting is possibly the most widespread problem in science. Classical approaches include the use of least-squares fitting procedures and maximum likelihood methods to estimate the value of the parameters in the model. However, in recent years, Bayesian inference tools have gained traction. Usually, Markov chain Monte Carlo (MCMC) methods are applied to inference problems, but they present some disadvantages, particularly when comparing different models fitted to the same data set. Other Bayesian methods can deal with this issue in a natural and effective way. We have implemented an importance sampling (IS) algorithm adapted to Bayesian inference problems in which the power of the noise in the observations is not known a priori. The main advantage of IS is that the model evidence can be derived directly from the so-called importance weights – while MCMC methods demand considerable postprocessing. The use of our adaptive target adaptive importance sampling (ATAIS) method is shown by inferring, on the one hand, the parameters of a simulated flaring event that includes a damped oscillation and, on the other hand, real data from the Kepler mission. ATAIS includes a novel automatic adaptation of the target distribution. It automatically estimates the variance of the noise in the model. ATAIS admits parallelization, which decreases the computational run-times notably. We compare our method against a nested sampling method within a model selection problem.
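The central point, that importance sampling yields the model evidence directly as the average of the unnormalised importance weights, can be shown with a minimal sketch. The Gaussian target and proposal below are stand-ins, and none of this reproduces ATAIS itself (no target adaptation, no noise-variance estimation).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
y = rng.normal(0.8, 1.0, size=50)                  # toy data with unit noise

def log_target(theta):
    # Unnormalised posterior: N(0, 2^2) prior times a Gaussian likelihood.
    return stats.norm.logpdf(theta, 0, 2) + stats.norm.logpdf(y, theta, 1).sum()

# Importance sampling with a Gaussian proposal centred near the data.
proposal = stats.norm(loc=y.mean(), scale=0.5)
theta = proposal.rvs(size=5_000, random_state=rng)
log_w = np.array([log_target(t) for t in theta]) - proposal.logpdf(theta)

# The evidence is the mean importance weight; posterior moments use normalised weights.
log_evidence = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
w = np.exp(log_w - log_w.max())
w /= w.sum()
print("log evidence:", log_evidence, " posterior mean:", np.sum(w * theta))
```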
42

Poynor, Valerie, and Athanasios Kottas. "Nonparametric Bayesian inference for mean residual life functions in survival analysis." Biostatistics 20, no. 2 (January 19, 2018): 240–55. http://dx.doi.org/10.1093/biostatistics/kxx075.

Full text
Abstract:
Modeling and inference for survival analysis problems typically revolve around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples.
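The MRL function referred to above is m(t) = E[T − t | T > t] = (∫_t^∞ S(u) du) / S(t), where S is the survival function. The short Python sketch below evaluates this quantity numerically for an assumed two-component gamma mixture, standing in for the kernel mixture arising from a Dirichlet process model; the mixture weights and gamma parameters are illustrative only.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Illustrative two-component gamma mixture (weights and parameters assumed).
weights = [0.4, 0.6]
shapes, scales = [2.0, 5.0], [1.0, 0.5]

def survival(t):
    # Mixture survival function S(t).
    return sum(w * stats.gamma.sf(t, a, scale=s)
               for w, a, s in zip(weights, shapes, scales))

def mean_residual_life(t):
    """m(t) = E[T - t | T > t] = (integral_t^inf S(u) du) / S(t)."""
    tail, _ = quad(survival, t, np.inf)
    return tail / survival(t)

for t in [0.0, 1.0, 2.0, 5.0]:
    print(f"MRL at t={t}: {mean_residual_life(t):.3f}")
```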
APA, Harvard, Vancouver, ISO, and other styles
43

YUAN, AO, GUANJIE CHEN, CHARLES ROTIMI, and GEORGE E. BONNEY. "A STATISTICAL FRAMEWORK FOR HAPLOTYPE BLOCK INFERENCE." Journal of Bioinformatics and Computational Biology 03, no. 05 (October 2005): 1021–38. http://dx.doi.org/10.1142/s021972000500151x.

Full text
Abstract:
The existence of haplotype blocks transmitted from parents to offspring has been suggested recently. This has created an interest in the inference of the block structure and length, motivated by the expectation that well-characterized haplotype blocks will make it relatively easier to quickly map all the genes carrying human diseases. To study the inference of haplotype blocks systematically, we propose a statistical framework. In this framework, the optimal haplotype block partitioning is formulated as a problem of statistical model selection; missing data can be handled in a standard statistical way; population strata can be accommodated; block structure inference and hypothesis testing can be performed; and prior knowledge, if present, can be incorporated to perform a Bayesian inference. The algorithm is linear in the number of loci, in contrast to many existing algorithms for this task, which are NP-hard. We illustrate the applications of our method to both simulated and real data sets.
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Qianqian, Kate Smith-Miles, and Tianhai Tian. "Approximate Bayesian computation schemes for parameter inference of discrete stochastic models using simulated likelihood density." BMC Bioinformatics 15, Suppl 12 (2014): S3. http://dx.doi.org/10.1186/1471-2105-15-s12-s3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hashem, Atef F., Salem A. Alyami, and Alaa H. Abdel-Hamid. "Inference for a Progressive-Stress Model Based on Ordered Ranked Set Sampling under Type-II Censoring." Mathematics 10, no. 15 (August 4, 2022): 2771. http://dx.doi.org/10.3390/math10152771.

Full text
Abstract:
The progressive-stress accelerated life test is discussed under the ordered ranked set sampling procedure. It is assumed that the lifetime of an item under use stress is exponentially distributed, and the inverse power law is taken to describe the relationship between the scale parameter and the applied stress. The parameters involved are estimated using the Bayesian technique, under symmetric and asymmetric loss functions, based on ordered ranked set samples and simple random samples subject to type-II censoring. Real and simulated data sets are used to illustrate the theoretical results presented in this paper. Finally, a simulation study followed by numerical calculations is performed to evaluate the Bayesian estimation performance based on the two sampling types.
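Once posterior draws of a parameter are available, Bayes estimates under symmetric and asymmetric losses follow directly. The Python sketch below computes the posterior-mean estimate (squared-error loss) and the LINEX estimate −(1/a) log E[exp(−aθ)]; the toy gamma "posterior" draws and the LINEX constant a are assumptions, not the paper's settings.

```python
import numpy as np

def bayes_estimate_sel(draws):
    """Squared-error loss: the Bayes estimator is the posterior mean."""
    return np.mean(draws)

def bayes_estimate_linex(draws, a=0.5):
    """LINEX (asymmetric) loss: estimator is -(1/a) * log E[exp(-a * theta)]."""
    draws = np.asarray(draws, dtype=float)
    m = np.max(-a * draws)                      # log-sum-exp for numerical stability
    return -(np.log(np.mean(np.exp(-a * draws - m))) + m) / a

rng = np.random.default_rng(2)
posterior_draws = rng.gamma(shape=3.0, scale=0.5, size=10_000)  # toy posterior draws
print("SEL estimate:  ", bayes_estimate_sel(posterior_draws))
print("LINEX estimate:", bayes_estimate_linex(posterior_draws, a=0.5))
```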
APA, Harvard, Vancouver, ISO, and other styles
46

Sraj, Ihab, Mohamed Iskandarani, Ashwanth Srinivasan, W. Carlisle Thacker, Justin Winokur, Alen Alexanderian, Chia-Ying Lee, Shuyi S. Chen, and Omar M. Knio. "Bayesian Inference of Drag Parameters Using AXBT Data from Typhoon Fanapi." Monthly Weather Review 141, no. 7 (July 1, 2013): 2347–67. http://dx.doi.org/10.1175/mwr-d-12-00228.1.

Full text
Abstract:
The authors introduce a three-parameter characterization of the wind speed dependence of the drag coefficient and apply a Bayesian formalism to infer values for these parameters from airborne expendable bathythermograph (AXBT) temperature data obtained during Typhoon Fanapi. One parameter is a multiplicative factor that amplifies or attenuates the drag coefficient for all wind speeds, the second is the maximum wind speed at which drag coefficient saturation occurs, and the third is the drag coefficient's rate of change with increasing wind speed after saturation. Bayesian inference provides optimal estimates of the parameters as well as a non-Gaussian probability distribution characterizing the uncertainty of these estimates. The efficiency of this approach stems from the use of adaptive polynomial expansions to build an inexpensive surrogate for the high-resolution numerical model that couples simulated winds to the oceanic temperature data, dramatically reducing the computational burden of the Markov chain Monte Carlo sampling. These results indicate that the most likely values for the drag coefficient at saturation and the corresponding wind speed are about 2.3 × 10⁻³ and 34 m s⁻¹, respectively; the data were not informative regarding the drag coefficient behavior at higher wind speeds.
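A simple way to read the three-parameter characterization is as a drag law with an overall multiplicative factor, a saturation wind speed, and a post-saturation slope. The Python sketch below is one hypothetical realization of such a law; the low-wind baseline and linear ramp are assumptions, and the paper's exact functional form may differ.

```python
import numpy as np

def drag_coefficient(u, alpha=1.0, u_sat=34.0, slope_after=0.0,
                     cd0=1.0e-3, ramp=0.05e-3):
    """Illustrative three-parameter drag law.

    alpha       : multiplicative factor scaling Cd at all wind speeds
    u_sat       : wind speed (m/s) at which Cd saturates
    slope_after : rate of change of Cd with wind speed beyond saturation
    cd0, ramp   : assumed low-wind baseline and linear ramp (not from the paper)
    """
    u = np.asarray(u, dtype=float)
    cd_below = cd0 + ramp * u
    cd_at_sat = cd0 + ramp * u_sat
    cd_above = cd_at_sat + slope_after * (u - u_sat)
    return alpha * np.where(u <= u_sat, cd_below, cd_above)

winds = np.array([10.0, 20.0, 34.0, 50.0])
print(drag_coefficient(winds, alpha=1.1, u_sat=34.0, slope_after=-0.02e-3))
```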
APA, Harvard, Vancouver, ISO, and other styles
47

Akbar, Muhammad, Abdulmohsen Obied Alshamari, Muhammad Tariq, Alhelali Marwan, Basim S. O. Alsaedi, and Ishfaq Ahmad. "Bayesian versus Classical Econometric Inference to Revisit the Role of Human Capital in Economic Growth." Mathematical Problems in Engineering 2022 (May 27, 2022): 1–10. http://dx.doi.org/10.1155/2022/7251670.

Full text
Abstract:
Application of Bayesian inference to analyze real economic phenomena is rare in the literature on applied economics. This study contributes in two ways. First, it advances the methodology of applied economic modeling by estimating a structural model under both the classical econometric framework and a Bayesian two-stage econometric framework; the performance of the two approaches is compared, given the small sample size, and the better model is selected. Second, the study provides fresh evidence on the impact of human capital on economic growth in the form of Bayes mean estimates together with their Highest Posterior Density Intervals (HPDIs), which give ranges within which the parameters are likely to lie. Annual data on the Pakistan economy from 1965 to 2019 are used to estimate the model. Classical estimates are obtained using the efficient GMM method, while Bayes mean estimates are simulated using a Bayesian two-stage procedure with multivariate normal-Wishart informative priors. Results show that the Bayesian econometric framework gives more precise parameter estimates than the classical econometric framework; hence, Bayesian inference may be preferred over classical inference, especially with a small sample size. The Bayes estimates show that a 1% increase in education capital and in health capital raises economic growth by 0.0091% and 0.1778%, respectively, with a 0.95 probability that the estimates lie within the intervals 0.0085%–0.0097% and 0.1606%–0.1952%, respectively. Hence, human capital might be considered a vital factor in achieving economic growth in Pakistan. Moreover, health capital shows stronger effects than education capital in the process of economic growth.
APA, Harvard, Vancouver, ISO, and other styles
48

Alahmadi, Amani A., Jennifer A. Flegg, Davis G. Cochrane, Christopher C. Drovandi, and Jonathan M. Keith. "A comparison of approximate versus exact techniques for Bayesian parameter inference in nonlinear ordinary differential equation models." Royal Society Open Science 7, no. 3 (March 2020): 191315. http://dx.doi.org/10.1098/rsos.191315.

Full text
Abstract:
The behaviour of many processes in science and engineering can be accurately described by dynamical system models consisting of a set of ordinary differential equations (ODEs). Often these models have several unknown parameters that are difficult to estimate from experimental data, in which case Bayesian inference can be a useful tool. In principle, exact Bayesian inference using Markov chain Monte Carlo (MCMC) techniques is possible; however, in practice, such methods may suffer from slow convergence and poor mixing. To address this problem, several approaches based on approximate Bayesian computation (ABC) have been introduced, including Markov chain Monte Carlo ABC (MCMC ABC) and sequential Monte Carlo ABC (SMC ABC). While the system of ODEs describes the underlying process that generates the data, the observed measurements invariably include errors. In this paper, we argue that several popular ABC approaches fail to adequately model these errors because the acceptance probability depends on the choice of the discrepancy function and the tolerance without any consideration of the error term. We observe that the so-called posterior distributions derived from such methods do not accurately reflect the epistemic uncertainties in parameter values. Moreover, we demonstrate that these methods provide minimal computational advantages over exact Bayesian methods when applied to two ODE epidemiological models with simulated data and one with real data concerning malaria transmission in Afghanistan.
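The acceptance rule the authors criticize can be made concrete with a minimal ABC rejection sampler. In the Python sketch below, parameter draws are accepted whenever a Euclidean discrepancy between simulated and observed trajectories falls under a tolerance, with no explicit measurement-error model; the logistic-growth ODE, priors, distance, and tolerance are all illustrative assumptions rather than the epidemiological models used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
t_obs = np.linspace(0.0, 10.0, 25)

def simulate(r, k, x0=5.0):
    # Solve the logistic-growth ODE dx/dt = r * x * (1 - x / k).
    sol = solve_ivp(lambda t, x: r * x * (1.0 - x / k), (0.0, 10.0), [x0],
                    t_eval=t_obs)
    return sol.y[0]

# "Observed" data: logistic growth plus measurement noise.
y_obs = simulate(0.8, 100.0) + rng.normal(0.0, 2.0, size=t_obs.size)

def abc_rejection(n_draws=5_000, tol=20.0):
    accepted = []
    for _ in range(n_draws):
        r = rng.uniform(0.1, 2.0)          # prior on growth rate
        k = rng.uniform(50.0, 150.0)       # prior on carrying capacity
        dist = np.linalg.norm(simulate(r, k) - y_obs)
        if dist < tol:                     # acceptance ignores the error model
            accepted.append((r, k))
    return np.array(accepted)

post = abc_rejection()
if len(post):
    print("accepted draws:", len(post), "posterior means:", post.mean(axis=0))
```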
APA, Harvard, Vancouver, ISO, and other styles
49

Chevrolat, J. P., F. Rutigliano, and J. L. Golmard. "Mixed Bayesian Networks: A Mixture of Gaussian Distributions." Methods of Information in Medicine 33, no. 05 (1994): 535–42. http://dx.doi.org/10.1055/s-0038-1635056.

Full text
Abstract:
Mixed Bayesian networks are probabilistic models associated with a graphical representation, where the graph is directed and the random variables are discrete or continuous. We propose a comprehensive method for estimating the density functions of continuous variables, using a graph structure and a set of samples. The principle of the method is to learn the shape of the densities from a sample of the continuous variables. The densities are approximated by a mixture of Gaussian distributions. The estimation algorithm is a stochastic version of the Expectation Maximization algorithm (Stochastic EM algorithm). The inference algorithm corresponding to our model is a variant of the junction tree method, adapted to our specific case. The approach is illustrated by a simulated example from the domain of pharmacokinetics. Tests show that the true distributions are fitted well enough for practical application.
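For intuition about the Stochastic EM step, the following one-dimensional Python sketch fits a Gaussian mixture by sampling hard component labels from the posterior responsibilities in the E-step and re-estimating the parameters in the M-step. It is a toy stand-in and does not include the graph structure or the junction-tree inference described in the paper; the component count, data, and iteration budget are assumptions.

```python
import numpy as np
from scipy import stats

def stochastic_em_gmm(x, n_components=2, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture by Stochastic EM: the E-step draws hard
    component assignments from the posterior responsibilities, the M-step
    re-estimates weights, means, and standard deviations from those labels."""
    rng = np.random.default_rng(seed)
    n = len(x)
    means = rng.choice(x, size=n_components, replace=False)
    sds = np.full(n_components, x.std())
    weights = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # Stochastic E-step: sample a label for each observation.
        logp = np.log(weights) + stats.norm.logpdf(x[:, None], means, sds)
        probs = np.exp(logp - logp.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        labels = np.array([rng.choice(n_components, p=p) for p in probs])
        # M-step: maximum-likelihood updates given the sampled labels.
        for k in range(n_components):
            xk = x[labels == k]
            if xk.size > 1:
                weights[k] = xk.size / n
                means[k] = xk.mean()
                sds[k] = max(xk.std(), 1e-3)
    return weights, means, sds

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 0.5, 200)])
print(stochastic_em_gmm(x))
```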
APA, Harvard, Vancouver, ISO, and other styles
50

Zhao, Juan, Xia Bai, Tao Shan, and Ran Tao. "Block Sparse Bayesian Recovery with Correlated LSM Prior." Wireless Communications and Mobile Computing 2021 (October 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/9942694.

Full text
Abstract:
Compressed sensing can recover sparse signals from far fewer samples than the traditional Nyquist sampling theorem requires. Block sparse signals (BSS), whose nonzero coefficients occur in clusters, arise naturally in many practical scenarios, and exploiting this structure can improve recovery performance. In this paper, we consider recovering arbitrary BSS within a sparse Bayesian learning framework by inducing a correlated Laplacian scale mixture (LSM) prior, which models the dependence between adjacent elements of the block sparse signal; a block sparse Bayesian learning algorithm is then derived via variational Bayesian inference. Moreover, we present a fast version of the proposed recovery algorithm, which avoids matrix inversion and has robust recovery performance in the low-SNR case. Experimental results with simulated data and ISAR imaging show that the proposed algorithms can efficiently reconstruct BSS and are robust to noise.
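The recovery setting can be summarized as y = Φx + n with the nonzero entries of x clustered in blocks. The Python sketch below generates such a block-sparse signal and recovers it with a simple group soft-thresholding (group-ISTA) solver, used here only as a stand-in for the variational block-sparse Bayesian learning algorithm; the dimensions, block size, and regularization parameter are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, block = 128, 48, 8

# Block-sparse ground truth: two active blocks of length `block`.
x_true = np.zeros(n)
for start in rng.choice(np.arange(0, n, block), size=2, replace=False):
    x_true[start:start + block] = rng.normal(0.0, 1.0, block)

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # measurement matrix
y = Phi @ x_true + rng.normal(0.0, 0.01, m)            # noisy measurements

def group_ista(y, Phi, block, lam=0.02, n_iter=500):
    """Gradient step followed by blockwise soft-thresholding."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    for _ in range(n_iter):
        g = x - step * Phi.T @ (Phi @ x - y)            # gradient step
        for s in range(0, len(x), block):               # group soft-threshold
            norm = np.linalg.norm(g[s:s + block])
            scale = max(0.0, 1.0 - step * lam / norm) if norm > 0 else 0.0
            x[s:s + block] = scale * g[s:s + block]
    return x

x_hat = group_ista(y, Phi, block)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```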
APA, Harvard, Vancouver, ISO, and other styles
