Journal articles on the topic 'Bayesian estimate'

Consult the top 50 journal articles for your research on the topic 'Bayesian estimate.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Rahman, Mohammad Lutfor, Steven G. Gilmour, Peter J. Zemroch, and Pauline R. Ziman. "Bayesian analysis of fuel economy experiments." Journal of Statistical Research 54, no. 1 (August 25, 2020): 43–63. http://dx.doi.org/10.47302/jsr.2020540103.

Full text
Abstract:
Statistical analysts can encounter difficulties in obtaining point and interval estimates for fixed effects when sample sizes are small and there are two or more error strata to consider. Standard methods can lead to certain variance components being estimated as zero which often seems contrary to engineering experience and judgement. Shell Global Solutions (UK) has encountered such challenges and is always looking for ways to make its statistical techniques as robust as possible. In this instance, the challenge was to estimate fuel effects and confidence limits from small-sample fuel economy experiments where both test-to-test and day-to-day variation had to be taken into account. Using likelihood-based methods, the experimenters estimated the day-to-day variance component to be zero which was unrealistic. The reason behind this zero estimate is that the data set is not large enough to estimate it reliably. The experimenters were also unsure about the fixed parameter estimates obtained by likelihood methods in linear mixed models. In this paper, we looked for an alternative to compare the likelihood estimates against and found the Bayesian platform to be appropriate. Bayesian methods assuming some non-informative and weakly informative priors enable us to compare the parameter estimates and the variance components. Profile likelihood and bootstrap based methods verified that the Bayesian point and interval estimates were not unreasonable. Also, simulation studies have assessed the quality of likelihood and Bayesian estimates in this study.
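As a rough sketch of the setting this abstract describes (the notation and priors below are illustrative assumptions, not taken from the paper), the two error strata form a linear mixed model:

```latex
\[
y_{ijk} = \mu + \beta_{\mathrm{fuel}(i)} + d_j + \varepsilon_{ijk},
\qquad d_j \sim N\!\big(0, \sigma^2_{\mathrm{day}}\big),
\quad \varepsilon_{ijk} \sim N\!\big(0, \sigma^2_{\mathrm{test}}\big).
\]
```

With only a few days in the experiment, the likelihood can place its maximum at zero for the day-to-day variance; a weakly informative prior such as a half-normal on the day-to-day standard deviation keeps the Bayesian estimate away from that boundary.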
APA, Harvard, Vancouver, ISO, and other styles
2

Sanger, Terence D. "Bayesian Filtering of Myoelectric Signals." Journal of Neurophysiology 97, no. 2 (February 2007): 1839–45. http://dx.doi.org/10.1152/jn.00936.2006.

Full text
Abstract:
Surface electromyography is used in research, to estimate the activity of muscle, in prosthetic design, to provide a control signal, and in biofeedback, to provide subjects with a visual or auditory indication of muscle contraction. Unfortunately, successful applications are limited by the variability in the signal and the consequent poor quality of estimates. I propose to use a nonlinear recursive filter based on Bayesian estimation. The desired filtered signal is modeled as a combined diffusion and jump process and the measured electromyographic (EMG) signal is modeled as a random process with a density in the exponential family and rate given by the desired signal. The rate is estimated on-line by calculating the full conditional density given all past measurements from a single electrode. The Bayesian estimate gives the filtered signal that best describes the observed EMG signal. This estimate yields results with very low short-time variability but also with the capability of very rapid response to change. The estimate approximates isometric joint torque with lower error and higher signal-to-noise ratio than current linear methods. Use of the nonlinear filter significantly reduces noise compared with current algorithms, and it may therefore permit more effective use of the EMG signal for prosthetic control, biofeedback, and neurophysiology research.
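For readers who want a feel for this class of filter, here is a minimal grid-based recursive Bayesian filter in the same spirit. It is an illustrative sketch only: the diffusion, jump and noise values and the Laplace-style likelihood are assumptions, not the paper's exact diffusion-jump, exponential-family formulation.

```python
import numpy as np

x_grid = np.linspace(0.01, 1.0, 200)                  # candidate latent "drive" values
posterior = np.full(x_grid.size, 1.0 / x_grid.size)   # start from a flat prior

def predict(post, diffusion_sd=0.02, jump_prob=0.01):
    """Propagate the posterior one time step: Gaussian diffusion on the grid
    plus a small probability of jumping anywhere (sudden changes)."""
    diffs = x_grid[:, None] - x_grid[None, :]
    kernel = np.exp(-0.5 * (diffs / diffusion_sd) ** 2)
    kernel /= kernel.sum(axis=0, keepdims=True)        # each column is a transition density
    diffused = kernel @ post
    return (1.0 - jump_prob) * diffused + jump_prob / x_grid.size

def update(prior, emg_sample, noise_scale=0.1):
    """Bayes update with an assumed Laplace-like measurement likelihood."""
    likelihood = np.exp(-np.abs(abs(emg_sample) - x_grid) / noise_scale)
    post = prior * likelihood
    return post / post.sum()

# One filtering step for a single rectified EMG sample:
posterior = update(predict(posterior), emg_sample=0.3)
map_estimate = x_grid[np.argmax(posterior)]            # filtered estimate of the drive
```

Each new sample repeats the predict/update pair, which is what gives the low short-time variability together with rapid response to genuine changes described in the abstract.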
APA, Harvard, Vancouver, ISO, and other styles
3

Fässler, Sascha M. M., Andrew S. Brierley, and Paul G. Fernandes. "A Bayesian approach to estimating target strength." ICES Journal of Marine Science 66, no. 6 (February 12, 2009): 1197–204. http://dx.doi.org/10.1093/icesjms/fsp008.

Full text
Abstract:
Currently, conventional models of target strength (TS) vs. fish length, based on empirical measurements, are used to estimate fish density from integrated acoustic data. These models estimate a mean TS, averaged over variables that modulate fish TS (tilt angle, physiology, and morphology); they do not include information about the uncertainty of the mean TS, which could be propagated through to estimates of fish abundance. We use Bayesian methods, together with theoretical TS models and in situ TS data, to determine the uncertainty in TS estimates of Atlantic herring (Clupea harengus). Priors for model parameters (surface swimbladder volume, tilt angle, and s.d. of the mean TS) were used to estimate posterior parameter distributions and subsequently build a probabilistic TS model. The sensitivity of herring abundance estimates to variation in the Bayesian TS model was also evaluated. The abundance of North Sea herring from the area covered by the Scottish acoustic survey component was estimated using both the conventional TS–length formula (5.34 × 10⁹ fish) and the Bayesian TS model (mean = 3.17 × 10⁹ fish): this difference was probably because of the particular scattering model employed and the data used in the Bayesian model. The study demonstrates the relative importance of potential bias and precision of TS estimation and how the latter can be so much less important than the former.
APA, Harvard, Vancouver, ISO, and other styles
4

Christ, Theodore J., and Christopher David Desjardins. "Curriculum-Based Measurement of Reading: An Evaluation of Frequentist and Bayesian Methods to Model Progress Monitoring Data." Journal of Psychoeducational Assessment 36, no. 1 (June 15, 2017): 55–73. http://dx.doi.org/10.1177/0734282917712174.

Full text
Abstract:
Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR’s lack of validity and reliability, and imprecision of ROI estimates, especially after a brief duration of monitoring (6–10 weeks). This study illustrates and examines the use of Bayesian methods to estimate ROI. Conditions included four progress monitoring durations (6, 8, 10, and 30 weeks), two schedules of data collection (weekly, biweekly), and two ROI growth distributions that broadly corresponded with ROIs for general and special education populations. A Bayesian approach with alternate prior distributions for the ROIs is presented and explored. Results demonstrate that Bayesian estimates of ROI were more precise than OLSR with comparable reliabilities, and Bayesian estimates were consistently within the plausible range of ROIs in contrast to OLSR, which often provided unrealistic estimates. Results also showcase the influence the priors had on estimated ROIs and the potential dangers of prior distribution misspecification.
APA, Harvard, Vancouver, ISO, and other styles
5

Al-Hossain, Abdullah Y. "Burr-X Model Estimate using Bayesian and non-Bayesian Approaches." Journal of Mathematics and Statistics 12, no. 2 (February 1, 2016): 77–85. http://dx.doi.org/10.3844/jmssp.2016.77.85.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ambrose, Paul G., Jeffrey P. Hammel, Sujata M. Bhavnani, Christopher M. Rubino, Evelyn J. Ellis-Grosse, and George L. Drusano. "Frequentist and Bayesian Pharmacometric-Based Approaches To Facilitate Critically Needed New Antibiotic Development: Overcoming Lies, Damn Lies, and Statistics." Antimicrobial Agents and Chemotherapy 56, no. 3 (December 12, 2011): 1466–70. http://dx.doi.org/10.1128/aac.01743-10.

Full text
Abstract:
Antimicrobial drug development has greatly diminished due to regulatory uncertainty about the magnitude of the antibiotic treatment effect. Herein we evaluate the utility of pharmacometric-based analyses for determining the magnitude of the treatment effect. Frequentist and Bayesian pharmacometric-based logistic regression analyses were conducted by using data from a phase 3 clinical trial of tigecycline-treated patients with hospital-acquired pneumonia (HAP) to evaluate relationships between the probability of microbiological or clinical success and the free-drug area under the concentration-time curve from time zero to 24 h (AUC0-24)/MIC ratio. By using both the frequentist and Bayesian approaches, the magnitude of the treatment effect was determined using three different methods based on the probability of success at free-drug AUC0-24/MIC ratios of 0.01 and 25. Differences in point estimates of the treatment effect for microbiological response (method 1) were larger using the frequentist approach than using the Bayesian approach (Bayesian estimate, 0.395; frequentist estimate, 0.637). However, the Bayesian credible intervals were tighter than the frequentist confidence intervals, demonstrating increased certainty with the former approach. The treatment effect determined by taking the difference in the probabilities of success between the upper limit of a 95% interval for the minimal exposure and the lower limit of a 95% interval at the maximal exposure (method 2) was greater for the Bayesian analysis (Bayesian estimate, 0.074; frequentist estimate, 0.004). After utilizing bootstrapping to determine the lower 95% bounds for the treatment effect (method 3), treatment effect estimates were still higher for the Bayesian analysis (Bayesian estimate, 0.301; frequentist estimate, 0.166). These results demonstrate the utility of frequentist and Bayesian pharmacometric-based analyses for the determination of the treatment effect using contemporary trial endpoints. Additionally, as demonstrated by using pharmacokinetic-pharmacodynamic data, the magnitude of the treatment effect for patients with HAP is large.
APA, Harvard, Vancouver, ISO, and other styles
7

Emelyanov, V. E., and S. P. Matyuk. "BAYESIAN ESTIMATE OF TELECOMMUNICATION SYSTEMS PREPAREDNESS." Civil Aviation High Technologies 24, no. 1 (February 22, 2021): 16–22. http://dx.doi.org/10.26467/2079-0619-2021-24-1-16-22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

AGBAJE, Olorunsola F., Stephen D. LUZIO, Ahmed I. S. ALBARRAK, David J. LUNN, David R. OWENS, and Roman HOVORKA. "Bayesian hierarchical approach to estimate insulin sensitivity by minimal model." Clinical Science 105, no. 5 (November 1, 2003): 551–60. http://dx.doi.org/10.1042/cs20030117.

Full text
Abstract:
We adopted Bayesian analysis in combination with hierarchical (population) modelling to estimate simultaneously population and individual insulin sensitivity (SI) and glucose effectiveness (SG) with the minimal model of glucose kinetics using data collected during insulin-modified intravenous glucose tolerance test (IVGTT) and made comparison with the standard non-linear regression analysis. After fasting overnight, subjects with newly presenting Type II diabetes according to World Health Organization criteria (n=65; 53 males, 12 females; age, 54±9 years; body mass index, 30.4±5.2 kg/m2; means±S.D.) underwent IVGTT consisting of a 0.3 g of glucose bolus/kg of body weight given at time zero for 2 min, followed by 0.05 unit of insulin/kg of body weight at 20 min. Bayesian inference was carried out using vague prior distributions and log-normal distributions to guarantee non-negativity and, thus, physiological plausibility of model parameters and associated credible intervals. Bayesian analysis gave estimates of SI in all subjects. Non-linear regression analysis failed in four cases, where Bayesian analysis-derived SI was located in the lower quartile and was estimated with lower precision. The population means of SI and SG provided by Bayesian analysis and non-linear regression were identical, but the interquartile range given by Bayesian analysis was tighter by approx. 20% for SI and by approx. 15% for SG. Individual insulin sensitivities estimated by the two methods were highly correlated (rS=0.98; P<0.001). However, the correlation in the lower 20% centile of the insulin-sensitivity range was significantly lower than the correlation in the upper 80% centile (rS=0.71 compared with rS=0.99; P<0.001). We conclude that the Bayesian hierarchical analysis is an appealing method to estimate SI and SG, as it avoids parameter estimation failures, and should be considered when investigating insulin-resistant subjects.
APA, Harvard, Vancouver, ISO, and other styles
9

Richard, Michael D., and Richard P. Lippmann. "Neural Network Classifiers Estimate Bayesian a posteriori Probabilities." Neural Computation 3, no. 4 (December 1991): 461–83. http://dx.doi.org/10.1162/neco.1991.3.4.461.

Full text
Abstract:
Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 1 of M (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and a priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.
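The central result summarized here can be stated compactly: with 1-of-M target coding and a squared-error (or cross-entropy) cost, the output that minimizes the expected cost at an input x is the conditional class probability.

```latex
\[
y_k^{*}(x) \;=\; \operatorname*{arg\,min}_{y_k}\; \mathbb{E}\!\left[(d_k - y_k)^2 \,\middle|\, x\right]
\;=\; \mathbb{E}[d_k \mid x] \;=\; P(C_k \mid x), \qquad k = 1,\dots,M.
\]
```

This is why well-trained outputs can be averaged across networks, compared against rejection thresholds, or reweighted for mismatched class priors, as the abstract describes.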
APA, Harvard, Vancouver, ISO, and other styles
10

Ben Zaabza, Hafedh, Abderrahmen Ben Gara, Hedi Hammami, Mohamed Amine Ferchichi, and Boulbaba Rekik. "Estimation of variance components of milk, fat, and protein yields of Tunisian Holstein dairy cattle using Bayesian and REML methods." Archives Animal Breeding 59, no. 2 (June 1, 2016): 243–48. http://dx.doi.org/10.5194/aab-59-243-2016.

Full text
Abstract:
A multi-trait repeatability animal model under restricted maximum likelihood (REML) and Bayesian methods was used to estimate genetic parameters of milk, fat, and protein yields in Tunisian Holstein cows. The estimates of heritability for milk, fat, and protein yields from the REML procedure were 0.21 ± 0.05, 0.159 ± 0.04, and 0.158 ± 0.04, respectively. The corresponding results from the Bayesian procedure were 0.273 ± 0.02, 0.198 ± 0.01, and 0.187 ± 0.01. Heritability estimates from the Bayesian procedure tended to be larger than those obtained by the REML method. Genetic and permanent environmental variances estimated by REML were smaller than those obtained by the Bayesian analysis. Conversely, REML estimates of the residual variances were larger than Bayesian estimates. Genetic and permanent environmental correlation estimates were, on the other hand, comparable between the REML and Bayesian methods, with permanent environmental correlations being larger than genetic correlations. Results from this study confirm previous reports on genetic parameters for milk traits in Tunisian Holsteins and suggest that a multi-trait approach can be an alternative for implementing a routine genetic evaluation of the Tunisian dairy cattle population.
APA, Harvard, Vancouver, ISO, and other styles
11

Wei, Cheng Dong, Fu Wang, and Huan Qi Wei. "Bayesian Estimate of Exponential Parameter with Missing Data." Applied Mechanics and Materials 321-324 (June 2013): 904–8. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.904.

Full text
Abstract:
We discuss the empirical Bayesian estimation and the noninformative-prior Bayesian estimation of the Exponential parameter in the missing-data setting. By setting different prior distributions, we obtain different Bayesian risks and compare the numerical simulation results through MATLAB programming.
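For orientation, the standard conjugate calculation behind such estimates (with an illustrative Gamma prior, not necessarily the one used in the paper) is:

```latex
\[
x_1,\dots,x_n \sim \mathrm{Exp}(\lambda), \quad \lambda \sim \mathrm{Gamma}(a,b)
\;\Rightarrow\;
\lambda \mid x \sim \mathrm{Gamma}\!\left(a+n,\; b+\textstyle\sum_i x_i\right),
\qquad
\hat{\lambda}_{\mathrm{Bayes}} = \mathbb{E}[\lambda \mid x] = \frac{a+n}{b+\sum_i x_i},
\]
```

with the last expression being the Bayes estimate under squared-error loss; with missing data, only the observed observations contribute to n and to the sum.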
APA, Harvard, Vancouver, ISO, and other styles
12

Lau, John W., Tak Kuen Siu, and Hailiang Yang. "On Bayesian Mixture Credibility." ASTIN Bulletin 36, no. 02 (November 2006): 573–88. http://dx.doi.org/10.2143/ast.36.2.2017934.

Full text
Abstract:
We introduce a class of Bayesian infinite mixture models first introduced by Lo (1984) to determine the credibility premium for a non-homogeneous insurance portfolio. The Bayesian infinite mixture models provide us with much flexibility in the specification of the claim distribution. We employ the sampling scheme based on a weighted Chinese restaurant process introduced in Lo et al. (1996) to estimate a Bayesian infinite mixture model from the claim data. The Bayesian sampling scheme also provides a systematic way to cluster the claim data. This can provide some insights into the risk characteristics of the policyholders. The estimated credibility premium from the Bayesian infinite mixture model can be written as a linear combination of the prior estimate and the sample mean of the claim data. Estimation results for the Bayesian mixture credibility premiums will be presented.
APA, Harvard, Vancouver, ISO, and other styles
13

Lau, John W., Tak Kuen Siu, and Hailiang Yang. "On Bayesian Mixture Credibility." ASTIN Bulletin 36, no. 2 (November 2006): 573–88. http://dx.doi.org/10.1017/s0515036100014677.

Full text
Abstract:
We introduce a class of Bayesian infinite mixture models first introduced by Lo (1984) to determine the credibility premium for a non-homogeneous insurance portfolio. The Bayesian infinite mixture models provide us with much flexibility in the specification of the claim distribution. We employ the sampling scheme based on a weighted Chinese restaurant process introduced in Lo et al. (1996) to estimate a Bayesian infinite mixture model from the claim data. The Bayesian sampling scheme also provides a systematic way to cluster the claim data. This can provide some insights into the risk characteristics of the policyholders. The estimated credibility premium from the Bayesian infinite mixture model can be written as a linear combination of the prior estimate and the sample mean of the claim data. Estimation results for the Bayesian mixture credibility premiums will be presented.
APA, Harvard, Vancouver, ISO, and other styles
14

MOLINARES, CARLOS A., and CHRIS P. TSOKOS. "BAYESIAN RELIABILITY APPROACH TO THE POWER LAW PROCESS WITH SENSITIVITY ANALYSIS TO PRIOR SELECTION." International Journal of Reliability, Quality and Safety Engineering 20, no. 01 (February 2013): 1350004. http://dx.doi.org/10.1142/s0218539313500046.

Full text
Abstract:
The intensity function is the key entity of the power law process, also known as the Weibull process or nonhomogeneous Poisson process. It gives the rate of change of the reliability of a system as a function of time. We illustrate that a Bayesian analysis is applicable to the power law process through the intensity function. First, we show, using real data, that one of the two parameters in the intensity function behaves as a random variable. With a sequence of estimates of the subject parameter we proceeded to identify the probability distribution that characterizes its behavior. Using the commonly used squared-error loss function we obtain a Bayesian reliability estimate of the power law process. A simulation procedure also shows the better performance of the proposed Bayesian estimate with respect to its maximum likelihood counterpart. It was also found that the Bayesian estimate is sensitive to the prior selection.
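For reference, the intensity function of the power law (Weibull) process discussed here has the standard form (notation assumed, not quoted from the paper):

```latex
\[
\lambda(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta-1}, \qquad \beta, \theta > 0,
\]
```

and under the squared-error loss function mentioned in the abstract, the Bayes estimate of any quantity (a parameter or the reliability itself) is its posterior mean.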
APA, Harvard, Vancouver, ISO, and other styles
15

Zhang, Qingyang, and Xuan Shi. "A mixture copula Bayesian network model for multimodal genomic data." Cancer Informatics 16 (January 1, 2017): 117693511770238. http://dx.doi.org/10.1177/1176935117702389.

Full text
Abstract:
Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation–maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.
APA, Harvard, Vancouver, ISO, and other styles
16

Phang, Sen, Pietro Ravani, Jeffrey Schaefer, Bruce Wright, and Kevin Mclaughlin. "Internal Medicine residents use heuristics to estimate disease probability." Canadian Medical Education Journal 6, no. 2 (December 11, 2015): e71-e77. http://dx.doi.org/10.36834/cmej.36653.

Full text
Abstract:
Background: Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method: We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) using a representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) using anchoring with adjustment heuristic, by providing a high or low anchor for the target condition. Results: When presented with additional non-discriminating data the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Conclusions: Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing.
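The normative calculation the residents were expected to perform is the standard diagnostic form of Bayes' theorem (stated here for context; it is not reproduced in the abstract):

```latex
\[
\underbrace{\frac{P(D \mid E)}{1 - P(D \mid E)}}_{\text{post-test odds}}
\;=\;
\underbrace{\frac{P(D)}{1 - P(D)}}_{\text{pre-test odds}}
\times
\underbrace{\frac{P(E \mid D)}{P(E \mid \bar{D})}}_{\text{likelihood ratio}},
\]
```

so non-discriminating findings (likelihood ratio near 1) should leave the post-test probability essentially unchanged, which is exactly what the heuristic responses violated.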
APA, Harvard, Vancouver, ISO, and other styles
17

Lehky, Sidney R. "Bayesian Estimation of Stimulus Responses in Poisson Spike Trains." Neural Computation 16, no. 7 (July 1, 2004): 1325–43. http://dx.doi.org/10.1162/089976604323057407.

Full text
Abstract:
A Bayesian method is developed for estimating neural responses to stimuli, using likelihood functions incorporating the assumption that spike trains follow either pure Poisson statistics or Poisson statistics with a refractory period. The Bayesian and standard estimates of the mean and variance of responses are similar and asymptotically converge as the size of the data sample increases. However, the Bayesian estimate of the variance of the variance is much lower. This allows the Bayesian method to provide more precise interval estimates of responses. Sensitivity of the Bayesian method to the Poisson assumption was tested by conducting simulations perturbing the Poisson spike trains with noise. This did not affect Bayesian estimates of mean and variance to a significant degree, indicating that the Bayesian method is robust. The Bayesian estimates were less affected by the presence of noise than estimates provided by the standard method.
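As a concrete instance of the pure-Poisson case treated in the paper (the Gamma prior here is an illustrative assumption), counts n_1, ..., n_T in T bins of width Δ give a conjugate posterior for the firing rate:

```latex
\[
n_1,\dots,n_T \mid \lambda \sim \mathrm{Poisson}(\lambda\Delta), \quad
\lambda \sim \mathrm{Gamma}(a, b)
\;\Rightarrow\;
\lambda \mid n \sim \mathrm{Gamma}\!\left(a + \textstyle\sum_t n_t,\; b + T\Delta\right),
\]
```

and the posterior mean and credible intervals of λ then play the role of the Bayesian response estimates compared with the standard ones in the abstract.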
APA, Harvard, Vancouver, ISO, and other styles
18

Sparacino, Giovanni, Stefano Milani, Edoardo Arslan, and Claudio Cobelli. "A Bayesian approach to estimate evoked potentials." Computer Methods and Programs in Biomedicine 68, no. 3 (June 2002): 233–48. http://dx.doi.org/10.1016/s0169-2607(01)00175-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Smith, Jordan W., Lindsey S. Smart, Monica A. Dorning, Lauren Nicole Dupéy, Andréanne Méley, and Ross K. Meentemeyer. "Bayesian methods to estimate urban growth potential." Landscape and Urban Planning 163 (July 2017): 1–16. http://dx.doi.org/10.1016/j.landurbplan.2017.03.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Han, Ming, and Yuanyao Ding. "Synthesized expected Bayesian method of parametric estimate." Journal of Systems Science and Systems Engineering 13, no. 1 (March 2004): 98–111. http://dx.doi.org/10.1007/s11518-006-0156-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Caraiani, P. "Bayesian estimation of the Okun coefficient for Romania." Acta Oeconomica 60, no. 1 (March 1, 2010): 79–92. http://dx.doi.org/10.1556/aoecon.60.2010.1.5.

Full text
Abstract:
In this paper I use a New Keynesian model with unemployment and estimate it for the Romanian economy using Bayesian techniques. I use the estimated model to derive an estimation of the Okun coefficient. I alternatively estimate the Okun coefficient using the Bayesian linear regression. The results show that the Okun coefficient is high in the Romanian economy implying that the current crisis will have a severe impact on the labour market as well as important social effects.
APA, Harvard, Vancouver, ISO, and other styles
22

Okasha, Hassan M., Heba S. Mohammed, and Yuhlong Lio. "E-Bayesian Estimation of Reliability Characteristics of a Weibull Distribution with Applications." Mathematics 9, no. 11 (May 31, 2021): 1261. http://dx.doi.org/10.3390/math9111261.

Full text
Abstract:
Given a progressively type-II censored sample, the E-Bayesian estimates, which are the expected Bayesian estimates over the joint prior distributions of the hyper-parameters in the gamma prior distribution of the unknown Weibull rate parameter, are developed for any given function of unknown rate parameter under the square error loss function. In order to study the impact from the selection of hyper-parameters for the prior, three different joint priors of the hyper-parameters are utilized to establish the theoretical properties of the E-Bayesian estimators for four functions of the rate parameter, which include an identity function (that is, a rate parameter) as well as survival, hazard rate and quantile functions. A simulation study is also conducted to compare the three E-Bayesian and a Bayesian estimate as well as the maximum likelihood estimate for each of the four functions considered. Moreover, two real data sets from a medical study and industry life test, respectively, are used for illustration. Finally, concluding remarks are addressed.
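In the notation commonly used for this approach (sketched here with the hyper-parameter prior left generic), the E-Bayesian estimate is simply the Bayes estimate averaged over the hyper-parameters:

```latex
\[
\hat{\theta}_{\mathrm{EB}} \;=\; \iint \hat{\theta}_{\mathrm{B}}(a, b)\,\pi(a, b)\,\mathrm{d}a\,\mathrm{d}b,
\]
```

where the inner Bayes estimate is taken under the Gamma(a, b) prior on the Weibull rate parameter and π(a, b) is one of the three joint hyper-parameter priors the paper considers.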
APA, Harvard, Vancouver, ISO, and other styles
23

Cu Thi, Phuong, James Ball, and Ngoc Dao. "Uncertainty Estimation Using the Glue and Bayesian Approaches in Flood Estimation: A case Study—Ba River, Vietnam." Water 10, no. 11 (November 13, 2018): 1641. http://dx.doi.org/10.3390/w10111641.

Full text
Abstract:
In the last few decades tremendous progress has been made in the use of catchment models for the analysis and understanding of hydrologic systems. A common application involves the use of these models to predict flows at catchment outputs. However, the outputs predicted by these models are often deterministic because they focus only on the most probable forecast without an explicit estimate of the associated uncertainty. This paper uses Bayesian and Generalized Likelihood Uncertainty Estimation (GLUE) approaches to estimate uncertainty in catchment modelling parameter values and uncertainty in design flow estimates. Testing of the joint probability of both these estimates has been conducted for a monsoon catchment in Vietnam. The paper focuses on computational efficiency and the differences in results, regardless of the philosophies and mathematical rigor of both methods. It was found that the application of GLUE and Bayesian techniques resulted in parameter values that were statistically different. The design flood quantiles estimated by the GLUE method were less scattered than those resulting from the Bayesian approach when using a closer threshold value (1 standard deviation departed from the mean). More studies are required to evaluate the impact of the threshold in GLUE on design flood estimation.
APA, Harvard, Vancouver, ISO, and other styles
24

Rabie, Abdalla, and Junping Li. "E-Bayesian Estimation Based on Burr-X Generalized Type-II Hybrid Censored Data." Symmetry 11, no. 5 (May 3, 2019): 626. http://dx.doi.org/10.3390/sym11050626.

Full text
Abstract:
In this article, we are concerned with the E-Bayesian (the expectation of Bayesian estimate) method, the maximum likelihood and the Bayesian estimation methods of the shape parameter, and the reliability function of one-parameter Burr-X distribution. A hybrid generalized Type-II censored sample from one-parameter Burr-X distribution is considered. The Bayesian and E-Bayesian approaches are studied under squared error and LINEX loss functions by using the Markov chain Monte Carlo method. Confidence intervals for maximum likelihood estimates, as well as credible intervals for the E-Bayesian and Bayesian estimates, are constructed. Furthermore, an example of real-life data is presented for the sake of the illustration. Finally, the performance of the E-Bayesian estimation method is studied then compared with the performance of the Bayesian and maximum likelihood methods.
APA, Harvard, Vancouver, ISO, and other styles
25

de Lima, Max Sousa, and Gregorio Saravia Atuncar. "A Bayesian method to estimate the optimal bandwidth for multivariate kernel estimator." Journal of Nonparametric Statistics 23, no. 1 (March 2011): 137–48. http://dx.doi.org/10.1080/10485252.2010.485200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Vilar, M. J., J. Ranta, S. Virtanen, and H. Korkeala. "Bayesian Estimation of the True Prevalence and of the Diagnostic Test Sensitivity and Specificity of EnteropathogenicYersiniain Finnish Pig Serum Samples." BioMed Research International 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/931542.

Full text
Abstract:
Bayesian analysis was used to estimate the pig's and herd's true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA which is used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data of the apparent prevalence (cross-sectional study data), prior true prevalence (first step), and estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2–70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3–82.4). The estimates of sensitivity and specificity values of the ELISA were 79.5% and 96.9%.
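The link between what the test reports and what the model estimates is the standard apparent-prevalence relationship (given here for context; the notation is assumed):

```latex
\[
\mathrm{AP} \;=\; \mathrm{TP}\cdot Se \;+\; (1 - \mathrm{TP})\,(1 - Sp),
\]
```

where AP is the apparent prevalence, TP the true prevalence, and Se and Sp the test sensitivity and specificity; a Bayesian model of this kind places priors on TP, Se and Sp and inverts this relation given the observed test results.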
APA, Harvard, Vancouver, ISO, and other styles
27

Muharisa, Catrin, Ferra Yanuar, and Dodi Devianto. "Simulation Study The Using of Bayesian Quantile Regression in Nonnormal Error." CAUCHY 5, no. 3 (December 5, 2018): 121. http://dx.doi.org/10.18860/ca.v5i3.5633.

Full text
Abstract:
The purpose of this paper is to demonstrate, in a simulation study, the ability of the Bayesian quantile regression method to overcome the problem of non-normal errors using the asymmetric Laplace distribution. Method: We generate data and set the error distribution to be the asymmetric Laplace distribution, which is non-normal. In this research, we solve the non-normality problem using the quantile regression method and the Bayesian quantile regression method and then compare them. The approach of quantile regression is to divide the data into quantiles, estimate the conditional quantile function, and minimize an asymmetric absolute error. The Bayesian quantile regression method uses the asymmetric Laplace distribution in the likelihood function. The Markov chain Monte Carlo method with the Gibbs sampling algorithm is then applied to estimate the parameters. Convergence and confidence intervals of the estimated parameters are also checked. Result: The Bayesian quantile regression method yields more significant parameters and smaller confidence intervals than the quantile regression method. Conclusion: This study shows that the Bayesian quantile regression method can produce acceptable parameter estimates for non-normal errors.
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Lu, Daping Bi, and Jifei Pan. "Two-Dimensional Angle Estimation of Two-Parallel Nested Arrays Based on Sparse Bayesian Estimation." Sensors 18, no. 10 (October 19, 2018): 3553. http://dx.doi.org/10.3390/s18103553.

Full text
Abstract:
To increase the number of estimable signal sources, two-parallel nested arrays are proposed, which consist of two subarrays with sensors, and can estimate the two-dimensional (2-D) direction of arrival (DOA) of signal sources. To solve the problem of direction finding with two-parallel nested arrays, a 2-D DOA estimation algorithm based on sparse Bayesian estimation is proposed. Through a vectorization matrix, smoothing reconstruction matrix and singular value decomposition (SVD), the algorithm reduces the size of the sparse dictionary and data noise. A sparse Bayesian learning algorithm is used to estimate one dimension angle. By a joint covariance matrix, another dimension angle is estimated, and the estimated angles from two dimensions can be automatically paired. The simulation results show that the number of DOA signals that can be estimated by the proposed two-parallel nested arrays is much larger than the number of sensors. The proposed two-dimensional DOA estimation algorithm has excellent estimation performance.
APA, Harvard, Vancouver, ISO, and other styles
29

Staggs, Vincent S., and Byron J. Gajewski. "Bayesian and frequentist approaches to assessing reliability and precision of health-care provider quality measures." Statistical Methods in Medical Research 26, no. 3 (March 17, 2015): 1341–49. http://dx.doi.org/10.1177/0962280215577410.

Full text
Abstract:
Our purpose was to compare frequentist, empirical Bayes, and Bayesian hierarchical model approaches to estimating reliability of health care quality measures, including construction of credible intervals to quantify uncertainty in reliability estimates, using data on inpatient fall rates on hospital nursing units. Precision of reliability estimates and Bayesian approaches to estimating reliability are not well studied. We analyzed falls data from 2372 medical units; the rate of unassisted falls per 1000 inpatient days was the measure of interest. The Bayesian methods “shrunk” the observed fall rates and frequentist reliability estimates toward their posterior means. We examined the association between reliability and precision in fall rate rankings by plotting the length of a 90% credible interval for each unit’s percentile rank against the unit’s estimated reliability. Precision of rank estimates tended to increase as reliability increased but was limited even at higher reliability levels: Among units with reliability >0.8, only 5.5% had credible interval length <20; among units with reliability >0.9, only 31.9% had credible interval length <20. Thus, a high reliability estimate may not be sufficient to ensure precise differentiation among providers. Bayesian approaches allow for assessment of this precision.
APA, Harvard, Vancouver, ISO, and other styles
30

Reyad, Hesham, Adil Mousa Younis, and Amal Alsir Alkhedir. "Comparison of estimates using censored samples from Gompertz model: Bayesian, E-Bayesian, hierarchical Bayesian and empirical Bayesian schemes." International Journal of Advanced Statistics and Probability 4, no. 1 (April 3, 2016): 47. http://dx.doi.org/10.14419/ijasp.v4i1.5914.

Full text
Abstract:
This paper aims to introduce a comparative study of the E-Bayesian criterion with three other Bayesian approaches: Bayesian, hierarchical Bayesian and empirical Bayesian. This study is concerned with estimating the shape parameter and the hazard function of the Gompertz distribution based on type-II censoring. All estimators are obtained under a symmetric loss function [squared error loss function (SELF)] and three different asymmetric loss functions [quadratic loss function (QLF), entropy loss function (ELF) and LINEX loss function (LLF)]. Comparisons among all estimators are achieved in terms of mean square error (MSE) via Monte Carlo simulation.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Chenlan, Chongjie Zhang, and X. Jessie Yang. "Automation reliability and trust: A Bayesian inference approach." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 202–6. http://dx.doi.org/10.1177/1541931218621048.

Full text
Abstract:
Research shows that over repeated interactions with automation, human operators are able to learn how reliable the automation is and update their trust in automation. The goal of the present study is to investigate if this learning and inference process approximately follow the principle of Bayesian probabilistic inference. First, we applied Bayesian inference to estimate human operators’ perceived system reliability and found high correlations between the Bayesian estimates and the perceived reliability for the majority of the participants. We then correlated the Bayesian estimates with human operators’ reported trust and found moderate correlations for a large portion of the participants. Our results suggest that human operators’ learning and inference process for automation reliability can be approximated by Bayesian inference.
APA, Harvard, Vancouver, ISO, and other styles
32

McCann, Brian T. "Using Bayesian Updating to Improve Decisions under Uncertainty." California Management Review 63, no. 1 (August 28, 2020): 26–40. http://dx.doi.org/10.1177/0008125620948264.

Full text
Abstract:
Decision making requires managers to constantly estimate the probability of uncertain outcomes and update those estimates in light of new information. This article provides guidance to managers on how they can improve that process by more explicitly adopting a Bayesian approach. Clear understanding and application of the Bayesian approach leads to more accurate probability estimates, resulting in better informed decisions. More importantly, adopting a Bayesian approach, even informally, promises to improve the quality of managerial thinking, analysis, and decisions in a variety of additional ways.
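A minimal sketch of the updating step the article advocates, with illustrative numbers that are not taken from the article:

```python
# Bayes' rule for revising belief in a hypothesis H after observing evidence E.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) from P(H), P(E | H) and P(E | not H)."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Example: a manager starts at 30% belief that a project will finish on time,
# then sees a milestone hit that is twice as likely under the on-time scenario.
posterior = bayes_update(prior=0.30, p_evidence_given_h=0.8, p_evidence_given_not_h=0.4)
print(round(posterior, 3))  # 0.462
```

Repeating the update as each new piece of evidence arrives is the informal discipline the article recommends, even when the probabilities are only rough subjective estimates.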
APA, Harvard, Vancouver, ISO, and other styles
33

Thorson, James T., and Jim Berkson. "Multispecies estimation of Bayesian priors for catchability trends and density dependence in the US Gulf of Mexico." Canadian Journal of Fisheries and Aquatic Sciences 67, no. 6 (June 2010): 936–54. http://dx.doi.org/10.1139/f10-040.

Full text
Abstract:
Fishery-dependent catch-per-unit-effort (CPUE) derived indices of stock abundance are commonly used in fishery stock assessment models and may be significantly biased due to changes in catchability over time. Factors causing time-varying catchability include density-dependent habitat selection and technology improvements such as global positioning systems. In this study, we develop a novel multispecies method to estimate Bayesian priors for catchability functional parameters. This method uses the deviance information criterion to select a parsimonious functional model for catchability among 10 hierarchical and measurement error models. The parsimonious model is then applied to multispecies data, while excluding one species at a time, to develop Bayesian priors that can be used for each excluded species. We use this method to estimate catchability trends and density dependence for seven stocks and four gears in the Gulf of Mexico by comparing CPUE-derived index data with abundance estimates from virtual population analysis calibrated with fishery-independent indices. Catchability density dependence estimates mean that CPUE indices are hyperstable, implying that stock rebuilding in the Gulf may be progressing faster than previously estimated. This method for estimating Bayesian priors can provide a parsimonious method to compensate for time-varying catchability and uses multispecies fishery data in a novel manner.
APA, Harvard, Vancouver, ISO, and other styles
34

Vehtari, Aki, and Jouko Lampinen. "Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities." Neural Computation 14, no. 10 (October 1, 2002): 2439–68. http://dx.doi.org/10.1162/08997660260293292.

Full text
Abstract:
In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the expected utility estimate because it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be used to compare models, for example, by computing the probability of one model having a better expected utility than some other model. We propose an approach using cross-validation predictive densities to obtain expected utility estimates and Bayesian bootstrap to obtain samples from their distributions. We also discuss the probabilistic assumptions made and properties of two practical cross-validation methods, importance sampling and k-fold cross-validation. As illustrative examples, we use multilayer perceptron neural networks and gaussian processes with Markov chain Monte Carlo sampling in one toy problem and two challenging real-world problems.
APA, Harvard, Vancouver, ISO, and other styles
35

Nguefack-Tsague, Georges, and Ingo Bulla. "A Focused Bayesian Information Criterion." Advances in Statistics 2014 (October 14, 2014): 1–8. http://dx.doi.org/10.1155/2014/504325.

Full text
Abstract:
Myriads of model selection criteria (Bayesian and frequentist) have been proposed in the literature aiming at selecting a single model regardless of its intended use. An honorable exception in the frequentist perspective is the “focused information criterion” (FIC) aiming at selecting a model based on the parameter of interest (focus). This paper takes the same view in the Bayesian context; that is, a model may be good for one estimand but bad for another. The proposed method exploits the Bayesian model averaging (BMA) machinery to obtain a new criterion, the focused Bayesian model averaging (FoBMA), for which the best model is the one whose estimate is closest to the BMA estimate. In particular, for two models, this criterion reduces to the classical Bayesian model selection scheme of choosing the model with the highest posterior probability. The new method is applied in linear regression, logistic regression, and survival analysis. This criterion is specially important in epidemiological studies in which the objective is often to determine a risk factor (focus) for a disease, adjusting for potential confounding factors.
APA, Harvard, Vancouver, ISO, and other styles
36

Amry, Zul. "Bayesian Estimate of Parameters for ARMA Model Forecasting." Tatra Mountains Mathematical Publications 75, no. 1 (April 1, 2020): 23–32. http://dx.doi.org/10.2478/tmmp-2020-0002.

Full text
Abstract:
This paper presents a Bayesian approach to finding the Bayes estimator of parameters for ARMA model forecasting under normal-gamma prior assumption with a quadratic loss function in mathematical expression. Obtaining the conditional posterior predictive density is based on the normal-gamma prior and the conditional predictive density, whereas its marginal conditional posterior predictive density is obtained using the conditional posterior predictive density. Furthermore, the Bayes estimator of parameters is derived from the marginal conditional posterior predictive density.
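The quadratic-loss result underlying this construction is the standard one: the Bayes estimator is the posterior mean.

```latex
\[
L(\theta, \hat{\theta}) = (\theta - \hat{\theta})^2
\;\Rightarrow\;
\hat{\theta}_{\mathrm{Bayes}} = \mathbb{E}[\theta \mid \text{data}],
\]
```

here evaluated with respect to the marginal conditional posterior predictive density obtained from the normal-gamma prior described in the abstract.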
APA, Harvard, Vancouver, ISO, and other styles
37

Dose, Volker. "Bayesian estimate of the Newtonian constant of gravitation." Measurement Science and Technology 18, no. 1 (November 30, 2006): 176–82. http://dx.doi.org/10.1088/0957-0233/18/1/022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hogan, Craig J., Keith A. Olive, and Sean T. Scully. "A Bayesian Estimate of the Primordial Helium Abundance." Astrophysical Journal 489, no. 2 (1997): L119—L122. http://dx.doi.org/10.1086/316783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Barry, Daniel. "Asymptotic IMSE for a nonparametric bayesian regression estimate." Communications in Statistics - Theory and Methods 17, no. 10 (January 1988): 3277–93. http://dx.doi.org/10.1080/03610928808829803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Poli, I. "A Bayesian non-parametric estimate for multivariate regression." Journal of Econometrics 28, no. 2 (May 1985): 171–82. http://dx.doi.org/10.1016/0304-4076(85)90117-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Caron, Franco, Fabrizio Ruggeri, and Beatrice Pierini. "A Bayesian approach to improving estimate to complete." International Journal of Project Management 34, no. 8 (November 2016): 1687–702. http://dx.doi.org/10.1016/j.ijproman.2016.09.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Jackman, Simon. "Estimation and Inference Are Missing Data Problems: Unifying Social Science Statistics via Bayesian Simulation." Political Analysis 8, no. 4 (July 18, 2000): 307–32. http://dx.doi.org/10.1093/oxfordjournals.pan.a029818.

Full text
Abstract:
Bayesian simulation is increasingly exploited in the social sciences for estimation and inference of model parameters. But an especially useful (if often overlooked) feature of Bayesian simulation is that it can be used to estimate any function of model parameters, including “auxiliary” quantities such as goodness-of-fit statistics, predicted values, and residuals. Bayesian simulation treats these quantities as if they were missing data, sampling from their implied posterior densities. Exploiting this principle also lets researchers estimate models via Bayesian simulation where maximum-likelihood estimation would be intractable. Bayesian simulation thus provides a unified solution for quantitative social science. I elaborate these ideas in a variety of contexts: these include generalized linear models for binary responses using data on bill cosponsorship recently reanalyzed in Political Analysis, item-response models for the measurement of respondents' levels of political information in public opinion surveys, the estimation and analysis of legislators' ideal points from roll-call data, and outlier-resistant regression estimates of incumbency advantage in U.S. Congressional elections.
APA, Harvard, Vancouver, ISO, and other styles
43

Oladyshkin, Sergey, and Wolfgang Nowak. "The Connection between Bayesian Inference and Information Theory for Model Selection, Information Gain and Experimental Design." Entropy 21, no. 11 (November 4, 2019): 1081. http://dx.doi.org/10.3390/e21111081.

Full text
Abstract:
We show a link between Bayesian inference and information theory that is useful for model selection, assessment of information entropy and experimental design. We align Bayesian model evidence (BME) with relative entropy and cross entropy in order to simplify computations using prior-based (Monte Carlo) or posterior-based (Markov chain Monte Carlo) BME estimates. On the one hand, we demonstrate how Bayesian model selection can profit from information theory to estimate BME values via posterior-based techniques. Hence, we use various assumptions including relations to several information criteria. On the other hand, we demonstrate how relative entropy can profit from BME to assess information entropy during Bayesian updating and to assess utility in Bayesian experimental design. Specifically, we emphasize that relative entropy can be computed avoiding unnecessary multidimensional integration from both prior and posterior-based sampling techniques. Prior-based computation does not require any assumptions, however posterior-based estimates require at least one assumption. We illustrate the performance of the discussed estimates of BME, information entropy and experiment utility using a transparent, non-linear example. The multivariate Gaussian posterior estimate includes least assumptions and shows the best performance for BME estimation, information entropy and experiment utility from posterior-based sampling.
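For context, the Bayesian model evidence referred to here and its simplest prior-based Monte Carlo estimate are (a standard formulation, not quoted from the paper):

```latex
\[
\mathrm{BME} \;=\; p(D) \;=\; \int p(D \mid \theta)\, p(\theta)\, \mathrm{d}\theta
\;\approx\; \frac{1}{N} \sum_{i=1}^{N} p\!\left(D \mid \theta^{(i)}\right),
\qquad \theta^{(i)} \sim p(\theta),
\]
```

while posterior-based estimates reuse MCMC samples from the posterior and, as the abstract notes, require at least one additional assumption.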
APA, Harvard, Vancouver, ISO, and other styles
44

Yanuar, Ferra, Rahmi Febriyuni, and Izzati Rahmi HG. "Bayesian Generalized Self Method to Estimate Scale Parameter of Invers Rayleigh Distribution." CAUCHY 6, no. 4 (May 30, 2021): 270–78. http://dx.doi.org/10.18860/ca.v6i4.11482.

Full text
Abstract:
The purpose of this study is to estimate the scale parameter of the Inverse Rayleigh distribution under MLE and under the Bayesian generalized squared error loss function (SELF). The posterior distribution is considered using two types of prior, namely Jeffreys' prior and the exponential distribution. The proposed methods are then applied to real data. Several model selection criteria are considered in order to identify the method which yields a suitable parameter estimate. This study found that the Bayesian generalized SELF under Jeffreys' prior yielded better estimates than MLE and than the Bayesian generalized SELF under the exponential prior.
APA, Harvard, Vancouver, ISO, and other styles
45

Jamnia, Abdul Rashid, Ahmad Ali Keikha, Mahmoud Ahmadpour, Abdoul Ahad Cissé, and Mohammad Rokouei. "Applying bayesian population assessment models to artisanal, multispecies fisheries in the Northern Mokran Sea, Iran." Nature Conservation 28 (August 13, 2018): 61–89. http://dx.doi.org/10.3897/natureconservation.28.25212.

Full text
Abstract:
Small-scale fisheries contribute substantially to poverty reduction, local economies and food safety in many countries. However, limited and low-quality catch and effort data for small-scale fisheries complicate stock assessment and management. Bayesian modelling has been advocated when assessing fisheries with limited data. Specifically, Bayesian models can incorporate information from multiple sources, improve precision in the stock assessments and provide specific levels of uncertainty for estimating the relevant parameters. In this study, therefore, state-space Bayesian generalised surplus production models are used to estimate the stock status of fourteen demersal fish species targeted by small-scale fisheries in Sistan and Baluchestan, Iran. The model was estimated using Markov chain Monte Carlo (MCMC) and Gibbs sampling. Model parameter estimates were evaluated by formal convergence and stationarity diagnostic tests, indicating convergence and accuracy. They were also aligned with existing parameter estimates for the fourteen species at other locations. This suggests model reliability and demonstrates the utility of Bayesian models. According to the estimated fisheries management reference points, all assessed fish stocks appear to be overfished. Given this overfishing, the current fisheries management strategies for the small-scale fisheries may need some adjustments to ensure the long-term viability of the fisheries.
APA, Harvard, Vancouver, ISO, and other styles
46

Witzany, Jiří. "A Bayesian Approach to Measurement of Backtest Overfitting." Risks 9, no. 1 (January 8, 2021): 18. http://dx.doi.org/10.3390/risks9010018.

Full text
Abstract:
Quantitative investment strategies are often selected from a broad class of candidate models estimated and tested on historical data. Standard statistical techniques to prevent model overfitting such as out-sample backtesting turn out to be unreliable in situations when the selection is based on results of too many models tested on the holdout sample. There is an ongoing discussion of how to estimate the probability of backtest overfitting and adjust the expected performance indicators such as the Sharpe ratio in order to reflect properly the effect of multiple testing. We propose a consistent Bayesian approach that yields the desired robust estimates on the basis of a Markov chain Monte Carlo (MCMC) simulation. The approach is tested on a class of technical trading strategies where a seemingly profitable strategy can be selected in the naïve approach.
APA, Harvard, Vancouver, ISO, and other styles
47

Keil, Alexander P., Eric J. Daza, Stephanie M. Engel, Jessie P. Buckley, and Jessie K. Edwards. "A Bayesian approach to the g-formula." Statistical Methods in Medical Research 27, no. 10 (March 2, 2017): 3183–204. http://dx.doi.org/10.1177/0962280217694665.

Full text
Abstract:
Epidemiologists often wish to estimate quantities that are easy to communicate and correspond to the results of realistic public health interventions. Methods from causal inference can answer these questions. We adopt the language of potential outcomes under Rubin’s original Bayesian framework and show that the parametric g-formula is easily amenable to a Bayesian approach. We show that the frequentist properties of the Bayesian g-formula suggest it improves the accuracy of estimates of causal effects in small samples or when data are sparse. We demonstrate an approach to estimate the effect of environmental tobacco smoke on body mass index among children aged 4–9 years who were enrolled in a longitudinal birth cohort in New York, USA. We provide an algorithm and supply SAS and Stan code that can be adopted to implement this computational approach more generally.
APA, Harvard, Vancouver, ISO, and other styles
48

Viana, Marlos A. G. "Bayesian Joint Estimation of Binomial Proportions." Journal of Educational Statistics 16, no. 4 (December 1991): 331–43. http://dx.doi.org/10.3102/10769986016004331.

Full text
Abstract:
Testing the hypothesis H that k > 1 binomial parameters are equal and jointly estimating these parameters are related problems. A Bayesian argument can simultaneously answer these inference questions: to test the hypothesis H, the posterior probability λ = λ (H | x) of H given the experimental data x can be used; to estimate each binomial parameter, their Bayesian estimates under H and the alternative hypothesis H̄ are combined with weights λ and 1 – λ, respectively.
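Written out, the combined estimate described in the abstract is:

```latex
\[
\hat{p}_i \;=\; \lambda\, \hat{p}_i^{\,H} \;+\; (1 - \lambda)\, \hat{p}_i^{\,\bar{H}},
\qquad \lambda = P(H \mid x),
\]
```

where the two terms are the Bayes estimates of the i-th binomial proportion under the hypothesis of equal proportions and under its alternative, respectively, so the same posterior probability serves both the test and the estimation.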
APA, Harvard, Vancouver, ISO, and other styles
49

Riedel, Michael, Stan E. Dosso, and Laurens Beran. "Uncertainty estimation for amplitude variation with offset (AVO) inversion." GEOPHYSICS 68, no. 5 (September 2003): 1485–96. http://dx.doi.org/10.1190/1.1620621.

Full text
Abstract:
This paper uses a Bayesian approach for inverting seismic amplitude versus offset (AVO) data to provide estimates and uncertainties of the viscoelastic physical parameters at an interface. The inversion is based on Gibbs' sampling approach to determine properties of the posterior probability distribution (PPD), such as the posterior mean, maximum a posteriori (MAP) estimate, marginal probability distributions, and covariances. The Bayesian formulation represents a fully nonlinear inversion; the results are compared to those of standard linearized inversion. The nonlinear and linearized approaches are applied to synthetic test cases which consider AVO inversion for shallow marine environments with both unconsolidated and consolidated seabeds. The result of neglecting attenuation in the seabed is investigated, and the effects of data factors such as independent and systematic errors and the range of incident angles are considered. The Bayesian approach is also applied to estimate the physical parameters and uncertainties from AVO data collected at two sites along a seismic line in the Baltic Sea with differing sediment types; it clearly identifies the distinct seabed compositions. Data uncertainties (independent and systematic) required for this analysis are estimated using a maximum‐likelihood approach.
APA, Harvard, Vancouver, ISO, and other styles