Journal articles on the topic 'Estimators'

Consult the top 50 journal articles for your research on the topic 'Estimators.'

1

Benkhaled, Abdelkader, Abdenour Hamdaoui, and Mekki Terbeche. "MINIMAX SHRINKAGE ESTIMATORS AND ESTIMATORS DOMINATING THE JAMES-STEIN ESTIMATOR UNDER THE BALANCED LOSS FUNCTION." Eurasian Mathematical Journal 13, no. 2 (2022): 18–36. http://dx.doi.org/10.32523/2077-9879-2022-13-2-18-36.

2

Abonazel, Mohamed. "Bias correction methods for dynamic panel data models with fixed effects." International Journal of Applied Mathematical Research 6, no. 2 (May 24, 2017): 58. http://dx.doi.org/10.14419/ijamr.v6i2.7774.

Abstract:
This paper considers the estimation methods for dynamic panel data (DPD) models with fixed effects that have been suggested in the econometric literature, such as least squares (LS) and generalized method of moments (GMM). These methods yield biased estimators for DPD models. The LS estimator is inconsistent when the time dimension (T) is short, regardless of the cross-sectional dimension (N). Although consistent estimates can be obtained by GMM procedures, the inconsistent LS estimator has a relatively low variance and hence can lead to an estimator with a lower root mean square error after the bias is removed. Therefore, we discuss in this paper the different methods to correct the bias of the LS and GMM estimations. The analytical expressions for the asymptotic biases of the LS and GMM estimators are presented for large N and finite T. Finally, we present the estimators proposed by Youssef and Abonazel [40], which are more efficient than the conventional estimators.
3

Hamdaoui, Abdenour, Waleed Almutiry, Mekki Terbeche, and Abdelkader Benkhaled. "Comparison of Risk Ratios of Shrinkage Estimators in High Dimensions." Mathematics 10, no. 1 (December 24, 2021): 52. http://dx.doi.org/10.3390/math10010052.

Abstract:
In this paper, we analyze the risk ratios of several shrinkage estimators using a balanced loss function. The James–Stein estimator is one of a group of shrinkage estimators that has been proposed in the existing literature. For these estimators, sufficient criteria for minimaxity have been established, and the James–Stein estimator’s minimaxity has been derived. We demonstrate that the James–Stein estimator’s minimaxity is still valid even when the parameter space has infinite dimension. It is shown that the positive-part version of the James–Stein estimator is substantially superior to the James–Stein estimator, and we address the asymptotic behavior of their risk ratios to the maximum likelihood estimator (MLE) when the dimensions of the parameter space are infinite. Finally, a simulation study is carried out to verify the performance evaluation of the considered estimators.
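As background for this and the first entry, the James–Stein estimator and its positive-part version have a simple closed form for a multivariate normal mean. The sketch below is a generic illustration under the usual known-variance setup, not the authors' balanced-loss analysis; the function names and simulated data are illustrative.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Classical James-Stein shrinkage of one observation x ~ N_p(theta, sigma2 * I)."""
    x = np.asarray(x, dtype=float)
    p = x.size
    shrink = 1.0 - (p - 2) * sigma2 / np.sum(x**2)
    return shrink * x

def positive_part_james_stein(x, sigma2=1.0):
    """Positive-part version: the shrinkage factor is truncated at zero."""
    x = np.asarray(x, dtype=float)
    p = x.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(x**2))
    return shrink * x

# Quick Monte Carlo comparison with the MLE (the observation itself).
rng = np.random.default_rng(0)
theta = np.zeros(20)                      # high-dimensional mean to be estimated
x = rng.normal(theta, 1.0, size=(5000, 20))
risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
risk_js = np.mean([np.sum((positive_part_james_stein(row) - theta) ** 2) for row in x])
print(risk_mle, risk_js)                  # the shrinkage risk falls well below p = 20
```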
4

Cahalan, Jennifer A., Jason Gasper, and Jennifer Mondragon. "Catch estimation in the federal trawl fisheries off Alaska: a simulation approach to compare the statistical properties of three trip-specific catch estimators." Canadian Journal of Fisheries and Aquatic Sciences 72, no. 7 (July 2015): 1024–36. http://dx.doi.org/10.1139/cjfas-2014-0347.

Abstract:
Quantifying catch has been recognized worldwide as a critical component in fisheries management. Assessment of discard is challenging because of the requirement for at-sea observation, which is both logistically difficult and costly to fishery agencies. Statistical estimators using robust sampling methods may yield accurate but imprecise estimates, given the variability associated with many at-sea discard species and the inability of agencies to obtain high sampling fractions. However, biased estimates occur if an inappropriate estimator is used. Using Alaska trawl fisheries as an example, we investigated the statistical properties and implementation issues for three commonly used estimators: the ratio estimator; a simple mean estimator; and a deterministic imputation method currently in use in federal fisheries off Alaska. We used a simulation approach to evaluate the performance of these estimators to estimate trip-specific catch. Several statistical properties were evaluated: bias of the estimators, variability of the estimators, and accuracy of the variance estimators. The simple mean estimator had the best performance for vessels landing catch at shoreside processors. The choice of estimator was less clear for vessels processing catch, owing to sensitivity associated with species composition and implementation issues for the simple mean and ratio estimators.
5

Hinrichsen, Richard A. "The accuracy of alternative stochastic growth rate estimates for salmon populations." Canadian Journal of Fisheries and Aquatic Sciences 59, no. 6 (June 1, 2002): 1014–23. http://dx.doi.org/10.1139/f02-065.

Abstract:
The accuracies of four alternative estimators of stochastic growth rate for salmon populations are examined using bootstrapping. The first estimator is based on a stochastic Leslie matrix model that uses age-specific spawner counts. The other three estimators use spawner counts with limited age-structure information: a Botsford–Brittnacher model method and two diffusion approximation methods, namely, the least squares approach of Dennis and the robust approach of Holmes. Accuracy of the estimators was quantified using median bias and interquartile ranges of the stochastic growth rate estimates. The Botsford–Brittnacher estimator was found to be unreliable due to large bias. Of the remaining estimators, the stochastic Leslie approach tended to produce the most reliable estimates but had the greatest data demands. With severe lognormal measurement error, the Dennis estimators produced less biased estimates than the other methods, but precision of the stochastic growth rate was generally highest using the stochastic Leslie estimator.
6

Payandeh, Bijan, and Alan R. Ek. "Distance methods and density estimators." Canadian Journal of Forest Research 16, no. 5 (October 1, 1986): 918–24. http://dx.doi.org/10.1139/x86-163.

Abstract:
The relative performance of five distance–density estimators was evaluated for the n-tree circular, semicircular, and strip plots on several simulated and natural tree populations. Results indicate that for the n-tree circular plot, the ratio estimator performed very well for most populations examined and for n > 10. The performance of both the maximum likelihood and first moment estimators was affected to a great degree by the spatial pattern of the populations, but they performed satisfactorily for the random and uniform populations and for large n values (i.e., n > 10). Smaltschinski's estimator resulted in nearly bias-free estimates for nonaggregated populations and for all n, but performed poorly otherwise. The generalized Prodan estimator performed well for the random population, but overestimated the density otherwise. The relative performance of all estimators for the n-tree semicircular plot was quite similar to that of estimators for the n-tree circular plot, except that the former tended to produce lower density estimates. For n-tree strip plots, the generalized Prodan and the ratio estimator performed very well for the nonaggregated populations and for n > 10. All other estimators resulted in density estimates lower than those for n-tree circular and semicircular plots.
7

M. Elgohary, Mervat, Mohamed R. Abonazel, Nahed M. Helmy, and Abeer R. Azazy. "New robust-ridge estimators for partially linear model." International Journal of Applied Mathematical Research 8, no. 2 (November 17, 2019): 46. http://dx.doi.org/10.14419/ijamr.v8i2.29932.

Abstract:
This paper considers the partially linear model when the explanatory variables are highly correlated and the data set contains outliers. We propose new robust biased estimators for this model under these conditions. The proposed estimators combine least trimmed squares and ridge estimation, based on the spline partial residuals technique. The performance of the proposed estimators and the Speckman-spline estimator has been examined by a Monte Carlo simulation study. The results indicated that the proposed estimators are more efficient and reliable than the Speckman-spline estimator.
8

Little, Roderick J., and Roger J. Lewis. "Estimands, Estimators, and Estimates." JAMA 326, no. 10 (September 14, 2021): 967. http://dx.doi.org/10.1001/jama.2021.2886.

9

Beauducel, André, Christopher Harms, and Norbert Hilger. "Reliability Estimates for Three Factor Score Estimators." International Journal of Statistics and Probability 5, no. 6 (October 26, 2016): 94. http://dx.doi.org/10.5539/ijsp.v5n6p94.

Abstract:
Estimates for the reliability of Thurstone’s regression factor score estimator, Bartlett’s factor score estimator, and McDonald’s factor score estimator were proposed. Moreover, conditions for equal reliability of the factor score estimators were presented and the reliability estimates were compared by means of simulation studies. Under conditions inducing unequal reliabilities, reliability estimates were largest for the regression score estimator and lowest for McDonald’s factor score estimator. We provide an R-script and an SPSS-script for the computation of the respective reliability estimates.
10

Khan, Dost Muhammad, Muhammad Ali, Zubair Ahmad, Sadaf Manzoor, and Sundus Hussain. "A New Efficient Redescending M-Estimator for Robust Fitting of Linear Regression Models in the Presence of Outliers." Mathematical Problems in Engineering 2021 (November 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/3090537.

Abstract:
Robust regression is an important iterative procedure that seeks to analyze data sets contaminated with outliers and unusual observations and to reduce their impact on the regression coefficients. Robust estimation methods have been introduced to deal with the problem of outliers and provide efficient and stable estimates in their presence. Various robust estimators have been developed in the literature to restrict the unbounded influence of outliers or leverage points on the model estimates. Here, a new redescending M-estimator is proposed using a novel objective function, with the prime focus on obtaining highly robust and efficient estimates that give promising results. It is evident from the results that, for normal and clean data, the proposed estimator is almost as efficient as the ordinary least squares method, yet it becomes highly resistant to outliers when used for contaminated data sets. A simulation study is carried out to assess the performance of the proposed redescending M-estimator over different data generation scenarios, including normal, t, and double exponential distributions with different levels of outlier contamination, and the results are compared with existing redescending M-estimators, e.g., the Huber, Tukey biweight, Hampel, and Andrews' sine functions. The performance of the proposed estimator was also checked using real-life data applications, and the proposed estimator gives promising results compared with the existing estimators.
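The abstract does not reproduce the proposed objective function, but the generic machinery of a redescending M-estimator — iteratively reweighted least squares with weights that drop to zero for large residuals — can be sketched with Tukey's biweight as a stand-in. Function names and the simulated data below are illustrative, not the authors' estimator.

```python
import numpy as np

def tukey_biweight_weights(residuals, c=4.685):
    """Redescending weights: observations with |r| beyond c * robust scale get weight 0."""
    scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals))) + 1e-12
    u = residuals / (c * scale)
    w = (1 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0
    return w

def irls_m_estimator(X, y, n_iter=50):
    """Iteratively reweighted least squares with a redescending weight function."""
    Xd = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]    # OLS starting values
    for _ in range(n_iter):
        r = y - Xd @ beta
        w = tukey_biweight_weights(r)
        W = np.diag(w)
        beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 100)
y[:10] += 20                                        # gross outliers
print(irls_m_estimator(x.reshape(-1, 1), y))        # close to (2.0, 0.5) despite the outliers
```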
11

Lady, James M., and John R. Skalski. "Estimators of stream residence time of Pacific salmon (Oncorhynchus spp.) based on release-recapture data." Canadian Journal of Fisheries and Aquatic Sciences 55, no. 12 (December 1, 1998): 2580–87. http://dx.doi.org/10.1139/f98-132.

Abstract:
The area-under-the-curve method is a widely used method for estimating salmon escapement. The method depends on obtaining an accurate estimate of stream residence time, or stream life. This paper develops two estimators of stream residence time based on release-recapture data: a nonparametric estimator and a parametric estimator. Monte Carlo simulations showed that with an adequate release size and number of sampling occasions, both estimators provide precise estimates of stream residence time. If there is significant right censoring, however, the parametric estimator is significantly less biased. If the data are too sparse, the parametric estimator performs poorly and often fails. The stream residence time of spawning sockeye salmon (Oncorhynchus nerka) in Iliamna Lake, Alaska, was estimated using the estimators developed here. Because the estimators also provide the variance of the estimates, the precision of the stream residence time estimate could be assessed, and we were able to test and reject the hypothesis that the stream residence time for females is equal to that of males. Both estimators are applicable to estimating the life expectancy of any fish or wildlife population with release-recapture techniques.
12

Malik, Sachin, and Rajesh Singh. "Some Improved Multivariate-Ratio-Type Estimators Using Geometric and Harmonic Means in Stratified Random Sampling." ISRN Probability and Statistics 2012 (August 26, 2012): 1–7. http://dx.doi.org/10.5402/2012/509186.

Abstract:
Auxiliary variables are commonly used in survey sampling to improve the precision of estimates. Whenever auxiliary information is available, we want to utilize it in the method of estimation to obtain the most efficient estimator. In this paper, using multi-auxiliary information, we propose estimators based on the geometric and harmonic means. It is also shown that the estimators based on the harmonic and geometric means are less biased than the Olkin (1958) and Singh (1967) estimators under certain conditions. However, the MSEs of the Olkin (1958) estimator and the geometric and harmonic estimators are the same up to the first order of approximation.
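For orientation, here is a sketch of the classical single-auxiliary ratio estimator and of arithmetic, geometric and harmonic combinations of several such estimates, in the spirit of Olkin's multivariate ratio estimator. The weights and the exact estimators in the paper (which works under stratified random sampling) may differ; everything below is an assumption-laden illustration.

```python
import numpy as np

def ratio_estimate(y_s, x_s, X_bar):
    """Classical ratio estimator of the population mean of y using one auxiliary variable."""
    return np.mean(y_s) * X_bar / np.mean(x_s)

def combined_ratio_estimates(y_s, X_s, X_bars, w=None):
    """Combine per-auxiliary ratio estimates by arithmetic, geometric and harmonic means."""
    ests = np.array([ratio_estimate(y_s, X_s[:, j], X_bars[j]) for j in range(X_s.shape[1])])
    w = np.full(len(ests), 1.0 / len(ests)) if w is None else np.asarray(w)
    arithmetic = np.sum(w * ests)          # Olkin-style weighted average of ratio estimates
    geometric = np.prod(ests ** w)         # geometric-mean combination
    harmonic = 1.0 / np.sum(w / ests)      # harmonic-mean combination
    return arithmetic, geometric, harmonic

rng = np.random.default_rng(2)
N, n = 10_000, 200
X = rng.gamma(shape=3.0, scale=2.0, size=(N, 2))        # two auxiliary variables
y = 5 + X @ np.array([1.5, 0.8]) + rng.normal(0, 2, N)  # study variable correlated with both
idx = rng.choice(N, n, replace=False)                   # simple random sample
print(combined_ratio_estimates(y[idx], X[idx], X.mean(axis=0)), y.mean())
```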
13

Bhat, S. S., and R. Vidya. "Performance of Ridge Estimators Based on Weighted Geometric Mean and Harmonic Mean." Journal of Scientific Research 12, no. 1 (January 1, 2020): 1–13. http://dx.doi.org/10.3329/jsr.v12i1.40525.

Abstract:
The ordinary least squares (OLS) estimator becomes unstable if there is a linear dependence between any two predictors. When such a situation arises, a ridge estimator yields more stable estimates of the regression coefficients than the OLS estimator. Here we suggest two modified ridge estimators based on weights, the weights being the two largest eigenvalues. We compare their MSE with that of some of the existing ridge estimators defined in the literature. The performance of the suggested estimators is evaluated empirically for a wide range of degrees of multicollinearity. The simulation study indicates that the performance of the suggested estimators is slightly better and more stable with respect to the degree of multicollinearity, sample size, and error variance.
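The ridge estimator referred to throughout this entry has the standard closed form β̂(k) = (X'X + kI)⁻¹X'y; the eigenvalue-based weighting proposed in the paper is not reproduced here. A minimal sketch on nearly collinear data (simulated values are illustrative):

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Standard ridge estimator beta_hat(k) = (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

print(np.linalg.lstsq(X, y, rcond=None)[0])   # OLS: highly variable under near-collinearity
print(ridge_estimator(X, y, k=0.1))           # ridge: stabilized estimates (true values are 1 and 1)
```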
14

Hone, J. "A Test of the Accuracy of Line and Strip Transect Estimators in Aerial Survey." Wildlife Research 15, no. 5 (1988): 493. http://dx.doi.org/10.1071/wr9880493.

Abstract:
The accuracy and precision of eight line transect estimators and one strip transect estimator were examined by helicopter aerial survey. Carcasses of feral pigs were counted in an area of treeless floodplain and Eucalyptus woodland. The ratio and Cox's methods, Fourier series, exponential power series, half-normal, exponential polynomial, negative exponential, Hermite polynomial and hazard rate estimators gave accurate estimates. Using the survey method described, most estimators were of similar accuracy and precision, but the Fourier series estimator was the most accurate.
15

Shaheen, Nazia, Muhammad Nouman Qureshi, Osama Abdulaziz Alamri, and Muhammad Hanif. "Optimized inferences of finite population mean using robust parameters in systematic sampling." PLOS ONE 18, no. 1 (January 23, 2023): e0278619. http://dx.doi.org/10.1371/journal.pone.0278619.

Abstract:
In this article, we propose a generalized estimator for mean estimation by combining the ratio and regression methods of estimation in the presence of auxiliary information using systematic sampling. We incorporate some robust parameters of the auxiliary variable to obtain precise estimates from the proposed estimator. The mathematical expressions for the bias and mean square error of the proposed estimator are derived under large sample approximation. Many other generalized ratio- and product-type estimators are obtained from the proposed estimator using different choices of scalar constants. Some special cases are also discussed in which the proposed generalized estimator reduces to the usual mean, classical ratio, product, and regression type estimators. Mathematical conditions are obtained under which the proposed estimator performs more precisely than the competing estimators mentioned in this article. The efficiency of the proposed estimator is evaluated using four populations. Results show that the proposed estimator is efficient and useful for survey sampling in comparison with the other existing estimators.
16

Mohammadzaheri, Morteza, Mohammadreza Emadi, Mojtaba Ghodsi, Issam M. Bahadur, Musaab Zarog, and Ashraf Saleem. "Development of a Charge Estimator for Piezoelectric Actuators." International Journal of Artificial Intelligence and Machine Learning 10, no. 1 (January 2020): 31–44. http://dx.doi.org/10.4018/ijaiml.2020010103.

Abstract:
The charge of a piezoelectric actuator is proportional to its displacement over a wide operating range. Hence, a charge estimator can estimate displacement for such actuators. However, existing charge estimators take a sizable portion of the excitation voltage, i.e. a voltage drop. Digital charge estimators have presented the smallest voltage drop. This article first investigates digital charge estimators and suggests a design guideline to (i) maximise accuracy and (ii) minimise the voltage drop. Digital charge estimators have a sensing resistor; an estimator with a constant resistance is shown to violate the design guideline, while all existing digital charge estimators use one or a few intuitively chosen resistors. That is, existing estimators witness unnecessarily large inaccuracy and/or voltage drop. This research develops charge estimators with varying resistors, fulfilling the design guideline. Several methods are tested to estimate the sensing resistance based on operating conditions, and radial basis function network models excel in terms of accuracy.
17

Al-Omari, Amer Ibrahim, SidAhmed Benchiha, and Ibrahim M. Almanjahie. "Efficient Estimation of Two-Parameter Xgamma Distribution Parameters Using Ranked Set Sampling Design." Mathematics 10, no. 17 (September 2, 2022): 3170. http://dx.doi.org/10.3390/math10173170.

Abstract:
An efficient method such as ranked set sampling is used for estimating the population parameters when the actual observation measurement is expensive and complicated. In this paper, we consider the problem of estimating the two-parameter xgamma (TPXG) distribution parameters under the ranked set sampling as well as the simple random sampling design. Various estimation methods, including the weighted least-square estimator, maximum likelihood estimators, least-square estimator, Cramer–von Mises, the maximum product of spacings estimators, and Anderson–Darling estimators, are considered. A comparison between the ranked set sampling and simple random sampling estimators, with the same number of measurement units, is conducted using a simulation study in terms of the bias, mean squared errors, and efficiency of estimators. The merit of the ranked set sampling estimators is examined using real data of bank customers. The results indicate that estimations using the ranked set sampling method are more efficient than the simple random sampling competitor considered in this study.
18

Chadyšas, V., and D. Krapavickaitė. "Investigation of Accuracy of a Calibrated Estimator of a Ratio by Modelling." Nonlinear Analysis: Modelling and Control 10, no. 4 (October 25, 2005): 333–42. http://dx.doi.org/10.15388/na.2005.10.4.15113.

Abstract:
An estimator of a finite population parameter – the ratio of the totals of two variables – is investigated by modelling in the case of simple random sampling. The traditional estimator of the ratio is compared with the calibrated estimator of the ratio introduced by Plikusas [1]. The Taylor series expansions of the estimators are used to obtain expressions for their approximate biases and approximate variances [2]. An estimator of the bias is introduced in this paper. Using data from an artificial population, the accuracy of the two estimators of the ratio is compared by modelling. The dependence of the estimated mean square errors of the ratio estimators on the correlation coefficient of the variables used in the numerator and denominator is also shown in the modelling.
19

Polacheck, Tom, Ray Hilborn, and Andre E. Punt. "Fitting Surplus Production Models: Comparing Methods and Measuring Uncertainty." Canadian Journal of Fisheries and Aquatic Sciences 50, no. 12 (December 1, 1993): 2597–607. http://dx.doi.org/10.1139/f93-284.

Abstract:
Three approaches are commonly used to fit surplus production models to observed data: effort-averaging methods; process-error estimators; and observation-error estimators. We compare these approaches using real and simulated data sets, and conclude that they yield substantially different interpretations of productivity. Effort-averaging methods assume the stock is in equilibrium relative to the recent effort; this assumption is rarely satisfied and usually leads to overestimation of potential yield and optimum effort. Effort-averaging methods will almost always produce what appears to be "reasonable" estimates of maximum sustainable yield and optimum effort, and the r2 statistic used to evaluate the goodness of fit can provide an unrealistic illusion of confidence about the parameter estimates obtained. Process-error estimators produce much less reliable estimates than observation-error estimators. The observation-error estimator provides the lowest estimates of maximum sustainable yield and optimum effort and is the least biased and the most precise (shown in Monte-Carlo trials). We suggest that observation-error estimators be used when fitting surplus production models, that effort-averaging methods be abandoned, and that process-error estimators should only be applied if simulation studies and practical experience suggest that they will be superior to observation-error estimators.
20

Krieg, Sabine, Harm Jan Boonstra, and Marc Smeets. "Small-Area Estimation with Zero-Inflated Data – a Simulation Study." Journal of Official Statistics 32, no. 4 (December 1, 2016): 963–86. http://dx.doi.org/10.1515/jos-2016-0051.

Abstract:
Many target variables in official statistics follow a semicontinuous distribution with a mixture of zeros and continuously distributed positive values. Such variables are called zero inflated. When reliable estimates for subpopulations with small sample sizes are required, model-based small-area estimators can be used, which improve the accuracy of the estimates by borrowing information from other subpopulations. In this article, three small-area estimators are investigated. The first estimator is the EBLUP, which can be considered the most common small-area estimator and is based on a linear mixed model that assumes normal distributions. Therefore, the EBLUP is model misspecified in the case of zero-inflated variables. The other two small-area estimators are based on a model that takes zero inflation explicitly into account. Both the Bayesian and the frequentist approach are considered. These small-area estimators are compared with each other and with design-based estimation in a simulation study with zero-inflated target variables. Both a simulation with artificial data and a simulation with real data from the Dutch Household Budget Survey are carried out. It is found that the small-area estimators improve the accuracy compared to the design-based estimator. The amount of improvement strongly depends on the properties of the population and the subpopulations of interest.
21

Li, H. G., H. T. Schreuder, and C. T. Scott. "Combining estimates that are both in error subject to marginal constraints." Canadian Journal of Forest Research 20, no. 10 (October 1, 1990): 1675–79. http://dx.doi.org/10.1139/x90-221.

Abstract:
Outside estimates with measures of reliability can be combined with existing tabular estimates of resource statistics to produce new, more precise estimates. But cell estimates do not sum to the original marginal or overall totals when this is done. A method is given to adjust the unchanged cell values to maintain additivity. Classical and bootstrap variance estimators are given for the n × 2 case of combining a cell proportion with an outside estimate assumed to be binomially distributed under fixed marginal constraints, and for the n × m case of combining a cell proportion with a binomially distributed outside estimate under no marginal constraints except that the table total is fixed. For a 3 × 2 test case, a bootstrap variance estimator yielded reliable estimates of precision for the adjusted cell proportions in most cases. For the n × m case, a classical variance estimator was more stable than the bootstrap variance estimators and was less biased than the other variance estimators studied.
22

Huitema, Bradley E., and Joseph W. McKean. "Two Reduced-Bias Autocorrelation Estimators: rF1 and rF2." Perceptual and Motor Skills 78, no. 1 (February 1994): 323–30. http://dx.doi.org/10.2466/pms.1994.78.1.323.

Abstract:
Among the problems associated with the application of time-series analysis to typical psychological data are difficulties in parameter estimation. For example, estimates of autocorrelation coefficients are known to be biased in the small-sample case. Previous work by the present authors has shown that, in the case of conventional autocorrelation estimators of ρ1 the degree of bias is more severe than is predicted by formulas that are based on large-sample theory. Two new autocorrelation estimators, rF1 and rF2, were proposed; a Monte Carlo experiment was carried out to evaluate the properties of these statistics. The results demonstrate that both estimators provide major reduction of bias. The average absolute bias of rF2 is somewhat smaller than that of rF1 at all sample sizes, but both are far less biased than is the conventional estimator found in most time-series software. The reduction in bias comes at the price of an increase in error variance. A comparison of the properties of these estimators with those of other estimators suggested in 1991 shows advantages and disadvantages for each. It is recommended that the choice among autocorrelation estimators be based upon the nature of the application. The new estimator rF2 is especially appropriate when pooling estimates from several samples.
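The conventional estimator whose small-sample bias motivates rF1 and rF2 is the usual lag-1 sample autocorrelation; the corrected estimators themselves are defined in the paper and are not reproduced here. A short check of the bias, using illustrative simulated white noise:

```python
import numpy as np

def r1_conventional(y):
    """Conventional lag-1 autocorrelation estimator (the biased small-sample version)."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d**2)

# Even for white noise (rho_1 = 0) the estimator is negatively biased in small samples,
# roughly by -1/n on average.
rng = np.random.default_rng(4)
n = 20
estimates = [r1_conventional(rng.normal(size=n)) for _ in range(20_000)]
print(np.mean(estimates))   # noticeably below 0 for n = 20
```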
23

Newey, Whitney K. "Kernel Estimation of Partial Means and a General Variance Estimator." Econometric Theory 10, no. 2 (June 1994): 1–21. http://dx.doi.org/10.1017/s0266466600008409.

Abstract:
Econometric applications of kernel estimators are proliferating, suggesting the need for convenient variance estimates and conditions for asymptotic normality. This paper develops a general “delta-method” variance estimator for functionals of kernel estimators. Also, regularity conditions for asymptotic normality are given, along with a guide to verify them for particular estimators. The general results are applied to partial means, which are averages of kernel estimators over some of their arguments with other arguments held fixed. Partial means have econometric applications, such as consumer surplus estimation, and are useful for estimation of additive nonparametric models.
24

Ijaz, Muhammad, Syed Muhammad Asim, Atta ullah, and Ibrahim Mahariq. "Flexible Robust Regression-Ratio Type Estimators and Its Applications." Mathematical Problems in Engineering 2022 (September 28, 2022): 1–6. http://dx.doi.org/10.1155/2022/8977392.

Abstract:
In real-world situations, the data set under examination may contain uncommon noisy measurements that unreasonably affect the data's outcome and produce incorrect model estimates. In such a scenario, practitioners employ robust-type estimators to reduce the weight of the noisy measurements in the data set. Using auxiliary information that produces reliable estimates, we examine a few flexible robust-type estimators in this study. To estimate the population mean, this study presents new flexible robust regression-ratio-type estimators that use information from the midrange and interdecile range of the auxiliary variables. The bias and mean square error are derived up to the first order of approximation. Theoretical conditions are also obtained in order to compare the flexibility of the proposed estimators with that of the existing estimators. For the empirical computation we consider data sets containing outliers, and it is found that the suggested estimators produce results with higher precision than the existing estimators.
25

Aladeitan, BENEDICTA, Adewale F. Lukman, Esther Davids, Ebele H. Oranye, and Golam B. M. Kibria. "Unbiased K-L estimator for the linear regression model." F1000Research 10 (August 19, 2021): 832. http://dx.doi.org/10.12688/f1000research.54990.1.

Abstract:
Background: In the linear regression model, the performance of the ordinary least squares (OLS) estimator drops when multicollinearity is present. According to the Gauss-Markov theorem, the estimator remains unbiased when there is multicollinearity, but the variance of its regression estimates becomes inflated. Estimators such as the ridge regression estimator and the K-L estimator were adopted as substitutes for the OLS estimator to overcome the problem of multicollinearity in the linear regression model. However, these estimators are biased, though they possess a smaller mean squared error than the OLS estimator. Methods: In this study, we developed a new unbiased estimator using the K-L estimator and compared its performance with some existing estimators theoretically, by simulation, and with real-life data. Results: Theoretically, the new estimator is not only unbiased but also possesses a minimum variance when compared with the other estimators. Results from the simulation and real-life study showed that the new estimator produced a smaller mean square error (MSE) and had the smallest mean square prediction error (MSPE). This further strengthened the findings of the theoretical comparison using both the MSE and the MSPE as criteria. Conclusions: A simulation study and a real-life application, modelling the high heating values of proximate analysis, were conducted to support the theoretical findings. This new method of estimation is recommended for parameter estimation with and without multicollinearity in a linear regression model.
26

Gould, W. R., L. A. Stefanski, and K. H. Pollock. "Use of simulation–extrapolation estimation in catch–effort analyses." Canadian Journal of Fisheries and Aquatic Sciences 56, no. 7 (July 1, 1999): 1234–40. http://dx.doi.org/10.1139/f99-052.

Abstract:
All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases, they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were substantially less than naive estimates, which ignored measurement errors in some cases. In a simulation of the procedure, we compared estimators from SIMEX with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
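SIMEX itself is generic: add extra measurement error at several multiples λ of the assumed error variance, track how the naive estimate degrades, and extrapolate back to λ = −1. The sketch below applies the idea to a simple linear regression with an error-prone covariate, not to the catch–effort model of the paper; all names and simulated quantities are illustrative.

```python
import numpy as np

def simex_slope(w, y, sigma_u2, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """SIMEX estimate of a regression slope when the covariate w = x + u is error-prone.
    Quadratic extrapolation of the naive slope back to lambda = -1."""
    rng = np.random.default_rng(seed)
    lam_grid = [0.0] + list(lambdas)
    mean_slopes = []
    for lam in lam_grid:
        slopes = []
        for _ in range(n_sim if lam > 0 else 1):
            w_star = w + np.sqrt(lam * sigma_u2) * rng.normal(size=len(w))
            slopes.append(np.polyfit(w_star, y, 1)[0])       # naive OLS slope at this lambda
        mean_slopes.append(np.mean(slopes))
    coeffs = np.polyfit(lam_grid, mean_slopes, 2)             # quadratic in lambda
    return np.polyval(coeffs, -1.0)                           # extrapolate to lambda = -1

rng = np.random.default_rng(5)
n, beta, sigma_u2 = 500, 2.0, 1.0
x = rng.normal(size=n)
w = x + np.sqrt(sigma_u2) * rng.normal(size=n)                # covariate observed with error
y = beta * x + rng.normal(scale=0.5, size=n)
print(np.polyfit(w, y, 1)[0])                                 # naive slope, attenuated toward 1.0
print(simex_slope(w, y, sigma_u2))                            # SIMEX estimate, closer to 2.0
```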
27

Fischer, Christoph, and Joachim Saborowski. "Variance estimation for mean growth from successive double sampling for stratification." Canadian Journal of Forest Research 50, no. 12 (December 2020): 1405–11. http://dx.doi.org/10.1139/cjfr-2020-0058.

Abstract:
Double sampling for stratification (2SS) is a sampling design that is widely used for forest inventories. We present the mathematical derivation of two appropriate variance estimators for mean growth from repeated 2SS with updated stratification on each measurement occasion. Both estimators account for substratification based on the transition of sampling units among the strata due to the updated allocation. For the first estimator, sizes of the substrata were estimated from the second-phase sample (sample plots), whereas the respective sizes in the second variance estimator relied on the larger first-phase sample. The estimators were empirically compared with a modified version of Cochran’s well-known 2SS variance estimator that ignores substratification. This was done by performing bootstrap resampling on data from two German forest districts. The major findings were as follows: (i) accounting for substratification, as implemented in both new estimators, has substantial impact in terms of significantly smaller variance estimates and bias compared with the estimator without substratification, and (ii) the second estimator with substrata sizes being estimated from the first-phase sample shows a smaller bias than the first estimator.
28

Odell, P. L., D. Dorsett, D. Young, and J. Igwe. "Estimator models for combining vector estimators." Mathematical and Computer Modelling 12, no. 12 (1989): 1627–42. http://dx.doi.org/10.1016/0895-7177(89)90338-5.

29

Magnussen, Steen, and Thomas Nord-Larsen. "A Jackknife Estimator of Variance for a Random Tessellated Stratified Sampling Design." Forest Science 65, no. 5 (March 12, 2019): 543–47. http://dx.doi.org/10.1093/forsci/fxy070.

Abstract:
Semisystematic sampling designs—in which a population area frame is tessellated into cells, and a randomly located sample is taken from each cell—afford random tessellated stratified (RTS) Horvitz–Thompson-type estimators. Forest inventory applications with RTS estimators are rare, possibly because of computational complexities with the estimation of variance. To reduce this challenge, we propose a jackknife estimator of variance for RTS designs. We demonstrate an application with a model-assisted ratio of totals estimator and data from the Danish National Forest Inventory. RTS estimators of standard error were, as a rule, smaller than comparable estimates obtained under the assumption of simple random sampling. The proposed jackknife estimator performed well.
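The delete-one jackknife that underlies the proposal can be stated in a few lines; the RTS-specific grouping of tessellation cells used in the paper is not reproduced here. A minimal sketch (data and function names are illustrative):

```python
import numpy as np

def jackknife_variance(values, estimator=np.mean):
    """Delete-one jackknife estimate of the variance of `estimator` applied to `values`."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    theta_i = np.array([estimator(np.delete(values, i)) for i in range(n)])
    return (n - 1) / n * np.sum((theta_i - theta_i.mean()) ** 2)

rng = np.random.default_rng(6)
sample = rng.exponential(scale=2.0, size=50)
print(jackknife_variance(sample))          # jackknife variance of the sample mean
print(sample.var(ddof=1) / len(sample))    # classical variance of the mean, for comparison
```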
30

Bouquet, Alban, Mikko Sillanpää, and Jarmo Juga. "Estimating inbreeding using dense marker panels and pedigree information." Suomen Maataloustieteellisen Seuran Tiedote, no. 28 (January 31, 2012): 1–6. http://dx.doi.org/10.33354/smst.75434.

Abstract:
The aim of this simulation study was to compare the accuracy and bias of different inbreeding (F) estimators exploiting dense panels of diallelic markers and pedigree information. All genotype simulations were started by generating an ancestral population at mutation-drift equilibrium, considering an effective size of 1000 and a mutation rate (µ) of 5 × 10⁻⁴. Two types of subpopulation were derived from the ancestral population for 10 discrete generations. They differed by the level of selection applied both on males and females: no selection, or a structure close to a breeding program with selection of the best 40 males and 500 females on EBV with accuracies of 0.85 and 0.71, respectively, on a trait with heritability of 0.3. Marker panels were made up of 36 000 biallelic markers (18 per cM) and were available for animals in the last 4 generations. Pedigrees were recorded for the last 8 generations. For each scenario, 30 replicates were carried out. The estimators analysed were the correlation (VR1) and regression (VR3) estimators described by VanRaden in 2008 to build the genomic relationship matrix. Other estimators included the weighted corrected similarity (WCS) estimator published by Ritland in 1996 and a modified WCS estimator accounting for pedigree information (WPCS). Pedigree-based inbreeding (PED) was also estimated using exhaustive pedigree information. Inbreeding estimates were correlated with and regressed on the true simulated genomic F values to assess the precision and bias of the estimators, respectively. The main results show that the use of dense marker information improves the estimation of F, whatever the scenario. The accuracy of F estimates and the bias increased in the presence of selection, except for PED. Across scenarios, VR3, WCS and WPCS were the most correlated with true F values. In the situation where the pedigree was exhaustive, VR3 performed as well as WCS and WPCS but had a larger variability over replicates. Although less biased on average, VR1 was less accurate than the other estimators, especially when allele frequencies were not properly defined. Accounting for pedigree information in WCS did not increase its estimation accuracy and did not reduce bias in the tested scenarios. Finally, the error in estimating inbreeding trends over time in selected populations was greater for some marker-based estimators (VR3, VR1) than for the PED estimator. WCS and WPCS rendered the most accurate estimations of inbreeding trends. Thus, the results indicate that WCS, which can also be used with multiallelic markers, is a promising estimator both to build the genomic relationship matrix for genomic evaluations and to better assess genetic diversity in selected populations.
31

Kline, Patrick, and Christopher R. Walters. "On Heckits, LATE, and Numerical Equivalence." Econometrica 87, no. 2 (2019): 677–96. http://dx.doi.org/10.3982/ecta15444.

Abstract:
Structural econometric methods are often criticized for being sensitive to functional form assumptions. We study parametric estimators of the local average treatment effect (LATE) derived from a widely used class of latent threshold crossing models and show they yield LATE estimates algebraically equivalent to the instrumental variables (IV) estimator. Our leading example is Heckman's (1979) two‐step (“Heckit”) control function estimator which, with two‐sided non‐compliance, can be used to compute estimates of a variety of causal parameters. Equivalence with IV is established for a semiparametric family of control function estimators and shown to hold at interior solutions for a class of maximum likelihood estimators. Our results suggest differences between structural and IV estimates often stem from disagreements about the target parameter rather than from functional form assumptions per se. In cases where equivalence fails, reporting structural estimates of LATE alongside IV provides a simple means of assessing the credibility of structural extrapolation exercises.
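With a binary instrument and a binary treatment, the IV estimand that the structural estimators are shown to reproduce is the Wald ratio of the reduced-form effect to the first-stage effect. A hedged sketch on simulated data (the compliance structure below is simplified and purely illustrative):

```python
import numpy as np

def wald_late(y, d, z):
    """IV (Wald) estimate of the LATE with a binary instrument z and binary treatment d."""
    y, d, z = map(lambda a: np.asarray(a, dtype=float), (y, d, z))
    reduced_form = y[z == 1].mean() - y[z == 0].mean()   # effect of the instrument on the outcome
    first_stage = d[z == 1].mean() - d[z == 0].mean()    # effect of the instrument on take-up
    return reduced_form / first_stage

rng = np.random.default_rng(7)
n = 20_000
z = rng.integers(0, 2, n)                                # randomized instrument
complier = rng.random(n) < 0.4                           # 40% compliers, rest never-takers
d = (z == 1) & complier                                  # treatment take-up
effect = 3.0                                             # treatment effect for compliers
y = 1.0 + effect * d + rng.normal(size=n)
print(wald_late(y, d, z))                                # close to 3.0
```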
32

Liu, Jie, Wenqian Dong, Qingqing Zhou, and Dong Li. "Fauce." Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 1950–63. http://dx.doi.org/10.14778/3476249.3476254.

Abstract:
Cardinality estimation is a fundamental and critical problem in databases. Recently, many estimators based on deep learning have been proposed to solve this problem and they have achieved promising results. However, these estimators struggle to provide accurate results for complex queries, due to not capturing real inter-column and inter-table correlations. Furthermore, none of these estimators contain the uncertainty information about their estimations. In this paper, we present a join cardinality estimator called Fauce. Fauce learns the correlations across all columns and all tables in the database. It also contains the uncertainty information of each estimation. Among all studied learned estimators, our results are promising: (1) Fauce is a light-weight estimator, it has 10× faster inference speed than the state-of-the-art estimator; (2) Fauce is robust to complex queries, it provides 1.3×–6.7× smaller estimation errors for complex queries compared with the state-of-the-art estimator; (3) To the best of our knowledge, Fauce is the first estimator that incorporates uncertainty information for cardinality estimation into a deep learning model.
33

SÖKÜT AÇAR, Tuğba. "Kibria-Lukman Estimator for General Linear Regression Model with AR(2) Errors: A Comparative Study with Monte Carlo Simulation." Journal of New Theory, no. 41 (December 31, 2022): 1–17. http://dx.doi.org/10.53570/jnt.1139885.

Abstract:
The sensitivity of least-squares estimation in a regression model is affected by multicollinearity and autocorrelation problems. To deal with multicollinearity, Ridge, Liu, and other ridge-type biased estimators have been presented in the statistical literature. The recently proposed Kibria-Lukman estimator is one of the ridge-type estimators. The literature has compared the Kibria-Lukman estimator with the others using the mean square error criterion for the linear regression model, and its performance has been studied under first-order autoregressive errors. Evaluating the performance of the Kibria-Lukman estimator according to the mean square error criterion when there is a second-order autocorrelation problem is what makes this paper original. The scalar mean square error of the Kibria-Lukman estimator under a second-order autoregressive error structure was evaluated using a Monte Carlo simulation and two real examples, and compared with the Generalized Least-Squares, Ridge, and Liu estimators. The findings revealed that when the variance of the model was small, the mean square error of the Kibria-Lukman estimator was very close to those of the popular biased estimators. As the model variance grew, the Kibria-Lukman estimator no longer gave values as close to those of the popular biased estimators as in the model with small variance. However, according to the mean square error criterion, the Kibria-Lukman estimator outperformed the Generalized Least-Squares estimator in all possible cases.
34

Ferraz, Cristiano, Jacques Delincé, André Leite, and Raydonal Ospina. "Bootstrap Assessment of Crop Area Estimates Using Satellite Pixels Counting." Stats 5, no. 2 (April 25, 2022): 422–39. http://dx.doi.org/10.3390/stats5020025.

Abstract:
Crop area estimates based on counting pixels over classified satellite images are a promising application of remote sensing to agriculture. However, such area estimates are biased, and their variance is a function of the error rates of the classification rule. To redress the bias, estimators (direct and inverse) relying on the so-called confusion matrix have been proposed, but analytic estimators for variances can be tricky to derive. This article proposes a bootstrap method for assessing statistical properties of such estimators based on information from a sample confusion matrix. The proposed method can be applied to any other type of estimator that is built upon confusion matrix information. The resampling procedure is illustrated in a small study to assess the biases and variances of estimates using purely pixel counting and estimates provided by both direct and inverse estimators. The method has the advantage of being simple to implement even when the sample confusion matrix is generated under unequal probability sample design. The results show the limitations of estimates based solely on pixel counting as well as respective advantages and drawbacks of the direct and inverse estimators with respect to their feasibility, unbiasedness, and variance.
35

Williams, Erik H., and Michael H. Prager. "Comparison of equilibrium and nonequilibrium estimators for the generalized production model." Canadian Journal of Fisheries and Aquatic Sciences 59, no. 9 (September 1, 2002): 1533–52. http://dx.doi.org/10.1139/f02-123.

Abstract:
Parameter estimation for the logistic (Schaefer) production model commonly uses an observation-error estimator, here termed the NM estimator, combining nonlinear-function minimization and forward projection of estimated population state. Although the NM estimator can be used to fit the generalized (Pella–Tomlinson) production model, an equilibrium approximation method (EA) is often used, despite calls in the literature to abandon equilibrium estimators. We examined relative merits of NM and EA estimators for the generalized model by fitting 48 000 simulated data sets describing five basic stock trajectories and widely varying population characteristics. Simulated populations followed the generalized production model exactly but were observed with random error. Both estimators were used on each data set to estimate several quantities of management interest. The estimates from NM were usually more accurate and precise than EA estimates, and overall, NM outperformed EA. This is the most comprehensive study of the question to date—the first to examine the generalized production model—and it demonstrates clearly why equilibrium estimators should be abandoned. Although valuable when introduced, they are no longer computationally necessary, they are no less demanding of data, and their performance is poor.
36

Jaśko, Przemysław, and Daniel Kosiorowski. "Conditional Covariance Prediction in Portfolio Analysis Using MCD and PCS Robust Multivariate Scatter Estimators." Przegląd Statystyczny 63, no. 2 (June 30, 2016): 149–72. http://dx.doi.org/10.5604/01.3001.0014.1157.

Abstract:
In this paper we compare two matrix estimators of multivariate scatter – the minimum covariance determinant (MCD) estimator and a new proposal, the projection congruent subset (PCS) estimator, which minimizes an incongruence criterion – in the context of their applications in economics. We analyze the estimators using simulation studies and empirical examples related to issues of portfolio building. In a decision process we often make use of multivariate scatter estimators, and incorrect values of these estimates may result in financial losses. Both robust estimators compared here are affine equivariant and have high breakdown points. In the empirical analysis we make use of them in the procedure of setting weights for minimum variance and equal risk contribution (ERC) portfolios.
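An MCD fit is readily available in scikit-learn, which is enough to reproduce the kind of robust scatter estimate compared in the paper (PCS has no implementation there). The contaminated returns and the minimum-variance weighting below are illustrative assumptions, not the authors' data or procedure.

```python
import numpy as np
from sklearn.covariance import MinCovDet, EmpiricalCovariance

rng = np.random.default_rng(8)
n, p = 500, 4
returns = rng.multivariate_normal(np.zeros(p), 0.01 * np.eye(p), size=n)
returns[:25] += 0.5                      # 5% grossly contaminated observations

mcd = MinCovDet(random_state=0).fit(returns)
classical = EmpiricalCovariance().fit(returns)

# The classical covariance is inflated by the outliers; the MCD estimate stays close to 0.01 * I.
print(np.round(classical.covariance_, 4))
print(np.round(mcd.covariance_, 4))

# Minimum-variance portfolio weights w proportional to Sigma^{-1} * 1, as in the portfolio application.
ones = np.ones(p)
w = np.linalg.solve(mcd.covariance_, ones)
print(w / w.sum())
```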
37

Lendle, Samuel David, Bruce Fireman, and Mark J. van der Laan. "Balancing Score Adjusted Targeted Minimum Loss-based Estimation." Journal of Causal Inference 3, no. 2 (September 1, 2015): 139–55. http://dx.doi.org/10.1515/jci-2012-0012.

Abstract:
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies.
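Among the comparison estimators, the inverse probability of treatment weighted (IPTW) estimator of a treatment-specific mean has the most compact form; TMLE itself involves an additional targeting step and is not sketched here. A minimal sketch with a known propensity score (all simulated quantities are illustrative):

```python
import numpy as np

def iptw_treated_mean(y, a, propensity):
    """Inverse-probability-of-treatment-weighted estimate of E[Y(1)] (Horvitz-Thompson form)."""
    y, a, propensity = map(np.asarray, (y, a, propensity))
    return np.mean(a * y / propensity)

rng = np.random.default_rng(9)
n = 50_000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-x))            # treatment more likely for larger x (confounding)
a = rng.random(n) < p_true
y = 2.0 * a + x + rng.normal(size=n)     # true E[Y(1)] = 2.0 because E[X] = 0

print(y[a].mean())                       # naive treated mean, biased upward by confounding
print(iptw_treated_mean(y, a, p_true))   # IPTW with the true propensity score, close to 2.0
```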
38

Topolewski, Marcin. "Analysis of Possibility of Application of Higher-Order Hierarchical Credibility Estimators in Automobile Insurance." Przegląd Statystyczny 62, no. 2 (June 30, 2015): 199–217. http://dx.doi.org/10.5604/01.3001.0014.1742.

Abstract:
The work concerns the applicability of credibility estimators of order higher than one, called hierarchical estimators, to posterior estimation of the claim ratio in motor insurance systems based on the history of the insured. The first part presents the idea of the credibility estimation method. In the second part the linear credibility estimator is discussed. The third part introduces appropriate risk models in the form of claim distributions, which are used to derive the corresponding hierarchical estimators of the first order. The fourth section expands the discussion to hierarchical estimators of order two. In the empirical fifth part the previously presented solutions are used to estimate the loss ratio for a sample insurance portfolio, the results are discussed, and conclusions concerning the possibility of applying second-order hierarchical estimators in motor insurance are formulated.
39

RATHER, KHALID UL ISLAM, M. IQBAL JEELANI, M. YOUNIS SHAH, AFSHAN TABASSUM, and S. E. H. RIZVI. "DIFFERENCE-CUM-EXPONENTIAL EFFICIENT ESTIMATOR OF POPULATION VARIANCE." Journal of Science and Arts 22, no. 2 (June 30, 2022): 367–74. http://dx.doi.org/10.46939/j.sci.arts-22.2-a10.

Abstract:
In the current investigation, we suggest a difference-cum-exponential type efficient estimator of the population variance of the study variable using information on an auxiliary variable. The bias and mean square error (MSE) expressions of the proposed estimator are derived up to the first order of approximation, the suggested optimum estimator is found, and its optimal properties are investigated. The suggested estimator is shown to be more efficient than the sample variance and the classical ratio estimators of Isaki, Singh et al., and Kadilar and Cingi [1-3]. A numerical study is also carried out using real data sets.
40

Poměnková, Jitka. "Nonparametric estimate remarks." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 54, no. 3 (2006): 93–100. http://dx.doi.org/10.11118/actaun200654030093.

Abstract:
Kernel smoothers belong to the most popular nonparametric functional estimates. They provide a simple way of finding structure in data. The idea of kernel smoothing can be applied to a simple fixed design regression model. This article focuses on kernel smoothing for the fixed design regression model with three types of estimators: the Gasser-Müller estimator, the Nadaraya-Watson estimator and the local linear estimator. At the end of this article, figures illustrating the described estimators on simulated and real data sets are shown.
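Of the three estimators named, the Nadaraya–Watson estimator is the simplest to write down; the Gasser–Müller and local linear estimators differ in how the weights are constructed. A sketch with a Gaussian kernel on illustrative simulated data:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression estimate on x_grid, Gaussian kernel, bandwidth h."""
    # weights[i, j] = K((x_grid[i] - x[j]) / h)
    weights = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (weights @ y) / weights.sum(axis=1)

rng = np.random.default_rng(10)
x = np.sort(rng.uniform(0, 1, 200))           # fixed-design-style regressors
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
grid = np.linspace(0, 1, 5)
print(np.round(nadaraya_watson(grid, x, y, h=0.05), 2))
print(np.round(np.sin(2 * np.pi * grid), 2))  # true regression function at the same points
```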
41

Mandallaz, Daniel, and Alexander Massey. "Comparison of estimators in one-phase two-stage Poisson sampling in forest inventories." Canadian Journal of Forest Research 42, no. 10 (October 2012): 1865–71. http://dx.doi.org/10.1139/x2012-110.

Abstract:
In the context of Poisson sampling, numerous adjustments to classical estimators have been proposed that are intended to compensate for inflated variance due to random sample size. However, such adjustments have never been applied to extensive forest inventories. This work investigates the performances of four estimators for the timber volume in one-phase two-stage forest inventories, where trees in the first stage are selected, at the plot level, by concentric circles or angle-count methods and a subset thereof are selected by Poisson sampling for further measurements to get a better estimation. The original two-stage estimator is the sum of two components: the first is the mean of Horvitz–Thompson estimators using simple volume approximations, based on diameter and species alone, of all first-stage trees in each inventory plot, and the second is the mean of Horvitz–Thompson estimators based on the differences between the simple volume approximations and refined volume determinations based on further diameter and height measurements on the second-stage trees within each inventory plot. This two-stage estimator is particularly useful because it provides unbiased estimates even if the simple prediction model is not correct, which is particularly important for small area estimation. The other three estimators rely on adjustments of the second component of the original estimator that are adapted from estimators proposed in the literature by L.R. Grosenbaugh and C.-E. Särndal. It turns out that these adjustments introduce a negligible bias and that the original simple estimator performs just as well or even better than the new estimators with respect to the variance.
43

Slaoui, Y., and A. Jmaei. "Recursive and non-recursive regression estimators using Bernstein polynomials." Theory of Stochastic Processes 26(42), no. 1 (December 27, 2022): 60–95. http://dx.doi.org/10.37863/tsp-2899660400-77.

Full text
Abstract:
When a regression function has bounded support, kernel estimates often spill over the boundaries and are therefore biased at and near these limits. In this paper, we focus on mitigating this boundary problem. We apply Bernstein polynomials and the Robbins-Monro algorithm to construct a non-recursive and a recursive regression estimator. We study the asymptotic properties of these estimators and compare them with those of the Nadaraya-Watson estimator and the generalized Révész estimator introduced by [21]. In addition, through simulation studies, we show that our non-recursive estimator has the lowest integrated squared error (ISE) in most of the considered cases. Finally, using a set of real data, we demonstrate that our non-recursive and recursive regression estimators can lead to very satisfactory estimates, especially near the boundaries.
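For readers unfamiliar with Bernstein-polynomial smoothing, a minimal sketch of one common Bernstein-weighted regression construction on [0, 1]; this is not necessarily the estimator analyzed in the paper, and the degree m is a hypothetical tuning parameter.

# Hedged sketch: Bernstein-weighted regression smoother on [0, 1].
# Observation i receives the Bernstein basis weight of the bin containing x_i.
import numpy as np
from scipy.stats import binom

def bernstein_regression(x_grid, x, y, m):
    k = np.clip(np.floor(m * x).astype(int), 0, m)   # bin index of each design point x_i
    # b_{k_i, m}(x0) = C(m, k_i) * x0**k_i * (1 - x0)**(m - k_i)
    w = binom.pmf(k[None, :], m, x_grid[:, None])
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# Example: x, y rescaled to [0, 1]; larger m gives less smoothing (assumption).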
APA, Harvard, Vancouver, ISO, and other styles
44

Southwell, C., and K. Weaver. "Evaluation of analytical procedures for density estimation from line-transect data: data grouping, data truncation and the unit of analysis." Wildlife Research 20, no. 4 (1993): 433. http://dx.doi.org/10.1071/wr9930433.

Full text
Abstract:
We examined three aspects of line-transect analytical procedures: data grouping, data truncation and the use of individuals or clusters as the analytical unit. Bias and precision of density estimation in relation to various levels of these factors were assessed for 4 types of line-transect estimator (simple parametric, generalised parametric, non-parametric and quasi-strip) using line-transect survey data from macropod populations of known density. The effect of data grouping on bias and precision varied between estimators. Bias was stable across all grouping levels tested for the simple parametric estimator, and stable across all but the coarsest grouping level for the generalised parametric and non-parametric estimators, but varied substantially across the range of levels tested for the quasi-strip estimator. Precision improved as the number of grouping levels increased for all estimators tested, but the extent of improvement varied between estimators, and for the estimator most affected, improvement was marginal beyond intermediate grouping levels. Density estimates were generally more accurate and precise when analysed in ungrouped form than in grouped form. No effects of truncation on bias or precision were detected. Varying the analytical unit did not affect bias, but precision was significantly lower for cluster analysis than individual analysis for all estimators.
APA, Harvard, Vancouver, ISO, and other styles
45

Milligan, Brook G. "Maximum-Likelihood Estimation of Relatedness." Genetics 163, no. 3 (March 1, 2003): 1153–67. http://dx.doi.org/10.1093/genetics/163.3.1153.

Full text
Abstract:
Relatedness between individuals is central to many studies in genetics and population biology. A variety of estimators have been developed to enable molecular marker data to quantify relatedness. Despite this, no effort has been given to characterize the traditional maximum-likelihood estimator in relation to the remainder. This article quantifies its statistical performance under a range of biologically relevant sampling conditions. Under the same range of conditions, the statistical performance of five other commonly used estimators of relatedness is quantified. Comparison among these estimators indicates that the traditional maximum-likelihood estimator exhibits a lower standard error under essentially all conditions. Only for very large amounts of genetic information do most of the other estimators approach the likelihood estimator. However, the likelihood estimator is more biased than any of the others, especially when the amount of genetic information is low or the actual relationship being estimated is near the boundary of the parameter space. Even under these conditions, the amount of bias can be greatly reduced, potentially to biologically irrelevant levels, with suitable genetic sampling. Additionally, the likelihood estimator generally exhibits the lowest root mean-square error, an indication that the bias in fact is quite small. Alternative estimators restricted to yield only biologically interpretable estimates exhibit lower standard errors and greater bias than do unrestricted ones, but generally do not improve over the maximum-likelihood estimator and in some cases exhibit even greater bias. Although some nonlikelihood estimators exhibit better performance with respect to specific metrics under some conditions, none approach the high level of performance exhibited by the likelihood estimator across all conditions and all metrics of performance.
APA, Harvard, Vancouver, ISO, and other styles
46

Ahmad, Sohaib, Sardar Hussain, Uzma Yasmeen, Muhammad Aamir, Javid Shabbir, M. El-Morshedy, Afrah Al-Bossly, and Zubair Ahmad. "A simulation study: Using dual ancillary variable to estimate population mean under stratified random sampling." PLOS ONE 17, no. 11 (November 28, 2022): e0275875. http://dx.doi.org/10.1371/journal.pone.0275875.

Full text
Abstract:
In this paper, we propose an improved ratio-in-regression type estimator for the finite population mean under stratified random sampling, using an ancillary variable as well as the rank of the ancillary variable. Expressions for the bias and mean square error of the estimators are derived up to the first order of approximation. The present work focuses on the proper use of the ancillary variable and discusses how it can improve the precision of the estimates. Two real data sets as well as a simulation study are used to assess the performance of the estimators. We demonstrate theoretically and numerically that the proposed estimator performs well compared to all existing estimators.
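A minimal sketch of the classical building block behind such proposals: the separate ratio estimator of the population mean under stratified random sampling. The paper's ratio-in-regression estimator additionally exploits the rank of the ancillary variable, which is not reproduced here; all inputs in the example are hypothetical.

# Hedged sketch: separate ratio estimator of the population mean under
# stratified random sampling. Not the proposed ratio-in-regression estimator.
import numpy as np

def separate_ratio_mean(W, ybar, xbar, Xbar):
    # W[h]: stratum weight N_h / N; ybar, xbar: stratum sample means;
    # Xbar: known stratum population means of the ancillary variable.
    W, ybar, xbar, Xbar = map(np.asarray, (W, ybar, xbar, Xbar))
    return np.sum(W * ybar * (Xbar / xbar))

# Example: separate_ratio_mean([0.4, 0.6], [10.2, 8.1], [5.0, 4.4], [5.3, 4.1])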
APA, Harvard, Vancouver, ISO, and other styles
47

Villanueva, Beatriz, and Javier Moro. "Variance and efficiency of the combined estimator in incomplete block designs of use in forest genetics: a numerical study." Canadian Journal of Forest Research 31, no. 1 (January 1, 2001): 71–77. http://dx.doi.org/10.1139/x00-138.

Full text
Abstract:
The efficiency of combined interblock-intrablock and intrablock analysis for the estimation of treatment contrasts in alpha designs is compared using Monte-Carlo simulation. The combined estimator considers treatments and replications as fixed effects and blocks as random effects, whereas the intrablock estimator considers treatments, replications, and blocks as fixed effects. The variances of the estimators are used as the criterion for comparison. The combined estimator yields more accurate estimates than the intrablock estimator when the ratio of the block to the error variance is small, especially for designs with the fewest degrees of freedom. The accuracy of both estimators is similar when the ratio of variances is large. The variance of the combined estimator is very close to that of the best linear unbiased estimator except for designs with a small number of replicates and families or provenances. Approximations commonly used for the variance of the combined estimator when the variances of the random effects are unknown are studied. The downward or negative bias in the estimates of the variance given by the standard approximation used in statistical packages is largest under the conditions in which the combined estimator is more efficient than the intrablock estimator. Estimates of the relative efficiency of combined estimators have an upward bias that can exceed 10% of the true value in small- and middle-sized designs with two or three replicates. In designs with four or more replicates, often used in forest genetics, the bias is negligible.
APA, Harvard, Vancouver, ISO, and other styles
48

Fan, Guo-Liang, and Tian-Heng Chen. "Asymptotic normality of a simple linear EV regression model with martingale difference errors." Filomat 28, no. 9 (2014): 1817–25. http://dx.doi.org/10.2298/fil1409817f.

Full text
Abstract:
This paper considers the estimation of a linear EV (errors-in-variables) regression model under martingale difference errors. The usual least squares estimation leads to biased estimators of the unknown parameters when measurement errors are ignored. By correcting for the attenuation, we propose a modified least squares estimator for one parametric component and construct estimators of the other parameter component and the error variance. The asymptotic normalities of these estimators are also obtained. The simulation study indicates that the modified least squares method performs better than the usual least squares method.
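A minimal illustration of the attenuation correction in a simple EV model y = a + b*x + e, where x is observed only as w = x + u with known measurement-error variance sigma_u2; the names and the assumption of a known sigma_u2 are illustrative, not the authors' notation.

# Hedged sketch: attenuation-corrected least squares slope for the simple EV model.
import numpy as np

def corrected_ls_slope(w, y, sigma_u2):
    sww = np.var(w, ddof=1)                  # sample variance of w, inflated by sigma_u2
    swy = np.cov(w, y, ddof=1)[0, 1]         # sample covariance of w and y
    b_naive = swy / sww                      # attenuated ordinary least squares slope
    b_corrected = swy / (sww - sigma_u2)     # correction removes the attenuation bias
    return b_naive, b_corrected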
APA, Harvard, Vancouver, ISO, and other styles
49

Mauro, Francisco, Vicente J. Monleon, Andrew N. Gray, Olaf Kuegler, Hailemariam Temesgen, Andrew T. Hudak, Patrick A. Fekety, and Zhiqiang Yang. "Comparison of Model-Assisted Endogenous Poststratification Methods for Estimation of Above-Ground Biomass Change in Oregon, USA." Remote Sensing 14, no. 23 (November 28, 2022): 6024. http://dx.doi.org/10.3390/rs14236024.

Full text
Abstract:
Quantifying above-ground biomass changes, ΔAGB, is key for understanding carbon dynamics. National Forest Inventories, NFIs, aim at providing precise estimates of ΔAGB by relying on model-assisted estimators that incorporate auxiliary information to reduce uncertainty. Poststratification estimators, PS, are commonly used for this task. Recently proposed endogenous poststratification, EPS, methods have the potential to improve the precision of PS estimates of ΔAGB. Using the state of Oregon, USA, as a testing area, we developed a formal comparison between three EPS methods, the traditional PS estimators used in the region, and the Horvitz-Thompson, HT, estimator. Results showed that the gains in performance with respect to the HT estimator were 9.71% to 19.22% larger for EPS than for PS. Furthermore, EPS methods easily accommodated a large number of auxiliary variables, and the inclusion of independent predictions of ΔAGB as an additional auxiliary variable resulted in further gains in performance.
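For context, a minimal sketch of a basic poststratified estimator of a population total; the endogenous variants compared in the paper build the strata from model predictions, which is not shown here, and the stratum sizes and plot data are hypothetical.

# Hedged sketch: basic poststratified estimator of a population total,
# with known stratum sizes N_g (e.g., pixel counts per map class).
import numpy as np

def poststratified_total(y, stratum, N_g):
    total = 0.0
    for g, N in N_g.items():
        y_g = y[stratum == g]
        if y_g.size:                 # stratum sample mean times known stratum size
            total += N * y_g.mean()
    return total

# Example: poststratified_total(np.array([1.2, 3.4, 2.0]),
#                               np.array(["A", "B", "A"]), {"A": 500, "B": 300})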
APA, Harvard, Vancouver, ISO, and other styles
50

Saikkonen, Pentti. "Asymptotically Efficient Estimation of Cointegration Regressions." Econometric Theory 7, no. 1 (March 1991): 1–21. http://dx.doi.org/10.1017/s0266466600004217.

Full text
Abstract:
An asymptotic optimality theory for the estimation of cointegration regressions is developed in this paper. The theory applies to a reasonably wide class of estimators without making any specific assumptions about the probability distribution or short-run dynamics of the data-generating process. Due to the nonstandard nature of the estimation problem, the conventional minimum variance criterion does not provide a convenient measure of asymptotic efficiency. An alternative criterion, based on the concentration or peakedness of the limiting distribution of an estimator, is therefore adopted. The limiting distribution of estimators with maximum asymptotic efficiency is characterized in the paper and used to discuss the optimality of some known estimators. A new asymptotically efficient estimator is also introduced. This estimator is obtained from the ordinary least-squares estimator by a time domain correction which is nonparametric in the sense that no assumption of a finite parameter model is required. The estimator can be computed with least squares without any initial estimations.
APA, Harvard, Vancouver, ISO, and other styles
