Journal articles on the topic 'Estimator Procedure'

To see the other types of publications on this topic, follow the link: Estimator Procedure.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Estimator Procedure.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Gould, W. R., L. A. Stefanski, and K. H. Pollock. "Use of simulation–extrapolation estimation in catch–effort analyses." Canadian Journal of Fisheries and Aquatic Sciences 56, no. 7 (July 1, 1999): 1234–40. http://dx.doi.org/10.1139/f99-052.

Abstract:
All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially smaller than naive estimates that ignored measurement error. In a simulation of the procedure, we compared SIMEX estimators with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of the two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
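The SIMEX mechanics described in this abstract are compact enough to sketch. The toy below is not the paper's catch-effort model: it applies SIMEX to the slope of a simple linear regression with a known measurement-error standard deviation `sigma_u`, and the quadratic extrapolant is one common, illustrative choice. The mechanics are the same: re-simulate extra error at levels lambda, average the naive estimates, and extrapolate back to lambda = -1 (zero total error).

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: y = 2*x + noise; x is only observed with measurement error.
n = 2000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.5, n)
sigma_u = 0.6                                  # known error std dev
x_obs = x_true + rng.normal(0.0, sigma_u, n)

def slope(x, y):
    """Naive OLS slope, biased toward zero under measurement error in x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Simulation step: add extra error at levels lam, average the naive estimates.
lams = np.array([0.5, 1.0, 1.5, 2.0])
means = [np.mean([slope(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
                  for _ in range(200)])
         for lam in lams]

# Extrapolation step: fit a quadratic in lam and evaluate at lam = -1,
# the point corresponding to zero total measurement error.
simex_slope = np.polyval(np.polyfit(lams, means, 2), -1.0)
naive_slope = slope(x_obs, y)
print(naive_slope, simex_slope)   # SIMEX lands closer to the true slope 2.0
```

As in the paper's findings, the extrapolated estimate removes most, but not all, of the attenuation bias of the naive estimator.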
2

Cordue, Patrick L. "Designing optimal estimators for fish stock assessment." Canadian Journal of Fisheries and Aquatic Sciences 55, no. 2 (February 1, 1998): 376–86. http://dx.doi.org/10.1139/f97-228.

Abstract:
Many estimation procedures are used in the provision of fisheries stock assessment advice. Most procedures use estimators that have optimal large-sample characteristics, but these are often applied to small-sample data sets. In this paper, a minimum integrated average expected loss (MIAEL) estimation procedure is presented. By design, a MIAEL estimator has optimal characteristics for the type of data it is applied to, given that the model assumptions of the particular problem are satisfied. The estimation procedure is developed within a decision-theoretic framework and illustrated with a Bernoulli and a fisheries example. MIAEL estimation is related to optimal Bayes estimation, as both procedures seek an estimator that minimizes an integrated loss function. In most fisheries applications a global MIAEL estimator will be difficult to determine, and a MIAEL estimator will need to be found within a given class of estimators. "Squared f-error," a generalization of the common squared error loss function, is defined. It is shown that an estimator can be improved (for a given squared f-error loss function) by using its best linear transformation, which is the MIAEL estimator within the class of linear transformations (in f space).
3

Rautenbach, H. M., and J. J. J. Roux. "Statistical analysis based on quaternion normal random variables." Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie 4, no. 3 (March 18, 1985): 120–27. http://dx.doi.org/10.4102/satnt.v4i3.1042.

Abstract:
The quaternion normal distribution is derived and a number of its characteristics are highlighted. The maximum likelihood estimation procedure in the quaternion case is examined, and the conclusion is reached that the estimation procedure is simplified if the unknown parameters of the associated real probability density function are estimated. The quaternion estimator is then obtained by regarding these estimators as the components of the quaternion estimator. By means of an example, attention is given to a test criterion that can be used in the quaternion model.
4

Schreuder, H. T., Z. Ouyang, and M. Williams. "Point-Poisson, point-pps, and modified point-pps sampling: efficiency and variance estimation." Canadian Journal of Forest Research 22, no. 8 (August 1, 1992): 1071–78. http://dx.doi.org/10.1139/x92-142.

Abstract:
Modified point-pps (probability proportional to size) sampling selects at least one sample tree per point and yields a fixed sample size. Point-Poisson sampling is as efficient as this modified procedure but less efficient than regular point-pps sampling in a simulation study estimating total volume using either the Horvitz–Thompson (ŶHT) or the weighted regression estimator (Ŷwr). Point-pps sampling is somewhat more efficient than point-Poisson sampling for all estimators except ŶHT, and point-Poisson sampling is always somewhat more efficient than modified point-pps sampling across all estimators. For board foot volume the regression estimators are more efficient than ŶHT for all three procedures. Point-pps sampling is always most efficient, except for ŶHT, and point-Poisson sampling is always more efficient than the modified point-pps procedure. We recommend using Ŷgr (generalized regression estimator), Ŷwr, or ŶHT for total volume and Ŷgr for board foot volume. Three variance estimators estimate the variances of the regression estimates with small bias; we recommend the simple bootstrap variance estimator because it is simple to compute and does as well as its two main competitors. It also does well for ŶHT for all three procedures and should be used for ŶHT in point-Poisson sampling in preference to the Grosenbaugh variance approximation. An unbiased variance estimator is given for ŶHT with the modified point-pps procedure, but the simple bootstrap variance is equally good.
5

Koppelman, Frank S., and Laurie A. Garrow. "Efficiently Estimating Nested Logit Models with Choice-Based Samples." Transportation Research Record: Journal of the Transportation Research Board 1921, no. 1 (January 2005): 63–69. http://dx.doi.org/10.1177/0361198105192100108.

Abstract:
Choice-based samples oversample infrequently chosen alternatives to obtain an effective representation of the behavior of people who select these alternatives. However, the use of choice-based samples requires recognition of the sampling process in formulating the estimation procedure. In general, this can be accomplished by applying weights to the observed choices in the estimation process. Unfortunately, the use of such weighted estimation procedures for choice models does not yield efficient estimators. However, for the special case of the multinomial logit model with a full set of alternative-specific constants, the standard maximum likelihood estimator, which is efficient, can be used with adjustment of the alternative-specific constants. The same maximum likelihood estimator can also be used with adjustment to estimate nested logit models with choice-based samples. The proof of this property is qualitatively described, and examples demonstrate how to apply the adjustment procedure.
6

Fatima, Mehreen, Saman Hanif Shahbaz, Muhammad Hanif, and Muhammad Qaiser Shahbaz. "A modified regression-cum-ratio estimator for finite population mean in presence of nonresponse using ranked set sampling." AIMS Mathematics 7, no. 4 (2022): 6478–88. http://dx.doi.org/10.3934/math.2022361.

Abstract:
Several situations arise where decision-making is required for some characteristic of an asymmetrical population, for example, estimating the weekly number of server breakdowns at a company. Estimation methods based upon classical sampling designs are not suitable in such situations, and some specialized methods and/or estimators are required. Ranked set sampling is a procedure that is suitable in such situations. In this paper, a new estimator is proposed that can be used to estimate population characteristics of asymmetrical populations. The proposed estimator is useful for estimating the population mean in the presence of non-response in the study variable using a ranked set sampling procedure. The estimator is based upon two auxiliary variables to reduce the effect of asymmetry. The use of two auxiliary variables is also helpful in minimizing the variation in the estimation of the population mean of the study variable. The ranked set sampling procedure is used to gain accuracy when the actual measurements may be time-consuming, expensive, or difficult to obtain in a small sample size. The use of ranked set sampling also reduces the effect of asymmetry in the characteristics under study. Expressions for the mean square error and bias of the proposed estimator have been derived. The performance of the proposed estimator is evaluated using real-life data, and a simulation study is carried out to get an overview of efficiency. The relative efficiency of the proposed estimator is compared with some existing estimators. The proposed estimator is found to be highly efficient compared with Mohanty's regression-cum-ratio estimator in simple random sampling and more reliable in the case of non-response with a small sample size.
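The ranked set sampling procedure on which this estimator is built is easy to sketch on its own. The snippet below is a generic illustration, not the authors' regression-cum-ratio estimator: it assumes perfect within-set ranking and a skewed gamma "breakdowns" population (both illustrative assumptions), and compares the variance of the RSS sample mean with a simple random sample using the same number of measured units.

```python
import numpy as np

rng = np.random.default_rng(1)

# Skewed population, e.g. weekly numbers of server breakdowns.
draw = lambda m: rng.gamma(2.0, 3.0, m)   # true mean = 2.0 * 3.0 = 6.0

def rss_mean(k, cycles):
    """Ranked set sampling: per cycle, draw k independent sets of k units,
    rank each set (perfect ranking assumed), and measure only the i-th
    ranked unit of the i-th set. Returns the mean of the measured units."""
    vals = []
    for _ in range(cycles):
        for i in range(k):
            vals.append(np.sort(draw(k))[i])
    return np.mean(vals)

k, cycles, reps = 4, 5, 2000              # 20 measured units per sample
rss = np.array([rss_mean(k, cycles) for _ in range(reps)])
srs = np.array([draw(k * cycles).mean() for _ in range(reps)])

# Both are unbiased for the mean, but RSS has the smaller variance.
print(rss.mean(), srs.mean(), rss.var(), srs.var())
```

The variance gain is exactly why RSS pays off when measurement is expensive relative to ranking.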
7

Chen, Liqiong, Antonio F. Galvao, and Suyong Song. "Quantile Regression with Generated Regressors." Econometrics 9, no. 2 (April 12, 2021): 16. http://dx.doi.org/10.3390/econometrics9020016.

Abstract:
This paper studies estimation and inference for linear quantile regression models with generated regressors. We suggest a practical two-step estimation procedure, where the generated regressors are computed in the first step. The asymptotic properties of the two-step estimator, namely, consistency and asymptotic normality are established. We show that the asymptotic variance-covariance matrix needs to be adjusted to account for the first-step estimation error. We propose a general estimator for the asymptotic variance-covariance, establish its consistency, and develop testing procedures for linear hypotheses in these models. Monte Carlo simulations to evaluate the finite-sample performance of the estimation and inference procedures are provided. Finally, we apply the proposed methods to study Engel curves for various commodities using data from the UK Family Expenditure Survey. We document strong heterogeneity in the estimated Engel curves along the conditional distribution of the budget share of each commodity. The empirical application also emphasizes that correctly estimating confidence intervals for the estimated Engel curves by the proposed estimator is of importance for inference.
8

Chang, Yen-Ching. "Speeding up estimation of the Hurst exponent by a two-stage procedure from a large to small range." Engineering Computations 34, no. 1 (March 6, 2017): 3–17. http://dx.doi.org/10.1108/ec-01-2016-0036.

Abstract:
Purpose: The Hurst exponent has been very important in distinguishing between fractal signals and explaining their significance. For estimators of the Hurst exponent, accuracy and efficiency are two inevitable considerations. The main purpose of this study is to raise the execution efficiency of the existing estimators, especially the fast maximum likelihood estimator (MLE), which has optimal accuracy.
Design/methodology/approach: A two-stage procedure combining a quicker method and a more accurate one to estimate the Hurst exponent from a large to a small range is developed. For the best possible accuracy, the data-induction method is currently ideal for the first-stage estimator, and the fast MLE is the best candidate for the second-stage estimator.
Findings: For signals modeled as discrete-time fractional Gaussian noise, the proposed two-stage estimator can save up to 41.18 per cent of the computational time of the fast MLE while remaining almost as accurate; even for signals modeled as discrete-time fractional Brownian motion, it can still save about 35.29 per cent, except for smaller data sizes.
Originality/value: The proposed two-stage estimation procedure is a novel idea. Other fields of parameter estimation can be expected to apply the concept of the two-stage procedure to raise computational performance while remaining almost as accurate as the more accurate of the two estimators.
9

Sohail, Muhammad Umair, Nursel Koyuncu, and Muhammad Areeb Iqbal Sethi. "Almost Unbiased Estimation of Coefficient of Dispression from Imputed Data." STATISTICS, COMPUTING AND INTERDISCIPLINARY RESEARCH 3, no. 2 (December 31, 2021): 143–54. http://dx.doi.org/10.52700/scir.v3i2.55.

Abstract:
This article develops an almost unbiased estimator of the coefficient of dispersion through the productive use of the coefficient of dispersion of the auxiliary variable in two-phase sampling. Expressions for the variances of the proposed estimators are obtained up to the first order of approximation. The relative efficiency of the proposed unbiased ratio estimator is compared with the naive estimator using simulated data sets. We conclude that the proposed imputation procedure is more efficient than the traditional estimator.
11

Green, Edwin J., and William E. Strawderman. "Stein-rule estimation of coefficients for 18 eastern hardwood cubic volume equations." Canadian Journal of Forest Research 16, no. 2 (April 1, 1986): 249–55. http://dx.doi.org/10.1139/x86-044.

Abstract:
A Stein-rule estimator, which shrinks least squares estimates of regression parameters toward their weighted average, was employed to estimate the coefficient in the constant form factor volume equation for 18 species simultaneously. The Stein-rule procedure was applied to ordinary least squares estimates and weighted least squares estimates. Simulation tests on independent validation data sets revealed that the Stein-rule estimates were biased, but predicted better than the corresponding least squares estimates. The Stein-rule procedures also yielded lower estimated mean square errors for the volume equation coefficient than the corresponding least squares procedure. Different methods of withdrawing sample data from the total sample available for each species revealed that the superiority of Stein-rule procedures over least squares decreased as the sample size increased and that the Stein-rule procedures were robust to unequal sample sizes, at least on the scale studied here.
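The shrinkage idea in this entry, pulling per-species least squares estimates toward their common mean, can be illustrated with the classic positive-part James-Stein rule. This is a stylized stand-in, not the authors' volume-equation fit: the least squares step is replaced by independent normal estimates with a known sampling variance, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

p, sigma2 = 18, 0.04
theta = rng.normal(0.50, 0.10, p)   # true coefficients, one per "species"

def stein_shrink(est, sigma2):
    """Shrink each estimate toward the grand mean (positive-part
    James-Stein / Lindley rule)."""
    k = est.size
    grand = est.mean()
    s = np.sum((est - grand) ** 2)
    factor = max(0.0, 1.0 - (k - 3) * sigma2 / s)
    return grand + factor * (est - grand)

reps = 5000
mse_ls = mse_js = 0.0
for _ in range(reps):
    est = theta + rng.normal(0.0, np.sqrt(sigma2), p)  # least squares stand-in
    mse_ls += np.sum((est - theta) ** 2) / reps
    mse_js += np.sum((stein_shrink(est, sigma2) - theta) ** 2) / reps

print(mse_ls, mse_js)  # total risk across all coefficients drops after shrinkage
```

The individual shrunken estimates are biased, just as the abstract reports, yet the total mean squared error is lower than for least squares.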
12

DU, JIANG, ZHONGZHAN ZHANG, and ZHIMENG SUN. "VARIABLE SELECTION FOR PARTIALLY LINEAR VARYING COEFFICIENT QUANTILE REGRESSION MODEL." International Journal of Biomathematics 06, no. 03 (May 2013): 1350015. http://dx.doi.org/10.1142/s1793524513500150.

Abstract:
In this paper, we propose a variable selection procedure for the partially linear varying coefficient model under the quantile loss function with an adaptive Lasso penalty. The functional coefficients are estimated by B-spline approximations. The proposed procedure simultaneously selects significant variables and estimates unknown parameters. Its major advantage over existing procedures is that it is easy to implement with existing software and requires no specification of the error distribution. Under regularity conditions, we show that the proposed procedure can be as efficient as the oracle estimator, and we derive the optimal convergence rate of the functional coefficients. A simulation study and a real data application are undertaken to assess the finite sample performance of the proposed variable selection procedure.
13

Sirota, A. A., A. O. Donskikh, A. V. Akimov, and D. A. Minakov. "Multivariate mixed kernel density estimators and their application in machine learning for classification of biological objects based on spectral measurements." Computer Optics 43, no. 4 (August 2019): 677–91. http://dx.doi.org/10.18287/2412-6179-2019-43-4-677-691.

Abstract:
A problem of non-parametric multivariate density estimation for machine learning and data augmentation is considered. A new mixed density estimation method based on calculating the convolution of independently obtained kernel density estimates for unknown distributions of informative features and a known (or independently estimated) density for non-informative interference occurring during measurements is proposed. Properties of the mixed density estimates obtained using this method are analyzed. The method is compared with a conventional Parzen-Rosenblatt window method applied directly to the training data. The equivalence of the mixed kernel density estimator and the data augmentation procedure based on the known (or estimated) statistical model of interference is theoretically and experimentally proven. The applicability of the mixed density estimators for training of machine learning algorithms for the classification of biological objects (elements of grain mixtures) based on spectral measurements in the visible and near-infrared regions is evaluated.
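The equivalence this abstract describes, between a mixed kernel density estimate and data augmentation with simulated interference, can be illustrated from the augmentation side. The sketch below is a generic Gaussian toy, not the spectral-measurement setup: X stands in for the informative feature, U for the known interference, and the augmented-data KDE is checked against the exact density of X + U.

```python
import numpy as np

rng = np.random.default_rng(8)

def kde(samples, grid, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (samples.size * h * np.sqrt(2 * np.pi))

# Informative feature X (law unknown to the estimator, here N(0,1)) and
# known measurement interference U ~ N(0,1); the observed quantity is X + U.
n = 2000
x = rng.normal(0.0, 1.0, n)

# Data augmentation: replicate each x_i with fresh simulated draws of U,
# which is equivalent to convolving the KDE of X with the density of U.
m = 20
aug = (x[:, None] + rng.normal(0.0, 1.0, (n, m))).ravel()

grid = np.linspace(-4.0, 4.0, 81)
est = kde(aug, grid, h=0.3)

# The true density of X + U is N(0, 2).
true = np.exp(-grid**2 / 4.0) / np.sqrt(4.0 * np.pi)
print(np.max(np.abs(est - true)))   # small uniform error over the grid
```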
14

Khan, Dost Muhammad, Muhammad Ali, Zubair Ahmad, Sadaf Manzoor, and Sundus Hussain. "A New Efficient Redescending M-Estimator for Robust Fitting of Linear Regression Models in the Presence of Outliers." Mathematical Problems in Engineering 2021 (November 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/3090537.

Abstract:
Robust regression is an important iterative procedure that seeks to analyze data sets contaminated with outliers and unusual observations and to reduce their impact on the regression coefficients. Robust estimation methods have been introduced to deal with the problem of outliers and to provide efficient and stable estimates in their presence. Various robust estimators have been developed in the literature to restrict the unbounded influence of outliers or leverage points on the model estimates. Here, a new redescending M-estimator is proposed using a novel objective function, with the prime focus on obtaining highly robust and efficient estimates. It is evident from the results that, for normal and clean data, the proposed estimator is almost as efficient as the ordinary least squares method, yet it becomes highly resistant to outliers when applied to contaminated datasets. A simulation study is carried out to assess the performance of the proposed redescending M-estimator over different data generation scenarios, including normal, t, and double exponential distributions with different levels of outlier contamination, and the results are compared with existing redescending M-estimators, e.g., the Huber, Tukey biweight, Hampel, and Andrews' sine functions. The performance of the proposed estimator was also checked in real-life data applications, where it gives promising results compared with the existing estimators.
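A redescending M-estimator is conveniently fitted by iteratively reweighted least squares. The sketch below uses the classical Tukey biweight, one of the comparison estimators named in the abstract, not the authors' new objective function; the tuning constant 4.685 and the MAD scale estimate are the usual textbook choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def tukey_weights(r, c=4.685):
    """Redescending Tukey biweight: the weight falls to exactly zero for
    residuals beyond c robust-scale units, so gross outliers are ignored."""
    u = np.abs(r) / c
    w = (1.0 - u**2) ** 2
    w[u >= 1.0] = 0.0
    return w

def m_estimate(X, y, n_iter=50):
    """Robust linear regression by iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        w = tukey_weights(r / scale)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Clean linear data with 10% gross outliers added to y.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 3.0]) + rng.normal(0.0, 1.0, n)
y[:50] += 30.0                                             # contamination

ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = m_estimate(X, y)
print(ols, rob)   # the OLS intercept is pulled up; the robust fit is not
```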
15

Robin, Jean-Marc, and Richard J. Smith. "TESTS OF RANK." Econometric Theory 16, no. 2 (April 2000): 151–75. http://dx.doi.org/10.1017/s0266466600162012.

Abstract:
This paper considers tests for the rank of a matrix for which a root-T consistent estimator is available. However, in contrast to tests associated with the minimum chi-square and asymptotic least squares principles, the estimator's asymptotic variance matrix is not required to be either full or of known rank. Test statistics based on certain estimated characteristic roots are proposed whose limiting distributions are a weighted sum of independent chi-squared variables. These weights may be simply estimated, yielding convenient estimators for the limiting distributions of the proposed statistics. A sequential testing procedure is presented that yields a consistent estimator for the rank of a matrix. A simulation experiment is conducted comparing the characteristic root statistics advocated in this paper with statistics based on the Wald and asymptotic least squares principles.
16

Prabakaran, T. Edwin, and B. Chandrasekar. "Simultaneous Equivariant Estimation for Location - Scale Models with a Common Scale Parameter." Calcutta Statistical Association Bulletin 48, no. 3-4 (September 1998): 145–56. http://dx.doi.org/10.1177/0008068319980303.

Abstract:
In this paper, we develop a procedure for simultaneous equivariant estimation of the parameters of location-scale models having a common scale parameter. The minimum risk equivariant (MRE) estimator is determined for any invariant loss function. The MRE estimator is characterized with respect to a quadratic-type loss function (Zacks, 1971, p. 102) and its uniqueness is observed. Four optimality criteria for comparing vector equivariant estimators are considered and their equivalence is established.
17

Kantar, Yeliz Mert, and Ibrahim Arik. "The Use of the Data Transformation Techniques in Estimating the Shape Parameter of the Weibull Distribution for the Wind Speed." International Journal of Energy Optimization and Engineering 3, no. 3 (July 2014): 20–33. http://dx.doi.org/10.4018/ijeoe.2014070102.

Abstract:
In recent years, the Weibull distribution has been commonly used and recommended for modeling wind speed. Many estimators have therefore been proposed in the search for the best method of estimating the parameters of the Weibull distribution. In particular, the estimator based on regression procedures with the Weibull probability plot is often used because of its computational simplicity and graphical presentation. However, when this procedure is applied, heteroscedasticity or non-normality of the error terms may be encountered in many cases. One way to handle this problem is to use transformation techniques. In this study, regression estimation based on data transformation is considered for estimating the parameters of the Weibull distribution. The simulation results show that, for the shape parameter of the Weibull distribution, the considered estimator based on data transformation provides better performance than the least squares estimator in terms of bias and mean square error for most of the considered cases.
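The Weibull-probability-plot regression that this study builds on is compact enough to sketch. Below is the plain least squares version without the authors' data transformation; the median-rank plotting positions are one common convention among several, and the wind-speed parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def weibull_plot_shape(x):
    """Least squares fit on the Weibull probability plot:
    ln(-ln(1 - F)) = k*ln(x) - k*ln(lam), so the fitted slope is the shape k."""
    x = np.sort(x)
    n = x.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
    yy = np.log(-np.log(1.0 - F))
    xx = np.log(x)
    slope, _ = np.polyfit(xx, yy, 1)
    return slope

# Simulated wind speeds: Weibull with shape k = 2 and scale 6 m/s.
k_true, lam = 2.0, 6.0
speeds = lam * rng.weibull(k_true, 1000)

k_hat = weibull_plot_shape(speeds)
print(k_hat)   # close to the true shape 2.0
```

Heteroscedasticity in the plot's error terms is exactly the weakness that motivates the transformation techniques studied in the paper.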
18

Greene, William. "Fixed Effects Vector Decomposition: A Magical Solution to the Problem of Time-Invariant Variables in Fixed Effects Models?" Political Analysis 19, no. 2 (2011): 135–46. http://dx.doi.org/10.1093/pan/mpq034.

Abstract:
Plümper and Troeger (2007) propose a three-step procedure for the estimation of a fixed effects (FE) model that, it is claimed, “provides the most reliable estimates under a wide variety of specifications common to real world data.” Their fixed effects vector decomposition (FEVD) estimator is startlingly simple, involving three simple steps, each requiring nothing more than ordinary least squares (OLS). Large gains in efficiency are claimed for cases of time-invariant and slowly time-varying regressors. A subsequent literature has compared the estimator to other estimators of FE models, including the estimator of Hausman and Taylor (1981) also (apparently) with impressive gains in efficiency. The article also claims to provide an efficient estimator for parameters on time-invariant variables (TIVs) in the FE model. None of the claims are correct. The FEVD estimator simply reproduces (identically) the linear FE (dummy variable) estimator then substitutes an inappropriate covariance matrix for the correct one. The consistency result follows from the fact that OLS in the FE model is consistent. The “efficiency” gains are illusory. The claim that the estimator provides an estimator for the coefficients on TIVs in an FE model is also incorrect. That part of the parameter vector remains unidentified. The “estimator” relies upon a strong assumption that turns the FE model into a type of random effects model.
19

Gorgees, Hazim Mansoor, and Fatimah Assim Mahdi. "The Comparison Between Different Approaches to Overcome the Multicollinearity Problem in Linear Regression Models." Ibn AL- Haitham Journal For Pure and Applied Science 31, no. 1 (May 14, 2018): 212. http://dx.doi.org/10.30526/31.1.1841.

Abstract:
In the presence of the multicollinearity problem, parameter estimation based on the ordinary least squares procedure is unsatisfactory. In 1970, Hoerl and Kennard introduced an alternative method known as ridge regression estimation. In such an estimator, the ridge parameter plays an important role in estimation. Various methods have been proposed by statisticians to select the biasing constant (ridge parameter). Another popular method used to deal with the multicollinearity problem is the principal component method. In this paper, we employ simulation to compare the performance of the principal component estimator with some types of ordinary ridge regression estimators based on the value of the biasing constant (ridge parameter). The mean square error (MSE) is used as a criterion to assess the performance of these estimators.
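A minimal version of the comparison this paper runs looks like the following: simulate a collinear design, then measure the coefficient MSE of OLS, ridge regression with a fixed biasing constant, and principal component regression. The design, the constant k = 0.1, and the eigenvalue cutoff are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

n, p, reps = 50, 4, 500
beta = np.array([1.0, 2.0, -1.0, 0.5])

def make_X(rng):
    # Severely collinear design: all columns share one latent factor.
    z = rng.normal(size=(n, 1))
    return z + 0.05 * rng.normal(size=(n, p))

mse = {"ols": 0.0, "ridge": 0.0, "pcr": 0.0}
for _ in range(reps):
    X = make_X(rng)
    y = X @ beta + rng.normal(0.0, 1.0, n)

    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # Ridge with a fixed biasing constant (one of many published choices).
    k = 0.1
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

    # Principal component regression: keep only large-eigenvalue components.
    vals, vecs = np.linalg.eigh(X.T @ X)
    keep = vals > 0.1 * vals.max()
    Z = X @ vecs[:, keep]
    g = np.linalg.lstsq(Z, y, rcond=None)[0]
    b_pcr = vecs[:, keep] @ g

    for name, b in [("ols", b_ols), ("ridge", b_ridge), ("pcr", b_pcr)]:
        mse[name] += np.sum((b - beta) ** 2) / reps

print(mse)   # ridge and PCR beat OLS under severe collinearity
```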
20

Putter, Hein, and Cristian Spitoni. "Non-parametric estimation of transition probabilities in non-Markov multi-state models: The landmark Aalen–Johansen estimator." Statistical Methods in Medical Research 27, no. 7 (October 20, 2016): 2081–92. http://dx.doi.org/10.1177/0962280216674497.

Abstract:
The topic of non-parametric estimation of transition probabilities in non-Markov multi-state models has seen a remarkable surge of activity recently. Two recent papers have used the idea of subsampling in this context. The first, by de Uña Álvarez and Meira-Machado, uses a procedure based on (differences between) Kaplan–Meier estimators derived from a subset of the data consisting of all subjects observed to be in the given state at the given time. The second, by Titman, derived estimators of transition probabilities that are consistent in general non-Markov multi-state models. Here, we show that the same idea of subsampling, used in both these papers, combined with the Aalen–Johansen estimate of the state occupation probabilities derived from that subset, can also be used to obtain a relatively simple and intuitive procedure, which we term landmark Aalen–Johansen. We show that the landmark Aalen–Johansen estimator yields a consistent estimator of the transition probabilities in general non-Markov multi-state models under the same conditions as needed for consistency of the Aalen–Johansen estimator of the state occupation probabilities. Simulation studies show that the landmark Aalen–Johansen estimator has good small-sample properties and is slightly more efficient than the other estimators.
21

Lee, Kyuseok. "A weighted Fama-MacBeth two-step panel regression procedure: asymptotic properties, finite-sample adjustment, and performance." Studies in Economics and Finance 37, no. 2 (May 28, 2020): 347–60. http://dx.doi.org/10.1108/sef-08-2019-0322.

Abstract:
Purpose: In a recent paper, Yoon and Lee (2019) propose a weighted Fama and MacBeth (FMB hereafter) two-step panel regression procedure and provide evidence that their weighted FMB procedure produces more efficient coefficient estimators than the usual unweighted FMB procedure. The purpose of this study is to supplement and improve their weighted FMB procedure, as they provide neither asymptotic results (i.e. consistency and asymptotic distribution) nor evidence on how close their standard error estimator is to the true standard error.
Design/methodology/approach: First, asymptotic results for the weighted FMB coefficient estimator are provided. Second, a finite-sample-adjusted standard error estimator is provided. Finally, the performance of the adjusted standard error estimator is assessed against the true standard error.
Findings: The standard error estimator proposed by Yoon and Lee (2019) is found to be asymptotically consistent, although the finite-sample-adjusted standard error estimator proposed in this study works better and helps to reduce bias. The findings of Yoon and Lee (2019) hold even when the average R2 over time is very small, at about 1% or 0.1%.
Originality/value: The findings of this study strongly suggest that the weighted FMB regression procedure, in particular the finite-sample-adjusted procedure proposed here, is a computationally simple but more powerful alternative to the usual unweighted FMB procedure. In addition, to the best of the authors' knowledge, this is the first study to present a formal proof of the asymptotic distribution of the FMB coefficient estimator.
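The weighted second step at the heart of the FMB idea is easy to sketch. The snippet below runs period-by-period cross-sectional regressions and then averages the slope estimates, once unweighted and once with generic inverse-variance weights; this weighting is an illustration only, not Yoon and Lee's exact scheme or the finite-sample adjustment proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(9)

T, N, beta_true = 120, 50, 0.8
gammas, variances = [], []

for t in range(T):
    x = rng.normal(size=N)
    sigma_t = rng.uniform(0.5, 3.0)            # noise level varies by period
    y = beta_true * x + rng.normal(0.0, sigma_t, N)

    # First step: cross-sectional OLS for period t.
    X = np.column_stack([np.ones(N), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = resid @ resid / (N - 2)
    gammas.append(b[1])
    variances.append(s2 * np.linalg.inv(X.T @ X)[1, 1])

gammas, variances = np.array(gammas), np.array(variances)

# Second step: plain FMB averages the per-period slopes; the weighted
# version down-weights noisy periods by inverse estimated variance.
fmb_plain = gammas.mean()
w = 1.0 / variances
fmb_weighted = np.sum(w * gammas) / np.sum(w)
print(fmb_plain, fmb_weighted)   # both near the true slope 0.8
```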
22

Udagawa, Takuma, Haruka Kiyohara, Yusuke Narita, Yuta Saito, and Kei Tateno. "Policy-Adaptive Estimator Selection for Off-Policy Evaluation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10025–33. http://dx.doi.org/10.1609/aaai.v37i8.26195.

Abstract:
Off-policy evaluation (OPE) aims to accurately evaluate the performance of counterfactual policies using only offline logged data. Although many estimators have been developed, no single estimator dominates the others, because the estimators' accuracy can vary greatly depending on the given OPE task, such as the evaluation policy, number of actions, and noise level. Thus, the data-driven estimator selection problem is becoming increasingly important and can have a significant impact on the accuracy of OPE. However, identifying the most accurate estimator using only the logged data is quite challenging because the ground-truth estimation accuracy of estimators is generally unavailable. This paper thus studies this challenging problem of estimator selection for OPE for the first time. In particular, we enable estimator selection that is adaptive to a given OPE task by appropriately subsampling available logged data and constructing pseudo policies useful for the underlying estimator selection task. Comprehensive experiments on both synthetic and real-world company data demonstrate that the proposed procedure substantially improves estimator selection compared to a non-adaptive heuristic. The complete version with the technical appendix is available on arXiv: http://arxiv.org/abs/2211.13904.
23

Henriques-Rodrigues, Lígia, and M. Ivette Gomes. "Box-Cox Transformations and Bias Reduction in Extreme Value Theory." Computational and Mathematical Methods 2022 (March 10, 2022): 1–15. http://dx.doi.org/10.1155/2022/3854763.

Abstract:
The Box-Cox transformations are used to make the data more suitable for statistical analysis. We know from the literature that this transformation of the data can increase the rate of convergence of the tail of the distribution to the generalized extreme value distribution, and as a byproduct, the bias of the estimation procedure is reduced. The reduction of bias of the Hill estimator has been widely addressed in the literature of extreme value theory. Several techniques have been used to achieve such reduction of bias, either by removing the main component of the bias of the Hill estimator of the extreme value index (EVI) or by constructing new estimators based on generalized means or norms that generalize the Hill estimator. We are going to study the Box-Cox Hill estimator introduced by Teugels and Vanroelen, in 2004, proving the consistency and asymptotic normality of the estimator and addressing the choice and estimation of the power and shift parameters of the Box-Cox transformation for the EVI estimation. The performance of the estimators under study will be illustrated for finite samples through small-scale Monte Carlo simulation studies.
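The two ingredients named here can be sketched as follows; the power and shift values below are illustrative placeholders, not the choices studied by the authors:

```python
import math

def box_cox(x, lam, shift=0.0):
    # Box-Cox power transform with an optional shift; lam = 0 gives log.
    y = x + shift
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def hill_estimator(sample, k):
    # Classical Hill estimator of a positive extreme value index (EVI)
    # from the k largest order statistics.
    x = sorted(sample)
    n = len(x)
    log_threshold = math.log(x[n - k - 1])
    return sum(math.log(x[n - 1 - i]) for i in range(k)) / k - log_threshold

# Noise-free check on exact Pareto (EVI = 1) quantiles: Q(p) = 1/(1 - p).
n = 1000
data = [1.0 / (1.0 - i / (n + 1.0)) for i in range(1, n + 1)]
print(round(hill_estimator(data, 100), 2))  # close to the true EVI of 1
# The Box-Cox Hill idea applies the same statistic to transformed data:
transformed = [box_cox(v, 0.5) for v in data]
```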
24

Li, Wei, Yuwen Gu, and Lan Liu. "Demystifying a class of multiply robust estimators." Biometrika 107, no. 4 (May 25, 2020): 919–33. http://dx.doi.org/10.1093/biomet/asaa026.

Abstract:
For estimating the population mean of a response variable subject to ignorable missingness, a new class of methods, called multiply robust procedures, has been proposed. The advantage of multiply robust procedures over the traditional doubly robust methods is that they permit the use of multiple candidate models for both the propensity score and the outcome regression, and they are consistent if any one of the multiple models is correctly specified, a property termed multiple robustness. This paper shows that, somewhat surprisingly, multiply robust estimators are special cases of doubly robust estimators, where the final propensity score and outcome regression models are certain combinations of the candidate models. To further improve model specifications in the doubly robust estimators, we adapt a model mixing procedure as an alternative method for combining multiple candidate models. We show that multiple robustness and asymptotic normality can also be achieved by our mixing-based doubly robust estimator. Moreover, our estimator and its theoretical properties are not confined to parametric models. Numerical examples demonstrate that the proposed estimator is comparable to and can even outperform existing multiply robust estimators.
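A minimal sketch of the doubly robust (augmented inverse probability weighting) form that, per the abstract, multiply robust estimators reduce to; the propensity and outcome models here are illustrative stand-ins supplied as plain functions:

```python
def aipw_mean(data, propensity, outcome_model):
    """Doubly robust estimate of E[Y] under ignorable missingness.
    data: list of (x, r, y) with r = 1 if y is observed (y ignored when r = 0)."""
    total = 0.0
    for x, r, y in data:
        pi = propensity(x)    # working model for P(R = 1 | X = x)
        m = outcome_model(x)  # working model for E[Y | X = x]
        total += r * y / pi - (r - pi) / pi * m
    return total / len(data)

# Tiny worked example with hand-checkable numbers.
rows = [(1, 1, 2.0), (1, 0, 0.0), (2, 1, 3.0), (3, 0, 0.0)]
pi_fn = {1: 0.5, 2: 0.8, 3: 0.4}.__getitem__
m_fn = {1: 1.0, 2: 2.0, 3: 2.0}.__getitem__
print(round(aipw_mean(rows, pi_fn, m_fn), 4))  # (3 + 1 + 3.25 + 2) / 4 = 2.3125
```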
25

Wally Zaher, Jiddah r., and Ali Hameed Yousif. "Proposing Shrinkage Estimator of MCP and Elastic-Net penalties in Quantile Regression Model." Wasit Journal of Pure sciences 1, no. 3 (December 24, 2022): 126–34. http://dx.doi.org/10.31185/wjps.73.

Abstract:
In some studies, there is a need to estimate the conditional distribution of the response variable at different points, which is not available in linear regression. The alternative procedure for dealing with these problems is quantile regression. In this research, a new estimator for estimation and variable selection is proposed in the quantile regression model. The new estimator, called the shrinkage estimator, combines two penalties: the Minimax Concave Penalty (MCP) and Elastic-Net. It was compared with the MCP and Elastic-Net estimators by simulation, based on Mean Square Error (MSE) and the sparsity measures False Positive Rate (FPR) and False Negative Rate (FNR). We conclude that the proposed method is the best in terms of estimation and variable selection.
26

Carrasco, Marine, and Rachidi Kotchoni. "EFFICIENT ESTIMATION USING THE CHARACTERISTIC FUNCTION." Econometric Theory 33, no. 2 (February 22, 2016): 479–526. http://dx.doi.org/10.1017/s0266466616000025.

Abstract:
The method of moments procedure proposed by Carrasco and Florens (2000) permits full exploitation of the information contained in the characteristic function and yields an estimator which is asymptotically as efficient as the maximum likelihood estimator. However, this estimation procedure depends on a regularization or tuning parameter α that needs to be selected. The aim of the present paper is to provide a way to optimally choose α by minimizing the approximate mean square error (AMSE) of the estimator. Following an approach similar to that of Donald and Newey (2001), we derive a higher-order expansion of the estimator from which we characterize the finite sample dependence of the AMSE on α. We propose to select the regularization parameter by minimizing an estimate of the AMSE. We show that this procedure delivers a consistent estimator of α. Moreover, the data-driven selection of the regularization parameter preserves the consistency, asymptotic normality, and efficiency of the CGMM estimator. Simulation experiments based on a CIR model show the relevance of the proposed approach.
27

Amihud, Yakov, and Clifford M. Hurvich. "Predictive Regressions: A Reduced-Bias Estimation Method." Journal of Financial and Quantitative Analysis 39, no. 4 (December 2004): 813–41. http://dx.doi.org/10.1017/s0022109000003227.

Abstract:
Standard predictive regressions produce biased coefficient estimates in small samples when the regressors are Gaussian first-order autoregressive with errors that are correlated with the error series of the dependent variable. See Stambaugh (1999) for the single regressor model. This paper proposes a direct and convenient method to obtain reduced-bias estimators for single and multiple regressor models by employing an augmented regression, adding a proxy for the errors in the autoregressive model. We derive bias expressions for both the ordinary least-squares and our reduced-bias estimated coefficients. For the standard errors of the estimated predictive coefficients, we develop a heuristic estimator that performs well in simulations, for both the single predictor model and an important specification of the multiple predictor model. The effectiveness of our method is demonstrated by simulations and empirical estimates of common predictive models in finance. Our empirical results show that some of the predictive variables that were significant under ordinary least squares become insignificant under our estimation procedure.
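A sketch of the augmented-regression idea under stated assumptions: the first-order AR(1) bias adjustment below uses the familiar rho + (1 + 3*rho)/n form, which should be treated as an illustrative stand-in rather than the paper's exact formula:

```python
import random, statistics

def bias_corrected_rho(x):
    """OLS slope of x[t+1] on x[t] plus a first-order small-sample bias
    adjustment (assumed form: rho + (1 + 3*rho)/n; illustrative only)."""
    n = len(x) - 1
    lagged, lead = x[:-1], x[1:]
    mx, my = statistics.fmean(lagged), statistics.fmean(lead)
    rho = sum((a - mx) * (b - my) for a, b in zip(lagged, lead)) \
        / sum((a - mx) ** 2 for a in lagged)
    return rho + (1 + 3 * rho) / n

def error_proxy(x, rho_c):
    # Proxy for the AR(1) innovations; in the augmented regression,
    # y[t+1] is regressed on x[t] and this proxy to reduce bias.
    return [x[t + 1] - rho_c * x[t] for t in range(len(x) - 1)]

random.seed(1)
x = [0.0]
for _ in range(200):                  # AR(1) predictor, true rho = 0.9
    x.append(0.9 * x[-1] + random.gauss(0, 1))
rho_c = bias_corrected_rho(x)
v = error_proxy(x, rho_c)
print(round(rho_c, 2))                # typically near 0.9 for this sample
```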
28

Adhya, Sumanta. "Bootstrap Variance Estimation for Semiparametric Finite Population Distribution Function Estimator." Calcutta Statistical Association Bulletin 70, no. 1 (May 2018): 17–32. http://dx.doi.org/10.1177/0008068318765583.

Abstract:
Estimating the finite population distribution function (FPDF) has emerged as an important problem for survey statisticians since the pioneering work of Chambers and Dunstan [1]. It unifies estimation of standard finite population parameters, namely the mean and quantiles. Accordingly, estimating the variance of an FPDF estimator is an important task for assessing the quality of the estimator and drawing inferences (e.g., confidence interval estimation) on finite population parameters. Due to the non-linearity of the FPDF estimator, resampling-based methods were developed earlier for the parametric or non-parametric Chambers–Dunstan estimator. Here, we attempt the problem of estimating the variance of a P-splines-based semiparametric model-based Chambers–Dunstan type estimator of the FPDF. The proposed variance estimator involves bootstrapping. The bootstrap procedure is non-trivial since it does not imitate the full mechanism of the two-stage sample generating procedure from an infinite hypothetical population (superpopulation). We establish the weak consistency of the proposed resampling-based variance estimator for specific sampling designs, e.g., simple random sampling. The satisfactory empirical performance of the proposed estimator is also shown through simulation studies and a real-life example.
29

Tong, Jiayi, Jing Huang, Jessica Chubak, Xuan Wang, Jason H. Moore, Rebecca A. Hubbard, and Yong Chen. "An augmented estimation procedure for EHR-based association studies accounting for differential misclassification." Journal of the American Medical Informatics Association 27, no. 2 (October 16, 2019): 244–53. http://dx.doi.org/10.1093/jamia/ocz180.

Abstract:
Objectives: The ability to identify novel risk factors for health outcomes is a key strength of electronic health record (EHR)-based research. However, the validity of such studies is limited by error in EHR-derived phenotypes. The objective of this study was to develop a novel procedure for reducing bias in estimated associations between risk factors and phenotypes in EHR data.

Materials and Methods: The proposed method combines the strengths of a gold-standard phenotype obtained through manual chart review for a small validation set of patients and an automatically-derived phenotype that is available for all patients but is potentially error-prone (hereafter referred to as the algorithm-derived phenotype). An augmented estimator of associations is obtained by optimally combining these 2 phenotypes. We conducted simulation studies to evaluate the performance of the augmented estimator and conducted an analysis of risk factors for second breast cancer events using data on a cohort from Kaiser Permanente Washington.

Results: The proposed method was shown to reduce bias relative to an estimator using only the algorithm-derived phenotype and reduce variance compared to an estimator using only the validation data.

Discussion: Our simulation studies and real data application demonstrate that, compared to the estimator using validation data only, the augmented estimator has lower variance (ie, higher statistical efficiency). Compared to the estimator using error-prone EHR-derived phenotypes, the augmented estimator has smaller bias.

Conclusions: The proposed estimator can effectively combine an error-prone phenotype with gold-standard data from a limited chart review in order to improve analyses of risk factors using EHR data.
30

Chernikova, Oksana Sergeevna, and Yuliya Sergeevna Chetvertakova. "Two-stage parametric identification procedure to predict satellite orbital motion." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 5 (October 1, 2022): 5348. http://dx.doi.org/10.11591/ijece.v12i5.pp5348-5354.

Abstract:
The paper presents a new step-by-step procedure for constructing a navigation satellite motion model. At the first stage of the procedure, the parameters of the radiation pressure model are estimated using the maximum likelihood method. A statistical estimator based on the continuous-discrete adaptive unscented Kalman filter is proposed for estimating the solar radiation model parameters. A step-by-step scheme of the filtering algorithm used for the software development is given. At the second stage, the parameters of the unaccounted-perturbations model are estimated based on the results of residual difference measurements. The obtained results lead to a significant improvement in the prediction quality of the satellite trajectory.
31

Politis, Dimitris N. "HIGHER-ORDER ACCURATE, POSITIVE SEMIDEFINITE ESTIMATION OF LARGE-SAMPLE COVARIANCE AND SPECTRAL DENSITY MATRICES." Econometric Theory 27, no. 4 (March 3, 2011): 703–44. http://dx.doi.org/10.1017/s0266466610000484.

Abstract:
A new class of large-sample covariance and spectral density matrix estimators is proposed based on the notion of flat-top kernels. The new estimators are shown to be higher-order accurate when higher-order accuracy is possible. A discussion on kernel choice is presented as well as a supporting finite-sample simulation. The problem of spectral estimation under a potential lack of finite fourth moments is also addressed. The higher-order accuracy of flat-top kernel estimators typically comes at the sacrifice of the positive semidefinite property. Nevertheless, we show how a flat-top estimator can be modified to become positive semidefinite (even strictly positive definite) while maintaining its higher-order accuracy. In addition, an easy (and consistent) procedure for optimal bandwidth choice is given; this procedure estimates the optimal bandwidth associated with each individual element of the target matrix, automatically sensing (and adapting to) the underlying correlation structure.
32

Wall, Melanie M., and Yasuo Amemiya. "Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis." Journal of Educational and Behavioral Statistics 26, no. 1 (March 2001): 1–29. http://dx.doi.org/10.3102/10769986026001001.

Abstract:
Interest in considering nonlinear structural equation models is well documented in the behavioral and social sciences as well as in the education and marketing literature. This article considers estimation of polynomial structural models. An existing method is shown to have a limitation that the produced estimator is inconsistent for most practical situations. A new procedure is introduced and defined for a general model using products of observed indicators. The resulting estimator is consistent without assuming any distributional form for the underlying factors or errors. Identification assessment and standard error estimation are discussed. A simulation study addresses statistical issues including comparisons of discrepancy functions and the choice of appended product indicators. Application of the new procedure in a substance abuse prevention study is also reported.
33

Corstange, Daniel. "Sensitive Questions, Truthful Answers? Modeling the List Experiment with LISTIT." Political Analysis 17, no. 1 (2009): 45–63. http://dx.doi.org/10.1093/pan/mpn013.

Abstract:
Standard estimation procedures assume that empirical observations are accurate reflections of the true values of the dependent variable, but this assumption is dubious when modeling self-reported data on sensitive topics. List experiments (a.k.a. item count techniques) can nullify incentives for respondents to misrepresent themselves to interviewers, but current data analysis techniques are limited to difference-in-means tests. I present a revised procedure and statistical estimator called LISTIT that enable multivariate modeling of list experiment data. Monte Carlo simulations and a field test in Lebanon explore the behavior of this estimator.
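The difference-in-means baseline that LISTIT generalizes is easy to state; a sketch of that baseline (not the LISTIT estimator itself):

```python
from statistics import fmean

def list_experiment_prevalence(treatment_counts, control_counts):
    # Treatment respondents see J + 1 items (baseline items plus the
    # sensitive one); controls see the J baseline items. The mean
    # difference estimates the proportion holding the sensitive trait.
    return fmean(treatment_counts) - fmean(control_counts)

treated = [3, 2, 4, 1]   # counts reported with the sensitive item included
control = [2, 2, 3, 1]   # counts reported without it
print(list_experiment_prevalence(treated, control))  # 2.5 - 2.0 = 0.5
```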
34

Rolling, Craig A., Yuhong Yang, and Dagmar Velez. "COMBINING ESTIMATES OF CONDITIONAL TREATMENT EFFECTS." Econometric Theory 35, no. 6 (November 6, 2018): 1089–110. http://dx.doi.org/10.1017/s0266466618000397.

Abstract:
Estimating a treatment’s effect on an outcome conditional on covariates is a primary goal of many empirical investigations. Accurate estimation of the treatment effect given covariates can enable the optimal treatment to be applied to each unit or guide the deployment of limited treatment resources for maximum program benefit. Applications of conditional treatment effect estimation are found in direct marketing, economic policy, and personalized medicine. When estimating conditional treatment effects, the typical practice is to select a statistical model or procedure based on sample data. However, combining estimates from the candidate procedures often provides a more accurate estimate than the selection of a single procedure. This article proposes a method of model combination that targets accurate estimation of the treatment effect conditional on covariates. We provide a risk bound for the resulting estimator under squared error loss and illustrate the method using data from a labor skills training program.
35

Chen, Anthony, Piya Chootinan, Seungkyu Ryu, Ming Lee, and Will Recker. "An intersection turning movement estimation procedure based on path flow estimator." Journal of Advanced Transportation 46, no. 2 (November 9, 2010): 161–76. http://dx.doi.org/10.1002/atr.151.

36

Stehr, Mads, and Markus Kiderlen. "Improving the Cavalieri estimator under non-equidistant sampling and dropouts." Image Analysis & Stereology 39, no. 3 (November 25, 2020): 197–212. http://dx.doi.org/10.5566/ias.2422.

Abstract:
Motivated by the stereological problem of volume estimation from parallel section profiles, the so-called Newton-Cotes integral estimators based on random sampling nodes are analyzed. These estimators generalize the classical Cavalieri estimator and its variant for non-equidistant sampling nodes, the generalized Cavalieri estimator, and typically have a substantially smaller variance than the latter. The present paper focuses on the following points in relation to Newton-Cotes estimators: the treatment of dropouts, the construction of variance estimators, and, finally, their application in volume estimation of convex bodies. Dropouts are eliminated points in the initial stationary point process of sampling nodes, modeled by independent thinning. Among other things, exact representations of the variance are given in terms of the thinning probability and increments of the initial points under two practically relevant sampling models. The paper presents a general estimation procedure for the variance of Newton-Cotes estimators based on the sampling nodes in a bounded interval. Finally, the findings are illustrated in an application of volume estimation for three-dimensional convex bodies with sufficiently smooth boundaries.
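For intuition, the generalized Cavalieri idea of integrating section areas over non-equidistant nodes can be sketched with simple trapezoidal weights (the paper's Newton-Cotes estimators use higher-order weighting):

```python
import math, random

def generalized_cavalieri(positions, areas):
    # Trapezoidal-weight volume estimate from section areas measured
    # at sorted, possibly non-equidistant, section positions.
    vol = 0.0
    for i in range(len(positions) - 1):
        vol += 0.5 * (areas[i] + areas[i + 1]) * (positions[i + 1] - positions[i])
    return vol

# Unit ball: section area at height t is pi * (1 - t^2); true volume 4*pi/3.
random.seed(0)
t = sorted([-1.0, 1.0] + [random.uniform(-1, 1) for _ in range(400)])
a = [math.pi * (1 - s * s) for s in t]
print(round(generalized_cavalieri(t, a), 3))  # near 4*pi/3 ≈ 4.189
```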
37

Donkers, Bas, and Marcia Schafgans. "SPECIFICATION AND ESTIMATION OF SEMIPARAMETRIC MULTIPLE-INDEX MODELS." Econometric Theory 24, no. 6 (July 17, 2008): 1584–606. http://dx.doi.org/10.1017/s0266466608080626.

Abstract:
We propose an easy to use derivative-based two-step estimation procedure for semiparametric index models, where the number of indexes is not known a priori. In the first step various functionals involving the derivatives of the unknown function are estimated using nonparametric kernel estimators, in particular the average outer product of the gradient (AOPG). By testing the rank of the AOPG we determine the required number of indexes. Subsequently, we estimate the index parameters in a method of moments framework, with moment conditions constructed using the estimated average derivative functionals. The estimator readily extends to multiple equation models and is shown to be root-N-consistent and asymptotically normal.
38

PEARN, W. L., S. L. YANG, K. S. CHEN, and P. C. LIN. "TESTING PROCESS CAPABILITY USING THE INDEX Cpmk WITH AN APPLICATION." International Journal of Reliability, Quality and Safety Engineering 08, no. 01 (March 2001): 15–34. http://dx.doi.org/10.1142/s0218539301000360.

Abstract:
Numerous process capability indices, including Cp, Cpk, Cpm, and Cpmk, have been proposed to provide measures of process potential and performance. Procedures using the estimators of Cp, Cpk, and Cpm have been proposed for practitioners to use in judging whether a process meets the capability requirement. In this paper, based on the theory of hypothesis testing, we develop a step-by-step procedure using the estimator of Cpmk for practitioners to use in making decisions. The proposed procedure is then applied to an audio-speaker driver manufacturing process, to demonstrate how it may be applied to actual data collected in the factory.
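Since Cpmk has a closed form, the plug-in estimator used in such a testing procedure is straightforward to compute; a sketch with illustrative specification limits:

```python
import math

def cpmk(mean, std, lsl, usl, target):
    # Cpmk combines the one-sided Cpk numerator with the Taguchi-style
    # off-target penalty in the denominator.
    return min(usl - mean, mean - lsl) / (3 * math.sqrt(std ** 2 + (mean - target) ** 2))

# Illustrative numbers: spec limits [4, 10], target 7, process mean 8, sd 1.
print(round(cpmk(8.0, 1.0, 4.0, 10.0, 7.0), 4))  # 2 / (3 * sqrt(2)) ≈ 0.4714
```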
39

Hirose, Kei, and Hiroki Masuda. "Robust Relative Error Estimation." Entropy 20, no. 9 (August 24, 2018): 632. http://dx.doi.org/10.3390/e20090632.

Abstract:
Relative error estimation has recently been used in regression analysis. A crucial issue with the existing relative error estimation procedures is that they are sensitive to outliers. To address this issue, we employ the γ-likelihood function, which is constructed through γ-cross entropy while keeping the original statistical model in use. The estimating equation has a redescending property, a desirable property in robust statistics, for a broad class of noise distributions. To find a minimizer of the negative γ-likelihood function, a majorize-minimization (MM) algorithm is constructed. The proposed algorithm is guaranteed to decrease the negative γ-likelihood function at each iteration. We also derive asymptotic normality of the corresponding estimator together with a simple consistent estimator of the asymptotic covariance matrix, so that we can readily construct approximate confidence sets. A Monte Carlo simulation is conducted to investigate the effectiveness of the proposed procedure, and a real data analysis illustrates its usefulness.
40

Eyheramendy, Susana, Felipe Elorrieta, and Wilfredo Palma. "An autoregressive model for irregular time series of variable stars." Proceedings of the International Astronomical Union 12, S325 (October 2016): 259–62. http://dx.doi.org/10.1017/s1743921317000448.

Abstract:
This paper discusses an autoregressive model for the analysis of irregularly observed time series. The properties of this model are studied and a maximum likelihood estimation procedure is proposed. The finite sample performance of this estimator is assessed by Monte Carlo simulations, which show accurate estimates. We apply this model to the residuals after fitting a harmonic model to light-curves from periodic variable stars from the Optical Gravitational Lensing Experiment (OGLE) and Hipparcos surveys, showing that the model can identify time dependency structure that remains in the residuals when, for example, the period of the light-curves was not properly estimated.
41

Cordue, P. L., and R. I. C. C. Francis. "Accuracy and Choice in Risk Estimation for Fisheries Assessment." Canadian Journal of Fisheries and Aquatic Sciences 51, no. 4 (April 1, 1994): 817–29. http://dx.doi.org/10.1139/f94-080.

Abstract:
Risk analysis has been used recently to enhance scientific advice to managers by providing estimates of risk to the fishery of different management strategies. However, little consideration has been given to the accuracy of these estimates. We present a reformulation and generalization of the risk analysis procedure of Francis (1992. Can. J. Fish. Aquat. Sci. 49: 922–930) and use simulation methods to examine the properties of a number of alternative risk estimators for two of New Zealand's main fisheries. It is shown that the choice of estimator can strongly affect the final estimates of risk and that the risk estimators can be alarmingly inaccurate. The accuracy of estimates is also shown to vary according to the type of risk being estimated, so analysts may improve the accuracy of their estimates by choosing the type of risk they estimate.
42

Abbasi, Azhar Mehmood, and Muhammad Yousaf Shad. "Sensitive proportion in ranked set sampling." PLOS ONE 16, no. 8 (August 31, 2021): e0256699. http://dx.doi.org/10.1371/journal.pone.0256699.

Abstract:
This paper considers concomitant-based ranked set sampling (CRSS) for estimation of the sensitive proportion. It is shown that the CRSS procedure provides an unbiased estimator of the population sensitive proportion that is always more precise than the corresponding sample sensitive proportion (Warner, 1965) based on simple random sampling (SRS), without increasing sampling cost. Additionally, a new ratio-type estimator is introduced using the CRSS protocol, preserving the respondent's confidentiality through a randomizing device. The numerical results for these estimators are obtained using a numerical integration technique. An application to real data is also given to support the methods.
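The Warner (1965) randomized-response estimator referenced here has a simple unbiased form under SRS; a sketch (the device probability p below is an illustrative value):

```python
def warner_estimate(yes_proportion, p):
    # Warner randomized-response estimator of the sensitive proportion:
    # each respondent answers the sensitive statement with probability p
    # and its negation with probability 1 - p (p != 0.5).
    return (yes_proportion - (1 - p)) / (2 * p - 1)

# Device with p = 0.7 and an observed "yes" proportion of 0.5:
print(round(warner_estimate(0.5, 0.7), 6))  # (0.5 - 0.3) / 0.4 = 0.5
```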
43

Sriliana, Idhia, I. Nyoman Budiantara, and Vita Ratnasari. "A Truncated Spline and Local Linear Mixed Estimator in Nonparametric Regression for Longitudinal Data and Its Application." Symmetry 14, no. 12 (December 19, 2022): 2687. http://dx.doi.org/10.3390/sym14122687.

Abstract:
Longitudinal data modeling is widely carried out using parametric methods. However, when the parametric model is misspecified, the obtained estimator might be severely biased and lead to erroneous conclusions. In this study, we propose a new estimation method for longitudinal data modeling using a mixed estimator in nonparametric regression. The objective of this study was to estimate the nonparametric regression curve for longitudinal data using two combined estimators: truncated spline and local linear. The weighted least square method with a two-stage estimation procedure was used to obtain the regression curve estimation of the proposed model. To account for within-subject correlations in the longitudinal data, a symmetric weight matrix was given in the regression curve estimation. The best model was determined by minimizing the generalized cross-validation value. Furthermore, an application to a longitudinal dataset of the poverty gap index in Bengkulu Province, Indonesia, was conducted to illustrate the performance of the proposed mixed estimator. Compared to the single estimator, the truncated spline and local linear mixed estimator had better performance in longitudinal data modeling based on the GCV value. Additionally, the empirical results of the best model indicated that the proposed model could explain the data variation exceptionally well.
44

Lian, Heng, and Hua Liang. "GENERALIZED ADDITIVE PARTIAL LINEAR MODELS WITH HIGH-DIMENSIONAL COVARIATES." Econometric Theory 29, no. 6 (August 7, 2013): 1136–61. http://dx.doi.org/10.1017/s0266466613000029.

Abstract:
This paper studies generalized additive partial linear models with high-dimensional covariates. We are interested in which components (including parametric and nonparametric components) are nonzero. The additive nonparametric functions are approximated by polynomial splines. We propose a doubly penalized procedure to obtain an initial estimate and then use the adaptive least absolute shrinkage and selection operator to identify nonzero components and to obtain the final selection and estimation results. We establish selection and estimation consistency of the estimator in addition to asymptotic normality for the estimator of the parametric components by employing a penalized quasi-likelihood. Thus our estimator is shown to have an asymptotic oracle property. Monte Carlo simulations show that the proposed procedure works well with moderate sample sizes.
45

Jaśko, Przemysław, and Daniel Kosiorowski. "Conditional Covariance Prediction in Portfolio Analysis Using MCD and PCS Robust Multivariate Scatter Estimators." Przegląd Statystyczny 63, no. 2 (June 30, 2016): 149–72. http://dx.doi.org/10.5604/01.3001.0014.1157.

Abstract:
In this paper we compare two robust estimators of multivariate scatter in the context of their applications in economics: the minimum covariance determinant estimator (MCD) and a recent proposal, the projection congruent subset estimator (PCS), which minimizes an incongruence criterion. Both estimators are affine equivariant and have high breakdown points. In a decision process we often make use of multivariate scatter estimators, and incorrect estimates may result in financial losses. We analyze the estimators using simulation studies and empirical examples related to issues of portfolio building. In the empirical analysis we use them in the procedure of setting weights for minimum variance and equal risk contribution (ERC) portfolios.
46

Emvalomatis, Grigorios, Spiro E. Stefanou, and Alfons Oude Lansink. "Estimation of Stochastic Frontier Models with Fixed Effects through Monte Carlo Maximum Likelihood." Journal of Probability and Statistics 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/568457.

Abstract:
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are updated using information from the data and are robust to possible correlation of the group-specific constant terms with the explanatory variables. Monte Carlo experiments are performed in the specific context of stochastic frontier models to examine and compare the sampling properties of the proposed estimator with those of the random-effects and correlated random-effects estimators. The results suggest that the estimator is unbiased even in short panels. An application to a cross-country panel of EU manufacturing industries is presented as well. The proposed estimator produces a distribution of efficiency scores suggesting that these industries are highly efficient, while the other estimators suggest much poorer performance.
47

He, Ruixuan, Xiaoran Liu, Kai Mei, Guangwei Gong, Jun Xiong, and Jibo Wei. "Iterative Joint Estimation Procedure of Channel and PDP for OFDM Systems." Entropy 24, no. 11 (November 15, 2022): 1664. http://dx.doi.org/10.3390/e24111664.

Abstract:
The power-delay profile (PDP) estimation of wireless channels is an important step to generate a channel correlation matrix for channel linear minimum mean square error (LMMSE) estimation. Estimated channel frequency response can be used to obtain time dispersion characteristics that can be exploited by adaptive orthogonal frequency division multiplexing (OFDM) systems. In this paper, a joint estimator for PDP and LMMSE channel estimation is proposed. For LMMSE channel estimation, we apply a candidate set of frequency-domain channel correlation functions (CCF) and select the one that best matches the current channel to construct the channel correlation matrix. The initial candidate set is generated based on the traditional CCF calculation method for different scenarios. Then, the result of channel estimation is used as an input for the PDP estimation whereas the estimated PDP is further used to update the candidate channel correlation matrix. The enhancement of LMMSE channel estimation and PDP estimation can be achieved by the iterative joint estimation procedure. Analysis and simulation results show that in different communication scenarios, the PDP estimation error of the proposed method can approach the Cramér–Rao lower bound (CRLB) after a finite number of iterations. Moreover, the mean square error of channel estimation is close to the performance of accurate PDP-assisted LMMSE.
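A simplified sketch of the LMMSE step at the core of the procedure, assuming for illustration a diagonal channel correlation matrix (a real implementation uses the full PDP-derived correlation matrix):

```python
def lmmse_shrink(h_ls, corr_diag, noise_var):
    # Simplified LMMSE channel estimate with a diagonal correlation
    # matrix: h_hat = R (R + sigma^2 I)^{-1} h_LS, done per coefficient.
    return [r / (r + noise_var) * h for r, h in zip(corr_diag, h_ls)]

# Least-squares estimates of two taps, unit correlations, unit noise:
print(lmmse_shrink([2.0, 4.0], [1.0, 1.0], 1.0))  # [1.0, 2.0]
```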
48

Kitamura, Yuichi, and Peter C. B. Phillips. "Efficient IV Estimation in Nonstationary Regression." Econometric Theory 11, no. 5 (October 1995): 1095–130. http://dx.doi.org/10.1017/s026646660000997x.

Full text
Abstract:
A limit theory for instrumental variables (IV) estimation that allows for possibly nonstationary processes was developed in Kitamura and Phillips (1992, Fully Modified IV, GIVE, and GMM Estimation with Possibly Non-stationary Regressors and Instruments, mimeo, Yale University). This theory covers a case that is important for practitioners, where the nonstationarity of the regressors may not be of full rank, and shows that the fully modified (FM) regression procedure of Phillips and Hansen (1990) is still applicable. FM versions of the generalized method of moments (GMM) estimator and the generalized instrumental variables estimator (GIVE) were also developed, and these estimators (FM-GMM and FM-GIVE) were designed specifically to take advantage of potential stationarity in the regressors (or unknown linear combinations of them). These estimators were shown to deliver efficiency gains over FM-IV in the estimation of the stationary components of a model. This paper provides an overview of the FM-IV, FM-GMM, and FM-GIVE procedures and investigates the small sample properties of these estimation procedures by simulations. We compare the following five estimation methods: ordinary least squares, crude (conventional) IV, FM-IV, FM-GMM, and FM-GIVE. Our findings are as follows: (i) In terms of overall performance in both stationary and nonstationary cases, FM-IV is more concentrated and better centered than OLS and crude IV, though it has a higher root mean square error than crude IV due to occasional outliers. (ii) Among FM-IV, FM-GMM, and FM-GIVE, (a) when applied to the stationary coefficients, FM-GIVE generally outperforms FM-IV and FM-GMM by a wide margin, whereas the difference between the latter two is quite small when the AR roots of the stationary processes are rather large; and (b) when applied to the nonstationary coefficients, the three estimators are numerically very close. The performance of the FM-GIVE estimator is generally very encouraging.
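The crude (conventional) IV baseline in the comparison above is ordinary two-stage least squares, β̂ = (X'P_Z X)⁻¹ X'P_Z y with P_Z the projection onto the instrument space. A minimal sketch of that baseline only (the fully modified estimators additionally require kernel estimates of long-run covariances, which are beyond this snippet):

```python
import numpy as np

def iv_2sls(y, X, Z):
    """Two-stage least squares: beta = (X' P_Z X)^{-1} X' P_Z y,
    where P_Z = Z (Z'Z)^{-1} Z' projects onto the instrument space."""
    Pz_X = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)  # first stage: fitted X
    return np.linalg.solve(Pz_X.T @ X, Pz_X.T @ y)  # second stage
```

With Z = X this collapses to ordinary least squares, which is why OLS and crude IV form natural benchmarks for the FM variants.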
49

Mishra, Sandeep. "AN EFFICIENT COMPROMISED IMPUTATION METHOD FOR ESTIMATING POPULATION MEAN." International Journal of Engineering Technologies and Management Research 9, no. 9 (September 5, 2022): 1–16. http://dx.doi.org/10.29121/ijetmr.v9.i9.2022.1216.

Full text
Abstract:
This paper proposes a modified ratio-product-exponential imputation procedure to deal with missing data when estimating a finite population mean under simple random sampling without replacement. The bias and mean squared error of the proposed estimator are obtained to the first degree of approximation. We derive conditions on the parameters under which the proposed estimator has smaller mean squared error than the sample mean, ratio, and product estimators. An empirical study with real data shows that the proposed estimator outperforms the traditional estimators.
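Classic ratio imputation, one of the components such compromised estimators combine with product and exponential forms, replaces a missing y_i by r̂·x_i, with r̂ estimated from respondents. A minimal sketch of that standard component only (not the paper's proposed estimator; names are illustrative):

```python
import numpy as np

def ratio_imputed_mean(y, x):
    """Mean of y after classic ratio imputation: a missing y_i is
    replaced by r_hat * x_i, where r_hat is the respondent-mean ratio.
    y : array with np.nan marking nonrespondents
    x : fully observed auxiliary variable"""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    resp = ~np.isnan(y)
    r_hat = y[resp].mean() / x[resp].mean()  # ratio from respondents only
    y_filled = np.where(resp, y, r_hat * x)
    return y_filled.mean()
```

Ratio imputation works well when y is roughly proportional to x; the product and exponential variants target other y–x relationships, which is what motivates combining them.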
50

Hacker, Ronald B., Michael R. Constable, and Gavin J. Melville. "A step-point transect technique for estimation of kangaroo populations in sheep-grazed paddocks." Rangeland Journal 24, no. 2 (2002): 326. http://dx.doi.org/10.1071/rj02019.

Full text
Abstract:
This paper presents an empirical and theoretical evaluation of a step-point transect procedure for estimating the relative and absolute abundance of kangaroos in sheep-grazed rangeland paddocks. The method assumes that the proportion of kangaroos in the total (sheep + kangaroo) population can be estimated from their proportional representation in the dung. The actual population of kangaroos can then be estimated if the population of sheep is known. Proportional representation in the dung is estimated by the probability that dung of a given species will lie closest to a 'random' point. For operational simplicity, sample points are not strictly random but are located systematically along walked transects. The procedure was applied in seven paddocks spread throughout the Western Division of NSW and south-west Queensland where actual kangaroo and sheep populations were determined by ground survey and from station records, respectively. Dung was also collected from sample transects to allow comparison of step-point population estimates with corresponding estimates based on dung weight or pellet counts. A theoretical estimator of the kangaroo population based on step-point data was also derived and compared with actual populations estimated from ground surveys. Results indicate that the step-point transect procedure should produce acceptable estimates of both relative and absolute kangaroo populations in sheep-grazed rangeland paddocks. The appropriate equations, and minimum sample sizes, are provided. The step-point transect technique has the practical advantage that it avoids the tedious sampling procedures inherent in count- or weight-based methods. Further, in contrast to these methods, it may be expected to become both more reliable and less time consuming as the density of dung increases, thus providing the greatest operational advantage under circumstances where reliable estimation is most required.
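Under the abstract's core assumption, that a species' proportional representation in the dung matches its proportional representation in the total population, a naive point estimate is N_kangaroo ≈ N_sheep · p/(1 − p), where p is the share of sample points whose nearest dung is kangaroo. The sketch below is a deliberately simplified illustration of that idea only: the paper derives its own theoretical estimator, and the `dung_rate_ratio` argument here is a hypothetical knob for unequal deposition rates, not something taken from the paper.

```python
def kangaroo_estimate(nearest_is_kangaroo, n_sheep, dung_rate_ratio=1.0):
    """Naive step-point estimate of kangaroo numbers.

    nearest_is_kangaroo : list of booleans, one per sample point, True if
                          the dung nearest that point is kangaroo
    n_sheep             : known sheep population of the paddock
    dung_rate_ratio     : assumed kangaroo-to-sheep dung deposition ratio
    """
    p = sum(nearest_is_kangaroo) / len(nearest_is_kangaroo)
    return n_sheep * (p / (1 - p)) / dung_rate_ratio
```

For example, if kangaroo dung is nearest at half the sample points (p = 0.5), the naive estimate equals the sheep population.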