Academic literature on the topic 'And shrinkage estimator (SE)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'And shrinkage estimator (SE).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "And shrinkage estimator (SE)"

1

Zakerzadeh, Hojatollah, Ali Akbar Jafari, and Mahdieh Karimi. "Optimal Shrinkage Estimations for the Parameters of Exponential Distribution Based on Record Values." Revista Colombiana de Estadística 39, no. 1 (January 18, 2016): 33–44. http://dx.doi.org/10.15446/rce.v39n1.55137.

Full text
Abstract:
This paper studies shrinkage estimation after a preliminary test for the parameters of the exponential distribution based on record values. The optimal value of the shrinkage coefficient is also obtained under the minimax regret criterion. The maximum likelihood, pre-test, and shrinkage estimators are compared using a simulation study. For the scale parameter, the optimal shrinkage estimator is better than the maximum likelihood estimator in all cases, and when the prior guess is near the true value, the pre-test estimator is better than the shrinkage estimator. For the location parameter, the optimal shrinkage estimator is better than the maximum likelihood estimator when the prior guess is close to the true value. All estimators are illustrated by a numerical example.
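The shrink-toward-a-guess idea behind this entry (and several later ones) can be sketched generically. This is an illustrative rule, not the authors' optimal estimator; the names theta_0, k, and the test inputs are placeholders:

```python
def shrinkage_estimate(theta_hat, theta_0, k):
    """Linear shrinkage of an estimate theta_hat toward a prior guess theta_0.

    The shrinkage coefficient k lies in [0, 1]: k = 0 keeps the data-based
    estimate, k = 1 returns the guess unchanged.
    """
    if not 0.0 <= k <= 1.0:
        raise ValueError("shrinkage coefficient k must lie in [0, 1]")
    return k * theta_0 + (1.0 - k) * theta_hat


def pretest_shrinkage_estimate(theta_hat, theta_0, test_stat, critical_value, k):
    """Shrink only when a preliminary test fails to reject theta = theta_0."""
    if test_stat <= critical_value:   # fail to reject: blend in the guess
        return shrinkage_estimate(theta_hat, theta_0, k)
    return theta_hat                  # reject: keep the data-based estimate
```

The papers above differ mainly in how k and the critical value are chosen (here, e.g., by minimax regret).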
APA, Harvard, Vancouver, ISO, and other styles
2

Saleh, A. K. Md Ehsanes, M. Arashi, M. Norouzirad, and B. M. Golam Kibria. "On shrinkage and selection: ANOVA model." Journal of Statistical Research 51, no. 2 (February 1, 2018): 165–91. http://dx.doi.org/10.47302/jsr.2017510205.

Full text
Abstract:
This paper considers the estimation of the parameters of an ANOVA model when sparsity is suspected. Accordingly, we consider the least squares estimator (LSE), restricted LSE, preliminary test and Stein-type estimators, together with three penalty estimators, namely the ridge estimator, subset selection rules (hard threshold estimator), and the LASSO (soft threshold estimator). We compare and contrast the L2-risk of all the estimators with the lower bound of the L2-risk of LASSO in a family of diagonal projection schemes, which is also the lower bound of the exact L2-risk of LASSO. The result of this comparison is that none of the LASSO, LSE, preliminary test, and Stein-type estimators uniformly outperforms the others. However, when the model is sparse, LASSO outperforms all estimators except the ridge estimator, since LASSO and ridge are L2-risk equivalent under sparsity. We also find that LASSO and the restricted LSE are L2-risk equivalent, and both outperform all estimators (except ridge) depending on the dimension of sparsity. Finally, the ridge estimator outperforms all estimators uniformly. Our findings are based on the L2-risk of the estimators and the lower bound of the risk of LASSO, together with tables and graphical displays of efficiency, and not on simulation.
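The three penalty estimators named in this abstract act on a coefficient in three characteristic ways, which can be sketched (for an orthonormal design, with lam a generic penalty level) as:

```python
def hard_threshold(beta, lam):
    """Subset selection: keep a coefficient only if it exceeds the threshold."""
    return beta if abs(beta) > lam else 0.0


def soft_threshold(beta, lam):
    """LASSO: move the coefficient toward zero by lam, zeroing small ones."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0


def ridge_shrink(beta, lam):
    """Ridge: proportional shrinkage; coefficients never become exactly zero."""
    return beta / (1.0 + lam)
```

Soft and hard thresholding set small coefficients exactly to zero (hence "selection"), whereas ridge only shrinks; this is the distinction the paper's risk comparison turns on.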
3

Hansen, Bruce E. "SHRINKAGE EFFICIENCY BOUNDS." Econometric Theory 31, no. 4 (October 2, 2014): 860–79. http://dx.doi.org/10.1017/s0266466614000693.

Full text
Abstract:
This paper is an extension of Magnus (2002, Econometrics Journal 5, 225–236) to multiple dimensions. We consider estimation of a multivariate normal mean under sum-of-squared-error loss. We construct the efficiency bound (the lowest achievable risk) for minimax shrinkage estimation in the class of minimax orthogonally invariant estimators satisfying the sufficient conditions of Efron and Morris (1976, Annals of Statistics 4, 11–21). This allows us to compare the regret of existing orthogonally invariant shrinkage estimators. We also construct a new shrinkage estimator which achieves substantially lower maximum regret than existing estimators.
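For context, a standard positive-part James-Stein rule for a multivariate normal mean, the kind of shrinkage estimator whose regret such efficiency bounds benchmark (this is the textbook estimator, not the new one constructed in the paper), looks like:

```python
def james_stein_positive_part(x, sigma2=1.0):
    """Positive-part James-Stein estimator of a p-variate normal mean
    (p >= 3), shrinking the observation vector x toward the origin."""
    p = len(x)
    if p < 3:
        raise ValueError("James-Stein shrinkage requires dimension p >= 3")
    norm2 = sum(xi * xi for xi in x)
    # Shrinkage factor, clipped at zero (the "positive part" correction).
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return [factor * xi for xi in x]
```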
4

Afshari, Mahmoud, and Hamid Karamikabir. "The Location Parameter Estimation of Spherically Distributions with Known Covariance Matrices." Statistics, Optimization & Information Computing 8, no. 2 (February 20, 2020): 499–506. http://dx.doi.org/10.19139/soic-2310-5070-710.

Full text
Abstract:
This paper presents shrinkage estimators of the location parameter vector for spherically symmetric distributions. We suppose that the mean vector satisfies a non-negativity constraint and that the components of the diagonal covariance matrix are known. We compare the proposed estimator with the natural estimator using the risk function, and show that when the covariance matrices are known, under the balanced error loss function, the shrinkage estimator has smaller risk than the natural estimator. Simulation results are provided to examine the shrinkage estimators.
5

Prakash, Gyan. "Some Estimators for the Pareto Distribution." Journal of Scientific Research 1, no. 2 (April 22, 2009): 236–47. http://dx.doi.org/10.3329/jsr.v1i2.1642.

Full text
Abstract:
We derive some shrinkage test-estimators and Bayes estimators for the shape parameter of a Pareto distribution under the general entropy loss (GEL) function. Their properties are studied in terms of relative efficiency, and choices of the shrinkage factor are suggested. Keywords: General entropy loss; Shrinkage factor; Shrinkage test-estimator; Bayes estimator; Relative efficiency.
6

Kambo, N. S., B. R. Handa, and Z. A. Al-Hemyari. "On Huntsberger type shrinkage estimator." Communications in Statistics - Theory and Methods 21, no. 3 (January 1992): 823–41. http://dx.doi.org/10.1080/03610929208830817.

Full text
7

Hassan, N. J., J. Mahdi Hadad, and A. Hawad Nasar. "Bayesian Shrinkage Estimator of Burr XII Distribution." International Journal of Mathematics and Mathematical Sciences 2020 (June 22, 2020): 1–6. http://dx.doi.org/10.1155/2020/7953098.

Full text
Abstract:
In this paper, we derive the generalized Bayesian shrinkage estimator of the parameter of the Burr XII distribution under three loss functions: squared error, LINEX, and weighted balanced loss. We therefore obtain three generalized Bayesian shrinkage estimators (GBSEs). In this approach, we find the posterior risk function (PRF) of the generalized Bayesian shrinkage estimator (GBSE) with respect to each loss function. The constant of the GBSE is computed by minimizing the PRF. In special cases, we derive two new GBSEs under the weighted loss function. Finally, we give our conclusion.
8

OKHRIN, YAREMA, and WOLFGANG SCHMID. "ESTIMATION OF OPTIMAL PORTFOLIO WEIGHTS." International Journal of Theoretical and Applied Finance 11, no. 03 (May 2008): 249–76. http://dx.doi.org/10.1142/s0219024908004798.

Full text
Abstract:
The paper discusses finite sample properties of optimal portfolio weights, estimated expected portfolio return, and portfolio variance. The first estimator assumes the asset returns to be independent, while the second takes them to be predictable using a linear regression model. The third and the fourth approaches are based on a shrinkage technique and a Bayesian methodology, respectively. In the first two cases, we establish the moments of the weights and the portfolio returns. A consistent estimator of the shrinkage parameter for the third estimator is then derived. The advantages of the shrinkage approach are assessed in an empirical study.
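A common shrinkage technique in this portfolio setting is to pull the sample covariance matrix toward a simple target before inverting it. A minimal sketch, assuming a convex combination with a scaled-identity target (the paper's consistent estimator of the shrinkage parameter is more involved):

```python
def shrink_covariance(S, delta):
    """Convex shrinkage of a sample covariance matrix S toward a scaled
    identity target: (1 - delta) * S + delta * mu * I, where mu is the
    average sample variance and delta in [0, 1] is the shrinkage intensity."""
    p = len(S)
    mu = sum(S[i][i] for i in range(p)) / p
    return [[(1.0 - delta) * S[i][j] + (delta * mu if i == j else 0.0)
             for j in range(p)] for i in range(p)]
```

The shrunk matrix is better conditioned than the raw sample covariance, which stabilizes the portfolio weights computed from its inverse.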
9

Carriero, Andrea, George Kapetanios, and Massimiliano Marcellino. "A Shrinkage Instrumental Variable Estimator for Large Datasets." Articles 91, no. 1-2 (May 20, 2016): 67–87. http://dx.doi.org/10.7202/1036914ar.

Full text
Abstract:
This paper proposes and discusses an instrumental variable estimator that can be of particular relevance when many instruments are available and/or the number of instruments is large relative to the total number of observations. Intuition and recent work (see, e.g., Hahn, 2002) suggest that parsimonious devices used in the construction of the final instruments may provide effective estimation strategies. Shrinkage is a well known approach that promotes parsimony. We consider a new shrinkage 2SLS estimator. We derive a consistency result for this estimator under general conditions, and via Monte Carlo simulation show that this estimator has good potential for inference in small samples.
10

Al-Zahrani, Bander. "On the Estimation of Reliability of Weighted Weibull Distribution: A Comparative Study." International Journal of Statistics and Probability 5, no. 4 (June 11, 2016): 1. http://dx.doi.org/10.5539/ijsp.v5n4p1.

Full text
Abstract:
The paper gives a description of estimation for the reliability function of the weighted Weibull distribution. The maximum likelihood estimators of the unknown parameters are obtained. Nonparametric methods such as the empirical method, the kernel density estimator, and a modified shrinkage estimator are provided. The Markov chain Monte Carlo method is used to compute the Bayes estimators assuming gamma and Jeffreys priors. The performance of the maximum likelihood, nonparametric, and Bayesian estimators is assessed through a real data set.
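Two of the nonparametric ingredients mentioned here, the empirical survival (reliability) function and a kernel density estimator, can be sketched in a few lines. This is generic, not the paper's modified shrinkage estimator, and the bandwidth h is a free choice:

```python
import math


def empirical_reliability(lifetimes, t):
    """Empirical survival function R(t) = P(T > t)."""
    return sum(1 for x in lifetimes if x > t) / len(lifetimes)


def gaussian_kde(data, h):
    """Kernel density estimate with a Gaussian kernel and bandwidth h."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))

    def density(x):
        return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

    return density
```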

Dissertations / Theses on the topic "And shrinkage estimator (SE)"

1

Hoque, Zahirul. "Improved estimation for linear models under different loss functions." University of Southern Queensland, Faculty of Sciences, 2004. http://eprints.usq.edu.au/archive/00001438/.

Full text
Abstract:
This thesis investigates improved estimators of the parameters of the linear regression models with normal errors, under sample and non-sample prior information about the value of the parameters. The estimators considered are the unrestricted estimator (UE), restricted estimator (RE), shrinkage restricted estimator (SRE), preliminary test estimator (PTE), shrinkage preliminary test estimator (SPTE), and shrinkage estimator (SE). The performances of the estimators are investigated with respect to bias, squared error and linex loss. For the analyses of the risk functions of the estimators, analytical, graphical and numerical procedures are adopted. In Part I the SRE, SPTE and SE of the slope and intercept parameters of the simple linear regression model are considered. The performances of the estimators are investigated with respect to their biases and mean square errors. The efficiencies of the SRE, SPTE and SE relative to the UE are obtained. It is revealed that under certain conditions, SE outperforms the other estimators considered in this thesis. In Part II in addition to the likelihood ratio (LR) test, the Wald (W) and Lagrange multiplier (LM) tests are used to define the SPTE and SE of the parameter vector of the multiple linear regression model with normal errors. Moreover, the modified and size-corrected W, LR and LM tests are used in the definition of SPTE. It is revealed that a great deal of conflict exists among the quadratic biases (QB) and quadratic risks (QR) of the SPTEs under the three original tests. The use of the modified tests reduces the conflict among the QRs, but not among the QBs. However, the use of the size-corrected tests in the definition of the SPTE almost eliminates the conflict among both QBs and QRs. It is also revealed that there is a great deal of conflict among the performances of the SEs when the three original tests are used as the preliminary test statistics. 
With respect to quadratic bias, the W test statistic based SE outperforms those based on the LR and LM test statistics. However, with respect to the QR criterion, the LM test statistic based SE outperforms the W and LR test statistics based SEs under certain conditions. In Part III the performance of the PTE of the slope parameter of the simple linear regression model is investigated under the linex loss function. This is motivated by increasing criticism of the squared error loss function for its inappropriateness in many real-life situations where underestimation of a parameter is more serious than its overestimation, or vice versa. It is revealed that under the linex loss function the PTE outperforms the UE if the non-sample prior information about the value of the parameter is not too far from its true value. Like the linex loss function, the risk function of the PTE is also asymmetric. However, if the magnitude of the scale parameter of the linex loss is very small, the risk of the PTE is nearly symmetric.
2

Mahdi, Tahir Naweed. "Shrinkage estimation in prediction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq30515.pdf.

Full text
3

Kim, Tae-Hwan. "The shrinkage least absolute deviation estimator in large samples and its application to the Treynor-Black model /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9901433.

Full text
4

Chan, Tsz-hin, and 陳子軒. "Hybrid bootstrap procedures for shrinkage-type estimators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48521826.

Full text
Abstract:
In statistical inference, one is often interested in estimating the distribution of a root, which is a function of the data and the parameters only. Knowledge of the distribution of a root is useful for inference problems such as hypothesis testing and the construction of a confidence set. Shrinkage-type estimators have become popular in statistical inference due to their smaller mean squared errors. In this thesis, the performance of different bootstrap methods is investigated for estimating the distributions of roots which are constructed based on shrinkage estimators. Focus is on two shrinkage estimation problems, namely the James-Stein estimation and the model selection problem in simple linear regression. A hybrid bootstrap procedure and a bootstrap test method are proposed to estimate the distributions of the roots of interest. In the two shrinkage problems, the asymptotic errors of the traditional n-out-of-n bootstrap, m-out-of-n bootstrap and the proposed methods are derived under a moving parameter framework. The problem of the lack of uniform consistency of the n-out-of-n and the m-out-of-n bootstraps is exposed. It is shown that the proposed methods have better overall performance, in the sense that they yield improved convergence rates over almost the whole range of possible values of the underlying parameters. Simulation studies are carried out to illustrate the theoretical findings.
M.Phil. thesis, Statistics and Actuarial Science.
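The m-out-of-n idea discussed in this thesis, resampling fewer than n observations to restore consistency for non-regular (e.g. shrinkage-based) roots, can be sketched as follows; the choice of m and of the statistic is whatever the application calls for:

```python
import random


def m_out_of_n_bootstrap(data, m, statistic, n_boot=1000, seed=0):
    """Resample m <= n observations with replacement and evaluate the
    statistic on each resample; m = len(data) gives the usual bootstrap."""
    rng = random.Random(seed)
    return [statistic([rng.choice(data) for _ in range(m)])
            for _ in range(n_boot)]
```

The hybrid procedures studied in the thesis go further, switching between resample sizes to improve on both the n-out-of-n and m-out-of-n schemes.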
5

Remenyi, Norbert. "Contributions to Bayesian wavelet shrinkage." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45898.

Full text
Abstract:
This thesis provides contributions to research in Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool to describe phenomena rapidly changing in time, and wavelet-based modeling has become a standard technique in many areas of statistics, and more broadly, in sciences and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on the existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models. The thesis consists of an overview chapter and four main topics. In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review consisting of almost 100 references. The main focus of the overview chapter is on nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods which employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These includes new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are discussed. In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but also formulate a fully Bayesian hierarchical model in the wavelet domain accounting for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution to the Bayes estimator does not exist, the procedure is computational, in which the posterior mean is computed via MCMC simulations. 
We show how to efficiently develop a Gibbs sampling algorithm for the proposed model. The developed procedure is fully Bayesian, is adaptive to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. Application of the method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases. In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model using a double Weibull prior. The interesting feature is that in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other based on the larger posterior mode; and we show how to calculate these estimators efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture prior and an empirical Bayes setting of hyperparameters; this is demonstrated by simulations on standard test functions. An application to a real-word data set is also considered. In Chapter 4, we propose a wavelet shrinkage method based on a neighborhood of wavelet coefficients, which includes two neighboring coefficients and a parental coefficient. The methodology is called Lambda-neighborhood wavelet shrinkage, motivated by the shape of the considered neighborhood. We propose a Bayesian hierarchical model using a contaminated exponential prior on the total mean energy in the Lambda-neighborhood. The hyperparameters in the model are estimated by the empirical Bayes method, and the posterior mean, median, and Bayes factor are obtained and used in the estimation of the total mean energy. 
Shrinkage of the neighboring coefficients is based on the ratio of the estimated and observed energy. The proposed methodology is comparable and often superior to several established wavelet denoising methods that utilize neighboring information, which is demonstrated by extensive simulations. An application to a real-world data set from inductance plethysmography is considered, and an extension to image denoising is discussed. In Chapter 5, we propose a wavelet-based methodology for estimation and variable selection in partially linear models. The inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. A hierarchical Bayes model is formulated on the parameters of this representation, where the estimation and variable selection is performed by a Gibbs sampling procedure. For both the parametric and nonparametric part of the model we are using point-mass-at-zero contamination priors with a double exponential spread distribution. In this sense we extend the model of Chapter 2 to partially linear models. Only a few papers in the area of partially linear wavelet models exist, and we show that the proposed methodology is often superior to the existing methods with respect to the task of estimating model parameters. Moreover, the method is able to perform Bayesian variable selection by a stochastic search for the parametric part of the model.
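The basic wavelet-shrinkage cycle underlying these chapters (transform, shrink the detail coefficients, invert) can be sketched with a one-level Haar transform and soft thresholding; the thesis's Bayesian shrinkage rules replace the simple threshold used here:

```python
def haar_step(x):
    """One level of the orthonormal Haar wavelet transform (len(x) even)."""
    s = 2.0 ** -0.5
    approx = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return approx, detail


def inverse_haar_step(approx, detail):
    """Invert one Haar level, interleaving the reconstructed pairs."""
    s = 2.0 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out


def soft(w, lam):
    """Soft thresholding: shrink w toward zero by lam."""
    sign = 1.0 if w > 0 else -1.0 if w < 0 else 0.0
    return sign * max(abs(w) - lam, 0.0)


def wavelet_denoise(x, lam):
    """Transform, soft-threshold the detail coefficients, invert."""
    approx, detail = haar_step(x)
    return inverse_haar_step(approx, [soft(d, lam) for d in detail])
```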
6

Vumbukani, Bokang C. "Comparison of ridge and other shrinkage estimation techniques." Master's thesis, University of Cape Town, 2006. http://hdl.handle.net/11427/4364.

Full text
Abstract:
Includes bibliographical references.
Shrinkage estimation is an increasingly popular class of biased parameter estimation techniques, vital when the columns of the matrix of independent variables X exhibit dependencies or near dependencies. These dependencies often lead to serious problems in least squares estimation: inflated variances and mean squared errors of estimates, unstable coefficients, imprecision, and improper estimation. Shrinkage methods allow a little bias and in exchange deliver smaller mean squared errors and variances for the biased estimators, compared to those of unbiased estimators. However, shrinkage methods depend on a shrinkage factor, whose estimation depends on unknown values and is often computed from the OLS solution. We argue that the instability of OLS estimates may have an adverse effect on the performance of shrinkage estimators. Hence a new method for estimating the shrinkage factors is proposed and applied to ridge and generalized ridge regression. We propose that the new shrinkage factors should be based on the principal components instead of the unstable OLS estimates.
7

Mergel, Victor. "Divergence loss for shrinkage estimation, prediction and prior selection." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015678.

Full text
8

Ullah, Bashir. "Some contributions to positive part shrinkage estimation in various models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ30263.pdf.

Full text
9

Tang, Tianyuan, and 唐田园. "On uniform consistency of confidence regions based on shrinkage-type estimators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47152035.

Full text
10

Serra, Puertas Jorge. "Shrinkage corrections of sample linear estimators in the small sample size regime." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/404386.

Full text
Abstract:
We are living in a data-deluge era in which the dimensionality of the data gathered by inexpensive sensors is growing at a fast pace, whereas the availability of independent samples of the observed data is limited. Thus, classical statistical inference methods, which rely on the assumption that the sample size is large compared to the observation dimension, suffer a severe performance degradation. Within this context, this thesis focuses on a popular problem in signal processing: the estimation of a parameter observed through a linear model. This inference is commonly based on a linear filtering of the data; for instance, beamforming in array signal processing, where a spatial filter steers the beampattern of the antenna array towards a direction to obtain the signal of interest (SOI). In signal processing the design of the optimal filters relies on the optimization of performance measures such as the mean square error (MSE) and the signal-to-interference-plus-noise ratio (SINR). When the first two moments of the SOI are known, the optimization of the MSE leads to the linear minimum mean square error (LMMSE) estimator. When such statistical information is not available, one may force a no-distortion constraint towards the SOI in the optimization of the MSE, which is equivalent to maximizing the SINR. This leads to the minimum variance distortionless response (MVDR) method. The LMMSE and MVDR are optimal, though unrealizable in general, since they depend on the inverse of the data correlation matrix, which is not known. The common approach to circumvent this problem is to replace it with the inverse of the sample correlation matrix (SCM), leading to the sample LMMSE and sample MVDR. This approach is optimal when the number of available statistical samples tends to infinity for a fixed observation dimension.
This large sample size scenario hardly holds in practice and the sample methods undergo large performance degradations in the small sample size regime, which may be due to short stationarity constraints or to a system with a high observation dimension. The aim of this thesis is to propose corrections of sample estimators, such as the sample LMMSE and MVDR, to circumvent their performance degradation in the small sample size regime. To this end, two powerful tools are used, shrinkage estimation and random matrix theory (RMT). Shrinkage estimation introduces a structure on the filters that forces some corrections in small sample size situations. They improve sample based estimators by optimizing a bias variance tradeoff. As direct optimization of these shrinkage methods leads to unrealizable estimators, then a consistent estimate of these optimal shrinkage estimators is obtained, within the general asymptotics where both the observation dimension and the sample size tend to infinity, but at a fixed rate. That is, RMT is used to obtain consistent estimates within an asymptotic regime that deals naturally with the small sample size. This RMT approach does not require any assumptions about the distribution of the observations. The proposed filters deal directly with the estimation of the SOI, which leads to performance gains compared to related work methods based on optimizing a metric related to the data covariance estimate or proposing rather ad-hoc regularizations of the SCM. Compared to related work methods which also treat directly the estimation of the SOI and which are based on a shrinkage of the SCM, the proposed filter structure is more general. It contemplates corrections of the inverse of the SCM and considers the related work methods as particular cases. This leads to performance gains which are notable when there is a mismatch in the signature vector of the SOI. 
This mismatch and the small sample size are the main sources of degradation of the sample LMMSE and MVDR. Thus, in the last part of this thesis, unlike the previous proposed filters and the related work, we propose a filter which treats directly both sources of degradation.
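The diagonal-loading flavor of SCM shrinkage discussed in this thesis can be sketched as follows; rho and the scaled-identity target are illustrative choices, not the thesis's consistent RMT-based estimates:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]


def mvdr_weights(R, s, rho):
    """MVDR weights w = R^{-1} s / (s^T R^{-1} s) computed from a
    diagonally loaded correlation estimate
    R_reg = (1 - rho) * R + rho * mu * I, with mu the average diagonal."""
    n = len(R)
    mu = sum(R[i][i] for i in range(n)) / n
    R_reg = [[(1.0 - rho) * R[i][j] + (rho * mu if i == j else 0.0)
              for j in range(n)] for i in range(n)]
    u = solve(R_reg, s)                                # R_reg^{-1} s
    denom = sum(si * ui for si, ui in zip(s, u))       # s^T R_reg^{-1} s
    return [ui / denom for ui in u]
```

The returned filter keeps the distortionless constraint w^T s = 1 while the loading term keeps the inversion stable when the SCM is poorly conditioned.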

Books on the topic "And shrinkage estimator (SE)"

1

Fourdrinier, Dominique, William E. Strawderman, and Martin T. Wells. Shrinkage Estimation. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02185-6.

Full text
2

Tsukuma, Hisayuki, and Tatsuya Kubokawa. Shrinkage Estimation for Mean and Covariance Matrices. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1596-5.

3

Saleh, A. K. Md Ehsanes. On shrinkage least squares estimation in a parallelism problem. Ottawa, Ont., Canada: Dept. of Mathematics and Statistics, Carleton University, 1985.

Find full text
4

Improving efficiency by shrinkage: The James-Stein and ridge regression estimators. New York: Marcel Dekker, 1998.

5

Trovant, Mike. A numerical model for the estimation of volumetric shrinkage formation in metals casting. Ottawa: National Library of Canada, 1994.

6

Misiewicz, John M. Extension of aggregation and shrinkage techniques used in the estimation of Marine Corps Officer attrition rates. Monterey, Calif: Naval Postgraduate School, 1989.

7

Read, Robert R. The use of shrinkage techniques in the estimation of attrition rates for large scale manpower models. Monterey, Calif: Naval Postgraduate School, 1988.

8

Dickinson, Charles R. Refinement and extension of shrinkage techniques in loss rate estimation of Marine Corps officer manpower models. Monterey, California: Naval Postgraduate School, 1988.

9

Ottaviano, Victor B. National mechanical estimator. 2nd ed. Lilburn, GA: Fairmont Press, 1996.

10

Koenker, Roger W. On Boscovich's estimator. [Urbana, Ill.]: College of Commerce and Business Administration, University of Illinois at Urbana-Champaign, 1985.


Book chapters on the topic "And shrinkage estimator (SE)"

1

Chaturvedi, Anoop, Suchita Kesarwani, and Ram Chandra. "Simultaneous Prediction Based on Shrinkage Estimator." In Recent Advances in Linear Models and Related Areas, 181–204. Heidelberg: Physica-Verlag HD, 2008. http://dx.doi.org/10.1007/978-3-7908-2064-5_10.

2

Bondar, James V. "How Much Improvement can a Shrinkage Estimator Give?" In Advances in the Statistical Sciences: Foundations of Statistical Inference, 93–103. Dordrecht: Springer Netherlands, 1987. http://dx.doi.org/10.1007/978-94-009-4788-7_9.

3

Jurečková, Jana, and Xavier Milhaud. "Shrinkage of Maximum Likelihood Estimator of Multivariate Location." In Asymptotic Statistics, 303–18. Heidelberg: Physica-Verlag HD, 1994. http://dx.doi.org/10.1007/978-3-642-57984-4_25.

4

Ahmed, S. Ejaz, T. Quadir, and S. Nkurunziza. "Optimal Shrinkage Estimation." In International Encyclopedia of Statistical Science, 1025–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-04898-2_430.

5

Keener, Robert W. "Empirical Bayes and Shrinkage Estimators." In Theoretical Statistics, 205–18. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-93839-4_11.

6

Rudaś, Krzysztof, and Szymon Jaroszewicz. "Shrinkage Estimators for Uplift Regression." In Machine Learning and Knowledge Discovery in Databases, 607–23. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46150-8_36.

7

Ahmed, S. Ejaz, S. Chitsaz, and S. Fallahpour. "Optimal Shrinkage Preliminary Test Estimation." In International Encyclopedia of Statistical Science, 1028–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-04898-2_431.

8

Judge, George G. "Shrinkage-Biased Estimation in Econometrics." In The New Palgrave Dictionary of Economics, 1–8. London: Palgrave Macmillan UK, 2008. http://dx.doi.org/10.1057/978-1-349-95121-5_1923-1.

9

Judge, George G. "Shrinkage-Biased Estimation in Econometrics." In The New Palgrave Dictionary of Economics, 12286–92. London: Palgrave Macmillan UK, 2018. http://dx.doi.org/10.1057/978-1-349-95189-5_1923.

10

Lancewicki, Tomer. "Kernel Matrix Regularization via Shrinkage Estimation." In Advances in Intelligent Systems and Computing, 1292–305. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01177-2_94.


Conference papers on the topic "And shrinkage estimator (SE)"

1

Agarwal, Deepak K. "Shrinkage estimator generalizations of Proximal Support Vector Machines." In the eighth ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2002. http://dx.doi.org/10.1145/775047.775073.

2

Rucci, Alessio, Stefano Tebaldini, and Fabio Rocca. "SKP-shrinkage estimator for SAR multi-baselines applications." In 2010 IEEE Radar Conference. IEEE, 2010. http://dx.doi.org/10.1109/radar.2010.5494531.

3

Pascal, Frederic, and Yacine Chitour. "Shrinkage covariance matrix estimator applied to STAP detection." In 2014 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2014. http://dx.doi.org/10.1109/ssp.2014.6884641.

4

Shenoy, H. Vikram, A. P. Vinod, and Cuntai Guan. "Shrinkage estimator based regularization for EEG motor imagery classification." In 2015 10th International Conference on Information, Communications and Signal Processing (ICICS). IEEE, 2015. http://dx.doi.org/10.1109/icics.2015.7459836.

5

Breloy, Arnaud, Esa Ollila, and Frederic Pascal. "Spectral Shrinkage of Tyler's M-Estimator of Covariance Matrix." In 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). IEEE, 2019. http://dx.doi.org/10.1109/camsap45676.2019.9022652.

6

Srinath, K. Pavan, and Ramji Venkataramanan. "Cluster-seeking shrinkage estimators." In 2016 IEEE International Symposium on Information Theory (ISIT). IEEE, 2016. http://dx.doi.org/10.1109/isit.2016.7541418.

7

Damasceno, Filipe F. R., Marcelo B. A. Veras, Diego P. P. Mesquita, Joao P. P. Gomes, and Carlos E. F. de Brito. "Shrinkage k-Means: A Clustering Algorithm Based on the James-Stein Estimator." In 2016 5th Brazilian Conference on Intelligent Systems (BRACIS). IEEE, 2016. http://dx.doi.org/10.1109/bracis.2016.084.

8

Kontos, Kevin, and Gianluca Bontempi. "An improved shrinkage estimator to infer regulatory networks with Gaussian graphical models." In the 2009 ACM symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1529282.1529448.

9

Chakraborty, Rudrasis, Yifei Xing, Minxuan Duan, and Stella X. Yu. "C-SURE: Shrinkage Estimator and Prototype Classifier for Complex-Valued Deep Learning." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00048.

10

Dimmery, Drew, Eytan Bakshy, and Jasjeet Sekhon. "Shrinkage Estimators in Online Experiments." In KDD '19: The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3292500.3330771.


Reports on the topic "And shrinkage estimator (SE)"

1

Cheng, Xu, Zhipeng Liao, and Frank Schorfheide. Shrinkage Estimation of High-Dimensional Factor Models with Structural Instabilities. Cambridge, MA: National Bureau of Economic Research, January 2014. http://dx.doi.org/10.3386/w19792.

2

Gao, Hong-ye. Choice of Thresholds for Wavelet Shrinkage Estimate of the Spectrum. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada290168.

3

Walsh, Stephen J., and Mark F. Tardiff. Exploration of regularized covariance estimates with analytical shrinkage intensity for producing invertible covariance matrices in high dimensional hyperspectral data. Office of Scientific and Technical Information (OSTI), October 2007. http://dx.doi.org/10.2172/1171912.

4

Doersksen, R. E., and David L. Malmquist. Weld Shrinkage Study. Fort Belvoir, VA: Defense Technical Information Center, December 1993. http://dx.doi.org/10.21236/ada457049.

5

Spencer, J. Brock. Cure shrinkage in casting resins. Office of Scientific and Technical Information (OSTI), February 2015. http://dx.doi.org/10.2172/1170250.

6

Poston, Wendy, George Rogers, Carey Priebe, and Jeffrey Solka. Resistive Grid Kernel Estimator (RGKE). Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada253520.

7

Padgett, W. J. A Nonparametric Quantile Estimator: Computation. Fort Belvoir, VA: Defense Technical Information Center, May 1986. http://dx.doi.org/10.21236/ada169939.

8

Leung, Chuk L., Daniel A. Scola, Christopher D. Simone, and Parag Katijar. Development of Processable, Low Cure Shrinkage Adhesives. Fort Belvoir, VA: Defense Technical Information Center, February 2003. http://dx.doi.org/10.21236/ada411520.

9

Lio, Y. L., and W. J. Padgett. A Generalized Quantile Estimator under Censoring. Fort Belvoir, VA: Defense Technical Information Center, February 1987. http://dx.doi.org/10.21236/ada188280.

10

Brazelton, Sandra. Interactive Time Recursive State Estimator Program. Fort Belvoir, VA: Defense Technical Information Center, May 1985. http://dx.doi.org/10.21236/ada189441.
