Academic literature on the topic 'Local polynomial kernel estimators'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Local polynomial kernel estimators.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Local polynomial kernel estimators"

1

Hiabu, Munir, María Dolores Martínez-Miranda, Jens Perch Nielsen, Jaap Spreeuw, Carsten Tanggaard, and Andrés M. Villegas. "Global Polynomial Kernel Hazard Estimation." Revista Colombiana de Estadística 38, no. 2 (July 15, 2015): 399–411. http://dx.doi.org/10.15446/rce.v38n2.51668.

Full text
Abstract:
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically reduces bias with unchanged variance. A simulation study investigates the finite-sample properties of GPA. The method is tested on local constant and local linear estimators. From the simulation experiment we conclude that the global estimator improves the goodness-of-fit. An especially encouraging result is that the bias-correction works well for small samples, where traditional bias reduction methods have a tendency to fail.
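To make the setting concrete, the local constant kernel hazard estimator that GPA corrects can be sketched as a kernel-weighted ratio of occurrences to exposures. The function names and the synthetic data below are our own illustrative assumptions, not code from the paper:

```python
import numpy as np

def local_constant_hazard(t, grid, occ, expo, h):
    """Local constant kernel hazard estimate at time t: the ratio of
    kernel-smoothed occurrence counts to kernel-smoothed exposures."""
    u = (t - grid) / h
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov kernel
    return (w @ occ) / (w @ expo)

# Synthetic occurrence/exposure data whose true hazard is constant at 0.2:
grid = np.linspace(0.0, 10.0, 101)
expo = np.full(101, 50.0)     # exposure time observed in each grid cell
occ = 0.2 * expo              # expected number of events per cell
est = local_constant_hazard(5.0, grid, occ, expo, h=1.0)
```

Because occ/expo is constant here, the ratio recovers 0.2 essentially exactly; with real counts the estimator is biased, which is the kind of defect the paper's global polynomial adjustment targets.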
APA, Harvard, Vancouver, ISO, and other styles
2

Hedger, Richard D., François Martin, Julian J. Dodson, Daniel Hatin, François Caron, and Fred G. Whoriskey. "The optimized interpolation of fish positions and speeds in an array of fixed acoustic receivers." ICES Journal of Marine Science 65, no. 7 (June 30, 2008): 1248–59. http://dx.doi.org/10.1093/icesjms/fsn109.

Full text
Abstract:
Hedger, R. D., Martin, F., Dodson, J. J., Hatin, D., Caron, F., and Whoriskey, F. G. 2008. The optimized interpolation of fish positions and speeds in an array of fixed acoustic receivers. – ICES Journal of Marine Science, 65: 1248–1259. The principal method for interpolating the positions and speeds of tagged fish within an array of fixed acoustic receivers is the weighted-mean method, which uses a box-kernel estimator, one of the simplest smoothing options available. This study aimed to determine the relative error of alternative, non-parametric regression methods for estimating these parameters. It was achieved by predicting the positions and speeds of three paths made through a dense array of fixed acoustic receivers within a coastal embayment (Gaspé Bay, Québec, Canada) by a boat with a GPS trailing an ultrasonic transmitter. Transmitter positions and speeds were estimated from the receiver data using kernel estimators, with box and normal kernels and the kernel size determined arbitrarily, and by several non-parametric methods, i.e. a kernel estimator, a smoothing spline, and local polynomial regression, with the kernel size or smoothing span determined by cross-validation. Prediction error of the kernel estimator was highly dependent upon kernel size, and a normal kernel produced less error than the box kernel. Of the methods using cross-validation, local polynomial regression produced the least error, suggesting it as the optimal method for interpolation. Prediction error was also strongly dependent on array density. The local polynomial regression method was used to determine the movement patterns of a sample of tagged Atlantic salmon (Salmo salar) smolt and kelt, and American eel (Anguilla rostrata). Analysis of the estimates from local polynomial regression suggested that this was a suitable method for monitoring patterns of fish movement.
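The cross-validation step this study relies on can be sketched generically: pick the bandwidth that minimizes leave-one-out prediction error. This is a minimal Nadaraya-Watson illustration under assumed names and made-up data, not the authors' implementation:

```python
import numpy as np

def nw_predict(x0, x, y, h, skip=None):
    """Nadaraya-Watson prediction at x0 with a Gaussian kernel,
    optionally leaving out observation `skip` (for cross-validation)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    if skip is not None:
        w = w.copy()
        w[skip] = 0.0
    return w @ y / w.sum()

def loo_cv_bandwidth(x, y, candidates):
    """Choose the bandwidth minimizing leave-one-out squared prediction error."""
    def score(h):
        return sum((y[i] - nw_predict(x[i], x, y, h, skip=i)) ** 2
                   for i in range(len(x)))
    return min(candidates, key=score)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 60)
y = np.sin(x) + 0.1 * rng.standard_normal(60)
h_best = loo_cv_bandwidth(x, y, candidates=[0.05, 0.3, 3.0])
```

Cross-validation reliably rejects gross oversmoothing (the 3.0 candidate), which is the point made in the abstract: prediction error is highly sensitive to kernel size, so a data-driven choice is preferable to an arbitrary one.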
3

Gu, Jingping, Qi Li, and Jui-Chung Yang. "Multivariate Local Polynomial Kernel Estimators: Leading Bias and Asymptotic Distribution." Econometric Reviews 34, no. 6-10 (December 17, 2014): 979–1010. http://dx.doi.org/10.1080/07474938.2014.956615.

Full text
4

Armstrong, Timothy B., and Michal Kolesár. "Simple and honest confidence intervals in nonparametric regression." Quantitative Economics 11, no. 1 (2020): 1–39. http://dx.doi.org/10.3982/qe1199.

Full text
Abstract:
We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we use critical values that take into account the possible bias of the estimator upon which the CIs are based. We show that this approach leads to CIs that are more efficient than conventional CIs that achieve coverage by undersmoothing or subtracting an estimate of the bias. We give sharp efficiency bounds for using different kernels, and derive the optimal bandwidth for constructing honest CIs. We show that using the bandwidth that minimizes the maximum mean-squared error results in CIs that are nearly efficient and that in this case, the critical value depends only on the rate of convergence. For the common case in which the rate of convergence is n^(-2/5), the appropriate critical value for 95% CIs is 2.18, rather than the usual 1.96 critical value. We illustrate our results in a Monte Carlo analysis and an empirical application.
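The paper's headline prescription is easy to state: with the MSE-optimal bandwidth and the usual n^(-2/5) rate, widen the 95% interval by using 2.18 instead of 1.96 standard errors. A hypothetical numeric sketch (the estimate and standard error are invented for illustration):

```python
def honest_ci(estimate, std_error, critical_value=2.18):
    """95% honest CI following the paper's rule for the n^(-2/5) rate:
    replace the usual 1.96 with 2.18 to account for worst-case bias."""
    half_width = critical_value * std_error
    return estimate - half_width, estimate + half_width

# Hypothetical regression-discontinuity estimate and standard error:
lo, hi = honest_ci(0.50, 0.10)                               # -> (0.282, 0.718)
conventional = honest_ci(0.50, 0.10, critical_value=1.96)    # narrower, not honest
```

The honest interval is about 11% wider than the conventional one; that modest widening is what buys uniform coverage over the smoothness class.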
5

Su, Liyun, Tianshun Yan, Yanyong Zhao, and Fenglan Li. "Local Polynomial Regression Solution for Differential Equations with Initial and Boundary Values." Mathematical Problems in Engineering 2013 (2013): 1–5. http://dx.doi.org/10.1155/2013/530932.

Full text
Abstract:
Numerical solutions of linear differential equations with initial and boundary values are obtained using a local polynomial estimator method with kernel smoothing. To achieve this, a combination of a local polynomial-based method and its differential form is used. Results computed with this technique are compared with the exact solution and with other existing methods to demonstrate its accuracy. The effectiveness of the method is verified by three illustrative examples, and it proves to be a very reliable alternative to some existing techniques for such realistic problems. Numerical results show that its solutions are more accurate than those of the other methods.
6

Kong, Efang, and Yingcun Xia. "UNIFORM BAHADUR REPRESENTATION FOR NONPARAMETRIC CENSORED QUANTILE REGRESSION: A REDISTRIBUTION-OF-MASS APPROACH." Econometric Theory 33, no. 1 (February 15, 2016): 242–61. http://dx.doi.org/10.1017/s0266466615000262.

Full text
Abstract:
Censored quantile regressions have received a great deal of attention in the literature. In a linear setup, recent research has found that an estimator based on the idea of “redistribution-of-mass” in Efron (1967, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 4, pp. 831–853, University of California Press) has better numerical performance than other available methods. In this paper, this idea is combined with the local polynomial kernel smoothing for nonparametric quantile regression of censored data. We derive the uniform Bahadur representation for the estimator and, more importantly, give theoretical justification for its improved efficiency over existing estimation methods. We include an example to illustrate the usefulness of such a uniform representation in the context of sufficient dimension reduction in regression analysis. Finally, simulations are used to investigate the finite sample performance of the new estimator.
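Efron's "redistribution-of-mass" idea is simple to state: scanning times from left to right, each censored observation passes its probability mass equally to the observations on its right, and the resulting weighted empirical survivor function coincides with the Kaplan-Meier estimator. A small self-contained sketch of that classical device (our own illustration; the paper embeds such weights in local polynomial kernel quantile regression):

```python
def redistribute_to_the_right(times, events):
    """Efron's redistribution-of-mass weights for right-censored data.
    `events[i]` is 1 for an observed failure, 0 for a censored time.
    Data must be sorted by time. Returns one weight per observation;
    redistributed censored observations end up with weight 0."""
    n = len(times)
    w = [1.0 / n] * n
    for i in range(n):
        if events[i] == 0 and i < n - 1:
            share = w[i] / (n - 1 - i)   # split mass over points to the right
            for j in range(i + 1, n):
                w[j] += share
            w[i] = 0.0
    return w

# Failure times 1, 3, 4 observed; time 2 censored:
w = redistribute_to_the_right([1, 2, 3, 4], [1, 0, 1, 1])
surv_after_3 = sum(wj for tj, wj in zip([1, 2, 3, 4], w) if tj > 3)
# Kaplan-Meier gives S(3) = (3/4) * (1/2) = 3/8, matching the weights
```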
7

Rahayu, Putri Indi, and Pardomuan Robinson Sihombing. "PENERAPAN REGRESI NONPARAMETRIK KERNEL DAN SPLINE DALAM MEMODELKAN RETURN ON ASSET (ROA) BANK SYARIAH DI INDONESIA." JURNAL MATEMATIKA MURNI DAN TERAPAN EPSILON 14, no. 2 (March 2, 2021): 115. http://dx.doi.org/10.20527/epsilon.v14i2.2968.

Full text
Abstract:
Sharia Bank Return On Assets (ROA) modeling in Indonesia in 2018 aims to analyze the relationship pattern of Return On Assets (ROA) with interest rates. The analysis most often used for such modeling is regression analysis, which is divided into two types, parametric and nonparametric. The most commonly used nonparametric regression methods are kernel and spline regression. In this study, the nonparametric regressions used were kernel regression with the Nadaraya-Watson (NWE) and Local Polynomial (LPE) estimators, while the spline regressions were smoothing spline and B-splines. The fitted curves show that the best model is the B-splines regression model with degree 3 and 5 knots, because its curve is smooth and follows the distribution of the data more closely than the other regression curves. The B-splines regression model has a coefficient of determination R² of 74.92%, meaning that 74.92% of the variation in the ROA variable is described by the model, while the remaining 25.08% is explained by other variables not included in the model.
8

Cattaneo, Matias D., Michael Jansson, and Xinwei Ma. "Simple Local Polynomial Density Estimators." Journal of the American Statistical Association 115, no. 531 (July 22, 2019): 1449–55. http://dx.doi.org/10.1080/01621459.2019.1635480.

Full text
9

Kikechi, Conlet Biketi, and Richard Onyino Simwa. "On Comparison of Local Polynomial Regression Estimators for P=0 and P=1 in a Model Based Framework." International Journal of Statistics and Probability 7, no. 4 (June 27, 2018): 104. http://dx.doi.org/10.5539/ijsp.v7n4p104.

Full text
Abstract:
This article discusses the local polynomial regression estimator for p = 0 and the local polynomial regression estimator for p = 1 in a finite population. The performance criterion exploited in this study focuses on the efficiency of the finite population total estimators. Further, the discussion explores analytical comparisons between the two estimators with respect to asymptotic relative efficiency. In particular, asymptotic properties of the local polynomial regression estimator of the finite population total are derived in a model-based framework, and the results are compared with those of the estimator studied by Kikechi et al. (2018). Variance comparisons between the two estimators indicate that they are asymptotically equivalently efficient. Simulation experiments carried out show that one of the estimators outperforms the other in the linear, quadratic and bump populations.
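The distinction the article studies can be illustrated directly: p = 0 corresponds to a local constant (Nadaraya-Watson) fit and p = 1 to a local linear fit, and only the latter reproduces a straight-line trend without bias at a boundary point. A hedged sketch with made-up data (names and numbers are ours):

```python
import numpy as np

def local_poly_fit(x0, x, y, h, p):
    """Weighted least-squares fit of a degree-p polynomial around x0;
    returns the fitted value at x0 (the intercept of the local fit)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights
    X = np.vander(x - x0, N=p + 1, increasing=True)  # columns 1, (x - x0), ...
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]

x = np.linspace(0.0, 1.0, 21)
y = 2.0 * x + 1.0                              # exactly linear, no noise
p0 = local_poly_fit(0.0, x, y, h=0.2, p=0)     # local constant: biased at the edge
p1 = local_poly_fit(0.0, x, y, h=0.2, p=1)     # local linear: exact here
```

On this noiseless linear example the p = 1 fit returns the true value m(0) = 1 up to rounding, while the p = 0 fit overshoots at the left boundary because it can only average the (larger) responses to the right.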
10

Chen, Shouyin, and Na Chen. "Learning by atomic norm regularization with polynomial kernels." International Journal of Wavelets, Multiresolution and Information Processing 13, no. 05 (September 2015): 1550035. http://dx.doi.org/10.1142/s0219691315500356.

Full text
Abstract:
In this paper, we propose a learning scheme for regression generated by atomic norm regularization and data-independent hypothesis spaces. The hypothesis spaces based on polynomial kernels are trained from finite datasets, independent of the given sample. We also present an error analysis for the proposed atomic norm regularization algorithm with polynomial kernels. When dealing with algorithms with polynomial kernels, the regularization error is a main difficulty. We estimate the regularization error by a local polynomial reproduction formula. Better error estimates are derived by applying projection and iteration techniques. Our study shows that the proposed algorithm has a fast convergence rate of O(m^(ζ-1)), the best convergence rate in the literature.

Dissertations / Theses on the topic "Local polynomial kernel estimators"

1

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali. "Sequential Procedures for Nonparametric Kernel Regression." RMIT University, Mathematical and Geospatial Sciences, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090119.134815.

Full text
Abstract:
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to highlight an important structure in data without any assumptions about the shape of the underlying regression function. In regression, the random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel-type estimators are the most popular. Kernel-type estimators provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be adapted to any kernel-type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to a class of kernel-type regression estimators called local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size that would be required to achieve a desired confidence level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, application of sequential procedures to a nonparametric regression model at a given point or series of points is considered. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential procedure of estimation that chooses a sample size based on recorded observations that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable.
These fixed-width confidence intervals are developed using asymptotic properties of both Nadaraya-Watson and local linear kernel estimators of nonparametric kernel regression with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely the two-stage procedure, modified two-stage procedure and purely sequential procedure. The proposed methodology is first tested by employing a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing its coverage accuracy with the corresponding preset confidence coefficients, the proximity of computed sample sizes to optimal sample sizes, and by contrasting the estimated values obtained from the two nonparametric methods with actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, which is considered as an alternative method of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution, and the percentiles of the approximate distribution are utilized to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is normally known to plague Stein's two-stage sequential procedure. The procedure developed is validated using an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, our proposed sequential nonparametric kernel regression methods are applied to some problems in software reliability and finance.
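The flavor of Stein-type two-stage procedures can be conveyed in a few lines: a pilot sample estimates variability, which then determines the total sample size needed for a fixed-width interval. A generic sketch for an i.i.d. mean for simplicity (the thesis applies the idea to kernel regression estimates at a point; the pilot data below are invented):

```python
import math
import statistics

def two_stage_sample_size(pilot, half_width, z=1.96):
    """Stein-type two-stage rule: estimate the variance from a pilot
    sample, then return the total sample size needed so that the
    confidence interval has the preassigned half-width."""
    s2 = statistics.variance(pilot)                  # pilot variance estimate
    n_needed = math.ceil((z / half_width) ** 2 * s2)
    return max(len(pilot), n_needed)                 # never below the pilot size

pilot = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # hypothetical pilot observations
n_total = two_stage_sample_size(pilot, half_width=0.1)
```

The oversampling tendency mentioned in the abstract comes from the pilot variance being a noisy estimate; the thesis investigates bootstrap corrections for exactly that.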
2

Doruska, Paul F. "Methods for Quantitatively Describing Tree Crown Profiles of Loblolly pine (Pinus taeda L.)." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30638.

Full text
Abstract:
Physiological process models, productivity studies, and wildlife abundance studies all require accurate representations of tree crowns. In the past, geometric shapes or flexible mathematical equations approximating geometric shapes were used to represent crown profiles. Crown profile of loblolly pine (Pinus taeda L.) was described using single-regressor, nonparametric regression analysis in an effort to improve crown representations. The resulting profiles were compared to more traditional representations. Nonparametric regression may be applicable when an underlying parametric model cannot be identified. The modeler does not specify a functional form. Rather, a data-driven technique is used to determine the shape of a curve. The modeler determines the amount of local curvature to be depicted in the curve. A class of local-polynomial estimators which contains the popular kernel estimator as a special case was investigated. Kernel regression appears to fit closely to the interior data points but often possesses bias problems at the boundaries of the data, a feature less exhibited by local linear or local quadratic regression. When using nonparametric regression, decisions must be made regarding polynomial order and bandwidth. Such decisions depend on the presence of local curvature, the desired degree of smoothing, and, for bandwidth in particular, the minimization of some global error criterion. In the present study, a penalized PRESS criterion (PRESS*) was selected as the global error criterion. When individual-tree, crown profile data are available, the technique of nonparametric regression appears capable of capturing more of the tree-to-tree variation in crown shape than multiple linear regression and other published functional forms. Thus, modelers should consider the use of nonparametric regression when describing crown profiles as well as in any regression situation where traditional techniques perform unsatisfactorily or fail.
Ph. D.
3

Madani, Soffana. "Contributions à l’estimation à noyau de fonctionnelles de la fonction de répartition avec applications en sciences économiques et de gestion." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1183/document.

Full text
Abstract:
The income distribution of a population, the distribution of failure times of a system and the evolution of the surplus in with-profit policies - studied in economics and management - are related to continuous functions belonging to the class of functionals of the distribution function. Our thesis covers the kernel estimation of some functionals of the distribution function with applications in economics and management. In the first chapter, we offer local polynomial estimators in the i.i.d. case of two functionals of the distribution function, written LF and TF , which are useful to produce the smooth estimators of the Lorenz curve and the scaled total time on test transform. The estimation method is described in Abdous, Berlinet and Hengartner (2003) and we prove the good asymptotic behavior of the local polynomial estimators. Until now, Gastwirth (1972) and Barlow and Campo (1975) have defined continuous piecewise estimators of the Lorenz curve and the scaled total time on test transform, which do not respect the continuity of the original curves. Illustrations on simulated and real data are given. The second chapter is intended to provide smooth estimators in the i.i.d. case of the derivatives of the two functionals of the distribution function presented in the last chapter. Apart from the estimation of the first derivative of the function TF with a smooth estimation of the distribution function, the estimation method is the local polynomial approximation of functionals of the distribution function detailed in Berlinet and Thomas-Agnan (2004). Various types of convergence and asymptotic normality are obtained, including the probability density function and its derivatives. Simulations appear and are discussed. The starting point of the third chapter is the Parzen-Rosenblatt estimator (Rosenblatt (1956), Parzen (1964)) of the probability density function. 
We first improve the bias of this estimator and its derivatives by using higher order kernels (Berlinet (1993)). Then we find the modified conditions for the asymptotic normality of these estimators. Finally, we build a method to remove boundary effects of the estimators of the probability density function and its derivatives, thanks to higher order derivatives. We are interested, in this final chapter, in the hazard rate function which, unlike the two functionals of the distribution function explored in the first chapter, is not a fraction of two linear functionals of the distribution function. In the i.i.d. case, kernel estimators of the hazard rate and its derivatives are produced from the kernel estimators of the probability density function and its derivatives. The asymptotic normality of the first estimators is logically obtained from the second ones. Then, we are placed in the multiplicative intensity model, a more general framework including censored and dependent data. We complete the described method in Ramlau-Hansen (1983) to obtain good asymptotic properties of the estimators of the hazard rate and its derivatives and we try to adopt the local polynomial approximation in this context. The surplus rate in with-profit policies will be nonparametrically estimated as its mathematical expression depends on transition rates (hazard rates from one state to another) in a Markov chain (Ramlau-Hansen (1991), Norberg (1999))
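The object being smoothed in the first chapter is easy to picture: the empirical Lorenz curve gives the share of total income held by the poorest fraction p of the population, the piecewise construction attributed to Gastwirth (1972). A small sketch with made-up incomes (our own illustration, not the thesis estimator, which replaces this step function with a local polynomial fit):

```python
def lorenz_point(incomes, p):
    """Empirical Lorenz curve: share of total income earned by the
    poorest fraction p of the population (p on a grid of k/n)."""
    xs = sorted(incomes)
    k = round(p * len(xs))        # number of poorest individuals
    return sum(xs[:k]) / sum(xs)

incomes = [1.0, 1.0, 8.0]                        # hypothetical incomes
share = lorenz_point(incomes, 2 / 3)             # poorest two thirds hold 2/10
equal = lorenz_point([5.0, 5.0, 5.0, 5.0], 0.5)  # perfect equality: L(p) = p
```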
4

Krishnan, Sunder Ram. "Optimum Savitzky-Golay Filtering for Signal Estimation." Thesis, 2013. http://hdl.handle.net/2005/3293.

Full text
Abstract:
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually-motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks are observed to depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing the Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well-suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially-adaptive regression.
We observe that the parameters are so chosen as to tradeoff the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing out of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data also. The denoising algorithms are compared with other standard, performant methods available in the literature both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation is provided by deriving motivation out of the hallmark paper of Savitzky and Golay and Schafer’s recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown in their original Analytical Chemistry journal article, that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing for the filter impulse response length/3dB bandwidth, which are inversely related. 
We observe that the MMSE solution is such that the S-G filter chosen is of longer impulse response length (equivalently smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. Also, we provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and that the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10^4 on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui's regression formula is made use of. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order.
In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in the case of the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as the Itakura-Saito (IS) distortion. We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
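As a rough illustration of pointwise-linear shrinkage in the DCT domain, the sketch below applies a Wiener-like gain to each DCT coefficient. This is only a stand-in for the MJS estimator of the abstract: the thesis tunes the shrinkage by minimizing an unbiased estimate of the Itakura-Saito risk, whereas here the gain is the simple quadratic-risk heuristic `max(0, 1 - sigma^2 / Y^2)`.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct_shrink(y, sigma):
    # Pointwise-linear shrinkage of DCT coefficients: each coefficient Y_k
    # is scaled by a gain in [0, 1]; coefficients comparable to the noise
    # level are attenuated, large (signal-dominated) ones pass through.
    Y = dct(y, norm='ortho')
    gain = np.maximum(0.0, 1.0 - sigma ** 2 / np.maximum(Y ** 2, 1e-12))
    return idct(gain * Y, norm='ortho')
```

With `norm='ortho'`, `dct`/`idct` form an orthonormal pair, so the shrinkage acts on an energy-preserving transform of the signal, which is what makes the pointwise-gain analysis tractable.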
APA, Harvard, Vancouver, ISO, and other styles
5

Deng, Wen-Shuenn, and 鄧文舜. "The Study of Kernel Regression Function Polygons and Local Linear Ridge Regression Estimators." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/60535152004594408945.

Full text
Abstract:
Doctoral dissertation
National Dong Hwa University
Department of Applied Mathematics
ROC academic year 90 (2001-2002)
In the field of random design nonparametric regression, we examine two kernel estimators involving, respectively, piecewise linear interpolation of kernel regression function estimates and local ridge regression. Efforts dedicated to understanding their properties bring forth the following main messages. The kernel estimate of a regression function inherits its smoothness properties from the kernel function chosen by the investigator. Nevertheless, practical regression function estimates are often presented in interpolated form, using the exact kernel estimates only at an equally spaced grid of points. The asymptotic integrated mean square error (AIMSE) properties of such polygon-type estimates, namely kernel regression function polygons (KRFP), are investigated. Call the minimizer of the AIMSE the "optimal kernel". The Epanechnikov kernel is not the optimal kernel unless the distance between every two consecutive grid points is of smaller order in magnitude than the bandwidth used by the kernel regression function estimator. If the distance and bandwidth are of the same order in magnitude, we obtain the optimal kernel from the class of degree-two polynomials through numerical calculations. In this case, the best AIMSE performance deteriorates as the distance is increased to reduce the computational effort. When the distance is of larger order in magnitude than the bandwidth, the uniform kernel serves as the optimal kernel for the KRFP. The local linear estimator (LLE) has many attractive asymptotic features. In finite sample situations, however, its conditional variance may become arbitrarily large. To cope with this difficulty, which can translate into a spuriously rough appearance of the regression function estimate when the design becomes sparse or clustered, Seifert and Gasser (1996) suggest "ridging" the LLE and propose the local linear ridge regression estimator (LLRRE). In this dissertation, local and numerical properties of the LLRRE are studied.
It is shown that its finite sample mean square errors, both conditional and unconditional, are bounded above by finite constants. If the ridge regression parameters are not selected properly, then the resulting LLRRE suffers some drawbacks: for example, it is asymptotically biased, has boundary effects, and fails to inherit the nice asymptotic bias quality of the LLE. Letting the ridge parameters depend on the sample size and converge to 0 as the sample size increases, we are able to ensure that the LLRRE enjoys the nice asymptotic features of the LLE under some mild conditions. Simulation studies demonstrate that the LLRRE using cross-validated bandwidth and ridge parameters can have smaller sample mean integrated square error than the LLE using a cross-validated bandwidth, at reasonable sample sizes.
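A minimal sketch of the ridging idea: the local linear fit solves kernel-weighted normal equations, and the ridge term simply inflates the slope's diagonal entry, which bounds the variance when the design near the target point is sparse. The kernel choice and function name below are illustrative, not taken from the dissertation.

```python
import numpy as np

def llrre(x, y, x0, h, ridge=0.0):
    # Local linear fit y ~ a + b*(x - x0) with kernel weights and a ridge
    # penalty on the slope b (Seifert & Gasser's remedy for the unbounded
    # conditional variance of the plain LLE on sparse or clustered designs).
    # ridge=0 recovers the ordinary local linear estimator.
    u = x - x0
    w = np.maximum(1.0 - (u / h) ** 2, 0.0)        # Epanechnikov-type weights (illustrative)
    S0, S1, S2 = w.sum(), (w * u).sum(), (w * u * u).sum()
    T0, T1 = (w * y).sum(), (w * u * y).sum()
    # Ridged normal equations: [[S0, S1], [S1, S2 + ridge]] @ [a, b] = [T0, T1]
    denom = S0 * (S2 + ridge) - S1 ** 2
    return ((S2 + ridge) * T0 - S1 * T1) / denom   # a = estimate of m(x0)
```

Letting `ridge` shrink toward 0 as the sample size grows is exactly the device the abstract describes for recovering the LLE's asymptotic behavior while keeping finite-sample variance bounded.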
APA, Harvard, Vancouver, ISO, and other styles
6

Tilahun, Gelila. "Statistical Methods for Dating Collections of Historical Documents." Thesis, 2011. http://hdl.handle.net/1807/29890.

Full text
Abstract:
The problem in this thesis was originally motivated by problems presented by the Documents of Early England Data Set (DEEDS). The central problem with these medieval documents is the lack of methods to assign accurate dates to those documents which bear no date. With the problems of the DEEDS documents in mind, we present two methods to impute missing features of texts. In the first method, we suggest a new class of metrics for measuring distances between texts. We then show how to combine the distances between the texts using statistical smoothing. This method can be adapted to settings where the features of the texts are ordered or unordered categorical variables (as in the case of, for example, authorship attribution problems). In the second method, we estimate the probability of occurrence of words in texts using the nonparametric regression technique of local polynomial fitting with kernel weights applied to generalized linear models. We combine the estimated probabilities of occurrence of the words of a text to estimate the probability of occurrence of the text as a function of its feature -- the feature in this case being the date at which the text was written. The application and results of our methods on the DEEDS documents are presented.
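The second method can be illustrated at its simplest, degree-zero level: a kernel-weighted mean of binary occurrence indicators estimates the probability that a word appears in a document of a given date. The thesis fits local polynomial GLMs rather than this local-constant sketch, and the data and function name below are hypothetical.

```python
import numpy as np

def word_occurrence_prob(dates, occurs, t0, h):
    # Local-constant kernel estimate of P(word appears | document date = t0):
    # a kernel-weighted mean of 0/1 occurrence indicators over dated texts.
    # The thesis's local polynomial GLM fit generalises this sketch.
    w = np.exp(-0.5 * ((dates - t0) / h) ** 2)     # Gaussian kernel weights (illustrative)
    return float(np.sum(w * occurs) / np.sum(w))
```

Evaluating such curves for every word in an undated document, and combining them, yields a likelihood over candidate dates, which is the imputation principle the abstract describes.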
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Local polynomial kernel estimators"

1

Cai, Zongwu. Functional Coefficient Models for Economic and Financial Data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.6.

Full text
Abstract:
This article discusses the use of functional coefficient models for economic and financial data analysis. It first provides an overview of recent developments in the nonparametric estimation and testing of functional coefficient models, with particular emphasis on the kernel local polynomial smoothing method, before considering misspecification testing as an important econometric question when fitting a functional (varying) coefficient model or a trending time-varying coefficient model. It then describes two major real-life applications of functional coefficient models in economics and finance: the first deals with the use of functional coefficient instrumental-variable models to investigate the empirical relation between wages and education in a random sample of young Australian female workers from the 1985 wave of the Australian Longitudinal Survey, and the second is concerned with the use of functional coefficient beta models to analyze the common stock price of Microsoft stock (MSFT) during the year 2000 using the daily closing prices.
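The core estimation idea of a functional coefficient model can be sketched in miniature: the coefficient is allowed to vary with an index variable, and at each index value it is estimated by kernel-weighted least squares using nearby observations. This is a local-constant toy version; the chapter's kernel local polynomial smoother adds local slope terms in the same spirit. The function name is illustrative.

```python
import numpy as np

def functional_coef(z, x, y, z0, h):
    # Varying-coefficient fit for y = a(z) * x + error: estimate a(z0) by
    # kernel-weighted least squares of y on x, weighting observations by
    # how close their index variable z is to z0.
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)         # Gaussian kernel weights
    return float(np.sum(w * x * y) / np.sum(w * x * x))
```

Sweeping `z0` over a grid traces out the estimated coefficient function, e.g. a time-varying beta in the stock-price application mentioned above.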
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Local polynomial kernel estimators"

1

Eggermont, Paul P. B., and Vincent N. LaRiccia. "Local Polynomial Estimators." In Springer Series in Statistics, 169–203. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/b12285_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rojas, H. E., H. D. Rojas, and A. S. Cruz. "Denoising of Electrical Signals Produced by Partial Discharges in Distribution Transformers Using the Local Polynomial Approximation and the Criterion of Non-parametric Estimators." In Lecture Notes in Electrical Engineering, 740–50. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31680-8_72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Kernel and Local Polynomial Methods." In Nonparametric Models for Longitudinal Data, 97–126. Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/b20631-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fan, Yanqin, and Emmanuel Guerre. "Multivariate Local Polynomial Estimators: Uniform Boundary Properties and Asymptotic Linear Representation." In Advances in Econometrics, 489–537. Emerald Group Publishing Limited, 2016. http://dx.doi.org/10.1108/s0731-905320160000036023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rachdi, Mustapha, Ali Laksaci, Ali Hamié, Jacques Demongeot, and Idir Ouassou. "Curves Classification by Using a Local Likelihood Function and Its Practical Usefulness for Real Data." In Fuzzy Systems and Data Mining VI. IOS Press, 2020. http://dx.doi.org/10.3233/faia200691.

Full text
Abstract:
We extend the classical approach to supervised classification based on local likelihood estimation to the case of functional covariates. The estimation procedure for the functional (slope) parameter in the linear model when the covariate is functional is investigated. We show, on simulated as well as on real data, with classification error rates estimated using test samples, that the local likelihood estimation procedure leads to better estimators than classical kernel estimation. In addition, this approach no longer assumes that the linear predictors have a specific parametric form. However, the approach also has two drawbacks: it is more expensive and slower than kernel regression, and kernels other than the Gaussian kernel can lead to divergence of the Newton-Raphson algorithm. With a Gaussian kernel, by contrast, 4 to 6 iterations are sufficient to achieve convergence.
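A scalar-covariate sketch of local likelihood classification: Newton-Raphson (equivalently, iteratively reweighted least squares) is run on the kernel-weighted log-likelihood of a local logistic model. This mirrors the chapter's observation that with a Gaussian kernel a handful of iterations suffice; the function name and the small ridge term are illustrative additions.

```python
import numpy as np

def local_logistic_prob(x, y, x0, h, iters=6):
    # Local likelihood fit of a logistic model with local intercept and
    # slope, maximising the kernel-weighted log-likelihood around x0 by
    # Newton-Raphson; returns the estimate of P(Y = 1 | x = x0).
    u = x - x0
    w = np.exp(-0.5 * (u / h) ** 2)                # Gaussian kernel weights
    A = np.column_stack([np.ones_like(u), u])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        grad = A.T @ (w * (y - p))                 # weighted score
        W = w * p * (1.0 - p)
        H = A.T @ (A * W[:, None]) + 1e-8 * np.eye(2)   # weighted Fisher info (tiny ridge for stability)
        beta += np.linalg.solve(H, grad)
    return float(1.0 / (1.0 + np.exp(-beta[0])))
```

In the functional-covariate setting of the chapter, the scalar distance `x - x0` is replaced by a semi-metric between curves, but the Newton iteration is the same.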
APA, Harvard, Vancouver, ISO, and other styles
6

Wen, Kuangyu, and Ximing Wu. "Generalized Empirical Likelihood-Based Kernel Estimation of Spatially Similar Densities." In Advances in Info-Metrics, 385–99. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190636685.003.0014.

Full text
Abstract:
This study concerns the estimation of spatially similar densities, each with a small number of observations. To achieve flexibility and improved efficiency, we propose kernel-based estimators that are refined by generalized empirical likelihood probability weights associated with spatial moment conditions. We construct spatial moments based on spline basis functions that facilitate desirable local customization. Monte Carlo simulations demonstrate the good performance of the proposed method. To illustrate its usefulness, we apply this method to the estimation of crop yield distributions that are known to be spatially similar.
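The "kernel estimator refined by probability weights" idea reduces to replacing the uniform 1/n weights of an ordinary KDE with observation-specific probabilities. The sketch below shows only that final weighting step; computing the generalized empirical likelihood weights from spatial moment conditions is the chapter's contribution and is not reproduced here.

```python
import numpy as np

def weighted_kde(data, probs, x, h):
    # Gaussian kernel density estimate at point x, with observation
    # probability weights `probs` (summing to 1) in place of 1/n.
    # Uniform weights recover the ordinary KDE exactly.
    u = (x - data) / h
    return float(np.sum(probs * np.exp(-0.5 * u * u)) / (h * np.sqrt(2.0 * np.pi)))
```

Tilting the weights toward observations consistent with moment conditions shared across neighbouring regions is what lets each small-sample density borrow strength from its spatial neighbours.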
APA, Harvard, Vancouver, ISO, and other styles
7

Shoji, Isao. "Nonparametric Estimation of Nonlinear Dynamics by Local Linear Approximation." In Chaos and Complexity Theory for Management, 368–79. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2509-9.ch019.

Full text
Abstract:
This chapter discusses nonparametric estimation of nonlinear dynamical system models by a method of metric-based local linear approximation. By specifying a metric on Euclidean space, such as the standard metric or the square metric, and a weighting function based on, for example, the exponential function or the cut-off function, it is possible to estimate values of an unknown vector field from experimental data. It can be shown that local linear fitting with the Gaussian kernel, i.e. local polynomial modeling of degree one, is included in the class of the proposed method. In addition, simulation studies on estimating random oscillations show that the method works well numerically.
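A one-dimensional toy version of estimating a vector field from data: approximate the time derivative by finite differences along an observed trajectory, then average those derivatives with kernel weights centred at the state of interest. This is the local-constant degenerate case of the chapter's metric-based local linear approximation; the function name is illustrative.

```python
import numpy as np

def local_vector_field(xs, dt, x0, h):
    # Nonparametric estimate of the drift f(x0) in dx/dt = f(x):
    # finite-difference derivatives along the trajectory are averaged
    # with Gaussian kernel weights on the states near x0.
    dx = np.diff(xs) / dt                          # crude derivative estimates
    w = np.exp(-0.5 * ((xs[:-1] - x0) / h) ** 2)   # weight states close to x0
    return float(np.sum(w * dx) / np.sum(w))
```

Replacing the weighted mean with a weighted linear fit in the state variable gives the degree-one (local linear) member of the proposed class.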
APA, Harvard, Vancouver, ISO, and other styles
8

Priya, Ebenezer, and Srinivasan Subramanian. "Automated Method of Analysing Sputum Smear Tuberculosis Images Using Multifractal Approach." In Biomedical Signal and Image Processing in Patient Care, 184–215. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2829-6.ch010.

Full text
Abstract:
In this chapter, an attempt has been made to automate the analysis of positive and negative Tuberculosis (TB) sputum smear images using multifractal approach. The smear images (N=100) recorded under standard image acquisition protocol are considered. The images are subjected to multifractal analysis and the corresponding spectrum parameters are extracted. Most significant parameters are selected based on the principal component analysis. Further, these parameters are subjected to classification using support vector machine classifier with different kernels. Results demonstrate that the multifractal analysis is capable of discriminating positive and negative TB images. The values of apex, broadness and aperture of the singularity spectrum are higher for TB positive than negative images and are statistically significant. The performance estimators obtained in the classification process show that the polynomial kernel performs better. It appears that this method of texture analysis could be useful for automated analysis of TB using digital sputum smear images.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Local polynomial kernel estimators"

1

Sherstobitov, A. I., V. I. Marchuk, D. V. Timofeev, V. V. Voronin, and K. O. Egiazarian. "Local feature descriptor based on 2D local polynomial approximation kernel indices." In IS&T/SPIE Electronic Imaging, edited by Karen O. Egiazarian, Sos S. Agaian, and Atanas P. Gotchev. SPIE, 2014. http://dx.doi.org/10.1117/12.2041610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Silalahi, Divo Dharma, and Habshah Midi. "Considering a non-polynomial basis for local kernel regression problem." In 2ND INTERNATIONAL CONFERENCE AND WORKSHOP ON MATHEMATICAL ANALYSIS 2016 (ICWOMA2016). Author(s), 2017. http://dx.doi.org/10.1063/1.4972168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Abdous, B., and A. Berlinet. "Reproducing kernel Hilbert spaces and local polynomial estimation of smooth functionals." In Proceedings of the 7th International ISAAC Congress. WORLD SCIENTIFIC, 2010. http://dx.doi.org/10.1142/9789814313179_0033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dharmasena, L. Sandamali, P. Zeephongsekul, and Chathuri L. Jayasinghe. "Software Reliability Growth Models Based on Local Polynomial Modeling with Kernel Smoothing." In 2011 IEEE 22nd International Symposium on Software Reliability Engineering (ISSRE). IEEE, 2011. http://dx.doi.org/10.1109/issre.2011.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shieh, Meng-Dar, and Hsin-En Fang. "Using Support Vector Regression in the Study of Product Form Images." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-69150.

Full text
Abstract:
In this paper, Support Vector Regression (SVR) training models using three different kernels (polynomial, Radial Basis Function (RBF), and mixed) are constructed to demonstrate the training performance on unarranged data obtained from 32 virtual 3-D computer models. The 32 samples used as input data for training the three SVR models are represented by the coordinate value sets of points extracted from 3-D models built in 3-D software according to the shapes of 32 actual hairdryer products. To train the SVR model, an adjective (streamline) is used to evaluate all 32 samples by 37 subjects. The scores of all the subjects are then averaged to give the target values of the training models. In addition, a technique called k-fold cross-validation (C-V) is used to find the optimal parameter combination for the SVR models. The performance of the SVR using these three kernels to estimate the product image values is measured by the Root Mean Square Error (RMSE). The results show that the optimal SVR model using the polynomial kernel performed better than the one using the RBF kernel. However, it is important to note that the mixed kernel had the best performance of the three. It is also shown in this study that the single RBF kernel has a local characteristic and cannot process broadly distributed data well. It can, however, improve the power of the SVR when combined with the polynomial kernel.
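A common way to build the mixed kernel the abstract refers to is a convex combination of polynomial and RBF Gram matrices, which is again a valid kernel because positive semidefinite matrices are closed under nonnegative combinations. The parameter values below are illustrative, not those tuned in the paper.

```python
import numpy as np

def mixed_gram(X, Y, degree=2, gamma=1.0, lam=0.5):
    # Gram matrix of a mixed kernel: lam * polynomial + (1 - lam) * RBF.
    # The polynomial part captures global trends; the RBF part captures
    # local structure; lam in [0, 1] trades off the two behaviours.
    poly = (X @ Y.T + 1.0) ** degree
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    rbf = np.exp(-gamma * sq)
    return lam * poly + (1.0 - lam) * rbf
```

Such a precomputed Gram matrix can be passed to any kernel-machine trainer that accepts custom kernels, with `lam`, `degree`, and `gamma` selected alongside the SVR parameters by k-fold cross-validation as in the study.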
APA, Harvard, Vancouver, ISO, and other styles
