Journal articles on the topic 'Nonparametric smoothing method'


1

Takezawa, K. "Use of nonparametric smoothing method for growth analysis." Japanese Journal of Biometrics 9, no. 1 (1988): 11–18. http://dx.doi.org/10.5691/jjb.9.11.

2

Kuželka, K., and R. Marušák. "Use of nonparametric regression methods for developing a local stem form model." Journal of Forest Science 60, No. 11 (November 14, 2014): 464–71. http://dx.doi.org/10.17221/56/2014-jfs.

Abstract:
A local mean stem curve of spruce was represented using regression splines. The abilities of the smoothing spline and the P-spline to model the mean stem curve were evaluated using data from 85 carefully measured stems of Norway spruce. For both techniques, the optimal amount of smoothing was investigated as a function of the number of training stems using cross-validation. Representatives of the main groups of parametric models (single models, segmented models, and variable-coefficient models) were compared with the spline models using five statistical criteria. Both regression splines performed comparably to or better than all the parametric models, independently of the number of stems used as training data.
3

Mahmoud, Hamdy F. F. "Parametric Versus Semi and Nonparametric Regression Models." International Journal of Statistics and Probability 10, no. 2 (February 23, 2021): 90. http://dx.doi.org/10.5539/ijsp.v10n2p90.

Abstract:
There are three common types of regression models: parametric, semiparametric, and nonparametric regression. Which model should be used to fit real data depends on how much information is available about the form of the relationship between the response variable and the explanatory variables, and on the assumed distribution of the random error. Researchers need to be familiar with the requirements of each modeling approach. In this paper, the differences between these models, common estimation methods, robust estimation, and applications are introduced. For parametric models there are many well-known estimation methods, such as least squares and maximum likelihood, which are extensively studied but require strong assumptions. Nonparametric regression models, on the other hand, are free of assumptions regarding the form of the response-explanatory relationship, but estimation methods such as kernel and spline smoothing are computationally expensive and require smoothing parameters to be chosen. For kernel smoothing there are two common estimators: the local constant and local linear smoothing methods. In terms of bias, especially at the boundaries of the data range, the local linear estimator is better than the local constant estimator. Robust estimation methods for linear models are well studied; however, robust estimation methods for nonparametric regression are limited. A robust estimation method for the semiparametric and nonparametric regression models is introduced.
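The local constant versus local linear contrast in the abstract above can be sketched in a few lines. This is a generic illustration with made-up data; the function names, bandwidth, and signal are ours, not the paper's:

```python
import numpy as np

def local_constant(x, y, x0, h):
    """Nadaraya-Watson (local constant) estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def local_linear(x, y, x0, h):
    """Local linear estimate at x0: weighted least squares on (1, x - x0);
    the intercept of the local fit is the estimate at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]

# A noiseless linear signal exposes the boundary bias of the local
# constant estimator: at x0 = 0 it averages only points to the right.
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x
lc = local_constant(x, y, 0.0, 0.1)   # pulled upward at the left boundary
ll = local_linear(x, y, 0.0, 0.1)     # exact on linear data
```

On a linear signal the local linear fit is exact, so `ll` is numerically zero while `lc` is pulled toward the interior, which is precisely the boundary-bias point the abstract makes.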
4

Zheng, Xu. "Testing parametric conditional distributions using the nonparametric smoothing method." Metrika 75, no. 4 (November 23, 2010): 455–69. http://dx.doi.org/10.1007/s00184-010-0336-2.

5

You, Wei Zhen, and Xiao Pin Zhong. "Modeling System Reliability Using a Nonparametric Method." Applied Mechanics and Materials 687-691 (November 2014): 1193–97. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1193.

Abstract:
System reliability is an important problem, especially in reliability engineering. The frequency with which system failures occur is represented by the failure rate, which we use instead of system reliability to analyze a particular system. Traditional parametric models cannot give a good fit to complex systems, so we employ a nonparametric method in this paper. Gaussian smoothing is also applied to the failure rate curves. Compared with parametric models, the nonparametric model yields a more accurate estimation of the system failure rate.
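As a toy illustration of Gaussian smoothing applied to a failure-rate curve (synthetic hazard data; not the authors' model or dataset):

```python
import numpy as np

def gaussian_smooth(rate, sigma=2.0):
    """Smooth a noisy empirical failure-rate curve with a Gaussian kernel."""
    k = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    w = np.exp(-0.5 * (k / sigma) ** 2)
    w /= w.sum()
    # Edge padding keeps the output the same length and avoids
    # the shrinkage that zero-padding would cause at the boundaries.
    padded = np.pad(rate, (len(k) // 2, len(k) // 2), mode="edge")
    return np.convolve(padded, w, mode="valid")

rng = np.random.default_rng(3)
t = np.arange(100)
true_rate = 0.01 + 0.0005 * t                    # increasing (wear-out) hazard
noisy = true_rate + rng.normal(0.0, 0.005, 100)  # noisy empirical estimate
smooth = gaussian_smooth(noisy)
```

The smoothed curve tracks the underlying hazard more closely than the raw noisy estimate, which is the sense in which smoothing improves the failure-rate estimate.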
6

Rezapour, Mahdi, and Khaled Ksaibati. "Semi and Nonparametric Conditional Probability Density, a Case Study of Pedestrian Crashes." Open Transportation Journal 15, no. 1 (December 31, 2021): 280–88. http://dx.doi.org/10.2174/1874447802115010280.

Abstract:
Background: Kernel-based methods have gained popularity because the distribution of a model's residuals might not be described by any classical parametric distribution. The kernel-based method has been extended to estimate conditional densities rather than conditional distributions when the data incorporate both discrete and continuous attributes. The method is based on smoothing parameters, with optimal values chosen for the various attributes. Thus, if an explanatory variable is independent of the dependent variable, that attribute is effectively dropped by the nonparametric method through the assignment of a large smoothing parameter, giving it a uniform distribution so that its contribution to the model's variance is minimal. Objectives: The objective of this study was to identify factors contributing to the severity of pedestrian crashes based on an unbiased method. In particular, the study evaluated the applicability of semi- and nonparametric kernel-based techniques to the crash dataset by means of confusion matrices. Methods: Two kernel-based methods, one nonparametric and one semiparametric, were implemented to model the severity of pedestrian crashes. Estimation of the semiparametric densities is based on adaptive local smoothing and maximization of the quasi-likelihood function, which is somewhat similar to the likelihood of the binary logit model. The nonparametric method, on the other hand, is based on selecting optimal smoothing parameters in the estimation of the conditional probability density function so as to minimize the mean integrated squared error (MISE). The performance of the models was evaluated by their predictive power. As a benchmark for comparison, standard logistic regression was also employed. Although these methods have been employed in other fields, this is one of the earliest studies to employ them in the context of traffic safety.
Results: The results highlighted that the nonparametric kernel-based method outperforms the semiparametric (single-index) model and the standard logit model based on the confusion matrices. To illustrate how the bandwidth-selection method removes irrelevant attributes in the nonparametric approach, noisy predictors were added to the models and a comparison was made. The study provides extensive discussion of the methodological approach of the models. Conclusion: In summary, alcohol and drug involvement, driving on a non-level grade, and bad lighting conditions are some of the factors that increase the likelihood of severe pedestrian crashes. This is one of the earliest studies to implement these methods in the context of transportation problems. The nonparametric method is especially recommended in the field of traffic safety when there is uncertainty about the importance of predictors, as the technique automatically drops unimportant predictors.
7

Kaushanskiy, Vadim, and Victor Lapshin. "A nonparametric method for term structure fitting with automatic smoothing." Applied Economics 48, no. 58 (May 21, 2016): 5654–66. http://dx.doi.org/10.1080/00036846.2016.1181835.

8

OSMANI, El Hassene, Mounir Haddou, Naceurdine Bensalem, and Lina Abdallah. "A new smoothing method for nonlinear complementarity problems involving P0-function." Statistics, Optimization & Information Computing 10, no. 4 (September 29, 2022): 1267–92. http://dx.doi.org/10.19139/soic-2310-5070-1493.

Abstract:
In this paper, we present a family of smoothing methods to solve nonlinear complementarity problems (NCPs) involving a P0-function. Several regularization or approximation techniques already exist, such as the Fischer-Burmeister function, interior-point method (IPM) approaches, and smoothing methods. All the corresponding methods solve a sequence of nonlinear systems of equations and depend on parameters that are difficult to drive to zero. The main novelty of our approach is to treat the smoothing parameters as variables that converge to zero by themselves. We do not need any complicated updating strategy and thus obtain nonparametric algorithms. We prove global and local convergence results and present several numerical experiments, comparisons, and applications that show the efficiency of our approach.
9

Jayanti, Putu Gita Karlina, Rahma Anisa, Muhammad Nur Aidi, and Erfiani. "Penerapan Teknik Prapemrosesan Smoothing Spline pada Data Hasil Pengukuran Alat Pemantau Kadar Glukosa Darah Non-Invasif." Xplore: Journal of Statistics 2, no. 2 (August 31, 2018): 15–23. http://dx.doi.org/10.29244/xplore.v2i2.90.

Abstract:
A non-invasive blood glucose monitoring device takes measurements without injuring the body. One qualitative and relatively simple measurement method, fast and inexpensive to use, is Fourier Transform Infrared (FTIR) spectroscopy. Spectroscopic results are prone to scatter shifts, since repeated measurements of the same object do not produce exactly the same spectrum, so a preprocessing method is required to reduce this problem. In some cases, however, it is difficult to identify the underlying data pattern, so a nonparametric approach is needed to capture it and obtain accurate results in the calibration model. The smoothing spline is a nonparametric piecewise-polynomial method: polynomial segments are joined at knot points over subintervals, providing flexibility in constructing the shape of the curve. The smoothing spline method produced its optimum, the minimum GCV value, with a linear order and sixteen knot points. The variance after applying the smoothing spline was smaller than before smoothing, indicating that the method can reduce the scattering effect in the non-invasive blood glucose spectrum. In addition, the smoothing spline method captured the data patterns well.
10

Ampa, Andi Tenri, I. Nyoman Budiantara, and Ismaini Zain. "Selection of Optimal Smoothing Parameters in Mixed Estimator of Kernel and Fourier Series in Semiparametric Regression." Journal of Physics: Conference Series 2123, no. 1 (November 1, 2021): 012035. http://dx.doi.org/10.1088/1742-6596/2123/1/012035.

Abstract:
In this article, we propose a new method for selecting smoothing parameters in semiparametric regression. The method is used in semiparametric regression estimation where the nonparametric component is approximated partly by a multivariable Fourier series and partly by a multivariable kernel. The smoothing parameters are selected using Generalized Cross-Validation (GCV). To assess its performance, the method is applied to drinking-water quality data from the Regional Drinking Water Company (PDAM) of Surabaya, using a Fourier series with trend and a Gaussian kernel. The results show that the method performs well in selecting the optimal smoothing parameters.
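As a rough illustration of GCV-based smoothing-parameter selection, here is the criterion applied to a plain Nadaraya-Watson smoother on synthetic data (a generic stand-in, not the paper's mixed kernel/Fourier estimator; the grid and data are invented):

```python
import numpy as np

def gcv_score(x, y, h):
    """GCV score for a Gaussian Nadaraya-Watson smoother with bandwidth h:
    mean squared residual divided by (1 - trace(S)/n)^2."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    S = K / K.sum(axis=1, keepdims=True)     # smoother matrix: y_hat = S @ y
    resid = y - S @ y
    n = len(y)
    return np.mean(resid ** 2) / (1.0 - np.trace(S) / n) ** 2

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 120))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 120)

grid = [0.01, 0.03, 0.05, 0.1, 0.2, 0.5]
best_h = min(grid, key=lambda h: gcv_score(x, y, h))   # minimum-GCV bandwidth
print("selected bandwidth:", best_h)
```

GCV penalizes both a poor fit (large residuals) and an over-flexible fit (large effective degrees of freedom, trace(S)), so very large bandwidths that flatten the sine signal score badly.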
11

Xu, Zheng. "An alternative circular smoothing method to nonparametric estimation of periodic functions." Journal of Applied Statistics 43, no. 9 (January 8, 2016): 1649–72. http://dx.doi.org/10.1080/02664763.2015.1117590.

12

Xiong, Qiang, Yuan Li, and XingFa Zhang. "The Profile Likelihood Estimation for Single-Index ARCH(p)-M Model." Mathematical Problems in Engineering 2014 (2014): 1–17. http://dx.doi.org/10.1155/2014/189426.

Abstract:
We propose a class of single-index ARCH(p)-M models and investigate estimators of their parametric and nonparametric components. We first estimate the nonparametric component using a local linear smoothing technique and then construct an estimator of the parametric component using the profile quasi-maximum likelihood method. Under regularity conditions, the asymptotic properties of our estimators are established.
13

Lestari, B., Fatmawati, I. N. Budiantara, and N. Chamidah. "Smoothing parameter selection method for multiresponse nonparametric regression model using smoothing spline and Kernel estimators approaches." Journal of Physics: Conference Series 1397 (December 2019): 012064. http://dx.doi.org/10.1088/1742-6596/1397/1/012064.

14

Mariati, Ni Putu Ayu Mirah, I. Nyoman Budiantara, and Vita Ratnasari. "Combination Estimation of Smoothing Spline and Fourier Series in Nonparametric Regression." Journal of Mathematics 2020 (July 1, 2020): 1–10. http://dx.doi.org/10.1155/2020/4712531.

Abstract:
So far, most researchers have developed a single type of estimator in nonparametric regression. In practice, however, data with mixed patterns are often encountered, especially patterns that change over certain subintervals while others follow a recurring pattern within a certain trend. The estimator used for such data is a mixed estimator of smoothing spline and Fourier series, and the regression model combines a smoothing spline component with a Fourier series component. The mixed estimator is obtained in two estimation stages: penalized least squares (PLS) in the first stage and least squares (LS) in the second. The estimators were then applied to simulated data generated from two different functions, polynomial and trigonometric, with a sample size of 100; the whole process was repeated 50 times. The two functions were modeled using a mixture of smoothing spline and Fourier series estimators with various smoothing and oscillation parameters, and the model with minimum generalized cross-validation (GCV) was selected as the best. The simulation results showed that the mixed estimators gave a minimum GCV value of 11.98, with a corresponding mean square error (MSE) of 0.71 and an R2 of 99.48%, indicating that the mixed smoothing spline and Fourier series estimator models the data well.
15

Wu, Zifeng, Zhouxiang Wu, and Laurence R. Rilett. "Innovative Nonparametric Method for Data Outlier Filtering." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 10 (September 18, 2020): 167–76. http://dx.doi.org/10.1177/0361198120945697.

Abstract:
Outlier filtering of empirical travel time data is essential for traffic analyses. Most of the widely applied outlier filtering algorithms are parametric in nature and based on assumed data distributions. The assumption, however, might not hold under unstable traffic conditions. This paper proposes a nonparametric outlier filtering method based on a robust locally weighted regression scatterplot smoothing model. The proposed method identifies outliers based on a data point’s standard residual in the robust local regression model. This approach fits a regression surface with no constraint on parametric distributions and limited influence from outliers. The proposed outlier filtering algorithm can be applied to various data collection technologies and for real-time applications. The performance of the new outlier filtering algorithm is compared with the moving standard deviation method and other traditional filtering algorithms. The test sites include GPS data of an Interstate highway in Indiana and Bluetooth data of an urban arterial roadway in Texas. It is shown that the proposed filtering algorithm has several advantages over the traditional filtering algorithms.
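The residual-based filtering rule can be sketched as follows. For brevity this uses a rolling-median smooth with a MAD-based robust scale rather than the paper's robust locally weighted regression; the data, window, and threshold are invented:

```python
import numpy as np

def flag_outliers(y, window=11, z=3.5):
    """Flag points whose residual from a rolling-median smooth exceeds
    z robust standard deviations (MAD-based scale estimate)."""
    half = window // 2
    smooth = np.array([np.median(y[max(0, i - half):i + half + 1])
                       for i in range(len(y))])
    resid = y - smooth
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad + 1e-12   # robust scale; guard against mad == 0
    return np.abs(resid) / sigma > z

rng = np.random.default_rng(2)
t = np.arange(200)
# Synthetic travel times: slow trend plus noise, with three gross outliers.
y = 60.0 + 5.0 * np.sin(t / 20.0) + rng.normal(0.0, 0.5, 200)
y[[30, 90, 150]] += 25.0
flags = flag_outliers(y)
```

Because both the local fit (median) and the scale (MAD) are robust, the injected outliers inflate neither, so they stand out clearly in the standardized residuals — the same design principle as the robust LOESS approach in the paper.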
16

Filipa Mourão, Maria, Ana Cristina Braga, and Pedro Nuno Oliveira. "CRIB conditional on gender: nonparametric ROC curve." International Journal of Health Care Quality Assurance 27, no. 8 (October 7, 2014): 656–63. http://dx.doi.org/10.1108/ijhcqa-04-2013-0047.

Abstract:
Purpose – The purpose of this paper is to use the kernel method to produce a smoothed receiver operating characteristic (ROC) curve and to show how baby gender can influence the Clinical Risk Index for Babies (CRIB) scale according to survival risks. Design/methodology/approach – To obtain a ROC curve conditioned on covariates, two methods may be followed: first, indirect adjustment, in which the covariate is first modeled within groups and a modified distribution curve is then generated; second, direct smoothing, in which covariate effects are modeled within the ROC curve itself. To verify whether newborn gender and weight affect the classification according to the CRIB scale, the authors use the direct method, with a sample of 160 Portuguese babies. Findings – The smoothing applied to the ROC curves indicates that the curve's original shape does not change when a bandwidth h=0.1 is used. Furthermore, gender seems to be a significant covariate in predicting baby deaths: a higher area under the curve (AUC) was obtained when conditioning on female babies. Practical implications – The challenge is to determine whether gender discriminates between dead and surviving babies. Originality/value – The authors constructed empirical ROC curves for the CRIB data and empirical ROC curves conditioned on gender, calculated the corresponding AUCs, and tested the difference between them. They also constructed smooth ROC curves for the two approaches.
17

Gao, Yuan, Lingju Chen, Jiancheng Jiang, and Honglong You. "Nonparametric Estimation of the Ruin Probability in the Classical Compound Poisson Risk Model." Journal of Risk and Financial Management 13, no. 12 (November 29, 2020): 298. http://dx.doi.org/10.3390/jrfm13120298.

Abstract:
In this paper we study estimating ruin probability which is an important problem in insurance. Our work is developed upon the existing nonparametric estimation method for the ruin probability in the classical risk model, which employs the Fourier transform but requires smoothing on the density of the sizes of claims. We propose a nonparametric estimation approach which does not involve smoothing and thus is free of the bandwidth choice. Compared with the Fourier-transformation-based estimators, our estimators have simpler forms and thus are easier to calculate. We establish asymptotic distributions of our estimators, which allows us to consistently estimate the asymptotic variances of our estimators with the plug-in principle and enables interval estimates of the ruin probability.
18

Wahyuningsih, Trionika Dian, Sri Sulistijowati Handajani, and Diari Indriati. "Penerapan Generalized Cross Validation dalam Model Regresi Smoothing Spline pada Produksi Ubi Jalar di Jawa Tengah." Indonesian Journal of Applied Statistics 1, no. 2 (March 13, 2019): 117. http://dx.doi.org/10.13057/ijas.v1i2.26250.

Abstract:
Sweet potato is a useful plant as a source of carbohydrates and proteins, and is used as animal feed and as an industrial ingredient. According to data from the Badan Pusat Statistik (BPS), sweet potato production in Central Java fluctuates from year to year owing to many factors. When sweet potato production and the factors that affect it, such as harvest area, the allocation of subsidized urea fertilizer, and the allocation of subsidized organic fertilizer, are described as a pattern of relationships, they follow no specific pattern and no particular distribution. A nonparametric regression model can therefore be applied to sweet potato production. The nonparametric approach used in this study is smoothing spline regression, with the smoothing parameter chosen by generalized cross-validation (GCV): the value of the smoothing parameter (λ) is the one minimizing the GCV. The results show that the optimum λ values for harvest area, urea fertilizer, and organic fertilizer are 5.57905e-14, 2.51426e-06, and 3.227217e-13, giving minimum GCV values of 2.29272e-21, 1.38391e-16, and 3.46813e-24, respectively.

Keywords: Sweet potato; nonparametric; smoothing spline; generalized cross validation.
19

Bukhtoyarov, Vladimir Viktorovich, and Vadim Sergeevich Tynchenko. "Design of Computational Models for Hydroturbine Units Based on a Nonparametric Regression Approach with Adaptation by Evolutionary Algorithms." Computation 9, no. 8 (July 28, 2021): 83. http://dx.doi.org/10.3390/computation9080083.

Abstract:
This article deals with the problem of designing regression models for evaluating the parameters of the operation of complex technological equipment—hydroturbine units. A promising approach to the construction of regression models based on nonparametric Nadaraya–Watson kernel estimates is considered. A known problem in applying this approach is to determine the effective values of kernel-smoothing coefficients. Kernel-smoothing factors significantly impact the accuracy of the regression model, especially under conditions of variability of noise and parameters of samples in the input space of models. This fully corresponds to the characteristics of the problem of estimating the parameters of hydraulic turbines. We propose to use the evolutionary genetic algorithm with an addition in the form of a local-search stage to adjust the smoothing coefficients. This ensures the local convergence of the tuning procedure, which is important given the high sensitivity of the quality criterion of the nonparametric model. On a set of test problems, the results were obtained showing a reduction in the modeling error by 20% and 28% for the methods of adjusting the coefficients by the standard and hybrid genetic algorithms, respectively, in comparison with the case of an arbitrary choice of the values of such coefficients. For the task of estimating the parameters of the operation of a hydroturbine unit, a number of promising approaches to constructing regression models based on artificial neural networks, multidimensional adaptive splines, and an evolutionary method of genetic programming were included in the research. The proposed nonparametric approach with a hybrid smoothing coefficient tuning scheme was found to be most effective with a reduction in modeling error of about 5% compared with the best of the alternative approaches considered in the study, which, according to the results of numerical experiments, was the method of multivariate adaptive regression splines.
20

Wang, Haofeng, Hongxia Jin, Xuejun Jiang, and Jingzhi Li. "Model Selection for High Dimensional Nonparametric Additive Models via Ridge Estimation." Mathematics 10, no. 23 (December 1, 2022): 4551. http://dx.doi.org/10.3390/math10234551.

Abstract:
In ultrahigh-dimensional data analysis, nonparametric additive models face increasing challenges in maintaining both computational performance and good statistical properties. To overcome them, we introduce a model selection methodology for high-dimensional nonparametric additive models. Our approach proposes a novel group screening procedure via nonparametric smoothing ridge estimation (GRIE) to gauge the importance of each covariate. It is then combined with the sure screening property of GRIE and the model selection property of the extended Bayesian information criterion (EBIC) to select suitable sub-models in nonparametric additive models. Theoretically, we establish the strong consistency of model selection for the proposed method. Extensive simulations and two real datasets illustrate the outstanding performance of the GRIE-EBIC method.
21

Lappi, Juha. "A multivariate, nonparametric stem-curve prediction method." Canadian Journal of Forest Research 36, no. 4 (April 1, 2006): 1017–27. http://dx.doi.org/10.1139/x05-305.

Abstract:
The paper presents a general method for predicting the stem curve, volume, and merchantable height of a tree if breast height diameter (DBH) is measured, or if DBH and total height (H) as well as diameters at any heights are measured. Estimates for prediction variances are obtained both for diameters and volumes. The approach is multivariate and nonparametric. At the estimation stage, a multivariate model is developed for the total height and a fixed set of diameters: four diameters at absolute heights below breast height and eight diameters at relative distances between the breast height and the top of the tree. The expected values and variances of the dimensions and the correlations between dimensions are expressed as functions of DBH. These functions were estimated using smoothing splines. The model is applied by predicting unobserved dimensions from the observed dimensions using a linear predictor. If total height is not measured, then prediction is done using an approach based on two-point distributions. Correlation of total heights of different trees in the same stand is also modeled, and with this model, measured total heights in a stand can be used to predict unmeasured total heights. The approach provides both a detailed analysis of variation and covariation of stem curves and a practical prediction method.
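The linear-predictor step described above (predicting unobserved stem dimensions from observed ones) is the standard conditional-Gaussian/BLUP formula. A minimal sketch with an invented three-dimensional mean and covariance (the numbers are illustrative, not Lappi's estimates):

```python
import numpy as np

# Illustrative joint moments for three stem diameters (cm) at fixed heights.
mu = np.array([20.0, 18.0, 15.0])
Sigma = np.array([[4.0, 3.0, 2.0],
                  [3.0, 4.0, 2.5],
                  [2.0, 2.5, 3.0]])

def linear_predictor(obs_idx, obs_val):
    """Predict the unobserved dimensions given the observed ones:
    mean + cross-covariance @ inv(obs-covariance) @ (obs - mean),
    with the corresponding prediction covariance."""
    rest = [i for i in range(len(mu)) if i not in obs_idx]
    S11 = Sigma[np.ix_(rest, rest)]
    S12 = Sigma[np.ix_(rest, obs_idx)]
    S22 = Sigma[np.ix_(obs_idx, obs_idx)]
    pred = mu[rest] + S12 @ np.linalg.solve(S22, obs_val - mu[obs_idx])
    var = S11 - S12 @ np.linalg.solve(S22, S12.T)   # prediction covariance
    return pred, var

obs_idx = np.array([0])                       # only DBH observed
pred, var = linear_predictor(obs_idx, np.array([22.0]))
```

With DBH measured at 22.0 (two above its mean), both unobserved diameters are pulled upward in proportion to their covariance with DBH, and the prediction variances shrink below the marginal variances — the "estimates for prediction variances" the abstract refers to.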
22

Lin, Kuo-Chin, Yi-Ju Chen, and Yu Shyr. "A nonparametric smoothing method for assessing GEE models with longitudinal binary data." Statistics in Medicine 27, no. 22 (September 30, 2008): 4428–39. http://dx.doi.org/10.1002/sim.3315.

23

Bester, C. Alan, Timothy G. Conley, Christian B. Hansen, and Timothy J. Vogelsang. "FIXED-b ASYMPTOTICS FOR SPATIALLY DEPENDENT ROBUST NONPARAMETRIC COVARIANCE MATRIX ESTIMATORS." Econometric Theory 32, no. 1 (November 19, 2014): 154–86. http://dx.doi.org/10.1017/s0266466614000814.

Abstract:
This paper develops a method for performing inference using spatially dependent data. We consider test statistics formed using nonparametric covariance matrix estimators that account for heteroskedasticity and spatial correlation (spatial HAC). We provide distributions of commonly used test statistics under “fixed-b” asymptotics, in which HAC smoothing parameters are proportional to the sample size. Under this sequence, spatial HAC estimators are not consistent but converge to nondegenerate limiting random variables that depend on the HAC smoothing parameters, the HAC kernel, and the shape of the spatial region in which the data are located. We illustrate the performance of the “fixed-b” approximation in the spatial context through a simulation example.
24

Poměnková, Jitka. "USA business cycle identification – a comparative study of chosen methods." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 55, no. 6 (2007): 125–32. http://dx.doi.org/10.11118/actaun200755060125.

Abstract:
This paper compares selected methods used for business cycle identification. To this end, a nonparametric method (kernel smoothing) and the Box-Jenkins methodology were used. The comparison is performed by application to economic activity in the USA, 1960/Q1–2007/Q1. The residuals are tested by the Box-Pierce test. The identified trend is discussed in relation to selected historical events that affected the business cycle in the USA.
25

Lin, Kuo-Chin, and Yi-Ju Chen. "A goodness-of-fit test for logistic-normal models using nonparametric smoothing method." Journal of Statistical Planning and Inference 141, no. 2 (February 2011): 1069–76. http://dx.doi.org/10.1016/j.jspi.2010.09.016.

26

Rafajłowicz, Ewaryst, Mirosław Pawlak, and Ansgar Steland. "Nonlinear Image Processing and Filtering: A Unified Approach Based on Vertically Weighted Regression." International Journal of Applied Mathematics and Computer Science 18, no. 1 (March 1, 2008): 49–61. http://dx.doi.org/10.2478/v10006-008-0005-z.

Abstract:
A class of nonparametric smoothing kernel methods for image processing and filtering that possess edge-preserving properties is examined. The proposed approach is a nonlinearly modified version of the classical nonparametric regression estimates utilizing the concept of vertical weighting. The method unifies a number of known nonlinear image filtering and denoising algorithms such as bilateral and steering kernel filters. It is shown that vertically weighted filters can be realized by a structure of three interconnected radial basis function (RBF) networks. We also assess the performance of the algorithm by studying industrial images.
27

Wilhelm, Daniel. "OPTIMAL BANDWIDTH SELECTION FOR ROBUST GENERALIZED METHOD OF MOMENTS ESTIMATION." Econometric Theory 31, no. 5 (October 2, 2014): 1054–77. http://dx.doi.org/10.1017/s026646661400067x.

Abstract:
A two-step generalized method of moments estimation procedure can be made robust to heteroskedasticity and autocorrelation in the data by using a nonparametric estimator of the optimal weighting matrix. This paper addresses the issue of choosing the corresponding smoothing parameter (or bandwidth) so that the resulting point estimate is optimal in a certain sense. We derive an asymptotically optimal bandwidth that minimizes a higher-order approximation to the asymptotic mean-squared error of the estimator of interest. We show that the optimal bandwidth is of the same order as the one minimizing the mean-squared error of the nonparametric plugin estimator, but the constants of proportionality are significantly different. Finally, we develop a data-driven bandwidth selection rule and show, in a simulation experiment, that it may substantially reduce the estimator’s mean-squared error relative to existing bandwidth choices, especially when the number of moment conditions is large.
28

Han, Yu, Ling Luo, Bin Xie, and Chen Xu. "Nonparametric histogram segmentation-based automatic detection of yarns." Textile Research Journal 90, no. 11-12 (November 26, 2019): 1326–41. http://dx.doi.org/10.1177/0040517519890212.

Full text
Abstract:
Detection of yarns in fabric images is a basic task in real-time monitoring of fabric production processes since it relates to yarn density and fabric structure estimation. In this paper, a new detection method is proposed that can automatically and efficiently estimate the locations as well as the numbers of both weft and warp yarns in fabric images. The method has three sequential phases. First, the modulus of discrete partial derivatives at each pixel is projected onto the weft and warp directions to generate the accumulated histograms. Second, for each histogram, a monotone hypothesis of a nonparametric statistical approach is applied to segment the histogram. Third, according to the segmentation result, the locations of each weft and warp yarn are adaptively determined, while the fabric structure is also obtained. Numerical results demonstrate that, compared with classical yarn detection methods based on image smoothing, the proposed method can not only estimate yarn locations and fabric structures more accurately, but also reduce the influence of yarn hairiness.
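The first phase — projecting gradient moduli onto the weft and warp directions — can be sketched in simplified form (finite differences on a grayscale grid; an illustrative stand-in for the paper's exact operator):

```python
def accumulated_histograms(img):
    """Project finite-difference gradient moduli of a grayscale image onto
    the row (weft) and column (warp) directions."""
    h, w = len(img), len(img[0])
    col_hist = [0.0] * w   # peaks indicate warp (vertical) yarn boundaries
    row_hist = [0.0] * h   # peaks indicate weft (horizontal) yarn boundaries
    for i in range(h):
        for j in range(w):
            dx = img[i][j + 1] - img[i][j] if j + 1 < w else 0
            dy = img[i + 1][j] - img[i][j] if i + 1 < h else 0
            col_hist[j] += abs(dx)
            row_hist[i] += abs(dy)
    return row_hist, col_hist
```

The second phase then segments these histograms nonparametrically instead of smoothing the image itself.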
APA, Harvard, Vancouver, ISO, and other styles
29

Meng, Cheng, Xinlian Zhang, Jingyi Zhang, Wenxuan Zhong, and Ping Ma. "More efficient approximation of smoothing splines via space-filling basis selection." Biometrika 107, no. 3 (May 7, 2020): 723–35. http://dx.doi.org/10.1093/biomet/asaa019.

Full text
Abstract:
We consider the problem of approximating smoothing spline estimators in a nonparametric regression model. When applied to a sample of size $n$, the smoothing spline estimator can be expressed as a linear combination of $n$ basis functions, requiring $O(n^3)$ computational time when the number $d$ of predictors is two or more. Such a sizeable computational cost hinders the broad applicability of smoothing splines. In practice, the full-sample smoothing spline estimator can be approximated by an estimator based on $q$ randomly selected basis functions, resulting in a computational cost of $O(nq^2)$. It is known that these two estimators converge at the same rate when $q$ is of order $O\{n^{2/(pr+1)}\}$, where $p\in [1,2]$ depends on the true function and $r > 1$ depends on the type of spline. Such a $q$ is called the essential number of basis functions. In this article, we develop a more efficient basis selection method. By selecting basis functions corresponding to approximately equally spaced observations, the proposed method chooses a set of basis functions with great diversity. The asymptotic analysis shows that the proposed smoothing spline estimator can decrease $q$ to around $O\{n^{1/(pr+1)}\}$ when $d\leq pr+1$. Applications to synthetic and real-world datasets show that the proposed method leads to a smaller prediction error than other basis selection methods.
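A one-dimensional caricature of the space-filling idea — picking basis locations at roughly equally spaced order statistics rather than uniformly at random — might look like this (the paper's method operates in higher dimensions):

```python
def space_filling_basis(x, q):
    """Pick q basis locations at (approximately) equally spaced order
    statistics of the observations: a 1-D analogue of space-filling
    basis selection, maximizing diversity of the selected points."""
    xs = sorted(x)
    n = len(xs)
    idx = [round(i * (n - 1) / (q - 1)) for i in range(q)]
    return [xs[i] for i in idx]
```

Contrast this with drawing q indices uniformly at random, which can leave large gaps and redundant clusters among the selected basis functions.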
APA, Harvard, Vancouver, ISO, and other styles
30

ASTUTI, DEWA AYU DWI, I. GUSTI AYU MADE SRINADI, and MADE SUSILAWATI. "PENDEKATAN REGRESI NONPARAMETRIK DENGAN MENGGUNAKAN ESTIMATOR KERNEL PADA DATA KURS RUPIAH TERHADAP DOLAR AMERIKA SERIKAT." E-Jurnal Matematika 7, no. 4 (November 30, 2018): 305. http://dx.doi.org/10.24843/mtk.2018.v07.i04.p218.

Full text
Abstract:
Nonparametric regression can be applied to several data types, one of which is time series data. The technique used in this method is called a smoothing technique. There are several smoothing techniques; this study used a kernel estimator with seven kernel functions on data of the rupiah exchange rate against the US dollar. The analysis with R shows that, using the minimum Generalized Cross Validation (GCV) criterion, the seven kernel functions produce different optimal bandwidth values but yield similar curve estimates. The conclusion is that, when applying kernel estimators to time series data, choosing the optimal bandwidth is more important than choosing the kernel function.
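A minimal sketch of bandwidth selection by minimum GCV for a Nadaraya-Watson smoother, assuming a Gaussian kernel and a user-supplied bandwidth grid:

```python
import math

def nw_smoother_matrix(x, h):
    """Row i holds the Nadaraya-Watson weights for fitting at x[i]."""
    n = len(x)
    L = []
    for i in range(n):
        w = [math.exp(-0.5 * ((x[i] - x[j]) / h) ** 2) for j in range(n)]
        s = sum(w)
        L.append([wj / s for wj in w])
    return L

def gcv_score(x, y, h):
    """Generalized cross validation: n * RSS / (n - trace(L))^2."""
    n = len(x)
    L = nw_smoother_matrix(x, h)
    fitted = [sum(L[i][j] * y[j] for j in range(n)) for i in range(n)]
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    tr = sum(L[i][i] for i in range(n))
    return n * rss / (n - tr) ** 2

def best_bandwidth(x, y, grid):
    return min(grid, key=lambda h: gcv_score(x, y, h))
```

Running `best_bandwidth` once per kernel function reproduces the study's comparison: the winning bandwidth differs per kernel, but the fitted curves look alike.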
APA, Harvard, Vancouver, ISO, and other styles
31

Ruzgas, Tomas, and Indrė Drulytė. "Kernel Density Estimators for Gaussian Mixture Models." Lietuvos statistikos darbai 52, no. 1 (December 20, 2013): 14–21. http://dx.doi.org/10.15388/ljs.2013.13919.

Full text
Abstract:
The problem of nonparametric estimation of probability density function is considered. The performance of kernel estimators based on various common kernels and a new kernel K (see (14)) with both fixed and adaptive smoothing bandwidth is compared in terms of the symmetric mean absolute percentage error using the Monte Carlo method. The kernel K is everywhere positive but has lighter tails than the Gaussian density. Gaussian mixture models from a collection introduced by Marron and Wand (1992) are taken for Monte Carlo simulations. The adaptive kernel method outperforms the smoothing with a fixed bandwidth in the majority of models. The kernel K shows better performance for Gaussian mixtures with considerably overlapping components and multiple peaks (double claw distribution).
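An adaptive-bandwidth kernel density estimator of the kind compared above can be sketched as follows (Gaussian kernel; the Abramson square-root local bandwidth factors are one standard choice, not necessarily the authors'):

```python
import math

def kde(xs, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate at points xs."""
    c = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return [c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) for x in xs]

def adaptive_kde(xs, data, h0):
    """Adaptive bandwidths: local factor proportional to pilot density^(-1/2),
    normalized by the geometric mean of the pilot values."""
    pilot = kde(data, data, h0)
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))
    lam = [math.sqrt(g / p) for p in pilot]
    n = len(data)
    out = []
    for x in xs:
        s = 0.0
        for d, l in zip(data, lam):
            h = h0 * l
            s += math.exp(-0.5 * ((x - d) / h) ** 2) / (h * math.sqrt(2 * math.pi))
        out.append(s / n)
    return out
```

The adaptive version widens the kernels in sparse regions (tails) and narrows them near peaks, which is why it tends to win on multi-peaked mixtures.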
APA, Harvard, Vancouver, ISO, and other styles
32

Lestari, Budi, Nur Chamidah, Dursun Aydin, and Ersin Yilmaz. "Reproducing Kernel Hilbert Space Approach to Multiresponse Smoothing Spline Regression Function." Symmetry 14, no. 11 (October 23, 2022): 2227. http://dx.doi.org/10.3390/sym14112227.

Full text
Abstract:
In statistical analyses, especially those using a multiresponse regression model approach, a mathematical model that describes a functional relationship between more than one response variables and one or more predictor variables is often involved. The relationship between these variables is expressed by a regression function. In the multiresponse nonparametric regression (MNR) model that is part of the multiresponse regression model, estimating the regression function becomes the main problem, as there is a correlation between the responses such that it is necessary to include a symmetric weight matrix into a penalized weighted least square (PWLS) optimization during the estimation process. This is, of course, very complicated mathematically. In this study, to estimate the regression function of the MNR model, we developed a PWLS optimization method for the MNR model proposed by a previous researcher, and used a reproducing kernel Hilbert space (RKHS) approach based on a smoothing spline to obtain the solution to the developed PWLS optimization. Additionally, we determined the symmetric weight matrix and optimal smoothing parameter, and investigated the consistency of the regression function estimator. We provide an illustration of the effects of the smoothing parameters for the estimation results using simulation data. In the future, the theory generated from this study can be developed within the scope of statistical inference, especially for the purpose of testing hypotheses involving multiresponse nonparametric regression models and multiresponse semiparametric regression models, and can be used to estimate the nonparametric component of a multiresponse semiparametric regression model used to model Indonesian toddlers' standard growth charts.
APA, Harvard, Vancouver, ISO, and other styles
33

Ahmad, Ibrahim A., and Iris S. Ran. "Kernel contrasts: a data-based method of choosing smoothing parameters in nonparametric density estimation." Journal of Nonparametric Statistics 16, no. 5 (October 2004): 671–707. http://dx.doi.org/10.1080/10485250310001652610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Farida, Yuniar, Ida Purwanti, and Nurissaidah Ulinnuha. "COMPARING GAUSSIAN AND EPANECHNIKOV KERNEL OF NONPARAMETRIC REGRESSION IN FORECASTING ISSI (INDONESIA SHARIA STOCK INDEX)." BAREKENG: Jurnal Ilmu Matematika dan Terapan 16, no. 1 (March 21, 2022): 323–32. http://dx.doi.org/10.30598/barekengvol16iss1pp321-330.

Full text
Abstract:
ISSI reflects the movement of sharia stock prices as a whole. Forecasting the share price helps investors determine whether shares should be sold, bought, or retained. This study aims to predict the value of ISSI using nonparametric kernel regression. The kernel regression method is a nonparametric regression method used to estimate conditional expectations using kernel functions; the kernel functions used in this study are the Gaussian and Epanechnikov kernel functions, and the estimator used is the Nadaraya-Watson estimator. The two kernel functions are compared for predicting the value of ISSI over the period from January 2016 to October 2019. The analysis shows that the best method for predicting ISSI values is nonparametric kernel regression using the Nadaraya-Watson estimator and the Gaussian kernel function, with a MAPE value of 15% and a coefficient of determination of 85%. Independent variables that significantly affect ISSI are interest rates, exchange rates, and inflation. Curve smoothing is done using a bandwidth value (h) selected by Silverman's rule, which yielded a bandwidth of 101832.7431.
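Silverman's rule-of-thumb bandwidth and a Nadaraya-Watson prediction with either kernel can be sketched as follows (univariate case only; the study uses several predictors):

```python
import math
import statistics

def silverman_bandwidth(x):
    """Silverman's rule of thumb: 0.9 * min(sd, IQR/1.349) * n^(-1/5)."""
    n = len(x)
    sd = statistics.pstdev(x)
    q1, _, q3 = statistics.quantiles(x, n=4)
    return 0.9 * min(sd, (q3 - q1) / 1.349) * n ** (-0.2)

def nw_predict(x, y, x0, h, kernel="gaussian"):
    """Nadaraya-Watson prediction at x0 with a Gaussian or Epanechnikov kernel."""
    def k(u):
        if kernel == "epanechnikov":
            return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0
        return math.exp(-0.5 * u * u)
    w = [k((x0 - xi) / h) for xi in x]
    s = sum(w)
    if s == 0.0:
        return None  # no observations inside a compact kernel's support
    return sum(wi * yi for wi, yi in zip(w, y)) / s
```

Note that the Epanechnikov kernel has compact support, so a too-small bandwidth can leave a prediction point with no neighbors at all; the Gaussian kernel never does.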
APA, Harvard, Vancouver, ISO, and other styles
35

Adelfio, G. "Kernel estimation and display of a five-dimensional conditional intensity function." Nonlinear Processes in Geophysics 17, no. 2 (April 22, 2010): 237–44. http://dx.doi.org/10.5194/npg-17-237-2010.

Full text
Abstract:
The aim of this paper is to find a convenient and effective method of displaying some second order properties in a neighbourhood of a selected point of the process. The techniques used are based on very general high-dimensional nonparametric smoothing developed to define a more general version of the conditional intensity function introduced in earlier earthquake studies by Vere-Jones (1978).
APA, Harvard, Vancouver, ISO, and other styles
36

Saumard, Matthieu, Marwa Elbouz, Michaël Aron, Ayman Alfalou, and Christian Brosseau. "Enhancing Optical Correlation Decision Performance for Face Recognition by Using a Nonparametric Kernel Smoothing Classification." Sensors 19, no. 23 (November 21, 2019): 5092. http://dx.doi.org/10.3390/s19235092.

Full text
Abstract:
Optical correlation has a rich history in image recognition applications from a database. In practice, it is simple to implement optically using two lenses or numerically using two Fourier transforms. Even if correlation is a reliable method for image recognition, it may jeopardize decision making according to the location, height, and shape of the correlation peak within the correlation plane. Additionally, correlation is very sensitive to image rotation and scale. To overcome these issues, in this study, we propose a method of nonparametric modelling of the correlation plane. Our method is based on a kernel estimation of the regression function used to classify the individual images in the correlation plane. The basic idea is to improve the decision by taking into consideration the energy shape and distribution in the correlation plane. The method relies on the calculation of the Hausdorff distance between the target correlation plane (of the image to recognize) and the correlation planes obtained from the database (the correlation planes computed from the database images). Our method is tested for a face recognition application using the Pointing Head Pose Image Database (PHPID) database. Overall, the results demonstrate good performances of this method compared to competitive methods in terms of good detection and very low false alarm rates.
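The Hausdorff distance between two correlation planes, once each plane has been reduced to a finite set of salient points (an assumption of this sketch; the paper works with the full energy distribution of the plane), is:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2-D point sets."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(P, Q):
        # worst-case distance from a point of P to its nearest point of Q
        return max(min(dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

Classification then amounts to assigning the target plane to the database image whose correlation-plane point set lies at minimal Hausdorff distance.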
APA, Harvard, Vancouver, ISO, and other styles
37

Monchot, Hervé, and Jacques Léchelle. "Statistical nonparametric methods for the study of fossil populations." Paleobiology 28, no. 1 (2002): 55–69. http://dx.doi.org/10.1666/0094-8373(2002)028<0055:snmfts>2.0.co;2.

Full text
Abstract:
The precise knowledge of the number and nature of the species belonging to a fossil assemblage as well as of the structure of each species (e.g., age, sex) is of great importance in paleontology. Mixture analysis based on the method of maximum likelihood is a modern statistical technique that concerns the problem of samples consisting of several components, the composition of which is not known. Nonparametric bootstrap and jackknife techniques are used to calculate a confidence interval for each estimated parameter (prior probability, mean, standard deviation) of each group. The bootstrap method is also used to evaluate mathematically how many groups are present in a sample. Experimental density smoothing using the kernel method appears to be a better solution than the use of histograms for the estimation of a distribution. This paper presents some basic concepts and procedures and discusses some preliminary results concerning sex ratios and mortality profile assessments using bones and tooth metric data of small (Ovis antiqua) and large (Bos primigenius) bovines from European Pleistocene sites.
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Hong-Xia, Han-Sheng Zhong, and Guo-Liang Fan. "Empirical Likelihood for Generalized Functional-Coefficient Regression Models with Multiple Smoothing Variables under Right Censoring Data." Discrete Dynamics in Nature and Society 2020 (April 1, 2020): 1–10. http://dx.doi.org/10.1155/2020/1261426.

Full text
Abstract:
Empirical likelihood as a nonparametric approach has been demonstrated to have many desirable merits for constructing a confidence region. The purpose of this article is to apply the empirical likelihood method to study the generalized functional-coefficient regression models with multiple smoothing variables when the response is subject to random right censoring. The coefficient functions with multiple smoothing variables can accommodate various nonlinear interaction effects between covariates. The empirical log-likelihood ratio of an unknown parameter is constructed and shown to have a standard chi-squared limiting distribution at the true parameter. Based on this, the confidence region of the unknown parameter can be constructed. Simulation studies are carried out to indicate that the empirical likelihood method performs better than a normal approximation-based approach for constructing the confidence region.
APA, Harvard, Vancouver, ISO, and other styles
39

Islamiyati, Anna, Anisa Anisa, Raupong Raupong, Jusmawati Massalesse, Nasrah Sirajang, Sitti Sahriman, and Alfiana Wahyuni. "Estimasi Model Regresi Spline Kubik Tersegmen dengan Metode Penalized Least Square." Al-Khwarizmi : Jurnal Pendidikan Matematika dan Ilmu Pengetahuan Alam 10, no. 2 (October 23, 2022): 139–48. http://dx.doi.org/10.24256/jpmipa.v10i2.3197.

Full text
Abstract:
Nonparametric regression is used for data whose pattern is nonparametric. One of the estimators that can be developed is the segmented cubic spline, which is able to show several segmentation changes in the data. This article examines the estimation of segmented cubic spline nonparametric regression models using the penalized least square estimation criterion. The method involves knot points and smoothing parameters simultaneously. In addition, the model is used to analyze data on BPJS claims based on patient age. The results show that the optimal model has two knot points, namely 26 and 52, with a smoothing parameter of 0.89. There are three segmentation changes in the cubic data, consisting of young people up to 26 years old, 26-52 years old, and 52 years and over.
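A penalized least squares fit of a truncated-power cubic spline, with a ridge penalty on the knot coefficients as a simplified stand-in for the paper's PLS criterion, can be sketched as:

```python
def cubic_basis(xi, knots):
    """Truncated-power cubic basis: 1, x, x^2, x^3, (x - k)_+^3."""
    return [1.0, xi, xi ** 2, xi ** 3] + [max(0.0, xi - k) ** 3 for k in knots]

def fit_penalized_spline(x, y, knots, lam):
    """Minimize ||y - Xb||^2 + lam * (sum of squared knot coefficients),
    i.e. solve (X'X + lam * D) b = X'y with D penalizing knot terms only."""
    X = [cubic_basis(xi, knots) for xi in x]
    p = 4 + len(knots)
    A = [[sum(row[r] * row[c] for row in X) for c in range(p)] for r in range(p)]
    for j in range(4, p):
        A[j][j] += lam
    b = [sum(X[i][r] * y[i] for i in range(len(x))) for r in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta
```

With two knots (e.g. 26 and 52 on a rescaled age axis) the fitted curve changes cubic behavior at each knot while the penalty `lam` controls how sharply.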
APA, Harvard, Vancouver, ISO, and other styles
40

Hemingway, Halli, and Mark Kimsey. "Estimating Forest Productivity Using Site Characteristics, Multipoint Measures, and a Nonparametric Approach." Forest Science 66, no. 6 (August 15, 2020): 645–52. http://dx.doi.org/10.1093/forsci/fxaa023.

Full text
Abstract:
Understanding the productivity of forestland is essential in sustainable management of forest ecosystems. The most common measure of site productivity is breast height–age site index (BHASI). BHASI has limitations as a productivity measure and can compound error in predictive models. We explored the accuracy of productivity predictions using an alternative productivity measure (10-meter site index) and a nonparametric approach. An orthogonal sampling design ensured samples were collected across the range of conditions known to influence Douglas-fir (Pseudotsuga menziesii var. glauca) height-growth rates. Using climate, soil, and topographic data along with 10-meter site index measurements, we compared five possible models to estimate forest productivity. Model parameters, performance, and predictions were compared. Twelve validation sites were used to test the accuracy of model predictions. Model performance was significantly improved when smoothing span values were optimized and elevation was added as a predictor. A four-predictor nonparametric model with a bias-corrected Akaike information criterion–optimized smoothing span value produced the most accurate results and was used to produce forest productivity maps for the study area. The low resolution of currently available climatic data and the complex nature of the study area landscape necessitate a topographic variable for accurate productivity predictions. Study Implications: Defining and understanding forest productivity is of interest to a wide variety of natural resource professionals including ecologists, climate change experts, forest biometricians, and forest managers. A new method of defining forest productivity using multipoint height-age pairs at 10 and 20 meters and calculated growth rates, combined with an appropriate landscape-scale stratification and a nonparametric approach, provides accurate productivity estimates. This method is more widely applicable and more precise for specific locations than previous productivity estimation methods. Better productivity and tree growth information will provide more accurate estimates of future forest condition and structure.
APA, Harvard, Vancouver, ISO, and other styles
41

Meira-Machado, Luís, Carmen Cadarso-Suárez, Francisco Gude, and Artur Araújo. "smoothHR: An R Package for Pointwise Nonparametric Estimation of Hazard Ratio Curves of Continuous Predictors." Computational and Mathematical Methods in Medicine 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/745742.

Full text
Abstract:
The Cox proportional hazards regression model has become the traditional choice for modeling survival data in medical studies. To introduce flexibility into the Cox model, several smoothing methods may be applied, and approaches based on splines are the most frequently considered in this context. To better understand the effects that each continuous covariate has on the outcome, results can be expressed in terms of splines-based hazard ratio (HR) curves, taking a specific covariate value as reference. Despite the potential advantages of using spline smoothing methods in survival analysis, there is currently no analytical method in the R software to choose the optimal degrees of freedom in multivariable Cox models (with two or more nonlinear covariate effects). This paper describes an R package, called smoothHR, that allows the computation of pointwise estimates of the HRs—and their corresponding confidence limits—of continuous predictors introduced nonlinearly. In addition the package provides functions for choosing automatically the degrees of freedom in multivariable Cox models. The package is available from the R homepage. We illustrate the use of the key functions of the smoothHR package using data from a study on breast cancer and data on acute coronary syndrome, from Galicia, Spain.
APA, Harvard, Vancouver, ISO, and other styles
42

Mulier, Filip, and Vladimir Cherkassky. "Self-Organization as an Iterative Kernel Smoothing Process." Neural Computation 7, no. 6 (November 1995): 1165–77. http://dx.doi.org/10.1162/neco.1995.7.6.1165.

Full text
Abstract:
Kohonen's self-organizing map, when described in a batch processing mode, can be interpreted as a statistical kernel smoothing problem. The batch SOM algorithm consists of two steps. First, the training data are partitioned according to the Voronoi regions of the map unit locations. Second, the units are updated by taking weighted centroids of the data falling into the Voronoi regions, with the weighting function given by the neighborhood. Then, the neighborhood width is decreased and steps 1, 2 are repeated. The second step can be interpreted as a statistical kernel smoothing problem where the neighborhood function corresponds to the kernel and neighborhood width corresponds to kernel span. To determine the new unit locations, kernel smoothing is applied to the centroids of the Voronoi regions in the topological space. This interpretation leads to some new insights concerning the role of the neighborhood and dimensionality reduction. It also strengthens the algorithm's connection with the Principal Curve algorithm. A generalized self-organizing algorithm is proposed, where the kernel smoothing step is replaced with an arbitrary nonparametric regression method.
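One batch-SOM iteration for a one-dimensional map, written explicitly as the Voronoi-partition step followed by kernel smoothing of the region centroids, might look like this (a sketch with a Gaussian neighborhood; scalar data for brevity):

```python
import math

def batch_som_step(data, units, sigma):
    """One batch-SOM update for a 1-D map: partition the data into the
    Voronoi regions of the units, then kernel-smooth the region centroids
    over the map topology (Gaussian neighborhood of width sigma)."""
    k = len(units)
    sums = [0.0] * k
    counts = [0] * k
    for x in data:
        j = min(range(k), key=lambda u: abs(x - units[u]))  # Voronoi assignment
        sums[j] += x
        counts[j] += 1
    new_units = []
    for i in range(k):
        num = den = 0.0
        for j in range(k):
            if counts[j] == 0:
                continue
            w = math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2)) * counts[j]
            num += w * (sums[j] / counts[j])   # kernel-weighted centroid
            den += w
        new_units.append(num / den if den else units[i])
    return new_units
```

Iterating this step while shrinking `sigma` is the batch SOM; replacing the smoothing loop with any other nonparametric regression gives the paper's generalized algorithm.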
APA, Harvard, Vancouver, ISO, and other styles
43

Orzeszko, Witold. "Several Aspects of Nonparametric Prediction of Nonlinear Time Series." Przegląd Statystyczny 65, no. 1 (January 30, 2019): 7–24. http://dx.doi.org/10.5604/01.3001.0014.0522.

Full text
Abstract:
Nonparametric regression is an alternative to the parametric approach, which consists of applying parametric models, i.e. models of a certain functional form with a fixed number of parameters. As opposed to the parametric approach, nonparametric models have a general form, which can be approximated increasingly precisely as the sample size grows. Hence they do not impose such restrictive assumptions about the form of the modelled dependencies and, in consequence, they are more flexible and let the data speak for themselves. That is why they are a promising tool for forecasting, especially in the case of nonlinear time series. One of the most popular nonparametric regression methods is Nadaraya-Watson kernel smoothing. Nowadays, there are a number of variations of this method, like the local-linear kernel estimator, which combines local linear approximation and the kernel estimator. In the paper a Monte Carlo study is conducted in order to assess the usefulness of kernel smoothers for nonlinear time series forecasting and to compare them with other forecasting techniques.
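The local-linear kernel estimator mentioned above has a closed form at each evaluation point; a sketch with a Gaussian kernel (unlike Nadaraya-Watson, it reproduces linear functions exactly and has less boundary bias):

```python
import math

def local_linear(x, y, x0, h):
    """Local-linear kernel regression estimate at x0 (Gaussian kernel):
    the intercept of a weighted straight-line fit centered at x0."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    num = sum(wi * (s2 - s1 * (xi - x0)) * yi for wi, xi, yi in zip(w, x, y))
    den = sum(w) * s2 - s1 ** 2
    return num / den
```

For time-series forecasting, `x` would hold lagged values and `x0` the most recent lag vector (here one-dimensional for simplicity).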
APA, Harvard, Vancouver, ISO, and other styles
44

Ichiba, Tomoyuki, and Constantinos Kardaras. "Efficient Estimation of One-Dimensional Diffusion First Passage Time Densities via Monte Carlo Simulation." Journal of Applied Probability 48, no. 3 (September 2011): 699–712. http://dx.doi.org/10.1239/jap/1316796908.

Full text
Abstract:
We propose a method for estimating first passage time densities of one-dimensional diffusions via Monte Carlo simulation. Our approach involves a representation of the first passage time density as the expectation of a functional of the three-dimensional Brownian bridge. As the latter process can be simulated exactly, our method leads to almost unbiased estimators. Furthermore, since the density is estimated directly, a convergence of order 1 / √N, where N is the sample size, is achieved, which is in sharp contrast to the slower nonparametric rates achieved by kernel smoothing of cumulative distribution functions.
APA, Harvard, Vancouver, ISO, and other styles
45

Ichiba, Tomoyuki, and Constantinos Kardaras. "Efficient Estimation of One-Dimensional Diffusion First Passage Time Densities via Monte Carlo Simulation." Journal of Applied Probability 48, no. 03 (September 2011): 699–712. http://dx.doi.org/10.1017/s0021900200008251.

Full text
Abstract:
We propose a method for estimating first passage time densities of one-dimensional diffusions via Monte Carlo simulation. Our approach involves a representation of the first passage time density as the expectation of a functional of the three-dimensional Brownian bridge. As the latter process can be simulated exactly, our method leads to almost unbiased estimators. Furthermore, since the density is estimated directly, a convergence of order 1 / √N, where N is the sample size, is achieved, which is in sharp contrast to the slower nonparametric rates achieved by kernel smoothing of cumulative distribution functions.
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Meng, Yixiao Sun, and Halbert White. "A FLEXIBLE NONPARAMETRIC TEST FOR CONDITIONAL INDEPENDENCE." Econometric Theory 32, no. 6 (September 2, 2015): 1434–82. http://dx.doi.org/10.1017/s0266466615000286.

Full text
Abstract:
This paper proposes a nonparametric test for conditional independence that is easy to implement, yet powerful in the sense that it is consistent and achieves n−1/2 local power. The test statistic is based on an estimator of the topological “distance” between restricted and unrestricted probability measures corresponding to conditional independence or its absence. The distance is evaluated using a family of Generically Comprehensively Revealing (GCR) functions, such as the exponential or logistic functions, which are indexed by nuisance parameters. The use of GCR functions makes the test able to detect any deviation from the null. We use a kernel smoothing method when estimating the distance. An integrated conditional moment (ICM) test statistic based on these estimates is obtained by integrating out the nuisance parameters. We simulate the critical values using a conditional simulation approach. Monte Carlo experiments show that the test performs well in finite samples. As an application, we test an implication of the key assumption of unconfoundedness in the context of estimating the returns to schooling.
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Qian, Kyoung-Jae Won, and Hongzhe Li. "Nonparametric Tests for Differential Histone Enrichment with ChIP-Seq Data." Cancer Informatics 14s1 (January 2015): CIN.S13972. http://dx.doi.org/10.4137/cin.s13972.

Full text
Abstract:
Chromatin immunoprecipitation sequencing (ChIP-seq) is a powerful method for analyzing protein interactions with DNA. It can be applied to identify the binding sites of transcription factors (TFs) and genomic landscape of histone modification marks (HMs). Previous research has largely focused on developing peak-calling procedures to detect the binding sites for TFs. However, these procedures may fail when applied to ChIP-seq data of HMs, which have diffuse signals and multiple local peaks. In addition, it is important to identify genes with differential histone enrichment regions between two experimental conditions, such as different cellular states or different time points. Parametric methods based on Poisson/negative binomial distribution have been proposed to address this differential enrichment problem and most of these methods require biological replications. However, many ChIP-seq data usually have a few or even no replicates. We propose a nonparametric method to identify the genes with differential histone enrichment regions even without replicates. Our method is based on nonparametric hypothesis testing and kernel smoothing in order to capture the spatial differences in histone-enriched profiles. We demonstrate the method using ChIP-seq data on a comparative epigenomic profiling of adipogenesis of murine adipose stromal cells and the Encyclopedia of DNA Elements (ENCODE) ChIP-seq data. Our method identifies many genes with differential H3K27ac histone enrichment profiles at gene promoter regions between proliferating preadipocytes and mature adipocytes in murine 3T3-L1 cells. The test statistics also correlate well with the gene expression changes and are predictive of them, indicating that the identified differentially enriched regions are indeed biologically meaningful.
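A toy version of the idea — kernel-smooth two positional profiles and compare them under a permutation null — could be sketched as follows; the position-wise label swapping is a simplification for illustration, not the authors' read-level permutation scheme:

```python
import math
import random

def smooth(profile, h):
    """Gaussian-kernel smoothing of a positional read-count profile."""
    n = len(profile)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            w = math.exp(-0.5 * ((i - j) / h) ** 2)
            num += w * profile[j]
            den += w
        out.append(num / den)
    return out

def permutation_pvalue(a, b, h=3.0, n_perm=200, seed=0):
    """Test statistic: max absolute difference of the smoothed profiles.
    Null distribution: randomly swap the two conditions at each position."""
    rng = random.Random(seed)
    def stat(u, v):
        su, sv = smooth(u, h), smooth(v, h)
        return max(abs(p - q) for p, q in zip(su, sv))
    obs = stat(a, b)
    hits = 0
    for _ in range(n_perm):
        pa, pb = [], []
        for p, q in zip(a, b):
            if rng.random() < 0.5:
                p, q = q, p
            pa.append(p)
            pb.append(q)
        if stat(pa, pb) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Smoothing before comparing is what lets the test pick up diffuse, spatially extended differences that per-position counts would miss.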
APA, Harvard, Vancouver, ISO, and other styles
48

W, Anies Yulinda, Trias Novia L., Melati Tegarina, and Nur Chamidah. "ANALISIS PENGARUH ANGKA KEMATIAN BAYI TERHADAP ANGKA HARAPAN HIDUP DI PROVINSI JAWA TIMUR BERDASARKAN ESTIMATOR LEAST SQUARE SPINE." Contemporary Mathematics and Applications (ConMathA) 1, no. 1 (August 9, 2019): 56. http://dx.doi.org/10.20473/conmatha.v1i1.14775.

Full text
Abstract:
Life expectancy can be used to evaluate the government's performance in improving the welfare of the population in the health sector. Life expectancy is closely related to the infant mortality rate: theoretically, a decrease in the infant mortality rate will cause an increase in life expectancy. A statistical method that can be used to model life expectancy is the nonparametric regression model based on the least square spline estimator. This method provides high flexibility to accommodate the pattern of the data by using a smoothing technique. The best estimated model is the order-one spline model with one knot, based on a minimum generalized cross validation (GCV) value of 0.607. Each one-unit increase in the infant mortality rate decreases life expectancy by 0.2314 when the infant mortality rate is less than 27, and by 0.0666 when it is greater than or equal to 27. In addition, based on a mean square error (MSE) of 0.492 and an R2 value of 76.59% for the nonparametric model approach, compared with an MSE of 0.634 and an R2 value of 71.8% for the parametric model approach, we conclude that the nonparametric model approach based on the least square spline estimator is better than the parametric model approach.
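An order-one (linear) spline with one knot can be fitted by ordinary least squares on a truncated-power basis; the coefficients in the test below mirror the slopes reported in the abstract, used purely as illustrative values:

```python
def det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def fit_linear_spline(x, y, knot):
    """OLS fit of y = b0 + b1*x + b2*(x - knot)_+ via the normal equations,
    solved with Cramer's rule (only 3 parameters)."""
    X = [[1.0, xi, max(0.0, xi - knot)] for xi in x]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    c = [sum(X[k][i] * y[k] for k in range(len(x))) for i in range(3)]
    D = det3(A)
    beta = []
    for j in range(3):
        Aj = [[c[i] if col == j else A[i][col] for col in range(3)] for i in range(3)]
        beta.append(det3(Aj) / D)
    return beta
```

The coefficient `b2` is the change in slope at the knot, so the post-knot slope is `b1 + b2`.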
APA, Harvard, Vancouver, ISO, and other styles
49

Aljobouri, Hadeel K., Hussain A. Jaber, and Ilyas Çankaya. "Optimal Analysis of Functional Magnetic Resonance (fMRI) Using Nonparametric Permutation Method." NeuroQuantology 20, no. 4 (April 13, 2022): 113–27. http://dx.doi.org/10.14704/nq.2022.20.4.nq22101.

Full text
Abstract:
The non-parametric permutation approach is an intuitive and flexible approach to the statistical analysis of fMRI data compared to parametric techniques. It can also be used to verify the validity of less computationally expensive parametric approaches. The methodological concepts and comparative features of non-parametric permutation methods have been discussed by various researchers; however, no accessible explication of the method exists, and no freely distributed implementation is available, so the technique has seen little practical use. In the current work, the App Designer Statistical Parametric and Non-parametric Mapping of Functional Magnetic Resonance Imaging (SPnPM fMRI) tool is proposed to address these issues. The SPnPM fMRI toolbox is an open-source package designed for crucial comparison analysis. The comparison is performed between the statistical parametric and non-parametric mapping of the second-level analysis of fMRI data: a t-test utilizing Random Field Theory (RFT), a smoothed pseudo-t-test applying a permutation test, and a t-statistic utilizing a permutation test without smoothing are carried out, and the corresponding parametric results are qualitatively and quantitatively compared. The outcomes on real fMRI data show that the non-parametric approach detected more suprathreshold voxels than the conventional approach. For example, the activation of the anterior cingulate at coordinates (3, 15, 45) comprised 402 voxels with the non-parametric pseudo-t-test, compared with 75 and 28 voxels at the same coordinates for the non-parametric t-test and the parametric t-test using random field theory, respectively. The non-parametric approach is also more suitable for small sample sizes.
APA, Harvard, Vancouver, ISO, and other styles
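The core idea of the permutation approach described in this abstract can be illustrated with a minimal two-sample permutation test on a mean difference. This is a generic sketch, not code from the SPnPM fMRI toolbox; the function name and parameters are illustrative.

```python
import numpy as np

def permutation_ttest(group_a, group_b, n_perm=10000, seed=0):
    """One-sided two-sample permutation test on the difference of means.

    Repeatedly relabels the pooled observations and counts how often the
    permuted statistic is at least as large as the observed one.
    Returns the observed statistic and its permutation p-value.
    """
    rng = np.random.default_rng(seed)
    data = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = group_a.mean() - group_b.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(data)  # random relabelling of the pooled sample
        diff = data[:n_a].mean() - data[n_a:].mean()
        if diff >= observed:
            count += 1
    # include the observed labelling itself for an unbiased p-value
    p_value = (count + 1) / (n_perm + 1)
    return observed, p_value
```

Because the null distribution is built from the data itself, no Gaussianity or Random Field Theory assumptions are needed, which is why such tests remain valid for the small sample sizes mentioned in the abstract.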
50

Nikulin, V. S., and A. I. Pestunov. "Approximation of the Distribution Density of Time Between Failures of a Computing System by the Rosenblatt-Parzen Nonparametric Method." Vestnik Voronezhskogo gosudarstvennogo tekhnicheskogo universiteta [Bulletin of Voronezh State Technical University], no. 1 (March 14, 2022): 36–41. http://dx.doi.org/10.36622/vstu.2022.18.1.004.

Full text
Abstract:
Determining the causes of equipment failures is one of the main tasks in organizing an experimental assessment of the reliability of objects. This task is best addressed by statistical estimation of the distribution density of random variables. In reliability theory, the random variables of interest are time indicators such as the moments of equipment failure and recovery, the operating time between failures, and the time required to restore equipment after a failure. In this work, we study the approximation of the distribution density of the operating time between failures using operational data from the composite equipment of a computing system. This equipment is highly reliable and is characterized by a small number of failures and by the absence of a priori information about the distribution law of the random variables. For these reasons, the nonparametric Rosenblatt-Parzen method is considered as the approximation method. In this method, the kernel function governs the smoothness, while the smoothing parameter governs the accuracy, of the density approximation. The Gaussian function, previously considered in works on reliability theory, is chosen as the kernel. Based on the Hooke-Jeeves method, we developed an algorithm for finding the optimal smoothing parameter responsible for the approximation accuracy. The effect of the smoothing parameter and the sample size on the approximation accuracy was assessed by analysing estimation errors in the L1-space metric together with a graphical representation. The study shows that using the optimal smoothing parameter across samples of different sizes reduces the final approximation error.
APA, Harvard, Vancouver, ISO, and other styles
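The Rosenblatt-Parzen estimator with a Gaussian kernel, as described in this abstract, can be sketched as follows. This is a generic illustration, not the authors' code; the paper's Hooke-Jeeves search for the smoothing parameter is replaced here by a simple leave-one-out grid search for brevity, and all names are illustrative.

```python
import numpy as np

def rosenblatt_parzen(x, sample, h):
    """Rosenblatt-Parzen density estimate with a Gaussian kernel.

    f_hat(x) = (1 / (n*h)) * sum_i K((x - X_i) / h), K = standard normal pdf.
    """
    x = np.asarray(x, dtype=float)[:, None]
    u = (x - np.asarray(sample)[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    return k.sum(axis=1) / (len(sample) * h)

def best_bandwidth(sample, candidates):
    """Pick the smoothing parameter h maximizing leave-one-out log-likelihood.

    A stand-in for the Hooke-Jeeves optimization used in the paper.
    """
    best_h, best_score = None, -np.inf
    for h in candidates:
        score = 0.0
        for i in range(len(sample)):
            loo = np.delete(sample, i)  # leave observation i out
            dens = rosenblatt_parzen([sample[i]], loo, h)[0]
            score += np.log(max(dens, 1e-300))
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```

The kernel choice controls smoothness of the estimate, while h trades bias against variance, which mirrors the abstract's point that the smoothing parameter governs the approximation accuracy.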