Dissertations / Theses on the topic 'Generalised smoothing'


Consult the top 26 dissertations / theses for your research on the topic 'Generalised smoothing.'


1

Li, Yuyi. "Empirical likelihood with applications in time series." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/empirical-likelihood-with-applications-in-time-series(29c74808-f784-4306-8df9-26f45b30b553).html.

Full text
Abstract:
This thesis investigates the statistical properties of Kernel Smoothed Empirical Likelihood (KSEL, e.g. Smith, 1997 and 2004) estimator and various associated inference procedures in weakly dependent data. New tests for structural stability are proposed and analysed. Asymptotic analyses and Monte Carlo experiments are applied to assess these new tests, theoretically and empirically. Chapter 1 reviews and discusses some estimation and inferential properties of Empirical Likelihood (EL, Owen, 1988) for identically and independently distributed data and compares it with Generalised EL (GEL), GMM and other estimators. KSEL is extensively treated, by specialising kernel-smoothed GEL in the working paper of Smith (2004), some of whose results and proofs are extended and refined in Chapter 2. Asymptotic properties of some tests in Smith (2004) are also analysed under local alternatives. These special treatments on KSEL lay the foundation for analyses in Chapters 3 and 4, which would not otherwise follow straightforwardly. In Chapters 3 and 4, subsample KSEL estimators are proposed to assist the development of KSEL structural stability tests to diagnose for a given breakpoint and for an unknown breakpoint, respectively, based on relevant work using GMM (e.g. Hall and Sen, 1999; Andrews and Fair, 1988; Andrews and Ploberger, 1994). It is also original in these two chapters that moment functions are allowed to be kernel-smoothed after or before the sample split, and it is rigorously proved that these two smoothing orders are asymptotically equivalent. The overall null hypothesis of structural stability is decomposed according to the identifying and overidentifying restrictions, as Hall and Sen (1999) advocate in GMM, leading to a more practical and precise structural stability diagnosis procedure. In this framework, these KSEL structural stability tests are also proved via asymptotic analysis to be capable of identifying different sources of instability, arising from parameter value change or violation of overidentifying restrictions. The analyses show that these KSEL tests follow the same limit distributions as their counterparts using GMM. To examine the finite-sample performance of KSEL structural stability tests in comparison to GMM's, Monte Carlo simulations are conducted in Chapter 5 using a simple linear model considered by Hall and Sen (1999). This chapter details some relevant computational algorithms and permits different smoothing order, kernel type and prewhitening options. In general, simulation evidence seems to suggest that compared to GMM's tests, these newly proposed KSEL tests often perform comparably. However, in some cases, the sizes of these can be slightly larger, and the false null hypotheses are rejected with much higher frequencies. Thus, these KSEL based tests are valid theoretical and practical alternatives to GMM's.
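
The kernel-smoothing step that underlies KSEL is easy to illustrate: each moment function is replaced by a local weighted average over neighbouring time points before the empirical likelihood criterion is formed. A minimal sketch with a flat truncated kernel; the bandwidth, weights and names are illustrative, not the thesis's exact choices:

```python
import numpy as np

def kernel_smooth_moments(g, bandwidth):
    """Kernel-smooth a T x m array of moment functions over time with a flat
    truncated kernel, so each smoothed moment is a local average of
    neighbouring observations (a stand-in for the kernels of Smith, 2004)."""
    T, _ = g.shape
    S = int(bandwidth)
    g_smooth = np.empty_like(g, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - S), min(T, t + S + 1)
        g_smooth[t] = g[lo:hi].mean(axis=0)   # equal weights within the window
    return g_smooth

# usage sketch: g[t] could be z_t * (y_t - x_t @ beta) for instruments z_t;
# the smoothed moments then enter the (G)EL criterion in place of g
```
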
2

Sheppard, Therese. "Extending covariance structure analysis for multivariate and functional data." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/extending-covariance-structure-analysis-for-multivariate-and-functional-data(e2ad7f12-3783-48cf-b83c-0ca26ef77633).html.

Full text
Abstract:
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate to use under the null hypothesis of equal covariance matrices where the null distribution of the test statistic is based on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques to testing homogeneity of covariance matrices when it is both inappropriate to pool the data into one single population as in the pooled bootstrap procedure and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show, by simulation, that the normal theory log-likelihood ratio test statistic is less viable compared with our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
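
The pooled bootstrap of Zhang and Boos mentioned above can be sketched compactly; the thesis's alternative schemes change the resampling step, but the statistic and the overall loop look similar. A rough sketch with an unadjusted likelihood-ratio-type statistic and an illustrative set-up:

```python
import numpy as np

def lrt_stat(groups):
    """Likelihood-ratio-type statistic for equality of covariance matrices
    (Box's M without the small-sample correction factor)."""
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    S_pool = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - len(groups))
    stat = (ns.sum() - len(groups)) * np.linalg.slogdet(S_pool)[1]
    for n, S in zip(ns, covs):
        stat -= (n - 1) * np.linalg.slogdet(S)[1]
    return stat

def pooled_bootstrap_pvalue(groups, n_boot=999, seed=0):
    """Pooled bootstrap null distribution (Zhang-Boos style): centre each
    group, pool the centred data, and resample the original group sizes."""
    rng = np.random.default_rng(seed)
    obs = lrt_stat(groups)
    centred = np.vstack([g - g.mean(axis=0) for g in groups])
    count = 0
    for _ in range(n_boot):
        boot = [centred[rng.integers(0, centred.shape[0], g.shape[0])] for g in groups]
        count += lrt_stat(boot) >= obs
    return (count + 1) / (n_boot + 1)
```
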
3

Baker, Jannah F. "Bayesian spatiotemporal modelling of chronic disease outcomes." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/104455/1/Jannah_Baker_Thesis.pdf.

Full text
Abstract:
This thesis contributes to Bayesian spatial and spatiotemporal methodology by investigating techniques for spatial imputation and joint disease modelling, and identifies high-risk individual profiles and geographic areas for type II diabetes mellitus (DMII) outcomes. DMII and related chronic conditions including hypertension, coronary arterial disease, congestive heart failure and chronic obstructive pulmonary disease are examples of ambulatory care sensitive conditions for which hospitalisation for complications is potentially avoidable with quality primary care. Bayesian spatial and spatiotemporal studies are useful for identifying small areas that would benefit from additional services to detect and manage these conditions early, thus avoiding costly sequelae.
4

Utami, Zuliana Sri. "Penalized regression methods with application to generalized linear models, generalized additive models, and smoothing." Thesis, University of Essex, 2017. http://repository.essex.ac.uk/20908/.

Full text
Abstract:
Recently, penalized regression has been used to deal with problems encountered in maximum likelihood estimation, such as correlated parameters and a large number of predictors. The main issue in this type of regression is how to select the optimal model. In this thesis, Schall's algorithm is proposed for automatic selection of the penalty weight. The algorithm has two steps. First, the coefficient estimates are obtained with an arbitrary penalty weight. Second, an estimate of the penalty weight λ is calculated as the ratio of the error variance to the coefficient variance. The iteration is continued from step one until the estimate of the penalty weight converges. The computational cost is small because the optimal penalty weight can be obtained within a small number of iterations. In this thesis, Schall's algorithm is investigated for ridge regression, lasso regression and two-dimensional histogram smoothing. The proposed algorithm is applied to real and simulated data sets. In addition, a new algorithm for lasso regression is proposed. The performance of the algorithm was almost comparable across all applications. Schall's algorithm can be an efficient algorithm for selecting the penalty weight.
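
The Schall-type update is simple to state in code. A minimal sketch for the ridge case, assuming a standard least-squares set-up; the variance-component updates and stopping rule below are illustrative rather than the thesis's exact implementation:

```python
import numpy as np

def schall_ridge(X, y, lam=1.0, tol=1e-6, max_iter=100):
    """Schall-type iteration for the ridge penalty weight: alternate between
    the penalized fit and the update lam = sigma2_error / sigma2_coefficients."""
    n, p = X.shape
    for _ in range(max_iter):
        A = X.T @ X + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ y)
        edf = np.trace(X @ np.linalg.solve(A, X.T))   # effective degrees of freedom
        sigma2_e = np.sum((y - X @ beta) ** 2) / (n - edf)
        sigma2_b = np.sum(beta ** 2) / edf
        lam_new = sigma2_e / sigma2_b
        if abs(lam_new - lam) < tol * lam:
            return beta, lam_new
        lam = lam_new
    return beta, lam
```

According to the abstract, the same idea is carried over in the thesis to lasso regression and two-dimensional histogram smoothing by swapping the penalized fit in the first step.
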
5

Wang, Xiaohui 1969. "Combining the generalized linear model and spline smoothing to analyze examination data." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=26176.

Full text
Abstract:
This thesis aims to propose and specify a way to analyze test data. In order to analyze test data, it is necessary to estimate a special binary regression relation, called the item characteristic curve. The item characteristic curve is a relationship between the probabilities of examinees answering an item correctly and their abilities.
The statistical tools used in this thesis are generalized linear models and spline smoothing. The method tries to combine the advantages of both parametric modeling and nonparametric regression to get a good estimate of the item characteristic curve. A special basis for spline smoothing is proposed which is motivated by the properties of the item characteristic curve. Based on the estimate of the item characteristic curve by this method, a more stable estimate of the item information function can be generated. Some illustrative analyses of simulated data are presented. The results seem to indicate that this method does have the advantages of both parametric modeling and nonparametric regression: it is faster to compute and more flexible than methods using parametric models, for example, the three-parameter model in psychometrics, and on the other hand, it generates more stable estimates of derivatives than purely nonparametric regression.
6

Cao, Jiguo. "Generalized profiling method and the applications to adaptive penalized smoothing, generalized semiparametric additive models and estimating differential equations." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102483.

Full text
Abstract:
Many statistical models involve three distinct groups of variables: local or nuisance parameters, global or structural parameters, and complexity parameters. In this thesis, we introduce the generalized profiling method to estimate these statistical models, which treats one group of parameters as an explicit or implicit function of other parameters. The dimensionality of the parameter space is reduced, and the optimization surface becomes smoother. The Newton-Raphson algorithm is applied to estimate these three distinct groups of parameters in three levels of optimization, with the gradients and Hessian matrices written out analytically using the Implicit Function Theorem if necessary, and allowing for different criteria at each level of optimization. Moreover, variances of global parameters are estimated by the Delta method and include the variation coming from complexity parameters. We also propose three applications of the generalized profiling method.
First, penalized smoothing is extended by allowing for a functional smoothing parameter, which is adaptive to the geometry of the underlying curve; this is called adaptive penalized smoothing. In the first level of optimization, the smoothing coefficients are local parameters, estimated by minimizing the sum of squared errors, conditional on the functional smoothing parameter. In the second level, the functional smoothing parameter is a complexity parameter, estimated by minimizing generalized cross-validation (GCV), treating the smoothing coefficients as explicit functions of the functional smoothing parameter. Adaptive penalized smoothing is shown to obtain better estimates for fitting functions and their derivatives.
Next, the generalized semiparametric additive models are estimated by three levels of optimization, allowing response variables with any kind of distribution. In the first level, the nonparametric functional parameters are nuisance parameters, estimated by maximizing the regularized likelihood function, conditional on the linear coefficients and the smoothing parameter. In the second level, the linear coefficients are structural parameters, estimated by maximizing the likelihood function with the nonparametric functional parameters treated as implicit functions of the linear coefficients and the smoothing parameter. In the third level, the smoothing parameter is a complexity parameter, estimated by minimizing the approximated GCV with the linear coefficients treated as implicit functions of the smoothing parameter. This method is applied to estimate the generalized semiparametric additive model for the effect of air pollution on public health.
Finally, parameters in differential equations (DE's) are estimated from noisy data with the generalized profiling method. In the first level of optimization, fitting functions are estimated to approximate DE solutions by penalized smoothing with the penalty term defined by the DE's, fixing the values of the DE parameters. In the second level of optimization, DE parameters are estimated by minimizing a weighted sum of squared errors, with the smoothing coefficients treated as an implicit function of the DE parameters. The effects of the smoothing parameter on DE parameter estimates are explored and the optimization criteria for smoothing parameter selection are discussed. The method is applied to fit the predator-prey dynamic model to biological data, to estimate DE parameters in the HIV dynamic model from clinical trials, and to explore dynamic models for the thermal decomposition of alpha-pinene.
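
The two-level structure of the profiling method for differential equations can be illustrated on a toy problem. The sketch below estimates theta in dx/dt = -theta*x from noisy data, using grid values of the state in place of a B-spline basis and a fixed smoothing parameter; all names and numbers are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 0.7
t_grid = np.linspace(0.0, 5.0, 101)            # fine grid representing the state x(t)
dt = t_grid[1] - t_grid[0]
obs_idx = np.arange(0, t_grid.size, 5)         # observe every 5th grid point
t_obs = t_grid[obs_idx]
y = np.exp(-theta_true * t_obs) + 0.05 * rng.standard_normal(t_obs.size)
lam = 100.0                                    # weight on fidelity to the ODE

def inner_fit(theta):
    """Inner level: given theta, solve the linear penalized LS problem for x."""
    n = t_grid.size
    A_obs = np.zeros((t_obs.size, n))
    A_obs[np.arange(t_obs.size), obs_idx] = 1.0
    # ODE-defect residuals on the grid: (x[j+1]-x[j])/dt + theta*x[j]
    A_pen = np.zeros((n - 1, n))
    A_pen[np.arange(n - 1), np.arange(n - 1)] = -1.0 / dt + theta
    A_pen[np.arange(n - 1), np.arange(1, n)] = 1.0 / dt
    A = np.vstack([A_obs, np.sqrt(lam) * A_pen])
    b = np.concatenate([y, np.zeros(n - 1)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def outer_criterion(theta):
    """Outer level: data misfit with the inner solution treated as x(theta)."""
    x = inner_fit(theta)
    return np.sum((y - x[obs_idx]) ** 2)

theta_hat = minimize_scalar(outer_criterion, bounds=(0.01, 5.0), method="bounded").x
print("estimated theta:", theta_hat)
```
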
7

Kaivanipour, Kivan. "Non-Life Insurance Pricing Using the Generalized Additive Model, Smoothing Splines and L-Curves." Thesis, KTH, Matematik (Avd.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168389.

Full text
Abstract:
In non-life insurance, almost every tariff analysis involves continuous rating variables, such as the age of the policyholder or the weight of the insured vehicle. In the generalized linear model, continuous rating variables are categorized into intervals and all values within an interval are treated as identical. By using the generalized additive model, the categorization part of the generalized linear model can be avoided. This thesis will treat different methods for finding the optimal smoothing parameter within the generalized additive model. While the method of cross validation is commonly used for this purpose, a more uncommon method, the L-curve method, is investigated for its performance in comparison to the method of cross validation. Numerical computations on test data show that the L-curve method is significantly faster than the method of cross validation, but suffers from heavy under-smoothing and is thus not a suitable method for estimating the optimal smoothing parameter.
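
Both smoothing-parameter selectors compared above can be sketched for a simple penalized (Whittaker-type) smoother: GCV minimizes n*RSS/(n - tr(H))^2 over a grid of penalty weights, while the L-curve method looks for the corner of the (log residual norm, log penalty norm) curve. A rough illustration with a deliberately naive discrete-curvature corner detector; none of this is the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

n = x.size
D = np.diff(np.eye(n), n=2, axis=0)            # second-difference penalty matrix
lams = np.logspace(-4, 4, 60)
gcv, rss, pen = [], [], []
for lam in lams:
    H = np.linalg.solve(np.eye(n) + lam * D.T @ D, np.eye(n))   # smoother matrix
    f = H @ y
    r = np.sum((y - f) ** 2)
    gcv.append(n * r / (n - np.trace(H)) ** 2)
    rss.append(r)
    pen.append(np.sum((D @ f) ** 2))

lam_gcv = lams[int(np.argmin(gcv))]

# crude L-curve corner: maximum discrete curvature of (log RSS, log penalty)
u, v = np.log(rss), np.log(pen)
du, dv = np.gradient(u), np.gradient(v)
d2u, d2v = np.gradient(du), np.gradient(dv)
kappa = (du * d2v - dv * d2u) / (du ** 2 + dv ** 2 + 1e-12) ** 1.5
lam_lcurve = lams[int(np.argmax(np.abs(kappa)))]
print(lam_gcv, lam_lcurve)
```

The L-curve search only needs one fit per grid point and no trace computation, which is one way to see why it is faster than cross-validation, as reported above.
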
8

Pya, Natalya. "Additive models with shape constraints." Thesis, University of Bath, 2010. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527433.

Full text
Abstract:
In many practical situations, when analyzing the dependence of a response variable on one or more explanatory variables, it is essential to assume that the relationship of interest obeys certain shape constraints, such as monotonicity or monotonicity and convexity/concavity. In this thesis a new approach to shape preserving smoothing within generalized additive models has been developed. In contrast with previous quadratic programming based methods, the project develops intermediate rank penalized smoothers with shape constraints based on re-parameterized B-splines and penalties based on the P-spline ideas of Eilers and Marx (1996). Smoothing under monotonicity constraints and monotonicity together with convexity/concavity for univariate smooths, and smoothing of bivariate functions with monotonicity restrictions on both covariates or on only one of them, are considered. The proposed shape constrained smoothing has been incorporated into generalized additive models with a mixture of unconstrained and shape restricted smooth terms (mono-GAM). A fitting procedure for mono-GAM is developed. Since a major challenge of any flexible regression method is its implementation in a computationally efficient and stable manner, issues such as convergence, rank deficiency of the working model matrix, initialization, and others have been thoroughly dealt with. A question about the limiting posterior distribution of the model parameters is solved, which allows us to construct Bayesian confidence intervals of the mono-GAM smooth terms by means of the delta method. The performance of these confidence intervals is examined by assessing realized coverage probabilities using simulation studies. The proposed modelling approach has been implemented in an R package monogam. The model setup is the same as in mgcv(gam) with the addition of shape constrained smooths. In order to be consistent with the unconstrained GAM, the package provides key functions similar to those associated with mgcv(gam). Performance and timing comparisons of mono-GAM with alternative methods have been undertaken. The simulation studies show that the new method has practical advantages over the alternatives considered. Applications of mono-GAM to various data sets are presented which demonstrate its ability to model many practical situations.
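
One way to see the re-parameterisation idea behind shape-constrained smoothing is to force monotonicity by construction and penalize the roughness of the transformed coefficients. The sketch below does this on grid values rather than the re-parameterized B-splines of the thesis; the penalty weight, starting values and names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 80)
y = 1.0 / (1.0 + np.exp(-10 * (x - 0.5))) + 0.1 * rng.standard_normal(x.size)

lam = 1.0   # roughness penalty weight (illustrative)

def objective(par):
    beta0, gamma = par[0], par[1:]
    # increments exp(gamma) > 0, so the fit is monotone increasing by construction
    f = beta0 + np.concatenate([[0.0], np.cumsum(np.exp(gamma))])
    penalty = lam * np.sum(np.diff(gamma) ** 2)   # smoothness of the log-increments
    return np.sum((y - f) ** 2) + penalty

start = np.concatenate([[y[0]], np.full(x.size - 1, np.log(1e-2))])
fit = minimize(objective, start, method="L-BFGS-B")
beta0_hat, gamma_hat = fit.x[0], fit.x[1:]
f_hat = beta0_hat + np.concatenate([[0.0], np.cumsum(np.exp(gamma_hat))])
```
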
9

Björkwall, Susanna. "Stochastic claims reserving in non-life insurance : Bootstrap and smoothing models." Doctoral thesis, Stockholms universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-55347.

Full text
Abstract:
In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods without explicit reference to a stochastic model. For instance, the chain-ladder was originally a deterministic reserving method. Moreover, the actuaries often make ad hoc adjustments of the methods, for example, smoothing of the chain-ladder development factors, in order to fit the data set under analysis. However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally the model could be used to find the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model the object of the research area has often been to first construct a model and a method that produces the actuary's estimate and then use this model in order to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself. The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use other development factor methods than the pure chain-ladder relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors which is frequently used in practice.
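
The deterministic starting point that the thesis's bootstrap builds on, chain-ladder development factors with optional ad hoc smoothing, can be sketched as follows (toy triangle, exponential-decay smoothing of the factors; all numbers are illustrative):

```python
import numpy as np

# run-off triangle: rows are accident years, columns are development years,
# np.nan marks cells that are still in the future
tri = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2100., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

m = tri.shape[1]
f = np.ones(m - 1)
for j in range(m - 1):
    rows = ~np.isnan(tri[:, j + 1])
    f[j] = tri[rows, j + 1].sum() / tri[rows, j].sum()    # chain-ladder factors

# ad hoc smoothing of the factors: fit log(f - 1) against development year
coef = np.polyfit(np.arange(m - 1), np.log(f - 1), 1)
f_smooth = 1 + np.exp(np.polyval(coef, np.arange(m - 1)))

# complete the triangle with the smoothed factors
full = tri.copy()
for j in range(m - 1):
    miss = np.isnan(full[:, j + 1])
    full[miss, j + 1] = full[miss, j] * f_smooth[j]

latest = np.array([row[~np.isnan(row)][-1] for row in tri])
reserve = full[:, -1].sum() - latest.sum()
print("estimated outstanding claims:", reserve)
```

The bootstrap techniques of Papers I–III then resample around such development-factor methods to attach a measure of uncertainty to the reserve estimate.
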
10

Hanh, Nguyen T. "Lasso for Autoregressive and Moving Average Coefficients via Residuals of Unobservable Time Series." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo154471227291601.

Full text
11

Wang, Xiaohui. "Bayesian classification and survival analysis with curve predictors." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1205.

Full text
12

Metwalli, Nader. "High angular resolution diffusion-weighted magnetic resonance imaging: adaptive smoothing and applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34854.

Full text
Abstract:
Diffusion-weighted magnetic resonance imaging (MRI) has allowed unprecedented non-invasive mapping of brain neural connectivity in vivo by means of fiber tractography applications. Fiber tractography has emerged as a useful tool for mapping brain white matter connectivity prior to surgery or in an intraoperative setting. The advent of high angular resolution diffusion-weighted imaging (HARDI) techniques in MRI for fiber tractography has allowed mapping of fiber tracts in areas of complex white matter fiber crossings. Raw HARDI images, as a result of elevated diffusion-weighting, suffer from depressed signal-to-noise ratio (SNR) levels. The accuracy of fiber tractography is dependent on the performance of the various methods extracting dominant fiber orientations from the HARDI-measured noisy diffusivity profiles. These methods will be sensitive to and directly affected by the noise. In the first part of the thesis this issue is addressed by applying an objective and adaptive smoothing to the noisy HARDI data via generalized cross-validation (GCV) by means of the smoothing splines on the sphere method for estimating the smooth diffusivity profiles in three dimensional diffusion space. Subsequently, fiber orientation distribution functions (ODFs) that reveal dominant fiber orientations in fiber crossings are then reconstructed from the smoothed diffusivity profiles using the Funk-Radon transform. Previous ODF smoothing techniques have been subjective and non-adaptive to data SNR. The GCV-smoothed ODFs from our method are accurate and are smoothed without external intervention facilitating more precise fiber tractography. Diffusion-weighted MRI studies in amyotrophic lateral sclerosis (ALS) have revealed significant changes in diffusion parameters in ALS patient brains. With the need for early detection of possibly discrete upper motor neuron (UMN) degeneration signs in patients with early ALS, a HARDI study is applied in order to investigate diffusion-sensitive changes reflected in the diffusion tensor imaging (DTI) measures axial and radial diffusivity as well as the more commonly used measures fractional anisotropy (FA) and mean diffusivity (MD). The hypothesis is that there would be added utility in considering axial and radial diffusivities which directly reflect changes in the diffusion tensors in addition to FA and MD to aid in revealing neurodegenerative changes in ALS. In addition, applying adaptive smoothing via GCV to the HARDI data further facilitates the application of fiber tractography by automatically eliminating spurious noisy peaks in reconstructed ODFs that would mislead fiber tracking.
13

Ryu, Duchwan. "Regression analysis with longitudinal measurements." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2398.

Full text
Abstract:
Bayesian approaches to the regression analysis of longitudinal measurements are considered. The history of measurements from a subject may convey characteristics of the subject. Hence, in a regression analysis with longitudinal measurements, the characteristics of each subject can serve as covariates, in addition to other possible covariates. Also, the longitudinal measurements may lead to complicated covariance structures within each subject, and these should be modeled properly. When covariates are some unobservable characteristics of each subject, Bayesian parametric and nonparametric regressions have been considered. Although covariates are not observable directly, by virtue of the longitudinal measurements the covariates can be estimated. In this case, the measurement error problem is inevitable. Hence, a classical measurement error model is established. In the Bayesian framework, the regression function as well as all the unobservable covariates and nuisance parameters are estimated. As multiple covariates are involved, a generalized additive model is adopted, and the Bayesian backfitting algorithm is utilized for each component of the additive model. For the binary response, logistic regression has been proposed, where the link function is estimated by Bayesian parametric and nonparametric regressions. For the link function, the introduction of latent variables makes the computation fast. In the next part, each subject is assumed not to be observed at prespecified time points. Furthermore, the time of the next measurement from a subject is assumed to depend on the subject's previous measurement history. For these outcome-dependent follow-up times, various modeling options and the associated analyses have been examined to investigate how outcome-dependent follow-up times affect the estimation, within the frameworks of Bayesian parametric and nonparametric regressions. Correlation structures of outcomes are based on different correlation coefficients for different subjects. First, by assuming a Poisson process for the follow-up times, regression models have been constructed. To interpret the subject-specific random effects, more flexible models are considered by introducing a latent variable for the subject-specific random effect and a survival distribution for the follow-up times. The performance of each model has been evaluated by utilizing Bayesian model assessments.
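
The backfitting loop used for each component of the additive model is the easiest piece to sketch. The version below is a plain (non-Bayesian) cycle with a Gaussian-kernel smoother standing in for the thesis's Bayesian updates; names and the bandwidth are illustrative:

```python
import numpy as np

def kernel_smooth(x, r, h):
    """Nadaraya-Watson smoother of a residual vector r against covariate x."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

def backfit(X, y, h=0.15, n_iter=20):
    """Minimal backfitting sketch for y = alpha + sum_j f_j(x_j) + error:
    each component is re-smoothed against its partial residuals in turn."""
    n, p = X.shape
    alpha = y.mean()
    F = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - F.sum(axis=1) + F[:, j]
            F[:, j] = kernel_smooth(X[:, j], partial, h)
            F[:, j] -= F[:, j].mean()     # centre each component for identifiability
    return alpha, F
```
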
14

Song, Song. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.

Full text
Abstract:
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. to check various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory. The strong uniform consistency rate is also established under general conditions. The second method is based on the bootstrap resampling method. It is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model. A labor market analysis is provided to illustrate the method. High dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine and financial engineering. One common approach is to separate the modeling of high dimensional time series into the time propagation of low dimensional time series and high dimensional time invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis based on smoothed functional principal component analysis. We show properties of this estimator under the dependent scenario. In the second step, we obtain the detrended low dimensional (stationary) stochastic process.
15

Létourneau, Étienne. "Impact of algorithm, iterations, post-smoothing, count level and tracer distribution on single-frame positron emission tomography quantification using a generalized image space reconstruction algorithm." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110750.

Full text
Abstract:
Positron Emission Tomography (PET) is a medical imaging technique tracing the functional processes inside a subject. One of the common applications of this device is to perform a subjective diagnosis from the images. However, quantitative imaging (QI) allows one to perform an objective analysis as well as providing extra information such as time activity curves (TAC) and visual details that the eye can't see. The aim of this work was, by comparing several reconstruction algorithms such as ML-EM PSF, ISRA PSF and its related algorithms, and FBP for single-frame imaging, to develop a robust analysis of the quantitative performance depending on the region of interest (ROI), the size of the ROI, the noise level, the activity distribution and the post-smoothing parameters. By simulating an acquisition using a 2-D digital axial brain phantom in Matlab, a comparison has been made from a quantitative point of view, helped by visual figures as explanatory tools, for all the techniques using the Mean Absolute Error (MAE) and the Bias-Variance relation. Results show that the performance of each algorithm depends mainly on the number of counts coming from the ROI and on the iteration/post-smoothing combination which, when adequately chosen, allows nearly every algorithm to give similar quantitative results in most cases. Among the 10 analysed techniques, 3 distinguished themselves: ML-EM PSF, ISRA PSF with the smoothed expected data as weight, and FBP with an adequate post-smoothing were the main contenders for achieving the lowest MAE. Keywords: Positron Emission Tomography, Maximum-Likelihood Expectation-Maximization, Image Space Reconstruction Algorithm, Filtered Backprojection, Mean Absolute Error, Quantitative Imaging.
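
The ML-EM update at the core of the comparison is compact. A minimal sketch with a generic system matrix and no PSF modelling or post-smoothing, which are the ingredients varied in the thesis:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """ML-EM for emission tomography: A is the system (projection) matrix,
    y the measured counts, and the image x is updated multiplicatively."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x
```
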
16

Thomas, Nicole. "Validation of Criteria Used to Predict Warfarin Dosing Decisions." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/40.

Full text
Abstract:
People at risk for blood clots are often treated with anticoagulants; warfarin is one such anticoagulant. The dose's effect is measured by comparing the time for blood to clot to a control time, expressed as an INR value. Previous anticoagulant studies have addressed agreement between fingerstick (POC) devices and the standard laboratory; however, these studies rely on mathematical formulas as criteria for clinical evaluation, i.e. clinical evaluation versus precision and bias. Fourteen such criteria were found in the literature. There is little consistency among these criteria for assessing clinical agreement; furthermore, whether these methods of assessing agreement are reasonable estimates of clinical decision-making is unknown and has yet to be validated. One previous study compared actual clinical agreement by having two physicians indicate a dosing decision based on patient history and INR values. This analysis attempts to justify previously used mathematical criteria for clinical agreement. Generalized additive models with smoothing spline estimates were calculated for each of the 14 criteria and compared to the smoothing spline estimate for the method using actual physician decisions (considered the "gold standard"). The area between each criterion's spline and the gold-standard spline served as the comparison, using bootstrapping for statistical inference. Although some of the criteria methods performed better than others, none of them matched the gold standard. This stresses the need for clinical assessment of devices.
17

Betnér, Staffan. "Trends in Forest Soil Acidity : A GAM Based Approach with Application on Swedish Forest Soil Inventory Data." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-352392.

Full text
Abstract:
The acidification of soils has been a continuous process since at least the beginning of the 20th century. Therefore, an inquiry of how and when the soil pH levels have changed is relevant to gain better understanding of this process. The aim of this thesis is to study the average national soil pH level over time in Sweden and the local spatial differences within Sweden over time. With data from the Swedish National Forest Inventory, soil pH surfaces are estimated for each surveyed year together with the national average soil pH using a generalized additive modeling approach with one model for each pair of consecutive years. A decreasing trend in average national level soil pH was found together with some very weak evidence of year-to-year differences in the spatial structure of soil pH.
18

Holanda, Amanda Amorim. "Modelos lineares parciais aditivos generalizados com suavização por meio de P-splines." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-31052018-113859/.

Full text
Abstract:
In this work we present the generalized partial linear models with one continuous explanatory variable treated nonparametrically and the generalized additive partial linear models with at least two continuous explanatory variables treated in such a way. P-splines are used to describe the relationship between the response and the continuous explanatory variables. The penalized likelihood functions, penalized score functions and penalized Fisher information matrices are then derived to obtain the penalized maximum likelihood estimators by combining the backfitting (Gauss-Seidel) algorithm and the Fisher scoring iterative method for the two types of model. In addition, we present ways to estimate the smoothing parameter as well as the effective degrees of freedom. Finally, for the purpose of illustration, the proposed models are fitted to real data sets.
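
The penalized Fisher scoring step for a single P-spline smooth can be sketched as a penalized IRLS loop; in the models above it would be combined with backfitting over the linear and smooth parts. A rough sketch for a Poisson response, where the basis size, penalty order and weight are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import splev

def bspline_basis(x, n_basis=12, degree=3):
    """Cubic B-spline basis evaluated at x (unit-coefficient trick with splev)."""
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    t = np.concatenate([[inner[0]] * degree, inner, [inner[-1]] * degree])
    return np.column_stack(
        [splev(x, (t, np.eye(n_basis)[j], degree)) for j in range(n_basis)]
    )

def pspline_poisson(x, y, lam=10.0, n_iter=30):
    """Penalized Fisher scoring (P-IRLS) for a Poisson response with a
    P-spline smooth: log mu = B @ beta, second-order difference penalty."""
    B = bspline_basis(x)
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
    beta = np.linalg.lstsq(B, np.log(y + 1.0), rcond=None)[0]
    for _ in range(n_iter):
        eta = B @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu                        # working response
        W = mu                                         # Poisson working weights
        lhs = B.T @ (W[:, None] * B) + lam * D.T @ D   # penalized information
        beta = np.linalg.solve(lhs, B.T @ (W * z))
    return B, beta
```
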
19

Sanja, Rapajić. "Iterativni postupci sa regularizacijom za rešavanje nelinearnih komplementarnih problema." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2005. https://www.cris.uns.ac.rs/record.jsf?recordId=6022&source=NDLTD&language=en.

Full text
Abstract:
Iterative methods for nonlinear complementarity problems (NCP) are considered in this doctoral dissertation. NCP problems appear in many mathematical models from economics, engineering and optimization theory. Solving NCPs has attracted much attention in recent years. Among the many numerical methods for NCPs, we are interested in generalized Newton-type methods and Jacobian smoothing methods. Several new methods for NCPs are defined in this dissertation and their local or global convergence is proved. Theoretical results are tested on relevant numerical examples.
20

Hancock, Penelope. "Finite element approximation of minimum generalised cross validation bivariate thin plate smoothing splines." Phd thesis, 2002. http://hdl.handle.net/1885/148534.

Full text
21

"Robust estimation for generalized additive models." 2010. http://library.cuhk.edu.hk/record=b5894486.

Full text
Abstract:
Wong, Ka Wai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 46-49).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background --- p.4
Chapter 2.1 --- Notation and Definitions --- p.4
Chapter 2.2 --- Influence Function of β --- p.5
Chapter 3 --- Methodology --- p.7
Chapter 3.1 --- Robust Estimating Equations --- p.7
Chapter 3.2 --- A General Algorithm for Robust GAM Estimation --- p.9
Chapter 4 --- Asymptotic Equivalence --- p.12
Chapter 5 --- Smoothing Parameter Selection --- p.16
Chapter 5.1 --- Robust Cross-Validation --- p.17
Chapter 5.2 --- Robust Information Criteria --- p.17
Chapter 6 --- Multiple Covariates --- p.19
Chapter 7 --- Simulation Study --- p.21
Chapter 8 --- Real Data Examples --- p.26
Chapter 8.1 --- Air Pollution Data --- p.26
Chapter 8.2 --- Bronchitis Data --- p.28
Chapter 9 --- Concluding Remarks --- p.31
Chapter A --- Auxiliary Lemmas and Proofs --- p.32
Chapter B --- Fisher Consistency Correction --- p.42
Chapter B.1 --- Poisson distribution --- p.42
Chapter B.2 --- Bernoulli distribution --- p.43
Chapter C --- Derivation of (5.2) --- p.44
Bibliography --- p.46
22

Lin, Tzu-Ching, and 林子靖. "A smoothing Newton method based on the generalized Fischer-Burmeister function for MCPs." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/hg76p7.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Department of Mathematics
Academic year 97 (ROC calendar)
We present a smooth approximation for the generalized Fischer-Burmeister function where the 2-norm in the FB function is relaxed to a general p-norm (p > 1), and establish some favorable properties for it, for example, the Jacobian consistency. With the smoothing function, we transform the mixed complementarity problem (MCP) into solving a sequence of smooth systems of equations.
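
A rough sketch of the idea: smooth the generalized Fischer-Burmeister function by adding a parameter mu under the p-norm, then apply Newton steps to the resulting smooth system while driving mu to zero. The undamped loop, finite-difference Jacobian and the small linear complementarity example below are illustrative only, not the thesis's method:

```python
import numpy as np

def phi_mu(a, b, p=3.0, mu=1e-1):
    """Smoothed generalized Fischer-Burmeister function:
    (|a|^p + |b|^p + mu^p)^(1/p) - (a + b); mu -> 0 recovers phi_p."""
    return (np.abs(a) ** p + np.abs(b) ** p + mu ** p) ** (1.0 / p) - (a + b)

def smoothing_newton(F, x0, p=3.0, mu=1.0, tol=1e-8, max_iter=100):
    """Crude smoothing-Newton sketch for the NCP x >= 0, F(x) >= 0, x'F(x) = 0:
    Newton steps on Phi_mu(x) = 0 (finite-difference Jacobian, no line search)
    while the smoothing parameter mu is shrunk towards zero."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Phi = phi_mu(x, F(x), p, mu)
        if np.linalg.norm(Phi) < tol and mu < tol:
            break
        n = x.size
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):                      # forward-difference Jacobian
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (phi_mu(x + e, F(x + e), p, mu) - Phi) / h
        x = x - np.linalg.solve(J, Phi)
        mu *= 0.5
    return x

# small linear complementarity example: F(x) = M x + q
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-2.0, 1.0])
x_star = smoothing_newton(lambda x: M @ x + q, np.ones(2))
print(x_star, M @ x_star + q)
```
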
23

Su, Shao-Zu, and 蘇少祖. "Using Kernel Smoothing Approaches Improves the Parameter Estimation based on Generalized Partial Credit Model." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/75909558041762230547.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Graduate Institute of Educational Measurement and Statistics
Academic year 96 (ROC calendar)
In this paper, a modified version of MMLE/EM is proposed. There are two modifications in the proposed algorithm. First, a kernel density estimation technique is applied to estimate the distribution of the ability parameter in the E-step. Second, the kernel density estimation technique is applied to estimate the item parameters and the ability parameters with EAP in the M-step. This methodology is then used to estimate the ability and item parameters iteratively. The algorithm is named the kernel smoothing generalized partial credit model, KS-GPCM for short. A simulation experiment based on the generalized partial credit model is conducted to compare the performance of PARSCALE and KS-GPCM. Three types of ability distributions (normal, bimodal and skewed) are considered. The experimental results show the following: (i) when the ability distribution is normal, the RMSE of the ability parameter for PARSCALE is smaller than for KS-GPCM; (ii) when the ability distribution is bimodal or skewed, the RMSE of the ability parameter for KS-GPCM is smaller than for PARSCALE; (iii) when the ability distribution is normal, the RMSE of the slope and item step parameters for PARSCALE is smaller than for KS-GPCM; (iv) when the ability distribution is bimodal or skewed, the RMSE of the slope and item step parameters for KS-GPCM is smaller than for PARSCALE.
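
The kernel-density modification of the E-step can be sketched in a few lines: the fixed normal quadrature weights are replaced by weights from a kernel density estimate of the current ability estimates. Function and variable names below are illustrative, not the KS-GPCM implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def updated_quadrature_weights(eap_abilities, nodes=np.linspace(-4, 4, 41)):
    """Re-estimate the latent ability distribution on the quadrature grid by a
    Gaussian-kernel density estimate of the current EAP ability estimates."""
    kde = gaussian_kde(eap_abilities)       # rule-of-thumb bandwidth by default
    w = kde(nodes)
    return nodes, w / w.sum()               # renormalise to quadrature weights

# usage sketch inside an EM cycle:
#   nodes, weights = updated_quadrature_weights(current_eap_estimates)
#   the E-step posterior for each examinee then uses these weights in place of
#   the usual N(0, 1) density values at the nodes
```
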
24

Shun-Te, O., and 歐順德. "The Monte Carlo Simulation Study of The Hybrid Model of Generalized Hidden Markov Model and Kernel smoothing based Item Response Theory." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/02075678413122792655.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Computer Science and Information Engineering
Academic year 94 (ROC calendar)
The item characteristic curve (ICC) is the central concept of item response theory, yet whether one IRT model estimates the ICC more accurately than another cannot be established by mathematical or statistical argument or by logic alone. The main purpose of this study is therefore to rely on simulation to compare the accuracy of ICC estimation of four IRT models: the three-parameter logistic IRT model, the hybrid model of GHMM and 2PL-IRT, kernel smoothing based IRT, and the hybrid model of GHMM and kernel smoothing based IRT. The simulations were programmed in MATLAB. The discrimination, difficulty and ability parameters were assumed to be normally distributed and the guessing parameter uniformly distributed; each test had 25 items, and six numbers of examinees were used: 100, 200, 500, 1000, 1500 and 2000. According to this study, several findings have been concluded as follows: 1. The hybrid model of GHMM and kernel smoothing based IRT is better than the other IRT models for the accuracy of ICC estimation. 2. Whether parametric or nonparametric, an IRT model combined with GHMM is more accurate for ICC estimation. 3. The number of examinees influences the accuracy of ICC estimation; more examinees give more accurate estimates.
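
The kernel smoothing based ICC estimate used in two of the four models is essentially a Nadaraya-Watson regression of the 0/1 item responses on ability estimates. A minimal sketch, with bandwidth and grid chosen for illustration:

```python
import numpy as np

def kernel_icc(theta_hat, responses, grid, h=0.3):
    """Nonparametric kernel-smoothed item characteristic curve: Nadaraya-Watson
    regression of 0/1 responses to one item on examinee ability estimates."""
    w = np.exp(-0.5 * ((grid[:, None] - theta_hat[None, :]) / h) ** 2)
    return (w @ responses) / w.sum(axis=1)

# usage sketch: theta_hat are ability estimates (or normal quantiles of
# total-score ranks), responses the 0/1 answers to a single item
#   grid = np.linspace(-3, 3, 61)
#   icc = kernel_icc(theta_hat, responses, grid)
```
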
25

Tilahun, Gelila. "Statistical Methods for Dating Collections of Historical Documents." Thesis, 2011. http://hdl.handle.net/1807/29890.

Full text
Abstract:
The problem in this thesis was originally motivated by problems presented with documents of Early England Data Set (DEEDS). The central problem with these medieval documents is the lack of methods to assign accurate dates to those documents which bear no date. With the problems of the DEEDS documents in mind, we present two methods to impute missing features of texts. In the first method, we suggest a new class of metrics for measuring distances between texts. We then show how to combine the distances between the texts using statistical smoothing. This method can be adapted to settings where the features of the texts are ordered or unordered categoricals (as in the case of, for example, authorship assignment problems). In the second method, we estimate the probability of occurrences of words in texts using nonparametric regression techniques of local polynomial fitting with kernel weight to generalized linear models. We combine the estimated probability of occurrences of words of a text to estimate the probability of occurrence of a text as a function of its feature -- the feature in this case being the date in which the text is written. The application and results of our methods to the DEEDS documents are presented.
26

Xu, Ganggang. "Variable Selection and Function Estimation Using Penalized Methods." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10451.

Full text
Abstract:
Penalized methods are becoming more and more popular in statistical research. This dissertation research covers two major aspects of applications of penalized methods: variable selection and nonparametric function estimation. The following two paragraphs give brief introductions to each of the two topics. Infinite variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by alpha, an element of (0, 2). We show that by combining the least absolute deviation loss function and the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(-1/alpha). The proposed approach gives a unified variable selection procedure for both the finite and infinite variance autoregressive models. While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less so for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical property is unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. By focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive one simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified version of the CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data driven approach to select the working covariance structure using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive, linear varying-coefficient and nonlinear models with data from exponential families.
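
Leave-subject-out CV itself is straightforward to sketch for a ridge-type penalized fit: all observations from one subject are held out together. The brute-force version below is the expensive criterion whose asymptotic optimality and efficient computation the dissertation addresses; the set-up is illustrative:

```python
import numpy as np

def leave_subject_out_cv(X, y, subject, lams):
    """Leave-subject-out CV for a ridge-type penalized fit: every observation
    belonging to one subject is held out together when computing the
    prediction error for each candidate smoothing parameter."""
    subjects = np.unique(subject)
    scores = []
    for lam in lams:
        err = 0.0
        for s in subjects:
            tr, te = subject != s, subject == s
            A = X[tr].T @ X[tr] + lam * np.eye(X.shape[1])
            beta = np.linalg.solve(A, X[tr].T @ y[tr])
            err += np.sum((y[te] - X[te] @ beta) ** 2)
        scores.append(err / y.size)
    return lams[int(np.argmin(scores))], np.array(scores)
```
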