Dissertations / Theses on the topic 'Estimators'

1

Königkrämer, Sören. "Realised volatility estimators." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/8526.

Abstract:
Includes bibliographical references.
This dissertation is an investigation into realised volatility (RV) estimators. Here, RV is defined as the sum-of-squared-returns (SSR) and is a proxy for integrated volatility (IV), which is unobservable. The study focuses on a subset of the universe of RV estimators. We examine three categories of estimators: historical, high-frequency (HF) and implied. The need to estimate RV arises predominantly in the hedging of options; we are not concerned with speculation or forecasting. The main research questions are: (1) What is the best RV estimator in a historical study of S&P 500 data? (2) What is the best RV estimator in a Monte Carlo simulation when delta hedging synthetic options? (3) Do our findings support the stylized fact of 'Asymmetry in time scales' (Cont, 2001)? In answering these questions, further avenues of investigation are explored. Firstly, the VIX is used as the implied volatility. Secondly, the Monte Carlo simulation generates stock price paths with random components in the stock price and the volatility at each time point. The distribution of the input volatility is varied. The question of asymmetry in time scales is addressed by varying the term and frequency of the historical data. The results of the historical study and the Monte Carlo simulation are compared. The SSR and two of the HF estimators perform best in both cases. Estimators based on long-term data are shown to perform very poorly.
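As a concrete illustration of the sum-of-squared-returns definition used above (not code from the dissertation), a minimal Python sketch with an assumed intraday price series might look like this:

```python
import numpy as np

def realised_volatility(prices):
    """Sum-of-squared-returns (SSR) proxy for integrated volatility,
    computed from a series of observed prices over one period."""
    log_returns = np.diff(np.log(prices))   # intra-period log returns
    return np.sum(log_returns ** 2)         # realised variance (RV)

# toy example: hypothetical 5-minute closing prices over one trading day
prices = np.array([100.0, 100.4, 99.8, 100.1, 100.6, 100.3])
rv = realised_volatility(prices)
print(rv, np.sqrt(rv))                      # realised variance and its square root
```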
2

Vicinansa, Guilherme Scabin. "Algebraic estimators with applications." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-21092018-150106/.

Abstract:
In this work we address the problem of friction compensation in a pneumatic control valve. A nonlinear control law that uses algebraic estimators in its structure is proposed, in order to adapt the controller to the aging of the valve. For that purpose, the parameters of the valve's Karnopp model that are needed for friction compensation are estimated online. The estimators and the controller are validated through simulations.
3

Zhan, Yihui. "Bootstrapping functional M-estimators /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/8958.

4

Tayade, Rajeshwary. "Robustness analysis of linear estimators." Texas A&M University, 2004. http://hdl.handle.net/1969.1/500.

Abstract:
Robustness of a system has been defined in various ways, and much work has been done to model system robustness, but quantifying or measuring robustness has always been very difficult. In this research we consider a simple system, a linear estimator, and attempt to model the system's performance and robustness geometrically, in a way that admits analysis using the differential-geometric concepts of slope and curvature. We compare two types of curvature, namely the curvature along the direction of maximum slope of a surface and the square root of the absolute value of the sectional curvature of a surface, and examine whether either can be used to understand or measure system robustness. We work through two examples and take readings at many points to determine whether the two curvatures behave consistently.
5

Rankin, Robert. "Hierarchical models and shrinkage estimators." PhD thesis, Murdoch University, 2017. https://researchrepository.murdoch.edu.au/id/eprint/38257/.

Abstract:
Capture-Mark-Recapture (CMR) models are a large class of hierarchical time-series models for estimating the abundance and survival of individually marked animals. Due to the complex multi-parameter nature of CMR models, CMR practitioners have been enthusiastic adopters of multi-model inference (MMI) techniques. By MMI, I refer loosely to a variety of techniques such as model-selection, model-averaging, Frequentist shrinkage estimators, and Hierarchical Bayesian random-effects. In this thesis, I develop and compare methods of MMI in CMR, with application to the movement and abundance of bottlenose dolphins, Tursiops sp., in Australia. I use novel ideas from the field of machine learning, as well as revisit old estimation problems like the Marginal Likelihood. As this thesis will show, there are many practical problems to the popular AIC/BIC-based methods in odontocetes CMR studies, such as singularities and boundary-value estimates. This is especially the case when there is a lot of demographic and/or temporal variation. Understanding such heterogeneity is important for conservation and ecological theory, such as the role of individual heterogeneity for estimating abundance (Chapter 3), or sex/age differences in movement patterns (Chapter 4). Such heterogeneity can also lead to severe non-identifiability problems. I suggest practical solutions through HB models and shrinkage estimators. Chapter 2 reviews MMI theory and presents a new boosting algorithm for the Cormack-Jolly-Seber model. Chapter 3 presents a Hierarchical Bayesian (HB) version of Pollock's Closed Robust Design (PCRD), with emphasis on shrinkage priors and individual heterogeneity. Chapter 4 reviews Bayesian model selection and introduces a technique to estimate Marginal Likelihoods and Bayes Factors for hidden-Markov models, such as the PCRD. Chapter 5 generalizes the HB PCRD into a Multistate CRD model, with emphasis on shrinkage priors for inference on geographic state-transitions.
6

Ogawa, James S. "Evaluating color fused image performance estimators." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA340997.

Abstract:
Thesis (M.S. in Operations Research), Naval Postgraduate School, September 1997. Thesis advisor(s): William K. Krebs. Includes bibliographical references (p. 241-243). Also available online.
7

Völcker, Björn. "Performance Analysis of Parametric Spectral Estimators." Doctoral thesis, KTH, Signals, Sensors and Systems, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3323.

8

Chan, Eric Wai Chi. "Novel motion estimators for video compression." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6864.

Abstract:
In this thesis, the problem of motion estimation is addressed from two perspectives, namely, hardware architecture and reduced complexity algorithms in the spatial and transform domains. First, a VLSI architecture which implements the full search block matching algorithm in real time is presented. The interblock dependency is exploited and hence the architecture can meet the real time requirement in various applications. Most importantly, the architecture is simple, modular and cascadable. Hence the proposed architecture is easily implementable in VLSI as a codec. The spatial domain algorithm consists of a layered structure and alleviates the local optimum problem. Most importantly, it employs a simple matching criterion, namely, a modified pixel difference classification (MPDC) and hence results in a reduced computational complexity. In addition, the algorithm is compatible with the recently proposed MPEG-1 video compression standard. Simulation results indicate that the proposed algorithm provides a comparable performance (compared to the algorithms reported in the literature) at a significantly reduced computational complexity. In addition, the hardware implementation of the proposed algorithm is very simple because of the binary operations used in the matching criteria. Finally, we present a wavelet transform based fast multiresolution motion estimation (FMRME) scheme. Here, the wavelet transform is used to exploit both the spatial and temporal redundancies resulting in an efficient coder. In FMRME, the correlations among the orientation subimages of the wavelet pyramid structure are exploited resulting in an efficient motion estimation process. In addition, this significantly reduces side information for motion vectors which corresponds to significant improvements in coding performance of the FMRME based wavelet coder for video compression. Simulation results demonstrate the superior coding performance of the FMRME based wavelet transform coder. (Abstract shortened by UMI.)
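For readers unfamiliar with the baseline the abstract builds on, a naive full-search block matching sketch with a sum-of-absolute-differences criterion follows; the block size, search radius and SAD criterion are illustrative assumptions and do not reproduce the thesis's architecture or its MPDC criterion.

```python
import numpy as np

def full_search(ref, cur, top, left, block=8, radius=4):
    """Exhaustive block matching: find the displacement (dy, dx) in the
    reference frame that best matches the current block under SAD."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()   # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(0)
prev = rng.integers(0, 255, size=(64, 64))
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))    # simulate a global shift
print(full_search(prev, curr, top=24, left=24))     # best displacement and its SAD
```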
9

Korsell, Nicklas. "Statistical Properties of Preliminary Test Estimators." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7155.

10

Shafie, Termeh. "Design-based estimators for snowball sampling." Stockholms universitet, Statistiska institutionen, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-88948.

Abstract:
Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes different ways of approximating the selection probabilities and develops weighting techniques using the inverse of the selection probabilities. Some numerical examples for small graphs and simulations on larger networks are provided to compare the efficiency of the weighting techniques. The simulation results indicate that the suggested re-weighted estimators should be preferred to traditional estimators with equal sample weights for the initial snowball sampling waves.
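The re-weighting idea can be illustrated with a generic inverse-probability (Horvitz-Thompson style) weighted mean; approximating the selection probability by normalised in-degree in the sketch below is a placeholder assumption, not the specific approximations proposed in the paper.

```python
import numpy as np

def reweighted_mean(values, in_degrees):
    """Estimate a population mean from a snowball sample by weighting each
    respondent with the inverse of an approximate selection probability,
    here taken proportional to the respondent's number of in-links."""
    in_degrees = np.asarray(in_degrees, dtype=float)
    values = np.asarray(values, dtype=float)
    incl_prob = in_degrees / in_degrees.sum()      # crude proxy (assumption)
    weights = 1.0 / incl_prob
    return np.sum(weights * values) / np.sum(weights)

# toy data: observed trait and in-degree of five sampled persons
print(reweighted_mean([3, 5, 2, 4, 6], [10, 2, 1, 4, 8]))
```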
11

Vafaei, Sanaz. "Alternative estimators for weak gravitational lensing." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42151.

Abstract:
Weak gravitational lensing provides a means to measure the total mass in the Universe. The incoming light from distant galaxies is disturbed by the inhomogeneity of the dark matter distribution along the line of sight. The correlations of shape in an observed galaxy population can be used to probe the total mass density fluctuations in the Universe. Studies of correlations between galaxy shapes have been the basis of weak lensing research. In this thesis we investigate various non-conventional weak lensing statistics that are complementary to the traditional two-point shear correlation functions. The goal is to constrain the matter density Ωm and normalization of matter power spectrum σ₈ parameters. These higher order statistics have long been advocated as a powerful tool to break measured degeneracies between cosmological parameters. Using ray-tracing simulations, which incorporate important survey features such as a realistic depth-dependent redshift distribution, we find that joint two- and three-point correlation function analysis is a much stronger probe of cosmology than the two-point analysis alone. We apply the higher order statistics technique to the 160 deg² of the Canada-France-Hawaii-Telescope Legacy Survey (CFHTLS) and show preliminary results from the joint two- and three-point likelihood analysis. We reveal the possibilities that lie in the projected mass probability distribution function to discriminate models with different values of the matter density parameter. In the process we develop a hybrid data set based on the simulations and the CFHTLenS data for systematics testing and covariance matrix estimations. Our error analysis includes all non-Gaussian terms, finding that the coupling between cosmic variance and shot noise is a non-negligible contribution.
12

McElwain, Thomas P. "L-estimators used in CFAR detection." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/29199.

13

Aloupis, Greg. "On computing geometric estimators of location." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31181.

Abstract:
Let S be a data set of n points in R^d, and m̂ be a point in R^d which "best" describes S. Since the term "best" is subjective, there exist several definitions for finding m̂. However, it is generally agreed that such a definition, or estimator of location, should have certain statistical properties which make it robust. Most estimators of location assign a depth value to any point in R^d and define m̂ to be a point with maximum depth. Here, new results are presented concerning the computational complexity of estimators of location. We prove that in R^2 the computation of simplicial and halfspace depth of a point requires O(n log n) time, which matches the upper bound complexities of algorithms by Rousseeuw and Ruts. Our lower bounds also apply to two sign tests, that of Hodges and that of Oja and Nyblom. In addition, we propose algorithms which reduce the time complexity of calculating the points with greatest Oja and simplicial depth. Our fastest algorithms use O(n^3 log n) and O(n^4) time respectively, compared to the algorithms of Rousseeuw and Ruts which use O(n^5 log n) time. One of our algorithms may also be used to find a point with minimum weighted sum of distances to a set of n lines in O(n^2) time. This point is called the Fermat-Torricelli point of n lines by Roy Barbara, whose algorithm uses O(n^3) time. Finally, we propose a new estimator which arises from the notion of hyperplane depth recently defined by Rousseeuw and Hubert.
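For orientation, simplicial depth in the plane can be computed by brute force as the number of sample triangles containing the query point; this naive O(n^3) sketch is only a baseline for intuition, not one of the faster algorithms developed in the thesis.

```python
from itertools import combinations

def _sign(o, a, b):
    # z-component of the cross product (a - o) x (b - o): orientation test
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def simplicial_depth(p, points):
    """Naive O(n^3) simplicial depth of point p w.r.t. a planar sample:
    the number of triangles with vertices in the sample that contain p."""
    count = 0
    for a, b, c in combinations(points, 3):
        d1, d2, d3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
        has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
        has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
        if not (has_neg and has_pos):   # p is not strictly outside the triangle
            count += 1
    return count

print(simplicial_depth((0, 0), [(1, 0), (-1, 1), (-1, -1), (2, 2), (0, 3)]))
```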
14

Naz, Shamsher Ali. "Linear and nonlinear polynomial based estimators." Thesis, University of Strathclyde, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501676.

Abstract:
The thesis is concerned with the solution of signal processing estimation problems using polynomial techniques. An effort has been made to show that the polynomial approach has potential for linear as well as nonlinear estimation problems, in single-input single-output as well as multi-input multi-output systems. The goal is to derive estimators that have a simple structure, are computationally efficient and may be implemented in real-time systems more easily than existing techniques.
15

Gilmore, John Patrick. "Explicit Runge-Kutta global error estimators." Thesis, Teesside University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.410876.

16

Chandra, Hukum. "Improved direct estimators for small areas." Thesis, University of Southampton, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446613.

17

JUNIOR, ROGERIO VAZ DE ALMEIDA. "CURVATURE ESTIMATORS FOR CURVES IN R4." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21598@1.

Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
We present two methods for computing the differential geometric properties of a discrete curve in R4. The first is based on arc-length approximations; the second on discrete derivatives. These methods numerically estimate the curvatures k1, k2 and k3 and the tangent, normal, binormal and trinormal vectors at every point of the curve. Computations of these geometric properties are also presented for curves given in both parametric and implicit form, with the final goal of testing the consistency of the proposed methods against the theoretical results.
18

Rice, Michael, Mohammad Saquib, and Erik Perrins. "Estimators for iNET-Formatted SOQPSK-TG." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577462.

Abstract:
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
This paper presents algorithms for estimating the frequency offset, multipath channel coefficients, and noise variance of iNET-formatted SOQPSK-TG. The estimators compare the received signal samples corresponding to the iNET preamble and attached sync marker (ASM) bits to a locally stored copy of the SOQPSK-TG samples corresponding to the same. The mean and variance of the three estimators over ten test channels derived from channel sounding experiments at Edwards AFB are presented. The results show that usable estimates are achievable.
19

Zeileis, Achim. "Object-oriented Computation of Sandwich Estimators." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/1644/1/document.pdf.

Abstract:
Sandwich covariance matrix estimators are a popular tool in applied regression modeling for performing inference that is robust to certain types of model misspecification. Suitable implementations are available in the R system for statistical computing for certain model fitting functions only (in particular lm()), but not for other standard regression functions, such as glm(), nls(), or survreg(). Therefore, conceptual tools and their translation to computational tools in the package sandwich are discussed, enabling the computation of sandwich estimators in general parametric models. Object orientation can be achieved by providing a few extractor functions (most importantly for the empirical estimating functions) from which various types of sandwich estimators can be computed.
Series: Research Report Series / Department of Statistics and Mathematics
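Although the paper concerns R's sandwich package, the bread-meat-bread structure it computes can be illustrated independently of that API; the following numpy sketch computes a basic HC0-type sandwich covariance for ordinary least squares and is only a schematic of the idea, not the package's interface.

```python
import numpy as np

def ols_sandwich_cov(X, y):
    """HC0 heteroskedasticity-robust covariance for OLS coefficients:
    bread * meat * bread, with bread = (X'X)^{-1} and
    meat = X' diag(residual^2) X."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)     # OLS fit
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (resid ** 2)[:, None])     # sum_i e_i^2 x_i x_i'
    return beta, bread @ meat @ bread

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=200) * (1 + np.abs(X[:, 1]))
beta, cov = ols_sandwich_cov(X, y)
print(beta, np.sqrt(np.diag(cov)))               # coefficients and robust standard errors
```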
20

Carvalho, Miguel de. "Extremum estimators and stochastic optimization methods." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2009. http://hdl.handle.net/10362/5899.

Abstract:
Submitted in partial fulfillment for the Requirements for the Degree of PhD in Mathematics, in the Speciality of Statistics in the Faculdade de Ciências e Tecnologia
Extremum estimators form one of the broadest classes of statistical methods for obtaining consistent estimates. Ordinary Least Squares (OLS), the Generalized Method of Moments (GMM) and Maximum Likelihood (ML) are all given as solutions to an optimization problem of interest, and thus are particular instances of extremum estimators. One major concern regarding the computation of estimates of this type is the convergence behaviour of the method used to find the optimal solution. In fact, if the method employed can converge to a local solution, the consistency of the extremum estimator is no longer ensured. This thesis is concerned with the application of global stochastic search and optimization methods to the computation of estimates based on extremum estimators. For this purpose, a stochastic search algorithm is proposed and shown to be convergent. We provide applications to classical test functions, as well as to a variance component problem in a mixed linear model.
FCT(Fundação para a Ciência e a Tecnologia)- SFRH/BD/1569/2004
21

Dryver, Arthur L. "Adaptive sampling designs and associated estimators." Adobe Acrobat reader required to view the full dissertation, 1999. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-9/index.html.

22

Gui, Wenhao. "Adaptive series estimators for copula densities." Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-05072009-133639/.

Abstract:
Thesis (Ph. D.)--Florida State University, 2009.
Advisor: Marten Wegkamp, Florida State University, College of Arts and Sciences, Dept. of Statistics. Title and description from dissertation home page (viewed on Oct. 5, 2009). Document formatted into pages; contains xi, 87 pages. Includes bibliographical references.
23

Hariharan, S. "Channel estimators for HF radio links." Thesis, Loughborough University, 1988. https://dspace.lboro.ac.uk/2134/6733.

Abstract:
The thesis is concerned with the estimation of the sampled impulse-response (SIR) of a time-varying HF channel, where the estimators are used in the receiver of a 4800 bits/s, quaternary phase shift keyed (QPSK) system, operating at 2400 bauds with an 1800 Hz carrier. HF modems employing maximum-likelihood detectors at the receiver require accurate knowledge of the SIR of the channel. With this objective in view, the thesis considers a number of channel estimation techniques, using an idealised model of the data transmission system. The thesis briefly describes the ionospheric propagation medium and the factors affecting data transmission over HF radio. It then presents an equivalent baseband model of the HF channel, which has three separate Rayleigh fading paths (sky waves), with a 2 Hz frequency spread and transmission delays of 0, 1.1 and 3 milliseconds relative to the first sky wave. The estimation techniques studied are the Gradient estimator, the Recursive least-squares (RLS) Kalman estimator, the Adaptive channel estimators, the Efficient channel estimator (which takes into account prior knowledge of the number of fading paths in the channel), and the Fast Transversal Filter (FTF) estimator (which is a simplified form of the Kalman estimator). Several new algorithms based on the above-mentioned estimation techniques are also proposed. Results of the computer simulation tests on the performance of the estimators, over a typical worst channel, are then presented. The estimators are reasonably optimized to achieve the minimum mean-square estimation error and adequate allowance has been made for stabilization before the commencement of actual measurements. The results, therefore, represent the steady-state performance of the estimators. The most significant result, obtained in this study, is the performance of the Adaptive estimator. When the characteristics of the channel are known, the Efficient estimators have the best performance and the Gradient estimators the poorest. Kalman estimators are the most complex and Gradient estimators are the simplest. Kalman estimators have a performance rather similar to that of Gradient estimators. In terms of both performance and complexity, the Adaptive estimator lies between the Kalman and Efficient estimators. FTF estimators are known to exhibit numerical instability, for which an effective stabilization technique is proposed. Simulation tests have shown that the mean squared estimation error is an adequate measurement for comparison of the performance of the estimators.
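To make the simplest of the estimator families mentioned above concrete, here is a generic least-mean-squares (gradient) sketch for estimating a sampled impulse response; the tap count, step size and QPSK toy data are assumptions for illustration, not the thesis's optimised algorithms.

```python
import numpy as np

def lms_channel_estimate(tx, rx, n_taps=4, mu=0.05):
    """Gradient (LMS) estimate of a channel's sampled impulse response from
    a known transmitted sequence tx and the received sequence rx."""
    h = np.zeros(n_taps, dtype=complex)
    for n in range(n_taps - 1, len(tx)):
        x = tx[n - n_taps + 1:n + 1][::-1]       # most recent symbol first
        err = rx[n] - np.dot(h, x)               # prediction error
        h = h + mu * err * np.conj(x)            # stochastic-gradient update
    return h

rng = np.random.default_rng(1)
tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=2000) / np.sqrt(2)
true_h = np.array([0.8, 0.4 + 0.2j, 0.1, 0.05j])          # hypothetical channel taps
rx = np.convolve(tx, true_h)[:len(tx)] + 0.01 * rng.normal(size=len(tx))
print(lms_channel_estimate(tx, rx))                        # should approximate true_h
```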
24

Furlan, Benjamin, Martin Gächter, Bob Krebs, and Harald Oberhofer. "Democratization and real exchange rates." Wiley, 2016. http://dx.doi.org/10.1111/sjpe.12088.

Abstract:
In this article, we combine two so far separate strands of the economic literature and argue that democratization leads to a real exchange rate appreciation. We test this hypothesis empirically for a sample of countries observed from 1980 to 2007 by combining a difference-in-difference approach with propensity score matching estimators. Our empirical results reveal a strong and significant finding: democratization causes real exchange rates to appreciate. Consequently, the ongoing process of democratization observed in many parts of the world is likely to reduce exchange rate distortions.
25

He, Qing. "Investigating the performance of process-observation-error-estimator and robust estimators in surplus production model: a simulation study." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/76859.

Abstract:
This study investigated the performance of three estimators for the surplus production model: the process-observation-error estimator with normal distribution (POE_N), the observation-error estimator with normal distribution (OE_N), and the process-error estimator with normal distribution (PE_N). Estimators with fat-tailed distributions, including the Student's t and Cauchy distributions, were also proposed and their performance was compared with that of the estimators with normal distributions. This study used a Bayesian approach, revised the Metropolis-Hastings-within-Gibbs sampling algorithm (MHGS) previously used to fit POE_N (Millar and Meyer, 2000), developed the MHGS for the other estimators, and developed methodologies that enable all the estimators to handle data containing multiple indices based on catch-per-unit-effort (CPUE). A simulation study was conducted based on parameter estimates from two example fisheries: the Atlantic weakfish (Cynoscion regalis) and the southern stock of black sea bass (Centropristis striata). Our results indicated that POE_N is the estimator with the best performance among all six estimators with regard to both accuracy and precision in most cases. POE_N is also robust to outliers, atypical values, and autocorrelated errors. OE_N is the second best estimator. PE_N is often imprecise. Estimators with fat-tailed distributions usually produce some estimates that are more biased than those from estimators with normal distributions. The performance of POE_N and OE_N can be improved by fitting multiple indices. Our study suggests that POE_N be used for population dynamics models in future stock assessments. Multiple indices from valid surveys should be incorporated into stock assessment models. OE_N can be considered when multiple indices are available.
Master of Science
26

Marguet, Aline. "Branching processes for structured populations and estimators for cell division." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX073/document.

Abstract:
We study structured populations without interactions from a probabilistic and a statistical point of view. The underlying motivation of this work is the understanding of cell division mechanisms and of cell aging. We use the formalism of branching measure-valued Markov processes. In our model, each individual is characterized by a trait (age, size, etc...) which moves according to a Markov process. The rate of division of each individual is a function of its trait and when a branching event occurs, the trait of the descendants at birth depends on the trait of the mother and on the number of descendants. First, we study the trait of a uniformly sampled individual in the population. We explicitly describe the penalized Markov process, named auxiliary process, corresponding to the dynamic of the trait of a "typical" individual by giving its associated infinitesimal generator. Then, we study the asymptotic behavior of the empirical measure associated with the branching process. Under assumptions assuring the ergodicity of the auxiliary process, we prove that the auxiliary process asymptotically corresponds to the trait along its ancestral lineage of a uniformly sampled individual in the population. Finally, we address the problem of parameter estimation in the case of a branching process structured by a diffusion. We consider data composed of the trait at birth of all individuals in the population until a given generation. We give kernel estimators for the transition density and the invariant measure of the chain corresponding to the trait of an individual along a lineage. Moreover, in the case of a reflected diffusion on a compact set, we use maximum likelihood estimation to reconstruct the division rate. We prove consistency and asymptotic normality for this estimator. We also carry out the numerical implementation of the estimator.
27

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali. "Sequential Procedures for Nonparametric Kernel Regression." RMIT University, Mathematical and Geospatial Sciences, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090119.134815.

Abstract:
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to highlight important structure in the data without any assumptions about the shape of an underlying regression function. In regression, the random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel type estimators are most popular. Kernel type estimators provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be adapted for any kernel type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to a class of kernel type regression estimators called local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size that would be required to achieve a desired confidence level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, application of sequential procedures to a nonparametric regression model at a given point or series of points is considered. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential procedure of estimation that chooses a sample size based on recorded observations and guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable. These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators of nonparametric kernel regression with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely the two-stage, modified two-stage and purely sequential procedures. The proposed methodology is first tested by employing a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing coverage accuracy with the corresponding preset confidence coefficients, examining how closely the computed sample sizes match the optimal sample sizes, and contrasting the estimated values obtained from the two nonparametric methods with actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, which is considered an alternative method of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution and utilizes the percentiles of the approximate distribution to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is normally known to plague Stein's two-stage sequential procedure. The procedure developed is validated using an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, our proposed sequential nonparametric kernel regression methods are applied to some problems in software reliability and finance.
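For reference, the Nadaraya-Watson estimator discussed above is simply a kernel-weighted average; the sketch below uses a Gaussian kernel with a fixed, hand-picked bandwidth rather than the data-driven bandwidths and sequential stopping rules developed in the thesis.

```python
import numpy as np

def nadaraya_watson(x0, x, y, bandwidth=0.3):
    """Nadaraya-Watson kernel regression estimate of E[Y | X = x0]:
    a weighted average of the y's with Gaussian kernel weights."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(2)
x = rng.uniform(0, np.pi, 300)
y = np.sin(x) + 0.1 * rng.normal(size=300)
print(nadaraya_watson(1.0, x, y))   # should be close to sin(1.0) ≈ 0.841
```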
28

Serasinghe, Shyamalee Kumary. "A simulation comparison of parametric and nonparametric estimators of quantiles from right censored data." Kansas State University, 2010. http://hdl.handle.net/2097/4318.

Abstract:
Master of Science
Department of Statistics
Paul I. Nelson
Quantiles are useful in describing distributions of component lifetimes. Data, consisting of the lifetimes of sample units, used to estimate quantiles are often censored. Right censoring, the setting investigated here, occurs, for example, when some test units may still be functioning when the experiment is terminated. This study investigated and compared the performance of parametric and nonparametric estimators of quantiles from right censored data generated from Weibull and Lognormal distributions, models which are commonly used in analyzing lifetime data. Parametric quantile estimators based on these assumed models were compared via simulation to each other and to quantile estimators obtained from the nonparametric Kaplan-Meier estimator of the survival function. Various combinations of quantiles, censoring proportion, sample size, and distributions were considered. Our simulations show that the larger the sample size and the lower the censoring rate, the better the performance of the estimates of the 5th percentile of Weibull data. The lognormal data are very sensitive to the censoring rate and we observed that for higher censoring rates the incorrect parametric estimates perform the best. If the underlying distribution of the data is unknown, it is risky to use parametric estimates of quantiles close to one. A limitation in using the nonparametric estimator of large quantiles is its instability when the censoring rate is high and the largest observations are censored. Key Words: Quantiles, Right Censoring, Kaplan-Meier estimator
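For reference, the nonparametric competitor referred to above, the Kaplan-Meier estimator, can be computed in a few lines; the sketch below (including the toy data and the quantile-inversion rule) is illustrative and is not the study's simulation code.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from right-censored data.
    events[i] = 1 if the i-th time is an observed failure, 0 if censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    surv, s = [], 1.0
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i
        if d == 1:
            s *= 1.0 - 1.0 / at_risk
        surv.append((t, s))
    return surv

def km_quantile(surv, p):
    """Smallest observed time t with S(t) <= 1 - p (estimated p-th quantile)."""
    for t, s in surv:
        if s <= 1.0 - p:
            return t
    return None   # quantile not reached (too much censoring in the tail)

curve = kaplan_meier([2, 3, 3, 5, 8, 9, 12], [1, 1, 0, 1, 0, 1, 1])
print(km_quantile(curve, 0.5))   # estimated median lifetime
```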
29

Pirie, Iain W. S. "A comparison of some alternatives to least squares multiple regression." Thesis, University of Aberdeen, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318910.

Abstract:
Multiple linear regression techniques are often employed in the statistical analysis of data. The most frequently used regression procedure is ordinary least squares. However, it is accepted that the method of least squares does not necessarily provide either accurate estimates of unknown regression coefficients or accurate predictions at future data points. Several classes of biased estimators have emerged as possible alternatives to ordinary least squares. We review the origins, definitions and properties of existing biased estimation procedures such as ridge, shrinkage, principal components and partial least squares regression. In addition, two new classes of estimator, multistage and multistep, are introduced. Simulation is the obvious means for assessing the relative merits of different estimation procedures. We review the design and results of previous simulation studies in which comparisons have been made between the performances of different estimation procedures. The designs of most previous studies are somewhat limited and unrealistic. Consequently, few clear guidelines have emerged regarding the circumstances in which individual procedures should either be applied or avoided. To provide some clarification, we conducted a series of simulation experiments that were designed to compare the performances of different regression procedures over a broad range of realistic situations.
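As a concrete instance of one of the biased alternatives surveyed above, ridge regression has a simple closed form; the sketch below uses an arbitrary penalty value for illustration, not anything recommended in the thesis.

```python
import numpy as np

def ridge(X, y, lam=1.0):
    """Ridge estimate: beta = (X'X + lam * I)^{-1} X'y.
    Shrinks coefficients toward zero, trading bias for variance."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.normal(size=100)
print(ridge(X, y, lam=5.0))   # shrunken coefficient estimates
```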
30

Puzey, Anthony Stephen. "The determination of mortality rates from observed data." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358866.

31

Bahraoui, Zuhair. "Quantifying Risk using Copulae and Kernel Estimators." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/289620.

Abstract:
In this Thesis we addressed two very important themes related to quantitative risk management. On the one hand, we provided relevant results about the analysis of extreme value distributions; on the other hand, we also presented different results concerning the modelling of dependence between extreme value distributions. These results will be useful in the calculation of the capital requirement in the context of Solvency II, in terms of quantifying the risk when the data contain extreme values. This can occur when we analyse operational risk and subscription risk, where the company could have losses that have a very low probability but can reach high values, i.e. "rare cases". Specifically, two lines of research were examined in this Thesis: the dependence between two random variables from the viewpoint of copulae, and nonparametric methods for estimating the cumulative distribution function and quantiles. In addition, some questions related to the theory of extreme values were considered: extreme value copulae and the maximum domain of attraction of extreme value mixture distributions. Inference on copulae was necessary for analysing the structure of dependence between variables. For this, using the definition of max-stability, we generalised the extreme value copula test to cover a broader alternative hypothesis. In the context of copulae, nonparametric estimation of the cdf was useful for obtaining the pseudo-observations and for estimating the marginals. We proposed the use of new nonparametric methods that improve the accuracy of the risk estimates. To illustrate the usefulness of the methods analysed in this Thesis, we used data on the costs of accidents in auto insurance. Specifically, we used two databases: the first contains information from a sample of bivariate costs and the second contains information related to a sample of univariate costs for different types of policyholders. From our data, we found that the Gumbel copula with DTKE (double transformed kernel estimation) marginals provides a good fit. With this copula we obtained a balanced risk estimate that guarantees the risk is neither underestimated nor, where relevant, grossly overestimated. In many analyses (in economics, finance, insurance, demography, and elsewhere) the fit of the cdf is very important for evaluating the probability of extreme situations. In these cases, the data are usually generated by a continuous random variable whose distribution may be the result of a mixture of different EVDs; in that case neither classical parametric models nor classical nonparametric estimates work well for estimating the cdf. All of these problems are addressed in the last chapter. There we present a method for estimating the cdf that is suitable when the loss is a heavy-tailed random variable. The proposed double transformed kernel with bias correction generally provides a good fit for the Gumbel and Fréchet types of extreme value distributions, especially when the sample size is small. We show, when the sample size is small, that our proposed BCDTKE (bias-corrected double transformed kernel estimator) improves on the classical kernel estimator and the bias-corrected classical kernel estimator of the cumulative distribution function when the distribution is a right extreme value distribution and the maximum domain of attraction is the one associated with a Fréchet type distribution.
Finally we provided some theoretical results about the maximum domain of attraction of extreme value mixture distributions.We concluded that the heavier tail (Fréchet type) prevails over the lighter tails (Gumbel type).
The essential objective in the actuarial and financial field is to analyse the distribution associated with the total loss generated by a multivariate random vector. The components of this vector are in general mutually dependent losses, also called risk factors. The objective reduces to estimating the risk of the total loss while taking into account the relationship between these risk factors. Risk analysis may face two problems: (1) Which copulae best reflect the dependence structure between these factors? (2) How should the distribution function of the marginals be estimated and inserted into the copula? This Thesis investigates how to answer these two questions, that is, how to select the copula and how to estimate marginal distributions when we have extreme values or rare events. We can say that this Thesis focuses on two fundamental aspects of risk quantification. The first is related to copula theory and the estimation of the loss risk. The second analyses nonparametric and semiparametric methods for estimating the distribution function and quantiles. To illustrate the applicability of the proposed methods described in our research, we use a bivariate sample of losses from a real database drawn from the claims of an auto insurance company.
32

Kanani, Entela. "Robust estimators for geodetic transformations and GIS /." Zürich, 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13521.

33

Antonini, Claudia. "Folded Variance Estimators for Stationary Time Series." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6931.

Abstract:
This thesis is concerned with simulation output analysis. In particular, we are interested in estimating the variance parameter of a steady-state output process. The estimation of the variance parameter has immediate applications in problems involving (i) the precision of the sample mean as a point estimator for the steady-state mean μX, and (ii) confidence intervals for μX. The thesis focuses on new variance estimators arising from Schruben's method of standardized time series (STS). The main idea behind STS is to let such series converge to Brownian bridge processes; then their properties are used to derive estimators for the variance parameter. Following an idea from Shorack and Wellner, we study different levels of folded Brownian bridges. A folded Brownian bridge is obtained from the standard Brownian bridge process by folding it down the middle and then stretching it so that it spans the interval [0,1]. We formulate the folded STS, and deduce a simplified expression for it. Similarly, we define the weighted area under the folded Brownian bridge, and we obtain its asymptotic properties and distribution. We study the square of the weighted area under the folded STS (known as the folded area estimator) and the weighted area under the square of the folded STS (known as the folded Cramér-von Mises, or CvM, estimator) as estimators of the variance parameter of a stationary time series. In order to obtain results on the bias of the estimators, we provide a complete finite-sample analysis based on the mean-square error of the given estimators. Weights yielding first-order unbiased estimators are found in the area and CvM cases. Finally, we perform Monte Carlo simulations to test the efficacy of the new estimators on a test bed of stationary stochastic processes, including the first-order moving average and autoregressive processes and the waiting time process in a single-server Markovian queuing system.
34

Weir, Alison Diana. "Approximations for saddlepoint densities of M-estimators." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0025/NQ35367.pdf.

35

Nozza, Anna. "A study of estimators in linear models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ47801.pdf.

36

Chan, Tsz-hin, and 陳子軒. "Hybrid bootstrap procedures for shrinkage-type estimators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48521826.

Abstract:
In statistical inference, one is often interested in estimating the distribution of a root, which is a function of the data and the parameters only. Knowledge of the distribution of a root is useful for inference problems such as hypothesis testing and the construction of a confidence set. Shrinkage-type estimators have become popular in statistical inference due to their smaller mean squared errors. In this thesis, the performance of different bootstrap methods is investigated for estimating the distributions of roots which are constructed based on shrinkage estimators. Focus is on two shrinkage estimation problems, namely the James-Stein estimation and the model selection problem in simple linear regression. A hybrid bootstrap procedure and a bootstrap test method are proposed to estimate the distributions of the roots of interest. In the two shrinkage problems, the asymptotic errors of the traditional n-out-of-n bootstrap, m-out-of-n bootstrap and the proposed methods are derived under a moving parameter framework. The problem of the lack of uniform consistency of the n-out-of-n and the m-out-of-n bootstraps is exposed. It is shown that the proposed methods have better overall performance, in the sense that they yield improved convergence rates over almost the whole range of possible values of the underlying parameters. Simulation studies are carried out to illustrate the theoretical findings.
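For readers unfamiliar with the first of the two shrinkage problems mentioned, the classical James-Stein estimator itself is a one-liner; the sketch below assumes a p-dimensional normal mean with identity covariance and is unrelated to the bootstrap procedures developed in the thesis.

```python
import numpy as np

def james_stein(x):
    """James-Stein shrinkage of a single observation x ~ N_p(theta, I), p >= 3:
    shrink x toward the origin by a data-dependent factor."""
    p = len(x)
    shrink = 1.0 - (p - 2) / np.sum(x ** 2)
    return shrink * x

x = np.array([1.2, -0.4, 0.8, 2.1, -1.5])
print(james_stein(x))   # shrunken estimate of the 5-dimensional mean
```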
Statistics and Actuarial Science
Master of Philosophy
37

Warwick, Jane. "Selecting tuning parameters in minimum distance estimators." Thesis, Open University, 2002. http://oro.open.ac.uk/19918/.

Abstract:
Many minimum distance estimators have the potential to provide parameter estimates which are both robust and efficient and yet, despite these highly desirable theoretical properties, they are rarely used in practice. This is because the performance of these estimators is rarely guaranteed per se but obtained by placing a suitable value on some tuning parameter. Hence there is a risk involved in implementing these methods because if the value chosen for the tuning parameter is inappropriate for the data to which the method is applied, the resulting estimators may not have the desired theoretical properties and could even perform less well than one of the simpler, more widely used alternatives. There are currently no data-based methods available for deciding what value one should place on these tuning parameters hence the primary aim of this research is to develop an objective way of selecting values for the tuning parameters in minimum distance estimators so that the full potential of these estimators might be realised. This new method was initially developed to optimise the performance of the density power divergence estimator, which was proposed by Basu, Harris, Hjort and Jones [3]. The results were very promising so the method was then applied to two other minimum distance estimators and the results compared.
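For orientation on how a tuning parameter enters a minimum distance estimator, here is a sketch of the density power divergence criterion of Basu et al. for a normal model, minimised numerically; the data, the value of alpha and the use of the normal family's closed-form integral term are illustrative assumptions, not choices made in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha):
    """Empirical density power divergence criterion (up to a constant) for a
    N(mu, sigma^2) model: integral of f^(1+alpha) minus (1 + 1/alpha) times
    the sample mean of f^alpha.  alpha -> 0 recovers maximum likelihood."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                     # keep sigma positive
    f = norm.pdf(x, mu, sigma)
    integral = (1 + alpha) ** -0.5 * (2 * np.pi * sigma ** 2) ** (-alpha / 2)
    return integral - (1 + 1 / alpha) * np.mean(f ** alpha)

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])   # 5% outliers
fit = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x, 0.5))
print(fit.x[0], np.exp(fit.x[1]))   # robust estimates of mu and sigma
```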
38

Marsh, Patrick W. N. "Improved asymptotics for econometric estimators and tests." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243082.

39

JUNIOR, JOAO DOMINGOS GOMES DA SILVA. "CURVATURE ESTIMATORS BASED ON PARAMETRIC CURVE FITTING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6223@1.

Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Many applications in image processing and computer vision rely on geometric properties of curves, in particular their curvatures. Another important, but less exploited, property is the torsion for curves in space. Several methods of estimating the curvature of plane curves are known, most of them for digital curves. In this dissertation we survey these methods and propose a new method based on approximations by parabolic and cubic curves. We present a theoretical analysis of this method and also study the effect of noise. The new estimator is compared to other estimators and is seen to be very robust.
40

Hiller, Jean-François 1974. "On error estimators in finite element analysis." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88830.

41

Astl, Stefan Ludwig. "Suboptimal LULU-estimators in measurements containing outliers." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85833.

Abstract:
Thesis (MSc)--Stellenbosch University, 2013.
Techniques for estimating a signal in the presence of noise which contains outliers are currently not well developed. In this thesis, we consider a constant signal superimposed with a family of noise distributions structured as a tunable mixture f(x) = α g(x) + (1 − α) h(x) between finite-support components of "well-behaved" noise with small variance g(x) and of "impulsive" noise h(x) with a large amplitude and strongly asymmetric character. When α ≈ 1, h(x) can, for example, model a cosmic ray striking an experimental detector. In the first part of our work, a method for obtaining the expected values of the positive and negative pulses in the first resolution level of a LULU Discrete Pulse Transform (DPT) is established. Subsequent analysis of sequences smoothed by the operators L1U1 or U1L1 of LULU-theory shows that a robust estimator of the location parameter of g is achieved, in the sense that the contribution by h to the expected average of the smoothed sequences is suppressed to order (1 − α)² or higher. In cases where the specific shape of h can be difficult to guess due to the assumed lack of data, it is thus also shown to be of lesser importance. Furthermore, upon smoothing a sequence with L1U1 or U1L1, estimators for the scale parameters of the model distribution become easily available. In the second part of our work, the same problem and data are approached from a Bayesian inference perspective. The Bayesian estimators are found to be optimal in the sense that they make full use of available information in the data. Heuristic comparison shows, however, that Bayes estimators do not always outperform the LULU estimators. Although the Bayesian perspective provides much insight into the logical connections inherent in the problem, its estimators can be difficult to obtain in analytic form and are slow to compute numerically. Suboptimal LULU-estimators are shown to be reasonable compromises in practical problems.
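As a rough illustration of the smoothers referred to above, the sketch below assumes the standard width-one LULU operators from the literature (L1 as a maximum of adjacent minima, U1 as a minimum of adjacent maxima); it is a toy demonstration, not the estimators developed in the thesis.

```python
import numpy as np

def L1(x):
    """Width-one lower LULU smoother: max of the two adjacent running minima."""
    x = np.asarray(x, dtype=float)
    left = np.minimum(x[:-2], x[1:-1])
    right = np.minimum(x[1:-1], x[2:])
    out = x.copy()
    out[1:-1] = np.maximum(left, right)   # endpoints left unchanged
    return out

def U1(x):
    """Width-one upper LULU smoother: min of the two adjacent running maxima."""
    x = np.asarray(x, dtype=float)
    left = np.maximum(x[:-2], x[1:-1])
    right = np.maximum(x[1:-1], x[2:])
    out = x.copy()
    out[1:-1] = np.minimum(left, right)
    return out

signal = np.array([0.1, 0.2, 0.1, 5.0, 0.2, 0.1, 0.3, 0.2])   # one upward spike
print(L1(U1(signal)))   # the L1U1 composition removes the single-sample impulse
print(U1(L1(signal)))
```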
APA, Harvard, Vancouver, ISO, and other styles
42

Stephanou, Michael Jared. "Sequential nonparametric estimation via Hermite series estimators." Doctoral thesis, Faculty of Science, 2020. http://hdl.handle.net/11427/32998.

Full text
Abstract:
Algorithms for estimating the statistical properties of streams of data in real time, as well as for the efficient analysis of massive data sets, are becoming particularly pertinent given the increasing ubiquity of such data. In this thesis we introduce novel approaches to sequential (online) estimation in both stationary and non-stationary settings based on Hermite series density estimators. In the univariate context we apply Hermite series based distribution function estimators to sequential cumulative distribution function estimation. These distribution function estimators are particularly useful because they allow the sequential estimation of the full cumulative distribution function. This is in contrast to the empirical distribution function estimator and the smooth kernel distribution function estimator, which only allow sequential cumulative probability estimation at predefined values on the support of the associated density function. We explore the asymptotic consistency and robustness properties of the Hermite series based cumulative distribution function estimator, thereby addressing a gap in the literature. Given the sequential Hermite series based distribution function estimator, we obtain sequential quantile estimates numerically. Our algorithms go beyond existing sequential quantile estimation algorithms in that they allow arbitrary quantiles (as opposed to pre-specified quantiles) to be estimated at any point in time, in both the static and dynamic quantile estimation settings. In the bivariate context we introduce a Hermite series based sequential estimator for the Spearman's rank correlation coefficient and provide algorithms applicable in both the stationary and non-stationary settings. To treat the non-stationary setting, we introduce a novel, exponentially weighted estimator for the Spearman's rank correlation, which allows the local nonparametric correlation of a bivariate data stream to be tracked. To the best of our knowledge, this is the first algorithm to be proposed for estimating a time-varying Spearman's rank correlation that does not rely on a moving window approach. We explore the practical effectiveness of the Hermite series based estimators through real data and simulation studies, demonstrating competitive performance compared to leading existing algorithms. The potential applications of this work are manifold. Our sequential distribution function and quantile estimation algorithms can be applied to real-time anomaly and outlier detection, real-time provisioning for future demand, and real-time risk estimation, for example. The Hermite series based Spearman's rank correlation estimator can be applied to fast and robust online calculation of a correlation which may vary over time. Possible machine learning applications include fast feature selection and hierarchical clustering on massive data sets, amongst others.
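As a rough illustration of how a sequential Hermite series estimator can work, the sketch below maintains Hermite series coefficients as running (or exponentially weighted) averages and recovers a density and a crudely integrated distribution function from them. It is a simplified sketch of the general idea only; the class name, parameters, and the numerical-integration shortcut are chosen here for illustration and are not the estimators or algorithms of the thesis.

```python
import numpy as np

def hermite_functions(x, num_terms):
    """Normalised Hermite functions psi_0..psi_{num_terms-1} at points x, via the
    standard three-term recurrence."""
    x = np.atleast_1d(x).astype(float)
    psi = np.empty((num_terms, x.size))
    psi[0] = np.pi ** -0.25 * np.exp(-0.5 * x ** 2)
    if num_terms > 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for k in range(2, num_terms):
        psi[k] = np.sqrt(2.0 / k) * x * psi[k - 1] - np.sqrt((k - 1) / k) * psi[k - 2]
    return psi

class SequentialHermiteDensity:
    """Online Hermite series density estimate: coefficients are running means of
    psi_k(X_i); lam in (0, 1) switches to exponential forgetting for non-stationary
    streams."""
    def __init__(self, num_terms=30, lam=None):
        self.coeffs = np.zeros(num_terms)
        self.n = 0
        self.lam = lam

    def update(self, x_new):
        psi = hermite_functions(x_new, self.coeffs.size)[:, 0]
        self.n += 1
        if self.lam is None:                      # equally weighted running mean
            self.coeffs += (psi - self.coeffs) / self.n
        else:                                     # exponentially weighted update
            self.coeffs = (1 - self.lam) * self.coeffs + self.lam * psi

    def pdf(self, grid):
        return self.coeffs @ hermite_functions(grid, self.coeffs.size)

    def cdf(self, grid):                          # crude trapezoidal integral of the pdf
        pdf = np.clip(self.pdf(grid), 0.0, None)
        cdf = np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))
        return np.concatenate(([0.0], cdf))

# Stream standard normal observations one at a time.
rng = np.random.default_rng(1)
est = SequentialHermiteDensity(num_terms=30)
for obs in rng.standard_normal(5000):
    est.update(obs)
grid = np.linspace(-4, 4, 401)
print("F(0) estimate:", est.cdf(grid)[np.searchsorted(grid, 0.0)])  # roughly 0.5
```

Because each observation only touches the coefficient vector, arbitrary quantiles or probabilities can be read off from the current coefficients at any point in the stream, which is the basic appeal of this family of estimators.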
APA, Harvard, Vancouver, ISO, and other styles
43

Thiebaut, Nicolene Magrietha. "Statistical properties of forward selection regression estimators." Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/29520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Teweldemedhin, Amanuel. "Nonparametric estimators of mean residual life function." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1594477641&sid=16&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Warwick, Jane. "Selecting tuning parameters in minimum distance estimators." n.p., 2001. http://ethos.bl.uk/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Sheng. "Adaptive error estimators for electromagnetic field solvers." May be available electronically, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Zaeva, Maria. "Maximum likelihood estimators for circular structural model." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009m/zaeva.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Alabama at Birmingham, 2009.
Title from PDF title page (viewed Jan. 21, 2010). Additional advisors: Yulia Karpeshina, Ian Knowles, Rudi Weikard. Includes bibliographical references (p. 19).
APA, Harvard, Vancouver, ISO, and other styles
48

Andresen, Andreas. "Finite sample analysis of profile M-estimators." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17295.

Full text
Abstract:
In dieser Arbeit wird ein neuer Ansatz für die Analyse von Profile Maximierungsschätzern präsentiert. Es werden die Ergebnisse von Spokoiny (2011) verfeinert und angepasst für die Schätzung von Komponenten von endlich dimensionalen Parametern mittels der Maximierung eines Kriteriumfunktionals. Dabei werden Versionen des Wilks Phänomens und der Fisher-Erweiterung für endliche Stichproben hergeleitet und die dafür kritische Relation der Parameterdimension zum Stichprobenumfang gekennzeichnet für den Fall von identisch unabhängig verteilten Beobachtungen und eines hinreichend glatten Funktionals. Die Ergebnisse werden ausgeweitet für die Behandlung von Parametern in unendlich dimensionalen Hilberträumen. Dabei wird die Sieve-Methode von Grenander (1981) verwendet. Der Sieve-Bias wird durch übliche Regularitätsannahmen an den Parameter und das Funktional kontrolliert. Es wird jedoch keine Basis benötigt, die orthogonal in dem vom Modell induzierten Skalarprodukt ist. Weitere Hauptresultate sind zwei Konvergenzaussagen für die alternierende Maximierungsprozedur zur Approximation des Profile-Schätzers. Alle Resultate werden anhand der Analyse der Projection Pursuit Prozedur von Friedman (1981) veranschaulicht. Die Verwendung von Daubechies-Wavelets erlaubt es, unter natürlichen und üblichen Annahmen alle theoretischen Resultate der Arbeit anzuwenden.
This thesis presents a new approach to the analysis of profile M-estimators for finite samples. The results of Spokoiny (2011) are refined and adapted to the estimation of components of a finite dimensional parameter using the maximization of a criterion functional. Finite sample versions of the Wilks phenomenon and the Fisher expansion are obtained, and the critical ratio of parameter dimension to sample size is derived in the setting of i.i.d. samples and a smooth criterion functional. The results are extended to parameters in infinite dimensional Hilbert spaces using the sieve approach of Grenander (1981). The sieve bias is controlled via common regularity assumptions on the parameter and the functional, but our results do not rely on a basis that is orthogonal in the inner product induced by the model. Furthermore, the thesis presents two convergence results for the alternating maximization procedure. All results are exemplified in an application to the Projection Pursuit procedure of Friedman (1981). Under a set of natural and common assumptions, all theoretical results can be applied using Daubechies wavelets.
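To give a concrete, if simplified, picture of alternating maximization for a profile estimator, the sketch below profiles out a nuisance component of a toy least-squares criterion; the criterion, its closed-form coordinate updates, and all names are chosen here purely for illustration and are not the setting analysed in the thesis.

```python
import numpy as np

def criterion(theta, eta, x, y):
    """Toy criterion L(theta, eta) = -sum (y - eta - theta*x)^2;
    theta is the target component, eta the nuisance component."""
    return -np.sum((y - eta - theta * x) ** 2)

def profile_estimate(x, y, num_iter=50):
    """Alternating maximisation: maximise over eta with theta fixed, then over
    theta with eta fixed, approximating argmax_theta max_eta L(theta, eta)."""
    theta, eta = 0.0, 0.0
    for _ in range(num_iter):
        eta = np.mean(y - theta * x)                    # exact argmax over eta
        theta = np.sum(x * (y - eta)) / np.sum(x ** 2)  # exact argmax over theta
    return theta, eta

rng = np.random.default_rng(2)
x = rng.normal(2.0, 1.0, size=500)   # non-centred regressor, so the alternation genuinely iterates
y = 1.5 * x + 0.7 + rng.normal(0.0, 0.3, size=500)
theta_hat, eta_hat = profile_estimate(x, y)
print("profile estimate of theta:", theta_hat)          # close to 1.5
print("criterion at the estimate:", criterion(theta_hat, eta_hat, x, y))
```

Each sweep increases the criterion, and the limit approximates the profile M-estimator of theta obtained by maximising the criterion jointly over the nuisance.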
APA, Harvard, Vancouver, ISO, and other styles
49

Puccio, Elena. "COVARIANCE AND CORRELATION ESTIMATORS IN BIPARTITE SYSTEMS." Doctoral thesis, Università degli Studi di Palermo, 2017. http://hdl.handle.net/10447/221177.

Full text
Abstract:
We present a weighted estimator of the covariance and correlation in bipartite complex systems with a double layer of heterogeneity. The advantage of the weighted estimators lies in the fact that the unweighted sample covariance and correlation can be shown to be biased. Such a bias indeed affects real bipartite systems; we report its effects on two empirical systems, one social and the other biological. By contrast, our newly proposed weighted estimators remove the bias and are better suited to describing such systems.
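As a generic illustration of weighted versus unweighted estimation (the weights below are arbitrary and for demonstration only; they are not the specific weighting scheme proposed in this thesis):

```python
import numpy as np

def weighted_cov_corr(x, y, w):
    """Weighted covariance and correlation of paired samples x, y with
    non-negative weights w, normalised to sum to one."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov, cov / (sx * sy)

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.8, size=200)
uniform_w = np.ones(200)                  # reduces to the ordinary (1/n) estimates
example_w = rng.uniform(0.5, 2.0, 200)    # arbitrary heterogeneity weights, illustration only
print("unweighted:", weighted_cov_corr(x, y, uniform_w))
print("weighted  :", weighted_cov_corr(x, y, example_w))
```

With uniform weights the function reduces to the ordinary sample covariance and Pearson correlation (up to the 1/n versus 1/(n-1) normalisation), which provides a quick sanity check.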
APA, Harvard, Vancouver, ISO, and other styles
50

Comanescu, Mihai. "Flux and speed estimation techniques for sensorless control of induction motors." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1116338965.

Full text
Abstract:
Thesis (Ph.D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xv, 109 p.; also includes graphics. Includes bibliographical references (p. 106-109). Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
