
Dissertations / Theses on the topic 'Targeted Maximum Likelihood Estimation'


Consult the top 50 dissertations / theses for your research on the topic 'Targeted Maximum Likelihood Estimation.'


1

Schnitzer, Mireille. "Targeted maximum likelihood estimation for longitudinal data." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114242.

Full text
Abstract:
Semiparametric efficient methods in causal inference have been developed to robustly and efficiently estimate causal parameters. As in general causal estimation, the methods rely on a set of mathematical assumptions that translate into requirements of causal knowledge and confounder identification. Targeted maximum likelihood estimation (TMLE) methodology has been developed as a potential improvement on efficient estimating equations, in that it shares the qualities of double robustness (unbiasedness under partial misspecification) and semiparametric efficiency, but can be constructed to provide boundedness of parameter estimates, robustness to data sparsity, and a unique estimate. This thesis, composed primarily of three manuscripts, presents new research on the analysis of longitudinal and survival data with time-dependent confounders using TMLE. The first manuscript describes the construction of a two time-point TMLE using a generalized exponential distribution family member as the loss function for the outcome model. It demonstrates the robustness of the continuous version of this TMLE algorithm in a simulation study, and uses a modified version of the method in a simplified analysis of the PROmotion of Breastfeeding Intervention Trial (PROBIT) where evidence for a protective causal effect of breastfeeding on gastrointestinal infection is obtained. The second manuscript presents a description of several substitution estimators for longitudinal data, a specialized implementation of a longitudinal TMLE method, and a case study using the full PROBIT dataset. The K time point sequential TMLE algorithm employed (theory previously developed), implemented nonparametrically using Super Learner, differs fundamentally from the strategy used in the first manuscript, and offers some benefits in computation and ease of implementation. The analysis compares different durations of breastfeeding and the related exposure-specific (and censoring-free) mean counts of gastrointestinal infections over the first year of an infant's life and concludes that a protective effect is present. Simulated data mirroring the PROBIT dataset was generated, and the performance of TMLE was again assessed. The third manuscript develops a methodology to estimate marginal structural models for survival data. Utilizing the sequential longitudinal TMLE algorithm to estimate the exposure-specific survival curves for all exposure patterns, it demonstrates a way to combine inference in order to model the outcome using a linear specification. This article presents the theoretical construction of two different types of marginal structural models (modeling the log-odds survival and the hazard) and presents a simulation study demonstrating the unbiasedness of the technique. It then describes an analysis of the Canadian Co-infection Cohort study undertaken with one of the TMLE methods to fit survival curves and a model for the hazard function of development of end-stage liver disease (ESLD) conditional on time and clearance of the Hepatitis C virus.
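For readers unfamiliar with the targeting ("fluctuation") step that this abstract refers to, the sketch below illustrates a single-time-point TMLE of the exposure-specific mean E[Y(1)] in Python. The simulated data, the plain logistic working models, and the variable names (W, A, Y) are assumptions made purely for illustration; the thesis itself develops longitudinal and survival versions of the procedure.

import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))                                              # baseline confounders
A = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1]))))  # binary exposure
Y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + A + 0.6 * W[:, 0]))))       # binary outcome

def expit(x):
    return 1 / (1 + np.exp(-x))

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

# Step 1: initial outcome regression Q(A, W) and propensity score g(W).
Q_fit = LogisticRegression().fit(np.column_stack([A, W]), Y)
g_fit = LogisticRegression().fit(W, A)
QA = Q_fit.predict_proba(np.column_stack([A, W]))[:, 1]            # Q(A_i, W_i)
Q1 = Q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]   # Q(1, W_i)
g1 = g_fit.predict_proba(W)[:, 1]                                  # P(A = 1 | W_i)

# Step 2: targeting step, a logistic fluctuation with offset logit(QA) and
# "clever covariate" H = A / g1, fitted by maximum likelihood.
H = A / g1
def neg_loglik(eps):
    p = np.clip(expit(logit(QA) + eps * H), 1e-9, 1 - 1e-9)
    return -np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))
eps = minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded").x

# Step 3: update Q(1, W) and take the plug-in (substitution) estimate of E[Y(1)].
Q1_star = expit(logit(Q1) + eps * (1 / g1))
print("TMLE estimate of E[Y(1)]:", Q1_star.mean())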
APA, Harvard, Vancouver, ISO, and other styles
2

Sarovar, Varada. "Targeted Maximum Likelihood Estimation for Evaluation of the Health Impacts of Air Pollution." Thesis, University of California, Berkeley, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10279902.

Full text
Abstract:

The adverse effects of air pollution on human life are of serious concern for today's society. Two population groups that are especially vulnerable to air pollution are pregnant women and their growing fetuses, and the focus of this thesis is to study the effects of air pollution on these populations. In order to address the methodological limitations in prior research, we quantify the impact of air pollution on various adverse pregnancy outcomes, utilizing machine learning and novel causal inference methods. Specifically, we utilize two semi-parametric, double robust, asymptotically efficient substitution estimators to estimate the causal attributable risk of various pregnancy outcomes of interest. Model fitting via machine learning algorithms helps to avoid reliance on misspecified parametric models and thereby improve both the robustness and precision of our estimates, ensuring meaningful statistical inference. Under assumptions, the causal attributable risk that we estimate translates to the absolute change in adverse pregnancy outcome risk that would be observed under a hypothetical intervention to change pollution levels, relative to currently observed levels. The estimated causal attributable risk provides a quantitative estimate of a quantity with more immediate public health and policy relevance.

APA, Harvard, Vancouver, ISO, and other styles
3

Khanafer, Sajida. "Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23422.

Full text
Abstract:
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object and/or limb's location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined if the brain is able to adjust which sensory cue it weights the most. Specifically, we asked if the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions (e.g., on V and VP trials, the visual target was available for the entire reach, was removed at the go-signal, or was removed 1, 2 or 5 seconds before the go-signal). Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects, the visual delay was kept consistent within a block of trials, while for the other 12 subjects, different visual delays were intermixed within a block of trials. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches to VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects' weighting strategy.
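The maximum-likelihood cue-combination rule cited above (Ernst & Banks, 2002) weights each cue by its inverse variance, so the combined estimate is less variable than either cue alone. The short Python sketch below, with made-up numbers, illustrates the rule; it is not code from the thesis.

def mle_combine(x_v, var_v, x_p, var_p):
    """Combine a visual and a proprioceptive location estimate by inverse-variance weighting."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)   # weight on the visual cue
    w_p = 1 - w_v                                  # weight on the proprioceptive cue
    x_vp = w_v * x_v + w_p * x_p                   # combined (VP) location estimate
    var_vp = 1 / (1 / var_v + 1 / var_p)           # combined variance, smaller than either cue's
    return x_vp, var_vp

# A reliable visual cue receives most of the weight.
print(mle_combine(x_v=10.0, var_v=1.0, x_p=12.0, var_p=4.0))   # -> (10.4, 0.8)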
APA, Harvard, Vancouver, ISO, and other styles
4

Ruprecht, Jürg. "Maximum likelihood estimation of multipath channels /." [S.l.] : [s.n.], 1989. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=8789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Horbelt, Werner. "Maximum likelihood estimation in dynamical systems." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963810812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sabbagh, Yvonne. "Maximum Likelihood Estimation of Hammerstein Models." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2061.

Full text
Abstract:

In this Master's thesis, Maximum Likelihood-based parametric identification methods for discrete-time SISO Hammerstein models from perturbed observations on both input and output are investigated.

Hammerstein models, consisting of a static nonlinear block followed by a dynamic linear one, are widely applied to modeling nonlinear dynamic systems, i.e., dynamic systems having a nonlinearity at their input.

Two identification methods are proposed. The first one assumes a Hammerstein model where the input signal is noise-free and the output signal is perturbed with colored noise. The second assumes, however, white noises added to the input and output of the nonlinearity and to the output of the whole considered Hammerstein model. Both methods operate directly in the time domain and their properties are illustrated by a number of simulated examples. It should be observed that attention is focused on derivation, numerical calculation, and simulation corresponding to the first identification method mentioned above.
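As a concrete picture of the model class described above, the following Python sketch simulates a toy Hammerstein system: a static nonlinearity followed by a first-order linear dynamic block, with white noise added to both measured signals. The cubic nonlinearity and the filter coefficient are illustrative assumptions, not the systems identified in the thesis.

import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, size=200)            # input signal
x = u + 0.5 * u**3                          # static nonlinear block f(u), here a cubic
y = np.zeros_like(x)
for t in range(1, len(x)):                  # linear dynamic block: y[t] = 0.7*y[t-1] + x[t-1]
    y[t] = 0.7 * y[t - 1] + x[t - 1]
y_obs = y + 0.05 * rng.normal(size=len(y))  # measured output, perturbed by white noise
u_obs = u + 0.05 * rng.normal(size=len(u))  # measured input, also perturbed (second method's setting)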

APA, Harvard, Vancouver, ISO, and other styles
7

Leeuw, Johannes Leonardus van der. "Maximum likelihood estimation of exact ARMA models /." Tilburg : Tilburg University Press, 1997. http://www.gbv.de/dms/goettingen/265169976.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ehlers, Rene. "Maximum likelihood estimation procedures for categorical data." Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-07222005-124541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zou, Yiqun. "Attainment of Global Convergence in Maximum Likelihood Estimation." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mariano, Machado Robson José. "Penalised maximum likelihood estimation for multi-state models." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10060352/.

Full text
Abstract:
Multi-state models can be used to analyse processes where change of status over time is of interest. In medical research, processes are commonly defined by a set of living states and a dead state. Transition times between living states are often interval censored. In this case, models are usually formulated in a Markov processes framework. The likelihood function is then constructed using transition probabilities. Models are specified using proportional hazards for the effect of covariates on transition intensities. Time-dependency is usually defined by parametric models, which can represent a strong model assumption. Semiparametric hazards specification with splines is a more flexible method for modelling time-dependency in multi-state models. Penalised maximum likelihood is used to estimate these models. Selecting the optimal amount of smoothing is challenging as the problem involves multiple penalties. This thesis aims to develop methods to estimate multi-state models with splines for interval-censored data. We propose a penalised likelihood method to estimate multi-state models that allow for parametric and semiparametric hazards specifications. The estimation is based on a scoring algorithm, and a grid search method to estimate the smoothing parameters. This method is shown using an application to ageing research. Furthermore, we extend the proposed method by developing a computationally more efficient method to estimate multi-state models with splines. For this extension, the estimation is based on a scoring algorithm, and an automatic smoothing parameters selection. The extended method is illustrated with two data analyses and a simulation study.
APA, Harvard, Vancouver, ISO, and other styles
11

Weng, Yu. "Maximum Likelihood Estimation of Logistic Sinusoidal Regression Models." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc407796/.

Full text
Abstract:
We consider the problem of maximum likelihood estimation of logistic sinusoidal regression models and develop some asymptotic theory including the consistency and joint rates of convergence for the maximum likelihood estimators. The key techniques build upon a synthesis of the results of Walker and Song and Li for the widely studied sinusoidal regression model and on making a connection to a result of Radchenko. Monte Carlo simulations are also presented to demonstrate the finite-sample performance of the estimators.
APA, Harvard, Vancouver, ISO, and other styles
12

DeGroot, Don Johan. "Maximum likelihood estimation of spatially correlated soil properties." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15282.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1985.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 109-110.
by Don Johan DeGroot.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
13

John, Andrea. "Maximum likelihood estimation in mis-specified reliability distributions." Thesis, Swansea University, 2003. https://cronfa.swan.ac.uk/Record/cronfa42494.

Full text
Abstract:
This thesis examines some effects of fitting the wrong distribution to reliability data. The parametric analysis of any data usually assumes that the form of the underlying distribution is known. In practice, however, the choice of distribution is subject to error, so the analysis could involve estimating parameters from a mis-specified model. In this thesis, we consider theoretical and practical aspects of maximum likelihood estimation under such mis-specification. Due to its popularity and wide use, we take the Weibull distribution to be the mis-specified model, and look at the effects of fitting this distribution to data from underlying Burr, Gamma and Lognormal models. We use entropy to obtain the theoretical counterparts to the Weibull maximum likelihood estimates, and obtain theoretical results on the distribution of the mis-specified Weibull maximum likelihood estimates and quantiles such as B\Q. Initially, these results are obtained for complete data, and then extended to type I and II censoring regimes, where consideration of terms in the likelihood and entropy functions leads to a detailed consideration of the properties of order statistics of the distributions. We also carry out a similar investigation on accelerated data sets, where there is additional complexity due to links between accelerating factors and scale parameters in reliability distributions. These links are also open to mis-specification, so allowing for various combinations of true and mis-specified models. We present theoretical results for general scale-stress relationships, but focus on practical results for the Log-linear and Arrhenius models, since these are the two relationships most widely used. Finally, we link both acceleration and censoring, and obtain theoretical results for a type II censoring regime at the lowest stress level.
APA, Harvard, Vancouver, ISO, and other styles
14

White, Scott Ian. "Stochastic volatility: Maximum likelihood estimation and specification testing." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16220/1/Scott_White_Thesis.pdf.

Full text
Abstract:
Stochastic volatility (SV) models provide a means of tracking and forecasting the variance of financial asset returns. While SV models have a number of theoretical advantages over competing variance modelling procedures they are notoriously difficult to estimate. The distinguishing feature of the SV estimation literature is that those algorithms that provide accurate parameter estimates are conceptually demanding and require a significant amount of computational resources to implement. Furthermore, although a significant number of distinct SV specifications exist, little attention has been paid to how one would choose the appropriate specification for a given data series. Motivated by these facts, a likelihood based joint estimation and specification testing procedure for SV models is introduced that significantly overcomes the operational issues surrounding existing estimators. The estimation and specification testing procedures in this thesis are made possible by the introduction of a discrete nonlinear filtering (DNF) algorithm. This procedure uses the nonlinear filtering set of equations to provide maximum likelihood estimates for the general class of nonlinear latent variable problems which includes the SV model class. The DNF algorithm provides a fast and accurate implementation of the nonlinear filtering equations by treating the continuously valued state-variable as if it were a discrete Markov variable with a large number of states. When the DNF procedure is applied to the standard SV model, very accurate parameter estimates are obtained. Since the accuracy of the DNF is comparable to other procedures, its advantages are seen as ease and speed of implementation and the provision of online filtering (prediction) of variance. Additionally, the DNF procedure is very flexible and can be used for any dynamic latent variable problem with closed form likelihood and transition functions. Likelihood based specification testing for non-nested SV specifications is undertaken by formulating and estimating an encompassing model that nests two competing SV models. Likelihood ratio statistics are then used to make judgements regarding the optimal SV specification. The proposed framework is applied to SV models that incorporate either extreme returns or asymmetries.
APA, Harvard, Vancouver, ISO, and other styles
15

White, Scott Ian. "Stochastic volatility : maximum likelihood estimation and specification testing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16220/.

Full text
Abstract:
Stochastic volatility (SV) models provide a means of tracking and forecasting the variance of financial asset returns. While SV models have a number of theoretical advantages over competing variance modelling procedures they are notoriously difficult to estimate. The distinguishing feature of the SV estimation literature is that those algorithms that provide accurate parameter estimates are conceptually demanding and require a significant amount of computational resources to implement. Furthermore, although a significant number of distinct SV specifications exist, little attention has been paid to how one would choose the appropriate specification for a given data series. Motivated by these facts, a likelihood based joint estimation and specification testing procedure for SV models is introduced that significantly overcomes the operational issues surrounding existing estimators. The estimation and specification testing procedures in this thesis are made possible by the introduction of a discrete nonlinear filtering (DNF) algorithm. This procedure uses the nonlinear filtering set of equations to provide maximum likelihood estimates for the general class of nonlinear latent variable problems which includes the SV model class. The DNF algorithm provides a fast and accurate implementation of the nonlinear filtering equations by treating the continuously valued state-variable as if it were a discrete Markov variable with a large number of states. When the DNF procedure is applied to the standard SV model, very accurate parameter estimates are obtained. Since the accuracy of the DNF is comparable to other procedures, its advantages are seen as ease and speed of implementation and the provision of online filtering (prediction) of variance. Additionally, the DNF procedure is very flexible and can be used for any dynamic latent variable problem with closed form likelihood and transition functions. Likelihood based specification testing for non-nested SV specifications is undertaken by formulating and estimating an encompassing model that nests two competing SV models. Likelihood ratio statistics are then used to make judgements regarding the optimal SV specification. The proposed framework is applied to SV models that incorporate either extreme returns or asymmetries.
APA, Harvard, Vancouver, ISO, and other styles
16

Zaeva, Maria. "Maximum likelihood estimators for circular structural model." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009m/zaeva.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Alabama at Birmingham, 2009.
Title from PDF title page (viewed Jan. 21, 2010). Additional advisors: Yulia Karpeshina, Ian Knowles, Rudi Weikard. Includes bibliographical references (p. 19).
APA, Harvard, Vancouver, ISO, and other styles
17

Cule, Madeleine. "Maximum likelihood estimation of a multivariate log-concave density." Thesis, University of Cambridge, 2010. https://www.repository.cam.ac.uk/handle/1810/237061.

Full text
Abstract:
Density estimation is a fundamental statistical problem. Many methods are either sensitive to model misspecification (parametric models) or difficult to calibrate, especially for multivariate data (nonparametric smoothing methods). We propose an alternative approach using maximum likelihood under a qualitative assumption on the shape of the density, specifically log-concavity. The class of log-concave densities includes many common parametric families and has desirable properties. For univariate data, these estimators are relatively well understood, and are gaining in popularity in theory and practice. We discuss extensions for multivariate data, which require different techniques. After establishing existence and uniqueness of the log-concave maximum likelihood estimator for multivariate data, we see that a reformulation allows us to compute it using standard convex optimization techniques. Unlike kernel density estimation, or other nonparametric smoothing methods, this is a fully automatic procedure, and no additional tuning parameters are required. Since the assumption of log-concavity is non-trivial, we introduce a method for assessing the suitability of this shape constraint and apply it to several simulated datasets and one real dataset. Density estimation is often one stage in a more complicated statistical procedure. With this in mind, we show how the estimator may be used for plug-in estimation of statistical functionals. A second important extension is the use of log-concave components in mixture models. We illustrate how we may use an EM-style algorithm to fit mixture models where the number of components is known. Applications to visualization and classification are presented. In the latter case, improvement over a Gaussian mixture model is demonstrated. Performance for density estimation is evaluated in two ways. Firstly, we consider Hellinger convergence (the usual metric of theoretical convergence results for nonparametric maximum likelihood estimators). We prove consistency with respect to this metric and heuristically discuss rates of convergence and model misspecification, supported by empirical investigation. Secondly, we use the mean integrated squared error to demonstrate favourable performance compared with kernel density estimates using a variety of bandwidth selectors, including sophisticated adaptive methods. Throughout, we emphasise the development of stable numerical procedures able to handle the additional complexity of multivariate data.
APA, Harvard, Vancouver, ISO, and other styles
18

Hartford, Alan Hughes. "Computational approaches for maximum likelihood estimation for nonlinearmixed models." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000719-081254.

Full text
Abstract:

The nonlinear mixed model is an important tool for analyzing pharmacokinetic and other repeated-measures data. In particular, these models are used when the measured response for an individual has a nonlinear relationship with unknown, random, individual-specific parameters. Ideally, the method of maximum likelihood is used to find estimates for the parameters of the model after integrating out the random effects in the conditional likelihood. However, closed form solutions to the integral are generally not available. As a result, methods have been previously developed to find approximate maximum likelihood estimates for the parameters in the nonlinear mixed model. These approximate methods include First Order linearization, Laplace's approximation, importance sampling, and Gaussian quadrature. The methods are available today in several software packages for models of limited sophistication; constant conditional error variance is required for proper utilization of most software. In addition, distributional assumptions are needed. This work investigates how robust two of these methods, First Order linearization and Laplace's approximation, are to these assumptions. The finding is that Laplace's approximation performs well, resulting in better estimation than first order linearization when both models converge to a solution.

A method must provide good estimates of the likelihood at points in the parameter space near the solution. This work compares this ability among the numerical integration techniques: Gaussian quadrature, importance sampling, and Laplace's approximation. A new "scaled" and "centered" version of Gaussian quadrature is found to be the most accurate technique. In addition, the technique requires evaluation of the integrand at only a few abscissas. Laplace's method also performs well; it is more accurate than importance sampling with even 100 importance samples over two dimensions. Even so, Laplace's method still does not perform as well as Gaussian quadrature. Overall, Laplace's approximation performs better than expected, and is shown to be a reliable method while still computationally less demanding.

This work also introduces a new method to maximize the likelihood. This method can be sharpened to any desired level of accuracy. Stochastic approximation is incorporated to continue sampling until enough information is gathered to result in accurate estimation. This new method is shown to work well for linear mixed models, but is not yet successful for the nonlinear mixed model.
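As a minimal illustration of the quadrature idea compared above, the Python sketch below approximates a marginal log-likelihood by Gauss-Hermite quadrature, integrating a random intercept out of a conditional likelihood. The Poisson-with-random-intercept model and the parameter values are assumptions for illustration; they are not the pharmacokinetic models or the scaled and centered quadrature studied in this work.

import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(y, mu, sigma, n_nodes=20):
    """log of integral N(b; 0, sigma^2) * prod_j Poisson(y_j | exp(mu + b)) db,
    approximated by Gauss-Hermite quadrature (up to the sum(log y_j!) constant)."""
    nodes, weights = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes                 # change of variables for the Gaussian weight
    lam = np.exp(mu + b)[:, None]                    # Poisson rate at each quadrature node
    log_f = np.sum(y * np.log(lam) - lam, axis=1)    # conditional log-likelihood at each node
    return np.log(np.sum(weights * np.exp(log_f)) / np.sqrt(np.pi))

print(marginal_loglik(np.array([2, 3, 1]), mu=0.5, sigma=0.8))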

APA, Harvard, Vancouver, ISO, and other styles
19

Storer, Robert Hedley. "Adaptive estimation by maximum likelihood fitting of Johnson distributions." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/24082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Qiang. "Maximum likelihood estimation of phylogenetic tree with evolutionary parameters." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1083177084.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xi, 167 p.; also includes graphics. Includes bibliographical references (p. 157-167). Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
21

Fischer, Mareike. "Novel Mathematical Aspects of Phylogenetic Estimation." Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2331.

Full text
Abstract:
In evolutionary biology, genetic sequences carry with them a trace of the underlying tree that describes their evolution from a common ancestral sequence. Inferring this underlying tree is challenging. We investigate some curious cases in which different methods like Maximum Parsimony, Maximum Likelihood and distance-based methods lead to different trees. Moreover, we state that in some cases, ancestral sequences can be more reliably reconstructed when some of the leaves of the tree are ignored - even if these leaves are close to the root. While all these findings show problems inherent to either the assumed model or the applied method, sometimes an inaccurate tree reconstruction is simply due to insufficient data. This is particularly problematic when a rapid divergence event occurred in the distant past. We analyze an idealized form of this problem and determine a tight lower bound on the growth rate for the sequence length required to resolve the tree (independent of any particular branch length). Finally, we investigate the problem of intermediates in the fossil record. The extent of ‘gaps’ (missing transitional stages) has been used to argue against gradual evolution from a common ancestor. We take an analytical approach and demonstrate why, under certain sampling conditions, we may not expect intermediates to be found.
APA, Harvard, Vancouver, ISO, and other styles
22

Boscarino, Andrea. "Deep Learning Models with Stochastic Targets: an Application for Transprecision Computing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20078/.

Full text
Abstract:
This thesis is part of a broad project funded by the European Union under the Horizon 2020 research and innovation programme, Open Transprecision Computing (OPRECOMP). The four-year project aims to overcome the conservative assumption that every computation carried out by computing systems and applications must be executed at maximum numerical precision. That assumption has so far been reasonable in view of computational efficiency improving steadily over time, according to Moore's law. As is well known, in the current era this law has begun to lose validity as the physical limits that prevent the large year-on-year improvements once expected are approached, leaving room only for marginal gains. The approach proposed by the OPRECOMP project (whose development is intended to benefit applications ranging from small Internet-of-Things computing nodes up to High Performance Computing centres) is based on the Transprecision Computing paradigm, which abandons the maximum-precision assumption in favour of approximate computation; this paradigm yields a double benefit: shorter and more efficient computations and, above all, energy savings. To do so, OPRECOMP exploits the principle that almost every computational application uses intermediate computation nodes whose precision can be tuned (in a controlled way) with minimal consequences for the reliability of the final result. This thesis explores machine learning solutions and methodologies (in particular stochastic models, i.e., probability distributions characterised by a mean error and a variance) with the aim of learning the relationship between the number of bits chosen for the variables of several mathematical benchmarks and the corresponding error measured with respect to the same computation executed at maximum precision.
APA, Harvard, Vancouver, ISO, and other styles
23

Xue, Huitian, and 薛惠天. "Maximum likelihood estimation of parameters with constraints in normaland multinomial distributions." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47850012.

Full text
Abstract:
Motivated by problems in medicine, biology, engineering and economics, constrained parameter problems arise in a wide variety of applications. Among them the application to the dose-response of a certain drug in development has attracted much interest. To investigate such a relationship, we often need to conduct a dose-response experiment with multiple groups associated with multiple dose levels of the drug. The dose-response relationship can be modeled by a shape-restricted normal regression. We develop an iterative two-step ascent algorithm to estimate normal means and variances subject to simultaneous constraints. Each iteration consists of two parts: an expectation-maximization (EM) algorithm that is utilized in Step 1 to compute the maximum likelihood estimates (MLEs) of the restricted means when variances are given, and a newly developed restricted De Pierro algorithm that is used in Step 2 to find the MLEs of the restricted variances when means are given. These constraints include the simple order, tree order, umbrella order, and so on. A bootstrap approach is provided to calculate standard errors of the restricted MLEs. Applications to the analysis of two real datasets on radioimmunological assay of cortisol and bioassay of peptides are presented to illustrate the proposed methods. Liu (2000) discussed the maximum likelihood estimation and Bayesian estimation in a multinomial model with simplex constraints by formulating this constrained parameter problem into an unconstrained parameter problem in the framework of missing data. To utilize the EM and data augmentation (DA) algorithms, he introduced latent variables {Z_il, Y_il} (to be defined later). However, the proposed DA algorithm in his paper did not provide the necessary individual conditional distributions of Y_il given (the observed data and) the updated parameter estimates. Indeed, the EM algorithm developed in his paper is based on the assumption that {Y_il} are fixed given values. Fortunately, the EM algorithm is invariant under any choice of the value of Y_il, so the final result is always correct. We have derived the aforesaid conditional distributions and hence provide a valid DA algorithm. A real data set is used for illustration.
Statistics and Actuarial Science
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
24

Leroux, Brian. "Maximum likelihood estimation for mixture distributions and hidden Markov models." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/29176.

Full text
Abstract:
This thesis deals with computational and theoretical aspects of maximum likelihood estimation for data from a mixture model and a hidden Markov model. A maximum penalized likelihood method is proposed for estimating the number of components in a mixture distribution. This method produces a consistent estimator of the unknown mixing distribution, in the sense of weak convergence of distribution functions. The proof of this result consists of establishing consistency results concerning maximum likelihood estimators (which have unrestricted number of components) and constrained maximum likelihood estimators (which assume a fixed finite number of components). In particular, a new proof of the consistency of maximum likelihood estimators is given. Also, the large sample limits of a sequence of constrained maximum likelihood estimators are identified as those distributions minimizing Kullback-Leibler divergence from the true distribution. If the number of components of the true mixture distribution is not greater than the assumed number, the constrained maximum likelihood estimator is consistent in the sense of weak convergence. If the assumed number of components is exactly correct, the estimators of the parameters which define the mixing distribution are also consistent (in a certain sense). An algorithm for computation of maximum likelihood estimates (and the maximum penalized likelihood estimate) is given. The EM algorithm is used to locate local maxima of the likelihood function and a method of automatically generating "good" starting values for each possible number of components is incorporated. The estimation of a Poisson mixture distribution is illustrated using a distribution of traffic accidents in a population and a sequence of observations of fetal movements. One way of looking at the finite mixture model is as a random sample of "states" from a mixing distribution and a sequence of conditionally independent observed variables with distributions determined by the states. In the hidden Markov model considered here, the sequence of states is modelled by a Markov chain. The use of the EM algorithm for finding local maxima of the likelihood function for the hidden Markov model is described. Problems arising in the implementation of the algorithm are discussed, including the automatic generation of starting values and a necessary adjustment to the forward-backward equations. The algorithm is applied, with Poisson component distributions, to the sequence of observations of fetal movements. The consistency of the maximum likelihood estimator for the hidden Markov model is proved. The proof requires the consideration of identifiability, ergodicity, entropy, cross-entropy, and convergence of the log-likelihood function. For instance, the conclusion of the Shannon-McMillan-Breiman theorem on entropy convergence is established for hidden Markov models. A class of doubly stochastic Poisson processes which corresponds to a continuous time version of the hidden Markov model is also considered. We discuss some preliminary work on the extension of the EM algorithm to these processes, and also the possibility of applying our method of proof of consistency of maximum likelihood estimators.
Science, Faculty of
Statistics, Department of
Graduate
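A minimal sketch of the kind of computation described in this abstract, assuming simulated counts and arbitrary starting values (not the traffic-accident or fetal-movement data): the EM algorithm for a two-component Poisson mixture, with the E-step and M-step labelled in the comments.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(5.0, 200)])   # simulated counts

pi_ = np.array([0.5, 0.5])      # starting mixing proportions
lam = np.array([0.5, 4.0])      # starting component means
for _ in range(200):
    # E-step: posterior probability that each observation came from each component.
    w = pi_ * poisson.pmf(y[:, None], lam)
    w /= w.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and component means.
    pi_ = w.mean(axis=0)
    lam = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)

print(pi_, lam)   # a local maximum of the mixture likelihood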
APA, Harvard, Vancouver, ISO, and other styles
25

Strasser, Helmut. "The covariance structure of conditional maximum likelihood estimates." Oldenbourg Verlag, 2012. http://epub.wu.ac.at/3619/1/covariance_final.pdf.

Full text
Abstract:
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml-estimates. The approximation is stated as a limit theorem when the number of item parameters goes to infinity. The results contain precise mathematical information on the order of approximation. The results enable the analysis of the covariance structure of cml-estimates when the number of items is large. Let us give a rough picture. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml-estimation when the distribution of the subject parameter is known. Apart from very small numbers n of item parameters the variances are almost not affected by the number n. The covariances are more or less negligible when the number of item parameters is large. Although this picture intuitively is not surprising it has to be established in precise mathematical terms. This has been done in the present paper. The paper is based on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials. The mathematical background consists of Edgeworth expansions for the central limit theorem. These previous results are the basis of approximations for the Fisher information matrices of cml-estimates. The main results of the present paper are concerned with the approximation of the covariance matrices. Numerical illustrations of the results and numerical experiments based on the results are presented in Strasser [6]. (author's abstract)
APA, Harvard, Vancouver, ISO, and other styles
26

Gandhi, Mital A. "Robust Kalman Filters Using Generalized Maximum Likelihood-Type Estimators." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29902.

Full text
Abstract:
Estimation methods such as the Kalman filter identify best state estimates based on certain optimality criteria using a model of the system and the observations. A common assumption underlying the estimation is that the noise is Gaussian. In practical systems though, one quite frequently encounters thick-tailed, non-Gaussian noise. Statistically, contamination by this type of noise can be seen as inducing outliers among the data and leads to significant degradation in the Kalman filter. While many nonlinear methods to cope with non-Gaussian noise exist, a filter that is robust in the presence of outliers and maintains high statistical efficiency is desired. To solve this problem, a new robust Kalman filter framework is proposed that bounds the influence of observation, innovation, and structural outliers in a discrete linear system. This filter is designed to process the observations and predictions together, making it very effective in suppressing multiple outliers. In addition, it consists of a new prewhitening method that incorporates a robust multivariate estimator of location and covariance. Furthermore, the filter provides state estimates that are robust to outliers while maintaining a high statistical efficiency at the Gaussian distribution by applying a generalized maximum likelihood-type (GM) estimator. Finally, the filter incorporates the correct error covariance matrix that is derived using the GM-estimator's influence function. This dissertation also addresses robust state estimation for systems that follow a broad class of nonlinear models that possess two or more equilibrium points. Tracking state transitions from one equilibrium point to another rapidly and accurately in such models can be a difficult task, and a computationally simple solution is desirable. To that effect, a new robust extended Kalman filter is developed that exploits observational redundancy and the nonlinear weights of the GM-estimator to track the state transitions rapidly and accurately. Through simulations, the performances of the new filters are analyzed in terms of robustness to multiple outliers and estimation capabilities for the following applications: tracking autonomous systems, enhancing actual speech from cellular phones, and tracking climate transitions. Furthermore, the filters are compared with the state-of-the-art, i.e., the H∞ filter for tracking an autonomous vehicle and the extended Kalman filter for sensing climate transitions.
Ph. D.
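The following toy Python sketch conveys the robustification idea described above for a scalar system: the measurement update of a Kalman filter down-weights large standardized innovations with a Huber-type weight, which bounds the influence of outlying observations. This is a simplified stand-in, not the GM-estimator-based filter or its covariance derivation developed in the dissertation; all model values are invented.

import numpy as np

def huber_weight(r, c=1.345):
    """Weight 1 for small standardized residuals, c/|r| for large (outlying) ones."""
    r = abs(r)
    return 1.0 if r <= c else c / r

def robust_kf_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    # Prediction step.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Robustified measurement update: an outlying innovation gets weight w < 1,
    # which is equivalent to inflating the measurement noise R by 1/w.
    S = H * P_pred * H + R
    r = (z - H * x_pred) / np.sqrt(S)        # standardized innovation
    w = huber_weight(r)
    K = P_pred * H / (H * P_pred * H + R / w)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.1, 0.2, 5.0, 0.3, 0.25]:         # 5.0 plays the role of an observation outlier
    x, P = robust_kf_step(x, P, z)
    print(round(x, 3), round(P, 3))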
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Hyunjung. "Unit Root Tests in Panel Data: Weighted Symmetric Estimation and Maximum Likelihood Estimation." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010823-091533.

Full text
Abstract:

There has been much interest in testing nonstationarity of panel data in the econometric literature. In the last decade, several tests based on the ordinary least squares and Lagrange multiplier method have been developed. In contrast to a unit root test in the univariate case, test statistics in panel data have Gaussian limiting distributions. This dissertation considers weighted symmetric estimation and maximum likelihood estimation in the autoregressive model with individual effects. The asymptotic distributions have been derived as the number of individuals and time periods become large. The power study from Monte Carlo experiments shows that the proposed test statistics perform substantially better than those in previous studies even for small samples. As an example, we consider the real Gross Domestic Product per Capita for 12 countries.

APA, Harvard, Vancouver, ISO, and other styles
28

Duong, Chi-Hong. "Approches statistiques en pharmacoépidémiologie pour la prise en compte des facteurs de confusion indirectement mesurés dans les bases de données médico-administratives : Application aux médicaments pris au cours de la grossesse." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR028.

Full text
Abstract:
Healthcare administrative databases are increasingly used in pharmacoepidemiology. However, the existence of unmeasured and uncontrolled confounders can bias analyses. In this work, we explore the value of leveraging the richness of data through large-scale selection of a large number of measured covariates correlated with unmeasured confounders to indirectly adjust for them. This concept is the cornerstone of the high-dimensional propensity score (hdPS), and we apply the same approach to G-computation (GC) and Targeted Maximum Likelihood Estimation (TMLE). Although these methods have been evaluated in some simulation studies, their performance on large real-world databases remains underexplored. This thesis aims to assess their contributions to mitigating the effect of directly or indirectly measured confounders in the French administrative health care database (SNDS) for pharmacoepidemiological studies in pregnant women. In Chapter 2, we used a set of reference drugs related to prematurity to compare the performance of the three methods. All reduced confounding bias, with GC showing the best performance. In Chapter 3, we conducted an hdPS analysis in a more complex modeling setting to investigate the controversial association between non-steroidal anti-inflammatory drugs (NSAIDs) and miscarriage. We implemented a Cox model with time-dependent variables and the "lag-time" approach to address other biases (immortal time bias and protopathic bias). We compared analyses adjusted for factors chosen according to the current literature with those chosen by the hdPS algorithm. In both types of analysis, NSAIDs were associated with an increased risk of miscarriage, and the observed differences in estimated risks could partly be explained by the difference between the causal estimands targeted by the approaches. Our work confirms the contribution of statistical methods to reducing confounding bias. It also highlights major challenges encountered during their practical application, related to the complexity of modeling and study design, as well as their computational cost.
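For readers unfamiliar with the G-computation (GC) substitution estimator mentioned in this abstract, the Python sketch below shows the point-exposure version: fit an outcome model, then average its predictions under everyone exposed versus everyone unexposed. The simulated data and the plain logistic outcome model are assumptions for illustration; the thesis applies GC, hdPS and TMLE with large-scale covariate selection on SNDS claims data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 10000
L = rng.normal(size=(n, 5))                                              # measured covariates
A = rng.binomial(1, 1 / (1 + np.exp(-L[:, 0])))                          # binary exposure
Y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.8 * A + 0.5 * L[:, 1]))))   # binary outcome

# Step 1: fit an outcome model for E[Y | A, L].
outcome_model = LogisticRegression().fit(np.column_stack([A, L]), Y)

# Step 2: predict for every subject under A = 1 and under A = 0, then average.
r1 = outcome_model.predict_proba(np.column_stack([np.ones(n), L]))[:, 1].mean()
r0 = outcome_model.predict_proba(np.column_stack([np.zeros(n), L]))[:, 1].mean()
print("G-computation risk difference:", r1 - r0)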
APA, Harvard, Vancouver, ISO, and other styles
29

Hu, Huilin. "Large sample theory for pseudo-maximum likelihood estimates in semiparametric models /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/8936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cheng, Yang. "Maximum likelihood estimation and computation in a random effect factor model." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1782.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
31

Chotikakamthorn, Nopporn. "A pre-filtering maximum likelihood approach to multiple source direction estimation." Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/8634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Jian. "Semiparametric maximum likelihood estimation of nonlinear regression models and GARCH models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0007/NQ27861.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Skeen, Matthew E. (Matthew Edward). "Maximum likelihood estimation of fractional Brownian motion and Markov noise parameters." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/42527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Thornton, K. M. "The use of sample spacings in parameter estimation with applications." Thesis, Cardiff University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Gillan, Catherine C. "Using the piecewise exponential distribution to model the length of stay in a manpower planning system." Thesis, University of Ulster, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yildirim, Sinan. "Maximum likelihood parameter estimation in time series models using sequential Monte Carlo." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/244707.

Full text
Abstract:
Time series models are used to characterise uncertainty in many real-world dynamical phenomena. A time series model typically contains a static variable, called parameter, which parametrizes the joint law of the random variables involved in the definition of the model. When a time series model is to be fitted to some sequentially observed data, it is essential to decide on the value of the parameter that describes the data best, a procedure generally called parameter estimation. This thesis comprises novel contributions to the methodology on parameter estimation in time series models. Our primary interest is online estimation, although batch estimation is also considered. The developed methods are based on batch and online versions of expectation-maximisation (EM) and gradient ascent, two widely popular algorithms for maximum likelihood estimation (MLE). In the last two decades, the range of statistical models where parameter estimation can be performed has been significantly extended with the development of Monte Carlo methods. We provide contribution to the field in a similar manner, namely by combining EM and gradient ascent algorithms with sequential Monte Carlo (SMC) techniques. The time series models we investigate are widely used in statistical and engineering applications. The original work of this thesis is organised in Chapters 4 to 7. Chapter 4 contains an online EM algorithm using SMC for MLE in changepoint models, which are widely used to model heterogeneity in sequential data. In Chapter 5, we present batch and online EM algorithms using SMC for MLE in linear Gaussian multiple target tracking models. Chapter 6 contains a novel methodology for implementing MLE in a hidden Markov model having intractable probability densities for its observations. Finally, in Chapter 7 we formulate the nonnegative matrix factorisation problem as MLE in a specific hidden Markov model and propose online EM algorithms using SMC to perform MLE.
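As a minimal sketch of the sequential Monte Carlo building block referred to throughout this abstract, the Python code below runs a bootstrap particle filter for a toy scalar state-space model and returns an estimate of the log-likelihood log p(y_1:T | theta); such estimates are what batch and online EM or gradient-ascent MLE schemes build on. The model, parameter values and data are invented for illustration.

import numpy as np

def bootstrap_pf_loglik(y, a=0.9, sv=1.0, sw=0.5, n_particles=500, seed=0):
    """Estimate log p(y_1:T) for x_t = a*x_{t-1} + N(0, sv^2), y_t = x_t + N(0, sw^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sv, n_particles)                       # initial particle cloud
    loglik = 0.0
    for yt in y:
        x = a * x + rng.normal(0.0, sv, n_particles)           # propagate through the state equation
        logw = -0.5 * ((yt - x) / sw) ** 2 - np.log(sw * np.sqrt(2 * np.pi))
        m = logw.max()                                         # log-sum-exp for numerical stability
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                         # increment: log p(y_t | y_1:t-1)
        x = rng.choice(x, size=n_particles, p=w / w.sum())     # multinomial resampling
    return loglik

y = np.cumsum(np.random.default_rng(1).normal(size=50))        # toy observation sequence
print(bootstrap_pf_loglik(y))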
APA, Harvard, Vancouver, ISO, and other styles
37

Souza, Marcio Albuquerque de. "Maximum likelihood estimation of the direction-of-arrival of PSK modulated carriers." Pontifícia Universidade Católica do Rio de Janeiro, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5718@1.

Full text
Abstract:
In mobile communication systems, phase shift keying (PSK) modulation is widely used in digital transmission schemes. Previous works have considered several maximum likelihood (ML) methods for the direction-of-arrival (DOA) estimation of generic signals reaching a phased-array of sensors. This thesis proposes a new ML DOA estimator designed to be used in PSK communication systems. Two transmission models are considered for parameter estimation: a simpler one, considering all carrier clocks time-aligned with the receiver clock, and another that considers this misalignment as a delay for each carrier. The number of parameters to be jointly estimated is significantly reduced when the expected value of the antenna array measured signals with respect to the modulation phases is evaluated. The estimator performance in several simulation scenarios is presented and compared to the performance of a classic ML estimator designed for all sorts of signal models. Cramér-Rao bounds for single carrier scenarios are also evaluated. The proposed method robustly outperforms the classic ML estimator in all simulations.
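As a point of reference for the ML DOA problem described above, the following sketch implements a generic single-source narrowband ML/beamforming DOA estimator for a uniform linear array receiving QPSK symbols; it is not the PSK-specific estimator proposed in the thesis, and the array geometry, noise level, and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
M, T = 8, 200               # number of antennas, number of snapshots
d = 0.5                     # element spacing in wavelengths
theta_true = np.deg2rad(20.0)

def steering(theta, M=M, d=d):
    # steering vector of a uniform linear array
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# QPSK symbols impinging on the array plus circular white noise
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, T)))
noise = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2) * 0.3
X = np.outer(steering(theta_true), symbols) + noise

R = X @ X.conj().T / T      # sample covariance matrix

# single-source ML criterion: maximise the power captured along the steering direction
angles = np.deg2rad(np.linspace(-90, 90, 1801))
crit = [np.real(steering(th).conj() @ R @ steering(th)) / M for th in angles]
theta_hat = angles[int(np.argmax(crit))]
print("estimated DOA (deg):", np.degrees(theta_hat))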
APA, Harvard, Vancouver, ISO, and other styles
38

Irineo, Joseph B. (Joseph Bernard) 1976. "An object-oriented, maximum-likelihood parameter estimation program for GARCH(p,q)." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Schneider, Grant W. "Maximum Likelihood Estimation for Stochastic Differential Equations Using Sequential Kriging-Based Optimization." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406912247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Steven Xiaogang. "Maximum weighted likelihood estimation." Thesis, 2001. http://hdl.handle.net/2429/13844.

Full text
Abstract:
A maximum weighted likelihood method is proposed to combine all the relevant data from different sources in order to improve the quality of statistical inference, especially when the sample sizes are moderate or small. The linear weighted likelihood estimator (WLE) is studied in depth. The weak consistency, strong consistency, and asymptotic normality of the WLE are proved. The asymptotic properties of the WLE using adaptive weights are also established. A procedure for adaptively choosing the weights by cross-validation is proposed in the thesis. The analytical forms of the "adaptive weights" are derived when the WLE is a linear combination of the MLEs. The weak consistency and asymptotic normality of the WLE with weights chosen by the cross-validation criterion are established. A connection between the WLE and information theory is identified, and a derivation of the weighted likelihood from the maximum entropy principle is presented. Approximations to the distribution of the WLE for small sample sizes, based on saddlepoint approximation, are derived. The results of an application to disease mapping are presented in the last chapter of this thesis.
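A toy illustration of the linear WLE idea: with two normal samples, the weighted likelihood maximiser is a convex combination of the two sample means (the individual MLEs), and the weight can be chosen by cross-validation. The populations, weight grid, and leave-one-out criterion below are illustrative assumptions, not the thesis's adaptive-weight formulas.

import numpy as np

rng = np.random.default_rng(0)
# a small sample from the population of interest and a larger, related sample
x1 = rng.normal(1.0, 1.0, size=15)     # target population, true mean 1.0
x2 = rng.normal(1.2, 1.0, size=200)    # related population, slightly biased

def wle(lam):
    # linear WLE for normal means: convex combination of the two sample means
    return lam * x1.mean() + (1 - lam) * x2.mean()

def cv_score(lam):
    # leave-one-out prediction error on the target sample for a given weight
    errs = []
    for i in range(len(x1)):
        x1_loo = np.delete(x1, i)
        est = lam * x1_loo.mean() + (1 - lam) * x2.mean()
        errs.append((x1[i] - est) ** 2)
    return np.mean(errs)

grid = np.linspace(0, 1, 101)
lam_hat = grid[int(np.argmin([cv_score(l) for l in grid]))]
print("weight on target sample:", lam_hat, " WLE:", wle(lam_hat))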
APA, Harvard, Vancouver, ISO, and other styles
41

"Optimal recursive maximum likelihood estimation." Sloan School of Management, Massachusetts Institute of Technology], 1987. http://hdl.handle.net/1721.1/2987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Seo, Byungtae. "Doubly-smoothed maximum likelihood estimation." 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-2129/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Buot, Max. "Genetic algorithms and maximum likelihood estimation /." 2003. http://wwwlib.umi.com/dissertations/fullcit/3108787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Richardson, Alice. "Maximum likelihood estimation of variance components." Master's thesis, 1991. http://hdl.handle.net/1885/133923.

Full text
Abstract:
In this thesis, the Maximum Likelihood and Restricted Maximum Likelihood methods of estimating variance components are investigated for the one-way model. Expressions for the estimators and their variances are obtained, and algorithms for finding the estimates are tested by means of a Monte Carlo study. The quantitative effects of non-normality on the variability of estimates are discussed. Finally, diagnostic tests for identifying outliers and non-normality are proposed, and illustrated with data concerning soybean plant growth.
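For the balanced one-way random-effects model studied in this thesis, the marginal log-likelihood has a simple closed form, and the ML estimates of the mean and the two variance components can be found numerically. The sketch below (with assumed simulation settings) illustrates this; it is not the author's algorithm or code.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
a_groups, n = 10, 5                      # 10 groups, 5 observations each (balanced)
sigma_a, sigma_e, mu = 2.0, 1.0, 3.0
effects = rng.normal(0, sigma_a, a_groups)
y = mu + effects[:, None] + rng.normal(0, sigma_e, (a_groups, n))

def negloglik(params):
    # marginal log-likelihood of y_ij = mu + a_i + e_ij with a_i ~ N(0, sa2), e_ij ~ N(0, se2)
    mu_, log_sa2, log_se2 = params
    sa2, se2 = np.exp(log_sa2), np.exp(log_se2)
    group_means = y.mean(axis=1)
    ss_within = ((y - group_means[:, None]) ** 2).sum()
    tau2 = se2 + n * sa2                 # n times the variance of each group mean
    ll = -0.5 * (a_groups * (n - 1) * np.log(2 * np.pi * se2) + ss_within / se2)
    ll += -0.5 * (a_groups * np.log(2 * np.pi * tau2) + n * ((group_means - mu_) ** 2).sum() / tau2)
    return -ll

res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
mu_hat, sa2_hat, se2_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print("mu:", mu_hat, " sigma_a^2:", sa2_hat, " sigma_e^2:", se2_hat)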
APA, Harvard, Vancouver, ISO, and other styles
45

Vicente, David José Marques. "Distributed Algorithms for Target Localization in Wireless Sensor Networks Using Hybrid Measurements." Master's thesis, 2017. http://hdl.handle.net/10362/27875.

Full text
Abstract:
This dissertation addresses the target localization problem in wireless sensor networks (WSNs). WSNs are now a widely applicable technology, with numerous practical uses and the potential to improve people's lives. A feature required by many functions of a WSN is the ability to indicate where the data reported by each sensor was measured. For this reason, locating each sensor node in a WSN is an essential issue to consider. In this dissertation, a performance analysis of two recently proposed distributed localization algorithms for cooperative 3-D WSNs is presented. The tested algorithms rely on distance and angle measurements obtained from received signal strength (RSS) and angle-of-arrival (AoA) information, respectively. The measurements are then used to derive a convex estimator, based on second-order cone programming (SOCP) relaxation techniques, and a non-convex one that can be formulated as a generalized trust region sub-problem (GTRS). Both estimators have shown excellent performance in a static network scenario, giving accurate location estimates and converging in a few iterations. The results obtained in this dissertation confirm the novel algorithms' performance and accuracy. Additionally, a modification to the algorithms is proposed, allowing the study of a more realistic and challenging scenario in which different probabilities of communication failure between neighbor nodes in the broadcast phase are considered. Computational simulations performed in the scope of this dissertation show that the algorithms' performance holds under high probabilities of communication failure and that convergence is still achieved in a reasonable number of iterations.
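The SOCP and GTRS estimators analysed in the dissertation are more involved, but the following simplified 2-D, single-target sketch shows how RSS and AoA measurements can be fused into one position estimate via nonlinear least squares; the path-loss parameters, anchor layout, and noise levels are illustrative assumptions.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # known positions
target = np.array([3.0, 7.0])                                             # unknown in practice

# log-distance path-loss model for RSS, and noisy bearings for AoA (assumed parameters)
P0, gamma = -40.0, 3.0
d_true = np.linalg.norm(anchors - target, axis=1)
rss = P0 - 10 * gamma * np.log10(d_true) + rng.normal(0, 1.0, len(anchors))
aoa = np.arctan2(target[1] - anchors[:, 1], target[0] - anchors[:, 0]) + rng.normal(0, np.deg2rad(3), len(anchors))

d_hat = 10 ** ((P0 - rss) / (10 * gamma))   # ranges recovered from the RSS measurements

def residuals(x):
    d = np.linalg.norm(anchors - x, axis=1)
    ang = np.arctan2(x[1] - anchors[:, 1], x[0] - anchors[:, 0])
    ang_err = np.angle(np.exp(1j * (ang - aoa)))   # wrap angle errors to (-pi, pi]
    return np.concatenate([d - d_hat, ang_err])

x0 = anchors.mean(axis=0)                   # initialise at the anchors' centroid
estimate = least_squares(residuals, x0).x
print("estimated target position:", estimate)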
APA, Harvard, Vancouver, ISO, and other styles
46

Mai, Anh Tien. "Revisiting optimization algorithms for maximum likelihood estimation." Thèse, 2012. http://hdl.handle.net/1866/9828.

Full text
Abstract:
Maximum likelihood is one of the most popular techniques for estimating the parameters of probability distributions, since, under mild conditions, the resulting estimators are consistent and asymptotically efficient. Maximum likelihood problems can be handled as non-linear, possibly non-convex, programming problems, for which two broad classes of solution methods are trust-region techniques and line-search methods. Moreover, under some assumptions, it is possible to exploit the structure of these problems to speed up convergence. In this work, we revisit various non-linear programming techniques, both standard and recently developed, in the particular context of maximum likelihood estimation. We also develop new algorithms to solve this estimation problem, reconsidering different Hessian approximation techniques and proposing new methods to compute steps, in particular in the context of line-search algorithms. These include algorithms that allow us to switch between Hessian approximations and to adapt the step length along a fixed search direction. Finally, we assess the numerical efficiency of the proposed methods for the estimation of discrete choice models, in particular mixed logit models.
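As a small illustration of the line-search and Hessian-approximation machinery discussed above, the sketch below computes the MLE of a plain binary logit model with BFGS, which maintains a Hessian approximation and performs a line search at each iteration; it is a toy example on assumed simulated data, not the mixed logit models or the new algorithms of the thesis.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([0.5, -1.0, 2.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def negloglik(beta):
    # negative log-likelihood of the binary logit model: -(sum y*eta - log(1 + exp(eta)))
    eta = X @ beta
    return -(y @ eta - np.logaddexp(0.0, eta).sum())

def grad(beta):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    return -X.T @ (y - mu)

# BFGS: quasi-Newton Hessian approximation plus a line search at every step
res = minimize(negloglik, np.zeros(p), jac=grad, method="BFGS")
print("MLE:", res.x)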
APA, Harvard, Vancouver, ISO, and other styles
47

Ehlers, Rene. "Maximum likelihood estimation procedures for categorical data." Diss., 2003. http://hdl.handle.net/2263/26533.

Full text
Abstract:
Please read the abstract in the 00front section of this document.
Dissertation (MSc (Mathematical Statistics))--University of Pretoria, 2005.
Mathematics and Applied Mathematics
APA, Harvard, Vancouver, ISO, and other styles
48

LIU, RONG-XUAN, and 劉鎔瑄. "Adversarial Image Description without Maximum Likelihood Estimation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/c46qaj.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
Academic year 106 (ROC calendar)
A fully visible belief network trained with maximum likelihood is a typical strategy for learning a language model. However, such an approach suffers from exposure bias, owing to the different behaviors at the training and inference stages: to predict the next symbol, the model is given the preceding symbols, which are available during training but not at inference time, where errors accumulate and predictions worsen as the sentence grows longer. In contrast, we train a neural model for image description from scratch in an adversarial fashion. We do not adopt maximum likelihood in any form, thereby addressing exposure bias. The generative model is trained to minimize the earth mover's distance, making the generator's distribution indistinguishable from the empirical distribution. We also employ the Gumbel-max trick, with a continuous approximation of the one-hot word encoding, to overcome the non-differentiable sampling problem. Training both the discriminator and the generator then requires only generic end-to-end back-propagation and gradient-based optimization methods. Experimental results show that our adversarial approach improves performance on several evaluation metrics of the image captioning task.
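A minimal sketch of the continuous Gumbel relaxation alluded to above (often called Gumbel-softmax): Gumbel noise is added to the word logits and a temperature-controlled softmax produces an approximately one-hot, differentiable sample. The vocabulary, logits, and temperature are illustrative assumptions; in the thesis's setting the logits would come from the caption generator so that gradients can flow through the sampling step.

import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    # draw an approximately one-hot sample from a categorical distribution
    # by adding standard Gumbel noise and applying a softmax with temperature tau
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

vocab_logits = np.log(np.array([0.1, 0.6, 0.3]))   # toy vocabulary of three words
sample = gumbel_softmax(vocab_logits, tau=0.5)
print(sample, "sampled word index:", int(np.argmax(sample)))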
APA, Harvard, Vancouver, ISO, and other styles
49

Choi, Ji Eun. "Stochastic Volatility Models and Simulated Maximum Likelihood Estimation." Thesis, 2011. http://hdl.handle.net/10012/6045.

Full text
Abstract:
Financial time series studies indicate that the lognormal assumption for the return of an underlying security is often violated in practice. This is due to the presence of time-varying volatility in the return series. The most common departures are a fat left tail of the return distribution, volatility clustering or persistence, and asymmetry of the volatility. To account for these characteristics of time-varying volatility, many volatility models have been proposed and studied in the financial time series literature. Two main conditional-variance model specifications are the autoregressive conditional heteroscedasticity (ARCH) and the stochastic volatility (SV) models. The SV model, proposed by Taylor (1986), is a useful alternative to the ARCH family (Engle (1982)). It incorporates time-dependence of the volatility through a latent process, an autoregressive model of order 1 (AR(1)), and successfully accounts for the stylized facts of the return series implied by the characteristics of time-varying volatility. In this thesis, we review both ARCH and SV models but focus on the SV model and its variations. We consider two modified SV models. One is an autoregressive process with stochastic volatility errors (AR-SV) and the other is the Markov regime-switching stochastic volatility (MSSV) model. The AR-SV model consists of two AR processes: the conditional mean process is an AR(p) model, and the conditional variance process is an AR(1) model. One notable advantage of the AR-SV model is that it better captures volatility persistence by considering the AR structure in the conditional mean process. The MSSV model consists of the SV model and a discrete Markov process. In this model, the volatility can switch from a low level to a high level at random points in time, and this feature better captures the volatility movement. We study the moment properties and the likelihood functions associated with these models. In spite of the simple structure of the SV models, it is not easy to estimate parameters by conventional estimation methods such as maximum likelihood estimation (MLE) or Bayesian methods, because of the presence of the latent log-variance process. Of the various estimation methods proposed in the SV model literature, we consider the simulated maximum likelihood (SML) method with the efficient importance sampling (EIS) technique, one of the most efficient estimation methods for SV models. In particular, the EIS technique is applied within SML to reduce the Monte Carlo sampling error. It increases the accuracy of the estimates by determining an importance function based on the conditional density of the latent log-variance at time t given the latent log-variance and the return at time t-1. Initially, we perform an empirical study to compare the estimation of the SV model using the SML method with EIS and the Markov chain Monte Carlo (MCMC) method with Gibbs sampling, and we conclude that SML has a slight edge over MCMC. We then introduce the SML approach for the AR-SV models and study the performance of the estimation method through simulation studies and real-data analysis. In the analysis, we use the AIC and BIC criteria to determine the order of the AR process and perform model diagnostics for goodness of fit. In addition, we introduce the MSSV models and extend the SML approach with EIS to estimate this new model. Simulation studies and empirical studies with several return series indicate that this model is reasonable when there is a possibility of volatility switching at random time points. Based on our analysis, the modified SV, AR-SV, and MSSV models capture the stylized facts of financial return series reasonably well, and the SML estimation method with the EIS technique works very well for the models and cases considered.
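To make the simulated-likelihood idea concrete, here is a minimal sketch that simulates a basic SV model and estimates its log-likelihood by averaging the conditional likelihood of the returns over latent log-variance paths drawn from their prior, i.e. the "natural" importance sampler; EIS replaces this prior with a data-tilted importance density to reduce the otherwise large Monte Carlo variance. The model settings and names are illustrative assumptions, not the thesis's implementation.

import numpy as np

rng = np.random.default_rng(0)

def simulate_sv(mu, phi, sigma_eta, T):
    # h_t = mu + phi*(h_{t-1} - mu) + eta_t,   y_t = exp(h_t/2) * eps_t
    h = np.empty(T)
    h[0] = mu + rng.normal(0, sigma_eta / np.sqrt(1 - phi ** 2))
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + rng.normal(0, sigma_eta)
    return np.exp(h / 2) * rng.standard_normal(T)

def naive_sml_loglik(y, mu, phi, sigma_eta, S=5000):
    # simulate S latent log-variance paths from their prior and average the
    # conditional likelihood of y along each path (a high-variance importance sampler;
    # EIS would tilt these draws toward the data instead)
    h = mu + rng.normal(0, sigma_eta / np.sqrt(1 - phi ** 2), S)
    logw = np.zeros(S)
    for t, obs in enumerate(y):
        if t > 0:
            h = mu + phi * (h - mu) + rng.normal(0, sigma_eta, S)
        logw += -0.5 * (np.log(2 * np.pi) + h + obs ** 2 * np.exp(-h))
    m = logw.max()
    return m + np.log(np.exp(logw - m).mean())

y = simulate_sv(mu=-1.0, phi=0.95, sigma_eta=0.2, T=100)
print("simulated log-likelihood at the true parameters:", naive_sml_loglik(y, -1.0, 0.95, 0.2))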
APA, Harvard, Vancouver, ISO, and other styles
50

Tsai, Wen-Chi, and 蔡紋琦. "Maximum Likelihood Estimation of a Monotone Regression Function." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/26048287023325781218.

Full text
APA, Harvard, Vancouver, ISO, and other styles