Dissertations / Theses on the topic 'Maximum likelihood'

Consult the top 50 dissertations / theses for your research on the topic 'Maximum likelihood.'

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Ruprecht, Jürg. "Maximum likelihood estimation of multipath channels /." [S.l.] : [s.n.], 1989. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=8789.

Full text
2

Horbelt, Werner. "Maximum likelihood estimation in dynamical systems." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963810812.

Full text
3

Sabbagh, Yvonne. "Maximum Likelihood Estimation of Hammerstein Models." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2061.

Full text
Abstract:

In this Master's thesis, Maximum Likelihood-based parametric identification methods for discrete-time SISO Hammerstein models from perturbed observations on both input and output are investigated.

Hammerstein models, consisting of a static nonlinear block followed by a dynamic linear one, are widely applied to modeling nonlinear dynamic systems, i.e., dynamic systems having a nonlinearity at the input.

Two identification methods are proposed. The first one assumes a Hammerstein model where the input signal is noise-free and the output signal is perturbed with colored noise. The second assumes, however, white noises added to the input and output of the nonlinearity and to the output of the whole considered Hammerstein model. Both methods operate directly in the time domain and their properties are illustrated by a number of simulated examples. It should be observed that attention is focused on derivation, numerical calculation, and simulation corresponding to the first identification method mentioned above.
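As a rough illustration of the first scenario (noise-free input, noisy output), here is a minimal sketch in which maximum likelihood under white (rather than the thesis's colored) Gaussian output noise reduces to nonlinear least squares over the joint parameters of the two blocks. The cubic nonlinearity, FIR linear block, and noise level are illustrative assumptions, not the thesis's setup.

```python
# Hammerstein identification sketch: u -> f(u) -> FIR -> y, white Gaussian noise.
# Under Gaussian noise the ML estimate minimizes the sum of squared residuals.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
f = lambda u, a: a[0] * u + a[1] * u**3            # assumed static nonlinearity
true_a, true_b = np.array([1.0, 0.5]), np.array([0.8, -0.3, 0.1])
y = np.convolve(f(u, true_a), true_b)[: len(u)] + 0.05 * rng.normal(size=len(u))

def residuals(theta):
    a, b = theta[:2], theta[2:]
    return np.convolve(f(u, a), b)[: len(u)] - y

theta0 = np.array([0.5, 0.0, 1.0, 0.0, 0.0])
fit = least_squares(residuals, theta0)
# Note the usual gain ambiguity: (k*a, b/k) gives the same input-output map.
print("estimated block parameters:", fit.x)
```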

4

Cho, Anna. "Constructing Phylogenetic Trees Using Maximum Likelihood." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/scripps_theses/46.

Full text
Abstract:
Maximum likelihood methods are used to estimate the phylogenetic trees for a set of species. The probabilities of DNA base substitutions are modeled by continuous-time Markov chains. We use these probabilities to estimate which DNA bases would produce the data that we observe. The topology of the tree is also determined using base substitution probabilities and conditional likelihoods. Felsenstein [2] introduced this method of finding an estimate for the maximum likelihood phylogenetic tree. We will explore this method in detail in this paper.
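To make the pruning idea concrete, the following is a minimal sketch of Felsenstein-style likelihood computation for a single site on a two-leaf tree under the Jukes-Cantor substitution model; the tree shape, branch lengths, and observed bases are invented for illustration, not taken from the thesis.

```python
# One-site likelihood on a two-leaf tree, Jukes-Cantor model, root marginalized.
import numpy as np

def jc_prob(t):
    """Jukes-Cantor transition matrix P(t) for a branch of length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def site_likelihood(leaf1, leaf2, t1, t2):
    P1, P2 = jc_prob(t1), jc_prob(t2)
    # Conditional likelihood at the root for each possible root state,
    # then sum against the uniform stationary distribution.
    L_root = P1[:, leaf1] * P2[:, leaf2]
    return np.sum(0.25 * L_root)

# Bases coded A=0, C=1, G=2, T=3; e.g. one site observing A and G at the leaves.
print(site_likelihood(0, 2, t1=0.1, t2=0.2))
```

A full implementation recurses this root computation over internal nodes and multiplies (or sums logs) across sites.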
5

Li, Ming De 1937. "Maximum likelihood restoration of binary objects." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276574.

Full text
Abstract:
A new approach, based on maximum likelihood, is developed for binary object image restoration. This considers the image formation process as a stochastic process, with noise as a random variable. The likelihood function is constructed for the cases of Gaussian and Poisson noise. An uphill climb method is used to find the object, defined by its "grain" positions, through maximizing the likelihood function for grain positions. In addition, some a priori information regarding object size and contour of shapes is used. This is summarized as a "neighbouring point" rule. Some examples of computer generated images with different signal-to-noise ratios are used to show the ability of the algorithm. These cases include both Gaussian and Poisson noise. For noiseless and low noise Gaussian cases, a modified uphill climb method is used. The results show that binary objects are fairly well restored for all noise cases considered.
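A toy version of the uphill climb for the Gaussian case might look as follows: flip one grain (pixel) at a time and keep the flip only when it increases the log-likelihood. The blur kernel, image size, and noise level are illustrative assumptions, and the thesis's "neighbouring point" rule is omitted.

```python
# Greedy uphill-climb restoration of a binary object from a blurred, noisy image.
import numpy as np

rng = np.random.default_rng(2)
truth = (rng.random((16, 16)) < 0.2).astype(float)
kernel = np.ones((3, 3)) / 9.0

def forward(obj):
    # toy imaging model: circular convolution with the blur kernel via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(kernel, obj.shape)))

data = forward(truth) + 0.05 * rng.normal(size=truth.shape)

def loglik(obj):
    return -np.sum((data - forward(obj)) ** 2)   # Gaussian case, up to constants

est = np.zeros_like(truth)
for _ in range(3):                                # a few full sweeps
    for i in range(est.shape[0]):
        for j in range(est.shape[1]):
            trial = est.copy()
            trial[i, j] = 1.0 - trial[i, j]
            if loglik(trial) > loglik(est):       # accept only uphill moves
                est = trial
print("pixels differing from truth:", int(np.sum(est != truth)))
```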
6

Norman, David. "En simuleringsstudie för test baserade på Maximum Likelihood- och Maximum Spacingskattningar." Thesis, Umeå universitet, Statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105333.

Full text
Abstract:
If we have a sample X1, X2, ..., Xn of independent, identically distributed observations from a continuous distribution with distribution function Fθ, where θ is an unknown parameter, the likelihood ratio test (LT) is often a good method for testing the null hypothesis H0: θ = θ0 against HA: θ ≠ θ0. For some types of distributions, however, the likelihood function has no upper bound (Ranneby, 1984; Ekström, 2013), which means the test cannot be applied in those cases. The LT is compared here with two newer tests, the Ekström test (ET) (Ekström, 2013) and the fidelity test (FT), a modified ET, both of which are known to work even in such cases. Under certain general conditions the three tests are asymptotically equivalent, but whether any of them is better than the others for smaller samples has not previously been investigated to any great extent. In this simulation study we examine this question for a range of distributions, and the results show that, with respect to the power of the tests, the LT is generally best under these conditions. The difference between the LT and the FT is small, however.
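As a flavour of the kind of simulation involved, the sketch below estimates the power of the likelihood ratio test in an exponential model; the distribution, sample size, and parameter values are illustrative choices, not those of the study.

```python
# Monte Carlo power of the LRT for H0: theta = theta0 in an Exp(scale=theta) model.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
theta0, theta_true, n, reps = 1.0, 1.5, 30, 2000

rejections = 0
for _ in range(reps):
    x = rng.exponential(scale=theta_true, size=n)
    theta_hat = x.mean()                        # MLE of the scale parameter
    ll = lambda th: -n * np.log(th) - x.sum() / th   # exponential log-likelihood
    lrt = 2 * (ll(theta_hat) - ll(theta0))      # likelihood ratio statistic
    rejections += lrt > chi2.ppf(0.95, df=1)    # asymptotic chi-square cutoff
print("estimated power of the LT:", rejections / reps)
```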
7

Patel, Rahul. "Maximum Likelihood – Expectation Maximum Reconstruction with Limited Dataset for Emission Tomography." Akron, OH : University of Akron, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1175781554.

Full text
Abstract:
Thesis (M.S.)--University of Akron, Dept. of Biomedical Engineering, 2007.
"May, 2007." Title from electronic thesis title page (viewed 04/26/2009) Advisor, Dale Mugler; Co-Advisor, Anthony Passalaqua; Committee member, Daniel Sheffer; Department Chair, Daniel Sheffer; Dean of the College, George K. Haritos; Dean of the Graduate School, George R. Newkome. Includes bibliographical references.
8

Andrews, Darren Thomas. "Maximum likelihood multivariate methods in analytical chemistry." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq24729.pdf.

Full text
9

Pannu, Navraj Singh. "Improved crystal structure refinement through maximum likelihood." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq28974.pdf.

Full text
10

Leeuw, Johannes Leonardus van der. "Maximum likelihood estimation of exact ARMA models /." Tilburg : Tilburg University Press, 1997. http://www.gbv.de/dms/goettingen/265169976.pdf.

Full text
11

Schnitzer, Mireille. "Targeted maximum likelihood estimation for longitudinal data." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114242.

Full text
Abstract:
Semiparametric efficient methods in causal inference have been developed to robustly and efficiently estimate causal parameters. As in general causal estimation, the methods rely on a set of mathematical assumptions that translate into requirements of causal knowledge and confounder identification. Targeted maximum likelihood estimation (TMLE) methodology has been developed as a potential improvement on efficient estimating equations, in that it shares the qualities of double robustness (unbiasedness under partial misspecification) and semiparametric efficiency, but can be constructed to provide boundedness of parameter estimates, robustness to data sparsity, and a unique estimate. This thesis, composed primarily of three manuscripts, presents new research on the analysis of longitudinal and survival data with time-dependent confounders using TMLE. The first manuscript describes the construction of a two time-point TMLE using a generalized exponential distribution family member as the loss function for the outcome model. It demonstrates the robustness of the continuous version of this TMLE algorithm in a simulation study, and uses a modified version of the method in a simplified analysis of the PROmotion of Breastfeeding Intervention Trial (PROBIT), where evidence for a protective causal effect of breastfeeding on gastrointestinal infection is obtained. The second manuscript presents a description of several substitution estimators for longitudinal data, a specialized implementation of a longitudinal TMLE method, and a case study using the full PROBIT dataset. The K time-point sequential TMLE algorithm employed (theory previously developed), implemented nonparametrically using Super Learner, differs fundamentally from the strategy used in the first manuscript, and offers some benefits in computation and ease of implementation. The analysis compares different durations of breastfeeding and the related exposure-specific (and censoring-free) mean counts of gastrointestinal infections over the first year of an infant's life, and concludes that a protective effect is present. Simulated data mirroring the PROBIT dataset were generated, and the performance of TMLE was again assessed. The third manuscript develops a methodology to estimate marginal structural models for survival data. Utilizing the sequential longitudinal TMLE algorithm to estimate the exposure-specific survival curves for all exposure patterns, it demonstrates a way to combine inference in order to model the outcome using a linear specification. This article presents the theoretical construction of two different types of marginal structural models (modeling the log-odds survival and the hazard) and presents a simulation study demonstrating the unbiasedness of the technique. It then describes an analysis of the Canadian Co-infection Cohort study undertaken with one of the TMLE methods to fit survival curves and a model for the hazard function of development of end-stage liver disease (ESLD) conditional on time and clearance of the Hepatitis C virus.
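Since TMLE is the thesis's central tool, a minimal single-time-point sketch may help orient the reader: fit an initial outcome regression and a propensity model, fluctuate the outcome fit along the "clever covariate" with a one-parameter logistic regression, and take the substitution estimate. This is a generic textbook-style illustration on simulated data, assuming binary treatment A, binary outcome Y, and one confounder W; it is not the thesis's longitudinal estimator.

```python
# Minimal one-time-point TMLE sketch for the average treatment effect (ATE).
import numpy as np
import statsmodels.api as sm

expit = lambda x: 1 / (1 + np.exp(-x))
logit = lambda p: np.log(p / (1 - p))

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=n)                                  # confounder
A = rng.binomial(1, expit(0.5 * W))                     # treatment
Y = rng.binomial(1, expit(A + 0.8 * W - 0.5))           # outcome

# Step 1: initial fits for Q(A, W) = E[Y | A, W] and g(W) = P(A = 1 | W).
Xq = np.column_stack([np.ones(n), A, W])
Q_fit = sm.GLM(Y, Xq, family=sm.families.Binomial()).fit()
g1 = sm.GLM(A, np.column_stack([np.ones(n), W]),
            family=sm.families.Binomial()).fit().predict()
Q_A = Q_fit.predict(Xq)
Q_1 = Q_fit.predict(np.column_stack([np.ones(n), np.ones(n), W]))
Q_0 = Q_fit.predict(np.column_stack([np.ones(n), np.zeros(n), W]))

# Step 2: targeting - logistic fluctuation along the clever covariate H.
H = A / g1 - (1 - A) / (1 - g1)
eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
             offset=logit(Q_A)).fit().params[0]
Q_1_star = expit(logit(Q_1) + eps / g1)
Q_0_star = expit(logit(Q_0) - eps / (1 - g1))

# Step 3: substitution (plug-in) estimate of the ATE.
print("TMLE estimate of the ATE:", np.mean(Q_1_star - Q_0_star))
```

The double robustness the abstract mentions shows up here: the estimate remains consistent if either the outcome regression or the propensity model is correctly specified.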
12

Emhemmed, Yousef Mohammed. "Maximum likelihood analysis of neuronal spike trains." Thesis, University of Glasgow, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326019.

Full text
13

Srebro, Nathan 1974. "Maximum likelihood Markov networks : an algorithmic approach." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86593.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2001.
Includes bibliographical references (p. 110-112).
by Nathan Srebro.
S.M.
14

Rice, Michael, and Erik Perrins. "Maximum Likelihood Detection from Multiple Bit Sources." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596443.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
This paper deals with the problem of producing the best bit stream from a number of input bit streams with varying degrees of reliability. The best source selector and smart source selector are recast as detectors, and the maximum likelihood bit detector (MLBD) is derived from basic principles under the assumption that each bit value is accompanied by a quality measure proportional to its probability of error. We show that both the majority voter and the best source selector are special cases of the MLBD and define the conditions under which these special cases occur. We give a mathematical proof that the MLBD is the same as or better than the best source selector.
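The detector's structure is easy to state in code: each source's bit votes with weight log((1 - p_i)/p_i), so equal qualities recover the majority voter and a single dominant weight recovers the best source selector, as the abstract describes. The sketch below is a minimal reading of that rule with made-up numbers.

```python
# ML bit detection from multiple sources with per-bit error probabilities.
import numpy as np

def ml_bit_detector(bits, p_err):
    """bits[i] in {0, 1} from source i; p_err[i] its error probability."""
    w = np.log((1 - np.asarray(p_err)) / np.asarray(p_err))  # reliability weights
    score = np.sum(w * (2 * np.asarray(bits) - 1))           # +w votes 1, -w votes 0
    return 1 if score > 0 else 0

# Three streams disagree; the two moderately reliable sources outvote the best one.
print(ml_bit_detector([1, 1, 0], [0.1, 0.2, 0.05]))
```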
15

Krishnamurthi, Sumitha. "Performance of Recursive Maximum Likelihood Turbo Decoding." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1070481352.

Full text
16

Ehlers, Rene. "Maximum likelihood estimation procedures for categorical data." Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-07222005-124541.

Full text
17

Zaeva, Maria. "Maximum likelihood estimators for circular structural model." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009m/zaeva.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Alabama at Birmingham, 2009.
Title from PDF title page (viewed Jan. 21, 2010). Additional advisors: Yulia Karpeshina, Ian Knowles, Rudi Weikard. Includes bibliographical references (p. 19).
18

Lai, Yiu Pong. "Maximum likelihood normalization for robust speech recognition /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20LAI.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 98-103). Also available in electronic version. Access restricted to campus users.
19

Johannes, Jan. "Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert." Doctoral thesis, [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=96644258X.

Full text
20

Jaldén, Joakim. "Maximum likelihood detection for the linear MIMO channel." Licentiate thesis, KTH, Signals, Sensors and Systems, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-498.

Full text
Abstract:

In this thesis the problem of maximum likelihood (ML) detection for the linear multiple-input multiple-output (MIMO) channel is considered. The thesis investigates two algorithms previously proposed in the literature for implementing the ML detector, namely semidefinite relaxation and sphere decoding.

The first algorithm, semidefinite relaxation, is a suboptimal implementation of the ML detector, meaning that it is not guaranteed to solve the maximum likelihood detection problem. Still, numerical evidence suggests that the performance of the semidefinite relaxation detector is close to that of the true ML detector. A contribution made in this thesis is to derive conditions under which the semidefinite relaxation estimate can be guaranteed to coincide with the ML estimate.

The second algorithm, the sphere decoder, can be used to solve the ML detection problem exactly. Numerical evidence has previously shown that the complexity of the sphere decoder is remarkably low for problems of moderate size. This has led to the widespread belief that the sphere decoder is of polynomial expected complexity. Unfortunately, this is not true. Instead, in most scenarios encountered in digital communications, the expected complexity of the algorithm is exponential in the number of symbols jointly detected. However, for high signal-to-noise ratio the rate of exponential increase is small. In this thesis it is proved that for a large class of detection problems the expected complexity is lower bounded by an exponential function. Also, for the special case of an i.i.d. Rayleigh fading channel, an asymptotic analysis is presented which enables the computation of the expected complexity up to the linear term in the exponent.
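For context, the sketch below spells out the ML detection problem itself, solved by brute-force enumeration for a small BPSK system; its 2^m cost is exactly what the sphere decoder aims to beat on average. Dimensions and noise level are illustrative.

```python
# Brute-force ML detection for y = Hs + n with BPSK symbols s in {-1, +1}^m.
import itertools
import numpy as np

rng = np.random.default_rng(4)
m = 4                                    # number of jointly detected symbols
H = rng.normal(size=(m, m))              # known channel matrix
s_true = rng.choice([-1.0, 1.0], size=m)
y = H @ s_true + 0.1 * rng.normal(size=m)

# The ML detector minimizes ||y - Hs||^2 over all 2^m symbol vectors.
best = min((np.sum((y - H @ np.array(s)) ** 2), s)
           for s in itertools.product([-1.0, 1.0], repeat=m))
print("ML estimate:", best[1], "true:", tuple(s_true))
```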

21

Zou, Yiqun. "Attainment of Global Convergence in Maximum Likelihood Estimation." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511845.

Full text
22

Jaldén, Joakim. "Maximum likelihood detection for the linear MIMO channel /." Stockholm, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-498.

Full text
23

Strasser, Helmut. "The covariance structure of conditional maximum likelihood estimates." Oldenbourg Verlag, 2012. http://epub.wu.ac.at/3619/1/covariance_final.pdf.

Full text
Abstract:
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml-estimates. The approximation is stated as a limit theorem when the number of item parameters goes to infinity. The results contain precise mathematical information on the order of approximation. The results enable the analysis of the covariance structure of cml-estimates when the number of items is large. Let us give a rough picture. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml-estimation when the distribution of the subject parameter is known. Apart from very small numbers n of item parameters the variances are almost not affected by the number n. The covariances are more or less negligible when the number of item parameters is large. Although this picture intuitively is not surprising it has to be established in precise mathematical terms. This has been done in the present paper. The paper is based on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials. The mathematical background is Edgeworth expansions for the central limit theorem. These previous results are the basis of approximations for the Fisher information matrices of cml-estimates. The main results of the present paper are concerned with the approximation of the covariance matrices. Numerical illustrations of the results and numerical experiments based on the results are presented in Strasser, [6]. (author's abstract)
24

Mariano, Machado Robson José. "Penalised maximum likelihood estimation for multi-state models." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10060352/.

Full text
Abstract:
Multi-state models can be used to analyse processes where change of status over time is of interest. In medical research, processes are commonly defined by a set of living states and a dead state. Transition times between living states are often interval censored. In this case, models are usually formulated in a Markov processes framework. The likelihood function is then constructed using transition probabilities. Models are specified using proportional hazards for the effect of covariates on transition intensities. Time-dependency is usually defined by parametric models, which can represent a strong model assumption. Semiparametric hazards specification with splines is a more flexible method for modelling time-dependency in multi-state models. Penalised maximum likelihood is used to estimate these models. Selecting the optimal amount of smoothing is challenging as the problem involves multiple penalties. This thesis aims to develop methods to estimate multi-state models with splines for interval-censored data. We propose a penalised likelihood method to estimate multi-state models that allow for parametric and semiparametric hazards specifications. The estimation is based on a scoring algorithm, and a grid search method to estimate the smoothing parameters. This method is shown using an application to ageing research. Furthermore, we extend the proposed method by developing a computationally more efficient method to estimate multi-state models with splines. For this extension, the estimation is based on a scoring algorithm, and an automatic smoothing parameters selection. The extended method is illustrated with two data analyses and a simulation study.
25

Weng, Yu. "Maximum Likelihood Estimation of Logistic Sinusoidal Regression Models." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc407796/.

Full text
Abstract:
We consider the problem of maximum likelihood estimation of logistic sinusoidal regression models and develop some asymptotic theory including the consistency and joint rates of convergence for the maximum likelihood estimators. The key techniques build upon a synthesis of the results of Walker and Song and Li for the widely studied sinusoidal regression model and on making a connection to a result of Radchenko. Monte Carlo simulations are also presented to demonstrate the finite-sample performance of the estimators.
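A sketch of what ML estimation looks like for such a model, taking P(Y=1 | t) = expit(a + b cos(ωt) + c sin(ωt)) as a plausible reading of the model class (not necessarily the thesis's exact parameterization); the data and starting values are illustrative.

```python
# MLE for a logistic sinusoidal regression by direct likelihood maximization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.arange(200.0)
expit = lambda x: 1 / (1 + np.exp(-x))
a0, b0, c0, w0 = -0.3, 1.0, 0.5, 0.3           # true parameters (illustrative)
y = rng.binomial(1, expit(a0 + b0 * np.cos(w0 * t) + c0 * np.sin(w0 * t)))

def negloglik(theta):
    a, b, c, w = theta
    eta = a + b * np.cos(w * t) + c * np.sin(w * t)
    return -np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli log-likelihood

# The likelihood is multimodal in the frequency w, so start near the truth.
fit = minimize(negloglik, x0=[0.0, 0.5, 0.5, 0.29], method="Nelder-Mead")
print("MLE (a, b, c, w):", fit.x)
```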
26

Xu, Kevin. "Maximum likelihood time-domain beamforming using simulated annealing." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80046.

Full text
Abstract:
Thesis (S.M.)--Joint Program in Oceanographic Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 1999.
Bibliography: p. 111-112.
by Kevin Xu.
S.M.
27

DeGroot, Don Johan. "Maximum likelihood estimation of spatially correlated soil properties." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15282.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1985.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 109-110.
by Don Johan DeGroot.
M.S.
28

Richmond, Christ D. (Christ David). "Statistical analysis of adaptive maximum-likelihood signal estimator." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36952.

Full text
Abstract:
Thesis (Elec. E.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 56-57).
by Christ D. Richmond.
Elec.E.
29

John, Andrea. "Maximum likelihood estimation in mis-specified reliability distributions." Thesis, Swansea University, 2003. https://cronfa.swan.ac.uk/Record/cronfa42494.

Full text
Abstract:
This thesis examines some effects of fitting the wrong distribution to reliability data. The parametric analysis of any data usually assumes that the form of the underlying distribution is known. In practice, however, the choice of distribution is subject to error, so the analysis could involve estimating parameters from a mis-specified model. In this thesis, we consider theoretical and practical aspects of maximum likelihood estimation under such mis-specification. Due to its popularity and wide use, we take the Weibull distribution to be the mis-specified model, and look at the effects of fitting this distribution to data from underlying Burr, Gamma and Lognormal models. We use entropy to obtain the theoretical counterparts to the Weibull maximum likelihood estimates, and obtain theoretical results on the distribution of the mis-specified Weibull maximum likelihood estimates and quantiles such as B10. Initially, these results are obtained for complete data, and then extended to type I and II censoring regimes, where consideration of terms in the likelihood and entropy functions leads to a detailed consideration of the properties of order statistics of the distributions. We also carry out a similar investigation on accelerated data sets, where there is additional complexity due to links between accelerating factors and scale parameters in reliability distributions. These links are also open to mis-specification, so allowing for various combinations of true and mis-specified models. We present theoretical results for general scale-stress relationships, but focus on practical results for the Log-linear and Arrhenius models, since these are the two relationships most widely used. Finally, we link both acceleration and censoring, and obtain theoretical results for a type II censoring regime at the lowest stress level.
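As a quick numerical illustration of the theme, the sketch below fits a Weibull by maximum likelihood to data actually drawn from a Gamma model and compares the implied B10 life (10th percentile) with the true one; all parameter values are illustrative.

```python
# Mis-specification demo: Weibull MLE fitted to Gamma data, effect on B10.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.gamma(shape=2.0, scale=1.0, size=5000)        # true model: Gamma

c, loc, scale = stats.weibull_min.fit(data, floc=0)      # mis-specified Weibull MLE
b10_weibull = stats.weibull_min.ppf(0.10, c, loc=0, scale=scale)
b10_true = stats.gamma.ppf(0.10, a=2.0, scale=1.0)
print("B10 under fitted Weibull:", b10_weibull, "| true Gamma B10:", b10_true)
```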
30

Tu, Ming-Wang. "Radar Image Processing Using Efficient Maximum Likelihood Estimator /." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487929230739301.

Full text
31

White, Scott Ian. "Stochastic volatility: Maximum likelihood estimation and specification testing." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16220/1/Scott_White_Thesis.pdf.

Full text
Abstract:
Stochastic volatility (SV) models provide a means of tracking and forecasting the variance of financial asset returns. While SV models have a number of theoretical advantages over competing variance modelling procedures they are notoriously difficult to estimate. The distinguishing feature of the SV estimation literature is that those algorithms that provide accurate parameter estimates are conceptually demanding and require a significant amount of computational resources to implement. Furthermore, although a significant number of distinct SV specifications exist, little attention has been paid to how one would choose the appropriate specification for a given data series. Motivated by these facts, a likelihood based joint estimation and specification testing procedure for SV models is introduced that significantly overcomes the operational issues surrounding existing estimators. The estimation and specification testing procedures in this thesis are made possible by the introduction of a discrete nonlinear filtering (DNF) algorithm. This procedure uses the nonlinear filtering set of equations to provide maximum likelihood estimates for the general class of nonlinear latent variable problems which includes the SV model class. The DNF algorithm provides a fast and accurate implementation of the nonlinear filtering equations by treating the continuously valued state-variable as if it were a discrete Markov variable with a large number of states. When the DNF procedure is applied to the standard SV model, very accurate parameter estimates are obtained. Since the accuracy of the DNF is comparable to other procedures, its advantages are seen as ease and speed of implementation and the provision of online filtering (prediction) of variance. Additionally, the DNF procedure is very flexible and can be used for any dynamic latent variable problem with closed form likelihood and transition functions. Likelihood based specification testing for non-nested SV specifications is undertaken by formulating and estimating an encompassing model that nests two competing SV models. Likelihood ratio statistics are then used to make judgements regarding the optimal SV specification. The proposed framework is applied to SV models that incorporate either extreme returns or asymmetries.
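The DNF idea can be sketched in a few lines: discretize the latent log-volatility onto a finite grid, turn the AR(1) transition density into a Markov matrix, and run the usual predict/update recursion while accumulating the log-likelihood. Grid size, truncation range, and parameter values below are illustrative assumptions.

```python
# Discrete nonlinear filter (grid filter) log-likelihood for a standard SV model:
#   h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,   y_t = exp(h_t / 2) * eps_t.
import numpy as np
from scipy.stats import norm

def sv_loglik(y, mu, phi, sigma, n_grid=100):
    sd_h = sigma / np.sqrt(1 - phi**2)                  # stationary sd of h
    h = np.linspace(mu - 4 * sd_h, mu + 4 * sd_h, n_grid)
    # Markov transition matrix on the grid (rows: h_{t-1}, cols: h_t).
    T = norm.pdf(h[None, :], loc=mu + phi * (h[:, None] - mu), scale=sigma)
    T /= T.sum(axis=1, keepdims=True)
    p = norm.pdf(h, loc=mu, scale=sd_h); p /= p.sum()   # stationary start
    loglik = 0.0
    for yt in y:
        p = p @ T                                       # predict
        lik = norm.pdf(yt, scale=np.exp(h / 2)) * p     # update with measurement
        c = lik.sum()
        loglik += np.log(c)
        p = lik / c
    return loglik

rng = np.random.default_rng(7)
mu, phi, sigma = -1.0, 0.95, 0.2
h = np.full(300, mu)
for t in range(1, 300):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.normal()
y = np.exp(h / 2) * rng.normal(size=300)
print("DNF log-likelihood at the true parameters:", sv_loglik(y, mu, phi, sigma))
```

Wrapping `sv_loglik` in a numerical optimizer gives the maximum likelihood estimates the abstract refers to.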
32

White, Scott Ian. "Stochastic volatility : maximum likelihood estimation and specification testing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16220/.

Full text
33

Jeong, Minsoo. "Asymptotics for the maximum likelihood estimators of diffusion models." [College Station, Tex.: Texas A&M University], 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2335.

Full text
34

Cule, Madeleine. "Maximum likelihood estimation of a multivariate log-concave density." Thesis, University of Cambridge, 2010. https://www.repository.cam.ac.uk/handle/1810/237061.

Full text
Abstract:
Density estimation is a fundamental statistical problem. Many methods are either sensitive to model misspecification (parametric models) or difficult to calibrate, especially for multivariate data (nonparametric smoothing methods). We propose an alternative approach using maximum likelihood under a qualitative assumption on the shape of the density, specifically log-concavity. The class of log-concave densities includes many common parametric families and has desirable properties. For univariate data, these estimators are relatively well understood, and are gaining in popularity in theory and practice. We discuss extensions for multivariate data, which require different techniques. After establishing existence and uniqueness of the log-concave maximum likelihood estimator for multivariate data, we see that a reformulation allows us to compute it using standard convex optimization techniques. Unlike kernel density estimation, or other nonparametric smoothing methods, this is a fully automatic procedure, and no additional tuning parameters are required. Since the assumption of log-concavity is non-trivial, we introduce a method for assessing the suitability of this shape constraint and apply it to several simulated datasets and one real dataset. Density estimation is often one stage in a more complicated statistical procedure. With this in mind, we show how the estimator may be used for plug-in estimation of statistical functionals. A second important extension is the use of log-concave components in mixture models. We illustrate how we may use an EM-style algorithm to fit mixture models where the number of components is known. Applications to visualization and classification are presented. In the latter case, improvement over a Gaussian mixture model is demonstrated. Performance for density estimation is evaluated in two ways. Firstly, we consider Hellinger convergence (the usual metric of theoretical convergence results for nonparametric maximum likelihood estimators). We prove consistency with respect to this metric and heuristically discuss rates of convergence and model misspecification, supported by empirical investigation. Secondly, we use the mean integrated squared error to demonstrate favourable performance compared with kernel density estimates using a variety of bandwidth selectors, including sophisticated adaptive methods. Throughout, we emphasise the development of stable numerical procedures able to handle the additional complexity of multivariate data.
35

Hartford, Alan Hughes. "Computational approaches for maximum likelihood estimation for nonlinear mixed models." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000719-081254.

Full text
Abstract:

The nonlinear mixed model is an important tool for analyzing pharmacokinetic and other repeated-measures data. In particular, these models are used when the measured response for an individual has a nonlinear relationship with unknown, random, individual-specific parameters. Ideally, the method of maximum likelihood is used to find estimates for the parameters of the model after integrating out the random effects in the conditional likelihood. However, closed form solutions to the integral are generally not available. As a result, methods have been previously developed to find approximate maximum likelihood estimates for the parameters in the nonlinear mixed model. These approximate methods include First Order linearization, Laplace's approximation, importance sampling, and Gaussian quadrature. The methods are available today in several software packages for models of limited sophistication; constant conditional error variance is required for proper utilization of most software. In addition, distributional assumptions are needed. This work investigates how robust two of these methods, First Order linearization and Laplace's approximation, are to these assumptions. The finding is that Laplace's approximation performs well, resulting in better estimation than first order linearization when both models converge to a solution.

A method must provide good estimates of the likelihood at points in the parameter space near the solution. This work compares this ability among the numerical integration techniques: Gaussian quadrature, importance sampling, and Laplace's approximation. A new "scaled" and "centered" version of Gaussian quadrature is found to be the most accurate technique. In addition, the technique requires evaluation of the integrand at only a few abscissas. Laplace's method also performs well; it is more accurate than importance sampling with even 100 importance samples over two dimensions. Even so, Laplace's method still does not perform as well as Gaussian quadrature. Overall, Laplace's approximation performs better than expected, and is shown to be a reliable method while still computationally less demanding.

This work also introduces a new method to maximize the likelihood. This method can be sharpened to any desired level of accuracy. Stochastic approximation is incorporated to continue sampling until enough information is gathered to result in accurate estimation. This new method is shown to work well for linear mixed models, but is not yet successful for the nonlinear mixed model.
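To show what the quadrature comparison is about, here is a minimal Gauss-Hermite evaluation of one subject's marginal likelihood in a random-intercept logistic model, a simple stand-in for the nonlinear mixed models studied; the model and numbers are illustrative, and the "scaled and centered" refinement the thesis proposes is not shown.

```python
# Gauss-Hermite approximation of L = ∫ prod_j p(y_j | b) φ(b; 0, σ²) db.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)    # physicists' rule

def marginal_loglik(y, x, beta, sigma):
    # Change of variables b = √2·σ·node turns the Gaussian integral into
    # ∫ f(b) φ(b; 0, σ²) db ≈ (1/√π) Σ w_i f(√2 σ x_i).
    b = np.sqrt(2.0) * sigma * nodes
    eta = beta * x[:, None] + b[None, :]                # (n_obs, n_nodes)
    p = 1 / (1 + np.exp(-eta))
    lik_given_b = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.sum(weights * lik_given_b) / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1]); x = np.array([0.0, 0.5, 1.0, 1.5])
print(marginal_loglik(y, x, beta=0.8, sigma=1.2))
```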

36

Garnham, Janine B. "Parallelization of the maximum likelihood approach to phylogenetic inference /." Online version of thesis, 2007. http://hdl.handle.net/1850/4778.

Full text
37

Storer, Robert Hedley. "Adaptive estimation by maximum likelihood fitting of Johnson distributions." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/24082.

Full text
38

Al-Nashi, Hamid Rasheed. "A maximum likelihood method to estimate EEG evoked potentials /." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72016.

Full text
Abstract:
A new method for the estimation of the EEG evoked potential (EP) is presented in this thesis. This method is based on a new model of the EEG response which is assumed to be the sum of the EP and independent correlated Gaussian noise representing the spontaneous EEG activity. The EP is assumed to vary in both shape and latency, with the shape variation represented by correlated Gaussian noise which is modulated by the EP. The latency of the EP is also assumed to vary over the ensemble of responses in a random manner governed by some unspecified probability density. No assumption on stationarity is needed for the noise.
With the model described in state-space form, a Kalman filter is constructed, and the variance of the innovation process of the response measurements is derived. A maximum likelihood solution to the EP estimation problem is then obtained via this innovation process.
Tests using simulated responses show that the method is effective in estimating the EP signal at signal-to-noise ratio as low as -6 dB. Other tests using real normal visual response data yield reasonably consistent EP estimates whose main components are narrower and larger than the ensemble average. In addition, the likelihood function obtained by our method can be used as a discriminant between normal and abnormal responses, and it requires smaller ensembles than other methods.
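The innovations route to the likelihood is standard and worth seeing in miniature: for a linear-Gaussian state-space model, the Kalman filter delivers the exact log-likelihood as a sum over innovations and their variances. The scalar model below is a generic illustration, far simpler than the EP model above.

```python
# Exact log-likelihood via the Kalman filter innovation process for
#   x_t = a*x_{t-1} + w_t (w ~ N(0, q)),   y_t = x_t + v_t (v ~ N(0, r)).
import numpy as np

def kalman_loglik(y, a, q, r, m0=0.0, p0=1.0):
    m, p, ll = m0, p0, 0.0
    for yt in y:
        m, p = a * m, a * a * p + q          # predict
        s = p + r                            # innovation variance
        innov = yt - m                       # innovation
        ll += -0.5 * (np.log(2 * np.pi * s) + innov**2 / s)
        k = p / s                            # Kalman gain
        m, p = m + k * innov, (1 - k) * p    # update
    return ll

rng = np.random.default_rng(8)
x, ys = 0.0, []
for _ in range(200):
    x = 0.9 * x + rng.normal(scale=0.3)
    ys.append(x + rng.normal(scale=0.5))
print("log-likelihood at the true parameters:",
      kalman_loglik(np.array(ys), a=0.9, q=0.09, r=0.25))
```

Maximizing this function over the model parameters is the ML step the abstract builds on.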
39

Lázaro, Blasco Francisco [Verfasser]. "Fountain Codes under Maximum Likelihood Decoding / Francisco Lázaro Blasco." München : Verlag Dr. Hut, 2017. http://d-nb.info/1137023546/34.

Full text
40

Sagulenko, Pavel [Verfasser], and Richard [Akademischer Betreuer] Neher. "Maximum Likelihood Phylodynamic Analysis / Pavel Sagulenko ; Betreuer: Richard Neher." Tübingen : Universitätsbibliothek Tübingen, 2018. http://d-nb.info/1168729092/34.

Full text
41

Zhou, Dan. "Bayesian statistics & maximum likelihood in twinned crystal refinement." Thesis, University of York, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425392.

Full text
42

Gandhi, Mital A. "Robust Kalman Filters Using Generalized Maximum Likelihood-Type Estimators." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29902.

Full text
Abstract:
Estimation methods such as the Kalman filter (KF) identify best state estimates based on certain optimality criteria using a model of the system and the observations. A common assumption underlying the estimation is that the noise is Gaussian. In practical systems though, one quite frequently encounters thick-tailed, non-Gaussian noise. Statistically, contamination by this type of noise can be seen as inducing outliers among the data and leads to significant degradation in the KF. While many nonlinear methods to cope with non-Gaussian noise exist, a filter that is robust in the presence of outliers and maintains high statistical efficiency is desired. To solve this problem, a new robust Kalman filter framework is proposed that bounds the influence of observation, innovation, and structural outliers in a discrete linear system. This filter is designed to process the observations and predictions together, making it very effective in suppressing multiple outliers. In addition, it consists of a new prewhitening method that incorporates a robust multivariate estimator of location and covariance. Furthermore, the filter provides state estimates that are robust to outliers while maintaining a high statistical efficiency at the Gaussian distribution by applying a generalized maximum likelihood-type (GM) estimator. Finally, the filter incorporates the correct error covariance matrix that is derived using the GM-estimator's influence function. This dissertation also addresses robust state estimation for systems that follow a broad class of nonlinear models that possess two or more equilibrium points. Tracking state transitions from one equilibrium point to another rapidly and accurately in such models can be a difficult task, and a computationally simple solution is desirable. To that effect, a new robust extended Kalman filter is developed that exploits observational redundancy and the nonlinear weights of the GM-estimator to track the state transitions rapidly and accurately. Through simulations, the performances of the new filters are analyzed in terms of robustness to multiple outliers and estimation capabilities for the following applications: tracking autonomous systems, enhancing actual speech from cellular phones, and tracking climate transitions. Furthermore, the filters are compared with the state-of-the-art, i.e., the H∞ filter for tracking an autonomous vehicle and the extended Kalman filter for sensing climate transitions.
Ph. D.
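A one-dimensional caricature of the GM-estimation idea: standardize the innovation, apply a Huber-type weight, and shrink the Kalman gain accordingly so that gross outliers are damped. This is a generic robustification sketch under assumed scalar dynamics, not the dissertation's filter; the tuning constant 1.345 is the usual Huber choice.

```python
# Robust (Huber-weighted) scalar Kalman measurement update.
import numpy as np

def robust_update(m_pred, p_pred, y, r, c=1.345):
    s = p_pred + r                            # innovation variance
    z = (y - m_pred) / np.sqrt(s)             # standardized innovation
    w = 1.0 if abs(z) <= c else c / abs(z)    # Huber weight in (0, 1]
    k = w * p_pred / s                        # down-weighted gain
    return m_pred + k * (y - m_pred), (1 - k) * p_pred

print(robust_update(0.0, 1.0, 0.5, 1.0))      # ordinary-looking observation
print(robust_update(0.0, 1.0, 25.0, 1.0))     # gross outlier, heavily damped
```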
43

Wang, Qiang. "Maximum likelihood estimation of phylogenetic tree with evolutionary parameters." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1083177084.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xi, 167 p.; also includes graphics. Includes bibliographical references (p. 157-167). Available online via OhioLINK's ETD Center.
44

Sinnokrot, Mohanned Omar. "Space-time block codes with low maximum-likelihood decoding complexity." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31752.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Barry, John; Committee Co-Chair: Madisetti, Vijay; Committee Member: Andrew, Alfred; Committee Member: Li, Ye; Committee Member: Ma, Xiaoli; Committee Member: Stuber, Gordon. Part of the SMARTech Electronic Thesis and Dissertation Collection.
45

LIN, MING-YONG, and 林銘勇. "Pipelined maximum likelihood decoder." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/93550850465726586460.

Full text
46

Wang, Steven Xiaogang. "Maximum weighted likelihood estimation." Thesis, 2001. http://hdl.handle.net/2429/13844.

Full text
Abstract:
A maximum weighted likelihood method is proposed to combine all the relevant data from different sources to improve the quality of statistical inference, especially when the sample sizes are moderate or small. The linear weighted likelihood estimator (WLE) is studied in depth. The weak consistency, strong consistency and the asymptotic normality of the WLE are proved. The asymptotic properties of the WLE using adaptive weights are also established. A procedure for adaptively choosing the weights by using cross-validation is proposed in the thesis. The analytical forms of the "adaptive weights" are derived when the WLE is a linear combination of the MLEs. The weak consistency and asymptotic normality of the WLE with weights chosen by the cross-validation criterion are established. The connection between the WLE and theoretical information theory is discovered. The derivation of the weighted likelihood by using the maximum entropy principle is presented. The approximations of the distributions of the WLE by using saddlepoint approximation for small sample sizes are derived. The results of the application to disease mapping are shown in the last chapter of this thesis.
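In its simplest linear form, the WLE just combines MLEs from the target sample and related samples with weights; the sketch below uses a fixed weight for illustration, whereas the thesis selects the weights adaptively by cross-validation. Data and the weight value are invented.

```python
# Linear weighted likelihood estimator: weighted combination of sample means.
import numpy as np

rng = np.random.default_rng(9)
target = rng.normal(loc=1.0, scale=1.0, size=15)   # small target sample
aux = rng.normal(loc=1.2, scale=1.0, size=200)     # larger, slightly biased source

lam = 0.7                                          # weight on the target MLE
wle = lam * target.mean() + (1 - lam) * aux.mean()
print("MLE (target only):", target.mean(), "| WLE:", wle)
```

The trade-off is classic bias-variance: borrowing from the auxiliary sample shrinks variance at the cost of a small bias when the populations differ.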
47

"Optimal recursive maximum likelihood estimation." Sloan School of Management, Massachusetts Institute of Technology], 1987. http://hdl.handle.net/1721.1/2987.

Full text
48

Seo, Byungtae. "Doubly-smoothed maximum likelihood estimation." 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-2129/index.html.

Full text
49

Buot, Max. "Genetic algorithms and maximum likelihood estimation /." 2003. http://wwwlib.umi.com/dissertations/fullcit/3108787.

Full text
50

Juang, Jing-Tang, and 莊景棠. "Maximum entropy-type classification likelihood methods." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/65073862497665161959.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 95
In fuzzy cluster analysis, the fuzzy c-means (FCM) clustering algorithm is the best known and most used method, and it has many generalized variants. Some of these, such as fuzzy classification maximum likelihood (FCML), lead to penalized fuzzy c-means (PFCM), maximum entropy classification (MEC), and alternative fuzzy c-means (AFCM); these are studied in this thesis, where they are shown to give better results. We extend the FCM family by adding a regularization term that varies with the memberships, which yields a generalized type of fuzzy c-means clustering algorithm, called the maximum entropy clustering algorithm (MEC). Numerical examples on estimating the parameters of normal mixtures show that MEC is more accurate and effective than PFCM and FCM.
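A compact sketch of the maximum-entropy clustering update: with an entropy regularizer, the membership update becomes a softmax of negative squared distances, followed by the usual membership-weighted center update. The 1-D two-cluster data and the inverse temperature beta are illustrative choices, not the thesis's experiments.

```python
# Entropy-regularized fuzzy clustering (MEC-style) alternating updates.
import numpy as np

rng = np.random.default_rng(10)
x = np.concatenate([rng.normal(0, 0.3, 100), rng.normal(3, 0.3, 100)])
centers = np.array([0.5, 2.0])
beta = 5.0                                   # inverse "temperature"

for _ in range(50):
    d2 = (x[:, None] - centers[None, :]) ** 2
    u = np.exp(-beta * d2)
    u /= u.sum(axis=1, keepdims=True)        # softmax memberships (entropy term)
    centers = (u * x[:, None]).sum(axis=0) / u.sum(axis=0)
print("estimated centers:", centers)
```

As beta grows the memberships harden toward k-means assignments; small beta gives very fuzzy partitions.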