To see the other types of publications on this topic, follow the link: Maximum likelihood procedures.

Journal articles on the topic 'Maximum likelihood procedures'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Maximum likelihood procedures.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Green, David M. "Maximum‐likelihood procedures and the inattentive observer." Journal of the Acoustical Society of America 97, no. 6 (June 1995): 3749–60. http://dx.doi.org/10.1121/1.412390.

2

Goutsias, John, and Jerry M. Mendel. "Maximum‐likelihood deconvolution: An optimization theory perspective." GEOPHYSICS 51, no. 6 (June 1986): 1206–20. http://dx.doi.org/10.1190/1.1442175.

Abstract:
A large number of deconvolution procedures have appeared in the literature during the last three decades, including a number of maximum‐likelihood deconvolution (MLD) procedures. The major advantages of the MLD procedures are (1) no assumption is required about the phase of the wavelet (most of the classical deconvolution techniques assume a minimum‐phase wavelet, an assumption that may not be appropriate for many data sets); (2) MLD procedures can resolve closely spaced events (i.e., they are high‐resolution techniques); and (3) they can efficiently handle modeling and measurement errors, as well as backscatter effects (i.e., reflections from small features). A comparative study of six different MLD procedures for estimating the input of a linear, time‐invariant system from measurements, which have been corrupted by additive noise, was made by using a common framework developed from fundamental optimization theory arguments. To date, only the Kormylo and the Chi‐t algorithms can be recommended.
3

Kosfeld, R. "Efficient iteration procedures for maximum likelihood factor analysis." Statistische Hefte 28, no. 1 (December 1987): 301–15. http://dx.doi.org/10.1007/bf02932610.

4

Shore, Haim. "Response modeling methodology (RMM)—maximum likelihood estimation procedures." Computational Statistics & Data Analysis 49, no. 4 (June 2005): 1148–72. http://dx.doi.org/10.1016/j.csda.2004.07.006.

5

De Leeuw, Jan, and Norman Verhelst. "Maximum Likelihood Estimation in Generalized Rasch Models." Journal of Educational Statistics 11, no. 3 (September 1986): 183–96. http://dx.doi.org/10.3102/10769986011003183.

Abstract:
We review various models and techniques that have been proposed for item analysis according to the ideas of Rasch. A general model is proposed that unifies them, and maximum likelihood procedures are discussed for this general model. We show that unconditional maximum likelihood estimation in the functional Rasch model, as proposed by Wright and Haberman, is an important special case. Conditional maximum likelihood estimation, as proposed by Rasch and Andersen, is another important special case. Both procedures are related to marginal maximum likelihood estimation in the structural Rasch model, which has been studied by Sanathanan, Andersen, Tjur, Thissen, and others. Our theoretical results lead to suggestions for alternative computational algorithms.
6

Yang, Miin-Shen. "On a class of fuzzy classification maximum likelihood procedures." Fuzzy Sets and Systems 57, no. 3 (August 1993): 365–75. http://dx.doi.org/10.1016/0165-0114(93)90030-l.

7

Johnson, Roger W., and Donna V. Kliche. "Large Sample Comparison of Parameter Estimates in Gamma Raindrop Distributions." Atmosphere 11, no. 4 (March 29, 2020): 333. http://dx.doi.org/10.3390/atmos11040333.

Abstract:
Raindrop size distributions have been characterized through the gamma family. Over the years, quite a few estimates of these gamma parameters have been proposed. The natural question for the practitioner, then, is what estimation procedure should be used. We provide guidance in answering this question when a large sample (>2000 drops) of accurately measured drops is available. Seven estimation procedures from the literature were examined: five method-of-moments procedures, maximum likelihood, and a pseudo-maximum-likelihood procedure. We show that the two maximum likelihood procedures provide the best precision (lowest variance) in estimating the gamma parameters. Method-of-moments procedures involving higher-order moments, on the other hand, give rise to poor precision (high variance) in estimating these parameters. A technique called the delta method assisted in our comparison of these various estimation procedures.
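As a rough illustration of the comparison discussed in the entry above, the following sketch fits a gamma distribution to simulated "drop size" data by both the method of moments and maximum likelihood (using SciPy). The shape and scale values are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shape_true, scale_true = 3.0, 0.5                  # hypothetical gamma population
d = rng.gamma(shape_true, scale_true, size=5000)   # simulated drop sizes

# Method of moments: match the sample mean and variance
m, v = d.mean(), d.var()
shape_mom, scale_mom = m**2 / v, v / m

# Maximum likelihood (location parameter fixed at zero)
shape_mle, _, scale_mle = stats.gamma.fit(d, floc=0)
```

With large samples both approaches recover the parameters, but the maximum likelihood estimates typically have lower variance, which is the precision point the abstract makes.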
8

Rukhin, A. L., and J. Shi. "Recursive procedures for multiple decisions: finite time memory and stepwise maximum likelihood procedure." Statistical Papers 36, no. 1 (December 1995): 155–62. http://dx.doi.org/10.1007/bf02926028.

9

Holly and Magnus. "A Note on Instrumental Variables and Maximum Likelihood Estimation Procedures." Annales d'Économie et de Statistique, no. 10 (1988): 121. http://dx.doi.org/10.2307/20075698.

10

Jain, Ram B., Samuel P. Caudill, Richard Y. Wang, and Elizabeth Monsell. "Evaluation of Maximum Likelihood Procedures To Estimate Left Censored Observations." Analytical Chemistry 80, no. 4 (February 2008): 1124–32. http://dx.doi.org/10.1021/ac0711788.

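A minimal sketch of left-censored maximum likelihood estimation of the kind the entry above evaluates (not the authors' exact procedure): observations below a detection limit contribute the normal CDF at the limit to the likelihood instead of the density. All numbers here are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=2000)    # hypothetical analyte measurements
lod = 8.0                                # hypothetical detection limit
observed = x[x >= lod]
n_cens = int(np.sum(x < lod))            # values reported only as "< LOD"

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)            # keep sigma positive
    ll = stats.norm.logpdf(observed, mu, sigma).sum()
    ll += n_cens * stats.norm.logcdf(lod, mu, sigma)   # censored contribution
    return -ll

res = optimize.minimize(neg_loglik, x0=[observed.mean(), 0.0])
mu_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
```

Ignoring the censored observations (or substituting LOD/2 for them) biases the mean upward; the likelihood treatment above recovers both parameters.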
11

Egbo, I., M. Egbo, and S. I. Onyeagu. "Performance of Robust Linear Classifier with Multivariate Binary Variables." Journal of Mathematics Research 7, no. 4 (November 3, 2015): 104. http://dx.doi.org/10.5539/jmr.v7n4p104.

Abstract:
This paper focuses on robust classification procedures in two-group discriminant analysis with multivariate binary variables. A data set based on the normal distribution was generated using the R statistical software (version 2.15.3); using Bartlett's approximation to chi-square, the data set was found to be homogeneous and was subjected to five linear classifiers, namely: the maximum likelihood discriminant function, Fisher's linear discriminant function, the likelihood ratio function, the full multinomial function, and the nearest neighbour function rule. To judge the performance of these procedures, the apparent error rates for each procedure were obtained for different sample sizes. The results ranked the procedures as follows: Fisher's linear discriminant function, maximum likelihood, full multinomial, likelihood ratio function, and nearest neighbour function.
12

Calcagnì, Antonio, Livio Finos, Gianmarco Altoé, and Massimiliano Pastore. "A Maximum Entropy Procedure to Solve Likelihood Equations." Entropy 21, no. 6 (June 15, 2019): 596. http://dx.doi.org/10.3390/e21060596.

Abstract:
In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require setting the score function of the maximum likelihood problem to zero, we propose an alternative strategy where the score is instead used as an external informative constraint on the maximization of the concave Shannon entropy function. The problem involves the reparameterization of the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation where parameters are searched for in a smaller (hyper)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggest that the maximum entropy reformulation of the score problem solves the likelihood equation problem. Similarly, when maximum likelihood estimation is difficult, as in the case of logistic regression under separation, the maximum entropy proposal achieved results (numerically) comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings reveal that a maximum entropy solution can be considered as an alternative technique for solving the likelihood equation.
13

Calzolari, Giorgio, Maria Gabriella Campolo, Antonino Di Pino, and Laura Magazzini. "Maximum likelihood estimation of an across-regime correlation parameter." Stata Journal: Promoting communications on statistics and Stata 21, no. 2 (June 2021): 430–61. http://dx.doi.org/10.1177/1536867x211025834.

Abstract:
In this article, we describe the mlcar command, which implements a maximum likelihood method to simultaneously estimate the regression coefficients of a two-regime endogenous switching model and the coefficient measuring the correlation of outcomes between the two regimes. This coefficient, known as the “across-regime” correlation parameter, is generally unidentified in the traditional estimation procedures.
14

Motta, Anderson C. O., and Luiz K. Hotta. "Exact Maximum Likelihood and Bayesian Estimation of the Stochastic Volatility Model." Brazilian Review of Econometrics 23, no. 2 (November 2, 2003): 183. http://dx.doi.org/10.12660/bre.v23n22003.2724.

Abstract:
This paper considers the classical and Bayesian approaches to the estimation of the stochastic volatility (SV) model. The estimation procedures rely heavily on the fact that the SV model can be written in state space form (SSF) with non-Gaussian disturbances. The first widely employed estimation procedure to use this form was the quasi-maximum likelihood estimator proposed by Harvey et al. The Bayesian approach was proposed by Jacquier et al. (1994). Lately, many papers have appeared in the literature dealing with non-Gaussian state space models, which have directly influenced the estimation of the SV model. Some of the methods proposed to estimate the SV model are compared using the São Paulo stock exchange index (IBOVESPA) and simulated series. The influence of outliers is also considered.
15

Johnson, Roger W., Donna V. Kliche, and Paul L. Smith. "Comparison of Estimators for Parameters of Gamma Distributions with Left-Truncated Samples." Journal of Applied Meteorology and Climatology 50, no. 2 (February 1, 2011): 296–310. http://dx.doi.org/10.1175/2010jamc2478.1.

Abstract:
When fitting a raindrop size distribution using a gamma model from data collected by a disdrometer, some consideration needs to be given to the small drops that fail to be recorded (typical disdrometer minimum size thresholds being in the 0.3-0.5-mm range). To this end, a gamma estimation procedure using maximum likelihood estimation has recently been published. The current work adds another procedure that accounts for the left-truncation problem in the data; in particular, an L-moments procedure is developed. These two estimation procedures, along with a traditional method-of-moments procedure that also accounts for data truncation, are then compared via simulation of volume samples from known gamma drop size distributions. For the range of gamma distributions considered, the maximum likelihood and L-moments procedures, which perform comparably, are found to outperform the method-of-moments procedure. As these three procedures do not yield simple estimates in closed form, salient details of the R statistical code used in the simulations are included.
16

Jesus, Sérgio M. "Can Maximum Likelihood Estimators Improve Genetic Algorithm Search in Geoacoustic Inversion?" Journal of Computational Acoustics 06, no. 01n02 (March 1998): 73–82. http://dx.doi.org/10.1142/s0218396x98000077.

Abstract:
The principles for estimating sea floor parameters via matched-field processing (MFP)-based techniques are now well known. In pure MFP, source localization is often seen as a range-depth estimation problem, while the use of MFP for geoacoustic estimation generally involves a computationally intensive optimization procedure. In the last few years, much effort has been devoted to developing new, or improving upon existing, search procedures to solve the optimization problem. Little or no attention has been given to the ensemble MFP-optimization, treating it as a single technique. The question addressed in this paper is centered on the relation between the MFP parameter estimator technique, which defines the objective function, and the search procedure used to optimize it. In particular, we are interested in questions like: Can a faster search or a more accurate estimate be achieved with a "peaky" surface instead of a flat and ambiguous surface? Is the inversion process affected by cross-frequency estimators and model mismatch? Does the search procedure need to be modified, and if so, how, to account for navigating this "peaky" surface? This paper attempts to answer these and other related questions in the context of the June '97 geoacoustic inversion workshop data set.
17

Hicks, James E., and Mounir M. Abdel-Aal. "Maximum Likelihood Estimation for Combined Travel Choice Model Parameters." Transportation Research Record: Journal of the Transportation Research Board 1645, no. 1 (January 1998): 160–69. http://dx.doi.org/10.3141/1645-20.

Abstract:
Equilibrium models of combined location and travel choices solve for the modal link flow pattern, which simultaneously solves a constrained minimization problem and satisfies a set of equilibrium conditions characterizing a rational behavior for traveler choices in an urban transportation system. The minimization problem typically is made to be representative of the particular urban area being studied by including coefficients of travel costs and travel choices that have been estimated from locally available observed data. For large urban areas, in practice, it is possible to derive interzonal travel times and costs only from the travel model, because suitable observed data are nonexistent. In this case, the estimation problem is a function of the travel model variables and, at the same time, the travel model is a function of the parameters determined by the estimation problem. Procedures to computationally search for a stable solution to this bilevel optimization problem have been addressed with limited success. The parameter estimation is solved in an iterative procedure in which first parameters are held fixed and the travel model is solved, then travel patterns are held fixed and the maximum likelihood parameters are solved by the Newton-Raphson method. Each successive parameter estimation resulting from these two steps results in a new set of parameter values for the next iteration until stable values for the parameters are achieved. The quality of the convergence of the parameter estimates is reported.
18

Madigan, Robert, and David Williams. "Maximum-likelihood psychometric procedures in two-alternative forced-choice: Evaluation and recommendations." Perception & Psychophysics 42, no. 3 (May 1987): 240–49. http://dx.doi.org/10.3758/bf03203075.

19

Kuhnt, Sonja. "Outlier Identification Procedures for Contingency Tables using Maximum Likelihood and L1 Estimates." Scandinavian Journal of Statistics 31, no. 3 (September 2004): 431–42. http://dx.doi.org/10.1111/j.1467-9469.2004.02_057.x.

20

Millar, Russell B. "Maximum Likelihood Estimation of Mixed Stock Fishery Composition." Canadian Journal of Fisheries and Aquatic Sciences 44, no. 3 (March 1, 1987): 583–90. http://dx.doi.org/10.1139/f87-071.

Abstract:
In this paper the maximum likelihood method of estimating stock composition is initially presented under the assumption that all of the sampled characteristics are of discrete type (electrophoretic, meristic). The extension to allow continuous data (morphometric) is seen to follow from just a slight change in notation. It is shown that the classification method with error correction is a special case of the maximum likelihood method. Results from finite mixture theory are used to show that, in practice, the method is well defined and that the maximum likelihood estimates of composition are unique and can always be found using the EM (expectation–maximization) algorithm. A comprehensive discussion of the reliability of the estimates is undertaken. An explicit nonparametric (infinitesimal jackknife) estimator of estimate variability is presented. Procedures to reduce variance and bias are discussed. The composition of a test mixed stock fishery is estimated and many of the techniques proposed in the reliability discussion are used.
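The EM iteration for stock composition mentioned in the entry above can be sketched as follows, substituting two known (hypothetical) normal component densities for real baseline stock data; only the mixing proportions are estimated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pi_true = 0.7                              # true proportion of "stock 1"
labels = rng.random(4000) < pi_true
x = np.where(labels, rng.normal(0.0, 1.0, 4000), rng.normal(3.0, 1.0, 4000))

# Known component densities (in practice, estimated from baseline samples)
dens = np.column_stack([stats.norm.pdf(x, 0.0, 1.0),
                        stats.norm.pdf(x, 3.0, 1.0)])

pi = np.array([0.5, 0.5])                  # initial composition guess
for _ in range(200):
    w = pi * dens                          # E-step: posterior membership weights
    w /= w.sum(axis=1, keepdims=True)
    pi = w.mean(axis=0)                    # M-step: update mixing proportions
```

As the abstract notes, the likelihood for mixing proportions is well behaved, so this iteration converges reliably to the maximum likelihood composition estimate.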
21

Durbin, James. "Maximum Likelihood Estimation of the Parameters of a System of Simultaneous Regression Equations." Econometric Theory 4, no. 1 (April 1988): 159–70. http://dx.doi.org/10.1017/s0266466600011919.

Abstract:
Procedures for computing the full information maximum likelihood (FIML) estimates of the parameters of a system of simultaneous regression equations have been described by Koopmans, Rubin, and Leipnik, Chernoff and Divinsky, Brown, and Eisenpress. However, all of these methods are rather complicated since they are based on estimating equations that are expressed in an inconvenient form. In this paper, a transformation of the maximum likelihood (ML) equations is developed which not only leads to simpler computations but which also simplifies the study of the properties of the estimates. The equations are obtained in a form which is capable of solution by a modified Newton-Raphson iterative procedure. The form obtained also shows up very clearly the relation between the maximum likelihood estimates and those obtained by the three-stage least squares method of Zellner and Theil.
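Durbin's full-system equations are beyond a short example, but the modified Newton-Raphson idea the abstract mentions can be shown on a scalar score equation. Here it solves for the MLE of a gamma shape parameter with known unit scale; this is an illustrative stand-in, not the paper's simultaneous-equations system.

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(3)
x = rng.gamma(2.5, 1.0, size=3000)    # hypothetical sample, true shape 2.5
s = np.log(x).mean()

k = 1.0                                # starting value
for _ in range(50):
    score = s - digamma(k)             # derivative of the average log-likelihood
    hess = -polygamma(1, k)            # second derivative (negative trigamma)
    step = score / hess
    k -= step                          # Newton-Raphson update
    if abs(step) < 1e-10:
        break
```

At convergence the score is zero, i.e. digamma(k) equals the mean log observation.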
22

Conese, Claudio, and Fabio Maselli. "Use of error matrices to improve area estimates with maximum likelihood classification procedures." Remote Sensing of Environment 40, no. 2 (May 1992): 113–24. http://dx.doi.org/10.1016/0034-4257(92)90009-9.

23

Chen, Xuedong, Qianying Zeng, and Qiankun Song. "Penalized Maximum Likelihood Method to a Class of Skewness Data Analysis." Mathematical Problems in Engineering 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/824816.

Abstract:
An extension of some standard likelihood and variable selection criteria, based on procedures for linear regression models under the skew-normal distribution or the skew-t distribution, is developed. This novel class of models provides a useful generalization of symmetrical linear regression models, since the random term distributions cover symmetric as well as asymmetric and heavy-tailed distributions. A generalized expectation-maximization algorithm is developed for computing the l1-penalized estimator. Efficacy of the proposed methodology and algorithm is demonstrated on simulated data.
24

Kitamura, Yuichi. "Estimation of Cointegrated Systems with I(2) Processes." Econometric Theory 11, no. 1 (February 1995): 1–24. http://dx.doi.org/10.1017/s0266466600009014.

Abstract:
This paper considers the properties of systems likelihood procedures for cointegrated systems when I(2) variables are present. Two alternative methods are proposed: one is based on the full system likelihood, whereas the other is based on the subsystem likelihood. By eliminating all unit roots in the system through prior information concerning their presence, these procedures yield estimates whose asymptotic distributions are mixed normal, free from nuisance parameters, and median-unbiased. Both methods are extensions of a full system maximum likelihood procedure by Phillips (1991a) to I(2) models. Three cases of cointegration with I(2) variables are considered in order to cover a wide variety of cointegration relationships. A triangular ECM representation and the two ML estimates are derived for each case, and the asymptotics are discussed as well. The asymptotic efficiency of the two estimates is also considered.
25

Xin, Hua, Zhifang Liu, Yuhlong Lio, and Tzong-Ru Tsai. "Accelerated Life Test Method for the Doubly Truncated Burr Type XII Distribution." Mathematics 8, no. 2 (January 23, 2020): 162. http://dx.doi.org/10.3390/math8020162.

Abstract:
The Burr type XII (BurrXII) distribution is very flexible for modeling and has earned much attention in the past few decades. In this study, the maximum likelihood estimation method and two Bayesian estimation procedures are investigated based on constant-stress accelerated life test (ALT) samples, which are obtained from the doubly truncated three-parameter BurrXII distribution. Because computational difficulty occurs for the maximum likelihood estimation method, two Bayesian procedures are suggested to estimate model parameters and lifetime quantiles under the normal use condition. A Markov chain Monte Carlo approach using the Metropolis-Hastings algorithm via Gibbs sampling is built to obtain Bayes estimators of the model parameters and to construct credible intervals. The proposed Bayesian estimation procedures are simple for practical use, and the obtained Bayes estimates are reliable for evaluating the reliability of lifetime products based on ALT samples. Monte Carlo simulations were conducted to evaluate the performance of these two Bayesian estimation procedures. Simulation results show that the second Bayesian estimation procedure outperforms the first in terms of bias and mean squared error when users do not have sufficient knowledge to set up the hyperparameters of the prior distributions. Finally, a numerical example about oil-well pumps is used for illustration.
26

Lee, Sunbok. "Detecting Differential Item Functioning Using the Logistic Regression Procedure in Small Samples." Applied Psychological Measurement 41, no. 1 (September 24, 2016): 30–43. http://dx.doi.org/10.1177/0146621616668015.

Abstract:
The logistic regression (LR) procedure for testing differential item functioning (DIF) typically depends on the asymptotic sampling distributions. The likelihood ratio test (LRT) usually relies on the asymptotic chi-square distribution. Also, the Wald test is typically based on the asymptotic normality of the maximum likelihood (ML) estimation, and the Wald statistic is tested using the asymptotic chi-square distribution. However, in small samples, the asymptotic assumptions may not work well. The penalized maximum likelihood (PML) estimation removes the first-order finite sample bias from the ML estimation, and the bootstrap method constructs the empirical sampling distribution. This study compares the performances of the LR procedures based on the LRT, Wald test, penalized likelihood ratio test (PLRT), and bootstrap likelihood ratio test (BLRT) in terms of the statistical power and type I error for testing uniform and non-uniform DIF. The result of the simulation study shows that the LRT with the asymptotic chi-square distribution works well even in small samples.
27

NADARAJAH, SARALEES, and SAMUEL KOTZ. "THE MAXIMUM FAILURE TIME DISTRIBUTION." International Journal of Structural Stability and Dynamics 09, no. 02 (June 2009): 369–81. http://dx.doi.org/10.1142/s0219455409003077.

Abstract:
For systems with parallel components, the variable of primary importance is the maximum of the failure times of the different components. In this paper, we study the exact probability distribution of the maximum failure time. Explicit expressions are derived for the cumulative distribution function, probability density function, hazard rate function, moment-generating function, nth moment, variance, skewness, kurtosis, mean deviation, Shannon entropy, and the order statistics. Estimation procedures are derived by the methods of moments and maximum likelihood. We expect that these results could be useful for performance assessment of parallel systems.
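The basic fact behind the entry above: for a parallel system with independent components (independence assumed here for the sketch), the system lifetime is the maximum of the component lifetimes, so its CDF is the product of the component CDFs. Hypothetical exponential components are used below.

```python
import numpy as np
from scipy import stats

comps = [stats.expon(scale=2.0), stats.expon(scale=3.0)]   # hypothetical components
t = 4.0
cdf_max = float(np.prod([c.cdf(t) for c in comps]))        # P(max lifetime <= t)

# Sanity check against simulation of the maximum failure time
rng = np.random.default_rng(0)
sim = np.maximum(rng.exponential(2.0, 200_000),
                 rng.exponential(3.0, 200_000))
frac = float(np.mean(sim <= t))
```

Differentiating this product CDF gives the density, hazard rate, and moments studied in the paper.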
28

Jedynak, Bruno M., and Sanjeev Khudanpur. "Maximum Likelihood Set for Estimating a Probability Mass Function." Neural Computation 17, no. 7 (July 1, 2005): 1508–30. http://dx.doi.org/10.1162/0899766053723078.

Abstract:
We propose a new method for estimating the probability mass function (pmf) of a discrete and finite random variable from a small sample. We focus on the observed counts (the number of times each value appears in the sample) and define the maximum likelihood set (MLS) as the set of pmfs that put more mass on the observed counts than on any other set of counts possible for the same sample size. We characterize the MLS in detail in this article. We show that the MLS is a diamond-shaped subset of the probability simplex [0, 1]^k bounded by at most k × (k − 1) hyperplanes, where k is the number of possible values of the random variable. The MLS always contains the empirical distribution, as well as a family of Bayesian estimators based on a Dirichlet prior, particularly the well-known Laplace estimator. We propose to select from the MLS the pmf that is closest to a fixed pmf that encodes prior knowledge. When using the Kullback-Leibler distance for this selection, the optimization problem comprises finding the minimum of a convex function over a domain defined by linear inequalities, for which standard numerical procedures are available. We apply this estimate to language modeling using Zipf's law to encode prior knowledge and show that this method permits obtaining state-of-the-art results while being conceptually simpler than most competing methods.
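Two of the estimators the abstract above places inside the MLS, the empirical distribution and the add-one Laplace estimator, can be computed directly from observed counts (toy counts below, not data from the paper):

```python
import numpy as np

counts = np.array([5, 0, 3, 2])      # toy observed counts over k = 4 values
n, k = counts.sum(), counts.size

empirical = counts / n                # empirical pmf: zero mass on unseen values
laplace = (counts + 1) / (n + k)      # Laplace (add-one) estimator, a
                                      # Dirichlet(1, ..., 1) posterior mean
```

The Laplace estimator never assigns zero probability to an unseen value, which is why smoothing of this kind matters for small samples and language modeling.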
29

Koppelman, Frank S., and Laurie A. Garrow. "Efficiently Estimating Nested Logit Models with Choice-Based Samples." Transportation Research Record: Journal of the Transportation Research Board 1921, no. 1 (January 2005): 63–69. http://dx.doi.org/10.1177/0361198105192100108.

Abstract:
Choice-based samples oversample infrequently chosen alternatives to obtain an effective representation of the behavior of people who select these alternatives. However, the use of choice-based samples requires recognition of the sampling process in formulating the estimation procedure. In general, this procedure can be accomplished by applying weights to the observed choices in the estimation process. Unfortunately, the use of such weighted estimation procedures for choice models does not yield efficient estimators. However, for the special case of the multinomial logit model with a full set of alternative-specific constants, the standard maximum likelihood estimator, which is efficient, can be used with adjustment of the alternative-specific constants. The same maximum likelihood estimator can also be used, with adjustment, to estimate nested logit models with choice-based samples. The proof of this property is qualitatively described, and examples demonstrate how to apply the adjustment procedure.
30

Kliche, Donna V., Paul L. Smith, and Roger W. Johnson. "L-Moment Estimators as Applied to Gamma Drop Size Distributions." Journal of Applied Meteorology and Climatology 47, no. 12 (December 1, 2008): 3117–30. http://dx.doi.org/10.1175/2008jamc1936.1.

Abstract:
The traditional approach with experimental raindrop size data has been to use the method of moments in the fitting procedure to estimate the parameters for the raindrop size distribution function. However, the moment method is known to be biased and can have substantial errors. Therefore, the L-moment method, which is widely used by hydrologists, was investigated as an alternative. The L-moment method was applied, along with the moment and maximum likelihood methods, to samples taken from simulated gamma raindrop populations. A comparison of the bias and the errors involved in the L-moments, moments, and maximum likelihood procedures shows that, with samples covering the full range of drop sizes, L-moments and maximum likelihood outperform the method of moments. For small sample sizes the moment method gives a large bias and large error, while the L-moment method gives results close to the true population values, outperforming even the maximum likelihood results. Because the goal of this work is to understand the properties of the various fitting procedures, the investigation was expanded to include the effects of the absence of small drops in the samples (typical disdrometer minimum size thresholds are 0.3-0.5 mm). The results show that missing small drops (due to the instrumental constraint) can result in a large bias in the case of the L-moment and maximum likelihood fitting methods; this bias does not decrease much with increasing sample size. Because the very small drops have a negligible contribution to moments of order 2 or higher, the bias in the moment methods seems to be about the same as in the case of full samples. However, when moments of order less than 2 are needed (as in the case of modelers using moments 0 and 3), the moment method gives much larger bias. Therefore a modification of these methods is needed to handle the truncated-data situation.
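The first two sample L-moments used in comparisons like the one above can be computed from probability-weighted moments. This minimal sketch checks the estimator on uniform(0, 1) data, where the population values are l1 = 1/2 and l2 = 1/6.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    # b1 = (1/n) * sum_i ((i-1)/(n-1)) * x_(i), with x_(i) the order statistics
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    return b0, 2.0 * b1 - b0              # l1 = b0, l2 = 2*b1 - b0

rng = np.random.default_rng(0)
l1, l2 = sample_l_moments(rng.random(20_000))
```

Because L-moments are linear in the order statistics, they are less sensitive to large drops than ordinary higher-order moments, which is the robustness property the abstract exploits.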
31

Yu, Jianqi. "Inferences on A Normal Mean with an Auxiliary Variable." Journal of Statistics: Advances in Theory and Applications 25, no. 2 (July 10, 2021): 51–59. http://dx.doi.org/10.18642/jsata_7100122198.

Abstract:
Inferential procedures for a normal mean with an auxiliary variable are developed. First, the maximum likelihood estimate of the mean and its distribution are derived. Second, an F statistic based on the maximum likelihood estimate is proposed, and the corresponding hypothesis testing and confidence estimation are outlined. Finally, to illustrate the advantage of using an auxiliary variable, Monte Carlo simulations are performed. The results indicate that using an auxiliary variable can improve the efficiency of inference.
32

Adubisi, Obinna D., Ahmed Abdulkadir, and Chidi E. Adubisi. "A new hybrid form of the skew-t distribution: estimation methods comparison via Monte Carlo simulation and GARCH model application." Data Science in Finance and Economics 2, no. 2 (2022): 54–79. http://dx.doi.org/10.3934/dsfe.2022003.

Abstract:
In this work, estimating the exponentiated half-logistic skew-t (EHLST) model parameters using several classical estimation procedures is considered. The finite-sample performance of the EHLST parameter estimates is examined through extensive Monte Carlo simulations. The performance ordering of the six criteria was based on the partial and overall ranks of the estimation procedures for all parameter combinations. Ordered from best to worst by overall rank, the procedures are: maximum likelihood, maximum product of spacings, Anderson-Darling, Cramér-von Mises, least squares, and weighted least squares estimators, for all parameter combinations. The simulation results confirm the dominance of the maximum likelihood estimation method over the other methods, with the lowest overall rank, but show that the maximum product of spacings is most advantageous when the sample size is 200. The EHLST model's efficacy is further demonstrated through an application to a Nigeria inflation rates dataset using the maximum likelihood and maximum product of spacings estimation procedures. Furthermore, volatility modeling of the Nigeria inflation log-returns using GARCH-type models with the EHLST innovation density, relative to ten commonly used innovation densities, validates the superiority of the GARCH(1,1) and GJR-GARCH(1,1) models with the EHLST innovation density in both in-sample and out-of-sample performance over other models.
APA, Harvard, Vancouver, ISO, and other styles
33

Brubaker, Linda B., Lisa J. Graumlich, and Patricia M. Anderson. "An evaluation of statistical techniques for discriminating Picea glauca from Picea mariana pollen in northern Alaska." Canadian Journal of Botany 65, no. 5 (May 1, 1987): 899–906. http://dx.doi.org/10.1139/b87-124.

Full text
Abstract:
Two statistical procedures, linear discrimination and maximum likelihood discrimination, were evaluated for use in estimating percentages of Picea glauca and P. mariana pollen in lake sediments of northern Alaska. Each procedure is based on a comparison of the dimensions of unclassified Picea pollen grains with those of reference pollen of each species. The reference pollen collection in this study consisted of 675 P. glauca and 600 P. mariana grains, representing 51 trees at 24 sites in Alaska. The reference collection was divided into two approximately equal parts; one was used to derive the discriminant functions of each technique and the other was used to test the accuracy of each function on populations of known composition. The maximum likelihood procedure produced unbiased estimates across the entire range of test populations (P. glauca: P. mariana ratios ranging from 5:95 to 95:5). The linear discrimination estimates were strongly biased in favor of the less frequent species in populations having less than 25% of either species. We applied both techniques to measurements of Picea pollen in surface sediments of 62 lakes in the boreal forest of northern Alaska. Picea pollen percentages estimated by maximum likelihood discrimination more closely revealed the distribution of Picea trees in the landscape.
APA, Harvard, Vancouver, ISO, and other styles
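The maximum likelihood procedure evaluated in this study amounts to estimating a two-component mixture proportion from unclassified grain measurements. A hedged sketch with invented one-dimensional normal reference densities (the real procedure uses multivariate grain dimensions from the reference collection):

```python
import numpy as np

def npdf(x, m, s):
    """Normal density, written out to keep the sketch numpy-only."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def ml_proportion(sizes, mean_a, sd_a, mean_b, sd_b, iters=200):
    """EM estimate of the fraction of species A in an unclassified sample,
    given reference densities fitted separately to each species."""
    fa, fb = npdf(sizes, mean_a, sd_a), npdf(sizes, mean_b, sd_b)
    p = 0.5
    for _ in range(iters):
        resp = p * fa / (p * fa + (1 - p) * fb)   # P(grain is species A | size)
        p = resp.mean()                           # update the mixture proportion
    return p

rng = np.random.default_rng(1)
# simulated sample: 70% species A (larger grains), 30% species B
sizes = np.concatenate([rng.normal(95, 6, 700), rng.normal(80, 6, 300)])
p_hat = ml_proportion(sizes, 95, 6, 80, 6)
print(round(p_hat, 2))                            # close to the true 0.70
```

Unlike assigning each grain to its nearest class (the linear discrimination analogue), the mixture likelihood keeps fractional credit for ambiguous grains, which is consistent with the unbiasedness the authors report for the maximum likelihood procedure on skewed test populations.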
34

Lin, Chien-Tai, Yu Liu, Yun-Wei Li, Zhi-Wei Chen, and Hassan M. Okasha. "Further Properties and Estimations of Exponentiated Generalized Linear Exponential Distribution." Mathematics 9, no. 24 (December 20, 2021): 3328. http://dx.doi.org/10.3390/math9243328.

Full text
Abstract:
The recent exponentiated generalized linear exponential distribution is a generalization of the generalized linear exponential distribution and the exponentiated generalized linear exponential distribution. In this paper, we study some statistical properties of this distribution, such as negative moments, moments of order statistics, mean residual lifetime, and their asymptotic distributions for sample extreme order statistics. Different estimation procedures, including the maximum likelihood estimation, the corrected maximum likelihood estimation, the modified maximum likelihood estimation, the maximum product of spacing estimation, and the least squares estimation, are compared via a Monte Carlo simulation study in terms of their biases, mean squared errors, and rates of obtaining reliable estimates. Recommendations are made from the simulation results, and a numerical example is presented to illustrate its use for modeling rainfall data from Orlando, Florida.
APA, Harvard, Vancouver, ISO, and other styles
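One of the less familiar procedures compared here, maximum product of spacing, picks the parameter that makes the spacings of the fitted CDF at the order statistics as even as possible. A minimal sketch for a plain exponential model (illustrative only, not the exponentiated generalized linear exponential distribution itself; the grid search stands in for a proper optimizer):

```python
import numpy as np

def mps_rate(x, grid=np.linspace(0.01, 5.0, 2000)):
    """Maximum product of spacings estimate of an exponential rate:
    maximize the mean log spacing of the fitted CDF at the order statistics."""
    x = np.sort(x)
    best, best_val = grid[0], -np.inf
    for lam in grid:
        # spacings of the fitted CDF, padded with 0 and 1 at the ends
        cdf = np.concatenate(([0.0], 1 - np.exp(-lam * x), [1.0]))
        val = np.log(np.diff(cdf)).mean()
        if val > best_val:
            best, best_val = lam, val
    return best

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=200)        # true rate = 1.0
est_mps, est_ml = mps_rate(x), 1 / x.mean()     # MPS vs maximum likelihood
print(est_mps, est_ml)                          # both close to 1.0
```

For a regular model like this one the two estimates nearly coincide; the simulation studies in this entry and the previous one probe exactly where they diverge.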
35

Crump, R. E., R. Thompson, C. S. Haley, and J. Mercer. "Individual animal model estimates of genetic correlations between performance test and reproduction traits of landrace pigs performance tested in a commercial nucleus herd." Animal Science 65, no. 2 (October 1997): 291–98. http://dx.doi.org/10.1017/s135772980001660x.

Full text
Abstract:
Bivariate individual animal model estimates of genetic and environmental correlations between reproduction traits (number born alive and average piglet weight) and performance test traits (ultrasonic backfat depth, average daily food intake, average daily gain and food conversion ratio) of Landrace pigs were calculated. The estimates were produced using a derivative-free restricted maximum likelihood algorithm to calculate likelihoods for different combinations of covariance parameters. A quadratic approximation to the likelihood surface was used to estimate the maximum likelihood values with respect to the covariance parameters. For all combinations of performance test traits with reproduction traits the resulting genetic and residual correlation estimates were low, with a maximum absolute value of 0.233 for the genetic correlation between food conversion ratio and number born alive. Standard errors of genetic correlation estimates were between 0.11 and 0.15. The rigorous selection carried out on performance test traits over the years is expected to have had little effect upon reproduction traits. When incorporating reproduction data into best linear unbiased prediction analysis procedures, it should be possible to analyse performance test and reproduction traits from this population separately, thereby making savings on the computer resources and time required for the analysis of all traits.
APA, Harvard, Vancouver, ISO, and other styles
36

HALL, NATHAN, LAINA MERCER, DAISY PHILLIPS, JONATHAN SHAW, and AMY D. ANDERSON. "Maximum likelihood estimation of individual inbreeding coefficients and null allele frequencies." Genetics Research 94, no. 3 (June 2012): 151–61. http://dx.doi.org/10.1017/s0016672312000341.

Full text
Abstract:
In this paper, we developed and compared several expectation–maximization (EM) algorithms to find maximum likelihood estimates of individual inbreeding coefficients using molecular marker information. The first method estimates the inbreeding coefficient for a single individual and assumes that allele frequencies are known without error. The second method jointly estimates inbreeding coefficients and allele frequencies for a set of individuals that have been genotyped at several loci. The third method generalizes the second method to include the case in which null alleles may be present. In particular, it is able to jointly estimate individual inbreeding coefficients and allele frequencies, including the frequencies of null alleles, and accounts for missing data. We compared our methods with several other estimation procedures using simulated data and found that our methods perform well. The maximum likelihood estimators consistently gave among the lowest root-mean-square errors (RMSE) of all the estimators that were compared. Our estimator that accounts for null alleles performed particularly well and was able to tease apart the effects of null alleles, randomly missing genotypes and differing degrees of inbreeding among members of the datasets we analysed. To illustrate the performance of our estimators, we analysed previously published datasets on mice (Mus musculus) and white-tailed deer (Odocoileus virginianus).
APA, Harvard, Vancouver, ISO, and other styles
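The first of the three methods (one individual, allele frequencies known) has a particularly compact EM form: heterozygous loci cannot be identical by descent, and each homozygous locus contributes a posterior probability of being so. A hedged sketch on invented biallelic toy data, not the authors' full implementation:

```python
import numpy as np

def em_inbreeding(genotypes, freqs, iters=100):
    """EM estimate of one individual's inbreeding coefficient F.
    genotypes: (allele1, allele2) pairs, one per locus
    freqs: dicts mapping allele -> known population frequency"""
    F = 0.1
    for _ in range(iters):
        post = []
        for (a, b), pf in zip(genotypes, freqs):
            if a == b:
                p = pf[a]
                # E-step: P(locus identical by descent | homozygous genotype)
                post.append(F * p / (F * p + (1 - F) * p * p))
            else:
                post.append(0.0)          # heterozygotes are never IBD
        F = float(np.mean(post))          # M-step: mean posterior IBD probability
    return F

# 10 loci, both alleles at frequency 0.5: 7 homozygous and 3 heterozygous loci
genos = [("A", "A")] * 7 + [("A", "B")] * 3
frqs = [{"A": 0.5, "B": 0.5}] * 10
F_hat = em_inbreeding(genos, frqs)
print(round(F_hat, 3))                    # → 0.4
```

The fixed point can be checked by hand: with these frequencies the update is F' = 0.7 · 2F/(1+F), whose nonzero solution is F = 0.4.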
37

Brummel, Bradley J., and Fritz Drasgow. "The Effects Of Estimator Choice And Weighting Strategies On Confirmatory Factor Analysis With Stratified Samples." Applied Multivariate Research 13, no. 2 (August 9, 2010): 113. http://dx.doi.org/10.22329/amr.v13i2.3019.

Full text
Abstract:
Survey researchers often design stratified sampling strategies to target specific subpopulations within the larger population. This stratification can influence the population parameter estimates from these samples because they are not simple random samples of the population. There are three typical estimation options that account for the effects of this stratification in latent variable models: unweighted maximum likelihood, weighted maximum likelihood, and pseudo-maximum likelihood estimation. This paper examines the effects of these procedures on parameter estimates, standard errors, and fit statistics in Lisrel 8.7 (Jöreskog & Sörbom, 2004) and Mplus 3.0 (Muthén & Muthén, 2004). Options using several estimation methods are compared to pseudo-maximum likelihood estimation. Results indicated that the choice of estimation technique does not have a substantial effect on confirmatory factor analysis parameter estimates in large samples. However, standard errors of those parameter estimates and RMSEA values for assessing model fit can be substantially affected by estimation technique.
APA, Harvard, Vancouver, ISO, and other styles
38

Angwenyi, David. "Estimation of Spatially Varying Parameters with Application to Hyperbolic SPDES." Journal of Applied Mathematics 2023 (January 10, 2023): 1–20. http://dx.doi.org/10.1155/2023/7909668.

Full text
Abstract:
Parameter estimation is a growing area of interest in statistical signal processing. Some parameters in real-life applications vary in space, as opposed to those that are static. Most common methods for estimating parameters involve solving an optimization problem whose cost function is assembled in various ways, for example in maximum likelihood and maximum a posteriori methods. However, these methods do not have exact solutions for most real-life problems. It is for this reason that Monte Carlo methods are preferred. In this paper, we treat the estimation of parameters which vary with space. We use the Metropolis-Hastings algorithm as a selection criterion for the maximum filter likelihood. Comparisons are made with the joint estimation of both the spatially varying parameters and the state. We illustrate the procedures employed in this paper by means of two hyperbolic SPDEs: the advection equation and the wave equation. The Metropolis-Hastings procedure registers better estimates.
APA, Harvard, Vancouver, ISO, and other styles
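The Metropolis-Hastings selection step described here is generic: propose a perturbed parameter, keep it with probability given by the likelihood ratio. A minimal random-walk sketch for a scalar parameter with a toy Gaussian likelihood, far simpler than the paper's hyperbolic SPDE setting:

```python
import numpy as np

def metropolis_hastings(log_like, theta0, n_steps=5000, step=0.3, seed=0):
    """Random-walk Metropolis-Hastings: accept a proposal with probability
    min(1, likelihood ratio), so the chain visits parameter values in
    proportion to their (flat-prior) posterior."""
    rng = np.random.default_rng(seed)
    theta, ll = theta0, log_like(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # acceptance test
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

rng = np.random.default_rng(3)
data = rng.normal(2.0, 1.0, 100)                  # true parameter = 2.0
log_like = lambda th: -0.5 * np.sum((data - th) ** 2)
chain = metropolis_hastings(log_like, theta0=0.0)
post_mean = chain[1000:].mean()                   # discard burn-in
print(post_mean)                                  # near 2.0
```

In the paper, `log_like` would instead come from a filter likelihood over the SPDE state, but the accept/reject mechanics are unchanged.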
39

CALABRIA, RAFFAELA, and GIANPAOLO PULCINI. "DISCONTINUOUS POINT PROCESSES FOR THE ANALYSIS OF REPAIRABLE UNITS." International Journal of Reliability, Quality and Safety Engineering 06, no. 04 (December 1999): 361–82. http://dx.doi.org/10.1142/s0218539399000334.

Full text
Abstract:
In this paper, a useful family of discontinuous point processes is proposed, which is able to analyze the failure pattern of repairable units subjected to repair actions that depart from the commonly assumed minimal repair policy. Maximum likelihood estimators of the parameters which index two special cases of the above family are derived, and procedures for testing the departure from the minimal repair assumption, based on asymptotic results, are discussed. An exact and unbiased testing procedure to be performed for small or moderate sample sizes is also proposed. Finally, numerical examples are given to illustrate the proposed estimation and testing procedures.
APA, Harvard, Vancouver, ISO, and other styles
40

Bowen, James, and Ming-hong Huang. "A comparison of maximum likelihood with method of moment procedures for separating individual and group effects." Journal of Personality and Social Psychology 58, no. 1 (1990): 90–94. http://dx.doi.org/10.1037/0022-3514.58.1.90.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Johnston, Ron, and Charles Pattie. "Ecological Inference and Entropy-Maximizing: An Alternative Estimation Procedure for Split-Ticket Voting." Political Analysis 8, no. 4 (July 18, 2000): 333–45. http://dx.doi.org/10.1093/oxfordjournals.pan.a029819.

Full text
Abstract:
Publication of King's A Solution to the Ecological Inference Problem has rekindled interest in the estimation of unknown cell values in two- and three-dimensional matrices from knowledge of the marginal sums. This paper outlines an entropy-maximizing (EM) procedure which employs more constraints than King's EI method and produces mathematical rather than statistical procedures: the estimates are maximum-likelihood values. The mathematics are outlined, and the procedure's use illustrated with a study of ticket-splitting at New Zealand's first (1996) general election using the mixed-member proportional representation system, for which official figures provide a check against the EM estimate of the number voting a straight party ticket in each constituency.
APA, Harvard, Vancouver, ISO, and other styles
42

Jandhyala, V. K., P. Liu, S. B. Fotopoulos, and I. B. MacNeill. "Change-Point Analysis of Polar Zone Radiosonde Temperature Data." Journal of Applied Meteorology and Climatology 53, no. 3 (March 2014): 694–714. http://dx.doi.org/10.1175/jamc-d-13-084.1.

Full text
Abstract:
A comprehensive change-point analysis of annual radiosonde temperature measurements collected at the surface, troposphere, tropopause, and lower-stratosphere levels at both the South and North Polar zones is presented. The data from each zone are modeled as a multivariate Gaussian series with a possible change point in both the mean vector as well as the covariance matrix. Prior to carrying out an analysis of the data, a methodology for computing the large sample distribution of the maximum likelihood estimator of the change point is first developed. The Bayesian approach for change-point estimation under conjugate priors is also developed. A simulation study is carried out to compare the maximum likelihood estimator and various Bayesian estimates. Then, a comprehensive change-point analysis under a multivariate framework is carried out on the temperature data for the period 1958–2008. Change detection is based on the likelihood ratio procedure, and change-point estimation is based on the maximum likelihood principle and other Bayesian procedures. The analysis showed strong evidence of change in the correlation between tropopause and lower-stratosphere layers at the South Polar zone subsequent to 1981. The analysis also showed evidence of a cooling effect at the tropopause and lower-stratosphere layers, as well as a warming effect at the surface and troposphere layers at both the South and North Polar zones.
APA, Harvard, Vancouver, ISO, and other styles
43

Burbano Moreno, Álvaro Alexander, Oscar Orlando Melo-Martinez, and Q. Qamarul Islam. "Inference in Multiple Linear Regression Model with Generalized Secant Hyperbolic Distribution Errors." Ingeniería y Ciencia 17, no. 33 (May 12, 2021): 45–70. http://dx.doi.org/10.17230/ingciencia.17.33.3.

Full text
Abstract:
We study the multiple linear regression model under non-normally distributed random errors by considering the family of generalized secant hyperbolic distributions. We derive the estimators of the model parameters by using the modified maximum likelihood methodology and explore the properties of the modified maximum likelihood estimators so obtained. We show that the proposed estimators are more efficient and robust than the commonly used least squares estimators. We also develop the relevant hypothesis testing procedures and compare the performance of such tests vis-a-vis the classical tests that are based upon the least squares approach.
APA, Harvard, Vancouver, ISO, and other styles
44

Burbano Moreno, Álvaro Alexander, Oscar Orlando Melo-Martinez, and M. Qamarul Islam. "Inference in Multiple Linear Regression Model with Generalized Secant Hyperbolic Distribution Errors." Ingeniería y Ciencia 17, no. 33 (May 12, 2021): 45–70. http://dx.doi.org/10.17230/ingciencia.17.33.3.

Full text
Abstract:
We study the multiple linear regression model under non-normally distributed random errors by considering the family of generalized secant hyperbolic distributions. We derive the estimators of the model parameters by using the modified maximum likelihood methodology and explore the properties of the modified maximum likelihood estimators so obtained. We show that the proposed estimators are more efficient and robust than the commonly used least squares estimators. We also develop the relevant hypothesis testing procedures and compare the performance of such tests vis-a-vis the classical tests that are based upon the least squares approach.
APA, Harvard, Vancouver, ISO, and other styles
45

Reyes, Jimmy, Osvaldo Venegas, and Héctor W. Gómez. "Modified Slash Lindley Distribution." Journal of Probability and Statistics 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/6303462.

Full text
Abstract:
In this paper we introduce a new distribution, called the modified slash Lindley distribution, which can be seen as an extension of the Lindley distribution. We show that this new distribution provides more flexibility in terms of kurtosis and skewness than the Lindley distribution. We derive moments and some basic properties for the new distribution. Moment estimators and maximum likelihood estimators are calculated using numerical procedures. We carry out a simulation study for the maximum likelihood estimators. A fit of the proposed model indicates good performance when compared with other less flexible models.
APA, Harvard, Vancouver, ISO, and other styles
46

Wasserman, Larry, Aaditya Ramdas, and Sivaraman Balakrishnan. "Universal inference." Proceedings of the National Academy of Sciences 117, no. 29 (July 6, 2020): 16880–90. http://dx.doi.org/10.1073/pnas.1922664117.

Full text
Abstract:
We propose a general method for constructing confidence sets and hypothesis tests that have finite-sample guarantees without regularity conditions. We refer to such procedures as “universal.” The method is very simple and is based on a modified version of the usual likelihood-ratio statistic that we call “the split likelihood-ratio test” (split LRT) statistic. The (limiting) null distribution of the classical likelihood-ratio statistic is often intractable when used to test composite null hypotheses in irregular statistical models. Our method is especially appealing for statistical inference in these complex setups. The method we suggest works for any parametric model and also for some nonparametric models, as long as computing a maximum-likelihood estimator (MLE) is feasible under the null. Canonical examples arise in mixture modeling and shape-constrained inference, for which constructing tests and confidence sets has been notoriously difficult. We also develop various extensions of our basic methods. We show that in settings when computing the MLE is hard, for the purpose of constructing valid tests and intervals, it is sufficient to upper bound the maximum likelihood. We investigate some conditions under which our methods yield valid inferences under model misspecification. Further, the split LRT can be used with profile likelihoods to deal with nuisance parameters, and it can also be run sequentially to yield anytime-valid P values and confidence sequences. Finally, when combined with the method of sieves, it can be used to perform model selection with nested model classes.
APA, Harvard, Vancouver, ISO, and other styles
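The split LRT is short enough to state in code. The sketch below tests H0: mean = 0 for unit-variance Gaussian data: fit the MLE on one half of the sample, evaluate its likelihood ratio against the null on the other half, and reject when the ratio exceeds 1/alpha. This is a toy instance of the method, chosen because the normal likelihood is explicit:

```python
import numpy as np

def split_lrt_reject(x, alpha=0.05, seed=0):
    """Universal test of H0: mu = 0 for N(mu, 1) data via the split LRT:
    reject when the split likelihood ratio exceeds 1/alpha."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2 :]]
    mu1 = d1.mean()                   # MLE computed on the first half only
    # log likelihood ratio of mu1 versus the null, evaluated on the other half
    log_t = 0.5 * np.sum(d0 ** 2) - 0.5 * np.sum((d0 - mu1) ** 2)
    return log_t > np.log(1 / alpha)  # finite-sample valid: P(reject | H0) <= alpha

rng = np.random.default_rng(4)
res_big = split_lrt_reject(rng.normal(1.0, 1.0, 100))   # large effect
res_null = split_lrt_reject(rng.normal(0.0, 1.0, 100))  # null data
print(res_big, res_null)             # rejects the big effect; rarely rejects null
```

The validity guarantee follows from a simple Markov-inequality argument and needs none of the regularity conditions behind the classical chi-squared calibration, which is the paper's point.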
47

Zitzmann, Steffen, Julia-Kim Walther, Martin Hecht, and Benjamin Nagengast. "What Is the Maximum Likelihood Estimate When the Initial Solution to the Optimization Problem Is Inadmissible? The Case of Negatively Estimated Variances." Psych 4, no. 3 (June 30, 2022): 343–56. http://dx.doi.org/10.3390/psych4030029.

Full text
Abstract:
The default procedures of the software programs Mplus and lavaan tend to yield an inadmissible solution (also called a Heywood case) when the sample is small or the parameter is close to the boundary of the parameter space. In factor models, a negatively estimated variance often occurs. One strategy to deal with this is fixing the variance to zero and then estimating the model again in order to obtain the estimates of the remaining model parameters. In the present article, we present one possible approach for justifying this strategy. Specifically, using a simple one-factor model as an example, we show that the maximum likelihood (ML) estimate of the variance of the latent factor is zero when the initial solution to the optimization problem (i.e., the solution provided by the default procedure) is a negative value. The basis of our argument is the very definition of ML estimation, which requires that the log-likelihood be maximized over the parameter space. We present the results of a small simulation study, which was conducted to evaluate the proposed ML procedure and compare it with Mplus’ default procedure. We found that the proposed ML procedure increased estimation accuracy compared to Mplus’ procedure, rendering the ML procedure an attractive option to deal with inadmissible solutions.
APA, Harvard, Vancouver, ISO, and other styles
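The boundary argument can be reproduced in a toy model with a single nonnegative variance component (an illustrative stand-in, not the authors' one-factor model): when the unconstrained maximizer is negative, the log-likelihood is strictly decreasing on the admissible range, so the ML estimate is exactly zero.

```python
import numpy as np

def loglik(tau2, x, sigma2_known):
    """Gaussian log-likelihood for x_i ~ N(0, sigma2_known + tau2), tau2 >= 0."""
    v = sigma2_known + tau2
    return -0.5 * len(x) * np.log(v) - 0.5 * np.sum(x ** 2) / v

rng = np.random.default_rng(5)
x = rng.normal(size=20)
x = 0.8 * x / np.sqrt(np.mean(x ** 2))   # force mean(x^2) = 0.64 for the demo
sigma2_known = 1.0

# unconstrained maximizer: mean(x^2) - sigma2_known = -0.36, i.e. inadmissible
grid = np.linspace(0.0, 2.0, 201)
vals = [loglik(t, x, sigma2_known) for t in grid]
tau2_ml = grid[int(np.argmax(vals))]
print(tau2_ml)                           # → 0.0, the boundary of the space
```

Maximizing over the admissible parameter space rather than reporting the inadmissible interior value is exactly the definition-of-ML argument the abstract makes.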
48

Masaoud, Elmabrok, and Henrik Stryhn. "A comparison of statistical methods for the analysis of binary repeated measures data with additional hierarchical structure." Journal of Statistical Research 54, no. 1 (August 25, 2020): 1–25. http://dx.doi.org/10.47302/jsr.2020540101.

Full text
Abstract:
The objective of the study was to compare statistical methods for the analysis of binary repeated measures data with an additional hierarchical level. Random effects true models with autocorrelated ($\rho=1$, 0.9 or 0.5) subject random effects were used in this simulation study. The settings of the simulation were chosen to reflect a real veterinary somatic cell count dataset, except that the within--subject time series were balanced, complete and of fixed length (4 or 8 time points). Four fixed effects parameters were studied: binary predictors at the subject and cluster levels, respectively, a linear time effect, and the intercept. The following marginal and random effects statistical procedures were considered: ordinary logistic regression (OLR), alternating logistic regression (ALR), generalized estimating equations (GEE), marginal quasi-likelihood (MQL), penalized quasi-likelihood (PQL), pseudo likelihood (REPL), maximum likelihood (ML) estimation and Bayesian Markov chain Monte Carlo (MCMC). The performance of these estimation procedures was compared specifically for the four fixed parameters as well as variance and correlation parameters. The findings of this study indicate that in data generated by random intercept models ($\rho=1$), the ML and MCMC procedures performed well and had fairly similar estimation errors. The PQL regression estimates were attenuated while the variance estimates were less accurate than ML and MCMC, but the direction of the bias depended on whether binomial or extra-binomial dispersion was assumed. In datasets with autocorrelation ($\rho<1$), random effects estimates procedures gave downwards biased estimates, while marginal estimates were little affected by the presence of autocorrelation. The results also indicate that in addition to ALR, a GEE procedure that accounts for clustering at the highest hierarchical level is sufficient.
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, J. E., and S. D. Fassois. "Suboptimum Maximum Likelihood Estimation of Structural Parameters from Multiple-Excitation Vibration Data." Journal of Vibration and Acoustics 114, no. 2 (April 1, 1992): 260–71. http://dx.doi.org/10.1115/1.2930256.

Full text
Abstract:
In this paper an effective stochastic and multiple-excitation single-response approach to structural dynamics identification is introduced. The proposed approach accounts for many previously unaccounted for aspects of the problem, as it is based on: A proper, special-form, scalar ARMAX-type representation of the structural and noise dynamics; a new Suboptimum Maximum Likelihood (SML) discrete estimation algorithm (Fassois and Lee, 1990); systematic and efficient modeling strategy and model validation procedures; as well as accurate modal parameter extraction that is compatible with the employed model structure and excitation signal forms. In addition to its comprehensiveness, the proposed approach overcomes the well-known limitations of deterministic time-domain methods in dealing with noise-corrupted data records, while also circumventing some of the major difficulties of existing stochastic schemes by featuring guaranteed algorithmic stability, elimination of wrong convergence problems, very modest computational complexity, and minimal operator intervention. The effectiveness of the approach is verified through numerical simulations with noise-corrupted vibration data, and structural systems characterized by well-separated and closely-spaced vibrational modes. Comparisons with the classical Frequency Domain Method (FDM) are also made, and the approach’s advantages over deterministic methods are demonstrated through comparisons with the Eigensystem Realization Algorithm (ERA). Experimental results, where the proposed approach is used for the modal analysis of a flexible beam from laboratory data, are also presented.
APA, Harvard, Vancouver, ISO, and other styles
50

Taroni, Matteo. "Back to the future: old methods for new estimation and test of the Gutenberg–Richter b-value for catalogues with variable completeness." Geophysical Journal International 224, no. 1 (September 25, 2020): 337–39. http://dx.doi.org/10.1093/gji/ggaa464.

Full text
Abstract:
In this short paper we show how to use the classical maximum likelihood estimation procedure for the b-value of the Gutenberg–Richter law for catalogues with different levels of completeness. With a simple correction, that is, subtracting the relative completeness level from each magnitude, it becomes possible to use the classical approach. Moreover, this correction allows the testing procedures, initially designed for catalogues with a single level of completeness, to be adopted for catalogues with different levels of completeness as well.
APA, Harvard, Vancouver, ISO, and other styles
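The correction is one line of arithmetic on top of the classical Aki-style estimator b = log10(e) / mean(M - Mc). A hedged sketch on a simulated catalogue with two completeness periods (synthetic data; real applications typically also correct for magnitude binning):

```python
import numpy as np

def b_value(mags, completeness):
    """Aki-style maximum likelihood b-value, after subtracting each event's
    own completeness level from its magnitude as described above."""
    excess = np.asarray(mags) - np.asarray(completeness)
    return np.log10(np.e) / excess.mean()

rng = np.random.default_rng(6)
beta = 1.0 * np.log(10)                          # true b-value of 1.0
# two network periods: completeness Mc = 3.0, then an improved Mc = 2.0
m_old = 3.0 + rng.exponential(1 / beta, 500)
m_new = 2.0 + rng.exponential(1 / beta, 500)
mags = np.concatenate([m_old, m_new])
mc = np.concatenate([np.full(500, 3.0), np.full(500, 2.0)])
b_hat = b_value(mags, mc)
print(round(b_hat, 2))                           # close to the true value 1.0
```

Pooling the raw magnitudes with a single completeness cutoff would either discard the older, less complete data or bias the estimate; subtracting each event's own completeness level recovers the classical estimator's form.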