Academic literature on the topic 'Maximum likelihood procedures'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Maximum likelihood procedures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Maximum likelihood procedures"

1. Green, David M. "Maximum‐likelihood procedures and the inattentive observer." Journal of the Acoustical Society of America 97, no. 6 (June 1995): 3749–60. http://dx.doi.org/10.1121/1.412390.

2. Goutsias, John, and Jerry M. Mendel. "Maximum‐likelihood deconvolution: An optimization theory perspective." GEOPHYSICS 51, no. 6 (June 1986): 1206–20. http://dx.doi.org/10.1190/1.1442175.

Abstract:
A large number of deconvolution procedures have appeared in the literature during the last three decades, including a number of maximum‐likelihood deconvolution (MLD) procedures. The major advantages of the MLD procedures are (1) no assumption is required about the phase of the wavelet (most of the classical deconvolution techniques assume a minimum‐phase wavelet, an assumption that may not be appropriate for many data sets); (2) MLD procedures can resolve closely spaced events (i.e., they are high‐resolution techniques); and (3) they can efficiently handle modeling and measurement errors, as well as backscatter effects (i.e., reflections from small features). A comparative study of six different MLD procedures for estimating the input of a linear, time‐invariant system from measurements, which have been corrupted by additive noise, was made by using a common framework developed from fundamental optimization theory arguments. To date, only the Kormylo and the Chi‐t algorithms can be recommended.
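
The Kormylo and Chi algorithms referenced above are beyond a short snippet, but the core ML idea is easy to illustrate: with a known wavelet and Gaussian noise, maximizing the likelihood of the observed trace over the reflectivity reduces to a least-squares inversion. The sketch below uses made-up signal values, and the small ridge term is a numerical-stability assumption, not part of the cited procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 120, 30                      # trace length, wavelet length (illustrative)
w = np.exp(-0.5 * ((np.arange(m) - m / 2) / 3.0) ** 2)        # toy wavelet
r_true = np.zeros(n)
r_true[[20, 25, 70]] = [1.0, -0.8, 0.6]                       # sparse reflectivity

# Convolution matrix W such that y = W @ r_true + noise
W = np.zeros((n + m - 1, n))
for j in range(n):
    W[j:j + m, j] = w
y = W @ r_true + 0.05 * rng.standard_normal(n + m - 1)

# Gaussian ML estimate of r; the ridge term lam stabilizes the inversion
lam = 1e-2
r_ml = np.linalg.solve(W.T @ W + lam * np.eye(n), W.T @ y)
print("recovered spike locations:", np.argsort(-np.abs(r_ml))[:3])
```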

3. Kosfeld, R. "Efficient iteration procedures for maximum likelihood factor analysis." Statistische Hefte 28, no. 1 (December 1987): 301–15. http://dx.doi.org/10.1007/bf02932610.

4. Shore, Haim. "Response modeling methodology (RMM)—maximum likelihood estimation procedures." Computational Statistics & Data Analysis 49, no. 4 (June 2005): 1148–72. http://dx.doi.org/10.1016/j.csda.2004.07.006.

5. De Leeuw, Jan, and Norman Verhelst. "Maximum Likelihood Estimation in Generalized Rasch Models." Journal of Educational Statistics 11, no. 3 (September 1986): 183–96. http://dx.doi.org/10.3102/10769986011003183.

Abstract:
We review various models and techniques that have been proposed for item analysis according to the ideas of Rasch. A general model is proposed that unifies them, and maximum likelihood procedures are discussed for this general model. We show that unconditional maximum likelihood estimation in the functional Rasch model, as proposed by Wright and Haberman, is an important special case. Conditional maximum likelihood estimation, as proposed by Rasch and Andersen, is another important special case. Both procedures are related to marginal maximum likelihood estimation in the structural Rasch model, which has been studied by Sanathanan, Andersen, Tjur, Thissen, and others. Our theoretical results lead to suggestions for alternative computational algorithms.
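
As a concrete companion to the unconditional (joint) maximum likelihood estimation discussed above, this sketch simulates Rasch-model responses and maximizes the joint log-likelihood over person and item parameters. All values, and the mean-zero identifiability constraint on item difficulties, are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
V, I = 200, 10                                # persons, items (illustrative)
theta_true = rng.normal(0, 1, V)              # person abilities
b_true = np.linspace(-1.5, 1.5, I)            # item difficulties
P = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = rng.binomial(1, P)                        # observed 0/1 response matrix

def neg_loglik(params):
    theta, b = params[:V], params[V:]
    b = b - b.mean()                          # identifiability: mean difficulty 0
    eta = theta[:, None] - b[None, :]         # Rasch logit theta_v - b_i
    return -(X * eta - np.log1p(np.exp(eta))).sum()

res = minimize(neg_loglik, np.zeros(V + I), method="L-BFGS-B")
b_hat = res.x[V:] - res.x[V:].mean()
print("estimated item difficulties:", np.round(b_hat, 2))
```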

6. Yang, Miin-Shen. "On a class of fuzzy classification maximum likelihood procedures." Fuzzy Sets and Systems 57, no. 3 (August 1993): 365–75. http://dx.doi.org/10.1016/0165-0114(93)90030-l.

7. Johnson, Roger W., and Donna V. Kliche. "Large Sample Comparison of Parameter Estimates in Gamma Raindrop Distributions." Atmosphere 11, no. 4 (March 29, 2020): 333. http://dx.doi.org/10.3390/atmos11040333.

Abstract:
Raindrop size distributions have been characterized through the gamma family. Over the years, quite a few estimates of these gamma parameters have been proposed. The natural question for the practitioner, then, is what estimation procedure should be used. We provide guidance in answering this question when a large sample size (>2000 drops) of accurately measured drops is available. Seven estimation procedures from the literature: five method of moments procedures, maximum likelihood, and a pseudo maximum likelihood procedure, were examined. We show that the two maximum likelihood procedures provide the best precision (lowest variance) in estimating the gamma parameters. Method of moments procedures involving higher-order moments, on the other hand, give rise to poor precision (high variance) in estimating these parameters. A technique called the delta method assisted in our comparison of these various estimation procedures.
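
The comparison the paper describes can be reproduced in miniature: fit simulated gamma "drop" samples by a low-order method of moments and by maximum likelihood, then compare the spread of the shape estimates. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shape_true, scale_true = 3.0, 0.5
mom_shapes, ml_shapes = [], []
for _ in range(100):
    d = rng.gamma(shape_true, scale_true, size=2000)   # large sample, as in the paper
    mom_shapes.append(d.mean() ** 2 / d.var())         # method-of-moments shape
    a_ml, _, _ = stats.gamma.fit(d, floc=0)            # ML fit, location fixed at 0
    ml_shapes.append(a_ml)

print("MoM shape variance:", np.var(mom_shapes))
print("ML  shape variance:", np.var(ml_shapes))        # typically smaller
```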

8. Rukhin, A. L., and J. Shi. "Recursive procedures for multiple decisions: finite time memory and stepwise maximum likelihood procedure." Statistical Papers 36, no. 1 (December 1995): 155–62. http://dx.doi.org/10.1007/bf02926028.

9. Holly, Alberto, and Jan R. Magnus. "A Note on Instrumental Variables and Maximum Likelihood Estimation Procedures." Annales d'Économie et de Statistique, no. 10 (1988): 121. http://dx.doi.org/10.2307/20075698.

10. Jain, Ram B., Samuel P. Caudill, Richard Y. Wang, and Elizabeth Monsell. "Evaluation of Maximum Likelihood Procedures To Estimate Left Censored Observations." Analytical Chemistry 80, no. 4 (February 2008): 1124–32. http://dx.doi.org/10.1021/ac0711788.

Dissertations / Theses on the topic "Maximum likelihood procedures"

1. Ehlers, Rene. "Maximum likelihood estimation procedures for categorical data." Pretoria: [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-07222005-124541.

2. Hill, Terry. "Metrics and Test Procedures for Data Quality Estimation in the Aeronautical Telemetry Channel." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596445.

Abstract:
There is great potential in using Best Source Selectors (BSS) to improve link availability in aeronautical telemetry applications. While the general notion that diverse data sources can be used to construct a consolidated stream of "better" data is well founded, there is no standardized means of determining the quality of the data streams being merged together. Absent this uniform quality data, the BSS has no analytically sound way of knowing which streams are better, or best. This problem is further exacerbated when one imagines that multiple vendors are developing data quality estimation schemes, with no standard definition of how to measure data quality. In this paper, we present measured performance for a specific Data Quality Metric (DQM) implementation, demonstrating that the signals present in the demodulator can be used to quickly and accurately measure the data quality, and we propose test methods for calibrating DQM over a wide variety of channel impairments. We also propose an efficient means of encapsulating this DQM information with the data, to simplify processing by the BSS. This work leads toward a potential standardization that would allow data quality estimators and best source selectors from multiple vendors to interoperate.

3. Ainkaran, Ponnuthurai. "Analysis of Some Linear and Nonlinear Time Series Models." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/582.

Abstract:
This thesis considers some linear and nonlinear time series models. In the linear case, the analysis of a large number of short time series generated by a first-order autoregressive type model is considered. Conditional and exact maximum likelihood procedures are developed to estimate the parameters, and simulation results comparing the bias and mean square errors of the parameter estimates are presented. In Chapter 3, five important nonlinear models are considered and their time series properties are discussed. The estimating function approach for nonlinear models is developed in detail in Chapter 4, with examples added to illustrate the theory. A simulation study is carried out to examine the finite sample behavior of the proposed estimates based on estimating functions.
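
The distinction the abstract draws between conditional and exact maximum likelihood for a short AR(1) series can be made concrete. A minimal sketch under an assumed unit innovation variance (values illustrative only):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
phi_true, n = 0.6, 15                        # deliberately short series
e = rng.standard_normal(n)
y = np.zeros(n)
y[0] = e[0] / np.sqrt(1 - phi_true**2)       # stationary start
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + e[t]

def cond_nll(phi):                           # conditional on y[0], sigma^2 = 1
    r = y[1:] - phi * y[:-1]
    return 0.5 * np.sum(r**2)

def exact_nll(phi):                          # adds N(0, 1/(1-phi^2)) term for y[0]
    return cond_nll(phi) - 0.5 * np.log(1 - phi**2) + 0.5 * (1 - phi**2) * y[0]**2

for f, name in [(cond_nll, "conditional"), (exact_nll, "exact")]:
    opt = minimize_scalar(f, bounds=(-0.99, 0.99), method="bounded")
    print(name, "ML estimate of phi:", round(opt.x, 3))
```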

4. Lou, Jianying. "Diagnostics after a Signal from Control Charts in a Normal Process." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28957.

Abstract:
Control charts are fundamental SPC tools for process monitoring. When a control chart or combination of charts signals, knowing the change point, which distributional parameter changed, and/or the change size helps to identify the cause of the change, remove it from the process, or correctly and immediately adjust the process back into control. In this study, we propose using maximum likelihood (ML) estimation of the current process parameters and their ML confidence intervals after a signal to identify and estimate the changed parameters. The performance of this ML diagnostic procedure is evaluated for several different charts and chart combinations across different sample sizes, and compared to traditional approaches to diagnostics. Neither the ML nor the traditional estimators perform well for all patterns of shifts, but the ML estimator has the best overall performance. The ML confidence interval diagnostics are overall better at determining which parameter has shifted than the traditional diagnostics based on which chart signals. The performance of the generalized likelihood ratio (GLR) chart in shift detection and in ML diagnostics is comparable to the best EWMA chart combination. Since ML diagnostics follow naturally from a GLR chart, unlike the traditional control charts, studies of the GLR chart during process monitoring can be further deepened in future work.
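
A minimal sketch of the ML change-point idea described above, for a shift in the mean of a normal process with known variance; the shift size and segment lengths are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 40),   # in-control segment
                    rng.normal(1.5, 1.0, 10)])  # shifted segment before the signal
n = len(x)

# For each candidate change point tau, the profile log-likelihood (sigma known)
# is maximized by the two segment means; pick the tau with the highest likelihood.
best_tau, best_ll = None, -np.inf
for tau in range(1, n - 1):
    m1, m2 = x[:tau].mean(), x[tau:].mean()
    ll = -0.5 * (np.sum((x[:tau] - m1) ** 2) + np.sum((x[tau:] - m2) ** 2))
    if ll > best_ll:
        best_tau, best_ll = tau, ll

print("ML change-point estimate:", best_tau, "(true value 40)")
```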

5. Leroy, Fanny. "Etude des délais de survenue des effets indésirables médicamenteux à partir des cas notifiés en pharmacovigilance : Problème de l'estimation d'une distribution en présence de données tronquées à droite." PhD thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01011262.

Abstract:
This thesis deals with parametric maximum likelihood estimation for right-truncated survival data, when the truncation times are considered deterministic. It was motivated by the problem of modeling the times to onset of adverse drug reactions from pharmacovigilance databases made up of notified cases. The exponential, Weibull and log-logistic distributions were explored. Sometimes the right-truncated nature of the data is ignored and a naive estimator is used in place of the relevant one. A first simulation study showed that, although both estimators - the naive one and the one accounting for right truncation - can be positively biased, the bias of the truncation-based estimator is much smaller than that of the naive estimator, and the same holds for the mean squared error. Moreover, the bias and mean squared error of the truncation-based estimator decrease markedly as the sample size increases, which is not the case for the naive estimator. The asymptotic properties of the parametric maximum likelihood estimator were studied. Under certain sufficient conditions, this estimator is consistent and asymptotically normal, and the asymptotic covariance matrix was detailed. When the time to onset is modeled by the exponential distribution, a condition for the existence of the maximum likelihood estimate, ensuring these sufficient conditions, was obtained. For the other two distributions, a condition for the existence of the maximum likelihood estimate was conjectured. From the asymptotic properties of this parametric estimator, Wald-type and profile-likelihood confidence intervals were computed. A second simulation study showed that the coverage of the Wald-type confidence intervals could fall well below the nominal level because of the bias of the estimator of the distribution parameter, a departure from normality, and bias in the estimator of the asymptotic variance. In those cases, the coverage of the profile-likelihood intervals is better. Some goodness-of-fit procedures adapted to right-truncated data are presented, including graphical procedures and goodness-of-fit tests, which make it possible to check the fit of the data to the candidate models. Finally, a real dataset of 64 cases of lymphoma following anti-TNF-α treatment, drawn from the French pharmacovigilance database, was analyzed, illustrating the value of the methods developed. Although this work was carried out in the pharmacovigilance setting, the theoretical developments and simulation results can be used for any retrospective analysis performed from a case registry in which data on a time to onset are also right-truncated.
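
For the exponential case studied in the thesis, the contrast between the naive estimator and the truncation-based ML estimator can be sketched directly: the likelihood contribution of each notified case is the density divided by the probability of falling below its deterministic truncation limit. Values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
lam_true = 0.5
T = rng.uniform(1, 8, size=5000)                 # deterministic truncation limits
t = rng.exponential(1 / lam_true, size=5000)     # latent onset delays
obs, Tobs = t[t <= T], T[t <= T]                 # only truncated-in cases are seen

def nll(lam):  # -sum log[ lam*exp(-lam*t) / (1 - exp(-lam*T)) ]
    return -(np.log(lam) - lam * obs - np.log1p(-np.exp(-lam * Tobs))).sum()

naive = 1 / obs.mean()                           # ignores truncation: biased upward
mle = minimize_scalar(nll, bounds=(1e-6, 10), method="bounded").x
print("naive:", round(naive, 3), " truncation-based ML:", round(mle, 3))
```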

6. Li, Jian. "Effects of Full Information Maximum Likelihood, Expectation Maximization, Multiple Imputation, and Similar Response Pattern Imputation on Structural Equation Modeling with Incomplete and Multivariate Nonnormal Data." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1281387395.

7. Fischer, Manfred M. "Learning in neural spatial interaction models: A statistical perspective." Springer, 2002. http://epub.wu.ac.at/5503/1/neural.pdf.

Abstract:
In this paper we view learning as an unconstrained non-linear minimization problem in which the objective function is defined by the negative log-likelihood function and the search space by the parameter space of an origin-constrained product unit neural spatial interaction model. We consider Alopex-based global search, as opposed to local search based upon backpropagation of gradient descents, each in combination with the bootstrapping pairs approach, to solve the maximum likelihood learning problem. Interregional telecommunication traffic flow data from Austria are used as a test bed for comparing the performance of the two learning procedures. The study illustrates the superiority of Alopex-based global search, measured in terms of Kullback and Leibler's information criterion.
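
The Alopex-style correlation-based search compared above can be sketched in a few lines. This is a simplified single-temperature variant applied to a toy objective, not the paper's origin-constrained neural model or its exact annealing schedule:

```python
import numpy as np

rng = np.random.default_rng(6)

def loss(w):
    # Stand-in objective, e.g. a negative log-likelihood surface (assumption).
    return np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2)

w = rng.normal(0.0, 1.0, 3)
delta = 0.05                                   # fixed step size
E = loss(w)
dw = rng.choice([-delta, delta], size=3)       # initial random directions
for _ in range(3000):
    w_new = w + dw
    E_new = loss(w_new)
    C = dw * (E_new - E)                       # step/error correlations
    T = max(np.mean(np.abs(C)), 1e-12)         # crude annealing temperature
    p_plus = 1.0 / (1.0 + np.exp(C / T))       # P(next step is +delta)
    dw = np.where(rng.random(3) < p_plus, delta, -delta)
    w, E = w_new, E_new

print("Alopex estimate:", np.round(w, 2), "(target [1, -2, 0.5], within ~delta)")
```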

8. Glórias, Ludgero Miguel Carraça. "Estimating a knowledge production function and knowledge spillovers: A new two-step estimation procedure of a Spatial Autoregressive Poisson Model." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20711.

Abstract:
Several econometric studies seek to explain the determinants of knowledge production using as dependent variable the number of patents in a region. Some capture the effects of knowledge spillovers through linear models with a spatial autoregressive term. However, no study has been found that estimates such effects while also considering the discrete nature of the dependent variable: a count variable. This essay aims to fill this gap by proposing a new two-step maximum likelihood estimator for a Spatial Autoregressive Poisson model. The properties of this estimator are evaluated in a set of Monte Carlo experiments. The simulation results suggest that it presents lower bias and lower RMSE than the alternative estimators considered, showing worse results only when the spatial dependence is close to unity. An empirical example, using the new estimator and a set of alternative estimators, is carried out, in which the creation of knowledge in 234 NUTS II regions from 24 European countries is analyzed. The results show that there is strong spatial dependence in the creation of innovation. It is also concluded that the socio-economic environment is essential for knowledge formation and that, unlike public R&D institutions, private companies are efficient in producing innovation. Notably, regions with less capacity to transform R&D expenses into new patents have greater capacity for absorption and segregation of knowledge, which shows that neighboring regions less efficient in the production of knowledge tend to create strong relations with each other, taking advantage of the knowledge-sharing process.

9. Festucci, Ana Claudia. "Eliminação de parâmetros perturbadores na estimação de tamanhos populacionais." Universidade Federal de São Carlos, 2010. https://repositorio.ufscar.br/handle/ufscar/4540.

Abstract:
In this study, we used the capture-recapture procedure to estimate the size of a closed population. We analysed three different statistical models. For each of these models we determined - through several methods of eliminating nuisance parameters - the likelihood function and the profile, conditional, uniform integrated, Jeffreys integrated and generalized integrated likelihood functions of the population size, except for the last model, where we determined a function analogous to the conditional likelihood function, called the integrated restricted likelihood function. In each instance we determined the respective maximum likelihood estimates, the empirical confidence intervals and the empirical mean squared errors of the estimates for the population size, and we studied, using simulated data, the performance of the models.
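
A minimal sketch of closed-population maximum likelihood in the simplest two-sample capture-recapture setting (counts are illustrative; the dissertation's three models and nuisance-parameter eliminations go well beyond this):

```python
import numpy as np
from scipy.special import gammaln

# n1 animals marked on the first occasion; n2 caught on the second, m of them
# marked. The hypergeometric likelihood for m is maximized over the unknown N.
n1, n2, m = 120, 150, 30

def log_lik(N):
    # Terms of log C(N-n1, n2-m) - log C(N, n2) that depend on N
    return (gammaln(N - n1 + 1) - gammaln(N - n1 - (n2 - m) + 1)
            + gammaln(N - n2 + 1) - gammaln(N + 1))

Ns = np.arange(n1 + n2 - m, 5000).astype(float)   # admissible population sizes
N_hat = int(Ns[np.argmax(log_lik(Ns))])
print("ML estimate of N:", N_hat, " Lincoln-Petersen:", round(n1 * n2 / m))
```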

Books on the topic "Maximum likelihood procedures"

1. Holly, Alberto. A note on instrumental variables and maximum likelihood estimation procedures. London: London School of Economics, 1986.

2. Abdel-Fattah, Abdel-Fattah A. Accuracy of item response theory parameter estimates using maximum likelihood and Bayesian procedures as implemented in LOGIST and BILOG. 1990.

3. McCleary, Richard, David McDowall, and Bradley J. Bartos. Noise Modeling. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190661557.003.0003.

Abstract:
Chapter 3 introduces the Box-Jenkins AutoRegressive Integrated Moving Average (ARIMA) noise modeling strategy. The strategy begins with a test of the Normality assumption using a Kolmogorov-Smirnov (KS) statistic; non-Normal time series are transformed with a Box-Cox procedure. A tentative ARIMA noise model is then identified from a sample AutoCorrelation Function (ACF). If the sample ACF identifies a nonstationary model, the time series is differenced. Integer orders p and q of the underlying autoregressive and moving average structures are then identified from the ACF and partial autocorrelation function (PACF). Parameters of the tentative ARIMA noise model are estimated with maximum likelihood methods. If the estimates lie within the stationary-invertible bounds and are statistically significant, the residuals of the tentative model are diagnosed to determine whether they are different from white noise. If the tentative model's residuals satisfy this assumption, the statistically adequate model is accepted. Otherwise, the identification-estimation-diagnosis ARIMA noise model-building strategy continues iteratively until it yields a statistically adequate model. The Box-Jenkins ARIMA noise modeling strategy is illustrated with detailed analyses of twelve time series. The example analyses include non-Normal time series, stationary white noise, autoregressive and moving average time series, nonstationary time series, and seasonal time series. The time series models built in Chapter 3 are re-introduced in later chapters. Chapter 3 concludes with a discussion and demonstration of auxiliary modeling procedures that are not part of the Box-Jenkins strategy, including the use of information criteria to compare models, unit root tests of stationarity, and co-integration.
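
The identification-estimation-diagnosis cycle the chapter describes can be compressed into a few lines with statsmodels; the simulated ARMA(1,1) data and the chosen order are illustrative assumptions:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(7)
n, phi, theta = 300, 0.7, 0.4
e = rng.standard_normal(n + 1)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]   # ARMA(1,1) data

res = ARIMA(y, order=(1, 0, 1)).fit()          # parameters estimated by ML
print(res.params)                              # within stationary-invertible bounds?
lb = acorr_ljungbox(res.resid, lags=[10])      # are residuals white noise?
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))
```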

4. George C. Marshall Space Flight Center, ed. A recommended procedure for estimating the cosmic-ray spectral parameter of a simple power law with applications to detector design. MSFC, Ala.: National Aeronautics and Space Administration, Marshall Space Flight Center, 2001.

Book chapters on the topic "Maximum likelihood procedures"

1. Sclove, Stanley L. "Determining an Adequate Number of Principal Components." In Principal Component Analysis [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.104534.

Abstract:
The problem of choosing the number of PCs to retain is analyzed in the context of model selection, using so-called model selection criteria (MSCs). For a prespecified set of models, indexed by k = 1, 2, ..., K, these criteria take the form MSC_k = -2 LL_k + a_n m_k, where, for model k, LL_k is the maximum log likelihood, m_k is the number of independent parameters, and the constant a_n is a_n = ln(n) for BIC and a_n = 2 for AIC. The maximum log likelihood LL_k is achieved by using the maximum likelihood estimates (MLEs) of the parameters. In Gaussian models, LL_k involves the logarithm of the mean squared error (MSE). The main contribution of this chapter is to show how best to use BIC to choose the number of PCs, and to compare these results to ad hoc procedures that have been used. The findings are stated as they apply to the eigenvalues of the correlation matrix, which are between 0 and p and have an average of 1. When considering an additional PC k+1, with AIC, inclusion of the additional PC is justified if the corresponding eigenvalue λ_{k+1} is greater than exp(-2/n). For BIC, inclusion of an additional PC is justified if λ_{k+1} > n^{1/n}, which tends to 1 for large n. This is therefore in approximate agreement with the average-eigenvalue rule for correlation matrices, which states that one should retain dimensions with eigenvalues larger than 1.
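
The eigenvalue thresholds stated in the abstract translate directly into code. The sketch below builds a toy correlated dataset and counts the components retained under the AIC rule (λ > exp(-2/n)), the BIC rule (λ > n^(1/n)), and the classical average-eigenvalue rule:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
z = rng.standard_normal((n, 2))
X = np.column_stack([
    z[:, 0], z[:, 0] + 0.3 * rng.standard_normal(n),   # correlated pair 1
    z[:, 1], z[:, 1] + 0.3 * rng.standard_normal(n),   # correlated pair 2
    rng.standard_normal(n), rng.standard_normal(n),    # pure noise variables
])
eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

aic_k = int(np.sum(eig > np.exp(-2 / n)))       # AIC rule from the abstract
bic_k = int(np.sum(eig > n ** (1 / n)))         # BIC rule from the abstract
avg_k = int(np.sum(eig > 1.0))                  # average-eigenvalue rule
print("eigenvalues:", np.round(eig, 2))
print("retained - AIC:", aic_k, " BIC:", bic_k, " avg-eigenvalue:", avg_k)
```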

2. Takeuchi, Kei, and Masafumi Akahira. "Second Order Asymptotic Efficiency in Terms of Asymptotic Variances of the Sequential Maximum Likelihood Estimation Procedures." In Joint Statistical Papers of Akahira and Takeuchi, 319–24. World Scientific, 2003. http://dx.doi.org/10.1142/9789812791221_0027.

3. Dragan, Dejan, Tomaž Kramberger, and Darja Topolšek. "Efficiency and Travel Agencies." In Sustainable Logistics and Strategic Transportation Planning, 211–35. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0001-8.ch010.

Abstract:
The chapter deals with Bayesian structural equation modeling (SEM) for the case of travel agencies. The focus of the research is the investigation of possible impacts of external integration with transport suppliers on the efficiency of travel agencies. In order to calculate the efficiency, data envelopment analysis was used. For the construction of the measurement part of the model, confirmatory factor analysis (CFA) was conducted, while the structural part was developed by means of the SEM procedure. When conducting the CFA and SEM procedures, the Bayesian estimation method was employed. Its performance was also compared with the maximum likelihood method, and the fit indices of both methods were inspected. The results show that the derived model fits the real data well. The study confirms certain positive effects of external integration on efficiency. This finding could represent an important guideline for the managers of travel agencies.

4. Veech, Joseph A. "Statistical Methods for Analyzing Species–Habitat Associations." In Habitat Ecology and Analysis, 135–74. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198829287.003.0009.

Abstract:
Six methods for statistically identifying and quantifying meaningful species–habitat associations are discussed. These are (1) comparison among group means (e.g., ANOVA), (2) multiple linear regression, (3) multiple logistic regression, (4) classification and regression trees, (5) multivariate techniques (principal components analysis and discriminant function analysis), and (6) occupancy modeling. Each method is described in statistical detail and associated terminology is explained. The example of habitat associations of a hypothetical beetle species (from Chapter 8) is used to further explain some of the methods. Assumptions, strengths, and weaknesses of each method are discussed. Related statistical constructs and procedures such as the variance–covariance matrix, negative binomial distribution, generalized linear modeling, maximum likelihood estimation, and Bayes' theorem are also explained. Some historical context is provided for some of the methods.
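
As an illustration of method (3) above, a multiple logistic regression of species presence/absence on habitat covariates can be fitted by maximum likelihood; the covariates and coefficient values below are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 300
canopy = rng.uniform(0, 100, n)                 # hypothetical % canopy cover
soil_ph = rng.normal(6.5, 0.8, n)               # hypothetical soil pH
logit_p = -4.0 + 0.05 * canopy + 0.3 * (soil_ph - 6.5)
present = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # presence/absence

X = sm.add_constant(np.column_stack([canopy, soil_ph]))
fit = sm.Logit(present, X).fit(disp=0)          # coefficients found by ML
print(fit.params)                               # const, canopy, soil pH effects
print(fit.conf_int())                           # Wald confidence intervals
```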

5. Bhattacharya, Devanjan, and Santo Banerjee. "A Comparative Study of Four Different Satellite Image Classification Techniques for Geospatial Management." In Chaos and Complexity Theory for Management, 283–95. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2509-9.ch014.

Abstract:
Satellite imagery interpretation has become the technology of choice for a host of developmental, scientific, and administrative management work. The huge repository of geospatial data and information available as satellite imagery datasets from platforms such as Google Earth needs to be classified and understood for natural resources management, urban planning, and sustainable development. The classification and analysis procedures involve algorithms such as the maximum likelihood classifier, Isodata, fuzzy-logic classifiers, and artificial-neural-network-based classifiers. Among these classifiers, the optimum has to be selected for classifications that involve multiple features and classes. Herein lies the motivation for the present research: facilitating the selection of one among the many algorithms available to a decision maker or manager. The aforementioned techniques are applied for classification, and the respective accuracies in the classes of forestry, rock, water, built-up area, and dry river bed have been tabulated and verified against ground truth. The comparison is based on the time and space complexity of the algorithms, along with their accuracy. It is found that traditional methods like the maximum likelihood classifier (MLC) and Isodata offer better time and space consumption than the more recent, more adaptable algorithms such as fuzzy and ANN classifiers, but the latter group excels in accuracy. The study suggests points and cases for ranking the techniques as best, second best, and so on, so that each technique can be optimally utilised for a given geospatial dataset based on its contents.
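
A minimal sketch of the maximum likelihood classifier (MLC) compared in the chapter, using made-up three-band pixel values for three cover classes:

```python
import numpy as np

rng = np.random.default_rng(10)
# Hypothetical training pixels (100 samples x 3 spectral bands per class)
classes = {"water":   rng.normal([30, 20, 10], 3, (100, 3)),
           "forest":  rng.normal([40, 60, 35], 5, (100, 3)),
           "builtup": rng.normal([80, 75, 70], 8, (100, 3))}

params = {}
for name, pix in classes.items():              # training: per-class mean/covariance
    mu = pix.mean(axis=0)
    cov = np.cov(pix, rowvar=False)
    params[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])

def classify(x):
    def ll(name):                              # Gaussian log-likelihood of pixel x
        mu, cov_inv, logdet = params[name]
        d = x - mu
        return -0.5 * (logdet + d @ cov_inv @ d)
    return max(params, key=ll)                 # class with highest likelihood

print(classify(np.array([42.0, 58.0, 33.0])))  # -> 'forest'
```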

6. Akahira, Masafumi, and Kei Takeuchi. "Third Order Asymptotic Efficiency of the Sequential Maximum Likelihood Estimation Procedure." In Joint Statistical Papers of Akahira and Takeuchi, 370–96. World Scientific, 2003. http://dx.doi.org/10.1142/9789812791221_0030.

Conference papers on the topic "Maximum likelihood procedures"

1. Krishna, Shreya, N. K. Goyal, and Shikhar Dhar. "Software Reliability Growth Modeling: Comparison between Non-Linear-Regression Estimation and Maximum-Likelihood-Estimator Procedures." In International Powertrains, Fuels & Lubricants Meeting. Warrendale, PA: SAE International, 2018. http://dx.doi.org/10.4271/2018-01-1772.

2. Rana, Rakesh, Miroslaw Staron, Christian Berger, Jorgen Hansson, Martin Nilsson, and Fredrik Torner. "Comparing between Maximum Likelihood Estimator and Non-linear Regression Estimation Procedures for NHPP Software Reliability Growth Modelling." In 2013 Joint Conference of the 23rd International Workshop on Software Measurement and the 8th International Conference on Software Process and Product Measurement (IWSM-MENSURA). IEEE, 2013. http://dx.doi.org/10.1109/iwsm-mensura.2013.37.

3. Infantes, María, Javier Naranjo-Pérez, Andrés Sáez, and Javier F. Jiménez-Alonso. "Determining the Best Pareto-Solution in a Multi-Objective Approach for Model Updating." In IABSE Symposium, Guimarães 2019: Towards a Resilient Built Environment - Risk and Asset Management. Zurich: International Association for Bridge and Structural Engineering (IABSE), 2019. http://dx.doi.org/10.2749/guimaraes.2019.0523.

Abstract:
Using a multi-objective optimization algorithm avoids the use of weighting factors to balance the different residuals in a finite element model updating procedure under the maximum likelihood method. With this approach the fittest model is not unique; instead, a set of solutions forming a curve, the so-called Pareto optimal front, is obtained. Within this paper, first a review of the state of the art on the criteria used to determine the most adequate model among all the solutions of the Pareto front is presented. Subsequently, a case study of a real footbridge is considered. A finite element model of the footbridge is updated based on its experimental modal parameters. The Non-Dominated Sorting Genetic Algorithm is used to obtain the Pareto front. Since all the solutions in the Pareto front are non-dominated, the selection of the best candidate requires a reasonable criterion. Herein, different procedures to select the best updated model are discussed.

4. Ji, Shi, Xiaomei Xu, Jianbo Zhang, and Zhifang Wang. "Study on Technical Improvements for Human System Interface in the Main Control Room of LingAo 3 and 4." In 18th International Conference on Nuclear Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/icone18-29011.

Abstract:
This paper focuses on the technical improvements to the Human System Interface (HSI) designed to manage normal and accident situations of nuclear power plants (NPPs) on the LingAo 3 & 4 project under construction in the south of China. Regarding the operation principles of the NPPs, two major improvements on the LingAo 3 & 4 NPPs are introduced: implementation of a Digital Control System (DCS) combined with a computerized Human System Interface, backed up with a conventional control means, the Back-up Panel (BUP). Technical improvements for HSIs such as State Oriented Procedures (SOP), the Large Display Panel (LDP), computer-based procedures, an advanced alarm system, and the Safety Parameter Display System (SPDS) are detailed in this paper. Finally, within the scope of these studies, human factors are taken into account in order to reduce the likelihood of human error, to gain maximum benefit from the implemented technology, and to increase performance.

5. Feng, Zhili, and Gery Wilkowski. "Repair Welding of Irradiated Materials: Modeling of Helium Bubble Distributions for Determining Crack-Free Welding Procedures." In 10th International Conference on Nuclear Engineering. ASMEDC, 2002. http://dx.doi.org/10.1115/icone10-22660.

Abstract:
In this paper, a computational simulation study is presented on the prediction of helium bubble evolution during repair welding of irradiated 304 stainless steel. Realistic spatial and temporal temperature and stress evolutions during welding were obtained from simulation of the repair welding operation using the finite element method. The helium bubble evolution model by Kawano et al. was adopted as a user subroutine in the finite element model to predict the spatial distribution and temporal evolution of the helium bubble size and density in the heat-affected zone (HAZ) of partial penetration welds. Comparisons with experimental results available in the open literature show that the predicted average helium bubble sizes were consistent with those observed experimentally under similar conditions. In addition, the computer simulation revealed strong spatial variation of helium bubble size due to the differences in combined thermal and stress conditions experienced at different locations in the HAZ. The predicted location of the maximum helium bubble size agreed well with the observed helium-induced cracking site. The effect of welding heat input and welding speed was also investigated numerically. The modeling approach adopted in this study could be used as a cost-effective tool to quantitatively correlate the welding conditions, radiation damage, and the likelihood of cracking under the influence of welding-induced thermal and stress cycles. The model will also be useful in studying the degradation of properties from helium bubble formation in post-welded structures, even when a successful weld is made.

6. Tang, Meng, Yimin Liu, and Louis J. Durlofsky. "History Matching Complex 3D Systems Using Deep-Learning-Based Surrogate Flow Modeling and CNN-PCA Geological Parameterization." In SPE Reservoir Simulation Conference. SPE, 2021. http://dx.doi.org/10.2118/203924-ms.

Abstract:
The use of deep-learning-based procedures for geological parameterization and fast surrogate flow modeling may enable the application of rigorous history matching algorithms that were previously considered impractical. In this study we incorporate such methods – specifically a geological parameterization that entails principal component analysis combined with a convolutional neural network (CNN-PCA) and a flow surrogate that uses a recurrent residual-U-Net procedure – into three different history matching procedures. The history matching algorithms considered are rejection sampling (RS), randomized maximum likelihood with mesh adaptive direct search optimization (MADS-RML), and ensemble smoother with multiple data assimilation (ES-MDA). RS is a rigorous sampler used here to provide reference results (though it can become intractable in cases with large amounts of observed data). History matching is performed for a channelized geomodel defined on a grid containing 128,000 cells. The CNN-PCA representation of geological realizations involves 400 parameters, and these are the variables determined through history matching. All flow evaluations (after training) are performed using the recurrent residual-U-Net surrogate model. Two cases, involving different amounts of historical data, are considered. We show that both MADS-RML and ES-MDA provide history matching results in general agreement with those from RS. MADS-RML is more accurate, however, and ES-MDA can display significant error in some quantities. ES-MDA requires many fewer function evaluations than MADS-RML, so there is a tradeoff between computational demand and accuracy. The framework developed here could be used to evaluate and tune a range of history matching procedures beyond those considered in this work.
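
Of the three algorithms above, ES-MDA is the most compact to sketch. The update below is the standard ES-MDA step applied to a toy linear forward model; the paper's deep-learning surrogate and CNN-PCA parameterization are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(11)
Ne, Nm, Nd = 100, 5, 8                         # ensemble size, params, data points
G = rng.normal(size=(Nd, Nm))                  # toy linear forward model d = G m
m_true = rng.normal(size=Nm)
Ce = 0.05**2 * np.eye(Nd)                      # observation-error covariance
d_obs = G @ m_true + 0.05 * rng.standard_normal(Nd)

M = rng.normal(size=(Nm, Ne))                  # prior ensemble of parameter vectors
alphas = [4.0, 4.0, 4.0, 4.0]                  # inflation factors; sum of 1/alpha = 1
for a in alphas:
    D = G @ M                                  # predicted data for each member
    dm = M - M.mean(1, keepdims=True)
    dd = D - D.mean(1, keepdims=True)
    Cmd = dm @ dd.T / (Ne - 1)                 # cross-covariance param/data
    Cdd = dd @ dd.T / (Ne - 1)                 # data covariance
    K = Cmd @ np.linalg.inv(Cdd + a * Ce)      # ES-MDA gain
    noise = np.sqrt(a) * rng.multivariate_normal(np.zeros(Nd), Ce, Ne).T
    M = M + K @ (d_obs[:, None] + noise - D)   # assimilate perturbed observations

print("posterior mean:", np.round(M.mean(1), 2), " truth:", np.round(m_true, 2))
```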

7. Mathew, Jino, Richard J. Moat, and P. John Bouchard. "Prediction of Pipe Girth Weld Residual Stress Profiles Using Artificial Neural Networks." In ASME 2013 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/pvp2013-97491.

Abstract:
Defect assessment procedures such as BS7910, R6 and API 579-1 provide bounding profiles that can be used to characterise the residual stresses present in a weld. The bounding profiles in BS7910 and the R6 procedure have been based on examination of residual stress measurements and expert judgment. This approach suffers from the drawback that the upper bound curve can increase as more measurements and data scatter are obtained. The consequence of this is that structural integrity assessments of defective plant can be over-conservative by a large margin, and may lead to unnecessary and costly repair or inspection. This paper presents work exploring the use of artificial neural networks to predict more realistic residual stress profiles in austenitic stainless steel pipe girth welds. The network is trained using a set of baseline experimental stress data and then tested using previously unseen data. The committee of networks has been implemented and optimised using a maximum likelihood based algorithm. The neural network is validated by comparing predictions with neutron diffraction residual stress measurements of two girth welds 25 mm thick produced using different welding parameters. The capability of the neural network approach is discussed by comparing predictions with stress profiles recommended in the R6 procedure and its potential for development evaluated.

8. Araújo, Ofélia Q. F., José L. de Medeiros, and Hellen P. M. Carvalho. "A Maxwell-Stefan Approach for Predicting Mixing Effects in Contiguous Batches of Multi-Product Pipelines." In 2002 4th International Pipeline Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/ipc2002-27179.

Abstract:
Transient phenomena in the liquid batch interfacial zone are addressed based on: (i) a reliable compositional description; and (ii) mass transfer modeling. In phase (i), compositional models are proposed for transported fluids, with parameters estimated by maximum likelihood procedures to match known characterizing data, like distillation curve, density, viscosity and heteroatom weight fractions. In phase (ii), the transient mixing problem is posed on the continua of axial position in the duct, and described by Maxwell-Stefan formalism for multicomponent mass transfer between two contiguous semi-infinite fluids. Turbulent effects are considered on calculation of mass transfer coefficients, and rigorous thermodynamic driving forces are accounted for through component fugacities predicted by equations of state. The model was solved by the finite element method and the resulting set of ordinary differential equations in time was numerically integrated enabling determination of the mixing zone, calculation of property profiles and monitoring loss of product specifications.

9. Huyse, Luc, and Katherine A. Brown. "Why Reliability-Based Approaches Need More Realistic Corrosion Growth Modeling." In 2012 9th International Pipeline Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ipc2012-90319.

Abstract:
Deterministic design and assessment methods are by definition conservative. Although no claim is made regarding the actual reliability level that is achieved using deterministic, i.e. safety-factor based, approaches, the safety factors have been selected such that sufficient conservatism is generally maintained. Reliability-based methods aim to explicitly quantify the aggregated conservatism in terms of failure probabilities or risk. Accurate reliability estimates are not possible without accurate computational prediction models for the limit states and adequate quantification of the uncertainties in both the inputs and model assumptions. Although this statement may seem self-evident, it should not be made light-heartedly. In fact, just about every analysis step in the pipeline integrity assessment procedures contains an inherent, yet unquantified, level of conservatism. One such example is the application of a "maximum" corrosion growth rate that is constant in time. A reliability-based framework holds the promise of a more consistent and explicitly quantified safety level. This ultimately leads to higher safety efficiency for an entire pipeline system than under safety-factor based approaches. An accurate prediction of the true likelihood of an adverse event is impossible without significant research into determining and understanding the usually conservative bias in the engineering models currently employed in the pipeline integrity state of the practice. This paper highlights some of the challenges associated with porting the "maximum corrosion rate" approach used in deterministic assessments to a reliability-based paradigm. Issues associated with both defect and segment matching approaches are highlighted, and a better corrosion growth model form is proposed.

10. Larbi, N., and J. Lardies. "Modal Parameters Estimation and Model Order Selection of a Structure Excited by a Random Force." In ASME 1999 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/detc99/vib-8095.

Abstract:
A multivariate maximum likelihood procedure for the estimation of modal parameters is presented. The vibrating system is excited by a random force, and sensor outputs only are used to estimate the natural frequencies and damping coefficients of the system. The method works in the time domain, and a vector autoregressive moving average (VARMA) process is used. The information about modal parameters is contained in the multivariate AR part, which is estimated using an iterative maximum likelihood algorithm. This algorithm uses a score technique and output data only. The order of the AR part is obtained via the Minimum Description Length criterion associated with an instrumental variable procedure. Experimental results show the effectiveness of the method for model order selection and modal parameter estimation.
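
The time-domain idea of reading modal parameters off an estimated AR part can be sketched for a single-mode system; the scalar AR(2) stand-in below is an illustrative simplification of the multivariate VARMA procedure:

```python
import numpy as np

rng = np.random.default_rng(12)
dt, f_n, zeta = 0.01, 5.0, 0.02                 # sampling step, 5 Hz mode, 2% damping
w_n = 2 * np.pi * f_n

# Simulate a randomly excited SDOF oscillator via its exact discrete AR(2) form
n = 5000
y = np.zeros(n)
a1 = 2 * np.exp(-zeta * w_n * dt) * np.cos(w_n * np.sqrt(1 - zeta**2) * dt)
a2 = -np.exp(-2 * zeta * w_n * dt)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.standard_normal()

# Least-squares AR(2) fit (Gaussian ML conditional on initial values)
Y = np.column_stack([y[1:-1], y[:-2]])
phi = np.linalg.lstsq(Y, y[2:], rcond=None)[0]
roots = np.roots([1, -phi[0], -phi[1]])         # discrete-time poles
s = np.log(roots[0]) / dt                       # continuous-time pole
print("frequency (Hz):", abs(s) / (2 * np.pi), " damping ratio:", -s.real / abs(s))
```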

Reports on the topic "Maximum likelihood procedures"

1. Anderson, T. W., and R. P. Mentz. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model. Fort Belvoir, VA: Defense Technical Information Center, November 1990. http://dx.doi.org/10.21236/ada230812.
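
A sketch of exact Gaussian maximum likelihood for the MA(1) model y_t = e_t + theta*e_{t-1}, evaluating the likelihood through the full Toeplitz covariance of the sample; the generic numerical optimizer below stands in for the report's specialized iterative procedures:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize

rng = np.random.default_rng(13)
theta_true, sigma_true, n = 0.5, 1.0, 200
e = sigma_true * rng.standard_normal(n + 1)
y = e[1:] + theta_true * e[:-1]                 # MA(1) sample

def nll(params):
    theta, log_sigma2 = params
    s2 = np.exp(log_sigma2)                     # keeps sigma^2 positive
    row = np.zeros(n)
    row[0], row[1] = s2 * (1 + theta**2), s2 * theta   # MA(1) autocovariances
    Sigma = toeplitz(row)                       # exact covariance of y
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (logdet + y @ np.linalg.solve(Sigma, y))

res = minimize(nll, x0=[0.1, 0.0], method="Nelder-Mead")
print("theta_hat:", round(res.x[0], 3), " sigma2_hat:", round(np.exp(res.x[1]), 3))
```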