Dissertations / Theses on the topic 'Likelihood ratio distributions'

To see the other types of publications on this topic, follow the link: Likelihood ratio distributions.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 33 dissertations / theses for your research on the topic 'Likelihood ratio distributions.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Lynch, O'Neil. "Mixture distributions with application to microarray data analysis." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/2075.

Full text
Abstract:
The main goal in analyzing microarray data is to determine the genes that are differentially expressed across two types of tissue samples or samples obtained under two experimental conditions. In this dissertation we propose two methods to determine differentially expressed genes. For the penalized normal mixture model (PNMM), we penalized both the variance and the mixing proportion parameters simultaneously: the variance parameter so that the log-likelihood is bounded, and the mixing proportion parameter so that its estimates do not fall on the boundary of the parameter space. The null distribution of the likelihood ratio test statistic (LRTS) was simulated so that a hypothesis test for the number of components of the penalized normal mixture model could be performed. In addition to simulating the null distribution of the LRTS for the penalized normal mixture model, we showed that the maximum likelihood estimates are asymptotically normal, a necessary first step towards proving the asymptotic null distribution of the LRTS. This result is a significant contribution to the field of normal mixture models. The modified p-value approach for detecting differentially expressed genes is also discussed in this dissertation. It was implemented so that a hypothesis test for the number of components can be conducted using the modified likelihood ratio test; here we penalized the mixing proportion so that its estimates do not fall on the boundary of the parameter space. The null distribution of the LRTS was simulated so that the number of components of the uniform-beta mixture model could be determined. Finally, both methods, the penalized normal mixture model and the modified p-value approach, were applied to simulated and real data.
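The simulated null distribution of the LRTS described above can be sketched with a parametric bootstrap: generate data under the one-component null, fit one- and two-component normal mixtures, and collect the statistic. This is a hypothetical illustration, not the dissertation's code; it uses a simple EM with a common variance and a crude variance floor standing in for the penalty idea.

```python
import numpy as np

rng = np.random.default_rng(0)
SQRT2PI = np.sqrt(2 * np.pi)

def npdf(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * SQRT2PI)

def loglik1(x):
    # One-component normal log-likelihood evaluated at the MLE
    return np.log(npdf(x, x.mean(), x.std())).sum()

def loglik2(x, iters=100):
    # EM for a two-component normal mixture with a common variance;
    # the variance floor is a crude stand-in for the penalization idea.
    mu1, mu2 = np.quantile(x, [0.25, 0.75])
    s, w = x.std(), 0.5
    for _ in range(iters):
        d1 = w * npdf(x, mu1, s)
        d2 = (1 - w) * npdf(x, mu2, s)
        r = d1 / (d1 + d2)            # E-step: responsibilities
        w = r.mean()                  # M-step updates
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        var = (r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2).mean()
        s = max(np.sqrt(var), 1e-3)   # variance floor keeps the log-lik bounded
    return np.log(w * npdf(x, mu1, s) + (1 - w) * npdf(x, mu2, s)).sum()

def lrts(x):
    return 2.0 * (loglik2(x) - loglik1(x))

# Simulate the null (a single normal component) and collect the LRTS
null_stats = np.array([lrts(rng.normal(size=200)) for _ in range(200)])
crit = np.quantile(null_stats, 0.95)   # simulated 5% critical value
print(round(crit, 2))
```

The simulated critical value replaces the chi-squared quantile, which is not valid here because the mixing proportion sits on the boundary under the null.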
APA, Harvard, Vancouver, ISO, and other styles
2

Liang, Yi. "Likelihood ratio test for the presence of cured individuals : a simulation study /." Internet access available to MUN users only, 2002. http://collections.mun.ca/u?/theses,157472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hornik, Kurt, and Bettina Grün. "On Maximum Likelihood Estimation of the Concentration Parameter of von Mises-Fisher Distributions." WU Vienna University of Economics and Business, 2012. http://epub.wu.ac.at/3669/1/Report120.pdf.

Full text
Abstract:
Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting the ratio R_nu = I_{nu+1} / I_nu of modified Bessel functions. Computational issues when using approximate or iterative methods were discussed in Tanabe et al. (Comput Stat 22(1):145-157, 2007) and Sra (Comput Stat 27(1):177-190, 2012). In this paper we use Amos-type bounds for R_nu to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use these to propose a new approximation for which the error tends to zero when the inverse of R_nu is evaluated at values tending to 1 (from the left). We show that previously introduced rational bounds for R_nu which are invertible using quadratic equations cannot be used to improve these bounds.
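The inversion problem the paper studies can be illustrated with plain numerical root-finding rather than the Amos-type bounds it derives. This is a hypothetical sketch: it solves R_nu(kappa) = rbar with SciPy's Bessel function and Brent's method, using the well-known Banerjee et al. (2005) closed-form approximation only as a point of comparison.

```python
# Sketch (not the paper's method): invert R_nu(kappa) = I_{nu+1}(kappa)/I_nu(kappa)
# to get the ML estimate of the von Mises-Fisher concentration kappa from the
# mean resultant length rbar, for data on the unit sphere in d dimensions.
import numpy as np
from scipy.special import iv
from scipy.optimize import brentq

def bessel_ratio(kappa, nu):
    return iv(nu + 1, kappa) / iv(nu, kappa)

def kappa_mle(rbar, d):
    """Solve R_{d/2 - 1}(kappa) = rbar numerically; also return the
    Banerjee et al. closed-form approximation for comparison."""
    nu = d / 2.0 - 1.0
    approx = rbar * (d - rbar**2) / (1.0 - rbar**2)   # closed-form start
    f = lambda k: bessel_ratio(k, nu) - rbar
    exact = brentq(f, 1e-8, max(10 * approx, 50.0))   # bracketed root-finding
    return exact, approx

kappa, approx = kappa_mle(rbar=0.7, d=3)
print(kappa, approx)
```

For d = 3 the ratio reduces to the Langevin function coth(kappa) - 1/kappa, so the numerical inverse can be checked by hand; the closed-form approximation overshoots slightly, which is the kind of error the paper's sharper bounds quantify.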
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
4

Pescim, Rodrigo Rossetto. "The new class of Kummer beta generalized distributions: theory and applications." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-30012014-112231/.

Full text
Abstract:
In this study, a new class of generalized distributions is developed, based on the Kummer beta distribution (NG; KOTZ, 1995), which contains as particular cases the exponentiated and beta generators of distributions. The main feature of the new family is to provide greater flexibility in the tails of the density function, making it suitable for analyzing data sets with a high degree of asymmetry and kurtosis. Two new distributions belonging to the new class, based on the Birnbaum-Saunders and generalized gamma distributions, are also studied; their main characteristic is a hazard function that can assume different shapes (unimodal, bathtub-shaped, increasing, decreasing). In all studies, general mathematical properties such as ordinary and incomplete moments, the generating function, mean deviations, reliability, entropies, and order statistics and their moments are discussed. Parameter estimation is approached by the method of maximum likelihood and by Bayesian analysis, and the observed information matrix is derived. Likelihood ratio statistics and formal goodness-of-fit tests are also considered to compare all the proposed distributions with some of their sub-models and with non-nested models. The results developed in all studies were applied to six real data sets.
APA, Harvard, Vancouver, ISO, and other styles
5

Dai, Xiaogang. "Score Test and Likelihood Ratio Test for Zero-Inflated Binomial Distribution and Geometric Distribution." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2447.

Full text
Abstract:
The main purpose of this thesis is to compare the performance of the score test and the likelihood ratio test by computing type I and type II errors when the tests are applied to the geometric distribution and the zero-inflated binomial distribution. We first derive the test statistics of the score test and the likelihood ratio test for both distributions. We then use the software package R to perform a simulation study of the behavior of the two tests, writing R code to calculate the two types of error for each distribution and generating many samples to approximate the two error probabilities as the parameter values change. In the first chapter, we discuss the motivation behind the work presented in this thesis and introduce the definitions used throughout. In the second chapter, we derive the test statistics for the likelihood ratio test and the score test for the geometric distribution. For the score test, we consider versions based on both the observed information matrix and the expected information matrix, obtaining the score test statistics z_O and z_I. Chapter 3 discusses the likelihood ratio test and the score test for the zero-inflated binomial distribution. The main parameter of interest is w, so p is a nuisance parameter in this case. We derive the likelihood ratio test statistics and the score test statistics to test w. In both tests, the nuisance parameter p is estimated by its maximum likelihood estimator p̂. We again consider the score test using both the observed and the expected information matrices. Chapter 4 focuses on the score test for the zero-inflated binomial distribution. We generate data following the zero-inflated binomial distribution using R and plot the ratio of the two score test statistics, z_I / z_O, against different values of n_0, the number of zero values in the sample.
In chapter 5, we discuss and compare the score tests based on the two types of information matrices. We perform a simulation study to estimate the two types of error when applying the tests to the geometric distribution and the zero-inflated binomial distribution, plotting the percentage of each error while fixing different parameters, such as the probability p and the number of trials m. Finally, we briefly summarize the results in chapter 6.
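The type I error simulation described above can be sketched for the likelihood ratio test in the geometric case. This is a hypothetical Python analogue of the thesis's R simulations, with arbitrary choices of p0, sample size, and replication count: simulate under H0: p = p0, compute the LRT against the chi-squared critical value, and record the rejection rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p0, n, reps = 0.3, 100, 2000            # illustrative values, not from the thesis
crit = stats.chi2.ppf(0.95, df=1)       # asymptotic 5% critical value

def loglik(x, p):
    # Geometric pmf on {0, 1, ...} (failures before the first success)
    return np.sum(x * np.log(1 - p) + np.log(p))

rej = 0
for _ in range(reps):
    x = rng.geometric(p0, size=n) - 1   # numpy counts trials; shift to failures
    p_hat = 1.0 / (1.0 + x.mean())      # MLE of p for the geometric model
    lrt = 2.0 * (loglik(x, p_hat) - loglik(x, p0))
    rej += lrt > crit

rate = rej / reps
print(rate)   # should be close to the nominal 0.05
```

The score test version would replace `lrt` with the score statistic built from either the observed or the expected information, which is exactly the comparison the thesis carries out.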
APA, Harvard, Vancouver, ISO, and other styles
6

Emberson, E. A. "The asymptotic distribution and robustness of the likelihood ratio and score test statistics." Thesis, University of St Andrews, 1995. http://hdl.handle.net/10023/13738.

Full text
Abstract:
Cordeiro & Ferrari (1991) use the asymptotic expansion of Harris (1985) for the moment generating function of the score statistic to produce a generalization of Bartlett adjustment for application to the score statistic. It is shown here that Harris's expansion is not invariant under reparameterization and an invariant expansion is derived using a method based on the expected likelihood yoke. A necessary and sufficient condition for the existence of a generalized Bartlett adjustment for an arbitrary statistic is given in terms of its moment generating function. Generalized Bartlett adjustments to the likelihood ratio and score test statistics are derived in the case where the interest parameter is one-dimensional under the assumption of a mis-specified model, where the true distribution is not assumed to be that under the null hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
7

Gottfridsson, Anneli. "Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution." Thesis, Linköpings universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69738.

Full text
Abstract:
The focus of this thesis is on the calculation of an empirical null distribution for likelihood ratio tests of either a separable or a double separable covariance matrix structure versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and are compared with the asymptotic χ²-distribution that is commonly used as an approximate distribution. Tests of separable structures are of particular interest when data are collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
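The gap between the empirical null and the asymptotic chi-squared approximation can be illustrated with the simplest structured-versus-unstructured case: testing a diagonal covariance matrix against an unrestricted one. This is a hypothetical sketch, not the thesis's separable-structure test, with arbitrary choices of dimension and sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p, reps = 25, 3, 3000
df = p * (p - 1) // 2   # off-diagonal parameters dropped by the structure

stats_null = np.empty(reps)
for i in range(reps):
    x = rng.normal(size=(n, p))             # null: independent components
    s = np.cov(x, rowvar=False, bias=True)  # MLE of the covariance (divide by n)
    # -2 log Lambda = n * (log det(diag(S)) - log det(S))
    stats_null[i] = n * (np.log(np.diag(s)).sum() - np.linalg.slogdet(s)[1])

emp = np.quantile(stats_null, 0.95)   # empirical 5% critical value
asym = stats.chi2.ppf(0.95, df)       # asymptotic chi2 critical value
print(emp, asym)
```

For a small sample size like n = 25, the empirical quantile typically exceeds the chi-squared quantile noticeably, which is the motivation for computing empirical null distributions (or Bartlett-type corrections) rather than relying on the asymptotic approximation.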
APA, Harvard, Vancouver, ISO, and other styles
8

Ngunkeng, Grace. "Statistical Analysis of Skew Normal Distribution and its Applications." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1370958073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tao, Jinxin. "Comparison Between Confidence Intervals of Multiple Linear Regression Model with or without Constraints." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/404.

Full text
Abstract:
Regression analysis is one of the most widely applied statistical techniques. Statistical inference for a linear regression model with a monotone constraint has been discussed in earlier work. A natural question concerns the difference between the cases with and without the constraint. Although the comparison between confidence intervals of linear regression models with and without the restriction has been considered for one predictor variable, the corresponding discussion for multiple regression is still needed. In this thesis, I discuss the comparison of the confidence intervals between a multiple linear regression model with and without constraints.
APA, Harvard, Vancouver, ISO, and other styles
10

Osaka, Haruki. "Asymptotics of Mixture Model Selection." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27230.

Full text
Abstract:
In this thesis, we consider the likelihood ratio test (LRT) when testing for homogeneity in a three-component normal mixture model. It is well known that the LRT in this setting exhibits non-standard asymptotic behaviour, due to non-identifiability of the model parameters and possible degeneracy of the Fisher information matrix. In fact, Liu and Shao (2004) showed that for the test of homogeneity in a two-component normal mixture model with a single fixed component, the limiting distribution under the null hypothesis is an extreme value Gumbel distribution, rather than the usual chi-squared distribution that the classical Wilks' theorem gives for regular parametric models. We wish to generalise this result to a three-component normal mixture, showing that similar non-standard asymptotics also occur for this model. Our approach follows closely that of Bickel and Chernoff (1993), who studied the relevant asymptotics of the LRT statistic indirectly by first considering a certain Gaussian process associated with the testing problem. The equivalence between the process studied by Bickel and Chernoff (1993) and the LRT was later proved by Liu and Shao (2004). Consequently, they verified that the LRT statistic for this problem diverges to infinity at the rate log log n, a statement first conjectured in Hartigan (1985). In a similar spirit, we consider the limiting distribution of the supremum of a certain quadratic form. More precisely, the quadratic form we consider is the score statistic for the test of homogeneity in the sub-model where the mean parameters are assumed fixed. The supremum of this quadratic form is shown to have a limiting distribution of extreme value type, again with a divergence rate of log log n. Finally, we show that the LRT statistic for the three-component normal mixture model can be uniformly approximated by this quadratic form, thereby proving that the two statistics share the same limiting distribution.
APA, Harvard, Vancouver, ISO, and other styles
11

Chen, Xinyu. "Inference in Constrained Linear Regression." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/405.

Full text
Abstract:
Regression analysis constitutes an important part of statistical inference and has great applications in many areas. In some applications, we strongly believe that the regression function changes monotonically with some or all of the predictor variables in a region of interest. Deriving inference under such constraints is an enormous task. In this work, the restricted prediction interval for the mean of the regression function is constructed when two predictors are present. I use a modified likelihood ratio test (LRT) to construct the prediction intervals.
APA, Harvard, Vancouver, ISO, and other styles
12

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades." Diss., 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Chaudhari, Pragat. "Analytical Methods for the Performance Evaluation of Binary Linear Block Codes." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/904.

Full text
Abstract:
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to help evaluate the performance of binary linear block codes. The model can also be used in the design of communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise (AWGN) channel model, soft-output Log Likelihood Ratio (LLR) values are modeled as Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially at low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability, which translates into fewer computations and therefore less time to obtain performance results. Additional work uses a discrete Fourier transform technique to calculate the weight distribution of a linear code, i.e. the number of codewords with a given number of ones. For codeword lengths of small to moderate size, this method is faster and provides an easily implementable and methodical approach compared with other methods, with the added advantage of being able to methodically calculate the number of codewords of a particular Hamming weight without computing the entire weight distribution of the code.
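The Gaussian LLR model underlying the approach above can be sketched for the uncoded BPSK baseline. For BPSK over AWGN, the channel LLR is L = 2y/σ², which is Gaussian with mean m = 2/σ² and variance 2m (the consistency condition), so the bit error rate is Q(m/√(2m)) = Q(√(2·Eb/N0)). This is a generic illustration of that relationship, not the thesis's coded-system model.

```python
import numpy as np
from scipy.stats import norm

def q(x):
    # Gaussian Q-function: tail probability of the standard normal
    return norm.sf(x)

rng = np.random.default_rng(3)
ebn0_db = 4.0
ebn0 = 10 ** (ebn0_db / 10)
sigma2 = 1.0 / (2 * ebn0)                 # noise variance for unit-energy BPSK

# Monte Carlo: transmit +1, form LLRs, count sign errors
y = 1.0 + rng.normal(scale=np.sqrt(sigma2), size=1_000_000)
llr = 2 * y / sigma2                      # Gaussian with mean m and variance 2m
ber_sim = np.mean(llr < 0)

m = 2 / sigma2
ber_model = q(m / np.sqrt(2 * m))         # equals Q(sqrt(2 * Eb/N0))
print(ber_sim, ber_model)
```

For a coded system the same formula is reused with the (larger) fitted LLR mean, which is where the reduced-noise-power BPSK interpretation and the variance-based coding gain measure come from.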
APA, Harvard, Vancouver, ISO, and other styles
14

Rettiganti, Mallikarjuna Rao. "Statistical Models for Count Data from Multiple Sclerosis Clinical Trials and their Applications." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1291180207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Florez, Guillermo Domingo Martinez. "Extensões do modelo -potência." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07072011-154259/.

Full text
Abstract:
In data analysis where the data present a certain degree of asymmetry, the assumption of normality can be unrealistic, and the application of this model can hide important characteristics of the true model. Situations of this type have given strength to the use of asymmetric models, with special emphasis on the skew-symmetric distributions developed by Azzalini (1985). In this work we present an alternative for data analysis in the presence of significant asymmetry or kurtosis, when compared with the normal distribution, as well as other situations that involve such models. We present and study properties of the alpha-power and log-alpha-power distributions, where we also study the estimation problem, the observed and expected information matrices, and the degree of bias of the estimators using simulation procedures. A more flexible version of the alpha-power model is proposed, followed by an extension to a bimodal version, including symmetric and asymmetric bimodal alpha-power models. Next, the Birnbaum-Saunders distribution is extended using the alpha-power distribution; some properties are studied, estimation approaches are developed, and bias-corrected estimators are proposed. We also develop alpha-power regression models for censored and uncensored data and the log-linear Birnbaum-Saunders regression model, for which model validation techniques are studied. Finally, a multivariate extension of the alpha-power model is proposed and some estimation procedures are investigated. All the situations investigated are illustrated with applications to data sets previously analysed under other distributional assumptions.
APA, Harvard, Vancouver, ISO, and other styles
16

Silva, Michel Ferreira da. "Estimação e teste de hipótese baseados em verossimilhanças perfiladas." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-06122006-162733/.

Full text
Abstract:
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. These are defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983, 1994), as described in Severini (2000a). We present derivations and the main properties of the different adjustments, and we obtain adjustments for likelihood-based inference in the two-parameter exponential family. Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alpha, gamma, L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both uncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alpha, gamma, L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic, for both uncensored and censored data.
APA, Harvard, Vancouver, ISO, and other styles
17

Erguven, Sait. "Path Extraction Of Low Snr Dim Targets From Grayscale 2-d Image Sequences." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607723/index.pdf.

Full text
Abstract:
In this thesis, an algorithm for visually detecting and tracking very low SNR targets, i.e. dim targets, is developed. Image processing of a single frame in time cannot be used for this aim due to the closeness of the intensity spectra of the background and the target. Therefore, change detection of super pixels, groups of pixels that have sufficient statistics for likelihood ratio testing, is proposed. Super pixels that are determined to be transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its motion vector in the next frame by the Label Destruction and Centroids Mapping techniques. Candidate centroids are passed through the Distribution Density Function Maximization and Maximum Histogram Size Filtering methods to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometric centroids obtained in each frame are used as the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of the possible states, i.e. the observed target path centroids, and the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using observation and transition probabilities. The path which yields the maximum metric value at the last frame is chosen as the estimated target path.
APA, Harvard, Vancouver, ISO, and other styles
18

Ureten, Suzan. "Single and Multiple Emitter Localization in Cognitive Radio Networks." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35692.

Full text
Abstract:
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters operating simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first estimates the locations from the generated interference map when no information about the propagation model or its parameters is available. The second approximates the maximum likelihood (ML) estimate of the transmitter locations with a grid search when the model is known and its parameters are available. The third also requires knowledge of the model parameters but generates samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the highly computationally complex grid search. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay-triangulation-based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimator, the rough estimates are used to initialize a more accurate algorithm, such as the MCMC technique, to reduce its complexity.
The complexity issues of ML estimators based on a full grid search are also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculations at each grid location. This motivates our investigation of the sum-of-log-normal approximations studied in the literature, in order to select the approximation best suited to our model assumptions. As a final extension of this work, we propose our own approximation based on fitting a distribution to a set of simulated data, and we compare our approach with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with Fenton-Wilkinson's in many different scenarios.
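The Fenton-Wilkinson moment matching mentioned above is simple enough to sketch directly. This is a generic illustration (not the thesis's implementation): for independent log-normal summands, match the mean and variance of the sum and solve for the parameters of a single approximating log-normal.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Parameters (mu_z, sigma_z) of the log-normal whose first two moments
    match those of a sum of independent LN(mu_i, sigma_i^2) variables."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas**2 / 2)                          # E[X_i]
    variances = (np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2)
    m, v = means.sum(), variances.sum()                          # independence
    sigma_z2 = np.log(1 + v / m**2)                              # match the variance
    mu_z = np.log(m) - sigma_z2 / 2                              # match the mean
    return mu_z, np.sqrt(sigma_z2)

# Quick Monte Carlo check of the moment match (illustrative parameters)
rng = np.random.default_rng(4)
mus, sigmas = [0.0, 0.5, 1.0], [0.5, 0.5, 0.5]
x = sum(rng.lognormal(m, s, size=1_000_000) for m, s in zip(mus, sigmas))
mu_z, sigma_z = fenton_wilkinson(mus, sigmas)
matched_mean = np.exp(mu_z + sigma_z**2 / 2)
print(x.mean(), matched_mean)
```

The first two moments agree by construction; the known weakness of Fenton-Wilkinson is in the lower tail of the distribution, which is one reason an alternative fitted approximation can improve grid search localization accuracy.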
APA, Harvard, Vancouver, ISO, and other styles
19

Munasinghe, Wijith Prasantha. "Cluster-based lack of fit tests for nonlinear regression models." Diss., Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/2366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Yu, Jung-Suk. "Essays on Fine Structure of Asset Returns, Jumps, and Stochastic Volatility." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/431.

Full text
Abstract:
There has been an ongoing debate about the choice of the most suitable model amongst a variety of model specifications and parameterizations. The first dissertation essay investigates whether asymmetric leptokurtic return distributions, such as Hansen's (1994) skewed t-distribution combined with GARCH specifications, can outperform mixed GARCH-jump models such as Maheu and McCurdy's (2004) GARJI model, which incorporates an autoregressive conditional jump intensity parameterization in the discrete-time framework. I find that the more parsimonious GJR-HT model is superior to mixed GARCH-jump models. Likelihood-ratio (LR) tests, information criteria such as AIC, SC, and HQ, and Value-at-Risk (VaR) analysis confirm that GJR-HT is one of the most suitable model specifications, giving both a better fit to the data and parsimony of parameterization. The benefits of estimating GARCH models using asymmetric leptokurtic distributions are more substantial for highly volatile series, such as emerging stock markets, which have a higher degree of non-normality. Furthermore, Hansen's skewed t-distribution also provides an excellent risk management tool, as evidenced by the VaR analysis. The second dissertation essay provides a variety of empirical evidence that stochastic volatility is redundant for S&P 500 index returns when it is modelled together with infinite-activity pure Lévy jump models, and that stochastic volatility is important for reducing pricing errors for S&P 500 index options regardless of the jump specification. This finding is important because recent studies have shown that stochastic volatility in a continuous-time framework provides an excellent fit for financial asset returns when combined with finite-activity Merton-type compound Poisson jump-diffusion models.
The second essay also shows that the stochastic volatility with jumps (SVJ) and extended variance-gamma with stochastic volatility (EVGSV) models perform almost equally well for option pricing, which strongly implies that the type of Lévy jump specification is not an important factor in enhancing model performance once stochastic volatility is incorporated. In the second essay, I compute option prices via an improved Fast Fourier Transform (FFT) algorithm, using characteristic functions to match arbitrary, equally spaced log-strike grids to the moneyness and maturity of actual market option prices.
APA, Harvard, Vancouver, ISO, and other styles
21

Magalhães, Felipe Henrique Alves. "Testes em modelos Weibull na forma estendida de Marshall-Olkin." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18639.

Full text
Abstract:
In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time. The main characteristic of survival data is the presence of censoring, which is a partial observation of the response. In this setting, some models occupy an important position because they properly fit several practical situations; among them we can mention the Weibull model. Marshall-Olkin extended form distributions offer a generalization of basic distributions that enables greater flexibility in fitting lifetime data. This work presents a simulation study comparing the gradient test and the likelihood ratio test using the Weibull distribution in its Marshall-Olkin extended form. The results show only a small advantage for the likelihood ratio test.
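For readers unfamiliar with the Marshall-Olkin extension, the following sketch (an illustration of the standard Marshall-Olkin construction, not code from the dissertation) writes down the extended-Weibull survival function, density, and log-likelihood that such a test comparison builds on; the tilt parameter `alpha` recovers the ordinary Weibull at `alpha = 1`:

```python
import numpy as np

def mo_weibull_sf(t, alpha, scale, shape):
    """Marshall-Olkin extended Weibull survival: alpha*S(t) / (1 - (1-alpha)*S(t))."""
    base_sf = np.exp(-(t / scale) ** shape)          # baseline Weibull survival
    return alpha * base_sf / (1 - (1 - alpha) * base_sf)

def mo_weibull_pdf(t, alpha, scale, shape):
    """Density: alpha * f(t) / (1 - (1-alpha)*S(t))^2, with Weibull baseline f."""
    base_sf = np.exp(-(t / scale) ** shape)
    base_pdf = (shape / scale) * (t / scale) ** (shape - 1) * base_sf
    return alpha * base_pdf / (1 - (1 - alpha) * base_sf) ** 2

def loglik(params, t):
    """Log-likelihood for uncensored lifetimes t (the ingredient of both tests)."""
    alpha, scale, shape = params
    return np.sum(np.log(mo_weibull_pdf(t, alpha, scale, shape)))
```

Both the likelihood ratio and the gradient statistic are then computed from this log-likelihood and its score, with the null hypothesis typically pinning down a subset of the parameters.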
APA, Harvard, Vancouver, ISO, and other styles
22

Gomes, Priscila da Silva. "Distribuição normal assimétrica para dados de expressão gênica." Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/4530.

Full text
Abstract:
Microarray technologies are used to measure the expression levels of a large number of genes, or fragments of genes, simultaneously under different conditions. This technology is useful for determining genes that are responsible for genetic diseases. A common statistical methodology used to determine whether a gene g shows evidence of differential expression is the t-test, which requires the assumption of normality for the data (Saraiva, 2006; Baldi & Long, 2001). However, this assumption sometimes does not agree with the nature of the analyzed data. In this work we use the skew-normal distribution, described formally by Azzalini (1985), which has the normal distribution as a particular case, in order to relax the assumption of normality. Under a frequentist approach, we carry out a simulation study to detect differences between gene expression levels in control and treatment situations through the t-test. Another simulation examines the power of the t-test when an asymmetric model is assumed for the data. We also use the likelihood ratio test to verify the adequacy of an asymmetric model for the data.
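The adequacy check described in this abstract, normality against a skew-normal alternative, can be sketched with SciPy's `skewnorm`. This is an illustration, not the dissertation's code, and the chi-square(1) reference used below is only the textbook large-sample approximation (the Fisher information of the skew-normal is singular at zero skewness, one reason simulation-based null distributions are often preferred):

```python
import numpy as np
from scipy import stats

def skewness_lrt(x):
    """Likelihood ratio test of normal (H0) vs. skew-normal (H1).
    Returns the LR statistic and an approximate chi-square(1) p-value."""
    a, loc1, scale1 = stats.skewnorm.fit(x)                 # H1: skew-normal MLE
    ll1 = stats.skewnorm.logpdf(x, a, loc1, scale1).sum()
    loc0, scale0 = stats.norm.fit(x)                        # H0: normal MLE
    ll0 = stats.norm.logpdf(x, loc0, scale0).sum()
    lr = 2 * (ll1 - ll0)
    return lr, stats.chi2.sf(lr, df=1)

rng = np.random.default_rng(1)
x = stats.skewnorm.rvs(5, size=500, random_state=rng)       # clearly skewed sample
lr, pval = skewness_lrt(x)
```

For a strongly skewed sample like this one, the test should reject normality decisively.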
APA, Harvard, Vancouver, ISO, and other styles
23

Lemonte, Artur Jose. "Estatística gradiente e refinamento de métodos assintóticos no modelo de regressão Birnbaum-Saunders." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-26102010-123617/.

Full text
Abstract:
Rieck & Nedelman (1991) proposed a log-linear regression model based on the Birnbaum-Saunders distribution (Birnbaum & Saunders, 1969a). The model has been widely explored and has proved to be an excellent alternative to other models proposed in the literature, such as the Weibull, gamma, and lognormal regression models. However, to date there has been no study of refinements of the likelihood ratio and score statistics in this class of regression models. One goal of this thesis is therefore to obtain a Bartlett correction factor for the likelihood ratio statistic and a Bartlett-type correction factor for the score statistic in this model. These adjustments improve the approximation of the null distribution of these statistics by the reference chi-square distribution. Additionally, we obtain adjustments for the signed likelihood ratio statistic, which improve its approximation by the standard normal distribution. Recently, a new test statistic was proposed by Terrell (2002), which the author calls the gradient statistic. It was derived from the score statistic and the modified Wald statistic (Hayakawa & Puri, 1985). The combination of those two statistics results in a statistic that is very simple to compute, involving, for example, no matrix operations such as products or inverses. This statistic was recently highlighted by Rao (2005): "The suggestion by Terrell is attractive as it is simple to compute. It would be of interest to investigate the performance of the [gradient] statistic." Following Rao's suggestion, another goal of this thesis is to obtain an asymptotic expansion for the distribution of the gradient statistic under a sequence of Pitman alternatives converging to the null hypothesis at rate n^{-1/2}, using the methodology developed by Peers (1971) and Hayakawa (1975). In particular, we show that, to order n^{-1/2}, the gradient statistic follows a central chi-square distribution under the null hypothesis and a noncentral chi-square distribution under the alternative. We also compare the local power of this test with those of the likelihood ratio, Wald, and score tests. Finally, we apply the asymptotic expansion derived in the thesis to some particular classes of models.
The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio and score tests tend to be liberal when the sample size is small. We derive Bartlett and Bartlett-type correction factors which reduce the size distortion of the tests. Additionally, we also consider modified signed log-likelihood ratio statistics in this class of models. Finally, the asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^{-1/2}, n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property.
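Terrell's gradient statistic is indeed simple to compute. As a toy illustration (an exponential-rate test chosen because all quantities have closed forms, not a model from the thesis), the four classical statistics compared in the thesis can be written as:

```python
import numpy as np

def four_tests_exponential(x, theta0):
    """LR, Wald, score (Rao), and gradient statistics for H0: theta = theta0,
    where x_1..x_n are i.i.d. Exponential with rate theta."""
    n, s = len(x), np.sum(x)
    theta_hat = n / s                                   # MLE of the rate
    ll = lambda th: n * np.log(th) - th * s             # log-likelihood
    score = n / theta0 - s                              # score function U(theta0)
    lr = 2 * (ll(theta_hat) - ll(theta0))               # likelihood ratio
    wald = (theta_hat - theta0) ** 2 * n / theta_hat**2 # uses info at the MLE
    rao = score**2 * theta0**2 / n                      # uses info at theta0
    gradient = score * (theta_hat - theta0)             # Terrell's statistic
    return lr, wald, rao, gradient

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=200)                # true rate = 1
stats_h0 = four_tests_exponential(x, theta0=1.0)        # all modest under H0
stats_h1 = four_tests_exponential(x, theta0=2.0)        # all large under a false null
```

Note that the gradient statistic needs only the score and the MLE, with no information matrix; all four statistics share the same chi-square(1) limit under the null.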
APA, Harvard, Vancouver, ISO, and other styles
24

Ghorbanzadeh, Dariush. "Détection de rupture dans les modèles statistiques." Paris 7, 1992. http://www.theses.fr/1992PA077246.

Full text
Abstract:
This work concerns the study of a class of tests for detecting change-points in statistical models. The class of tests considered is based on the likelihood ratio test. Within the framework of contiguity in the sense of Le Cam, the asymptotic laws of the test statistics are evaluated under the null hypothesis (no change) and under the alternative (change), which allows the critical regions of the tests to be determined asymptotically. Analytic expressions for the asymptotic powers are proposed. Using discriminant analysis techniques, the problem of detecting progression to full-blown AIDS is studied by direct application to data on 450 HIV-positive patients.
APA, Harvard, Vancouver, ISO, and other styles
25

SILVA, Priscila Gonçalves da. "Inferência e diagnóstico em modelos não lineares Log-Gama generalizados." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18637.

Full text
Abstract:
Young and Bakir (1987) proposed the class of generalized log-gamma linear regression models (GLGLM) to analyze survival data. In our work, we extend this class by considering a nonlinear structure for the regression parameters. The new class of models is called generalized log-gamma nonlinear regression models (GLGNLM). We also propose a matrix formula for the second-order bias of the maximum likelihood estimate of the regression parameter vector in the GLGNLM class. We use the results of Cox and Snell (1968) and the bootstrap technique (Efron, 1979) to obtain bias-corrected maximum likelihood estimates. Residuals and diagnostic techniques are proposed for the GLGNLM, such as generalized leverage and local and global influence. A general matrix expression is obtained for the Bartlett correction factor for the likelihood ratio statistic in this class of models, and simulation studies are developed to evaluate and compare numerically the performance of the likelihood ratio tests and their corrected versions with regard to size and power in finite samples. Furthermore, general matrix expressions are obtained for the Bartlett-type correction factors for the score and gradient statistics, and simulation studies are conducted to evaluate the performance of the score and gradient tests and their corrected versions with regard to size and power in finite samples.
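The idea behind a Bartlett correction is to rescale the likelihood ratio statistic so that its null mean matches the chi-square degrees of freedom. The sketch below estimates the factor by Monte Carlo for a toy exponential-rate test (illustrative only; the thesis derives the factor analytically in matrix form for its model class):

```python
import numpy as np

def lr_stat(x, theta0):
    """LR statistic for H0: rate = theta0 with i.i.d. exponential data."""
    n, s = len(x), np.sum(x)
    theta_hat = n / s
    return 2 * (n * np.log(theta_hat / theta0) - (theta_hat - theta0) * s)

def bartlett_factor(n, theta0, reps=20_000, rng=None):
    """Estimate E[LR]/df under H0 by simulation (df = 1 here).
    Dividing the LR statistic by this factor makes its null mean match 1."""
    rng = rng or np.random.default_rng(0)
    sims = np.array([lr_stat(rng.exponential(1 / theta0, n), theta0)
                     for _ in range(reps)])
    return sims.mean()

c = bartlett_factor(n=10, theta0=1.0)
# corrected statistic for observed data x: lr_stat(x, theta0) / c
```

For this toy case the factor is close to 1 + 1/(6n), so at n = 10 the estimate should land slightly above 1, illustrating why the uncorrected test is liberal in small samples.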
APA, Harvard, Vancouver, ISO, and other styles
26

Gomes, André Yoshizumi. "Família Weibull de razão de chances na presença de covariáveis." Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/4558.

Full text
Abstract:
The Weibull distribution is a common initial choice for modeling data with monotone hazard rates. However, such a distribution fails to provide a reasonable parametric fit when the hazard function is unimodal or bathtub-shaped. In this context, Cooray (2006) proposed a generalization of the Weibull family by considering the distribution of the odds of the Weibull and inverse Weibull families, referred to as the odd Weibull family, which is not just useful for modeling unimodal and bathtub-shaped hazards but is also convenient for testing the goodness-of-fit of the Weibull and inverse Weibull as submodels. In this work we systematically study the odd Weibull family and its properties, presenting motivations for its use, inserting covariates into the model, pointing out some difficulties associated with maximum likelihood estimation, and proposing methodologies for interval estimation and hypothesis testing for the model parameters. We also compare resampling results with asymptotic ones. The coverage probability of the proposed confidence intervals and the size and power of the considered hypothesis tests are analyzed via Monte Carlo simulation. Furthermore, we propose a Bayesian estimation methodology for the model parameters based on Markov chain Monte Carlo (MCMC) simulation techniques.
APA, Harvard, Vancouver, ISO, and other styles
27

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing instead of filtering, to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle.
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
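The causal-versus-non-causal contrast described in this abstract can be illustrated with a Kalman filter followed by a Rauch-Tung-Striebel smoother. This is a minimal 1-D constant-velocity sketch under simple assumptions (the thesis's sensors and motion models are more elaborate):

```python
import numpy as np

def kalman_rts(zs, dt=0.1, q=1.0, r=1.0):
    """Kalman filter (causal) then RTS smoother (non-causal) for a 1-D
    constant-velocity model with noisy position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
    H = np.array([[1.0, 0.0]])                            # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    n = len(zs)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = np.zeros(2), np.eye(2) * 10.0
    for z in zs:                                          # forward (causal) pass
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + (K @ (z - H @ x_pred)).ravel()
        P = (np.eye(2) - K @ H) @ P_pred
        xs_f.append(x); Ps_f.append(P); xs_p.append(x_pred); Ps_p.append(P_pred)
    xs_s, Ps_s = [None] * n, [None] * n                   # backward (non-causal) pass
    xs_s[-1], Ps_s[-1] = xs_f[-1], Ps_f[-1]
    for k in range(n - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + C @ (Ps_s[k + 1] - Ps_p[k + 1]) @ C.T
    return np.array(xs_f), np.array(xs_s)

rng = np.random.default_rng(3)
ts = np.arange(200) * 0.1
pos = 2.0 * ts                                            # constant-velocity ground truth
zs = pos + rng.normal(0.0, 1.0, ts.size)
xs_f, xs_s = kalman_rts(zs, dt=0.1, q=0.1, r=1.0)
```

Because the smoother uses future measurements, its position estimates (first column of `xs_s`) are typically closer to the ground truth than the causal filter's, most visibly during the filter's initial transient.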
APA, Harvard, Vancouver, ISO, and other styles
28

Han, Yuan-Lun, and 韓遠綸. "maximum likelihood ratio tests for four bivariate normal distributions." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/29eu3n.

Full text
Abstract:
Master's thesis
National Central University
Department of Mathematics
106 (ROC academic year)
We find the critical regions and power functions of three generalized likelihood ratio tests for three testing problems concerning four bivariate normal distributions. We show that all the tests are consistent.
APA, Harvard, Vancouver, ISO, and other styles
29

RICCIARDI, FEDERICO. "Pre-Experimental Assessment of Forensic DNA Identification Systems." Doctoral thesis, 2012. http://hdl.handle.net/2158/794376.

Full text
Abstract:
In this thesis we develop a methodology to evaluate a kinship identification system, i.e. the set of models and data used to ascertain the identity of an individual: a probabilistic tool devoted to obtaining the likelihood ratio supporting (or contradicting) the hypothesis that an individual, the candidate for identification, is a specific member of a family, conditional on the available familial DNA evidence. The thesis treats the likelihood ratio as a random variable and focuses on evaluating the probability that a candidate for identification would be correctly classified, exploiting the likelihood ratio distributions conditional on each hypothesis. The aim of this work is thus to show how it is possible to make statements about the goodness of an identification system, and to demonstrate how this can be applied in a great variety of cases. As a secondary objective, we show how the distributions of the likelihood ratio can be obtained, finding efficient computational methods to cope with their huge state space. The proposed system evaluation is specific to each case, does not require any additional laboratory costs, and should be carried out before the identification trial is performed. From a pre-experimental perspective, we want to evaluate whether a system fulfils the requirements of the parties involved. A further objective is to consider and solve some complicating issues affecting the estimation of mutation rates for STR markers.
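Treating the likelihood ratio as a random variable and reading classification rates off its conditional distributions can be sketched with a toy Gaussian evidence model (illustrative only; the thesis works with familial DNA likelihoods and much larger state spaces):

```python
import numpy as np
from scipy import stats

def lr_error_rates(mu1, mu0, sigma=1.0, threshold=1.0, n=100_000, seed=0):
    """Simulate the distribution of LR = f1(E)/f0(E) under each hypothesis and
    estimate the misclassification rates at a given decision threshold."""
    rng = np.random.default_rng(seed)
    def lr(e):
        return np.exp(stats.norm.logpdf(e, mu1, sigma)
                      - stats.norm.logpdf(e, mu0, sigma))
    lr_h1 = lr(rng.normal(mu1, sigma, n))    # LR distribution when H1 is true
    lr_h0 = lr(rng.normal(mu0, sigma, n))    # LR distribution when H0 is true
    false_neg = np.mean(lr_h1 < threshold)   # H1 true, evidence favours H0
    false_pos = np.mean(lr_h0 >= threshold)  # H0 true, evidence favours H1
    return false_neg, false_pos

fn, fp = lr_error_rates(mu1=2.0, mu0=0.0)
```

In this toy model LR >= 1 exactly when the evidence exceeds the midpoint between the two means, so both error rates equal the normal tail probability Phi(-1), about 0.159, a check that the pre-experimental evaluation idea delivers calibrated answers.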
APA, Harvard, Vancouver, ISO, and other styles
30

Chan, Chia-Hao, and 詹嘉豪. "Improving the likelihood ratio test for scale parameters of Gamma distribution." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/35396827582689055776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wu, Chao-Qing, and 吳昭慶. "Generalized Likelihood Ratio Homogeneous Simultaneous Test for Parameters of Trivariate Normal Distribution." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/94918214230069065363.

Full text
Abstract:
Master's thesis
National Central University
Department of Mathematics
104 (ROC academic year)
In this paper, we study the generalized likelihood ratio homogeneous simultaneous test for the parameters of the trivariate normal distribution. We find the test statistic and its asymptotic distribution under the null hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
32

Skipka, Guido. "The Likelihood Ratio Test for Order Restricted Hypotheses in Non-Inferiority Trials." Doctoral thesis, 2003. http://hdl.handle.net/11858/00-1735-0000-000D-F26A-F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Roçadas, Cláudia Vanessa Rosa Leitão de Macedo. "Populações emparelhadas com reclassificação periódica : aplicação a uma carteira de clientes." Doctoral thesis, 2008. http://hdl.handle.net/10400.2/1390.

Full text
Abstract:
Doctoral thesis in Mathematics, in the specialty of Statistical Modelling, presented to the Universidade Aberta
Our study centres on the stochastic structure of matched open populations subject to periodic reclassifications. These populations are divided into sub-populations. Two or more such populations are matched when there is a 1-1 correspondence between their sub-populations and the elements of one of them can move to another sub-population if and only if the same occurs with elements of the corresponding sub-populations of the other. When the relative dimensions of the sub-populations are stable we say, following Guerreiro & Mexia (2003; 2004; 2008) and Guerreiro (2008), that we have a stochastic vortex. The existence of such a vortex leads to the existence of a limit distribution, and matched populations may then be compared through these distributions. To obtain conditions for the existence of stochastic vortices we assumed that: the entries and departures occur at the beginning of fixed-length time periods; also at the beginning of those periods the new elements are allocated to the different sub-populations and the elements already in the population are reallocated; and the entry and reallocation probabilities do not change from period to period. Under these assumptions the populations have underlying homogeneous Markov chains. We intend to generalize these assumptions, but they proved acceptable for the application we present. This application considered two populations of customers of a bank: those with and those without an account manager. To carry out our study we showed how to define isomorphism between Markov chains and how to adjust one-step transition matrices through least squares. We point out that the isomorphism between Markov chains underlies the matching of populations, and the matrix adjustment was required since we had only the total numbers of entries and departures for the sub-populations. Besides this study connected with Markov chains, we showed how to carry out an analysis-of-variance-like analysis of entries to and departures from the populations of customers. This study was useful since it enabled us to simplify the model.
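The limit distribution underlying the stochastic vortex can be illustrated, for a closed toy chain, by iterating the one-step transition matrix until the sub-population shares stabilise (the thesis's open-population model with entries and departures is richer than this sketch, and the numbers below are purely illustrative):

```python
import numpy as np

def limit_distribution(P, tol=1e-12):
    """Stationary (limit) distribution of a regular Markov chain with
    one-step transition matrix P (rows sum to 1), by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform split
    while True:
        nxt = pi @ P                             # one reclassification period
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt

# Toy two-sub-population chain, e.g. customers moving between two states
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = limit_distribution(P)   # the relative sub-population sizes stabilise here
```

For this matrix the balance equation 0.1*pi_1 = 0.2*pi_2 gives the limit (2/3, 1/3); comparing such limit vectors across two matched chains is the kind of comparison the thesis formalises.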
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography