Dissertations / Theses on the topic 'Test of homogeneity'

Consult the top 50 dissertations / theses for your research on the topic 'Test of homogeneity.'

1

Wu, Baohua. "Data Driven Approaches to Testing Homogeneity of Intraclass Correlation Coefficients." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/math_theses/92.

Abstract:
Testing the homogeneity of intraclass correlation coefficients has been an active topic in statistical research, and several chi-square tests for it have been proposed over the past few decades. The main concern is that these methods are seriously biased when sample sizes are not large. In this thesis, data-driven approaches are proposed for testing the homogeneity of intraclass correlation coefficients across several populations. A simulation study shows the data-driven methods to be less biased and more accurate than some commonly used chi-square tests.
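As an illustration of one data-driven (resampling) alternative to the chi-square tests, the sketch below estimates the one-way ANOVA intraclass correlation in each of several populations and tests homogeneity by permuting whole clusters across populations. This is a generic permutation scheme on illustrative simulated data, not the specific method developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def icc1(data):
    """One-way ANOVA estimator of the intraclass correlation.
    data: (n_clusters, k) array with equal cluster sizes."""
    n, k = data.shape
    grand = data.mean()
    cluster_means = data.mean(axis=1)
    msb = k * ((cluster_means - grand) ** 2).sum() / (n - 1)
    msw = ((data - cluster_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def sample_population(n_clusters, k, icc, rng):
    # Clustered normal data with a target intraclass correlation.
    b = rng.normal(0, np.sqrt(icc), size=(n_clusters, 1))      # cluster effect
    e = rng.normal(0, np.sqrt(1 - icc), size=(n_clusters, k))  # within-cluster noise
    return b + e

pops = [sample_population(30, 4, 0.3, rng) for _ in range(3)]  # homogeneous ICCs

# Permutation test: under homogeneity, whole clusters are exchangeable across
# populations, so permute clusters and recompute the spread of the ICCs.
def spread(pops):
    return np.ptp([icc1(p) for p in pops])

obs = spread(pops)
all_clusters = np.vstack(pops)
sizes = [p.shape[0] for p in pops]
n_perm, hits = 1000, 0
for _ in range(n_perm):
    perm = rng.permutation(all_clusters, axis=0)
    parts = np.split(perm, np.cumsum(sizes)[:-1])
    if spread(parts) >= obs:
        hits += 1
print("permutation p-value:", (hits + 1) / (n_perm + 1))
```

Because the reference distribution is built from the data rather than from a large-sample chi-square approximation, a scheme of this kind avoids the small-sample bias the abstract describes.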
2

Mu, Zhiqiang. "Comparing the Statistical Tests for Homogeneity of Variances." Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2212.

Abstract:
Testing the homogeneity of variances is an important problem in many applications, since frequently used statistical methods such as ANOVA assume equal variances across two or more groups of data. Testing the equality of variances is difficult, however, because many of the tests are not robust against non-normality; it is known that the kurtosis of the distribution of the source data can affect the performance of tests for variances. We review the classical tests and their more robust recent modifications, together with other tests that have recently appeared in the literature, and we use bootstrap and permutation techniques to test for equal variances. We compare the performance of these tests under different types of distributions, sample sizes, and true ratios of the population variances. Monte Carlo methods are used to calculate empirical powers and type I error rates under different settings.
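The classical tests reviewed in such comparisons are available in SciPy, and a permutation version is easy to build on top of them. A minimal sketch with illustrative heavy-tailed data (the grouping and sample sizes are invented, not taken from the thesis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three groups with equal variances but non-normal (heavy-tailed) shapes.
groups = [rng.standard_t(df=5, size=40) for _ in range(3)]

# Classical tests: Bartlett is exact under normality but sensitive to kurtosis;
# Levene (median-centred, i.e. Brown-Forsythe) and Fligner-Killeen are more robust.
print(stats.bartlett(*groups).pvalue)
print(stats.levene(*groups, center="median").pvalue)
print(stats.fligner(*groups).pvalue)

# Permutation test: reshuffle group labels and recompute Levene's statistic.
obs = stats.levene(*groups, center="median").statistic
pooled = np.concatenate(groups)
sizes = [len(g) for g in groups]
n_perm, count = 2000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    parts = np.split(perm, np.cumsum(sizes)[:-1])
    if stats.levene(*parts, center="median").statistic >= obs:
        count += 1
print("permutation p-value:", (count + 1) / (n_perm + 1))
```

The permutation version calibrates the test against the actual (possibly non-normal) data distribution, which is the point of the resampling techniques mentioned in the abstract.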
3

Nian, Gaowei. "A score test of homogeneity in generalized additive models for zero-inflated count data." Kansas State University, 2014. http://hdl.handle.net/2097/18230.

Abstract:
Master of Science, Department of Statistics, Wei-Wen Hsu
Zero-inflated Poisson (ZIP) models are often used to analyze count data with excess zeros. In the ZIP model, the Poisson mean and the mixing weight are often assumed to depend on covariates through regression techniques; that is, the effect of covariates on the Poisson mean or the mixing weight is specified using a proper link function coupled with a linear predictor, a linear combination of unknown regression coefficients and covariates. In practice, however, this predictor may not be linear in the regression parameters but curvilinear or nonlinear. In such situations, a more general and flexible approach should be considered. One popular method in the literature is the Zero-Inflated Generalized Additive Model (ZIGAM), which extends zero-inflated models to incorporate Generalized Additive Models (GAMs); these models can accommodate a nonlinear predictor in the link function. For ZIGAM, it is also of interest to conduct inference on the mixing weight, particularly to evaluate whether the mixing weight equals zero. Many methodologies have been proposed to examine this question, but all of them were developed under classical zero-inflated models rather than ZIGAM. In this report, we propose a generalized score test to evaluate whether the mixing weight equals zero under the framework of ZIGAM with a Poisson model. Technically, the proposed score test is based on a novel transformation of the mixing weight coupled with proportionality constraints on the ZIGAM, which assume that the smooth components of the covariates in the Poisson mean and in the mixing weight are proportional. An intensive simulation study indicates that the proposed score test outperforms existing tests when the mixing weight and the Poisson mean truly involve a nonlinear predictor.
The recreational fisheries data from the Marine Recreational Information Program (MRIP) survey conducted by the National Oceanic and Atmospheric Administration (NOAA) are used to illustrate the proposed methodology.
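In the classical covariate-free setting that tests like this one generalize, a widely used score test for zero inflation is the van den Broek (1995) statistic, which compares the observed number of zeros with the number expected under a pure Poisson fit. A hedged sketch on simulated counts (this is the classical test, not the thesis's generalized test for ZIGAM):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def zip_score_test(y):
    """Score test of H0: no zero inflation, against a ZIP alternative
    (van den Broek-type statistic; covariate-free Poisson mean)."""
    y = np.asarray(y)
    n = len(y)
    lam = y.mean()                 # MLE of the Poisson mean under H0
    p0 = np.exp(-lam)              # Poisson probability of a zero
    n0 = np.sum(y == 0)            # observed number of zeros
    s = (n0 / p0 - n) ** 2 / (n * (1 - p0) / p0 - n * lam)
    pval = stats.chi2.sf(s, df=1)  # asymptotically chi-squared with 1 df
    return s, pval

# Zero-inflated sample: 25% structural zeros mixed with Poisson(2) counts.
y = np.where(rng.random(500) < 0.25, 0, rng.poisson(2.0, size=500))
print(zip_score_test(y))
```

With genuine zero inflation the statistic is large and the test rejects; the thesis's contribution is to retain this kind of score test when the predictors are smooth (GAM-type) rather than linear.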
4

Stewart, Michael Ian. "Asymptotic methods for tests of homogeneity for finite mixture models." Thesis, The University of Sydney, 2002. http://hdl.handle.net/2123/855.

Abstract:
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply when the mixture component distributions come from one of a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is not itself a statistic (it depends on the unknown true distribution) but is asymptotically equivalent, in a suitable sense, to certain common test statistics. We show that both quantities can be approximated by the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which when suitably normalised has a limiting distribution of the Gumbel extreme-value type. Although the limit theory is not practically useful for computing approximate p-values, we use Monte Carlo simulations to show that another method suggested by the theory, namely Studentising the maximum-score statistic and simulating a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources that a direct Monte Carlo approximation would.
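The Gumbel limit for suitably normalised maxima can be seen already in the simplest case, the maximum of n i.i.d. standard normal variables. The simulation below uses the classical normalising constants; it illustrates the extreme-value behaviour in general, not the thesis's score-process setting.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 2000, 2000

# Classical constants for maxima of n i.i.d. N(0,1) variables:
# a_n * (M_n - b_n) converges in distribution to the Gumbel law.
a_n = np.sqrt(2 * np.log(n))
b_n = a_n - (np.log(np.log(n)) + np.log(4 * np.pi)) / (2 * a_n)

maxima = rng.standard_normal((reps, n)).max(axis=1)
z = a_n * (maxima - b_n)

# The Gumbel distribution has mean equal to the Euler-Mascheroni constant
# (about 0.5772) and P(Z <= 0) = exp(-1) (about 0.368); convergence is slow
# (logarithmic rate), so agreement is only rough at n = 2000.
print("mean of normalized maxima:", z.mean())
print("P(Z <= 0):", np.mean(z <= 0))
```

The slow logarithmic convergence visible here is exactly why the abstract notes that the limit theory itself is not practically useful for p-values, motivating the simulation-based alternative.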
6

Höge, Elisabet. "Test and Analysis of Homogeneity Regarding Failure Intensity of Components in Nuclear Power Plants." Thesis, Uppsala universitet, Matematiska institutionen, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-162564.

Abstract:
In Probabilistic Safety Assessment (PSA), all potential failure events of a system, for example a nuclear power plant, are identified in order to evaluate the total probability of a severe accident. The input data to the PSA, the reliability parameters, are derived by a two-stage Bayesian method which rests on an assumption of homogeneity among the components of a population. If the components are assumed to be inhomogeneous, each individual component is assigned its own failure rate; conversely, if they are assumed to be homogeneous, the data are pooled before a common reliability parameter is derived. However, the motives for making these assumptions are sparsely documented, and the purpose of this study is to design a statistical method for testing the homogeneity of sparse data. The chosen test method, a chi-square goodness-of-fit test that takes operating (or standby) time into account, is implemented and applied to failure event data for the Nordic utilities. The tests indicate that the failure intensity of continuously operating components can be considered homogeneous for most populations, since the null hypothesis is rejected in only 6 % of the tests at the 0.05 significance level. Test results indicate that populations of standby components are to a larger extent inhomogeneous, which might be explained by differences in the data set due to unequal numbers of demands. Larger populations, i.e. components pooled across all plants, must also be considered more inhomogeneous. If the populations are to be regrouped in order to increase homogeneity, a statistical test could serve as a controlling tool. Besides statistical tests, the consequences in the PSA need to be studied before concluding which approach is preferable.
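A chi-square goodness-of-fit test of homogeneity that takes operating time into account can be sketched as follows; the failure counts and operating hours are invented for illustration, and the thesis's point is precisely that with counts this sparse the chi-square approximation must be used with care.

```python
import numpy as np
from scipy import stats

# Failure counts and operating times (hours) for components of one population.
failures = np.array([3, 5, 2, 7, 4])
hours = np.array([8000.0, 9500.0, 7000.0, 10000.0, 8500.0])

# Under homogeneity all components share one failure rate, estimated by pooling.
rate = failures.sum() / hours.sum()
expected = rate * hours                 # expected failures per component

chi2 = np.sum((failures - expected) ** 2 / expected)
dof = len(failures) - 1                 # one parameter (the common rate) estimated
pval = stats.chi2.sf(chi2, dof)
print(chi2, pval)                       # large p-value: no evidence against homogeneity
```

Standby components would use number of demands instead of operating hours as the exposure, which is the source of the unequal-demand effect mentioned in the abstract.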
7

Osaka, Haruki. "Asymptotics of Mixture Model Selection." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27230.

Abstract:
In this thesis, we consider the likelihood ratio test (LRT) for homogeneity in a three-component normal mixture model. It is well known that the LRT in this setting exhibits non-standard asymptotic behaviour, due to non-identifiability of the model parameters and possible degeneracy of the Fisher information matrix. In fact, Liu and Shao (2004) showed that for the test of homogeneity in a two-component normal mixture model with a single fixed component, the limiting null distribution is an extreme-value Gumbel distribution rather than the chi-squared distribution that arises in regular parametric models, for which the classical Wilks theorem applies. We generalise this result to a three-component normal mixture and show that similar non-standard asymptotics occur for this model. Our approach follows closely that of Bickel and Chernoff (1993), who studied the relevant asymptotics of the LRT statistic indirectly by first considering a certain Gaussian process associated with the testing problem. The equivalence between the process studied by Bickel and Chernoff (1993) and the LRT was later proved by Liu and Shao (2004), who thereby verified that the LRT statistic for this problem diverges to infinity at the rate log log n, a statement first conjectured by Hartigan (1985). In a similar spirit, we consider the limiting distribution of the supremum of a certain quadratic form; more precisely, the quadratic form is the score statistic for the test of homogeneity in the sub-model where the mean parameters are held fixed. The supremum of this quadratic form is shown to have a limiting distribution of extreme-value type, again with divergence rate log log n. Finally, we show that the LRT statistic for the three-component normal mixture model can be uniformly approximated by this quadratic form, proving that the two statistics share the same limiting distribution.
8

Bagdonavičius, Vilijandas B., Ruta Levuliene, Mikhail S. Nikulin, and Olga Zdorova-Cheminade. "Tests for homogeneity of survival distributions against non-location alternatives and analysis of the gastric cancer data." Universität Potsdam, 2004. http://opus.kobv.de/ubp/volltexte/2011/5152/.

Abstract:
Two-sample and k-sample tests of equality of survival distributions against alternatives that include crossing survival functions and proportional or monotone hazard ratios are given for right-censored data. The asymptotic power against approaching alternatives is investigated. The tests are applied to the well-known chemotherapy and radiotherapy data of the Gastrointestinal Tumor Study Group; the p-values for both proposed tests are much smaller than those of other known tests. Unlike the test of Stablein and Koutrouvelis, the new tests can be applied not only to singly but also to randomly censored data.
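The standard benchmark in such comparisons is the classical two-sample log-rank test, which is powerful against proportional-hazards (location-type) alternatives but can lose power badly under the crossing-hazards alternatives targeted here. A self-contained sketch on simulated right-censored data (an illustration of the benchmark, not of the tests proposed in the paper):

```python
import numpy as np
from scipy import stats

def logrank(time1, event1, time2, event2):
    """Classical two-sample log-rank test for right-censored data."""
    t = np.concatenate([time1, time2])
    e = np.concatenate([event1, event2]).astype(bool)
    g = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
    O1 = E1 = V = 0.0
    for tj in np.unique(t[e]):                    # distinct event times
        at_risk = t >= tj
        n = at_risk.sum()
        n1 = (at_risk & (g == 0)).sum()           # at risk in group 1
        d = (e & (t == tj)).sum()                 # events at tj, both groups
        d1 = (e & (t == tj) & (g == 0)).sum()
        O1 += d1                                  # observed events, group 1
        E1 += d * n1 / n                          # expected under homogeneity
        if n > 1:                                 # hypergeometric variance term
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = (O1 - E1) ** 2 / V
    return chi2, stats.chi2.sf(chi2, df=1)

rng = np.random.default_rng(4)
t1, t2 = rng.exponential(1.0, 60), rng.exponential(1.6, 60)
c1, c2 = rng.exponential(3.0, 60), rng.exponential(3.0, 60)  # random censoring
time1, event1 = np.minimum(t1, c1), t1 <= c1
time2, event2 = np.minimum(t2, c2), t2 <= c2
print(logrank(time1, event1, time2, event2))
```

When the survival curves cross, the observed-minus-expected contributions change sign over time and largely cancel, which is the weakness the paper's cross-effect tests are designed to avoid.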
9

Gao, Siyu. "The impact of misspecification of nuisance parameters on test for homogeneity in zero-inflated Poisson model: a simulation study." Kansas State University, 2014. http://hdl.handle.net/2097/17804.

Abstract:
Master of Science, Department of Statistics, Wei-Wen Hsu
The zero-inflated Poisson (ZIP) model consists of a Poisson model and a degenerate distribution at zero. Under this model, zero counts are generated from two sources, representing heterogeneity in the population. In practice, it is often of interest to evaluate whether this heterogeneity is consistent with the observed data. Most existing methodologies for examining this heterogeneity assume that the Poisson mean is a function of nuisance parameters, namely the coefficients associated with the covariates. However, these nuisance parameters can be misspecified when the methodologies are applied, and as a result the validity and power of the test may be affected. The impact of such misspecification has not been discussed in the literature. This report focuses on investigating the impact of misspecification on the performance of the score test for homogeneity in ZIP models. Through an intensive simulation study, we find that: 1) under misspecification, the limiting null distribution of the score test statistic no longer follows a chi-squared distribution, and a parametric bootstrap methodology is suggested for finding the true null distribution of the statistic; 2) the power of the test decreases as the number of covariates in the Poisson mean increases, and the test with a constant Poisson mean has the highest power, even compared with a well-specified mean. Finally, the simulation results are applied to the Wuhan Inpatient Care Insurance data, which contain excess zeros.
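The parametric-bootstrap recipe mentioned in the abstract (refit the null model, simulate from the fit, recompute the statistic) can be outlined as follows. The statistic here is a simple zero-count discrepancy standing in for the score test, and the data are simulated; both are illustrative assumptions, not the report's exact setup.

```python
import numpy as np

rng = np.random.default_rng(5)

def stat(y):
    # Stand-in test statistic: excess of observed zeros over the Poisson fit.
    lam = y.mean()
    return np.sum(y == 0) - len(y) * np.exp(-lam)

y_obs = rng.poisson(1.2, size=200)     # pretend these are the observed counts
t_obs = stat(y_obs)

# Parametric bootstrap: simulate from the *fitted null* (Poisson) model and
# recompute the statistic, approximating its true null distribution, which
# under misspecification need not be chi-squared.
lam_hat = y_obs.mean()
B = 2000
t_boot = np.array([stat(rng.poisson(lam_hat, size=len(y_obs))) for _ in range(B)])
pval = (np.sum(t_boot >= t_obs) + 1) / (B + 1)
print("bootstrap p-value:", pval)
```

Replacing the chi-squared reference distribution with this simulated one is exactly the correction the report recommends when the nuisance-parameter model may be wrong.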
10

Carvalho, Helton Graziadei de. "Testes bayesianos para homogeneidade marginal em tabelas de contingência." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-27082015-181850/.

Abstract:
Tests of hypotheses about the marginal proportions of a contingency table play a fundamental role, for instance, in the investigation of changes in opinion and behaviour. Nevertheless, most texts in the literature address procedures for independent populations, such as the test of homogeneity of proportions. Some works explore hypothesis tests for dependent proportions, for example the McNemar test for 2 x 2 contingency tables. The extension of the McNemar test to k x k tables, called the marginal homogeneity test, usually requires asymptotic approximations under the classical approach. However, for small sample sizes or sparse tables, such methods may produce imprecise results. In this work, we review classical and Bayesian measures of evidence commonly used to compare two marginal proportions. We then develop the Full Bayesian Significance Test (FBST) for testing marginal homogeneity in two-way and multidimensional contingency tables. The FBST is based on a measure of evidence, called the e-value, which does not depend on asymptotic results, does not violate the likelihood principle, and satisfies several logical properties expected of hypothesis tests. Consequently, the FBST approach to marginal homogeneity overcomes several limitations usually faced by other procedures.
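The McNemar test for a 2 x 2 table depends only on the two discordant cells. A minimal version, including the exact binomial variant that matters in precisely the small-sample situations that motivate the FBST (the counts below are invented):

```python
from scipy import stats

def mcnemar(b, c, exact=True):
    """McNemar test for marginal homogeneity of a 2x2 table.
    b, c are the discordant counts (changes in each direction)."""
    if exact:
        # Under H0 the b discordant pairs are Binomial(b + c, 1/2).
        return stats.binomtest(b, b + c, 0.5).pvalue
    chi2 = (b - c) ** 2 / (b + c)      # large-sample chi-squared(1) version
    return stats.chi2.sf(chi2, df=1)

# Example: 8 respondents switched yes -> no, 21 switched no -> yes.
print(mcnemar(8, 21, exact=True))
print(mcnemar(8, 21, exact=False))
```

The exact version avoids the asymptotic approximation entirely, which is the same concern (small or sparse tables) that the thesis addresses in the Bayesian framework for general k x k tables.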
11

Driana, Elin. "GENDER DIFFERENTIAL ITEM FUNCTIONING ON A NINTH-GRADE MATHEMATICS PROFICIENCY TEST IN APPALACHIAN OHIO." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1181693190.

12

Schumann, Frank. "Untersuchung zur prädiktiven Validität von Konzentrationstests." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-209613.

Abstract:
This thesis investigated the validity of attention and concentration tests. The central question was how various critical variables affect the predictive validity of such tests, in particular item difficulty and item homogeneity, test length and within-test course, test diversification, and validity in the context of a real personnel selection. In five studies, these variables were systematically varied and analysed with respect to their predictive validity for the (retrograde and concurrent) prediction of school and academic performance (Realschule, Abitur, intermediate diploma/Bachelor). Because of the student (i.e., relatively performance-homogeneous) sample, the correlations were expected to be somewhat underestimated; however, since validity was determined comparatively across tests and experimental conditions, this should not matter. Study 1 (N = 106) examined how difficult the items of an arithmetic concentration test should be to ensure good predictions, comparing easy and more difficult items with respect to their correlation with the criterion; easy and more difficult test variants proved roughly equally predictive. Study 2 (N = 103) examined the role of test length, comparing the predictive validity of a short and a long version of an arithmetic concentration test; the short version was more valid than the long version, and validity in the long version declined over the course of the test.
Study 3 (N = 388) focused on test diversification, investigating whether intelligence is better measured with a single matrices test (Wiener Matrizen-Test, WMT) or with a test battery (Intelligenz-Struktur-Test, I-S-T 2000 R) to ensure good predictive validity. The results clearly favour the matrices test, which was about as valid as the battery but more economical. Studies 4 (N = 105) and 5 (N = 97) examined predictive validity for school performance in the context of a real personnel-selection situation. Whereas the large test batteries, the Wilde-Intelligenz-Test 2 (WIT-2) and the Intelligenz-Struktur-Test 2000 R (I-S-T 2000 R), predicted only moderately well, the Komplexer Konzentrationstest (KKT), in particular the KKT arithmetic test, was an excellent predictor of school and academic performance. Based on these findings, recommendations and practical guidance are given for the strategic use of test instruments in diagnostic professional practice.
13

Carpes, Ricardo Howes. "Variabilidade da produção de frutos de abobrinha italiana em função do manejo." Universidade Federal de Santa Maria, 2006. http://repositorio.ufsm.br/handle/1/5007.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
To determine the variability of fruit production of Italian zucchini over multiple harvests in a protected environment, and to verify how plants bearing no harvestable fruit in a given harvest influence its variance under different managements, a study was conducted in the winter-spring and summer-autumn seasons of 2004/2005, in an area belonging to the Department of Fitotecnia at UFSM, Santa Maria, RS. The experiment consisted of two tunnels under different managements, each containing three rows twenty-three metres long with twenty-five plants per row. Seedlings were transplanted to the plastic greenhouse with 0.80 m spacing between plants and 1.0 m between rows, and fruits were harvested at a length of at least 18 cm. Bartlett's test was applied among the six variances of the cultivation rows within each harvest, and among the mean variances of the six rows, to verify homogeneity across harvests in each growing season. In the first study, the t-test was applied to compare mean production between rows, in each harvest within each tunnel, and between rows occupying the same position in the two tunnels. Variances and means oscillated significantly among the cultivation rows over the productive cycle and the multiple harvests, regardless of growing season, and tended to differ significantly among rows under limiting climatic conditions. In the second study, Bartlett's test was applied to verify homogeneity among the variances of individual plants, both when all plants in a row were considered and when only harvested plants were considered, for each type of irrigation in the two growing seasons.
Where the variances proved heterogeneous, further Bartlett tests were performed among the variances, grouping successive harvests. Drip irrigation showed greater heterogeneity among the variances than sprinkler irrigation. With the methodology of assigning the value zero to plants without harvested fruit in a given harvest, the heterogeneity among the variances tends to increase.
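The effect described in the final sentence, that recording a zero for plants without harvestable fruit tends to increase heterogeneity among the variances, is easy to reproduce with SciPy's Bartlett test. The yields below are simulated for illustration, not taken from the experiment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Fruit mass per harvested plant in three cultivation rows (illustrative).
rows = [rng.normal(300, 40, size=25) for _ in range(3)]
p_harvested_only = stats.bartlett(*rows).pvalue

# Same rows, but plants without harvestable fruit recorded as zero yield,
# with a different fraction of such plants in each row.
rows_zeros = [np.where(rng.random(25) < frac, 0.0, r)
              for r, frac in zip(rows, (0.1, 0.3, 0.5))]
p_with_zeros = stats.bartlett(*rows_zeros).pvalue

print(p_harvested_only, p_with_zeros)  # zeros typically shrink the p-value
```

Unequal fractions of zero-yield plants inflate the variances by different amounts per row, which is the mechanism behind the increased heterogeneity reported in the thesis.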
14

Qi, Meng. "Development in Normal Mixture and Mixture of Experts Modeling." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/15.

Abstract:
In this dissertation, we first consider the problem of testing homogeneity and order in a contaminated normal model when the data are correlated under a known covariance structure. To address this problem, we develop a moment-based homogeneity and order test, and design weights for the test statistics to increase the power of the homogeneity test. We apply our test to microarray data on Down's syndrome. The dissertation also studies a singular Bayesian information criterion (sBIC) for a bivariate hierarchical mixture model with varying weights, and develops a new data-dependent information criterion (sFLIC). We apply our model and criteria to birth-weight and gestational-age data, with the aim of selecting model complexity from the data.
15

Wang, Ling. "Homogeneity tests for several Poisson populations." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/909.

16

Mvogo, Jean Kisito. "Regroupement mécanique par méthode vibratoire des bois du bassin du Congo." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13790/document.

Abstract:
This Ph.D. work deals with a non-destructive experimental approach that organizes the timber species of the Congo basin into four groups according to the similarity of their main mechanical properties, and guarantees for each group the 5th-percentile characteristic value of mechanical properties such as the modulus of elasticity (MOE), the indicator property, which is estimated non-destructively by a vibratory test. The Congo basin is a heritage endangered by anthropogenic pressures, foremost among them the hyper-selective cutting of timber species. The grouping makes it possible to propose, as substitutes for species threatened with extinction, species with similar mechanical properties that are more available in terms of standing resources, thereby contributing to sustainable management and to maintaining the biodiversity of the Congo basin. Only small clear specimens are tested, and a given species is assigned to one of the four groups via a homogeneity test comparing the random variables of the species with those of the group. The results show that some species can be used interchangeably in the construction industry and that the procedure should be extended to more species of the basin; designers and builders thus gain a wider range of choices while promoting less-used species and reducing demand for the traditionally most sought-after, threatened ones. The vibratory MOE is shown to be practically invariant with moisture content and well correlated with the static MOE, the density, and the strength at failure; it is therefore a good predictor of the modulus of rupture (MOR) at 12 % moisture content of green wood, which validates the species grouping carried out in this work. The characteristic values of the measured properties are computed for each class. Because Eurocode 5 requires structural timber to be strength-graded, we finally propose a strength-grading system for the timbers of the Congo basin, aimed at better use of the forests, conservation of forest ecosystems, and satisfaction of basic human needs.
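Two calculations at the core of such a grading scheme can be sketched directly: the dynamic MOE from the fundamental longitudinal vibration frequency of a free-free bar (E = rho (2 L f)^2), and the 5 % exclusion characteristic value of a group. All specimen values below are invented for illustration.

```python
import numpy as np

def dynamic_moe(density, length, freq):
    """Dynamic modulus of elasticity from the fundamental longitudinal
    frequency of a free-free bar: E = rho * (2 * L * f)^2."""
    return density * (2.0 * length * freq) ** 2

# One specimen: rho = 650 kg/m3, L = 0.40 m, fundamental frequency f = 6500 Hz.
E = dynamic_moe(650.0, 0.40, 6500.0)
print(E / 1e9, "GPa")

# 5th-percentile characteristic value of a (hypothetical) species group,
# here taken as the empirical 5 % fractile of the group's MOE sample.
rng = np.random.default_rng(7)
moes = rng.normal(12e9, 2e9, size=200)    # Pa, illustrative group sample
char_value = np.percentile(moes, 5)
print(char_value / 1e9, "GPa (5 % exclusion limit)")
```

Using the lower 5 % fractile rather than the mean is what lets each strength class guarantee a value that at most 5 % of specimens fall below.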
APA, Harvard, Vancouver, ISO, and other styles
17

Dupuis, Jérôme. "Analyse statistique bayesienne de modèles de capture-recapture." Paris 6, 1995. http://www.theses.fr/1995PA066077.

Full text
Abstract:
The basic statistical model we consider consists of n simultaneous i.i.d. realizations of a process of interest reduced to a Markov chain with missing data, non-homogeneous, on a finite state space with a single absorbing state. While the maximum likelihood estimator is currently available, the Bayesian statistical analysis of this capture-recapture model had not yet been addressed. Bayesian estimation of the survival and movement probabilities of the basic model is carried out via the Gibbs algorithm, and sufficient conditions for the convergence of the algorithm are established. We then develop tests to capture the different sources of heterogeneity (temporal, individual and environmental) of the biological phenomenon represented by the Markov chain. The temporal homogeneity test we construct formulates the question of interest in terms of an acceptable divergence between the initial chain and its projection (in the sense of the Kullback distance) onto the space of homogeneous Markov chains. We then develop tests formulated in terms of conditional independence that reveal a delayed effect of an auxiliary process (an environmental or individual discrete random variable, time-dependent or not) on the process of interest. Finally, for the first time in capture-recapture, a situation of non-independence of migratory behaviours is considered: we adopt a dependence structure of unilateral nature that can account for a possible leader effect in animal population dynamics.
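The temporal-homogeneity question above compares a chain with its Kullback projection onto the homogeneous Markov chains; that projection simply pools the transition counts over time. A toy illustration of the divergence being measured (a deliberate simplification, not Dupuis' full Bayesian test):

```python
import numpy as np

def kl_row(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_from_homogeneity(counts):
    """counts: (T, S, S) array of observed transitions i -> j at each step t.
    The pooled chain (counts summed over t) is the projection onto the
    homogeneous chains; return the count-weighted KL divergence between
    the time-varying transition rows and the pooled rows."""
    counts = np.asarray(counts, dtype=float)
    pooled = counts.sum(axis=0)
    q = pooled / pooled.sum(axis=1, keepdims=True)
    total = counts.sum()
    div = 0.0
    for c in counts:                      # one S x S count matrix per step
        for i, row in enumerate(c):
            n_i = row.sum()
            if n_i > 0:
                div += (n_i / total) * kl_row(row / n_i, q[i])
    return div
```

A divergence of zero means the observed chain already lies in the homogeneous family; the test asks whether the observed divergence is acceptably small.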
APA, Harvard, Vancouver, ISO, and other styles
18

MARACCHINI, GIANLUCA. "Vulnerabilità degli edifici esistenti: utilizzo e limiti di procedure e metodi adottati nella pratica ingegneristica per la sua valutazione e riduzione." Doctoral thesis, Università Politecnica delle Marche, 2017. http://hdl.handle.net/11566/245616.

Full text
Abstract:
Il problema della mitigazione del rischio sismico degli edifici esistenti è in Italia una questione di primaria importanza, sia a causa dell’elevata vulnerabilità strutturale di gran parte del patrimonio edilizio esistente, sia a causa, nel caso degli edifici storici, del loro valore artistico e culturale. In un’ottica di prevenzione, date le scarse risorse disponibili e data la necessità di intervenire nel minor tempo possibile, risulta fondamentale poter disporre di strumenti affidabili che consentano di evitare inaccurate valutazioni di sicurezza sismica. La presente tesi affronta quindi alcune delle problematiche più importanti presenti nella pratica ingegneristica nella valutazione della sicurezza degli edifici esistenti, con particolare riferimento agli edifici in muratura ed in cemento armato. In particolare, dopo aver brevemente descritto le principali vulnerabilità degli edifici in muratura, viene presentata quindi una analisi critica approfondita della letteratura disponibile del metodo di modellazione a telaio equivalente. Tale metodo risulta essere oggi lo strumento di modellazione più diffuso nella pratica ingegneristica oltre ad essere consigliato da diversi codici normativi nazionali e internazionali. Da tale analisi, sono stati definiti limiti e campi d’applicazione per il suo corretto di utilizzo. In particolare, il telaio equivalente può essere utilizzato come primo approccio di tipo conservativo per lo studio della risposta sismica di edifici caratterizzati da un comportamento scatolare a prevalente risposta nel piano e con una disposizione delle aperture pressoché regolare. Diversamente da quanto accade per gli edifici esistenti in muratura, i metodi di modellazione utilizzati nella valutazione sismica degli edifici in cemento armato risultano piuttosto consolidati anche nella pratica ingegneristica. 
In questo caso, le maggiori fonti di incertezza presenti nella valutazione sono fornite dalla stima delle caratteristiche meccaniche delle proprietà meccaniche in situ del calcestruzzo gettato in opera. L’alta dispersione dei parametri meccanici spesso presente all’interno degli edifici esistenti in c.a., rende infatti ardua la loro rappresentazione. In questa tesi, viene proposto e applicato a sei casi studio un metodo statistico per la caratterizzazione meccanica del calcestruzzo capace di isolare alcune delle fonti di dispersione. Dall’analisi dei risultati è emerso come tale metodo sia capace di fornire una rappresentazione più accurata della dispersione effettivamente presente e di ridurre la dispersione dei risultati delle prove in situ.
The mitigation of the seismic risk of existing buildings is a relevant issue in Italy, due both to the high seismic vulnerability of most of the building stock and, in the case of historic masonry buildings, to their high artistic and cultural value. From this point of view, given the limited resources available and the need to intervene as quickly as possible, it is essential to have reliable tools in order to avoid inaccurate seismic assessments. This thesis addresses some of the most common problems in engineering practice related to the structural assessment of existing URM and RC buildings. After briefly describing the main vulnerabilities of masonry buildings, this work presents a critical analysis of the equivalent-frame modelling approach through an in-depth review of the literature. This method is today the most widespread modelling tool in engineering practice and is recommended by national and international standards. From this analysis, the limits and domain of applicability of the method have been defined. As a general result, it is shown that the equivalent-frame model can be used as a conservative first approach for the study of the global response of buildings with box-like behaviour and a fairly regular arrangement of openings. Unlike those for masonry buildings, the modelling methods used in professional practice for the seismic assessment of existing RC buildings are well validated. In this case, one of the most important sources of uncertainty is the evaluation of the in-situ mechanical properties of the concrete: the high dispersion of the concrete mechanical parameters often makes the seismic assessment of these buildings inaccurate. In this thesis, a statistical method for the mechanical characterization of concrete that isolates some of the sources of dispersion is proposed and applied to six case studies. The results show that the proposed method provides a more accurate representation of the actual strength distribution and reduces the dispersion obtained from in-situ tests.
APA, Harvard, Vancouver, ISO, and other styles
19

Islam, Md Khairul. "TRANSFORMED TESTS FOR HOMOGENEITY OF VARIANCES AND MEANS." Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1150727264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ruth, David M. "Applications of assignment algorithms to nonparametric tests for homogeneity." Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/dissert/2009/Sep/09Sep%5FRuth%5FPhD.pdf.

Full text
Abstract:
Dissertation (Ph.D. in Operations Research)--Naval Postgraduate School, September 2009.
Dissertation supervisor: Robert Koyak. Author's subject terms: nonparametric test, distribution-free test, non-bipartite matching, bipartite matching, change point. Includes bibliographical references (p. 121-126). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
21

Franke, Johannes. "Risiken des Klimawandels für den Wasserhaushalt - Variabilität und Trend des zeitlichen Niederschlagsspektrums." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-25922.

Full text
Abstract:
Die vorliegende Arbeit wurde auf der Grundlage begutachteter Publikationen als kumulative Dissertation verfasst. Ziel war hier, das zeitliche Spektrum des Niederschlages unter sich bereits geänderten und zukünftig möglichen Klimabedingungen zu untersuchen, um daraus risikobehaftete Auswirkungen auf den Wasserhaushalt ableiten zu können. Ausgehend von den für Sachsen bzw. Mitteldeutschland jahreszeitlich berechneten Trends für den Niederschlag im Zeitraum 1951-2000 wurde hier der Schwerpunkt auf das Verhalten des Starkniederschlages im Einzugsgebiet der Weißeritz (Osterzgebirge) während der Vegetationsperiode gesetzt. Unter Verwendung von Extremwertverteilungen wurde das lokale Starkniederschlagsgeschehen im Referenzzeitraum 1961-2000 für Ereignisandauern von 1-24 Stunden und deren Wiederkehrzeiten von 5-100 Jahren aus statistischer Sicht beschrieben. Mittels eines wetterlagenbasierten statistischen Downscaling wurden mögliche Änderungen im Niveau des zeitlich höher aufgelösten Niederschlagspektrums gegenüber dem Referenzspektrum auf die Zeitscheiben um 2025 (2011-2040) und 2050 (2036-2065) projiziert. Hierfür wurden die zu erwartenden Klimabedingungen für das IPCC-Emissionsszenario A1B angenommen. Mittels eines problemangepassten Regionalisierungsalgorithmus´ konnte eine Transformation der Punktinformationen in eine stetige Flächeninformation erreicht werden. Dabei wurden verteilungsrelevante Orografieeffekte auf den Niederschlag maßstabsgerecht berücksichtigt. Die signifikanten Niederschlagsabnahmen im Sommer bzw. in der Vegetationsperiode sind in Sachsen mit einer Zunahme und Intensivierung von Starkniederschlägen kombiniert. Hieraus entsteht ein Konfliktpotenzial zwischen Hochwasserschutz auf der einen und (Trink-) Wasserversorgung auf der anderen Seite. 
Für die zu erwartenden Klimabedingungen der Zeitscheiben um 2025 und 2050 wurden für das Einzugsgebiet der Weißeritz zunehmend positive, nicht-lineare Niveauverschiebungen im zeitlich höher aufgelösten Spektrum des Starkniederschlages berechnet. Für gleich bleibende Wiederkehrzeiten ergaben sich größere Regenhöhen bzw. für konstant gehaltene Regenhöhen kleinere Wiederkehrzeiten. Aus dem erhaltenen Änderungssignal kann gefolgert werden, dass der sich fortsetzende allgemeine Erwärmungstrend mit einer Intensivierung des primär thermisch induzierten, konvektiven Starkniederschlagsgeschehens einhergeht, was in Sachsen mit einem zunehmend häufigeren Auftreten von Starkregenereignissen kürzerer Andauer sowie mit einer zusätzlichen orografischen Verstärkung von Ereignissen längerer Andauer verbunden ist. Anhand des Klimaquotienten nach Ellenberg wurden Effekte des rezenten Klimatrends auf die Verteilung der potenziellen natürlichen Vegetation in Mitteldeutschland beispielhaft untersucht. Über eine Korrektur der Berechnungsvorschrift konnte eine Berücksichtigung der trendbehafteten klimatologischen Rahmenbedingungen, insbesondere dem negativen Niederschlagstrend im Sommer, erreicht werden. Insgesamt konnte festgestellt werden, dass die regionalen Auswirkungen des globalen Klimawandels massive Änderungen in der raum-zeitlichen Struktur des Niederschlages in Sachsen zur Folge haben, was unvermeidlich eine komplexe Wirkungskette auf den regionalen Wasserhaushalt zur Folge hat und mit Risiken verbunden ist
This paper was written as a cumulative doctoral thesis based on appraised publications. Its objective was to study the temporal spectrum of precipitation under already changed or possible future climate conditions in order to derive effects on the water budget which are fraught with risks. Based on seasonal trends as established for Saxony and Central Germany for precipitation in the period of 1951-2000, the focus was on the behaviour of heavy precipitation in the catchment area of the Weißeritz (eastern Ore Mountains) during the growing season. Using distributions of extreme values, the local heavy precipitation behaviour in the reference period of 1961-2000 was described from a statistical point of view for event durations of 1-24 hours and their return periods of 5-100 years. Statistical downscaling based on weather patterns was used to project possible changes in the level of the high temporal resolution spectrum of precipitation, compared with the reference spectrum, to the time slices around 2025 (2011-2040) and 2050 (2036-2065). The IPCC A1B emission scenario was assumed for expected climate conditions for this purpose. Using a regionalisation algorithm adapted to the problem made it possible to achieve a transformation of local information into areal information. In doing so, distribution-relevant orographic effects on precipitation were taken into consideration in a manner true to scale. Significant decreases in precipitation in summer and during the growing season are combined with an increase and intensification of heavy precipitation in Saxony. This gives rise to a potential for conflict between the need for flood protection, on the one hand, and the supply of (drinking) water, on the other hand. For the expected climate conditions of the time slices around 2025 and 2050, increasingly positive, non-linear shifts in the level of the high temporal resolution spectrum of heavy precipitation were calculated for the catchment of the Weißeritz. 
Higher amounts of rain were found if the return periods were kept constant, and shorter return periods were found if the rain amounts were kept constant. It may be concluded from the change signal obtained that the continuing general warming trend is accompanied by an intensification of the primarily thermally induced convective behaviour of heavy precipitation. In Saxony, this is associated with an increasingly frequent occurrence of heavy precipitation events of short duration and with an additional orographic intensification of events of long duration. Using the Ellenberg climate quotient, effects of the recent climate trend on the distribution of potential natural vegetation in Central Germany were studied by way of example. Underlying climatological conditions subject to a trend, in particular the negative trend of precipitation in summer, were taken into consideration by a modification of the calculation rule. All in all, it was found that regional effects of global climate change bring about massive changes in the spatiotemporal structure of precipitation in Saxony, which inevitably leads to a complex chain of impact on the regional water budget and is fraught with risks
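The return-level relation discussed above (larger rain depths for fixed return periods, or shorter return periods for fixed depths) follows directly from the fitted extreme-value distribution. As a hedged sketch under the simplest such model, a Gumbel fit (the thesis itself works with general extreme-value distributions for 1-24 h durations), the T-year level and its inverse are:

```python
import math

def gumbel_return_level(mu, sigma, T):
    """Level exceeded on average once every T years under a Gumbel
    distribution with location mu and scale sigma."""
    p = 1.0 - 1.0 / T                  # annual non-exceedance probability
    return mu - sigma * math.log(-math.log(p))

def gumbel_return_period(mu, sigma, level):
    """Inverse relation: return period of a given level."""
    p = math.exp(-math.exp(-(level - mu) / sigma))
    return 1.0 / (1.0 - p)
```

A shift in (mu, sigma) between the reference period and a future time slice then translates one-to-one into the changed depth/return-period pairs described in the abstract.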
APA, Harvard, Vancouver, ISO, and other styles
22

Stewart, Michael. "Asymptotic methods for tests of homogeneity for finite mixture models." Connect to full text, 2002. http://hdl.handle.net/2123/855.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2002.
Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Mathematics and Statistics, Faculty of Science. Includes bibliography. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
23

Dirbák, Štefan. "Návrh a realizace plošného měření rezistivity půdy." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413053.

Full text
Abstract:
This diploma thesis deals with the measurement of soil impedance and soil resistivity. At present, soil resistivity is determined by successively measuring soil parameters at individual points of the surface (or at depth). This thesis pursues the idea of measuring soil resistivity over an area using a network of electrodes, through a suitably designed test, measurement and evaluation system. Such an approach is useful whenever soil parameters (such as resistivity) must be determined over a specific demarcated area (or depth), and it promises savings in the time, energy and money needed to measure the soil resistivity of an area compared with successive point measurements. The configuration possibilities of the OMICRON CPC 100 measuring instrument were used for the design and implementation of the measuring system. The work concludes with a verification of the proposed solution by real measurements and an evaluation of the results.
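For the point measurements that the thesis generalizes to an electrode network, the classical reference is the Wenner four-electrode arrangement, where the apparent resistivity follows from the electrode spacing and the measured resistance. This is the textbook relation, not the specific OMICRON CPC 100 procedure, and the numbers below are illustrative:

```python
import math

def wenner_resistivity(spacing_m, resistance_ohm):
    """Apparent soil resistivity (ohm*m) for a Wenner array with equal
    electrode spacing a: rho = 2 * pi * a * R."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# e.g. a = 2 m spacing and a measured resistance R = 15 ohm
rho = wenner_resistivity(2.0, 15.0)
```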
APA, Harvard, Vancouver, ISO, and other styles
24

SILVA, Débora Karollyne Xavier. "Análise de diagnóstico para o modelo de regressão Log-Birnbaum-Saunders generalizado." Universidade Federal de Campina Grande, 2013. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1391.

Full text
Abstract:
Capes
A distribuição Birnbaum-Saunders surgiu em 1969 com aplicações fortemente ligadas à engenharia e se expandiu nas últimas décadas a diversas áreas. Na literatura, além de tomar um papel de destaque na análise de sobrevivência, podemos destacar o surgimento de várias generalizações. Neste trabalho apresentaremos uma dessas generalizações, a qual foi formulada por Mentainis em 2010. Primeiramente, faremos uma breve explanação sobre a distribuição Birnbaum-Saunders clássica e sobre a generalização que foi proposta por Mentainis (2010), a qual chamaremos de distribuição Birnbaum-Saunders generalizada. Em seguida, discorreremos sobre a distribuição senh-normal, a qual possui uma importante relação com a distribuição Birnbaum-Saunders. Numa outra etapa, apresentaremos alguns métodos de diagnóstico para o modelo de regressão log-Birnbaum-Saunders generalizado e investigaremos testes de homogeneidade para os correspondentes parâmetros de forma e escala. Por fim, analisamos um conjunto de dados para ilustrar a teoria desenvolvida.
The Birnbaum-Saunders distribution emerged in 1969, motivated by problems in engineering. However, its field of application has been extended beyond the original context of material fatigue and reliability analysis. In the literature, it has played an important role in survival analysis, and many generalizations of it have been considered. In this work we present one of these generalizations, which was formulated by Mentainis in 2010. First, we provide a brief explanation of the classical Birnbaum-Saunders distribution and its generalization proposed by Mentainis (2010), which we name the generalized Birnbaum-Saunders distribution. Thereafter, we discuss the sinh-normal distribution, which has an important relationship with the Birnbaum-Saunders distribution. In a further part of this work, we present some diagnostic methods for generalized log-Birnbaum-Saunders regression models and investigate tests of homogeneity for the corresponding shape and scale parameters. Finally, an application with real data is presented.
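The classical Birnbaum-Saunders density with shape α and scale β, and its sinh-normal connection, can be written down directly. A small sketch of the classical (not the generalized) density:

```python
import math

def bs_pdf(t, alpha, beta):
    """Density of the classical Birnbaum-Saunders law BS(alpha, beta).
    If T ~ BS(alpha, beta), then (1/alpha)*(sqrt(T/beta) - sqrt(beta/T))
    is standard normal -- the sinh-normal connection mentioned above."""
    term = math.sqrt(beta / t) + (beta / t) ** 1.5
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return term * math.exp(-0.5 * z * z) / (2.0 * alpha * beta * math.sqrt(2.0 * math.pi))
```

The density is obtained by the change of variables from the standard normal, which is why the sinh-normal family is the natural setting for log-Birnbaum-Saunders regression.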
APA, Harvard, Vancouver, ISO, and other styles
25

Franke, Johannes. "Risiken des Klimawandels für den Wasserhaushalt – Variabilität und Trend des zeitlichen Niederschlagsspektrums." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-71425.

Full text
Abstract:
Die vorliegende Arbeit wurde auf der Grundlage begutachteter Publikationen als kumulative Dissertation verfasst. Ziel war hier, das zeitliche Spektrum des Niederschlages unter sich bereits geänderten und zukünftig möglichen Klimabedingungen zu untersuchen, um daraus risikobehaftete Auswirkungen auf den Wasserhaushalt ableiten zu können. Ausgehend von den für Sachsen bzw. Mitteldeutschland jahreszeitlich berechneten Trends für den Niederschlag im Zeitraum 1951-2000 wurde hier der Schwerpunkt auf das Verhalten des Starkniederschlages im Einzugsgebiet der Weißeritz (Osterzgebirge) während der Vegetationsperiode gesetzt. Unter Verwendung von Extremwertverteilungen wurde das lokale Starkniederschlagsgeschehen im Referenzzeitraum 1961-2000 für Ereignisandauern von 1-24 Stunden und deren Wiederkehrzeiten von 5-100 Jahren aus statistischer Sicht beschrieben. Mittels eines wetterlagenbasierten statistischen Downscaling wurden mögliche Änderungen im Niveau des zeitlich höher aufgelösten Niederschlagspektrums gegenüber dem Referenzspektrum auf die Zeitscheiben um 2025 (2011-2040) und 2050 (2036-2065) projiziert. Hierfür wurden die zu erwartenden Klimabedingungen für das IPCC-Emissionsszenario A1B angenommen. Mittels eines problemangepassten Regionalisierungsalgorithmus´ konnte eine Transformation der Punktinformationen in eine stetige Flächeninformation erreicht werden. Dabei wurden verteilungsrelevante Orografieeffekte auf den Niederschlag maßstabsgerecht berücksichtigt. Die signifikanten Niederschlagsabnahmen im Sommer bzw. in der Vegetationsperiode sind in Sachsen mit einer Zunahme und Intensivierung von Starkniederschlägen kombiniert. Hieraus entsteht ein Konfliktpotenzial zwischen Hochwasserschutz auf der einen und (Trink-) Wasserversorgung auf der anderen Seite. 
Für die zu erwartenden Klimabedingungen der Zeitscheiben um 2025 und 2050 wurden für das Einzugsgebiet der Weißeritz zunehmend positive, nicht-lineare Niveauverschiebungen im zeitlich höher aufgelösten Spektrum des Starkniederschlages berechnet. Für gleich bleibende Wiederkehrzeiten ergaben sich größere Regenhöhen bzw. für konstant gehaltene Regenhöhen kleinere Wiederkehrzeiten. Aus dem erhaltenen Änderungssignal kann gefolgert werden, dass der sich fortsetzende allgemeine Erwärmungstrend mit einer Intensivierung des primär thermisch induzierten, konvektiven Starkniederschlagsgeschehens einhergeht, was in Sachsen mit einem zunehmend häufigeren Auftreten von Starkregenereignissen kürzerer Andauer sowie mit einer zusätzlichen orografischen Verstärkung von Ereignissen längerer Andauer verbunden ist. Anhand des Klimaquotienten nach Ellenberg wurden Effekte des rezenten Klimatrends auf die Verteilung der potenziellen natürlichen Vegetation in Mitteldeutschland beispielhaft untersucht. Über eine Korrektur der Berechnungsvorschrift konnte eine Berücksichtigung der trendbehafteten klimatologischen Rahmenbedingungen, insbesondere dem negativen Niederschlagstrend im Sommer, erreicht werden. Insgesamt konnte festgestellt werden, dass die regionalen Auswirkungen des globalen Klimawandels massive Änderungen in der raum-zeitlichen Struktur des Niederschlages in Sachsen zur Folge haben, was unvermeidlich eine komplexe Wirkungskette auf den regionalen Wasserhaushalt zur Folge hat und mit Risiken verbunden ist
This paper was written as a cumulative doctoral thesis based on appraised publications. Its objective was to study the temporal spectrum of precipitation under already changed or possible future climate conditions in order to derive effects on the water budget which are fraught with risks. Based on seasonal trends as established for Saxony and Central Germany for precipitation in the period of 1951-2000, the focus was on the behaviour of heavy precipitation in the catchment area of the Weißeritz (eastern Ore Mountains) during the growing season. Using distributions of extreme values, the local heavy precipitation behaviour in the reference period of 1961-2000 was described from a statistical point of view for event durations of 1-24 hours and their return periods of 5-100 years. Statistical downscaling based on weather patterns was used to project possible changes in the level of the high temporal resolution spectrum of precipitation, compared with the reference spectrum, to the time slices around 2025 (2011-2040) and 2050 (2036-2065). The IPCC A1B emission scenario was assumed for expected climate conditions for this purpose. Using a regionalisation algorithm adapted to the problem made it possible to achieve a transformation of local information into areal information. In doing so, distribution-relevant orographic effects on precipitation were taken into consideration in a manner true to scale. Significant decreases in precipitation in summer and during the growing season are combined with an increase and intensification of heavy precipitation in Saxony. This gives rise to a potential for conflict between the need for flood protection, on the one hand, and the supply of (drinking) water, on the other hand. For the expected climate conditions of the time slices around 2025 and 2050, increasingly positive, non-linear shifts in the level of the high temporal resolution spectrum of heavy precipitation were calculated for the catchment of the Weißeritz. 
Higher amounts of rain were found if the return periods were kept constant, and shorter return periods were found if the rain amounts were kept constant. It may be concluded from the change signal obtained that the continuing general warming trend is accompanied by an intensification of the primarily thermally induced convective behaviour of heavy precipitation. In Saxony, this is associated with an increasingly frequent occurrence of heavy precipitation events of short duration and with an additional orographic intensification of events of long duration. Using the Ellenberg climate quotient, effects of the recent climate trend on the distribution of potential natural vegetation in Central Germany were studied by way of example. Underlying climatological conditions subject to a trend, in particular the negative trend of precipitation in summer, were taken into consideration by a modification of the calculation rule. All in all, it was found that regional effects of global climate change bring about massive changes in the spatiotemporal structure of precipitation in Saxony, which inevitably leads to a complex chain of impact on the regional water budget and is fraught with risks
APA, Harvard, Vancouver, ISO, and other styles
26

Gustin, Sara. "Investigation of some tests for homogeneity of intensity with applications to insurance data." Thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-164588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

LUITEN, JOHN WILLIAM. "AN EMPIRICAL COMPARISON OF SELECTED ALTERNATIVES TO THE KUDER AND RICHARDSON FORMULA 20 (RELIABILITY, HOMOGENEITY, SIMULATION)." Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183963.

Full text
Abstract:
Several alternatives to the Kuder and Richardson formula number 20 (KR20) were compared for accuracy using simulated and actual data sets. Coefficients by Loevinger (1948), Horst (1954), Raju (1982), and Cliff (1984) as well as the Kuder and Richardson formulae numbers 8 and 14 were examined. These alternative reliability coefficients were compared by (1) simulation of tests with varying degrees of item difficulty dispersion, subject proficiency, reliability, and length, and (2) use of the norming samples of the Curriculum Referenced Tests of Mastery (Charles E. Merrill Publishing Co., publisher) for grades four, six, and eight. Most of the coefficients examined proved no more accurate than the KR20 and several were decidedly worse. All coefficients, with the exception of Loevinger's, were affected by item difficulty dispersion. Two coefficients, the KR8 and Horst, were found to have potential as KR20 substitutes. These two coefficients are discussed with recommendations made as to the appropriate use of each one.
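For reference, the KR20 baseline that the study's alternatives are compared against is computed from a 0/1 item-response matrix as (k/(k-1)) * (1 - Σ pᵢqᵢ / σ²ₓ). A minimal sketch, using the unbiased sample variance (one of the two common conventions):

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson formula 20 for a 0/1 item-response matrix
    (rows = examinees, columns = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                     # item difficulties
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return float((k / (k - 1.0)) * (1.0 - np.sum(p * q) / total_var))
```

Using ddof=0 (the population variance) gives a smaller coefficient; which convention a study uses should be stated alongside its results.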
APA, Harvard, Vancouver, ISO, and other styles
28

Bakšajev, Aleksej. "Statistical tests based on N-distances." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100409_082443-70166.

Full text
Abstract:
The thesis is devoted to the application of a new class of probability metrics, N-distances, introduced by Klebanov (Klebanov, 2005; Zinger et al., 1989), to the problems of testing the classical statistical hypotheses of goodness of fit, homogeneity, symmetry and independence. First, a construction of statistics based on N-metrics for testing the mentioned hypotheses is proposed. Then the problem of determining the critical region of the criteria is investigated. The main results of the thesis concern the asymptotic behaviour of the test statistics under the null and alternative hypotheses. In the general case, the limiting null distribution of the test statistics proposed in the thesis is established in terms of the distribution of an infinite quadratic form of normal random variables, with coefficients depending on the eigenvalues and eigenfunctions of a certain integral operator. It is proved that under the alternative hypothesis the test statistics are asymptotically normal. In the case of a parametric goodness-of-fit hypothesis, particular attention is devoted to normality and exponentiality criteria. For the hypothesis of homogeneity, a construction of a distribution-free multivariate two-sample test is proposed. For testing the hypothesis of uniformity on the hypersphere, the S1 and S2 cases are investigated in more detail. In conclusion, a comparison of N-distance tests with some classical criteria is provided. For a simple goodness-of-fit hypothesis in the univariate case, as a measure for... [to full text]
Disertacinis darbas yra skirtas N-metrikų teorijos (Klebanov, 2005; Zinger et al., 1989) pritaikymui klasikinėms statistinėms suderinamumo, homogeniškumo, simetriškumo bei nepriklausomumo hipotezėms tikrinti. Darbo pradžioje pasiūlytas minėtų hipotezių testinių statistikų konstravimo būdas, naudojant N-metrikas. Toliau nagrinėjama problema susijusi su suformuotų kriterijų kritinės srities nustatymu. Pagrindiniai darbo rezultatai yra susiję su pasiūlytų kriterijaus statistikų asimptotiniu skirstiniu. Bendru atveju N-metrikos statistikų asimptotinis skirstinys esant nulinei hipotezei sutampa su Gauso atsitiktinių dydžių begalinės kvadratinės formos skirstiniu. Alternatyvos atveju testinių statistikų ribinis skirstinys yra normalusis. Sudėtinės suderinamumo hipotezės atveju išsamiau yra analizuojami normalumo ir ekponentiškumo kriterijai. Daugiamačiu atveju pasiūlyta konstrukcija, nepriklausanti nuo skirstinio homogeniškumo testo. Tikrinant tolygumo hipersferoje hipotezę detaliau yra nagrinėjami apskritimo ir sferos atvejai. Darbo pabaigoje lyginami pasiūlytos N-metrikos bei kai kurie klasikiniai kriterijai. Neparametrinės suderinamumo hipotezės vienamačiu atveju, kaip palyginimo priemonė, nagrinėjamas Bahaduro asimptotinis santykinis efektyvumas (Bahadur, 1960; Nikitin, 1995). Kartu su teoriniais rezultatais pasiūlytų N-metrikos tipo testų galingumas ištirtas, naudojant Monte-Karlo metodą. Be paprastos ir sudėtinės suderinamumo hipotezių yra analizuojami homogeniškumo testai... [toliau žr. visą tekstą]
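N-distance statistics of the kind studied here include, for the strongly negative definite kernel |x − y|, the well-known energy statistic. A hedged one-dimensional sketch of the two-sample homogeneity test, with a permutation-calibrated critical region (the thesis instead derives the asymptotic quadratic-form distribution):

```python
import numpy as np

def energy_stat(x, y):
    """Two-sample energy statistic, an N-distance with kernel |x - y|:
    2*E|X - Y| - E|X - X'| - E|Y - Y'|; zero in the population iff the
    two distributions coincide, and nonnegative on samples."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    return float(2.0 * np.abs(x - y.T).mean()
                 - np.abs(x - x.T).mean() - np.abs(y - y.T).mean())

def perm_test(x, y, n_perm=999, seed=0):
    """Permutation p-value for the homogeneity hypothesis."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    n = len(np.asarray(x))
    obs = energy_stat(x, y)
    hits = sum(energy_stat(perm[:n], perm[n:]) >= obs
               for perm in (rng.permutation(pooled) for _ in range(n_perm)))
    return obs, (hits + 1) / (n_perm + 1)
```

Other choices of strongly negative definite kernel give other members of the N-distance family with the same test structure.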
APA, Harvard, Vancouver, ISO, and other styles
29

Aaron, Lisa Therese. "A comparative simulation of Type I error and Power of Four tests of homogeneity of effects for random- and fixed-effects models of meta-analysis." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Vu, Thi Lan Huong. "Analyse statistique locale de textures browniennes multifractionnaires anisotropes." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0094.

Full text
Abstract:
Nous construisons quelques extensions anisotropes des champs browniens multifractionnels qui rendent compte de phénomènes spatiaux dont les propriétés de régularité et de directionnalité peuvent varier dans l’espace. Notre objectif est de mettre en place des tests statistiques pour déterminer si un champ observé de ce type est hétérogène ou non. La méthodologie statistique repose sur une analyse de champ par variations quadratiques, qui sont des moyennes d’incréments de champ au carré. Notre approche, ces variations sont calculées localement dans plusieurs directions. Nous établissons un résultat asymptotique montrant une relation linéaire gaussienne entre ces variations et des paramètres liés à la régularité et aux propriétés directionnelles. En utilisant ce résultat, nous concevons ensuite une procédure de test basée sur les statistiques de Fisher des modèles linéaires gaussiens. Nous évaluons cette procédure sur des données simulées. Enfin, nous concevons des algorithmes pour la segmentation d’une image en régions de textures homogènes. Le premier algorithme est basé sur une procédure K-means qui a estimé les paramètres en entrée et prend en compte les distributions de probabilité théoriques. Le deuxième algorithme est basé sur une algorithme EM qui implique une exécution continue à chaque boucle de 2 processus. Finalement, nous présentons une application de ces algorithmes dans le cadre d’un projet pluridisciplinaire visant à optimiser le déploiement de panneaux photovoltaïques sur le terrain. Nous traitons d’une étape de prétraitement du projet qui concerne la segmentation des images du satellite Sentinel-2 dans des régions où la couverture nuageuse est homogène
We deal with some anisotropic extensions of the multifractional Brownian fields that account for spatial phenomena whose properties of regularity and directionality may both vary in space. Our aim is to set statistical tests to decide whether an observed field of this kind is heterogeneous or not. The statistical methodology relies upon a field analysis by quadratic variations, which are averages of squared field increments. Specific to our approach, these variations are computed locally in several directions. We establish an asymptotic result showing a linear Gaussian relationship between these variations and parameters related to regularity and directional properties of the model. Using this result, we then design a test procedure based on Fisher statistics of linear Gaussian models, and we evaluate this procedure on simulated data. Finally, we design some algorithms for the segmentation of an image into regions of homogeneous textures. The first algorithm is based on a K-means procedure which takes the estimated parameters as input and takes into account their theoretical probability distributions. The second algorithm is based on an EM algorithm in which the two processes (E) and (M) are executed in turn at each loop, the values found in (E) and (M) at each loop being used for the calculations in the next loop. Eventually, we present an application of these algorithms in the context of a pluridisciplinary project which aims at optimizing the deployment of photovoltaic panels on the ground. We deal with a preprocessing step of the project which concerns the segmentation of images from the satellite Sentinel-2 into regions where the cloud cover is homogeneous.
APA, Harvard, Vancouver, ISO, and other styles
31

Bakšajev, Aleksej. "Statistinių hipotezių tikrinimas, naudojant N-metrikas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100409_082453-39497.

Full text
Abstract:
The thesis is devoted to the application of a new class of probability metrics, N-distances, introduced by Klebanov (Klebanov, 2005; Zinger et al., 1989), to the problems of verification of the classical statistical hypotheses of goodness of fit, homogeneity, symmetry and independence. First, a construction of statistics based on N-metrics for testing the mentioned hypotheses is proposed. Then the problem of determination of the critical region of the criteria is investigated. The main results of the thesis are connected with the asymptotic behavior of the test statistics under the null and alternative hypotheses. In the general case, the limiting null distribution of the test statistics proposed in the thesis is established in terms of the distribution of an infinite quadratic form of normal random variables with coefficients dependent on the eigenvalues and eigenfunctions of a certain integral operator. It is proved that under the alternative hypothesis the test statistics are asymptotically normal. In the case of the parametric hypothesis of goodness of fit, particular attention is devoted to normality and exponentiality criteria. For the hypothesis of homogeneity, a construction of a multivariate distribution-free two-sample test is proposed. For testing the hypothesis of uniformity on the hypersphere, the S1 and S2 cases are investigated in more detail. In conclusion, a comparison of N-distance tests with some classical criteria is provided. For the simple hypothesis of goodness of fit in the univariate case, as a measure for... [to full text]
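The simplest member of the N-distance family mentioned above is the energy-type two-sample statistic with kernel L(u, v) = |u - v|. As a hedged illustration of the idea (a generic sketch, not the thesis's own implementation), it can be written in a few lines of Python:

```python
def energy_distance(x, y):
    """Two-sample energy statistic, a member of the N-distance family
    (kernel L(u, v) = |u - v|).  It is zero when the two samples coincide
    and grows with the discrepancy between their distributions."""
    n, m = len(x), len(y)
    between = sum(abs(a - b) for a in x for b in y) / (n * m)
    within_x = sum(abs(a - b) for a in x for b in x) / (n * n)
    within_y = sum(abs(a - b) for a in y for b in y) / (m * m)
    return 2 * between - within_x - within_y
```

In practice the critical region for a homogeneity test based on such a statistic is obtained from its limiting null distribution (an infinite quadratic form of normal variables, as in the thesis) or by permutation.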
APA, Harvard, Vancouver, ISO, and other styles
32

Bienaise, Solène. "Tests combinatoires en analyse géométrique des données - Etude de l'absentéisme dans les industries électriques et gazières de 1995 à 2011 à travers des données de cohorte." Phd thesis, Université Paris Dauphine - Paris IX, 2013. http://tel.archives-ouvertes.fr/tel-00941220.

Full text
Abstract:
The first part of the thesis deals with combinatorial inference in Geometric Data Analysis (GDA). We propose multidimensional tests that make no assumption about the process generating the data or about the distributions. We are interested here in problems of typicality (comparing a mean point to a reference point, or a group of observations to a reference population) and of homogeneity (comparing several groups). We use combinatorial procedures to construct a reference set against which we situate the data. The chosen test statistics lead to original extensions: a geometric interpretation of the observed significance level and the construction of a compatibility zone. The second part presents a study of absenteeism in the Industries Electriques et Gazières (the French electricity and gas industries) from 1995 to 2011, including the construction of an epidemiological cohort. GDA methods are used to identify emerging pathologies and groups of sensitive employees.
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Tsung-Wei, and 林宗威. "A test of homogeneity distribution with similarity coefficient." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/89538033420687021547.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
94
The linear regression model derives its test statistics and confidence intervals under the assumption of normally distributed errors, so checking whether the data are consistent with a normal distribution is essential. Two methods are commonly used to test for normality: the normal probability plot and the Shapiro-Wilk test. Different people may draw different conclusions from the same plot, however, which is why the Shapiro-Wilk test, being a formal hypothesis test, is important. Rejecting the null hypothesis indicates that the data do not match a normal distribution; failing to reject it, on the other hand, does not establish that the data come from a normal distribution. This thesis investigates the similarity between the observed data and the normal distribution, providing users of linear regression models with a more reliable method. The proposed method can also be applied to judge the degree of similarity between the data and other distributions, from which users can learn what distribution the data probably come from.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Ying-Hsi, and 陳瑩曦. "A variety of test methods for testing homogeneity forces." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/08965239848679281901.

Full text
Abstract:
Master's thesis
Tamkang University
In-service Master's Program in Mathematics Teaching for Secondary School Teachers
101
Among the many goodness-of-fit test methods, the most frequently used is the traditional Pearson chi-square statistic. The result of the chi-square test, however, depends on the choice of the cell boundaries: choosing 0 as a starting point is equivalent to applying the chi-square test to transformed data. How data transformation affects the power of other test methods has not been widely discussed, and this thesis uses simulation to study how the power of common test methods may be affected. The study also proposes a modification of the Pearson chi-square statistic: the interval [0,1] is divided so finely that the number of cells equals the number of samples, a moving-average frequency is computed for each cell, and the chi-square test is applied to these moving-average frequencies. The simulation study shows that the power of this test is relatively stable and less affected by data transformation. When the power of tests constructed from ordered spacings is compared, each is found to have its own strengths; unlike the conventional cell-based chi-square test, their performance is clearly mixed.
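For reference, the traditional Pearson statistic that the thesis modifies compares observed cell counts with their expected counts; a minimal sketch for testing uniformity on [0, 1) (the equal-width partition here is illustrative; shifting the cell boundaries changes the value of the statistic, which is the sensitivity described above):

```python
def pearson_chi2(data, k):
    """Pearson goodness-of-fit statistic for uniformity on [0, 1)
    with k equal-width cells: sum over cells of (O - E)^2 / E."""
    counts = [0] * k
    for x in data:
        counts[min(int(x * k), k - 1)] += 1
    expected = len(data) / k
    return sum((o - expected) ** 2 / expected for o in counts)
```

Under the null hypothesis the statistic is approximately chi-square with k - 1 degrees of freedom (when no parameters are estimated).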
APA, Harvard, Vancouver, ISO, and other styles
35

Mawella, Nadeesha R. "A robust test of homogeneity in zero-inflated models for count data." Diss., 2018. http://hdl.handle.net/2097/38877.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Wei-Wen Hsu
Evaluating heterogeneity in the class of zero-inflated models has attracted considerable attention in the literature, where the heterogeneity refers to the instances of zero counts generated from two different sources. The mixture probability, or the so-called mixing weight, in the zero-inflated model is used to measure the extent of such heterogeneity in the population. Typically, homogeneity tests are employed to examine the mixing weight at zero. Various testing procedures for homogeneity in zero-inflated models, such as the score test and the Wald test, have been well discussed and established in the literature. However, it is well known that these classical tests require correct model specification in order to provide valid statistical inferences. In practice, the testing procedure could be performed under model misspecification, which could result in biased and invalid inferences. There are two common misspecifications in zero-inflated models: the incorrect specification of the baseline distribution and a misspecified mean function of the baseline distribution. As empirical evidence, intensive simulation studies revealed that the empirical sizes of the homogeneity tests for zero-inflated models can be extremely liberal and unstable under these misspecifications for both cross-sectional and correlated count data. We propose a robust score statistic to evaluate heterogeneity in cross-sectional zero-inflated data. Technically, the test is developed based on the Poisson-Gamma mixture model, which provides a more general framework to incorporate various baseline distributions without specifying their associated mean functions. The testing procedure is further extended to correlated count data. We develop a robust Wald test statistic for correlated count data with the use of a working independence model assumption coupled with a sandwich estimator to adjust for any misspecification of the covariance structure in the data.
The empirical performances of the proposed robust score test and Wald test are evaluated in simulation studies. It is worth mentioning that the proposed Wald test can be implemented easily with minimal programming effort in routine statistical software such as SAS. Dental caries data from the Detroit Dental Health Project (DDHP) and Girl Scout data from the Scouting Nutrition and Activity Program (SNAP) are used to illustrate the proposed methodologies.
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Pengfei. "Hypothesis Testing in Finite Mixture Models." Thesis, 2007. http://hdl.handle.net/10012/3442.

Full text
Abstract:
Mixture models provide a natural framework for unobserved heterogeneity in a population. They are widely applied in astronomy, biology, engineering, finance, genetics, medicine, social sciences, and other areas. An important first step for using mixture models is the test of homogeneity. Before one tries to fit a mixture model, it might be of value to know whether the data arise from a homogeneous or heterogeneous population. If the data are homogeneous, it is not even necessary to go into mixture modeling. The rejection of the homogeneous model may also have scientific implications. For example, in classical statistical genetics, it is often suspected that only a subgroup of patients have a disease gene which is linked to the marker. Detecting the existence of this subgroup amounts to the rejection of a homogeneous null model in favour of a two-component mixture model. This problem has attracted intensive research recently. This thesis makes substantial contributions in this area of research. Due to partial loss of identifiability, classic inference methods such as the likelihood ratio test (LRT) lose their usual elegant statistical properties. The limiting distribution of the LRT often involves complex Gaussian processes, which can be hard to implement in data analysis. The modified likelihood ratio test (MLRT) is found to be a nice alternative of the LRT. It restores the identifiability by introducing a penalty to the log-likelihood function. Under some mild conditions, the limiting distribution of the MLRT is 1/2\chi^2_0+1/2\chi^2_1, where \chi^2_{0} is a point mass at 0. This limiting distribution is convenient to use in real data analysis. The choice of the penalty functions in the MLRT is very flexible. A good choice of the penalty enhances the power of the MLRT. In this thesis, we first introduce a new class of penalty functions, with which the MLRT enjoys a significantly improved power for testing homogeneity. 
The main contribution of this thesis is to propose a new class of methods for testing homogeneity. Most existing methods in the literature for testing homogeneity, explicitly or implicitly, are derived under the condition of finite Fisher information and a compactness assumption on the space of the mixing parameters. The finite Fisher information condition prevents their use with many important mixture models, such as the mixture of geometric distributions, the mixture of exponential distributions and, more generally, mixture models in scale distribution families. The compactness assumption often forces practitioners to set artificial bounds for the parameters of interest and makes the resulting limiting distribution dependent on these bounds. Consequently, developing a method without such restrictions is a dream of many researchers. As will be seen, the EM-test proposed in this thesis is free of these shortcomings. The EM-test combines the merits of the classic LRT and score test. The properties of the EM-test are particularly easy to investigate under single parameter mixture models. It has a simple limiting distribution, 0.5\chi^2_0+0.5\chi^2_1, the same as the MLRT. This result is applicable to mixture models without requiring the restrictive regularity conditions described earlier. The normal mixture model is a very popular model in applications. However, it does not satisfy the strong identifiability condition, which imposes substantial technical difficulties in the study of the asymptotic properties. Most existing methods do not directly apply to normal mixture models, so the asymptotic properties have to be developed separately. We investigate the use of the EM-test for normal mixture models and derive its limiting distributions. For the homogeneity test in the presence of a structural parameter, the limiting distribution is a simple function of the 0.5\chi^2_0+0.5\chi^2_1 and \chi^2_1 distributions.
The test with this limiting distribution is still very convenient to implement. For normal mixtures in both the mean and variance parameters, the limiting distribution of the EM-test is found to be \chi^2_2. Mixture models are also widely used in the analysis of directional data. The von Mises distribution is often regarded as the circular normal model. Interestingly, it satisfies the strong identifiability condition and the parameter space of the mean direction is compact. However, the theoretical results for single parameter mixture models cannot be applied directly to von Mises mixture models. Because of this, we also study the application of the EM-test to von Mises mixture models in the presence of the structural parameter. The limiting distribution of the EM-test is also found to be 0.5\chi^2_0+0.5\chi^2_1. Extensive simulation results are obtained to examine how precisely the limiting distributions approximate the finite sample distributions of the EM-test. The type I errors with the critical values determined by the limiting distributions are found to be close to the nominal values. In particular, we also propose several precision enhancing methods, which are found to work well. Real data examples are used to illustrate the use of the EM-test.
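The 0.5\chi^2_0+0.5\chi^2_1 limiting distribution quoted above yields a very simple p-value rule: since \chi^2_0 is a point mass at zero, only the \chi^2_1 component has a tail, so for an observed statistic t > 0 the p-value is 0.5 P(\chi^2_1 > t). A sketch using only the standard library (the function name is illustrative):

```python
import math

def mixture_pvalue(t):
    """P-value under the 0.5*chi2_0 + 0.5*chi2_1 limiting null distribution.
    chi2_0 is a point mass at zero, so only the chi2_1 component contributes
    a tail; P(chi2_1 > t) = erfc(sqrt(t / 2)) because a chi2_1 variable is
    the square of a standard normal variable."""
    if t <= 0:
        return 1.0
    return 0.5 * math.erfc(math.sqrt(t / 2))
```

Consequently, the 5% critical value is the 90th percentile of \chi^2_1, about 2.71, rather than the usual 95th percentile 3.84.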
APA, Harvard, Vancouver, ISO, and other styles
37

Bi, Daning. "Various Statistical Inferences for High-dimensional Time Series: Bootstrap, Homogeneity Pursuit and Autocovariance Test." Phd thesis, 2021. http://hdl.handle.net/1885/233476.

Full text
Abstract:
This thesis aims to study various statistical inferences for high-dimensional data, especially high-dimensional time series, including sieve bootstrap, homogeneity pursuit, and an equivalence test for spiked eigenvalues of autocovariance matrix. The primary techniques used in this thesis are novel dimension-reduction methods developed from factor models and principal component analysis (PCA). Chapter 2 proposes a novel sieve bootstrap method for high-dimensional time series and applies it to sparse functional time series where the actual observations are not dense, and pre-smoothing is misleading. Chapter 3 introduces an iterative complement-clustering principal component analysis (CPCA) to study high-dimensional data with group structures, where both homogeneity and sub-homogeneity (group-specific information) can be identified and estimated. Lastly, Chapter 4 proposes a novel test statistic named the autocovariance test to compare the spiked eigenvalues of the autocovariance matrices for two high-dimensional time series. In all chapters, dimension-reduction methods are applied for novel statistical inferences. In particular, Chapters 2 and 4 focus on the spiked eigenstructure of autocovariance matrix and use factors to capture the temporal dependence of the high-dimensional time series. Meanwhile, Chapter 3 aims to simultaneously estimate homogeneity and sub-homogeneity, which form a more complicated spiked eigenstructure of the covariance matrix, despite that the group-specific information is relatively weak compared with the homogeneity and traditional PCA fails to capture it. The theoretical and asymptotic results of all three statistical inferences are provided in each chapter, respectively, where the numerical evidence on the finite-sample performance for each method is also discussed. 
Finally, these three statistical inferences are applied on particulate matter concentration data, stock return data, and age-specific mortality data for multiple countries, respectively, to provide valid statistical inferences.
APA, Harvard, Vancouver, ISO, and other styles
38

Tseng, Sheng-Hsiang, and 曾聖翔. "Applications of Randomness and Homogeneity Test to Enhance the Systematic Error Resolution for Wafer Map Analysis." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/w88645.

Full text
Abstract:
Master's thesis
National Central University
Department of Electrical Engineering
107
Systematic errors are hard to distinguish on wafers with small die sizes. To address this problem, this thesis analyzes six symptomatic failure types among nine kinds of systematic errors. Each failure type is partitioned into different shapes, and randomness and homogeneity tests are applied after wafer partitioning to enhance the detection resolution for systematic errors; the resolution ratio for systematic errors is then observed after partitioning. After cutting into various shapes, the randomness and homogeneity tests are applied to each wafer. In the randomness test, a hypothesis test is used: if the alternative hypothesis is over-clustering, a one-tailed test is applied and the B-Score is compared with the critical value 1.64; if the alternative hypothesis is non-randomness, a two-tailed test is applied and the absolute value of the B-Score is compared with 1.96. A gateway diagram can then be drawn and divided into five blocks. In the homogeneity test, the yield parameter is chosen to support the wafer analysis: if the yield difference between partitioned regions exceeds a threshold, the wafer may contain a systematic error. Different partition methods are used to distinguish different failure types; for example, changing the radius of a donut-shaped partition detects the edge-ring failure type and the defect position. Finally, the results of the tests are combined to enhance the resolution of systematic errors, so as to improve yield and reduce cost.
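The decision rule described for the randomness test is a standard z-type comparison: one-tailed against 1.64 when the alternative is over-clustering, two-tailed against 1.96 when the alternative is non-randomness. A hedged sketch of just that rule (the B-Score computation itself is the thesis's own and is taken here as a precomputed input; the function name is illustrative):

```python
def randomness_decision(b_score, alternative):
    """Apply the critical values quoted in the abstract to a precomputed B-Score.
    'cluster'   -> one-tailed test at the 5% level (critical value 1.64)
    'nonrandom' -> two-tailed test at the 5% level (critical value 1.96)
    Returns True when the null hypothesis of randomness is rejected."""
    if alternative == "cluster":
        return b_score > 1.64
    if alternative == "nonrandom":
        return abs(b_score) > 1.96
    raise ValueError("alternative must be 'cluster' or 'nonrandom'")
```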
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Wan-Ting, and 林琬婷. "Self-incompatibility assessment and homogeneity test in anther culture-derived plantlet of purple coneflower (Echinacea purpurea L. Moench)." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/14952636314315111644.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Agronomy
99
Echinacea (Asteraceae) is a genus of herbaceous perennials used as medicinal plants. The major pharmacologically active components, such as caffeic acid derivatives, alkylamides, polysaccharides, and polyacetylenes, have anti-inflammatory, immune-stimulating, and anti-oxidant effects. Commercial production of Echinacea relies mainly on seed propagation; however, Echinacea is self-incompatible and protandrous and is pollinated mainly by insects. As a cross-pollinated plant it may form heterogeneous populations, so seed propagation leads to variation in agronomic traits and composition, making quality hard to control. Anther culture combined with chromosome doubling can produce pure lines of homozygous doubled haploids that could serve as material for producing hybrid seed. If the doubled haploids are also self-incompatible, the emasculation step could be omitted and breeders' rights protected. In the present study, the microspore origin of anther culture-derived plants of Echinacea was determined using morphological character discrimination and inter simple sequence repeat (ISSR) markers. Morphological traits showed that the selfed progeny were not identical. Polymorphic fragments between the two parents of the F1 plants were identified using single primers and a small number of offspring from self- and cross-pollination. ISSR analysis also showed that the anther culture-derived plantlets were not homogeneous doubled haploids. The degree of self-incompatibility was evaluated with an index of self-incompatibility among selfed and crossed materials; the results indicated that DH 1 was completely self-incompatible and DH 2 mostly self-incompatible. The self-incompatibility was not caused by pistil and/or stamen sterility.
APA, Harvard, Vancouver, ISO, and other styles
40

"Power, extension and multiple comparisons for the Lin and Wang test for overall homogeneity of Weibull survival curves." Tulane University, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
41

Fiala, Ondřej. "Změny srážko-odtokového režimu v oblasti Šumavy." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-369389.

Full text
Abstract:
CHANGES OF RAINFALL-RUNOFF REGIME IN THE ŠUMAVA / BOHEMIAN FOREST REGION Abstract: The goal of this thesis is to evaluate changes in the rainfall-runoff regime of the Šumava region from both a temporal and a spatial point of view. The thesis comprises a research part and an applied part. The research part is dedicated to methods for evaluating runoff changes and their possible causes in the Šumava region. In the applied part, the precipitation-runoff regime is analyzed for long-term time series of average annual and monthly discharges, as well as annual and monthly precipitation, for selected gauging stations in the Czech, German, and Austrian parts of Šumava, using absolute and relative homogeneity tests and the Mann-Kendall test for long-term trend. The results of the thesis show significant changes in rainfall and runoff seasonality. One of the main aims of the thesis is the identification of a possible orographic effect, i.e., the difference between the windward and leeward parts of Šumava. In conclusion, the achieved results are evaluated, discussed, and compared with the relevant literature. Key words: absolute homogeneity, land-use, Mann-Kendall test, runoff, basin, discharge, relative homogeneity, season, precipitation, trend, Šumava
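The Mann-Kendall trend test used in the applied part can be sketched compactly: the statistic S counts concordant minus discordant pairs in the series and is standardized with variance n(n-1)(2n+5)/18. A minimal sketch without the tie correction (which the full test adds when values repeat):

```python
import math

def mann_kendall_z(series):
    """Mann-Kendall trend test (no tie correction).
    S counts concordant minus discordant pairs; the standardized Z uses
    Var(S) = n(n-1)(2n+5)/18 and a continuity correction of 1.
    |Z| > 1.96 suggests a monotonic trend at the 5% level."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0
```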
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Ming-Chang, and 李明昌. "A Test of Homogeneity of Firms Included in the GAO Restatement Database Classified by News Search Results-The Case of Conservative Accounting Practices." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/06991320505518302260.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Accounting
95
This study is based on the GAO restatement sample, which consists of 919 restatements of financial statements by listed corporations from 1997 through June 30, 2002. This database has been employed by many studies to examine various aspects of financial restatements; however, all of them have treated the entire sample as homogeneous. A closer examination based on a LexisNexis news search shows that many of these restatements are innocuous; for example, some restatements were made to adopt new pronouncements, and many were not reported by the news media. This study investigates whether these innocuous restatements differ from restatements caused by accounting frauds and irregularities in accounting practices. I hypothesize that firms of the fraud-and-irregularity type are more prone to aggressive accounting practices, which limit their accounting flexibility and eventually lead to frauds and irregularities committed to increase accounting income. In terms of accounting conservatism, I use the accumulation of nonoperating accruals, the market-to-book ratio, and Basu's (1997) earnings-return regression to examine the difference between restatements due to accounting frauds and irregularities and innocuous restatements. As predicted, regardless of which measure is used, the accounting practices of innocuous restatement firms are more conservative than those of firms restating because of accounting frauds and irregularities. Overall, this study provides evidence that GAO restatement firms are not homogeneous, and future research employing the GAO database can increase power by carefully distinguishing innocuous restatements from those due to accounting frauds and irregularities.
APA, Harvard, Vancouver, ISO, and other styles
43

Chuang, Kai-Ting, and 莊凱婷. "A Test of Homogeneity of Firms Included in the GAO Restatement Database Classified by New Search Results-the Case of Analysts’ Earning Forecast Characteristics." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/84380377115263630439.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Accounting
95
This study is based on the GAO restatement sample, which consists of 919 restatements of financial statements by listed corporations from 1997 through June 30, 2002. This database has been employed by many studies to examine various aspects of financial restatements; however, all of them have treated the entire sample as homogeneous. A closer examination based on a LexisNexis news search shows that many of these restatements are innocuous; for example, some restatements were made to adopt new pronouncements, and many were not reported by the news media. This study investigates whether these innocuous restatements differ from the rest of the sample firms from the perspective of financial analysts. Following Easterwood and Nutt (1999), I classify sample firms into a group of restatements related to accounting frauds and irregularities and an innocuous group, and then further divide each group into low, normal, and high subgroups based on the magnitude of the earnings change from the previous year. The findings include the following. First, analysts underreact to expected and unexpected earnings information of innocuous restatement firms; for the accounting frauds and irregularities group, however, analysts do not underreact to expected and unexpected earnings information. Second, analysts underreact to bad earnings news but overreact to good earnings news. Third, after a company announces that it will restate its financial statements, analysts are more likely to revise earnings forecasts downward for the frauds and irregularities group but upward for the innocuous group, indicating that from the analysts' point of view these two groups are clearly different, which supports the value of news searches for future research that plans to employ the GAO restatement database.
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Chiu-Mei, and 吳秋美. "A Test of Homogeneity of Firms Included in the GAO Restatement Database Classified by News Search Results-The Case of Information Transfer of Earnings Restatements." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/05489921464720394297.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Accounting
95
This study is based on the GAO restatement sample, which consists of 919 restatements of financial statements by listed corporations from 1997 through June 30, 2002. This database has been employed by many studies to examine various aspects of financial restatements; however, all of them have treated the entire sample as homogeneous. A closer examination based on a LexisNexis news search shows that many of these restatements are innocuous; for example, some restatements were made to adopt new pronouncements, and many were not reported by the news media. This study investigates whether innocuous restatements differ from restatements due to accounting frauds and irregularities from the perspective of whether the negative information from restatements affects other firms in the same industry. First, I perform a non-directional test. The result supports the notion that restatement announcement information reflected in the announcing firms' security returns influences the security returns of other firms in the same industry, and that the intra-industry information transfer effects differ between these two groups of firms. Second, a directional test is performed. There is statistically significant evidence that the direction and magnitude of the impact of a firm's restatement announcement on its own stock price is a determinant of both the direction and magnitude of the security price reactions of other firms in its industry, and that the intra-industry information transfer effects differ between innocuous restatements and restatements due to accounting frauds and irregularities. Overall, this study provides evidence that GAO restatement firms are not homogeneous, and future research employing the GAO database can increase power by carefully distinguishing innocuous restatements from those due to accounting frauds and irregularities.
APA, Harvard, Vancouver, ISO, and other styles
45

Schumann, Frank. "Untersuchung zur prädiktiven Validität von Konzentrationstests: Ein chronometrischer Ansatz zur Überprüfung der Rolle von Itemschwierigkeit, Testlänge, und Testdiversifikation." Doctoral thesis, 2015. https://monarch.qucosa.de/id/qucosa%3A20551.

Full text
Abstract:
This thesis investigated the validity of attention and concentration tests. The central question was how various critical variables influence the predictive validity of these tests, in particular item difficulty and item homogeneity, test length and test progression, test diversification, and validity in the context of real personnel selection. In five studies, these variables were systematically varied and analysed for their predictive validity in the (retrograde and concurrent) prediction of school and academic performance (Realschule, Abitur, Vordiplom/Bachelor). Because the sample consisted of students (i.e., was relatively homogeneous in ability), the correlations were expected to be somewhat underestimated; since validity was determined "comparatively" across particular tests and experimental conditions, however, this should not matter. Study 1 (N = 106) first examined how difficult the items of a mental-arithmetic concentration test should be to ensure good predictions, comparing easy and more difficult items with respect to their correlation with the criterion. Both the easy and the more difficult test variants turned out to be roughly equally predictive. Study 2 (N = 103) examined the role of test length by comparing the predictive validity of a short and a long version of a mental-arithmetic concentration test. The short version proved more valid than the long version, and validity in the long version declined over the course of the test.
Study 3 (N = 388) focused on test diversification, examining whether intelligence is better measured with a single matrices test (Wiener Matrizen-Test, WMT) or with a test battery (Intelligenz-Struktur-Test, I-S-T 2000 R) to ensure good predictive validity. The results clearly favour the matrices test, which was roughly as valid as the test battery but more economical to administer. Studies 4 (N = 105) and 5 (N = 97) examined predictive validity for school performance in the context of a real personnel selection situation. Whereas the large test batteries, the Wilde-Intelligenz-Test 2 (WIT-2) and the Intelligenz-Struktur-Test 2000 R (I-S-T 2000 R), predicted only moderately well, the Komplexer Konzentrationstest (KKT), in particular the KKT arithmetic test, was an excellent predictor of school and academic performance. Based on these findings, recommendations and practical guidance are given for the strategic use of test instruments in diagnostic professional practice. Contents: 1 Introduction and aims 2 Assessment of concentration ability 2.1 Historical background 2.2 Cognitive modelling 2.3 Psychometric modelling 3 Predictive validity of concentration tests 3.1 Reliability, construct validity, criterion validity 3.2 Construction and validation strategies 3.3 Derivation of the research questions 4 Description of the questionnaires and tests 5 Empirical part 5.1 Study 1 - Item difficulty 5.1.1 Method 5.1.2 Results 5.1.3 Discussion 5.2 Study 2 - Test lengthening and test progression 5.2.1 Method 5.2.2 Results 5.2.3 Discussion 5.3 Study 3 - Test diversification 5.3.1 Method 5.3.2 Results 5.3.3 Discussion 5.4 Study 4 - Validity in a real selection situation (I-S-T 2000 R) 5.4.1 Method 5.4.2 Results 5.4.3 Discussion 5.5 Study 5 - Validity in a real selection situation (WIT-2) 5.5.1 Method 5.5.2 Results 5.5.3 Discussion 6 Discussion 6.1 Are difficult tests better than easy tests? 6.2 Are long tests better than short tests? 6.3 Are test batteries better than single tests? 6.4 Are tests also valid under "real" conditions? 6.5 Validity under real conditions - generalization 7 Theoretical implications 8 Practical consequences 9 References Appendix
APA, Harvard, Vancouver, ISO, and other styles
46

"Comparing the Statistical Tests for Homogeneity of Variances." East Tennessee State University, 2006. http://etd-submit.etsu.edu/etd/theses/available/etd-0714106-143011/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Jyun-Sheng, and 林峻陞. "Improved likelihood ratio tests for testing marginal homogeneity in 2 × 2 contingency tables." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/n23nsa.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Statistics
97
This paper considers one-sided hypotheses for testing marginal homogeneity in a binary matched-pairs design. First, we use exact unconditional tests based on the likelihood ratio statistic to obtain the p-value. The likelihood ratio p-value can be very conservative when sample sizes are small or moderate. Alternatively, we consider the confidence interval p-value with a specified confidence coefficient, derived by Berger and Sidik (2003). However, numerical calculations do not give strong evidence that the confidence interval p-value is better than the likelihood ratio p-value in every case. Moreover, the performance of the confidence interval p-value depends heavily on the choice of confidence coefficient, so this p-value can be improved by applying the unconditional approach again. Our numerical studies show that, for all sample sizes, the improved confidence interval p-value remains valid (at least the nominal level) and is closer to the nominal level than both the likelihood ratio p-value and the confidence interval p-value.
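For the binary matched-pairs design discussed above, the classical exact conditional (McNemar-type) one-sided test is a useful baseline against which the unconditional approaches are compared. The sketch below illustrates that baseline, not the thesis's unconditional method; the function name is hypothetical:

```python
from math import comb

def mcnemar_exact_one_sided(b, c):
    """One-sided exact conditional test of marginal homogeneity in a
    2x2 matched-pairs table. b and c are the two discordant cell counts;
    under H0 the first discordant count is Binomial(b + c, 1/2), and the
    one-sided p-value is P(X >= b)."""
    n = b + c
    return sum(comb(n, i) for i in range(b, n + 1)) / 2 ** n
```

For example, with discordant counts b = 8 and c = 2 the p-value is 56/1024, roughly 0.055, so the evidence against marginal homogeneity is borderline at the 5% level.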
APA, Harvard, Vancouver, ISO, and other styles
48

Ranjineh, Khojasteh Enayatollah. "Geostatistical three-dimensional modeling of the subsurface unconsolidated materials in the Göttingen area." Doctoral thesis, 2013. http://hdl.handle.net/11858/00-1735-0000-0001-BB9A-B.

Full text
Abstract:
The aim of this thesis was to build a three-dimensional subsurface model of the Göttingen area based on a geotechnical classification of the unconsolidated sediments. The materials investigated range from unconsolidated sediments to solid rock, but are referred to here as soil, soil classes, or soil categories. The study evaluates different ways of capturing heterogeneous subsurfaces with geostatistical methods and simulations. Such models are a fundamental tool in geotechnics, mining, oil exploration, and hydrogeology, among other fields. Detailed modelling of the required continuous parameters, such as the porosity, permeability, or hydraulic conductivity of the subsurface, presupposes an exact determination of the boundaries of the facies and soil categories. The focus of this work is the three-dimensional modelling of unconsolidated sediments and their classification based on geostatistically derived parameters. The methods used were conventional pixel-based approaches as well as transition-probability-based Markov chain models. After a general statistical evaluation of the parameters, the presence or absence of a soil category along the boreholes is described by indicator variables. The indicator of a category at a sample point is one if the category is present and zero if it is not. Intermediate states can also be defined; for example, a value of 0.5 is assigned if two categories are present but their exact proportions are unknown. To improve the stationarity of the indicator variables, the initial coordinates are transformed into a new system proportional to the top and bottom of the corresponding model layer.
In the new coordinate space, the indicator variograms of each category are computed for several spatial directions. For brevity, semi-variograms are also referred to as variograms in this work. Indicator kriging is then used to compute the probability of each category at a model node, and based on these probabilities the most probable category is assigned to the node. The indicator variogram models and indicator kriging parameters were validated and optimized. The effect of reducing the number of model nodes on model precision was also examined. To resolve small-scale variations of the categories, the developed methods were applied and compared. The simulation methods used were Sequential Indicator Simulation (SISIM) and the Transition Probability Markov Chain (TP/MC) approach. The studies show that the TP/MC method generally yields good results, particularly in comparison with SISIM. Alternative methods for similar problems are evaluated and shown to be inefficient. An improvement of the TP/MC method is also described and supported by results, and further modifications of the methods are suggested. Based on the results, the method is recommended for similar problems; simulation selection, tests, and evaluation schemes are proposed, and further research topics are highlighted. A computer-aided implementation of the procedure covering all simulation steps could be developed in the future to increase efficiency. The results of this study and follow-up investigations could be relevant to a wide range of problems in mining, the petroleum industry, geotechnics, and hydrogeology.
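The indicator coding and experimental indicator variograms described in this abstract can be sketched in a few lines. This is a generic one-dimensional illustration assuming regularly spaced samples along a single borehole, not the thesis's implementation; the function names are hypothetical:

```python
import numpy as np

def indicator(categories, cat):
    """Indicator transform: 1.0 where the soil category is present, else 0.0."""
    return (np.asarray(categories) == cat).astype(float)

def experimental_semivariogram(z, max_lag):
    """1-D experimental semivariogram for regularly spaced data:
    gamma(h) = mean((z[i+h] - z[i])**2) / 2 for h = 1..max_lag."""
    z = np.asarray(z, dtype=float)
    return [0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in range(1, max_lag + 1)]
```

Applied to the indicator series of one category, the semivariogram values quantify how quickly the presence/absence pattern decorrelates with distance, which is the input to the indicator kriging step.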
APA, Harvard, Vancouver, ISO, and other styles
49

Laisi, Elton. "Development of a flood-frequency model for the river basins of the Central Region of Malawi as a tool for engineering design and disaster preparedness in flood-prone areas." Diss., 2016. http://hdl.handle.net/10500/23597.

Full text
Abstract:
Since 1971, a number of flood frequency models have been developed for river basins in Malawi for use in the design of hydraulic structures, but the varied nature of their results has often left the design engineer with a dilemma, owing to differences in the magnitudes of the floods calculated for given return periods. None of the flood frequency analysis methods developed in the country so far has applied a homogeneity test to the river basins from which the hydrological data were obtained. This study was therefore conducted with a view to resolving this problem and hence improving the design of hydraulic structures such as culverts, bridges, water intake points for irrigation schemes, and flood protection dykes. In light of the above, the applicability of existing methods to the design of hydraulic structures was assessed during the course of this study. The study also investigated how land-use and land-cover change influence the frequency and magnitude of floods in the study area, and how their deleterious impacts on the socio-economic and natural environment in the river basins could be mitigated.
Environmental Sciences
M. Sc. (Environmental Management)
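A common building block of flood-frequency work like that described above is fitting an extreme-value distribution to annual peak flows and reading off design floods for chosen return periods. The sketch below shows a method-of-moments Gumbel fit, one standard textbook option rather than the model developed in this dissertation; the function name and sample data are invented:

```python
import math
import statistics

def gumbel_flood_quantile(annual_peaks, T):
    """T-year flood estimate from a method-of-moments Gumbel fit.
    beta and mu are the Gumbel scale and location parameters."""
    mean = statistics.mean(annual_peaks)
    sd = statistics.stdev(annual_peaks)
    beta = math.sqrt(6) * sd / math.pi
    mu = mean - 0.5772 * beta            # 0.5772 ~ Euler-Mascheroni constant
    y = -math.log(-math.log(1 - 1 / T))  # Gumbel reduced variate
    return mu + beta * y
```

Longer return periods map to larger reduced variates, so the 100-year design flood always exceeds the 10-year flood from the same fitted record; disagreement between models of this kind for the same return period is exactly the dilemma the dissertation addresses.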
APA, Harvard, Vancouver, ISO, and other styles
50

Rossouw, Pieter Johannes. "A qualitative evaluation of self-motivation in a measure of Trait Emotional Intelligence." Diss., 2014. http://hdl.handle.net/10500/14495.

Full text
Abstract:
In this study, the author provided a discussion of international cross-cultural validation studies which reported low internal consistency reliabilities for the self-motivation facet of the Trait Emotional Intelligence Questionnaire (TEIQue). A review of salient models of emotional intelligence (EI) revealed that self-motivation was consistently conceptualised as part of the sampling domain of trait and mixed models of EI, but not ability-based conceptualisations of the construct. The author provided a qualitative evaluation of the ten self-motivation test items as they appeared in the TEIQue with the purpose of exploring the operationalisation of the construct in a multi-cultural South African sample. The exploratory-descriptive research was conducted amongst permanent employees who have all completed the TEIQue as part of on-going employee assessments. The present study found limited support for a satisfactory operationalisation of the self-motivation facet of the TEIQue as it related to a multi-cultural South African research sample.
Psychology
M.A. (Psychology)
APA, Harvard, Vancouver, ISO, and other styles