
Dissertations / Theses on the topic 'ANOVA'

Consult the top 50 dissertations / theses for your research on the topic 'ANOVA.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ozbozkurt, Pelin. "Bayesian Inference In Anova Models." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611532/index.pdf.

Full text
Abstract:
Estimation of location and scale parameters from a random sample of size n is of paramount importance in statistics. An estimator is called fully efficient if it attains the Cramer-Rao minimum variance bound besides being unbiased. The method that yields such estimators, at any rate for large n, is the method of modified maximum likelihood estimation. Such estimators cannot be made more efficient by using sample-based classical methods. That makes room for the Bayesian method of estimation, which combines prior distributions and likelihood functions. A formal combination of the prior knowledge and the sample information is called the posterior distribution. The posterior distribution is maximized with respect to the unknown parameter(s); that gives the HPD (highest probability density) estimator(s). Locating the maximum of the posterior distribution is, however, enormously difficult (computationally and analytically) in most situations. To alleviate these difficulties, we use the modified likelihood function in the posterior distribution instead of the likelihood function. We derive the HPD estimators of location and scale parameters of distributions in the Generalized Logistic family. We extend the work to experimental design, namely one-way ANOVA, and obtain the HPD estimators of the block effects and the scale parameter (in the distribution of errors); they have beautiful algebraic forms. We show that they are highly efficient, and we give real-life examples to illustrate the usefulness of our results. Thus, the enormous computational and analytical difficulties with the traditional Bayesian method of estimation are circumvented, at any rate in the context of experimental design.
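The computational idea at the heart of this abstract, combining a prior with a likelihood and locating the maximum of the resulting posterior, can be sketched numerically. The following toy example does this for a normal location parameter with a normal prior and known scale; the data, prior parameters, and grid are my own illustrative assumptions, not the thesis's modified-likelihood method for the Generalized Logistic family.

```python
import math
import random

random.seed(1)

# Toy data: a normal sample with true location 5 and known scale 2 (illustrative only).
data = [random.gauss(5.0, 2.0) for _ in range(50)]

def log_posterior(mu, data, prior_mean=0.0, prior_sd=10.0, sigma=2.0):
    """Log posterior for a normal location mu: normal prior times normal likelihood."""
    log_prior = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
    log_lik = sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)
    return log_prior + log_lik

# Locate the posterior maximum (the HPD point estimate) by a simple grid search.
grid = [i / 100 for i in range(0, 1001)]   # mu in [0, 10], step 0.01
hpd = max(grid, key=lambda m: log_posterior(m, data))
```

With a nearly flat prior, the HPD estimate lands close to the sample mean, shrunk slightly toward the prior mean; in higher dimensions this grid search becomes infeasible, which is exactly the difficulty the abstract says the modified likelihood is meant to alleviate.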
APA, Harvard, Vancouver, ISO, and other styles
2

Halldestam, Markus. "ANOVA - The Effect of Outliers." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-295864.

Full text
Abstract:
This bachelor’s thesis focuses on the effect of outliers on the one-way analysis of variance and examines whether the estimates in ANOVA are robust and whether the test itself is robust to the influence of extreme outliers. The robustness of the estimates is examined using the breakdown point, while the robustness of the test is examined by simulating the hypothesis test under some extreme situations. This study finds evidence that the estimates in ANOVA are sensitive to outliers, i.e. that the procedure is not robust. Samples with a larger proportion of extreme outliers have a higher Type I error probability than the nominal level.
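The breakdown-point argument can be illustrated directly: a single arbitrarily large outlier can drag the sample mean (the estimator underlying ANOVA's group effects) anywhere, while the median barely moves. A minimal sketch with made-up numbers:

```python
import statistics

# Ten well-behaved observations near 5 (illustrative values).
clean = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3, 5.1, 4.9]
# Replace one observation with an extreme outlier.
contaminated = clean[:-1] + [500.0]

# The mean has breakdown point 0: one bad point moves it without bound.
mean_shift = abs(statistics.mean(contaminated) - statistics.mean(clean))
# The median has breakdown point 1/2: one bad point barely moves it.
median_shift = abs(statistics.median(contaminated) - statistics.median(clean))
```

Here `mean_shift` is roughly 50 while `median_shift` is a few hundredths, which is the sense in which the ANOVA estimates are not robust.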
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Hangcheng. "Comparing Welch's ANOVA, a Kruskal-Wallis test and traditional ANOVA in case of Heterogeneity of Variance." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3985.

Full text
Abstract:
Analysis of variance (ANOVA) is robust against violations of the normality assumption, but it may be inappropriate when the assumption of homogeneity of variance is violated. Welch's ANOVA and the Kruskal-Wallis test (a non-parametric method) are applicable in this case. In this study we compare the three methods in terms of empirical Type I error rate and power when heterogeneity of variance occurs, and determine which method is most suitable in which cases, including balanced/unbalanced designs, small/large sample sizes, and normal/non-normal distributions.
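For reference, the two parametric statistics compared in this thesis can be computed by hand. The sketch below implements the classical one-way F statistic and Welch's heteroscedasticity-robust F statistic in plain Python; the small dataset is illustrative, not from the thesis.

```python
def classic_f(groups):
    """Classical one-way ANOVA F statistic (assumes equal group variances)."""
    k = len(groups)
    ns = [len(g) for g in groups]
    n = sum(ns)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(ni * (xb - grand) ** 2 for ni, xb in zip(ns, means))
    ssw = sum(sum((x - xb) ** 2 for x in g) for g, xb in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (n - k))

def welch_f(groups):
    """Welch's one-way F statistic, which does not pool the group variances."""
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [sum(g) / len(g) for g in groups]
    variances = [sum((x - xb) ** 2 for x in g) / (len(g) - 1)
                 for g, xb in zip(groups, means)]
    w = [ni / v for ni, v in zip(ns, variances)]      # precision weights
    sw = sum(w)
    xw = sum(wi * xb for wi, xb in zip(w, means)) / sw  # weighted grand mean
    a = sum(wi * (xb - xw) ** 2 for wi, xb in zip(w, means)) / (k - 1)
    tmp = sum((1 - wi / sw) ** 2 / (ni - 1) for wi, ni in zip(w, ns))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    return a / b

groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

On this balanced, equal-variance toy data the classical F is exactly 3 and Welch's F is slightly smaller (18/7); the two statistics diverge when the group variances differ, which is the situation the thesis studies.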
APA, Harvard, Vancouver, ISO, and other styles
4

Prosser, Robert James. "Robustness of multivariate mixed model ANOVA." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25511.

Full text
Abstract:
In experimental or quasi-experimental studies in which a repeated measures design is used, it is common to obtain scores on several dependent variables on each measurement occasion. Multivariate mixed model (MMM) analysis of variance (Thomas, 1983) is a recently developed alternative to the MANOVA procedure (Bock, 1975; Timm, 1980) for testing multivariate hypotheses concerning effects of a repeated factor (called occasions in this study) and interaction between repeated and non-repeated factors (termed group-by-occasion interaction here). If a condition derived by Thomas (1983), multivariate multi-sample sphericity (MMS), regarding the equality and structure of orthonormalized population covariance matrices is satisfied (given multivariate normality and independence for distributions of subjects' scores), valid likelihood-ratio MMM tests of group-by-occasion interaction and occasions hypotheses are possible. To date, no information has been available concerning actual (empirical) levels of significance of such tests when the MMS condition is violated. This study was conducted to begin to provide such information. Departure from the MMS condition can be classified into three types— termed departures of types A, B, and C respectively: (A) the covariance matrix for population ℊ (ℊ = 1,...G), when orthonormalized, has an equal-diagonal-block form but the resulting matrix for population ℊ is unequal to the resulting matrix for population ℊ' (ℊ ≠ ℊ'); (B) the G populations' orthonormalized covariance matrices are equal, but the matrix common to the populations does not have equal-diagonal-block structure; or (C) one or more populations has an orthonormalized covariance matrix which does not have equal-diagonal-block structure and two or more populations have unequal orthonormalized matrices. 
In this study, Monte Carlo procedures were used to examine the effect of each type of violation in turn on the Type I error rates of multivariate mixed model tests of group-by-occasion interaction and occasions null hypotheses. For each form of violation, experiments modelling several levels of severity were simulated. In these experiments: (a) the number of measured variables was two; (b) the number of measurement occasions was three; (c) the number of populations sampled was two or three; (d) the ratio of average sample size to number of measured variables was six or 12; and (e) the sample size ratios were 1:1 and 1:2 when G was two, and 1:1:1 and 1:1:2 when G was three. In experiments modelling violations of types A and C, the effects of negative and positive sampling were studied. When type A violations were modelled and samples were equal in size, actual Type I error rates did not differ significantly from nominal levels for tests of either hypothesis except under the most severe level of violation. In type A experiments using unequal groups in which the largest sample was drawn from the population whose orthogonalized covariance matrix has the smallest determinant (negative sampling), actual Type I error rates were significantly higher than nominal rates for tests of both hypotheses and for all levels of violation. In contrast, empirical levels of significance were significantly lower than nominal rates in type A experiments in which the largest sample was drawn from the population whose orthonormalized covariance matrix had the largest determinant (positive sampling). Tests of both hypotheses tended to be liberal in experiments which modelled type B violations. No strong relationships were observed between actual Type I error rates and any of: severity of violation, number of groups, ratio of average sample size to number of variables, and relative sizes of samples. 
In equal-groups experiments modelling type C violations in which the orthonormalized pooled covariance matrix departed at the more severe level from equal-diagonal-block form, actual Type I error rates for tests of both hypotheses tended to be liberal. Findings were more complex under the less severe level of structural departure. Empirical significance levels did not vary with the degree of interpopulation heterogeneity of orthonormalized covariance matrices. In type C experiments modelling negative sampling, tests of both hypotheses tended to be liberal. Degree of structural departure did not appear to influence actual Type I error rates but degree of interpopulation heterogeneity did. Actual Type I error rates in type C experiments modelling positive sampling were apparently related to the number of groups. When two populations were sampled, both tests tended to be conservative, while for three groups, the results were more complex. In general, under all types of violation the ratio of average group size to number of variables did not greatly affect actual Type I error rates. The report concludes with suggestions for practitioners considering use of the MMM procedure based upon the findings and recommends four avenues for future research on Type I error robustness of MMM analysis of variance. The matrix pool and computer programs used in the simulations are included in appendices.
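The Monte Carlo logic used throughout this study, simulating data under a true null hypothesis, applying the test, and recording how often it rejects, is generic. A stripped-down sketch for a pooled two-sample comparison is given below; it uses a normal-approximation critical value of 1.96, and the sample sizes and standard deviations are illustrative assumptions, not the thesis's multivariate mixed model design. The second call mimics the "negative sampling" idea: the larger variance sits in the smaller group, which makes the pooled test liberal.

```python
import math
import random

def pooled_t(x, y):
    """Pooled two-sample t statistic (assumes equal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def type1_rate(nx, ny, sdx, sdy, reps=2000, crit=1.96, seed=42):
    """Estimate the empirical Type I error rate under a true null hypothesis."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, sdx) for _ in range(nx)]
        y = [rng.gauss(0.0, sdy) for _ in range(ny)]
        if abs(pooled_t(x, y)) > crit:
            rejections += 1
    return rejections / reps

ok_rate = type1_rate(30, 30, 1.0, 1.0)    # assumptions met: near the nominal .05
bad_rate = type1_rate(10, 40, 3.0, 1.0)   # large variance in the small group: liberal
```

The estimated rate stays near the nominal level when the assumptions hold and inflates sharply under the unfavourable pairing, which is the same qualitative pattern the thesis reports for its MMM tests.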
Faculty of Education
Department of Educational and Counselling Psychology, and Special Education (ECPS)
Graduate
APA, Harvard, Vancouver, ISO, and other styles
5

Storm, Christine. "Permutation procedures for ANOVA, regression and PCA." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/24960.

Full text
Abstract:
Parametric methods are effective and appropriate when data sets are obtained by well-defined random sampling procedures, the population distribution for responses is well-defined, the null sampling distributions of suitable test statistics do not depend on any unknown entity and well-defined likelihood models are provided for by nuisance parameters. Permutation testing methods, on the other hand, are appropriate and unavoidable when distribution models for responses are not well specified, nonparametric or depend on too many nuisance parameters; when ancillary statistics in well-specified distributional models have a strong influence on inferential results or are confounded with other nuisance entities; when the sample sizes are less than the number of parameters; and when data sets are obtained by ill-specified selection-bias procedures. In addition, permutation tests are useful not only when parametric tests are not possible, but also when more importance needs to be given to the observed data set than to the population model, as is typical for example in biostatistics. The different types of permutation methods for analysis of variance, multiple linear regression and principal component analysis are explored. More specifically, one-way, two-way and three-way ANOVA permutation strategies will be discussed. Approximate and exact permutation tests for the significance of one or more regression coefficients in a multiple linear regression model will be explained next, and lastly, the use of permutation tests as a means to validate and confirm the results obtained from the exploratory PCA will be described.
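A minimal version of the one-way ANOVA permutation strategy discussed here: permute the group labels, recompute a test statistic, and take the p-value as the proportion of permuted statistics at least as large as the observed one. The between-group sum of squares is used as the statistic because it is equivalent to F under permutation (the total sum of squares is fixed). The data are illustrative.

```python
import random

def ss_between(values, labels, k):
    """Between-group sum of squares, permutation-equivalent to the F statistic."""
    grand = sum(values) / len(values)
    ssb = 0.0
    for g in range(k):
        grp = [v for v, lab in zip(values, labels) if lab == g]
        ssb += len(grp) * (sum(grp) / len(grp) - grand) ** 2
    return ssb

def perm_anova_p(groups, n_perm=999, seed=0):
    """Approximate permutation p-value for a one-way layout."""
    rng = random.Random(seed)
    values = [v for g in groups for v in g]
    labels = [i for i, g in enumerate(groups) for _ in g]
    observed = ss_between(values, labels, len(groups))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)                     # random relabelling of observations
        if ss_between(values, labels, len(groups)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)            # include the observed arrangement

separated = [[1, 2, 1, 2, 1], [5, 6, 5, 6, 5], [9, 10, 9, 10, 9]]
p = perm_anova_p(separated)
```

For these clearly separated groups almost no relabelling reproduces a statistic as large as the observed one, so the approximate p-value is near its lower bound of 1/(n_perm + 1).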
Dissertation (MSc)--University of Pretoria, 2012.
Statistics
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Gang. "A New Approach to ANOVA Methods for Autocorrelated Data." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461226897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lind, Ingela. "Regressor and Structure Selection : Uses of ANOVA in System Identification." Doctoral thesis, Linköping : Linköpings universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hammi, Malik, and Ahmet Hakan Akdeve. "Poweranalys : bestämmelse av urvalsstorlek genom linjära mixade modeller och ANOVA." Thesis, Linköpings universitet, Statistik och maskininlärning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149026.

Full text
Abstract:
In research involving experiments on humans and animals, it is important to determine in advance how many observations are needed to detect any group effects, in order to save time and costs. This can be examined through power analysis, which determines a sample size sufficient to detect an effect in a study. Power is the probability of rejecting the null hypothesis when the null hypothesis is false. Mälardalen University and Karolinska Institutet have jointly conducted a study (CLEAR, the Climate Friendly and Ecological Food on Microbiota study) based on individuals' dietary intake. Each individual was assigned a specific diet for 8 weeks, with the purpose of examining whether emissions of carbon dioxide, CO2, differ depending on the diet the individual follows. There are two groups, one treatment group and one control group. Individuals in the treatment group follow a climatarian diet, while individuals in the control group follow a conventional diet. Each individual was followed for 8 weeks in total, with three measurement occasions 4 weeks apart: a baseline assessment, a midline assessment and an end assessment. The CLEAR study includes 18 individuals in total, 9 in each group. This number of individuals is not enough to reach statistical significance in a test, and the required sample size was therefore examined through power analysis. Since every individual has three measurement occasions, the data were modelled with mixed-design ANOVA and linear mixed models, two methods that account for each individual's repeated measurements. The models describing the data were then used in the computations of sample size and power. All analyses were done in the programming language R, based on means and standard deviations from the study and on the fitted models.
Sample sizes and power were computed for two different linear mixed models and one ANOVA model. The linear mixed models required fewer individuals than ANOVA for a desired power of 80 percent: 24 individuals in total were required by the linear mixed model with the factors group, time and id and the covariate sex, while 42 individuals were required by the ANOVA including the variables id, group and time.
In research involving trials on humans and animals, one wants to ensure an appropriate sample size in order to save time and costs while achieving a desired statistical power. Mälardalen University and Karolinska Institutet have carried out a pilot study (CLEAR) that examines people's carbon dioxide emissions in relation to diet. Each individual in the study was instructed to follow either a climate-friendly or a conventional diet for a total of 8 weeks. The individuals were followed up at 4-week intervals, resulting in three measurement occasions, including a baseline measurement. The CLEAR study contains variables on the individuals' sex, age, diet, and intake of macro- and micronutrients. There are nine individuals in each group, the groups being the climate group and the control group. The total number of individuals in the pilot study is too small to obtain statistical significance in statistical tests, and the sample size should therefore be examined through power calculations. The power calculated is the probability of rejecting the null hypothesis when it is false. To compute sample sizes, models must be built from the structure of the data, which is done with the methods mixed-design ANOVA and linear mixed models. These methods take into account that each individual has more than one measurement. The models describing the data are applied in the power calculations. The sample sizes and power calculations are simulation-based and were carried out in the programming language R, based on the models and on values from the pilot study. Power and sample sizes were computed for two linear mixed models and one ANOVA. The linear mixed models require fewer individuals than ANOVA for a desired power of 80 percent. The linear mixed model requiring the fewest individuals needed 24 in total, while the mixed-design ANOVA required 42 individuals in total.
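The simulation-based power calculation described in this abstract can be sketched in miniature. The version below uses a simple two-independent-group comparison with a known standard deviation and a normal-approximation critical value rather than the study's mixed models; the effect size, standard deviation, and sample sizes are illustrative assumptions.

```python
import math
import random

def simulated_power(n_per_group, delta, sd=1.0, reps=1000, crit=1.96, seed=3):
    """Estimate power: the rejection rate when the true group difference is delta."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sd) for _ in range(n_per_group)]
        b = [rng.gauss(delta, sd) for _ in range(n_per_group)]
        se = math.sqrt(2 * sd ** 2 / n_per_group)   # known-sd z-test for simplicity
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        if abs(z) > crit:
            rejections += 1
    return rejections / reps

# Power grows with sample size; scan n to see where it crosses 80 percent.
powers = {n: simulated_power(n, delta=0.9) for n in (10, 20, 30)}
```

Scanning a grid of sample sizes and taking the smallest one whose estimated power exceeds 0.80 is the same sample-size-determination logic the thesis applies with its fitted models.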
APA, Harvard, Vancouver, ISO, and other styles
9

Adnan, Arisman. "Analysis of taste-panel data using ANOVA and ordinal logistic regression." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Yuan. "Mixed ANOVA model analysis of microarray experiments with locally pooled error /." Electronic version (PDF), 2004. http://dl.uncw.edu/etd/2004/liuy/yuanliu.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Makhanya, Nhlanhla Well-Beloved. "Alternative methods to parametric significance testing in linear regression and ANOVA." Diss., University of Pretoria, 2015. http://hdl.handle.net/2263/53516.

Full text
Abstract:
The aim of the study was to survey permutation tests, bootstrapping and jackknife methods and their application to significance testing of regression coefficients in linear regression analysis. A Monte Carlo simulation study was performed in order to compare the different methods in terms of empirical probability of Type I error, power of a test and confidence intervals, where coverage and average length of the confidence interval were used as measures of comparison. The empirical probability of Type I error and power of a test were used to compare permutation tests, bootstrapping and parametric methods, while the confidence intervals were used to compare the jackknife, the bootstrap and the parametric method. These comparisons were performed in order to investigate the effect of (1) sample size; (2) errors that are normally, uniformly and lognormally distributed; (3) 1, 2 and 5 explanatory variables; and (4) correlation coefficients between the explanatory variables of 0, 0.5 and 0.9. The results obtained from the Monte Carlo simulation study showed that permutation and bootstrap methods produced similar Type I error results, while the parametric methods understated the probability of Type I error when errors are lognormally distributed. In the absence of multicollinearity all the methods were almost equally powerful, and in the presence of multicollinearity they all suffered equally in terms of power. The jackknife produced poor results in terms of the 100(1 − α)% confidence interval, while the bootstrap produced reasonable results, especially for larger sample sizes. An improvement was observed under the jackknife method when percentile-based intervals were applied. It was concluded that permutation tests as well as bootstrap methods are good alternative methods to use in significance testing in regression and ANOVA.
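A compact version of the bootstrap applied to significance testing of a regression coefficient, as surveyed here: resample cases with replacement, refit the least-squares slope each time, and check whether the percentile confidence interval excludes zero. The data and all parameter values are illustrative.

```python
import random

def ols_slope(pairs):
    """Least-squares slope for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    return sxy / sxx

rng = random.Random(11)
# Simulated data with true slope 2 and standard normal errors.
pairs = [(x, 2.0 * x + rng.gauss(0.0, 1.0)) for x in range(20)]

# Case-resampling bootstrap: refit the slope on each resample.
boot = sorted(
    ols_slope([rng.choice(pairs) for _ in pairs]) for _ in range(2000)
)
lo, hi = boot[49], boot[1949]   # approximate 95% percentile interval
slope = ols_slope(pairs)
```

Since zero lies outside the interval `(lo, hi)`, the bootstrap analogue of the significance test rejects the hypothesis that the slope is zero.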
Dissertation (MSc)--University of Pretoria, 2015.
Statistics
MSc
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
12

Ning, Wei. "A new approach to test for interactions in two-way ANOVA models." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2006. http://proquest.umi.com/login?COPT=REJTPTU0NWQmSU5UPTAmVkVSPTI=&clientId=3739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Jordaan, Aletta Gertruida. "Empirical Bayes estimation of the extreme value index in an ANOVA setting." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86216.

Full text
Abstract:
Thesis (MComm)-- Stellenbosch University, 2014.
ENGLISH ABSTRACT: Extreme value theory (EVT) involves the development of statistical models and techniques in order to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature. However, these estimators are only applicable in the single-sample setting. The aim of this study is to obtain an improved estimator of the EVI that is applicable to an ANOVA setting. An ANOVA setting lends itself naturally to empirical Bayes (EB) estimators, which are the main estimators under consideration in this study. EB estimators have not received much attention in the literature. The study begins with a literature review, covering the areas of application of EVT, Bayesian theory and EB theory. Different estimation methods of the EVI are discussed, focusing also on possible methods of determining the optimal threshold. Specifically, two adaptive methods of threshold selection are considered. A simulation study is carried out to compare the performance of different estimation methods, applied only in the single-sample setting. First-order and second-order estimation methods are considered. In the case of second-order estimation, possible methods of estimating the second-order parameter are also explored. With regard to obtaining an estimator that is applicable to an ANOVA setting, a first-order EB estimator and a second-order EB estimator of the EVI are derived. A case study of five insurance claims portfolios is used to examine whether the two EB estimators improve the accuracy of estimating the EVI, when compared to viewing the portfolios in isolation. The results showed that the first-order EB estimator performed better than the Hill estimator.
However, the second-order EB estimator did not perform better than the "benchmark" second-order estimator, namely fitting the perturbed Pareto distribution to all observations above a pre-determined threshold by means of maximum likelihood estimation.
AFRIKAANS SUMMARY: Extreme value theory (EVT) involves the development of statistical models and techniques used to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature, but these estimators are only applicable in the single-sample setting. The aim of this study is to obtain a more accurate estimator of the EVI that is applicable in an ANOVA setting. An ANOVA setting lends itself to the use of empirical Bayes (EB) estimators, which are the focus of this study. These estimators have not yet been investigated in the literature. The study begins with a literature review covering the areas of application of EVT, Bayesian theory and EB theory. Different methods of EVI estimation are discussed, including a discussion of how the optimal threshold can be determined. Specifically, two adaptive methods of threshold selection are considered. A simulation study was carried out to compare the estimation accuracy of different estimation methods in the single-sample setting. First-order and second-order estimation methods are considered. In the case of second-order estimation, possible methods of estimating the second-order parameter are also investigated. A first-order and a second-order EB estimator of the EVI were derived with the aim of obtaining an estimator applicable to the ANOVA setting. A case study of five insurance portfolios is used to investigate whether the two EB estimators improve the accuracy of estimating the EVI, compared with the EVI estimators obtained by analysing the portfolios separately. The results show that the first-order EB estimator performed better than the Hill estimator.
The second-order EB estimator, however, performed worse than the second-order estimator used as the benchmark, namely fitting the perturbed Pareto distribution (PPD) to all observations above a given threshold by means of maximum likelihood estimation.
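The Hill estimator, which serves as the first-order benchmark in this thesis, has a short closed form: the average log-excess of the k largest observations over the (k+1)-th largest. A sketch on simulated Pareto data with true EVI 0.5 follows; the sample size, the choice of k, and the seed are arbitrary illustrations, and in practice k would be chosen by a threshold-selection method such as those the thesis considers.

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimate of the extreme value index from the k largest order statistics."""
    xs = sorted(sample)
    n = len(xs)
    log_threshold = math.log(xs[n - k - 1])          # log of the (k+1)-th largest value
    return sum(math.log(xs[n - i]) for i in range(1, k + 1)) / k - log_threshold

rng = random.Random(7)
alpha = 2.0                                          # Pareto tail index; true EVI = 1/alpha = 0.5
# Inverse-transform sampling from a Pareto(alpha) distribution.
data = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(2000)]
gamma_hat = hill_estimator(data, k=200)
```

For exact Pareto data the Hill estimator is consistent, so the estimate lands near the true value 0.5 with sampling error on the order of the EVI divided by the square root of k.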
APA, Harvard, Vancouver, ISO, and other styles
14

Fonseca, Monique Regina. "Influência do Ângulo de Pitch no Desempenho de um Aerogerador de Pequeno Porte Projetado com o Perfil Aerodinâmico NREL S809." Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9031.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Pitch angle control is a widely used technique for controlling the aerodynamic response of a horizontal-axis wind turbine (HAWT). Although pitch control is found almost exclusively on large HAWTs, its use on small ones cannot be disregarded, despite the fact that the electronic means for this function tend to increase the cost of the turbine. This type of control is of great importance for avoiding structural problems in the turbine caused by high wind speeds, or for setting the turbine's tip speed ratio to its design value. This work aims to provide a comparative study of the influence of pitch angle variation on the performance of a small wind turbine, through a comparative analysis of experimental data. A blade of radius 1.5 m with the NREL S809 airfoil was designed using concepts based on blade element momentum (BEM) theory, under the assumptions of variable-speed operation and a design tip speed ratio equal to 7. The geometric parameters of taper and twist were defined for the construction of the blade, which was used in the assembly of an experimental prototype. The prototype, together with a data acquisition system, was used in field tests in order to obtain experimental operating data at different pitch angles. These data were used in a comparative statistical analysis, based on analysis of variance and tests of means, to evaluate the turbine's performance. This analysis showed that in low tip-speed-ratio intervals no differences in turbine performance were observed, whereas in the tip-speed-ratio intervals closest to the design value there was a statistically significant difference between means, which means, from a physical point of view, that variation in the pitch angle affects the turbine's performance.
It was also observed that the power coefficient for operation at a pitch angle of 30.1° was higher at low tip-speed-ratio values, which demonstrates the need for a pitch control mechanism for operation at lower wind speeds and rotational speeds.
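Two quantities central to this analysis, the tip speed ratio and the power coefficient, are simple to compute. The sketch below uses the blade radius (1.5 m) and design tip speed ratio (7) from the abstract; the air density, wind speed, and output power are illustrative assumptions, not measured data from the thesis.

```python
import math

RHO = 1.225      # air density in kg/m^3 (sea-level standard value; assumption)
R = 1.5          # blade radius from the abstract, in m

def tip_speed_ratio(omega, wind_speed):
    """Lambda = omega * R / v: blade tip speed over free-stream wind speed."""
    return omega * R / wind_speed

def power_coefficient(power, wind_speed):
    """Cp = P / (0.5 * rho * A * v^3): the fraction of available wind power captured."""
    area = math.pi * R ** 2                  # rotor swept area
    return power / (0.5 * RHO * area * wind_speed ** 3)

v = 8.0                            # illustrative wind speed, m/s
omega = 7 * v / R                  # rotor speed that achieves the design lambda of 7
cp = power_coefficient(900.0, v)   # 900 W is an illustrative output power
```

Any physically plausible Cp must fall below the Betz limit of 16/27, which is a useful sanity check when comparing measured performance across pitch angles.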
APA, Harvard, Vancouver, ISO, and other styles
15

Zhang, Anquan. "An Empirical Comparison of Four Data Generating Procedures in Parametric and Nonparametric ANOVA." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/329.

Full text
Abstract:
The purpose of this dissertation was to empirically investigate the Type I error and power rates of four data transformations that produce a variety of non-normal distributions. Specifically, the transformations investigated were (a) the g-and-h, (b) the generalized lambda distribution (GLD), (c) the power method, and (d) the Burr families of distributions in the context of between-subjects and within-subjects analysis of variance (ANOVA). The traditional parametric F tests and their nonparametric counterparts, the Kruskal-Wallis (KW) and Friedman (FR) tests, were selected to be used in this investigation. The four data transformations produce non-normal distributions that have either valid or invalid probability density functions (PDFs). Specifically, the data generating procedures will produce distributions with valid PDFs if and only if the transformations are strictly increasing - otherwise the distributions are considered to be associated with invalid PDFs. As such, the primary objective of this study was to isolate and investigate the behaviors of the four data transformation procedures themselves while holding all other conditions constant (i.e., sample sizes, effect sizes, correlation levels, skew, kurtosis, random seed numbers, etc. all remain the same). The overall results of the Monte Carlo study generally suggest that when the distributions have valid probability density functions (PDFs) that the Type I error and power rates for the parametric (or nonparametric) tests were similar across all four data transformations. It is noted that there were some dissimilar results when the distributions were very skewed and near their associated boundary conditions for a valid PDF. These dissimilarities were most pronounced in the context of the KW and FR tests. 
In contrast, when the four transformations produced distributions with invalid PDFs, the Type I error and power rates were more frequently dissimilar for both the parametric F and nonparametric (KW, FR) tests. The dissimilarities were most pronounced when the distributions were skewed and heavy-tailed. For example, in the context of a parametric between-subjects design, four groups of data were generated with (a) sample sizes of 10, (b) a standardized effect size of 0.50 between groups, (c) skew of 2.5 and kurtosis of 60, (d) power method transformations generating distributions with invalid PDFs, and (e) g-and-h and GLD transformations both generating distributions with valid PDFs. The power results associated with the power method transformation showed that the F-test (KW test) was rejecting at a rate of .32 (.86). On the other hand, the power results associated with both the g-and-h and GLD transformations showed that the F-test (KW test) was rejecting at a rate of approximately .19 (.26). The primary recommendation of this study is that researchers conducting Monte Carlo studies in the context described herein should use data transformation procedures that produce valid PDFs. This recommendation is important to the extent that researchers using transformations that produce invalid PDFs increase the likelihood of limiting their study to the data generating procedure being used, i.e., Type I error and power results may be substantially disparate between different procedures. Further, it is also recommended that g-and-h, GLD, Burr, and fifth-order power method transformations be used if it is desired to generate distributions with extreme skew and/or heavy tails, whereas third-order polynomials should be avoided in this context.
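Of the four data-generating procedures studied, the g-and-h transformation is the most compact to state: a standard normal deviate Z is mapped to X = ((exp(gZ) - 1)/g) * exp(hZ^2/2), where g controls skew and h controls tail weight (and the transformation yields a valid PDF when it is strictly increasing). A sketch follows; the particular g and h values are arbitrary illustrations.

```python
import math
import random
import statistics

def g_and_h(z, g, h):
    """Tukey g-and-h transform of a standard normal deviate (form for g != 0)."""
    return (math.exp(g * z) - 1.0) / g * math.exp(h * z * z / 2.0)

rng = random.Random(5)
# g = 0.5 induces right skew; h = 0.1 thickens the tails (illustrative choices).
xs = [g_and_h(rng.gauss(0.0, 1.0), g=0.5, h=0.1) for _ in range(5000)]
```

Because the transform is monotone here, the median of X is the image of the median of Z (that is, 0), while the positive g drags the mean above the median, producing the skewed inputs these Monte Carlo comparisons feed to the F, KW, and FR tests.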
APA, Harvard, Vancouver, ISO, and other styles
16

Carter, Bruce Jerome. "An ANOVA Analysis of Education Inequities Using Participation and Representation in Education Systems." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4274.

Full text
Abstract:
A problem recognized in the United States is that a K-12 public education in urban communities is more likely to support existing patterns of inequality than to serve as a pathway to opportunity. The specific focus of this research was the poor academic performance in U.S. K-12 urban communities. Using Benet's polarities of democracy theory as the foundation, the purpose of this correlational study was to determine which of the independent variables (enrollment rates, high school graduation rates, property tax funding rates for schools, teacher quality, and youth literacy rates) are statistically associated with quality education outcomes, using the polarities of democracy participation and representation tenets as proxy variables. Secondary data spanning a 5-year aggregate period, 2010-2015, were compared for both Massachusetts and the United States, using Germany as the benchmark. Data were acquired from the Programme for International Student Assessment of the Organisation for Economic Co-operation and Development. The total sample included 150 cases randomly selected from 240 schools in Massachusetts and 150 schools in Germany. Data were analyzed using ANOVA. The results of this study indicate a statistically significant (p < .001) pairwise association between each of the 5 independent variables and the dependent variable. The 5 independent variables had a positive statistically significant effect on education quality. The implications for social change from this study include insight and recommendations to the U.S. Department of Education into best practices for reducing educational inequality and improving educational quality as measured by achievement in the United States.
APA, Harvard, Vancouver, ISO, and other styles
17

Pumputis, Vidmantas. "Saugaus eismo sistemos ,,Eismo dalyvis - transporto priemonė - kelias (eismo aplinka)" elementų sąveikos tyrimas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070129_094108-91522.

Full text
Abstract:
The aim of this research was to examine the interaction between the elements of the traffic safety system “Traffic Participant – Vehicle – Road (Traffic Environment)” and the factors influencing it, and to provide recommendations for the improvement of traffic safety in Lithuania. The following main problems were solved in the research:
• to review the models applied for the analysis of the traffic safety system;
• to identify the key factors influencing the reliability of the traffic safety system;
• to perform reaction tests of traffic participants, identifying the driver’s reaction time in usual situations and in potentially dangerous or unforeseen situations, i.e. while talking on a mobile phone, under distraction, under headlight dazzle at night, and in other situations;
• based on mathematical calculation methods and the tests performed, to identify the factors influencing the driver’s reaction time;
• based on traffic accident data on certain main roads and by applying statistical mathematical packages, to identify the factors affecting the number of traffic accidents;
• after analyzing the factors affecting the traffic safety system, to formulate substantiated trends for the improvement of traffic safety and to propose traffic safety improvement means for these trends;
• to assess the efficiency of traffic safety improvement means.
APA, Harvard, Vancouver, ISO, and other styles
18

Soumare, Ibrahim. "Comparing Performance of ANOVA to Poisson and Negative Binomial Regression When Applied to Count Data." Thesis, North Dakota State University, 2020. https://hdl.handle.net/10365/31887.

Full text
Abstract:
Analysis of Variance (ANOVA) is among the simplest and most widely used models in statistics. ANOVA, however, requires a set of assumptions for the model to be a valid choice and for the inferences to be accurate. Among others, ANOVA assumes that the data in question are normally distributed with homogeneous variance. However, data from most disciplines do not meet the assumptions of normality and/or equal variance. Regrettably, researchers do not always check whether the assumptions are met, and if these assumptions are violated, inferences might well be wrong. We conducted a simulation study to compare the performance of standard ANOVA to Poisson and negative binomial models when applied to count data. We considered different combinations of sample sizes and underlying distributions. In this simulation study, we first assessed Type I error for each model involved. We then compared power as well as the quality of the estimated parameters across the models.
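A minimal sketch of this kind of comparison, assuming a Poisson likelihood-ratio test as the count-model competitor (negative binomial fitting is omitted here); the number of groups, sample size, and mean are illustrative values, not those of the thesis:

```python
import numpy as np
from scipy import stats

def poisson_lrt_pvalue(groups):
    """Likelihood-ratio test of equal Poisson means across groups,
    referred to a chi-square(k - 1) distribution."""
    def loglik(y, mu):
        # Poisson log-likelihood up to the constant term sum(log y!)
        return np.sum(y * np.log(mu) - mu)
    all_y = np.concatenate(groups)
    lrt = 2.0 * (sum(loglik(y, y.mean()) for y in groups)
                 - loglik(all_y, all_y.mean()))
    return stats.chi2.sf(lrt, df=len(groups) - 1)

rng = np.random.default_rng(1)
n_sim, alpha, lam, k, n = 2000, 0.05, 5.0, 3, 10
rej_f = rej_lrt = 0
for _ in range(n_sim):
    groups = [rng.poisson(lam, n) for _ in range(k)]  # null: equal means
    rej_f += stats.f_oneway(*groups).pvalue < alpha
    rej_lrt += poisson_lrt_pvalue(groups) < alpha
print(rej_f / n_sim, rej_lrt / n_sim)  # both near the nominal 0.05
```

Repeating the loop with unequal group means would give the power comparison; varying the data-generating distribution (e.g. negative binomial draws) probes robustness, as in the study.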
APA, Harvard, Vancouver, ISO, and other styles
19

Pedott, Alexandre Homsi. "Análise de dados funcionais aplicada ao estudo de repetitividade e reprodutividade : ANOVA das distâncias." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/24726.

Full text
Abstract:
Esta dissertação apresenta um método adaptado do estudo de repetitividade e reprodutibilidade para analisar a capacidade e o desempenho de sistemas de medição, no contexto da análise de dados funcionais. Dado funcional é a variável de resposta dada por uma coleção de dados que formam um perfil ou uma curva. O método adaptado contribui para o avanço do estado da arte sobre a análise de sistemas de medição. O método proposto é uma alternativa ao uso de métodos tradicionais de análise, que usados de forma equivocada, podem deteriorar a qualidade dos produtos monitorados através de variáveis de resposta funcionais. O método proposto envolve a adaptação de testes de hipótese e da análise de variância de um e dois fatores usados em comparações de populações, na avaliação de sistemas de medições. A proposta de adaptação foi baseada na utilização de distâncias entre curvas. Foi usada a Distância de Hausdorff como uma medida de proximidade entre as curvas. A adaptação proposta à análise de variância foi composta de três abordagens. Os métodos adaptados foram aplicados a um estudo simulado de repetitividade e reprodutibilidade. O estudo foi estruturado para analisar cenários em que o sistema de medição foi aprovado e reprovado. O método proposto foi denominado de ANOVA das Distâncias.
This work presents a method to analyze a measurement system's performance in a functional data analysis context, based on repeatability and reproducibility studies. Functional data are a collection of data points organized as a profile or curve. The proposed method contributes to the state of the art on measurement system analysis. The method is an alternative to traditional methods which, when used mistakenly, can lead to deterioration in the quality of products monitored through functional responses. In the proposed method we adapt hypothesis tests and one-way and two-way ANOVA for use in measurement system analysis. The method is grounded on the use of distances between curves; for that purpose, the Hausdorff distance was chosen as a measure of proximity between curves. Three ANOVA approaches were proposed and applied in a simulated repeatability and reproducibility study. The study was structured to analyze scenarios in which the measurement system was approved or rejected. The proposed method was named ANOVA of the Distances.
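The distance at the heart of the method can be sketched with SciPy's directed Hausdorff routine; the sine profiles below are hypothetical stand-ins for measured functional responses:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(curve_a, curve_b):
    """Symmetric Hausdorff distance between two curves sampled as
    (n, 2) arrays of (x, y) points."""
    return max(directed_hausdorff(curve_a, curve_b)[0],
               directed_hausdorff(curve_b, curve_a)[0])

x = np.linspace(0.0, 1.0, 200)
profile = np.column_stack([x, np.sin(2 * np.pi * x)])
shifted = np.column_stack([x, np.sin(2 * np.pi * x) + 0.1])
print(hausdorff(profile, shifted))  # ≈ 0.1: the constant vertical offset
```

An ANOVA of the distances would then be run on a matrix of such pairwise curve distances (operators × parts × replicates) rather than on scalar measurements.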
APA, Harvard, Vancouver, ISO, and other styles
20

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Agharben, El Amine. "Optimisation et réduction de la variabilité d’une nouvelle architecture mémoire non volatile ultra basse consommation." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEM013.

Full text
Abstract:
Le marché mondial des semi-conducteurs connait une croissance continue due à l'essor de l'électronique grand public et entraîne dans son sillage le marché des mémoires non volatiles. L'importance de ces produits mémoires est accentuée depuis le début des années 2000 par la mise sur le marché de produits nomades tels que les smartphones ou plus récemment les produits de l’internet des objets. De par leurs performances et leur fiabilité, la technologie Flash constitue, à l'heure actuelle, la référence en matière de mémoire non volatile. Cependant, le coût élevé des équipements en microélectronique rend impossible leur amortissement sur une génération technologique. Ceci incite l’industriel à adapter des équipements d’ancienne génération à des procédés de fabrication plus exigeants. Cette stratégie n’est pas sans conséquence sur la dispersion des caractéristiques physiques (dimension géométrique, épaisseur…) et électriques (courant, tension…) des dispositifs. Dans ce contexte, le sujet de ma thèse est d’optimiser et de réduire la variabilité d’une nouvelle architecture mémoire non volatile ultra basse consommation.Cette étude vise à poursuivre les travaux entamés par STMicroelectronics sur le développement, l’étude et la mise en œuvre de boucles de contrôle de type Run-to-Run (R2R) sur une nouvelle cellule mémoire ultra basse consommation. Afin d’assurer la mise en place d’une régulation pertinente, il est indispensable de pouvoir simuler l’influence des étapes du procédé de fabrication sur le comportement électrique des cellules en s’appuyant sur l’utilisation d’outils statistiques ainsi que sur une caractérisation électrique pointue
The global semiconductor market is experiencing steady growth driven by the development of consumer electronics, pulling the non-volatile memory market along in its wake. The importance of these memory products has been accentuated since the beginning of the 2000s by the introduction of nomadic products such as smartphones or, more recently, Internet of Things devices. Because of its performance and reliability, Flash technology is currently the standard for non-volatile memory. However, the high cost of microelectronic equipment makes it impossible to amortize it over a single technological generation. This encourages industry to adapt equipment from older generations to more demanding manufacturing processes. This strategy is not without consequence for the spread of the physical characteristics (geometric dimension, thickness ...) and electrical characteristics (current, voltage ...) of the devices. In this context, the subject of my thesis is the optimization and reduction of the variability of a new ultra-low-power non-volatile memory architecture. This study aims to continue the work begun by STMicroelectronics on the improvement, study and implementation of Run-to-Run (R2R) control loops on a new ultra-low-power memory cell. In order to ensure the implementation of a relevant regulation, it is essential to be able to simulate the influence of the manufacturing process steps on the electrical behavior of the cells, relying on statistical tools as well as on precise electrical characterization.
APA, Harvard, Vancouver, ISO, and other styles
22

Schepers, Karine Chrystel. "Quantification of uncertainty in reservoir simulations influenced by varying input geological parameters, Maria Reservoir, CaHu Field." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/1302.

Full text
Abstract:
Finding and developing oil and gas resources requires accurate geological information with which to formulate strategies for exploration and exploitation ventures. When data are scarce, statistical procedures are sometimes substituted to compensate for the lack of information about reservoir properties. The most modern methods incorporate geostatistics. Even the best geostatistical methods yield results with varying degrees of uncertainty in their solutions. Geological information is, by its nature, spatially limited and the geoscientist is handicapped in determining appropriate values for various geological parameters that affect the final reservoir model (Massonnat, 1999). This study focuses on reservoir models that depend on geostatistical methods. This is accomplished by quantifying the uncertainty in outcome of reservoir simulations as six different geological variables are changed during a succession of reservoir simulations. In this study, variations in total fluid produced are examined by numerical modeling. Causes of uncertainty in outcomes of the model runs are examined by changing one of six geological parameters for each run. The six geological parameters tested for their impact on reservoir performances include the following: 1) variogram range used to krig thickness layers, 2) morphology around well 14, 3) shelf edge orientation, 4) bathymetry ranges attributed for each facies, 5) variogram range used to simulate facies distribution, 6) extension of the erosion at top of the reservoir. The parameters were assigned values that varied from a minimum to a maximum quantity, determined from petrophysical and core analysis. After simulation runs had been completed, a realistic, 3-dimensional reservoir model was developed that revealed a range of reservoir production data. The parameters that had the most impact on reservoir performance were: 1) the amount of rock eroded at the top of the reservoir zone and 2) the bathymetry assigned to the reservoir facies. 
This study demonstrates how interactions between geological parameters influence reservoir fluid production and how variations in those parameters create uncertainties in reservoir simulations, and it highlights the interdependencies between geological variables. The analysis of variance method used to quantify uncertainty in this study was found to be rapid, accurate, and highly satisfactory for this type of study. It is recommended for future applications in the petroleum industry.
APA, Harvard, Vancouver, ISO, and other styles
23

Nakamura, Taichi. "The use of vocabulary learning strategies : the case of Japanese EFL learners in two different learning environments." Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Bourahima, Fazati. "Évolutions microstructurales et défauts générés par laser cladding lors du dépôt de Ni sur des moules de verrerie en alliage de Cu-Ni-Al et en fonte GL." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS017/document.

Full text
Abstract:
Dans l’industrie de la verrerie, le laser cladding est une technique de rechargement très innovante permettant de déposer une couche très fine d’un alliage à base de nickel pour protéger les moules (utiles à la fabrication de bouteilles en verre) de la corrosion, de l’abrasion ainsi que de la fatigue thermique. La méthode utilisée ici (fusion de poudre projetée par laser) est très courante en fabrication additive. Cette étude s’intéresse à l’impact du rechargement sur le dépôt et les substrats en Cu-Ni-Al et en fonte GL. L’influence sur la microstructure ainsi que sur le comportement mécanique a pu être étudiée (MEB et microanalyses, dureté, contraintes résiduelles ...). Nous nous sommes focalisés sur l’apparition de défauts tels que le manque d’accroche lors du laser cladding sur du Cu-Ni-Al et sur la possible fissuration lors du rechargement sur de la fonte GL. Le but est bien sûr de s’affranchir de ces défauts. Il a notamment été mis en évidence que le manque d’accroche (cas du Cu-Ni-Al) est lié à la distribution gaussienne de la poudre qui atténue la puissance incidente du laser au niveau du pic de poudre. Le manque d'accroche n'est pas détecté sur le substrat en fonte en raison de sa forte absorptivité et de sa faible conductivité thermique. Néanmoins, des fissures peuvent être observées en raison de contraintes résiduelles thermiques et de la présence d'une zone affectée thermiquement. De plus, l’analyse statistique ANOVA a permis une optimisation des paramètres de rechargement de telle sorte à obtenir une accroche dans toute la section tout en respectant les préconisations géométriques données par les Établissements CHPOLANSKY pour le cordon
In the glass industry, laser cladding is an innovative surfacing technique allowing a very thin layer of a nickel-based alloy to be deposited to protect glass molds (used for glass bottle production) against corrosion, abrasion and thermal fatigue. This method (fusion of projected powder) is well known in additive manufacturing. The aim of this work is to observe the impact of laser cladding on the coating behavior but also on the Cu-Ni-Al and flake-graphite cast iron substrates. The microstructure and the mechanical properties were studied (SEM and microanalysis, microhardness, residual stress …) around the cladding/substrate interface. The work also focused on defects such as lack of bonding, as well as on the cracking behavior during surfacing on cast iron. The purpose was to prevent those defects. This work showed that the lack of bonding to the Cu-Ni-Al substrate is due to the Gaussian distribution of the powder, which attenuates the input laser power at its peak. The lack of bonding is not detected on the cast iron substrate thanks to its high absorptivity and low thermal conductivity. Nevertheless, cracks can be observed due to thermal residual stresses and the presence of a heat-affected zone. The ANOVA technique allowed us to optimize the processing parameters in order to obtain sound bonding over the entire section while respecting the bead geometry specified by the CHPOLANSKY Establishments.
APA, Harvard, Vancouver, ISO, and other styles
25

Tissot, Jean-Yves. "Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00762800.

Full text
Abstract:
In the fields of modeling and numerical simulation, the simulators developed sometimes involve many parameters whose impact on the outputs is not always well understood. The main objective of sensitivity analysis is to help better understand how the outputs of a model are sensitive to variations of these parameters. The approach best suited to this problem in the case of potentially complex and highly nonlinear models rests on the ANOVA decomposition and the Sobol' indices. In particular, the latter make it possible to quantify the influence of each parameter on the model response. In this thesis, we are interested in the problem of estimating the Sobol' indices. In the first part, we rigorously revisit existing methods in light of discrete harmonic analysis on cyclic groups and randomized orthogonal arrays. This allows us to study the theoretical properties of these methods and to generalize them. In the second part, we consider the Monte Carlo method specific to the estimation of the Sobol' indices and introduce a new approach to improve it. This improvement is built around Latin hypercubes and reduces the number of simulations required to estimate the Sobol' indices by this method. In parallel, we apply these different methods to a marine ecosystem model.
APA, Harvard, Vancouver, ISO, and other styles
26

Tissot, Jean-yves. "Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM064/document.

Full text
Abstract:
Dans les domaines de la modélisation et de la simulation numérique, les simulateurs développés prennent parfois en compte de nombreux paramètres dont l'impact sur les sorties n'est pas toujours bien connu. L'objectif principal de l'analyse de sensibilité est d'aider à mieux comprendre comment les sorties d'un modèle sont sensibles aux variations de ces paramètres. L'approche la mieux adaptée pour appréhender ce problème dans le cas de modèles potentiellement complexes et fortement non linéaires repose sur la décomposition ANOVA et les indices de Sobol'. En particulier, ces derniers permettent de quantifier l'influence de chacun des paramètres sur la réponse du modèle. Dans cette thèse, nous nous intéressons au problème de l'estimation des indices de Sobol'. Dans une première partie, nous réintroduisons de manière rigoureuse des méthodes existantes au regard de l'analyse harmonique discrète sur des groupes cycliques et des tableaux orthogonaux randomisés. Cela nous permet d'étudier les propriétés théoriques de ces méthodes et de les généraliser. Dans un second temps, nous considérons la méthode de Monte Carlo spécifique à l'estimation des indices de Sobol' et nous introduisons une nouvelle approche permettant de l'améliorer. Cette amélioration est construite autour des hypercubes latins et permet de réduire le nombre de simulations nécessaires pour estimer les indices de Sobol' par cette méthode. En parallèle, nous mettons en pratique ces différentes méthodes sur un modèle d'écosystème marin
In the fields of modeling and numerical simulation, simulators generally depend on several input parameters whose impact on the model outputs is not always well known. The main goal of sensitivity analysis is to better understand how the model outputs are sensitive to variations of the parameters. One of the most competitive methods to handle this problem when complex and potentially highly nonlinear models are considered is based on the ANOVA decomposition and the Sobol' indices. More specifically, the latter make it possible to quantify the impact of each parameter on the model response. In this thesis, we are interested in the issue of the estimation of the Sobol' indices. In the first part, we revisit in a rigorous way existing methods in light of discrete harmonic analysis on cyclic groups and randomized orthogonal arrays. This allows us to study the theoretical properties of these methods and to introduce generalizations. In the second part, we study the Monte Carlo method for the Sobol' indices and we introduce a new approach to reduce the number of simulations this method requires. In parallel with this theoretical work, we apply these methods to a marine ecosystem model.
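A minimal sketch of a pick-freeze Monte Carlo estimator for first-order Sobol' indices (a Saltelli-type form; the additive test function and sample size below are illustrative, not taken from the thesis):

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol'
    indices of f: [0, 1]^d -> R."""
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # vary only X_i, freeze the rest
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive test model Y = X1 + 2*X2, Xi ~ U(0, 1): the analytic
# first-order indices are S1 = 1/5 and S2 = 4/5.
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
S = first_order_sobol(f, d=2, n=100_000, rng=np.random.default_rng(0))
print(S)  # ≈ [0.2, 0.8]
```

Replacing the plain uniform samples A and B with Latin hypercube samples is the kind of refinement the thesis pursues to cut the number of model evaluations.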
APA, Harvard, Vancouver, ISO, and other styles
27

King, Taylor J. "Power Analysis to Determine the Importance of Covariance Structure Choice in Mixed Model Repeated Measures Anova." Thesis, North Dakota State University, 2017. https://hdl.handle.net/10365/28656.

Full text
Abstract:
Repeated measures experiments involve multiple subjects with measurements taken on each subject over time. We used SAS to conduct a simulation study to see how different methods of analysis perform under various simulation parameters (e.g. sample size, autocorrelation, repeated measures). Our goals were to: compare the multivariate analysis of variance method using PROC GLM to the mixed model method using PROC MIXED in terms of power, determine how choosing the incorrect covariance structure for mixed model analysis affects power, and identify sample sizes needed to produce adequate power of 90 percent under different scenarios. The findings support using the mixed model method over the multivariate method because power is generally higher when using the mixed model method. Simpler covariance structures may be preferred when testing the within-subjects effect to obtain high power. Additionally, these results can be used as a guide for determining the sample size needed for adequate power.
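The data-generation side of such a study can be sketched by drawing repeated measures from multivariate normals with compound-symmetry and first-order autoregressive, AR(1), covariance structures (model fitting in the thesis used SAS PROC GLM and PROC MIXED; all parameter values here are illustrative):

```python
import numpy as np

def cs_cov(t, sigma2=1.0, rho=0.5):
    """Compound symmetry: equal correlation rho at every lag."""
    return sigma2 * (rho * np.ones((t, t)) + (1.0 - rho) * np.eye(t))

def ar1_cov(t, sigma2=1.0, rho=0.5):
    """AR(1): correlation rho**lag decays with distance in time."""
    lags = np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
    return sigma2 * rho ** lags

rng = np.random.default_rng(7)
t, n_subjects = 4, 5000
y_cs = rng.multivariate_normal(np.zeros(t), cs_cov(t, rho=0.6), n_subjects)
y_ar = rng.multivariate_normal(np.zeros(t), ar1_cov(t, rho=0.6), n_subjects)
print(np.corrcoef(y_cs.T)[0, 3])  # ≈ 0.6 at lag 3 under CS
print(np.corrcoef(y_ar.T)[0, 3])  # ≈ 0.6**3 = 0.216 under AR(1)
```

Fitting a mixed model that assumes the wrong one of these two structures to data generated from the other is precisely the mismatch whose effect on power the study quantifies.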
APA, Harvard, Vancouver, ISO, and other styles
28

Patrick, Joshua Daniel. "Simulations to analyze Type I error and power in the ANOVA F test and nonparametric alternatives." [Pensacola, Fla.] : University of West Florida, 2009. http://purl.fcla.edu/fcla/etd/WFE0000158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Andersson, Hanna-Mia, and Elinor Persson. "Kvalitetsutvärdering av höjdbestämning med GNSS-teknik : Variansanalys av enkelstations-RTK och nätverks-RTK." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84689.

Full text
Abstract:
GNSS-teknik ersätter i allt högre grad terrester mätteknik, dels på grund av sin enkelhet och dels på grund av att den är mindre kostsam än traditionella metoder. En vanlig förekommande GNSS-teknik är RTK (Real Time Kinematic) som är en teknik som beräknar en position i realtid genom bärvågsmätning. Inom RTK-mätning finns det olika tekniker att utöva; enkelstations-RTK (ERTK) och nätverks-RTK (NRTK). I studien undersöktes kvaliteten och lägesosäkerhet på höjdbestämningsdata erhållen från dessa metoder. En envägs variansanalys (ANOVA) användes för att undersöka om det fanns en signifikant skillnad mellan de genomsnittliga avvikelser som erhölls från mätmetoderna. Mätmetoderna utfördes över två punkter med känd höjd som fastställdes tidigare med ett dubbelavvägningståg. ERTK och NRTK varvades med en observationstid på 20 minuter med positioneringsintervall på 3 sekunder. Tidseparationen mellan mätningarna varade i 30 minuter och sammanlagt utfördes 5 mätserier med 400 observationer i varje serie. Grova fel eliminerades genom att kassera värden som föll utanför 3σ-gränsen. Resultaten från ERTK-mätningarna visade att punkten kunde höjdbestämmas med en lägesosäkerhet på 22 mm och en mätosäkerhet på 32 mm (2σ) för samtliga mätserier tillsammans. Internt varierade lägesosäkerheten 13–28 mm mellan serierna. NRTK mätningarna erhöll en total lägesosäkerhet på 14 mm och en mätosäkerhet på 24 mm (2σ). Från enskilda mätserier erhöll serie 3 den lägsta lägesosäkerheten på 9 mm, och serie 4 den högsta med 18 mm. Generellt visade NRTK-metoden lägre och jämnare avvikelser från referensdata än ERTK, resultatet kan dock ha blivit påverkat av basens läge i relation till ett närliggande träd. ANOVA-testet visade att det fanns en signifikant skillnad mellan mätserierna (p =0,00) per enskild metod, men skillnaden av medelavvikelserna mellan dessa metoder var inte signifikanta (p =0,115). 
Resultatet från denna studie är viktig med avseende på kvalitetsutvärdering av olika GNSS-metoder och kan användas som underlag för beslut om tillämpad metod för andra mätuppdrag.
A quality survey was performed on the positional accuracy of two GNSS methods (single-station RTK and network RTK) for height determination, and a one-way analysis of variance (ANOVA) was used for statistical investigation of differences in the spread of height deviations. The GNSS methods were applied on a reference point, which was determined beforehand by leveling, and measured with 20 minutes of observation time and 30 minutes of time separation, resulting in 5 series containing 400 observations each from the respective method. The ANOVA test was performed by grouping the height deviations with respect to the measurement series, as well as the mean deviations with respect to the methods. Height determination with the ERTK method showed a total positional uncertainty of 22 mm (13-28 mm between the series) and a measurement uncertainty of 32 mm (2σ). Results obtained with NRTK showed a total positional uncertainty of 14 mm (9-18 mm between the series) and a total measurement uncertainty of 24 mm (2σ). The statistical tests showed that the differences between the measurement series for individual methods were significant (p = 0.000) but that the mean deviations between the methods were not (p = 0.115). NRTK obtained a lower positional uncertainty than ERTK measurements in this study, and the ANOVA test showed that there was no significant difference in the distribution of the mean deviations between the measurement methods. This study is important with regard to quality evaluation of different GNSS methods and can be used as a basis for deciding on the applied measurement method.
APA, Harvard, Vancouver, ISO, and other styles
30

An, Qian. "A Monte Carlo study of several alpha-adjustment procedures using a testing multiple hypotheses in factorial anova." Ohio : Ohio University, 2010. http://www.ohiolink.edu/etd/view.cgi?ohiou1269439475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hullmann, Alexander [Verfasser]. "The ANOVA decomposition and generalized sparse grid methods for the high-dimensional backward Kolmogorov equation / Alexander Hullmann." Bonn : Universitäts- und Landesbibliothek Bonn, 2015. http://d-nb.info/1077289480/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Atas, Sait. "Effect of Formative Feedback via Interactive Concept Maps on Informal Inferential Reasoning and Conceptual Understanding of ANOVA." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/38935.

Full text
Abstract:
This study assessed the knowledge structure of undergraduate participants related to previously determined critical concepts of Analysis of Variance (ANOVA) by using Pathfinder networks. Three domain experts’ knowledge structures regarding the same concepts were also elicited and averaged to create a referent knowledge structure. The referent knowledge structure served as a basis for formative feedback. Then, each participant’s knowledge structure was compared with the referent structure to identify common, missing, and extraneous links between the two networks. Each participant was provided with individualized written, visual, and multimedia feedback through an online concept-mapping tool based on the principles of formative assessment and feedback, in an attempt to increase their conceptual knowledge of ANOVA. The study was conducted with 67 undergraduate participants from a mid-size university in the United States. Participants completed two data collection tools related to the critical concepts of ANOVA. Later, three different types of feedback around the critical concepts were given to participants in three stages. First, each participant was given visual feedback resulting from the comparison between their own knowledge structure and the referent knowledge structure to highlight similarities and differences between the two. Then, participants were provided with individualized written and multimedia feedback to emphasize the conceptual understanding behind ANOVA procedures. This procedure was followed by a re-assessment of participants’ reasoning ability related to ANOVA and their knowledge structures related to the critical concepts, to measure the effect of the intervention. Results suggest that participants in both the control and intervention groups had the same level of statistics experience and anxiety before the study, indicating that randomization of participants into the two groups was successful.
Moreover, women participants reported a statistically significantly higher level of statistics anxiety than men; however, it seems that this small difference did not limit their ability to perform the required statistical tasks. Further, findings revealed that participants’ conceptual knowledge related to the critical concepts of ANOVA increased significantly after the individualized feedback. However, the increase in conceptual understanding did not help participants transform this knowledge into a more formal understanding of the procedures underlying ANOVA. Moreover, even though previous similar studies suggest that participants are consistent in using a single strategy for inferential reasoning across datasets, in the present study qualitative data analysis revealed that statistics learners demonstrate diverse patterns of inferential reasoning strategies when provided with datasets of different sizes, each with varying amounts of variability. As a result, the findings support the use of an extended framework for describing and measuring the development of participants’ reasoning ability regarding the consideration of variation in statistics education.
APA, Harvard, Vancouver, ISO, and other styles
33

Krome, Lesly R. "The influence of core self-evaluations on determining blame for workplace errors: an ANOVA-attribution-model approach." Thesis, Kansas State University, 2013. http://hdl.handle.net/2097/16221.

Full text
Abstract:
Master of Science
Department of Psychological Sciences
Patrick Knight
The current study examined attributions of blame for workplace errors through the lens of Kelley’s (1967) ANOVA model of attribution-making, which addresses the consensus, consistency, and distinctiveness of a behavior. Consensus and distinctiveness information were manipulated in the description of a workplace accident. It was expected that participants would make different attributions regarding the cause of the event due to these manipulations. This study further attempted to determine if an individual’s core self-evaluations (CSE) impact how she or he evaluates a workplace accident and attributes blame, either from the perspective of the employee who made the error or that of a co-worker. Because CSE are fundamental beliefs about an individual’s success, ability, and self-worth, they may contribute to how the individual attributes blame for a workplace accident. It was found that CSE were positively related to participants’ inclination to make internal attributions of blame for a workplace error. Contrary to expectations, manipulations of the consensus and distinctiveness of the workplace error did not moderate participants’ attributions of blame. Explanations for these findings are discussed, as are possible applications of this research.
APA, Harvard, Vancouver, ISO, and other styles
34

CHEN, XINYAO. "Using PCA & Repeated ANOVA to evaluate the In Situ Bioremediation performance of sites contaminated by trichloroethylene." Thesis, Högskolan i Halmstad, Akademin för ekonomi, teknik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37476.

Full text
Abstract:
Currently, one of the most common techniques to remediate sites contaminated with TCE is in situ bioremediation (ISB). In this study, PCA and repeated ANOVA were used to statistically analyze the trends of variables over time to aid in interpreting the performance of the ISB technique. cDCE, Mn2+, chloride, and alkalinity showed significant trends over time, suggesting they are relatively strong indicators of ISB performance. The variables that most effectively describe the bioremediation performance are Fe2+, DOC, Mn2+, methane, and alkalinity; their dramatic changes over time indicate that dechlorinating bacteria are actively remediating the contamination. Three groups of indicators can be identified whose trends over time share a consistent character: the first group is methane and ethane, the second consists of chloride, sulfate, and alkalinity, and the third consists of cDCE and tDCE. PCA proved to be an effective tool for analyzing the overall trends and transformation patterns of variables over time and at different sampling points within the site. However, the fragmented dataset reduces the possibility of a complete understanding of the remediation process at the site.
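For readers unfamiliar with the technique, the kind of PCA used in studies like this one can be sketched in a few lines via SVD of the standardized monitoring matrix. This is a generic illustration, not the thesis's own analysis; the random example data are an assumption.

```python
import numpy as np

def pca(data):
    """PCA via SVD of the standardized data matrix (samples x variables).
    Returns component loadings, sample scores, and explained-variance ratios."""
    X = np.asarray(data, float)
    # Standardize each variable (column) to zero mean and unit variance.
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U * s                      # sample coordinates on the components
    explained = s ** 2 / (s ** 2).sum() # fraction of variance per component
    return Vt, scores, explained

# Illustrative data: 30 sampling occasions, 6 monitored variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))
loadings, scores, explained = pca(X)
print(explained)
```

Trends over time would then be read from the component scores of successive sampling occasions.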
APA, Harvard, Vancouver, ISO, and other styles
35

An, Qian. "A Monte Carlo Study of Several Alpha-Adjustment Procedures Used in Testing Multiple Hypotheses in Factorial Anova." Ohio University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1269439475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Young, Lucas Blackmore. "Type I Error Assessment and Power Comparison of ANOVA and Zero-Inflated Methods on Zero-Inflated Data." Thesis, North Dakota State University, 2019. https://hdl.handle.net/10365/31712.

Full text
Abstract:
Many tests for the analysis of continuous data assume that the data follow a normal distribution (e.g., ANOVA, regression). Within certain research topics, it is common to end up with a dataset that has a disproportionately high number of zero values but is otherwise relatively normal. These datasets are often referred to as ‘zero-inflated’ and their analysis can be challenging. An example of where these zero-inflated datasets arise is in plant science. We conducted a simulation study to compare the performance of zero-inflated models to a standard ANOVA model on different types of zero-inflated data. Underlying distributions, experimental design scenarios, sample sizes, and percentages of zeros were the variables of consideration. In this study, we conduct a Type I error assessment followed by a power comparison between the models.
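The Type I error assessment described above can be illustrated with a small simulation: generate identically distributed zero-inflated groups (so the null holds) and count how often one-way ANOVA rejects. This is a hedged sketch, not the thesis's code; the zero-inflation rate, group sizes, and normal component are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def zero_inflated_sample(n, p_zero, mean, sd, rng):
    """Draw n values that are 0 with probability p_zero, else Normal(mean, sd)."""
    is_zero = rng.random(n) < p_zero
    values = rng.normal(mean, sd, n)
    values[is_zero] = 0.0
    return values

def type1_error_rate(n_sims=2000, k=3, n=20, p_zero=0.3, alpha=0.05):
    """Proportion of one-way ANOVA rejections when all k groups share one
    zero-inflated distribution, i.e. the empirical Type I error rate."""
    rejections = 0
    for _ in range(n_sims):
        groups = [zero_inflated_sample(n, p_zero, 5.0, 1.0, rng) for _ in range(k)]
        _, p = stats.f_oneway(*groups)
        rejections += p < alpha
    return rejections / n_sims

rate = type1_error_rate()
print(f"Empirical Type I error at alpha = 0.05: {rate:.3f}")
```

The same loop, with a mean shift added to one group, would give the power comparison half of the study.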
APA, Harvard, Vancouver, ISO, and other styles
37

Rasch, Dieter, Thomas Rusch, Marie Simeckova, Klaus D. Kubinger, Karl Moder, and Petr Simecek. "Tests of additivity in mixed and fixed effect two-way ANOVA models with single sub-class numbers." Springer, 2009. http://dx.doi.org/10.1007/s00362-009-0254-4.

Full text
Abstract:
In variety testing as well as in psychological assessment, the situation occurs that in a two-way ANOVA-type model with only one replication per cell, analysis is done under the assumption of no interaction between the two factors. Tests for this situation are known only for fixed factors and normally distributed outcomes. In the following we present five additivity tests and apply them to fixed and mixed models and to quantitative as well as Bernoulli-distributed data. We consider their performance via simulation studies with respect to the type-I risk and power. Furthermore, two new approaches are presented, one being a modification of Tukey's test and the other a new experimental design to test for interactions.
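Tukey's single-degree-of-freedom test for non-additivity, which the authors modify, can be sketched for the single-replication two-way layout as follows. This is a generic textbook implementation, not the authors' code; the example data at the bottom are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def tukey_additivity_test(y):
    """Tukey's one-degree-of-freedom test for non-additivity in a two-way
    layout with a single observation per cell (y is a rows x columns array)."""
    a, b = y.shape
    grand = y.mean()
    row_dev = y.mean(axis=1) - grand   # estimated row effects
    col_dev = y.mean(axis=0) - grand   # estimated column effects
    # Sum of squares for non-additivity: [sum_ij a_i b_j y_ij]^2 / (sum a_i^2 sum b_j^2)
    num = (row_dev[:, None] * col_dev[None, :] * y).sum() ** 2
    den = (row_dev ** 2).sum() * (col_dev ** 2).sum()
    ss_nonadd = num / den
    residuals = y - grand - row_dev[:, None] - col_dev[None, :]
    ss_remainder = (residuals ** 2).sum() - ss_nonadd
    df_rem = (a - 1) * (b - 1) - 1
    F = ss_nonadd / (ss_remainder / df_rem)
    return F, stats.f.sf(F, 1, df_rem)

# Multiplicative (hence non-additive) example data with a little noise.
rng = np.random.default_rng(0)
rows = np.array([1.0, 2.0, 3.0, 4.0])
cols = np.array([1.0, 2.0, 3.0])
y_mult = np.outer(rows, cols) + rng.normal(0, 0.1, (4, 3))
F_na, p_na = tukey_additivity_test(y_mult)
print(F_na, p_na)
```

For genuinely additive data the test statistic stays small; for the multiplicative example above it should reject additivity.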
APA, Harvard, Vancouver, ISO, and other styles
38

尾関, 美喜, and Miki OZEKI. "集団ごとに収集された個人データの分析(2) ― 分散分析とHLM (Hierarchical Linear Model) の比較 ―." 名古屋大学大学院教育発達科学研究科, 2007. http://hdl.handle.net/2237/10338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Pirnia, Seyed Amir. "A Groundwater Vulnerability Assessment Method Using GIS and Multivariate Statistics - Gotland, Sweden." Thesis, KTH, Mark- och vattenteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-99343.

Full text
Abstract:
Concentrations of microorganisms and chemical components in groundwater are serious threats to the sustainability of groundwater resources and contribute to technical and health problems. Recent studies and reports in Gotland revealed serious concerns about water quality in the area. In this master thesis a range of methods, including GIS and statistical analyses such as multivariate and non-parametric analysis, were used in order to identify the natural and human factors that affect groundwater contamination. The main focus of the study was on using existing data and available databases in the analyses. Consequently, several important factors such as land use, overlying soil cover, soil thickness, bedrock, elevation, distance to deformation and fracture zones, and slope were evaluated against 8 variables including microorganisms and chemical components. The results identified several factors that significantly affected the microbiological and chemical components of groundwater. These relations can be used to develop risk maps for spatial planning.
APA, Harvard, Vancouver, ISO, and other styles
40

Bauret, Samuel. "Stabilité des barrages-poids en béton: contribution de la cohésion à la résistance de l'interface béton-rocher." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8835.

Full text
Abstract:
This research project concerns the stability of concrete gravity dams and addresses the need to evaluate the strength of the concrete-rock interface. Since it is technically difficult to assess whether the interface is bonded or not, the real cohesion and its contribution to shear strength are often neglected, and this specific topic is rarely addressed in the literature. A direct link can be made between this omission and stabilization work carried out on hydraulic structures. The objective of this study is to characterize the real cohesion in order to determine whether it is safe to include its contribution in gravity dam stability assessments. To do so, it is necessary to evaluate the tensile and shear behaviour of the interface and to analyse how they are affected by important parameters such as interface roughness. This characterization is carried out through an experimental program on 66 mortar replicas of concrete-rock interfaces. Roughness is evaluated using a laser profilometer and the Z2 parameter. The replicas were subjected to direct tension, fluid-pressure tension, and direct shear tests. The influence of interface roughness and of the uniaxial compressive strength (UCS) of the materials on tensile and shear strength is assessed using analysis of variance (ANOVA). Additional tests deepened the understanding of the shear failure mechanism. The results indicate a mean tensile strength of the bonded interface of 0.62 MPa and a mean (shear) cohesion of 3.1 MPa. The ANOVA shows a significant increase in tensile strength with roughness, and a significant increase in peak shear strength with roughness, UCS, and normal stress.
It was also observed that the sampling interval has a substantial impact on the value of Z2. The results suggest that a minimum cohesion value of 100 to 200 kPa could be used provided it can be demonstrated that the interface is bonded, a condition that could itself be a research topic continuing this work.
APA, Harvard, Vancouver, ISO, and other styles
41

Rajapaksha, Kosman Watte Gedara Dimuthu Hansana. "WALD TYPE TESTS WITH THE WRONG DISPERSION MATRIX." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/dissertations/1949.

Full text
Abstract:
A Wald type test with the wrong dispersion matrix is used when the dispersion matrix is not a consistent estimator of the asymptotic covariance matrix of the test statistic. One class of such tests occurs when there are k groups and it is assumed that the population covariance matrices of the k groups are equal, but the common covariance matrix assumption does not hold. The pooled t test, one-way ANOVA F test, and one-way MANOVA F test are examples of this class. Two bootstrap confidence regions are modified to obtain large sample Wald type tests with the wrong dispersion matrix.
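The failure mode this dissertation studies is easy to demonstrate for the simplest member of the class, the pooled t test: when the smaller group has the larger variance, the pooled dispersion estimate is too small and the test becomes liberal, while Welch's test, which estimates the right dispersion, holds its level. A hedged simulation sketch (sample sizes and variances below are illustrative assumptions, not from the dissertation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def rejection_rates(n_sims=4000, alpha=0.05):
    """Empirical Type I error of the pooled t test vs. Welch's t test when
    the smaller group has the larger variance (both groups share the same mean)."""
    pooled_rej = welch_rej = 0
    for _ in range(n_sims):
        x = rng.normal(0, 5, 10)   # small group, large variance
        y = rng.normal(0, 1, 40)   # large group, small variance
        _, p_pooled = stats.ttest_ind(x, y, equal_var=True)
        _, p_welch = stats.ttest_ind(x, y, equal_var=False)
        pooled_rej += p_pooled < alpha
        welch_rej += p_welch < alpha
    return pooled_rej / n_sims, welch_rej / n_sims

pooled_rate, welch_rate = rejection_rates()
print(f"pooled t: {pooled_rate:.3f}, Welch t: {welch_rate:.3f}")
```

The pooled test's rejection rate here far exceeds the nominal 5% level, which is exactly the "wrong dispersion matrix" problem in one dimension.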
APA, Harvard, Vancouver, ISO, and other styles
42

Kellar, Thomas W. "Cognitive Stimulation for Long-Term Care Adults with Dementia." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1410464307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Opoku-Nsiah, Richard. "A computationally efficient bootstrap-equivalent test for ANOVA in skewed populations with a large number of factor levels." Diss., Kansas State University, 2017. http://hdl.handle.net/2097/38155.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Haiyan Wang
Advances in technology make it easy to collect large amounts of data in scientific research such as agricultural screening and micro-array experiments. We are particularly interested in data from one-way and crossed two-way designs that have a large number of treatment combinations but small replications with heteroscedastic variances. In this framework, several test statistics have been proposed in the literature. Even though the forms of these proposed test statistics may differ, they all use a limiting normal or chi-square distribution to conduct their tests. Such approximations approach the true distribution very slowly when the sample size ni is small while the number of treatment levels a gets large. A strategy to obtain better accuracy in the classical large-sample setting is to use the bootstrap procedure with a studentized statistic. Unfortunately, the available bootstrap method fails when the number of treatment level combinations is large while the number of replications is small: the Fisher and Hall (1990) statistic, asymptotically pivotal in the large-sample setting, is no longer pivotal in the small-sample setting with a large number of treatment levels. In the first part of this dissertation, we describe suitable bootstrap statistics and procedures for hypothesis tests in one- and two-way ANOVA with a large number of levels and small sample sizes. We prove that the theoretical type I error-rates of the Akritas and Papadatos (2004) and Wang and Akritas (2006) test statistics and their corresponding bootstrap versions have accuracy of order O(1/√a). We then modify their statistics to obtain asymptotically pivotal statistics in our current framework, and prove that the theoretical type I error-rate of the bootstrap version of the pivotal statistics is accurate up to order O(1/√a). In the second part of the dissertation, we propose a new test statistic in one-way ANOVA which is asymptotically pivotal in the current setting.
We improve the accuracy of the approximation of the distribution of the test statistic by deriving an asymptotic expansion of the statistic under the current framework, and define a new test rejection region through a Cornish-Fisher expansion of quantiles. The type I error-rate of the new test has a faster convergence rate and is accurate up to order O(1/a). Simulation studies show that our test performs better in terms of type I error-rate, with comparable power, relative to Akritas and Papadatos (2004) in the large-a, small-ni setting. The connection between our asymptotic expansions and the bootstrap distribution in this setting is discussed. Our proposed tests based on asymptotic expansion and Cornish-Fisher expansion of quantiles have the advantages of both higher accuracy and computational efficiency, since no resampling is needed.
APA, Harvard, Vancouver, ISO, and other styles
44

Gatzhammer, Stefan. "O debate de direito eclesiástico : a circuncisão por motivos religiosos e anova lei do Código Civil da Alemanha." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/7043/.

Full text
Abstract:
Em Maio 2012 uma decisão equívoca do Tribunal de segunda instância de Colónia (Landgericht Köln) declarou crime a circuncisão de um menino por motivos religiosos, mesmo efectuada de acordo com as leges artis e com o consentimento dos pais. O artigo comenta a nova legislação introduzida em Dezembro de 2012 no Código Civil alemão (BGB) como quadro legal para a circuncisão do menino em relação com o direito à liberdade religiosa e com o direito dos pais à educação, especialmente para judeus e muçulmanos, na Alemanha.
In May 2012 a misleading decision of the Landgericht (Court of Appeal) Cologne declared that male circumcision of children amounts to a criminal offence, even if performed lege artis and with the consent of the parents. The article examines the new legislation of December 2012 introduced into the BGB as a legal framework for male circumcision, with regard to the right to freedom of religion and parental rights in education, especially for Jews and Muslims in Germany.
APA, Harvard, Vancouver, ISO, and other styles
45

Touzani, Samir. "Méthodes de surface de réponse basées sur la décomposition de la variance fonctionnelle et application à l'analyse de sensibilité." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00614038.

Full text
Abstract:
The objective of this thesis is to investigate new response surface methods for performing sensitivity analysis of complex numerical models that are computationally expensive. To this end, we focused on methods based on the ANOVA decomposition. We proposed the use of a method based on ANOVA-type smoothing splines, combining estimation and variable selection procedures. The variable selection step can become very computationally expensive, particularly with a large number of input parameters. For this purpose we developed an iterative thresholding algorithm whose originality lies in its simplicity of implementation and its efficiency. We then proposed a direct method for estimating sensitivity indices. Building on this response surface method, we subsequently developed a method suited to approximating highly irregular and discontinuous models, which uses a wavelet basis. This type of method has a multi-resolution property, allowing a better approximation of functions with strong irregularities or discontinuities. Finally, we considered the case where the simulator outputs are time series. To handle it, we developed a methodology combining the smoothing-spline response surface method with a wavelet decomposition. To assess the efficiency of the proposed methods, results on analytical functions as well as on reservoir engineering cases are presented.
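The ANOVA (Sobol) functional decomposition behind these response-surface methods defines first-order sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y), which can be estimated directly by a pick-freeze Monte Carlo scheme. The sketch below uses a toy additive model with known indices (1/5 and 4/5), not the thesis's spline- or wavelet-based estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    # Toy additive model with N(0,1) inputs: Var(Y) = 1 + 4 = 5,
    # so the analytic first-order Sobol indices are S1 = 0.2 and S2 = 0.8.
    return x1 + 2.0 * x2

n = 50_000
# Pick-freeze estimator: S_i = Cov(Y, Y_i) / Var(Y), where Y_i reuses X_i
# but redraws all the other inputs independently.
x1, x2 = rng.normal(size=n), rng.normal(size=n)
x1b, x2b = rng.normal(size=n), rng.normal(size=n)

y = model(x1, x2)
y_fix1 = model(x1, x2b)   # X1 frozen, X2 redrawn
y_fix2 = model(x1b, x2)   # X2 frozen, X1 redrawn

var_y = y.var()
s1 = np.cov(y, y_fix1)[0, 1] / var_y
s2 = np.cov(y, y_fix2)[0, 1] / var_y
print(s1, s2)
```

For an expensive simulator, the thesis's contribution is precisely to replace `model` by a cheap surrogate before running such an estimator.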
APA, Harvard, Vancouver, ISO, and other styles
46

Outmani, Imane. "Caractérisation des variabilités Matériaux/ Process pour une convergence produit de fonderie par approche prédictive." Thesis, Paris, ENSAM, 2017. http://www.theses.fr/2017ENAM0007/document.

Full text
Abstract:
Les alliages Al-Si sont largement utilisés dans l’industrie automobile en fonderie sous pression, en particulier pour la fabrication des blocs moteurs, en raison de leurs bon rapport résistance/ poids et leurs excellentes propriétés mécaniques. Du fait de l’internationalisation de la production, la composition chimique de ces alliages et les paramètres du procédé HPDC peuvent varier d’un pays à l’autre, ils peuvent même varier d’un site de fabrication à l’autre dans le même pays. Or, les conceptions des pièces automobiles sont aujourd’hui de type déterministe et elles sont réalisées sur la base des matériaux et procédés européens, ce qui peut affecter les propriétés de ces pièces dans le cas d’une localisation hors Europe. Ainsi, il est important de pouvoir adapter les conceptions rapidement et à moindre coût en prenant en compte les contraintes matériau/ process locales. Dans cette thèse, nous avons proposé une approche méthodologique permettant de prédire les caractéristiques mécaniques en fonction de la variabilité matériaux/ process en s'appuyant sur une étude expérimentale/ statistique de l’effet de la variabilité des principaux éléments d’alliage (Si, Cu, Mg) et des paramètres procédé (température de la coulée et pression d’injection) sur les propriétés mécaniques des alliages d’Al-Si moulés sous pression. La microstructure et le taux de porosités ont également été évalués. Cette méthodologie a abouti à la construction d’un outil de conception produit permettant de prédire les caractéristiques mécaniques dans le cas du changement de l’un (ou des) paramètres Matériau/ Process
Secondary Al-Si alloys are widely used in the automotive industry for engine blocks because they offer a considerable weight reduction whilst maintaining good mechanical properties. However, the ever-expanding internationalisation of production, with the same stages of production processes spread across a number of countries to produce locally, causes high variability in the cast products. The chemical composition of the same alloys and the working variables of the unchanged high-pressure die casting (HPDC) process may change for the same cast parts from one country to another, and can sometimes even vary from one manufacturing site to another within the same country. The design of aluminium automotive components today relies on deterministic methods that often draw on European material and casting process databases, which can affect the properties of these parts when production is located outside Europe. Thus, it is important to adapt the design of die cast parts quickly and inexpensively by taking into account local material and process constraints. In this work, a methodological approach is proposed that predicts mechanical properties as a function of material and process variability, based on an experimental/statistical study of the effect of the variability of the primary factors of alloying element contents (Si, Cu, and Mg) and HPDC process parameters (casting temperature and injection pressure) on the mechanical properties of die cast aluminium alloys. The microstructural features and the porosity level were also investigated and assessed. This approach has resulted in a statistical design tool that will allow designers to make changes to the design of their castings and to industrialize them outside Europe.
APA, Harvard, Vancouver, ISO, and other styles
47

Koro, Catalin. "Mätningar av kortisolkoncentrationen i saliv under två perioder där stressfaktorn upplevs variera. : Analys av kortisolkoncentrationen och intraindividuell stabilitet inom cortisol awakening response (CAR)." Thesis, Halmstad University, Biological and Environmental Systems (BLESS), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-4694.

Full text
Abstract:


The present study aims to detect differences between two periods during which the personal stress factor is perceived to differ in intensity. It also examines whether the diurnal rises and falls of human cortisol secretion follow an intra-individual stability of the CAR (cortisol awakening response), which would imply a repeating day-to-day pattern in the magnitude and measured values of cortisol concentration within each individual, at awakening and 30 minutes after.

The investigation was conducted as a pilot study in which one subject's salivary cortisol concentration was measured by enzyme-linked immunosorbent assay (ELISA). To compare the measurement series from the two periods, an analysis of variance (ANOVA) was also performed using the SPSS software. Once the measured values of the samples had been analysed and compared, a result could be established.

Because individual cortisol secretion is easily affected by environmental factors, only one subject was used, the author, which facilitated a detailed analysis in which influencing factors could easily be taken into account to establish a reliable result. The subject, a 21-year-old woman, took 6 samples over two periods perceived as having different levels of stress. Each period comprised two working days. Detailed diary notes were kept in parallel with the sampling to facilitate the analysis.

The results show intra-individual stability of the CAR in the subject. The study also shows a difference between the two periods, with a higher percentage increase in the CAR during the period when the stress factor was perceived as more intense.

The clear difference in measured cortisol concentrations between the days also indicates that lifestyle, physical activity, and dreams can affect the shape of the diurnal rises and falls of the cortisol concentration curve.

APA, Harvard, Vancouver, ISO, and other styles
48

Chong, Beng Keok. "An outcomes-based framework for assessing the quality of transnational engineering education at a private college." University of Southern Queensland, Faculty of Education, 2005. http://eprints.usq.edu.au/archive/00001415/.

Full text
Abstract:
The concept of "transnational" education has emerged over the past decade or more as a critical strategy for meeting the growing demand for higher education worldwide. Essentially, transnational higher education allows international providers with outstanding credentials to conduct degree programs at local sites in conjunction with local tertiary institutions. Due to the rapid expansion of transnational programmes and the proliferation of transnational education providers, both governments and parents have, however, raised questions about the quality of education provided through transnational mechanisms. Rapid technological development, coupled with the recent growth of new engineering specialty areas, has led to the development of outcomes-based criteria for engineering education by a range of international engineering professional bodies. The emergence of outcomes-based approaches requires new instruments to measure the success, or otherwise, of engineering programs offered by universities. This study was conducted at a Malaysian private college (pseudonym "Trans College") with the prime purpose of developing an authoritative measurement instrument for evaluating the quality of transnational engineering education. This study generated a theory-based 11-dimension Preliminary Conceptual Framework consisting of four Outcomes dimensions and seven Contributory dimensions for Transnational Engineering Education, and tested the integrity of the theoretical framework through surveys of enrolled students, staff, and representatives of employing agencies. The Preliminary Conceptual Framework was found to have a high degree of conceptual validity, as well as some limitations. The findings of the surveys enabled a Revised Conceptual Framework for Transnational Engineering Education to be developed through reliability testing and validated by using confirmatory and exploratory factor analyses. The revised framework comprises five Outcomes and eight Contributory dimensions.
It has been transposed into a 13-dimension revised survey instrument consisting of 25 Outcomes items clustered into five Outcomes dimensions, and 49 Contributory items clustered into eight Contributory dimensions. The developed survey instrument was then used to study the perceptions of students, staff, and employers regarding the quality of the transnational engineering education. Through performing t-tests, ANOVA, and other statistical analyses, the results of the study indicate that the quality of the transnational engineering education at Trans College was perceived by students, staff, and employers to be generally sound. It was also revealed that the Contributory construct can be adopted for measuring the satisfaction levels of students. Students, staff, and employers were also satisfied for the most part with their respective experiences of the programs in question. The study is believed to have considerable significance. First, it has generated a conceptual framework for measuring the quality of the transnational engineering education. The validated conceptual framework is transposed into a validated instrument that can be adapted for use by a range of other transnational educational providers. Second, it affirms the value of the "transnational" concept while also providing a number of recommendations for the enhancement of such programmes, particularly at Trans College. Third, the conceptual framework for the delivery of successful transnational engineering education derived from this study may help to improve the quality of transnational engineering programmes conducted in Malaysia, and make Malaysia "the centre of educational excellence" in the ASEAN region, with the transnational providers becoming hubs of tertiary education, and their networks spanning the globe.
APA, Harvard, Vancouver, ISO, and other styles
49

Yates, Heath Landon. "A comparison of type I error and power of the aligned rank method using means and medians for alignment." Kansas State University, 2011. http://hdl.handle.net/2097/8548.

Full text
Abstract:
Master of Science
Department of Statistics
James J. Higgins
A simulation study was done to compare the Type I error and power of standard analysis of variance (ANOVA), the aligned rank transform procedure (ART), and the aligned rank transform procedure where alignment is done using medians (ART + Median). The methods were compared in the context of a balanced two-way factorial design with interaction when errors have a normal distribution and outliers are present in the data and when errors have the Cauchy distribution. The simulation results suggest that the nonparametric methods are more outlier-resistant and valid when errors have heavy tails in comparison to ANOVA. The ART + Median method appears to provide greater resistance to outliers and is less affected by heavy-tailed distributions than the ART method and ANOVA.
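The aligned rank transform for an interaction effect, as compared in this study, can be sketched as follows: strip the estimated main effects from each observation, rank the aligned values, then run a standard two-way ANOVA on the ranks. This is a generic mean-alignment implementation (the ART variant, not ART + Median); the balanced 2 x 2 example data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def art_interaction_pvalue(y, A, B):
    """ART test for the A x B interaction in a balanced two-way design:
    align the responses for interaction, rank them, and compute the
    interaction F test of a two-way ANOVA on the ranks."""
    y = np.asarray(y, float)
    levels_a, levels_b = np.unique(A), np.unique(B)
    a, b, N = len(levels_a), len(levels_b), len(y)
    ai = np.searchsorted(levels_a, A)
    bj = np.searchsorted(levels_b, B)
    grand = y.mean()
    a_means = np.array([y[ai == i].mean() for i in range(a)])
    b_means = np.array([y[bj == j].mean() for j in range(b)])
    # Alignment for interaction: remove both estimated main effects.
    aligned = y - a_means[ai] - b_means[bj] + grand
    r = stats.rankdata(aligned)
    # Two-way ANOVA sums of squares on the ranks (balanced design).
    gm = r.mean()
    ra = np.array([r[ai == i].mean() for i in range(a)])
    rb = np.array([r[bj == j].mean() for j in range(b)])
    cell = np.array([[r[(ai == i) & (bj == j)].mean() for j in range(b)]
                     for i in range(a)])
    nrep = N // (a * b)
    ss_a = b * nrep * ((ra - gm) ** 2).sum()
    ss_b = a * nrep * ((rb - gm) ** 2).sum()
    ss_cells = nrep * ((cell - gm) ** 2).sum()
    ss_ab = ss_cells - ss_a - ss_b
    ss_e = ((r - cell[ai, bj]) ** 2).sum()
    df_ab, df_e = (a - 1) * (b - 1), N - a * b
    F = (ss_ab / df_ab) / (ss_e / df_e)
    return stats.f.sf(F, df_ab, df_e)

# Balanced 2 x 2 design, 10 replicates per cell, with a strong interaction.
rng = np.random.default_rng(3)
A = np.repeat([0, 0, 1, 1], 10)
B = np.tile(np.repeat([0, 1], 10), 2)
y = 3.0 * (A == 1) * (B == 1) + rng.normal(0, 1, 40)
p_int = art_interaction_pvalue(y, A, B)
print(p_int)
```

The ART + Median variant studied in the thesis would use cell/row/column medians instead of means in the alignment step.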
APA, Harvard, Vancouver, ISO, and other styles
50

Seixas, Luís Pedro de Mascarenhas. "Desenvolvimento de uma metodologia de aplicação para controlo de brancos em ICP-MS." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/11976.

Full text
Abstract:
Dissertation submitted for the degree of Master in Industrial Engineering and Management
Quality control procedures have always accompanied production and analytical processes, with the purpose of guaranteeing valid outputs. With the development and growth of business, these procedures had to be adapted, moving from a method in which items were checked piece by piece, sample by sample, to a method of assessing the quality of a sample representative of the whole population. This method is called statistical process control (SPC) and makes this important analysis task less time-consuming and costly. Although it originated in industry, SPC can be applied to any automated process, such as laboratory analyses. The main tool for performing statistical process control is the control chart. One group of laboratory analytical processes of interest is atomic spectrometry, which combines the separation capabilities of chromatography with the sensitivity and detection selectivity of atomic spectroscopy. One of the most significant technological advances in this area is the introduction of inductively coupled plasma mass spectrometry (ICP-MS) systems. The main application of this method is the determination of trace element contents in matrices of different origins, such as food, geological, or human samples. This work aims to develop application methodologies for controlling the blank values obtained in the ICP-MS process, using SPC and ANOVA. In a first approach, the process parameters, mean and standard deviation, are estimated through SPC in order to calculate new limits of quantification and compare them with those currently in force. The second approach involves designing control charts suited to the characteristics of the data, which are identified by means of analysis of variance.
In both methodologies, the starting point is a statistical treatment of the data, using the analysis of extreme values and/or outliers. In general terms, the aim is to understand some particularities and characteristics of the method and to tighten the limit of quantification so as to make the method more rigorous. The study covers a total of 11 elements, using 17 food matrices and 3 types of blanks.
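The two core calculations in this kind of blank-control work, control chart limits and a limit of quantification from blank statistics, can be sketched as follows. This is a generic illustration, not the dissertation's procedure: the individuals-chart form, the k = 10 LOQ convention, and the blank values below are all assumptions.

```python
import numpy as np

def individuals_chart_limits(blanks):
    """Shewhart individuals chart for blank measurements: centre line at the
    mean, 3-sigma limits with sigma estimated from the mean moving range
    (MR-bar / 1.128, the d2 constant for subgroups of size 2)."""
    x = np.asarray(blanks, float)
    centre = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()
    sigma = mr_bar / 1.128
    return centre - 3 * sigma, centre, centre + 3 * sigma

def limit_of_quantification(blanks, k=10):
    """Common convention: LOQ = blank mean + k * blank standard deviation
    (k = 10 is typical, but check the lab's own validation protocol)."""
    x = np.asarray(blanks, float)
    return x.mean() + k * x.std(ddof=1)

# Illustrative blank readings for one element (arbitrary units).
blanks = np.array([0.84, 0.79, 0.88, 0.81, 0.86, 0.80, 0.83, 0.85])
lcl, centre, ucl = individuals_chart_limits(blanks)
loq = limit_of_quantification(blanks)
print(lcl, centre, ucl, loq)
```

Blanks falling outside (lcl, ucl) would signal an out-of-control analytical process; the recalculated LOQ is then compared with the limit currently in force, as in the first approach described above.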
APA, Harvard, Vancouver, ISO, and other styles
