
Theses on the topic "Multiple statistical analysis"



Consult the top 50 theses for your research on the topic "Multiple statistical analysis".



Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Smith, Anna Lantz. "Statistical Methodology for Multiple Networks". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492720126432803.

2

Di Brisco, Agnese Maria. "Statistical Network Analysis: a Multiple Testing Approach". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/96090.

Abstract
The problem of identifying connections between nodes in a network model is of fundamental importance in the analysis of brain networks, because each node represents a specific brain region that can potentially be connected to other brain regions through functional relations; the dynamical behavior of each node can be quantified by adopting a correlation measure among time series. In this context, the whole set of links between nodes in a network can be represented by a high-dimensional adjacency matrix, obtained by performing a huge number of simultaneous tests on correlations. In this regard, the thesis addresses the problem of multiple testing from a Bayesian perspective, examining in depth the "Bayesian False Discovery Rate" (FDR), already defined by Efron, and introducing the "Bayesian Power" (BP). The behavior of the FDR and BP estimators is analyzed both with asymptotic theory and with Monte Carlo simulations; furthermore, the robustness of the proposed estimators is investigated by simulating specific patterns of dependence among the p-values associated with the multiple comparisons. This multiple testing approach, which allows control of both FDR and BP, is applied to a dataset provided by the Milan Center for Neuroscience (NeuroMi). After selecting a sample of 70 participants, classified into young and elderly subjects, subject-by-subject network models are constructed in order to test two alternative theories about changes in the pattern of functional connectivity with age, namely the de-differentiation hypothesis versus the localization hypothesis. This objective is achieved by selecting suitable network measures to test the original hypotheses about the pattern of functional connectivity in the elderly and young groups, and by constructing some ad-hoc measures.
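As a rough illustration of the quantity being controlled (not the thesis's estimator, which is more elaborate), Efron's Bayesian FDR for a rejection region {p ≤ t} can be estimated from the empirical distribution of the p-values. The sketch below assumes uniform null p-values and a Beta-distributed non-null component, and conservatively sets the null proportion pi0 to 1.

```python
import numpy as np

def bayesian_fdr(pvals, t, pi0=1.0):
    """Efron-style Bayesian FDR estimate for the rejection region {p <= t}:
    Fdr(t) = pi0 * t / F(t), where F(t) is the empirical CDF of the
    observed p-values. pi0 = 1 gives a conservative estimate."""
    F_t = np.mean(pvals <= t)          # empirical Pr(p <= t)
    return pi0 * t / F_t if F_t > 0 else 0.0

# Toy example: 10,000 tests, 5% of which are non-null.
rng = np.random.default_rng(0)
p_null = rng.uniform(size=9500)
p_alt = rng.beta(0.1, 10.0, size=500)   # non-null p-values pile up near 0
pvals = np.concatenate([p_null, p_alt])
print(bayesian_fdr(pvals, t=0.01))
```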
3

Liu, Wei. "Analysis of power functions of multiple comparisons tests". Thesis, University of Bath, 1990. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235586.

4

Zain, Zakiyah. "Combining multiple survival endpoints within a single statistical analysis". Thesis, Lancaster University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.618302.

Abstract
The aim of this thesis is to develop methodology for combining multiple endpoints within a single statistical analysis that compares the responses of patients treated with a novel treatment with those of control patients treated conventionally. The focus is on interval-censored bivariate survival data, and five real data sets from previous studies concerning multiple responses are used to illustrate the techniques developed. The background to survival analysis is introduced by a general description of survival data, and an overview of existing methods and underlying models is included. A review is given of two of the most popular survival analysis methods, namely the logrank test and Cox's proportional hazards model. The global score test methodology for combining multiple endpoints is described in detail, and application to real data demonstrates its benefits. The correlation between two score statistics arising from bivariate interval-censored survival data is the core of this research. The global score test methodology is extended to the case of bivariate interval-censored survival data, and a complementary log-log link is applied to derive the covariance and the correlation between the two score statistics. A number of common scenarios are considered in this investigation, and the accuracy of the estimator is evaluated by means of extensive simulations. An established method, namely the approach of Wei, Lin and Weissfeld, is examined and compared with the proposed method using both real and simulated data. It is concluded that our method is accurate, consistent and comparable to the competitor. This study marks the first successful development of the global score test methodology for bivariate survival data, employing a new approach to the derivation of the covariance between two score statistics on the basis of an interval-censored model. Additionally, the relationship between the jackknife technique and the Wei, Lin and Weissfeld method has been clarified.
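The thesis's contribution is the derivation of the covariance between the two score statistics under interval censoring; as a hedged sketch that assumes the score vector and its covariance matrix are already available, the global test itself reduces to a quadratic form with a chi-square reference distribution:

```python
import numpy as np
from scipy import stats

def global_score_test(u, V):
    """Global score statistic for a vector of (possibly correlated) score
    statistics u with covariance matrix V:
        X^2 = u' V^{-1} u  ~  chi^2_k  under the global null."""
    u = np.asarray(u, dtype=float)
    x2 = u @ np.linalg.solve(V, u)
    return x2, stats.chi2.sf(x2, df=len(u))

# Two standardized score statistics with correlation 0.4 (illustrative values).
u = np.array([2.1, 1.7])
V = np.array([[1.0, 0.4],
              [0.4, 1.0]])
print(global_score_test(u, V))
```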
5

李志傑 and Chi-kit Li. "The statistical analysis of multi-way and multiple compositions". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B31230672.

6

Li, Chi-kit. "The statistical analysis of multi-way and multiple compositions". [Hong Kong]: University of Hong Kong, 1986. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12323652.

7

Nashimoto, Kane. "Multiple comparison techniques for order restricted models". Free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144445.

8

Bianchini, Germán. "Wildland Fire Prediction based on Statistical Analysis of Multiple Solutions". Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5762.

Abstract
In many different scientific areas, the use of models to represent physical systems has become a common strategy. These models receive input parameters representing particular conditions and provide an output representing the evolution of the system. Usually, these models are integrated in simulation tools that can be executed on a computer.
A particular case where models are very useful is the prediction of forest fire propagation. Forest fire is a very significant hazard that every year provokes huge losses from the environmental, economic, social and human points of view. Dry and hot seasons in particular seriously increase the risk of forest fires in the Mediterranean area. Therefore, models are very relevant for estimating fire risk and predicting fire behavior.
However, in many cases models present a series of limitations, usually due to the need for a large number of input parameters. Many of these parameters present some uncertainty, because it is impossible to measure all of them in real time, and they must be estimated from indirect measurements. Moreover, in most cases these models cannot be solved analytically and must be solved by numerical methods that are only an approximation of reality (without even considering the additional limitations introduced when these solutions are implemented on computers).
Several methods based on data assimilation have been developed to optimize the input parameters. In general, these methods operate over a large number of input parameters and, by means of some kind of optimization, focus on finding a unique parameter set that describes the previous behavior in the best possible way. The hope is that the same set of values can then be used to describe the immediate future.
However, this kind of prediction is based on a single set of parameter values and, as noted above, for parameters that present a dynamic behavior the optimized values may not be adequate for the next step.
The objective of this work is to propose an alternative method. Our method, called the Statistical System for Forest Fire Management, is based on statistical concepts. Its goal is to find a pattern of forest fire behavior, independently of the parameter values. In this method, each parameter is represented by a range of values with a particular cardinality. All possible scenarios resulting from all possible combinations of input parameter values are generated, and the propagation for each scenario is evaluated. All results are statistically aggregated to determine the burning probability of each area. This aggregation is used to predict the burned area at the next step.
To validate our method, we use a set of real prescribed burnings. Furthermore, we compare our method against two other methods. One of them, GLUE (Generalized Likelihood Uncertainty Estimation), was implemented by us for this work and corresponds to an adaptation of a hydrological method. The other (the Evolutionary method) is a genetic algorithm previously developed and implemented by our research team.
The proposed system requires a large number of simulations, which is why we decided to use a parallel scheme to implement it. This way of working differs from the traditional scheme of theory and experiment, which is the common form of science and engineering. The scientific computing approach is in continuous expansion, mainly through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs that model the systems under study. This methodology is creating a new branch of science based on computational methods that is growing very fast. This approach is called Computational Science.
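A minimal sketch of the scenario-aggregation idea follows, assuming a hypothetical stand-in simulator `simulate_fire` (the real system uses a full fire-propagation model and a parallel implementation):

```python
import numpy as np
from itertools import product

# Hypothetical stand-in for a fire-propagation simulator: returns a boolean
# grid of burned cells for one input-parameter scenario.
def simulate_fire(wind_speed, moisture, grid_shape=(50, 50), seed=0):
    rng = np.random.default_rng(seed + hash((wind_speed, moisture)) % 1000)
    burn_prob = min(0.9, 0.1 + 0.05 * wind_speed - 0.3 * moisture)
    return rng.uniform(size=grid_shape) < max(burn_prob, 0.0)

# Each uncertain parameter is a range of values with a chosen cardinality.
wind_values = np.linspace(2.0, 10.0, 5)
moisture_values = np.linspace(0.05, 0.30, 4)

# Evaluate every scenario and aggregate statistically: the per-cell
# frequency of burning approximates the probability that the cell burns.
maps = [simulate_fire(w, m) for w, m in product(wind_values, moisture_values)]
burn_probability = np.mean(maps, axis=0)   # values in [0, 1] per cell
prediction = burn_probability >= 0.5       # e.g., threshold for predicted area
```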
9

Miller, Christopher Ryan "Red". "Statistical analysis of wireless networks predicting performance in multiple environments". Naval Postgraduate School, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Jun%5FMiller.pdf.

Abstract
Thesis (M.S. in Applied Science (Operations Research)), Naval Postgraduate School, June 2006. Thesis advisor: David Annis. Includes bibliographical references (p. 57). Also available in print.
10

Miller, Christopher Ryan. "Statistical analysis of wireless networks predicting performance in multiple environments". Thesis, Monterey, California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/2817.

Abstract
With the advent of easily accessible, deployable, and usable 802.11 technology, users can connect and network with practically any infrastructure that exists today. Given that simplicity and ease of use, it seems logical that military and tactical users should also employ these technologies. The questions regarding 802.11 network performance in hostile and signal-unfriendly environments (i.e., high temperature and high humidity) have yet to be answered. The goal of this thesis is to quantify 802.11 network capability, in terms of throughput, while employed in those areas. Ultimately, the objective is to produce statistical models able to represent variations in the 802.11 signal and network due to those environmental factors.
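A hedged sketch of the kind of environmental regression the abstract describes, with simulated stand-in measurements (the thesis fits models to field data):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical measurements: throughput (Mbps) with temperature (C) and
# relative humidity (%) as environmental covariates.
rng = np.random.default_rng(1)
temp = rng.uniform(20, 45, 200)
humidity = rng.uniform(30, 95, 200)
throughput = 20 - 0.08 * temp - 0.05 * humidity + rng.normal(0, 1.0, 200)

# Ordinary least squares: throughput ~ temperature + humidity.
X = sm.add_constant(np.column_stack([temp, humidity]))
model = sm.OLS(throughput, X).fit()
print(model.summary())
```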
11

Rao, Youlan. "Statistical Analysis of Microarray Experiments in Pharmacogenomics". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244756072.

12

Merlotti, Alessandra. "DNA sequence analysis: a statistical characterization of dinucleotides interdistances across multiple organisms". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13518/.

Abstract
In this work we adopted an interdistance-based approach to study the statistical properties of the different dinucleotides, on the premise that there is a relationship between the positions they occupy in the genome and the biological function they perform. Eighteen model organisms belonging to different classes were studied, and the results showed a clear difference between the CG interdistance distributions of mammals and those of the non-CG dinucleotides; in non-mammals the difference turned out to be milder, and in some cases absent. In particular, the CG distributions of mammals are well described by a gamma distribution, whereas in non-mammals this behavior was found in only a few cases. CG dinucleotides in mammals were also found to be less numerous than non-CG ones, so a null model was developed to try to account for this discrepancy, attributing it to random mutations of a single base. The model was applied to the dinucleotides of Homo sapiens, and the results showed that only the AT and TA distributions resemble that of the CGs, and that the process is irreversible. Finally, plotting the scale parameter of the gamma distribution obtained from the fit against the shape parameter made it possible to distinguish the different classes of organisms, both for the original data and for a larger set; the results also show a linear relationship, on a double logarithmic scale, between the scale parameter and the CG percentage of the analyzed sequence. The study therefore confirmed the existence of a relationship between the positions CGs occupy in genomes and the biological function they perform.
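A minimal sketch of the gamma-fitting step, assuming the interdistances have already been extracted (here they are simulated stand-ins; in the thesis they come from scanning a genome for CG dinucleotides):

```python
import numpy as np
from scipy import stats

# Hypothetical CG interdistances (in base pairs).
rng = np.random.default_rng(2)
interdistances = rng.gamma(shape=0.6, scale=150.0, size=5000)

# Fit a gamma distribution (location fixed at 0) and inspect the
# shape/scale parameters used to separate organism classes.
shape, loc, scale = stats.gamma.fit(interdistances, floc=0)
print(f"shape={shape:.3f}, scale={scale:.1f}")
```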
13

Zhong, Wei. "Statistical Approaches to Analyze Censored Data with Multiple Detection Limits". University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1130204124.

14

Li, Xuan. "Statistical analysis and reduction of multiple access interference in MC-CDMA systems". Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/30352/1/Xuan_Li_Thesis.pdf.

Abstract
Multicarrier code division multiple access (MC-CDMA) is a very promising candidate for the multiple access scheme in fourth generation wireless communication systems. During asynchronous transmission, multiple access interference (MAI) is a major challenge for MC-CDMA systems and significantly affects their performance. The main objectives of this thesis are to analyze the MAI in asynchronous MC-CDMA and to develop robust techniques to reduce the MAI effect. Focus is first on the statistical analysis of MAI in asynchronous MC-CDMA. A new statistical model of MAI is developed, in which the derivation of MAI can be applied to different distributions of timing offset and the MAI power is modelled as a Gamma distributed random variable. By applying the new statistical model of MAI, a new computer simulation model is proposed. This model represents a multiuser system as a single user system followed by an additive noise component representing the MAI, which enables the new simulation model to significantly reduce the computation load during computer simulations. MAI reduction using the slow frequency hopping (SFH) technique is the topic of the second part of the thesis. Two subsystems are considered. The first subsystem involves subcarrier frequency hopping as a group, referred to as GSFH/MC-CDMA. In the second subsystem, the condition of group hopping is dropped, resulting in a more general system, namely individual subcarrier frequency hopping MC-CDMA (ISFH/MC-CDMA). This research found that with the introduction of SFH, both the GSFH/MC-CDMA and ISFH/MC-CDMA systems generate less MAI power than the basic MC-CDMA system during asynchronous transmission. Because of this, both SFH systems are shown to outperform MC-CDMA in terms of BER. This improvement, however, comes at the expense of spectral widening. In the third part of this thesis, base station polarization diversity is introduced to asynchronous MC-CDMA as another MAI reduction technique. The combined system is referred to as Pol/MC-CDMA. In this part a new optimum combining technique, namely maximal signal-to-MAI ratio combining (MSMAIRC), is proposed to combine the signals of two base station antennas. With the application of MSMAIRC, and in the absence of additive white Gaussian noise (AWGN), the resulting signal-to-MAI ratio (SMAIR) is not only maximized but also independent of the cross polarization discrimination (XPD) and antenna angle. When AWGN is present, the performance of MSMAIRC is still affected by the XPD and antenna angle, but to a much lesser degree than traditional maximal ratio combining (MRC). Furthermore, this research found that the BER performance of Pol/MC-CDMA can be further improved by changing the angle between the two receiving antennas. Hence the optimum antenna angles for both MSMAIRC and MRC are derived and their effects on the BER performance are compared. With the derived optimum antenna angle, the Pol/MC-CDMA system is able to obtain the lowest BER for a given XPD.
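A minimal sketch of the reduced simulation model described above, with illustrative stand-in parameters: the multiuser system is modelled as a single BPSK user plus an additive noise term whose power is drawn from a Gamma distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n_symbols = 10_000
symbols = rng.choice([-1.0, 1.0], size=n_symbols)   # BPSK symbols

# MAI power drawn per symbol from a Gamma distribution, then realized as
# zero-mean Gaussian noise with that power (variance); shape/scale values
# are illustrative, not taken from the thesis.
mai_power = rng.gamma(shape=2.0, scale=0.05, size=n_symbols)
mai = rng.normal(0.0, np.sqrt(mai_power))

received = symbols + mai
ber = np.mean(np.sign(received) != symbols)
print(f"BER with MAI only (no AWGN): {ber:.4f}")
```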
15

Li, Xuan. "Statistical analysis and reduction of multiple access interference in MC-CDMA systems". Queensland University of Technology, 2008. http://eprints.qut.edu.au/30352/.

16

Tao, Hui. "An Investigation of False Discovery Rates in Multiple Testing under Dependence". Fogler Library, University of Maine, 2005. http://www.library.umaine.edu/theses/pdf/TaoH2005.pdf.

17

Claggett, Brian Lee. "Statistical Methods for Clinical Trials with Multiple Outcomes, HIV Surveillance, and Nonparametric Meta-Analysis". Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10440.

Abstract
Central to the goals of public health are obtaining and interpreting timely and relevant information for the benefit of humanity. In this dissertation, we propose methods to monitor and assess the spread of HIV more rapidly, as well as to improve decisions regarding patient treatment options. In Chapter 1, we propose a method, extending the previously proposed dual-testing algorithm and augmented cross-sectional design, for estimating the HIV incidence rate in a particular community. Compared to existing methods, our proposed estimator allows for shorter follow-up time and does not require estimation of the mean window period, a crucial, but often unknown, parameter. The estimator performs well in a wide range of simulation settings. We discuss when this estimator would be expected to perform well and offer design considerations for the implementation of such a study. Chapters 2 and 3 are concerned with obtaining a more complete understanding of the impact of treatment in randomized clinical trials in which multiple patient outcomes are recorded. Chapter 2 provides an illustration of methods that may be used to address concerns of both risk-benefit analysis and personalized medicine simultaneously, with a goal of successfully identifying patients who will be ideal candidates for future treatment. Risk-benefit analysis is intended to address the multivariate nature of patient outcomes, while "personalized medicine" is concerned with patient heterogeneity, both of which complicate the determination of a treatment's usefulness. A third complicating factor is the duration of treatment use. Chapter 3 features proposed methods for assessing the impact of treatment as a function of time, as well as methods for summarizing the impact of treatment across a range of follow-up times. Chapter 4 addresses the issue of meta-analysis, a commonly used tool for combining information from multiple independent studies, primarily for the purpose of answering a clinical question not suitably addressed by any one single study. This approach has proven highly useful and attractive in recent years, but often relies on parametric assumptions that cannot be verified. We propose a non-parametric approach to meta-analysis, valid in a wider range of scenarios, minimizing concerns over compromised validity.
18

An, Qian. "A Monte Carlo study of several alpha-adjustment procedures used in testing multiple hypotheses in factorial ANOVA". Ohio University, 2010. http://www.ohiolink.edu/etd/view.cgi?ohiou1269439475.

19

Casadei, Francesco. "Statistical analysis of genetic and epigenetic features in cancer cells". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24388/.

Abstract
Cancer is one of the leading causes of death in almost every country; for 2020, the WHO estimated 19.3 million new cases and 10 million cancer deaths worldwide. The onset of a tumor is often accompanied by a set of genetic and epigenetic alterations, whose understanding can have both a diagnostic role and prognostic power for targeted treatments. The spread of NGS platforms, which make it possible to sequence an entire human genome in a short time and at relatively low cost, and the use of statistical methods that help in dealing with such huge amounts of data and in finding hidden relationships, have a crucial role in the development of precision medicine. The present thesis work consists of two projects. The first is a study of point mutations and methylation in a cohort of patients diagnosed with Glioblastoma (GBM), involving both the Illumina (ILM) sequencing-by-synthesis platform and Oxford Nanopore Technologies (ONT). The second is an application of the Dirichlet Process, a statistical learning method, to a set of Multiple Myeloma (MM) patients characterized by Copy Number Variant (CNV) measures. The study of GBM patients resulted in a characterization of mutated target genes and methylated regions of MGMT, which is involved in the cancer's evolution. Moreover, this project confirmed that results from ILM data and ONT agree, giving the opportunity to use ONT for long-read sequencing. This approach will reduce misalignment issues when repeats and pseudogenes are present, and allows for the identification of point variants far from each other on the same chromosome. In the second project, the use of two Hierarchical Dirichlet Clustering approaches made it possible to identify groups of MM patients with similar CNV evolution between diagnosis and post-treatment relapse. The results confirmed the high CNV variability of MM and show that its progression cannot simply be explained by clinical parameters describing the therapy carried out and the patient's response.
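The thesis uses Hierarchical Dirichlet Clustering; as a simpler stand-in, a truncated Dirichlet-process Gaussian mixture (scikit-learn's BayesianGaussianMixture) can infer the number of patient groups from hypothetical CNV data:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical CNV feature matrix: one row per patient, columns are copy
# number measures for a set of genomic regions (values around 2 = diploid).
rng = np.random.default_rng(4)
cnv = np.vstack([
    rng.normal(2.0, 0.2, size=(40, 6)),   # stable patients
    rng.normal(3.0, 0.3, size=(25, 6)),   # patients with gains
])

# A truncated Dirichlet-process mixture: the concentration prior lets the
# model switch off unneeded components, so the number of patient groups is
# inferred rather than fixed in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(cnv)
labels = dpgmm.predict(cnv)
print(np.bincount(labels))
```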
20

Wang, Zhenrui. "Statistical Analysis of Operational Data for Manufacturing System Performance Improvement". Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301673.

Abstract
The performance of a manufacturing system relies on four types of elements: operators, machines, the computer system and the material handling system. To ensure the performance of these elements, operational data containing various kinds of information are collected for monitoring and analysis. This dissertation focuses on operator performance evaluation and machine failure prediction. The proposed research is motivated by the following challenges in analyzing operational data: (i) the complex relationships between variables, (ii) implicit information important to failure prediction, and (iii) data with outliers and missing or erroneous measurements. To overcome these challenges, the following research has been conducted. To compare operator performance, a methodology combining regression modeling and a multiple comparisons technique is proposed. The regression model quantifies and removes the complex effects of other impacting factors on operator performance. A robust zero-inflated Poisson (ZIP) model is developed to reduce the impact of the excessive zeros and outliers in the performance metric, i.e. the number of defects (NoD), on the regression analysis. The model residuals are plotted in non-parametric statistical charts for performance comparison. The estimated model coefficients are also used to identify under-performing machines. To detect temporal patterns in operational data sequences, an algorithm is proposed for detecting interval-based asynchronous periodic patterns (APP). The algorithm effectively and efficiently detects patterns through a modified clustering and a convolution-based template matching method. To predict machine failures based on covariates with erroneous measurements, a new method is proposed for statistical inference of the proportional hazards model under a mixture of classical and Berkson errors. The method estimates the model coefficients with an expectation-maximization (EM) algorithm whose expectation step is achieved by Monte Carlo simulation. The model estimated with the proposed method improves the accuracy of inference on machine failure probability. The research presented in this dissertation provides a package of solutions to improve manufacturing system performance. The effectiveness and efficiency of the proposed methodologies have been demonstrated and justified with both numerical simulations and real-world case studies.
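A hedged sketch of a (non-robust) zero-inflated Poisson fit on simulated defect counts, using statsmodels; the dissertation's robust ZIP and EM machinery are not reproduced here:

```python
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Hypothetical operator data: number of defects (NoD) with excess zeros and
# a single impacting factor as covariate (a stand-in for the real factors).
rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
structural_zero = rng.uniform(size=n) < 0.4
counts = np.where(structural_zero, 0, rng.poisson(np.exp(0.5 + 0.3 * x)))

exog = np.column_stack([np.ones(n), x])        # intercept + covariate
model = ZeroInflatedPoisson(counts, exog, exog_infl=np.ones((n, 1)))
result = model.fit(maxiter=200, disp=False)
print(result.params)                           # inflation and count parts
```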
21

Heeb, Thomas Gregory. "Examination of turbulent mixing with multiple second order chemical reactions by the statistical analysis technique". The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487267024995615.

22

Yazdani, Akram. "Statistical Approaches in Genome-Wide Association Studies". Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423743.

Abstract
Genome-wide association studies (GWAS) typically contain hundreds of thousands of single nucleotide polymorphisms (SNPs) genotyped on a small number of samples. The aim of these studies is to identify regions harboring SNPs associated with the outcomes of interest, or to predict those outcomes. Since the number of predictors in a GWAS far exceeds the number of samples, it is impossible to analyze the data with classical statistical methods. In current GWAS, the widely applied methods are based on single marker analysis, which assesses the association of each SNP with the complex trait independently. Because of the low power of this analysis for detecting true associations, simultaneous analysis has recently received more attention. The new statistical methods for simultaneous analysis in high dimensional settings are limited by the disparity between the number of predictors and the number of samples. Therefore, reducing the dimensionality of the set of SNPs is required. This thesis reviews single marker analysis and simultaneous analysis with a focus on Bayesian methods. It addresses the weaknesses of these approaches with reference to recent literature and illustrative simulation studies. To bypass these problems, we first attempt to reduce the dimension of the set of SNPs with a random projection technique. Since this method does not improve the predictive performance of the model, we present a new two-stage approach that is a hybrid of single and simultaneous analyses. This fully Bayesian approach selects the most promising SNPs in the first stage by evaluating the impact of each marker independently. In the second stage, we develop a hierarchical Bayesian model to analyze the impact of the selected markers simultaneously. The model, which accounts for related samples, places a local-global shrinkage prior on the marker effects in order to shrink small effects to zero while keeping large effects relatively large. The prior specification on marker effects, a hierarchical representation of the generalized double Pareto distribution, improves the predictive performance. Finally, we present the results of a real SNP-data analysis using both the single-marker study and the new two-stage approach.
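A toy sketch of the two-stage idea on simulated data: stage 1 screens markers with independent marginal tests; stage 2 re-fits the selected markers jointly, with a ridge penalty standing in for the hierarchical local-global shrinkage prior used in the thesis:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Ridge

# Hypothetical GWAS design: n samples, p SNPs coded 0/1/2, with p >> n.
rng = np.random.default_rng(6)
n, p = 200, 5000
G = rng.integers(0, 3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:10] = 0.8                       # 10 truly associated SNPs
y = G @ beta + rng.normal(size=n)

# Stage 1 (single-marker screening): an independent marginal test per SNP;
# keep the most promising markers.
r = np.array([stats.pearsonr(G[:, j], y)[0] for j in range(p)])
t = r * np.sqrt((n - 2) / (1 - r ** 2))
pvals = 2 * stats.t.sf(np.abs(t), df=n - 2)
keep = np.argsort(pvals)[:50]

# Stage 2 (simultaneous analysis of the selected SNPs): ridge shrinkage as
# a stand-in for the generalized double Pareto prior.
joint = Ridge(alpha=1.0).fit(G[:, keep], y)
top = keep[np.argsort(-np.abs(joint.coef_))[:10]]
print(sorted(top))                    # mostly among the first 10 SNPs
```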
23

Chou, Shih-Hsiung. "Quality engineering applications on single and multiple nonlinear profiles". Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17214.

Abstract
Doctor of Philosophy, Department of Industrial and Manufacturing Systems Engineering. Advisor: Shing I. Chang.
Profile analysis has drawn attention in quality engineering applications due to the growing use of sensors and information technologies. Unlike conventional quality characteristics of interest, a profile is a functional relationship involving one or more explanatory variables, and a single profile may contain hundreds or thousands of data points; conventional charting tools cannot handle such high-dimensional datasets. In this dissertation, six unsolved issues are investigated. First, Chang and Yadama's method (2010) shows competitive results in nonlinear profile monitoring, but the effectiveness of removing noise from a given nonlinear profile by B-spline fitting, with and without wavelet transformation, is unclear. Second, much research on profile analysis considers either a profile shape change only or a variance change only; those methods cannot identify whether the process is out of control due to a mean shift or a variance shift. Third, methods for detecting profile shape changes always assume that a gold standard profile exists, so the existing methods are hard to implement directly. Fourth, multiple nonlinear profiles may exist in real-world applications, so conventional single-profile analysis methods may produce a high false alarm rate in the multiple-profile scenario. Fifth, multiple nonlinear profiles may also arise in designed experiments; in a conventional experimental design, the response variable is usually a single value or a vector, and the conventional approach cannot handle a response in the form of multiple nonlinear profiles. Finally, profile fault diagnosis is an important step after an out-of-control signal is detected, but current approaches lead to a large number of combinations if the number of sections is large. The organization of this dissertation is as follows. Chapter 1 introduces profile analysis, current solutions, and challenges; Chapters 2 to 4 explore the unsolved challenges in single profile analysis; Chapters 5 and 6 investigate multiple-profile issues in profile monitoring and experimental design. Chapter 7 proposes a novel high-dimensional diagnosis control chart to diagnose the cause of an out-of-control signal via a visualization aid. Finally, Chapter 8 summarizes the achievements and contributions of this research.
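A brief sketch of the B-spline smoothing step mentioned above (without the wavelet transformation), on a simulated noisy profile; the smoothing factor is an illustrative choice:

```python
import numpy as np
from scipy.interpolate import splrep, BSpline

# Hypothetical nonlinear profile: a smooth signal sampled at 500 points
# with measurement noise, as in the profile-monitoring setting above.
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 500)
profile = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
observed = profile + rng.normal(0.0, 0.15, size=x.size)

# Smoothing B-spline fit; the smoothing factor s trades fidelity for
# noise removal.
tck = splrep(x, observed, s=5.0)
smoothed = BSpline(*tck)(x)
print(np.sqrt(np.mean((smoothed - profile) ** 2)))   # reconstruction RMSE
```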
24

Kuo, Yong-Fang. "Statistical Methods for Determining Single or Multiple Cutpoints of Risk Factors in Survival Data Analysis". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1394728637.

25

Kuo, Yong-Fang. "Statistical methods for determining single or multiple cutpoints of risk factors in survival data analysis". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487945015616444.

26

Yousef, Mohammed A. "Astrostatistics: Statistical Analysis of Solar Activity from 1939 to 2008". Bowling Green State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1395405508.

27

Fiero, Mallorie H. "Statistical Approaches for Handling Missing Data in Cluster Randomized Trials". Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612860.

Abstract
In cluster randomized trials (CRTs), groups of participants are randomized as opposed to individual participants. This design is often chosen to minimize treatment arm contamination or to enhance compliance among participants. In CRTs, we cannot assume independence among individuals within the same cluster because of their similarity, which leads to decreased statistical power compared to individually randomized trials. The intracluster correlation coefficient (ICC) is crucial in the design and analysis of CRTs, and measures the proportion of total variance due to clustering. Missing data are a common problem in CRTs and should be accommodated with appropriate statistical techniques, because they can compromise the advantages created by randomization and are a potential source of bias. In three papers, I investigate statistical approaches for handling missing data in CRTs. In the first paper, I carry out a systematic review evaluating current practice in handling missing data in CRTs. The results show high rates of missing data in the majority of CRTs, yet handling of missing data remains suboptimal. Fourteen (16%) of the 86 reviewed trials reported carrying out a sensitivity analysis for missing data. Despite suggestions to weaken the missing data assumption from the primary analysis, only five of the trials weakened the assumption. None of the trials reported using missing not at random (MNAR) models. Due to the low proportion of CRTs reporting an appropriate sensitivity analysis for missing data, the second paper aims to facilitate performing such a sensitivity analysis by extending the pattern mixture approach for missing clustered data under the MNAR assumption. I implement multilevel multiple imputation (MI) in order to account for the hierarchical structure found in CRTs, and multiply imputed values by a sensitivity parameter, k, to examine parameters of interest under different missing data assumptions. The simulation results show that estimates of parameters of interest in CRTs can vary widely under different missing data assumptions. A high proportion of missing data can occur in CRTs because missing data can be found at the individual level as well as the cluster level. In the third paper, I use a simulation study to compare missing data strategies for handling missing cluster-level covariates, including the linear mixed effects model, single imputation, single-level MI ignoring clustering, MI incorporating clusters as fixed effects, and MI at the cluster level using aggregated data. The results show that when the ICC is small (ICC ≤ 0.1) and the proportion of missing data is low (≤ 25%), the mixed model generates unbiased estimates of regression coefficients and ICC. When the ICC is higher (ICC > 0.1), MI at the cluster level using aggregated data performs well for missing cluster-level covariates, though caution should be taken if the percentage of missing data is high.
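A single-level toy sketch of the sensitivity mechanism described above: imputed values are multiplied by k to explore MNAR departures, with k = 1 recovering the MAR analysis (the paper's implementation uses multilevel MI that respects the cluster hierarchy):

```python
import numpy as np

rng = np.random.default_rng(8)
y = rng.normal(10.0, 2.0, size=200)
missing = rng.uniform(size=y.size) < 0.25
y_obs = np.where(missing, np.nan, y)

def mnar_estimate(y_obs, missing, k, n_imputations=20, rng=rng):
    estimates = []
    for _ in range(n_imputations):
        # Simple MAR imputation: draw from the observed-data distribution.
        draws = rng.normal(np.nanmean(y_obs), np.nanstd(y_obs),
                           size=missing.sum())
        completed = y_obs.copy()
        completed[missing] = k * draws          # MNAR tilt via k
        estimates.append(completed.mean())
    return np.mean(estimates)

for k in (0.8, 1.0, 1.2):
    print(k, round(mnar_estimate(y_obs, missing, k), 3))
```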
28

Sultana, Mir Samia. "Toward better understanding of mechanical response of fabrics under multiple combined loading modes : experimental and statistical analysis". Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54556.

Abstract
Fabric reinforced composites are becoming primary materials of choice in manufacturing damage tolerant aerospace, automotive, and naval architectural parts. Detailed characterization of fabric reinforcements, however, is necessary to ensure the quality of such composite parts and to prevent structural failure during their service. A number of experimental studies have been dedicated in the past to characterizing the deformation of fabrics under individual loading modes, such as pure uniaxial tension, pure biaxial tension and pure shear. There is still, however, a lack of knowledge and standardization in testing and analyzing the mechanical response of fabrics under combined shear-tension loadings, in both simultaneous and sequential modes. Moreover, in reality, there are sources of uncertainty in the forming of these multi-scale fibrous materials, which often result in non-repeatable test data and cause inconsistencies in full characterization. Recognizing the above gaps, the aim of this thesis has been to design, conduct, and analyze a set of experiments for enhanced characterization of a typical glass fabric under selected individual and combined shear-biaxial tension loading modes. The experimental tests were performed using a new fixture recently designed and manufactured by the Composites & Optimization Laboratory at UBC and its international partners. On account of inherent material uncertainties, all tested deformation modes were analyzed and compared via a series of ANOVA analyses. Results showed statistically significant differences between the warp and weft responses of the fabric under all the deformation modes, with weft yarns being generally stiffer. The shear-tension coupling effect in combined deformation modes yielded higher normal axial and shear forces compared to the individual deformation modes, and more severe local damage zones were observed during the coupling tests. Finally, a Digital Image Correlation test was conducted to inspect wrinkling in the deformed specimens. Under a pure shear mode, some out-of-plane wrinkles appeared due to misalignment, whereas in the simultaneous loading condition they nearly disappeared, thanks to the presence of fiber tension.
Faculty of Applied Science, School of Engineering (Okanagan). Graduate.
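For illustration, a warp-versus-weft comparison of the kind reported above can be run as a one-way ANOVA; the measurements below are invented stand-ins, not data from the thesis:

```python
import numpy as np
from scipy import stats

# Hypothetical peak-load measurements (N) from repeated uniaxial tension
# tests along the warp and weft directions.
warp = np.array([412., 398., 405., 420., 401., 415.])
weft = np.array([455., 448., 462., 451., 458., 446.])

f_stat, p_value = stats.f_oneway(warp, weft)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```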
29

Chevallier, Juliette. "Statistical models and stochastic algorithms for the analysis of longitudinal Riemannian manifold valued data with multiple dynamic". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX059/document.

Abstract
Beyond cross-sectional studies, the temporal evolution of phenomena is a field of growing interest. For the purpose of understanding a phenomenon, it appears more suitable to compare the evolution of its markers over time than to do so at a given stage. The follow-up of neurodegenerative disorders is carried out via the monitoring of cognitive scores over time. The same applies to chemotherapy monitoring: rather than tumor aspect or size, oncologists assess that a given treatment is efficient from the moment it results in a decrease of tumor volume. The study of longitudinal data is not restricted to medical applications and proves successful in various fields of application such as computer vision, automatic detection of facial emotions, social sciences, etc. Mixed effects models have proved their efficiency in the study of longitudinal data sets, especially for medical purposes. Recent works (Schiratti et al., 2015, 2017) allowed the study of complex data, such as anatomical data. The underlying idea is to model the temporal progression of a given phenomenon by continuous trajectories in a space of measurements, which is assumed to be a Riemannian manifold. Then, both a group-representative trajectory and the inter-individual variability are estimated. However, these works assume a unidirectional dynamic and fail to encompass situations like multiple sclerosis or chemotherapy monitoring. Indeed, such diseases follow a chronic course, with phases of worsening, stabilization and improvement, inducing changes in the global dynamic. This thesis is devoted to the development of methodological tools and algorithms suited to the analysis of longitudinal data arising from phenomena that undergo multiple dynamics, and to their application to chemotherapy monitoring. We propose a nonlinear mixed effects model which allows us to estimate a representative piecewise-geodesic trajectory of the global progression together with the spatial and temporal inter-individual variability. Particular attention is paid to the estimation of the correlation between the different phases of the evolution. This model provides a generic and coherent framework for studying longitudinal manifold-valued data. Estimation is formulated as a well-defined maximum a posteriori problem which we prove to be consistent under mild assumptions. Numerically, due to the non-linearity of the proposed model, the estimation of the parameters is performed through a stochastic version of the EM algorithm, namely the Markov chain Monte Carlo stochastic approximation EM (MCMC-SAEM). The convergence of the SAEM algorithm toward local maxima of the observed likelihood has been proved and its numerical efficiency has been demonstrated. However, despite appealing features, the limit position of this algorithm can strongly depend on its starting position. To cope with this issue, we propose a new version of the SAEM in which we do not sample from the exact distribution in the expectation phase of the procedure. We first prove the convergence of this algorithm toward local maxima of the observed likelihood. Then, in the spirit of simulated annealing, we propose an instantiation of this general procedure to favor convergence toward global maxima: the tempering-SAEM.
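A toy SAEM illustration on a simple latent Gaussian model (not the manifold-valued model of the thesis), showing the simulation and stochastic-approximation steps:

```python
import numpy as np

# Toy model: z_i ~ N(theta, 1), y_i = z_i + eps_i with eps_i ~ N(0, 1),
# so the conditional law is z_i | y_i ~ N((y_i + theta) / 2, 1/2).
rng = np.random.default_rng(9)
theta_true = 3.0
y = rng.normal(theta_true, 1.0, size=500) + rng.normal(0.0, 1.0, size=500)

theta, s = 0.0, 0.0
for k in range(1, 201):
    # S-step: simulate the latent variables from their conditional law.
    z = rng.normal((y + theta) / 2.0, np.sqrt(0.5))
    # SA-step: stochastic approximation of the sufficient statistic,
    # with step sizes satisfying sum gamma_k = inf, sum gamma_k^2 < inf.
    gamma = 1.0 / k
    s = s + gamma * (z.mean() - s)
    # M-step: for this model the update is simply theta = s.
    theta = s
print(theta)   # close to theta_true
```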
30

Manandhr-Shrestha, Nabin K. "Statistical Learning and Behrens Fisher Distribution Methods for Heteroscedastic Data in Microarray Analysis". Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3513.

Abstract
The aim of the present study is to identify the differentially expressed genes between two different conditions and to apply this in predicting the class of new samples using microarray data. Microarray data analysis poses many challenges to statisticians because of its high dimensionality and small sample size, dubbed the "small n, large p" problem. Microarray data have been extensively studied by many statisticians and geneticists. The data are generally assumed to follow a normal distribution with equal variances in the two conditions, but this is not true in general. Since the number of replications is very small, the sample estimates of the variances are not appropriate for testing; we therefore consider a Bayesian approach to approximate the variances in the two conditions. Because the number of genes to be tested is usually large and the test is repeated thousands of times, there is a multiplicity problem. To remove the defect arising from multiple comparisons, we use the False Discovery Rate (FDR) correction. When a hypothesis test is applied repeatedly, gene by gene, to several thousand genes, there is a great chance of selecting false genes as differentially expressed, even when the significance level is set very small. For the test to be reliable, the probability of selecting true positives should be high. To control the false positive rate, we applied the FDR correction, in which the p-value for each gene is compared with its corresponding threshold; a gene is then declared differentially expressed if its p-value is less than the threshold. We have developed a new method of selecting informative genes based on the Bayesian version of the Behrens-Fisher distribution, which assumes unequal variances in the two conditions. Since the assumption of equal variances fails in most situations, and equal variance is a special case of unequal variance, we address the problem of finding differentially expressed genes in the unequal-variance case. We found that the developed method selects the truly expressed genes in simulated data, and we compared it with recent methods such as Fox and Dimmic's t-test method and Tusher and Tibshirani's SAM method, among others. 
The next step of this research is to check whether the genes selected by the proposed Behrens-Fisher method are useful for the classification of samples. Using the genes selected by the proposed method, which combines Behrens-Fisher gene selection with other statistical learning methods, we obtained better classification results. The reason is the method's ability to select genes based on both prior knowledge and the data. In microarray data, because of the small sample size and the large number of variables, the sample covariance estimate is unreliable in the sense that it is not positive definite and not invertible, so we derived the Bayesian version of the Behrens-Fisher distribution to remove that insufficiency. The efficiency of the established method is demonstrated by applying it to three real microarray data sets and calculating the misclassification error rates on the corresponding test sets. Moreover, we compared our results with other popular methods from the literature, such as Nearest Shrunken Centroid and Support Vector Machines. We studied the classification performance of different classifiers before and after taking the correlation between the genes into account; performance improved significantly once the correlation was accounted for. Classification performance was measured by misclassification rates and the confusion matrix. Another problem in the multiple testing of a large number of hypotheses is the correlation among the test statistics, which we also take into account. If there were no correlation, the shape of the normalized histogram of the test statistics would be unaffected; as shown by Efron, the degree of correlation among the test statistics either widens or shrinks the tails of that histogram. Thus, the usual rejection region obtained from the significance level is not sufficient; it should be redefined according to the degree of correlation. The effect of the correlation on selecting an appropriate rejection region has also been studied.
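The thesis's Bayesian Behrens-Fisher code is not reproduced in the abstract; as a minimal sketch of the two ingredients it describes, an unequal-variance (Welch) test per gene and a Benjamini-Hochberg FDR threshold, the following Python fragment may help. All data shapes, effect sizes, and the number of truly expressed genes are invented for illustration.

```python
# Minimal sketch (not the thesis's method): per-gene Welch t-tests, which
# allow unequal variances as in the Behrens-Fisher problem, followed by a
# Benjamini-Hochberg FDR threshold. Data shapes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n1, n2 = 5000, 5, 5                   # "small n, large p"
x = rng.normal(0.0, 1.0, size=(n_genes, n1))   # condition 1 expression
y = rng.normal(0.0, 2.0, size=(n_genes, n2))   # condition 2, unequal variance
y[:100] += 3.0                                 # 100 truly expressed genes

# Welch t-test per gene (equal_var=False handles unequal variances)
_, pvals = stats.ttest_ind(x, y, axis=1, equal_var=False)

# Benjamini-Hochberg: compare sorted p-values with their thresholds i*q/m
q, m = 0.05, n_genes
order = np.argsort(pvals)
below = pvals[order] <= q * np.arange(1, m + 1) / m
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
selected = order[:k]                           # declared differentially expressed
print(f"{len(selected)} genes pass the FDR threshold")
```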
31

McCaskie, Pamela Ann. "Multiple-imputation approaches to haplotypic analysis of population-based data with applications to cardiovascular disease". University of Western Australia. School of Population Health, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0160.

Full text
Abstract
[Truncated abstract] This thesis investigates novel methods for the genetic association analysis of haplotype data in samples of unrelated individuals, and applies these methods to the analysis of coronary heart disease and related phenotypes. Determining the inheritance pattern of genetic variants in studies of unrelated individuals can be problematic because family members of the studied individuals are often not available. For the analysis of individual genetic loci, no problem arises because the unit of interest is the observed genotype. When the unit of interest is the linear combination of alleles along one chromosome, inherited together as a haplotype, it is not always possible to determine the inheritance pattern with certainty, and therefore statistical methods to infer these patterns must be adopted. Due to genotypic heterozygosity, multiple possible haplotype configurations can often resolve an individual's genotype measures at multiple loci. When haplotypes are not known but are inferred statistically, an element of uncertainty is thus inherent which, if not dealt with appropriately, can result in unreliable estimates of effect sizes in an association setting. The core aim of the research described in this thesis was to develop and implement a general method for haplotype-based association analysis using multiple imputation to deal appropriately with uncertainty in haplotype assignment. Regression-based approaches to association analysis provide flexible methods to investigate the influence of a covariate on a response variable, adjusting for the effects of other variables including interaction terms. ... These methods are then applied to models accommodating binary, quantitative, longitudinal and survival data. The performance of the multiple imputation method implemented was assessed using simulated data under a range of haplotypic effect sizes and genetic inheritance patterns. The multiple imputation approach performed better, on average, than ignoring haplotypic uncertainty, and provided estimates that in most cases were similar to those observed when haplotypes were known. The haplotype association methods developed in this thesis were used to investigate the genetic epidemiology of cardiovascular disease, utilising data for the cholesteryl ester transfer protein (CETP) gene, the hepatic lipase (LIPC) gene and the 15-lipoxygenase (ALOX15) gene on a total of 6,487 individuals from three Western Australian studies. Results of these analyses suggested that single nucleotide polymorphisms (SNPs) and haplotypes in the CETP gene were associated with increased plasma high-density lipoprotein cholesterol (HDL-C). SNPs in the LIPC gene were also associated with increased HDL-C, and haplotypes in the ALOX15 gene were associated with risk of carotid plaque among individuals with premature CHD. The research presented in this thesis is both novel and important as it provides methods for the analysis of haplotypic associations with a range of response types, while incorporating information about the haplotype uncertainty inherent in population-based studies. These methods are shown to perform well for a range of simulated and real data situations, and have been written into a statistical analysis package that has been freely released to the research community.
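As a hedged illustration of the multiple-imputation logic the abstract describes (draw several plausible haplotype assignments, analyse each, pool with Rubin's rules), here is a minimal Python sketch. The posterior haplotype-dosage probabilities, sample sizes, and phenotype are hypothetical, and this is not the thesis's released package.

```python
# Sketch of multiple imputation with Rubin's rules (illustrative only). For
# each subject we draw a haplotype "dosage" from hypothetical posterior
# probabilities, fit an OLS regression per imputed dataset, then pool.
import numpy as np

rng = np.random.default_rng(1)
n, m_imp = 500, 20
# hypothetical posterior P(carrying 0/1/2 copies of the risk haplotype)
post = rng.dirichlet([8.0, 3.0, 1.0], size=n)
y = rng.normal(size=n)      # phenotype (independent of haplotype here,
                            # so the pooled estimate should be near zero)
betas, variances = [], []
for _ in range(m_imp):
    dosage = np.array([rng.choice(3, p=p) for p in post]).astype(float)
    X = np.column_stack([np.ones(n), dosage])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (n - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    betas.append(beta[1]); variances.append(cov[1, 1])

# Rubin's rules: total variance = within + (1 + 1/m) * between
qbar = np.mean(betas)
within = np.mean(variances)
between = np.var(betas, ddof=1)
total = within + (1 + 1 / m_imp) * between
print(f"pooled beta = {qbar:.3f}, SE = {np.sqrt(total):.3f}")
```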
32

Metawe, Saad Abdel-Hamid. "The Prediction of Industrial Bond Rating Changes: a Multiple Discriminant Model Versus a Statistical Decomposition Model". Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc332370/.

Full text
Abstract
The purpose of this study is to investigate the usefulness of statistical decomposition measures in the prediction of industrial bond rating changes, and to compare the predictive ability of decomposition measures with that of multiple discriminant analysis on the same sample. The problem addressed is twofold: it stems in general from the statistical problems associated with current techniques employed in the study of bond ratings, and in particular from the lack of attention to the study of bond rating changes. Two main hypotheses are tested. The first is that bond rating changes can be predicted from financial statement data. The second is that decomposition analysis can match the performance of multiple discriminant analysis in duplicating and predicting industrial bond rating changes. To explain and predict industrial bond rating changes, statistical decomposition measures were computed for each company in the sample. Based on these decomposition measures, two types of analyses were performed: (a) a univariate analysis, in which each decomposition measure was compared with an industry-average decomposition measure, and (b) a multivariate analysis, in which decomposition measures were used as independent variables in a linear probability model. In addition to statistical decomposition analysis, multiple discriminant analysis was used to duplicate and predict bond rating changes. Finally, the predictive abilities of decomposition analysis and discriminant analysis were compared.
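For readers unfamiliar with the two competing techniques, the sketch below illustrates them generically in Python: a multiple discriminant (LDA) classifier on invented financial-ratio features, and an entropy-based decomposition measure comparing a firm's balance-sheet proportions with an industry benchmark. None of the variables, parameters, or codings come from the thesis.

```python
# Illustrative sketch only: discriminant analysis on hypothetical financial
# ratios for predicting bond rating changes, plus one information-theoretic
# "decomposition measure" of balance-sheet proportions. Names are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
ratios = rng.normal(size=(n, 4))          # e.g., leverage, coverage, ROA, size
labels = (ratios @ [1.0, -0.8, 0.6, 0.3] + rng.normal(size=n) > 0).astype(int)
# 1 = rating upgraded, 0 = downgraded (hypothetical coding)

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, ratios, labels, cv=5).mean())

def decomposition_measure(p, q):
    """Information distance between two vectors of balance-sheet proportions
    p and q; small values mean the firm's structure is close to the
    benchmark (e.g., an industry average)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

firm = [0.5, 0.3, 0.2]          # proportions of assets in three classes
industry = [0.4, 0.4, 0.2]
print("decomposition measure:", decomposition_measure(firm, industry))
```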
33

Zhang, Jian. "Bayesian multiple hypotheses testing with quadratic criterion". Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0016/document.

Full text
Abstract
The anomaly detection and localization problem can be treated as a multiple hypothesis testing (MHT) problem in the Bayesian framework. The Bayesian test with the 0-1 loss function is a standard solution for this problem, but in practice the alternative hypotheses can have quite different importance. The 0-1 loss function does not reflect this fact, whereas the quadratic loss function is more appropriate. The objective of the thesis is the design of a Bayesian test with the quadratic loss function and its asymptotic study. The construction of the test proceeds in two steps. In the first step, a Bayesian test with the quadratic loss function is designed for the MHT problem without the null hypothesis, and lower and upper bounds on the misclassification probabilities are calculated. The second step constructs a Bayesian test for the MHT problem with the null hypothesis; lower and upper bounds on the false alarm probabilities, the missed detection probabilities, and the misclassification probabilities are calculated. From these bounds, the asymptotic equivalence between the proposed test and the standard test with the 0-1 loss function is studied. Extensive simulations and an acoustic experiment illustrate the effectiveness of the new statistical test.
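A minimal sketch of the central distinction, under invented posterior probabilities: the Bayes decision under 0-1 loss (the MAP rule) versus the decision minimizing posterior expected quadratic loss, which penalizes "distant" misclassifications more heavily. This shows only the decision rule's arithmetic, not the thesis's test construction or its probability bounds.

```python
# Compare Bayes decisions under 0-1 loss and quadratic loss L(i,j) = (i-j)^2
# over K ordered hypotheses, given a hypothetical posterior distribution.
import numpy as np

posterior = np.array([0.05, 0.30, 0.10, 0.35, 0.20])  # hypothetical posterior
K = len(posterior)

map_decision = int(np.argmax(posterior))               # 0-1 loss: pick the mode

# expected quadratic loss of deciding j: sum_i posterior[i] * (i - j)^2
idx = np.arange(K)
exp_loss = [(posterior * (idx - j) ** 2).sum() for j in idx]
quad_decision = int(np.argmin(exp_loss))

print("MAP (0-1 loss):", map_decision)        # the single most likely index
print("quadratic loss:", quad_decision)       # pulled toward the posterior mean
```

With these numbers the two rules disagree (the MAP rule picks index 3, while the quadratic rule picks index 2, nearer the posterior mean), which is exactly why the choice of loss function matters.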
34

Merkle, Edgar C. "Bayesian estimation of factor analysis models with incomplete data". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126895149.

Full text
Abstract
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 106 p.; also includes graphics. Includes bibliographical references (p. 103-106). Available online via OhioLINK's ETD Center.
35

Herman, Joseph L. "Multiple sequence analysis in the presence of alignment uncertainty". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88a56d9f-a96e-48e3-b8dc-a73f3efc8472.

Full text
Abstract
Sequence alignment is one of the most intensely studied problems in bioinformatics, and is an important step in a wide range of analyses. An issue that has gained much attention in recent years is the fact that downstream analyses are often highly sensitive to the specific choice of alignment. One way to address this is to jointly sample alignments along with other parameters of interest. In order to extend the range of applicability of this approach, the first chapter of this thesis introduces a probabilistic evolutionary model for protein structures on a phylogenetic tree; since protein structures typically diverge much more slowly than sequences, this allows for more reliable detection of remote homologies, improving the accuracy of the resulting alignments and trees, and reducing sensitivity of the results to the choice of dataset. In order to carry out inference under such a model, a number of new Markov chain Monte Carlo approaches are developed, allowing for more efficient convergence and mixing on the high-dimensional parameter space. The second part of the thesis presents a directed acyclic graph (DAG)-based approach for representing a collection of sampled alignments. This DAG representation allows the initial collection of samples to be used to generate a larger set of alignments under the same approximate distribution, enabling posterior alignment probabilities to be estimated reliably from a reasonable number of samples. If desired, summary alignments can then be generated as maximum-weight paths through the DAG, under various types of loss or scoring functions. The acyclic nature of the graph also permits various other types of algorithms to be easily adapted to operate on the entire set of alignments in the DAG. In the final part of this work, methodology is introduced for alignment-DAG-based sequence annotation using hidden Markov models, and RNA secondary structure prediction using stochastic context-free grammars. Results on test datasets indicate that the additional information contained within the DAG allows for improved predictions, resulting in substantial gains over simply analysing a set of alignments one by one.
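As a toy illustration of the "summary alignment as a maximum-weight path through the DAG" idea, the sketch below runs dynamic programming over a topological order. The graph, weights, and node names are invented; in the thesis the nodes would be alignment columns weighted, for example, by estimated posterior probability.

```python
# Toy sketch: maximum-weight path in a DAG by dynamic programming over a
# topological order. Graph and weights are hypothetical stand-ins for
# alignment columns and their posterior scores.
from collections import defaultdict

edges = {                     # hypothetical alignment-column DAG
    "start": ["A", "B"],
    "A": ["C", "D"],
    "B": ["D"],
    "C": ["end"],
    "D": ["end"],
    "end": [],
}
weight = {"start": 0.0, "A": 0.9, "B": 0.4, "C": 0.2, "D": 0.8, "end": 0.0}
topo = ["start", "A", "B", "C", "D", "end"]   # a valid topological order

best = defaultdict(lambda: float("-inf"))
back = {}
best["start"] = 0.0
for u in topo:
    for v in edges[u]:
        if best[u] + weight[v] > best[v]:
            best[v] = best[u] + weight[v]
            back[v] = u

# backtrack the maximum-weight path (the "summary alignment")
path, node = [], "end"
while node != "start":
    path.append(node)
    node = back[node]
path.append("start")
print(" -> ".join(reversed(path)), "score:", best["end"])
```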
36

Girka, Fabien. "Development of new statistical/ML methods for identifying multimodal factors related to the evolution of Multiple Sclerosis". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG075.

Full text
Abstract
L'étude d'un phénomène à travers plusieurs modalités peut permettre de mieux en comprendre les mécanismes sous-jacents par rapport à l'étude indépendante des différentes modalités. Dans l'optique d'une telle étude, les données sont souvent acquises par différentes sources, donnant lieu à des jeux de données multimodaux/multi-sources/multiblocs. Un cadre statistique explicitement adapté pour l'analyse jointe de données multi-sources est l'Analyse Canonique des Corrélations Généralisée Régularisée (RGCCA). RGCCA extrait des vecteurs et composantes canoniques qui résument les différentes modalités et leurs interactions.Les contributions de cette thèse sont de quatre ordres. (i) Améliorer et enrichir le package R pour RGCCA afin de démocratiser son usage. (ii) Etendre le cadre de RGCCA pour mieux prendre en compte les données tensorielles en imposant une décomposition tensorielle de rang faible aux vecteurs canoniques extraits par la méthode. (iii) Proposer et étudier des approches simultanées de RGCCA pour obtenir toutes les composantes canoniques d'un seul coup. Les méthodes proposées ouvrent la voie à de nouveaux développements de RGCCA. (iv) Utiliser les outils et l'expertise développés pour analyser des données sur la sclérose en plaques et la leucodystrophie. L'accent est mis sur l'identification de biomarqueurs permettant de différencier les patients des témoins sains ou de trouver des différences entre groupes de patients
Studying a given phenomenon under multiple views can reveal a more significant part of the mechanisms at stake rather than considering each view separately. In order to design a study under such a paradigm, measurements are usually acquired through different modalities resulting in multimodal/multiblock/multi-source data. One statistical framework suited explicitly for the joint analysis of such multi-source data is Regularized Generalized Canonical Correlation Analysis (RGCCA). RGCCA extracts canonical vectors and components that summarize the different views and their interactions. The contributions of this thesis are fourfold. (i) Improve and enrich the RGCCA R package to democratize its use. (ii) Extend the RGCCA framework to better handle tensor data by imposing a low-rank tensor factorization to the extracted canonical vectors. (iii) Propose and investigate simultaneous versions of RGCCA to get all canonical components at once. The proposed methods pave the way for new extensions of RGCCA. (iv) Use the developed tools and expertise to analyze multiple sclerosis and leukodystrophy data. A focus is made on identifying biomarkers differentiating between patients and healthy controls or between groups of patients
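The RGCCA package itself is in R; as a rough Python illustration of the simplest member of the family, the sketch below runs classical two-block canonical correlation analysis with scikit-learn on synthetic blocks that share one latent signal (stand-ins for, say, an imaging modality and a clinical modality).

```python
# Sketch of the simplest special case of the RGCCA family: two-block CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 200
latent = rng.normal(size=(n, 1))                  # shared signal
X = latent @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(n, 5))
Y = latent @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(n, 4))

cca = CCA(n_components=2)
x_scores, y_scores = cca.fit_transform(X, Y)       # canonical components
r = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
print(f"first canonical correlation ~ {r:.2f}")
```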
37

Bankefors, Johan. "Structural classification of Quillaja saponins by electrospray ionisation ion trap multiple-stage mass spectrometry in combination with multivariate analysis /". Uppsala : Department of Chemistry, Swedish University of Agricultural Sciences, 2006. http://epsilon.slu.se/10284550.pdf.

Full text
38

Moutsianas, Loukas. "Imputation aided analysis of the association between autoimmune diseases and the MHC". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:aa570447-9e25-42de-b10d-9f445c0a094e.

Full text
Abstract
The Major Histocompatibility Complex (MHC) is a genomic region in chromosome 6 which has been consistently found to be associated with the risk of developing virtually all common autoimmune diseases. Although its importance in disease pathogenesis has been known for decades, efforts to disentangle the roles of the classical human leukocyte antigens (HLA) and other variants responsible for the susceptibility to disease have often met with limited success, owing to the complex structure and extreme heterogeneity of the region. In this thesis, I interrogate the MHC for association with three common autoimmune diseases, ankylosing spondylitis, psoriasis and multiple sclerosis, with the aim of confirming the previously-reported associations and of identifying novel ones. To do so, I employ a systematic, joint analysis of single nucleotide polymorphism (SNP) and HLA allele data, in a logistic regression framework, using a recently developed algorithm to predict the HLA alleles for samples where such information is unavailable. To ensure the reliability of the analysis, I apply stringent quality control procedures and integrate over the uncertainty of the HLA allele predictions. Moreover, I resolve the haplotype phase of individuals from the HapMap project to create reliable reference panels, used in both HLA prediction and in quality control procedures. By directly testing HLA subtypes for association with the disease, the power to detect such associations is increased. I present the results of the analysis on the three disease phenotypes and discuss the evidence for important novel findings amongst both SNPs and HLA alleles in two of the diseases. In the final part of this thesis, I introduce a novel, model-based approach to detect inconsistencies in the data and show how it can be used to flag problematic SNPs which conventional quality control procedures may fail to identify.
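One common way to integrate over prediction uncertainty, shown below as a hedged sketch rather than the thesis's actual pipeline, is to use the posterior expected allele count ("dosage") as the regressor in a logistic model. The posterior probabilities, effect size, and sample size here are simulated placeholders.

```python
# Illustrative sketch: test an HLA allele for disease association using the
# posterior expected allele count from an imputer as a logistic regressor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
# hypothetical posterior P(0, 1, 2 copies of the allele) from an HLA imputer
post = rng.dirichlet([6.0, 3.0, 1.0], size=n)
dosage = post @ np.array([0.0, 1.0, 2.0])        # expected allele count
logit_p = -1.0 + 0.8 * dosage                    # simulated true effect
status = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(dosage)
fit = sm.Logit(status, X).fit(disp=0)
print(fit.summary2().tables[1])                  # effect estimate and p-value
```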
39

Madaris, Aaron T. "Characterization of Peripheral Lung Lesions by Statistical Image Processing of Endobronchial Ultrasound Images". Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1485517151147533.

Full text
40

Konigorski, Stefan. "Development and application of new statistical methods for the analysis of multiple phenotypes to investigate genetic associations with cardiometabolic traits". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19132.

Full text
Abstract
In recent years, biotechnological advances have made it possible to investigate associations of genetic and molecular markers with multiple complex phenotypes in much greater depth. However, for the analysis of such complex datasets, available statistical methods often do not yield valid inference. The first aim of this thesis is to develop two novel statistical methods for association analyses of genetic markers with multiple phenotypes, to implement them in a computationally efficient and robust manner so that they can be used for large-scale analyses, and to evaluate them against existing statistical approaches under realistic scenarios. The first approach, the copula-based joint analysis of multiple phenotypes (C-JAMP) method, investigates genetic associations with multiple traits in a joint copula model and is evaluated for association analyses of rare genetic variants with quantitative traits. The second approach, the causal inference using estimating equations (CIEE) method, estimates and tests direct genetic effects in directed acyclic graphs and is evaluated for association analyses of common genetic variants with quantitative and time-to-event traits. The results of extensive simulation studies show that both approaches yield unbiased and efficient parameter estimators and can improve the power of association tests compared with existing approaches, which yield invalid inference in many scenarios. For the second goal of this thesis, to identify novel genetic and transcriptomic markers associated with cardiometabolic traits, C-JAMP and CIEE are applied in two large-scale studies including genome- and transcriptome-wide data. In these analyses, several novel candidate markers and genes are identified, which highlights the merit of developing, evaluating, and implementing novel statistical approaches. R packages are available for both methods and enable their application in future studies.
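As a toy illustration of the copula idea behind C-JAMP (not its implementation), the sketch below couples two phenotypes with different margins through a Gaussian copula, showing that the dependence structure is specified separately from the marginal distributions. The correlation value and the margins are arbitrary choices.

```python
# Toy Gaussian-copula construction: correlated normals -> uniforms via the
# normal CDF -> arbitrary margins via inverse CDFs. Dependence survives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, rho = 2000, 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)                      # the copula: uniforms on [0, 1]

# give the two phenotypes different margins (skewed and heavy-tailed)
pheno1 = stats.gamma(a=2.0).ppf(u[:, 0])
pheno2 = stats.t(df=5).ppf(u[:, 1])

# dependence survives the marginal transforms (Spearman rank correlation)
rho_s, _ = stats.spearmanr(pheno1, pheno2)
print(f"Spearman rho: {rho_s:.2f}")
```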
41

Heidt, Kaitlyn. "Comparison of Imputation Methods for Mixed Data Missing at Random". Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/etd/3559.

Full text
Abstract
A statistician's job is to produce statistical models. When these models are precise and unbiased, we can relate them to new data appropriately. However, when data sets have missing values, the assumptions of statistical methods are violated and the results are biased. The statistician's objective is to implement methods that produce unbiased and accurate results. Research on missing data is growing as modern methods that produce unbiased and accurate results emerge, such as the MICE package in the statistical software R. Using real data, we compare four common imputation methods from the MICE package at different levels of missingness. The results were compared in terms of the regression coefficients and adjusted R^2 values obtained with the complete data set. The CART and PMM methods consistently performed better than the OTF and RF methods. The procedures were repeated on a second sample of real data, and the same conclusions were drawn.
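MICE itself is an R package; the sketch below is a rough Python analogue using scikit-learn's IterativeImputer (a chained-equations-style imputer) to reproduce the kind of comparison the abstract describes, regression coefficients from imputed versus complete data. The data and the 20% missingness rate are synthetic.

```python
# Delete values at random, impute with a chained-equations-style imputer,
# and compare regression coefficients against the complete data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 500
X = rng.normal(size=(n, 3))
y = X @ [2.0, -1.0, 0.5] + rng.normal(size=n)

coef_full = LinearRegression().fit(X, y).coef_

X_miss = X.copy()
mask = rng.random(X.shape) < 0.2          # 20% of values set to missing
X_miss[mask] = np.nan

X_imp = IterativeImputer(random_state=0).fit_transform(X_miss)
coef_imp = LinearRegression().fit(X_imp, y).coef_

print("complete-data coefficients:", coef_full.round(2))
print("imputed-data coefficients: ", coef_imp.round(2))
```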
42

Keeling, Kellie Bliss. "Developing Criteria for Extracting Principal Components and Assessing Multiple Significance Tests in Knowledge Discovery Applications". Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc2231/.

Full text
Abstract
With advances in computer technology, organizations are able to store large amounts of data in data warehouses. Researchers must then address two fundamental issues: the dimensionality of the data and the interpretation of multiple statistical tests. The first issue addressed by this research is determining the number of components to retain in principal components analysis. This research establishes regression, asymptotic-theory, and neural network approaches for estimating the mean and 95th-percentile eigenvalues needed to implement Horn's parallel analysis procedure for retaining components. Certain methods perform better for specific combinations of sample size and number of variables; the adjusted normal order statistic estimator (ANOSE), an asymptotic procedure, performs best overall. Future research is warranted on combining methods to increase accuracy. The second issue involves interpreting multiple statistical tests. This study uses simulation to show that Parker and Rothenberg's technique of modeling p-values with a mixture-of-betas density is viable for p-values from central and non-central t distributions. The simulation study shows that the final estimates obtained in the proposed mixture approach reliably estimate the true proportions of the distributions associated with the null and non-null hypotheses. Modeling the density of p-values allows better control of the true experiment-wise error rate and provides insight into grouping hypothesis tests for clustering purposes. Future research will expand the simulation to include p-values generated from additional distributions. The techniques presented are applied to data from Lake Texoma, where the size of the database and the number of hypotheses of interest call for nontraditional data mining techniques. The question is whether information technology can be used to monitor chlorophyll levels in the lake as chloride is removed upstream. A relationship established between chlorophyll and energy reflectance, which can be measured by satellites, enables more comprehensive and frequent monitoring. The results have both economic and political ramifications.
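Horn's parallel analysis, the retention rule whose eigenvalue quantiles the thesis builds estimators for, is simple to state in code: retain components whose observed eigenvalue exceeds the 95th percentile of eigenvalues from random data of the same size. The sketch below is a direct simulation version with invented dimensions.

```python
# Minimal sketch of Horn's parallel analysis via simulation.
import numpy as np

rng = np.random.default_rng(7)
n, p, n_sims = 200, 10, 500
data = rng.normal(size=(n, p))
data[:, 1] += data[:, 0]                 # induce one real component
data[:, 2] += data[:, 0]

obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]

null_eig = np.empty((n_sims, p))
for s in range(n_sims):
    noise = rng.normal(size=(n, p))      # uncorrelated data, same n and p
    null_eig[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise.T)))[::-1]

threshold = np.percentile(null_eig, 95, axis=0)  # 95th-percentile eigenvalues
n_retain = int(np.sum(obs_eig > threshold))
print("components to retain:", n_retain)
```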
43

Sokrut, Nikolay. "The Integrated Distributed Hydrological Model, ECOFLOW- a Tool for Catchment Management". Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237.

Full text
44

Hu, Zhiguang y 胡志光. "Binary latent variable modelling in the analysis of health data with multiple binary outcomes in an air pollution study in Hong Kong". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31237058.

Full text
45

Dwyer, Michael G. "Development and application of novel algorithms for quantitative analysis of magnetic resonance imaging in multiple sclerosis". Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6298.

Full text
Abstract
This document is a critical synopsis of prior work by Michael Dwyer submitted in support of a PhD by published work. The selected work is focused on the application of quantitative magnetic resonance imaging (MRI) analysis techniques to the study of multiple sclerosis (MS). MS is a debilitating disease with a multi-factorial pathology, progression, and clinical presentation. Its most salient feature is focal inflammatory lesions, but it also includes significant parenchymal atrophy and microstructural damage. As a powerful tool for in vivo investigation of tissue properties, MRI can provide important clinical and scientific information regarding these various aspects of the disease, but precise, accurate quantitative analysis techniques are needed to detect subtle changes and to cope with the vast amount of data produced in an MRI session. To address this, eight new techniques were developed by Michael Dwyer and his co-workers to better elucidate focal, atrophic, and occult/"invisible" pathology. These included: a method to better evaluate errors in lesion identification; a method to quantify differences in lesion distribution between scanner strengths; a method to measure optic nerve atrophy; a more precise method to quantify tissue-specific atrophy; a method sensitive to dynamic myelin changes; and a method to quantify iron in specific brain structures. Taken together, these new techniques are complementary and improve the ability of clinicians and researchers to reliably assess key elements of MS pathology in vivo.
46

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA". Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
47

Oketch, Tobias O. "Performance of Imputation Algorithms on Artificially Produced Missing at Random Data". Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3217.

Full text
Abstract
Missing data is one of the challenges we face today in building valid statistical models. It reduces the representativeness of data samples; hence, population estimates and model parameters estimated from such data are likely to be biased. However, missing data remain an active area of study, and better statistical procedures have been proposed to mitigate their shortcomings. In this work, we review the causes of missing data and various methods of handling them. Our main focus is evaluating multiple imputation (MI) methods from the multiple imputation by chained equations (MICE) package in the statistical software R. We assess how these MI methods perform with different percentages of missing data. A multiple regression model was fit on the imputed data sets and on the complete data set, and statistical comparisons of the regression coefficients were made between the models using the imputed data and the complete data.
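A minimal sketch of what "artificially produced missing at random" means in practice: the probability that one variable is missing depends only on another, fully observed variable. The logistic missingness model and all parameters below are illustrative, not taken from the thesis.

```python
# Generate MAR missingness: whether x2 is missing depends only on observed x1.
import numpy as np

rng = np.random.default_rng(8)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)

# logistic missingness model: larger x1 -> higher chance that x2 is missing
p_miss = 1 / (1 + np.exp(-(-1.0 + 1.5 * x1)))
x2_mar = np.where(rng.random(n) < p_miss, np.nan, x2)

print(f"overall missing rate in x2: {np.isnan(x2_mar).mean():.1%}")
```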
48

Haynes, Michele Ann. "Flexible distributions and statistical models in ranking and selection procedures with applications". Thesis, Queensland University of Technology, 1998.

Search full text
49

Xi, Wenna. "Comparing the Statistical Power of Analysis of Covariance after Multiple Imputation and the Mixed Model in Testing the Treatment Effect for Pre-post Studies with Loss to Follow-up". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1403557167.

Full text
50

Konigorski, Stefan [Verfasser], Marius [Gutachter] Kloft, Tobias [Gutachter] Pischon y Yildiz E. [Gutachter] Yilmaz. "Development and application of new statistical methods for the analysis of multiple phenotypes to investigate genetic associations with cardiometabolic traits / Stefan Konigorski ; Gutachter: Marius Kloft, Tobias Pischon, Yildiz E. Yilmaz". Berlin : Humboldt-Universität zu Berlin, 2018. http://d-nb.info/1182542395/34.

Full text