Dissertations / Theses on the topic 'Hypothesis sampling'

Consult the top 23 dissertations / theses for your research on the topic 'Hypothesis sampling.'

1

Koziuk, Andzhey. "Re-sampling in instrumental variables regression." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/20869.

Full text
Abstract:
Instrumental variables regression is considered in the context of re-sampling. The work builds a framework that identifies the target of inference and motivates instrumental variables regression from a new, non-parametric perspective: the target of estimation is assumed to be formed by two factors, an environment and an internal, model-specific structure. Alongside this framework, the work develops a re-sampling method suited to testing a linear hypothesis about the target. The technical setting and the procedure are explained in the introduction and in the body of the work. Specifically, following Spokoiny and Zhilova (2015), the thesis justifies and numerically applies a multiplier-bootstrap procedure to construct non-asymptotic confidence intervals for the testing problem. The procedure and the underlying statistical toolbox were chosen to account for an issue that appears in the model and is overlooked by asymptotic analysis, namely the weakness of the instrumental variables; this issue is addressed by the finite-sample approach of Spokoiny (2014).
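To make the multiplier-bootstrap idea concrete, here is a minimal Python sketch for a just-identified instrumental-variables model. It is an illustration only, not the procedure developed in the thesis: the data-generating process, the exponential mean-one multiplier weights, and the quantile-based interval are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated just-identified IV model: y = theta*x + u, with x endogenous
# and z an instrument (all names and values are illustrative assumptions).
n, theta_true = 200, 1.0
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.3 * z + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = theta_true * x + u

# Just-identified IV estimator.
theta_hat = (z @ y) / (z @ x)

# Multiplier bootstrap: reweight each observation's estimating-equation
# contribution z_i*(y_i - x_i*theta) with i.i.d. weights of mean 1, variance 1.
B = 2000
boot = np.empty(B)
for b in range(B):
    w = rng.exponential(scale=1.0, size=n)   # E[w] = 1, Var[w] = 1
    boot[b] = ((w * z) @ y) / ((w * z) @ x)

# Confidence interval from bootstrap quantiles of the deviation from theta_hat.
q = np.quantile(np.abs(boot - theta_hat), 0.95)
print(f"theta_hat = {theta_hat:.3f}, 95% CI = [{theta_hat - q:.3f}, {theta_hat + q:.3f}]")
```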
APA, Harvard, Vancouver, ISO, and other styles
2

Tucker, Joanne M. (Joanne Morris). "Robustness of the One-Sample Kolmogorov Test to Sampling from a Finite Discrete Population." Thesis, University of North Texas, 1996. https://digital.library.unt.edu/ark:/67531/metadc278186/.

Full text
Abstract:
One of the most useful and best known goodness-of-fit tests is the Kolmogorov one-sample test. Its assumptions are: 1. a random sample; 2. a continuous random variable; 3. F(x) is a completely specified hypothesized cumulative distribution function. The Kolmogorov one-sample test has a wide range of applications, so knowing the effect of using the test when an assumption is not met is of practical importance. The purpose of this research is to analyze the robustness of the Kolmogorov one-sample test to sampling from a finite discrete distribution. The standard tables for the Kolmogorov test are derived based on sampling from a theoretical continuous distribution; as such, the theoretical distribution is infinite. The standard tables do not include a method or adjustment factor to estimate the effect on table values for statistical experiments where the sample stems from a finite discrete distribution without replacement. This research provides an extension of the Kolmogorov test when the hypothesized distribution function is finite and discrete, and the sampling distribution is based on sampling without replacement. An investigative study was conducted to explore possible tendencies and relationships in the distribution of Dn when sampling with and without replacement for various parameter settings. In all, 96 sampling distributions were derived. Results show the standard Kolmogorov table values are conservative, particularly when the sample sizes are small or the sample represents 10% or more of the population.
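The conservativeness reported above can be probed with a small Monte Carlo sketch. The population, support, and sample size below are arbitrary choices, and the comparison uses the standard asymptotic critical value rather than the exact tables derived in the thesis.

```python
import numpy as np
from scipy.stats import kstwobign

rng = np.random.default_rng(1)

# Finite discrete population: 500 units spread uniformly over 10 values
# (all settings here are arbitrary illustrative choices).
support = np.arange(1, 11)
population = np.repeat(support, 50)
F = support / 10.0                      # hypothesized CDF at the support points

def d_n(sample):
    """Kolmogorov statistic D_n against the hypothesized discrete CDF."""
    Fn = np.array([(sample <= s).mean() for s in support])
    return np.abs(Fn - F).max()

n, reps = 50, 5000                      # the sample is 10% of the population
d_with = np.array([d_n(rng.choice(population, n, replace=True)) for _ in range(reps)])
d_without = np.array([d_n(rng.choice(population, n, replace=False)) for _ in range(reps)])

# Standard (continuous-case, asymptotic) 5% critical value for comparison.
crit = kstwobign.ppf(0.95) / np.sqrt(n)
print("rejection rate, with replacement:   ", (d_with > crit).mean())
print("rejection rate, without replacement:", (d_without > crit).mean())
```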
APA, Harvard, Vancouver, ISO, and other styles
3

Michel, Frank [author], Carsten Rother and Stefan Gumhold [academic supervisors], Carsten Rother and Carsten Steger [reviewers]. "Hypothesis Generation for Object Pose Estimation: From local sampling to global reasoning." Dresden: Technische Universität Dresden, 2019. http://d-nb.info/1226897592/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Grabaskas, David. "Efficient Approaches to the Treatment of Uncertainty in Satisfying Regulatory Limits." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345464067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Carroll, James Lamond. "A Bayesian Decision Theoretical Approach to Supervised Learning, Selective Sampling, and Empirical Function Optimization." Diss., 2010. http://contentdm.lib.byu.edu/ETD/image/etd3413.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Leonard, Anthony Charles. "Hypothesis Testing with the Similarity Index." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1005680996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Silva, Ricardo Gonçalves da. "Testes de hipótese e critério bayesiano de seleção de modelos para séries temporais com raiz unitária." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-19082004-163615/.

Full text
Abstract:
Testing for the unit root hypothesis in non-stationary autoregressive models has been a research topic across many academic areas. As a first step, this dissertation includes an extensive review of the main results provided by classical and Bayesian inference methods. Concerning the classical approach, the role of Brownian motion is discussed in detail, emphasizing its application in deriving asymptotic statistics for testing the existence of a unit root in a time series. For the Bayesian approach, a detailed discussion of the current state of the literature is also provided. The dissertation then implements a comparative study that tests the unit root hypothesis based on the posterior density of the model's parameter, considering the following prior densities: flat, Jeffreys, normal and beta. Inference is based on the Metropolis-Hastings algorithm and the Markov chain Monte Carlo (MCMC) technique. Simulated time series are used to compute the size, power and confidence of the proposed tests. Finally, a Bayesian model selection criterion is proposed, using the same prior distributions as in the hypothesis tests. Both procedures are illustrated with empirical applications to macroeconomic time series.
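As a rough illustration of the Bayesian side of this comparison, the following sketch runs a random-walk Metropolis-Hastings sampler for the AR(1) coefficient under a flat prior with known error variance; the priors actually compared in the dissertation (flat, Jeffreys, normal, beta) and its model selection criterion are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) series y_t = rho*y_{t-1} + e_t near the unit root.
T, rho_true = 200, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.normal()
x, z = y[:-1], y[1:]

def log_post(rho, sigma2=1.0):
    """Log posterior of rho under a flat prior, treating the error variance as known."""
    resid = z - rho * x
    return -0.5 * np.sum(resid ** 2) / sigma2

# Random-walk Metropolis-Hastings sampler for rho.
draws, rho = [], 0.5
for _ in range(20000):
    prop = rho + 0.02 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(rho):
        rho = prop
    draws.append(rho)
draws = np.array(draws[5000:])          # discard burn-in

print("posterior mean of rho:", round(draws.mean(), 3))
print("posterior P(rho >= 1):", (draws >= 1.0).mean())
```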
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Yiping. "Rank-sum test for two-sample location problem under order restricted randomized design." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180147276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Erfmeier, Alexandra. "Ursachen des Invasionserfolges von Rhododendron ponticum L. auf den Britischen Inseln: Einfluss von Habitat und Genotyp." Doctoral thesis, [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=975033476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Su, Weizhe. "Bayesian Hidden Markov Model in Multiple Testing on Dependent Count Data." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613751403094066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Martinez, Maria. "Analyse genetique d'un trait pathologique a partir de familles selectionnees : consequences d'un ecart a certaines hypotheses du modele classique de recensement." Paris 7, 1988. http://www.theses.fr/1988PA077113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

"Exact conditional tests under inverse sampling." 2005. http://library.cuhk.edu.hk/record=b5892696.

Full text
Abstract:
Chan For Yee. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 88-90). Abstracts in English and Chinese.
Contents:
1. Introduction
2. Basic Concepts: 2.1 Binomial vs Inverse Sampling; 2.2 Equivalence / Non-inferiority Test
3. Testing Procedures: 3.1 The Model; 3.2 Asymptotic Behaviors of the Estimators (3.2.1 Asymptotic Test Statistic Based on the Unconditional Maximum Likelihood Estimate; 3.2.2 Asymptotic Test Statistic Based on the Restricted Maximum Likelihood Estimate); 3.3 Conditional Exact Procedures (3.3.1 Non-test-statistic-based Procedure; 3.3.2 Test-statistic-based Procedure)
4. Simulation Study: 4.1 Simulation Results - Type I Error Rate (4.1.1 Asymptotic Test Statistic Based on the Unconditional MLE; 4.1.2 Asymptotic Test Statistic Based on the Restricted MLE; 4.1.3 Non-test-statistic-based Conditional Exact Test; 4.1.4 Test-statistic-based Conditional Exact Test); 4.2 Simulation Results - Power (4.2.1 Asymptotic Tests: Similarity and Difference between Using the Unconditional and the Restricted MLE; 4.2.2 Conditional Exact Tests: Similarity and Difference between the Non-test-statistic-based and Test-statistic-based Procedures; 4.2.3 Test-statistic-based Conditional Exact Tests: Similarity and Difference between Using the Unconditional and the Restricted MLE)
5. Conclusion
Appendices: A. Simulation Results - Type I Error Rate; B. Simulation Results - Power Values
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
13

Hsieh, Ching-Ying, and 謝靖瑩. "A Geometric Mean Approach to Sampling Size Determination for the Equivalence Hypothesis." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/26672461433552854457.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Agronomy, 2010 (ROC academic year 99).
The equivalence hypothesis is the appropriate hypothesis for confirming whether a newly developed product conforms to the current standard product. It has important applications in the evaluation of generic drug products and other new clinical modalities. The two one-sided tests (TOST) procedure was proposed to test the equivalence hypothesis for two treatments. When the difference in population means between the two treatments is not 0, the power function is not symmetric; hence only approximate formulas have been proposed to determine the sample size for the equivalence hypothesis, and the resulting sample sizes may provide either insufficient power or unnecessarily high power. We suggest geometric mean approaches to determining the sample size for the equivalence hypothesis. A numerical study was conducted to compare the performance of the proposed method with other current methods. Numerical examples illustrate applications to bioequivalence on the logarithmic scale and to clinical equivalence on the original scale. Remarks are made on the usage of different methods for sample size determination for the equivalence hypothesis.
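For orientation, the TOST decision rule and its Monte Carlo power for a given per-group sample size can be sketched as follows; inverting this by searching over n gives one brute-force route to a sample size. This is not the geometric mean method proposed in the thesis, and the margin, variance, and true difference are arbitrary illustrative values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def tost_power(n, true_diff, sigma, margin, alpha=0.05, reps=2000):
    """Monte Carlo power of the two one-sided tests (TOST) procedure for
    equivalence of two normal means with a common, unknown variance."""
    df = 2 * n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    hits = 0
    for _ in range(reps):
        a = rng.normal(true_diff, sigma, n)
        b = rng.normal(0.0, sigma, n)
        diff = a.mean() - b.mean()
        sp = np.sqrt(((n - 1) * a.var(ddof=1) + (n - 1) * b.var(ddof=1)) / df)
        se = sp * np.sqrt(2.0 / n)
        # Reject both H0: diff <= -margin and H0: diff >= margin.
        if (diff + margin) / se > tcrit and (diff - margin) / se < -tcrit:
            hits += 1
    return hits / reps

# Smallest n per group giving roughly 80% power when the true difference is
# nonzero, i.e. exactly the asymmetric-power situation discussed above.
for n in range(10, 200, 5):
    if tost_power(n, true_diff=0.1, sigma=0.5, margin=0.25) >= 0.80:
        print("approximate required n per group:", n)
        break
```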
APA, Harvard, Vancouver, ISO, and other styles
14

Michel, Frank. "Hypothesis Generation for Object Pose Estimation From local sampling to global reasoning." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A33169.

Full text
Abstract:
Pose estimation has been studied since the early days of computer vision. The task of object pose estimation is to determine the transformation that maps an object from its inherent coordinate system into the camera-centric coordinate system. This transformation describes the translation of the object relative to the camera and the orientation of the object in three-dimensional space. Knowledge of an object's pose is a key ingredient in many application scenarios such as robotic grasping, augmented reality, autonomous navigation and surveillance. A general estimation pipeline consists of the following four steps: extraction of distinctive points, creation of a hypothesis pool, hypothesis verification and, finally, hypothesis refinement. In this work, we focus on the hypothesis generation process and show that it is beneficial to utilize geometric knowledge in this process. We address the problem of hypothesis generation for articulated objects. Instead of considering each object part individually, we model the object as a kinematic chain. This enables us to use the inter-part relationships when sampling pose hypotheses, so that we need only K correspondences for objects consisting of K parts. We show that applying geometric knowledge about part relationships improves estimation accuracy under severe self-occlusion and low-quality correspondence predictions. In an extension, we employ global reasoning within the hypothesis generation process instead of sampling 6D pose hypotheses locally. We formulate a conditional random field (CRF) operating on the image as a whole, inferring those pixels that are consistent with the 6D pose. Within the CRF we use a strong geometric check that is able to assess the quality of correspondence pairs. We show that our global geometric check improves the accuracy of pose estimation under heavy occlusion.
APA, Harvard, Vancouver, ISO, and other styles
15

McDonald, Trent 1965. "Analysis of finite population surveys : sample size and testing considerations." Thesis, 1996. http://hdl.handle.net/1957/35277.

Full text
Abstract:
This dissertation concerns two topics in the analysis of finite population surveys: setting sample size and hypothesis testing. The first concerns the a priori determination of the sample size needed to obtain species members. The second concerns testing distributional hypotheses when two equal-size populations are sampled. Setting the sample size to obtain species is a problem which arises when an investigator wants to obtain (1) a member of all species present in an area, (2) a member of all species whose relative frequency is greater than, say, 20%, or (3) a member of each species in a target set of species. Chapter 2 presents a practical solution to these questions by setting a target sample size for which the species are obtained with known probability. The solution requires the estimated relative frequency of the rarest species of interest; the total number of species is not needed. Because this problem has substantial computational demands, easy-to-compute formulas are needed and given, and three practical examples are presented. Testing of finite population distributional hypotheses is covered in Chapter 3. The test proposed here works under reasonably general designs and is based on a Horvitz-Thompson type correction of the usual Mann-Whitney U statistic. The investigation compares this proposed test to a corrected (for finiteness) form of the usual Wilcoxon rank sum test. Size and power of the two test procedures are investigated using simulation. The proposed test had approximately correct nominal size over a wide range of situations, whereas the corrected Wilcoxon test exhibited extreme violations of size in many cases. Power of the two tests in situations where they have equal size is similar in most practically interesting cases.
Graduation date: 1996
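The Horvitz-Thompson flavour of the proposed test can be illustrated with a toy sketch in which each sampled pair is weighted by the inverse of its inclusion probability under simple random sampling without replacement; the thesis's actual statistic, its variance estimator, and its reference distribution are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two equal-size finite populations; only the sampled units would be observed.
N = 500
pop_x = rng.normal(0.0, 1.0, N)
pop_y = rng.normal(0.3, 1.0, N)

# Simple random sampling without replacement from each population,
# so every unit has inclusion probability n/N.
n = 60
pi = n / N
sx = rng.choice(pop_x, n, replace=False)
sy = rng.choice(pop_y, n, replace=False)

# Horvitz-Thompson-type estimate of the population Mann-Whitney quantity
# sum_i sum_j I(x_i < y_j): each sampled pair is weighted by 1/(pi*pi).
indicator = (sx[:, None] < sy[None, :]).astype(float)
u_ht = indicator.sum() / (pi * pi)

# Compare with the finite-population value it estimates and the null value N*N/2.
u_pop = np.sum(pop_x[:, None] < pop_y[None, :])
print("HT estimate:", round(u_ht), " population U:", u_pop, " null value:", N * N // 2)
```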
APA, Harvard, Vancouver, ISO, and other styles
16

"Small sample properties of transmission disequilibrium test and related tests." 2007. http://library.cuhk.edu.hk/record=b5893384.

Full text
Abstract:
Cheung, Ka Wai Ricker. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 68-69). Abstracts in English and Chinese.
Contents:
1. Introduction: 1.1 Basic Concepts; 1.2 Linkage Disequilibrium; 1.3 Transmission Disequilibrium Test; 1.4 Scope of Thesis
2. Transmission Disequilibrium Test: 2.1 The Model; 2.2 The Data Structure and the Statistic
3. Small Sample Properties of the Transmission Disequilibrium Test: 3.1 Exact Distribution of the TDT Statistic; 3.2 Power under the Alternative Hypothesis; 3.3 P-Value
4. Exact P-Value and Power
5. Haplotype Relative Risk
6. Conclusion
References
APA, Harvard, Vancouver, ISO, and other styles
17

"Hypothesis Testing for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits." Tulane University, 2018.

Find full text
Abstract:
Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in the extreme phenotypic samples within the top and bottom percentiles, EPS can boost the study power compared with random sampling of the same sample size. The existing statistical methods for EPS data test the variants/regions individually. However, many disorders are caused by multiple genetic factors. Therefore, it is critical to simultaneously model the effects of genetic factors, which may increase the power of current genetic studies and identify novel disease-associated genetic factors in EPS. The challenge of the simultaneous analysis of genetic data is that the number (p ~10,000) of genetic factors is typically greater than the sample size (n ~1,000) in a single study. The standard linear model would be inappropriate for this p>n problem due to the rank deficiency of the design matrix. An alternative solution is to apply a penalized regression method, the least absolute shrinkage and selection operator (LASSO). LASSO can deal with this high-dimensional (p>n) problem by forcing certain regression coefficients to be zero. Although the application of LASSO in genetic studies under random sampling has been widely studied, its statistical inference and testing under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function to investigate genetic associations, including gene expression and rare variant analyses. Comprehensive simulations show that EPS-LASSO outperforms existing methods, with superior power when the effects are large together with stable type I error and FDR control. Together with the real data analysis of a genetic study of obesity, our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors.
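To make the design concrete, the following sketch performs extreme phenotype sampling and then fits a LASSO with scikit-learn; the cutoff percentiles, dimensions, and penalty are arbitrary, and the decorrelated-score test that provides EPS-LASSO's p-values is part of the dissertation's method and is not implemented here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# Full cohort: p predictors (e.g. variants or expression features),
# only the first five carrying real effects (all settings are illustrative).
n_full, p = 2000, 500
X_full = rng.normal(size=(n_full, p))
beta = np.zeros(p)
beta[:5] = 0.4
y_full = X_full @ beta + rng.normal(size=n_full)

# Extreme phenotype sampling: keep only the top and bottom 10% of the trait.
lo, hi = np.quantile(y_full, [0.10, 0.90])
keep = (y_full <= lo) | (y_full >= hi)
X, y = X_full[keep], y_full[keep]
print("EPS sample size:", X.shape[0], "predictors:", p)   # p exceeds n here

# High-dimensional fit via LASSO; coefficients forced to zero drop out.
model = Lasso(alpha=0.1).fit(X, y)
print("selected predictors:", np.flatnonzero(model.coef_)[:20])
```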
APA, Harvard, Vancouver, ISO, and other styles
18

Servidea, James Dominic. "Bridge sampling with dependent random draws : techniques and strategy /." 2002. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3048422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

King, J. Patrick. "A microsatellite-based statistic for inferring patterns of population growth: Sampling properties and hypothesis testing." Thesis, 2000. http://hdl.handle.net/1911/19523.

Full text
Abstract:
DNA sequences sampled from a genetic locus within a population are related by a genealogy. If there is no recombination within the locus, each pair of sequences is descended from some ancestral sequence, one of which is the most recent common ancestor of the entire sample. Past demography shapes this genealogy since the branch lengths depend on the size history of the population. For this reason, observed distributions of allelic types carry information about the population's demographic history. Because of their abundance and relative ease of typing, microsatellites, or short tandem repeats, represent a useful class of loci for the study of demography. This thesis investigates the properties of the imbalance index beta, a microsatellite-based statistic constructed for demographic inference. Simulated data sets are used to explore the sampling properties of beta and to compare its performance to that of other statistics available in the literature. Tests based on these statistics are applied to samples of microsatellite loci from human populations, and the results are interpreted in light of recent hypotheses concerning the evolution of modern humans.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Wei. "A comparison of four estimators of a population measure of model misfit in covariance structure analysis." 2005. http://etd.nd.edu.lib-proxy.nd.edu/ETD-db/theses/available/etd-10272005-175023/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Tran, Quoc Huy. "Robust parameter estimation in computer vision: geometric fitting and deformable registration." Thesis, 2014. http://hdl.handle.net/2440/86270.

Full text
Abstract:
Parameter estimation plays an important role in computer vision. Many computer vision problems can be reduced to estimating the parameters of a mathematical model of interest from the observed data. Parameter estimation in computer vision is challenging, since vision data unavoidably have small-scale measurement noise and large-scale measurement errors (outliers) due to imperfect data acquisition and preprocessing. Traditional parameter estimation methods developed in the statistics literature mainly deal with noise and are very sensitive to outliers. Robust parameter estimation techniques are thus crucial for effectively removing outliers and accurately estimating the model parameters with vision data. The research conducted in this thesis focuses on single structure parameter estimation and makes a direct contribution to two specific branches under that topic: geometric fitting and deformable registration. In geometric fitting problems, a geometric model is used to represent the information of interest, such as a homography matrix in image stitching, or a fundamental matrix in three-dimensional reconstruction. Many robust techniques for geometric fitting involve sampling and testing a number of model hypotheses, where each hypothesis consists of a minimal subset of data for yielding a model estimate. It is commonly known that, due to the noise added to the true data (inliers), drawing a single all-inlier minimal subset is not sufficient to guarantee a good model estimate that fits the data well; the inliers therein should also have a large spatial extent. This thesis investigates a theoretical reasoning behind this long-standing principle, and shows a clear correlation between the span of data points used for estimation and the quality of model estimate. Based on this finding, the thesis explains why naive distance-based sampling fails as a strategy to maximise the span of all-inlier minimal subsets produced, and develops a novel sampling algorithm which, unlike previous approaches, consciously targets all-inlier minimal subsets with large span for robust geometric fitting. The second major contribution of this thesis relates to another computer vision problem which also requires the knowledge of robust parameter estimation: deformable registration. The goal of deformable registration is to align regions in two or more images corresponding to a common object that can deform nonrigidly such as a bending piece of paper or a waving flag. The information of interest is the nonlinear transformation that maps points from one image to another, and is represented by a deformable model, for example, a thin plate spline warp. Most of the previous approaches to outlier rejection in deformable registration rely on optimising fully deformable models in the presence of outliers due to the assumption of the highly nonlinear correspondence manifold which contains the inliers. This thesis makes an interesting observation that, for many realistic physical deformations, the scale of errors of the outliers usually dwarfs the nonlinear effects of the correspondence manifold on which the inliers lie. The finding suggests that standard robust techniques for geometric fitting are applicable to model the approximately linear correspondence manifold for outlier rejection. Moreover, the thesis develops two novel outlier rejection methods for deformable registration, which are based entirely on fitting simple linear models and shown to be considerably faster but at least as accurate as previous approaches.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2014
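The span argument can be illustrated with a toy robust line-fitting problem: minimal subsets of two points generate hypotheses, and the winning hypothesis typically comes from a widely separated all-inlier pair. This is a plain RANSAC-style sketch with arbitrary thresholds, not the guided sampling algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: inliers on the line y = 2x + 1 plus gross outliers.
x_in = rng.uniform(0, 10, 80)
y_in = 2.0 * x_in + 1.0 + rng.normal(0, 0.3, 80)
x_out = rng.uniform(0, 10, 40)
y_out = rng.uniform(-20, 40, 40)
x = np.concatenate([x_in, x_out])
y = np.concatenate([y_in, y_out])

def consensus(a, b, thresh=0.5):
    """Number of points within the inlier threshold of the line y = a*x + b."""
    return int(np.sum(np.abs(y - (a * x + b)) < thresh))

best, best_span = None, None
for _ in range(500):
    i, j = rng.choice(len(x), size=2, replace=False)   # minimal subset
    if x[i] == x[j]:
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    score = consensus(a, b)
    if best is None or score > best[0]:
        best = (score, a, b)
        best_span = abs(x[j] - x[i])                   # span of the winning subset

print(f"best consensus = {best[0]}, span of winning minimal subset = {best_span:.2f}")
print(f"estimated line: y = {best[1]:.2f}*x + {best[2]:.2f}")
```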
APA, Harvard, Vancouver, ISO, and other styles
22

"A New Method Of Resampling Testing Nonparametric Hypotheses: Balanced Randomization Tests." Tulane University, 2014.

Find full text
Abstract:
Background: Resampling methods such as the Monte Carlo (MC) approach and the bootstrap approach (BA) are very flexible tools for statistical inference. They are generally used in experiments with small sample sizes or where the parametric test assumptions are not met, as well as in situations where expressions for properties of complex estimators are statistically intractable. However, the MC and BA methods require relatively large random samples to estimate the parameters of the full permutation (FP), or exact, distribution. Objective: The objective of this research was to develop an efficient statistical computational resampling method that compares two population parameters using a balanced and controlled sampling design. The application of the new method, the balanced randomization (BR) method, is discussed using microarray data, where sample sizes are generally small. Methods: Multiple datasets were simulated from real data to compare the accuracy and efficiency of the methods (BR, MC, and BA). Datasets, probability distributions, parameters, and sample sizes were varied in the simulation. The correlation between the exact p-value and the p-values generated by simulation provides a measure of accuracy/consistency for comparing methods. Sensitivity, specificity, power functions, and false negative and false positive rates, examined with graphical and multivariate analyses, were used to compare methods. Results and Discussion: The correlation between the exact p-value and those estimated by simulation is higher for BR and MC (increasing somewhat with sample size) and much lower for BA, most markedly for skewed distributions (lognormal, exponential). Furthermore, the relative proportion of 95%/99% CIs containing the true p-value is BR vs. MC = 3%/1.3% (p<0.0001) and BR vs. BA = 20%/15% (p<0.0001). The sensitivity, specificity and power function of the BR method were shown to have a slight advantage over those of MC and BA in most situations. As an example, the BR method was applied to a microarray study to examine significantly differentially expressed genes.
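For context, a plain Monte Carlo randomization test for a two-group mean difference looks like the sketch below; the balanced, controlled resampling scheme that defines the BR method is specific to the dissertation and is not reproduced, and the data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two small samples, as in a typical per-gene microarray comparison
# (the numbers are invented for illustration).
group_a = np.array([7.1, 6.8, 7.4, 7.9, 7.2])
group_b = np.array([6.2, 6.5, 6.0, 6.9, 6.4])
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_a, reps = len(group_a), 10000

# Monte Carlo approximation of the full-permutation (exact) p-value:
# randomly reassign group labels and recompute the mean difference.
count = 0
for _ in range(reps):
    perm = rng.permutation(pooled)
    if abs(perm[:n_a].mean() - perm[n_a:].mean()) >= abs(observed):
        count += 1

print("Monte Carlo two-sided p-value:", (count + 1) / (reps + 1))
```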
APA, Harvard, Vancouver, ISO, and other styles
23

Carland, Matthew A. "A theoretical and experimental dissociation of two models of decision-making." Thesis, 2014. http://hdl.handle.net/1866/12038.

Full text
Abstract:
Decision-making is a computational process of fundamental importance to many aspects of animal behavior. The prevailing model in the experimental study of decision-making is the drift-diffusion model (DDM), which has a long history and accounts for a broad range of behavioral and neurophysiological data. However, an alternative model, called the urgency-gating model, has been offered which can account equally well for much of the same data in a more parsimonious and theoretically sound manner. In what follows, we first trace the origins and development of the DDM and give a brief overview of the manner in which it has supplied an explanatory framework for a large number of behavioral and physiological studies in the domain of decision-making. In so doing, we attempt to build a strong and clear case for its strengths so that it can be fairly and rigorously compared to potential alternative models. We then re-examine a number of the implicit and explicit theoretical assumptions made by the drift-diffusion model and highlight some of its empirical shortcomings. This analysis serves as the contextual backdrop for our introduction and discussion of the urgency-gating model. Finally, we present a novel experiment whose methodological design affords a decisive empirical dissociation of the models; the results illustrate the empirical and theoretical shortcomings of the drift-diffusion model and instead offer clear support for the urgency-gating model. We finish by discussing the potential for the urgency-gating model to shed light on a number of clinical disorders, highlighting several future directions for research.
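A toy simulation of the two competing model families discussed above is sketched below; the parameters and the schematic form of the urgency-gating rule (low-pass-filtered evidence scaled by a linearly growing urgency signal) are assumptions for illustration, not the experimental paradigm or model fits reported in the thesis.

```python
import numpy as np

rng = np.random.default_rng(8)

def ddm_trial(drift=0.2, bound=1.0, dt=0.01, noise=1.0):
    """Drift-diffusion: evidence accumulates until a fixed bound is reached."""
    t, x = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x > 0

def urgency_trial(drift=0.2, bound=1.0, dt=0.01, noise=1.0, tau=0.2):
    """Schematic urgency gating: low-pass-filtered momentary evidence is
    multiplied by a linearly growing urgency signal (here simply t)."""
    t, e = 0.0, 0.0
    while True:
        t += dt
        momentary = drift + noise * rng.normal()
        e += (momentary - e) * dt / tau        # low-pass filter
        if abs(t * e) >= bound:
            return t, e > 0

ddm_rts = [ddm_trial()[0] for _ in range(500)]
urg_rts = [urgency_trial()[0] for _ in range(500)]
print("mean RT: DDM %.2f s, urgency gating %.2f s" % (np.mean(ddm_rts), np.mean(urg_rts)))
```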
APA, Harvard, Vancouver, ISO, and other styles