Dissertations / Theses on the topic 'Sampling (statistics)'


Consult the top 50 dissertations / theses for your research on the topic 'Sampling (statistics).'


1

Pollard, John. "Adaptive distance sampling." Thesis, University of St Andrews, 2002. http://hdl.handle.net/10023/15176.

Abstract:
We investigate mechanisms to improve efficiency for line and point transect surveys of clustered populations by combining the distance methods with adaptive sampling. In adaptive sampling, survey effort is increased when areas of high animal density are located, thereby increasing the number of observations. We begin by building on existing adaptive sampling techniques to create both point and line transect adaptive estimators; these are then extended to allow the inclusion of covariates in the detection function estimator. However, the methods are limited, as the total effort required cannot be forecast at the start of a survey, and so a new fixed total effort adaptive approach is developed. A key difference in the new method is that it does not require the calculation of the inclusion probabilities typically used by existing adaptive estimators. The fixed effort method is primarily aimed at line transect sampling, but point transect derivations are also provided. We evaluate the new methodology by computer simulation, and report on surveys of harbour porpoise in the Gulf of Maine, in which the approach was compared with conventional line transect sampling. Line transect simulation results for a clustered population showed up to a 6% improvement in the adaptive density variance estimate over the conventional, whilst when there was no clustering the adaptive estimate was 1% less efficient than the conventional. For the harbour porpoise survey, the adaptive density estimate CVs showed improvements of 8% for individual porpoise density and 14% for school density over the conventional estimates. The primary benefit of the fixed effort method is the potential to improve survey coverage, allowing a survey to complete within a fixed time and effort; an important feature if expensive survey resources are involved, such as an aircraft, crew and observers.
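
To fix ideas about the conventional line transect baseline against which the adaptive estimators above are compared, here is a minimal sketch of a half-normal line transect density estimate. The distances, transect length, and the half-normal detection model are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical perpendicular detection distances (km) from a line transect survey
x = np.abs(rng.normal(0.0, 0.2, size=60))
L = 100.0  # total transect length surveyed (km), assumed

# Half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)), with g(0) = 1;
# its maximum likelihood estimate from perpendicular distances is sigma^2 = mean(x^2)
sigma_hat = np.sqrt(np.mean(x ** 2))

# Effective strip half-width: integral of g(x) from 0 to infinity = sigma * sqrt(pi/2)
esw = sigma_hat * np.sqrt(np.pi / 2.0)

# Conventional line transect density estimate: n detections over an area of 2 * L * esw
D_hat = len(x) / (2.0 * L * esw)
print(f"sigma_hat = {sigma_hat:.3f} km, ESW = {esw:.3f} km, D_hat = {D_hat:.2f} animals/km^2")
```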
2

Svensson, Jens. "On Importance Sampling and Dependence Modeling." Doctoral thesis, KTH, Matematik (Inst.), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11272.

Abstract:
This thesis consists of four papers. In the first paper, Monte Carlo simulation for tail probabilities of heavy-tailed random walks is considered. Importance sampling algorithms are constructed by using mixtures of the original distribution with some other state-dependent distributions. Sufficient conditions under which the relative error of such algorithms is bounded are found, and the bound is calculated. A new mixture algorithm based on scaling of the original distribution is presented and compared to existing algorithms. In the second paper, Monte Carlo simulation of quantiles is treated. It is shown that by using importance sampling algorithms developed for tail probability estimation, efficient quantile estimators can be obtained. A functional limit of the quantile process under the importance sampling measure is found, and the variance of the limit process is calculated for regularly varying distributions. The procedure is also applied to the calculation of expected shortfall. The algorithms are illustrated numerically for a heavy-tailed random walk. In the third paper, large deviation probabilities for a sum of dependent random variables are derived. The dependence stems from a few underlying random variables, so-called factors. Each summand is composed of two parts: an idiosyncratic part and a part given by the factors. Conditions under which both factors and idiosyncratic components contribute to the large deviation behavior are found, and the resulting approximation is evaluated in a simple example. In the fourth paper, the asymptotic eigenvalue distribution of the exponentially weighted moving average covariance estimator is studied. Equations for the asymptotic spectral density and the boundaries of its support are found using the Marchenko-Pastur theorem.
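
A minimal sketch of the mixture idea from the first paper, in a simplified state-independent form: the tail probability P(S_n > b) of a heavy-tailed random walk is estimated by importance sampling from a mixture of the original Pareto distribution and a scaled version of it. All parameter values here are illustrative assumptions; the paper's algorithms are state-dependent and more refined.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, b = 1.5, 10, 500.0   # Pareto tail index, walk length, threshold (assumed)
p, c = 0.5, b                   # mixture weight and scaling factor (illustrative choices)

def pareto_pdf(x):
    # Pareto(alpha) density with support [1, inf)
    return np.where(x >= 1.0, alpha * x ** (-alpha - 1.0), 0.0)

def mixture_pdf(x):
    # density of: with probability p draw from Pareto(alpha), else draw c * Pareto(alpha)
    return p * pareto_pdf(x) + (1.0 - p) * pareto_pdf(x / c) / c

N = 200_000
U = rng.random((N, n))
X = (1.0 - rng.random((N, n))) ** (-1.0 / alpha)   # Pareto(alpha) increments via inverse CDF
X = np.where(U < p, X, c * X)                      # scaled component with probability 1 - p

W = np.prod(pareto_pdf(X) / mixture_pdf(X), axis=1)  # importance weights (likelihood ratios)
est = np.mean(W * (X.sum(axis=1) > b))               # estimate of P(S_n > b)
print(f"IS estimate of P(S_n > b): {est:.3e}")
```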
3

Sung, Iyue. "Importance sampling kernel density estimation /." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486398528559777.

4

Meister, Kadri. "On Methods for Real Time Sampling and Distributions in Sampling." Doctoral thesis, Umeå : Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-415.

5

Ignatieva, Ekaterina. "Adaptive Bayesian sampling with application to 'bubbles'." Connect to e-thesis, 2008. http://theses.gla.ac.uk/356/.

Abstract:
MSc(R) thesis submitted to the Department of Mathematics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references.
6

Frey, Jesse C. "Inference procedures based on order statistics." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1122565389.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 148 p.; also includes graphics. Includes bibliographical references (p. 146-148). Available online via OhioLINK's ETD Center.
7

Greenfield, C. C. "Replicated sampling in censuses and surveys." Thesis, [Hong Kong] : University of Hong Kong, 1985. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1232131X.

8

Xi, Liqun, and 奚李群. "Estimating population size for capture-recapture/removal models with heterogeneity and auxiliary information." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29957783.

9

譚玉貞 and Yuk-ching Tam. "Some practical issues in estimation based on a ranked set sample." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221683.

10

尹再英 and Choi-ying Wan. "Statistical analysis for capture-recapture experiments in discrete time." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31225287.

11

Wu, Qin. "Reliable techniques for survey with sensitive question." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1496.

12

Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.

13

Zhou, Shu. "Exploring network models under sampling." Kansas State University, 2015. http://hdl.handle.net/2097/20349.

Abstract:
Networks are defined as sets of items and their connections. Interconnected items are represented by mathematical abstractions called vertices (or nodes), and the links connecting pairs of vertices are known as edges. Networks are easily seen in everyday life: a network of friends, the Internet, metabolic or citation networks. The increase of available data and the need to analyze networks have resulted in the proliferation of models for networks. However, for networks with billions of nodes and edges, computation and inference might not be achieved within a reasonable amount of time or budget. A sampling approach seems a natural choice, but traditional models assume that we have access to the entire network. Moreover, when data are only available for a sampled sub-network, conclusions tend to be extrapolated to the whole network/population without regard to sampling error. The statistical problem this report addresses is how to sample a sub-network and then draw conclusions about the whole network. Are some sampling techniques better than others? Are there more efficient ways to estimate parameters of interest? In which way can we measure how effectively our method reproduces the original network? We explore these questions with a simulation study on the Mesa High School students' friendship network. First, to assess the characteristics of the whole network, we applied the traditional exponential random graph model (ERGM) and a stochastic blockmodel to the complete population of 205 students. Then, we drew simple random and stratified samples of 41 students, applied the traditional ERGM and the stochastic blockmodel again, and defined a way to generalize the sample findings to the population friendship network of 205 students. Finally, we used the degree distribution and other network statistics to compare the true friendship network with the projected one. We achieved the following important results: 1) as expected, stratified sampling outperforms simple random sampling when selecting nodes; 2) ERGM without restrictions offers a poor estimate for most of the tested parameters; and 3) the Bayesian stochastic blockmodel estimation using a stratified sample of nodes achieves the best results.
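
The sampling comparison at the heart of this report can be sketched in a few lines, assuming the networkx package. The block model below is a hypothetical stand-in for the Mesa friendship network (which is not reproduced here); only the 41-node sample size mirrors the report's design.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)

# Hypothetical friendship network with two blocks (e.g., grade levels)
G = nx.stochastic_block_model([100, 105], [[0.08, 0.01], [0.01, 0.08]], seed=7)
block = {v: d["block"] for v, d in G.nodes(data=True)}

def degrees(H):
    return np.array([d for _, d in H.degree()])

# Simple random sample of 41 nodes
srs = rng.choice(list(G.nodes), size=41, replace=False)

# Stratified sample: 20 and 21 nodes from the two blocks
s0 = rng.choice([v for v in G if block[v] == 0], size=20, replace=False)
s1 = rng.choice([v for v in G if block[v] == 1], size=21, replace=False)
strat = np.concatenate([s0, s1])

for name, nodes in [("SRS", srs), ("stratified", strat)]:
    H = G.subgraph(nodes)  # induced sub-network used for estimation
    print(name, "mean degree in sampled sub-network:", degrees(H).mean())
```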
14

Ogorodnikova, Natalia. "Pareto πps sampling design vs. Poisson πps sampling design: Comparison of performance in terms of mean-squared error and evaluation of factors influencing the performance measures." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-67978.

15

Lopez, Escobar Emilio. "On variance estimation under complex sampling designs." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/354346/.

Abstract:
This thesis is formed of three manuscripts (chapters) about variance estimation. Each of the chapters focuses on developing new original variance estimators. Chapter 1 proposes a novel jackknife variance estimator for self-weighted two-stage sampling. Customary jackknives for these designs rely only on the first sampling stage. This omission may induce a bias in the variance estimation when cluster sizes vary, second-stage sampling fractions are small, or there is low variability between clusters. The proposed jackknife accounts for all sampling stages via deletion of clusters and of observations within clusters. It does not need joint-inclusion probabilities and naturally includes finite population corrections. Its asymptotic design-consistency is shown. A simulation study shows that it can be more accurate than the customary jackknife used for this kind of sampling design (Rao, Wu and Yue, 1992). Chapter 2 proposes an entirely new replication variance estimator for any unequal-probability without-replacement sampling design. The proposed replication estimator is approximately equal to the linearisation variance estimators obtained by the Demnati and Rao (2004) approach. It is more general than the Campbell (1980) and Berger and Skinner (2005) generalised jackknife. Its asymptotic design-consistency is shown. A simulation study shows it is more stable than standard jackknives (Quenouille, 1956; Tukey, 1958) with ad hoc finite population corrections and than the generalised jackknife (Campbell, 1980; Berger and Skinner, 2005). Chapter 3 proposes a new variance estimator which accounts for item non-response under unequal-probability without-replacement sampling when estimating a change from rotating (overlapping) repeated surveys. The proposed estimator combines the original approach of Berger and Priam (2010, 2012) with the non-response reverse approach for variance estimation (Fay, 1991; Shao and Steel, 1999). It gives design-consistent estimation of the variance of change when the sampling fraction is small. The proposed estimator uses random hot-deck imputation, but it can be implemented with other imputation techniques. There are two further complementary chapters. One introduces the R package samplingVarEst, which implements some of the methods for variance estimation utilised in the simulations. Finally, a brief chapter discusses future research work.
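
For orientation, the customary delete-one jackknife that Chapter 1 generalises looks as follows for the simplest case of a sample mean under equal-probability sampling. This is a baseline sketch with made-up data, not the estimator proposed in the thesis.

```python
import numpy as np

def jackknife_variance(y):
    """Delete-one jackknife variance estimate for the sample mean."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    leave_one_out = (y.sum() - y) / (n - 1)   # mean recomputed with each unit deleted
    return (n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2)

y = np.array([3.1, 4.7, 2.2, 5.9, 4.4, 3.8])
# For the mean, the jackknife reproduces the usual variance estimate s^2 / n exactly
print(jackknife_variance(y), y.var(ddof=1) / len(y))
```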
16

Wasserman, Gary Steven. "Design of a beattie procedure for continuous acceptance sampling or process surveillance." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/24104.

17

Wang, Yu. "Revisiting Network Sampling." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1546425835709593.

18

Rao, Naresh Krishna. "A variable sampling interval chart for a combined statistic." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/52068.

Abstract:
This thesis is an extension of the work on variable sampling interval (VSI) charts for monitoring a single parameter. An attempt is made to develop a chart which can simultaneously monitor both the process mean and the process variance. The chart is based on a statistic which combines both mean and variance. After developing such a chart, variable sampling intervals are introduced, and the chart is evaluated against alternative methods of monitoring mean and variance with variable sampling intervals. The statistic chosen is an approximate statistic, and simulation studies are performed for the evaluation. The results are at times counter-intuitive; thus an analysis of the properties of the chart is made and explanations are provided.
19

Tse, Kwok Ho. "Sample size calculation : influence of confounding and interaction effects /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?MATH%202006%20TSE.

20

Tʻang, Min. "Extention of evaluating the operating characteristics for dependent mixed variables-attributes sampling plans to large first sample size /." Online version of thesis, 1991. http://hdl.handle.net/1850/11208.

21

Murff, Elizabeth J. Tipton. "On the efficiency of ranked set sampling relative to simple random sampling for estimating the ordinary least squares parameters of the simple linear regression model /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008403.

22

Florêncio, Dinei Alfonso Ferreira. "A new sampling theory and a framework for nonlinear filter banks." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15792.

23

Cheng, Dunlei. "Topics in Bayesian sample size determination and Bayesian model selection." Waco, Tex. : Baylor University, 2007. http://hdl.handle.net/2104/5039.

24

Suen, Wai-sing Alan, and 孫偉盛. "Sample size planning for clinical trials with repeated measurements." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31972172.

25

McShine, Lisa Maria. "Random sampling of combinatorial structures." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/28771.

26

Bailes, Wesley Wayne. "A comparison of basal area and merchantable height as auxiliary variables for double sampling with point sampling." Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3385.

Abstract:
Thesis (M.S.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains xi, 116 p. : ill. (some col.), col. maps. Vita. Includes abstract. Includes bibliographical references (p. 55-56).
27

McGrath, Neill. "Effective sample size in order statistics of correlated data." [Boise, Idaho] : Boise State University, 2009. http://scholarworks.boisestate.edu/td/32/.

28

Shen, Gang. "Bayesian predictive inference under informative sampling and transformation." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-142754/.

Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: Ignorable Model; Transformation; Poisson Sampling; PPS Sampling; Gibbs Sampler; Inclusion Probabilities; Selection Bias; Nonignorable Model; Bayesian Inference. Includes bibliographical references (p. 34-35).
29

Hill, Raymond R. "Multivariate Sampling With Explicit Correlation Induction For Simulation and Optimization Studies /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487931993469621.

30

Nahhas, Ramzi William. "Ranked set sampling : ranking error models, cost, and optimal set size /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488187049542056.

31

Amin, Raid Widad. "Variable sampling interval control charts." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/82617.

Abstract:
Process control charts are widely used to display sample data from a process for purposes of determining whether a process is in control, for bringing an out-of-control process into control, and for monitoring a process to make sure that it stays in control. The usual practice in maintaining a control chart is to take samples from the process at fixed-length sampling intervals. This research investigates the modification of the standard practice where the sampling interval or time between samples is not fixed but can vary depending on what is observed from the data. Variable sampling interval process control procedures are considered for monitoring the outcome of a production process. The time until the next sample depends on what is being observed in the current sample. Sampling is less frequent when the process is at a high level of quality and vice versa. Properties such as the average number of samples until signal, the average time to signal and the variance of the time to signal are developed for the variable sampling interval Shewhart and cusum charts. A Markov chain is utilized to approximate the average time to signal and the corresponding variance for the cusum charts. Properties of the variable sampling interval Shewhart chart are investigated through Renewal Theory and Markov chain approaches for the cases of a sudden and a gradual shift in the process mean, respectively. Also considered is the case of a shift occurring in the time between two samples, without the simplifying assumption that the process mean remains the same from time zero onward. For such a case, the adjusted time to signal is developed for both the Shewhart and cusum charts, in addition to the variance of the adjusted time to signal. Results show that the variable sampling interval control charts are considerably more efficient than the corresponding fixed sampling interval control charts. It is preferable to use only two sampling intervals, which keeps the complexity of the chart to a reasonable level and has practical implications. This feature should make such charts very appealing for use in industry and other fields of application where control charts are used.
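
A minimal simulation sketch of the two-interval VSI scheme the abstract recommends: sample again soon when a point falls in the warning region, relax otherwise, and record the time until an out-of-control signal. The limits, intervals, subgroup size, and shift size below are illustrative assumptions; averaging many replications of this loop would approximate the (adjusted) time to signal studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

k, w = 3.0, 1.0               # action and warning limits, in units of sigma / sqrt(n)
d_long, d_short = 2.0, 0.25   # the two sampling intervals (hours), assumed values
n, shift = 5, 1.0             # subgroup size; mean shift (in sigma) present from time zero

t, time_to_signal = 0.0, None
while time_to_signal is None:
    z = rng.normal(shift * np.sqrt(n))  # standardised subgroup mean under the shift
    if abs(z) > k:
        time_to_signal = t              # out-of-control signal
    elif abs(z) > w:
        t += d_short                    # warning region: sample again soon
    else:
        t += d_long                     # near the centre line: relax sampling
print("time to signal (hours):", time_to_signal)
```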
32

Steele, Russell John. "Practical importance sampling methods for finite mixture models and multiple imputation /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/8956.

33

Haggarty, Ruth Alison. "Evaluation of sampling and monitoring designs for water quality." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3789/.

Abstract:
Assessing water quality is of crucial importance to both society and the environment. Deterioration in water quality through issues such as eutrophication presents substantial risk to human health, plant and animal life, and can have detrimental effects on the local economy. Long-term data records across multiple sites can be used to investigate water quality and risk factors statistically; however, identification of underlying changes can only be successful if there is a sufficient quantity of data available. As vast amounts of resources are required for the implementation and maintenance of a monitoring network, logistically and financially it is not possible to employ continuous monitoring of all water environments. This raises the question as to the optimal design for long-term monitoring networks which are capable of capturing underlying changes. Two of the main design considerations are clearly where to sample, and how frequently to sample. The principal aim of this thesis is to use statistical analysis to investigate frequently used environmental monitoring networks, developing new methodology where appropriate, so that the design and implementation of future networks can be made as effective and cost-efficient as possible. Using data which have been provided by the Scottish Environment Protection Agency, data from several Scottish lakes and rivers and a range of determinands are considered in order to explore water quality monitoring in Scotland. Chapter 1 provides an introduction to environmental monitoring and discusses both existing statistical techniques and potential challenges which are commonly encountered in the analysis of environmental data. Following this, Chapter 2 presents a simulation study which has been designed and implemented in order to evaluate the nature of, and statistical power for, commonly used environmental sampling and monitoring designs for surface waters. The aim is to answer questions regarding how many samples the chemical classification of standing waters should be based on, and how appropriate the currently available data in Scotland are for detecting trends and seasonality. The simulation study was constructed to investigate the ability to detect the different underlying features of the data under several different sampling conditions. After the assessment of how often sampling is required to detect change, the remainder of the thesis will attempt to address some of the questions associated with where the optimal sampling locations are. The European Union Water Framework Directive (WFD) was introduced in 2003 to set compliance standards for all water bodies across Europe, with an aim to prevent deterioration and ensure all sites reach 'good' status by 2015. One of the features of the WFD is that water bodies can be grouped together and the classification of all members of the group is then based on the classification of a single representative site. The potential misclassification of sites means one of the key areas of interest is how well the existing groups used by SEPA for classification capture differences between the sites in terms of several chemical determinands. This will be explored in Chapter 3, where a functional data analysis approach will be taken in order to investigate some of the features of the existing groupings. An investigation of the effect of temporal autocorrelation on our ability to distinguish groups of sites from one another will also be presented here.
It is also of interest to explore whether fewer, or indeed more, groups would be optimal in order to accurately represent the trends and variability in the water quality parameters. Different statistical approaches for grouping standing waters will be presented in Chapter 4, where the question of how many groups is statistically optimal is also addressed. As in Chapter 3, these approaches for grouping sites will be based on functional data in order to include the temporal dynamics of the variable of interest within any analysis of the group structure obtained. Both hierarchical and model-based functional clustering are considered here. The idea of functional clustering is also extended to the multivariate setting, thus enabling information from several determinands of interest to be used within the formation of groups. This is of particular importance in view of the fact that the WFD classification encompasses a range of different determinands. In addition to the investigation of standing waters, an entirely different type of water quality monitoring network is considered in Chapter 5. While standing waters are assumed to be spatially independent of one another, there are several situations where this assumption is not appropriate and where spatial correlation between locations needs to be accounted for. Further developments of the functional clustering methods explored in Chapter 4 are presented here in order to obtain groups of stations that are not only similar in terms of mean levels and temporal patterns of the determinand of interest, but which are also spatially homogeneous. The river network data explored in Chapter 5 introduce a set of new challenges for functional clustering that go beyond the inclusion of Euclidean-distance-based spatial correlation. Existing methodology for estimating spatial correlation is combined with functional clustering approaches and developed to be suitable for application to sites which lie along a river network. The final chapter of this thesis provides a summary of the work presented and a discussion of limitations and suggestions for future directions.
34

Liao, Yijie. "Testing of non-unity risk ratio under inverse sampling." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/707.

35

Stevens, Kevin Wilson. "Adaptive sequential sampling for extreme event statistics in ship design." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118693.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Thesis: S.M. in Naval Architecture and Marine Engineering, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 103-105).
For more than a century, many facets of ship design have fallen into the domain of rules-based engineering. Recent technological progress has been validated to the point that many of these areas will soon warrant reconsideration. In this emerging environment, accurately predicting the motions and loading conditions which a ship is likely to encounter during its lifetime takes on renewed importance. Even when the wave elevations a ship encounters are governed by normal (Gaussian) statistics, the resulting motions and loading conditions can deviate substantially due to the nonlinear nature of the ship dynamics. This is sometimes manifested by heavy-tailed non-Gaussian statistics in which extreme events have a high probability of occurrence. The primary method for quantifying these extreme events is to perform direct Monte-Carlo simulations of a desired seaway and tabulate the results. While this method has been shown to be largely accurate, it is computationally expensive and in many cases impractical; today's computers and software packages can only perform these analyses slightly faster than real time, making it unlikely that they will accurately capture the 500 or 1,000-year wave or wave group even if run in parallel on a large computer cluster; these statistics are instead extrapolated. Recent work by Mohamad and Sapsis at the MIT Stochastic Analysis and Non-Linear Dynamics (SAND) lab has identified a new approach for quantifying generic extreme events of systems subjected to irregular waves and coupled it with a sequential sampling algorithm which allows accurate results to be determined at meager computational cost. This thesis discusses the results of applying this approach directly to ship motions and loading conditions using a modified version of the Large Amplitude Motions Program (LAMP) software package. By simulating the ship response for a small number of wave groups (order of 100), we assess the accuracy of the method in capturing the tail structure of the probability distribution function in different cases and for different observables. Results are compared with direct Monte-Carlo simulations.
36

Zhuang, Yongzhen. "Intelligent sampling over wireless sensor networks /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20ZHUANG.

37

Bergeron, Pierre-Jérôme. "Covariates and length-biased sampling : is there more than meets the eye ?" Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102958.

Abstract:
It is well known that when subjects with a disease are identified through a cross-sectional survey and then followed forward in time until either failure or censoring, the estimated survival function is a biased estimate of the true survival function from onset. This bias, which is caused by the sampling of prevalent rather than incident cases, is termed length bias if the onset time of the disease forms a stationary Poisson process. While authors have proposed different approaches to the analysis of length-biased survival data, there remain a number of issues that have not been fully addressed. The most important of these is perhaps that of how to include covariates into the analysis of length-biased lifetime data on the natural history of diseases, initiated by cross-sectional sampling of a population. One aspect of that problem, which appears to have been neglected in the literature, concerns the effect of length bias on the sampling distribution of the covariates. If the covariates have an effect on the survival time, then their marginal distribution in a length-biased sample is also subject to a bias and is informative about the parameters of interest. As is conventional in most regression analyses, one conditions on the observed covariate values. By conditioning on the observed covariates in the situation described above, however, one effectively ignores the information contained in the distribution of the covariates in the sample. We present the appropriate likelihood approach that takes this information into account, and we establish the consistency and asymptotic normality of the resulting estimators. It is shown that by ignoring the information contained in the sampling distribution of the covariates, one can still obtain, asymptotically, the same point estimates as with the joint likelihood. However, these conditional estimates are less efficient. Our results are illustrated using data on survival with dementia, collected as part of the Canadian Study of Health and Aging.
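
The effect of length bias, and the simplest weighting correction for it, can be seen in a toy simulation: for exponential survival times with unit mean, the length-biased sample has a Gamma(2, 1) distribution, and weighting observations by 1/t recovers the incident-case mean. This illustrates the sampling bias only; it ignores censoring and covariates, which are the focus of the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

# True onset-to-failure times: Exponential(mean = 1). Length-biased sampling draws
# durations with density t * f(t) / mu, which for this f is Gamma(shape=2, scale=1).
t = rng.gamma(shape=2.0, scale=1.0, size=100_000)

naive = t.mean()                       # biased: estimates E[T^2] / E[T] = 2, not 1
corrected = len(t) / np.sum(1.0 / t)   # harmonic-mean (1/t-weighted) correction
print(f"naive mean: {naive:.3f}, length-bias-corrected mean: {corrected:.3f}")
```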
38

Morin, Antoine. "Estimation and prediction of black fly abundance and productivity." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75447.

Abstract:
Sampling and analytical techniques to estimate abundance and productivity of stream invertebrates are examined for their precision and accuracy, and then utilized to develop empirical models of sampling variability, abundance, and growth rates of overwintering larvae of black flies (Diptera: Simuliidae). Sampling variability of density estimates of stream benthos increases with mean density, and decreases with sampler size. Artificial substrates do not consistently reduce sampling variability, and introduce variable bias in estimates of simuliid density. Growth rates of overwintering simuliids are mainly a function of their body size, but available data show that growth rates also increase with water temperature. Biomass of overwintering simuliids in lake outlets in Southern Quebec is positively related to chlorophyll concentration and current velocity, and negatively related to distance from the lake, water depth, and periphyton biomass. Computer simulations show that published methods fail to produce reliable confidence intervals for estimates of secondary production for highly aggregated populations, and a reliable method, based on the Bootstrap procedure and the Allen curve, is presented.
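
The bootstrap idea in the final sentence can be sketched generically: a percentile confidence interval for the mean of a highly aggregated (negative binomial) count sample. The data-generating values below are made up; the thesis applies the procedure to secondary production via the Allen curve, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
counts = rng.negative_binomial(1, 0.05, size=40)   # hypothetical aggregated larval counts

# Percentile bootstrap: resample with replacement, recompute the mean each time
boot = rng.choice(counts, size=(10_000, len(counts)), replace=True).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean = {counts.mean():.1f}, 95% bootstrap CI = ({lo:.1f}, {hi:.1f})")
```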
39

Ashbridge, Jonathan. "Inference for plant-capture." Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13741.

Abstract:
When investigating the dynamics of an animal population, a primary objective is to obtain reasonable estimates of abundance or population size. This thesis concentrates on the problem of obtaining point estimates of abundance from capture-recapture data and on how such estimation can be improved by using the method of plant-capture. Plant-capture constitutes a natural generalisation of capture-recapture. In a plant-capture study a pre-marked population of known size is added to the target population of unknown size. The capture-recapture experiment is then carried out on the augmented population. Chapter 1 considers the addition of planted individuals to target populations which behave according to the standard capture-recapture model M0. Chapter 2 investigates an analogous model based on sampling in continuous time. In each of these chapters, distributional results are derived under the assumption that the behaviour of the plants is indistinguishable from that of members of the target population. Maximum likelihood estimators and other new estimators are proposed for each model. The results suggest that the use of plants is beneficial, and furthermore that the new estimators perform more satisfactorily than the maximum likelihood estimators. Chapter 3 introduces, initially in the absence of plants, a new class of estimators, described as coverage adjusted estimators, for the standard capture-recapture model M[sub]h. These new estimators are shown, through simulation and real life data, to compare favourably with estimators that have previously been proposed. Plant-capture versions of these new estimators are then derived and the usefulness of the plants is demonstrated through simulation. Chapter 4 describes how the approach taken in chapter 3 can be modified to produce a new estimator for the analogous continuous time model. This estimator is then shown through simulation to be preferable to estimators that have previously been proposed.
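
A moment-based caricature of plant-capture, assuming plants and target animals share a common per-occasion capture probability p: estimate p from the known number of plants, then invert the expected capture count for the target population. This is only a sketch of the sampling idea; the thesis develops maximum likelihood and coverage-adjusted estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

N_true, R, T, p = 400, 50, 5, 0.2           # unknown target size, plants, occasions, capture prob
target_caps = rng.binomial(N_true * T, p)    # total captures of target animals over T occasions
plant_caps = rng.binomial(R * T, p)          # total captures of the R known plants

p_hat = plant_caps / (R * T)                 # plants assumed indistinguishable from the target
N_hat = target_caps / (T * p_hat)            # moment estimator of target abundance
print(f"p_hat = {p_hat:.3f}, N_hat = {N_hat:.0f} (true {N_true})")
```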
40

Wan, Choi-ying. "Statistical analysis for capture-recapture experiments in discrete time." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22753217.

41

Tam, Yuk-ching. "Some practical issues in estimation based on a ranked set sample /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20897169.

42

González, Rocío Prieto. "Incorporating animal movement into circular plot and point transect surveys of wildlife abundance." Thesis, University of St Andrews, 2018. http://hdl.handle.net/10023/15612.

Abstract:
Estimating wildlife abundance is fundamental for effective management and conservation. A range of methods exist: total counts, plot sampling, distance sampling and capture-recapture based approaches. All methods have assumptions, and their failure can lead to substantial bias. Current research in the field is focused not on establishing new methods but on extending existing methods to deal with violations of their assumptions. This thesis focuses on incorporating animal movement into circular plot sampling (CPS) and point transect sampling (PTS), where a key assumption is that animals do not move while within detection range, i.e., the survey is a snapshot in time. While targeting this goal, we found some unexpected bias in PTS when animals were still and model selection was used to choose among different candidate models for the detection function (the model describing how detectability changes with observer-animal distance). Using a simulation study, we found that, although PTS estimators are asymptotically unbiased, for the recommended sample sizes the bias depended on the form of the true detection function. We then extended the simulation study to include animal movement, and found this led to further bias in CPS and PTS. We present novel methods that incorporate animal movement with constant speed into estimates of abundance. First, in CPS, we present an analytic expression to correct for the bias given linear movement. When movement is defined by a diffusion process, a simulation-based approach, modelling the probability of animal presence in the circular plot, results in less than 3% bias in the abundance estimates. For PTS we introduce an estimator composed of two linked submodels: the movement model (animals moving linearly) and the detection model. The performance of the proposed method is assessed via simulation. Despite being biased, the new estimator yields improved results compared to ignoring animal movement in conventional PTS.
43

Tirres, Lizet. "Survey design, sampling, and analysis with applications." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10131680.

Abstract:

Survey theory developed as a means to overcome problems with design and analysis inherent in early research. Survey sampling methodology improves the quality of information collected, ensures the accuracy of data analysis, and reduces the cost of research. Technology drives the evolution of data collection and analysis that is required in survey sampling. In turn, this influences survey sampling techniques. I investigate the history of survey sampling, current survey sampling theory, and current theory applied to two examples: 1) a stratified market research survey, and 2) a psychological survey for health science research.

The market research survey was an original design using a specific methodology: conduct pre-interviews on a small sample, develop survey questions based on the qualitative research, stratify the target sample during data collection, and perform data analysis on the resulting cross-sectional data. The second survey utilizes health measurement instruments that have already been developed and tested. The resulting longitudinal data are then scored and analyzed.

44

Batidzirai, Jesca Mercy. "Randomization in a two armed clinical trial: an overview of different randomization techniques." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/395.

Abstract:
Randomization is the key element of any sensible clinical trial. It is the only way we can be sure that the patients have been allocated to the treatment groups without bias and that the treatment groups are almost similar before the start of the trial. The randomization schemes used to allocate patients to the treatment groups play a role in achieving this goal. This study uses SAS simulations to carry out categorical data analysis and to compare two main randomization schemes in dental studies with small samples, namely unrestricted and restricted randomization: simple randomization and the minimization method, respectively. Results show that minimization produces almost equally sized treatment groups, whereas simple randomization is weak in balancing prognostic factors. Nevertheless, simple randomization can also produce balanced groups even in small samples, by chance. Statistical power is also higher when minimization is used than with simple randomization, but bigger samples might be needed to boost the power.
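
A sketch of the minimization method compared in the study, in its common Pocock-Simon form: each new patient is assigned to the arm that minimises total marginal imbalance across prognostic factors, with ties broken at random. The factors and levels here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
factors = ["sex", "age_group"]                               # illustrative prognostic factors
counts = {f: np.zeros((2, 2), dtype=int) for f in factors}   # counts[f][level, arm]

def minimize_assign(patient):
    """Assign to the arm minimising total marginal imbalance; randomise on ties."""
    imbalance = np.zeros(2)
    for f in factors:
        level = patient[f]
        for arm in (0, 1):
            trial = counts[f][level].copy()      # hypothetical counts if this arm is chosen
            trial[arm] += 1
            imbalance[arm] += abs(trial[0] - trial[1])
    arm = rng.integers(2) if imbalance[0] == imbalance[1] else int(np.argmin(imbalance))
    for f in factors:
        counts[f][patient[f], arm] += 1          # commit the assignment
    return arm

for _ in range(20):
    patient = {"sex": rng.integers(2), "age_group": rng.integers(2)}
    print(minimize_assign(patient), end=" ")
```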
45

Anderson, Barbara J. "Something to do with community structure : the influence of sampling and analysis on measures of community structure." University of Otago. Department of Botany, 2006. http://adt.otago.ac.nz./public/adt-NZDU20070215.150836.

Abstract:
Diversity indices confound two components: species richness and evenness. Community structure should therefore be evaluated by employing separate measures of the number of species and their relative abundances. However, the relative abundances of species are dependent on the abundance measure used. Although the use of biomass or productivity is recommended by theory, in practice a surrogate measure is more often used. Frequency (local or relative) and point-quadrat cover provide two objective measures of abundance which are fast, less destructive and avoid problems associated with distinguishing individuals. However, both give discrete bounded data which may further alter the relative abundances of species. These measures have a long history of use and, as the need for objective information on biodiversity becomes more pressing, their use is likely to become more widespread. Consequently, it seems appropriate to investigate the effect of these abundance measures, and the resolution at which they are used, on calculated evenness. Field, artificial and simulated data were used to investigate the effect of abundance measure and resolution on evidence for community structure. The field data consisted of seventeen sites. Sites from four vegetation types (saltmeadow, geothermal, ultramafic and high-altitude meadow) were sampled in three biogeographical regions. Most of the indices of community structure (species richness, diversity and evenness) detected differences between the different vegetation types, and different niche-apportionment models were fitted to the field data from saltmeadow and geothermal vegetation. Estimates of community structure based on local frequency and point-quadrat data differed. Local frequency tended to give higher calculated evenness, whereas point-quadrat data tended to fit niche-apportionment models where local frequency data failed. The effect of resolution on the eighteen evenness indices investigated depended on community species richness and the particular index used. The investigated evenness indices were divided into three groups (symmetric, continuous and traditional indices) based on how they ranked real and artificially constructed communities. Contrary to Smith and Wilson's recommendation, the symmetric indices E[VAR] and E[Q] proved unsuitable for use with most types of plant data. In particular, E[Q] tends to assign most communities low values and has a dubious relationship with intrinsic evenness. The continuous indices, E[MS] and E[2,1], were the indices best able to discriminate between field, artificial and simulated communities, and their use should be re-evaluated. Traditional indices used with low resolution tended to elevate the calculated evenness, especially in species-rich communities. The relativized indices, E[Hurlbert] and EO[dis], were an exception, as they were always able to attain the minimum of zero; however, they were more sensitive to changes in resolution, particularly when resolution was low. Overall, traditional indices based on Hill's ratios, including E[1/D] (=E[2,0]), and G[2,1] gave the best performance, while the general criticism of the use of Pielou's J' as an index of evenness was further substantiated by this study. As a final recommendation, ecologists are implored to investigate their data and the likely effects that sampling and analysis have had on the calculated values of their indices.
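
Several of the quantities the thesis compares can be computed directly from a vector of abundances. Below is a minimal sketch of species richness, Shannon diversity, Pielou's J', and the Hill-ratio evenness E[1/D]; the example abundances are made up.

```python
import numpy as np

def community_structure(abundances):
    """Species richness S, Shannon H', Pielou's J' = H'/ln(S), and evenness E[1/D]."""
    x = np.asarray(abundances, dtype=float)
    p = x[x > 0] / x[x > 0].sum()     # relative abundances of observed species
    S = len(p)                        # species richness
    H = -np.sum(p * np.log(p))        # Shannon diversity
    J = H / np.log(S)                 # Pielou's J'
    D = 1.0 / np.sum(p ** 2)          # inverse Simpson (Hill number of order 2)
    return S, H, J, D / S             # E[1/D]: inverse Simpson relative to richness

print(community_structure([50, 30, 10, 5, 5]))
print(community_structure([20, 20, 20, 20, 20]))  # perfectly even: J' = E[1/D] = 1
```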
46

Lipson, Kay. "The role of the sampling distribution in developing understanding of statistical inference." Swinburne University of Technology, 2000. http://adt.lib.swin.edu.au./public/adt-VSWT20050711.161903.

Abstract:
There has been widespread concern expressed by members of the statistics education community in the past few years about the lack of any real understanding demonstrated by many students completing courses in introductory statistics. This deficiency in understanding has been particularly noted in the area of inferential statistics, where students, particularly those studying statistics as a service course, have been inclined to view statistical inference as a set of unrelated recipes. As such, these students have developed skills that have little practical application and are easily forgotten. This thesis is concerned with the development of understanding in statistical inference for beginning students of statistics at the post-secondary level. This involves consideration of the nature of understanding in introductory statistical inference, and how understanding can be measured in the context of statistical inference. In particular, the study has examined the role of the sampling distribution in the students' schemas for statistical inference, and its relationship to both conceptual and procedural understanding. The results of the study have shown that, as anticipated, students will construct highly individual schemas for statistical inference, but that the degree of integration of the concept of the sampling distribution within this schema is indicative of the level of development of conceptual understanding in that student. The results of the study have practical implications for the teaching of courses in introductory statistics, in terms of content, delivery and assessment.
47

Fike, William H. "Lobster Sampling Trap." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/FikeWH2007.pdf.

48

Ertefaie, Ashkan. "Causal inference via propensity score regression and length-biased sampling." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104784.

Abstract:
Confounder adjustment is key in the estimation of exposure effects in observational studies. Two well-known causal adjustment techniques are the propensity score and inverse probability of treatment weighting. We compare the asymptotic properties of these two estimators and show that the former method results in a more efficient estimator. Since ignoring important confounders results in a biased estimator, it seems beneficial to adjust for all the covariates. This, however, may inflate the variance of the estimated parameters and induce bias as well. We present a penalization technique based on the joint likelihood of the treatment and response variables to select the key covariates that need to be included in the treatment assignment model. Besides the bias induced by non-randomization, we discuss another source of bias induced by having a non-representative sample of the target population. In particular, we study the effect of length-biased sampling on the estimation of the treatment effect. We introduce weighted and doubly robust estimating equations to adjust for biased sampling and non-randomization in the generalized accelerated failure time model setting. Large-sample properties of the estimators are established. We conduct extensive simulation studies to examine the small-sample properties of the estimators. In each chapter, we apply our proposed technique to real data sets and compare the results with those obtained by other methods.
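
A toy comparison of the two causal adjustment techniques named in the abstract, on simulated confounded data: inverse probability of treatment weighting versus a simple propensity-score stratification (used here as a stand-in for the propensity-score regression studied in the thesis). All data and model choices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Simulated observational data: one confounder x affects both treatment and outcome
n = 5_000
x = rng.normal(size=n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * x)))   # confounded treatment assignment
y = 1.0 * t + 2.0 * x + rng.normal(size=n)            # true treatment effect = 1

# Estimated propensity score e(x) = P(T = 1 | x)
e = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]

# Inverse probability of treatment weighting (IPTW)
iptw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

# Stratification on quintiles of the estimated score, averaging within-stratum contrasts
strata = np.digitize(e, np.quantile(e, [0.2, 0.4, 0.6, 0.8]))
effects = [y[(strata == s) & (t == 1)].mean() - y[(strata == s) & (t == 0)].mean()
           for s in range(5)]
print(f"IPTW: {iptw:.2f}, PS stratification: {np.mean(effects):.2f} (truth 1.00)")
```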
49

Hudson-Curtis, Buffy L. "Generalizations of the Multivariate Logistic Distribution with Applications to Monte Carlo Importance Sampling." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20011101-224634.

Abstract:

Monte Carlo importance sampling is a useful numerical integration technique, particularly in Bayesian analysis. A successful importance sampler will mimic the behavior of the posterior distribution, not only in the center, where most of the mass lies, but also in the tails (Geweke, 1989). Typically, the Hessian of the importance sampler is set equal to the Hessian of the posterior distribution evaluated at the mode. Since the importance sampling estimates are weighted averages, their accuracy is assessed by assuming a normal limiting distribution. However, if this scaling of the Hessian leads to a poor match in the tails of the posterior, this assumption may be false (Geweke, 1989). Additionally, in practice, two commonly used importance samplers, the Multivariate Normal Distribution and the Multivariate Student-t Distribution, do not perform well for a number of posterior distributions (Monahan, 2000). A generalization of the Multivariate Logistic Distribution (the Elliptical Multivariate Logistic Distribution) is described and its properties explored. This distribution outperforms the Multivariate Normal distribution and the Multivariate Student-t distribution as an importance sampler for several posterior distributions chosen from the literature. A modification of the scaling by Hessians of the importance sampler and the posterior distribution is explained. Employing this alternate relationship increases the number of posterior distributions for which the Multivariate Normal, the Multivariate Student-t, and the Elliptical Multivariate Logistic Distribution can serve as importance samplers.

50

Vining, G. Geoffrey. "Determining the most appropiate [sic] sampling interval for a Shewhart X-chart." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/94487.

Abstract:
A common problem encountered in practice is determining when it is appropriate to change the sampling interval for control charts. This thesis examines this problem for Shewhart X̅ charts. Duncan's economic model (1956) is used to develop a relationship between the most appropriate sampling interval and the present rate of "disturbances," where a disturbance is a shift to an out-of-control state. A procedure is proposed which switches the interval to convenient values whenever a shift in the rate of disturbances is detected. An example using simulation demonstrates the procedure.