
Journal articles on the topic 'Sampling with varying probabilities'



Consult the top 50 journal articles for your research on the topic 'Sampling with varying probabilities.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Goulionis, J. E. "Strategies for sampling with varying probabilities." Journal of Statistical Computation and Simulation 81, no. 11 (November 2011): 1753. http://dx.doi.org/10.1080/00949655.2011.622434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chaudhuri, Arijit, and Arun Kumar Adhikary. "Circular Systematic Sampling with Varying Probabilities." Calcutta Statistical Association Bulletin 36, no. 3-4 (September 1987): 193–96. http://dx.doi.org/10.1177/0008068319870310.

Full text
Abstract:
Certain conditions connecting the population size, sample size and the sampling interval in circular systematic sampling with equal probabilities are known. We present here a simple “condition” connecting the sample size, size-measures and the sampling interval in pps circular systematic sampling. The condition is important in noting limitations on sample-sizes when a sampling interval is pre-assigned.
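The selection scheme the authors analyse can be sketched in a few lines: the size measures are cumulated around a circle of total size T, a random start is drawn, and every k-th label is taken. This is a minimal illustrative sketch (the function name, interface, and default interval are assumptions, not from the paper); the paper's "condition" concerns which combinations of the sample size, the size measures, and a pre-assigned interval k are feasible, since a poorly chosen k can select the same unit more than once.

```python
import random

def circular_systematic_pps(sizes, n, k=None, seed=None):
    """Circular systematic pps selection: cumulate the size measures
    around a circle of total size T, draw a random start, then take
    every k-th label.  Illustrative sketch; names are assumed."""
    rng = random.Random(seed)
    T = sum(sizes)
    k = k if k is not None else max(1, T // n)
    cum, c = [], 0
    for s in sizes:
        c += s
        cum.append(c)                       # unit i owns labels (cum[i-1], cum[i]]
    r = rng.randrange(1, T + 1)             # random start on the circle
    chosen = []
    for j in range(n):
        label = (r + j * k - 1) % T + 1     # walk around the circle
        for i, ci in enumerate(cum):
            if label <= ci:                 # unit whose block holds this label
                chosen.append(i)
                break
    return chosen
```

With equal sizes and k = 1 the scheme reduces to selecting n consecutive units, which is a quick sanity check on the implementation.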
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Prenesh. "On sampling of three units using varying probabilities." Statistics 18, no. 3 (January 1987): 373–77. http://dx.doi.org/10.1080/02331888708802032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Blanchet, Jose, and Jingchen Liu. "Efficient importance sampling in ruin problems for multidimensional regularly varying random walks." Journal of Applied Probability 47, no. 2 (June 2010): 301–22. http://dx.doi.org/10.1239/jap/1276784893.

Full text
Abstract:
We consider the problem of efficient estimation via simulation of first passage time probabilities for a multidimensional random walk with heavy-tailed increments. In addition to being a natural generalization to the problem of computing ruin probabilities in insurance - in which the focus is the maximum of a one-dimensional random walk with negative drift - this problem captures important features of large deviations for multidimensional heavy-tailed processes (such as the role played by the mean of the process in connection to the location of the target set). We develop a state-dependent importance sampling estimator for this class of multidimensional problems. Then, using techniques based on Lyapunov inequalities, we argue that our estimator is strongly efficient in the sense that the relative mean squared error of our estimator can be made arbitrarily small by increasing the number of replications, uniformly as the probability of interest approaches 0.
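The importance-sampling identity behind such estimators can be illustrated in one dimension: to estimate a small tail probability of a Pareto variable, sample from a heavier-tailed Pareto proposal and reweight each hit by the likelihood ratio. This sketch shows only the plain identity under an assumed Pareto model; it is not the authors' state-dependent, Lyapunov-tuned scheme.

```python
import random

def pareto_tail_is(alpha=2.0, alpha_g=1.0, b=100.0, n=10000, seed=1):
    """Importance-sampling estimate of P(X > b) for a Pareto(alpha)
    variable (x_m = 1, true value b**-alpha), drawing from a heavier
    Pareto(alpha_g) proposal.  Minimal sketch of the IS identity."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random() ** (-1.0 / alpha_g)        # Pareto(alpha_g) draw
        if x > b:
            # likelihood ratio f(x)/g(x) of the two Pareto densities
            total += (alpha / alpha_g) * x ** (alpha_g - alpha)
    return total / n
```

For alpha = 2 and b = 100 the true tail probability is 1e-4; crude Monte Carlo with the same 10,000 draws would typically see at most a couple of hits, while the proposal above hits the event about 1% of the time and reweights.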
APA, Harvard, Vancouver, ISO, and other styles
5

Blanchet, Jose, and Jingchen Liu. "Efficient importance sampling in ruin problems for multidimensional regularly varying random walks." Journal of Applied Probability 47, no. 02 (June 2010): 301–22. http://dx.doi.org/10.1017/s0021900200006653.

Full text
Abstract:
We consider the problem of efficient estimation via simulation of first passage time probabilities for a multidimensional random walk with heavy-tailed increments. In addition to being a natural generalization to the problem of computing ruin probabilities in insurance - in which the focus is the maximum of a one-dimensional random walk with negative drift - this problem captures important features of large deviations for multidimensional heavy-tailed processes (such as the role played by the mean of the process in connection to the location of the target set). We develop a state-dependent importance sampling estimator for this class of multidimensional problems. Then, using techniques based on Lyapunov inequalities, we argue that our estimator is strongly efficient in the sense that the relative mean squared error of our estimator can be made arbitrarily small by increasing the number of replications, uniformly as the probability of interest approaches 0.
APA, Harvard, Vancouver, ISO, and other styles
6

Haslett, Stephen. "Best linear unbiased estimation for varying probability with and without replacement sampling." Special Matrices 7, no. 1 (January 1, 2019): 78–91. http://dx.doi.org/10.1515/spma-2019-0007.

Full text
Abstract:
When sample survey data with complex design (stratification, clustering, unequal selection or inclusion probabilities, and weighting) are used for linear models, estimation of model parameters and their covariance matrices becomes complicated. Standard fitting techniques for sample surveys either model conditional on survey design variables, or use only design weights based on inclusion probabilities essentially assuming zero error covariance between all pairs of population elements. Design properties that link two units are not used. However, if population error structure is correlated, an unbiased estimate of the linear model error covariance matrix for the sample is needed for efficient parameter estimation. By making simultaneous use of sampling structure and design-unbiased estimates of the population error covariance matrix, the paper develops best linear unbiased estimation (BLUE) type extensions to standard design-based and joint design and model based estimation methods for linear models. The analysis covers both with and without replacement sample designs. It recognises that estimation for with replacement designs requires generalized inverses when any unit is selected more than once. This and the use of Hadamard products to link sampling and population error covariance matrix properties are central topics of the paper. Model-based linear model parameter estimation is also discussed.
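The design-based building blocks this paper extends are the classical varying-probability estimators of a population total: Hansen-Hurwitz for with-replacement pps draws and Horvitz-Thompson for without-replacement designs. A minimal sketch of those two estimators (the paper's BLUE-type extensions using the error covariance matrix are not reproduced here):

```python
def hansen_hurwitz(y, p):
    """With-replacement pps sampling: y[i] is the value observed on
    draw i, selected with probability p[i].  Unbiased for the total."""
    n = len(y)
    return sum(yi / pi for yi, pi in zip(y, p)) / n

def horvitz_thompson(y, pi):
    """Without-replacement design: y[i] is the value of distinct sampled
    unit i with inclusion probability pi[i].  Unbiased for the total."""
    return sum(yi / pii for yi, pii in zip(y, pi))
```

Both estimators weight each observation by the inverse of its selection (or inclusion) probability; the Hansen-Hurwitz form averages over draws because the same unit can appear more than once, which is exactly the situation where the paper notes generalized inverses become necessary.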
APA, Harvard, Vancouver, ISO, and other styles
7

Fatima, Khadija, Zahoor Ahmed, and Abeeda Fatima. "JACKKNIFE REPLICATION VARIANCE ESTIMATION OF POPULATION TOTAL UNDER SYSTEMATIC SAMPLING WITH VARYING PROBABILITIES." Matrix Science Mathematic 1, no. 1 (January 20, 2017): 34–39. http://dx.doi.org/10.26480/msmk.01.2017.34.39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Blanchet, Jose H., and Jingchen Liu. "State-dependent importance sampling for regularly varying random walks." Advances in Applied Probability 40, no. 4 (December 2008): 1104–28. http://dx.doi.org/10.1239/aap/1231340166.

Full text
Abstract:
Consider a sequence (Xk : k ≥ 0) of regularly varying independent and identically distributed random variables with mean 0 and finite variance. We develop efficient rare-event simulation methodology associated with large deviation probabilities for the random walk (Sn : n ≥ 0). Our techniques are illustrated by examples, including large deviations for the empirical mean and path-dependent events. In particular, we describe two efficient state-dependent importance sampling algorithms for estimating the tail of Sn in a large deviation regime as n ↗ ∞. The first algorithm takes advantage of large deviation approximations that are used to mimic the zero-variance change of measure. The second algorithm uses a parametric family of changes of measure based on mixtures. Lyapunov-type inequalities are used to appropriately select the mixture parameters in order to guarantee bounded relative error (or efficiency) of the estimator. The second example involves a path-dependent event related to a so-called knock-in financial option under heavy-tailed log returns. Again, the importance sampling algorithm is based on a parametric family of mixtures which is selected using Lyapunov bounds. In addition to the theoretical analysis of the algorithms, numerical experiments are provided in order to test their empirical performance.
APA, Harvard, Vancouver, ISO, and other styles
9

Blanchet, Jose H., and Jingchen Liu. "State-dependent importance sampling for regularly varying random walks." Advances in Applied Probability 40, no. 04 (December 2008): 1104–28. http://dx.doi.org/10.1017/s0001867800002986.

Full text
Abstract:
Consider a sequence (Xk : k ≥ 0) of regularly varying independent and identically distributed random variables with mean 0 and finite variance. We develop efficient rare-event simulation methodology associated with large deviation probabilities for the random walk (Sn : n ≥ 0). Our techniques are illustrated by examples, including large deviations for the empirical mean and path-dependent events. In particular, we describe two efficient state-dependent importance sampling algorithms for estimating the tail of Sn in a large deviation regime as n ↗ ∞. The first algorithm takes advantage of large deviation approximations that are used to mimic the zero-variance change of measure. The second algorithm uses a parametric family of changes of measure based on mixtures. Lyapunov-type inequalities are used to appropriately select the mixture parameters in order to guarantee bounded relative error (or efficiency) of the estimator. The second example involves a path-dependent event related to a so-called knock-in financial option under heavy-tailed log returns. Again, the importance sampling algorithm is based on a parametric family of mixtures which is selected using Lyapunov bounds. In addition to the theoretical analysis of the algorithms, numerical experiments are provided in order to test their empirical performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Asmussen, Søren. "Importance Sampling for Failure Probabilities in Computing and Data Transmission." Journal of Applied Probability 46, no. 3 (September 2009): 768–90. http://dx.doi.org/10.1239/jap/1253279851.

Full text
Abstract:
In this paper we study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where a good importance distribution is identified via an asymptotic description of the conditional distribution of T given X > x. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root, γ(t), is available. However, we also discuss an algorithm that avoids finding the root. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limit occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using these asymptotic descriptions have bounded relative error as x → ∞ when combined with the ideas used for a fixed t. Nevertheless, we give examples of algorithms carefully designed to enjoy bounded relative error that may provide little or no asymptotic improvement over crude Monte Carlo simulation when the computational effort is taken into account. To resolve this problem, an alternative algorithm using two-sided Lundberg bounds is suggested.
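The restart mechanism in the abstract is easy to state in code: a job of ideal time t restarts from scratch whenever a failure arrives first, here at Poisson (rate lam) failure times. The crude Monte Carlo baseline below, which the paper's importance-sampling algorithms are designed to outperform for small tail probabilities, is a sketch under assumed parameter names.

```python
import random

def total_time_with_restarts(t, lam, rng):
    """One realization of the total time X of a job with ideal time t
    that restarts from scratch whenever a rate-lam failure interrupts it."""
    x = 0.0
    while True:
        u = rng.expovariate(lam)    # time until the next failure
        if u >= t:                  # the job finishes before the failure
            return x + t
        x += u                      # restart: the partial work is lost

def crude_mc_tail(t, lam, x0, n=20000, seed=2):
    """Crude Monte Carlo estimate of P(X > x0)."""
    rng = random.Random(seed)
    return sum(total_time_with_restarts(t, lam, rng) > x0 for _ in range(n)) / n
```

Note that X is always at least t, and the number of failed attempts is geometric with success probability exp(-lam * t), which is the "geometric sums" structure the abstract refers to for constant T.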
APA, Harvard, Vancouver, ISO, and other styles
11

Asmussen, Søren. "Importance Sampling for Failure Probabilities in Computing and Data Transmission." Journal of Applied Probability 46, no. 03 (September 2009): 768–90. http://dx.doi.org/10.1017/s0021900200005878.

Full text
Abstract:
In this paper we study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where a good importance distribution is identified via an asymptotic description of the conditional distribution of T given X > x. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root, γ(t), is available. However, we also discuss an algorithm that avoids finding the root. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limit occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using these asymptotic descriptions have bounded relative error as x → ∞ when combined with the ideas used for a fixed t. Nevertheless, we give examples of algorithms carefully designed to enjoy bounded relative error that may provide little or no asymptotic improvement over crude Monte Carlo simulation when the computational effort is taken into account. To resolve this problem, an alternative algorithm using two-sided Lundberg bounds is suggested.
APA, Harvard, Vancouver, ISO, and other styles
12

Conn, Paul B., Mark V. Bravington, Shane Baylis, and Jay M. Ver Hoef. "Robustness of close‐kin mark–recapture estimators to dispersal limitation and spatially varying sampling probabilities." Ecology and Evolution 10, no. 12 (May 5, 2020): 5558–69. http://dx.doi.org/10.1002/ece3.6296.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Adepetun, A. O. "Contraceptive usage among undergraduate female students: A survey sampling approach." Journal of Fundamental and Applied Sciences 12, no. 3 (September 1, 2020): 1101–13. http://dx.doi.org/10.4314/jfas.v12i3.7.

Full text
Abstract:
In this paper, a sample survey of contraceptive usage among the female undergraduates of the Federal University of Technology, Akure, using the randomized response technique proposed by Warner was extensively carried out. The data for the study were obtained from the administration of well-structured survey questionnaires on respondents from three different schools in the University. Consequently, the estimated Warner's proportions of female undergraduate students who used contraceptives and their resulting variances were evaluated at varying sample sizes and predetermined probabilities P, respectively. The results as presented in the summary table 10 revealed that contraceptive usage was at a maximum level for the School of Agriculture and Agricultural Technology as sample sizes varied.
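Warner's randomized-response estimator mentioned in the abstract has a closed form: each respondent answers the sensitive statement with probability P and its negation with probability 1 - P, and the population proportion is recovered from the observed 'yes' rate. A minimal sketch, assuming P ≠ 0.5 (function name and return convention are illustrative):

```python
def warner_estimate(n_yes, n, P):
    """Warner (1965) randomized-response estimator.  n_yes of n
    respondents answered 'yes'; P is the predetermined probability of
    being asked the sensitive statement rather than its negation.
    Returns the estimated proportion pi_hat and its estimated variance."""
    lam = n_yes / n                                  # observed 'yes' rate
    pi_hat = (lam - (1 - P)) / (2 * P - 1)           # invert E[lam]
    var = lam * (1 - lam) / (n * (2 * P - 1) ** 2)   # Var(lam)/(2P-1)^2
    return pi_hat, var
```

As P approaches 0.5 the denominator vanishes and the variance blows up, which is the privacy/precision trade-off that drives the choice of P in surveys like this one.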
APA, Harvard, Vancouver, ISO, and other styles
14

Foote, Michael, James S. Crampton, Alan G. Beu, and Campbell S. Nelson. "Aragonite bias, and lack of bias, in the fossil record: lithological, environmental, and ecological controls." Paleobiology 41, no. 2 (February 24, 2015): 245–65. http://dx.doi.org/10.1017/pab.2014.16.

Full text
Abstract:
Macroevolutionary and macroecological studies must account for biases in the fossil record, especially when questions concern the relative abundance and diversity of taxa that differ in preservation and sampling potential. Using Cenozoic marine mollusks from a temperate setting (New Zealand), we find that much of the long-term temporal variation in gastropod versus bivalve occurrences is correlated with the stage-level sampling probabilities of aragonitic versus calcitic taxa. Average sampling probabilities are higher for calcitic species, but this contrast is time-varying in a predictable way, being concentrated in stages with widespread carbonate deposition. To understand these results fully, we link them with analyses at the level of individual point occurrences. Doing so reveals that aragonite bias is effectively absent in terrigenous clastic sediments. In limestones, by contrast, calcitic species have at least twice the odds of sampling as aragonitic species. This result is most pronounced during times of widespread carbonate deposition, where the difference in the per-collection odds of sampling species is a factor of eight. During carbonate-rich intervals, calcitic taxa also have higher odds of sampling in clastics. At first glance this result may suggest simple preservational bias against aragonite. However, comparing relative odds of aragonitic versus calcitic sampling with absolute sampling rates shows that the positive calcite bias during carbonate-rich times reflects higher than average occurrence rates for calcitic taxa (rather than lower rates for aragonitic taxa) and that the negative aragonite bias in limestones reflects lower than average occurrence rates for aragonitic taxa (rather than higher rates for calcitic taxa). Our results therefore indicate a time-varying interplay of two main factors: (1) taphonomic loss of aragonitic species in carbonate sediments, with no substantial bias in terrigenous clastics; and (2) an ecological preference of calcitic taxa for environments characteristic of periods with pervasive carbonate deposition, irrespective of lithology per se.
APA, Harvard, Vancouver, ISO, and other styles
15

Yagawa, G., S. Yoshimura, N. Handa, T. Uno, K. Watashi, T. Fujioka, H. Ueda, M. Uno, K. Hojo, and S. Ueda. "Study on Life Extension of Aged RPV Material Based on Probabilistic Fracture Mechanics: Japanese Round Robin." Journal of Pressure Vessel Technology 117, no. 1 (February 1, 1995): 7–13. http://dx.doi.org/10.1115/1.2842095.

Full text
Abstract:
This paper is concerned with round-robin analyses of probabilistic fracture mechanics (PFM) problems of aged RPV material. Analyzed here is a plate with a semi-elliptical surface crack subjected to various cyclic tensile and bending stresses. A depth and an aspect ratio of the surface crack are assumed to be probabilistic variables. Failure probabilities are calculated using the Monte Carlo methods with the importance sampling or the stratified sampling techniques. Material properties are chosen from the Marshall report, the ASME Code Section XI, and the experiments on a Japanese RPV material carried out by the Life Evaluation (LE) subcommittee of the Japan Welding Engineering Society (JWES), while loads are determined referring to design loading conditions of pressurized water reactors (PWR). Seven organizations participate in this study. At first, the procedures for obtaining reliable PFM solutions with low failure probabilities are examined by solving a unique problem with seven computer programs. The seven solutions agree very well with one another, i.e., within a factor of 2 to 5 in failure probabilities. Next, sensitivity analyses are performed by varying fracture toughness values, loading conditions, and pre- and in-service inspections. Finally, life extension simulations based on the PFM analyses are performed. It is clearly demonstrated from these analyses that failure probabilities are so sensitive to the change of fracture toughness values that the degree of neutron irradiation significantly influences the judgment of plant life extension.
APA, Harvard, Vancouver, ISO, and other styles
16

Gotelli, Nicholas J., Robert M. Dorazio, Aaron M. Ellison, and Gary D. Grossman. "Detecting temporal trends in species assemblages with bootstrapping procedures and hierarchical models." Philosophical Transactions of the Royal Society B: Biological Sciences 365, no. 1558 (November 27, 2010): 3621–31. http://dx.doi.org/10.1098/rstb.2010.0262.

Full text
Abstract:
Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
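The bootstrapping procedure described here can be sketched for a species-by-period count matrix: under the null hypothesis, each period's counts are drawn from the pooled (stationary) species abundance distribution, with the observed period totals fixed. The test statistic and the whole interface below are illustrative assumptions, not the authors' code.

```python
import random

def bootstrap_trend_test(counts, n_boot=999, seed=3):
    """Null-model bootstrap: rows are species, columns are time-ordered
    sampling periods.  Returns a one-sided p-value for heterogeneity of
    temporal trends relative to random sampling from one stationary
    species abundance distribution (illustrative statistic)."""
    rng = random.Random(seed)
    S = len(counts)
    period_totals = [sum(col) for col in zip(*counts)]
    pooled = [sum(row) for row in counts]
    G = sum(pooled)
    p = [t / G for t in pooled]                 # pooled abundance distribution

    def stat(m):
        # summed squared deviation of per-period proportions from pooled
        return sum((m[i][j] / period_totals[j] - p[i]) ** 2
                   for i in range(S) for j in range(len(period_totals)))

    obs = stat(counts)
    exceed = 0
    for _ in range(n_boot):
        sim = [[0] * len(period_totals) for _ in range(S)]
        for j, tot in enumerate(period_totals):
            for _ in range(tot):                # multinomial via repeated draws
                u, c = rng.random(), 0.0
                for i, pi in enumerate(p):
                    c += pi
                    if u <= c:
                        sim[i][j] += 1
                        break
                else:
                    sim[S - 1][j] += 1          # guard against float rounding
        if stat(sim) >= obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)          # one-sided p-value
```

A perfectly homogeneous matrix gives a p-value of 1, while strongly opposed trends across periods push it toward the minimum attainable value of 1/(n_boot + 1).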
APA, Harvard, Vancouver, ISO, and other styles
17

Liow, Lee Hsiang, and James D. Nichols. "Estimating Rates and Probabilities of Origination and Extinction Using Taxonomic Occurrence Data: Capture-Mark-Recapture (CMR) Approaches." Paleontological Society Papers 16 (October 2010): 81–94. http://dx.doi.org/10.1017/s1089332600001820.

Full text
Abstract:
We rely on observations of occurrences of fossils to infer the rates and timings of origination and extinction of taxa. These estimates can then be used to shed light on questions such as whether extinction and origination rates have been higher or lower at different times in earth history or in different geographical regions, etc. and to investigate the possible underlying causes of varying rates. An inherent problem in inference using occurrence data is one of incompleteness of sampling. Even if a taxon is present at a given time and place, we are guaranteed to detect or sample it less than 100% of the time we search in a random outcrop or sediment sample that should contain it, either because it was not preserved, it was preserved but then eroded, or because we simply did not find it. Capture-mark-recapture (CMR) methods rely on replicate sampling to allow for the simultaneous estimation of sampling probability and the parameters of interest (e.g. extinction, origination, occupancy, diversity). Here, we introduce the philosophy of CMR approaches especially as applicable to paleontological data and questions. The use of CMR is in its infancy in paleobiological applications, but the handful of studies that have used it demonstrate its utility and generality. We discuss why the use of CMR has not matched its development in other fields, such as in population ecology, as well as the importance of modelling the sampling process and estimating sampling probabilities. In addition, we suggest some potential avenues for the development of CMR applications in paleobiology.
APA, Harvard, Vancouver, ISO, and other styles
18

Hanberry, B. B., and H. S. He. "Prevalence, statistical thresholds, and accuracy assessment for species distribution models." Web Ecology 13, no. 1 (May 13, 2013): 13–19. http://dx.doi.org/10.5194/we-13-13-2013.

Full text
Abstract:
For species distribution models, species frequency is termed prevalence and prevalence in samples should be similar to natural species prevalence, for unbiased samples. However, modelers commonly adjust sampling prevalence, producing a modeling prevalence that has a different frequency of occurrences than sampling prevalence. The separate effects of (1) use of sampling prevalence compared to adjusted modeling prevalence and (2) modifications necessary in thresholds, which convert continuous probabilities to discrete presence or absence predictions, to account for prevalence, are unresolved issues. We examined effects of prevalence and thresholds and two types of pseudoabsences on model accuracy. Use of sampling prevalence produced similar models compared to use of adjusted modeling prevalences. Mean correlation between predicted probabilities of the least (0.33) and greatest modeling prevalence (0.83) was 0.86. Mean predicted probability values increased with increasing prevalence; therefore, unlike constant thresholds, varying threshold to match prevalence values was effective in holding true positive rate, true negative rate, and species prediction areas relatively constant for every modeling prevalence. The area under the curve (AUC) values appeared to be as informative as sensitivity and specificity, when using surveyed pseudoabsences as absent cases, but when the entire study area was coded, AUC values reflected the area of predicted presence as absent. Less frequent species had greater AUC values when pseudoabsences represented the study background. Modeling prevalence had a mild impact on species distribution models and accuracy assessment metrics when threshold varied with prevalence. Misinterpretation of AUC values is possible when AUC values are based on background absences, which correlate with frequency of species.
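The "varying threshold to match prevalence" idea can be sketched concretely: choose the cutoff so that the fraction of cases predicted present equals the target prevalence. This quantile-style rule is an illustrative implementation in the spirit of the paper, not the authors' exact procedure.

```python
def prevalence_threshold(probs, prevalence):
    """Pick the presence/absence threshold so that the proportion of
    cases with predicted probability >= threshold equals (approximately)
    the target prevalence.  Sketch; interface is assumed."""
    ranked = sorted(probs, reverse=True)
    k = max(1, round(prevalence * len(probs)))   # number predicted present
    return ranked[k - 1]
```

With this rule, predicted presence area tracks prevalence by construction, which is why the true positive and true negative rates stay roughly constant across modeling prevalences in the study.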
APA, Harvard, Vancouver, ISO, and other styles
19

Nichols, James D., James E. Hines, John R. Sauer, Frederick W. Fallon, Jane E. Fallon, and Patricia J. Heglund. "A Double-Observer Approach for Estimating Detection Probability and Abundance From Point Counts." Auk 117, no. 2 (April 1, 2000): 393–408. http://dx.doi.org/10.1093/auk/117.2.393.

Full text
Abstract:
Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated “primary” observer indicates to another (“secondary”) observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
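For the simpler independent-observer variant of this idea (not the dependent primary/secondary design the paper uses, where the secondary observer also records the primary's detections), detection probabilities and abundance follow from the overlap of the two observers' detections, Lincoln-Petersen style. A hedged sketch with assumed names:

```python
def double_observer_estimates(n11, n10, n01):
    """Independent double-observer estimator: n11 birds detected by both
    observers, n10 by observer A only, n01 by observer B only.  Returns
    (p_A, p_B, p_detected_by_either, abundance_estimate)."""
    p_a = n11 / (n11 + n01)                  # A's detection probability
    p_b = n11 / (n11 + n10)                  # B's detection probability
    p_any = 1 - (1 - p_a) * (1 - p_b)        # detected by at least one
    n_seen = n11 + n10 + n01
    return p_a, p_b, p_any, n_seen / p_any   # correct count for missed birds
```

When both observers detect most birds, p_any is close to 1 (the >0.95 figure reported in the abstract) and the abundance estimate is only a small inflation of the raw count.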
APA, Harvard, Vancouver, ISO, and other styles
20

Michaud, Clayton P., and Thomas W. Sproul. "Bayesian Downscaling Methods for Aggregated Count Data." Agricultural and Resource Economics Review 47, no. 1 (December 17, 2017): 178–94. http://dx.doi.org/10.1017/age.2017.26.

Full text
Abstract:
Policy-critical, micro-level statistical data are often unavailable at the desired level of disaggregation. We present a Bayesian methodology for “downscaling” aggregated count data to the micro level, using an outside statistical sample. Our procedure combines numerical simulation with exact calculation of combinatorial probabilities. We motivate our approach with an application estimating the number of farms in a region, using count totals at higher levels of aggregation. In a simulation analysis over varying population sizes, we demonstrate both robustness to sampling variability and outperformance relative to maximum likelihood. Spatial considerations, implementation of “informative” priors, non-spatial classification problems, and best practices are discussed.
APA, Harvard, Vancouver, ISO, and other styles
21

Wagner, Peter J. "On the probabilities of branch durations and stratigraphic gaps in phylogenies of fossil taxa when rates of diversification and sampling vary over time." Paleobiology 45, no. 1 (February 2019): 30–55. http://dx.doi.org/10.1017/pab.2018.35.

Full text
Abstract:
The time separating the first appearances of species from their divergences from related taxa affects assessments of macroevolutionary hypotheses about rates of anatomical or ecological change. Branch durations necessarily posit stratigraphic gaps in sampling within a clade over which we have failed to sample predecessors (ancestors) and over which there are no divergences leading to sampled relatives (sister taxa). The former reflects only sampling rates, whereas the latter reflects sampling, origination, and extinction rates. Because all three rates vary over time, the probability of a branch duration of any particular length will differ depending on when in the Phanerozoic that branch duration spans. Here, I present a birth–death-sampling model allowing interval-to-interval variation in diversification and sampling rates. Increasing either origination or sampling rates increases the probability of finding sister taxa that diverge both during and before intervals of high sampling/origination. Conversely, elevated extinction reduces the probability of divergences from sampled sister taxa before and during intervals of elevated extinction. In the case of total extinction, a Signor-Lipps effect will reduce expected sister taxa leading up to the extinction, with the possible effect stretching back many millions of years when sampling is low. Simulations indicate that this approach provides reasonable estimates of branch duration probabilities under a variety of circumstances. Because current probability models for describing morphological evolution are less advanced than methods for inferring diversification and sampling rates, branch duration priors allowing for time-varying diversification could be a potent tool for phylogenetic inference with fossil data.
APA, Harvard, Vancouver, ISO, and other styles
22

Paiva, Thais, and Jerome P. Reiter. "Stop or Continue Data Collection: A Nonignorable Missing Data Approach for Continuous Variables." Journal of Official Statistics 33, no. 3 (September 1, 2017): 579–99. http://dx.doi.org/10.1515/jos-2017-0028.

Full text
Abstract:
We present an approach to inform decisions about nonresponse follow-up sampling. The basic idea is (i) to create completed samples by imputing nonrespondents’ data under various assumptions about the nonresponse mechanisms, (ii) take hypothetical samples of varying sizes from the completed samples, and (iii) compute and compare measures of accuracy and cost for different proposed sample sizes. As part of the methodology, we present a new approach for generating imputations for multivariate continuous data with nonignorable unit nonresponse. We fit mixtures of multivariate normal distributions to the respondents’ data, and adjust the probabilities of the mixture components to generate nonrespondents’ distributions with desired features. We illustrate the approaches using data from the 2007 U.S. Census of Manufactures.
APA, Harvard, Vancouver, ISO, and other styles
23

Luo, Peng, Timothy A. DeVol, and Julia L. Sharp. "Sequential Probability Ratio Test Using Scaled Time-Intervals for Environmental Radiation Monitoring." IEEE Transactions on Nuclear Science 57, no. 6 (June 2010): 1556–62. http://dx.doi.org/10.1109/tns.2010.2045900.

Full text
Abstract:
Sequential probability ratio test (SPRT) of scaled time-interval data (time to record N radiation pulses), SPRT_scaled, was evaluated against the commonly used single-interval test (SIT) and SPRT with a fixed counting interval, SPRT_fixed, on experimental and simulated data. Experimental data were acquired with a DGF-4C (XIA, Inc) system in list mode. Simulated time-interval data were obtained using Monte Carlo techniques to perform a random radiation sampling of the Poisson distribution. The three methods (SIT, SPRT_fixed and SPRT_scaled) were compared in terms of detection probability and average time to make a decision regarding the source of radiation. For both experimental and simulated data, SPRT_scaled provided similar detection probabilities as the other tests, but was able to make a quicker decision with fewer pulses at relatively higher radiation levels. SPRT_scaled has a provision for varying the sampling time depending on the radiation level, which may further shorten the time needed for radiation monitoring. Parameter adjustments to the SPRT_scaled method for increased detection probability are discussed.
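The scaled time-interval idea is that each observation is the time t needed to record N pulses, which is Gamma(N, rate) distributed, so the Wald log-likelihood-ratio increment has a simple closed form: N·ln(lam1/lam0) − (lam1 − lam0)·t. A sketch with assumed parameter names and the standard Wald stopping thresholds:

```python
import math

def sprt_scaled_intervals(times, N, lam0, lam1, alpha=0.05, beta=0.05):
    """SPRT on scaled time-interval data: each entry of `times` is the
    time to record N pulses, Gamma(N, rate) distributed.  Tests rate
    lam0 (background) against lam1 > lam0 (source present).  Returns
    the decision and the number of observations used."""
    upper = math.log((1 - beta) / alpha)     # cross -> accept H1 (source)
    lower = math.log(beta / (1 - alpha))     # cross -> accept H0 (background)
    llr = 0.0
    for k, t in enumerate(times, 1):
        # Gamma(N, lam) log-likelihood ratio for one scaled interval
        llr += N * math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return "continue", len(times)
```

Because t shrinks as the true rate grows, high radiation levels produce large positive increments and very early decisions, which matches the quick-decision behavior reported for SPRT_scaled.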
APA, Harvard, Vancouver, ISO, and other styles
24

Layou, Karen M. "A quantitative null model of additive diversity partitioning: examining the response of beta diversity to extinction." Paleobiology 33, no. 1 (2007): 116–24. http://dx.doi.org/10.1666/06025.1.

Full text
Abstract:
Paleobiological diversity is often expressed as α (within-sample), β (among-sample), and γ (total) diversities. However, when studying the effects of extinction on diversity patterns, only variations in α and γ diversities are typically addressed. A null model that examines changes in β diversity as a function of percent extinction is presented here. The model examines diversity in the context of a hierarchical sampling strategy that allows for the additive partitioning of γ diversity into mean α and β diversities at varying scales. Here, the sampling hierarchy has four levels: samples, beds, facies, and region; thus, there are four levels of α diversity (α1, α2, α3, α4) and three levels of β diversity (β1, β2, and β3). Taxa are randomly assigned to samples within the hierarchy according to probability of occurrence, and initial mean α and β values are calculated. A regional extinction is imposed, and the hierarchy is resampled from the remaining extant taxa. Post-extinction mean α and β values are then calculated. Both non-selective and selective extinctions with respect to taxon abundance yield decreases in α, β, and γ diversities. Non-selective extinction with respect to taxon abundance shows little effect on diversity partitioning except at the highest extinction magnitudes (above 75% extinction), where the contribution of α1 to total γ increases at the expense of β3, with β1 and β2 varying little with increasing extinction magnitude. The pre-extinction contribution of α1 to total diversity increases with increased probabilities of taxon occurrence and the number of shared taxa between facies. Both β1 and β2 contribute equally to total diversity at low occurrence probabilities, but β2 is negligible at high probabilities, because individual samples preserve all the taxonomic variation present within a facies.
Selective extinction with respect to rare taxa indicates a constant increase in α1 and constant decrease in β3 with increasing extinction magnitudes, whereas selective extinction with respect to abundant taxa yields the opposite pattern of an initial decrease in α1 and increase in β3. Both β1 and β2 remain constant with increasing extinction for both cases of selectivity. By comparing diversity partitioning before and after an extinction event, it may be possible to determine whether the extinction was selective with respect to taxon abundances, and if so, whether that selectivity was against rare or abundant taxa. Field data were collected across a Late Ordovician regional extinction in the Nashville Dome of Tennessee, with sampling hierarchy similar to that of the model. These data agree with the abundant-selective model, showing declines in α, β, and γ diversities, and a decrease in α1 and increase in β3, which suggests this extinction may have targeted abundant taxa.
APA, Harvard, Vancouver, ISO, and other styles
25

Bravington, Mark V., David L. Miller, and Sharon L. Hedley. "Variance Propagation for Density Surface Models." Journal of Agricultural, Biological and Environmental Statistics 26, no. 2 (February 23, 2021): 306–23. http://dx.doi.org/10.1007/s13253-021-00438-2.

Full text
Abstract:
Spatially explicit estimates of population density, together with appropriate estimates of uncertainty, are required in many management contexts. Density surface models (DSMs) are a two-stage approach for estimating spatially varying density from distance sampling data. First, detection probabilities—perhaps depending on covariates—are estimated based on details of individual encounters; next, local densities are estimated using a GAM, by fitting local encounter rates to location and/or spatially varying covariates while allowing for the estimated detectabilities. One criticism of DSMs has been that uncertainty from the two stages is not usually propagated correctly into the final variance estimates. We show how to reformulate a DSM so that the uncertainty in detection probability from the distance sampling stage (regardless of its complexity) is captured as an extra random effect in the GAM stage. In effect, we refit an approximation to the detection function model at the same time as fitting the spatial model. This allows straightforward computation of the overall variance via exactly the same software already needed to fit the GAM. A further extension allows for spatial variation in group size, which can be an important covariate for detectability as well as directly affecting abundance. We illustrate these models using point transect survey data of Island Scrub-Jays on Santa Cruz Island, CA, and harbour porpoise from the SCANS-II line transect survey of European waters. Supplementary materials accompanying this paper appear on-line.
APA, Harvard, Vancouver, ISO, and other styles
26

Gómez-Rocha, José Emmanuel, Eva Selene Hernández-Gress, and Héctor Rivera-Gómez. "Production planning of a furniture manufacturing company with random demand and production capacity using stochastic programming." PLOS ONE 16, no. 6 (June 14, 2021): e0252801. http://dx.doi.org/10.1371/journal.pone.0252801.

Full text
Abstract:
In this article two multi-stage stochastic linear programming models are developed: one applying the stochastic programming solver integrated in the Lingo 17.0 optimization software, which uses an approximation based on identical conditional sampling and Latin-hyper-square techniques to reduce the sample variance, associating the probability distributions with normal distributions of defined mean and standard deviation; and a second proposed model with a discrete distribution with 3 values and their respective probabilities of occurrence. In both cases, a scenario tree is generated. The models developed are applied to an aggregate production plan (APP) for a furniture manufacturing company located in the state of Hidalgo, Mexico, which has important clients throughout the country. Production capacity and demand are defined as random variables of the model. The main purpose of this research is to determine a feasible solution to the aggregate production plan in a reasonable computational time. The developed models were compared and analyzed. Moreover, this work was complemented with a sensitivity analysis, varying the service-level percentage as well as the stochastic parameters (mean and standard deviation) to test how these variations impact the solution and decision variables.
APA, Harvard, Vancouver, ISO, and other styles
27

Pitombeira-Neto, Anselmo, Carlos Loureiro, and Luis Carvalho. "Bayesian Inference on Dynamic Linear Models of Day-to-Day Origin-Destination Flows in Transportation Networks." Urban Science 2, no. 4 (December 10, 2018): 117. http://dx.doi.org/10.3390/urbansci2040117.

Full text
Abstract:
Estimation of origin–destination (OD) demand plays a key role in successful transportation studies. In this paper, we consider the estimation of time-varying day-to-day OD flows given data on traffic volumes in a transportation network for a sequence of days. We propose a dynamic linear model (DLM) in order to represent the stochastic evolution of OD flows over time. DLMs are Bayesian state-space models which can capture non-stationarity. We take into account the hierarchical relationships between the distribution of OD flows among routes and the assignment of traffic volumes on links. Route choice probabilities are obtained through a utility model based on past route costs. We propose a Markov chain Monte Carlo algorithm, which integrates Gibbs sampling and a forward filtering backward sampling technique, in order to approximate the joint posterior distribution of mean OD flows and parameters of the route choice model. Our approach can be applied to congested networks and in the case when data are available on only a subset of links. We illustrate the application of our approach through simulated experiments on a test network from the literature.
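The forward filtering backward sampling (FFBS) step can be illustrated on the simplest possible DLM, a scalar local-level model. This is a toy stand-in for the paper's multivariate OD-flow model: the variances V and W are assumed known here, whereas the paper samples model parameters within a Gibbs scheme.

```python
import numpy as np

def ffbs_local_level(y, V, W, m0=0.0, C0=1e6, seed=0):
    # FFBS for the local-level DLM:
    #   y_t = x_t + v_t, v_t ~ N(0, V);  x_t = x_{t-1} + w_t, w_t ~ N(0, W).
    rng = np.random.default_rng(seed)
    T = len(y)
    m = np.empty(T); C = np.empty(T)
    mt, Ct = m0, C0
    for t in range(T):                # forward pass: Kalman filter
        a, R = mt, Ct + W             # one-step-ahead prior at time t
        K = R / (R + V)               # Kalman gain
        mt = a + K * (y[t] - a)
        Ct = (1 - K) * R
        m[t], C[t] = mt, Ct
    x = np.empty(T)                   # backward pass: sample the states
    x[-1] = rng.normal(m[-1], np.sqrt(C[-1]))
    for t in range(T - 2, -1, -1):
        R = C[t] + W
        B = C[t] / R                  # smoothing gain
        h = m[t] + B * (x[t + 1] - m[t])
        H = C[t] - B * B * R          # = C[t] * W / (C[t] + W)
        x[t] = rng.normal(h, np.sqrt(max(H, 0.0)))
    return x
```

Each call returns one joint draw of the state trajectory given the data, which is exactly the role FFBS plays inside the paper's MCMC sampler.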
APA, Harvard, Vancouver, ISO, and other styles
28

Didham, R. K., J. H. Lawton, P. M. Hammond, and P. Eggleton. "Trophic structure stability and extinction dynamics of beetles (Coleoptera) in tropical forest fragments." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 353, no. 1367 (March 29, 1998): 437–51. http://dx.doi.org/10.1098/rstb.1998.0221.

Full text
Abstract:
A first analysis of the stability of trophic structure following tropical forest fragmentation was performed in an experimentally fragmented tropical forest landscape in Central Amazonia. A taxonomically and trophically diverse assemblage of 993 species of beetles was sampled from 920 m² of leaf litter at 46 sites varying in distance from forest edge and fragment area. Beetle density increased significantly towards the forest edge and showed non-linear changes with fragment area, due to the influx of numerous disturbed-area species into 10 ha and 1 ha fragments. There was a marked change in species composition with both decreasing distance from forest edge and decreasing fragment area, but surprisingly this change in composition was not accompanied by a change in species richness. Rarefied species richness did not vary significantly across any of the sites, indicating that local extinctions of deep forest species were balanced by equivalent colonization rates of disturbed-area species. The change in species composition with fragmentation was non-random across trophic groups. Proportions of predator species and xylophage species changed significantly with distance from forest edge, but no area-dependent changes in proportions of species in trophic groups were observed. Trophic structure was also analysed with respect to proportions of abundance in six trophic groups. Proportions of abundance of all trophic groups except xylomycetophages changed markedly with respect to both distance from forest edge and fragment area. Local extinction probabilities calculated for individual beetle species supported theoretical predictions of the differential susceptibility of higher trophic levels to extinction, and of changes in trophic structure following forest fragmentation. To reduce random effects due to sampling error, only abundant species (n ≥ 46) were analysed for extinction probabilities, as defined by absence from samples.
Of these common species, 27% had significantly higher probabilities of local extinction following fragmentation. The majority of these species were predators; 42% of all abundant predator species were significantly more likely to be absent from samples in forest fragments than in undisturbed forest. These figures are regarded as minimum estimates for the entire beetle assemblage because rarer species will inevitably have higher extinction probabilities. Absolute loss of biodiversity will affect ecosystem process rates, but the differential loss of species from trophic groups will have an even greater destabilizing effect on food web structure and ecosystem function.
APA, Harvard, Vancouver, ISO, and other styles
29

Kreb, D. "Abundance of freshwater Irrawaddy dolphins in the Mahakam River in East Kalimantan, Indonesia, based on mark-recapture analysis of photo-identified individuals." J. Cetacean Res. Manage. 6, no. 3 (March 15, 2023): 269–77. http://dx.doi.org/10.47536/jcrm.v6i3.770.

Full text
Abstract:
From February 1999 to August 2002 ca 9,000 km (840 hours) of search effort and 549 hours of observation on Irrawaddy dolphins (Orcaella brevirostris) were conducted by boat in the Mahakam River in East Kalimantan, Indonesia. An abundance estimate based on mark-recapture analysis of individuals photographed during separate surveys is presented here. Petersen and Jolly-Seber analysis methods were employed and compared along with earlier estimates derived from strip-transect analysis and direct counts. These comparisons serve to evaluate the biases of each method and assess the reliability of the abundance estimates. The feasibility of video-identification is also assessed. Total population size calculated by Petersen and Jolly-Seber mark-recapture analyses, was estimated to be 55 (95% CL=44-76; CV=6%) and 48 individuals (95% CL=33-63; CV=15%) respectively. Estimates based on strip-transect and direct count analysis for one sampling period, which was also included in the mark-recapture analysis, were within the confidence limits of the Jolly-Seber estimate (Ncount = 35 and Nstrip = 43). Calculated potential maximum biases appeared to be small, i.e. 2% of N for Petersen and 10% of N for the Jolly-Seber method, which are lower than the associated CVs. In addition, a high re-sight probability was calculated for both methods, varying between 65% and 67%. Video images were considered a valuable, supplementary tool to still photography in the identification of individual dolphins in this study. For future monitoring of trends in abundance using mark/recapture analyses, a time interval is recommended between the two sampling periods that is short enough to minimise the introduction of errors due to gains and losses. Also, survey area coverage during photo-identification should be similar to avoid violation of the assumption of equal capture probabilities.
The alarmingly low abundance estimates presented underline the need for immediate and strong action to preserve Indonesia’s only known freshwater dolphin population.
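For reference, a small sketch of the Petersen-type calculation in its bias-corrected Chapman form, with Seber's variance approximation. This is illustrative only; the paper's estimates come from Petersen and Jolly-Seber analyses of multi-survey photo-identification histories, not from this two-sample formula alone.

```python
def chapman_estimate(n1, n2, m):
    # Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    # n1 individuals identified in the first survey, n2 in the second,
    # m photographed ("recaptured") in both.
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    # Seber's variance approximation for the Chapman estimator.
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    return n_hat, var
```

The key assumption being protected by "similar survey area coverage" in the abstract is equal capture probability: if marked animals are more (or less) likely to be resighted, the estimator is biased.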
APA, Harvard, Vancouver, ISO, and other styles
30

Canion, Andy, Katherine M. Ransom, and Brian G. Katz. "Discrimination of Nitrogen Sources in Karst Spring Contributing Areas Using a Bayesian Isotope Mixing Model and Wastewater Tracers (Florida, USA)." Environmental and Engineering Geoscience 26, no. 3 (August 27, 2020): 291–311. http://dx.doi.org/10.2113/eeg-2310.

Full text
Abstract:
Many springs in Florida have experienced a proliferation of nuisance algae and alteration of trophic structure in response to increases in nitrate concentration concurrent with rapid population growth and land use intensification beginning in the mid-20th century. While loading targets and remediation plans have been developed by state agencies to address excess nitrogen inputs, further confirmation of the relative contribution of nitrogen sources to groundwater is necessary to optimize the use of resources when implementing projects to reduce nitrogen loads. In the present study, stable isotopes of nitrate and wastewater indicators were used to discriminate sources of nitrogen in wells and springs in central Florida. Sampling was performed in 50 wells at 38 sites and at 10 springs with varying levels of agriculture and urban development. Nitrate isotope values were used to develop Bayesian mixing models to estimate the probability distribution of the contributions of nitrate sources in wells. Prior probabilities for the fractional contribution of each source were adjusted based on land use and density of septic tanks. Sucralose and the Cl:Br mass ratio were used as confirmatory indicators of wastewater sources. In residential areas, mixing model results indicated that fertilizer or mixed fertilizer and wastewater (septic tank effluent and reuse water) were the primary sources, with sucralose detections corresponding to wells with elevated contributions from wastewater. Sources of nitrogen in pasture and field crop areas were primarily fertilizer and manure; however, model posterior distributions of δ15N indicated that manure sources may have been overpredicted. The present study demonstrates the utility of a multi-tracer approach to build multiple lines of evidence to develop locally relevant remediation strategies for nitrogen sources in groundwater.
APA, Harvard, Vancouver, ISO, and other styles
31

Gupta, Hoshin V., Mohammad Reza Ehsani, Tirthankar Roy, Maria A. Sans-Fuentes, Uwe Ehret, and Ali Behrangi. "Computing Accurate Probabilistic Estimates of One-D Entropy from Equiprobable Random Samples." Entropy 23, no. 6 (June 11, 2021): 740. http://dx.doi.org/10.3390/e23060740.

Full text
Abstract:
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. And, whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel-type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small sample bias can be as large as −10% and for BC as large as −50%. We speculate that estimating quantile locations, rather than bin-probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf.
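A crude numpy plug-in version of the quantile-spacing idea can make the mechanics concrete. Assumptions: quantiles are taken directly from the full sample with no bootstrap, so this sketch carries a small downward bias relative to the published method, which estimates quantile locations more carefully and bootstraps the uncertainty.

```python
import numpy as np

def qs_entropy(sample, frac=0.3):
    # Quantile Spacing sketch: split the support into n_q intervals of
    # equal probability mass 1/n_q using sample quantiles; the local
    # density is then ~ (1/n_q)/spacing, so
    #   H = -E[log p] ~ mean over intervals of log(n_q * spacing).
    # frac ~ 0.25-0.35 is the fraction of the sample size the paper
    # reports as a good choice for the number of quantiles.
    n_q = max(2, int(frac * len(sample)))
    edges = np.quantile(sample, np.linspace(0.0, 1.0, n_q + 1))
    spacings = np.diff(edges)
    return float(np.mean(np.log(n_q * spacings)))
```

For a standard normal sample the estimate should sit near the true differential entropy 0.5·log(2πe) ≈ 1.419, modulo the plug-in bias noted above.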
APA, Harvard, Vancouver, ISO, and other styles
32

Huang, M., G. R. Carmichael, T. Chai, R. B. Pierce, S. J. Oltmans, D. A. Jaffe, K. W. Bowman, et al. "Impacts of transported background pollutants on summertime Western US air quality: model evaluation, sensitivity analysis and data assimilation." Atmospheric Chemistry and Physics Discussions 12, no. 6 (June 15, 2012): 15227–99. http://dx.doi.org/10.5194/acpd-12-15227-2012.

Full text
Abstract:
The impacts of transported background (TBG) pollutants on Western US ozone (O3) distributions in summer 2008 are studied using the multi-scale Sulfur Transport and dEposition Modeling system. Forward sensitivity simulations show that TBG extensively affect Western US surface O3, and can contribute to >50% of the total O3, varying among different geographical regions and land types. The stratospheric O3 impacts are weak. Ozone is the major contributor to surface O3 among the TBG pollutants, and TBG peroxyacetyl nitrate is the most important O3 precursor species. Compared to monthly mean daily maximum 8-h average O3, the secondary standard metric "W126 monthly index" shows larger responses to TBG perturbations and stronger non-linearity to the size of perturbations. Overall the model-estimated TBG impacts negatively correlate to the vertical resolution and positively correlate to the horizontal resolution. The estimated TBG impacts weakly depend on the uncertainties in US anthropogenic emissions. Ozone sources differ at three sites spanning ~10° in latitude. Mt. Bachelor (MBO) and Trinidad Head (THD) O3 are strongly affected by TBG, and occasionally by US emissions, while South Coast (SC) O3 is strongly affected by local emissions. The probabilities of airmasses originating from MBO (2.7 km) and THD (2.5 km) entraining into the boundary layer reach daily maxima of 66% and 34% at ~3:00 p.m. PDT, respectively, and stay above 50% during 9:00 a.m.–4:00 p.m. for those originating from SC (1.5 km). Receptor-based adjoint sensitivity analysis demonstrates the connection between the surface O3 and O3 aloft (at ~1–4 km) at these sites 1–2 days earlier.
Assimilation of the surface in-situ measurements significantly reduced (~5 ppb in average, up to ~17 ppb) the modeled surface O3 errors during a long-range transport episode, and is useful for estimating the upper-limits of uncertainties in satellite retrievals (in this case 5–20% and 20–30% for Tropospheric Emission Spectrometer (TES) and Ozone Monitoring Instrument (OMI) O3 profiles, respectively). Satellite observations identified this transport event, but assimilation of the existing O3 vertical profiles from TES, OMI and THD sonde in this case did not efficiently improve the O3 distributions except near the sampling locations, due to their limited spatiotemporal resolution and possible uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
33

Pflug, G. Ch. "Sampling derivatives of probabilities." Computing 42, no. 4 (December 1989): 315–28. http://dx.doi.org/10.1007/bf02243227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Singh, M. P. "Sampling with unequal probabilities." Metrika 33, no. 1 (December 1986): 92. http://dx.doi.org/10.1007/bf01894732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Milbrodt, Hartmut. "Comparing inclusion probabilities and drawing probabilities for rejective sampling and successive sampling." Statistics & Probability Letters 14, no. 3 (June 1992): 243–46. http://dx.doi.org/10.1016/0167-7152(92)90029-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Packham, N., M. Kalkbrener, and L. Overbeck. "Asymptotic behaviour of multivariate default probabilities and default correlations under stress." Journal of Applied Probability 53, no. 1 (March 2016): 71–81. http://dx.doi.org/10.1017/jpr.2015.9.

Full text
Abstract:
We investigate default probabilities and default correlations of Merton-type credit portfolio models in stress scenarios where a common risk factor is truncated. For elliptically distributed asset variables, the asymptotic limits of default probabilities and default correlations depend on the max-domain of attraction of the asset variables. In the regularly varying case, we derive an integral representation for multivariate default probabilities, which turn out to be strictly smaller than 1. Default correlations are in (0, 1). In the rapidly varying case, asymptotic multivariate default probabilities are 1 and asymptotic default correlations are 0.
APA, Harvard, Vancouver, ISO, and other styles
37

Greco, Luigi, and Stefania Naddeo. "Inverse Sampling with Unequal Selection Probabilities." Communications in Statistics - Theory and Methods 36, no. 5 (April 3, 2007): 1039–48. http://dx.doi.org/10.1080/03610920601033926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ng, Meei Pyng, and Martin Donadio. "Computing inclusion probabilities for order sampling." Journal of Statistical Planning and Inference 136, no. 11 (November 2006): 4026–42. http://dx.doi.org/10.1016/j.jspi.2005.03.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chauvet, G., D. Bonnéry, and J. C. Deville. "Optimal inclusion probabilities for balanced sampling." Journal of Statistical Planning and Inference 141, no. 2 (February 2011): 984–94. http://dx.doi.org/10.1016/j.jspi.2010.09.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Letac, Gérard. "Comment: Lancaster Probabilities and Gibbs Sampling." Statistical Science 23, no. 2 (May 2008): 187–91. http://dx.doi.org/10.1214/08-sts252a.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Andrew, Ward Whitt, and Jingtong Zhao. "STAFFING TO STABILIZE BLOCKING IN LOSS MODELS WITH TIME-VARYING ARRIVAL RATES." Probability in the Engineering and Informational Sciences 30, no. 2 (December 9, 2015): 185–211. http://dx.doi.org/10.1017/s0269964815000340.

Full text
Abstract:
The modified-offered-load approximation can be used to choose a staffing function (the time-varying number of servers) to stabilize delay probabilities at target levels in multi-server delay models with time-varying arrival rates, with or without customer abandonment. In contrast, as we confirm with simulations, it is not possible to stabilize blocking probabilities to the same extent in corresponding loss models, without extra waiting space, because these probabilities necessarily change dramatically after each staffing change. Nevertheless, blocking probabilities can be stabilized provided that we either randomize the times of staffing changes or average the blocking probabilities over a suitably small time interval. We develop systematic procedures and study how to choose the averaging parameters.
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Hyo-Chan, Seyoung Park, and Jong Mun Yoon. "Optimal investment with time-varying transition probabilities for regime switching." Journal of Derivatives and Quantitative Studies: 선물연구 29, no. 2 (June 4, 2021): 102–15. http://dx.doi.org/10.1108/jdqs-12-2020-0032.

Full text
Abstract:
This study aims to generalize the following result of McDonald and Siegel (1986) on optimal investment: it is optimal for an investor to invest when project cash flows exceed a certain threshold. This study presents other results that refine or extend this one by integrating timing flexibility and changes in cash flows with time-varying transition probabilities for regime switching. This study emphasizes that optimal thresholds are either overvalued or undervalued in the absence of time-varying transition probabilities. Accordingly, the stochastic nature of transition probabilities has important implications to the search for optimal timing of investment.
APA, Harvard, Vancouver, ISO, and other styles
43

Hudson, Richard R. "Two-Locus Sampling Distributions and Their Application." Genetics 159, no. 4 (December 1, 2001): 1805–17. http://dx.doi.org/10.1093/genetics/159.4.1805.

Full text
Abstract:
Methods of estimating two-locus sample probabilities under a neutral model are extended in several ways. Estimation of sample probabilities is described when the ancestral or derived status of each allele is specified. In addition, probabilities for two-locus diploid samples are provided. A method for using these two-locus probabilities to test whether an observed level of linkage disequilibrium is unusually large or small is described. In addition, properties of a maximum-likelihood estimator of the recombination parameter based on independent linked pairs of sites are obtained. A composite-likelihood estimator, for more than two linked sites, is also examined and found to work as well, or better, than other available ad hoc estimators. Linkage disequilibrium in the Xq28 and Xq25 region of humans is analyzed in a sample of Europeans (CEPH). The estimated recombination parameter is about five times smaller than one would expect under an equilibrium neutral model.
APA, Harvard, Vancouver, ISO, and other styles
44

Kemp, Adrienne W. "Absorption sampling and the absorption distribution." Journal of Applied Probability 35, no. 2 (June 1998): 489–94. http://dx.doi.org/10.1239/jap/1032192864.

Full text
Abstract:
The inverse absorption distribution is shown to be a q-Pascal analogue of the Kemp and Kemp (1991) q-binomial distribution. The probabilities for the direct absorption distribution are obtained via the inverse absorption probabilities and exact expressions for its first two factorial moments are derived using q-series transformations of its probability generating function. Alternative models for the distribution are given.
APA, Harvard, Vancouver, ISO, and other styles
45

Kemp, Adrienne W. "Absorption sampling and the absorption distribution." Journal of Applied Probability 35, no. 02 (June 1998): 489–94. http://dx.doi.org/10.1017/s0021900200015114.

Full text
Abstract:
The inverse absorption distribution is shown to be a q-Pascal analogue of the Kemp and Kemp (1991) q-binomial distribution. The probabilities for the direct absorption distribution are obtained via the inverse absorption probabilities and exact expressions for its first two factorial moments are derived using q-series transformations of its probability generating function. Alternative models for the distribution are given.
APA, Harvard, Vancouver, ISO, and other styles
46

Kluever, Bryan M., Eric M. Gese, and Steven J. Dempsey. "The influence of road characteristics and species on detection probabilities of carnivore faeces." Wildlife Research 42, no. 1 (2015): 75. http://dx.doi.org/10.1071/wr14244.

Full text
Abstract:
Context Determining reliable estimates of carnivore population size and distribution is paramount for developing informed conservation and management plans. Traditionally, invasive sampling has been employed to monitor carnivores, but non-invasive sampling has the advantage of not needing to capture the animal and is generally less expensive. Faeces sampling is a common non-invasive sampling technique and future use is forecast to increase due to the low costs and logistical ease of sampling, and more advanced techniques in landscape and conservation genetics. For many species, faeces sampling often occurs on or alongside roads. Despite the commonality of road-based faeces sampling, detectability issues are often not addressed. Aim We sought to test whether faeces detection probabilities varied by species – coyote (Canis latrans) versus kit fox (Vulpes macrotis) – and to test whether road characteristics influenced faeces detection probabilities. Methods We placed coyote and kit fox faeces along roads, quantified road characteristics, and then subsequently conducted ‘blind’ road-based faeces detection surveys in Utah during 2012 and 2013. Technicians that surveyed the faeces deposition transects had no knowledge of the locations of the placed faeces. Key results Faeces detection probabilities for kit foxes and coyotes were 45% and 74%, respectively; larger faeces originated from coyotes and were more readily detected. Misidentification of placed faeces was rare and did not differ by species. The width of survey roads and the composition of a road’s surface influenced detection probabilities. Conclusion We identified factors that can influence faeces detection probabilities.
Not accounting for variable detection probabilities of different species or not accounting for or reducing road-based variables influencing faeces detection probabilities could hamper reliable counts of mammalian faeces, and could potentially reduce precision of population estimates derived from road-based faeces deposition surveys. Implications We recommend that wildlife researchers acknowledge and account for imperfect faeces detection probabilities during faecal sampling. Steps can be taken during study design to improve detection probabilities, and during the analysis phase to account for variable detection probabilities.
APA, Harvard, Vancouver, ISO, and other styles
47

Brádler, Kamil, Shmuel Friedland, Josh Izaac, Nathan Killoran, and Daiqin Su. "Graph isomorphism and Gaussian boson sampling." Special Matrices 9, no. 1 (January 1, 2021): 166–96. http://dx.doi.org/10.1515/spma-2020-0132.

Full text
Abstract:
We introduce a connection between a near-term quantum computing device, specifically a Gaussian boson sampler, and the graph isomorphism problem. We propose a scheme where graphs are encoded into quantum states of light, whose properties are then probed with photon-number-resolving detectors. We prove that the probabilities of different photon-detection events in this setup can be combined to give a complete set of graph invariants. Two graphs are isomorphic if and only if their detection probabilities are equivalent. We present additional ways that the measurement probabilities can be combined or coarse-grained to make experimental tests more amenable. We benchmark these methods with numerical simulations on the Titan supercomputer for several graph families: pairs of isospectral nonisomorphic graphs, isospectral regular graphs, and strongly regular graphs.
APA, Harvard, Vancouver, ISO, and other styles
48

Chaudhuri, Arijit. "Network and Adaptive Sampling with Unequal Probabilities." Calcutta Statistical Association Bulletin 50, no. 3-4 (September 2000): 237–54. http://dx.doi.org/10.1177/0008068320000310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lehtonen, T., and H. Nyrhinen. "Simulating level-crossing probabilities by importance sampling." Advances in Applied Probability 24, no. 4 (December 1992): 858–74. http://dx.doi.org/10.2307/1427716.

Full text
Abstract:
Let X1, X2, ··· be independent and identically distributed random variables such that EX1 < 0 and P(X1 > 0) > 0. Fix M ≥ 0 and let T = inf{n: X1 + X2 + ··· + Xn ≥ M} (T = +∞ if X1 + X2 + ··· + Xn < M for every n = 1, 2, ···). In this paper we consider the estimation of the level-crossing probabilities P(T < ∞) and , by using Monte Carlo simulation and especially importance sampling techniques. When using importance sampling, precision and efficiency of the estimation depend crucially on the choice of the simulation distribution. For this choice we introduce a new criterion which is of the type of large deviations theory; consequently, the basic large deviations theory is the main mathematical tool of this paper. We allow a wide class of possible simulation distributions and, considering the case that M → ∞, we prove asymptotic optimality results for the simulation of the probabilities P(T < ∞) and . The paper ends with an example.
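The classical exponential-tilting choice of simulation distribution (Siegmund's algorithm) can be sketched for Gaussian increments: under the tilted measure the walk drifts upward, every path crosses M, and the likelihood-ratio weight exp(−θ*·S_T) is averaged. The Gaussian specialization below is an illustration of this standard scheme, not the paper's more general class of simulation distributions.

```python
import math
import random

def level_crossing_prob(mu, sigma, M, n_paths=20000, seed=7):
    # Importance-sampling estimate of P(T < inf), where
    # T = first n with S_n >= M for an i.i.d. N(mu, sigma) walk, mu < 0.
    # Tilt by the conjugate point theta* solving Lambda(theta*) = 0;
    # for the normal, theta* = -2*mu/sigma**2, and the tilted increments
    # are N(mu + sigma**2 * theta*, sigma) = N(-mu, sigma), drift > 0.
    rng = random.Random(seed)
    theta = -2.0 * mu / sigma ** 2
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while s < M:  # crossing is certain under the positive tilted drift
            s += rng.gauss(mu + sigma ** 2 * theta, sigma)
        total += math.exp(-theta * s)  # likelihood-ratio weight at time T
    return total / n_paths
```

Since every path stops with S_T ≥ M, each weight is at most exp(−θ*M), so the estimator has bounded relative error in this light-tailed setting, which is the kind of asymptotic optimality the paper formalizes.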
APA, Harvard, Vancouver, ISO, and other styles
50

AIRES, NIBIA, JOHAN JONASSON, and OLLE NERMAN. "Order Sampling Design with Prescribed Inclusion Probabilities." Scandinavian Journal of Statistics 29, no. 1 (March 2002): 183–87. http://dx.doi.org/10.1111/1467-9469.00120.

Full text
APA, Harvard, Vancouver, ISO, and other styles