Academic literature on the topic 'Sampling with loaded probabilities'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sampling with loaded probabilities.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sampling with loaded probabilities"

1

Cumpston, J. Richard. "A more efficient sampling procedure, using loaded probabilities." International Journal of Microsimulation 5, no. 1 (2011): 21–30. http://dx.doi.org/10.34196/ijm.00065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hultgren, Gustav, Leo Myrén, Zuheir Barsoum, and Rami Mansour. "Digital Scanning of Welds and Influence of Sampling Resolution on the Predicted Fatigue Performance: Modelling, Experiment and Simulation." Metals 11, no. 5 (May 18, 2021): 822. http://dx.doi.org/10.3390/met11050822.

Full text
Abstract:
Digital weld quality assurance systems are increasingly used to capture local geometrical variations that can be detrimental for the fatigue strength of welded components. In this study, a method is proposed to determine the required scanning sampling resolution for proper fatigue assessment. Based on FE analysis of laser-scanned welded joints, fatigue failure probabilities are computed using a Weakest-link fatigue model with experimentally determined parameters. By down-sampling of the scanning data in the FE simulations, it is shown that the uncertainty and error in the fatigue failure probability prediction increases with decreased sampling resolution. The required sampling resolution is thereafter determined by setting an allowable error in the predicted failure probability. A sampling resolution of 200 to 250 μm has been shown to be adequate for the fatigue-loaded welded joints investigated in the current study. The resolution requirements can be directly incorporated in production for continuous quality assurance of welded structures. The proposed probabilistic model used to derive the resolution requirement accurately captures the experimental fatigue strength distribution, with a correlation coefficient of 0.9 between model and experimental failure probabilities. This work therefore brings novelty by deriving sampling resolution requirements based on the influence of stochastic topographical variations on the fatigue strength distribution.
APA, Harvard, Vancouver, ISO, and other styles
3

Pflug, G. Ch. "Sampling derivatives of probabilities." Computing 42, no. 4 (December 1989): 315–28. http://dx.doi.org/10.1007/bf02243227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, M. P. "Sampling with unequal probabilities." Metrika 33, no. 1 (December 1986): 92. http://dx.doi.org/10.1007/bf01894732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Milbrodt, Hartmut. "Comparing inclusion probabilities and drawing probabilities for rejective sampling and successive sampling." Statistics & Probability Letters 14, no. 3 (June 1992): 243–46. http://dx.doi.org/10.1016/0167-7152(92)90029-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Goulionis, J. E. "Strategies for sampling with varying probabilities." Journal of Statistical Computation and Simulation 81, no. 11 (November 2011): 1753. http://dx.doi.org/10.1080/00949655.2011.622434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chaudhuri, Arijit, and Arun Kumar Adhikary. "Circular Systematic Sampling with Varying Probabilities." Calcutta Statistical Association Bulletin 36, no. 3-4 (September 1987): 193–96. http://dx.doi.org/10.1177/0008068319870310.

Full text
Abstract:
Certain conditions connecting the population size, sample size and the sampling interval in circular systematic sampling with equal probabilities are known. We present here a simple “condition” connecting the sample size, size-measures and the sampling interval in pps circular systematic sampling. The condition is important in noting limitations on sample-sizes when a sampling interval is pre-assigned.
APA, Harvard, Vancouver, ISO, and other styles
8

Greco, Luigi, and Stefania Naddeo. "Inverse Sampling with Unequal Selection Probabilities." Communications in Statistics - Theory and Methods 36, no. 5 (April 3, 2007): 1039–48. http://dx.doi.org/10.1080/03610920601033926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ng, Meei Pyng, and Martin Donadio. "Computing inclusion probabilities for order sampling." Journal of Statistical Planning and Inference 136, no. 11 (November 2006): 4026–42. http://dx.doi.org/10.1016/j.jspi.2005.03.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chauvet, G., D. Bonnéry, and J. C. Deville. "Optimal inclusion probabilities for balanced sampling." Journal of Statistical Planning and Inference 141, no. 2 (February 2011): 984–94. http://dx.doi.org/10.1016/j.jspi.2010.09.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Sampling with loaded probabilities"

1

Liao, Yijie. "Testing of non-unity risk ratio under inverse sampling." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schelin, Lina. "Spatial sampling and prediction." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-53286.

Full text
Abstract:
This thesis discusses two aspects of spatial statistics: sampling and prediction. In spatial statistics, we observe some phenomena in space. Space is typically of two or three dimensions, but can be of higher dimension. Typical questions include: What is the total amount of gold in a gold-mine? How much precipitation could we expect in a specific unobserved location? What is the total tree volume in a forest area? In spatial sampling the aim is to estimate global quantities, such as population totals, based on samples of locations (papers III and IV). In spatial prediction the aim is to estimate local quantities, such as the value at a single unobserved location, with a measure of uncertainty (papers I, II and V). In papers III and IV, we propose sampling designs for selecting representative probability samples in the presence of auxiliary variables. If the phenomena under study have clear trends in the auxiliary space, estimation of population quantities can be improved by using representative samples. Such samples also enable estimation of population quantities in subspaces and are especially needed for multi-purpose surveys, when several target variables are of interest. In papers I and II, the objective is to construct valid prediction intervals for the value at a new location, given observed data. Prediction intervals typically rely on the kriging predictor having a Gaussian distribution. In paper I, we show that the distribution of the kriging predictor can be far from Gaussian, even asymptotically. This motivated us to propose a semiparametric method that does not require distributional assumptions. Prediction intervals are constructed from the plug-in ordinary kriging predictor. In paper V, we consider prediction in the presence of left-censoring, where observations falling below a minimum detection limit are not fully recorded. We review existing methods and propose a semi-naive method.
The semi-naive method is compared to one model-based method and two naive methods, all based on variants of the kriging predictor.
APA, Harvard, Vancouver, ISO, and other styles
3

Peng, Linghua. "Normalizing constant estimation for discrete distribution simulation." Digital version, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lundquist, Anders. "Contributions to the theory of unequal probability sampling." Doctoral thesis, Umeå: Department of Mathematics and Mathematical Statistics, Umeå University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shen, Gang. "Bayesian predictive inference under informative sampling and transformation." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-142754/.

Full text
Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: Ignorable Model; Transformation; Poisson Sampling; PPS Sampling; Gibbs Sampler; Inclusion Probabilities; Selection Bias; Nonignorable Model; Bayesian Inference. Includes bibliographical references (p. 34–35).
APA, Harvard, Vancouver, ISO, and other styles
6

Grafström, Anton. "On unequal probability sampling designs." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-33701.

Full text
Abstract:
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. 
Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
APA, Harvard, Vancouver, ISO, and other styles
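The Poisson design discussed in the abstract above admits a very short sketch. The following is a generic illustration (the function names and toy data are ours, not from the thesis): each unit enters the sample independently with its own inclusion probability, and the Horvitz-Thompson estimator then weights each sampled unit by the inverse of that probability to estimate a population total.

```python
import random

def poisson_sample(pi, rng=random):
    """Poisson sampling: unit i enters the sample independently with its
    own inclusion probability pi[i]; the sample size is random."""
    return [i for i, p in enumerate(pi) if rng.random() < p]

def horvitz_thompson(sample, y, pi):
    """Unbiased Horvitz-Thompson estimate of the population total of y,
    weighting each sampled unit by 1 / (its inclusion probability)."""
    return sum(y[i] / pi[i] for i in sample)

# Toy population: inclusion probabilities proportional to a size measure x,
# scaled so the expected sample size is n_target.
x = [1.0, 2.0, 3.0, 4.0, 10.0]
n_target = 2
pi = [min(1.0, n_target * xi / sum(x)) for xi in x]
```

When the study variable is roughly proportional to the size measure, such unequal-probability designs give far smaller variance than equal-probability sampling, which is the point made in the abstract.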
7

Cancino Cancino, Jorge Orlando. "Analyse und praktische Umsetzung unterschiedlicher Methoden des Randomized Branch Sampling." Doctoral thesis, Göttingen, 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=969133375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Stevenson, Clint W. "A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1116.

Full text
Abstract:
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter will decide to respond to a questionnaire or not. The only exogenous factor that is associated with voter response is whether the interview occurred in the morning or afternoon.
APA, Harvard, Vancouver, ISO, and other styles
9

Dubourg, Vincent. "Méta-modèles adaptatifs pour l'analyse de fiabilité et l'optimisation sous contrainte fiabiliste." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00697026.

Full text
Abstract:
This thesis contributes to the solution of the reliability-based design optimization problem. This probabilistic design approach aims to account for the uncertainties inherent in the system to be designed, in order to propose solutions that are both optimal and safe. The safety level is quantified by a probability of failure, and the optimization problem then consists of ensuring that this probability remains below a threshold set by the decision-makers. Solving this problem requires a large number of calls to the limit-state function characterizing the underlying reliability problem, so the methodology becomes difficult to apply as soon as the design relies on a numerical model that is expensive to evaluate (e.g. a finite element model). In this context, this manuscript proposes a strategy based on adaptively substituting a Kriging surrogate for the limit-state function. Particular effort is devoted to quantifying, reducing and finally eliminating the error introduced by using this surrogate in place of the original model. The proposed methodology is applied to the design of geometrically imperfect shells subject to buckling.
APA, Harvard, Vancouver, ISO, and other styles
10

Cumpston, John Richard. "New techniques for household microsimulation, and their application to Australia." Phd thesis, 2011. http://hdl.handle.net/1885/9046.

Full text
Abstract:
Household microsimulation models are sometimes used by national governments to make long-term projections of proposed policy changes. They are costly to develop and maintain, and sometimes have short lifetimes. Most present national models have limited interactions between agents, few regions and long simulation cycles. Some models are very slow to run. Overcoming these limitations may open up a much wider range of government, business and individual uses. This thesis suggests techniques to help make multi-purpose dynamic microsimulations of households, with fine spatial resolutions, high sampling densities and short simulation cycles. The suggested techniques are:

* simulation by sampling with loaded probabilities
* proportional event alignment
* event alignment using random sampling
* immediate matching by probability-weighting
* immediate 'best of n' matching

All of these techniques are tested in artificial situations. Three of them (sampling with loaded probabilities, alignment using random sampling and best of n matching) are successfully tested in the Cumpston model, a household microsimulation model developed for this thesis. Sampling with loaded probabilities is found to give almost identical results to traditional all-case sampling, but to be quicker. The suggested alignment and matching techniques are shown to give less distortion and generally lower runtimes than some techniques currently in use. The Cumpston model is based on a 1% sample from the 2001 Australian census. Individuals, families, households and dwellings are included. Immigration and emigration are separately simulated, together with internal migration between 57 statistical divisions. Transitions between 8 person types are simulated, and between 9 occupations. The model projects education, employment, earnings and retirement savings for each individual, and dwelling values, rents and housing loans for each household. The onset and development of diseases for each individual are simulated.
Validation of the model was based on methods used by the Orcutt, CORSIM, DYNACAN and APPSIM models. Iterative methods for model calibration are described, together with a statistical test for creep in multiple runs. The model takes about 85 seconds to make projections for 50 years with yearly simulation cycles. After standardizing for sample size and projection years, this is a little slower than the fastest national models currently operating. A planned extension of the model is to 2.2 million persons over 2,214 areas, synthesized from 2011 census tabulations. Using multithreading where feasible, a 50-year projection may take about 10 minutes.
APA, Harvard, Vancouver, ISO, and other styles
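The core of sampling with unequal ('loaded') probabilities, as named in the abstract above, can be illustrated with a standard cumulative-sum sampler. This is a generic sketch of ours, not Cumpston's actual algorithm: precomputing cumulative weights lets each draw invert the empirical CDF by binary search instead of scanning every case.

```python
import bisect
import random

def make_loaded_sampler(weights):
    """Precompute cumulative sums of the (unnormalized) weights so that
    each draw from the unequal 'loaded' probabilities costs O(log n)
    via binary search, rather than a linear scan over all cases."""
    cum = []
    running = 0.0
    for w in weights:
        running += w
        cum.append(running)
    total = cum[-1]

    def draw(rng=random):
        # Invert the empirical CDF at a uniform random point in [0, total).
        return bisect.bisect_left(cum, rng.random() * total)

    return draw
```

For repeated draws from many candidate cases, this is the kind of saving (identical distribution, less work per draw) that the abstract reports relative to all-case sampling.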

Books on the topic "Sampling with loaded probabilities"

1

Gurao, Rajendra G. Sampling Methods. Kanpur, India: Chandralok Prakashan, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Meeden, G., ed. Bayesian methods for finite population sampling. London: Chapman & Hall, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

MacNeill, Ian B., and Gary J. Umphrey, eds. Applied probability, stochastic processes, and sampling theory. Dordrecht: D. Reidel, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wright, Tommy. Exact confidence bounds when sampling from small finite universes: An easy reference based on the hypergeometric distribution. Berlin: Springer-Verlag, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Introduction to empirical processes and semiparametric inference. New York: Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gupta, A. K. Theory of sample surveys. New Jersey: World Scientific, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Evans, Michael J. Monte Carlo computation of marginal posterior quantities. Toronto: University of Toronto, Dept. of Statistics, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Milliken, George A. Analysis of messy data. New York: Chapman & Hall, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

R.C. Bose Symposium (1988 New Delhi). Probability, statistics and design of experiments. New Delhi: Wiley Eastern, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Resampling: The new statistics. 2nd ed. Arlington, VA: Resampling Stats, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Sampling with loaded probabilities"

1

Lohr, Sharon L. "Sampling with Unequal Probabilities." In Sampling, 219–72. 3rd ed. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9780429298899-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lohr, Sharon L. "Cluster Sampling with Equal Probabilities." In Sampling, 167–218. 3rd ed. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9780429298899-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Singh, Ravindra, and Naurang Singh Mangat. "Sampling With Varying Probabilities." In Kluwer Texts in the Mathematical Sciences, 67–101. Dordrecht: Springer Netherlands, 1996. http://dx.doi.org/10.1007/978-94-017-1404-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lohr, Sharon L. "Sampling with Unequal Probabilities." In SAS® Software Companion for Sampling, 69–82. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003160366-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lu, Yan, and Sharon L. Lohr. "Sampling with Unequal Probabilities." In R Companion for Sampling, 69–84. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003228196-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chaudhuri, Arijit, and Sanghamitra Pal. "Sampling with Varying Probabilities." In Indian Statistical Institute Series, 43–109. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1418-8_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Yan, and Sharon L. Lohr. "Cluster Sampling with Equal Probabilities." In R Companion for Sampling, 57–68. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003228196-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lohr, Sharon L. "Cluster Sampling with Equal Probabilities." In SAS® Software Companion for Sampling, 55–68. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003160366-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Brus, Dick J. "Sampling with probabilities proportional to size." In Spatial Sampling with R, 123–36. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003258940-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Stamp, Mark. "Sampling with probabilities proportional to size." In Introduction to Machine Learning with Applications in Information Security, 123–36. 2nd ed. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003264873-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Sampling with loaded probabilities"

1

Barranco Cicilia, Federico, and Alberto Omar Vázquez Hernández. "Reliability of TLP Tethers Using Evolutionary Strategies." In ASME 2008 27th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2008. http://dx.doi.org/10.1115/omae2008-58025.

Full text
Abstract:
The tether system is a critical component of a TLP, since its failure may lead to collapse of the whole structure, involving human lives, economic losses and damage to the environment. For this reason, reliability methods have been proposed for designing TLP tethers, and new codes are being developed to increase their safety level. The objective of this paper is to compare the probability of failure for TLP tethers, considering the maximum tension limit state, obtained with three methods: a methodology based on Evolutionary Strategies and Monte Carlo Importance Sampling, the First Order Reliability Method, and the Second Order Reliability Method. The von Mises failure criterion is used as the limit state function for the most heavily loaded tether of a TLP subjected to different sea states. The efficiency of the ES algorithm in finding design points, and the failure probabilities obtained with the reliability methods, are discussed.
APA, Harvard, Vancouver, ISO, and other styles
2

Leong, Darrell, Ying Min Low, and Youngkook Kim. "Long-Term Extreme Response Prediction of Mooring Lines Using Subset Simulation." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-77064.

Full text
Abstract:
Rigorous methods of probabilistic evaluations on long-term extremes are integral components in reliability research of offshore structures against overload events. Assessment across all conceivable sea states requires accounting for variabilities of long-term environmental loads and short-term stochastics, traditionally captured through extensive sampling or numerical expectation integration. The number of environmental load variables renders numerical integration across high dimensions computationally prohibitive, while industry requirements of high return periods demand large Monte Carlo samples of time-domain dynamic analyses. Subset simulation offers a promising alternative to classic methods of statistical analysis, dividing ultra-low probability problems into subsets of intermediate probabilities. The methodology is uniquely advantageous for the assessment of heavy-tail overload events, which are unpredictably severe and occur at exceedingly rare frequencies. Subset simulation is experimented on a mooring case study situated in the hurricane-prone Gulf of Mexico, with the structure exposed to a joint-probabilistic description of wave, wind and current loads. The devised methodology is found to successfully evaluate hurricane-stimulated extreme events at ultra-low probabilities, beyond the feasible reach of Monte Carlo simulation at reasonable lead times.
APA, Harvard, Vancouver, ISO, and other styles
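The idea in the abstract above (splitting one rare event into a chain of conditional events, each of probability about p0) can be sketched for a toy standard-normal problem. This is an illustrative reconstruction of ours, not the paper's implementation; real subset simulation uses a more careful modified-Metropolis scheme.

```python
import math
import random

def mcmc_step(x, rng):
    """One Metropolis step that leaves the standard normal invariant."""
    xp = x + rng.uniform(-1.0, 1.0)
    if rng.random() < math.exp((x * x - xp * xp) / 2.0):
        return xp
    return x

def subset_simulation(g, t_fail, n=1000, p0=0.1, max_levels=20, rng=random):
    """Estimate P(g(X) >= t_fail) for X ~ N(0,1) as a product of
    conditional probabilities over adaptively chosen thresholds."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(max_levels):
        pairs = sorted(((g(x), x) for x in xs), reverse=True)
        n_keep = int(p0 * n)
        t = pairs[n_keep - 1][0]          # intermediate threshold
        if t >= t_fail:                   # final level reached
            return prob * sum(1 for y, _ in pairs if y >= t_fail) / n
        prob *= p0
        # Repopulate the level by MCMC from the seeds, conditional on g >= t.
        xs = []
        for _, x in pairs[:n_keep]:
            for _ in range(n // n_keep):
                xp = mcmc_step(x, rng)
                if g(xp) >= t:
                    x = xp
                xs.append(x)
    return prob
```

With p0 = 0.1 and three levels, a probability near 10^-3 is reached using only a few thousand samples, which is why the method scales to the ultra-low probabilities the abstract mentions.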
3

Beardsmore, David W., Karen Stone, and Huaguo Teng. "Advanced Probabilistic Fracture Mechanics Using the R6 Procedure." In ASME 2010 Pressure Vessels and Piping Division/K-PVP Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/pvp2010-25942.

Full text
Abstract:
Deterministic Fracture Mechanics (DFM) assessments of structural components (e.g. pressure vessels and piping used in the nuclear industry) containing defects can usually be carried out using the R6 procedure. The aim of such an assessment is to demonstrate that there are sufficient safety margins on the applied loads, defect size and fracture toughness for the safe continual operation of the component. To ensure a conservative assessment is made, a lower-bound fracture toughness and upper-bound defect sizes and applied loads are used. In some cases, this approach will be too conservative and will provide insufficient safety margins. Probabilistic Fracture Mechanics (PFM) offers a way forward in such cases by accounting explicitly for the inherent scatter in material properties, defect size and applied loads. Basic Monte Carlo Methods (MCM) allow an estimate of the probability of failure to be calculated by carrying out a large number of fracture mechanics assessments, each using a random sample of the different random variables (loads, defect size, fracture toughness etc.). The probability of failure is obtained by counting the proportion of simulations which lead to assessment points that lie outside the R6 failure assessment curve. This approach can give good results for probabilities greater than 10⁻⁵. However, for smaller probabilities, the calculation may be inefficient and a very large number of assessments may be necessary to obtain an accurate result, which may be prohibitive. Engineering Reliability Methods (ERM), such as the First Order Reliability Method (FORM) and the Second Order Reliability Method (SORM), can be used to estimate the probability of failure in such cases, but these methods can be difficult to implement, do not always give the correct result, and are not always robust enough for general use.
Advanced Monte Carlo Methods (AMCM) combine the two approaches to provide an accurate and efficient calculation of probability of failure in all cases. These methods aim to carry out Importance Sampling so that only assessment points that lie close to or outside the failure assessment curve are calculated. Two methods are described in this paper: (1) orthogonal sampling, and (2) spherical sampling. The power behind these methods is demonstrated by carrying out calculations of probability of failure for semi-elliptical, surface breaking, circumferential cracks in the inside of a pressure vessel. The results are compared with the results of Basic Monte Carlo and Engineering Reliability calculations. The calculations use the R6 assessment procedure.
APA, Harvard, Vancouver, ISO, and other styles
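The importance-sampling principle behind the advanced Monte Carlo methods in the abstract above can be shown on a toy problem: estimating a small Gaussian tail probability by sampling from a density shifted toward the failure region and reweighting by the likelihood ratio. This sketch and the shift-to-the-threshold heuristic are common textbook choices of ours, not the paper's scheme.

```python
import math
import random

def tail_prob_is(t, n, rng, shift=None):
    """Estimate p = P(X > t) for X ~ N(0,1) by drawing from N(shift, 1)
    and weighting each exceedance by the likelihood ratio
    phi(x) / phi(x - shift) = exp(shift^2/2 - shift*x)."""
    if shift is None:
        shift = t  # centre proposals at the threshold (a common heuristic)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > t:
            acc += math.exp(shift * shift / 2.0 - shift * x)
    return acc / n
```

For t = 4 the true probability is about 3.2e-5; plain Monte Carlo would need millions of samples to see even a handful of exceedances, while the shifted proposal makes roughly half the draws land in the failure region.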
4

Leong, Darrell, Ying Min Low, and Youngkook Kim. "An Efficient System Reliability Approach Against Mooring Overload Failures." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-95048.

Full text
Abstract:
Abstract As exploration for hydrocarbon resources ventures into deeper waters, offshore floating structures are increasingly required to be stationed at sites with highly uncertain environmental loading conditions. Driven by high historical mooring failure rates with severe consequences, the need for effective long-term structural reliability methods for mooring lines arises. However, system nonlinearities, high problem dimensionality, and the diversity of conceivable failure causalities and their extremely low probabilities complicate the analysis. Variations on the Monte Carlo approach are robust in addressing these challenges, but at the expense of high computational costs. In this study, distributions of environmental parameters and their correlations are modelled into a joint probabilistic description. By classifying conceivable sea states across the domain, a uniform sampling scheme is presented as an efficient means of assessing long-term reliability against extreme events. The proposed method was performed on a floating production unit case study situated in the hurricane-prone Gulf of Mexico, exposed to irregular wave loads. The analysis was found to provide probability estimates with negligible bias when validated against subset simulation, with significant variance reduction of mean estimators by eliminating the need to over-simulate non-critical environmental conditions. The resulting sampling density has an added advantage of being non-failure specific, enabling system reliability assessments across multiple modes and locations of failure without the need for re-analysis.
APA, Harvard, Vancouver, ISO, and other styles
5

Petricic, Martin, and Alaa E. Mansour. "Estimation of the Long Term Correlation Coefficients by Simulation." In ASME 2010 29th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/omae2010-20637.

Full text
Abstract:
This paper proposes a simulation method for obtaining an estimate of the long-term correlation coefficients between different low-frequency wave-induced loads acting on a ship hull. These coefficients are an essential part of load combination procedures in design and strength evaluations. Existing theory is limited to linear time-invariant systems with weakly stationary stochastic inputs, such as waves during a single sea state (short-term). The simulation treats the non-stationary wave elevations during the ship's entire life (long-term) as a sequence of different stationary Gaussian stochastic processes. Different sea states (HS, T0, wave direction) are sampled, using rejection sampling, from the joint probability density functions fitted to every Marsden zone on the ship's route. The time series of the loads are simulated from the load spectra for each sea state, including the effects of loading condition, heading, speed, seasonality and voluntary as well as involuntary speed reduction. The estimates of the correlation coefficients are then calculated from these time series. The simulation time can be significantly reduced (to the order of seconds rather than hours and days) by introducing the seasonal variations into a single voyage. It is proven that the estimate of the correlation coefficient, obtained by simulating only a single voyage, approaches the true correlation coefficient in probability as the number of simulated load values increases. The simulation method can also be used for finding the long-term exceedance probabilities of the peak values of individual loads as well as for analyzing various load combinations (linear and nonlinear). Related concepts and limitations of this method are demonstrated by an example of a containership operating between Boston, MA and Southampton, UK.
6

Frey, Kristoffer, Ted Steiner, and Jonathan How. "Collision Probabilities for Continuous-Time Systems Without Sampling." In Robotics: Science and Systems 2020. Robotics: Science and Systems Foundation, 2020. http://dx.doi.org/10.15607/rss.2020.xvi.019.

7

Kath, W., and G. Biondini. "Calculating PMD statistics and outage probabilities with importance sampling." In OFC 2003 - Optical Fiber Communication Conference and Exhibition. IEEE, 2003. http://dx.doi.org/10.1109/ofc.2003.315983.

8

de Angelis, M., E. Patelli, and M. Beer. "Line Sampling for Assessing Structural Reliability with Imprecise Failure Probabilities." In Second International Conference on Vulnerability and Risk Analysis and Management (ICVRAM) and the Sixth International Symposium on Uncertainty, Modeling, and Analysis (ISUMA). Reston, VA: American Society of Civil Engineers, 2014. http://dx.doi.org/10.1061/9780784413609.093.

9

Agaskar, Ameya, Chuang Wang, and Yue M. Lu. "Randomized Kaczmarz algorithms: Exact MSE analysis and optimal sampling probabilities." In 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2014. http://dx.doi.org/10.1109/globalsip.2014.7032145.

10

Yang, Pengyi, Wei Liu, and Jean Yang. "Positive unlabeled learning via wrapper-based adaptive sampling." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/457.

Abstract:
Learning from positive and unlabeled data frequently occurs in applications where only a subset of positive instances is available while the rest of the data are unlabeled. In such scenarios, often the goal is to create a discriminant model that can accurately classify both positive and negative data by modelling from labeled and unlabeled instances. In this study, we propose an adaptive sampling (AdaSampling) approach that utilises prediction probabilities from a model to iteratively update the training data. Starting with equal prior probabilities for all unlabeled data, our method "wraps" around a predictive model to iteratively update these probabilities to distinguish positive and negative instances in unlabeled data. Subsequently, one or more robust negative set(s) can be drawn from unlabeled data, according to the likelihood of each instance being negative, to train a single classification model or ensemble of models.
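The iterative probability update in this abstract can be illustrated with a small self-contained sketch. The nearest-centroid scorer and the deterministic "most-likely-negative half" selection below are our simplifications (the paper wraps an arbitrary classifier and samples the negative set probabilistically), so treat this as a sketch of the control flow, not the published algorithm.

```python
def centroid_score(positives, negatives, x):
    # Toy stand-in for a wrapped classifier: P(x is positive) from
    # distances to the class centroids (1-D features for simplicity).
    cp = sum(positives) / len(positives)
    cn = sum(negatives) / len(negatives)
    dp, dn = abs(x - cp), abs(x - cn)
    return dn / (dp + dn) if dp + dn else 0.5

def adasampling(positives, unlabeled, score, rounds=5):
    # Start with equal prior probabilities for all unlabeled instances.
    p_neg = [0.5] * len(unlabeled)
    for _ in range(rounds):
        # Pseudo-negative set: the half of the unlabeled data currently most
        # likely negative (the paper samples this set probabilistically).
        ranked = sorted(range(len(unlabeled)), key=p_neg.__getitem__, reverse=True)
        neg = [unlabeled[i] for i in ranked[: max(1, len(unlabeled) // 2)]]
        # Re-score every unlabeled instance and update its probability.
        p_neg = [1.0 - score(positives, neg, x) for x in unlabeled]
    return p_neg

# Labeled positives cluster near 1.0; the unlabeled set mixes both classes.
p_neg = adasampling([0.9, 1.0, 1.1], [-1.0, 0.95, -0.9, 1.05], centroid_score)
```

After a few rounds the probabilities separate: the unlabeled points near -1 end up with high negative-class probability, those near +1 with low.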

Reports on the topic "Sampling with loaded probabilities"

1

Iyengar, Satish. Importance Sampling for Tail Probabilities. Fort Belvoir, VA: Defense Technical Information Center, February 1991. http://dx.doi.org/10.21236/ada232412.

2

Peterson, James T. On the Estimation of Detection Probabilities for Sampling Stream-Dwelling Fishes. Office of Scientific and Technical Information (OSTI), November 1999. http://dx.doi.org/10.2172/783958.

3

Harter, Rachel, Joseph McMichael, and S. Grace Deng. New Approach for Handling Drop Point Addresses in Mail/ Web Surveys. RTI Press, August 2022. http://dx.doi.org/10.3768/rtipress.2022.op.0074.2209.

Abstract:
The purpose of this paper is to introduce the concept of drop unit substitution in address-based samples for mail and web surveys. A drop point is a single US Postal Service (USPS) delivery point or receptacle that services multiple businesses, families, or households (USPS, 2017). Residential drop units are the individual housing units served by the drop point address. For the most part, address-based sampling frames list the number of units at a drop point address but will not contain information identifying specific units. Drop units comprise less than 2 percent of all residential addresses in the United States (McMichael, 2017), but they tend to be concentrated in certain large cities. In Queens, New York, for example, drop units constitute 27 percent of residential housing units. The problem with drop units for address-based surveys with mail contacts is that, without names or unit identifiers, there is no way to control which unit receives the various mailings. This limitation leads to distorted selection probabilities, renders the use of cash incentives by mail impractical, and precludes traditional methods for mail nonresponse follow-up, thus resulting in higher nonresponse. Alternatively, excluding drop units results in coverage error, which can be considerable for some subnational estimates. The authors propose a substitution approach when a drop unit is sampled—in other words, replacing the unit with a similar nearby unit in a non–drop point building.
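The substitution idea in this abstract can be sketched as follows. Treating adjacency in the frame as a stand-in for geographic nearness is our simplification (the authors substitute a similar nearby unit using richer frame information), and the data layout is illustrative.

```python
def substitute_drop_units(frame, sampled):
    """Replace each sampled drop unit with the nearest eligible non-drop unit.
    `frame` is a list of (address_id, is_drop_unit) pairs; position in the
    list stands in for geographic nearness (a simplification)."""
    drop = {a for a, is_drop in frame if is_drop}
    order = [a for a, _ in frame]
    pos = {a: i for i, a in enumerate(order)}
    result = []
    for a in sampled:
        if a not in drop:
            result.append(a)
            continue
        # Search outward from the drop unit for the closest substitute
        # that is neither a drop unit nor already selected.
        for d in range(1, len(order)):
            candidates = [order[j] for j in (pos[a] - d, pos[a] + d)
                          if 0 <= j < len(order)]
            sub = next((c for c in candidates
                        if c not in drop and c not in result), None)
            if sub is not None:
                result.append(sub)
                break
    return result

frame = [("A", False), ("B", True), ("C", False), ("D", True), ("E", False)]
chosen = substitute_drop_units(frame, ["A", "B", "D"])  # -> ["A", "C", "E"]
```

A production version would also track substitution counts, since each substitution shifts selection probabilities for the neighbouring units.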
4

Russo, David, Daniel M. Tartakovsky, and Shlomo P. Neuman. Development of Predictive Tools for Contaminant Transport through Variably-Saturated Heterogeneous Composite Porous Formations. United States Department of Agriculture, December 2012. http://dx.doi.org/10.32747/2012.7592658.bard.

Abstract:
The vadose (unsaturated) zone forms a major hydrologic link between the ground surface and underlying aquifers. To understand properly its role in protecting groundwater from near surface sources of contamination, one must be able to analyze quantitatively water flow and contaminant transport in variably saturated subsurface environments that are highly heterogeneous, often consisting of multiple geologic units and/or high and/or low permeability inclusions. The specific objectives of this research were: (i) to develop efficient and accurate tools for probabilistic delineation of dominant geologic features comprising the vadose zone; (ii) to develop a complementary set of data analysis tools for discerning the fractal properties of hydraulic and transport parameters of a highly heterogeneous vadose zone; (iii) to develop and test the associated computational methods for probabilistic analysis of flow and transport in highly heterogeneous subsurface environments; and (iv) to apply the computational framework to design an “optimal” observation network for monitoring and forecasting the fate and migration of contaminant plumes originating from agricultural activities. During the course of the project, we modified the third objective to include an additional computational method, based on the notion that the heterogeneous formation can be considered as a mixture of populations of differing spatial structures. Regarding uncertainty analysis, going beyond approaches based on the mean and variance of system states, we succeeded in developing probability density function (PDF) solutions that enable one to evaluate probabilities of rare events, as required for probabilistic risk assessment.
In addition, we developed reduced complexity models for the probabilistic forecasting of infiltration rates in heterogeneous soils during surface runoff and/or flooding events. Regarding flow and transport in variably saturated, spatially heterogeneous formations with fine- and coarse-textured embedded soils (FTES- and CTES-formations, respectively), we succeeded in developing first-order and numerical frameworks for flow and transport in three-dimensional (3-D), variably saturated, bimodal, heterogeneous formations with single and dual porosity, respectively. Regarding the sampling problem, defined as how many sampling points are needed and where to locate them in the horizontal x₂x₃ plane of the field, we used our computational framework to develop and demonstrate a methodology that may considerably improve our ability to describe quantitatively the response of complicated 3-D flow systems. The results of the project are of theoretical and practical importance; they provide a rigorous framework for modeling water flow and solute transport in a realistic, highly heterogeneous, composite flow system with uncertain properties under-specified by data. Specifically, they: (i) enhanced fundamental understanding of the basic mechanisms of field-scale flow and transport in near-surface geological formations under realistic flow scenarios, (ii) provided a means to assess the ability of existing flow and transport models to handle realistic flow conditions, and (iii) provided a means to assess quantitatively the threats posed to groundwater by contamination from agricultural sources.