Academic literature on the topic 'Probability bounds analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Probability bounds analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Probability bounds analysis"

1

Enszer, Joshua A., D. Andrei Măceș, and Mark A. Stadtherr. "Probability bounds analysis for nonlinear population ecology models." Mathematical Biosciences 267 (September 2015): 97–108. http://dx.doi.org/10.1016/j.mbs.2015.06.012.

2

Enszer, Joshua A., Youdong Lin, Scott Ferson, George F. Corliss, and Mark A. Stadtherr. "Probability bounds analysis for nonlinear dynamic process models." AIChE Journal 57, no. 2 (January 10, 2011): 404–22. http://dx.doi.org/10.1002/aic.12278.

3

Cuesta, Juan A., and Carlos Matrán. "Conditional bounds and best L∞-approximations in probability spaces." Journal of Approximation Theory 56, no. 1 (January 1989): 1–12. http://dx.doi.org/10.1016/0021-9045(89)90128-7.

4

Hughett, Paul. "Error Bounds for Numerical Inversion of a Probability Characteristic Function." SIAM Journal on Numerical Analysis 35, no. 4 (August 1998): 1368–92. http://dx.doi.org/10.1137/s003614299631085x.

5

Feng, Geng. "Sensitivity Analysis for Systems under Epistemic Uncertainty with Probability Bounds Analysis." International Journal of Computer Applications 179, no. 31 (April 17, 2018): 1–6. http://dx.doi.org/10.5120/ijca2018915892.

6

Wang, Jing, and Xin Geng. "Theoretical Analysis of Label Distribution Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5256–63. http://dx.doi.org/10.1609/aaai.v33i01.33015256.

Abstract:
As a novel learning paradigm, label distribution learning (LDL) explicitly models label ambiguity with the definition of label description degree. Although lots of work has been done to deal with real-world applications, theoretical results on LDL remain unexplored. In this paper, we rethink LDL from theoretical aspects, towards analyzing the learnability of LDL. Firstly, risk bounds for three representative LDL algorithms (AA-kNN, AA-BP and SA-ME) are provided. For AA-kNN, Lipschitzness of the label distribution function is assumed to bound the risk, and for AA-BP and SA-ME, Rademacher complexity is utilized to give data-dependent risk bounds. Secondly, a generalized plug-in decision theorem is proposed to understand the relation between LDL and classification, uncovering that approximation to the conditional probability distribution function in absolute loss guarantees approaching the optimal classifier, and also data-dependent error probability bounds are presented for the corresponding LDL algorithms to perform classification. As far as we know, this is perhaps the first research on the theory of LDL.
7

Aydın, Ata Deniz, and Aurelian Gheondea. "Probability Error Bounds for Approximation of Functions in Reproducing Kernel Hilbert Spaces." Journal of Function Spaces 2021 (April 30, 2021): 1–15. http://dx.doi.org/10.1155/2021/6617774.

Abstract:
We find probability error bounds for approximations of functions f in a separable reproducing kernel Hilbert space H with reproducing kernel K on a base space X, firstly in terms of finite linear combinations of functions of type K_{x_i} and then in terms of the projection π_{x,n} onto span{K_{x_i} : i = 1, …, n}, for random sequences of points x = (x_i)_i in X. Given a probability measure P, letting P_K be the measure defined by dP_K(x) = K(x, x) dP(x), x ∈ X, our approach is based on the nonexpansive operator L²(X; P_K) ∋ λ ↦ L_{P,K}λ := ∫_X λ(x) K_x dP(x) ∈ H, where the integral exists in the Bochner sense. Using this operator, we then define a new reproducing kernel Hilbert space, denoted by H_P, that is the operator range of L_{P,K}. Our main result establishes bounds, in terms of the operator L_{P,K}, on the probability that the Hilbert space distance between an arbitrary function f in H and linear combinations of functions of type K_{x_i}, for (x_i)_i sampled independently from P, falls below a given threshold. For sequences of points (x_i)_{i=1}^∞ constituting a so-called uniqueness set, the orthogonal projections π_{x,n} onto span{K_{x_i} : i = 1, …, n} converge in the strong operator topology to the identity operator. We prove that, under the assumption that H_P is dense in H, any sequence of points sampled independently from P yields a uniqueness set with probability 1. This result improves on previous error bounds in weaker norms, such as uniform or L^p norms, which yield only convergence in probability and not almost certain convergence. Two examples that show the applicability of this result, to a uniform distribution on a compact interval and to the Hardy space H²(D), are presented as well.
8

Block, Henry W., Tim Costigan, and Allan R. Sampson. "Product-Type Probability Bounds of Higher Order." Probability in the Engineering and Informational Sciences 6, no. 3 (July 1992): 349–70. http://dx.doi.org/10.1017/s0269964800002588.

Abstract:
Glaz and Johnson [14] introduce ith-order product-type approximations, β_i, i = 1, …, n − 1, for P_n = P(X_1 ≤ c_1, X_2 ≤ c_2, …, X_n ≤ c_n) and show that P_n ≥ β_{n−1} ≥ β_{n−2} ≥ … ≥ β_2 ≥ β_1 when X is MTP2. In this article, it is shown that this nested ordering holds under weaker positive dependence conditions. For multivariate normal distributions, these conditions reduce to cov(X_i, X_j) ≥ 0 for 1 ≤ i < j ≤ n and cov(X_i, X_j | X_{j−1}) ≥ 0 for 1 ≤ i < j − 1, j = 3, …, n. This is applied to group sequential analysis with bivariate normal responses. Conditions for P_n ≥ β_3 ≥ β_2 ≥ β_1 are also derived. Conditions are also obtained that ensure that product-type approximations are nested lower bounds to upper orthant probabilities P(X_1 > c_1, …, X_n > c_n). It is shown that these conditions are satisfied for the multivariate exponential distribution of Marshall and Olkin [20].
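The nested product-type bounds in this abstract are easy to reproduce numerically. The sketch below is illustrative only, not from the article; the equicorrelated covariance and unit thresholds are assumptions. It computes the first- and second-order approximations β1 and β2 for a trivariate normal with nonnegative correlations and checks them against the exact orthant probability:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn, norm

# Equicorrelated trivariate normal with rho >= 0 and thresholds c_i = 1,
# so the positive dependence conditions discussed in the article hold.
rho, c = 0.3, np.ones(3)
cov = np.full((3, 3), rho)
np.fill_diagonal(cov, 1.0)

p1 = norm.cdf(1.0)                                        # P(X_i <= 1)
p12 = mvn(mean=np.zeros(2), cov=cov[:2, :2]).cdf(c[:2])   # P(X_1 <= 1, X_2 <= 1)
pn = mvn(mean=np.zeros(3), cov=cov).cdf(c)                # exact P_n

beta1 = p1 ** 3             # first-order product approximation
beta2 = p12 * (p12 / p1)    # P(X1<=c, X2<=c) * P(X3<=c | X2<=c)
print(f"beta1 = {beta1:.4f} <= beta2 = {beta2:.4f} <= P_n = {pn:.4f}")
```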
9

McCormick, William P., and You Sung Park. "Asymptotic analysis of extremes from autoregressive negative binomial processes." Journal of Applied Probability 29, no. 4 (December 1992): 904–20. http://dx.doi.org/10.2307/3214723.

Abstract:
It is well known that most commonly used discrete distributions fail to belong to the domain of maximal attraction for any extreme value distribution. Despite this negative finding, C. W. Anderson showed that for a class of discrete distributions including the negative binomial class, it is possible to asymptotically bound the distribution of the maximum. In this paper we extend Anderson's result to discrete-valued processes satisfying the usual mixing conditions for extreme value results for dependent stationary processes. We apply our result to obtain bounds for the distribution of the maximum based on negative binomial autoregressive processes introduced by E. McKenzie and Al-Osh and Alzaid. A simulation study illustrates the bounds for small sample sizes.
10

Chib, Siddhartha, and Ram C. Tiwari. "Extreme Bounds Analysis in the Kalman Filter." American Statistician 45, no. 2 (May 1991): 113. http://dx.doi.org/10.2307/2684370.


Dissertations / Theses on the topic "Probability bounds analysis"

1

Ling, Jay Michael. "Managing Information Collection in Simulation-Based Design." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11504.

Abstract:
An important element of successful engineering design is the effective management of resources to support design decisions. Design decisions can be thought of as having two phases: a formulation phase and a solution phase. As part of the formulation phase, engineers must decide how much information to collect and which models to use to support the design decision. Since more information and more accurate models come at a greater cost, a cost-benefit trade-off must be made. Previous work has considered such trade-offs in decision problems when all aspects of the decision problem can be represented using precise probabilities, an assumption that is not justified when information is sparse. In this thesis, we use imprecise probabilities to manage the information cost-benefit trade-off for two decision problems in which the quality of the information is imprecise: 1) The decision of when to stop collecting statistical data about a quantity that is characterized by a probability distribution with unknown parameters; and 2) The selection of the most preferred model to help guide a particular design decision when the model accuracy is characterized as an interval. For each case, a separate novel approach is developed in which the principles of information economics are incorporated into the information management decision. The problem of statistical data collection is explored with a pressure vessel design. This design problem requires the characterization of the probability distribution that describes a novel material's strength. The model selection approach is explored with the design of an I-beam structure. The designer must decide how accurate of a model to use to predict the maximum deflection in the span of the structure. For both problems, it is concluded that the information economic approach developed in this thesis can assist engineers in their information management decisions.
2

Dankwah, Charles O. "Investigating an optimal decision point for probability bounds analysis models when used to estimate remedial soil volumes under uncertainty at hazardous waste sites." ScholarWorks, 2010. https://scholarworks.waldenu.edu/dissertations/776.

Abstract:
Hazardous waste site remediation cost estimation requires a good estimate of the contaminated soil volume. The United States Environmental Protection Agency (U.S. EPA) currently uses deterministic point values to estimate soil volumes but the literature suggests that probability bounds analysis (PBA) is the more accurate method to make estimates under uncertainty. The underlying statistical theory is that they are more accurate than deterministic estimates because probabilistic estimates account for data uncertainties. However, the literature does not address the problem of selecting an optimal decision point from the interval-valued PBA estimates. The purpose of this study was to identify the optimal PBA decision point estimator and use it to demonstrate that because the PBA method also accounts for data uncertainties, PBA estimates of remedial soil volumes are more accurate than the U.S. EPA deterministic estimates. The research questions focused on determining whether the mean or the 95th percentile decision point is the optimal PBA estimator. A convenience sample of seven sites was selected from the U.S. EPA Superfund Database. The PBA method was used to estimate the remedial soil volumes for the sites. Correlation analyses were performed between the mean and 95th percentile PBA estimates and the actual excavated soil volumes. The study results suggest that the lower bound 95th percentile PBA estimate, which had the best R2-value of 89%, is the optimal estimator. The R2-value for a similar correlation analysis using the U.S. EPA deterministic estimates was only 59%. This confirms that PBA is the better estimator. The PBA estimates are less contestable than the current U.S. EPA deterministic point estimates. Thus, the PBA method will reduce litigation and speed up cleanup activities to the benefit of the U.S. EPA, corporations, the health and safety of nearby residents, and society in general.
3

Dixon, William J. "Uncertainty in Aquatic Toxicological Exposure-Effect Models: the Toxicity of 2,4-Dichlorophenoxyacetic Acid and 4-Chlorophenol to Daphnia carinata." RMIT University, Biotechnology and Environmental Biology, 2005. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070119.163720.

Abstract:
Uncertainty is pervasive in risk assessment. In ecotoxicological risk assessments, it arises from such sources as a lack of data, the simplification and abstraction of complex situations, and ambiguities in assessment endpoints (Burgman 2005; Suter 1993). When evaluating and managing risks, uncertainty needs to be explicitly considered in order to avoid erroneous decisions and to be able to make statements about the confidence that we can place in risk estimates. Although informative, previous approaches to dealing with uncertainty in ecotoxicological modelling have been found to be limited, inconsistent and often based on assumptions that may be false (Ferson & Ginzburg 1996; Suter 1998; Suter et al. 2002; van der Hoeven 2004; van Straalen 2002a; Verdonck et al. 2003a). In this thesis a Generalised Linear Modelling approach is proposed as an alternative, congruous framework for the analysis and prediction of a wide range of ecotoxicological effects. This approach was used to investigate the results of toxicity experiments on the effect of 2,4-Dichlorophenoxyacetic Acid (2,4-D) formulations and 4-Chlorophenol (4-CP, an associated breakdown product) on Daphnia carinata. Differences between frequentist Maximum Likelihood (ML) and Bayesian Markov-Chain Monte-Carlo (MCMC) approaches to statistical reasoning and model estimation were also investigated. These approaches are inferentially disparate and place different emphasis on aleatory and epistemic uncertainty (O'Hagan 2004). Bayesian MCMC and Probability Bounds Analysis methods for propagating uncertainty in risk models are also compared for the first time. For simple models, Bayesian and frequentist approaches to Generalised Linear Model (GLM) estimation were found to produce very similar results when non-informative prior distributions were used for the Bayesian models. Potency estimates and regression parameters were found to be similar for identical models, signifying that Bayesian MCMC techniques are at least a suitable and objective replacement for frequentist ML for the analysis of exposure-response data. Applications of these techniques demonstrated that Amicide formulations of 2,4-D are more toxic to Daphnia than their unformulated, Technical Acid parent. Different results were obtained from Bayesian MCMC and ML methods when more complex models and data structures were considered. In the analysis of 4-CP toxicity, the treatment of two different factors as fixed or random in standard and Mixed-Effect models was found to affect variance estimates to the degree that different conclusions would be drawn from the same model, fit to the same data. Associated discrepancies in the treatment of overdispersion between ML and Bayesian MCMC analyses were also found to affect results. Bayesian MCMC techniques were found to be superior to the ML ones employed for the analysis of complex models because they enabled the correct formulation of hierarchical (nested) data structures within a binomial logistic GLM. Application of these techniques to the analysis of results from 4-CP toxicity testing on two strains of Daphnia carinata found that between-experiment variability was greater than that within experiments or between strains. Perhaps surprisingly, this indicated that long-term laboratory culture had not significantly affected the sensitivity of one strain when compared to cultures of another strain that had recently been established from field populations.
The results from this analysis highlighted the need for repetition of experiments, proper model formulation in complex analyses and careful consideration of the effects of pooling data on characterising variability and uncertainty. The GLM framework was used to develop three-dimensional surface models of the effects of different length pulse exposures, and subsequent delayed toxicity, of 4-CP on Daphnia. These models described the relationship between exposure duration and intensity (concentration) on toxicity, and were constructed for both pulse and delayed effects. Statistical analysis of these models found that significant delayed effects occurred following the full range of pulse exposure durations, and that both exposure duration and intensity interacted significantly and concurrently with the delayed effect. These results indicated that failure to consider delayed toxicity could lead to significant underestimation of the effects of pulse exposure, and therefore increase uncertainty in risk assessments. A number of new approaches to modelling ecotoxicological risk and to propagating uncertainty were also developed and applied in this thesis. In the first of these, a method for describing and propagating uncertainty in conventional Species Sensitivity Distribution (SSD) models was described. This utilised Probability Bounds Analysis to construct a nonparametric 'probability box' on an SSD based on EC05 estimates and their confidence intervals. Predictions from this uncertain SSD and the confidence interval extrapolation methods described by Aldenberg and colleagues (2000; 2002a) were compared. It was found that the extrapolation techniques underestimated the width of uncertainty (confidence) intervals by 63% and the upper bound by 65%, when compared to the Probability Bounds (P-Bounds) approach, which was based on actual confidence estimates derived from the original data. An alternative approach to formulating ecotoxicological risk modelling was also proposed and was based on a Binomial GLM. In this formulation, the model is first fit to the available data in order to derive mean and uncertainty estimates for the parameters. This 'uncertain' GLM model is then used to predict the risk of effect from possible or observed exposure distributions. This risk is described as a whole distribution, with a central tendency and uncertainty bounds derived from the original data and the exposure distribution (if this is also 'uncertain'). Bayesian and P-Bounds approaches to propagating uncertainty in this model were compared using an example of the risk of exposure to a hypothetical (uncertain) distribution of 4-CP for the two Daphnia strains studied. This comparison found that the Bayesian and P-Bounds approaches produced very similar mean and uncertainty estimates, with the P-Bounds intervals always being wider than the Bayesian ones. This difference is due to the different methods for dealing with dependencies between model parameters by the two approaches, and is confirmation that the P-Bounds approach is better suited to situations where data and knowledge are scarce. The advantages of the Bayesian risk assessment and uncertainty propagation method developed are that it allows calculation of the likelihood of any effect occurring, not just the (probability) bounds, and that the same software (WinBUGS) and model construction may be used to fit regression models and predict risks simultaneously.
The GLM risk modelling approaches developed here are able to explain a wide range of response shapes (including hormesis) and underlying (non-normal) distributions, and do not involve expression of the exposure-response as a probability distribution, hence solving a number of problems found with previous formulations of ecotoxicological risk. The approaches developed can also be easily extended to describe communities, include modifying factors, mixed-effects, population growth, carrying capacity and a range of other variables of interest in ecotoxicological risk assessments. While the lack of data on the toxicological effects of chemicals is the most significant source of uncertainty in ecotoxicological risk assessments today, methods such as those described here can assist by quantifying that uncertainty so that it can be communicated to stakeholders and decision makers. As new information becomes available, these techniques can be used to develop more complex models that will help to bridge the gap between the bioassay and the ecosystem.
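One construction from this thesis, the nonparametric probability box on a species sensitivity distribution built from interval-valued EC05 estimates, can be sketched in a few lines. The numbers below are hypothetical, and the code is only a schematic of the idea (an empirical CDF bounded from both sides by the interval endpoints), not the thesis implementation:

```python
import numpy as np

# Hypothetical 95% confidence intervals for the EC05 (mg/L) of six species.
ec05_ci = np.array([[0.8, 2.1], [1.5, 3.9], [2.2, 5.0],
                    [3.1, 7.4], [4.0, 9.2], [6.5, 14.0]])

def ssd_pbox(conc, intervals):
    """Bounds on the fraction of species affected at a given concentration."""
    lower = np.mean(intervals[:, 1] <= conc)  # upper endpoints below conc: certainly affected
    upper = np.mean(intervals[:, 0] <= conc)  # lower endpoints below conc: possibly affected
    return lower, upper

for conc in (2.0, 5.0, 10.0):
    lo, hi = ssd_pbox(conc, ec05_ci)
    print(f"fraction of species affected at {conc:5.1f} mg/L: [{lo:.2f}, {hi:.2f}]")
```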
4

Gobard, Renan. "Fluctuations dans des modèles de boules aléatoires." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S025/document.

Abstract:
In this thesis, we study the macroscopic fluctuations in random balls models. A random balls model is an aggregation of balls in Rd whose centers and radii are random. We also mark each ball with a random weight. We consider the mass M induced by the system of weighted balls on a configuration μ of Rd. In order to investigate the macroscopic fluctuations of M, we zoom out on the configuration of balls; mathematically, we reduce the mean radius while increasing the mean number of centers per unit volume. The question has already been studied when the centers, the radii, and the weights are independent and the triplets (center, radius, weight) are generated according to a Poisson point process on Rd × R+ × R. Three different behaviors are then observed, depending on the comparison between the speed of decrease of the radii and the speed of increase of the density of centers. We generalize these results in three different directions. The first part of this thesis introduces dependence between the radii and the centers, and inhomogeneity in the distribution of the centers; in the model we propose, the stochastic behavior of the radii depends on the location of the ball. In the previous works, the convergences obtained for the fluctuations of M are at best functional convergences in finite dimension. In the second part of this work, we obtain functional convergence on an infinite-dimensional set of configurations μ. In the third and last part, we study a (non-weighted) random balls model on C where the couples (center, radius) are generated according to a determinantal point process. Unlike the Poisson point process, the determinantal point process exhibits repulsion between its points, which allows us to model more physical problems.

Books on the topic "Probability bounds analysis"

1

Kalashnikov, Vladimir Vi͡acheslavovich. Geometric Sums: Bounds for Rare Events with Applications: Risk Analysis, Reliability, Queueing. Dordrecht: Kluwer Academic, 1997.

2

Kalashnikov, Vladimir. Geometric Sums: Bounds for Rare Events with Applications: Risk Analysis, Reliability, Queueing. Dordrecht: Springer Netherlands, 1997.

3

Rüschendorf, Ludger. Mathematical Risk Analysis: Dependence, Risk Bounds, Optimal Allocations and Portfolios. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

4

Sims, Robert, and Daniel Ueltschi, eds. Entropy and the Quantum II: Arizona School of Analysis with Applications, March 15–19, 2010, University of Arizona. Providence, RI: American Mathematical Society, 2011.

5

Rüschendorf, Ludger. Mathematical Risk Analysis: Dependence, Risk Bounds, Optimal Allocations and Portfolios. Springer, 2013.

6

Ashby, F. Gregory, and Fabian A. Soto. Multidimensional Signal Detection Theory. Edited by Jerome R. Busemeyer, Zheng Wang, James T. Townsend, and Ami Eidels. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199957996.013.2.

Abstract:
Multidimensional signal detection theory is a multivariate extension of signal detection theory that makes two fundamental assumptions, namely that every mental state is noisy and that every action requires a decision. The most widely studied version is known as general recognition theory (GRT). General recognition theory assumes that the percept on each trial can be modeled as a random sample from a multivariate probability distribution defined over the perceptual space. Decision bounds divide this space into regions that are each associated with a response alternative. General recognition theory rigorously defines and tests a number of important perceptual and cognitive conditions, including perceptual and decisional separability and perceptual independence. General recognition theory has been used to analyze data from identification experiments in two ways: (1) fitting and comparing models that make different assumptions about perceptual and decisional processing, and (2) testing assumptions by computing summary statistics and checking whether these satisfy certain conditions. Much has been learned recently about the neural networks that mediate the perceptual and decisional processing modeled by GRT, and this knowledge can be used to improve the design of experiments where a GRT analysis is anticipated.

Book chapters on the topic "Probability bounds analysis"

1

Kwon, Joong Sung, and Ronald Pyke. "Probability Bounds for Product Poisson Processes." In Athens Conference on Applied Probability and Time Series Analysis, 137–58. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_10.

2

Zheng, Kailiang, Helen H. Lou, and Yinlun Huang. "Sustainability Under Severe Uncertainty: A Probability-Bounds-Analysis-Based Approach." In Treatise on Sustainability Science and Engineering, 51–66. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-6229-9_4.

3

Grous, Ammar. "Analysis Elements for Determining the Probability of Rupture by Simple Bounds." In Fracture Mechanics 2, 69–86. Hoboken, NJ USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118580028.ch2.

4

Moosbrugger, Marcel, Ezio Bartocci, Joost-Pieter Katoen, and Laura Kovács. "Automated Termination Analysis of Polynomial Probabilistic Programs." In Programming Languages and Systems, 491–518. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72019-3_18.

Abstract:
The termination behavior of probabilistic programs depends on the outcomes of random assignments. Almost sure termination (AST) is concerned with the question whether a program terminates with probability one on all possible inputs. Positive almost sure termination (PAST) focuses on termination in a finite expected number of steps. This paper presents a fully automated approach to the termination analysis of probabilistic while-programs whose guards and expressions are polynomial expressions. As proving (positive) AST is undecidable in general, existing proof rules typically provide sufficient conditions. These conditions mostly involve constraints on supermartingales. We consider four proof rules from the literature and extend these with generalizations of existing proof rules for (P)AST. We automate the resulting set of proof rules by effectively computing asymptotic bounds on polynomials over the program variables. These bounds are used to decide the sufficient conditions – including the constraints on supermartingales – of a proof rule. Our software tool Amber can thus check AST, PAST, as well as their negations for a large class of polynomial probabilistic programs, while carrying out the termination reasoning fully with polynomial witnesses. Experimental results show the merits of our generalized proof rules and demonstrate that Amber can handle probabilistic programs that are out of reach for other state-of-the-art tools.
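The AST/PAST distinction in this abstract has a classic witness, sketched below (a standard textbook example, not taken from the chapter): the symmetric random walk terminates with probability one, yet its expected number of steps is infinite, so it is AST but not PAST.

```python
import random

def symmetric_walk(x: int) -> int:
    """Symmetric random walk on the integers, absorbed at 0.

    Terminates with probability 1 (AST), but the expected number of
    steps is infinite (not PAST); individual runs can be very long.
    """
    steps = 0
    while x > 0:
        x += 1 if random.random() < 0.5 else -1
        steps += 1
    return steps

print([symmetric_walk(1) for _ in range(5)])
```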
5

Stavrakakis, P., and P. Valettas. "On the Geometry of Log-Concave Probability Measures with Bounded Log-Sobolev Constant." In Asymptotic Geometric Analysis, 359–80. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-6406-8_17.

6

Spel, Jip, Sebastian Junges, and Joost-Pieter Katoen. "Finding Provably Optimal Markov Chains." In Tools and Algorithms for the Construction and Analysis of Systems, 173–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_10.

Abstract:
Parametric Markov chains (pMCs) are Markov chains with symbolic (aka: parametric) transition probabilities. They are a convenient operational model to treat robustness against uncertainties. A typical objective is to find the parameter values that maximize the reachability of some target states. In this paper, we consider automatically proving robustness, that is, an ε-close upper bound on the maximal reachability probability. The result of our procedure actually provides an almost-optimal parameter valuation along with this upper bound. We propose to tackle these ETR-hard problems by a tight combination of two significantly different techniques: monotonicity checking and parameter lifting. The former builds a partial order on states to check whether a pMC is (local or global) monotonic in a certain parameter, whereas parameter lifting is an abstraction technique based on the iterative evaluation of pMCs without parameter dependencies. We explain our novel algorithmic approach and experimentally show that we significantly improve the time to determine almost-optimal synthesis.
7

Cardaliaguet, Pierre, François Delarue, Jean-Michel Lasry, and Pierre-Louis Lions. "Convergence of the Nash System." In The Master Equation and the Convergence Problem in Mean Field Games, 159–74. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691190716.003.0006.

Abstract:
This chapter addresses the convergence problem and is devoted to the convergence of the Nash system. It contains several results on differential calculus on the space of probability measures, together with an Itô formula for functionals of a process taking values in the space of probability measures. For simplicity, most of the analysis provided in the chapter is on the torus, but the method is robust enough to accommodate the nonperiodic setting. The chapter also shows that monotonicity plays no role in the proofs of certain theorems; basically, only the global Lipschitz properties of H and DpH, together with the nondegeneracy of the diffusions and the various bounds obtained for the solution of the master equation and its derivatives, matter.
8

Kaporis, Alexis C., and Lefteris M. Kirousis. "Proving Conditional Randomness using the Principle of Deferred Decisions." In Computational Complexity and Statistical Physics. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195177374.003.0016.

Abstract:
In order to prove that a certain property holds asymptotically for a restricted class of objects such as formulas or graphs, one may apply a heuristic on a random element of the class, and then prove by probabilistic analysis that the heuristic succeeds with high probability. This method has been used to establish lower bounds on thresholds for desirable properties such as satisfiability and colorability: lower bounds for the 3-SAT threshold were discussed briefly in the previous chapter. The probabilistic analysis depends on analyzing the mean trajectory of the heuristic—as we have seen in chapter 3—and in parallel, showing that in the asymptotic limit the trajectory’s properties are strongly concentrated about their mean. However, the mean trajectory analysis requires that certain random characteristics of the heuristic’s starting sample are retained throughout the trajectory. We propose a methodology in this chapter to determine the conditional that should be imposed on a random object, such as a conjunctive normal form (CNF) formula or a graph, so that conditional randomness is retained when we run a given algorithm. The methodology is based on the principle of deferred decisions. The essential idea is to consider information about the object as being stored in “small pieces,” in separate registers. The contents of the registers pertaining to the conditional are exposed, while the rest remain unexposed. Having separate registers for different types of information prevents exposing information unnecessarily. We use this methodology to prove various randomness invariance results, one of which answers a question posed by Molloy [402].
9

Baker, John. "Meeting in the Shadow of Heroes? Personal Names and Assembly Places." In Power and Place in Europe in the Early Middle Ages, 37–63. British Academy, 2019. http://dx.doi.org/10.5871/bacad/9780197266588.003.0002.

Abstract:
This chapter examines the likelihood that celebrated individuals were commemorated in the names of assembly sites as part of a display of political authority or cultural affiliation. Focusing primarily on the names of Domesday hundreds, it draws comparisons with the personal names in other well-established Anglo-Saxon corpora (including charter bounds, narrative sources, Domesday Book and place-names), in order to assess the social context of those individuals commemorated in hundred-names. The chapter then evaluates the probability that such names could carry specific political or cultural resonance at the time of naming, and there are clear indications that this may sometimes have been the case, perhaps especially in the first half of the 10th century. While the evidence implies that the hundred-names arose in a number of different circumstances, the analysis suggests that reference to heroic figures may have been one motivating factor in the naming of sites of assembly.
10

Porter, Theodore M. "The Errors of Art and Nature." In The Rise of Statistical Thinking, 1820-1900, 97–115. Princeton University Press, 2020. http://dx.doi.org/10.23943/princeton/9780691208428.003.0005.

Abstract:
This chapter analyzes the law of facility of errors. All the early applications of the error law could be understood in terms of a binomial converging to an exponential, as in Abraham De Moivre's original derivation. All but Joseph Fourier's law of heat, which was never explicitly tied to mathematical probability except by analogy, were compatible with the classical interpretation of probability. Just as probability was a measure of uncertainty, this exponential function governed the chances of error. It was not really an attribute of nature, but only a measure of human ignorance—of the imperfection of measurement techniques or the inaccuracy of inference from phenomena that occur in finite numbers to their underlying causes. Moreover, the mathematical operations used in conjunction with it had a single purpose: to reduce the error to the narrowest bounds possible. With Adolphe Quetelet, all that began to change, and a wider conception of statistical mathematics became possible. When Quetelet announced in 1844 that the astronomer's error law applied also to the distribution of human features such as height and girth, he did more than add one more set of objects to the domain of this probability function; he also began to break down its exclusive association with error.

Conference papers on the topic "Probability bounds analysis"

1

Hu, Bin, and Peter Seiler. "Probability bounds for false alarm analysis of fault detection systems." In 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2013. http://dx.doi.org/10.1109/allerton.2013.6736633.

2

Du, Xiaoping. "Interval Reliability Analysis." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-34582.

Abstract:
Traditional reliability analysis uses probability distributions to calculate reliability. In many engineering applications, some nondeterministic variables are known within intervals. When both random variables and interval variables are present, a single probability measure, namely, the probability of failure or reliability, is not available in general; but its lower and upper bounds exist. The mixture of distributions and intervals makes reliability analysis more difficult. Our goal is to investigate computational tools to quantify the effects of random and interval inputs on reliability associated with performance characteristics. The proposed reliability analysis framework consists of two components — direct reliability analysis and inverse reliability analysis. The algorithms are based on the First Order Reliability Method and many existing reliability analysis methods. The efficient and robust improved HL-RF method is further developed to accommodate interval variables. To deal with interval variables for black-box functions, nonlinear optimization is used to identify the extreme values of a performance characteristic. The direct reliability analysis provides bounds of a probability of failure; the inverse reliability analysis computes the bounds of the percentile value of a performance characteristic given reliability. One engineering example is provided.
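The bounds described in this abstract can be visualized with a brute-force sampling sketch (assumed toy performance function and inputs; the paper itself develops a FORM-based algorithm, not sampling): for each realization of the random variable, take the extremes over the interval variable, which yields lower and upper bounds on the probability of failure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random input X ~ N(10, 2); interval input d known only to lie in [1.0, 1.5].
d_lo, d_hi = 1.0, 1.5

def g(x, d):
    return x - 8.0 * d  # toy performance function; failure when g < 0

x = rng.normal(10.0, 2.0, 200_000)

# g is monotone (decreasing) in d here, so the interval endpoints give the
# extremes; a non-monotone g would need an inner optimization over [d_lo, d_hi].
pf_lower = np.mean(g(x, d_lo) < 0.0)  # best case of d: fewest failures
pf_upper = np.mean(g(x, d_hi) < 0.0)  # worst case of d: most failures
print(f"probability of failure lies in [{pf_lower:.4f}, {pf_upper:.4f}]")
```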
3

Aughenbaugh, Jason Matthew, Scott Duncan, Christiaan J. J. Paredis, and Bert Bras. "A Comparison of Probability Bounds Analysis and Sensitivity Analysis in Environmentally Benign Design and Manufacture." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99230.

Abstract:
There is growing acceptance in the design community that two types of uncertainty exist: inherent variability and uncertainty that results from a lack of knowledge, which variously is referred to as imprecision, incertitude, irreducible uncertainty, and epistemic uncertainty. There is much less agreement on the appropriate means for representing and computing with these types of uncertainty. Probability bounds analysis (PBA) is a method that represents uncertainty using upper and lower cumulative probability distributions. These structures, called probability boxes or just p-boxes, capture both variability and imprecision. PBA includes algorithms for efficiently computing with these structures under certain conditions. This paper explores the advantages and limitations of PBA in comparison to traditional decision analysis with sensitivity analysis in the context of environmentally benign design and manufacture. The example of the selection of an oil filter involves multiple objectives and multiple uncertain parameters. These parameters are known with varying levels of uncertainty, and different assumptions about the dependencies between variables are made. As such, the example problem provides a rich context for exploring the applicability of PBA and sensitivity analysis to making engineering decisions under uncertainty. The results reveal specific advantages and limitations of both methods. The appropriate choice of an analysis depends on the exact decision scenario.
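A p-box of the kind compared in this paper can be written down directly. The sketch below rests on an illustrative assumption (a normal variable whose mean is known only to an interval) and shows how the bounding CDFs yield an interval, rather than a point value, for P(X ≤ c):

```python
from scipy import stats

def pbox_cdf_bounds(x, mu_lo=9.0, mu_hi=11.0, sigma=1.0):
    """Bounding CDFs for X ~ N(mu, sigma) with mu known only in [mu_lo, mu_hi].

    The CDF decreases as mu increases, so the rightmost member of the family
    (mu = mu_hi) gives the lower bound and the leftmost (mu = mu_lo) the upper.
    """
    lower_cdf = stats.norm.cdf(x, loc=mu_hi, scale=sigma)
    upper_cdf = stats.norm.cdf(x, loc=mu_lo, scale=sigma)
    return lower_cdf, upper_cdf

c = 10.5
lo, hi = pbox_cdf_bounds(c)
print(f"P(X <= {c}) lies in [{lo:.3f}, {hi:.3f}]")  # imprecision plus variability
```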
4

Aughenbaugh, Jason Matthew, and Christiaan J. J. Paredis. "Probability Bounds Analysis as a General Approach to Sensitivity Analysis in Decision Making Under Uncertainty." In SAE World Congress & Exhibition. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2007. http://dx.doi.org/10.4271/2007-01-1480.

5

Du, Xiaoping. "Uncertainty Analysis With Probability and Evidence Theories." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99078.

Abstract:
Both aleatory and epistemic uncertainties exist in engineering applications. Aleatory uncertainty (objective or stochastic uncertainty) describes the inherent variation associated with a physical system or environment. Epistemic uncertainty, on the other hand, is derived from some level of ignorance or incomplete information about a physical system or environment. Aleatory uncertainty associated with parameters is usually modeled by probability theory and has been widely researched and applied by industry, academia, and government. The study of epistemic uncertainty in engineering has recently started. The feasibility of the unified uncertainty analysis that deals with both types of uncertainties is investigated in this paper. The input parameters with aleatory uncertainty are modeled with probability distributions by probability theory, and the input parameters with epistemic uncertainty are modeled with basic probability assignment by evidence theory. The effect of the mixture of both aleatory and epistemic uncertainties on the model output is modeled with belief and plausibility measures (or the lower and upper probability bounds). It is shown that the calculation of belief measure or plausibility measure can be converted to the calculation of the minimum or maximum probability of failure over each of the mutually exclusive subsets of the input parameters with epistemic uncertainty. A First Order Reliability Method (FORM) based algorithm is proposed to conduct the unified uncertainty analysis. Two examples are given for the demonstration. Future research directions are derived from the discussions in this paper.
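The conversion described in this abstract, belief and plausibility of failure as mass-weighted minima and maxima of the failure probability over the focal elements of the epistemic variable, can be sketched as follows (hypothetical limit state and basic probability assignment, and plain Monte Carlo in place of the paper's FORM-based algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Aleatory input X ~ N(0, 1); epistemic input Y with a basic probability
# assignment over two interval focal elements.
focal_elements = [((0.0, 0.5), 0.6), ((0.5, 1.0), 0.4)]  # (interval, mass)

def limit_state(x, y):
    return 2.0 - x - y  # failure when g <= 0

x = rng.normal(0.0, 1.0, 100_000)

belief, plausibility = 0.0, 0.0
for (y_lo, y_hi), mass in focal_elements:
    # g is monotone in y, so the extremes of the failure probability over the
    # focal element occur at its endpoints; in general an inner min/max
    # optimization over the interval would be needed.
    pf_lo = np.mean(limit_state(x, y_lo) <= 0.0)
    pf_hi = np.mean(limit_state(x, y_hi) <= 0.0)
    belief += mass * min(pf_lo, pf_hi)
    plausibility += mass * max(pf_lo, pf_hi)

print(f"failure probability bounds: Bel = {belief:.4f}, Pl = {plausibility:.4f}")
```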
6

Marti, K. "Approximation and Derivatives of Probability Functions in Probabilistic Structural Analysis and Design." In ASME 1995 Design Engineering Technical Conferences collocated with the ASME 1995 15th International Computers in Engineering Conference and the ASME 1995 9th Annual Engineering Database Symposium. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/detc1995-0048.

Abstract:
Yield stresses, allowable stresses, moment capacities (plastic moments), external loadings, and manufacturing errors are not given as fixed quantities in practice, but have to be modelled as random variables with a certain joint probability distribution. Hence, problems from limit (collapse) load analysis or plastic analysis and from plastic and elastic design of structures are treated in the framework of stochastic optimization. Using especially reliability-oriented optimization methods, the behavioral constraints are quantified by means of the corresponding probability p_s of survival. Lower bounds for p_s are obtained by selecting certain redundants in the vector of internal forces; moreover, upper bounds for p_s are constructed by considering a pair of dual linear programs for the optimizational representation of the yield or safety conditions. Whereas p_s can be computed e.g. by sampling methods or by asymptotic expansion techniques based on Laplace integral representations of certain multiple integrals, efficient techniques for the computation of the sensitivities (of various orders) of p_s with respect to input or design variables have yet to be developed. Hence several new techniques are suggested for the numerical computation of derivatives of p_s.
7

Ferson, Scott. "Probability Bounds Analysis Solves the Problem of Incomplete Specification in Probabilistic Risk and Safety Assessments." In Ninth United Engineering Foundation Conference on Risk-Based Decisionmaking in Water Resources. Reston, VA: American Society of Civil Engineers, 2001. http://dx.doi.org/10.1061/40577(306)16.

8

Xu, Yanwen, and Pingfeng Wang. "Sequential Sampling Based Reliability Analysis for High Dimensional Rare Events With Confidence Intervals." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22146.

Abstract:
Accurately analyzing rare failure events at an affordable computational cost is often challenging in many engineering applications, and this is especially true for problems with high-dimensional system inputs. The extremely low probabilities of occurrence for those rare events often lead to large probability estimation errors and low computational efficiency. Thus, it is vital to develop advanced probability analysis methods that are capable of providing robust estimations of rare event probabilities with narrow confidence bounds. Generally, confidence intervals of an estimator can be established based on the central limit theorem, but one of the critical obstacles is the low computational efficiency, since the widely used Monte Carlo method often requires a large number of simulation samples to derive a reasonably narrow confidence interval. This paper develops a new probability analysis approach that can be used to derive estimates of rare event probabilities efficiently, with narrow estimation bounds, for high-dimensional problems. The asymptotic behavior of the developed estimator has also been proved theoretically without imposing strong assumptions. Further, an asymptotic confidence interval is established for the developed estimator. The presented study offers important insights into robust estimation of the probability of occurrence of rare events. The accuracy and computational efficiency of the developed technique are assessed with numerical and engineering case studies. Case study results have demonstrated that narrow bounds can be built efficiently using the developed approach, and that the true values have always been located within the estimation bounds, indicating good estimation accuracy along with significantly improved efficiency.
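As a baseline for the confidence bounds discussed in this abstract, here is the plain Monte Carlo estimator with a central-limit-theorem interval (an assumed toy rare event, not the paper's method); its inefficiency for rare events is exactly what the sequential sampling approach is designed to overcome:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rare event: a standard normal exceeding 4 (true probability ~3.2e-5).
n = 2_000_000
hits = rng.normal(size=n) > 4.0

p_hat = hits.mean()
se = np.sqrt(p_hat * (1.0 - p_hat) / n)      # CLT standard error
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)  # 95% confidence interval
print(f"p_hat = {p_hat:.2e}, 95% CI = [{ci[0]:.2e}, {ci[1]:.2e}]")
```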
9

Jie, Yongshi, Wei Wang, Xue Bai, and Yongxiang Li. "Uncertainty analysis based on probability bounds in probabilistic risk assessment of high microgravity science experiment system." In 2016 11th International Conference on Reliability, Maintainability and Safety (ICRMS). IEEE, 2016. http://dx.doi.org/10.1109/icrms.2016.8050109.

10

Gray, A., A. Wimbush, M. De Angelis, P. O. Hristov, E. Miralles-Dolz, D. Calleja, and R. Rocchetta. "Bayesian Calibration and Probability Bounds Analysis Solution to the NASA 2020 UQ Challenge on Optimization under Uncertainty." In Proceedings of the 29th European Safety and Reliability Conference (ESREL). Singapore: Research Publishing Services, 2020. http://dx.doi.org/10.3850/978-981-14-8593-0_5520-cd.


Reports on the topic "Probability bounds analysis"

1

Oberkampf, William Louis, W. Troy Tucker, Jianzhong Zhang, Lev Ginzburg, Daniel J. Berleant, Scott Ferson, Janos Hajagos, and Roger B. Nelsen. Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis. Office of Scientific and Technical Information (OSTI), October 2004. http://dx.doi.org/10.2172/919189.

2

Mullahy, John. Individual Results May Vary: Elementary Analytics of Inequality-Probability Bounds, with Applications to Health-Outcome Treatment Effects. Cambridge, MA: National Bureau of Economic Research, July 2017. http://dx.doi.org/10.3386/w23603.

