Dissertations / Theses on the topic 'Probabilistic analysis'




Consult the top 50 dissertations / theses for your research on the topic 'Probabilistic analysis.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

POZZI, FEDERICO ALBERTO. "Probabilistic Relational Models for Sentiment Analysis in Social Networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/65709.

Full text
Abstract:
The huge amount of textual data on the Web has grown rapidly in the last few years, creating unique contents of massive dimensions that constitute fertile ground for Sentiment Analysis. In particular, social networks represent an emerging and challenging sector where the natural language expressions of people can be easily reported through short but meaningful text messages. This unprecedented content of huge dimensions needs to be efficiently and effectively analyzed to create actionable knowledge for decision making processes. A key piece of information that can be grasped from social environments relates to the polarity of text messages, i.e. the sentiment (positive, negative or neutral) that the messages convey. However, most of the works regarding polarity classification usually consider text as the unique information used to infer sentiment, not taking into account that social networks are actually networked environments. A representation of real world data where instances are considered as homogeneous, independent and identically distributed (i.i.d.) leads to a substantial loss of information and to the introduction of a statistical bias. For this reason, the combination of content and relationships is a core task of the recent literature on Sentiment Analysis, where friendships are usually investigated to model the principle of homophily (a contact among similar people occurs at a higher rate than among dissimilar people). However, paired with the assumption of homophily, constructuralism explains how social relationships evolve via dynamic and continuous interactions as the knowledge and behavior that two actors share increase. Similarity among users grounded in constructuralism appears to be a much more powerful force than interpersonal influence within the friendship network. As a first contribution, this Ph.D. thesis proposes the Approval Network as a novel graph representation to jointly model homophily and constructuralism, intended to better represent contagion on social networks. Starting from the classical state-of-the-art methodologies where only text is used to infer the polarity of social network messages, this thesis presents novel Probabilistic Relational Models at user, document and aspect level which integrate the structural information to improve classification performance. The integration is particularly useful when textual features do not provide sufficient or explicit information to infer sentiment (e.g., "I agree!"). The experimental investigations reveal that incorporating network information through approval relations can lead to statistically significant improvements over the performance of complex learning approaches based only on textual features.
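To make the idea concrete, here is a minimal sketch (not the thesis's actual Probabilistic Relational Models) of how a text-only polarity score can be combined with evidence propagated over an approval graph; the toy graph, the scores and the simple smoothing rule are illustrative assumptions only.

import numpy as np

# Toy approval graph: A[i, j] = 1 if user i approved content by user j.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Text-only probability that each user's messages are positive
# (e.g., output of a lexicon or a supervised classifier).
p_text = np.array([0.9, 0.55, 0.5, 0.2])

# Simple relational smoothing: repeatedly mix each user's score with
# the average score of the users they approve (homophily assumption).
alpha = 0.6                                # weight given to the textual evidence
W = A / A.sum(axis=1, keepdims=True)       # row-normalised approval weights
p = p_text.copy()
for _ in range(50):
    p = alpha * p_text + (1 - alpha) * W @ p

print(np.round(p, 3))                      # network-aware polarity scores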
APA, Harvard, Vancouver, ISO, and other styles
2

Crotta, M. "PROBABILISTIC MODELLING IN FOOD SAFETY: A SCIENCE-BASED APPROACH FOR POLICY DECISIONS." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/339138.

Full text
Abstract:
This thesis deals with the use of qualitative and quantitative probabilistic models for animal-derived food safety management. Four unrelated models are presented: three quantitative and one qualitative. Two of the quantitative models concern the risk posed by pathogens in raw milk. In the first study, a probabilistic approach for the inclusion of the variability and the uncertainty in the consumers' habits and the bacterial pathogenic potential is proposed, while the second study demonstrates how overlooking the relationship between storage time and temperature has led to overestimated results in the raw milk-related models published so far, and an equation to address the issue is provided. In the third study, quantitative modelling techniques are used to simulate the dynamics underlying the spread of Campylobacter in broiler flocks and to quantify the potential effects that different on-farm mitigation strategies or management measures have on the microbial load in the intestine of infected birds at the end of the rearing period. In the qualitative study, a general approach for the estimation of the likelihoods of introduction of live parasites into aquaculture facilities and of the commercialization of infested product is outlined, using the example of Anisakids in farmed Atlantic salmon.
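The point made by the second raw-milk study, that ignoring the dependence between storage time and temperature inflates the predicted bacterial growth, can be illustrated with a toy Monte Carlo sketch; the growth model and every parameter value below are hypothetical placeholders, not those of the thesis.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical storage temperature (deg C) and a growth-rate model
# mu(T) of a simple square-root (Ratkowsky-type) form.
temp = rng.normal(6.0, 2.0, n).clip(0.5, 15.0)

def log10_growth(temp_c, time_h):
    b, t_min = 0.02, -1.0                     # illustrative parameters
    mu = (b * (temp_c - t_min)) ** 2          # log10 CFU per hour
    return mu * time_h

# (a) Independent sampling: storage time drawn regardless of temperature.
time_indep = rng.uniform(12, 96, n)

# (b) Correlated sampling: milk kept warmer tends to be consumed sooner.
time_corr = (96 - 6.0 * temp).clip(12, 96) * rng.uniform(0.5, 1.0, n)

print("mean log10 increase, independent :", log10_growth(temp, time_indep).mean())
print("mean log10 increase, correlated  :", log10_growth(temp, time_corr).mean())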
APA, Harvard, Vancouver, ISO, and other styles
3

SCOZZESE, FABRIZIO. "AN EFFICIENT PROBABILISTIC FRAMEWORK FOR SEISMIC RISK ANALYSIS OF STRUCTURAL SYSTEMS EQUIPPED WITH LINEAR AND NONLINEAR VISCOUS DAMPERS." Doctoral thesis, Università degli Studi di Camerino, 2018. http://hdl.handle.net/11581/429547.

Full text
Abstract:
Seismic passive protection with supplemental damping devices represents an efficient strategy to produce resilient structural systems with improved seismic performance and notably reduced post-earthquake consequences. Such a strategy indeed offers several advantages with respect to the ordinary seismic design philosophy: structural damage is prevented; the safety of the occupants is ensured and the system remains operational both during and right after the earthquake; no major retrofit interventions are needed, only a post-earthquake inspection (and, if necessary, replacement) of the dissipation devices; and a noticeable reduction of both direct and indirect outlays is achieved. However, structural systems equipped with seismic control devices (dampers) may show potentially limited robustness, since an unexpected early disruption of the dampers may lead to a progressive collapse of the actually non-ductile system. Although the most advanced international seismic codes acknowledge this issue and require dampers to have higher safety margins against failure, they only provide simplified approaches to cope with the problem, often consisting of general demand amplification rules which are not tailored to the actual needs of different device typologies and which lead to reliability levels that are not explicitly declared. The research activity carried out within this thesis stems from the need to fill the gaps still present in the international regulatory framework, and to respond to the scarcity of specific probabilistic studies geared to characterizing and understanding the probabilistic seismic response of such systems down to very low failure probabilities. In particular, as a first step towards this goal, the present work addresses the seismic risk of structures with fluid viscous dampers, a simple and widely used class of dissipation devices. A robust probabilistic framework has been defined for the purposes of the present work, made up of the combination of an advanced probabilistic tool for solving reliability problems, namely Subset Simulation (with Markov chain Monte Carlo and Metropolis-like algorithms), and a stochastic ground motion model for statistical seismic hazard characterization. The seismic performance of the system is described by means of demand hazard curves, providing the mean annual frequency of exceeding any specified threshold demand value for all the relevant global and local Engineering Demand Parameters (EDPs). A wide range of performance levels is monitored, encompassing the serviceability conditions and the ultimate limit states, up to very rare performance demand levels (with mean annual frequency of exceedance around 10^-6) at which the seismic reliability shall be checked in order to confer on the system an adequate margin of safety against seismic events rarer than the design one. Some original contributions regarding the methodological approaches have been obtained by an efficient combination of the common conditional probabilistic methods (i.e., multiple-stripe and cloud analysis) with a stochastic earthquake model, in which Subset Simulation is exploited to efficiently generate both the seismic hazard curve and the ground motion samples for structural analysis purposes. The accuracy of the proposed strategy is assessed by comparing the achieved seismic risk estimates with those provided via Subset Simulation, the latter being assumed as the reference reliability method.
Furthermore, a reliability-based optimization method is proposed as a powerful tool for investigating the sensitivity of the seismic risk to variable model parameters. Such a method proves particularly useful when a proper statistical characterization of the model parameters is not available. The proposed probabilistic framework is applied to a set of single-degree-of-freedom damped models to carry out an extensive parametric analysis, and to a multi-story steel building with linear and nonlinear viscous dampers for the purposes of a deeper investigation. The influence of the viscous dampers' nonlinearity level on the seismic risk of such systems is investigated. The variability of the viscous constitutive parameters due to the tolerance allowed in the devices' quality control and production tests is also accounted for, and the consequent effects on the seismic performance are evaluated. The reliability of the simplified approaches proposed by the main international seismic codes for damper design is assessed, the main regulatory gaps are highlighted, and proposals for improvement are given as well. Results from this probabilistic investigation contribute to the development of more reliable design procedures for seismic passive protection strategies.
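A compact sketch of the kind of rare-event machinery such a framework relies on: Subset Simulation with a modified Metropolis sampler, here applied to a placeholder demand function rather than a nonlinear time-history analysis; the threshold and all parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def demand(u):
    # Placeholder structural model: maps standard-normal inputs to a scalar
    # demand (e.g., peak damper force); a real model would be a nonlinear
    # time-history analysis driven by stochastic ground motion.
    return u.sum(axis=-1)

def subset_simulation(d=2, n=2000, p0=0.1, y_fail=7.0, sigma_prop=1.0):
    """Estimate P(demand > y_fail) with Subset Simulation / modified Metropolis."""
    u = rng.standard_normal((n, d))
    y = demand(u)
    p_f, n_seed = 1.0, int(p0 * n)
    for _ in range(20):                        # at most 20 conditional levels
        idx = np.argsort(y)[::-1]              # largest demands first
        y_level = y[idx[n_seed - 1]]           # intermediate threshold
        if y_level >= y_fail:                  # final level reached
            return p_f * np.mean(y > y_fail)
        p_f *= p0
        seeds_u, seeds_y = u[idx[:n_seed]], y[idx[:n_seed]]
        chains = n // n_seed
        u_new, y_new = [], []
        for us, ys in zip(seeds_u, seeds_y):
            for _ in range(chains):
                cand = us + sigma_prop * rng.standard_normal(d)
                # Accept component-wise w.r.t. the standard normal prior ...
                acc = rng.random(d) < np.exp(0.5 * (us**2 - cand**2))
                prop = np.where(acc, cand, us)
                yp = demand(prop)
                if yp > y_level:               # ... and stay inside the current level
                    us, ys = prop, yp
                u_new.append(us.copy()); y_new.append(ys)
        u, y = np.array(u_new), np.array(y_new)
    return p_f * np.mean(y > y_fail)

print("P(demand > 7) ~", subset_simulation())
# Exact value for this toy demand: 1 - Phi(7/sqrt(2)), about 3.7e-7.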
APA, Harvard, Vancouver, ISO, and other styles
4

Tagliaferri, Lorenza. "Probabilistic Envelope Curves for Extreme Rainfall Events - Curve Inviluppo Probabilistiche per Precipitazioni Estreme." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amslaurea.unibo.it/99/.

Full text
Abstract:
A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The main aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing the Depth-Duration Envelope Curves (DDEC), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration and large T values. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two different national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min. and 1, 3, 9 and 24 hrs. obtained for 700 Austrian raingauges, and the Annual Maximum Series (AMS) of rainfall depths with durations spanning from 5 min. to 24 hrs. collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires the quantification of the equivalent number of independent data which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This work proposes a possible approach to address this problem.
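A toy illustration of how an empirical depth-duration envelope can be constructed as an upper bound in log-log space; the synthetic data and the fitted form are assumptions for illustration only, and the thesis's recurrence-interval estimator (based on the equivalent number of independent data) is not reproduced here.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic annual-maximum rainfall depths (mm) at 50 sites over 30 years,
# for a set of durations (hours); real data would come from raingauges.
durations = np.array([0.5, 1.0, 3.0, 9.0, 24.0])
depth = rng.gumbel(loc=10 * durations**0.4, scale=4 * durations**0.3,
                   size=(50, 30, len(durations)))

# Regional record depth for each duration (the raw envelope points).
record = depth.max(axis=(0, 1))

# Depth-Duration Envelope Curve: a straight line in log-log space,
# log10(depth) = a + b*log10(duration), shifted upward to bound all records.
x, y = np.log10(durations), np.log10(record)
b, a = np.polyfit(x, y, 1)
a += np.max(y - (a + b * x))      # raise the intercept until nothing exceeds it

print("DDEC: depth ~ %.1f * duration^%.2f (mm, hours)" % (10**a, b))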
APA, Harvard, Vancouver, ISO, and other styles
5

Saad, Feras Ahmad Khaled. "Probabilistic data analysis with probabilistic programming." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113164.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 48-50).
Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare. This thesis introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
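A minimal sketch of the CGPM abstraction as described in the abstract, i.e. objects exposing simulate and logpdf operations that can be composed; the class names and method signatures are hypothetical and do not reproduce the actual BayesDB/CGPM interface.

import math
import random

class CGPM:
    """Sketch of a composable generative population model: anything that can
    simulate and score query variables given evidence."""
    def simulate(self, rowid, targets, evidence):
        raise NotImplementedError
    def logpdf(self, rowid, query, evidence):
        raise NotImplementedError

class NormalRegression(CGPM):
    # y | x ~ Normal(w*x + b, sigma): a trivially simple discriminative CGPM.
    def __init__(self, w=2.0, b=1.0, sigma=0.5):
        self.w, self.b, self.sigma = w, b, sigma
    def simulate(self, rowid, targets, evidence):
        mu = self.w * evidence["x"] + self.b
        return {"y": random.gauss(mu, self.sigma)}
    def logpdf(self, rowid, query, evidence):
        mu = self.w * evidence["x"] + self.b
        z = (query["y"] - mu) / self.sigma
        return -0.5 * z * z - math.log(self.sigma * math.sqrt(2 * math.pi))

class PriorX(CGPM):
    # x ~ Normal(0, 1): a generative CGPM for the input variable.
    def simulate(self, rowid, targets, evidence):
        return {"x": random.gauss(0.0, 1.0)}

# Composition: chain the output of one CGPM into the evidence of another.
prior, reg = PriorX(), NormalRegression()
row = prior.simulate(0, ["x"], {})
row.update(reg.simulate(0, ["y"], row))
print(row, "logpdf(y|x) =", round(reg.logpdf(0, {"y": row["y"]}, row), 3))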
by Feras Ahmad Khaled Saad.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
6

Shirmohammadi, Mahsa. "Qualitative analysis of synchronizing probabilistic systems." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0054/document.

Full text
Abstract:
Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, and are hence well established for modelling the interactions between a controller and its randomly responding environment. An MDP can be mathematically viewed as a one-and-a-half-player stochastic game played in rounds, in which the controller chooses an action and the environment chooses a successor according to a fixed probability distribution. There are two incomparable views on the behavior of an MDP once the strategic choices are fixed. In the traditional view, an MDP is a generator of sequences of states, called the state-outcome; the winning condition of the player is thus expressed as a set of desired sequences of states that are visited during the game, e.g. Borel conditions such as reachability. The computational complexity of the related decision problems and the memory requirements of winning strategies for state-outcome conditions are well studied. Recently, MDPs have also been viewed as generators of sequences of probability distributions over states, called the distribution-outcome. We introduce synchronizing conditions defined on distribution-outcomes, which intuitively require that the probability mass accumulates in some (group of) state(s), possibly in the limit. A probability distribution is p-synchronizing if the probability mass is at least p in some state, and a sequence of probability distributions is always, eventually, weakly, or strongly p-synchronizing if respectively all, some, infinitely many, or all but finitely many distributions in the sequence are p-synchronizing. For each synchronizing mode, an MDP can be (i) sure winning if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning if there is a strategy that produces a sequence that is, for all epsilon > 0, (1-epsilon)-synchronizing; (iii) limit-sure winning if for all epsilon > 0, there is a strategy that produces a (1-epsilon)-synchronizing sequence. We consider the problem of deciding whether an MDP is winning, for each synchronizing and winning mode: we establish matching upper and lower complexity bounds for these problems, as well as the memory requirements for optimal winning strategies. As a further contribution, we study synchronization in probabilistic automata (PAs), which are a kind of MDP where controllers are restricted to use only word-strategies, i.e. they have no ability to observe the history of the system execution, only the number of choices made so far. The synchronizing language of a PA is then the set of all synchronizing word-strategies: we establish the computational complexity of the emptiness and universality problems for all synchronizing languages in all winning modes. We carry over the results for synchronizing problems from MDPs and PAs to two-player turn-based games and non-deterministic finite automata. Along with the main results, we establish new complexity results for alternating finite automata over a one-letter alphabet. In addition, we study different variants of synchronization for timed and weighted automata, as two instances of infinite-state systems.
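A small sketch of the distribution-outcome view and of an "eventually p-synchronizing" check on a toy MDP under a fixed word-strategy; the transition matrices are invented for illustration.

import numpy as np

# A tiny MDP: 3 states, 2 actions; P[a][s, s'] = transition probability.
P = {
    "a": np.array([[0.5, 0.5, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.5, 0.5]]),
    "b": np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.0, 0.0, 1.0]]),
}

def distribution_outcome(strategy, d0, steps):
    """Sequence of distributions over states under a word-strategy
    (the controller only counts steps, as for probabilistic automata)."""
    dists, d = [d0], d0
    for t in range(steps):
        d = d @ P[strategy(t)]
        dists.append(d)
    return dists

def is_eventually_p_synchronizing(dists, p):
    # "Eventually p-synchronizing": some distribution puts mass >= p in one state.
    return any(d.max() >= p for d in dists)

d0 = np.array([1/3, 1/3, 1/3])
word = lambda t: "a"                      # always play action a
seq = distribution_outcome(word, d0, 20)
print([np.round(d, 3) for d in seq[:4]])
print("eventually 0.99-synchronizing:", is_eventually_p_synchronizing(seq, 0.99))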
APA, Harvard, Vancouver, ISO, and other styles
7

Baier, Christel, Benjamin Engel, Sascha Klüppelholz, Steffen Märcker, Hendrik Tews, and Marcus Völp. "A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-129917.

Full text
Abstract:
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme suggested by Nicholas Mc Guire which avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability for data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on the formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained by the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. Moreover, the analysis showed several other quantitative properties of the PWCS protocol.
APA, Harvard, Vancouver, ISO, and other styles
8

Munch, Mélanie. "Améliorer le raisonnement dans l'incertain en combinant les modèles relationnels probabilistes et la connaissance experte." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASB011.

Full text
Abstract:
This thesis focuses on integrating expert knowledge to enhance reasoning under uncertainty. Our goal is to guide the learning of probabilistic relations with expert knowledge for domains described by ontologies. To do so, we propose to couple knowledge bases (KBs) and an object-oriented extension of Bayesian networks, the probabilistic relational models (PRMs). Our aim is to complement statistical learning with expert knowledge in order to learn a model as close as possible to reality and to analyze it quantitatively (with probabilistic relations) and qualitatively (with causal discovery). We developed three algorithms through three distinct approaches, whose main differences lie in their degree of automation and in the integration (or not) of human expert supervision. The originality of our work is the combination of two broadly opposed philosophies: while the Bayesian approach favors the statistical analysis of the given data in order to reason with it, the ontological approach is based on the modelling of expert knowledge to represent a domain. Combining the strengths of the two improves both reasoning under uncertainty and the expert knowledge.
APA, Harvard, Vancouver, ISO, and other styles
9

Echard, Benjamin. "Assessment by kriging of the reliability of structures subjected to fatigue stress." Thesis, Clermont-Ferrand 2, 2012. http://www.theses.fr/2012CLF22269/document.

Full text
Abstract:
Traditional procedures for designing structures against fatigue are grounded upon the use of so-called safety factors in an attempt to ensure structural integrity while masking the uncertainties inherent to fatigue. These engineering methods are simple to use and, fortunately, they give satisfactory solutions with regard to safety. However, they provide the designer neither with the structure's safety margin nor with the influence of each design parameter on reliability. Probabilistic approaches are considered in this thesis in order to acquire this information, which is essential for an optimal design against fatigue. A general approach for probabilistic analysis in fatigue is proposed in this manuscript. It relies on the modelling of the uncertainties (load, material properties, geometry, and fatigue curve), and aims at assessing the reliability level of the studied structure in the case of a fatigue failure scenario. Classical reliability methods require a large number of calls to the mechanical model of the structure and are thus not applicable when the model evaluation is time-demanding. A family of methods named AK-RM (Active learning and Kriging-based Reliability Methods) is proposed in this research work in order to solve the reliability problem with a minimum number of mechanical model evaluations. The general approach is applied to two case studies submitted by SNECMA in the framework of the ANR project APPRoFi.
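A sketch in the spirit of active-learning kriging reliability methods (an AK-MCS-style loop with the U learning function), using a cheap placeholder performance function instead of an expensive fatigue model; it is not the thesis's AK-RM implementation, and the function, thresholds and sample sizes are assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):
    # Placeholder performance function (failure if g <= 0); in the thesis this
    # would be an expensive fatigue model (load, material, geometry, S-N curve).
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

# Monte Carlo population in the space of the two standardized random variables.
X_mc = rng.standard_normal((20_000, 2))

# Small initial design of experiments, then active enrichment with the U criterion.
idx = rng.choice(len(X_mc), 12, replace=False)
X_doe, y_doe = X_mc[idx], g(X_mc[idx])
gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0), normalize_y=True)

for it in range(30):
    gp.fit(X_doe, y_doe)
    mu, sd = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)   # low U = sign of g uncertain
    if U.min() >= 2.0:                       # usual AK-MCS stopping criterion
        break
    best = np.argmin(U)                      # evaluate the true model there
    X_doe = np.vstack([X_doe, X_mc[best]])
    y_doe = np.append(y_doe, g(X_mc[best:best + 1]))

print("estimated failure probability:", np.mean(mu <= 0),
      "with", len(y_doe), "model calls")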
APA, Harvard, Vancouver, ISO, and other styles
10

Kassa, Negede Abate. "Probabilistic safety analysis of dams." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-60843.

Full text
Abstract:
Successful dam design involves generating technical solutions that can meet the intended functional objectives and choosing the best one among the alternative technical solutions. The process of choosing the best among the alternative technical solutions depends on evaluating design conformance with technical specifications and reliability standards (such as capacity, environmental, safety, social and political specifications). The process also involves evaluating whether an optimal balance is set between safety and economy. The process of evaluating alternative design solutions requires generating a quantitative expression for lifetime performance and safety. An objective and numerical evaluation of the lifetime performance and safety of dams is an essential but complex undertaking. Its domain involves much uncertainty (uncertainty in loads, hazards, strength parameters, boundary conditions, models and dam failure consequences), all of which should be characterized. Arguably, uncertainty models and risk analysis provide the most complete characterization of dam performance and safety issues. Risk is a combined measure of the probability and severity of an adverse effect (functional and/or structural failure), and is often estimated by the product of the probability of the adverse event occurring and the expected consequences. Thus, risk analysis requires (1) determination of failure probabilities and (2) probabilistic estimation of consequences. Nonetheless, there is no adequately demonstrated, satisfactorily comprehensive and precise method for the explicit treatment and integration of all uncertainties in the variables of dam design and risk analysis. Therefore, there is a need to evaluate existing uncertainty models for their applicability, to identify knowledge and realization gaps, to derive or adopt new approaches and tools, and to adequately demonstrate their practicability by using real-life case studies. This is required not only for improving the accuracy of the performance and safety evaluation process but also for achieving better acceptance of probabilistic approaches by those who have built their careers on deterministic design-based research and engineering practice. These problems motivated the initiation of this research. In this research the following have been accomplished: (1) Identified various ways of analyzing and representing uncertainty in dam design parameters pertinent to three dominant dam failure causes (sliding, overtopping and seepage), and tested a suite of stochastic models capable of capturing design parameter uncertainty to better facilitate the evaluation of failure probabilities; (2) Studied three classical stochastic models, the Monte Carlo Simulation Method (MCSM), First Order Second Moment (FOSM) and Second Order Second Moment (SOSM) methods, and applied them to modelling dam performance and evaluating failure probabilities in line with the above-mentioned dominant dam failure causes; (3) Presented a new, exact, purpose-built analytical method for transforming design parameter distributions into a distribution representing dam performance (the Analytical Solution for finding Derived Distributions (ASDD) method).
Laid out proofs of its basic principles, prepared a generic implementation architecture, and demonstrated its applicability to the three failure modes using real-life case study data; (4) Presented a multitude of tailor-made reliability equations and solution procedures that enable the implementation of the above stochastic and analytical methods for failure probability evaluation; (5) Implemented the stochastic and analytical methods using real-life data pertinent to the three failure mechanisms from Tendaho Dam, Ethiopia, and compared the performance of the various stochastic and analytical methods with each other and with the classical deterministic design approach; and (6) Provided solution procedures, implementation architectures, and Mathematica 5.2, Crystal Ball 7 and spreadsheet-based tools for performing the above-mentioned analyses. The results indicate that: (1) The proposed approaches provide a valid set of procedures and internally consistent logic, and produce more realistic solutions. Using these approaches, engineers could design dams to meet a quantified level of performance (volume of failure) and could set a balance between safety and economy; (2) The research is intended to bridge the gap between the available probability theories on the one hand and the distribution problems afflicting dam safety evaluation on the other; (3) Of the suite of stochastic approaches studied, the ASDD method outperforms the classical methods (MCSM, FOSM and SOSM) in its theoretical foundation, accuracy and reproducibility. Nevertheless, each of the stochastic approaches, when compared with the deterministic approach, provides a valid set of procedures and consistent logic and gives more realistic solutions, and it remains good practice to compare results from the proposed probabilistic approaches; (4) The different tailor-made reliability equations and solution approaches followed are proven to work for the stochastic safety evaluation of dams; and (5) The research has drawn some important conclusions and lessons in relation to the stochastic safety analysis of dams against the three dominant failure mechanisms. The end result of the study should provide dam engineers and decision makers with perspectives, methodologies, techniques and tools that help them better understand dam safety related issues and enable them to conduct quantitative safety analysis and thus make intelligent dam design, upgrading and rehabilitation decisions.
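A minimal sketch contrasting FOSM with crude Monte Carlo simulation on an illustrative sliding limit state; the limit-state function, the statistics of the variables and the independence assumption are placeholders, not Tendaho Dam data.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative sliding limit state of a gravity dam section:
#   g = tan(phi)*(W - U) + c*A - H   (> 0 safe, <= 0 sliding)
# with W = weight, U = uplift, H = horizontal thrust [kN], c [kPa], A [m^2].
means = np.array([0.70, 10.0, 3000.0, 900.0, 1400.0])   # tan(phi), c, W, U, H
stds  = np.array([0.07,  3.0,  150.0, 135.0,  210.0])
A = 50.0

def g(x):
    tanphi, c, W, U, H = x.T
    return tanphi * (W - U) + c * A - H

# --- FOSM: first-order Taylor expansion about the mean point.
grad = np.empty(5)
for i in range(5):
    dx = np.zeros(5); dx[i] = 1e-3 * max(abs(means[i]), 1.0)
    grad[i] = (g((means + dx)[None, :]) - g((means - dx)[None, :]))[0] / (2 * dx[i])
mu_g = float(g(means[None, :])[0])
sigma_g = float(np.sqrt(np.sum((grad * stds) ** 2)))    # independent inputs
beta = mu_g / sigma_g
print("FOSM: beta = %.2f, Pf ~ %.3f" % (beta, norm.cdf(-beta)))

# --- Crude Monte Carlo simulation with the same independent normal inputs.
X = rng.normal(means, stds, size=(400_000, 5))
print("MCS : Pf ~ %.3f" % np.mean(g(X) <= 0))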
APA, Harvard, Vancouver, ISO, and other styles
11

Glynn, Luke. "A Probabilistic Analysis of Causation." Thesis, University of Oxford, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

ARAÚJO, Rafael Pereira de. "Probabilistic analysis applied to robots." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/20827.

Full text
Abstract:
Robots are increasingly being used in industry and are starting to make their way into our homes as well. Nonetheless, the techniques most frequently used to analyze robot motion are based on simulations or on statistical experiments made by filming the robots' movements. In this work we propose an alternative way of performing such analyses by using Probabilistic Model Checking with the PRISM language and tool. With PRISM we can perform simulations as well as check exhaustively whether a robot motion plan satisfies specific probabilistic temporal formulas. We can therefore measure energy consumption, time to complete missions, etc., all in terms of specific motion planning algorithms, and as a consequence we can also determine whether one algorithm is superior to another with respect to certain metrics. Furthermore, to ease the use of our work, we hide the PRISM syntax behind a more user-friendly DSL. We created a translator from the DSL to PRISM by implementing the translation rules, and we also carried out a preliminary investigation of its relative completeness using the grammatical-element generation tool LGen. We illustrate these ideas with motion planning algorithms for home cleaning robots.
APA, Harvard, Vancouver, ISO, and other styles
13

Asafu-Adjei, Joseph Kwaku. "Probabilistic Methods." VCU Scholars Compass, 2007. http://hdl.handle.net/10156/1420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Jreich, Rana. "Distribution verticale du carbone dans les sols - Analyse bayésienne des profils des teneurs en carbone et de C14." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV060/document.

Full text
Abstract:
Global warming is a major issue for both the scientific world and societies. The concentration of carbon dioxide has increased by 45% since the pre-industrial era (Harris, 2010) as a consequence of human activities, unbalancing the global carbon cycle. This results in global warming with dramatic impacts on the Earth, particularly for fragile populations. Amongst mitigation solutions, a better use of soil is proposed. Soils have the largest capacity for carbon exchange with the atmosphere and contain a large stock of carbon. A tiny increase in this soil carbon stock, together with carbon exchanges between atmosphere and soil that are more favorable to soil carbon sequestration, would compensate for carbon emissions from burning fossil fuel. However, soil carbon dynamics still suffers from insufficient knowledge. There remains therefore a huge uncertainty about the soil carbon response to climate and land-use changes. While several mechanistic models have been developed to better understand the dynamics of soil carbon, they provide an incomplete view of the physical processes affecting soil organic matter (OM). It will be a long time before a complete and up-to-date soil dynamics model becomes available. In my thesis, I propose a Bayesian statistical model aiming at describing the vertical dynamics of soil carbon. This is done through the modeling of both soil organic carbon and radiocarbon data, as they illustrate the residence time of organic matter and thus the soil carbon dynamics. The purpose of this statistical approach was to better represent the uncertainties on soil carbon dynamics and to quantify the effects of climatic and environmental factors on both surface and deep soil carbon. This meta-analysis was performed on a database of 344 profiles, collected from 87 soil science papers and the literature in archeology and paleoclimatology, under different climate conditions (temperature, precipitation, etc.) and environments (soil type and type of ecosystem). A hierarchical non-linear model with random effects was proposed to model the vertical dynamics of radiocarbon as a function of depth. Recently published Bayesian selection techniques were applied to the latent layers of the model, which in turn are linked by a linear relationship to the climatic and environmental factors. The Bayesian Group Lasso with Spike and Slab Prior (BGL-SS), the Bayesian Sparse Group Selection (BSGS) and the Bayesian Effect Fusion model-based clustering (BEF) were tested to identify the significant categorical explanatory predictors (soil type, ecosystem type), and the Stochastic Search Variable Selection method was used to identify the influential numerical explanatory predictors. A comparison of these Bayesian techniques was made, based on the Bayesian model selection criteria (the DIC (Deviance Information Criterion), the Posterior Predictive Check, etc.), to specify which model has the best predictive and adjustment power for the database profiles. In addition to selecting categorical predictors, the BSGS allows the formulation of an a posteriori inclusion probability for each level within the categorical predictors such as soil type and ecosystem type (9 soil types and 6 ecosystem types were considered in our study).
Furthermore, the BEF made it possible to merge the soil types as well as the ecosystem types which, according to the BEF, are considered to have the same effects on the responses of interest here, such as the response of the topsoil radiocarbon. The application of these techniques allowed us to predict, on average and at a global level, the vertical dynamics of the radiocarbon in the case of a temperature increase of 1, 1.5 and 2 °C, and in the case of a change in vegetation cover. For example, we studied the impact of deforesting tropical forests and replacing them with cultivated land on soil carbon dynamics. The same statistical analysis was also done to better understand the vertical dynamics of the soil carbon content.
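A small sketch of the kind of nonlinear depth curve that underlies such a vertical radiocarbon model, fitted here by simple nonlinear least squares on synthetic profiles; the functional form and parameter values are assumptions, and the hierarchical random effects and Bayesian selection layers (BGL-SS, BSGS, BEF, SSVS) are not reproduced.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Illustrative vertical radiocarbon model: Delta14C declines with depth
# towards an old-carbon asymptote; the thesis fits this kind of nonlinear
# depth curve hierarchically, with profile-level random effects and
# covariate selection, while here we only fit the mean curve.
def d14c(depth_cm, surface, deep, k):
    return deep + (surface - deep) * np.exp(-k * depth_cm)

# Synthetic "profiles": noisy observations of the same underlying curve.
depth = np.tile(np.array([5, 15, 30, 50, 80, 120, 200.0]), 20)
true = d14c(depth, surface=50.0, deep=-600.0, k=0.02)
obs = true + rng.normal(0.0, 40.0, size=depth.size)

popt, pcov = curve_fit(d14c, depth, obs, p0=[0.0, -500.0, 0.01])
perr = np.sqrt(np.diag(pcov))
for name, v, e in zip(["surface", "deep", "k"], popt, perr):
    print(f"{name:>8s} = {v:8.2f} +/- {e:.2f}")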
APA, Harvard, Vancouver, ISO, and other styles
15

Pan, Qiujing. "Deterministic and Probabilistic Assessment of Tunnel Face Stability." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAI044.

Full text
Abstract:
The main work of this PhD thesis is to develop stability analyses for underground structures, in two parts: a deterministic model and a probabilistic analysis. The first year of the PhD research was mainly devoted to the deterministic model; in the second year, a probabilistic model for high-dimensional problems was developed.
In contemporary society, the utilization and exploitation of underground space has become an inevitable and necessary measure to solve current urban congestion. One of the most important requirements for successful design and construction in tunnels and underground engineering is to maintain the stability of the soil surrounding the works. But stability analysis requires engineers to have a clear idea of the earth pressure, the pore water pressure, the seismic effects and the soil variability. The research therefore aims at employing an available theory to design tunnels and underground structures, a topical issue of high engineering significance. Among the approaches employed to address the above problem, limit analysis is a powerful tool for performing stability analysis and has been widely used for real geotechnical works. This research subject will undertake further work on the application of the upper bound theorem to the stability analysis of tunnels and underground engineering. This approach will then be compared with three-dimensional analyses and available experimental data. The final goal is to validate new simplified mechanisms, based on limit analysis, for designing against collapse and blow-out pressure at the tunnel face. These deterministic models will then be used in a probabilistic framework. The Collocation-based Stochastic Response Surface Methodology will be used, and generalized, in order to make possible, at a limited computational cost, a complete parametric study of the probabilistic properties of the input variables. The uncertainty propagation through the models of stability and ground movements will be evaluated, and some methods of reliability-based design will be proposed. The spatial variability of the soil will be taken into account using random field theory and applied to tunnel face collapse. This model will be developed in order to take this variability into account with much smaller computation times than numerical models; it will be validated numerically and submitted to extensive random samplings. The effect of the spatial variability will be evaluated.
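A sketch of the collocation/regression idea behind a stochastic response surface: fit a low-order Hermite polynomial chaos expansion on a modest number of model runs, then query the cheap surrogate by Monte Carlo; the tunnel-face "model" and the 40 kPa threshold below are invented placeholders, not the thesis's limit-analysis mechanism.

import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(4)

def collapse_pressure(u):
    # Placeholder for the limit-analysis model of the tunnel face: maps two
    # standardized soil variables (cohesion, friction angle) to the required
    # face pressure; the real model is the thesis's kinematic mechanism.
    c = 7.0 + 2.0 * u[:, 0]                 # cohesion, kPa
    phi = 25.0 + 3.0 * u[:, 1]              # friction angle, deg
    return 80.0 * np.exp(-0.1 * c) - 30.0 * np.tan(np.radians(phi))

def pce_basis(u, order=2):
    # Probabilists' Hermite basis He_i(u1)*He_j(u2), total degree <= order.
    cols = []
    for i in range(order + 1):
        for j in range(order + 1 - i):
            ci = np.zeros(i + 1); ci[-1] = 1.0
            cj = np.zeros(j + 1); cj[-1] = 1.0
            cols.append(hermeval(u[:, 0], ci) * hermeval(u[:, 1], cj))
    return np.column_stack(cols)

# Fit the response surface on a modest number of "expensive" model runs ...
U_fit = rng.standard_normal((60, 2))
coef, *_ = np.linalg.lstsq(pce_basis(U_fit), collapse_pressure(U_fit), rcond=None)

# ... then do cheap Monte Carlo on the surrogate, e.g. P(required pressure > 40 kPa).
U_mc = rng.standard_normal((200_000, 2))
p_hat = pce_basis(U_mc) @ coef
print("P(collapse pressure > 40 kPa) ~", np.mean(p_hat > 40.0))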
APA, Harvard, Vancouver, ISO, and other styles
16

Goka, Edoh. "Analyse des tolérances des systèmes complexes – Modélisation des imperfections de fabrication pour une analyse réaliste et robuste du comportement des systèmes." Thesis, Paris, ENSAM, 2019. http://www.theses.fr/2019ENAM0019/document.

Full text
Abstract:
Tolerance analysis aims at verifying the impact of individual tolerances on the assembly and functional requirements of a mechanical system. Manufactured products have several types of contacts and their geometry is imperfect, which may lead to non-assembly or loss of function. Traditional methods for tolerance analysis do not consider form defects. This thesis proposes a new procedure for tolerance analysis which considers form defects and the different types of contact in its modeling of the geometrical behavior. A method is first proposed to model the form defects so as to make the analysis more realistic. Thereafter, the form defects are integrated in the modeling of the geometrical behavior of a mechanical system, also considering the different types of contacts. Indeed, these different contacts behave differently once the imperfections are considered. Monte Carlo simulation coupled with an optimization technique is chosen as the method to perform the tolerance analysis. Nonetheless, this method requires excessive computational effort. To overcome this problem, probabilistic models built with the Kernel Density Estimation method are proposed.
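A toy sketch of the two estimation routes mentioned above: a large Monte Carlo run on a stack-up model versus a kernel density estimate built from a much smaller sample; the dimensions, tolerances and the crude form-defect term are illustrative assumptions, not the thesis's models.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

def functional_gap(n):
    # Toy stack-up: clearance = housing bore - shaft - form-defect contribution.
    bore = rng.normal(40.05, 0.010, n)
    shaft = rng.normal(39.95, 0.012, n)
    # Crude stand-in for a form defect (e.g., first modes of a straightness defect).
    form = 0.004 * np.abs(rng.standard_normal(n)) + 0.003 * rng.random(n)
    return bore - shaft - form

# Reference: large Monte Carlo run on the full (in general, expensive) model.
gap_big = functional_gap(200_000)
print("MC estimate of P(gap < 0.05 mm):", np.mean(gap_big < 0.05))

# Cheaper alternative explored in the thesis: build a probabilistic model of the
# response from a small sample via kernel density estimation, then query it.
kde = gaussian_kde(functional_gap(2_000))
print("KDE estimate of P(gap < 0.05 mm):", kde.integrate_box_1d(-np.inf, 0.05))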
APA, Harvard, Vancouver, ISO, and other styles
17

Alhajj, Chehade Hicham. "Geosynthetic-Reinforced Retaining Walls-Deterministic And Probabilistic Approaches." Thesis, Université Grenoble Alpes, 2021. http://www.theses.fr/2021GRALI010.

Full text
Abstract:
The aim of this thesis is to assess the seismic internal stability of geosynthetic-reinforced soil retaining walls. The work first deals with deterministic analyses and then focuses on probabilistic ones. In the first part of this thesis, a deterministic model, based on the upper bound theorem of limit analysis, is proposed for assessing the reinforced soil wall safety factor or the required reinforcement strength to stabilize the structure. A spatial discretization technique is used to generate the rotational failure surface, giving the possibility of considering heterogeneous backfills and/or of representing the seismic loading by the pseudo-dynamic approach. The cases of dry, unsaturated and saturated soils are investigated. Additionally, the presence of cracks in the backfill soils is considered. This deterministic model gives rigorous results and is validated by confrontation with existing results from the literature. Then, in the second part of the thesis, this deterministic model is used in a probabilistic framework. First, the uncertain input parameters are modeled using random variables. The considered uncertainties involve the soil shear strength parameters, the seismic loading and the reinforcement strength parameters. The Sparse Polynomial Chaos Expansion, which consists of replacing the time-expensive deterministic model by a meta-model, combined with Monte Carlo simulations, is used as the reliability method to carry out the probabilistic analysis. The random variables approach neglects the soil spatial variability, since the soil properties and the other uncertain input parameters are considered constant in each deterministic simulation. Therefore, in the last part of the manuscript, the soil spatial variability is considered using random field theory. The SIR/A-bSPCE method, a combination of the dimension reduction technique Sliced Inverse Regression (SIR) and an active learning sparse polynomial chaos expansion (A-bSPCE), is implemented to carry out the probabilistic analysis. The total computational time of the probabilistic analysis, performed using SIR-SPCE, is significantly reduced compared to directly running classical probabilistic methods. Only the soil strength parameters are modeled using random fields, in order to focus on the effect of spatial variability on the reliability results.
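A minimal sketch of how soil spatial variability can enter such an analysis: a one-dimensional lognormal random field of cohesion with exponential autocorrelation, propagated through a placeholder stability function by Monte Carlo; the numbers and the stability function are illustrative assumptions, not the thesis's limit-analysis model.

import numpy as np

rng = np.random.default_rng(6)

# 1-D lognormal random field of backfill cohesion c(z) along the wall height,
# with exponential autocorrelation (correlation length theta).
z = np.linspace(0.0, 6.0, 30)              # depths [m]
theta, mu_c, cov_c = 1.5, 10.0, 0.3        # corr. length [m], mean [kPa], COV
corr = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / theta)
L = np.linalg.cholesky(corr + 1e-10 * np.eye(z.size))

sigma_ln = np.sqrt(np.log(1.0 + cov_c**2))
mu_ln = np.log(mu_c) - 0.5 * sigma_ln**2

def safety_factor(c_field):
    # Crude stand-in for the wall stability model: resistance governed by the
    # average strength along the (discretized) failure surface.
    resisting = c_field.mean() * 30.0 + 250.0
    driving = 450.0
    return resisting / driving

n = 20_000
eps = rng.standard_normal((n, z.size)) @ L.T       # correlated Gaussian fields
c_fields = np.exp(mu_ln + sigma_ln * eps)          # lognormal cohesion fields
sf = np.array([safety_factor(c) for c in c_fields])
print("P(failure) with spatially variable cohesion ~", np.mean(sf < 1.0))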
APA, Harvard, Vancouver, ISO, and other styles
18

Royer, Clément. "Algorithmes d'optimisation sans dérivées à caractère probabiliste ou déterministe : analyse de complexité et importance en pratique." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30207/document.

Full text
Abstract:
L'utilisation d'aspects aléatoires a contribué de façon majeure aux dernières avancées dans le domaine de l'optimisation numérique; cela est dû en partie à la recrudescence de problèmes issus de l'apprentissage automatique (machine learning). Dans un tel contexte, les algorithmes classiques d'optimisation non linéaire, reposant sur des principes déterministes, se révèlent en effet bien moins performants que des variantes incorporant de l'aléatoire. Le coût de ces dernières est souvent inférieur à celui de leurs équivalents déterministes; en revanche, il peut s'avérer difficile de maintenir les propriétés théoriques d'un algorithme déterministe lorsque de l'aléatoire y est introduit. Effectuer une analyse de complexité d'une telle méthode est un procédé très répandu dans ce contexte. Cette technique permet d'estimer la vitesse de convergence du schéma considéré et par là même d'établir une forme de convergence de celui-ci. Les récents travaux sur ce sujet, en particulier pour des problèmes d'optimisation non convexes, ont également contribué au développement de ces aspects dans le cadre déterministe, ceux-ci apportant en effet un éclairage nouveau sur le comportement des algorithmes. Dans cette thèse, on s'intéresse à l'amélioration pratique d'algorithmes d'optimisation sans dérivées à travers l'introduction d'aléatoire, ainsi qu'à l'impact numérique des analyses de complexité. L'étude se concentre essentiellement sur les méthodes de recherche directe, qui comptent parmi les principales catégories d'algorithmes sans dérivées; cependant, l'analyse sous-jacente est applicable à un large éventail de ces classes de méthodes. On propose des variantes probabilistes des propriétés requises pour assurer la convergence des algorithmes étudiés, en mettant en avant le gain en efficacité induit par ces variantes: un tel gain s'explique principalement par leur coût très faible en évaluations de fonction. Le cadre de base de notre analyse est celui de méthodes convergentes au premier ordre, que nous appliquons à des problèmes sans ou avec contraintes linéaires. Les bonnes performances obtenues dans ce contexte nous incitent par la suite à prendre en compte des aspects d'ordre deux. À partir des propriétés de complexité des algorithmes sans dérivées, on développe de nouvelles méthodes qui exploitent de l'information du second ordre. L'analyse de ces procédures peut être réalisée sur un plan déterministe ou probabiliste: la deuxième solution nous permet d'étudier de nouveaux aspects aléatoires ainsi que leurs conséquences sur l'efficacité et la robustesse des algorithmes considérés.
Randomization has had a major impact on the latest developments in the field of numerical optimization, partly due to the surge of machine learning applications. In this increasingly popular context, classical nonlinear programming algorithms have indeed been outperformed by variants relying on randomness. The cost of these variants is usually lower than for the traditional schemes; however, theoretical guarantees may not be straightforward to carry over from the deterministic to the randomized setting. Complexity analysis is a useful tool in the latter case, as it helps in providing estimates on the convergence speed of a given scheme, which implies some form of convergence. Such a technique has also gained attention from the deterministic optimization community thanks to recent findings in the nonconvex case, as it brings supplementary indicators on the behavior of an algorithm. In this thesis, we investigate the practical enhancement of deterministic optimization algorithms through the introduction of random elements within those frameworks, as well as the numerical impact of their complexity results. We focus on direct-search methods, one of the main classes of derivative-free algorithms, yet our analysis applies to a wide range of derivative-free methods. We propose probabilistic variants of the classical properties required to ensure convergence of the studied methods, and highlight the practical efficiency gains induced by their lower consumption of function evaluations. First-order concerns form the basis of our analysis, which we apply to address unconstrained and linearly-constrained problems. The observed gains encourage us to additionally take second-order considerations into account. Using complexity properties of derivative-free schemes, we develop several frameworks in which second-order information is exploited. Both a deterministic and a probabilistic analysis can be performed on these schemes. The latter is an opportunity to introduce supplementary probabilistic properties, together with their impact on numerical efficiency and robustness.
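For illustration only, the toy direct-search method below polls a few randomly generated directions per iteration and shrinks the step on failure, in the spirit of the probabilistic properties discussed above; it is a sketch under simplified assumptions, not one of the algorithms analysed in the thesis.

```python
import numpy as np

def random_direct_search(f, x0, alpha0=1.0, m=2, max_iter=2000, tol=1e-8, seed=0):
    """Derivative-free direct search that polls only m random unit
    directions (and their opposites) per iteration, instead of a full
    deterministic positive spanning set."""
    rng = np.random.default_rng(seed)
    x, alpha = np.asarray(x0, dtype=float), alpha0
    fx = f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        D = rng.standard_normal((m, x.size))
        D /= np.linalg.norm(D, axis=1, keepdims=True)   # random unit directions
        improved = False
        for d in np.vstack([D, -D]):                    # poll d and -d
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - 1e-4 * alpha**2:               # sufficient decrease test
                x, fx, improved = trial, ft, True
                break
        alpha = 2.0 * alpha if improved else 0.5 * alpha
    return x, fx

# Usage: minimise a smooth nonconvex test function without derivatives.
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
print(random_direct_search(rosen, [-1.2, 1.0]))
```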
APA, Harvard, Vancouver, ISO, and other styles
19

Guo, Xiangfeng. "Probabilistic stability analysis of an earth dam using field data." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALI017.

Full text
Abstract:
Compte tenu de la nature des sols, des incertitudes sur leurs propriétés sont largement rencontrées en géotechnique, en particulier dans le domaine des barrages en terre. Actuellement, il est de plus en plus nécessaire de tenir compte de ces incertitudes pour l'évaluation de la sécurité des grands barrages, notamment dans le cadre des études d’analyse de risques. Cependant, les analyses probabilistes sont complexes et difficiles à mettre en œuvre en raison du nombre limité de mesures, des temps de calcul importants et des limites des méthodes fiabilistes implémentées dans les outils de simulation commerciaux. De plus, la plupart des études précédentes sont basées sur des cas académiques et des données hypothétiques. Ce travail tente de résoudre les problèmes mentionnés ci-dessus en fournissant une étude d'analyse probabiliste pour la stabilité d'un barrage réel en terre en considérant les données in-situ disponibles. Cette étude inclut les éléments principaux suivants: (1) définition de la variabilité des sols en utilisant les mesures disponibles; (2) développement des modèles déterministes; (3-4) analyses probabilistes du barrage en utilisant des approches en variables aléatoires et en champs aléatoires; (5) analyse 3D de la fiabilité du barrage considéré. Des méthodes fiabilistes avancées (par exemple le métamodèle adaptatif) sont introduites. Cela permet d'estimer précisément la probabilité de rupture du barrage et les valeurs statistiques des facteurs de sécurité avec un temps de calcul significativement réduit. En outre, certaines questions, qui restaient floues dans le domaine de l'analyse probabiliste des barrages, sont discutées (e.g. l’analyse de sensibilité globale des paramètres hydrauliques et géo-mécaniques des sols ; l’étude des performances de cinq méthodes de fiabilité; la simulation/comparaison de trois types de champs aléatoires : générique, conditionnel et non-stationnaire). Le travail présenté, basé sur des données réelles, pourrait être un bon complément aux études probabilistes existantes des ouvrages géotechniques. Les lecteurs pourront également trouver des informations utiles à partir des résultats obtenus afin de mieux résoudre les problèmes pratiques de géo-ingénierie dans un cadre probabiliste.
Uncertainties of soil properties are widely encountered in the field of geotechnical engineering, especially for earth dams, which are constructed with earthen materials. In recent years, there has been an increasing need, motivated by the deficiencies of the traditional deterministic approach or guided by national regulations such as in France, to account for these uncertainties in the safety assessment of large dams, particularly in the framework of risk analysis studies. However, probabilistic analyses are still complex and not so easy to implement in practice due to the limited number of in-situ measurements, expensive computation efforts and the lack of implementation of reliability methods in commercial simulation tools. Moreover, most of the previous studies are based on academic cases and hypothetical data. This work attempts to deal with the aforementioned issues by providing a probabilistic analysis study for the stability of a real earth dam using available field data. This study includes the following main elements: (1) definition of the soil variability by using the available measurements; (2) development of the deterministic models; (3-4) dam probabilistic analyses using the random-variables and random-fields approaches; (5) three-dimensional reliability analysis of the considered dam. Advanced reliability methods, such as adaptive surrogate modelling, are introduced for the studied earth dam problem. This allows accurately estimating the dam failure probability and the safety factor statistics with a significantly reduced calculation time. In addition, some issues that remain unknown or unclear in the field of dam probabilistic analysis are discussed (e.g. global sensitivity analysis of the soil hydraulic and shear strength parameters; performance survey of five reliability methods; simulation/comparison of three different kinds of random fields: generic (unconditional-stationary), conditional and non-stationary). The presented work, based on real measurements, could be a good supplement to the existing probabilistic studies of geo-structures. Readers will find useful information in the obtained results in order to better solve practical geotechnical problems in a probabilistic framework.
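To make the random-field idea concrete, the sketch below generates a one-dimensional stationary Gaussian random field of a soil strength parameter by Cholesky factorisation of an exponential correlation matrix; every numerical value is hypothetical, and the conditional and non-stationary fields studied in the thesis go beyond this minimal version.

```python
import numpy as np

# Discretise a 1-D vertical profile of undrained shear strength Su and
# generate a stationary random field with an exponential autocorrelation
# structure (illustrative values only).
depth = np.linspace(0.0, 20.0, 81)          # m
mean_su, cov_su, theta = 60.0, 0.25, 5.0    # mean (kPa), CoV, correlation length (m)

# Correlation matrix rho(i, j) = exp(-2 |z_i - z_j| / theta).
dz = np.abs(depth[:, None] - depth[None, :])
rho = np.exp(-2.0 * dz / theta)

# The Cholesky factor turns independent standard normals into correlated ones.
L = np.linalg.cholesky(rho + 1e-10 * np.eye(depth.size))
rng = np.random.default_rng(1)

def sample_field():
    g = L @ rng.standard_normal(depth.size)          # correlated N(0, 1) values
    # Lognormal marginal keeps the strength strictly positive.
    sigma_ln = np.sqrt(np.log(1.0 + cov_su**2))
    mu_ln = np.log(mean_su) - 0.5 * sigma_ln**2
    return np.exp(mu_ln + sigma_ln * g)

realisation = sample_field()
print(realisation[:5])
```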
APA, Harvard, Vancouver, ISO, and other styles
20

Bertsimas, Dimitris J. "The Probabilistic Minimum Spanning Tree, Part II: Probabilistic Analysis and Asymptotic Results." Massachusetts Institute of Technology, Operations Research Center, 1988. http://hdl.handle.net/1721.1/5284.

Full text
Abstract:
In this paper, which is a sequel to [3], we perform probabilistic analysis under the random Euclidean and the random length models of the probabilistic minimum spanning tree (PMST) problem and the two re-optimization strategies, in which we find the MST or the Steiner tree respectively among the points that are present at a particular instance. Under the random Euclidean model we prove that with probability 1, as the number of points goes to infinity, the expected length of the PMST is the same as the expectation of the MST re-optimization strategy and within a constant of the Steiner re-optimization strategy. In the random length model, using a result of Frieze [6], we prove that with probability 1 the expected length of the PMST is asymptotically smaller than the expectation of the MST re-optimization strategy. These results add evidence that a priori strategies may offer a useful and practical method for resolving combinatorial optimization problems on modified instances. Key words: Probabilistic analysis, combinatorial optimization, minimum spanning tree, Steiner tree.
APA, Harvard, Vancouver, ISO, and other styles
21

Cruz, Fernández Francisco. "Probabilistic graphical models for document analysis." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/399520.

Full text
Abstract:
Actualmente, más del 80% de los documentos almacenados en papel pertenecen al ámbito empresarial. Avances en materia de digitalización de documentos han fomentado el interés en crear copias digitales para solucionar problemas de mantenimiento y almacenamiento, además de poder disponer de formas eficientes de transmisión y extracción automática de la información contenida en ellos. Esta situación ha propiciado la necesidad de crear sistemas capaces de extraer y analizar automáticamente esta información. La gran variedad en tipos de documentos hace que esta no sea una tarea trivial. Un proceso de extracción de datos numéricos de tablas o facturas difiere sustancialmente del reconocimiento de texto manuscrito en un documento con anotaciones. No obstante, hay un nexo común en las dos tareas: dado un documento, es necesario localizar la región donde está la información de interés. En el área del Análisis de Documentos, a este proceso se denomina Análisis de la estructura del documento, y tiene como objetivo la identificación y categorización de las diferentes entidades que lo componen. Estas entidades pueden ser regiones de texto, imágenes, líneas de texto, celdas de una tabla, campos de un formulario, etc. Este proceso se puede realizar desde dos enfoques diferentes: análisis físico, o análisis lógico. El análisis físico consiste en identificar la ubicación y los limites que definen el área donde se encuentra la región de interés. El análisis lógico incluye además información acerca de su función y significado dentro del ámbito del documento. Para poder modelar esta información, es necesario incorporar al proceso de análisis un conocimiento previo sobre la tarea. Este conocimiento previo se puede modelar haciendo uso de relaciones contextuales entre las diferentes entidades. El uso del contexto en tareas de visión por computador ha demostrado ser de gran utilidad para guiar el proceso de reconocimiento y reforzar los resultados. Este proceso implica dos cuestiones fundamentales: qué tipo de información contextual es la adecuada para cada problema, y como incorporamos esa información al modelo. En esta tesis abordamos el análisis de la estructura de documentos basándonos en la incorporación de información contextual en el proceso de análisis. Hacemos énfasis en el uso de modelos gráficos probabilísticos y otros mecanismos para proponer soluciones al problema de la identificación de regiones y la segmentación de líneas de texto manuscritas. Presentamos varios métodos que hacen uso de modelos gráficos probabilísticos para resolver las anteriores tareas, y varios tipos de información contextual. En primer lugar presentamos un conjunto de características que pueden modelar información contextual sobre la posición relativa entre las diferentes regiones. Utilizamos estas características, junto a otras, en varios modelos basados en modelos gráficos probabilísticos, y los comparamos con un modelo sintáctico clásico basado en gramáticas libres de contexto. En segundo lugar presentamos un marco probabilístico aplicado a la segmentación de líneas de texto. Combinamos el proceso de inferencia en el modelo con la estimación de las líneas de texto. Demostramos como el uso de información contextual mediante modelos gráficos probabilísticos es de gran utilidad para estas tareas.
Currently, more than 80% of the documents stored on paper belong to the business field. Advances in digitization techniques have fostered the interest in creating digital copies in order to solve maintenance and storage problems, as well as to have efficient ways for transmission and automatic extraction of the information contained therein. This situation has led to the need to create systems that can automatically extract and analyze this kind of information. The great variety of types of documents makes this not a trivial task. The extraction process of numerical data from tables or invoices differs substantially from a task of handwriting recognition in a document with annotations. However, there is a common link in the two tasks: Given a document, we need to identify the region where the information of interest is located. In the area of Document Analysis this process is called Layout Analysis, and aims at identifying and categorizing the different entities that compose the document. These entities can be text regions, pictures, text lines or tables, among others. This process can be done from two different approaches: physical or logical analysis. Physical analysis focus on identifying the physical boundaries that define the area of interest, whereas logical analysis also models information about the role and semantics of the entities within the scope of the document. To encode this information it is necessary to incorporate prior knowledge about the task into the analysis process, which can be introduced in terms of contextual relations between entities. The use of context has proven to be useful to reinforce the recognition process and improve the results on many computer vision tasks. This raises two fundamental questions: what kind of contextual information is appropriate, and how to incorporate this information into the model. In this thesis we study several ways to incorporate contextual information on the task of document layout analysis. We focus on the study of Probabilistic Graphical Models and other mechanisms for the inclusion of contextual relations applied to the specific tasks of region identification and handwritten text line segmentation. On the one hand, we present several methods for region identification. First, we present a method for layout analysis based on Conditional Random Fields for maximum a posteriori estimation. We encode a set of structural relations between different classes of regions on a set of features. Second, we present a method based on 2D-Probabilistic Context-free Grammars and perform a comparative study between probabilistic graphical models and this syntactic approach. Third, we propose a statistical approach based on the Expectation-Maximization algorithm devised for structured documents. We perform a thorough evaluation of the proposed methods on two particular collections of documents: a historical dataset composed of ancient structured documents, and a collection of contemporary documents. On the other hand, we present a probabilistic framework applied to the task of handwritten text line segmentation. We successfully combine the EM algorithm and variational approaches for this purpose. We demonstrate that the use of contextual information using probabilistic graphical models is of great utility for these tasks.
APA, Harvard, Vancouver, ISO, and other styles
22

Kosmidis, Leonidas. "Enabling caches in probabilistic timing analysis." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/460819.

Full text
Abstract:
Hardware and software complexity of future critical real-time systems challenges the scalability of traditional timing analysis methods. Measurement-Based Probabilistic Timing Analysis (MBPTA) has recently emerged as an industrially-viable alternative technique to deal with complex hardware/software. Yet, MBPTA requires certain timing properties in the system under analysis that are not satisfied in conventional systems. In this thesis, we introduce, for the first time, hardware and software solutions to satisfy those requirements as well as to improve MBPTA applicability. We focus on one of the hardware resources with highest impact on both average performance and Worst-Case Execution Time (WCET) in current real-time platforms, the cache. In this line, the contributions of this thesis follow three different axes: hardware solutions and software solutions to enable MBPTA, and MBPTA analysis enhancements in systems featuring caches. At hardware level, we set the foundations of MBPTA-compliant processor designs, and define efficient time-randomised cache designs for single- and multi-level hierarchies of arbitrary complexity, including unified caches, which can be time-analysed for the first time. We propose three new software randomisation approaches (one dynamic and two static variants) to control, in an MBPTA-compliant manner, the cache jitter in Commercial off-the-shelf (COTS) processors in real-time systems. To that end, all variants randomly vary the location of programs' code and data in memory across runs, to achieve probabilistic timing properties similar to those achieved with customised hardware cache designs. We propose a novel method to estimate the WCET of a program using MBPTA, without requiring the end-user to identify worst-case paths and inputs, improving its applicability in industry. We also introduce Probabilistic Timing Composability, which allows Integrated Systems to reduce their WCET in the presence of time-randomised caches. With the above contributions, this thesis pushes the limits in the use of complex real-time embedded processor designs equipped with caches and paves the way towards the industrialisation of MBPTA technology.
La complejidad de hardware y software de los sistemas críticos del futuro desafía la escalabilidad de los métodos tradicionales de análisis temporal. El análisis temporal probabilístico basado en medidas (MBPTA) ha aparecido últimamente como una solución viable alternativa para la industria, para manejar hardware/software complejo. Sin embargo, MBPTA requiere ciertas propiedades de tiempo en el sistema bajo análisis que no satisfacen los sistemas convencionales. En esta tesis introducimos, por primera vez, soluciones hardware y software para satisfacer estos requisitos como también mejorar la aplicabilidad de MBPTA. Nos centramos en uno de los recursos hardware con el máximo impacto en el rendimiento medio y el peor caso del tiempo de ejecución (WCET) en plataformas actuales de tiempo real, la cache. En esta línea, las contribuciones de esta tesis siguen 3 ejes distintos: soluciones hardware y soluciones software para habilitar MBPTA, y mejoras del análisis MBPTA en sistemas que usan caches. A nivel de hardware, creamos las bases del diseño de un procesador compatible con MBPTA, y definimos diseños de cache con tiempo aleatorio para jerarquías de memoria con uno y múltiples niveles de cualquier complejidad, incluso caches unificadas, las cuales pueden ser analizadas temporalmente por primera vez. Proponemos tres nuevos enfoques de aleatorización de software (uno dinámico y dos variedades estáticas) para manejar, en una manera compatible con MBPTA, la variabilidad del tiempo (jitter) de la cache en procesadores comerciales comunes en el mercado (COTS) en sistemas de tiempo real. Por eso, todas nuestras propuestas varían aleatoriamente la posición del código y de los datos del programa en la memoria entre ejecuciones del mismo, para conseguir propiedades de tiempo aleatorias, similares a las logradas con diseños hardware personalizados. Proponemos un nuevo método para estimar el WCET de un programa usando MBPTA, sin requerir que el usuario identifique los caminos y las entradas de programa del peor caso, mejorando así la aplicabilidad de MBPTA en la industria. Además, introducimos la composabilidad de tiempo probabilística, que permite a los sistemas integrados reducir su WCET cuando usan caches de tiempo aleatorio. Con estas contribuciones, esta tesis empuja los límites en el uso de diseños complejos de procesadores empotrados en sistemas de tiempo real equipados con caches y prepara el terreno para la industrialización de la tecnología MBPTA.
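As a rough illustration of the measurement-based probabilistic timing analysis step that MBPTA builds on, the sketch below fits a Gumbel (extreme value) distribution to block maxima of synthetic execution-time measurements and reads off a probabilistic WCET at a small exceedance probability; the data, block size and threshold are invented for the example and are not taken from the thesis.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)

# Stand-in for execution-time measurements (in cycles) collected on a
# time-randomised platform; in MBPTA these would come from real runs.
measurements = 10_000 + rng.gamma(shape=8.0, scale=50.0, size=4000)

# Block-maxima step of extreme value theory: keep the maximum of each
# block of measurements, then fit a Gumbel (EVT type I) distribution.
block = 50
maxima = measurements[: len(measurements) // block * block]
maxima = maxima.reshape(-1, block).max(axis=1)
loc, scale = gumbel_r.fit(maxima)

# pWCET estimate: execution time exceeded with probability 1e-9 per block.
print("pWCET @ 1e-9:", gumbel_r.ppf(1.0 - 1e-9, loc=loc, scale=scale))
```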
APA, Harvard, Vancouver, ISO, and other styles
23

Muller, Maria Anna Elizabeth. "Probabilistic analysis of repairable redundant systems." Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-10182006-132917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Rabeau, Nicholas Marc. "Probabilistic approach to contingent claims analysis." Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/8195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kaowichakorn, Peerachai. "Probabilistic Analysis of Quality of Service." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4880.

Full text
Abstract:
Current complex service systems are usually composed of many other components, which are often external services performing particular tasks. Quality of service (QoS) attributes such as availability, cost and response time are essential to determine the usability and efficiency of such a system. Obviously, the QoS of such a compound system depends on the QoS of its components. However, the QoS of each component is naturally unstable and different each time it is called, due to many factors such as network bandwidth, workload, hardware resources, etc. This consequently makes the QoS of the whole system unstable. This uncertainty can be described and represented with probability distributions. This thesis presents an approach to calculate the QoS of the system when the probability distributions of the QoS of each component are provided by the service provider or derived from historical data, along with the structure of their compositions. In addition, an analyzer tool is implemented in order to predict the QoS of the given compositions and probability distributions following the proposed approach. The output of the analyzer can be used to predict the behavior of the system to be implemented and to make decisions based on the expected performance. The experimental evaluation shows that the estimation is reliable, with a minimal and acceptable error measurement.
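A minimal sketch of the kind of composition this abstract describes: component QoS values drawn from assumed probability distributions are combined by Monte Carlo sampling for a sequential composition (response times add, availabilities multiply). The component names and distributions below are hypothetical, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical per-component QoS distributions: response time in ms,
# availability as a per-call success probability.
rt_auth = rng.lognormal(mean=3.0, sigma=0.4, size=n)
rt_catalog = rng.lognormal(mean=3.5, sigma=0.3, size=n)
rt_payment = rng.gamma(shape=4.0, scale=10.0, size=n)
up = (rng.random((3, n)) < np.array([[0.999], [0.995], [0.990]])).all(axis=0)

# Sequential composition: response times add, availabilities multiply
# (realised here as "all components up in the same trial").
rt_total = rt_auth + rt_catalog + rt_payment

print("composite availability:", up.mean())
print("P(response time > 200 ms):", (rt_total > 200).mean())
print("95th percentile response time (ms):", np.percentile(rt_total, 95))
```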
APA, Harvard, Vancouver, ISO, and other styles
26

Küttler, Martin, Michael Roitzsch, Claude-Joachim Hamann, and Marcus Völp. "Probabilistic Analysis of Low-Criticality Execution." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-233117.

Full text
Abstract:
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds. Otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave in case of high-criticality jobs overrunning their optimistic low-criticality reservation.
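The following small numeric illustration (not the symbolic scheduling algorithm of the paper) shows how a per-job success probability can be obtained by convolving discrete execution-time distributions of high-criticality jobs and checking the slack left for a low-criticality job; all the distributions and budgets are invented.

```python
import numpy as np

# Discrete execution-time distributions: index = time units, value = probability.
hi_job_1 = np.array([0.0, 0.0, 0.6, 0.3, 0.1])   # usually 2 units, sometimes 3-4
hi_job_2 = np.array([0.0, 0.5, 0.4, 0.1])        # usually 1-2 units

# Total demand of the two high-criticality jobs = convolution of their pmfs.
demand = np.convolve(hi_job_1, hi_job_2)

# A low-criticality job needing 3 units succeeds if the high-criticality
# demand leaves at least 3 of the 8 available units free.
budget, lo_need = 8, 3
success = demand[: budget - lo_need + 1].sum()
print("P(low-criticality job completes):", success)   # 0.92 with these numbers
```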
APA, Harvard, Vancouver, ISO, and other styles
27

Cai, Xing Shi. "A probabilistic analysis of Kademlia networks." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114333.

Full text
Abstract:
Nowadays Kademlia is one of the most widely used DHTs (Distributed Hash Table) in P2P (peer-to-peer) networks. This work studies one essential question about Kademlia overlay networks from a mathematical perspective: how long does it take to locate a node? To answer it, we introduce a random graph K to model a Kademlia overlay and study how long it takes to locate a given vertex in K by using Kademlia's routing algorithm.
Aujourd'hui Kademlia est l'un des DHTs (Distributed Hash Table) les plus utilisés dans les réseaux P2P (peer-to-peer). Ce travail étudie une question essentielle des réseaux "overlay" de Kademlia d'un point de vue mathématique: combien de temps faut-il pour localiser un noeud? Pour y répondre, nous introduisons un graphe aléatoire K pour modéliser un réseau de Kademlia et étudier la complexité d'un algorithme de routage de Kademlia.
APA, Harvard, Vancouver, ISO, and other styles
28

Johnson, Elizabeth Alice. "Probabilistic analysis of shingle beach management." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Trim, A. D. "Probabilistic dynamic analysis of offshore structures." Thesis, Cranfield University, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kountras, Apostolos 1970. "Probabilistic analysis of turbine blade durability." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28893.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004.
Includes bibliographical references (leaves 71-72).
The effect of variability on turbine blade durability was assessed for seven design/operating parameters in three blade designs. The parameters included gas path and cooling convective parameters, metal and coating thermal conductivity and coating thickness. The durability life was modelled as limited by thermo-mechanical low cycle fatigue and creep. A nominal blade design as well as two additional variants were examined using deterministic and probabilistic approaches. External thermal and pressure boundary conditions were generated by three-dimensional CFD calculations. The location of expected failure was the bottom of the trailing edge cooling slot and was the same for all three designs examined. The nominal design had higher life and less variability for the ranges of design parameters examined. For the temperature range studied fatigue was the primary damage mechanism. The variation in cooling air bulk temperature was most important in setting the variation in blade durability life. This life variation was also affected by main gas bulk temperature and heat transfer coefficient, and cooling heat transfer coefficient, but to a lesser extent.
by Apostolos Kountras.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
31

Sheel, Minaskshi. "Probabilistic analysis of ground-holding strategies." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/28175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Küttler, Martin, Michael Roitzsch, Claude-Joachim Hamann, and Marcus Völp. "Probabilistic Analysis of Low-Criticality Execution." Technische Universität Dresden, 2017. https://tud.qucosa.de/id/qucosa%3A30798.

Full text
Abstract:
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds. Otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave in case of high-criticality jobs overrunning their optimistic low-criticality reservation.
APA, Harvard, Vancouver, ISO, and other styles
33

Yu, Jenn-Hwa. "Probabilistic analysis of some search algorithms /." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487683756126241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Mapuranga, Victor Philip. "Probabilistic seismic hazard analysis for Zimbabwe." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/43166.

Full text
Abstract:
In this study, the seismic hazards of Zimbabwe are presented as maps showing probabilistic peak ground acceleration (PGA). The maps show ground accelerations that have a 10% chance of being exceeded over a 50-year period, and are prepared using a homogenized 101-year catalogue compiled for seismic moment magnitude. Two approaches to probabilistic seismic hazard assessment were applied. The first was the widely used "deductive" approach (Cornell, 1968), which integrates geological and geophysical information together with seismic event catalogues in the assessment of seismic hazards. Application of the procedure includes several steps. As a first step, this procedure requires the delineation of potential seismic zones, which is strongly influenced by historic patterns and based on independent geologic evidence or tectonic features such as faults (Atkinson, 2004; Kijko and Graham, 1998). The second method was the "parametric-historic" approach of Kijko and Graham (1998, 1999), which has been developed for regions with incomplete catalogues and does not require the subjective delineation of active seismic zones. It combines the best features of the deductive Cornell-McGuire procedure and the historic method of Veneziano et al. (1984). Four (4) ground motion prediction equations suitable for hard rock conditions in a specified region were applied in the assessment of seismic hazards. The highest levels of hazard in Zimbabwe are along the south-eastern border of the country with Mozambique, the Lake Kariba area and the mid-Zambezi basin in the vicinity of the Save-Limpopo mobile belt. Results show that assessment of seismic hazard using the parametric-historic procedure largely gives a "mirror" of the seismicity pattern, whereas the classic Cornell-McGuire procedure gives results that reflect the delineated pattern of seismic zones; the two methods are best used as complements to each other, depending on the available input data.
Dissertation (MSc)--University of Pretoria, 2014.
lk2014
Physics
MSc
Unrestricted
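For orientation, the "10% chance of exceedance in 50 years" criterion in the abstract above corresponds, under the Poisson assumption common to both procedures it compares, to an annual exceedance rate of roughly 1/475 (a 475-year return period). The short sketch below works through that conversion and then reads a PGA off a purely hypothetical hazard curve; it is not derived from the Zimbabwe study.

```python
import numpy as np

# Poisson model: P(exceedance in T years) = 1 - exp(-lambda * T), so the
# "10% in 50 years" criterion gives lambda = -ln(0.9) / 50 ~ 1/475 per year.
T, p = 50.0, 0.10
lam_target = -np.log(1.0 - p) / T
print("target annual exceedance rate:", lam_target)

# Hypothetical hazard curve: annual rate of exceeding a given PGA (in g).
pga = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40])
lam = np.array([2e-2, 8e-3, 3.5e-3, 1.8e-3, 6e-4, 2e-4])

# Interpolate in log-rate to find the PGA mapped at the target rate.
pga_475 = np.interp(np.log(lam_target), np.log(lam[::-1]), pga[::-1])
print("PGA with 10%% chance of exceedance in 50 years: %.3f g" % pga_475)
```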
APA, Harvard, Vancouver, ISO, and other styles
35

SAVELY, JAMES PALMER. "PROBABILISTIC ANALYSIS OF FRACTURED ROCK MASSES." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184249.

Full text
Abstract:
Stability analysis of rock masses composed of small, discrete rock blocks that are in-place and interlocked should consider four components of failure: (1) Sliding between blocks. (2) Shearing through rock blocks. (3) Rolling blocks in a shear zone. (4) Crushing of rock blocks. Statistical rock mass description is used to define the characteristics of the rock blocks and the block assemblage. Clastic mechanics is one method of predicting stresses produced by the arrangement of rock blocks and the loading conditions. Failure begins at a point of maximum stress behind the slope. Progression of the failure is assumed if the first block fails because adjacent blocks will become overstressed. The location of the point of maximum stress is determined from the shape and arrangement of the constituent rock blocks. Because strength is mobilized block-by-block rather than instantaneously along a continuous shear surface, sliding between blocks shows less stability than a soil rotational shear analysis or a rigid block sliding analysis. Shearing through rock blocks occurs when maximum shear stress exceeds rock shear strength. Crushing of rock blocks is predicted if the normal stress exceeds the compressive strength of the rock block. A size-strength relationship is combined with the rock block size distribution curve to estimate crushing strength. Rotating blocks in a shear zone have been observed in model studies and as a mechanism in landslides. Stability analysis assumes that the rock mass is sufficiently loosened by blasting and excavation to allow blocks to rotate. The shear strength of rolling blocks is dynamic shear strength that is less than static sliding shear strength. This rolling mechanism can explain release of slope failures where there are no other obvious structural controls. Stability of each component of rock mass failure is calculated separately using capacity-demand reliability. These results are combined as a series-connected system to give the overall stability of the rock mass. This probability of failure for the rock mass system explicitly accounts for the four components of rock mass failure. Criteria for recognizing rock mass failure potential and examples applying the proposed method are presented.
APA, Harvard, Vancouver, ISO, and other styles
36

Blakely, Scott. "Probabilistic Analysis for Reliable Logic Circuits." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1860.

Full text
Abstract:
Continued aggressive scaling of electronic technology poses obstacles for maintaining circuit reliability. To this end, analysis of reliability is of increasing importance. A large number of inputs and gates, or correlations between failures, renders such analysis computationally complex. This work presents an accurate framework for reliability analysis of logic circuits, while inherently handling reconvergent fan-out without additional complexity. Combinational circuits are modeled stochastically as Discrete-Time Markov Chains, where the propagation of node logic levels and error probability distributions through the circuitry is used to determine error probabilities at nodes in the circuit. Model construction is scalable, as it is done on a gate-by-gate basis. The stochastic nature of the model allows various properties of the circuit to be formally analyzed by means of steady-state properties. Formally verifying the properties against the model can circumvent strenuous simulations while exhaustively checking all possible scenarios for given properties. Small combinational circuits are used to explain model construction, properties are presented for analysis of the system, more example circuits are demonstrated, and the accuracy of the method is verified against an existing simulation method.
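As a point of comparison with the analytical DTMC framework described above, the sketch below estimates the output error probability of a tiny NAND network by Monte Carlo simulation of per-gate errors; it is a simulation-style cross-check with invented parameters, not the thesis' model construction.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.01          # per-gate probability of producing the wrong value (assumed)
n = 500_000         # number of simulated input vectors

def nand(a, b, faulty):
    """2-input NAND on boolean arrays; optionally flip the output with prob eps."""
    out = ~(a & b)
    if faulty:
        flip = rng.random(out.shape) < eps
        return out ^ flip
    return out

# Small combinational circuit: y = NAND(NAND(x1, x2), NAND(x2, x3)).
# The shared input x2 creates reconvergent fan-out, which plain simulation
# handles naturally because both paths see the same sampled values.
x1, x2, x3 = (rng.random((3, n)) < 0.5)

def circuit(faulty):
    g1 = nand(x1, x2, faulty)
    g2 = nand(x2, x3, faulty)
    return nand(g1, g2, faulty)

error_prob = np.mean(circuit(True) != circuit(False))
print("output error probability: %.4f" % error_prob)
```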
APA, Harvard, Vancouver, ISO, and other styles
37

Hohn, Jennifer Lynn. "Generalized Probabilistic Bowling Distributions." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/82.

Full text
Abstract:
Have you ever wondered if you are better than the average bowler? If so, there are a variety of ways to compute the average score of a bowling game, including methods that account for a bowler’s skill level. In this thesis, we discuss several different ways to generate bowling scores randomly. For each distribution, we give results for the expected value and standard deviation of each frame's score, the expected value of the game’s final score, and the correlation coefficient between the score of the first and second roll of a single frame. Furthermore, we shall generalize the results in each distribution for an n-frame game on m pins. Additionally, we shall generalize the number of possible games when bowling n frames on m pins. Then, we shall derive the frequency distribution of each frame’s scores and the arithmetic mean for n frames on m pins. Finally, to summarize the variety of distributions, we shall make tables that display the results obtained from each distribution used to model a particular bowler’s score. We evaluate the special case when bowling 10 frames on 10 pins, which represents a standard bowling game.
APA, Harvard, Vancouver, ISO, and other styles
38

Mason, Dave. "Probabilistic Program Analysis for Software Component Reliability." Thesis, University of Waterloo, 2002. http://hdl.handle.net/10012/1059.

Full text
Abstract:
Components are widely seen by software engineers as an important technology to address the "software crisis". An important aspect of components in other areas of engineering is that system reliability can be estimated from the reliability of the components. We show how commonly proposed methods of reliability estimation and composition for software are inadequate because of differences between the models and the actual software systems, and we show where the assumptions from system reliability theory cause difficulty when applied to software. This thesis provides an approach to reliability that makes it possible, if not currently plausible, to compose component reliabilities so as to accurately and safely determine system reliability. Firstly, we extend previous work on input sub-domains, or partitions, such that our sub-domains can be sampled in a statistically sound way. We provide an algorithm to generate the most important partitions first, which is particularly important when there are an infinite number of input sub-domains. We combine analysis and testing to provide useful reliabilities for the various input sub-domains of a system, or component. This provides a methodology for calculating true reliability for a software system for any accurate statistical distribution of input values. Secondly, we present a calculus for probability density functions that permits accurately modeling the input distribution seen by each component in the system - a critically important issue in dealing with reliability of software components. Finally, we provide the system structuring calculus that allows a system designer to take components from component suppliers that have been built according to our rules and to determine the resulting system reliability. This can be done without access to the actual components. This work raises many issues, particularly about scalability of the proposed techniques and about the ability of the system designer to know the input profile to the level and kind of accuracy required. There are also large classes of components where the techniques are currently intractable, but we see this work as an important first step.
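A minimal sketch of the bookkeeping implied by the sub-domain approach described above: a component's per-demand reliability is a usage-weighted sum over input sub-domains, and system reliability then follows from the call structure under an independence assumption. All figures are hypothetical and the sketch is far simpler than the thesis' density-function calculus.

```python
# Reliability of one component over its input sub-domains: each sub-domain
# has a usage probability (operational profile) and an estimated
# per-demand reliability.  All numbers are hypothetical.
subdomains = [
    {"usage": 0.70, "reliability": 0.9999},   # common, well-tested inputs
    {"usage": 0.25, "reliability": 0.999},
    {"usage": 0.05, "reliability": 0.98},     # rare, poorly-covered inputs
]

component_rel = sum(s["usage"] * s["reliability"] for s in subdomains)

# A system that calls this component 3 times and another component
# (assumed reliability 0.9995) once per transaction, with independent demands:
system_rel = component_rel**3 * 0.9995
print("component reliability per demand:", round(component_rel, 6))
print("system reliability per transaction:", round(system_rel, 6))
```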
APA, Harvard, Vancouver, ISO, and other styles
39

Feng, Jianwen. "Probabilistic modelling of heterogeneous media." Thesis, Swansea University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.644724.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Nicola, Jérémy. "Robust, precise and reliable simultaneous localization and mapping for and underwater robot. Comparison and combination of probabilistic and set-membership methods for the SLAM problem." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0066/document.

Full text
Abstract:
Dans cette thèse on s'intéresse au problème de la localisation d'un robot sous-marin et de la cartographie en simultané d'un jeu de balises acoustiques installées sur le fond marin, en utilisant un distance-mètre acoustique et une centrale inertielle. Nous nous focalisons sur les deux approches principales utilisées pour résoudre ce type de problème: le filtrage de Kalman et le filtrage ensembliste basé sur l'analyse par intervalles. Le filtre de Kalman est optimal quand les équations d'état du robot sont linéaires et les bruits sont additifs, Gaussiens. Le filtrage par intervalles ne modélise pas les incertitudes dans un cadre probabiliste, et ne fait qu'une seule hypothèse sur leur nature: elles sont bornées. De plus, l'approche utilisant les intervalles permet la propagation rigoureuse des incertitudes, même quand les équations sont non linéaires. Cela résulte en une estimation hautement fiable, au prix d'une précision réduite. Nous montrons que dans un contexte sous-marin, quand le robot est équipé avec une centrale inertielle de haute précision, une partie des équations du SLAM peut raisonnablement être considérée comme linéaire avec un bruit Gaussien additif, en faisant le terrain de jeu idéal d'un filtre de Kalman. De l'autre côté, les équations liées aux observations du distance-mètre acoustique sont bien plus problématiques: le système n'est pas observable, les équations sont non linéaires, et les outliers sont fréquents. Ces conditions sont idéales pour une approche à erreur bornées basée sur l'analyse par intervalles. En prenant avantage des propriétés des bruits Gaussiens, cette thèse réconcilie le traitement probabiliste et ensembliste des incertitudes pour les systèmes aussi bien linéaires que non linéaires sujets à des bruits Gaussiens additifs. En raisonnant de manière géométrique, nous sommes capables d'exprimer la partie des équations du filtre de Kalman modélisant la dynamique du véhicule dans un cadre ensembliste. De la même manière, un traitement plus rigoureux et précis des incertitudes est décrit pour la partie des équations du filtre de Kalman liée aux mesures de distances. Ces outils peuvent ensuite être combinés pour obtenir un algorithme de SLAM qui est fiable, précis et robuste. Certaines des méthodes développées dans cette thèse sont illustrées sur des données réelles
In this thesis, we work on the problem of simultaneously localizing an underwater robot while mapping a set of acoustic beacons lying on the seafloor, using an acoustic range-meter and an inertial navigation system. We focus on the two main approaches classically used to solve this type of problem: Kalman filtering and set-membership filtering using interval analysis. The Kalman filter is optimal when the state equations of the robot are linear, and the noises are additive, white and Gaussian. The interval-based filter does not model uncertainties in a probabilistic framework, and makes only one assumption about their nature: they are bounded. Moreover, the interval-based approach allows to rigorously propagate the uncertainties, even when the equations are non-linear. This results in a high reliability in the set estimate, at the cost of a reduced precision. We show that in a subsea context, when the robot is equipped with a high precision inertial navigation system, a part of the SLAM equations can reasonably be seen as linear with additive Gaussian noise, making it the ideal playground of a Kalman filter. On the other hand, the equations related to the acoustic range-meter are much more problematic: the system is not observable, the equations are non-linear, and the outliers are frequent. These conditions are ideal for a set-based approach using interval analysis. By taking advantage of the properties of Gaussian noises, this thesis reconciles the probabilistic and set-membership processing of uncertainties for both linear and non-linear systems with additive Gaussian noises. By reasoning geometrically, we are able to express the part of the Kalman filter equations linked to the dynamics of the vehicle in a set-membership context. In the same way, a more rigorous and precise treatment of uncertainties is described for the part of the Kalman filter linked to the range-measurements. These two tools can then be combined to obtain a SLAM algorithm that is reliable, precise and robust. Some of the methods developed during this thesis are demonstrated on real data.
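To contrast the two uncertainty treatments the thesis combines, here is a deliberately tiny one-dimensional example: a scalar Kalman measurement update next to an interval contraction of a bounded-error range constraint. The numbers and the one-beacon setting are invented and do not reproduce the thesis' SLAM machinery.

```python
import numpy as np

# --- Probabilistic side: scalar Kalman measurement update -----------------
def kalman_update(mean, var, z, r):
    """Update a Gaussian state estimate N(mean, var) with a measurement z
    of variance r, for the linear observation model h(x) = x."""
    k = var / (var + r)                 # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var

mean, var = kalman_update(mean=10.0, var=4.0, z=11.2, r=1.0)
print("Kalman estimate: %.3f +/- %.3f" % (mean, np.sqrt(var)))

# --- Set-membership side: interval contraction of a range constraint ------
# Robot position x in [8, 14], beacon at 0, measured range in [10, 12]:
# the constraint |x - 0| = range contracts the position interval.
x_lo, x_hi = 8.0, 14.0
r_lo, r_hi = 10.0, 12.0
x_lo, x_hi = max(x_lo, r_lo), min(x_hi, r_hi)   # intersection of the two intervals
print("contracted position interval: [%.1f, %.1f]" % (x_lo, x_hi))
```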
APA, Harvard, Vancouver, ISO, and other styles
41

Mason, David Victor. "Probabilistic program analysis for software component reliability." Waterloo, Ont. : University of Waterloo, 2002. http://etd.uwaterloo.ca/etd/dmason2002.pdf.

Full text
Abstract:
Thesis (Ph.D.)--University of Waterloo, 2002.
"A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science". Includes bibliographical references. Also available in microfiche format.
APA, Harvard, Vancouver, ISO, and other styles
42

Milutinovic, Suzana. "On the limits of probabilistic timing analysis." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668475.

Full text
Abstract:
Over the last years, we are witnessing the steady and rapid growth of Critical Real-Time Embedded Systems (CRTES) industries, such as automotive and aerospace. Many of the increasingly-complex CRTES' functionalities that are currently implemented with mechanical means are moving towards an electromechanical implementation controlled by critical software. This trend results in a two-fold consequence. First, the size and complexity of critical software increases in every new embedded product. And second, high-performance hardware features like caches are more frequently used in real-time processors. The increase in complexity of CRTES challenges the validation and verification process, a necessary step to certify that the system is safe for deployment. Timing validation and verification includes the computation of the Worst-Case Execution Time (WCET) estimates, which need to be trustworthy and tight. Traditional timing analyses are challenged by the use of complex hardware/software, resulting in low-quality WCET estimates, which tend to add significant pessimism to guarantee estimates' trustworthiness. This calls for new solutions that help tightening WCET estimates in a safe manner. In this Thesis, we investigate the novel Measurement-Based Probabilistic Timing Analysis (MBPTA), which in its original version already shows potential to deliver trustworthy and tight WCETs for tasks running on complex systems. First, we propose a methodology to assess and ensure that all cache memory layouts, which can significantly impact WCET, have been adequately factored in the WCET estimation process. Second, we provide a solution to achieve simultaneously cache representativeness and full path coverage. This solution provides evidence proving that WCET estimates obtained are valid for all program execution paths regardless of how code and data are laid out in the cache. Lastly, we analyse and expose the main misconceptions and pitfalls that can prevent a sound application of WCET analysis based on extreme value theory, which is used as part of MBPTA.
En los últimos años, se ha podido observar un crecimiento rápido y sostenido de la industria de los sistemas embebidos críticos de tiempo real (abreviado en inglés CRTES), como por ejemplo la industria aeronáutica o la automovilística. En un futuro cercano, muchas de las funcionalidades complejas que actualmente se están implementando a través de sistemas mecánicos en los CRTES pasarán a ser controladas por software crítico. Esta tendencia tiene dos consecuencias claras. La primera, el tamaño y la complejidad del software se incrementará en cada nuevo producto embebido que se lance al mercado. La segunda, las técnicas hardware destinadas a alto rendimiento (por ejemplo, memorias caché) serán usadas más frecuentemente en los procesadores de tiempo real. El incremento en la complejidad de los CRTES impone un reto en los procesos de validación y verificación de los procesadores, un paso imprescindible para certificar que los sistemas se pueden comercializar de forma segura. La validación y verificación del tiempo de ejecución incluye la estimación del tiempo de ejecución en el peor caso (abreviado en inglés WCET), que debe ser precisa y certera. Desafortunadamente, los procesos tradicionales para analizar el tiempo de ejecución tienen problemas para analizar las complejas combinaciones entre el software y el hardware, produciendo estimaciones del WCET de mala calidad y conservadoras. Para superar dicha limitación, es necesario que florezcan nuevas técnicas que ayuden a proporcionar WCET más precisos de forma segura y automatizada. En esta Tesis se profundiza en la investigación referente al análisis probabilístico de tiempo de ejecución basado en medidas (abreviado en inglés MBPTA), cuyas primeras implementaciones muestran potencial para obtener un WCET preciso y certero en tareas ejecutadas en sistemas complejos. Primero, se propone una metodología para certificar que todas las distribuciones de la memoria caché, una de las estructuras más complejas de los CRTES, han sido contabilizadas adecuadamente durante el proceso de estimación del WCET. Segundo, se expone una solución para conseguir a la vez representatividad en la memoria caché y cobertura total en caminos críticos del programa. Dicha solución garantiza que la estimación WCET obtenida es válida para todos los caminos de ejecución, independientemente de como el código y los datos se guardan en la memoria caché. Finalmente, se analizan y discuten los mayores malentendidos y obstáculos que pueden prevenir la aplicabilidad del análisis de WCET basado en la teoría de valores extremos, la cual forma parte del MBPTA.
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Bin. "Integrating Software into PRA (Probabilistic Risk Analysis)." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1993.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2004
Thesis research directed by: Reliability Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
44

Rexhepi, Astrit. "Motion analysis using probabilistic and statistical reasoning." Thesis, University of Surrey, 2007. http://epubs.surrey.ac.uk/843205/.

Full text
Abstract:
The usual input to a motion analysis system is a temporal image sequence. Even though motion analysis is often called dynamic image analysis, it is frequently based on a small number of consecutive images, sometimes just two or three in a sequence. This case is similar to an analysis of static images, and the motion is actually analyzed at a higher level. There are three main groups of motion-related problems: motion detection, moving object detection and localization, and derivation of 3D object properties. In this thesis we focus on the second group. More specifically, within this group we deal with four main issues: moving object boundary detection, boundary delineation, boundary representation and description, and boundary matching for fast future location prediction. To detect moving object boundaries, a new theory derived from temporal co-occurrence matrices is proposed, developed and applied. Afterwards, a filter design is developed to get fast and accurate results. As with any boundary detection method, the output of this stage is usually a set of unlinked boundary segments. To assemble these segments into a meaningful boundary, a new active contour model has been proposed and developed that is capable of escaping energy minima caused by noise. Since our matching method is based on the correspondence of interest points (feature points), we needed a proper set of invariant descriptors in order to match contours of two successive frames. For this task, a new theory on boundary representation and description called the theory of variances and varilets has been proposed and developed. We used moments of the variance transform and the normalized variance transform for an initial matching of contours, which is in some sense a global matching. Afterwards, an iterative sub-mappings strategy has been proposed and applied for fine matching. An important property of the moment function is that the extrema of successive derivatives provide a coarse-to-fine matching, in which each feature point is assigned a descriptor induced from the normalized variance transform matrix.
APA, Harvard, Vancouver, ISO, and other styles
45

Rejimon, Thara. "Reliability-centric probabilistic analysis of VLSI circuits." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hughes, Nicholas Peter. "Probabilistic models for automated ECG interval analysis." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wojtczak, Dominik. "Recursive probabilistic models : efficient analysis and implementation." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3217.

Full text
Abstract:
This thesis examines Recursive Markov Chains (RMCs), their natural extensions and connection to other models. RMCs can model in a natural way probabilistic procedural programs and other systems that involve recursion and probability. An RMC is a set of ordinary finite state Markov Chains that are allowed to call each other recursively and it describes a potentially infinite, but countable, state ordinary Markov Chain. RMCs generalize in a precise sense several well studied probabilistic models in other domains such as natural language processing (Stochastic Context-Free Grammars), population dynamics (Multi-Type Branching Processes) and in queueing theory (Quasi-Birth-Death processes (QBDs)). In addition, RMCs can be extended to a controlled version called Recursive Markov Decision Processes (RMDPs) and also a game version referred to as Recursive (Simple) Stochastic Games (RSSGs). For analyzing RMCs, RMDPs, RSSGs we devised highly optimized numerical algorithms and implemented them in a tool called PReMo (Probabilistic Recursive Models analyzer). PReMo allows computation of the termination probability and expected termination time of RMCs and QBDs, and a restricted subset of RMDPs and RSSGs. The input models are described by the user in specifically designed simple input languages. Furthermore, in order to analyze the worst and best expected running time of probabilistic recursive programs we study models of RMDPs and RSSGs with positive rewards assigned to each of their transitions and provide new complexity upper and lower bounds of their analysis. We also establish some new connections between our models and models studied in queueing theory. Specifically, we show that (discrete time) QBDs can be described as a special subclass of RMCs and Tree-like QBDs, which are a generalization of QBDs, are equivalent to RMCs in a precise sense. We also prove that for a given QBD we can compute (in the unit cost RAM model) an approximation of its termination probabilities within i bits of precision in time polynomial in the size of the QBD and linear in i. Specifically, we show that we can do this using a decomposed Newton’s method.
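A miniature example of the kind of fixed-point system such tools solve: for a procedure that, with probability p, makes two recursive calls and otherwise returns, the termination probability is the least non-negative solution of x = (1 - p) + p x², computed below by simple value iteration. This is only an illustration of the underlying equations; PReMo itself relies on optimized numerical algorithms such as a decomposed Newton iteration.

```python
# Termination probability of a simple recursive procedure: with
# probability p it makes two recursive calls, with probability 1 - p it
# returns immediately.  The termination probability x is the least
# non-negative solution of  x = (1 - p) + p * x**2.
def termination_probability(p, iterations=10_000):
    x = 0.0
    for _ in range(iterations):
        x = (1.0 - p) + p * x * x      # Kleene / value iteration from 0
    return x

for p in (0.3, 0.5, 0.7):
    # Closed form for comparison: min(1, (1 - p) / p).
    # Note: value iteration converges slowly near the critical case p = 0.5.
    print(p,
          round(termination_probability(p), 6),
          round(min(1.0, (1 - p) / p), 6))
```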
APA, Harvard, Vancouver, ISO, and other styles
48

Wallace, William Frederick. "Design and analysis of probabilistic carry adders." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ho, K. H. L. "Probabilistic scene analysis of two dimensional images." Thesis, University of Bristol, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

GUEDES, MARIA CECILIA SAFADY. "DISCUSSION ON PROBABILISTIC ANALYSIS OF SLOPE STABILITY." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=1924@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Some aspects of the probabilistic analysis of slope stability in geotechnical engineering are studied in this thesis. A summary of the basic concepts of probability and statistics used throughout the work is presented. The methodology for obtaining the data needed for probabilistic analysis is described, including the quantity and location of samples, the computation of the means and variances of the soil parameters, and the quantification of the uncertainties associated with these values. The procedures of the three probabilistic methods most used in geotechnical engineering are presented, with special emphasis on the First-Order Second-Moment (FOSM) method. Probabilistic analyses are performed considering, separately, variations in the height and inclination of a mine slope under drained conditions. The application of the probabilistic methodology to undrained situations is also evaluated through the stability analysis of a breakwater on a soft clay deposit.
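The FOSM computation emphasised in the abstract propagates the means and variances of the soil parameters through a first-order Taylor expansion of the factor of safety. The sketch below shows that calculation for a generic factor-of-safety function; the infinite-slope expression and every numerical value are hypothetical stand-ins, not data from the thesis.

```python
import numpy as np
from math import erfc, sqrt

def fosm(fs, means, variances, rel_step=1e-3):
    """First-Order Second-Moment estimate: mean, variance and an approximate
    failure probability P(FS < 1) for a factor-of-safety function fs of
    independent random inputs with the given means and variances."""
    means = np.asarray(means, dtype=float)
    mean_fs = fs(means)                          # first-order mean: FS at the means
    var_fs = 0.0
    for i, var_i in enumerate(variances):
        h = rel_step * (abs(means[i]) or 1.0)    # step for the finite difference
        xp, xm = means.copy(), means.copy()
        xp[i] += h
        xm[i] -= h
        dfs_dxi = (fs(xp) - fs(xm)) / (2.0 * h)  # central-difference derivative
        var_fs += dfs_dxi ** 2 * var_i           # first-order variance contribution
    beta = (mean_fs - 1.0) / sqrt(var_fs)        # reliability index against FS = 1
    return mean_fs, var_fs, 0.5 * erfc(beta / sqrt(2.0))  # normal-tail estimate

# Hypothetical infinite-slope factor of safety: x = (cohesion c in kPa,
# friction angle phi in degrees), with assumed unit weight, depth and slope angle.
gamma, depth, slope = 18.0, 5.0, np.radians(30.0)
def fs(x):
    c, phi = x[0], np.radians(x[1])
    return (c + gamma * depth * np.cos(slope) ** 2 * np.tan(phi)) / (
        gamma * depth * np.sin(slope) * np.cos(slope))

print(fosm(fs, means=[10.0, 32.0], variances=[4.0, 9.0]))
```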
APA, Harvard, Vancouver, ISO, and other styles
