Follow this link to see other types of publications on the topic: Uncertainty quantification.

Theses / dissertations on the topic "Uncertainty quantification"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the 50 best theses / dissertations for your research on the topic "Uncertainty quantification".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse theses / dissertations from a wide range of scientific fields and compile a correct bibliography.

1

Nguyen, Trieu Nhat Thanh. "Modélisation et simulation d'éléments finis du système pelvien humain vers un outil d'aide à la décision fiable : incertitude des données et des lois de comportement". Electronic Thesis or Diss., Centrale Lille Institut, 2024. http://www.theses.fr/2024CLIL0015.

Full text of the source
Abstract:
This thesis developed an original approach to quantify the uncertainties associated with the hyperelastic properties of soft tissues, using both precise and imprecise probabilities. The computational protocol was extended to quantify the uncertainties in active uterine contractions during simulations of the second stage of labor. In addition, a fetal descent simulation was created, integrating MRI-based active uterine contraction data and an associated uncertainty quantification. The study showed that non-intrusive Polynomial Chaos Expansion (PCE) is more efficient than direct Monte Carlo simulations. This work highlights the importance of quantifying and propagating uncertainties in the hyperelastic properties of uterine tissues during labor simulations, thereby improving the reliability of simulation results. For the first time, it addresses the uncertainty quantification of active uterine contractions during labor, ensuring reliable and valid simulation results. The fetal descent simulation, consistent with theoretical and MRI data, validates the accuracy of the models by reflecting the physiological processes, underlining the need to include active uterine contractions for more realistic results. The study also emphasizes the importance of assessing global parameter sensitivity, uncertainty, and simulation results for reliable clinical applications. In conclusion, this research significantly advances childbirth simulations by providing a robust framework for uncertainty quantification, improving the reliability of simulation results and supporting better clinical decision-making.
Future work will extend the process to a complete model of the pelvic system, including the pelvic bone, ligaments, and other organs (such as the bladder and rectum), in order to simulate the entire delivery process. More complex behaviors of the pelvic soft tissues will be studied to better describe the fetal interaction during labor. The use of 3D MRI data, if available, will allow a better evaluation, particularly of fetal rotation during expulsion. A complete model of the maternal pelvis will be coupled with reinforcement learning to identify delivery mechanisms. In addition, a more complex combination of fiber orientations will be considered. To improve the Monte Carlo method, variance reduction techniques and optimization strategies such as importance sampling, Latin hypercube sampling, and Markov chain Monte Carlo methods will be used to reduce sample sizes while maintaining accuracy. Methods for faster convergence and increased accuracy in uncertainty quantification, as discussed by Hauseux et al. (2017), will be explored. Other formulations of the stochastic finite element method (SFEM), such as the spectral stochastic finite element method (SSFEM), will be considered for uncertainty quantification, and intrusive methods such as stochastic Galerkin will be used for their computational advantages. These approaches could improve uncertainty quantification in future studies. Finally, the developed approach could be adapted to patient-specific modeling and to simulations of delivery complications, making it possible to identify risks and potential therapeutic solutions for personalized medical interventions and improved patient outcomes.
Approximately 0.5 million deaths during childbirth occur annually, as reported by the World Health Organization (WHO). One prominent cause is complicated obstructed labor, also known as labor dystocia. This condition arises when the baby fails to navigate the birth canal despite normal uterine contractions. Therefore, understanding this complex physiological process is essential for improving diagnosis, optimizing clinical interventions, and defining predictive and preventive strategies. Currently, due to the complexity of experimental protocols and associated ethical issues, computational modeling and simulation of childbirth have emerged as the most promising solutions to achieve these objectives. However, it is crucial to quantify the significant influence of inherent uncertainties in the parameters and behaviors of the human pelvic system and their propagation through simulations to establish reliable indicators for clinical decision-making. Specifically, epistemic uncertainties due to lack of knowledge and aleatoric uncertainties due to intrinsic variability in physical domain geometries, material properties, and loads are often not fully understood and are frequently overlooked in the current literature on childbirth computational modeling and simulation. This PhD thesis makes three original contributions aimed at overcoming these challenges: 1) development and evaluation of a computational workflow for the uncertainty quantification of hyperelastic properties of the soft tissue using precise and imprecise probabilities; 2) extrapolation of the developed protocol for the uncertainty quantification of the active uterine contraction during the second stage of labor simulation; and 3) development and evaluation of a fetus descent simulation with the active uterine contraction using MRI-based observations and an associated uncertainty quantification process. This thesis paves the way toward more reliable childbirth modeling and simulation under passive and active uterine contractions. In fact, the developed computational protocols could be extended to patient-specific modeling and simulation to identify the risk factors and associated strategies for vaginal delivery complications in a straightforward manner. Finally, the investigation of stochastic finite element formulations will make it possible to reduce the computational cost of the uncertainty quantification process.
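As a hedged illustration of the non-intrusive propagation compared in this work, the sketch below pushes a single uncertain hyperelastic stiffness parameter through a toy response by direct Monte Carlo and by a small non-intrusive polynomial chaos fit; the fe_response function and all numerical values are invented stand-ins, not the thesis's finite-element pelvic model.

```python
# Illustrative sketch only: a toy stand-in for the finite-element response,
# with one uncertain hyperelastic stiffness parameter mu ~ N(mean, sd).
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
mu_mean, mu_sd = 0.25, 0.05           # assumed values, for illustration only

def fe_response(mu):                   # placeholder for an expensive FE simulation
    return 12.0 * np.tanh(3.0 * mu)    # e.g. a peak tissue stretch (arbitrary units)

# 1) Direct Monte Carlo propagation
xi = rng.standard_normal(20_000)
q_mc = fe_response(mu_mean + mu_sd * xi)
print("MC   mean, std:", q_mc.mean(), q_mc.std())

# 2) Non-intrusive polynomial chaos: fit Hermite coefficients on a few model runs
xi_train = rng.standard_normal(64)                 # only 64 "simulations"
q_train = fe_response(mu_mean + mu_sd * xi_train)
V = He.hermevander(xi_train, 4)                    # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, q_train, rcond=None)
# For a Hermite chaos, mean = c0 and variance = sum_k k! * c_k^2
var_pce = sum(factorial(k) * coef[k] ** 2 for k in range(1, 5))
print("PCE  mean, std:", coef[0], np.sqrt(var_pce))
```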
2

Elfverson, Daniel. "Multiscale Methods and Uncertainty Quantification". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-262354.

Full text of the source
Abstract:
In this thesis we consider two great challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to lack of or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probability. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probability.
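As a hedged illustration of the plain Monte Carlo estimators that this thesis improves upon, the sketch below estimates a failure probability and a p-quantile of a toy output; quantity_of_interest is a hypothetical stand-in for a PDE solve with random data, not the thesis's code.

```python
# Minimal sketch of standard Monte Carlo estimation of P(Q > q_crit) and a p-quantile.
import numpy as np

rng = np.random.default_rng(1)

def quantity_of_interest(z):          # placeholder for a PDE solve with random data z
    return np.exp(0.3 * z) + 0.1 * z**2

z = rng.standard_normal(100_000)
q = quantity_of_interest(z)

q_crit = 2.0
p_fail = np.mean(q > q_crit)                      # failure probability estimate
se = np.sqrt(p_fail * (1 - p_fail) / q.size)      # its Monte Carlo standard error
q95 = np.quantile(q, 0.95)                        # p-quantile estimate (p = 0.95)
print(f"P(Q>{q_crit}) ~ {p_fail:.4f} +/- {se:.4f},  95%-quantile ~ {q95:.3f}")
```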
3

Parkinson, Matthew. "Uncertainty quantification in Radiative Transport". Thesis, University of Bath, 2019. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.767610.

Full text of the source
Abstract:
We study how uncertainty in the input data of the Radiative Transport equation (RTE), affects the distribution of (functionals of) its solution (the output data). The RTE is an integro-differential equation, in up to seven independent variables, that models the behaviour of rarefied particles (such as photons and neutrons) in a domain. Its applications include nuclear reactor design, radiation shielding, medical imaging, optical tomography and astrophysics. We focus on the RTE in the context of nuclear reactor physics where, to design and maintain safe reactors, understanding the effects of uncertainty is of great importance. There are many potential sources of uncertainty within a nuclear reactor. These include the geometry of the reactor, the material composition and reactor wear. Here we consider uncertainty in the macroscopic cross-sections ('the coefficients'), representing them as correlated spatial random fields. We wish to estimate the statistics of a problem-specific quantity of interest (under the influence of the given uncertainty in the cross-sections), which is defined as a functional of the scalar flux. This is the forward problem of Uncertainty Quantification. We seek accurate and efficient methods for estimating these statistics. Thus far, the research community studying Uncertainty Quantification in radiative transport has focused on the Polynomial Chaos expansion. However, it is known that the number of terms in the expansion grows exponentially with respect to the number of stochastic dimensions and the order of the expansion, i.e. polynomial chaos suffers from the curse of dimensionality. Instead, we focus our attention on variants of Monte Carlo sampling - studying standard and quasi-Monte Carlo methods, and their multilevel and multi-index variants. We show numerically that the quasi-Monte Carlo rules, and the multilevel variance reduction techniques, give substantial gains over the standard Monte Carlo method for a variety of radiative transport problems. Moreover, we report problems in up to 3600 stochastic dimensions, far beyond the capability of polynomial chaos. A large part of this thesis is focused towards a rigorous proof that the multilevel Monte Carlo method is superior to the standard Monte Carlo method, for the RTE in one spatial and one angular dimension with random cross-sections. This is the first rigorous theory of Uncertainty Quantification for transport problems and the first rigorous theory for Uncertainty Quantification for any PDE problem which accounts for a path-dependent stability condition. To achieve this result, we first present an error analysis (including a stability bound on the discretisation parameters) for the combined spatial and angular discretisation of the spatially heterogeneous RTE, which is explicit in the heterogeneous coefficients. We can then extend this result to prove probabilistic bounds on the error, under assumptions on the statistics of the cross-sections and provided the discretisation satisfies the stability condition pathwise. The multilevel Monte Carlo complexity result follows. 
Amongst other novel contributions, we: introduce a method which combines a direct and an iterative solver to accelerate the computation of the scalar flux, by adaptively choosing the fastest solver based on the given coefficients; numerically test an iterative eigensolver, which uses a single source iteration within each loop of a shifted inverse power iteration; and propose a novel model for (random) heterogeneity in concrete which generates (piecewise) discontinuous coefficients according to the material type, but where the composition of materials is spatially correlated.
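For readers unfamiliar with the multilevel Monte Carlo construction referred to above, the following sketch shows the generic telescoping-sum estimator; solve_on_level is a hypothetical stand-in for a transport solve at discretisation level l, not the models analysed in the thesis.

```python
# Hedged sketch of a generic multilevel Monte Carlo estimator for E[Q].
import numpy as np

rng = np.random.default_rng(2)

def solve_on_level(level, z):
    # Placeholder: finer levels add smaller corrections (weak error ~ 2**-level).
    return np.exp(0.3 * z) + 2.0 ** (-level) * np.sin(5 * z)

def mlmc_estimate(n_samples_per_level):
    total = 0.0
    for level, n in enumerate(n_samples_per_level):
        z = rng.standard_normal(n)
        fine = solve_on_level(level, z)
        coarse = solve_on_level(level - 1, z) if level > 0 else 0.0
        total += np.mean(fine - coarse)      # telescoping sum of level corrections
    return total

# Many cheap coarse samples, few expensive fine ones.
print("MLMC estimate of E[Q]:", mlmc_estimate([20_000, 4_000, 800, 160]))
```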
4

Carson, J. "Uncertainty quantification in palaeoclimate reconstruction". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29076/.

Full text of the source
Abstract:
Studying the dynamics of the palaeoclimate is a challenging problem. Part of the challenge lies in the fact that our understanding must be based on only a single realisation of the climate system. With only one climate history, it is essential that palaeoclimate data are used to their full extent, and that uncertainties arising from both data and modelling are well characterised. This is the motivation behind this thesis, which explores approaches for uncertainty quantification in problems related to palaeoclimate reconstruction. We focus on uncertainty quantification problems for the glacial-interglacial cycle, namely parameter estimation, model comparison, and age estimation of palaeoclimate observations. We develop principled data assimilation schemes that allow us to assimilate palaeoclimate data into phenomenological models of the glacial-interglacial cycle. The statistical and modelling approaches we take in this thesis means that this amounts to the task of performing Bayesian inference for multivariate stochastic differential equations that are only partially observed. One contribution of this thesis is the synthesis of recent methodological advances in approximate Bayesian computation and particle filter methods. We provide an up-to-date overview that relates the different approaches and provides new insights into their performance. Through simulation studies we compare these approaches using a common benchmark, and in doing so we highlight the relative strengths and weaknesses of each method. There are two main scientific contributions in this thesis. The first is that by using inference methods to jointly perform parameter estimation and model comparison, we demonstrate that the current two-stage practice of first estimating observation times, and then treating them as fixed for subsequent analysis, leads to conclusions that are not robust to the methods used for estimating the observation times. The second main contribution is the development of a novel age model based on a linear sediment accumulation model. By extending the target of the particle filter we are able to jointly perform parameter estimation, model comparison, and observation age estimation. In doing so, we are able to perform palaeoclimate reconstruction using sediment core data that takes age uncertainty in the data into account, thus solving the problem of dating uncertainty highlighted above.
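A minimal sketch of a bootstrap particle filter of the kind synthesised in this thesis is given below, assuming a toy one-dimensional Ornstein-Uhlenbeck state equation with Gaussian observations rather than the phenomenological glacial-cycle models; the returned log-likelihood estimate is the ingredient reused for parameter estimation and model comparison.

```python
# Illustrative bootstrap particle filter for a partially observed 1-D SDE.
import numpy as np

rng = np.random.default_rng(3)

def particle_filter(obs, theta, n_particles=500, dt=0.1, obs_sd=0.3):
    drift, diff = theta                      # SDE parameters
    x = rng.standard_normal(n_particles)     # initial particle cloud
    log_lik = 0.0
    for y in obs:
        # Euler-Maruyama propagation of each particle through the SDE
        x = x - drift * x * dt + diff * np.sqrt(dt) * rng.standard_normal(n_particles)
        # Gaussian observation density as importance weight
        w = np.exp(-0.5 * ((y - x) / obs_sd) ** 2) / (obs_sd * np.sqrt(2 * np.pi))
        log_lik += np.log(w.mean() + 1e-300)     # marginal likelihood contribution
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)  # multinomial resampling
    return log_lik

obs = rng.standard_normal(50) * 0.8              # synthetic observations
print("log-likelihood estimate:", particle_filter(obs, theta=(0.5, 1.0)))
```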
5

Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models". University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.

Full text of the source
6

Kalmikov, Alexander G. "Uncertainty Quantification in ocean state estimation". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79291.

Full text of the source
Abstract:
Thesis (Ph. D.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 158-160).
Quantifying uncertainty and error bounds is a key outstanding challenge in ocean state estimation and climate research. It is particularly difficult due to the large dimensionality of this nonlinear estimation problem and the number of uncertain variables involved. The "Estimating the Circulation and Climate of the Oceans" (ECCO) consortium has developed a scalable system for dynamically consistent estimation of global time-evolving ocean state by optimal combination of ocean general circulation model (GCM) with diverse ocean observations. The estimation system is based on the "adjoint method" solution of an unconstrained least-squares optimization problem formulated with the method of Lagrange multipliers for fitting the dynamical ocean model to observations. The dynamical consistency requirement of ocean state estimation necessitates this approach over sequential data assimilation and reanalysis smoothing techniques. In addition, it is computationally advantageous because calculation and storage of large covariance matrices is not required. However, this is also a drawback of the adjoint method, which lacks a native formalism for error propagation and quantification of assimilated uncertainty. The objective of this dissertation is to resolve that limitation by developing a feasible computational methodology for uncertainty analysis in dynamically consistent state estimation, applicable to the large dimensionality of global ocean models. Hessian (second derivative-based) methodology is developed for Uncertainty Quantification (UQ) in large-scale ocean state estimation, extending the gradient-based adjoint method to employ the second order geometry information of the model-data misfit function in a high-dimensional control space. Large error covariance matrices are evaluated by inverting the Hessian matrix with the developed scalable matrix-free numerical linear algebra algorithms. Hessian-vector product and Jacobian derivative codes of the MIT general circulation model (MITgcm) are generated by means of algorithmic differentiation (AD). Computational complexity of the Hessian code is reduced by tangent linear differentiation of the adjoint code, which preserves the speedup of adjoint checkpointing schemes in the second derivative calculation. A Lanczos algorithm is applied for extracting the leading rank eigenvectors and eigenvalues of the Hessian matrix. The eigenvectors represent the constrained uncertainty patterns. The inverse eigenvalues are the corresponding uncertainties. The dimensionality of UQ calculations is reduced by eliminating the uncertainty null-space unconstrained by the supplied observations. Inverse and forward uncertainty propagation schemes are designed for assimilating observation and control variable uncertainties, and for projecting these uncertainties onto oceanographic target quantities. Two versions of these schemes are developed: one evaluates reduction of prior uncertainties, while another does not require prior assumptions. The analysis of uncertainty propagation in the ocean model is time-resolving. It captures the dynamics of uncertainty evolution and reveals transient and stationary uncertainty regimes. The system is applied to quantifying uncertainties of Antarctic Circumpolar Current (ACC) transport in a global barotropic configuration of the MITgcm. The model is constrained by synthetic observations of sea surface height and velocities. The control space consists of two-dimensional maps of initial and boundary conditions and model parameters. 
The size of the Hessian matrix is O(10^10) elements, which would require O(60 GB) of uncompressed storage. It is demonstrated how the choice of observations and their geographic coverage determines the reduction in uncertainties of the estimated transport. The system also yields information on how well the control fields are constrained by the observations. The effects of controls uncertainty reduction due to decrease of diagonal covariance terms are compared to dynamical coupling of controls through off-diagonal covariance terms. The correlations of controls introduced by observation uncertainty assimilation are found to dominate the reduction of uncertainty of transport. An idealized analytical model of ACC guides a detailed time-resolving understanding of uncertainty dynamics. Keywords: Adjoint model uncertainty, sensitivity, posterior error reduction, reduced rank Hessian matrix, Automatic Differentiation, ocean state estimation, barotropic model, Drake Passage transport.
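The matrix-free Lanczos step described above can be illustrated with the short sketch below, which extracts leading eigenpairs of a Hessian known only through Hessian-vector products; the hessian_vector_product function is an invented low-rank surrogate, not MITgcm adjoint or tangent-linear code.

```python
# Sketch: leading eigenpairs of a Hessian available only via Hessian-vector products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 2_000                                   # control-space dimension (illustrative)
rng = np.random.default_rng(4)
U = rng.standard_normal((n, 10))            # hidden low-rank structure of the misfit

def hessian_vector_product(v):
    # Stand-in for an adjoint/tangent-linear sweep returning H @ v
    return U @ (U.T @ v) + 1e-3 * v         # low-rank part plus small regular part

H = LinearOperator((n, n), matvec=hessian_vector_product)
eigenvalues, eigenvectors = eigsh(H, k=5, which="LM")   # Lanczos-type solver

# Following the abstract, inverse eigenvalues give the corresponding uncertainties.
print("leading eigenvalues  :", eigenvalues[::-1])
print("implied uncertainties:", 1.0 / eigenvalues[::-1])
```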
by Alexander G. Kalmikov.
Ph.D.
7

Malenova, Gabriela. "Uncertainty quantification for high frequency waves". Licentiate thesis, KTH, Numerisk analys, NA, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186287.

Full text of the source
Abstract:
We consider high frequency waves satisfying the scalar wave equation with highly oscillatory initial data. The speed of propagation of the medium as well as the phase and amplitude of the initial data is assumed to be uncertain, described by a finite number of independent random variables with known probability distributions. We introduce quantities of interest (QoIs) as local averages of the squared modulus of the wave solution, or its derivatives. The regularity of these QoIs in terms of the input random parameters and the wavelength is important for uncertainty quantification methods based on interpolation in the stochastic space. In particular, the size of the derivatives should be bounded and independent of the wavelength. In the contributed papers, we show that the QoIs indeed have this property, despite the highly oscillatory character of the waves.

QC 20160510

8

Roy, Pamphile. "Uncertainty quantification in high dimensional problems". Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0038.

Full text of the source
Abstract:
Uncertainties are part of the world around us. Restricting oneself to a single nominal value is often too limiting, all the more so when complex systems are involved. Understanding the nature and impact of these uncertainties has become an important aspect of any engineering work. From a societal point of view, uncertainties play an important role in decision-making processes. The latest European Commission recommendations on risk analysis underline the importance of treating uncertainties. In order to understand uncertainties, a new mathematical discipline called uncertainty quantification has been created. This field brings together a wide range of statistical analysis methods that aim to link perturbations of a system's input parameters (design of experiments) to a quantity of interest. The objective of this thesis work is to propose improvements to various methodological aspects of uncertainty quantification in the context of costly numerical simulation. This involves using existing methods with a multi-strategy approach as well as creating new methods. In this context, new sampling and resampling methods have been developed to better capture variability in high-dimensional problems. In addition, new uncertainty visualization methods are proposed for cases with a high-dimensional input parameter space and a high-dimensional quantity of interest. The developed methods can be used in various fields such as hydraulic modelling and aerodynamic modelling. Their contribution is demonstrated on realistic systems using computational fluid dynamics tools. Finally, these methods are not limited to numerical simulation and can also be applied to real experimental setups.
Uncertainties are predominant in the world that we know. Referring therefore to a single nominal value is too restrictive, especially when it comes to complex systems. Understanding the nature and the impact of these uncertainties has become an important aspect of engineering work. From a societal point of view, uncertainties play a role in decision-making. Through the European Commission's Better Regulation Guideline, impact assessments are now advised to take uncertainties into account. In order to understand these uncertainties, the mathematical field of uncertainty quantification (UQ) has been formed. UQ encompasses a large palette of statistical tools and seeks to link a set of input perturbations on a system (design of experiments) to a quantity of interest. The purpose of this work is to propose improvements on various methodological aspects of uncertainty quantification applied to costly numerical simulations. This is achieved by using existing methods with a multi-strategy approach but also by creating new methods. In this context, novel sampling and resampling approaches have been developed to better capture the variability of the physical phenomenon when dealing with a high number of perturbed inputs. These make it possible to reduce the number of simulations required to describe the system. Moreover, novel methods are proposed to visualize uncertainties when dealing with either a high-dimensional input parameter space or a high-dimensional quantity of interest. The developed methods can be used in various fields like hydraulic modelling and aerodynamic modelling. Their capabilities are demonstrated in realistic systems using well-established computational fluid dynamics tools. Lastly, they are not limited to numerical experiments and can equally be used for real experiments.
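As a small illustration of the design-of-experiments sampling discussed above, the sketch below draws a Latin hypercube design over a 20-dimensional input space and rescales it to physical bounds; the dimensionality and the bounds are assumptions, not values from the thesis.

```python
# Hedged sketch of a space-filling design of experiments with Latin hypercube sampling.
from scipy.stats import qmc

n_inputs, n_runs = 20, 128
sampler = qmc.LatinHypercube(d=n_inputs, seed=5)
unit_sample = sampler.random(n=n_runs)                 # points in [0, 1]^d

lower = [0.0] * n_inputs
upper = [1.0] * (n_inputs - 1) + [10.0]                # e.g. one input on a wider range
design = qmc.scale(unit_sample, lower, upper)          # design of experiments to run

print("design shape:", design.shape)
print("discrepancy of the unit design:", qmc.discrepancy(unit_sample))
```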
9

Alvarado, Martin Guillermo. "Quantification of uncertainty during history matching". Texas A&M University, 2003. http://hdl.handle.net/1969/463.

Full text of the source
10

Jimenez, Edwin. "Uncertainty quantification of nonlinear stochastic phenomena". Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-11092009-161351/.

Full text of the source
Abstract:
Thesis (Ph. D.)--Florida State University, 2009.
Advisor: M.Y. Hussaini, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (viewed on Mar. 16, 2010). Document formatted into pages; contains xii, 113 pages. Includes bibliographical references.
11

Timmins, Benjamin H. "Automatic Particle Image Velocimetry Uncertainty Quantification". DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/884.

Full text of the source
Abstract:
The uncertainty of any measurement is the interval in which one believes the actual error lies. Particle Image Velocimetry (PIV) measurement error depends on the PIV algorithm used, a wide range of user inputs, flow characteristics, and the experimental setup. Since these factors vary in time and space, they lead to nonuniform error throughout the flow field. As such, a universal PIV uncertainty estimate is not adequate and can be misleading. This is of particular interest when PIV data are used for comparison with computational or experimental data. A method to estimate the uncertainty due to the PIV calculation of each individual velocity measurement is presented. The relationship between four error sources and their contribution to PIV error is first determined. The sources, or parameters, considered are particle image diameter, particle density, particle displacement, and velocity gradient, although this choice in parameters is arbitrary and may not be complete. This information provides a four-dimensional "uncertainty surface" for the PIV algorithm used. After PIV processing, our code "measures" the value of each of these parameters and estimates the velocity uncertainty for each vector in the flow field. The reliability of the methodology is validated using known flow fields so the actual error can be determined. Analysis shows that, for most flows, the uncertainty distribution obtained using this method fits the confidence interval. The method is general and can be adapted to any PIV analysis.
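A hedged sketch of the "uncertainty surface" lookup described above is given below: an uncertainty level tabulated over a grid of image parameters is interpolated at the parameter values measured for each vector. Only two of the four parameters are shown, and the tabulated values are invented for illustration, not taken from the thesis.

```python
# Illustrative lookup of a pre-computed uncertainty table over PIV image parameters.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

diameter_px = np.linspace(1.0, 5.0, 9)          # particle image diameter grid
displacement_px = np.linspace(0.0, 8.0, 17)     # particle displacement grid
# Hypothetical pre-computed uncertainty table (pixels), e.g. from synthetic images
table = (0.05 + 0.02 * np.abs(diameter_px[:, None] - 2.5)
         + 0.01 * displacement_px[None, :])

uncertainty_surface = RegularGridInterpolator(
    (diameter_px, displacement_px), table, bounds_error=False, fill_value=None
)

# "Measured" parameters for three velocity vectors in a PIV field
measured = np.array([[2.2, 1.5], [3.8, 6.0], [1.4, 0.3]])
print("per-vector uncertainty (px):", uncertainty_surface(measured))
```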
12

Yu, Xuanlong. "Uncertainty quantification for vision regression tasks". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG094.

Full text of the source
Abstract:
This work focuses on uncertainty quantification for deep neural networks, which is vital for the reliability and accuracy of deep learning. However, complex network design and limited input data make estimating uncertainties difficult. At the same time, uncertainty quantification for regression tasks has received less attention than for classification tasks, owing to the simpler standardized output of the latter and their high importance. Yet regression problems are encountered in a wide range of computer vision applications. Our main research direction concerns post-hoc methods, and in particular auxiliary networks, which are one of the most effective means of estimating the uncertainty of main-task predictions without modifying the main-task model. At the same time, the application scenario mainly focuses on visual regression tasks. In addition, we also provide an uncertainty quantification method based on a modified main-task model and a dataset for evaluating the quality and robustness of uncertainty estimates. We first propose Side Learning Uncertainty for Regression Problems (SLURP), a generic approach for regression uncertainty estimation via an auxiliary network that exploits the output and the intermediate representations generated by the main-task model. The auxiliary network learns the prediction error of the main-task model and can provide uncertainty estimates comparable to those of ensemble approaches for different pixel-wise regression tasks. To be considered robust, an auxiliary uncertainty estimator must be able to maintain its performance and to trigger higher uncertainties when encountering Out-of-Distribution (OOD) inputs, i.e., to provide robust aleatoric and epistemic uncertainty. We consider that SLURP is mainly suited to aleatoric uncertainty estimates. Moreover, the robustness of auxiliary uncertainty estimators has not been explored. Our second work proposes a generalized auxiliary uncertainty estimator scheme, introducing the Laplace distribution for robust aleatoric uncertainty estimation and the Discretization-Induced Dirichlet pOsterior (DIDO) for epistemic uncertainty. Extensive experiments confirm its robustness across various tasks. Furthermore, to introduce DIDO, we present a survey of solutions that apply discretization strategies to regression tasks and develop a post-hoc uncertainty quantification solution, dubbed Expectation of Distance (E-Dist), which outperforms the other post-hoc solutions under the same conditions. We also study single-pass uncertainty quantification methods based on the adjusted main-task model. We propose Latent Discriminant deterministic Uncertainty (LDU), which advances scalable deterministic uncertainty estimation and competes with Deep Ensembles on monocular depth estimation tasks. In terms of uncertainty quantification evaluation, we propose the Multiple Uncertainty Autonomous Driving (MUAD) dataset, supporting various computer vision tasks in different urban scenarios with challenging OOD examples. In summary, we propose new solutions and benchmarks for deep learning uncertainty quantification, namely SLURP, E-Dist, DIDO, and LDU. In addition, we propose the MUAD dataset to provide a more comprehensive evaluation of autonomous driving scenarios with different sources of uncertainty.
This work focuses on uncertainty quantification for deep neural networks, which is vital for reliability and accuracy in deep learning. However, complex network design and limited training data make estimating uncertainties challenging. Meanwhile, uncertainty quantification for regression tasks has received less attention than for classification ones due to the more straightforward standardized output of the latter and their high importance. However, regression problems are encountered in a wide range of applications in computer vision. Our main research direction is on post-hoc methods, and especially auxiliary networks, which are one of the most effective means of estimating the uncertainty of main task predictions without modifying the main task model. At the same time, the application scenario mainly focuses on visual regression tasks. In addition, we also provide an uncertainty quantification method based on the modified main task model and a dataset for evaluating the quality and robustness of uncertainty estimates.We first propose Side Learning Uncertainty for Regression Problems (SLURP), a generic approach for regression uncertainty estimation via an auxiliary network that exploits the output and the intermediate representations generated by the main task model. This auxiliary network effectively captures prediction errors and competes with ensemble methods in pixel-wise regression tasks.To be considered robust, an auxiliary uncertainty estimator must be capable of maintaining its performance and triggering higher uncertainties while encountering Out-of-Distribution (OOD) inputs, i.e., to provide robust aleatoric and epistemic uncertainty. We consider that SLURP is mainly adapted for aleatoric uncertainty estimates. Moreover, the robustness of the auxiliary uncertainty estimators has not been explored. Our second work presents a generalized auxiliary uncertainty estimator scheme, introducing the Laplace distribution for robust aleatoric uncertainty estimation and Discretization-Induced Dirichlet pOsterior (DIDO) for epistemic uncertainty. Extensive experiments confirm robustness in various tasks.Furthermore, to introduce DIDO, we provide a survey paper on regression with discretization strategies, developing a post-hoc uncertainty quantification solution, dubbed Expectation of Distance (E-Dist), which outperforms the other post-hoc solutions under the same settings. Additionally, we investigate single-pass uncertainty quantification methods, introducing Discriminant deterministic Uncertainty (LDU), which advances scalable deterministic uncertainty estimation and competes with Deep Ensembles on monocular depth estimation tasks.In terms of uncertainty quantification evaluation, we offer the Multiple Uncertainty Autonomous Driving dataset (MUAD), supporting diverse computer vision tasks in varying urban scenarios with challenging out-of-distribution examples.In summary, we contribute new solutions and benchmarks for deep learning uncertainty quantification, including SLURP, E-Dist, DIDO, and LDU. In addition, we propose the MUAD dataset to provide a more comprehensive evaluation of autonomous driving scenarios with different uncertainty sources
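The auxiliary-network idea summarised above can be sketched as follows, assuming a frozen main regression model and a side network trained to predict the per-sample error; the architectures, the use of raw inputs in place of intermediate feature maps, and the L1 error target are simplifications for illustration, not the SLURP implementation.

```python
# Toy auxiliary uncertainty estimator: the main model is frozen, the side network
# maps (input, main prediction) to a positive per-sample uncertainty.
import torch
import torch.nn as nn

main_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
for p in main_model.parameters():
    p.requires_grad_(False)                  # main-task model is kept fixed

aux_net = nn.Sequential(nn.Linear(16 + 1, 64), nn.ReLU(),
                        nn.Linear(64, 1), nn.Softplus())   # positive output

opt = torch.optim.Adam(aux_net.parameters(), lr=1e-3)
x = torch.randn(256, 16)                     # toy inputs
y = torch.randn(256, 1)                      # toy regression targets

for _ in range(200):
    with torch.no_grad():
        y_hat = main_model(x)
    u = aux_net(torch.cat([x, y_hat], dim=1))           # predicted uncertainty
    loss = nn.functional.l1_loss(u, (y - y_hat).abs())  # regress the prediction error
    opt.zero_grad(); loss.backward(); opt.step()
```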
13

Fiorito, Luca. "Nuclear data uncertainty propagation and uncertainty quantification in nuclear codes". Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/238375.

Full text of the source
Abstract:
Uncertainties in nuclear model responses must be quantified to define safety limits, minimize costs and define operational conditions in design. Response uncertainties can also be used to provide a feedback on the quality and reliability of parameter evaluations, such as nuclear data. The uncertainties of the predictive model responses sprout from several sources, e.g. nuclear data, model approximations, numerical solvers, influence of random variables. It was proved that the largest quantifiable sources of uncertainty in nuclear models, such as neutronics and burnup calculations, are the nuclear data, which are provided as evaluated best estimates and uncertainties/covariances in data libraries. Nuclear data uncertainties and/or covariances must be propagated to the model responses with dedicated uncertainty propagation tools. However, most of the nuclear codes for neutronics and burnup models do not have these capabilities and produce best-estimate results without uncertainties. In this work, the nuclear data uncertainty propagation was concentrated on the SCK•CEN code burnup ALEPH-2 and the Monte Carlo N-Particle code MCNP.Two sensitivity analysis procedures, i.e. FSAP and ASAP, based on linear perturbation theory were implemented in ALEPH-2. These routines can propagate nuclear data uncertainties in pure decay models. ASAP and ALEPH-2 were tested and validated against the decay heat and uncertainty quantification for several fission pulses and for the MYRRHA subcritical system. The decay uncertainty is necessary to define the reliability of the decay heat removal systems and prevent overheating and mechanical failure of the reactor components. It was proved that the propagation of independent fission yield and decay data uncertainties can be carried out with ASAP also in neutron irradiation models. Because of the ASAP limitations, the Monte Carlo sampling solver NUDUNA was used to propagate cross section covariances. The applicability constraints of ASAP drove our studies towards the development of a tool that could propagate the uncertainty of any nuclear datum. In addition, the uncertainty propagation tool was supposed to operate with multiple nuclear codes and systems, including non-linear models. The Monte Carlo sampling code SANDY was developed. SANDY is independent of the predictive model, as it only interacts with the nuclear data in input. Nuclear data are sampled from multivariate probability density functions and propagated through the model according to the Monte Carlo sampling theory. Not only can SANDY propagate nuclear data uncertainties and covariances to the model responses, but it is also able to identify the impact of each uncertainty contributor by decomposing the response variance. SANDY was extensively tested against integral parameters and was used to quantify the neutron multiplication factor uncertainty of the VENUS-F reactor.Further uncertainty propagation studies were carried out for the burnup models of light water reactor benchmarks. Our studies identified fission yields as the largest source of uncertainty for the nuclide density evolution curves of several fission products. However, the current data libraries provide evaluated fission yields and uncertainties devoid of covariance matrices. The lack of fission yield covariance information does not comply with the conservation equations that apply to a fission model, and generates inconsistency in the nuclear data. 
In this work, we generated fission yield covariance matrices using a generalised least-square method and a set of physical constraints. The fission yield covariance matrices solve the inconsistency in the nuclear data libraries and reduce the role of the fission yields in the uncertainty quantification of burnup models responses.
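A hedged sketch of covariance-aware Monte Carlo sampling of nuclear data, with a crude one-contributor-at-a-time variance decomposition, is shown below; the three-parameter covariance matrix and the toy response function are invented for illustration, whereas the actual SANDY workflow samples full evaluated libraries and drives real nuclear codes.

```python
# Sample correlated nuclear data from a multivariate normal and decompose the
# response variance by perturbing one data group at a time (crude, first-order).
import numpy as np

rng = np.random.default_rng(6)
best_estimate = np.array([1.00, 0.85, 0.02])     # e.g. two cross sections + a yield
covariance = np.array([[4e-4, 1e-4, 0.0],
                       [1e-4, 9e-4, 0.0],
                       [0.0,  0.0,  1e-6]])

def response(d):                                  # placeholder for a neutronics code
    return d[..., 0] * d[..., 1] + 50.0 * d[..., 2]

samples = rng.multivariate_normal(best_estimate, covariance, size=50_000)
total_var = response(samples).var()

for i, name in enumerate(["sigma_1", "sigma_2", "yield"]):
    only_i = np.tile(best_estimate, (50_000, 1))
    only_i[:, i] = samples[:, i]                  # perturb one contributor at a time
    print(f"{name}: share of variance ~ {response(only_i).var() / total_var:.2f}")
print("total response std:", np.sqrt(total_var))
```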
Doctorate in Engineering Sciences and Technology
info:eu-repo/semantics/nonPublished
14

Cheng, Haiyan. "Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations". Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28444.

Full text of the source
Abstract:
Modeling and simulations of large-scale systems are used extensively to not only better understand a natural phenomenon, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions, and generates one result without considering uncertainties. It is of great interest to include uncertainty information in the simulation. By "Uncertainty Quantification," we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist the policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations. This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost for the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III) predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact for environmental protection policy making. "Uncertainty Reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality.
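The collocation least-squares idea mentioned in this abstract can be sketched as follows: the model is evaluated at more collocation points than there are polynomial chaos coefficients, and the resulting overdetermined system is solved in the least-squares sense. The two-parameter model below is a stand-in, not the STEM-III system.

```python
# Collocation least-squares fit of polynomial chaos coefficients (2 random inputs).
import numpy as np
from numpy.polynomial.hermite_e import hermevander2d

rng = np.random.default_rng(7)

def model(xi1, xi2):                       # placeholder for an expensive simulation
    return np.exp(0.2 * xi1) * (1.0 + 0.5 * xi2)

n_pts = 60                                 # collocation points >> number of coefficients
xi = rng.standard_normal((n_pts, 2))
y = model(xi[:, 0], xi[:, 1])

V = hermevander2d(xi[:, 0], xi[:, 1], [2, 2])      # tensor Hermite basis, 9 terms
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

print("PC mean estimate      :", coef[0])          # coefficient of the constant term
print("direct MC mean (check):", model(*rng.standard_normal((2, 20_000))).mean())
```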
Ph. D.
15

Cousins, William Bryan. "Boundary Conditions and Uncertainty Quantification for Hemodynamics". Thesis, North Carolina State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3575896.

Full text of the source
Abstract:

We address outflow boundary conditions for blood flow modeling. In particular, we consider a variety of fundamental issues in the structured tree boundary condition. We provide a theoretical analysis of the numerical implementation of the structured tree, showing that it is sensible but must be performed with great care. We also perform analytical and numerical studies on the sensitivity of model output on the structured tree's defining geometrical parameters. The most important component of this dissertation is the derivation of the new, generalized structured tree boundary condition. Unlike the original structured tree condition, the generalized structured tree does not contain a temporal periodicity assumption and is thus applicable to a much broader class of blood flow simulations. We describe a numerical implementation of this new boundary condition and show that the original structured tree is in fact a rough approximation of the new, generalized condition.

We also investigate parameter selection for outflow boundary conditions, and attempt to determine a set of structured tree parameters that gives reasonable simulation results without requiring any calibration. We are successful in doing so for a simulation of the systemic arterial tree, but the same parameter set yields physiologically unreasonable results in simulations of the Circle of Willis. Finally, we investigate the extension of recently introduced PDF methods to smooth solutions of systems of hyperbolic balance laws subject to uncertain inputs. These methods, currently available only for scalar equations, would provide a powerful tool for quantifying uncertainty in predictions of blood flow and other phenomena governed by first order hyperbolic systems.

16

Teckentrup, Aretha Leonore. "Multilevel Monte Carlo methods and uncertainty quantification". Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.577753.

Full text of the source
Abstract:
We consider the application of multilevel Monte Carlo methods to elliptic partial differential equations with random coefficients. Such equations arise, for example, in stochastic groundwater flow modelling. Models for random coefficients frequently used in these applications, such as log-normal random fields with exponential covariance, lack uniform coercivity and boundedness with respect to the random parameter and have only limited spatial regularity. To give a rigorous bound on the cost of the multilevel Monte Carlo estimator to reach a desired accuracy, one needs to quantify the bias of the estimator. The bias, in this case, is the spatial discretisation error in the numerical solution of the partial differential equation. This thesis is concerned with establishing bounds on this discretisation error in the practically relevant and technically demanding case of coefficients which are not uniformly coercive or bounded with respect to the random parameter. Under mild assumptions on the regularity of the coefficient, we establish new results on the regularity of the solution for a variety of model problems. The most general case is that of a coefficient which is piecewise Hölder continuous with respect to a random partitioning of the domain. The established regularity of the solution is then combined with tools from classical discretisation error analysis to provide a full convergence analysis of the bias of the multilevel estimator for finite element and finite volume spatial discretisations. Our analysis covers as quantities of interest several spatial norms of the solution, as well as point evaluations of the solution and its gradient and any continuously Fréchet differentiable functional. Lastly, we extend the idea of multilevel Monte Carlo estimators to the framework of Markov chain Monte Carlo simulations. We develop a new multilevel version of a Metropolis-Hastings algorithm, and provide a full convergence analysis.
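As a reference point for the Markov chain Monte Carlo extension described above, the sketch below implements a plain random-walk Metropolis-Hastings sampler; the two-parameter log-posterior is a hypothetical stand-in for the posterior of PDE coefficients given observations, not the thesis's multilevel construction.

```python
# Plain random-walk Metropolis-Hastings sampler (single-level baseline).
import numpy as np

rng = np.random.default_rng(8)

def log_posterior(theta):                      # hypothetical 2-parameter posterior
    return -0.5 * (theta[0] ** 2 + (theta[1] - 1.0) ** 2 / 0.25)

def metropolis_hastings(n_steps=20_000, step=0.5):
    theta = np.zeros(2)
    lp = log_posterior(theta)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(2)
        lp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

chain = metropolis_hastings()
print("posterior mean estimate:", chain[5_000:].mean(axis=0))   # discard burn-in
```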
17

Strandberg, Rickard, and Johan Låås. "Uncertainty quantification using high-dimensional numerical integration". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-195701.

Full text of the source
Abstract:
We consider quantities that are uncertain because they depend on one or many uncertain parameters. If the uncertain parameters are stochastic the expected value of the quantity can be obtained by integrating the quantity over all the possible values these parameters can take and dividing the result by the volume of the parameter-space. Each additional uncertain parameter has to be integrated over; if the parameters are many, this gives rise to high-dimensional integrals. This report offers an overview of the theory underpinning four numerical methods used to compute high-dimensional integrals: Newton-Cotes, Monte Carlo, Quasi-Monte Carlo, and sparse grid. The theory is then applied to the problem of computing the impact coordinates of a thrown ball by introducing uncertain parameters such as wind velocities into Newton's equations of motion.
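A hedged comparison of two of the estimators surveyed above, plain Monte Carlo and quasi-Monte Carlo with a scrambled Sobol sequence, is sketched below on a six-dimensional toy integrand with known value 1; it is not the ball-throw model used in the thesis.

```python
# Monte Carlo vs quasi-Monte Carlo (Sobol) estimate of a 6-D integral over [0,1]^6.
import numpy as np
from scipy.stats import qmc

def integrand(u):                      # u has shape (n, 6), values in the unit cube
    return np.prod(1.0 + 0.2 * (u - 0.5), axis=1)

rng = np.random.default_rng(9)
n = 2 ** 12                            # 4096 points for both estimators

mc_estimate = integrand(rng.random((n, 6))).mean()
sobol_points = qmc.Sobol(d=6, scramble=True, seed=9).random_base2(m=12)
qmc_estimate = integrand(sobol_points).mean()

print("exact value : 1.0")             # each factor integrates to 1 over [0, 1]
print("Monte Carlo :", mc_estimate)
print("Sobol QMC   :", qmc_estimate)
```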
18

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Full text of the source
Abstract:
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on the observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in the sense that repeated execution of a stochastic simulation results in different outcomes. This additional uncertainty in the simulation model needs to be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches addressing the non-Gaussian behavior of an emulator, by (1) incorporating quantile regression in GP for multivariate output, and (2) approximating using a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model which models the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as an inverse problem, i.e., inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates the contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City in the USA.
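A minimal sketch of the Gaussian-process emulation step described above is given below, fitting a GP surrogate to a small number of runs of a noisy toy simulator; the simulator, kernel choice, and design are assumptions for illustration, not the agent-based disease model or the quantile/mixture emulators developed in the dissertation.

```python
# Fit a GP emulator to a few runs of a stochastic toy simulator and query its
# predictive mean and standard deviation as a cheap surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(10)

def simulator(x):                              # stochastic: repeated runs differ
    return np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

X_train = rng.uniform(0, 2, size=(40, 1))      # 40 "expensive" simulation runs
y_train = simulator(X_train[:, 0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.linspace(0, 2, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)  # emulator prediction + uncertainty
print(np.c_[X_new, mean, std])
```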
19

Hagues, Andrew W. "Uncertainty quantification for problems in radionuclide transport". Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9088.

Full text of the source
Abstract:
The field of radionuclide transport has long recognised the stochastic nature of the problems encountered. Many parameters that are used in computational models are very difficult, if not impossible, to measure with any great degree of confidence. For example, bedrock properties can only be measured at a few discrete points; the properties between these points may be inferred or estimated using experiments, but it is difficult to achieve any high level of confidence. This is a major problem when many countries around the world are considering deep geologic repositories as a disposal option for long-lived nuclear waste but require a high degree of confidence that any release of radioactive material will not pose a risk to future populations. In this thesis we apply Polynomial Chaos methods to a model of the biosphere that is similar to those used by many countries worldwide to assess exposure pathways for humans and associated dose rates. We also apply the Spectral-Stochastic Finite Element Method to the problem of contaminated fluid flow in a porous medium. For this problem we use the Multi-Element generalized Polynomial Chaos method to discretise the random dimensions in a manner similar to the well known Finite Element Method. The stochastic discretisation is then refined adaptively to mitigate the build-up of errors over the solution times. It was found that these methods have the potential to provide much improved estimates for radionuclide transport problems. However, further development is needed in order to obtain the efficiency that would be required to solve industrial problems.
20

El-Shanawany, Ashraf Ben Mamdouh. "Quantification of uncertainty in probabilistic safety analysis". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/48104.

Full text of the source
Abstract:
This thesis develops methods for quantification and interpretation of uncertainty in probabilistic safety analysis, focussing on fault trees. The output of a fault tree analysis is, usually, the probability of occurrence of an undesirable event (top event) calculated using the failure probabilities of identified basic events. The standard method for evaluating the uncertainty distribution is by Monte Carlo simulation, but this is a computationally intensive approach to uncertainty estimation and does not, readily, reveal the dominant reasons for the uncertainty. A closed form approximation for the fault tree top event uncertainty distribution, for models using only lognormal distributions for model inputs, is developed in this thesis. Its output is compared with the output from two sampling based approximation methods; standard Monte Carlo analysis, and Wilks’ method, which is based on order statistics using small sample sizes. Wilks’ method can be used to provide an upper bound for the percentiles of top event distribution, and is computationally cheap. The combination of the lognormal approximation and Wilks’ Method can be used to give, respectively, the overall shape and high confidence on particular percentiles of interest. This is an attractive, practical option for evaluation of uncertainty in fault trees and, more generally, uncertainty in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models is developed which can be evaluated in closed form, based on cutset uncertainty. The method is demonstrated via examples, including a simple fault tree model and a model which is the size of a commercial PSA model for a nuclear power plant. Finally, quantification of “hidden uncertainties” is considered; hidden uncertainties are those which are not typically considered in PSA models, but may contribute considerable uncertainty to the overall results if included. A specific example of the inclusion of a missing uncertainty is explained in detail, and the effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. Finally, suggestions are made for the identification and inclusion of further hidden uncertainties.
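Two of the ingredients mentioned above can be illustrated briefly: Wilks' order-statistic rule for a one-sided 95/95 bound from a small sample, and the closed-form fact that a product (AND gate) of independent lognormal basic events is again lognormal. The basic-event parameters below are invented, and the sketch is far simpler than the thesis's general multilinear results.

```python
# (i) Wilks' rule and (ii) closed-form lognormal combination for an AND gate.
import numpy as np

rng = np.random.default_rng(11)

# (i) Wilks: smallest n with 1 - 0.95**n >= 0.95, so the sample maximum bounds the
# 95th percentile of the top event with 95% confidence.
n_wilks = int(np.ceil(np.log(1 - 0.95) / np.log(0.95)))      # -> 59 samples
sample = rng.lognormal(mean=-9.0, sigma=1.0, size=n_wilks)
print("Wilks n =", n_wilks, " 95/95 bound on top event:", sample.max())

# (ii) Product of independent LN(mu_i, sigma_i^2) is LN(sum mu_i, sum sigma_i^2).
mus = np.array([-4.0, -5.0])                   # two basic events under an AND gate
sigmas = np.array([0.8, 0.6])
mu_top, sigma_top = mus.sum(), np.sqrt((sigmas ** 2).sum())
median_top = np.exp(mu_top)
p95_top = np.exp(mu_top + 1.645 * sigma_top)   # 95th percentile of the top event
print("closed-form median and 95th percentile:", median_top, p95_top)
```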
Estilos ABNT, Harvard, Vancouver, APA, etc.
21

VIRDIS, IRENE. "Uncertainty Quantification and Optimization of Aeronautical Components". Doctoral thesis, Università degli Studi di Cagliari, 2022. http://hdl.handle.net/11584/332668.

Texto completo da fonte
Resumo:
This project was part of a wider investigation performed on a set of 200 High Pressure Turbine (HPT) blades, dismounted after several hours of flight and characterized by in-service and manufacturing variations. The main objective of this project was to determine the impact of these variations on the aerodynamic performance of the rotor and to devise a strategy to design more robust geometries, i.e., geometries less sensitive to the given uncertainties. The initial set of data consisted of the digitized versions of the blades (GOM scans). The geometrical deviations characterizing the blades from the hub up to 70% of the span were parametrized via a set of aerodynamic Engineering Parameters (EP) belonging to the PADRAM design space and quantified via an inverse mapping procedure (P2S-PADRAM-SOFT); the visual inspection of the digital twins highlighted a considerable volume loss along the blade rim (tip region) due to localized erosion: the quantification of this metal degradation was performed via in-house Python scripts and expressed as indices of volume removal rate (PADRAM Squealer Tip option). The adjoint solver and the Polynomial Chaos Expansion (PCE) were used for the Uncertainty Propagation (UP) of the first set of uncertain variables, while for the erosion parameters only PCE was selected. The sensitivity of the nominal blade with respect to the typical erosion level was found to be higher than that with respect to the geometrical deviations occurring beneath the tip. UP was followed by a series of gradient-based robust optimization (RO) approaches. The rotor efficiency was selected as the figure of merit to be maximized by optimizing the EP values; in the first approach, the performance of a simplified geometry (without winglet and gutter) was slightly improved by driving the SLSQP Python optimizer with adjoint gradients. Following this, the nominal configuration, complete with winglet and gutter, was optimized by providing SLSQP with first-order derivatives calculated via PCE. The same strategy for the calculation of the derivatives was used in the third approach, only this time the erosion level characterizing the worst-damaged rim was included. A local, gradient-based optimization was then performed in a larger design space: the final optimized configuration recovers a good percentage of the rotor efficiency otherwise lost when erosion occurs, thanks to an offloading of the tip region, while also improving the nominal rotor performance.
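Illustrative aside (a toy stand-in, not the blade case): a gradient-based robust optimisation in the spirit described above can be driven with scipy's SLSQP by maximising a mean performance penalised by its standard deviation over an uncertain parameter. The "efficiency" function, the uncertainty model and the penalty weight are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "efficiency": two design variables x and one uncertain erosion-like parameter xi (assumed).
def efficiency(x, xi):
    return 1.0 - (x[0] - 0.3) ** 2 - 0.5 * (x[1] + 0.2) ** 2 - 0.05 * xi * x[0] ** 2

# Gauss-Hermite quadrature for the statistics of efficiency over xi ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
weights = weights / weights.sum()

def robust_objective(x, k=1.0):
    vals = np.array([efficiency(x, xi) for xi in nodes])
    mean = np.sum(weights * vals)
    std = np.sqrt(max(np.sum(weights * (vals - mean) ** 2), 0.0))
    return -(mean - k * std)                 # SLSQP minimises, so negate "mean minus k*std"

res = minimize(robust_objective, x0=[0.0, 0.0], method="SLSQP",
               bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print("robust design:", res.x, " robust objective:", -res.fun)
```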
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Lam, Xuan-Binh. "Uncertainty quantification for stochastic subspace identification methods". Rennes 1, 2011. http://www.theses.fr/2011REN1S133.

Texto completo da fonte
Resumo:
In Operational Modal Analysis, the modal parameters (natural frequencies, damping ratios, and mode shapes) obtained from Stochastic Subspace Identification (SSI) of a structure are afflicted with statistical uncertainty. For evaluating the quality of the obtained results it is essential to know the appropriate uncertainty bounds of these terms. In this thesis, algorithms that automatically compute the uncertainty bounds of modal parameters obtained from SSI of a structure, based on vibration measurements, are presented. With these new algorithms, the uncertainty bounds of the modal parameters of some relevant industrial examples are computed. To quantify the statistical uncertainty of the obtained modal parameters, the statistical uncertainty in the data can be evaluated and propagated to the system matrices and, thus, to the modal parameters. In the uncertainty quantification algorithm, which is a perturbation-based method, it has been shown how uncertainty bounds of modal parameters can be determined from the covariances of the system matrices, which are obtained from the covariance of the data and the covariances of the subspace matrices. In this thesis, several results are derived. Firstly, a novel and more realistic scheme for the uncertainty calculation of the mode shape is presented: the mode shape is normalized by the phase angle of the component having the maximal absolute value instead of by one of its components. Secondly, the uncertainty quantification is derived and developed for several identification methods, the first of which are the covariance- and data-driven SSI. The thesis also addresses the Eigensystem Realization Algorithm (ERA), a class of identification methods, and its uncertainty quantification scheme. This ERA approach is introduced in conjunction with the singular value decomposition to derive the basic formulation of minimum order realization. In addition, the thesis proposes efficient algorithms to estimate the system matrices at multiple model orders; the uncertainty quantification is also derived for this new multi-order SSI method. The last two sections of the thesis address the uncertainty quantification of the multi-setup SSI algorithm and of recursive algorithms. In summary, subspace algorithms are efficient tools for vibration analysis, fitting a model to input/output or output-only measurements taken from a system. However, uncertainty quantification for SSI was missing for a long time. Uncertainty quantification is an important feature for the credibility of modal analysis results.
En analyse modale opérationnelle, les paramètres modaux (fréquences, amortissements, déformées) peuvent être obtenus par des méthodes d'identification de type sous-espaces et sont définis à une incertitude stochastique près. Pour évaluer la qualité des résultats obtenus, il est essentiel de connaître les bornes de confiance sur ces résultats. Dans cette thèse sont développés des algorithmes qui calculent automatiquement de telles bornes de confiance pour des paramètres modaux caractéristiques d'une structure mécanique. Ces algorithmes sont validés sur des exemples industriels significatifs. L'incertitude est tout d'abord calculée sur les données puis propagée sur les matrices du système par calcul de sensibilité, puis finalement sur les paramètres modaux. Les algorithmes existants sur lesquels se base cette thèse dérivent l'incertitude des matrices du système de l'incertitude sur les covariances des entrées mesurées. Dans cette thèse, plusieurs résultats ont été obtenus. Tout d'abord, l'incertitude sur les déformées modales est obtenue par un schéma de calcul plus réaliste que précédemment, utilisant une normalisation par l'angle de phase de la composante de valeur maximale. Ensuite, plusieurs méthodes de sous-espaces, et non seulement les méthodes à base de covariances, sont considérées, telles que la méthode de réalisation stochastique ERA ainsi que la méthode UPC, à base de données. Pour ces méthodes, le calcul d'incertitude est explicité. Deux autres problématiques sont abordées : tout d'abord l'estimation multi-ordres par méthode de sous-espaces et l'estimation à partir de jeux de données mesurées séparément. Pour ces deux problèmes, les schémas d'incertitude sont développés. En conclusion, cette thèse s'est attachée à développer des schémas de calcul d'incertitude pour une famille de méthodes sous-espaces ainsi que pour un certain nombre de problèmes pratiques. La thèse finit avec le calcul d'incertitudes pour les méthodes récursives. Les méthodes sous-espaces sont considérées comme une approche d'estimation robuste et consistante pour l'extraction des paramètres modaux à partir de données temporelles. Le calcul des incertitudes pour ces méthodes est maintenant possible, rendant ces méthodes encore plus crédibles dans le cadre de l'exploitation de l'analyse modale.
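Illustrative aside (not the SSI derivation itself): the perturbation idea of propagating covariances to derived quantities can be shown with the first-order delta method, cov(f) ≈ J Σ Jᵀ, here applied to the standard conversion from a continuous-time eigenvalue to a natural frequency and damping ratio. The eigenvalue estimate and its covariance are invented numbers.

```python
import numpy as np

# Estimated eigenvalue (real and imaginary part, rad/s) and its covariance (values assumed).
lam = np.array([-0.60, 37.70])
cov_lam = np.array([[4e-3, 1e-4],
                    [1e-4, 9e-3]])

def modal_parameters(lam):
    """Natural frequency [Hz] and damping ratio [-] from a continuous-time eigenvalue."""
    re, im = lam
    mag = np.hypot(re, im)
    return np.array([mag / (2 * np.pi), -re / mag])

def numerical_jacobian(f, x, h=1e-6):
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy(); xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

J = numerical_jacobian(modal_parameters, lam)
cov_modal = J @ cov_lam @ J.T                  # first-order (delta method) propagation
f_hz, zeta = modal_parameters(lam)
print(f"f    = {f_hz:.3f} Hz  +/- {np.sqrt(cov_modal[0, 0]):.4f}")
print(f"zeta = {zeta:.4f}     +/- {np.sqrt(cov_modal[1, 1]):.5f}")
```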
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Larvaron, Benjamin. "Modeling battery health degradation with uncertainty quantification". Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0028.

Texto completo da fonte
Resumo:
Face au changement climatique, des mesures importantes doivent être prises pour décarboner l’économie. Cela inclut une transformation des secteurs du transport et de la production d’énergie. Ces transformations augmentent l’utilisation d’énergie électrique et posent la question du stockage, notamment grâce aux batteries Lithium-ion. Dans cette thèse nous nous intéressons à la modélisation de la dégradation de la santé des batteries. Afin de quantifier les risques associés aux garanties de performance, les incertitudes doivent être prises en compte. La dégradation est un phénomène complexe mettant en jeu différents mécanismes physiques en interaction. Celle-ci va varier selon le type de batterie ou les conditions d’utilisation. Nous avons tout d’abord considéré le problème de la dégradation temporelle à une condition expérimentale de référence, par une approche « data-driven » par processus gaussiens. Cette approche présente l’avantage de permettre l’apprentissage de modèles complexes tout en incluant une quantification des incertitudes. Partant de l’état de l’art, nous avons proposé une adaptation de la régression par processus gaussien. Par un design de noyaux adapté, le modèle permet de prendre explicitement en compte la variabilité de performance entre les batteries. Cependant, la régression par processus gaussien repose généralement sur une hypothèse de stationnarité, trop restrictive pour prendre en compte l’évolution de l’incertitude au cours du temps. Nous avons donc exploité le cadre plus général de la régression par processus gaussiens chaînés, reposant sur l’inférence variationnelle. Avec un choix adapté de fonction de vraisemblance, ce cadre permet d’ajuster un modèle non paramétrique de l’évolution de la variabilité entre batteries, améliorant significativement la quantification des incertitudes. Cela produit un modèle bien ajusté aux cycles observés, mais qui se généralise mal pour prédire l’évolution future de la dégradation, avec des comportements incohérents d’un point de vue physique. En particulier, la monotonie et la concavité des courbes de dégradation ne sont pas toujours respectées. Nous avons proposé une approche pour inclure ces contraintes dans la régression par processus gaussiens chaînés. Cela nous a ainsi permis d’améliorer les prévisions à un horizon de plusieurs centaines de cycles, permettant potentiellement de réduire le temps de test nécessaire des batteries, source de coûts importants pour les manufacturiers. Nous avons ensuite élargi le problème afin de prendre en compte l’effet des conditions expérimentales sur la dégradation des batteries. Nous avons tout d’abord tenté d’adapter les méthodes à base de processus gaussiens en incluant les facteurs expérimentaux comme variables explicatives. Cette approche a fourni des résultats intéressants dans des cas de conditions avec des dégradations similaires. Cependant, pour des conditions plus complexes, les résultats deviennent incohérents avec les connaissances physiques et ne sont plus exploitables. Nous avons donc proposé une autre approche, en deux temps, séparant l’évolution temporelle de l’effet des facteurs. Dans un premier temps, l’évolution temporelle est modélisée par les méthodes par processus gaussiens précédentes. Le second temps, plus complexe, utilise les résultats de l’étape précédente, des distributions gaussiennes, pour apprendre un modèle des conditions expérimentales. Cela nécessite une approche de régression sur données complexes.
Nous proposons l’utilisation des barycentres conditionnels de Wasserstein, bien adaptés au cas des distributions. Deux modèles sont introduits. Le premier, utilisant le cadre de la régression structurée, permet d’inclure un modèle physique de la dégradation. Le second, utilisant la régression de Fréchet, permet d’améliorer les résultats en interpolant les conditions expérimentales et en permettant la prise en compte de plusieurs facteurs expérimentaux.
With the acceleration of climate change, significant measures must be taken to decarbonize the economy. This includes a transformation of the transportation and energy production sectors. These changes increase the use of electrical energy and raise the need for storage, particularly through Lithium-ion batteries. In this thesis, we focus on modeling battery health degradation. To quantify the risks associated with performance guarantees, uncertainties must be taken into account. Degradation is a complex phenomenon involving various interacting physical mechanisms. It varies depending on the battery type and usage conditions. We first addressed the issue of the temporal degradation under a reference experimental condition using a data-driven approach based on Gaussian processes. This approach allows for learning complex models while incorporating uncertainty quantification. Building upon the state of the art, we proposed an adaptation of Gaussian process regression. By designing appropriate kernels, the model explicitly considers performance variability among batteries. However, Gaussian process regression generally relies on a stationarity assumption, which is too restrictive to account for uncertainty evolution over time. Therefore, we have leveraged the broader framework of chained Gaussian process regression, based on variational inference. With a suitable choice of likelihood function, this framework allows for adjusting a non-parametric model of the evolution of the variability among batteries, significantly improving uncertainty quantification. While this approach yields a model that fits observed cycles well, it does not generalize effectively to predict future degradation with consistent physical behaviors. Specifically, monotonicity and concavity of degradation curves are not always preserved. To address this, we proposed an approach to incorporate these constraints into chained Gaussian process regression. As a result, we have enhanced predictions over several hundred cycles, potentially reducing the necessary battery testing time, a significant cost for manufacturers. We then expanded the problem to account for the effect of experimental conditions on battery degradation. Initially, we attempted to adapt Gaussian process-based methods by including experimental factors as additional explanatory variables. This approach yielded interesting results in cases with similar degradation conditions. However, for more complex settings, the results became inconsistent with physical knowledge and were no longer usable. As a result, we proposed an alternative two-step approach, separating the temporal evolution from the effect of factors. In the first step, temporal evolution was modeled using the previously mentioned Gaussian process methods. The second, more complex step utilized the results from the previous stage (Gaussian distributions) to learn a model of experimental conditions. This required a regression approach for complex data. We suggest using Wasserstein conditional barycenters, which are well-suited for distribution cases. Two models were introduced. The first model, within the structured regression framework, incorporates a physical degradation model. The second model, using Fréchet regression, improves results by interpolating experimental conditions and accounting for multiple experimental factors.
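Illustrative aside (a plain Gaussian process regression, not the chained or constrained models of the thesis): scikit-learn can fit a GP with a predictive band to synthetic capacity-fade data. The kernel choice, the synthetic degradation law and the noise level are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic state-of-health measurements versus cycle number (invented degradation law).
cycles = np.linspace(0, 800, 40)[:, None]
soh = (1.0 - 2e-4 * cycles.ravel() - 0.05 * np.sqrt(cycles.ravel() / 800)
       + 0.005 * rng.standard_normal(cycles.shape[0]))

# Smooth RBF trend plus a white-noise term for cell-to-cell and measurement scatter.
kernel = ConstantKernel(1.0) * RBF(length_scale=200.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(cycles, soh)

# Predictive mean and standard deviation out to 1200 cycles (an extrapolation).
grid = np.linspace(0, 1200, 200)[:, None]
mean, std = gp.predict(grid, return_std=True)
print("predicted SOH at 1000 cycles: %.3f +/- %.3f"
      % (np.interp(1000, grid.ravel(), mean), np.interp(1000, grid.ravel(), std)))
```

The extrapolated band widens quickly, which is exactly the behaviour that motivates the monotonicity and concavity constraints discussed in the abstract.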
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Liang, Yue, Tian-Chyi Jim Yeh, Yu-Li Wang, Mingwei Liu, Junjie Wang e Yonghong Hao. "Numerical simulation of backward erosion piping in heterogeneous fields". AMER GEOPHYSICAL UNION, 2017. http://hdl.handle.net/10150/624364.

Texto completo da fonte
Resumo:
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate the BEP behaviors and are influenced by the heterogeneity of soil properties. To investigate the effects of the heterogeneity on the seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progressions in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the ratio of the particle contents r of the media are represented as the stochastic variables. They are characterized by means and variances, the spatial correlation structures, and the cross correlation between variables. Results of the simulations reveal that the heterogeneity accelerates the development of preferential flow paths, which profoundly increase the likelihood of seepage failures. To account for unknown heterogeneity, we define the probability of seepage instability (PI) to evaluate the failure potential of a given site. Using Monte-Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and the variance of ln k and its spatial correlation scales, but the other parameters, such as the means and variances of e and r and their cross correlation, have minor impacts. Based on PI analyses, we introduce a risk rating system to classify the field into different regions according to risk levels. This rating system is useful for seepage failure prevention and assists decision-making when BEP occurs.
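Illustrative aside (a toy criterion, not the coupled seepage-erosion model of the paper): the "probability of instability" idea can be sketched by Monte Carlo over an uncertain log-conductivity, with instability declared when the Darcy gradient needed to carry an imposed discharge exceeds a critical value. All numbers and the threshold criterion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Uncertain hydraulic conductivity: ln k ~ N(mean, sd)  (values assumed).
mean_lnk, sd_lnk = np.log(1e-5), 1.0        # k in m/s
q = 2e-6                                    # imposed specific discharge, m/s (assumed)
i_crit = 0.8                                # critical exit gradient (assumed)

# Darcy's law q = k * i, so the local gradient is i = q / k; "instability" when i > i_crit.
n = 100_000
k = np.exp(mean_lnk + sd_lnk * rng.standard_normal(n))
pi_estimate = np.mean(q / k > i_crit)

print("estimated probability of instability (PI):", pi_estimate)
```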
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Ndiaye, Aïssatou. "Uncertainty Quantification of Thermo-acoustic instabilities in gas turbine combustors". Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS062/document.

Texto completo da fonte
Resumo:
Les instabilités thermo-acoustiques résultent de l'interaction entre les oscillations de pression acoustique et les fluctuations du taux de dégagement de chaleur de la flamme. Ces instabilités de combustion sont particulièrement préoccupantes en raison de leur fréquence d'apparition dans les turbines à gaz modernes et à faible émission. Leurs principaux effets indésirables sont une réduction du temps de fonctionnement du moteur en raison des oscillations de grande amplitude ainsi que de fortes vibrations à l'intérieur de la chambre de combustion. La simulation numérique est maintenant devenue une approche clé pour comprendre et prédire ces instabilités dans la phase de conception industrielle. Cependant, la prédiction de ce phénomène reste difficile en raison de sa complexité; cela se confirme lorsque les paramètres physiques du processus de modélisation sont incertains, ce qui est pratiquement toujours le cas pour des systèmes réels. Introduire la quantification des incertitudes pour la thermo-acoustique est le seul moyen d'étudier et de contrôler la stabilité des chambres de combustion qui fonctionnent dans des conditions réalistes; c'est l'objectif de cette thèse. Dans un premier temps, une chambre de combustion académique (avec un seul injecteur et une seule flamme) ainsi que deux chambres de moteurs d'hélicoptère (avec N injecteurs et des flammes) sont étudiées. Les calculs basés sur un solveur de Helmholtz et un outil quasi-analytique de bas ordre fournissent des estimations appropriées de la fréquence et des structures modales pour chaque géométrie. L'analyse suggère que la réponse de la flamme aux perturbations acoustiques joue un rôle prédominant dans la dynamique de la chambre de combustion. Ainsi, la prise en compte des incertitudes liées à la représentation de la flamme apparaît comme une étape nécessaire vers une analyse robuste de la stabilité du système. Dans un second temps, la notion de facteur de risque, c'est-à-dire la probabilité pour un mode thermo-acoustique d'être instable, est introduite afin de fournir une description plus générale du système que la classification classique et binaire (stable / instable). Les approches de Monte Carlo et de modèles de substitution sont associées pour effectuer une analyse de quantification d'incertitudes de la chambre de combustion académique avec deux paramètres incertains (amplitude et temps de réponse de la flamme). On montre que l'utilisation de modèles de substitution algébriques réduit drastiquement le nombre de calculs initiaux, donc la charge de calcul, tout en fournissant des estimations précises du facteur de risque modal. Pour traiter les problèmes multidimensionnels tels que les deux moteurs d'hélicoptère, une stratégie visant à réduire le nombre de paramètres incertains est introduite. La méthode des sous-espaces actifs (« active subspace »), combinée à une approche de changement de variables, a permis d'identifier trois directions dominantes (au lieu des N paramètres incertains initiaux) qui suffisent à décrire la dynamique des deux systèmes industriels. Dès lors que ces paramètres dominants sont associés à des modèles de substitution appropriés, cela permet de réaliser efficacement une analyse de quantification des incertitudes de systèmes thermo-acoustiques complexes. Finalement, on examine la perspective d'utiliser la méthode adjointe pour analyser la sensibilité des systèmes thermo-acoustiques représentés par des solveurs 3D de Helmholtz.
Les résultats obtenus sur des cas tests 2D et 3D sont prometteurs et suggèrent d'explorer davantage le potentiel de cette méthode dans le cas de problèmes thermo-acoustiques encore plus complexes.
Thermoacoustic instabilities result from the interaction between acoustic pressure oscillations and flame heat release rate fluctuations. These combustion instabilities are of particular concern due to their frequent occurrence in modern, low emission gas turbine engines. Their major undesirable consequence is a reduced time of operation due to large amplitude oscillations of the flame position and structural vibrations within the combustor. Computational Fluid Dynamics (CFD) has now become a key approach to understand and predict these instabilities at industrial readiness level. Still, predicting this phenomenon remains difficult due to modelling and computational challenges; this is even more true when physical parameters of the modelling process are uncertain, which is always the case in practical situations. Introducing Uncertainty Quantification for thermoacoustics is the only way to study and control the stability of gas turbine combustors operated under realistic conditions; this is the objective of this work. First, a laboratory-scale combustor (with only one injector and flame) as well as two industrial helicopter engines (with N injectors and flames) are investigated. Calculations based on a Helmholtz solver and a quasi-analytical low order tool provide suitable estimates of the frequency and modal structures for each geometry. The analysis suggests that the flame response to acoustic perturbations plays the predominant role in the dynamics of the combustor. Accounting for the uncertainties of the flame representation is thus identified as a key step towards a robust stability analysis. Second, the notion of Risk Factor, that is to say the probability for a particular thermoacoustic mode to be unstable, is introduced in order to provide a more general description of the system than the classical binary (stable/unstable) classification. Monte Carlo and surrogate modelling approaches are then combined to perform an uncertainty quantification analysis of the laboratory-scale combustor with two uncertain parameters (amplitude and time delay of the flame response). It is shown that the use of algebraic surrogate models reduces drastically the number of state computations, thus the computational load, while providing accurate estimates of the modal risk factor. To deal with the curse of dimensionality, a strategy to reduce the number of uncertain parameters is further introduced in order to properly handle the two industrial helicopter engines. The active subspace algorithm, used together with a change of variables, allows identifying three dominant directions (instead of the N initial uncertain parameters) which are sufficient to describe the dynamics of the industrial systems. Combined with appropriate surrogate model construction, this allows conducting computationally efficient uncertainty quantification analyses of complex thermoacoustic systems. Third, the perspective of using the adjoint method for the sensitivity analysis of thermoacoustic systems represented by 3D Helmholtz solvers is examined. The results obtained for 2D and 3D test cases are promising and suggest further exploring the potential of this method on even more complex thermoacoustic problems.
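Illustrative aside (the surrogate form is invented, not the Helmholtz-solver surrogates of the thesis): the risk factor, i.e. the probability that a mode is unstable, reduces to a Monte Carlo estimate of P(growth rate > 0) once a cheap surrogate of the growth rate in the two uncertain flame parameters is available.

```python
import numpy as np

rng = np.random.default_rng(7)

omega = 2 * np.pi * 300.0                  # assumed modal angular frequency, rad/s

def growth_rate_surrogate(n_gain, tau):
    """Toy surrogate: gain times the phase factor of the flame delay, minus a damping offset."""
    return n_gain * np.cos(omega * tau) - 0.2

# Uncertain flame response: gain and time delay with uniform ranges (assumed).
samples = 200_000
n_gain = rng.uniform(0.5, 1.5, samples)
tau = rng.uniform(1.0e-3, 3.0e-3, samples)

risk_factor = np.mean(growth_rate_surrogate(n_gain, tau) > 0.0)
print("modal risk factor, P(mode unstable):", risk_factor)
```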
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". Murdoch University, 2009. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20090526.100525.

Texto completo da fonte
Resumo:
This thesis deals with feedforward backpropagation neural networks and interval neutrosophic sets for the binary and multiclass classification problems. Neural networks are used to predict “true” and “false” output values. These results, together with the uncertainties of type error and vagueness that occur in the prediction, are then represented in the form of interval neutrosophic sets. Each element in an interval neutrosophic set consists of three membership values: truth, indeterminacy, and false. These three membership values are then used in the classification process. For binary classification, a pair of neural networks is first applied in order to predict the degrees of truth and false membership values. Subsequently, a bagging technique is applied to an ensemble of pairs of neural networks in order to improve the performance. For multiclass classification, two basic multiclass classification methods are proposed. A pair of neural networks with multiple outputs and multiple pairs of binary neural networks are investigated. A number of aggregation techniques are proposed in this thesis. The difference between each pair of the truth and false membership values determines the vagueness value. Errors occurring in the prediction are estimated using an interpolation technique. Both vagueness and error then form the indeterminacy membership. Two- and three-dimensional visualizations of the three membership values are also presented. Ten data sets obtained from the UCI machine learning repository are used to evaluate the proposed approaches. The approaches are also applied to two real world problems: mineral prospectivity prediction and lithofacies classification.
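Illustrative aside (the "networks" below are logistic stand-ins, not trained neural networks): the truth/false/indeterminacy bookkeeping described above can be written with plain arrays, with one plausible convention for vagueness based on how close the two complementary memberships are.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-ins for a "truth network" and a "false network" mapping a score to [0, 1].
scores = rng.normal(size=10)
truth = sigmoid(2.0 * scores + 0.1 * rng.normal(size=10))
false = sigmoid(-2.0 * scores + 0.1 * rng.normal(size=10))

# Vagueness derived from the difference |T - F|: high when the two memberships are close
# (assumed convention). A constant stands in for the estimated prediction error.
vagueness = 1.0 - np.abs(truth - false)
error = np.full_like(truth, 0.05)
indeterminacy = np.clip(vagueness + error, 0.0, 1.0)

predicted_class = (truth > false).astype(int)      # binary decision: truth vs false membership
for t, f, i, c in zip(truth, false, indeterminacy, predicted_class):
    print(f"T={t:.2f}  F={f:.2f}  I={i:.2f}  ->  class {c}")
```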
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Pettersson, Per. "Uncertainty Quantification and Numerical Methods for Conservation Laws". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188348.

Texto completo da fonte
Resumo:
Conservation laws with uncertain initial and boundary conditions are approximated using a generalized polynomial chaos expansion approach where the solution is represented as a generalized Fourier series of stochastic basis functions, e.g. orthogonal polynomials or wavelets. The stochastic Galerkin method is used to project the governing partial differential equation onto the stochastic basis functions to obtain an extended deterministic system. The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain viscosity. We investigate well-posedness, monotonicity and stability for the stochastic Galerkin system. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability. We investigate the impact of the total spatial operator on the convergence to steady-state.  Next we apply the stochastic Galerkin method to Burgers' equation with uncertain boundary conditions. An analysis of the truncated polynomial chaos system presents a qualitative description of the development of the solution over time. An analytical solution is derived and the true polynomial chaos coefficients are shown to be smooth, while the corresponding coefficients of the truncated stochastic Galerkin formulation are shown to be discontinuous. We discuss the problematic implications of the lack of known boundary data and possible ways of imposing stable and accurate boundary conditions. We present a new fully intrusive method for the Euler equations subject to uncertainty based on a Roe variable transformation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, it is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. A multiwavelet basis that can handle  discontinuities in a robust way is used. Finally, we investigate a two-phase flow problem. Based on regularity analysis of the generalized polynomial chaos coefficients, we present a hybrid method where solution regions of varying smoothness are coupled weakly through interfaces. In this way, we couple smooth solutions solved with high-order finite difference methods with non-smooth solutions solved for with shock-capturing methods.
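Illustrative aside (a small worked piece of the stochastic Galerkin machinery, not the thesis' Burgers or Euler formulations): for an advection equation u_t + a(ξ) u_x = 0 with a(ξ) = a0 + a1·ξ and ξ standard normal, projecting onto probabilists' Hermite polynomials gives a coupled deterministic system u_t + A u_x = 0 with A_{jk} = E[a ψ_j ψ_k] / E[ψ_j²]. The sketch assembles A by quadrature and checks that its eigenvalues are real, i.e. that the Galerkin system is hyperbolic. The coefficients and truncation order are assumptions.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

a0, a1 = 1.0, 0.3                            # uncertain speed a(xi) = a0 + a1*xi (assumed)
P = 4                                        # polynomial chaos truncation order (assumed)

nodes, weights = He.hermegauss(2 * P + 2)    # enough points for exact triple products
weights = weights / weights.sum()            # probability weights for N(0, 1)

def psi(k, x):
    c = np.zeros(k + 1); c[k] = 1.0
    return He.hermeval(x, c)                 # probabilists' Hermite polynomial He_k

# Galerkin matrix A_{jk} = E[(a0 + a1*xi) psi_j psi_k] / E[psi_j^2],  with E[He_j^2] = j!.
A = np.zeros((P + 1, P + 1))
for j in range(P + 1):
    for k in range(P + 1):
        integrand = (a0 + a1 * nodes) * psi(j, nodes) * psi(k, nodes)
        A[j, k] = np.sum(weights * integrand) / math.factorial(j)

eigvals = np.linalg.eigvals(A)
print("Galerkin matrix A:\n", np.round(A, 3))
print("eigenvalues (real => hyperbolic system):", np.round(np.sort(eigvals.real), 3))
```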
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Hunt, Stephen E. "Uncertainty Quantification Using Epi-Splines and Soft Information". Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/7361.

Texto completo da fonte
Resumo:
Approved for public release; distribution is unlimited
This thesis deals with the problem of measuring system performance in the presence of uncertainty. The system under consideration may be as simple as an Army vehicle subjected to a kinetic attack or as complex as the human cognitive process. Information about the system performance is found in the observed data points, which we call hard information, and may be collected from physical sensors, field test data, and computer simulations. Soft information is available from human sources such as subject-matter experts and analysts, and represents qualitative information about the system performance and the uncertainty present. We propose the use of epi-splines in a nonparametric framework that allows for the systematic integration of hard and soft information for the estimation of system performance density functions in order to quantify uncertainty. We conduct empirical testing of several benchmark analytical examples, where the true probability density functions are known. We compare the performance of the epi-spline estimator to kernel-based estimates and highlight a real-world problem context to illustrate the potential of the framework.
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Lebon, Jérémy. "Towards multifidelity uncertainty quantification for multiobjective structural design". Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01002392.

Texto completo da fonte
Resumo:
This thesis addresses Multi-Objective Optimization under Uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates, which require extensive training sets. We then face two issues: the high computational cost of an individual Finite Element simulation and its limited precision. From a numerical point of view, and in order to limit the computational expense of the PCE construction, we particularly focus on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme taking into account the finite precision of the simulation. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full scale simulations through reduced order physics up to response surfaces. Finally, we investigate multiobjective optimization of structures under uncertainty. We extend the PCE model of design objectives by taking into account the design variables. We illustrate our work with examples in sheet metal forming and optimal design of truss structures.
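Illustrative aside (a plain Latin Hypercube design, not the custom precision-aware scheme of the thesis): scipy's QMC module generates space-filling training sets for a PCE or other surrogate; the variable ranges below are placeholders, and scipy >= 1.7 is assumed.

```python
from scipy.stats import qmc

# 3 design/uncertain variables, 20 surrogate training points (sizes assumed).
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_sample = sampler.random(n=20)                 # points in [0, 1)^3, one per row

# Scale to physical ranges, e.g. thickness [mm], yield stress [MPa], friction [-] (placeholders).
lower = [0.8, 200.0, 0.05]
upper = [1.2, 400.0, 0.15]
design = qmc.scale(unit_sample, lower, upper)

print(design[:5])                                  # first five training points
print("discrepancy of the unit design:", qmc.discrepancy(unit_sample))
```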
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Hristov, Peter O. "Numerical modelling and uncertainty quantification of biodiesel filters". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3024537/.

Texto completo da fonte
Resumo:
This dissertation explores the design and analysis of computer models for filters used to separate water from biodiesel. Regulations concerning air pollution and increasing fossil fuel scarcity mandate the transition towards biofuels. Moreover, increasingly stringent standards for fuel cleanliness are introduced continually. Biodiesel exhibits strong affinity towards water, which makes its separation from the fuel challenging. Water in the fuel can cause problems, ranging from reduced performance to significant damage to the equipment. A model of the filter is needed to substitute costly or impractical laboratory experiments and to enable the systematic studies of coalescence processes. These computational experiments provide a means for designing filtration equipment with optimal separation efficiency and pressure drop. The coalescence process is simulated using the lattice Boltzmann modelling framework. These models offer several advantages over conventional computational fluid dynamics solvers and are commonly used for the simulation of multiphase flows. Different versions of lattice Boltzmann models in two and three dimensions are created and used in this work. Complex computer models, such as those employed in this dissertation are considered expensive, in that their running times may prohibit any type of code analysis which requires many evaluations of the simulator to be performed. To alleviate this problem, a statistical metamodel known as a Gaussian process emulator is used. Once the computational cost of the model is reduced, uncertainty quantification methods and, in particular, sensitivity and reliability analyses are used to study its performance. Tools and packages for industrial use are developed in this dissertation to enable the practical application of the studies conducted in it.
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Lal, Rajnesh. "Data assimilation and uncertainty quantification in cardiovascular biomechanics". Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS088/document.

Texto completo da fonte
Resumo:
Les simulations numériques des écoulements sanguins cardiovasculaires peuvent combler d’importantes lacunes dans les capacités actuelles de traitement clinique. En effet, elles offrent des moyens non invasifs pour quantifier l’hémodynamique dans le cœur et les principaux vaisseaux sanguins chez les patients atteints de maladies cardiovasculaires. Ainsi, elles permettent de recouvrer les caractéristiques des écoulements sanguins qui ne peuvent pas être obtenues directement à partir de l’imagerie médicale. Dans ce sens, des simulations personnalisées utilisant des informations propres aux patients aideraient à une prévision individualisée des risques. Nous pourrions en effet, disposer des informations clés sur la progression éventuelle d’une maladie ou détecter de possibles anomalies physiologiques. Les modèles numériques peuvent fournir également des moyens pour concevoir et tester de nouveaux dispositifs médicaux et peuvent être utilisés comme outils prédictifs pour la planification de traitement chirurgical personnalisé. Ils aideront ainsi à la prise de décision clinique. Cependant, une difficulté dans cette approche est que, pour être fiables, les simulations prédictives spécifiques aux patients nécessitent une assimilation efficace de leurs données médicales. Ceci nécessite la solution d’un problème hémodynamique inverse, où les paramètres du modèle sont incertains et sont estimés à l’aide des techniques d’assimilation de données.Dans cette thèse, le problème inverse pour l’estimation des paramètres est résolu par une méthode d’assimilation de données basée sur un filtre de Kalman d’ensemble (EnKF). Connaissant les incertitudes sur les mesures, un tel filtre permet la quantification des incertitudes liées aux paramètres estimés. Un algorithme d’estimation de paramètres, basé sur un filtre de Kalman d’ensemble, est proposé dans cette thèse pour des calculs hémodynamiques spécifiques à un patient, dans un réseau artériel schématique et à partir de mesures cliniques incertaines. La méthodologie est validée à travers plusieurs scenarii in silico utilisant des données synthétiques. La performance de l’algorithme d’estimation de paramètres est également évaluée sur des données expérimentales pour plusieurs réseaux artériels et dans un cas provenant d’un banc d’essai in vitro et des données cliniques réelles d’un volontaire (cas spécifique du patient). Le but principal de cette thèse est l’analyse hémodynamique spécifique du patient dans le polygone de Willis, appelé aussi cercle artériel du cerveau. Les propriétés hémodynamiques communes, comme celles de la paroi artérielle (module de Young, épaisseur de la paroi et coefficient viscoélastique), et les paramètres des conditions aux limites (coefficients de réflexion et paramètres du modèle de Windkessel) sont estimés. Il est également démontré qu’un modèle appelé compartiment d’ordre réduit (ou modèle dimension zéro) permet une estimation simple et fiable des caractéristiques du flux sanguin dans le polygone de Willis. De plus, il est ressorti que les simulations avec les paramètres estimés capturent les formes attendues pour les ondes de pression et de débit aux emplacements prescrits par le clinicien
Cardiovascular blood flow simulations can fill several critical gaps in current clinical capabilities. They offer non-invasive ways to quantify hemodynamics in the heart and major blood vessels for patients with cardiovascular diseases, which cannot be directly obtained from medical imaging. Patient-specific simulations (incorporating data unique to the individual) enable individualised risk prediction and provide key insights into disease progression and/or the detection of abnormal physiology. They also provide means to systematically design and test new medical devices, and are used as predictive tools for surgical and personalized treatment planning, thus aiding clinical decision-making. Patient-specific predictive simulations require effective assimilation of medical data for reliable simulated predictions. This is usually achieved by the solution of an inverse hemodynamic problem, where uncertain model parameters are estimated using the techniques for merging data and numerical models known as data assimilation methods. In this thesis, the inverse problem is solved through a data assimilation method using an ensemble Kalman filter (EnKF) for parameter estimation. By using an ensemble Kalman filter, the solution also comes with a quantification of the uncertainties for the estimated parameters. An ensemble Kalman filter-based parameter estimation algorithm is proposed for patient-specific hemodynamic computations in a schematic arterial network from uncertain clinical measurements. Several in silico scenarii (using synthetic data) are considered to investigate the efficiency of the parameter estimation algorithm using EnKF. The usefulness of the parameter estimation algorithm is also assessed using experimental data from an in vitro test rig and actual clinical data from a volunteer (patient-specific case). The proposed algorithm is evaluated on arterial networks which include single arteries, cases of bifurcation, a simple human arterial network and a complex arterial network including the circle of Willis. The ultimate aim is to perform patient-specific hemodynamic analysis in the network of the circle of Willis. Common hemodynamic properties (parameters), like arterial wall properties (Young’s modulus, wall thickness, and viscoelastic coefficient) and terminal boundary parameters (reflection coefficient and Windkessel model parameters), are estimated as the solution to an inverse problem using time-series pressure values and blood flow rates as measurements. It is also demonstrated that a proper reduced order zero-dimensional compartment model can lead to a simple and reliable estimation of blood flow features in the circle of Willis. The simulations with the estimated parameters capture target pressure or flow rate waveforms at given specific locations.
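Illustrative aside (a toy two-parameter problem, not the arterial-network solver): the ensemble Kalman filter analysis step used for parameter estimation fits in a few lines. The forward model, noise level, prior spread and ensemble size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model mapping two parameters to two observed quantities (form assumed).
def forward(theta):
    return np.array([theta[0] + theta[1] ** 2, np.sin(theta[0]) * theta[1]])

true_theta = np.array([1.2, 0.8])
R = 0.05 ** 2 * np.eye(2)                                # observation error covariance (assumed)
y_obs = forward(true_theta) + rng.multivariate_normal(np.zeros(2), R)

# Prior parameter ensemble (assumed prior spread).
n_ens = 1000
theta_ens = rng.normal(loc=[1.0, 1.0], scale=[0.4, 0.4], size=(n_ens, 2))

# EnKF analysis step with perturbed observations.
y_ens = np.array([forward(t) for t in theta_ens])
y_pert = y_obs + rng.multivariate_normal(np.zeros(2), R, size=n_ens)

dtheta = theta_ens - theta_ens.mean(axis=0)
dy = y_ens - y_ens.mean(axis=0)
C_ty = dtheta.T @ dy / (n_ens - 1)                       # parameter/output cross-covariance
C_yy = dy.T @ dy / (n_ens - 1)                           # output covariance

K = C_ty @ np.linalg.inv(C_yy + R)                       # Kalman gain
theta_post = theta_ens + (y_pert - y_ens) @ K.T          # updated (analysis) ensemble

print("posterior mean:", theta_post.mean(axis=0), " true:", true_theta)
print("posterior std :", theta_post.std(axis=0))
```

The spread of the updated ensemble is what provides the uncertainty quantification of the estimated parameters mentioned in the abstract.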
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Zhang, Zheng Ph D. Massachusetts Institute of Technology. "Uncertainty quantification for integrated circuits and microelectromechanical systems". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99855.

Texto completo da fonte
Resumo:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 155-168).
Uncertainty quantification has become an important task and an emerging topic in many engineering fields. Uncertainties can be caused by many factors, including inaccurate component models, the stochastic nature of some design parameters, external environmental fluctuations (e.g., temperature variation), measurement noise, and so forth. In order to enable robust engineering design and optimal decision making, efficient stochastic solvers are highly desired to quantify the effects of uncertainties on the performance of complex engineering designs. Process variations have become increasingly important in the semiconductor industry due to the shrinking of micro- and nano-scale devices. Such uncertainties have led to remarkable performance variations at both circuit and system levels, and they cannot be ignored any more in the design of nano-scale integrated circuits and microelectromechanical systems (MEMS). In order to simulate the resulting stochastic behaviors, Monte Carlo techniques have been employed in SPICE-like simulators for decades, and they still remain the mainstream techniques in this community. Despite their ease of implementation, Monte Carlo simulators are often too time-consuming due to the huge number of repeated simulations. This thesis reports the development of several stochastic spectral methods to accelerate the uncertainty quantification of integrated circuits and MEMS. Stochastic spectral methods have emerged as a promising alternative to Monte Carlo in many engineering applications, but their performance may degrade significantly as the parameter dimensionality increases. In this work, we develop several efficient stochastic simulation algorithms for various integrated circuits and MEMS designs, including problems with both low-dimensional and high-dimensional random parameters, as well as complex systems with hierarchical design structures. The first part of this thesis reports a novel stochastic-testing circuit/MEMS simulator as well as its advanced simulation engine for radio-frequency (RF) circuits. The proposed stochastic testing can be regarded as a hybrid variant of stochastic Galerkin and stochastic collocation: it is an intrusive simulator with decoupled computation and adaptive time stepping inside the solver. As a result, our simulator gains remarkable speedup over standard stochastic spectral methods and Monte Carlo in the DC, transient and AC simulation of various analog, digital and RF integrated circuits. An advanced uncertainty quantification algorithm for the periodic steady states (or limit cycles) of analog/RF circuits is further developed by combining stochastic testing and shooting Newton. Our simulator is verified by various integrated circuits, showing 10²× to 10³× speedup over Monte Carlo when a similar level of accuracy is required. The second part of this thesis presents two approaches for hierarchical uncertainty quantification. In hierarchical uncertainty quantification, we propose to employ stochastic spectral methods at different design hierarchies to simulate complex systems efficiently. The key idea is to ignore the multiple random parameters inside each subsystem and to treat each subsystem as a single random parameter. The main difficulty is to recompute the basis functions and quadrature rules that are required for the high-level uncertainty quantification, since the density function of an obtained low-level surrogate model is generally unknown.
In order to address this issue, the first proposed algorithm computes new basis functions and quadrature points in the low-level (and typically high-dimensional) parameter space. This approach is very accurate; however, it may suffer from the curse of dimensionality. In order to handle high-dimensional problems, a sparse stochastic testing simulator based on analysis of variance (ANOVA) is developed to accelerate the low-level simulation. At the high level, a fast algorithm based on tensor decompositions is proposed to compute the basis functions and Gauss quadrature points. Our algorithm is verified by some MEMS/IC co-design examples with both low-dimensional and high-dimensional (up to 184) random parameters, showing about 10²× speedup over the state-of-the-art techniques. The second proposed hierarchical uncertainty quantification technique instead constructs a density function for each subsystem by some monotonic interpolation schemes. This approach is capable of handling general low-level, possibly non-smooth surrogate models, and it allows computing new basis functions and quadrature points in an analytical way. The computational techniques developed in this thesis are based on stochastic differential algebraic equations, but the results can also be applied to many other engineering problems (e.g., silicon photonics, heat transfer problems, fluid dynamics, electromagnetics and power systems). There are many research opportunities in this direction. Important open problems include how to solve high-dimensional problems (by both deterministic and randomized algorithms), how to deal with discontinuous response surfaces, how to handle correlated non-Gaussian random variables, how to couple noise and random parameters in uncertainty quantification, how to deal with correlated and time-dependent subsystems in hierarchical uncertainty quantification, and so forth.
by Zheng Zhang.
Ph. D.
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Chen, Qi. "Uncertainty quantification in assessment of damage ship survivability". Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=19511.

Texto completo da fonte
Resumo:
Ongoing developments in improving ship safety indicate the gradual transition from a compliance-based culture to a sustainable safety-oriented culture. Sophisticated methods, tools and techniques are demanded to address the dynamic behaviour of a ship in a physical environment. This is particularly true for investigating the flooding phenomenon of a damaged ship, a principal hazard endangering modern ships. In this respect, first-principles tools represent a rational and cost-effective approach to address it at both design and operational stages. Acknowledging the criticality of ship survivability and the various maturity levels of state-of-the-art tools, analyses of the underlying uncertainties in relation to relevant predictions become an inevitable component to be addressed. The research presented in this thesis proposes a formalised Bayesian approach for quantifying uncertainties associated with the assessment of ship survivability. It elaborates a formalised procedure for synthesizing first-principles tools with existing knowledge from various sources. The outcome is a mathematical model for predicting time-domain survivability and quantifying the associated uncertainties. In view of emerging ship life-cycle safety management issues and the recent initiative of "Safe Return to Port", emergency management is recognised as the last remedy to address an evolving flooding crisis. For this reason, an emergency decision support framework is proposed to demonstrate the applicability of the presented Bayesian approach. A case study is enclosed to elucidate the devised shipboard decision support framework for flooding-related emergency control. Various aspects of the presented methodology demonstrate considerable potential for further research, development and application. In an environment where more emphasis is placed on performance and probabilistic-based solutions, it is believed that this research has contributed positively and substantially towards ship safety, with particular reference to uncertainty analysis and ensuing applications.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Abdollahzadeh, Asaad. "Adaptive algorithms for history matching and uncertainty quantification". Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2752.

Texto completo da fonte
Resumo:
Numerical reservoir simulation models are the basis for many decisions in regard to predicting, optimising, and improving production performance of oil and gas reservoirs. History matching is required to calibrate models to the dynamic behaviour of the reservoir, due to the existence of uncertainty in model parameters. Finally, a set of history-matched models is used for reservoir performance prediction and economic and risk assessment of different development scenarios. Various algorithms are employed to search and sample parameter space in history matching and uncertainty quantification problems. The algorithm choice and implementation, as done through a number of control parameters, have a significant impact on effectiveness and efficiency of the algorithm and thus, the quality of results and the speed of the process. This thesis is concerned with investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems. A set of evolutionary algorithms are considered and applied to history matching. The shared characteristic of applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multiobjective sorting algorithms, and real-coded, multi-model and multivariate Gaussian-based models can help these algorithms to adapt even more and improve their performance. Finally, diversity measures are used to develop an explicit, adaptive algorithm and control the algorithm’s performance, based on the structure of the problem. Uncertainty quantification in a Bayesian framework can be carried out by resampling of the search space using Markov chain Monte-Carlo sampling algorithms. Common critiques of these are low efficiency and their need for control parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest neighbour approximation has been developed and applied.
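Illustrative aside (a simplified adaptive-Metropolis flavour, not the K-nearest-neighbour variant of the thesis): a Metropolis-Hastings sampler whose Gaussian proposal covariance is periodically re-estimated from the chain history. The quadratic "misfit" target stands in for a reservoir simulator and is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in log-posterior: a correlated 2-D Gaussian (a real study would call the simulator here).
C_target = np.array([[1.0, 0.8], [0.8, 1.0]])
C_inv = np.linalg.inv(C_target)

def log_post(x):
    return -0.5 * x @ C_inv @ x

n_iter, adapt_start = 20_000, 1_000
chain = np.zeros((n_iter, 2))
x = np.array([3.0, -3.0])                      # arbitrary starting point
prop_cov = 0.5 * np.eye(2)                     # initial proposal covariance
accepted = 0

for i in range(n_iter):
    # Periodically re-estimate the proposal covariance from the chain so far (adaptive step).
    if i >= adapt_start and i % 500 == 0:
        prop_cov = 2.38 ** 2 / 2 * np.cov(chain[:i].T) + 1e-6 * np.eye(2)

    proposal = rng.multivariate_normal(x, prop_cov)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(x):   # symmetric proposal
        x, accepted = proposal, accepted + 1
    chain[i] = x

print("acceptance rate:", accepted / n_iter)
print("posterior mean estimate:", chain[5000:].mean(axis=0))
```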
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Pascual, Blanca. "Uncertainty quantification for complex structures : statics and dynamics". Thesis, Swansea University, 2012. https://cronfa.swan.ac.uk/Record/cronfa42987.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Mulani, Sameer B. "Uncertainty Quantification in Dynamic Problems With Large Uncertainties". Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28617.

Texto completo da fonte
Resumo:
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate probabilistic sound power and the sensitivity of elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using the Finite Element Method (FEM) and the Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations. The results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, a metamodel, and MCS. This new technique is applied to calculate the sound power of a composite panel using FEM and the Rayleigh Integral. The proposed methodology shows considerable improvement both in terms of accuracy and computational efficiency. In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are developed to solve stochastic eigenvalue problems using Polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions. Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method. A modification is suggested to calculate higher eigenvalues and eigenvectors. The above algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loeve (KL) expansion. The second algorithm is based on the implicit polynomial iteration method. This algorithm is found to be more efficient when applied to discrete systems. However, the application of the algorithm to continuous systems results in ill-conditioned system matrices, which seriously limit its application. Lastly, an algorithm to find the basis random variables of the KL expansion for non-Gaussian processes is developed. The basis random variables are obtained via a nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three known skewed distributions: Log-Normal, Beta, and Exponential. In all cases, it is found that the proposed algorithm matches very well with the known solutions and can be applied to solve non-Gaussian processes using SSFEM.
Ph. D.
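Illustrative aside (a discrete 1-D sketch, not the SSFEM formulation of the dissertation): the Karhunen-Loeve step amounts to an eigendecomposition of the covariance matrix of the random field on a grid, with the leading modes retained. The exponential kernel, correlation length and grid are assumptions.

```python
import numpy as np

# 1-D grid and exponential covariance kernel (parameters assumed).
n, L, corr_len, sigma = 200, 1.0, 0.2, 1.0
x = np.linspace(0.0, L, n)
C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL expansion: eigen-decomposition of the covariance matrix, modes sorted by energy.
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[idx], eigvec[:, idx]

m = 10                                         # number of retained KL modes (assumption)
print(f"{m} modes capture {100 * eigval[:m].sum() / eigval.sum():.1f}% of the variance")

# One realisation of the truncated field: sum_k sqrt(lambda_k) * xi_k * phi_k(x).
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)
field = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)
print("realisation min/max:", field.min(), field.max())
```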
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Macatula, Romcholo Yulo. "Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99411.

Texto completo da fonte
Resumo:
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression for the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions for the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm; however, they fall short of a typical Bayesian inference method in some respects.
Master of Science
Parameter uncertainty quantification seeks to determine both estimates and uncertainty regarding estimates of model parameters. Examples of model parameters include physical properties such as density, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods in the specific field of linear models. Theoretical results are tested on simulated computed tomography problems.
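Illustrative aside (the blur operator, noise level and prior are assumptions standing in for the deconvolution and tomography problems): for a linear Gaussian model y = A x + noise with a Gaussian prior on x, the posterior is available in closed form, which is the kind of expression the thesis derives for its surrogate-based approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D deconvolution: A blurs a signal with a short decaying kernel (setup assumed).
n = 60
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 2), min(n, i + 3)):
        A[i, j] = 1.0 / (1 + abs(i - j))
x_true = np.sin(np.linspace(0, 3 * np.pi, n)) * (np.linspace(0, 1, n) > 0.3)

sigma, tau = 0.05, 1.0                         # noise std and prior std (assumed)
y = A @ x_true + sigma * rng.standard_normal(n)

# Conjugate Gaussian linear model: closed-form posterior covariance and mean.
post_cov = np.linalg.inv(A.T @ A / sigma ** 2 + np.eye(n) / tau ** 2)
post_mean = post_cov @ A.T @ y / sigma ** 2
post_std = np.sqrt(np.diag(post_cov))

print("max reconstruction error:", np.max(np.abs(post_mean - x_true)))
print("mean posterior std      :", post_std.mean())
```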
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Huang, Jiangeng. "Sequential learning, large-scale calibration, and uncertainty quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91935.

Texto completo da fonte
Resumo:
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising from designing, analyzing, modeling, calibrating, optimizing, and predicting in computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from input-dependent noise. Motivated by challenges in both large data size and model fidelity arising from ever larger modern computer experiments, highly accurate and computationally efficient divide-and-conquer calibration methods based on on-site experimental design and surrogate modeling for large-scale computer models are developed in this dissertation. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This on-site surrogate calibration method is further extended to multiple output calibration problems.
Doctor of Philosophy
With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of various scientific investigations, including biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For computer experiments with changing signal-to-noise ratio, an optimal lookahead based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from complex noise structure. In order to effectively extract key information from massive amount of simulation and make better prediction for the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed in this dissertation, addressing challenges in both large data size and model fidelity arising from ever larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This large-scale calibration method is further extended to solve multiple output calibration problems.
39

Vishwanathan, Aditya. "Uncertainty Quantification for Topology Optimisation of Aerospace Structures". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23922.

Full text of the source
Abstract:
The design and optimisation of aerospace structures is non-trivial. There are several reasons for this, including, but not limited to, (1) complex problem instances (multiple objectives, constraints, loads, and boundary conditions), (2) the use of high-fidelity meshes, which impose a significant computational burden, and (3) the need to deal with uncertainties in the engineering modelling. The last few decades have seen a considerable increase in research output dedicated to solving these problems, and yet the majority of papers neglect the effect of uncertainties and assume deterministic conditions. This is particularly the case for topology optimisation, a promising method for aerospace design that has seen relatively little practical application to date. This thesis addresses notable gaps in the literature on topology optimisation under uncertainty. Firstly, a persistent gap in the field of uncertainty quantification (UQ) is the lack of experimental studies and of methods for dealing with non-parametric variability (e.g. model unknowns, experimental and human errors). Random Matrix Theory (RMT) is explored heavily in this thesis for numerical and experimental UQ of aerospace structures under both parametric and non-parametric uncertainties. Next, a novel RMT-based algorithm is developed to increase the efficiency of reliability-based topology optimisation, a formulation that has historically been limited by computational runtime. The thesis also contributes to robust topology optimisation (RTO) by integrating uncertain boundary conditions and providing experimental validation of the results. The final chapter addresses uncertainties in multi-objective topology optimisation (MOTO) and also considers treating a single-objective RTO problem as a MOTO problem to provide a more consistent distribution of solutions.
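A minimal sketch of matrix-level (non-parametric) uncertainty propagation is given below: a nominal stiffness matrix of a toy three-degree-of-freedom structure is perturbed by random symmetric matrices and the natural frequencies are recomputed by Monte Carlo. The structure, dispersion level, and perturbation scheme are assumptions for illustration and do not follow the specific Random Matrix Theory construction used in the thesis.

```python
import numpy as np
from scipy.linalg import eigh

# Nominal stiffness and mass matrices of a toy 3-DOF spring-mass chain
# (all values hypothetical).
K0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  1.0]])
M = np.eye(3)

rng = np.random.default_rng(2)
delta = 0.05                       # dispersion of the matrix-valued uncertainty
freqs = []
for _ in range(2000):
    # Random symmetric perturbation applied at the operator level, mimicking
    # non-parametric uncertainty rather than varying named physical parameters.
    G = delta * rng.standard_normal(K0.shape)
    K = K0 + 0.5 * (G + G.T)
    w2 = eigh(K, M, eigvals_only=True)   # generalised eigenvalues K v = w^2 M v
    freqs.append(np.sqrt(np.abs(w2)))
freqs = np.array(freqs)
print("mean natural frequencies:", freqs.mean(axis=0))
print("std of natural frequencies:", freqs.std(axis=0))
```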
40

Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". Thesis, Kraipeerapun, Pawalai (2009) Neural network classification based on quantification of uncertainty. PhD thesis, Murdoch University, 2009. https://researchrepository.murdoch.edu.au/id/eprint/699/.

Full text of the source
Abstract:
This thesis deals with feedforward backpropagation neural networks and interval neutrosophic sets for binary and multiclass classification problems. Neural networks are used to predict “true” and “false” output values. These results, together with the uncertainties of type error and vagueness that arise in the prediction, are then represented in the form of interval neutrosophic sets. Each element in an interval neutrosophic set consists of three membership values: truth, indeterminacy, and falsity. These three membership values are then used in the classification process. For binary classification, a pair of neural networks is first applied in order to predict the degrees of truth and false membership. Subsequently, a bagging technique is applied to an ensemble of pairs of neural networks in order to improve performance. For multiclass classification, two basic methods are proposed and tested: a single pair of neural networks with multiple outputs, and multiple pairs of binary neural networks. A number of aggregation techniques are also proposed in this thesis. The difference between each pair of truth and false membership values determines the vagueness value, while errors occurring in the prediction are estimated using an interpolation technique; vagueness and error together form the indeterminacy membership. Two- and three-dimensional visualisations of the three membership values are also presented. The proposed approaches are evaluated on ten data sets obtained from the UCI machine learning repository and are also applied to two real-world problems: mineral prospectivity prediction and lithofacies classification.
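The sketch below illustrates one plausible way the three membership values could be combined for binary classification, assuming the truth and false memberships have already been produced by the two networks; the numbers, the vagueness definition, and the fixed error placeholder are illustrative rather than the thesis's exact formulation.

```python
import numpy as np

# Hypothetical outputs of the two networks for five test samples: one network
# predicts the degree of truth membership, the other the degree of false
# membership.
truth = np.array([0.91, 0.35, 0.60, 0.15, 0.55])
false = np.array([0.10, 0.70, 0.45, 0.80, 0.50])

# One plausible definition: vagueness is the complement of the separation
# between the two outputs.
vagueness = 1.0 - np.abs(truth - false)

# A fixed placeholder stands in for the interpolation-based error estimate
# described in the thesis.
error = np.full_like(truth, 0.05)

# Vagueness and error together form the indeterminacy membership.
indeterminacy = np.clip(vagueness + error, 0.0, 1.0)

# Simple decision rule: assign the "true" class when the truth membership
# dominates the false membership.
prediction = (truth > false).astype(int)
for t, f, i, p in zip(truth, false, indeterminacy, prediction):
    print(f"truth={t:.2f} false={f:.2f} indeterminacy={i:.2f} -> class {p}")
```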
41

Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". Kraipeerapun, Pawalai (2009) Neural network classification based on quantification of uncertainty. PhD thesis, Murdoch University, 2009. http://researchrepository.murdoch.edu.au/699/.

Full text of the source
Abstract:
This thesis deals with feedforward backpropagation neural networks and interval neutrosophic sets for binary and multiclass classification problems. Neural networks are used to predict “true” and “false” output values. These results, together with the uncertainties of type error and vagueness that arise in the prediction, are then represented in the form of interval neutrosophic sets. Each element in an interval neutrosophic set consists of three membership values: truth, indeterminacy, and falsity. These three membership values are then used in the classification process. For binary classification, a pair of neural networks is first applied in order to predict the degrees of truth and false membership. Subsequently, a bagging technique is applied to an ensemble of pairs of neural networks in order to improve performance. For multiclass classification, two basic methods are proposed and tested: a single pair of neural networks with multiple outputs, and multiple pairs of binary neural networks. A number of aggregation techniques are also proposed in this thesis. The difference between each pair of truth and false membership values determines the vagueness value, while errors occurring in the prediction are estimated using an interpolation technique; vagueness and error together form the indeterminacy membership. Two- and three-dimensional visualisations of the three membership values are also presented. The proposed approaches are evaluated on ten data sets obtained from the UCI machine learning repository and are also applied to two real-world problems: mineral prospectivity prediction and lithofacies classification.
42

Doty, Austin. "Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit". University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1355456642.

Full text of the source
43

Hale, II Lawrence Edmond. "Aerodynamic Uncertainty Quantification and Estimation of Uncertainty Quantified Performance of Unmanned Aircraft Using Non-Deterministic Simulations". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/74427.

Full text of the source
Abstract:
This dissertation addresses model form uncertainty quantification, non-deterministic simulations, and sensitivity analysis of the results of these simulations, with a focus on application to the analysis of unmanned aircraft systems. The model form uncertainty quantification utilizes equation error to estimate the error between an identified model and flight test results. The errors are then related to aircraft states, and prediction intervals are calculated. This method for model form uncertainty quantification results in uncertainty bounds that vary with the aircraft state: narrower where consistent information has been collected and wider where data are not available. Non-deterministic simulations can then be performed to provide uncertainty quantified estimates of the system performance. Because the model form uncertainties could be time varying, multiple sampling methods were considered. The two methods utilized were a fixed uncertainty level and a rate-bounded variation in the uncertainty level. For analysis using a fixed uncertainty level, the corner points of the model form uncertainty were sampled, reducing computational time. The second model better represents the uncertainty but requires significantly more simulations to sample it. The uncertainty quantified performance estimates are compared to estimates based on flight tests to check the accuracy of the results. Sensitivity analysis is performed on the uncertainty quantified performance estimates to provide information on which of the model form uncertainties contribute most to the uncertainty in the performance estimates. The proposed method uses the results from the fixed uncertainty level analysis that utilizes the corner points of the model form uncertainties. The sensitivity of each parameter is estimated based on corner values of all the other uncertain parameters, resulting in a range of possible sensitivities for each parameter dependent on the true values of the other parameters.
Ph. D.
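The corner-point sampling idea can be illustrated with a toy performance model: each uncertain parameter is pushed to its lower and upper bound and the metric is evaluated at every corner of the resulting hypercube. The parameters, bounds, and endurance-style metric below are hypothetical and unrelated to the dissertation's flight simulations.

```python
import itertools
import numpy as np

# Hypothetical model-form uncertainties on two aerodynamic parameters,
# expressed as +/- bounds about nominal values.
nominal = {"CD0": 0.030, "k": 0.045}
bounds = {"CD0": 0.004, "k": 0.006}

def endurance_metric(CD0, k):
    # Toy performance model standing in for a non-deterministic flight
    # simulation; larger is better.
    return 1.0 / (CD0 + k * 0.5**2)

# Evaluate the metric at every corner of the uncertainty hypercube.
names = list(nominal)
corners = itertools.product(*[(-1.0, 1.0) for _ in names])
values = []
for signs in corners:
    params = {n: nominal[n] + s * bounds[n] for n, s in zip(names, signs)}
    values.append(endurance_metric(**params))
values = np.array(values)
print("metric range over corners:", values.min(), values.max())
```

Sampling only the corners keeps the number of runs at 2 to the power of the number of uncertain parameters, which is what makes the fixed-uncertainty-level analysis comparatively cheap.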
44

Gilbert, Michael Stephen. "A Small-Perturbation Automatic-Differentiation (SPAD) Method for Evaluating Uncertainty in Computational Electromagnetics". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354742230.

Full text of the source
45

Blumer, Joel David. "Cross-scale model validation with aleatory and epistemic uncertainty". Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53571.

Full text of the source
Abstract:
Nearly every decision must be made with a degree of uncertainty regarding the outcome. Decision making based on modeling and simulation predictions needs to incorporate and aggregate uncertain evidence. To validate multiscale simulation models, it may be necessary to consider evidence collected at a length scale that is different from the one at which the model predicts. In addition, traditional methods of uncertainty analysis do not distinguish between two types of uncertainty: uncertainty due to inherently random inputs, and uncertainty due to lack of information about the inputs. This thesis examines and applies a Bayesian approach for model parameter validation that uses generalized interval probability to separate these two types of uncertainty. A generalized interval Bayes’ rule (GIBR) is used to combine the evidence and update belief in the validity of parameters. The sensitivity of completeness and soundness for interval range estimation in GIBR is investigated, and several approaches to representing complete ignorance of the probabilities’ values are tested. The result from the GIBR method is verified using Monte Carlo simulations. The method is first applied to validate the parameter set for a molecular dynamics simulation of defect formation due to radiation, with evidence supplied by comparison with physical experiments. Because the simulation includes variables whose effects are not directly observable, an expanded form of GIBR is implemented to incorporate measurement uncertainty into the belief update. In a second example, the proposed method is applied to combining the evidence from two models of crystal plasticity at different length scales.
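For intuition about interval-valued Bayesian updating, the sketch below bounds a two-hypothesis posterior when the prior and likelihoods are known only as intervals, using simple endpoint monotonicity; this naive bounding is not the generalized interval Bayes' rule itself, and all probability intervals are invented.

```python
# Two-hypothesis Bayes update with interval-valued prior and likelihoods.
# Values are hypothetical; this endpoint bounding is a simpler stand-in
# for the generalized interval Bayes' rule (GIBR) discussed above.

prior = (0.40, 0.60)        # interval prior P(H)
lik_H = (0.70, 0.90)        # interval likelihood P(e | H)
lik_notH = (0.10, 0.30)     # interval likelihood P(e | not H)

def posterior(p, l_h, l_nh):
    return p * l_h / (p * l_h + (1.0 - p) * l_nh)

# The posterior is increasing in the prior and in P(e|H), and decreasing
# in P(e|not H), so the bounds are attained at these endpoint choices.
lower = posterior(prior[0], lik_H[0], lik_notH[1])
upper = posterior(prior[1], lik_H[1], lik_notH[0])
print(f"P(H | e) lies in [{lower:.3f}, {upper:.3f}]")
```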
46

Dostert, Paul Francis. "Uncertainty quantification using multiscale methods for porous media flows". College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2532.

Full text of the source
47

Lonsdale, Jack Henry. "Predictive modelling and uncertainty quantification of UK forest growth". Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16202.

Full text of the source
Abstract:
Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single-age monoculture stands. The forest strategies for Scotland, England, and Wales all include efforts to achieve further afforestation, with the aim of providing a multi-functional forest with a broad range of benefits. Due to the time scales involved in forestry, accurate forecasts of stand productivity (along with clearly defined uncertainties) are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project, model comparison, Bayesian calibration, and data assimilation methods were all used to improve forecasts, and the understanding of their uncertainty, for the two most important conifers in UK forestry.

Three different forest growth models were compared in simulating growth of Scots pine: a yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model. Predictions were compared graphically over the typical productivity range for Scots pine in the UK, and the strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between models was in volume and biomass in unthinned stands, where the yield table predicted a much larger range than the other two models. Future advances in data availability and computing power should allow for greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions which extend to non-standard management prescriptions and estimates of early growth and yield.

A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time. Bayesian calibration allows both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model with which forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison, and a complete calibration of the more probable model structure was then completed. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future.

Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but that combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an Ensemble Kalman Filter. Uncertainty was reduced following the second data assimilation in all of the state variables. However, errors in stand delineation and estimated stand yield class may have caused observational uncertainty to be greater than assumed, reducing the efficacy of the method for reducing overall uncertainty.
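A minimal Ensemble Kalman Filter analysis step, of the kind referred to above, might look like the following; the state vector, observation operator, and error statistics are synthetic and unrelated to the forest inventory data.

```python
import numpy as np

# Minimal stochastic EnKF analysis step: combine a model forecast ensemble
# with one observation of the first state variable. All numbers are synthetic.
rng = np.random.default_rng(3)
n_state, n_ens = 3, 100
X = rng.normal(loc=[10.0, 5.0, 2.0], scale=[2.0, 1.0, 0.5],
               size=(n_ens, n_state)).T          # shape (n_state, n_ens)

H = np.array([[1.0, 0.0, 0.0]])   # observe the first state variable only
R = np.array([[0.5**2]])          # observation-error covariance
y = np.array([12.0])              # the observation

# Sample covariance of the forecast ensemble and the Kalman gain.
P = np.cov(X)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Perturbed-observation update applied member by member.
Y = y[:, None] + np.sqrt(R) @ rng.standard_normal((1, n_ens))
X_analysis = X + K @ (Y - H @ X)
print("analysis mean:", X_analysis.mean(axis=1))
```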
48

Mantis, George C. "Quantification and propagation of disciplinary uncertainty via bayesian statistics". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/12136.

Full text of the source
49

Phillips, Edward G. "Fast solvers and uncertainty quantification for models of magnetohydrodynamics". Thesis, University of Maryland, College Park, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3644175.

Full text of the source
Abstract:

The magnetohydrodynamics (MHD) model describes the flow of electrically conducting fluids in the presence of magnetic fields. A principal application of MHD is the modeling of plasma physics, ranging from plasma confinement for thermonuclear fusion to astrophysical plasma dynamics. MHD is also used to model the flow of liquid metals, for instance in magnetic pumps, liquid metal blankets in fusion reactor concepts, and aluminum electrolysis. The model consists of a non-self-adjoint, nonlinear system of partial differential equations (PDEs) that couple the Navier-Stokes equations for fluid flow to a reduced set of Maxwell's equations for electromagnetics.

In this dissertation, we consider computational issues arising for the MHD equations. We focus on developing fast computational algorithms for solving the algebraic systems that arise from finite element discretizations of the fully coupled MHD equations. Emphasis is on solvers for the linear systems arising from algorithms such as Newton's method or Picard iteration, with a main goal of developing preconditioners for use with iterative methods for the linearized systems. In particular, we first consider the linear systems arising from an exact penalty finite element formulation of the MHD equations. We then draw on this research to develop solvers for a formulation that includes a Lagrange multiplier within Maxwell's equations. We also consider a simplification of the MHD model: in the MHD kinematics model, the equations are reduced by assuming that the flow behavior of the system is known. In this simpler setting, we allow for epistemic uncertainty to be present. By mathematically modeling this uncertainty with random variables, we investigate its implications on the physical model.
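As a generic illustration of the kind of preconditioned Krylov iteration such solvers rely on, the sketch below solves a small non-symmetric sparse system with GMRES accelerated by an incomplete LU preconditioner; the system is a stand-in for a linearised discretisation, and the block preconditioners developed in the dissertation are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small non-symmetric sparse system standing in for a linearised
# convection-diffusion-like discretisation (purely illustrative).
n = 200
main = 2.0 * np.ones(n)
lower = -1.2 * np.ones(n - 1)   # asymmetry mimics a convection term
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorisation used as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("GMRES converged" if info == 0 else f"GMRES info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```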

50

Erbas, Demet. "Sampling strategies for uncertainty quantification in oil recovery prediction". Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/70.

Full text of the source