To view the other types of publications on this topic, follow this link: Inverse Uncertainty Quantification.

Dissertations on the topic "Inverse Uncertainty Quantification"

Consult the top 24 dissertations for your research on the topic "Inverse Uncertainty Quantification".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract, whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.

1

Chue, Bryan C. „Efficient Hessian computation in inverse problems with application to uncertainty quantification“. Thesis, Boston University, 2013. https://hdl.handle.net/2144/21138.

Abstract:
Thesis (M.Sc.Eng.) PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
This thesis considers efficient Hessian computation in inverse problems, with specific application to the elastography inverse problem. Inverse problems use measurements of observable parameters to infer information about model parameters, and tend to be ill-posed. They are typically formulated and solved as regularized constrained optimization problems, whose solutions best fit the measured data. Approaching the same inverse problem from a probabilistic Bayesian perspective produces the same optimal point, called the maximum a posteriori (MAP) estimate of the parameter distribution, but also produces a posterior probability distribution of the parameter estimate, from which a measure of the solution's uncertainty may be obtained. This probability distribution is a very high dimensional function with which it can be difficult to work. For example, in a modest application with N = 10^4 optimization variables, representing this function with just three values in each direction requires 3^10000 ≈ 10^5000 variables, which far exceeds the number of atoms in the universe. The uncertainty of the MAP estimate describes the shape of the probability distribution and to leading order may be parameterized by the covariance. Directly calculating the Hessian, and hence the covariance, requires O(N) solutions of the constraint equations. Given the size of the problems of interest (N = O(10^4 - 10^6)), this is impractical. Instead, an accurate approximation of the Hessian can be assembled using a Krylov basis. The ill-posed nature of inverse problems suggests that the Hessian has low rank and therefore can be approximated with relatively few Krylov vectors. This thesis proposes a method to calculate this Krylov basis in the process of determining the MAP estimate of the parameter distribution. Using the Krylov-space-based conjugate gradient (CG) method, the MAP estimate is computed. Minor modifications to the algorithm permit storage of the Krylov approximation of the Hessian. As the accuracy of the Hessian approximation is directly related to the Krylov basis, long-term orthogonality among the basis vectors is maintained via full reorthogonalization. Upon reaching the MAP estimate, the method produces a low-rank approximation of the Hessian that can be used to compute the covariance.
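
The assembly step described above admits a compact illustration. The sketch below is not from the thesis; it runs a few steps of Lanczos (the symmetric-Krylov recurrence underlying CG) with full reorthogonalization on an invented, matrix-free toy Hessian with decaying spectrum, and forms the low-rank Hessian and covariance surrogates on the captured subspace:

    # Minimal sketch: low-rank Hessian approximation from a reorthogonalized
    # Krylov (Lanczos) basis; toy Hessian and sizes are illustrative only.
    import numpy as np

    def lanczos_lowrank(hess_vec, n, k, seed=0):
        """k Lanczos steps on an implicit SPD Hessian given by hess_vec(v)."""
        rng = np.random.default_rng(seed)
        V = np.zeros((n, k + 1))
        alpha, beta = np.zeros(k), np.zeros(k + 1)
        v = rng.standard_normal(n)
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(k):
            w = hess_vec(V[:, j])
            alpha[j] = V[:, j] @ w
            w = w - alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j] * V[:, j - 1]
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
            beta[j + 1] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j + 1]
        T = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
        return V[:, :k], T

    # toy ill-posed Hessian: rapidly decaying spectrum, applied matrix-free
    n, k = 500, 30
    d = 1.0 / (1.0 + np.arange(n)) ** 2
    V, T = lanczos_lowrank(lambda v: d * v, n, k)
    H_lr = V @ T @ V.T                    # low-rank Hessian approximation
    cov_lr = V @ np.linalg.inv(T) @ V.T   # covariance surrogate on the Krylov subspace
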
2

Hebbur, Venkata Subba Rao Vishwas. „Adjoint based solution and uncertainty quantification techniques for variational inverse problems“. Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/76665.

Abstract:
Variational inverse problems integrate computational simulations of physical phenomena with physical measurements in an informational feedback control system. Control parameters of the computational model are optimized such that the simulation results fit the physical measurements. The solution procedure is computationally expensive since it involves running the simulation computer model (the forward model) and the associated adjoint model multiple times. In practice, our knowledge of the underlying physics is incomplete and hence the associated computer model is laden with model errors. Similarly, it is not possible to measure the physical quantities exactly, and hence the measurements are associated with data errors. The errors in data and model adversely affect the inference solutions. This work develops methods to address the challenges posed by the computational costs and by the impact of data and model errors in solving variational inverse problems. Variational inverse problems of interest here are formulated as optimization problems constrained by partial differential equations (PDEs). The solution process requires multiple evaluations of the constraints, and therefore multiple solutions of the associated PDE. To alleviate the computational costs we develop a parallel-in-time discretization algorithm based on a nonlinear optimization approach. As in the parareal approach, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. Solution continuity equations across interval boundaries are added as constraints. All the computational steps - forward solutions, gradients, and Hessian-vector products - involve only ideally parallel computations and therefore are highly scalable. This work develops a systematic mathematical framework to compute the impact of data and model errors on the solution to the variational inverse problems. The computational algorithm makes use of first- and second-order adjoints and provides an a posteriori error estimate for a quantity of interest defined on the inverse solution (i.e., an aspect of the inverse solution). We illustrate the estimation algorithm on a shallow water model and on the Weather Research and Forecasting model. The presence of outliers in measurement data is common, and this negatively impacts the solution to variational inverse problems. The traditional approach, where the inverse problem is formulated as a minimization problem in the L2 norm, is especially sensitive to large data errors. To alleviate the impact of data outliers we propose to use robust norms such as the L1 and Huber norms in data assimilation. This work develops a systematic mathematical framework to perform three- and four-dimensional variational data assimilation using the L1 and Huber norms. The power of this approach is demonstrated by solving data assimilation problems where measurements contain outliers.
Ph. D.
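
To make the robust-norm idea above concrete, here is a small generic sketch (illustrative, not the author's code) of a Huber data-misfit term and its gradient: residuals below a threshold are penalized quadratically, as in L2, while outliers contribute only linearly and so cannot dominate the cost function.

    # Huber data misfit: quadratic near zero, linear in the tails.
    import numpy as np

    def huber_misfit(r, delta=1.0):
        """Cost and gradient with respect to the residual r = H(x) - y."""
        small = np.abs(r) <= delta
        cost = np.sum(np.where(small, 0.5 * r**2,
                               delta * (np.abs(r) - 0.5 * delta)))
        grad = np.where(small, r, delta * np.sign(r))
        return cost, grad

    r = np.array([0.1, -0.3, 8.0])    # the last residual is an outlier
    print(huber_misfit(r))            # outlier contributes linearly, not as r^2
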
3

Devathi, Duttaabhinivesh. „Uncertainty Quantification for Underdetermined Inverse Problems via Krylov Subspace Iterative Solvers“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case155446130705089.

4

Andersson, Hjalmar. „Inverse Uncertainty Quantification using deterministic sampling : An intercomparison between different IUQ methods“. Thesis, Uppsala universitet, Tillämpad kärnfysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447070.

Abstract:
In this thesis, two novel methods for Inverse Uncertainty Quantification are benchmarked against the more established methods of Monte Carlo sampling of output parameters (MC) and Maximum Likelihood Estimation (MLE). Inverse Uncertainty Quantification (IUQ) is the process of estimating the values of the input parameters of a simulation, and the uncertainty of that estimate, given a measurement of the output parameters. The two new methods are Deterministic Sampling (DS) and Weight Fixing (WF). Deterministic sampling uses a set of sampled points chosen so that the set has the same statistics as the output. For each point, the corresponding point of the input is found, making it possible to calculate the statistics of the input. Weight fixing uses random samples from the rough region around the input to create a linear problem that involves finding the right weights so that the output has the right statistics. The benchmarking of the four methods shows that both DS and WF are comparable in accuracy to both MC and MLE in most cases tested in this thesis. It was also found that both DS and WF use approximately the same number of function calls as MLE, and that all three methods use far fewer function calls to the simulation than MC. It was discovered that WF is not always able to find a solution, probably because the methods used within WF are not optimal for their purpose; finding better methods for WF could be investigated further.
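
Deterministic sampling in the unscented/sigma-point spirit can be sketched in a few lines (a generic toy construction, not the thesis code): 2n symmetric points reproduce a prescribed mean and covariance exactly, so pushing them through a model transports those statistics without random sampling.

    # A symmetric deterministic ensemble matching a given mean and covariance.
    import numpy as np

    def sigma_points(mean, cov):
        n = len(mean)
        L = np.linalg.cholesky(cov)
        pts = [mean + np.sqrt(n) * L[:, i] for i in range(n)]
        pts += [mean - np.sqrt(n) * L[:, i] for i in range(n)]
        return np.array(pts)                  # 2n points, equal weights 1/(2n)

    mu = np.array([1.0, 2.0])
    C = np.array([[1.0, 0.3], [0.3, 2.0]])
    P = sigma_points(mu, C)
    print(P.mean(axis=0))                     # reproduces mu exactly
    print(np.cov(P.T, bias=True))             # reproduces C exactly
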
5

Lal, Rajnesh. „Data assimilation and uncertainty quantification in cardiovascular biomechanics“. Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS088/document.

Abstract:
Cardiovascular blood flow simulations can fill several critical gaps in current clinical capabilities. They offer non-invasive ways to quantify hemodynamics in the heart and major blood vessels for patients with cardiovascular diseases that cannot be directly obtained from medical imaging. Patient-specific simulations (incorporating data unique to the individual) enable individualized risk prediction and provide key insights into disease progression and/or the detection of physiologic abnormalities. They also provide means to systematically design and test new medical devices, and can be used as predictive tools for personalized surgical and treatment planning, thus aiding clinical decision-making. Patient-specific predictive simulations require effective assimilation of medical data for reliable simulated predictions. This is usually achieved by solving an inverse hemodynamic problem, where uncertain model parameters are estimated using techniques for merging data and numerical models known as data assimilation methods. In this thesis, the inverse problem is solved through a data assimilation method using an ensemble Kalman filter (EnKF) for parameter estimation. By using an ensemble Kalman filter, the solution also comes with a quantification of the uncertainties of the estimated parameters. An ensemble Kalman filter-based parameter estimation algorithm is proposed for patient-specific hemodynamic computations in a schematic arterial network from uncertain clinical measurements. Several in silico scenarios (using synthetic data) are considered to investigate the efficiency of the parameter estimation algorithm using the EnKF. The usefulness of the parameter estimation algorithm is also assessed using experimental data from an in vitro test rig and real clinical data from a volunteer (a patient-specific case). The proposed algorithm is evaluated on arterial networks that include single arteries, cases of bifurcation, a simple human arterial network, and a complex arterial network including the circle of Willis. The ultimate aim is to perform patient-specific hemodynamic analysis in the network of the circle of Willis. Common hemodynamic properties (parameters), such as arterial wall properties (Young's modulus, wall thickness, and viscoelastic coefficient) and terminal boundary parameters (reflection coefficient and Windkessel model parameters), are estimated as the solution to an inverse problem using time series of pressure values and blood flow rates as measurements. It is also demonstrated that a proper reduced-order zero-dimensional compartment model can lead to a simple and reliable estimation of blood flow features in the circle of Willis. The simulations with the estimated parameters capture target pressure or flow rate waveforms at clinician-prescribed locations.
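
The core EnKF parameter-update step can be summarized in a short sketch. The toy two-parameter model, names, and numbers below are invented for illustration and are not taken from the thesis:

    # One EnKF analysis step for parameters: correct a prior parameter ensemble
    # with the parameter/observation cross-covariance (perturbed-observation form).
    import numpy as np

    rng = np.random.default_rng(1)

    def forward(theta):               # toy stand-in for the hemodynamic model
        return np.array([theta[0] + theta[1] ** 2, theta[0] * theta[1]])

    Ne = 200
    R = 0.05 ** 2 * np.eye(2)         # observation-error covariance
    theta = rng.normal([1.0, 1.0], 0.5, size=(Ne, 2))          # prior ensemble
    y_obs = forward([1.2, 0.8]) + rng.multivariate_normal(np.zeros(2), R)

    Y = np.array([forward(t) for t in theta])                  # predicted observations
    C_ty = np.cov(theta.T, Y.T)[:2, 2:]                        # cross-covariance
    C_yy = np.cov(Y.T) + R
    K = C_ty @ np.linalg.inv(C_yy)                             # Kalman gain
    y_pert = y_obs + rng.multivariate_normal(np.zeros(2), R, size=Ne)
    theta += (y_pert - Y) @ K.T                                # analysis ensemble
    print(theta.mean(0), theta.std(0))   # estimate and its ensemble uncertainty
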
6

Narayanamurthi, Mahesh. „Advanced Time Integration Methods with Applications to Simulation, Inverse Problems, and Uncertainty Quantification“. Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104357.

Abstract:
Simulation and optimization of complex physical systems are an integral part of modern science and engineering. The systems of interest in many fields have a multiphysics nature, with complex interactions between physical, chemical, and in some cases even biological processes. This dissertation seeks to advance forward and adjoint numerical time integration methodologies for the simulation and optimization of semi-discretized multiphysics partial differential equations (PDEs), and to estimate and control numerical errors via a goal-oriented a posteriori error framework. We extend the exponential propagation iterative methods of Runge-Kutta type (EPIRK) of [Tokman, JCP 2011] to build EPIRK-W and EPIRK-K time integration methods that admit approximate Jacobians in the matrix-exponential-like operations. EPIRK-W methods extend the W-method theory of [Steihaug and Wolfbrandt, Math. Comp. 1979] to preserve their order of accuracy under arbitrary Jacobian approximations. EPIRK-K methods extend the theory of K-methods of [Tranquilli and Sandu, JCP 2014] to EPIRK and use a Krylov-subspace-based approximation of Jacobians to gain computational efficiency. New families of partitioned exponential methods for multiphysics problems are developed using the classical order condition theory via particular variants of T-trees and corresponding B-series. The new partitioned methods are found to perform better than traditional unpartitioned exponential methods for some problems in mild-to-medium stiffness regimes. Subsequently, partitioned stiff exponential Runge-Kutta (PEXPRK) methods, which extend the stiffly accurate exponential Runge-Kutta methods of [Hochbruck and Ostermann, SINUM 2005] to a multiphysics context, are constructed and analyzed. PEXPRK methods show full convergence under various splittings of a diffusion-reaction system. We address the problem of estimating numerical errors in a multiphysics discretization by developing a goal-oriented a posteriori error framework. Discrete adjoints of GARK methods are derived from their forward formulation [Sandu and Guenther, SINUM 2015]. Based on these, we build a posteriori estimators for both spatial and temporal discretization errors. We validate the estimators on a number of reaction-diffusion systems and use them to simultaneously refine spatial and temporal grids.
Doctor of Philosophy
The study of modern science and engineering begins with descriptions of a system of mathematical equations (a model). Different models require different techniques to solve them on a computer both accurately and effectively. In this dissertation, we focus on developing novel mathematical solvers for models expressed as a system of equations, where only the initial state and the rate of change of state as a function are known. The solvers we develop can be used both to forecast the behavior of the system and to optimize its characteristics to achieve specific goals. We also build methodologies to estimate and control errors introduced by mathematical solvers in obtaining a solution for models involving multiple interacting physical, chemical, or biological phenomena. Our solvers build on the state of the art in the research community by introducing new approximations that exploit the underlying mathematical structure of a model. Where necessary, we provide concrete mathematical proofs to validate the correctness of the approximations we introduce, and correlate them with follow-up experiments. We also present detailed descriptions of the procedure for implementing each mathematical solver developed throughout the dissertation, while emphasizing means to obtain maximal performance from the solver. We demonstrate significant performance improvements on a range of models that serve as running examples, describing chemical reactions among distinct species as they diffuse over a surface medium. Also provided are results and procedures that a curious researcher can use to extend the ideas presented in the dissertation to other types of solvers that we have not considered. Research on mathematical solvers for different mathematical models is rich and rewarding, with numerous open-ended questions, and is a critical component in the progress of modern science and engineering.
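
For orientation, the simplest member of the exponential-integrator family discussed above is exponential Euler for u' = Au + g(u). The sketch below is illustrative only: it uses a dense matrix exponential via the standard augmented-matrix evaluation of phi_1, rather than the Krylov and approximate-Jacobian machinery the dissertation actually develops.

    # One exponential-Euler step u_{n+1} = e^{hA} u_n + h*phi_1(hA) g(u_n),
    # with phi_1 evaluated via the augmented-matrix identity
    # expm([[hA, h*g], [0, 0]]) = [[e^{hA}, h*phi_1(hA) g], [0, 1]].
    import numpy as np
    from scipy.linalg import expm

    def exp_euler_step(A, g, u, h):
        n = len(u)
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = h * A
        M[:n, n] = h * g(u)
        E = expm(M)
        return E[:n, :n] @ u + E[:n, n]

    A = np.array([[-100.0, 1.0], [0.0, -1.0]])    # stiff linear part
    g = lambda u: np.array([0.0, np.sin(u[0])])   # mild nonlinearity
    u = np.array([1.0, 1.0])
    for _ in range(10):
        u = exp_euler_step(A, g, u, h=0.1)        # stable despite h*|lambda| = 10
    print(u)
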
7

Ray, Kolyan Michael. „Asymptotic theory for Bayesian nonparametric procedures in inverse problems“. Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/278387.

Abstract:
The main goal of this thesis is to investigate the frequentist asymptotic properties of nonparametric Bayesian procedures in inverse problems and the Gaussian white noise model. In the first part, we study the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases. This rate provides a quantitative measure of the quality of statistical estimation of the procedure. A theorem is proved in a general Hilbert space setting under approximation-theoretic assumptions on the prior. The result is applied to non-conjugate priors, notably sieve and wavelet series priors, as well as in the conjugate setting. In the mildly ill-posed setting, minimax optimal rates are obtained, with sieve priors being rate adaptive over Sobolev classes. In the severely ill-posed setting, oversmoothing the prior yields minimax rates. Previously established results in the conjugate setting are obtained using this method. Examples of applications include deconvolution, recovering the initial condition in the heat equation and the Radon transform. In the second part of this thesis, we investigate Bernstein–von Mises type results for adaptive nonparametric Bayesian procedures in both the Gaussian white noise model and the mildly ill-posed inverse setting. The Bernstein–von Mises theorem details the asymptotic behaviour of the posterior distribution and provides a frequentist justification for the Bayesian approach to uncertainty quantification. We establish weak Bernstein–von Mises theorems in both a Hilbert space and multiscale setting, which have applications in L^2 and L^∞ respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets using a Bayesian approach. We also provide simulations to numerically illustrate our approach and obtain a visual representation of the different geometries involved.
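
For reference, the posterior contraction property studied in the first part admits a compact statement (the standard definition from the Bayesian nonparametrics literature, stated here for orientation, not quoted from the thesis): the posterior contracts around the truth θ_0 at rate ε_n if, for a sufficiently large constant M,

    \Pi\bigl(\theta : d(\theta, \theta_0) \ge M \varepsilon_n \,\big|\, X^{(n)}\bigr)
        \longrightarrow 0
    \quad \text{in } P_{\theta_0}\text{-probability as } n \to \infty,

so ε_n quantifies the "quality of statistical estimation" referred to above.
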
8

Alhossen, Iman. „Méthode d'analyse de sensibilité et propagation inverse d'incertitude appliquées sur les modèles mathématiques dans les applications d'ingénierie“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30314/document.

Abstract:
Approaches for studying uncertainty are a necessity in all disciplines. While the forward propagation of uncertainty has been investigated extensively, backward propagation is still understudied. In this thesis, a new method for backward propagation of uncertainty is presented. The aim of this method is to determine the input uncertainty starting from given data on the uncertain output. In parallel, sensitivity analysis methods are essential for revealing the influence of the inputs on the output in any modeling process; this helps to identify the most significant inputs to be carried into an uncertainty study. In this work, the Sobol sensitivity analysis method, one of the most efficient global sensitivity analysis methods, is considered and its application framework is developed. This method relies on the computation of sensitivity indices, called Sobol indices, which quantify the effect of the inputs on the output. Usually, the inputs in the Sobol method are considered to vary as continuous random variables in order to compute the corresponding indices. In this work, the Sobol method is demonstrated to give reliable results even when applied in the discrete case. In addition, the application of the Sobol method is advanced by studying the variation of these indices with respect to some factors of the model or some experimental conditions. The consequences and conclusions derived from the study of this variation help in determining different characteristics of and information about the inputs. Moreover, these inferences allow the indication of the best experimental conditions under which estimation of the inputs can be performed.
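
A minimal pick-and-freeze Monte Carlo estimator of the first-order Sobol indices, applied to the standard Ishigami benchmark, illustrates what these indices measure. This is a generic sketch of the classical estimator, not the specific framework developed in the thesis:

    # First-order Sobol indices by Saltelli's pick-and-freeze scheme.
    import numpy as np

    def model(X):   # Ishigami function, a classical sensitivity benchmark
        return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
                + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

    rng = np.random.default_rng(0)
    N, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (N, d))
    B = rng.uniform(-np.pi, np.pi, (N, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                           # freeze all inputs but x_i
        S_i = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli (2010) estimator
        print(f"S_{i+1} ~= {S_i:.3f}")                # analytic: 0.314, 0.442, 0.000
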
9

Gehre, Matthias. „Rapid Uncertainty Quantification for Nonlinear Inverse Problems". Supervisors: Peter Maaß, Bangti Jin. Bremen: Staats- und Universitätsbibliothek Bremen, 2013. http://d-nb.info/1072078589/34.

10

Kamilis, Dimitrios. „Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models“. Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31415.

Abstract:
Uncertainty Quantification (UQ) has been an active area of research in recent years, with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty stems from an unknown parameter in the model. In physical and engineering systems, for example, the parameters of the partial differential equation (PDE) that models the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining the aforementioned forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM), which aims to detect and image hydrocarbon reservoirs by using electromagnetic (EM) field measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, algorithms for deterministic models have been employed to solve the inverse problem in CSEM by optimisation and regularisation methods, which, aside from the image reconstruction, provide no quantitative information on the credibility of its features. This work employs instead stochastic models where the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions. For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measure. One of the main challenges is thus the approximation of these integrals, with the standard choice being some variant of the Monte Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here and under certain assumptions, we prove that for forward UQ, Sparse Quadrature can attain dimension-independent convergence rates that outperform MC. Typical CSEM models are large-scale, and thus additional effort is made in this work to reduce the cost of obtaining forward solutions for each sampling parameter by utilising the weighted Reduced Basis method (RB) and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, thus providing a viable methodology for practical applications.
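
The parametric, distributed-uncertainty form mentioned above can be illustrated with a one-dimensional toy version of the lognormal conductivity model (an invented, discretized Karhunen-Loève construction, not the thesis code). Sparse quadrature would then replace the random draw of xi below with deterministic sparse-grid nodes in the resulting finite-dimensional parameter:

    # Toy lognormal conductivity on [0,1] via a truncated, discretized KL expansion.
    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))   # covariance kernel
    lam, phi = np.linalg.eigh(C / len(x))       # KL modes of the integral operator
    lam, phi = lam[::-1], phi[:, ::-1]          # sort eigenpairs descending
    m = 20                                      # truncation: m-dimensional parameter
    rng = np.random.default_rng(0)
    xi = rng.standard_normal(m)                 # MC draw; SQ would use grid nodes here
    log_sigma = phi[:, :m] @ (np.sqrt(np.maximum(lam[:m], 0.0)) * xi)
    sigma = np.exp(log_sigma)                   # one realization of the conductivity
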
11

Attia, Ahmed Mohamed Mohamed. „Advanced Sampling Methods for Solving Large-Scale Inverse Problems“. Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73683.

Abstract:
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. This work therefore develops computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable. In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. Here, the Gaussian prior assumption is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and biharmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite named DATeS is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility for configuring data assimilation studies.
Ph. D.
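
The basic HMC move that these filters and smoothers build on fits in a few lines. The sketch below is a generic textbook version with a toy Gaussian target, not DATeS code: a leapfrog proposal followed by a Metropolis accept/reject test.

    # One Hamiltonian Monte Carlo transition: leapfrog integration + accept/reject.
    import numpy as np

    rng = np.random.default_rng(0)
    neg_log_post = lambda x: 0.5 * x @ x      # toy posterior ~ N(0, I)
    grad = lambda x: x

    def hmc_step(x, eps=0.2, L=20):
        p = rng.standard_normal(x.shape)
        x_new, p_new = x.copy(), p - 0.5 * eps * grad(x)   # initial half kick
        for _ in range(L):
            x_new = x_new + eps * p_new                    # full position drift
            p_new = p_new - eps * grad(x_new)              # full momentum kick
        p_new = p_new + 0.5 * eps * grad(x_new)            # undo the extra half kick
        dH = (neg_log_post(x_new) + 0.5 * p_new @ p_new
              - neg_log_post(x) - 0.5 * p @ p)
        return x_new if np.log(rng.uniform()) < -dH else x

    x, samples = np.zeros(5), []
    for _ in range(2000):
        x = hmc_step(x)
        samples.append(x)
    print(np.std(samples, axis=0))            # close to 1 in every coordinate
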
12

Galbally, David. „Nonlinear model reduction for uncertainty quantification in large-scale inverse problems : application to nonlinear convection-diffusion-reaction equation“. Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43079.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008.
Includes bibliographical references (p. 147-152).
There are multiple instances in science and engineering where quantities of interest are evaluated by solving one or several nonlinear partial differential equations (PDEs) that are parametrized in terms of a set of inputs. Even though well-established numerical techniques exist for solving these problems, their computational cost often precludes their use in cases where the outputs of interest must be evaluated repeatedly for different values of the input parameters, such as probabilistic analysis applications. In this thesis we present a model reduction methodology that combines an efficient representation of the nonlinearities in the governing PDE with an efficient model-constrained, greedy algorithm for sampling the input parameter space. The nonlinearities in the PDE are represented using a coefficient-function approximation that enables the development of an efficient offline-online computational procedure where the online computational cost is independent of the size of the original high-fidelity model. The input space sampling algorithm used for generating the reduced space basis adaptively improves the quality of the reduced order approximation by solving a PDE-constrained continuous optimization problem that targets the output error between the reduced and full order models in order to determine the optimal sampling point at every greedy cycle. The resulting model reduction methodology is applied to a highly nonlinear combustion problem governed by a convection-diffusion-reaction PDE with up to 3 input parameters. The reduced basis approximation developed for this problem is up to 50,000 times faster to solve than the original high-fidelity finite element model, with an average relative error in the prediction of outputs of interest of 2.5 × 10^-6 over the input parameter space. The reduced order model developed in this thesis is used in a novel probabilistic methodology for solving inverse problems.
(cont.) The extreme computational cost of the Bayesian framework approach for inferring the values of the inputs that generated a given set of empirically measured outputs often precludes its use in practical applications. In this thesis we show that using a reduced order model for running the Markov ...
by David Galbally.
S.M.
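
The offline-online split at the heart of this methodology can be caricatured for a linear parametrized system. The sketch below is a bare POD-Galerkin illustration with an invented toy problem; the coefficient-function treatment of nonlinear terms and the greedy sampling that the thesis develops are omitted.

    # Offline: snapshot SVD gives a reduced basis. Online: solve a tiny system.
    import numpy as np

    n = 400
    K0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
          - np.diag(np.ones(n - 1), -1))             # 1D Laplacian stencil
    f = np.ones(n)

    def full_solve(mu):                              # parametrized full-order model
        return np.linalg.solve(K0 + mu * np.eye(n), f)

    snaps = np.array([full_solve(mu) for mu in np.linspace(0.1, 10.0, 30)]).T
    U, s, _ = np.linalg.svd(snaps, full_matrices=False)
    V = U[:, :5]                                     # 5-dimensional reduced basis
    K0_r = V.T @ K0 @ V                              # precomputed offline
    f_r = V.T @ f                                    # precomputed offline

    def reduced_solve(mu):                           # online: only a 5x5 solve
        return V @ np.linalg.solve(K0_r + mu * np.eye(5), f_r)

    mu = 3.7
    print(np.linalg.norm(full_solve(mu) - reduced_solve(mu)))   # small reduction error
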
13

John, David Nicholas. „Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge". Supervisor: Vincent Heuveline. Heidelberg: Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/122909265X/34.

14

Cho, Taewon. „Computational Advancements for Solving Large-scale Inverse Problems“. Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103772.

Abstract:
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems and medical imaging problems, such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimates by incorporating prior information on the unknowns via Bayesian inference. By combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimates in which the unknowns can be treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyperparameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimates can be obtained. Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification.
Doctor of Philosophy
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems such as greenhouse gas tracking, where the problem of estimating the amount of greenhouse gas added to or removed from the atmosphere becomes more difficult. The number of observations has increased with improvements in measurement technologies (e.g., satellites). Therefore, the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or to reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply the developed methods to many numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of the CO2 output of living organisms.
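
The hybrid-projection idea itself can be sketched compactly. Below is a generic Golub-Kahan-plus-Tikhonov illustration on invented toy data, not the dissertation's generalized hybrid methods: the ill-posed system is projected onto a small Krylov subspace, and regularization is applied to the small projected problem, where the parameter can be re-chosen cheaply at every iteration.

    # Hybrid projection sketch: k Golub-Kahan bidiagonalization steps project
    # min ||Ax - b|| onto a Krylov subspace; Tikhonov regularization is then
    # applied to the small (k+1) x k projected problem instead of the full one.
    import numpy as np

    def hybrid_gk(A, b, k=15, lam=1e-2):
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
        beta0 = np.linalg.norm(b); U[:, 0] = b / beta0
        for j in range(k):
            v = A.T @ U[:, j]
            if j > 0:
                v -= B[j, j - 1] * V[:, j - 1]
            v -= V[:, :j] @ (V[:, :j].T @ v)           # reorthogonalization
            B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
            u = A @ V[:, j] - B[j, j] * U[:, j]
            u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
            B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
        rhs = np.zeros(k + 1); rhs[0] = beta0
        y = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ rhs)
        return V @ y                                   # regularized solution

    rng = np.random.default_rng(0)
    A = rng.standard_normal((300, 200)) @ np.diag(1.0 / (1 + np.arange(200)) ** 2)
    x_true = np.exp(-np.arange(200) / 5.0)
    b = A @ x_true + 1e-3 * rng.standard_normal(300)
    # relative error; the ill-posed tail of x_true is smoothed away by regularization
    print(np.linalg.norm(hybrid_gk(A, b) - x_true) / np.linalg.norm(x_true))
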
15

Perrin, Guillaume. „Random fields and associated statistical inverse problems for uncertainty quantification : application to railway track geometries for high-speed trains dynamical responses and risk assessment". PhD thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-01001045.

Abstract:
Expectations for new high-speed trains are numerous: they should be faster, more comfortable, and more stable, while consuming less energy, being less aggressive toward the track, and quieter... To optimize the design of these future trains, it is necessary to rely on precise knowledge of the whole range of operating conditions they are likely to encounter over their life cycle. Simulation has a major role to play in meeting these challenges. For simulation to be usable for design, certification, and maintenance optimization, it must be truly representative of all the physical behaviors involved. The models of the train and of the wheel-rail contact must therefore be carefully validated, and the simulations must be run on sets of excitations that are realistic and representative of the geometry defects. As far as the dynamics are concerned, the track geometry, and in particular its defects, is one of the main sources of excitation of the train, which is a strongly nonlinear mechanical system. Starting from measurements of the geometry of a railway network, a complete parametrization of the track geometry and of its variability is therefore needed in order to analyze the link between the dynamic response of the train and the physical and statistical properties of the track geometry. In this context, a relevant approach to modeling the track geometry is to consider it as a multivariate random field whose properties are a priori unknown. Because of the specific interactions between the train and the track, this random field turns out to be neither Gaussian nor stationary. This thesis therefore focused on developing numerical methods for the inverse identification, from experimental measurements, of non-Gaussian and non-stationary random fields. Since the behavior of the train is highly nonlinear and very sensitive to the track geometry, the characterization of the random field corresponding to the geometry defects must be extremely fine, from both the frequency and the statistical points of view. The dimension of the statistical spaces considered is then very large. Particular attention has therefore been paid in this work to statistical reduction methods and to methods that can scale to very high dimension. Once the variability of the track geometry has been characterized from experimental data, it must be propagated through the railway numerical model. To this end, the mechanical properties of a numerical model of a high-speed train were identified from experimental measurements. The stochastic dynamic response of this train, subjected to a very large number of realistic and representative running conditions generated from the stochastic track model, was then evaluated. Finally, to illustrate the possibilities offered by such a coupling between the variability of the track geometry and the dynamic response of the train, this thesis discusses three applications.
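
As an illustration of the first step of such an identification, here is a toy sketch with synthetic stand-in data (the thesis itself handles non-Gaussian, non-stationary multivariate fields with far more care): estimate a discretized Karhunen-Loève basis from an ensemble of measured irregularity profiles and project the measurements onto it.

    # Empirical KL basis from an ensemble of measured realizations (synthetic here).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 300)
    meas = np.array([rng.standard_normal()
                     * np.sin(2 * np.pi * (3.0 + 0.5 * rng.standard_normal()) * x)
                     for _ in range(500)])          # fake "measured" defect profiles
    mean = meas.mean(axis=0)
    lam, phi = np.linalg.eigh(np.cov(meas.T))       # empirical covariance modes
    lam, phi = lam[::-1], phi[:, ::-1]
    m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1   # 99% energy
    xi = (meas - mean) @ phi[:, :m] / np.sqrt(lam[:m])   # reduced coordinates
    # xi is generally non-Gaussian and dependent; the identification problem is
    # to model its joint law so that new, realistic tracks can be resampled.
    print(m, xi.shape)
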
16

Cortesi, Andrea Francesco. „Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows“. Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0021/document.

Abstract:
Accurate prediction of hypersonic high-enthalpy flows is of major relevance for atmospheric entry missions. However, uncertainties are inevitable in the freestream conditions and in other parameters of the physico-chemical models. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty in the output. In this work, we use a statistical framework for the direct propagation of uncertainties and the inverse reconstruction of freestream conditions applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat flux measurements for the reconstruction of freestream variables and uncertain model parameters for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, allowing sources of uncertainty and measurement errors to be accounted for. Different techniques are introduced to enhance the capabilities of the statistical framework for quantification of uncertainties. First, an improved surrogate modeling technique is proposed, based on Kriging and Sparse Polynomial Dimensional Decomposition. Then a method is proposed to adaptively add new training points to an existing experimental design to improve the accuracy of the trained surrogate model. Finally, a way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
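
The active-subspace construction mentioned at the end can be sketched quickly (a generic toy example, not the thesis implementation): eigenvectors of the average outer product of gradients reveal the few directions along which an expensive response actually varies, and MCMC can then operate in those coordinates.

    # Active subspace of f(x) = sin(a1.x) + sin^2(a2.x): gradients span {a1, a2},
    # so the gradient outer-product matrix has (numerically) rank two.
    import numpy as np

    rng = np.random.default_rng(0)
    d, N = 10, 2000
    a1, a2 = rng.standard_normal(d), rng.standard_normal(d)

    def grad_f(x):
        return np.cos(a1 @ x) * a1 + 2.0 * np.sin(a2 @ x) * np.cos(a2 @ x) * a2

    X = rng.standard_normal((N, d))
    G = np.array([grad_f(x) for x in X])
    C = G.T @ G / N                       # average gradient outer product
    lam, W = np.linalg.eigh(C)
    print(lam[::-1][:4])                  # sharp drop after the first two eigenvalues
    active_basis = W[:, ::-1][:, :2]      # run MCMC in these two coordinates
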
17

Calatayud, Gregori Julia. „Computational methods for random differential equations: probability density function and estimation of the parameters“. Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138396.

Abstract:
Mathematical models based on deterministic differential equations do not take into account the inherent uncertainty of the physical phenomenon (in a wide sense) under study. In addition, inaccuracies in the collected data often arise due to errors in the measurements. It thus becomes necessary to treat the input parameters of the model as random quantities, in the form of random variables or stochastic processes. This gives rise to the study of random ordinary and partial differential equations. The computation of the probability density function of the stochastic solution is important for uncertainty quantification of the model output. Although such computation is a difficult objective in general, certain stochastic expansions for the model coefficients allow faithful representations of the stochastic solution, which permit approximating its density function. In this regard, Karhunen-Loève and generalized polynomial chaos expansions become powerful tools for the density approximation. Methods based on discretizations from finite difference numerical schemes also permit approximating the stochastic solution, and therefore its probability density function. The main part of this dissertation aims at approximating the probability density function of important mathematical models with uncertainties in their formulation. Specifically, in this thesis we study, in the stochastic sense, the following models that arise in different scientific areas: in Physics, the model for the damped pendulum; in Biology and Epidemiology, the models for logistic growth and Bertalanffy, as well as epidemiological models; and in Thermodynamics, the heat partial differential equation. We rely on Karhunen-Loève and generalized polynomial chaos expansions and on finite difference schemes for the density approximation of the solution. These techniques are only applicable when we have a forward model in which the input parameters already have specified probability distributions. When the model coefficients are estimated from collected data, we have an inverse problem. The Bayesian inference approach allows estimating the probability distribution of the model parameters from their prior probability distribution and the likelihood of the data. Uncertainty quantification for the model output is then carried out using the posterior predictive distribution. In this regard, the last part of the thesis shows the estimation of the distributions of the model parameters from experimental data on bacterial growth. To do so, a hybrid method that combines Bayesian parameter estimation and generalized polynomial chaos expansions is used.
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017-89664-P.
Calatayud Gregori, J. (2020). Computational methods for random differential equations: probability density function and estimation of the parameters [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138396
THESIS
Award-winning
APA, Harvard, Vancouver, ISO and other citation styles
18

Cioaca, Alexandru George. „A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation“. Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51795.

The full content of the source
Annotation:
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have attracted considerable interest in recent years. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
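The 4D-Var machinery the annotation refers to can be sketched on a linear toy model: a cost function balancing a background term against observation misfits over a time window, with its gradient assembled in one backward adjoint sweep. The dynamics matrix, error statistics, and steepest-descent solver below are illustrative stand-ins; the dissertation's second-order adjoints and observation-impact computations are not reproduced here.

```python
import numpy as np

# Minimal 4D-Var sketch for a linear model x_{k+1} = M x_k with direct
# observations y_k of the full state; all matrices are illustrative.
rng = np.random.default_rng(1)
n, K = 3, 5
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy dynamics
B_inv = np.eye(n)            # background-error precision
R_inv = np.eye(n)            # observation-error precision
xb = np.zeros(n)             # background (a priori) state
x = rng.standard_normal(n)   # synthetic truth
ys = []
for _ in range(K):
    x = M @ x
    ys.append(x + 0.01 * rng.standard_normal(n))

def cost_and_grad(x0):
    # Forward sweep: store the trajectory from the candidate initial state.
    traj = [x0]
    for _ in range(K):
        traj.append(M @ traj[-1])
    innovations = [traj[k + 1] - ys[k] for k in range(K)]
    J = 0.5 * (x0 - xb) @ B_inv @ (x0 - xb)
    J += 0.5 * sum(d @ R_inv @ d for d in innovations)
    # Backward (adjoint) sweep: one pass accumulates the whole gradient.
    lam = np.zeros(n)
    for k in reversed(range(K)):
        lam = M.T @ (lam + R_inv @ innovations[k])
    return J, B_inv @ (x0 - xb) + lam

# Steepest-descent analysis step (an operational system would use CG/L-BFGS).
x0 = xb.copy()
for _ in range(200):
    J, g = cost_and_grad(x0)
    x0 -= 0.1 * g
print(J, x0)
```

The adjoint sweep is what makes the gradient cost independent of the number of control variables, which is why variants of this structure underlie both the sensitivity equations and the observation-impact estimates discussed in the abstract.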
Ph. D.
APA, Harvard, Vancouver, ISO and other citation styles
19

Mondal, Anirban. „Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems“. Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9905.

The full content of the source
Annotation:
We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion and the Discrete Cosine Transform were used for dimension reduction of the random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that the inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g., in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible jump MCMC method that screens out bad proposals in an inexpensive first stage. Channelized spatial fields were represented by facies boundaries and variogram-based spatial fields within each facies. Using a level-set-based approach, the shape of the channel boundaries was updated with dynamic data in a Bayesian hierarchical model where the number of points representing the channel boundaries is unknown. Statistical emulators on a large-scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step. To build the emulator, the original spatial field was represented by a low-dimensional parameterization using the Discrete Cosine Transform (DCT), and a Bayesian approach to multivariate adaptive regression splines (BMARS) was then used to emulate the simulator. Various numerical results are presented for both simulated and real data.
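The two-stage (delayed-acceptance) MCMC idea can be sketched generically: a cheap surrogate likelihood screens each proposal, and only the survivors pay for the expensive likelihood, with a correction factor keeping the chain exact for the true posterior. The Gaussian targets, the surrogate, and all constants below are stand-ins invented for the sketch, not the thesis's forward simulator or emulator.

```python
import numpy as np

# Two-stage Metropolis sketch: stage 1 screens with a cheap surrogate,
# stage 2 corrects with the exact (expensive) likelihood so the chain
# still targets the true posterior. All densities here are toy examples.
rng = np.random.default_rng(2)

def loglik_exact(theta):      # stand-in for an expensive forward solve
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.1

def loglik_cheap(theta):      # coarse surrogate of the same misfit
    return -0.5 * np.sum((theta - 1.05) ** 2) / 0.1

def log_prior(theta):
    return -0.5 * np.sum(theta ** 2)

theta = np.zeros(2)
lp = loglik_exact(theta) + log_prior(theta)   # exact log-posterior (unnorm.)
ls = loglik_cheap(theta) + log_prior(theta)   # surrogate log-posterior
chain = []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal(2)
    ls_prop = loglik_cheap(prop) + log_prior(prop)
    # Stage 1: accept/reject with the surrogate only (no expensive solve).
    if np.log(rng.uniform()) < ls_prop - ls:
        lp_prop = loglik_exact(prop) + log_prior(prop)
        # Stage 2: delayed-acceptance correction restores exactness.
        if np.log(rng.uniform()) < (lp_prop - lp) + (ls - ls_prop):
            theta, lp, ls = prop, lp_prop, ls_prop
    chain.append(theta.copy())
print(np.mean(chain, axis=0))
```

Because most bad proposals die in stage 1, the expensive likelihood is evaluated far less often than in plain Metropolis, which is exactly the saving the annotation attributes to the two-stage scheme.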
APA, Harvard, Vancouver, ISO and other citation styles
20

Flath, Hannah Pearl. „Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow“. 2013. http://hdl.handle.net/2152/21157.

The full content of the source
Annotation:
Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties. We address the problem: given noisy measurements of the head, the pdf describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior probability density function (pdf) of the parameters describing the hydraulic conductivity field. Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems where each new training point is placed at the maximizer of the error in the approximation. Scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate, and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate and the true posterior. The Gaussian process surrogate is used as a first-stage approximation in a two-stage delayed acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem. We provide similar results for our groundwater problem. We then present results from the proposed MCMC algorithms.
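The low-rank data-misfit Hessian device mentioned here (and in other entries of this list) can be illustrated in a few lines: a randomized eigendecomposition of a Gauss-Newton Hessian H = GᵀG, followed by a Woodbury identity that applies the Laplace posterior covariance without forming any dense n-by-n inverse. The Jacobian G, the identity prior covariance, and the rank are illustrative assumptions for the sketch.

```python
import numpy as np

# Low-rank Hessian sketch: with H_misfit ≈ V diag(lam) V^T of rank r << n
# and an identity prior covariance, the Laplace posterior covariance
# (I + H_misfit)^-1 follows from the Woodbury identity.
rng = np.random.default_rng(3)
n, r = 2000, 20
G = rng.standard_normal((r, n)) / np.sqrt(n)    # toy Jacobian of the forward map

# Randomized range-finding for H = G^T G (matrix-free: only products H @ X).
X = rng.standard_normal((n, r + 10))
Y = G.T @ (G @ X)
Q, _ = np.linalg.qr(Y)
T = Q.T @ (G.T @ (G @ Q))                       # small projected Hessian
lam, S = np.linalg.eigh(T)
lam, S = lam[::-1][:r], S[:, ::-1][:, :r]       # keep the r largest modes
V = Q @ S                                       # approximate eigenvectors of H

def post_cov_apply(b):
    # Woodbury with identity prior: (I + V L V^T)^-1 b
    #   = b - V diag(lam / (1 + lam)) (V^T b)
    return b - V @ ((lam / (1.0 + lam)) * (V.T @ b))

b = rng.standard_normal(n)
print(np.linalg.norm(post_cov_apply(b)))
```

The ill-posedness the entries keep invoking is what guarantees the spectrum of H decays fast, so a small r captures essentially all the data-informed directions.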
APA, Harvard, Vancouver, ISO and other citation styles
21

Sawlan, Zaid A. „Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties“. Diss., 2018. http://hdl.handle.net/10754/629731.

The full content of the source
Annotation:
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify uncertainties of the unknown parameters using observations. For this purpose, likelihood-based methods and Bayesian techniques constitute the two main statistical inferential approaches considered here. Two problems are studied in this thesis. The first is the prediction of fatigue life of metallic specimens. The second concerns inverse problems in linear PDEs. Both problems require the inference of unknown parameters given certain measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated using uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress that is computed from the solution of the linear elasticity equations.
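A maximum-likelihood S-N calibration of the kind described can be sketched with a Basquin-type log-linear model and an Akaike information criterion of the sort used to rank competing models. The stress/life data and the specific model form below are fabricated for the illustration and are not the thesis's experiments.

```python
import numpy as np

# Illustrative Basquin-type S-N calibration by maximum likelihood:
# log N = a + b log S + eps with eps ~ N(0, s2). Data are made up.
S = np.array([300.0, 280.0, 260.0, 240.0, 220.0, 200.0])      # stress amplitude
N = np.array([1.2e4, 2.5e4, 6.1e4, 1.4e5, 3.9e5, 1.1e6])      # cycles to failure

# For Gaussian noise, ordinary least squares in log-log space is the MLE.
b, a = np.polyfit(np.log(S), np.log(N), 1)
resid = np.log(N) - (a + b * np.log(S))
s2 = resid.var()                       # MLE of the noise variance (ddof=0)
loglik = -0.5 * np.sum(resid**2 / s2 + np.log(2.0 * np.pi * s2))

# Akaike information criterion with k = 3 fitted parameters (a, b, s2);
# competing S-N model forms would each get such a score and be ranked.
aic = 2 * 3 - 2 * loglik
print(a, b, s2, aic)
```

Repeating the fit for each candidate S-N form and comparing the scores is the "ranking by information-based measures" step; the Bayesian layer in the thesis then replaces point estimates with posterior distributions and predictive criteria.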
APA, Harvard, Vancouver, ISO and other citation styles
22

Martin, James Robert Ph D. „A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion“. Thesis, 2015. http://hdl.handle.net/2152/31374.

The full content of the source
Annotation:
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the "best-fit" parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around existing forward solvers for a given physical problem, provided they are appropriately equipped. A collection of tools, insights, and numerical methods may then be applied to solve the problem and interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
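The step from a point estimate to a "complete statistical description" can be caricatured with a Laplace approximation around the MAP point of a toy linear-Gaussian problem: optimize the negative log-posterior, take its Hessian at the optimum, and use the resulting Gaussian as the local description of uncertainty. The quadratic forward map below is an invented stand-in for the thesis's PDE solver, and the dense covariance is affordable only because the example is three-dimensional.

```python
import numpy as np
from scipy.optimize import minimize

# Laplace-approximation sketch: MAP point plus Gaussian N(MAP, H^-1),
# for a toy linear-Gaussian inverse problem (all quantities illustrative).
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3))                 # toy forward operator
y = A @ np.array([1.0, -0.5, 2.0]) + 0.05 * rng.standard_normal(8)

def neg_log_post(m):
    # Gaussian likelihood (noise std 0.05) plus standard normal prior.
    return 0.5 * np.sum((A @ m - y) ** 2) / 0.05**2 + 0.5 * np.sum(m ** 2)

res = minimize(neg_log_post, np.zeros(3))       # MAP estimate
H = A.T @ A / 0.05**2 + np.eye(3)               # exact Hessian for this model
cov = np.linalg.inv(H)                          # Laplace posterior covariance
samples = rng.multivariate_normal(res.x, cov, size=1000)
print(res.x, samples.std(axis=0))
```

At the 10⁶-parameter scale of the seismic application, neither H nor its inverse can be formed explicitly, which is why low-rank and matrix-free constructions of exactly this object dominate the frameworks described in this list.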
APA, Harvard, Vancouver, ISO and other citation styles
23

Yousefpour, Negin. „Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes“. Thesis, 2013. http://hdl.handle.net/1969.1/151268.

The full content of the source
Annotation:
This study presents different aspects of the use of deterministic methods, including Artificial Neural Networks (ANNs) and linear and nonlinear regression, as well as probabilistic methods, including Bayesian inference and Monte Carlo methods, to develop reliable solutions for challenging problems in geotechnics. It addresses the theoretical and computational advantages and limitations of these methods in application to: 1) prediction of the stiffness and strength of stabilized organic soils, 2) determination of unknown foundations for bridges vulnerable to scour, and 3) uncertainty quantification for one-dimensional diffusion processes. ANNs were successfully implemented in this study to develop nonlinear models for the mechanical properties of stabilized organic soils. The ANN models were able to learn from the training examples and then generalize the trend to make predictions for the stiffness and strength of stabilized organic soils. A stepwise parameter selection and a sensitivity analysis method were implemented to identify the most relevant factors for the prediction of the stiffness and strength, and the variation of the stiffness and strength with respect to each factor was investigated. A deterministic and a probabilistic approach were proposed to evaluate the characteristics of unknown foundations of bridges subjected to scour. The proposed methods were implemented and validated using data collected for bridges in the Bryan District. ANN models were developed and trained using the database of bridges to predict the foundation type and embedment depth. The probabilistic Bayesian approach generated probability distributions for the foundation and soil characteristics and was able to capture the uncertainty in the predictions. The parametric and numerical uncertainties in the one-dimensional diffusion process were evaluated under varying observation conditions. The inverse problem was solved using Bayesian inference formulated with both the analytical and numerical solutions of the ordinary differential equation of diffusion. The numerical uncertainty was evaluated by comparing the mean and standard deviation of the posterior realizations of the process corresponding to the analytical and numerical solutions of the forward problem. It was shown that higher correlation in the structure of the observations increased both parametric and numerical uncertainties, whereas increasing the number of observations dramatically decreased the uncertainties in the diffusion process.
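The third application, Bayesian inversion for a one-dimensional diffusion process, can be caricatured with a grid-based posterior over a single rate parameter of an exponentially decaying solution. The decay model, prior, and synthetic data below are invented for the sketch; the thesis compares analytical and numerical forward solutions, which this toy does not.

```python
import numpy as np

# Toy Bayesian inversion for a one-dimensional decay/diffusion rate D:
# u(t) = u0 * exp(-D t) observed with Gaussian noise; the posterior over
# D is evaluated on a grid (all values illustrative).
rng = np.random.default_rng(5)
t = np.linspace(0.0, 2.0, 15)
D_true, u0, noise = 1.3, 1.0, 0.02
data = u0 * np.exp(-D_true * t) + noise * rng.standard_normal(t.size)

D_grid = np.linspace(0.1, 3.0, 500)
pred = u0 * np.exp(-np.outer(D_grid, t))                  # forward model per D
loglik = -0.5 * np.sum((pred - data) ** 2, axis=1) / noise**2
log_prior = -0.5 * (D_grid - 1.0) ** 2                    # N(1, 1) prior on D
unnorm = loglik + log_prior
post = np.exp(unnorm - unnorm.max())
post /= np.trapz(post, D_grid)                            # normalize density

mean = np.trapz(D_grid * post, D_grid)
sd = np.sqrt(np.trapz((D_grid - mean) ** 2 * post, D_grid))
print(mean, sd)        # posterior mean and standard deviation of D
```

Rerunning the sketch with fewer observation times, or with correlated noise, reproduces qualitatively the study's finding that correlation inflates the posterior spread while more data shrink it.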
APA, Harvard, Vancouver, ISO and other citation styles
24

(11166777), Peiyi Zhang. „Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems“. Thesis, 2021.

Find the full content of the source
Annotation:

The Ensemble Kalman filter (EnKF) has achieved great successes in data assimilation in atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or the sequential importance sampler, do not scale well with the dimension of the system and the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.


In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable with respect to both the dimension and the sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance in the big-data scenario where the dynamic system consists of a large number of stages with a large number of samples observed at each stage, and thus it can be used for uncertainty quantification. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF using a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.
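For orientation, the forecast-analysis procedure the LEnKF inherits is the classical EnKF update, sketched below with a perturbed-observation analysis step. This shows only the standard building block; the Langevinized version reworks the update with mini-batch, Langevin-type moves that this sketch does not attempt, and all sizes and operators here are illustrative.

```python
import numpy as np

# Classical EnKF analysis step (the building block the LEnKF starts from).
rng = np.random.default_rng(6)
n_state, n_obs, n_ens = 40, 10, 25
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0   # observe every 4th entry
R = 0.1 * np.eye(n_obs)                               # observation-error cov.
ens = rng.standard_normal((n_state, n_ens))           # forecast ensemble
y = rng.standard_normal(n_obs)                        # observation vector

X = ens - ens.mean(axis=1, keepdims=True)             # ensemble anomalies
P = X @ X.T / (n_ens - 1)                             # sample forecast cov.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
# Perturbed observations: each member assimilates a jittered copy of y,
# so the analysis ensemble carries the right spread (to sampling error).
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
ens_analysis = ens + K @ (y_pert - H @ ens)
print(ens_analysis.mean(axis=1)[:5])
```

The known deficiency motivating the dissertation is visible in this construction: the update is exact only for linear-Gaussian systems and a large ensemble, which is precisely the convergence gap the Langevin reformulation is designed to close.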


In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly based on the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. These two extensions inherit the scalability of the LEnKF algorithm with respect to the dimension and sample size. The numerical results indicate that they outperform other existing methods in both state/parameter estimation and uncertainty quantification.

APA, Harvard, Vancouver, ISO and other citation styles
