Dissertations / Theses on the topic 'Inverse Uncertainty Quantification'
Consult the top 24 dissertations / theses for your research on the topic 'Inverse Uncertainty Quantification.'
Chue, Bryan C. "Efficient Hessian computation in inverse problems with application to uncertainty quantification." Thesis, Boston University, 2013. https://hdl.handle.net/2144/21138.
Full text
This thesis considers efficient Hessian computation in inverse problems, with specific application to the elastography inverse problem. Inverse problems use measurements of observable parameters to infer information about model parameters, and tend to be ill-posed. They are typically formulated and solved as regularized constrained optimization problems, whose solutions best fit the measured data. Approaching the same inverse problem from a probabilistic Bayesian perspective produces the same optimal point, called the maximum a posteriori (MAP) estimate of the parameter distribution, but also produces a posterior probability distribution of the parameter estimate, from which a measure of the solution's uncertainty may be obtained. This probability distribution is a very high dimensional function with which it can be difficult to work. For example, in a modest application with N = 10^4 optimization variables, representing this function with just three values in each direction requires 3^10000 ≈ 10^5000 variables, which far exceeds the number of atoms in the universe. The uncertainty of the MAP estimate describes the shape of the probability distribution and, to leading order, may be parameterized by the covariance. Directly calculating the Hessian, and hence the covariance, requires O(N) solutions of the constraint equations. Given the size of the problems of interest (N = O(10^4–10^6)), this is impractical. Instead, an accurate approximation of the Hessian can be assembled using a Krylov basis. The ill-posed nature of inverse problems suggests that the Hessian has low rank and can therefore be approximated with relatively few Krylov vectors. This thesis proposes a method to calculate this Krylov basis in the process of determining the MAP estimate of the parameter distribution. The MAP estimate is computed using the Krylov-space-based conjugate gradient (CG) method, and minor modifications to the algorithm permit storage of the Krylov approximation of the Hessian. As the accuracy of the Hessian approximation is directly related to the Krylov basis, long-term orthogonality amongst the basis vectors is maintained via full reorthogonalization. Upon reaching the MAP estimate, the method produces a low rank approximation of the Hessian that can be used to compute the covariance.
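As an illustration of the low-rank idea described in this abstract, the sketch below builds a rank-k Hessian approximation from a Krylov (Lanczos) basis with full reorthogonalization and uses it to approximate the covariance. It is a minimal, generic sketch assuming only a symmetric positive definite Hessian available through matrix-vector products; the function names and the toy Hessian are illustrative and do not come from the thesis.

```python
import numpy as np

def lanczos_lowrank_hessian(Hmv, g, k):
    """Build a rank-k approximation H ~ V T V^T from k Lanczos (CG-Krylov) steps.

    Hmv : callable returning the Hessian-vector product H @ v (H assumed SPD)
    g   : starting vector, e.g. the gradient driving the CG iteration
    k   : number of Krylov vectors to keep
    """
    n = g.size
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    v = g / np.linalg.norm(g)
    for j in range(k):
        V[:, j] = v
        w = Hmv(v)
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                        # Krylov space exhausted early
            V, alpha, beta = V[:, :j + 1], alpha[:j + 1], beta[:j + 1]
            break
        v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return V, T   # H ~ V @ T @ V.T; within the Krylov subspace, inv(H) ~ V @ inv(T) @ V.T

# toy example with a dense SPD "Hessian"
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = A @ A.T / 200 + np.eye(200)
V, T = lanczos_lowrank_hessian(lambda v: H @ v, rng.standard_normal(200), 30)
cov_krylov = V @ np.linalg.inv(T) @ V.T   # covariance contribution captured by the basis
```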
Hebbur, Venkata Subba Rao Vishwas. "Adjoint based solution and uncertainty quantification techniques for variational inverse problems." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/76665.
Full text
Ph. D.
Devathi, Duttaabhinivesh. "Uncertainty Quantification for Underdetermined Inverse Problems via Krylov Subspace Iterative Solvers." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case155446130705089.
Full text
Andersson, Hjalmar. "Inverse Uncertainty Quantification using deterministic sampling : An intercomparison between different IUQ methods." Thesis, Uppsala universitet, Tillämpad kärnfysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447070.
Full textLal, Rajnesh. "Data assimilation and uncertainty quantification in cardiovascular biomechanics." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS088/document.
Full text
Cardiovascular blood flow simulations can fill several critical gaps in current clinical capabilities. They offer non-invasive ways to quantify hemodynamics in the heart and major blood vessels for patients with cardiovascular diseases, which cannot be directly obtained from medical imaging. Patient-specific simulations (incorporating data unique to the individual) enable individualised risk prediction and provide key insights into disease progression and/or abnormal physiologic detection. They also provide a means to systematically design and test new medical devices, and are used as predictive tools for surgical and personalised treatment planning, thus aiding clinical decision-making. Patient-specific predictive simulations require effective assimilation of medical data for reliable simulated predictions. This is usually achieved by solving an inverse hemodynamic problem, where uncertain model parameters are estimated using techniques for merging data and numerical models known as data assimilation methods. In this thesis, the inverse problem is solved through a data assimilation method using an ensemble Kalman filter (EnKF) for parameter estimation. By using an ensemble Kalman filter, the solution also comes with a quantification of the uncertainties for the estimated parameters. An ensemble Kalman filter-based parameter estimation algorithm is proposed for patient-specific hemodynamic computations in a schematic arterial network from uncertain clinical measurements. Several in silico scenarios (using synthetic data) are considered to investigate the efficiency of the parameter estimation algorithm using the EnKF. The usefulness of the parameter estimation algorithm is also assessed using experimental data from an in vitro test rig and real clinical data from a volunteer (patient-specific case). The proposed algorithm is evaluated on arterial networks that include single arteries, cases of bifurcation, a simple human arterial network and a complex arterial network including the circle of Willis. The ultimate aim is to perform patient-specific hemodynamic analysis in the network of the circle of Willis. Common hemodynamic properties (parameters), such as arterial wall properties (Young's modulus, wall thickness, and viscoelastic coefficient) and terminal boundary parameters (reflection coefficient and Windkessel model parameters), are estimated as the solution to an inverse problem using time series of pressure values and blood flow rate as measurements. It is also demonstrated that a proper reduced-order zero-dimensional compartment model can lead to a simple and reliable estimation of blood flow features in the circle of Willis. The simulations with the estimated parameters capture target pressure or flow rate waveforms at given specific locations.
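A minimal sketch of the ensemble Kalman filter parameter-estimation idea summarized above, with an arbitrary exponential-decay forward model standing in for the hemodynamic solver; all function and variable names here are illustrative assumptions, not the thesis code.

```python
import numpy as np

def enkf_parameter_update(theta_ens, forward, y_obs, obs_cov, rng):
    """One ensemble Kalman analysis step for parameter estimation.

    theta_ens : (N_ens, n_params) ensemble of parameter samples
    forward   : callable mapping a parameter vector to predicted observations
    y_obs     : observed data vector
    obs_cov   : observation error covariance R
    """
    preds = np.array([forward(t) for t in theta_ens])        # (N_ens, n_obs)
    t_dev = theta_ens - theta_ens.mean(0)
    p_dev = preds - preds.mean(0)
    c_tp = t_dev.T @ p_dev / (len(theta_ens) - 1)            # parameter-output cross-covariance
    c_pp = p_dev.T @ p_dev / (len(theta_ens) - 1) + obs_cov
    gain = c_tp @ np.linalg.inv(c_pp)                        # Kalman gain
    y_pert = rng.multivariate_normal(y_obs, obs_cov, size=len(theta_ens))
    return theta_ens + (y_pert - preds) @ gain.T             # updated ensemble

# toy example: recover amplitude and decay rate of an exponential signal
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
forward = lambda theta: theta[0] * np.exp(-theta[1] * t)
theta_true = np.array([2.0, 3.0])
R = 0.01 * np.eye(t.size)
y = forward(theta_true) + rng.multivariate_normal(np.zeros(t.size), R)
ens = rng.uniform([0.5, 0.5], [4.0, 6.0], size=(100, 2))     # prior ensemble
for _ in range(5):                                           # a few iterative EnKF sweeps
    ens = enkf_parameter_update(ens, forward, y, R, rng)
print(ens.mean(0), ens.std(0))                               # estimate and its uncertainty
```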
Narayanamurthi, Mahesh. "Advanced Time Integration Methods with Applications to Simulation, Inverse Problems, and Uncertainty Quantification." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104357.
Full text
Doctor of Philosophy
The study of modern science and engineering begins with descriptions of a system of mathematical equations (a model). Different models require different techniques to solve them on a computer both accurately and efficiently. In this dissertation, we focus on developing novel mathematical solvers for models expressed as a system of equations, where only the initial state and the rate of change of state as a function are known. The solvers we develop can be used both to forecast the behavior of the system and to optimize its characteristics to achieve specific goals. We also build methodologies to estimate and control errors introduced by mathematical solvers in obtaining a solution for models involving multiple interacting physical, chemical, or biological phenomena. Our solvers build on the state of the art in the research community by introducing new approximations that exploit the underlying mathematical structure of a model. Where necessary, we provide concrete mathematical proofs to validate theoretically the correctness of the approximations we introduce, and we corroborate them with follow-up experiments. We also present detailed descriptions of the procedure for implementing each mathematical solver developed throughout the dissertation, while emphasizing means to obtain maximal performance from the solver. We demonstrate significant performance improvements on a range of models that serve as running examples, describing chemical reactions among distinct species as they diffuse over a surface medium. Also provided are results and procedures that a curious researcher can use to extend the ideas presented in the dissertation to other types of solvers that we have not considered. Research on mathematical solvers for different mathematical models is rich and rewarding, with numerous open-ended questions, and is a critical component in the progress of modern science and engineering.
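As a deliberately elementary illustration of the kind of solver this summary describes (advance a system given its initial state and rate of change), here is a classical fourth-order Runge-Kutta integrator applied to a toy linear reaction system; it is a generic textbook sketch and does not reproduce the advanced time integration methods developed in the dissertation.

```python
import numpy as np

def rk4_solve(f, y0, t0, t1, n_steps):
    """Integrate y' = f(t, y) from t0 to t1 with the classical RK4 method."""
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# toy linear reaction system between two species: y' = A @ y
A = np.array([[-1.0, 0.5],
              [1.0, -0.5]])
print(rk4_solve(lambda t, y: A @ y, [1.0, 0.0], 0.0, 5.0, 500))
```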
Ray, Kolyan Michael. "Asymptotic theory for Bayesian nonparametric procedures in inverse problems." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/278387.
Full text
Alhossen, Iman. "Méthode d'analyse de sensibilité et propagation inverse d'incertitude appliquées sur les modèles mathématiques dans les applications d'ingénierie." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30314/document.
Full text
Approaches for studying uncertainty are of great necessity in all disciplines. While the forward propagation of uncertainty has been investigated extensively, the backward propagation is still understudied. In this thesis, a new method for backward propagation of uncertainty is presented. The aim of this method is to determine the input uncertainty starting from the given data of the uncertain output. In parallel, sensitivity analysis methods are also of great necessity for revealing the influence of the inputs on the output in any modeling process. This helps identify the most significant inputs to be carried into an uncertainty study. In this work, the Sobol sensitivity analysis method, one of the most efficient global sensitivity analysis methods, is considered and its application framework is developed. This method relies on the computation of sensitivity indices, called Sobol indices, which quantify the effect of the inputs on the output. Usually, the inputs in the Sobol method are treated as continuous random variables in order to compute the corresponding indices. In this work, the Sobol method is demonstrated to give reliable results even when applied in the discrete case. In addition, the application of the Sobol method is advanced by studying the variation of these indices with respect to certain factors of the model or experimental conditions. The consequences and conclusions derived from the study of this variation help determine different characteristics and information about the inputs. Moreover, these inferences indicate the best experimental conditions under which the inputs can be estimated.
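A minimal sketch of how first-order Sobol indices are commonly estimated by Monte Carlo, using a Saltelli-style pick-and-freeze estimator on the standard Ishigami test function; the estimator choice and the test function are generic illustrations, not the models or framework developed in the thesis.

```python
import numpy as np

def first_order_sobol(model, sampler, n=20000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A, B = sampler(n, rng), sampler(n, rng)       # two independent input samples
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # ABi takes input i from B, the rest from A
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# toy example: the Ishigami test function with independent uniform inputs on [-pi, pi]^3
def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

sampler = lambda n, rng: rng.uniform(-np.pi, np.pi, size=(n, 3))
print(first_order_sobol(ishigami, sampler))       # roughly [0.31, 0.44, 0.00]
```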
Gehre, Matthias [Verfasser], Peter [Akademischer Betreuer] Maaß, and Bangti [Akademischer Betreuer] Jin. "Rapid Uncertainty Quantification for Nonlinear Inverse Problems / Matthias Gehre. Gutachter: Peter Maaß ; Bangti Jin. Betreuer: Peter Maaß." Bremen : Staats- und Universitätsbibliothek Bremen, 2013. http://d-nb.info/1072078589/34.
Full text
Kamilis, Dimitrios. "Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31415.
Full textAttia, Ahmed Mohamed Mohamed. "Advanced Sampling Methods for Solving Large-Scale Inverse Problems." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73683.
Full text
Ph. D.
Galbally, David. "Nonlinear model reduction for uncertainty quantification in large-scale inverse problems : application to nonlinear convection-diffusion-reaction equation." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43079.
Full text
Includes bibliographical references (p. 147-152).
There are multiple instances in science and engineering where quantities of interest are evaluated by solving one or several nonlinear partial differential equations (PDEs) that are parametrized in terms of a set of inputs. Even though well-established numerical techniques exist for solving these problems, their computational cost often precludes their use in cases where the outputs of interest must be evaluated repeatedly for different values of the input parameters, such as in probabilistic analysis applications. In this thesis we present a model reduction methodology that combines efficient representation of the nonlinearities in the governing PDE with an efficient model-constrained, greedy algorithm for sampling the input parameter space. The nonlinearities in the PDE are represented using a coefficient-function approximation that enables the development of an efficient offline-online computational procedure where the online computational cost is independent of the size of the original high-fidelity model. The input space sampling algorithm used for generating the reduced space basis adaptively improves the quality of the reduced order approximation by solving a PDE-constrained continuous optimization problem that targets the output error between the reduced and full order models in order to determine the optimal sampling point at every greedy cycle. The resulting model reduction methodology is applied to a highly nonlinear combustion problem governed by a convection-diffusion-reaction PDE with up to 3 input parameters. The reduced basis approximation developed for this problem is up to 50,000 times faster to solve than the original high-fidelity finite element model, with an average relative error in prediction of outputs of interest of 2.5 × 10^-6 over the input parameter space. The reduced order model developed in this thesis is used in a novel probabilistic methodology for solving inverse problems.
(cont.) The extreme computational cost of the Bayesian framework approach for inferring the values of the inputs that generated a given set of empirically measured outputs often precludes its use in practical applications. In this thesis we show that using a reduced order model for running the Markov […]
S.M.
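The offline-online idea summarized in the abstract above can be illustrated, in a much simpler linear setting, by a plain projection-based (POD) reduced order model: snapshots of the full model are compressed into a small basis offline, and the precomputed reduced operators make the online solve independent of the full dimension. This sketch is a generic stand-in under illustrative assumptions; it does not reproduce the coefficient-function approximation or the model-constrained greedy sampling of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # toy full-order operator

def full_solve(mu, n_steps=200, dt=0.01):
    """Full model: explicit Euler on y' = A y + mu * f, returns the trajectory."""
    f = np.ones(n)
    y = np.zeros(n)
    traj = []
    for _ in range(n_steps):
        y = y + dt * (A @ y + mu * f)
        traj.append(y.copy())
    return np.array(traj)

# offline stage: sample the input parameter, collect snapshots, build a POD basis
snapshots = np.vstack([full_solve(mu) for mu in (0.5, 1.0, 2.0)])
U, s, _ = np.linalg.svd(snapshots.T, full_matrices=False)
V = U[:, :10]                                          # 10 POD modes
A_r, f_r = V.T @ A @ V, V.T @ np.ones(n)               # precomputed reduced operators

# online stage: cost depends only on the reduced dimension, not on n
def reduced_solve(mu, n_steps=200, dt=0.01):
    yr = np.zeros(V.shape[1])
    for _ in range(n_steps):
        yr = yr + dt * (A_r @ yr + mu * f_r)
    return V @ yr

y_full = full_solve(1.5)[-1]
err = np.linalg.norm(reduced_solve(1.5) - y_full) / np.linalg.norm(y_full)
print(f"relative error of the reduced model at mu = 1.5: {err:.2e}")
```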
John, David Nicholas [Verfasser], and Vincent [Akademischer Betreuer] Heuveline. "Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge / David Nicholas John ; Betreuer: Vincent Heuveline." Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/122909265X/34.
Full text
Cho, Taewon. "Computational Advancements for Solving Large-scale Inverse Problems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103772.
Full text
Doctor of Philosophy
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in global problems such as greenhouse gas tracking, where estimating the amount of greenhouse gas added to or removed from the atmosphere becomes increasingly difficult. The number of observations has increased with improvements in measurement technologies (e.g., satellites), so the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or to reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat the unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply the developed methods to numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of CO2 of living organisms.
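The "unknowns as random variables with priors" idea has a closed form in the simplest setting of a linear forward model with Gaussian noise and a Gaussian prior, which the sketch below illustrates on a toy underdetermined problem; this textbook case is only a stand-in for the large-scale methods developed in the dissertation, and all names and sizes are illustrative.

```python
import numpy as np

def gaussian_linear_map(F, y, noise_var, prior_var):
    """MAP estimate and posterior covariance for y = F x + noise,
    with prior x ~ N(0, prior_var * I) and noise ~ N(0, noise_var * I)."""
    n = F.shape[1]
    precision = F.T @ F / noise_var + np.eye(n) / prior_var   # posterior precision
    cov_post = np.linalg.inv(precision)
    x_map = cov_post @ (F.T @ y) / noise_var
    return x_map, cov_post

# toy underdetermined problem: more unknowns than measurements, as in tomography
rng = np.random.default_rng(3)
F = rng.standard_normal((50, 200))                # 50 measurements, 200 unknowns
x_true = np.zeros(200)
x_true[::20] = 1.0
y = F @ x_true + 0.05 * rng.standard_normal(50)
x_map, cov = gaussian_linear_map(F, y, noise_var=0.05**2, prior_var=1.0)
print(np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))   # relative error
print(np.sqrt(np.diag(cov))[:5])                  # pointwise posterior uncertainty
```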
Perrin, Guillaume. "Random fields and associated statistical inverse problems for uncertainty quantification : application to railway track geometries for high-speed trains dynamical responses and risk assessment." PhD thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-01001045.
Full text
Cortesi, Andrea Francesco. "Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0021/document.
Full text
Accurate prediction of hypersonic high-enthalpy flows is of major relevance for atmospheric entry missions. However, uncertainties are inevitable on freestream conditions and other parameters of the physico-chemical models. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty in the outputs. In this work, we use a statistical framework for direct propagation of uncertainties and inverse freestream reconstruction applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat flux measurements for the reconstruction of freestream variables and uncertain model parameters for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, which allows sources of uncertainty and measurement errors to be accounted for. Different techniques are introduced to enhance the capabilities of the statistical framework for quantification of uncertainties. First, an improved surrogate modeling technique is proposed, based on Kriging and Sparse Polynomial Dimensional Decomposition. Then a method is proposed to adaptively add new training points to an existing experimental design to improve the accuracy of the trained surrogate model. A way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
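A minimal sketch of the Bayesian reconstruction step described above: a cheap surrogate stands in for the expensive flow solver inside a random-walk Metropolis sampler. The quadratic surrogate, the synthetic observations, and the parameter bounds are all assumptions for illustration; the Kriging/sparse-PDD surrogates and active-subspace MCMC from the thesis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# stand-in surrogate: maps two "freestream" parameters to two heat-flux-like outputs
def surrogate(theta):
    return np.array([1.5 * theta[0] ** 2 + 0.8 * theta[1],
                     0.5 * theta[0] + 2.0 * theta[1] ** 2])

theta_true = np.array([1.2, 0.7])
sigma_obs = 0.05
y_obs = surrogate(theta_true) + sigma_obs * rng.standard_normal(2)   # synthetic data

def log_posterior(theta):
    if np.any(theta < 0.0) or np.any(theta > 3.0):   # uniform prior on [0, 3]^2
        return -np.inf
    misfit = (y_obs - surrogate(theta)) / sigma_obs
    return -0.5 * np.sum(misfit ** 2)

# random-walk Metropolis on the surrogate posterior
n_samples, step = 20000, 0.1
chain = np.empty((n_samples, 2))
theta = np.array([1.0, 1.0])
logp = log_posterior(theta)
for k in range(n_samples):
    proposal = theta + step * rng.standard_normal(2)
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = proposal, logp_prop
    chain[k] = theta

posterior = chain[5000:]                      # discard burn-in
print(posterior.mean(0), posterior.std(0))    # reconstructed parameters and their spread
```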
Calatayud, Gregori Julia. "Computational methods for random differential equations: probability density function and estimation of the parameters." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138396.
Full text
Mathematical models based on deterministic differential equations do not take into account the inherent uncertainty of the physical phenomenon (in a broad sense) under study. In addition, inaccuracies often occur in the collected data due to measurement errors. It is therefore necessary to treat the input parameters of the model as random quantities, in the form of random variables or stochastic processes. This gives rise to the study of random differential equations. Computing the probability density function of the stochastic solution is important for quantifying the uncertainty of the model response. Although this computation is in general a difficult goal, certain stochastic expansions of the model coefficients yield faithful representations of the stochastic solution, which allows its density function to be approximated. In this regard, Karhunen-Loève and generalized polynomial chaos expansions constitute tools for this density approximation. In addition, methods based on discretizations from finite difference numerical schemes allow the stochastic solution, and hence its probability density function, to be approximated. The main part of this dissertation aims at approximating the probability density function of important mathematical models with uncertainty in their formulation. Specifically, the following models arising in different scientific areas are studied in a stochastic sense: in Physics, the damped pendulum model; in Biology and Epidemiology, the logistic and Bertalanffy growth models, as well as models of epidemiological type; and in Thermodynamics, the heat partial differential equation. Karhunen-Loève and generalized polynomial chaos expansions and finite difference schemes are used for the approximation of the density of the solution. These techniques are only applicable when we have a forward model in which the input parameters already have prescribed probability distributions. When the model coefficients are estimated from the collected data, we have an inverse problem. The Bayesian inference approach allows the probability distribution of the model parameters to be estimated from their prior probability distribution and the likelihood of the data. Uncertainty quantification for the model response is carried out using the posterior predictive distribution. In this regard, the last part of the thesis presents the estimation of the distributions of the model parameters from experimental data on bacterial growth. To do so, a hybrid method combining Bayesian parameter estimation and generalized polynomial chaos expansions is used.
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P.
Calatayud Gregori, J. (2020). Computational methods for random differential equations: probability density function and estimation of the parameters [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138396
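A minimal sketch of the forward problem described in the abstract above: the probability density function of the solution of a random logistic growth model is approximated by sampling the random inputs, evaluating the exact deterministic solution for each draw, and applying a kernel density estimate. The input distributions and parameter values are illustrative assumptions; the Karhunen-Loève and polynomial chaos machinery of the thesis is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

def logistic_solution(t, r, K, p0=0.1):
    """Exact solution of the logistic equation p' = r p (1 - p / K), p(0) = p0."""
    return K / (1.0 + (K / p0 - 1.0) * np.exp(-r * t))

# random inputs: growth rate and carrying capacity treated as random variables
n_samples = 50000
r = rng.normal(1.0, 0.1, n_samples)
K = rng.uniform(0.8, 1.2, n_samples)

t_star = 3.0
samples = logistic_solution(t_star, r, K)   # Monte Carlo sample of the solution at t = t_star
density = gaussian_kde(samples)             # approximate probability density function

grid = np.linspace(samples.min(), samples.max(), 200)
print(grid[np.argmax(density(grid))])       # mode of the solution's density at t = t_star
```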
Cioaca, Alexandru George. "A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51795.
Full text
Ph. D.
Mondal, Anirban. "Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9905.
Full textFlath, Hannah Pearl. "Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow." 2013. http://hdl.handle.net/2152/21157.
Full text
Sawlan, Zaid A. "Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties." Diss., 2018. http://hdl.handle.net/10754/629731.
Full textMartin, James Robert Ph D. "A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion." Thesis, 2015. http://hdl.handle.net/2152/31374.
Full textYousefpour, Negin. "Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes." Thesis, 2013. http://hdl.handle.net/1969.1/151268.
Full text(11166777), Peiyi Zhang. "Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems." Thesis, 2021.
Find full text
The Ensemble Kalman filter (EnKF) has achieved great successes in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the right filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or the sequential importance sampler, do not scale well to the dimension of the system and the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.
In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from the stochastic gradient Langevin-type algorithms, which make it scalable with respect to both the dimension and sample size. We prove that the LEnKF converges to the right filtering distribution in Wasserstein distance under the big data scenario that the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, and thus it can be used for uncertainty quantification. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on the techniques of subsampling and Langevin diffusion process. We illustrate the performance of the LEnKF using a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.
In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. We propose the stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly from the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. These two extensions inherit the scalability of the LEnKF algorithm with respect to the dimension and sample size. The numerical results indicate that they outperform existing methods in both state/parameter estimation and uncertainty quantification.
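The "Langevin dynamics with mini-batch data" ingredient of the LEnKF described above can be illustrated by plain stochastic gradient Langevin dynamics (SGLD) applied to a Bayesian linear regression; the coupling with the EnKF forecast-analysis step is not reproduced, and the model, step size, and batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic Bayesian linear regression: y = X @ w + noise, Gaussian prior on w
n, d, sigma, tau = 5000, 10, 0.5, 1.0
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + sigma * rng.standard_normal(n)

def stochastic_grad_log_post(w, idx):
    """Unbiased mini-batch estimate of the gradient of the log posterior."""
    Xb, yb = X[idx], y[idx]
    grad_lik = (n / len(idx)) * Xb.T @ (yb - Xb @ w) / sigma ** 2   # rescaled likelihood term
    grad_prior = -w / tau ** 2
    return grad_lik + grad_prior

# SGLD: a gradient step on the log posterior plus properly scaled Gaussian noise
eps, batch, n_iter = 2e-5, 100, 5000
w = np.zeros(d)
samples = []
for _ in range(n_iter):
    idx = rng.choice(n, size=batch, replace=False)
    w = w + 0.5 * eps * stochastic_grad_log_post(w, idx) + np.sqrt(eps) * rng.standard_normal(d)
    samples.append(w.copy())

posterior = np.array(samples[1000:])       # discard burn-in
print(posterior.mean(0) - w_true)          # posterior mean is close to the true weights
print(posterior.std(0))                    # sampled posterior spread (uncertainty)
```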