Theses / dissertations on the topic "Uncertainty Quantification model"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 50 best theses / dissertations for your research on the topic "Uncertainty Quantification model".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.
Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model. More often than not, the optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed for the calibration and prediction of a stochastic disease simulation model which simulates the contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City in the USA.
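As a concrete illustration of calibrating a stochastic simulator, the sketch below runs approximate Bayesian computation (ABC) rejection sampling on a toy Reed-Frost chain-binomial epidemic. This is only a minimal stand-in for the dissertation's methodology; the simulator, the uniform prior, and the observed final size of 60 cases are all invented for the example.

```python
import random

def chain_binomial_final_size(beta, n=100, i0=1, rng=random):
    """One stochastic run of a Reed-Frost chain-binomial epidemic;
    returns the cumulative number ever infected (the final size)."""
    s, i, cum = n - i0, i0, i0
    while i > 0:
        p_inf = 1.0 - (1.0 - beta) ** i          # chance a susceptible is infected this step
        new_i = sum(1 for _ in range(s) if rng.random() < p_inf)
        s, i = s - new_i, new_i
        cum += new_i
    return cum

def abc_calibrate(observed, n_draws=2000, tol=5, seed=0):
    """ABC rejection: keep prior draws whose simulated final size lands near the data."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        beta = rng.uniform(0.0, 0.05)            # uniform prior on per-contact infection prob.
        if abs(chain_binomial_final_size(beta, rng=rng) - observed) <= tol:
            accepted.append(beta)
    return accepted

posterior = abc_calibrate(observed=60)           # approximate posterior sample for beta
```

Because each run of the simulator is random, the same input produces different final sizes, which is exactly why a likelihood-free scheme such as ABC is a natural fit for stochastic inverse problems.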
White, Jeremy. "Computer Model Inversion and Uncertainty Quantification in the Geosciences". Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5329.
Park, Inseok. "Quantification of Multiple Types of Uncertainty in Physics-Based Simulation". Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1348702461.
Blumer, Joel David. "Cross-scale model validation with aleatory and epistemic uncertainty". Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53571.
Ezvan, Olivier. "Multilevel model reduction for uncertainty quantification in computational structural dynamics". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1109/document.
This work deals with an extension of the classical construction of reduced-order models (ROMs) obtained through modal analysis in computational linear structural dynamics. It is based on a multilevel projection strategy and devoted to complex structures with uncertainties. Nowadays, it is well recognized that predictions in structural dynamics over a broad frequency band using a finite element model must be improved by taking into account the model uncertainties induced by modeling errors, whose role increases with frequency. In such a framework, the nonparametric probabilistic approach of uncertainties is used, which requires the introduction of a ROM. Consequently, these two aspects, the frequency evolution of the uncertainties and reduced-order modeling, lead us to consider the development of a multilevel ROM in computational structural dynamics, which has the capability to adapt the level of uncertainties to each part of the frequency band. In this thesis, we are interested in the dynamical analysis of complex structures in a broad frequency band. By complex structure is meant a structure with complex geometry, constituted of heterogeneous materials and, more specifically, characterized by the presence of several structural levels, for instance, a structure that is made up of a stiff main part embedding various flexible sub-parts. For such structures, in addition to the usual global-displacement elastic modes associated with the stiff skeleton, numerous local elastic modes may appear, which correspond to predominant vibrations of the flexible sub-parts. For such complex structures, the modal density may increase substantially even at low frequencies, leading to high-dimension ROMs with the modal analysis method (with potentially thousands of elastic modes at low frequencies).
In addition, such ROMs may suffer from a lack of robustness with respect to uncertainty, because of the presence of the numerous local displacements, which are known to be very sensitive to uncertainties. It should be noted that, in contrast to the usual long-wavelength global displacements of the low-frequency (LF) band, the local displacements associated with the structural sub-levels, which can then also appear in the LF band, are characterized by short wavelengths, similar to high-frequency (HF) displacements. As a result, for the complex structures considered, there is an overlap of the three vibration regimes, LF, MF, and HF, and numerous local elastic modes are intertwined with the usual global elastic modes. This implies two major difficulties, pertaining to uncertainty quantification and to computational efficiency. The objective of this thesis is thus twofold. First, to provide a multilevel stochastic ROM that is able to take into account the heterogeneous variability introduced by the overlap of the three vibration regimes. Second, to provide a predictive ROM whose dimension is decreased with respect to the classical ROM of the modal analysis method. A general method is presented for the construction of a multilevel ROM, based on three orthogonal reduced-order bases (ROBs) whose displacements are either LF-, MF-, or HF-type displacements (associated with the overlapping LF, MF, and HF vibration regimes). The construction of these ROBs relies on a filtering strategy based on the introduction of global shape functions for the kinetic energy (in contrast to the local shape functions of the finite elements). Implementing the nonparametric probabilistic approach in the multilevel ROM allows each type of displacement to be affected by a particular level of uncertainties. The method is applied to a car, for which the multilevel stochastic ROM is identified with respect to experiments by solving a statistical inverse problem.
The proposed ROM yields both a decreased dimension and an improved prediction with respect to a classical stochastic ROM.
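The classical modal-analysis ROM that this work extends can be sketched in a few lines: assemble the stiffness matrix of a small spring-mass chain, then Galerkin-project it onto a truncated basis of low-frequency mode shapes. The 4-DOF chain and its analytic sine mode shapes are illustrative choices, not taken from the thesis.

```python
import math

# Full model: stiffness matrix of a fixed-fixed, 4-DOF spring-mass chain
# (unit masses and unit springs), i.e., the classic tridiagonal (2, -1) matrix.
n = 4
K = [[0.0] * n for _ in range(n)]
for j in range(n):
    K[j][j] = 2.0
    if j + 1 < n:
        K[j][j + 1] = K[j + 1][j] = -1.0

def mode(k):
    """Analytic mode shape phi_k(j) ~ sin(k*pi*(j+1)/(n+1)), normalized."""
    v = [math.sin(k * math.pi * (j + 1) / (n + 1)) for j in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def project(K, basis):
    """Galerkin reduction K_r = Phi^T K Phi onto the retained modes."""
    m = len(basis)
    return [[sum(basis[a][i] * K[i][j] * basis[b][j]
                 for i in range(n) for j in range(n))
             for b in range(m)] for a in range(m)]

basis = [mode(1), mode(2)]     # keep only the two lowest-frequency (global) modes
Kr = project(K, basis)         # 2x2 reduced stiffness; diagonal = retained eigenvalues
```

Because the retained shapes are exact eigenvectors and the mass matrix is the identity, `Kr` comes out diagonal with entries 2 − 2cos(kπ/5); the thesis applies the same projection idea with filtered global shape functions rather than exact modes, precisely to separate global from local displacements.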
Chiang, Shen. "Hydrological model comparison and refinement through uncertainty recognition and quantification". 京都大学 (Kyoto University), 2005. http://hdl.handle.net/2433/144539.
Riley, Matthew E. "Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design". Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1314895435.
Rashidi, Mehrabadi Niloofar. "Power Electronics Design Methodologies with Parametric and Model-Form Uncertainty Quantification". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82934.
Ph. D.
Xie, Yimeng. "Advancements in Degradation Modeling, Uncertainty Quantification and Spatial Variable Selection". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71687.
Ph. D.
Karlén, Johan. "Uncertainty Quantification of a Large 1-D Dynamic Aircraft System Simulation Model". Thesis, Linköpings universitet, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-120189.
Whiting, Nolan Wagner. "Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91903.
Master of Science
Uncertainty often exists when conducting physical experiments, and whether this uncertainty is due to input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be aleatory (uncertainties to which a probability distribution can be applied for the likelihood of drawing values) or epistemic (a lack of knowledge, with inputs drawn within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles.
Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. It was also of interest to assess how well each method could predict the uncertainties about the simulation outside of the region in which experimental observations were made and model-form uncertainties could be observed.
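The area validation metric mentioned above is simple to compute from samples: it is the area between the empirical CDF of the simulation outcomes and that of the experimental observations. A minimal sketch (the sample values are made up, not data from the thesis):

```python
import bisect

def ecdf(samples):
    """Right-continuous empirical CDF of a sample."""
    xs = sorted(samples)
    n = len(xs)
    return lambda x: bisect.bisect_right(xs, x) / n

def area_metric(model_samples, exp_samples):
    """Area between the two empirical CDFs: d = integral of |F_model - S_exp|."""
    Fm, Fe = ecdf(model_samples), ecdf(exp_samples)
    pts = sorted(set(model_samples) | set(exp_samples))
    # both step functions are constant between consecutive breakpoints
    return sum(abs(Fm(a) - Fe(a)) * (b - a) for a, b in zip(pts, pts[1:]))

# model predictions sit exactly 0.5 below the "experiments", so the area is 0.5
d = area_metric([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5])
```

A larger area means a larger mismatch between the distributions of simulation and experiment, which is what makes the metric usable as a model-form uncertainty estimate.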
Tavares, Ivo Alberto Valente. "Uncertainty quantification with a Gaussian Process Prior : an example from macroeconomics". Doctoral thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21444.
This thesis may be broadly divided into four parts. In the first part, we review the state of the art on misspecification in macroeconomics and the contribution that a relatively new area of research, Uncertainty Quantification, has so far made to macroeconomics. These reviews are essential to contextualize the contribution of this thesis in furthering research dedicated to correcting non-linear misspecifications and to accounting for several other sources of uncertainty when modelling from an economic perspective. In the next three parts, using the same simple DSGE model from macroeconomic theory, we give an example of how researchers may quantify uncertainty in a state-space model using a discrepancy term with a Gaussian Process prior. In the second part of the thesis, we use a full Gaussian Process (GP) prior on the discrepancy term. Our experiments showed that, despite the heavy computational constraints of our full GP method, we still obtained a very interesting forecasting performance with such a restricted sample size, when compared with similar uncorrected DSGE models, or with DSGE models corrected using state-of-the-art methods for time series, such as imposing a VAR on the observation error of the state-space model. In the third part of our work, we improved the computational performance of the previous method using what has been referred to in the literature as the Hilbert Reduced Rank GP. This method has close links to functional analysis, the spectral theorem for normal operators, and partial differential equations. It indeed improved the computational processing time, albeit only slightly, and was accompanied by a similarly slight decrease in forecasting performance. The fourth part of our work examined how our method accounts for model uncertainty just prior to, and during, the great financial crisis of 2007-2009.
Our technique allowed us to capture the crisis, albeit with reduced applicability, possibly due to computational constraints. This latter part was also used to deepen the understanding of our model uncertainty quantification technique with a GP, and identifiability issues were studied. One of our overall conclusions was that more research is needed before this uncertainty quantification technique can be used as part of the toolbox of central bankers and researchers for forecasting economic fluctuations, especially regarding the computational performance of either method.
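The core device in this thesis, a GP-distributed discrepancy term, reduces in its simplest form to GP regression on model residuals. The sketch below conditions a zero-mean GP with an RBF kernel on three residuals between observations and a hypothetical model forecast; the inputs, residual values, and hyperparameters are all invented for the example.

```python
import math

def rbf(x1, x2, ell=1.0):
    """Squared-exponential (RBF) covariance function."""
    return math.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(X, y, x_star, noise=1e-6):
    """Posterior mean at x_star of a zero-mean GP conditioned on (X, y)."""
    n = len(X)
    Kmat = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
            for i in range(n)]
    alpha = solve(Kmat, y)                       # alpha = K^-1 y
    return sum(rbf(x_star, X[j]) * alpha[j] for j in range(n))

# residuals between an observed series and a (hypothetical) DSGE forecast
X, y = [0.0, 1.0, 2.0], [0.1, -0.2, 0.05]
m = gp_posterior_mean(X, y, 1.0)                 # discrepancy estimate at x = 1.0
```

With near-zero observation noise the posterior mean interpolates the residuals, which is the behaviour a discrepancy term exploits to soak up systematic model error.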
Balch, Michael Scott. "Methods for Rigorous Uncertainty Quantification with Application to a Mars Atmosphere Model". Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30115.
Ph. D.
Amarchinta, Hemanth. "Uncertainty Quantification of Residual Stresses Induced By Laser Peening Simulation". Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1278028187.
Conrad, Yvonne [Verfasser]. "Model-based quantification of nitrate-nitrogen leaching considering sources of uncertainty / Yvonne Conrad". Kiel : Universitätsbibliothek Kiel, 2017. http://d-nb.info/1128149249/34.
Kamilis, Dimitrios. "Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31415.
Smit, Jacobus Petrus Johannes. "The quantification of prediction uncertainty associated with water quality models using Monte Carlo Simulation". Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85814.
ENGLISH ABSTRACT: Water quality models are mathematical representations of ecological systems, and they play a major role in the planning and management of water resources and aquatic environments. Important decisions concerning capital investment and environmental consequences often rely on the results of water quality models, and it is therefore very important that decision makers are aware of and understand the uncertainty associated with these models. The focus of this study was on the use of Monte Carlo simulation for the quantification of prediction uncertainty associated with water quality models. Two types of uncertainty exist: epistemic uncertainty and aleatory uncertainty. Epistemic uncertainty is a result of a lack of knowledge, while aleatory uncertainty is due to the natural variability of an environmental system. It is very important to distinguish between these two types of uncertainty because the analysis of a model's uncertainty depends on it. Three different configurations of Monte Carlo simulation in the analysis of uncertainty were discussed and illustrated: Single Phase Monte Carlo Simulation (SPMCS), Two Phase Monte Carlo Simulation (TPMCS) and Parameter Monte Carlo Simulation (PMCS). Each configuration has its own objective in the analysis of a model's uncertainty and depends on the distinction between the types of uncertainty. As an experiment, a hypothetical river was modelled using the Streeter-Phelps model and synthetic data was generated for the system, allowing the experiment to be performed under controlled conditions. The modelling protocol followed in the experiment included two uncertainty analyses. All three configurations of Monte Carlo simulation were used in these uncertainty analyses to quantify the model's prediction uncertainty in fulfilment of their different objectives.
The first uncertainty analysis, known as the preliminary uncertainty analysis, was performed to take stock of the model's situation concerning uncertainty before any effort was made to reduce the model's prediction uncertainty. The idea behind the preliminary uncertainty analysis was that it would inform further modelling decisions with regard to calibration and parameter estimation experiments. Parameter uncertainty was reduced by calibrating the model. Once parameter uncertainty was reduced, the second uncertainty analysis, known as the confirmatory uncertainty analysis, was performed to confirm that the uncertainty associated with the model was indeed reduced. The two uncertainty analyses were conducted in exactly the same way. To conclude the experiment, it was illustrated how the quantification of the model's prediction uncertainty aided in the calculation of a Total Maximum Daily Load (TMDL). The Margin of Safety (MOS) included in the TMDL could be determined based on scientific information provided by the uncertainty analysis. The total MOS assigned to the TMDL was -35% of the mean load allocation for the point source. For the sake of simplicity, load allocations from non-point sources were disregarded.
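A single-phase Monte Carlo propagation through the Streeter-Phelps oxygen-deficit equation can be sketched as follows; the input distributions below are illustrative assumptions, not the values used in the study.

```python
import math
import random
import statistics

def streeter_phelps_deficit(t, L0, D0, kd, kr):
    """Dissolved-oxygen deficit (mg/L) at travel time t (days):
    D(t) = kd*L0/(kr-kd) * (e^(-kd*t) - e^(-kr*t)) + D0*e^(-kr*t)."""
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
        + D0 * math.exp(-kr * t)

def mc_deficit(t, n=5000, seed=1):
    """Single-phase Monte Carlo: sample uncertain inputs, propagate, summarize."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        kd = rng.gauss(0.35, 0.05)   # deoxygenation rate (1/day), assumed distribution
        kr = rng.gauss(0.70, 0.08)   # reaeration rate (1/day), assumed distribution
        L0 = rng.gauss(10.0, 1.0)    # initial BOD (mg/L), assumed distribution
        out.append(streeter_phelps_deficit(t, L0, 1.0, kd, kr))
    return statistics.mean(out), statistics.stdev(out)

mean_D, sd_D = mc_deficit(t=2.0)     # deficit statistics two days downstream
```

The spread `sd_D` is the kind of quantified prediction uncertainty from which a scientifically justified Margin of Safety can then be derived.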
Gatian, Katherine N. "A quantitative, model-driven approach to technology selection and development through epistemic uncertainty reduction". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53636.
Kim, Jee Yun. "Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials". Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85210.
Master of Science
A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer manufacturing process of additive manufacturing allows for this type of design because of its control over where material is deposited. This possibility raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction; however, this approach presents many issues, such as long run times and uncertainty in context-specific inputs of the simulation. We have instead adopted a framework using advanced statistical methodology able to combine both experimental and simulation data to significantly reduce run times as well as quantify the various uncertainties associated with running simulations.
Ricciardi, Denielle E. "Uncertainty Quantification and Propagation in Materials Modeling Using a Bayesian Inferential Framework". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587473424147276.
Shi, Hongxiang. "Hierarchical Statistical Models for Large Spatial Data in Uncertainty Quantification and Data Fusion". University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504802515691938.
Andersson, Hjalmar. "Inverse Uncertainty Quantification using deterministic sampling : An intercomparison between different IUQ methods". Thesis, Uppsala universitet, Tillämpad kärnfysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447070.
Kacker, Shubhra. "The Role of Constitutive Model in Traumatic Brain Injury Prediction". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563874757653453.
Vilhelmsson, Markus, and Isac Strömberg. "Investigating Validation of a Simulation Model for Development and Certification of Future Fighter Aircraft Fuel Systems". Thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129300.
Cheng, Xi. "Quantification of the parametric uncertainty in the specific absorption rate calculation of a mobile phone". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS258/document.
Texto completo da fonteThis thesis focuses on parameter uncertainty quantification (UQ) in specific absorptionrate (SAR) calculation using a computer-aided design (CAD) mobile phone model.The impact of uncertainty, e.g., lack of detailed knowledge about material electricalproperties, system geometrical features, etc., in SAR calculation is quantified by threecomputationally efficient non-intrusive UQ methods: unscented transformation (UT),stochastic collocation (SC) and non-intrusive polynomial chaos (NIPC). They are callednon-intrusive methods because the simulation process is simply considered as a blackboxwithout changing the code of the simulation solver. Their performances for thecases of one and two random variables are analysed. In contrast to the traditionaluncertainty analysis method: Monte Carlo method, the time of the calculation becomesacceptable. To simplify the UQ procedure for the case of multiple uncertain inputs, it isdemonstrated that uncertainties can be combined to evaluate the parameter uncertaintyof the output. Combining uncertainties is an approach generally used in the field ofmeasurement, in this thesis, it is used in SAR calculations in the complex situation. Oneof the necessary steps in the framework of uncertainty analysis is sensitivity analysis (SA)which aims at quantifying the relative importance of each uncertain input parameterwith respect to the uncertainty of the output. Polynomial chaos (PC) based Sobol’indices method whose SA indices are evaluated by PC expansion instead of Monte Carlomethod is used in SAR calculation. The results of the investigations are presented anddiscussed.In order to make the reading easier, elementary notions of SAR, modelling, uncertaintyin modelling, and probability theory are given in introduction (chapter 1). Thenthe main content of this thesis are presented in chapter 2 and chapter 3. 
In chapter 4, another approach to using PC expansion is given, applied within the finite-difference time-domain (FDTD) code. Since the FDTD code in the simulation solver must be changed, this is the so-called intrusive PC expansion; the intrusive method has already been investigated in detail in other theses. In chapter 5, conclusions and future work are given.
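Of the three non-intrusive methods listed, the unscented transformation is the easiest to sketch: it propagates a small set of deterministically chosen sigma points through the black-box model and recombines them with fixed weights. The quadratic test function and the input distribution below are illustrative, not taken from the thesis.

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """UT for a single Gaussian input: propagate 3 sigma points through f
    and recombine with fixed weights to estimate the output mean/variance."""
    n = 1
    s = math.sqrt((n + kappa) * var)
    pts = [mean, mean + s, mean - s]
    w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(x) for x in pts]
    m = sum(wi * yi for wi, yi in zip(w, ys))
    v = sum(wi * (yi - m) ** 2 for wi, yi in zip(w, ys))
    return m, v

# toy black box: output varies quadratically with an uncertain input ~ N(1, 0.2^2)
m, v = unscented_transform(1.0, 0.04, lambda x: x * x)
```

For this quadratic the UT with κ = 3 − n is exact: it returns E[X²] = 1.04 and Var(X²) = 0.1632, matching the Gaussian moments while evaluating the model only three times, which is the non-intrusive efficiency argument the thesis makes against plain Monte Carlo.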
Wang, Jianxun. "Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/77035.
Ph. D.
Bazargan, Hamid. "An efficient polynomial chaos-based proxy model for history matching and uncertainty quantification of complex geological structures". Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2757.
Holland, Troy Michael. "A Comprehensive Coal Conversion Model Extended to Oxy-Coal Conditions". BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6525.
Wu, Jinlong. "Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85129.
Ph. D.
Reynolds-Averaged Navier–Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled.
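The Bayesian leg of the dissertation amounts, in its simplest scalar form, to a conjugate Gaussian update: sparse noisy observations pull a prior belief about a model-correction quantity toward the data while shrinking its variance. A minimal sketch with invented numbers (not the dissertation's actual flow data):

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Bayesian update of a scalar correction factor from
    independent Gaussian observations: precisions add, and the posterior
    mean is the precision-weighted average of prior and data."""
    post_prec = 1.0 / prior_var + len(obs) / obs_var
    post_mean = (prior_mean / prior_var + sum(obs) / obs_var) / post_prec
    return post_mean, 1.0 / post_prec

# prior belief: correction factor ~ N(1.0, 0.5^2); sparse "measurements" near 1.3
m, v = gaussian_update(1.0, 0.25, [1.25, 1.35, 1.30], 0.01)
```

The posterior mean lands close to the data average and the posterior variance drops well below the prior variance, which is the model-form uncertainty reduction that the full framework achieves with spatially varying Reynolds-stress corrections instead of a single scalar.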
Galbally, David. "Nonlinear model reduction for uncertainty quantification in large-scale inverse problems : application to nonlinear convection-diffusion-reaction equation". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43079.
Includes bibliographical references (p. 147-152).
There are multiple instances in science and engineering where quantities of interest are evaluated by solving one or several nonlinear partial differential equations (PDEs) that are parametrized in terms of a set of inputs. Even though well-established numerical techniques exist for solving these problems, their computational cost often precludes their use in cases where the outputs of interest must be evaluated repeatedly for different values of the input parameters, such as probabilistic analysis applications. In this thesis we present a model reduction methodology that combines efficient representation of the nonlinearities in the governing PDE with an efficient model-constrained, greedy algorithm for sampling the input parameter space. The nonlinearities in the PDE are represented using a coefficient-function approximation that enables the development of an efficient offline-online computational procedure where the online computational cost is independent of the size of the original high-fidelity model. The input space sampling algorithm used for generating the reduced space basis adaptively improves the quality of the reduced order approximation by solving a PDE-constrained continuous optimization problem that targets the output error between the reduced and full order models in order to determine the optimal sampling point at every greedy cycle. The resulting model reduction methodology is applied to a highly nonlinear combustion problem governed by a convection-diffusion-reaction PDE with up to 3 input parameters. The reduced basis approximation developed for this problem is up to 50,000 times faster to solve than the original high-fidelity finite element model, with an average relative error in prediction of outputs of interest of 2.5 × 10^-6 over the input parameter space. The reduced order model developed in this thesis is used in a novel probabilistic methodology for solving inverse problems.
(cont) The extreme computational cost of the Bayesian framework approach for inferring the values of the inputs that generated a given set of empirically measured outputs often precludes its use in practical applications. In this thesis we show that using a reduced order model for running the Markov
by David Galbally.
S.M.
Yuan, Mengfei. "Machine Learning-Based Reduced-Order Modeling and Uncertainty Quantification for "Structure-Property" Relations for ICME Applications". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555580083945861.
Huang, Chao-Min. "Robust Design Framework for Automating Multi-component DNA Origami Structures with Experimental and MD coarse-grained Model Validation". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159051496861178.
Texto completo da fonteCarmassi, Mathieu. "Uncertainty quantification and calibration of a photovoltaic plant model : warranty of performance and robust estimation of the long-term production". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLA042/document.
Field experiments are often difficult and expensive to carry out. To bypass these issues, industrial companies have developed computational codes that are intended to be representative of the physical system, but come with problems of their own. It turns out that, despite continuous code development, the difference between the code outputs and experiments can remain significant. Two kinds of uncertainties are observed. The first comes from the difference between the physical phenomenon and the values recorded experimentally. The second concerns the gap between the code and the physical system. To reduce this difference, often named model bias, discrepancy, or model error, computer codes are generally made more complex in order to make them more realistic. These improvements lead to time-consuming codes. Moreover, a code often depends on parameters that must be set by the user to bring the code as close as possible to field data. This estimation task is called calibration. This thesis first proposes a review of the statistical methods necessary to understand Bayesian calibration. Then, a review of the main calibration methods is presented, with a comparative example based on a numerical code used to predict the power of a photovoltaic plant. The package called CaliCo, which allows a Bayesian calibration to be performed quickly on a wide range of numerical codes, is then presented. Finally, a real case study of a large photovoltaic power plant is introduced and the calibration carried out as part of a performance monitoring framework. This particular case of industrial code introduces calibration specificities that are discussed with two statistical models.
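The Bayesian calibration task described above can be illustrated with a deliberately biased toy code and a random-walk Metropolis sampler. This is a minimal sketch, not the CaliCo implementation: the linear "code", the missing quadratic term standing in for model discrepancy, the priors, and the noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Field data": the true process has a small quadratic term that the
# computer code below ignores, i.e. the code carries a model error.
x = np.linspace(0.0, 1.0, 30)
y_obs = 2.0 * x + 0.1 * x**2 + rng.normal(0.0, 0.05, x.size)

def code(theta):
    return theta * x            # simplified computer code: linear only

def log_post(theta):
    # Gaussian likelihood (noise std assumed known) plus a wide Gaussian prior.
    resid = y_obs - code(theta)
    return -0.5 * np.sum(resid**2) / 0.05**2 - 0.5 * theta**2 / 10.0**2

# Random-walk Metropolis-Hastings over the calibration parameter theta.
samples, theta, lp = [], 1.0, log_post(1.0)
for _ in range(20000):
    prop = theta + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])
post_mean = post.mean()
```

Because the code is missing the quadratic term, the posterior concentrates on a slope slightly above 2.0: the calibration absorbs part of the discrepancy, which is exactly the pitfall that motivates explicit discrepancy modeling.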
John, David Nicholas [Verfasser], e Vincent [Akademischer Betreuer] Heuveline. "Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge / David Nicholas John ; Betreuer: Vincent Heuveline". Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/122909265X/34.
Braun, Mathias. "Reduced Order Modelling and Uncertainty Propagation Applied to Water Distribution Networks". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0050/document.
Water distribution systems are large, spatially distributed infrastructures that ensure the distribution of potable water of sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameters. Two major challenges are the time constraints on the solution and the uncertain character of the model parameters. The main objectives of this thesis are thus the investigation of projection-based reduced-order modelling techniques for the time-efficient solution of the hydraulic system, as well as the spectral propagation of parameter uncertainties for improved uncertainty quantification. The thesis gives an overview of the mathematical methods being used. This is followed by the definition and discussion of the hydraulic network model, for which a new method for the derivation of the sensitivities is presented, based on the adjoint method. The specific objectives for the development of reduced-order models are the application of projection-based methods, the development of more efficient adaptive sampling strategies, and the use of hyper-reduction methods for the fast evaluation of nonlinear residual terms. For the propagation of uncertainties, spectral methods are introduced into the hydraulic model and an intrusive hydraulic model is formulated. With the objective of a more efficient analysis of the parameter uncertainties, the spectral propagation is then evaluated on the basis of the reduced model. The results show that projection-based reduced-order models give a considerable benefit with respect to the computational effort. While the use of adaptive sampling resulted in a more efficient use of pre-calculated system states, the use of hyper-reduction methods did not reduce the computational burden and has to be explored further.
The propagation of the parameter uncertainties on the basis of the spectral methods is shown to be comparable to Monte Carlo simulations in accuracy, while significantly reducing the computational effort.
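The accuracy-versus-cost comparison between spectral propagation and Monte Carlo made in this abstract can be sketched on a scalar toy model, not the hydraulic network itself. The lognormal response, the truncation degree, and the sample sizes below are illustrative assumptions; the exact moments of y = exp(σξ) provide the reference.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Toy response with one standard-normal uncertain input xi; for
# y = exp(sigma*xi) the exact mean and variance are known in closed form.
sigma = 0.3
g = lambda xi: np.exp(sigma * xi)

# Probabilists' Gauss-Hermite quadrature (weight exp(-xi**2/2)).
nodes, weights = He.hermegauss(12)
norm = math.sqrt(2.0 * math.pi)

# Galerkin projection onto Hermite polynomials He_k, using E[He_k^2] = k!.
degree = 6
coeffs = []
for k in range(degree + 1):
    basis_k = np.zeros(k + 1)
    basis_k[k] = 1.0
    Hk = He.hermeval(nodes, basis_k)
    coeffs.append(np.sum(weights * g(nodes) * Hk) / norm / math.factorial(k))

# Moments read off the polynomial chaos coefficients.
mean_pce = coeffs[0]
var_pce = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)

# Crude Monte Carlo for comparison: far more model runs for less accuracy.
rng = np.random.default_rng(1)
mc_mean = g(rng.normal(size=100_000)).mean()
```

Twelve quadrature nodes recover the mean and variance to near machine precision, while 100,000 Monte Carlo runs still carry sampling error of order 10⁻³, which mirrors the efficiency argument made in the abstract.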
Carozzi, M. "AMMONIA EMISSIONS FROM ARABLE LANDS IN PO VALLEY: METHODOLOGIES, DYNAMICS AND QUANTIFICATION". Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/170268.
Krifa, Mohamed. "Amortissement virtuel pour la conception vibroacoustique des lanceurs futurs". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD058.
In the dimensioning of space launchers, controlling damping is a major problem. In the absence of real structural tests, which are very expensive and only carried out in the final qualification phase, damping modeling can lead to over-sizing of the structure, while the aim is to reduce the cost of launching a rocket while guaranteeing the vibratory comfort of the payload.[...]
Reimer, Joscha [Verfasser], Thomas [Akademischer Betreuer] Slawig e Andreas [Gutachter] Oschlies. "Optimization of model parameters, uncertainty quantification and experimental designs in climate research / Joscha Reimer ; Gutachter: Andreas Oschlies ; Betreuer: Thomas Slawig". Kiel : Universitätsbibliothek Kiel, 2020. http://nbn-resolving.de/urn:nbn:de:gbv:8-mods-2020-00067-3.
Reimer, Joscha [Verfasser], Thomas [Akademischer Betreuer] Slawig e Andreas [Gutachter] Oschlies. "Optimization of model parameters, uncertainty quantification and experimental designs in climate research / Joscha Reimer ; Gutachter: Andreas Oschlies ; Betreuer: Thomas Slawig". Kiel : Universitätsbibliothek Kiel, 2020. http://d-nb.info/1205735364/34.
Sittichok, Ketvara. "Improving Seasonal Rainfall and Streamflow Forecasting in the Sahel Region via Better Predictor Selection, Uncertainty Quantification and Forecast Economic Value Assessment". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34229.
Sevieri, Giacomo [Verfasser], Hermann G. [Akademischer Betreuer] Matthies e Falco Anna [Akademischer Betreuer] De. "The seismic assessment of existing concrete gravity dams : FE model uncertainty quantification and reduction / Giacomo Sevieri ; Hermann G. Matthies, Anna De Falco". Braunschweig : Technische Universität Braunschweig, 2021. http://d-nb.info/1225038251/34.
Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been approached from a component view, without considering interactions between components and effects of the environment, leading to mispredictions of the failure time of complex systems. In this work, a system-level prognostics approach is proposed. This approach is based on a new modeling framework, the inoperability input-output model (IIM), which makes it possible to tackle the interactions between components and the effects of the mission profile, and which can be applied to heterogeneous systems. Then, a new methodology for online joint system RUL (SRUL) prediction and model parameter estimation is developed, based on particle filtering (PF) and gradient descent (GD). In detail, the state of health of the system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system, the Tennessee Eastman process. The obtained results highlight its effectiveness in predicting the SRUL in reasonable computing time.
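The probabilistic health-state estimation via particle filtering mentioned above can be sketched with a generic bootstrap particle filter on a toy linear degradation model. This is not the IIM or the PF+GD machinery of the thesis: the drift, noise levels, failure threshold, and the naive RUL read-out are all assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy degradation model: a health indicator drifts toward a failure threshold.
drift, q_std, r_std, threshold = 0.10, 0.02, 0.20, 10.0
T = 50
x_true = np.cumsum(drift + rng.normal(0.0, q_std, T))          # hidden state
y_obs = x_true + rng.normal(0.0, r_std, T)                     # noisy sensor

# Bootstrap particle filter: predict, weight by likelihood, resample.
N = 500
particles = np.zeros(N)
for y in y_obs:
    particles = particles + drift + rng.normal(0.0, q_std, N)  # predict
    w = np.exp(-0.5 * ((y - particles) / r_std) ** 2)          # weight
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)             # resample

x_hat = particles.mean()
# Naive RUL estimate: remaining distance to the threshold at the mean drift.
rul_hat = max(threshold - x_hat, 0.0) / drift
```

In a full prognostics setting the particle cloud, rather than only its mean, would be propagated to the threshold to yield an RUL distribution instead of a point estimate.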
Muhammad, Ruqiah. "A new dynamic model for non-viral multi-treatment gene delivery systems for bone regeneration: parameter extraction, estimation, and sensitivity". Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6996.
Texto completo da fonteMesado, Melia Carles. "Uncertainty Quantification and Sensitivity Analysis for Cross Sections and Thermohydraulic Parameters in Lattice and Core Physics Codes. Methodology for Cross Section Library Generation and Application to PWR and BWR". Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86167.
This doctoral work, carried out at the Universitat Politècnica de València (UPV), aims to cover the first phase of the benchmark presented by the Uncertainty Analysis in Modeling (UAM-LWR) expert group. The main contribution to the benchmark by the author of this thesis is the development of a MATLAB program requested by the benchmark organizers, which is used to generate neutronic libraries to be distributed among the benchmark participants. The UAM benchmark aims to determine the uncertainty introduced by coupled multi-physics and multi-scale light water reactor analysis codes. The benchmark is divided into three phases: 1. Neutronics phase: obtaining the collapsed and homogenized problem-specific neutronic parameters and cross sections, as well as the criticality analysis. 2. Core phase: separate thermal-hydraulic and neutronic analyses. 3. System phase: coupled thermal-hydraulic and neutronic analysis. In this thesis the main objectives of the first phase are completed. Specifically, a methodology is developed to propagate the uncertainty of cross sections and other neutronic parameters through a lattice code and a core simulator. An uncertainty and sensitivity analysis is carried out for the cross sections contained in the ENDF/B-VII neutronic library. Their uncertainty is propagated through the lattice code SCALE6.2.1, including the collapsing and homogenization phases, up to the generation of a problem-specific neutronic library. The uncertainty contained in this library can then continue to be propagated through a core simulator, in this study PARCSv3.2. For the uncertainty and sensitivity analysis, the SAMPLER module (available in the latest version of SCALE) and the statistical tool DAKOTA 6.3 have been used.
As part of this process, a methodology has also been developed to obtain neutronic libraries in NEMTAB format for use in core simulators. A comparison with the CASMO-4 code has been performed to verify the complete methodology. The methodology has been tested using a boiling water reactor (BWR); however, there is no concern or limitation regarding its use with another type of nuclear reactor. For the quantification of the uncertainty, the stochastic Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) methodology is used. This methodology makes use of the high-fidelity model and non-parametric sampling to propagate the uncertainty. As a result, the number of samples (determined with the revised Wilks formula) does not depend on the number of input parameters; it depends only on the desired confidence and uncertainty levels of the output parameters. Moreover, the probability distribution functions are not limited to normality. The main drawback is that the probability distributions of each input parameter must be available. Where possible, the input probability distributions are defined using information found in the related literature; otherwise, the uncertainty is defined on the basis of expert opinion. A second scenario is used to propagate the uncertainty of different thermal-hydraulic parameters through the coupled code TRACE5.0p3/PARCSv3.0. In this case, a PWR reactor is used to simulate a rod drop transient. As a novel feature, the core is modeled element by element following a fully 3D discretization. No other study has been found that uses such a detailed 3D core. The GRS methodology and DAKOTA 6.3 are also used for this uncertainty and sensitivity analysis.
Mesado Melia, C. (2017). Uncertainty Quantification and Sensitivity Analysis for Cross Sections and Thermohydraulic Parameters in Lattice and Core Physics Codes. Methodology for Cross Section Library Generation and Application to PWR and BWR [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86167
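The Wilks formula mentioned in the abstract above, which fixes the number of code runs in the GRS methodology independently of the number of input parameters, can be sketched as follows. The function is a straightforward implementation of the one-sided Wilks criterion; the argument names are illustrative.

```python
import math

def wilks_n(beta, gamma, order=1):
    """Smallest n such that the order-th largest of n code runs bounds the
    gamma-quantile of the output with confidence beta (one-sided Wilks)."""
    n = order
    while True:
        # Confidence that at least `order` of the n runs exceed the quantile.
        confidence = 1.0 - sum(
            math.comb(n, j) * (1.0 - gamma) ** j * gamma ** (n - j)
            for j in range(order)
        )
        if confidence >= beta:
            return n
        n += 1

n_95_95 = wilks_n(0.95, 0.95)             # first-order 95/95: 59 runs
n_95_95_rev = wilks_n(0.95, 0.95, order=2)  # second-order (revised): 93 runs
```

These values (59 and 93 runs for the 95%/95% case) are the classical first- and second-order results, and they explain why sampling-based uncertainty analyses such as the one in this thesis remain affordable regardless of how many input parameters are perturbed.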
El, Bouti Tamara. "Optimisation robuste et application à la reconstruction du réseau artériel humain". Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS018V/document.
Cardiovascular diseases are currently the leading cause of mortality in developed countries, due to the constant increase in risk factors in the population. Several prospective and retrospective studies have shown that arterial stiffness is an important predictor of these diseases. Unfortunately, this parameter is difficult to measure experimentally. We propose a numerical approach to determine the arterial stiffness of an arterial network using a patient-specific one-dimensional model of the temporal variation of the section and blood flow of the arteries. The proposed approach estimates the optimal parameters of the reduced model, including the arterial stiffness, using non-invasive measurements such as MRI, echotracking, and applanation tonometry. Different optimization results applied to experimental cases are presented. In order to determine the robustness of the model with respect to its parameters, an uncertainty analysis has also been carried out to measure the contribution of the model input parameters, alone or in interaction with other inputs, to the variation of the model output, here the arterial pulse pressure. This study has shown that the numerical pulse pressure is a reliable indicator that can help to diagnose arterial hypertension. We can then provide the practitioner with a robust patient-specific tool allowing an early and reliable diagnosis of cardiovascular diseases based on a non-invasive exam.
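The uncertainty analysis described above, measuring the contribution of each input, alone or in interaction, to the output variance, is variance-based (Sobol-type) sensitivity analysis. A minimal pick-freeze sketch on a toy two-input model is shown below; the model, inputs, and sample sizes are invented for the example and are unrelated to the arterial model itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy additive model with two uniform inputs; analytically S1 = 0.8, S2 = 0.2.
model = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1]

N = 100_000
A = rng.uniform(size=(N, 2))      # base sample
B = rng.uniform(size=(N, 2))      # independent resample

yA = model(A)
varY = yA.var()
S = []
for i in range(2):
    C = B.copy()
    C[:, i] = A[:, i]             # "freeze" input i, resample the others
    yC = model(C)
    # Cov(Y, Y_i) = Var(E[Y | X_i]) is the first-order partial variance.
    S.append((np.mean(yA * yC) - yA.mean() * yC.mean()) / varY)
```

For this additive model the two first-order indices sum to one; interaction effects would show up as a gap between that sum and one, which is the "by interaction with other inputs" contribution the abstract refers to.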
Ait Mamoun, Khadija. "Vehicle routing problem under uncertainty : case of pharmaceutical supply chain". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR08.
The enhancement of logistics distribution performance and the optimization of transportation have emerged as critical concerns in recent years. The pharmaceutical distribution sector faces significant challenges in route planning and transport network optimization, with uncertainties often leading to delays and losses. The multifaceted challenges encompass the need to elevate product quality, reduce costs, minimize total travel distance, and streamline transportation time for effective planning. Within this context, the Vehicle Routing Problem (VRP) stands out as one of the most extensively analysed problems in the realms of transportation, distribution, and logistics. Achieving a delicate equilibrium between cost considerations and delivering high-quality pharmaceutical products is a primary objective in pharmaceutical distribution. This research delves into both the Static Vehicle Routing Problem (SVRP) and the Dynamic Vehicle Routing Problem (DVRP). Real-world logistical planning frequently encounters uncertainties at the outset, including uncertain customer demand, delivery quantities, time constraints, and more. This thesis introduces the "temperature condition" as a fundamental constraint in pharmaceutical distribution, representing a source of uncertainty that directly impacts drug quality, thereby influencing logistics distribution and overall supply chain performance. Furthermore, the thesis incorporates uncertainty quantification for modelling uncertain travel times in both recurrent and non-recurrent congestion scenarios. The methodology employed for this purpose is the collocation method, initially validated through Monte Carlo Simulation (MCS). By addressing these multifaceted challenges and uncertainties, this research seeks to contribute to the development of robust strategies in pharmaceutical distribution, ensuring the optimization of routes, reduction of costs, and maintenance of high-quality product standards.
The findings of this study offer valuable insights for logistics managers and planners aiming to navigate the complexities of pharmaceutical distribution, fostering efficiency and resilience in the face of uncertainties.
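The collocation-versus-Monte-Carlo validation mentioned in the abstract can be sketched on a single uncertain travel time. The quadratic travel-time model and all numbers below are assumptions made for the illustration, not the thesis's model; Gauss-Hermite collocation is exact for this polynomial response with very few evaluations.

```python
import numpy as np

# Toy uncertain travel time on one arc: nominal time t0 perturbed by a
# standard-normal factor xi, with t(xi) = t0 * (1 + a*xi)^2 (nonnegative).
t0, a = 10.0, 0.2
travel_time = lambda xi: t0 * (1.0 + a * xi) ** 2

# Stochastic collocation: 5 Gauss-Hermite nodes (weight exp(-xi**2/2)).
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / np.sqrt(2.0 * np.pi)
mean_colloc = np.sum(weights * travel_time(nodes))
var_colloc = np.sum(weights * (travel_time(nodes) - mean_colloc) ** 2)

# Monte Carlo cross-check: many more evaluations for similar accuracy.
rng = np.random.default_rng(7)
samples = travel_time(rng.normal(size=200_000))
mean_mc, var_mc = samples.mean(), samples.var()
```

Five model evaluations reproduce the exact mean 10.4 and variance 16.32, while the 200,000-sample Monte Carlo estimate still fluctuates at the second decimal; this gap is what makes collocation attractive once the travel-time model sits inside a routing optimization loop.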
Rubio, Paul-Baptiste. "Stratégies numériques innovantes pour l’assimilation de données par inférence bayésienne". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN055/document.
The work is placed within the framework of data assimilation in structural mechanics. It aims at developing new numerical tools to permit real-time and robust data assimilation that could then be used in various engineering activities. A specific targeted activity is the implementation of DDDAS (Dynamic Data Driven Application System) applications, in which a continuous exchange between simulation tools and experimental measurements is envisioned with the aim of creating feedback control loops on mechanical systems. In this context, and in order to take various uncertainty sources (modeling error, measurement noise, ...) into account, a powerful and general stochastic methodology based on Bayesian inference is considered. However, a well-known drawback of such an approach is its computational complexity, which makes real-time simulation and sequential assimilation difficult tasks. The PhD work thus proposes to couple Bayesian inference with attractive and advanced numerical techniques so that real-time and sequential assimilation can be envisioned. First, PGD model reduction is introduced to facilitate the computation of the likelihood function, uncertainty propagation through complex models, and the sampling of the posterior density. Then, Transport Map sampling is investigated as a substitute for classical MCMC procedures for posterior sampling. It is shown that this technique leads to deterministic computations, with clear convergence criteria, and that it is particularly suited to sequential data assimilation. Here again, the use of PGD model reduction greatly facilitates the process by recovering gradient and Hessian information in a straightforward manner.
Finally, to increase robustness, on-the-fly correction of model bias is addressed using data-based enrichment terms. The overall cost-effective methodology is applied and illustrated on several academic and real-life test cases, including for instance the real-time updating of models for the control of welding processes, or that of mechanical tests involving damageable concrete structures with full-field measurements.
DuFour, Mark R. "Hydroacoustic Quantification of Lake Erie Walleye (Sander vitreus) Distribution and Abundance". University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1483715286731694.
Resseguier, Valentin. "Mixing and fluid dynamics under location uncertainty". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S004/document.
This thesis develops, analyzes, and demonstrates several valuable applications of randomized fluid dynamics models referred to as models under location uncertainty. The velocity is decomposed into large-scale components and random time-uncorrelated small-scale components. This assumption leads to a modification of the material derivative and hence of every fluid dynamics model. Throughout the thesis, the mixing induced by deterministic low-resolution flows is also investigated. We first apply this decomposition to reduced-order models (ROMs). The fluid velocity is expressed on a finite-dimensional basis and its evolution law is projected onto each of these modes. We derive two types of ROMs of the Navier-Stokes equations. A deterministic LES-like model is able to stabilize ROMs and to better analyze the influence of the residual velocity on the resolved component. The random one additionally maintains the variability of stable modes and quantifies the model errors. We derive random versions of several geophysical models. We numerically study transport under location uncertainty through a simplified model. A single realization of our model retrieves the small-scale tracer structures better than a deterministic simulation. Furthermore, a small ensemble of simulations accurately predicts and describes the extreme events and the bifurcations, as well as the amplitude and the position of the ensemble errors. Another of our derived simplified models quantifies the frontolysis and the frontogenesis in the upper ocean. This thesis also studies the mixing of tracers generated by smooth fluid flows after a finite time. We propose a simple model to describe the stretching as well as the spatial and spectral structures of advected tracers.
With a toy flow but also with satellite images, we apply our model to locally and globally describe the mixing, specify the advection time and the filter width of the Lagrangian advection method, as well as the turbulent diffusivity in numerical simulations.
Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models". University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.