To view other types of publications on this topic, follow the link: Predictive uncertainty quantification.

Dissertations / Theses on the topic "Predictive uncertainty quantification"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 32 dissertations for your research on the topic "Predictive uncertainty quantification".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if these details are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Lonsdale, Jack Henry. "Predictive modelling and uncertainty quantification of UK forest growth." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16202.

Full text of the source
Abstract:
Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single age mono-culture stands. Forest strategy for Scotland, England, and Wales all include efforts to achieve further afforestation. The aim of this afforestation is to provide a multi-functional forest with a broad range of benefits. Due to the time scale involved in forestry, accurate forecasts of stand productivity (along with clearly defined uncertainties) are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project model comparison, Bayesian calibration, and data assimilation methods were all used to attempt to improve forecasts and understanding of uncertainty therein of the two most important conifers in UK forestry. Three different forest growth models were compared in simulating growth of Scots pine. A yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model were used. Predictions were compared graphically over the typical productivity range for Scots pine in the UK. Strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between models was in volume and biomass in unthinned stands where the yield table predicted a much larger range compared to the other two models. Future advances in data availability and computing power should allow for greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions which extend to non-standard management prescriptions and estimates of early growth and yield. A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time. Bayesian calibrations allow both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model with which forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison. A complete calibration of the more probable model structure was then completed. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future. Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an Ensemble Kalman Filter. Uncertainty was reduced following the second data assimilation in all of the state variables. However, errors in stand delineation and estimated stand yield class may have caused observational uncertainty to be greater thus reducing the efficacy of the method for reducing overall uncertainty.
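The abstract's final step, combining model predictions with LiDAR observations through an Ensemble Kalman Filter, can be illustrated with a minimal stochastic-EnKF update. The sketch below is a generic two-variable example with made-up state variables, observation operator, and error covariances; it is not the SLeDG/LiDAR configuration used in the thesis.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of model states (e.g. stand volume, top height) for 100 members.
n_ens, n_state = 100, 2
ensemble = rng.normal(loc=[250.0, 18.0], scale=[40.0, 2.0], size=(n_ens, n_state))

# Observation of the first state variable only (e.g. a LiDAR-derived volume), with error variance R.
H = np.array([[1.0, 0.0]])          # observation operator
R = np.array([[15.0 ** 2]])         # observation error covariance
y_obs = np.array([230.0])

# Ensemble covariance and Kalman gain.
X = ensemble - ensemble.mean(axis=0)
P = X.T @ X / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member with a perturbed observation (stochastic EnKF).
perturbed = y_obs + rng.normal(0.0, np.sqrt(R[0, 0]), size=(n_ens, 1))
analysis = ensemble + (perturbed - ensemble @ H.T) @ K.T

print("prior mean:", ensemble.mean(axis=0), "posterior mean:", analysis.mean(axis=0))
print("prior var :", ensemble.var(axis=0), "posterior var :", analysis.var(axis=0))

As in the thesis, the analysis variance of the observed variable shrinks relative to the prior, which is the sense in which assimilation reduces forecast uncertainty.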
APA, Harvard, Vancouver, ISO, and other styles
2

Gligorijevic, Djordje. "Predictive Uncertainty Quantification and Explainable Machine Learning in Healthcare." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/520057.

Full text of the source
Abstract:
Computer and Information Science
Ph.D.
Predictive modeling is an increasingly important part of decision making. Advances in Machine Learning predictive modeling have spread across many domains, bringing significant improvements in performance and providing unique opportunities for novel discoveries. Notably important domains are medicine and healthcare, which take care of people's wellbeing. While these are among the most developed areas of science, with active research, there are many ways they can be improved. In particular, novel tools developed on the basis of Machine Learning theory have brought benefits across many areas of clinical practice, pushing the boundaries of medical science and directly affecting the well-being of millions of patients. Additionally, the healthcare and medicine domains require predictive modeling to anticipate and overcome the many obstacles the future may hold. These kinds of applications involve precise decision-making processes which require accurate predictions. However, a good prediction on its own is often insufficient. There has been no major focus on developing algorithms with good-quality uncertainty estimates. This thesis therefore aims at providing a variety of solutions based on learning high-quality uncertainty estimates, or on providing interpretability of the models where needed, for the purpose of improving existing tools used in practice and allowing many other tools to be used where uncertainty is the key factor for decision making. The first part of the thesis proposes approaches for learning high-quality uncertainty estimates for both short- and long-term predictions in multi-task learning, developed on top of continuous probabilistic graphical models. In many scenarios, especially in long-term prediction, it may be of great importance for the models to provide a reliability flag in order to be accepted by domain experts. To this end we explore a widely applied structured regression model with the goal of providing meaningful uncertainty estimates on various predictive tasks. Our particular interest is in modeling uncertainty propagation when predicting far into the future. To address this important problem, our approach centers on providing an uncertainty estimate by modeling input features as random variables, which allows modeling uncertainty from noisy inputs. When a model iteratively produces errors, it should propagate uncertainty over the predictive horizon, which may provide invaluable information for decision making based on the predictions. In the second part of the thesis we propose novel neural embedding models for learning low-dimensional embeddings of medical concepts, such as diseases and genes, show how they can be interpreted in order to assess their quality, and show how they can be used to solve many problems in medical and healthcare research. We use EHR data to discover novel relationships between diseases by studying their comorbidities (i.e., co-occurrences in patients). We train our models on a large-scale EHR database comprising more than 35 million inpatient cases. To confirm the value and potential of the proposed approach we evaluate its effectiveness on a held-out set. Furthermore, for selected diseases we provide a list of candidate genes whose disease-gene associations have not been studied previously, allowing biomedical researchers to better focus their often very costly lab studies.
We furthermore examine how disease heterogeneity can affect the quality of the learned embeddings and propose an approach for learning the types of such heterogeneous diseases, focusing primarily on learning types of sepsis. Finally, we evaluate the quality of the low-dimensional embeddings on tasks of predicting hospital quality indicators such as length of stay, total charges, and mortality likelihood, demonstrating their superiority over other approaches. In the third part of the thesis we focus on decision making in the medicine and healthcare domains by developing state-of-the-art deep learning models capable of outperforming human performance while maintaining good interpretability and uncertainty estimates.
Temple University--Theses
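The idea described in the abstract above of modeling input features as random variables so that their noise propagates into the prediction can be illustrated for the simplest case of a linear predictor, where the propagation is available in closed form. The weights, input distribution, and noise level below are illustrative assumptions, not values from the thesis.

import numpy as np

# Linear predictor y = w @ x + b, with inputs modelled as Gaussian random variables.
w = np.array([0.8, -0.3, 1.2])
b = 0.5
x_mean = np.array([2.0, 1.0, 0.5])
x_cov = np.diag([0.1, 0.05, 0.2])      # uncertainty on the (noisy) inputs
noise_var = 0.25                       # irreducible observation noise

# For a linear model the predictive variance decomposes in closed form:
y_mean = w @ x_mean + b
y_var = w @ x_cov @ w + noise_var      # propagated input uncertainty + noise

print(f"prediction: {y_mean:.2f} +/- {np.sqrt(y_var):.2f} (1 sigma)")

For nonlinear structured models, as in the thesis, the same decomposition has to be approximated, but the principle of carrying input variance through to the predictive horizon is the same.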
APA, Harvard, Vancouver, ISO, and other styles
3

Zaffran, Margaux. "Post-hoc predictive uncertainty quantification : methods with applications to electricity price forecasting." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX033.

Full text of the source
Abstract:
The surge of ever more powerful statistical learning algorithms offers promising prospects for electricity price forecasting. However, these methods provide point forecasts, with no indication of the degree of confidence to be placed in them. To ensure the safe deployment of these predictive models, it is crucial to quantify their predictive uncertainty. This PhD thesis focuses on developing predictive intervals for any underlying algorithm. While motivated by the electrical sector, the methods developed, based on Split Conformal Prediction (SCP), are generic: they can be applied in many other sensitive fields. First, this thesis studies post-hoc predictive uncertainty quantification for time series. The first bottleneck to applying SCP in order to obtain guaranteed probabilistic electricity price forecasts in a post-hoc fashion is the highly non-stationary temporal behaviour of electricity prices, which breaks the exchangeability assumption. The first contribution proposes a parameter-free algorithm tailored for time series, based on a theoretical analysis of the efficiency of the existing Adaptive Conformal Inference method. The second contribution conducts an extensive application study on a novel data set of recent and turbulent French spot prices from 2020 and 2021. Another challenge is missing values (NAs). In a second part, this thesis analyzes the interplay between NAs and predictive uncertainty quantification. The third contribution highlights that NAs induce heteroskedasticity, leading to uneven coverage depending on which features are observed. Two algorithms are designed that recover equalized coverage for any NA pattern, under distributional assumptions on the missingness mechanism. The fourth contribution pushes the theoretical analysis further, to understand precisely which distributional assumptions are unavoidable for building informative prediction regions. It also unifies the previously proposed algorithms into a general framework that demonstrates empirical robustness to violations of the assumed missingness distribution.
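A minimal sketch of the Split Conformal Prediction building block mentioned above, using absolute residuals as conformity scores and assuming exchangeable data; it does not implement the adaptive, time-series variants developed in the thesis. The data, model, and miscoverage level alpha are illustrative.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=1.0, size=600)

# Split: proper training set / calibration set / test set.
X_train, y_train = X[:300], y[:300]
X_cal, y_cal = X[300:500], y[300:500]
X_test, y_test = X[500:], y[500:]

model = LinearRegression().fit(X_train, y_train)

# Conformity scores on the calibration set (absolute residuals).
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1
n_cal = len(scores)
k = int(np.ceil((n_cal + 1) * (1 - alpha)))     # rank of the conformal quantile
q = np.sort(scores)[k - 1]

# Prediction intervals with marginal coverage >= 1 - alpha under exchangeability.
pred = model.predict(X_test)
lower, upper = pred - q, pred + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage: {coverage:.2f}, half-width: {q:.2f}")

The non-stationarity discussed in the abstract is precisely what breaks the exchangeability this sketch relies on, which is why the thesis adapts the quantile online instead of fixing it once.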
APA, Harvard, Vancouver, ISO, and other styles
4

Riley, Matthew E. "Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1314895435.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Freeman, Jacob Andrew. "Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/77168.

Full text of the source
Abstract:
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that predicts a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, evolutionary algorithm (EA) and dividing rectangles (DIRECT), twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study that includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to using a relatively coarse (six million cell) grid. That is, the best design drag coefficient estimate is within 15 and 42% of the true value; however, its improvement relative to the no-flaps baseline is accurate within 3-9% uncertainty.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Jinlong. "Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85129.

Full text of the source
Abstract:
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled.
Ph. D.
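A toy sketch of the machine-learning-assisted idea described above: a regressor is trained offline on (synthetic) high-fidelity data to predict the discrepancy that a baseline closure misses, and the learned correction is then added to the baseline prediction. The features, data, and random-forest choice are illustrative assumptions, not the thesis's mean-flow features, flow cases, or learning algorithm.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Hypothetical mean-flow features (e.g. strain-rate magnitude, pressure gradient, wall distance).
features = rng.normal(size=(2000, 3))
# Hypothetical "true" Reynolds-stress discrepancy that the baseline closure misses.
discrepancy = 0.5 * features[:, 0] ** 2 - 0.2 * features[:, 1] * features[:, 2]
discrepancy += rng.normal(scale=0.05, size=2000)

# Offline training on high-fidelity data (the DNS/LES stand-in here is synthetic).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:1500], discrepancy[:1500])

# At prediction time, the learned discrepancy corrects the baseline closure output.
baseline_stress = rng.normal(size=500)                  # placeholder RANS prediction
corrected_stress = baseline_stress + model.predict(features[1500:])
print("test R^2 of discrepancy model:", model.score(features[1500:], discrepancy[1500:]))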
APA, Harvard, Vancouver, ISO, and other styles
7

Cortesi, Andrea Francesco. "Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0021/document.

Full text of the source
Abstract:
Accurate prediction of hypersonic high-enthalpy flows is of major relevance for atmospheric entry missions. However, uncertainties on the freestream conditions and on other parameters of the physico-chemical models are inevitable. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty in the outputs. In this work, we use a statistical framework for direct propagation of uncertainties and for inverse freestream reconstruction applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat-flux measurements for the reconstruction of freestream variables and uncertain model parameters for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, allowing us to account for the sources of uncertainty and for measurement errors. Different techniques are introduced to enhance the capabilities of the statistical framework for uncertainty quantification. First, an improved surrogate modeling technique is proposed, based on Kriging and Sparse Polynomial Dimensional Decomposition. Then a method is proposed to adaptively add new training points to an existing experimental design in order to improve the accuracy of the trained surrogate model. A way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
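The Bayesian reconstruction of freestream parameters from a heat-flux measurement can be sketched with a random-walk Metropolis sampler over a cheap, hypothetical forward model. The forward model, prior bounds, and noise level below are invented for illustration; the thesis relies on Kriging/sparse-PDD surrogates and active-subspace-accelerated MCMC rather than this plain sampler.

import numpy as np

rng = np.random.default_rng(3)

def forward(velocity):
    # Hypothetical cheap surrogate: stagnation heat flux grows with velocity (arbitrary units).
    return 1.5e-4 * velocity ** 3

true_velocity = 5000.0
sigma_obs = 2000.0
q_measured = forward(true_velocity) + rng.normal(scale=sigma_obs)   # noisy flight measurement

def log_posterior(velocity):
    if not (3000.0 < velocity < 8000.0):                             # uniform prior bounds
        return -np.inf
    return -0.5 * ((q_measured - forward(velocity)) / sigma_obs) ** 2

# Random-walk Metropolis sampling of the freestream velocity.
samples, current = [], 4000.0
log_p = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(scale=100.0)
    log_p_new = log_posterior(proposal)
    if np.log(rng.uniform()) < log_p_new - log_p:
        current, log_p = proposal, log_p_new
    samples.append(current)

post = np.array(samples[5000:])                                      # discard burn-in
print(f"posterior velocity: {post.mean():.0f} +/- {post.std():.0f}")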
APA, Harvard, Vancouver, ISO, and other styles
8

Erbas, Demet. "Sampling strategies for uncertainty quantification in oil recovery prediction." Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/70.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Whiting, Nolan Wagner. "Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91903.

Full text of the source
Abstract:
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model-form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with sparse data and/or large extrapolations, while Bayesian calibration outperformed the other approaches where there is an extensive amount of experimental data covering the application domain.
Master of Science
Uncertainties often exists when conducting physical experiments, and whether this uncertainty exists due to input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used to either quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be in the form of aleatory (uncertainties which a probability distribution can be applied for likelihood of drawing values) or epistemic uncertainties (no knowledge, inputs drawn within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics(CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches will be assessed for their ability to tightly encapsulate the true value in nature at locations both where experimental results are provided and prediction locations where no experimental data are available. Also of interest was to assess how well each method could predict the uncertainties about the simulation outside of the region in which experimental observations were made, and model form uncertainties could be observed.
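An area-style validation metric of the kind discussed above can be sketched as the area between the empirical CDFs of the simulation outcomes and the (sparse) experimental outcomes. This is a generic illustration on assumed synthetic data, not the exact AVM or MAVM formulations assessed in the thesis.

import numpy as np

def ecdf(values, grid):
    # Empirical CDF of `values` evaluated on `grid`.
    values = np.sort(values)
    return np.searchsorted(values, grid, side="right") / len(values)

rng = np.random.default_rng(4)
simulation = rng.normal(loc=1.02, scale=0.05, size=1000)   # e.g. predicted lift coefficient
experiment = rng.normal(loc=1.00, scale=0.04, size=15)     # sparse experimental samples

# Area between the two empirical CDFs, integrated on a common grid (rectangle rule).
grid = np.linspace(min(simulation.min(), experiment.min()),
                   max(simulation.max(), experiment.max()), 2000)
gap = np.abs(ecdf(simulation, grid) - ecdf(experiment, grid))
area_metric = np.sum(gap[:-1] * np.diff(grid))
print(f"area validation metric estimate: {area_metric:.4f}")

A larger area indicates a larger mismatch between the predicted and observed distributions; modified variants additionally account for the confidence interval induced by the small experimental sample.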
APA, Harvard, Vancouver, ISO, and other styles
10

Phadnis, Akash. "Uncertainty quantification and prediction for non-autonomous linear and nonlinear systems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85476.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 189-197).
The science of uncertainty quantification has gained a lot of attention over recent years. This is because models of real processes always contain some elements of uncertainty, and also because real systems can be better described using stochastic components. Stochastic models can therefore be utilized to provide a most informative prediction of possible future states of the system. In light of the multiple scales, nonlinearities and uncertainties in ocean dynamics, stochastic models can be most useful to describe ocean systems. Uncertainty quantification schemes developed in recent years include order reduction methods (e.g. proper orthogonal decomposition (POD)), error subspace statistical estimation (ESSE), polynomial chaos (PC) schemes and dynamically orthogonal (DO) field equations. In this thesis, we focus our attention on DO and various PC schemes for quantifying and predicting uncertainty in systems with external stochastic forcing. We develop and implement these schemes in a generic stochastic solver for a class of non-autonomous linear and nonlinear dynamical systems. This class of systems encapsulates most systems encountered in classic nonlinear dynamics and ocean modeling, including flows modeled by Navier-Stokes equations. We first study systems with uncertainty in input parameters (e.g. stochastic decay models and Kraichnan-Orszag system) and then with external stochastic forcing (autonomous and non-autonomous self-engineered nonlinear systems). For time-integration of system dynamics, stochastic numerical schemes of varied order are employed and compared. Using our generic stochastic solver, the Monte Carlo, DO and polynomial chaos schemes are inter-compared in terms of accuracy of solution and computational cost. To allow accurate time-integration of uncertainty due to external stochastic forcing, we also derive two novel PC schemes, namely, the reduced space KLgPC scheme and the modified TDgPC (MTDgPC) scheme. We utilize a set of numerical examples to show that the two new PC schemes and the DO scheme can integrate both additive and multiplicative stochastic forcing over significant time intervals. For the final example, we consider shallow water ocean surface waves and the modeling of these waves by deterministic dynamics and stochastic forcing components. Specifically, we time-integrate the Korteweg-de Vries (KdV) equation with external stochastic forcing, comparing the performance of the DO and Monte Carlo schemes. We find that the DO scheme is computationally efficient to integrate uncertainty in such systems with external stochastic forcing.
by Akash Phadnis.
S.M.
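Plain Monte Carlo propagation for a stochastic decay model with an uncertain decay rate and external stochastic forcing, one of the simplest test cases named in the abstract, can be sketched with an Euler-Maruyama loop. The parameter ranges and noise amplitude are illustrative assumptions; the DO and polynomial chaos schemes compared in the thesis are not shown here.

import numpy as np

rng = np.random.default_rng(5)

# Stochastic decay model: dx = -a*x dt + sigma dW, with uncertain decay rate a.
n_samples, n_steps, dt = 5000, 200, 0.01
a = rng.uniform(0.5, 1.5, size=n_samples)       # parametric (input) uncertainty
sigma = 0.3                                      # external stochastic forcing amplitude

x = np.ones(n_samples)                           # deterministic initial condition
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_samples)
    x = x - a * x * dt + sigma * dW              # Euler-Maruyama step

print(f"mean(x(T)) = {x.mean():.3f}, std(x(T)) = {x.std():.3f}")

Reduced-order schemes such as DO or polynomial chaos aim to reproduce these Monte Carlo statistics at a fraction of the cost by evolving a small number of modes or coefficients instead of thousands of realisations.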
APA, Harvard, Vancouver, ISO, and other styles
11

Kim, Jee Yun. "Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85210.

Full text of the source
Abstract:
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design by providing voxel-level control of material properties. This vast freedom in the design space also unlocks possibilities for optimization of the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on an efficient search for the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven to be insufficient, which necessitates novel and effective approaches to calibration. A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. The calibration results provide insight into the values these parameters converge to, as well as which material parameters the model output depends on most, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is meant to be a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model is aimed at accounting for the difference between the simulation output and the physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty.
Master of Science
A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer by layer manufacturing process of additive manufacturing can allow for this type of design because of its control over where material can be deposited. This possibility then raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction, however it also presents many issues such as long run times and uncertainty in context-specific inputs of the simulation. We instead have adopted a framework using advanced statistical methodology able to combine both experimental and simulation data to significantly reduce run times as well as quantify the various uncertainties associated with running simulations.
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Y. "Quantification of prediction uncertainty for principal components regression and partial least squares regression." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1433990/.

Full text of the source
Abstract:
Principal components regression (PCR) and partial least squares regression (PLS) are widely used in multivariate calibration in the fields of chemometrics, econometrics, social science and so forth, serving as alternative solutions to the problems which arise in ordinary least squares regression when explanatory variables are either collinear, or there are hundreds of explanatory variables with a relatively small sample size. Both PCR and PLS tackle the problems by constructing lower dimensional factors based on the explanatory variables. The extra step of factor construction makes the standard prediction uncertainty theory of ordinary least squares regression not directly applicable to the two reduced dimension methods. In the thesis, we start by reviewing the ordinary least squares regression prediction uncertainty theory, and then investigate how the theory performs when it extends to PCR and PLS, aiming at potentially better approaches. The first main contribution of the thesis is to clarify the quantification of prediction uncertainty for PLS. We rephrase existing methods with consistent mathematical notations in the hope of giving a clear guidance to practitioners. The second main contribution is to develop a new linearisation method for PLS. After establishing the theory, simulation and real data studies have been employed to understand and compare the new method with several commonly used methods. From the studies of simulations and a real dataset, we investigate the properties of simple approaches based on the theory of ordinary least squares theory, the approaches using resampling of data, and the local linearisation approaches including a classical and our improved new methods. It is advisable to use the ordinary least squares type prediction variance with the estimated regression error variance from the tuning set in both PCR and PLS in practice.
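A minimal sketch of principal components regression with an ordinary-least-squares-type prediction variance computed in the reduced factor space, in the spirit of the recommendation above. The synthetic data, the number of retained components, and the variance formula (which ignores the uncertainty of the factor-construction step) are simplifying assumptions, not the thesis's linearisation method for PLS.

import numpy as np

rng = np.random.default_rng(6)
n, p, k = 80, 10, 3                               # samples, collinear predictors, retained components

# Collinear predictors built from a few latent factors.
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + 0.05 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

# PCR: project onto the leading principal components, then ordinary least squares.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                                 # scores (constructed factors)
beta, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)

# OLS-type prediction variance for a new point, treating the scores as fixed regressors.
resid = y - y.mean() - T @ beta
sigma2 = resid @ resid / (n - k)
x_new = Xc[0]                                     # reuse a centred point as the "new" input
t_new = Vt[:k] @ x_new
var_pred = sigma2 * (1.0 + t_new @ np.linalg.inv(T.T @ T) @ t_new)
y_pred = y.mean() + t_new @ beta
print(f"prediction: {y_pred:.2f} +/- {1.96 * np.sqrt(var_pred):.2f} (approx. 95%)")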
APA, Harvard, Vancouver, ISO, and other styles
13

Kacker, Shubhra. "The Role of Constitutive Model in Traumatic Brain Injury Prediction." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563874757653453.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Zavar, Moosavi Azam Sadat. "Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82491.

Full text of the source
Abstract:
Simulations and modeling of large-scale systems are vital to understanding real world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty including the parameters of the model, input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve the model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein are inherently stochastic, and numerical approximations suffer from stability and accuracy issues. The second class of models are partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality, and have uncertainties due to missing dynamics which is not captured by the models. The third class are low-fidelity models, which are fast approximations of very expensive high-fidelity models. The reduced-order models have uncertainty due to loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble based methods where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
15

Mohamed, Lina Mahgoub Yahya. "Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction." Thesis, Heriot-Watt University, 2011. http://hdl.handle.net/10399/2435.

Full text of the source
Abstract:
Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The set of resulted history matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and providing economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The algorithm choice has an impact on how fast a model is obtained and how well the model fits the production data. The sampling techniques that have been developed to date include, among others, gradient based methods, evolutionary algorithms, and ensemble Kalman filter (EnKF). This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The inspected techniques have the capability of navigating the parameter space and producing history matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA. The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and explicitly be optimised in the multi-objective scheme. The thesis has extended the PSO to handle multi-objective history matching problems in which a number of possible conflicting objectives must be satisfied simultaneously. The benefits and efficiency of innovative multi-objective particle swarm scheme (MOPSO) are demonstrated for synthetic reservoirs. It is demonstrated that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good fitting models with a fewer number of very costly forward simulations runs than the standard single objective case, depending on how the objectives are constructed. The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA), for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem have demonstrated that the approach is indeed applicable for practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and has concluded that both methods have obtained comparable results on the example case. This point reinforces the need for using a range of assisted history matching algorithms for more confidence in predictions.
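A minimal particle swarm optimisation loop of the kind used for assisted history matching can be sketched as below; the objective function is a synthetic placeholder standing in for a history-matching misfit, and the swarm coefficients are generic textbook values rather than the tuned settings studied in the thesis.

import numpy as np

rng = np.random.default_rng(7)

def misfit(params):
    # Placeholder objective standing in for a history-matching misfit (lower is better).
    return np.sum((params - np.array([0.3, -1.2, 2.0])) ** 2, axis=-1)

n_particles, n_dims, n_iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive and social coefficients

pos = rng.uniform(-5, 5, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), misfit(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=(2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = misfit(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best parameters found:", np.round(gbest, 3))

In the history-matching setting, each misfit evaluation is a full reservoir simulation, which is why the number of forward runs needed by the sampler matters so much.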
APA, Harvard, Vancouver, ISO, and other styles
16

Smit, Jacobus Petrus Johannes. "The quantification of prediction uncertainty associated with water quality models using Monte Carlo Simulation." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85814.

Full text of the source
Abstract:
Thesis (MEng)--Stellenbosch University, 2013.
Water Quality Models are mathematical representations of ecological systems and they play a major role in the planning and management of water resources and aquatic environments. Important decisions concerning capital investment and environmental consequences often rely on the results of Water Quality Models, and it is therefore very important that decision makers are aware of and understand the uncertainty associated with these models. The focus of this study was on the use of Monte Carlo Simulation for the quantification of prediction uncertainty associated with Water Quality Models. Two types of uncertainty exist: epistemic uncertainty and aleatory uncertainty. Epistemic uncertainty is the result of a lack of knowledge, whereas aleatory uncertainty is due to the natural variability of an environmental system. It is very important to distinguish between these two types of uncertainty because the analysis of a model's uncertainty depends on it. Three different configurations of Monte Carlo Simulation in the analysis of uncertainty were discussed and illustrated: Single Phase Monte Carlo Simulation (SPMCS), Two Phase Monte Carlo Simulation (TPMCS) and Parameter Monte Carlo Simulation (PMCS). Each configuration of Monte Carlo Simulation has its own objective in the analysis of a model's uncertainty and depends on the distinction between the types of uncertainty. As an experiment, a hypothetical river was modelled using the Streeter-Phelps model and synthetic data was generated for the system. The generation of the synthetic data allowed the experiment to be performed under controlled conditions. The modelling protocol followed in the experiment included two uncertainty analyses. All three configurations of Monte Carlo Simulation were used in these uncertainty analyses to quantify the model's prediction uncertainty in fulfilment of their different objectives. The first uncertainty analysis, known as the preliminary uncertainty analysis, was performed to take stock of the model's situation concerning uncertainty before any effort was made to reduce the model's prediction uncertainty. The idea behind the preliminary uncertainty analysis was that it would help in further modelling decisions with regard to calibration and parameter estimation experiments. Parameter uncertainty was reduced by the calibration of the model. Once parameter uncertainty was reduced, the second uncertainty analysis, known as the confirmatory uncertainty analysis, was performed to confirm that the uncertainty associated with the model was indeed reduced. The two uncertainty analyses were conducted in exactly the same way. To conclude the experiment, it was illustrated how the quantification of the model's prediction uncertainty aided in the calculation of a Total Maximum Daily Load (TMDL). The Margin of Safety (MOS) included in the TMDL could be determined based on scientific information provided by the uncertainty analysis. The total MOS assigned to the TMDL was -35% of the mean load allocation for the point source. For the sake of simplicity, load allocations from non-point sources were disregarded.
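A single-phase Monte Carlo propagation of uncertain inputs through the Streeter-Phelps dissolved-oxygen deficit equation can be sketched as below. The input distributions and the evaluation time are illustrative assumptions, not the calibrated values or synthetic data set used in the study.

import numpy as np

rng = np.random.default_rng(8)
n_samples = 10000

# Uncertain inputs (illustrative distributions; per-day rate constants, mg/L loads).
kd = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=n_samples)   # deoxygenation rate
kr = rng.lognormal(mean=np.log(0.7), sigma=0.2, size=n_samples)   # reaeration rate
L0 = rng.normal(20.0, 2.0, size=n_samples)                        # initial BOD
D0 = rng.normal(1.0, 0.2, size=n_samples)                         # initial DO deficit

def deficit(t, kd, kr, L0, D0):
    # Streeter-Phelps dissolved-oxygen deficit downstream of a point load.
    return kd * L0 / (kr - kd) * (np.exp(-kd * t) - np.exp(-kr * t)) + D0 * np.exp(-kr * t)

D_at_2days = deficit(2.0, kd, kr, L0, D0)
print(f"DO deficit at t = 2 d: median {np.median(D_at_2days):.2f} mg/L, "
      f"95th percentile {np.percentile(D_at_2days, 95):.2f} mg/L")

The resulting percentile spread is the kind of quantified prediction uncertainty that the study uses to set a scientifically based Margin of Safety in the TMDL calculation.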
APA, Harvard, Vancouver, ISO, and other styles
17

Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.

Full text of the source
Abstract:
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been addressed at the component level, without considering interactions between components or the effects of the environment, which can lead to mispredicting the failure time of complex systems. In this work, a system-level prognostics approach is proposed. This approach is based on a new modeling framework, the inoperability input-output model (IIM), which makes it possible to account for the interactions between components and the effects of the mission profile, and which can be applied to heterogeneous systems. A new methodology is then developed for online joint system RUL (SRUL) prediction and model parameter estimation, based on particle filtering (PF) and gradient descent (GD). In detail, the state of health of the system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system, the Tennessee Eastman Process, and enables prediction of the SRUL in reasonable computing time.
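The particle-filtering step for probabilistic health-state estimation can be illustrated with a minimal bootstrap filter on a scalar degradation indicator. The random-walk degradation model, noise levels, failure threshold, and the naive per-particle RUL calculation are invented for illustration and do not correspond to the IIM-based system model of the thesis.

import numpy as np

rng = np.random.default_rng(9)

n_particles, n_steps = 1000, 50
true_rate = 0.02
# Synthetic degradation trajectory and noisy health-indicator measurements.
true_state = np.cumsum(np.full(n_steps, true_rate)) + rng.normal(0, 0.005, n_steps)
measurements = true_state + rng.normal(0, 0.05, n_steps)

particles = rng.normal(0.0, 0.02, n_particles)            # initial health state
estimates = []
for z in measurements:
    # Propagate: random-walk degradation model (process noise level is an assumption).
    particles = particles + true_rate + rng.normal(0, 0.01, n_particles)
    # Weight by measurement likelihood and resample (bootstrap filter).
    weights = np.exp(-0.5 * ((z - particles) / 0.05) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]
    estimates.append(particles.mean())

failure_threshold = 1.5
rul_steps = (failure_threshold - particles) / true_rate    # naive per-particle RUL
print(f"estimated state: {estimates[-1]:.2f}, RUL percentiles (5/50/95): "
      f"{np.percentile(rul_steps, [5, 50, 95])}")

Because the state is carried as a particle cloud, the RUL comes out as a distribution rather than a single number, which is the probabilistic prediction the abstract refers to.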
APA, Harvard, Vancouver, ISO, and other styles
18

Puckett, Kerri A. "Uncertainty quantification in predicting deep aquifer recharge rates, with applicability in the Powder River Basin, Wyoming." Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1594477301&sid=2&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Sun, Yuming. "Closing the building energy performance gap by improving our predictions." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52285.

Full text of the source
Abstract:
Increasing studies imply that predicted energy performance of buildings significantly deviates from actual measured energy use. This so-called "performance gap" may undermine one's confidence in energy-efficient buildings, and thereby the role of building energy efficiency in the national carbon reduction plan. Closing the performance gap becomes a daunting challenge for the involved professions, stimulating them to reflect on how to investigate and better understand the size, origins, and extent of the gap. The energy performance gap underlines the lack of prediction capability of current building energy models. Specifically, existing predictions are predominantly deterministic, providing point estimation over the future quantity or event of interest. It, thus, largely ignores the error and noise inherent in an uncertain future of building energy consumption. To overcome this, the thesis turns to a thriving area in engineering statistics that focuses on computation-based uncertainty quantification. The work provides theories and models that enable probabilistic prediction over future energy consumption, forming the basis of risk assessment in decision-making. Uncertainties that affect the wide variety of interacting systems in buildings are organized into five scales (meteorology - urban - building - systems - occupants). At each level both model form and input parameter uncertainty are characterized with probability, involving statistical modeling and parameter distributional analysis. The quantification of uncertainty at different system scales is accomplished using the network of collaborators established through an NSF-funded research project. The bottom-up uncertainty quantification approach, which deals with meta uncertainty, is fundamental for generic application of uncertainty analysis across different types of buildings, under different urban climate conditions, and in different usage scenarios. Probabilistic predictions are evaluated by two criteria: coverage and sharpness. The goal of probabilistic prediction is to maximize the sharpness of the predictive distributions subject to the coverage of the realized values. The method is evaluated on a set of buildings on the Georgia Tech campus. The energy consumption of each building is monitored in most cases by a collection of hourly sub-metered consumption data. This research shows that a good match of probabilistic predictions and the real building energy consumption in operation is achievable. Results from the six case buildings show that using the best point estimations of the probabilistic predictions reduces the mean absolute error (MAE) from 44% to 15% and the root mean squared error (RMSE) from 49% to 18% in total annual cooling energy consumption. As for monthly cooling energy consumption, the MAE decreases from 44% to 21% and the RMSE decreases from 53% to 28%. More importantly, the entire probability distributions are statistically verified at annual level of building energy predictions. Based on uncertainty and sensitivity analysis applied to these buildings, the thesis concludes that the proposed method significantly reduces the magnitude and effectively infers the origins of the building energy performance gap.
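The two evaluation criteria named above, coverage and sharpness, can be computed directly once predictive distributions are available. The sketch below uses synthetic Gaussian predictive distributions and realized values; the prediction level, numbers, and monthly framing are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 12                                               # e.g. monthly energy consumption values

actual = rng.normal(100.0, 10.0, size=n)             # realized consumption (synthetic)
pred_mean = actual + rng.normal(0.0, 5.0, size=n)    # predictive means
pred_std = np.full(n, 8.0)                           # predictive standard deviations

level = 0.90
z = stats.norm.ppf(0.5 + level / 2)
lower, upper = pred_mean - z * pred_std, pred_mean + z * pred_std

coverage = np.mean((actual >= lower) & (actual <= upper))   # should be close to `level`
sharpness = np.mean(upper - lower)                          # narrower intervals are sharper
print(f"coverage at {level:.0%}: {coverage:.2f}, mean interval width: {sharpness:.1f}")

The goal stated in the abstract, maximizing sharpness subject to coverage, amounts to making the intervals as narrow as possible while keeping the coverage statistic near the nominal level.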
APA, Harvard, Vancouver, ISO, and other styles
20

Messerly, Richard Alma. "How a Systematic Approach to Uncertainty Quantification Renders Molecular Simulation a Quantitative Tool in Predicting the Critical Constants for Large n-Alkanes." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6598.

Full text of the source
Abstract:
Accurate thermophysical property data are crucial for designing efficient chemical processes. For this reason, the Design Institute for Physical Properties (DIPPR 801) provides evaluated experimental data and prediction of various thermophysical properties. The critical temperature (Tc), critical density (ρc), critical pressure (Pc), critical compressibility factor (Zc), and normal boiling point (Tb) are important constants to check for thermodynamic consistency and to estimate other properties. The n-alkane family is of primary interest because it is generally assumed that other families of compounds behave similarly to the n-alkane family with increasing chain-length. Unfortunately, due to thermal decomposition, experimental measurements of Tc, ρc, and Pc for large n-alkanes are scarce and potentially unreliable. For this reason, molecular simulation is an attractive alternative for estimating the critical constants. However, molecular simulation has often been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness is the difficulty in quantifying the uncertainty of the simulation results. This research focuses on a systematic top-down approach to quantifying the uncertainty in Gibbs Ensemble Monte Carlo (GEMC) simulations for large n-alkanes. We implemented four different methods in order to obtain quantitatively reliable molecular simulation results. First, we followed a rigorous statistical analysis to assign the uncertainty of the critical constants when obtained from GEMC. Second, we developed an improved method for predicting Pc with the standard force field models in the literature. Third, we implemented an experimental design to reduce the uncertainty associated with Tc, ρc, Pc, and Zc. Finally, we quantified the uncertainty associated with the Lennard-Jones 12-6 potential parameters. This research demonstrates how uncertainty quantification renders molecular simulation a quantitative tool for thermophysical property evaluation. Specifically, by quantifying and reducing the uncertainty associated with molecular simulation results, we were able to discern between different experimental data sets and prediction models for the critical constants. In this regard, our results enabled the development of improved prediction models for Tc, ρc, Pc, and Zc for large n-alkanes. In addition, we developed a new Tb prediction model in order to ensure thermodynamic consistency between Tc, Pc, and Tb.
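As background to the abstract above, one standard route from Gibbs Ensemble Monte Carlo coexistence densities to the critical constants is to fit the density scaling law together with the law of rectilinear diameters and to bootstrap the fit for an uncertainty estimate. The sketch below uses invented data, an assumed fixed exponent beta = 0.326, and illustrative initial guesses; it is a generic illustration, not the procedure developed in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

BETA = 0.326  # assumed Ising-like scaling exponent

def residuals(p, T, rho_l, rho_v):
    Tc, rho_c, A, B = p
    r1 = (rho_l - rho_v) - B * np.maximum(Tc - T, 1e-12) ** BETA  # density scaling law
    r2 = 0.5 * (rho_l + rho_v) - (rho_c + A * (Tc - T))           # law of rectilinear diameters
    return np.concatenate([r1, r2])

def fit_critical(T, rho_l, rho_v):
    p0 = [T.max() * 1.1, 0.5 * (rho_l.min() + rho_v.max()), 3e-4, 0.1]
    return least_squares(residuals, p0, args=(T, rho_l, rho_v)).x

# Hypothetical GEMC output (temperatures in K, densities in g/cm^3), bootstrapped with synthetic noise
T = np.array([360.0, 380.0, 400.0, 420.0, 440.0])
rho_l = np.array([0.58, 0.55, 0.52, 0.48, 0.43])
rho_v = np.array([0.02, 0.03, 0.05, 0.08, 0.12])
rng = np.random.default_rng(1)
fits = [fit_critical(T, rho_l + rng.normal(0, 0.005, T.size),
                        rho_v + rng.normal(0, 0.005, T.size)) for _ in range(200)]
Tc, rho_c = np.mean(fits, axis=0)[:2]
sTc, srho_c = np.std(fits, axis=0)[:2]
print(f"Tc = {Tc:.1f} +/- {sTc:.1f} K, rho_c = {rho_c:.3f} +/- {srho_c:.3f} g/cm^3")
```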
APA, Harvard, Vancouver, ISO, and other styles
21

Bulthuis, Kevin. "Towards robust prediction of the dynamics of the Antarctic ice sheet: Uncertainty quantification of sea-level rise projections and grounding-line retreat with essential ice-sheet models." Doctoral thesis, Universite Libre de Bruxelles, 2020. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/301049.

Full text of the source
Abstract:
Recent progress in the modelling of the dynamics of the Antarctic ice sheet has led to a paradigm shift in the perception of the Antarctic ice sheet in a changing climate. New understanding of the dynamics of the Antarctic ice sheet now suggests that the response of the Antarctic ice sheet to climate change will be driven by instability mechanisms in marine sectors. As concerns have grown about the response of the Antarctic ice sheet in a warming climate, interest has grown simultaneously in predicting with quantified uncertainty the evolution of the Antarctic ice sheet and in clarifying the role played by uncertainties in predicting the response of the Antarctic ice sheet to climate change. Essential ice-sheet models have recently emerged as computationally efficient ice-sheet models for large-scale and long-term simulations of the ice-sheet dynamics and integration into Earth system models. Essential ice-sheet models, such as the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model developed at the Université Libre de Bruxelles, achieve computational tractability by representing essential mechanisms and feedbacks of ice-sheet thermodynamics through reduced-order models and appropriate parameterisations. Given their computational tractability, essential ice-sheet models combined with methods from the field of uncertainty quantification provide opportunities for more comprehensive analyses of the impact of uncertainty in ice-sheet models and for expanding the range of uncertainty quantification methods employed in ice-sheet modelling. The main contributions of this thesis are twofold. On the one hand, we contribute a new assessment and new understanding of the impact of uncertainties on the multicentennial response of the Antarctic ice sheet. On the other hand, we contribute new methods for uncertainty quantification of geometrical characteristics of the spatial response of physics-based computational models, with, as a motivation in glaciology, a focus on predicting with quantified uncertainty the retreat of the grounded region of the Antarctic ice sheet. For the first contribution, we carry out new probabilistic projections of the multicentennial response of the Antarctic ice sheet to climate change using the f.ETISh model. We apply methods from the field of uncertainty quantification to the f.ETISh model to investigate the influence of several sources of uncertainty, namely sources of uncertainty in atmospheric forcing, basal sliding, grounding-line flux parameterisation, calving, sub-shelf melting, ice-shelf rheology, and bedrock relaxation, on the continental response of the Antarctic ice sheet. We provide new probabilistic projections of the contribution of the Antarctic ice sheet to future sea-level rise; we carry out stochastic sensitivity analysis to determine the most influential sources of uncertainty; and we provide new probabilistic projections of the retreat of the grounded portion of the Antarctic ice sheet. For the second contribution, we propose to address uncertainty quantification of geometrical characteristics of the spatial response of physics-based computational models within the probabilistic context of the random set theory. We contribute to the development of the concept of confidence sets that either contain or are contained within an excursion set of the spatial response with a specified probability level.
We propose a new multifidelity quantile-based method for the estimation of such confidence sets and we demonstrate the performance of the proposed method on an application concerned with predicting with quantified uncertainty the retreat of the Antarctic ice sheet. In addition to these two main contributions, we contribute to two additional pieces of research pertaining to the computation of Sobol indices in global sensitivity analysis in small-data settings using the recently introduced probabilistic learning on manifolds (PLoM) and to a multi-model comparison of the projections of the contribution of the Antarctic ice sheet to global mean sea-level rise.
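To make the confidence-set idea concrete, the sketch below estimates an "inner" confidence set for an excursion set from an ensemble of spatial responses by searching a nested family of candidate sets for a prescribed containment probability. It is a single-fidelity, toy-data simplification of the multifidelity quantile-based estimator described above, not the thesis' method.

```python
import numpy as np

def inner_confidence_set(fields, threshold, alpha=0.05):
    """Estimate a set contained in the random excursion set {x : field(x) >= threshold}
    with empirical probability at least 1 - alpha.

    fields: array (n_samples, nx, ny) of sampled spatial responses (toy ensemble here).
    The candidate family consists of upper level sets of the ensemble mean; the first
    (largest) candidate meeting the containment level is returned.
    """
    mean = fields.mean(axis=0)
    for t in np.linspace(mean.min(), mean.max(), 200):
        candidate = mean >= t
        contained = np.mean([(f[candidate] >= threshold).all() for f in fields])
        if contained >= 1 - alpha:
            return candidate
    return np.zeros_like(mean, dtype=bool)

# Hypothetical ensemble of noisy spatial responses
rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
fields = (1.0 - X**2 - Y**2) + 0.2 * rng.normal(size=(200, 64, 64))
conf = inner_confidence_set(fields, threshold=0.0)
print("cells in the 95% inner confidence set:", int(conf.sum()))
```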
Doctorate in Sciences
APA, Harvard, Vancouver, ISO, and other styles
22

Calanni, Fraccone Giorgio M. "Bayesian networks for uncertainty estimation in the response of dynamic structures." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24714.

Full text of the source
Abstract:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Dr. Vitali Volovoi; Committee Co-Chair: Dr. Massimo Ruzzene; Committee Member: Dr. Andrew Makeev; Committee Member: Dr. Dewey Hodges; Committee Member: Dr. Peter Cento
APA, Harvard, Vancouver, ISO, and other styles
23

Rafael-Palou, Xavier. "Detection, quantification, malignancy prediction and growth forecasting of pulmonary nodules using deep learning in follow-up CT scans." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672964.

Full text of the source
Abstract:
Nowadays, lung cancer assessment is a complex and tedious task mainly performed by radiological visual inspection of suspicious pulmonary nodules, using computed tomography (CT) scan images taken from patients over time. Several computational tools relying on conventional artificial intelligence and computer vision algorithms have been proposed for supporting lung cancer detection and classification. These solutions mostly rely on the analysis of individual lung CT images of patients and on the use of hand-crafted image descriptors. Unfortunately, this makes them unable to cope with the complexity and variability of the problem. Recently, the advent of deep learning has led to a major breakthrough in the medical image domain, outperforming conventional approaches. Despite recent promising achievements in nodule detection, segmentation, and lung cancer classification, radiologists are still reluctant to use these solutions in their day-to-day clinical practice. One of the main reasons is that current solutions do not provide support for automatic analysis of the temporal evolution of lung tumours. The difficulty of collecting and annotating longitudinal lung CT cases to train models may partially explain the lack of deep learning studies that address this issue. In this dissertation, we investigate how to automatically provide lung cancer assessment through deep learning algorithms and computer vision pipelines, especially taking into consideration the temporal evolution of the pulmonary nodules. To this end, our first goal consisted of obtaining accurate methods for lung cancer assessment (diagnostic ground truth) based on individual lung CT images. Since these types of labels are expensive and difficult to collect (e.g. usually after biopsy), we proposed to train different deep learning models, based on 3D convolutional neural networks (CNN), to predict nodule malignancy based on radiologist visual inspection annotations (which are reasonable to obtain). These classifiers were built based on ground truth consisting of the nodule malignancy, the position and the size of the nodules to classify. Next, we evaluated different ways of synthesizing the knowledge embedded in the nodule malignancy neural network into an end-to-end pipeline aimed at detecting pulmonary nodules and predicting lung cancer at the patient level, given a lung CT image. The positive results confirmed the convenience of using CNNs for modelling nodule malignancy, according to radiologists, for the automatic prediction of lung cancer. Next, we focused on the analysis of lung CT image series. Thus, we first faced the problem of automatically re-identifying pulmonary nodules from different lung CT scans of the same patient. To do this, we present a novel method based on a Siamese neural network (SNN) to rank similarity between nodules, bypassing the need for image registration. This change of paradigm avoided introducing potentially erroneous image deformations and provided computationally faster results. Different configurations of the SNN were examined, including the application of transfer learning, using different loss functions, and the combination of several feature maps of different network levels. This method obtained state-of-the-art performances for nodule matching both in an isolated manner and embedded in an end-to-end nodule growth detection pipeline. Afterwards, we moved to the core problem of supporting radiologists in the longitudinal management of lung cancer.
For this purpose, we created a novel end-to-end deep learning pipeline, composed of four stages that completely automate the workflow from the detection of nodules to the classification of cancer, through the detection of growth in the nodules. In addition, the pipeline integrated a novel approach for nodule growth detection, which relies on a recent hierarchical probabilistic segmentation network adapted to report uncertainty estimates. Also, a second novel method was introduced for lung cancer nodule classification, integrating into a two-stream 3D-CNN the estimated nodule malignancy probabilities derived from a pre-trained nodule malignancy network. The pipeline was evaluated in a longitudinal cohort and the reported outcomes (i.e. nodule detection, re-identification, growth quantification, and malignancy prediction) were comparable with state-of-the-art work focused on solving one or a few of the functionalities of our pipeline. Thereafter, we also investigated how to help clinicians to prescribe more accurate tumour treatments and surgical planning. Thus, we created a novel method to forecast nodule growth given a single image of the nodule. Particularly, the method relied on a hierarchical, probabilistic and generative deep neural network able to produce multiple consistent future segmentations of the nodule at a given time. To do this, the network learned to model the multimodal posterior distribution of future lung tumour segmentations by using variational inference and injecting the posterior latent features. Eventually, by applying Monte-Carlo sampling on the outputs of the trained network, we estimated the expected tumour growth mean and the uncertainty associated with the prediction. Although further evaluation in a larger cohort would be highly recommended, the proposed methods reported results accurate enough to adequately support the radiological workflow of pulmonary nodule follow-up. Beyond this specific application, the outlined innovations, such as the methods for integrating CNNs into computer vision pipelines, the re-identification of suspicious regions over time based on SNNs without the need to warp the inherent image structure, or the proposed deep generative and probabilistic network to model tumour growth considering ambiguous images and label uncertainty, could be easily applicable to other types of cancer (e.g. pancreas), clinical diseases (e.g. Covid-19) or medical applications (e.g. therapy follow-up).
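The growth-forecasting step described above reduces, at prediction time, to summarizing Monte-Carlo samples of future segmentations. The sketch below shows that summarization on random stand-in masks; the array shapes, voxel volume and sample counts are hypothetical, and the generative network itself is not reproduced here.

```python
import numpy as np

def growth_estimate(sample_masks, current_mask, voxel_volume_mm3=1.0):
    """Summarize Monte-Carlo samples of future nodule segmentations.

    sample_masks: array (n_samples, D, H, W) of binary masks sampled from a
                  probabilistic/generative segmentation network (hypothetical input)
    current_mask: array (D, H, W), the segmentation at the current time point
    Returns expected future volume, its standard deviation, and expected growth.
    """
    vols = sample_masks.reshape(sample_masks.shape[0], -1).sum(axis=1) * voxel_volume_mm3
    current = current_mask.sum() * voxel_volume_mm3
    return vols.mean(), vols.std(), vols.mean() - current

# Hypothetical example with random masks standing in for network samples
rng = np.random.default_rng(3)
samples = (rng.random((50, 16, 64, 64)) < 0.02).astype(np.uint8)
current = (rng.random((16, 64, 64)) < 0.015).astype(np.uint8)
print(growth_estimate(samples, current, voxel_volume_mm3=0.5))
```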
APA, Harvard, Vancouver, ISO, and other styles
24

GRIFFINI, DUCCIO. "Development of Predictive Models for Synchronous Thermal Instability." Doctoral thesis, 2017. http://hdl.handle.net/2158/1081044.

Full text of the source
Abstract:
The increasing demand for higher efficiency and increased equipment compactness is pushing modern rotordynamic design towards ever higher bearing peripheral speeds. Due to the increased viscous dissipation, modern fluid film bearings are prone to the development of complex thermal phenomena that, under certain conditions, can result in synchronous thermal instability, often referred to as the Morton effect. Although the phenomenon has been known and studied since the late 1970s, a lack of knowledge is highlighted in the literature and the strategy for approaching its prediction and analysis is still debated within the scientific community. This work presents the development and validation of numerical models for the prediction of synchronous thermal instability. The proposed models are derived from a preliminary analysis of the physical time scales of the problem and of the orders of magnitude of the equations, which allowed an informed selection of the modelling strategies from a dual point of view: the physical genesis of the Morton effect (i.e., the differential heating of the shaft) and the assessment of the stability of the rotor-bearing system under the influence of the thermal effects. Particular focus is devoted to the fluid-dynamical problem with the description of two dedicated codes, developed, respectively, for the analysis of the thermo-hydrodynamics of fluid film bearings and for the modelling of the differential temperature developed across the shaft. This latter phenomenon is due to the differential heating and proves to be the driving parameter of the problem. Once the two codes had been individually validated, they were inserted into more complex systems in order to evaluate their ability to enable the prediction of the Morton effect. A linear stability analysis was first performed and the results, although affected by discrepancies with respect to the experimental data, showed the potential of the codes to reach the objective of the work. Better results were finally obtained when the models were inserted into a more complex architecture. The latter was developed in collaboration with the MDMlab of the Department of Industrial Engineering of the University of Florence in order to model the synchronous thermal instability by means of an iterative approach. A comparison with available experimental data, derived from a dedicated test campaign carried out at the GE Oil & Gas facility in Florence, is shown in order to validate both the procedure and the models. Moreover, some key parameters driving the Morton effect are presented and a study of the sensitivity of the phenomenon to the thermal expansion coefficient is proposed in order to improve researchers' knowledge on the topic.
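The iterative approach mentioned above can be pictured as a fixed-point loop between a bearing model, a heating model and a rotor model. The sketch below is a deliberately simplified scalar version with hypothetical gains; the three callables stand in for the dedicated codes and are not the models developed in the thesis.

```python
# Minimal fixed-point sketch of an iterative Morton-effect analysis loop (illustrative only).
def morton_iteration(orbit0, bearing_model, heating_model, rotor_model,
                     tol=1e-6, max_iter=100):
    orbit = orbit0
    for _ in range(max_iter):
        dissipation = bearing_model(orbit)    # thermo-hydrodynamic solution of the fluid film
        delta_T = heating_model(dissipation)  # circumferential differential temperature of the journal
        new_orbit = rotor_model(delta_T)      # synchronous response including the thermal bow
        if abs(new_orbit - orbit) < tol:
            return new_orbit, True            # converged: stable thermo-mechanical operating point
        orbit = new_orbit
    return orbit, False                       # no convergence: candidate synchronous thermal instability

orbit, converged = morton_iteration(
    orbit0=1e-5,
    bearing_model=lambda o: 50.0 * o,         # dissipation grows with orbit amplitude (made-up gain)
    heating_model=lambda q: 0.2 * q,          # differential temperature from dissipation (made-up gain)
    rotor_model=lambda dT: 1e-5 + 0.05 * dT)  # thermal bow adds to the unbalance response (made-up gain)
print(orbit, converged)
```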
APA, Harvard, Vancouver, ISO, and other styles
25

Hawkins-Daarud, Andrea Jeanine. "Toward a predictive model of tumor growth." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-3395.

Full text of the source
Abstract:
In this work, an attempt is made to lay out a framework in which models of tumor growth can be built, calibrated, validated, and differentiated in their level of goodness in such a manner that all the uncertainties associated with each step of the modeling process can be accounted for in the final model prediction. The study can be divided into four basic parts. The first involves the development of a general family of mathematical models of interacting species representing the various constituents of living tissue, which generalizes those previously available in the literature. In this theory, surface effects are introduced by incorporating in the Helmholtz free energy gradients of the volume fractions of the interacting species, thus providing a generalization of the Cahn-Hilliard theory of phase change in binary media and leading to fourth-order, coupled systems of nonlinear evolution equations. A subset of these governing equations is selected as the primary class of models of tumor growth considered in this work. The second component of this study focuses on the emerging and fundamentally important issue of predictive modeling, the study of model calibration, validation, and quantification of uncertainty in predictions of target outputs of models. The Bayesian framework suggested by Babuska, Nobile, and Tempone is employed to embed the calibration and validation processes within the framework of statistical inverse theory. Extensions of the theory are developed which are regarded as necessary for applying these methods to models of tumor growth in certain scenarios. The third part of the study focuses on the numerical approximation of the diffuse-interface models of tumor growth and on the numerical implementations of the statistical inverse methods at the core of the validation process. A class of mixed finite element models is developed for the considered mass-conservation models of tumor growth. A family of time marching schemes is developed and applied to representative problems of tumor evolution. Finally, in the fourth component of this investigation, a collection of synthetic examples, mostly in two dimensions, is considered to provide a proof-of-concept of the theory and methods developed in this work.
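For reference, the binary Cahn-Hilliard system that the multi-species models above generalize can be written as follows (a standard textbook form, not the thesis' multi-constituent equations):

```latex
% Standard Cahn-Hilliard system for a single volume fraction u (binary mixture);
% M is a mobility, Psi a double-well free-energy density, and the epsilon^2 term
% is the gradient (surface) contribution to the Helmholtz free energy. Substituting
% mu into the first equation gives a fourth-order evolution equation.
\begin{align}
  \frac{\partial u}{\partial t} &= \nabla \cdot \bigl( M(u)\, \nabla \mu \bigr), \\
  \mu &= \Psi'(u) - \varepsilon^{2}\, \Delta u .
\end{align}
```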
APA, Harvard, Vancouver, ISO, and other styles
26

Romero, Cuellar Jonathan. "Improving hydrological post-processing for assessing the conditional predictive uncertainty of monthly streamflows." Doctoral thesis, 2020. http://hdl.handle.net/10251/133999.

Full text of the source
Abstract:
The predictive uncertainty quantification in monthly streamflows is crucial to make reliable hydrological predictions that support decision-making in water resources management. Hydrological post-processing methods are suitable tools to estimate the predictive uncertainty of deterministic streamflow predictions (hydrological model outputs). In general, this thesis focuses on improving hydrological post-processing methods for assessing the conditional predictive uncertainty of monthly streamflows. This thesis deals with two issues of the hydrological post-processing scheme: i) the heteroscedasticity problem and ii) the intractable likelihood problem. In particular, this thesis includes three specific aims. First, and related to the heteroscedasticity problem, we develop and evaluate a new post-processing approach, called the GMM post-processor, which is based on the Bayesian joint probability modelling approach and Gaussian mixture models. In addition, we compare the performance of the proposed post-processor with well-known existing post-processors for monthly streamflows across 12 MOPEX catchments. From this aim (chapter 2), we find that the GMM post-processor is the best suited for estimating the conditional predictive uncertainty of monthly streamflows, especially for dry catchments. Secondly, we introduce a method to quantify the conditional predictive uncertainty in hydrological post-processing contexts when it is cumbersome to calculate the likelihood (intractable likelihood). Sometimes, it can be challenging to estimate the likelihood itself in hydrological modelling, especially when working with complex models or with ungauged catchments. Therefore, we propose the ABC post-processor that replaces the requirement of calculating the likelihood function with the use of some sufficient summary statistics and synthetic datasets. With this aim in mind (chapter 3), we show that the conditional predictive distributions produced by the exact approach (MCMC post-processor) and the approximate approach (ABC post-processor) are qualitatively similar. This finding is significant because dealing with scarce information is a common condition in hydrological studies. Finally, we apply the ABC post-processing method to estimate the uncertainty of streamflow statistics obtained from climate change projections, as a particular case of the intractable likelihood problem. From this specific objective (chapter 4), we find that the ABC post-processor approach: 1) offers more reliable projections than the 14 climate models (without post-processing); 2) concerning the best climate models during the baseline period, produces more realistic uncertainty bands for the streamflow statistics than the classical multi-model ensemble approach.
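The ABC post-processing idea sketched in this abstract can be illustrated with a toy rejection sampler: draw candidate error-model parameters from priors, generate synthetic observations around the deterministic simulation, and keep the draws whose summary statistics are closest to those of the real observations. The parameter names, priors and summary statistics below are illustrative assumptions, not the thesis' formulation.

```python
import numpy as np

def abc_postprocessor(sim_streamflow, obs_streamflow, n_draws=20000, keep=0.01, rng=None):
    """Toy ABC rejection sketch of a hydrological post-processor (illustrative only)."""
    if rng is None:
        rng = np.random.default_rng(4)
    b = rng.normal(0.0, 1.0, n_draws)              # prior on an additive bias
    s = rng.uniform(0.1, 2.0, n_draws)              # prior on the error standard deviation
    noise = rng.standard_normal((n_draws, sim_streamflow.size))
    synth = sim_streamflow[None, :] + b[:, None] + s[:, None] * noise
    stats = np.column_stack([synth.mean(axis=1), synth.std(axis=1)])
    target = np.array([obs_streamflow.mean(), obs_streamflow.std()])
    dist = np.linalg.norm(stats - target, axis=1)   # discrepancy between summary statistics
    accepted = dist <= np.quantile(dist, keep)      # keep the closest fraction of draws
    return b[accepted], s[accepted]

# Hypothetical monthly data
rng = np.random.default_rng(5)
sim = rng.gamma(2.0, 10.0, 120)
obs = sim + 2.0 + 3.0 * rng.standard_normal(120)
b_post, s_post = abc_postprocessor(sim, obs)
print(b_post.mean(), s_post.mean())
```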
I would like to thank the Gobernación del Huila Scholarship Program No. 677 (Colombia) for providing the financial support for my PhD research.
Romero Cuellar, J. (2019). Improving hydrological post-processing for assessing the conditional predictive uncertainty of monthly streamflows [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/133999
APA, Harvard, Vancouver, ISO, and other styles
27

DI, ROCCO FEDERICO. "Predictive modeling analysis of a wet cooling tower - Adjoint sensitivity analysis, uncertainty quantification, data assimilation, model calibration, best-estimate predictions with reduced uncertainties." Doctoral thesis, 2018. http://hdl.handle.net/11573/1091474.

Full text of the source
Abstract:
It is common practice, in the modern era, to base the process of understanding and eventually predicting the behavior of complex physical systems upon simulating operational situations through system codes. In order to provide a more thorough and accurate comprehension of the system dynamics, these numerical simulations are often and preferably flanked by experimental measurements. In practice, repeated measurements of the same physical quantity produce values differing from each other and from the measured quantity's true value, which remains unknown; the errors leading to this variation in results can be of a methodological, instrumental or personal nature. It is not feasible to obtain experimental results devoid of uncertainty, and this means that a range of values possibly representative of the true value always exists around any value stemming from experimental measurements. A quantification of this range is critical to any practical application of the measured data, whose nominal measured values are insufficient for applications unless the quantitative uncertainties associated with the experimental data are also provided. Not even numerical models can reveal the true value of the investigated quantity, for two reasons: first, any numerical model is imperfect, meaning that it constitutes an inevitable simplification of the real world system it aims to represent; second, a hypothetically perfect model would still have uncertain values for its model parameters - such as initial conditions, boundary conditions and material properties - and the resulting predictions would therefore still differ from the true value and from the experimental measurements of the quantity. With both computational and experimental results at hand, the final aim is to obtain a probabilistic description of possible future outcomes based on all recognized errors and uncertainties. This operation falls within the scope of predictive modeling procedures, which rely on three key elements: model calibration, model extrapolation and estimation of the validation domain. The first step of the procedure involves the adjustment of the numerical model parameters according to the experimental results; this aim is achieved by integrating computed and measured data, and the associated procedure is known as model calibration. In order for this operation to be properly executed, all errors and uncertainties at any level of the modeling path leading to numerical results have to be identified and characterized, including errors and uncertainties on the model parameters, numerical discretization errors and possible incomplete knowledge of the physical process being modeled. Calibration of models is performed through the mathematical framework provided by data assimilation procedures; these procedures strongly rely on sensitivity analysis, and for this reason are often cumbersome in terms of computational load. Generally speaking, sensitivity analyses can be conducted with two different techniques, respectively known as direct or forward methods and adjoint methods. The forward methods calculate the effect of a small perturbation in a parameter by means of finite differences between two independent calculations, and are advantageous only for systems in which the number of responses exceeds the number of model parameters; unfortunately this is seldom the case in real large-scale systems.
In this work, this problem has been overcome by using the adjoint sensitivity analysis methodology (ASAM) by Cacuci: as opposed to forward methods, the ASAM is most efficient for systems in which the number of parameters is greater than the number of responses, such as the model investigated in this thesis and many others currently used for numerical simulations of industrial systems. This methodology has been recently extended to second-order sensitivities (2nd-ASAM) by Cacuci for linear and nonlinear systems, for computing exactly and efficiently the second-order functional derivatives of system responses with respect to the system model parameters. Model extrapolation addresses the prediction of uncertainty in new environments or conditions of interest, including both untested parts of the parameter space and higher levels of system complexity in the validation hierarchy. Estimation of the validation domain addresses the estimation of contours of constant uncertainty in the high-dimensional space that characterizes the application of interest. The present work focuses on performing sensitivity and uncertainty analysis, data assimilation, model calibration, model validation and best-estimate predictions with reduced uncertainties on a counter-flow, wet cooling tower model developed by Savannah River National Laboratory. A cooling tower generally discharges waste heat produced by an industrial plant to the external environment. The amount of thermal energy discharged into the environment can be determined by measurements of quantities representing the external conditions, such as outlet air temperature, outlet water temperature, and outlet air relative humidity, in conjunction with computational models that simulate numerically the cooling tower behavior. Variations in the model parameters (e.g., material properties, model correlations, boundary conditions) cause variations in the model response. The functional derivatives of the model response with respect to the model parameters (called "sensitivities") are needed to quantify such response variations. In this work, the comprehensive adjoint sensitivity analysis methodology for nonlinear systems is applied to compute the cooling tower response sensitivities to all of its model parameters. Moreover, the utilization of the adjoint state functions allows the simultaneous computation of the sensitivities of each model response to all of the 47 model parameters by running just a single adjoint model computation; obtaining the same results using finite-difference forward methods would have required 47 separate computations, with the added disadvantage of yielding approximate values of the sensitivities, as opposed to the exact ones obtained with the adjoint procedure. In addition, the forward cooling tower model is nonlinear in its state functions; the adjoint sensitivity model, in contrast, has the important feature of being linear in the adjoint state functions, whose one-to-one correspondence to the forward state functions is essential for the calculation of the adjoint sensitivities.
Sensitivities are subsequently used in this work to realize many operations, such as: (i) ranking the model parameters according to the magnitude of their contribution to response uncertainties; (ii) determining the propagation of uncertainties, in the form of variances and covariances, of the parameters in the model in order to quantify the uncertainties of the model responses; (iii) enabling predictive modeling operations, such as experimental data assimilation and model parameter calibration, with the aim of yielding best-estimate predicted nominal values both for model parameters and responses, with correspondingly reduced values for the associated predicted uncertainties. The methodologies are part of two distinct mathematical frameworks: the Adjoint Sensitivity Analysis Methodology (ASAM) is used to compute the adjoint sensitivities of the model quantities of interest (called "model responses") with respect to the model parameters; the Predictive Modeling of Coupled Multi-Physics Systems (PM_CMPS) simultaneously combines all of the available computed information and experimentally measured data to yield optimal values of the system parameters and responses, while simultaneously reducing the corresponding uncertainties in parameters and responses. In the present work, a considerably more efficient numerical method has been applied to the cooling tower model analyzed, leading to the accurate computation of the steady-state distributions for the following quantities of interest: (i) the water mass flow rates at the exit of each control volume along the height of the fill section of the cooling tower; (ii) the water temperatures at the exit of each control volume along the height of the fill section of the cooling tower; (iii) the air temperatures at the exit of each control volume along the height of the fill section of the cooling tower; (iv) the humidity ratios at the exit of each control volume along the height of the fill section of the cooling tower; and (v) the air mass flow rates at the exit of the cooling tower. The application of the numerical method selected eliminates any convergence issue, yielding accurate results for all the control volumes of the cooling tower and for all the data sets of interest. This work is organized as follows: Chapter 2 provides a description of the physical system simulated, along with presenting the mathematical model used in this work for simulating a counter-flow cooling tower operating under saturated and unsaturated conditions. The three cases analyzed in this work and their corresponding sets of governing equations are detailed in this chapter. Chapter 3 presents the development of the adjoint sensitivity model for the counter-flow cooling tower operating under saturated and unsaturated conditions using the general adjoint sensitivity analysis methodology (ASAM) for nonlinear systems. Using a single adjoint computation enables the efficient and exact computation of the sensitivities (functional derivatives) of the model responses to all of the model parameters, thus alleviating the need for repeated forward model computations in conjunction with finite difference methods. The mathematical framework of the "Predictive Modeling of Coupled Multi-Physics Systems" (PM_CMPS) is also detailed.
Chapter 4 presents the results of applying the ASAM and PM_CMPS methodologies to all the cases listed in Chapter 2: after being calculated, sensitivities are subsequently used for ranking the contributions of the individual model parameters to the variations of the model responses, for computing the propagated uncertainties of the model responses, and for the application of the PM_CMPS methodology, aimed at yielding best-estimate predicted nominal values and uncertainties for model parameters and responses. This methodology simultaneously combines all of the available computed information and experimentally measured data for the counter-flow cooling tower operating under saturated and unsaturated conditions. The best-estimate results predicted by the PM_CMPS methodology reveal that the predicted values of the standard deviations for all the model responses, even those for which no experimental data have been recorded, are smaller than either the computed or the measured standard deviations for the respective responses. This work concludes with Chapter 5 by discussing the significance of these predicted results and by indicating possible further generalizations of the adjoint sensitivity analysis and PM_CMPS methodologies.
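Step (ii) above, propagating parameter variances and covariances to response uncertainties with the adjoint-computed sensitivities, corresponds to the standard first-order "sandwich" rule (a generic statement of the idea, not the specific PM_CMPS expressions):

```latex
% Generic first-order ("sandwich") propagation of parameter covariances to response
% covariances via the sensitivity matrix S; alpha denotes the vector of the 47 model
% parameters and r the vector of cooling-tower responses.
\begin{equation}
  \operatorname{cov}(\mathbf{r}) \approx \mathbf{S}\, \operatorname{cov}(\boldsymbol{\alpha})\, \mathbf{S}^{\mathsf{T}},
  \qquad
  S_{ij} = \frac{\partial r_i}{\partial \alpha_j}.
\end{equation}
```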
APA, Harvard, Vancouver, ISO, and other styles
28

Suryawanshi, Anup Arvind. "Uncertainty Quantification in Flow and Flow Induced Structural Response." Thesis, 2015. http://etd.iisc.ac.in/handle/2005/3875.

Full text of the source
Abstract:
Response of flexible structures — such as cable-supported bridges and aircraft wings — is associated with a number of uncertainties in structural and flow parameters. This thesis is aimed at efficient uncertainty quantification in a few such flow and flow-induced structural response problems. First, the uncertainty quantification in the lift force exerted on a submerged body in a potential flow is considered. To this end, a new method — termed here as semi-intrusive stochastic perturbation (SISP) — is proposed. A sensitivity analysis is also performed, where for the global sensitivity analysis (GSA) the Sobol' indices are used. The polynomial chaos expansion (PCE) is used for estimating these indices. Next, two stability problems — divergence and flutter — in aeroelasticity are studied in the context of reliability based design optimization (RBDO). Two modifications are proposed to an existing PCE-based metamodel to reduce the computational cost, where the chaos coefficients are estimated using Gauss quadrature to gain computational speed and GSA is used to create a nonuniform grid to reduce the cost even further. The proposed method is applied to a rectangular unswept cantilever wing model. Next, reliability computation in limit cycle oscillations (LCOs) is considered. While the metamodel performs poorly in this case due to bimodality in the distribution, a new simulation-based scheme is proposed to this end. Accordingly, first a reduced-order model (ROM) is used to identify the critical region in the random parameter space. Then the full-scale expensive model is run only over this critical region. This is applied to the rectangular unswept cantilever wing with cubic and fifth order stiffness terms in its equation of motion. Next, the wind speed is modeled as a spatio-temporal process, and accordingly new representations of spatio-temporal random processes are proposed based on tensor decompositions of the covariance kernel. These are applied to three problems: a heat equation, a vibration problem, and a readily available covariance model for wind speed. Finally, to assimilate available field measurement data on wind speed and to predict based on this assimilation, a new framework based on the tensor decompositions is proposed. The framework is successfully applied to a set of measured data on wind speed in Ireland, where the prediction based on simulation is found to be consistent with the observed data.
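Once a polynomial chaos expansion with an orthonormal basis is available, the Sobol' indices mentioned above follow directly from the squared coefficients. The sketch below uses a hypothetical two-parameter coefficient set; it is a generic illustration, not the thesis' code.

```python
import numpy as np

def sobol_from_pce(coeffs):
    """First-order and total Sobol' indices from the coefficients of an orthonormal
    polynomial chaos expansion (illustrative helper).

    coeffs: dict mapping multi-indices (tuples of per-variable polynomial degrees)
            to PCE coefficients; the zero multi-index is the mean term.
    """
    dim = len(next(iter(coeffs)))
    var = sum(c**2 for idx, c in coeffs.items() if any(idx))  # total variance
    first = np.zeros(dim)
    total = np.zeros(dim)
    for idx, c in coeffs.items():
        if not any(idx):
            continue
        active = [k for k, d in enumerate(idx) if d > 0]
        if len(active) == 1:
            first[active[0]] += c**2                           # terms in a single variable only
        for k in active:
            total[k] += c**2                                   # every term involving variable k
    return first / var, total / var

# Hypothetical 2-parameter PCE (e.g. flutter speed vs. a stiffness and a mass parameter)
coeffs = {(0, 0): 1.0, (1, 0): 0.8, (0, 1): 0.3, (2, 0): 0.2, (1, 1): 0.1}
print(sobol_from_pce(coeffs))
```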
APA, Harvard, Vancouver, ISO, and other styles
29

Suryawanshi, Anup Arvind. "Uncertainty Quantification in Flow and Flow Induced Structural Response." Thesis, 2015. http://etd.iisc.ernet.in/2005/3875.

Full text of the source
Abstract:
Response of flexible structures — such as cable-supported bridges and aircraft wings — is associated with a number of uncertainties in structural and flow parameters. This thesis is aimed at efficient uncertainty quantification in a few such flow and flow-induced structural response problems. First, the uncertainty quantification in the lift force exerted on a submerged body in a potential flow is considered. To this end, a new method — termed here as semi-intrusive stochastic perturbation (SISP) — is proposed. A sensitivity analysis is also performed, where for the global sensitivity analysis (GSA) the Sobol' indices are used. The polynomial chaos expansion (PCE) is used for estimating these indices. Next, two stability problems — divergence and flutter — in aeroelasticity are studied in the context of reliability based design optimization (RBDO). Two modifications are proposed to an existing PCE-based metamodel to reduce the computational cost, where the chaos coefficients are estimated using Gauss quadrature to gain computational speed and GSA is used to create a nonuniform grid to reduce the cost even further. The proposed method is applied to a rectangular unswept cantilever wing model. Next, reliability computation in limit cycle oscillations (LCOs) is considered. While the metamodel performs poorly in this case due to bimodality in the distribution, a new simulation-based scheme is proposed to this end. Accordingly, first a reduced-order model (ROM) is used to identify the critical region in the random parameter space. Then the full-scale expensive model is run only over this critical region. This is applied to the rectangular unswept cantilever wing with cubic and fifth order stiffness terms in its equation of motion. Next, the wind speed is modeled as a spatio-temporal process, and accordingly new representations of spatio-temporal random processes are proposed based on tensor decompositions of the covariance kernel. These are applied to three problems: a heat equation, a vibration problem, and a readily available covariance model for wind speed. Finally, to assimilate available field measurement data on wind speed and to predict based on this assimilation, a new framework based on the tensor decompositions is proposed. The framework is successfully applied to a set of measured data on wind speed in Ireland, where the prediction based on simulation is found to be consistent with the observed data.
APA, Harvard, Vancouver, ISO, and other styles
30

Sawlan, Zaid A. "Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties." Diss., 2018. http://hdl.handle.net/10754/629731.

Full text of the source
Abstract:
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. For this purpose, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inference approaches considered here. Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens; the second concerns inverse problems in linear PDEs. Both require the inference of unknown parameters from measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. For fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, the competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
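To make the maximum-likelihood step concrete, the sketch below fits a probabilistic Basquin-type S-N model in which log10 fatigue life is Gaussian with a mean linear in log10 stress amplitude. The synthetic data, parameter names, and optimizer choice are assumptions for illustration; the thesis calibrates a family of competing S-N models against real uniaxial fatigue experiments.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic uniaxial fatigue data (an assumption for the example): stress
# amplitudes in MPa and log10 lifetimes scattered about a Basquin mean line.
rng = np.random.default_rng(1)
stress = rng.uniform(200.0, 500.0, size=40)
log_life = 12.0 - 3.0 * np.log10(stress) + rng.normal(0.0, 0.25, size=40)

def neg_log_likelihood(theta):
    # theta = (intercept a, slope b, log of the lifetime scatter)
    a, b, log_sigma = theta
    mu = a + b * np.log10(stress)
    return -np.sum(norm.logpdf(log_life, loc=mu, scale=np.exp(log_sigma)))

result = minimize(neg_log_likelihood, x0=np.array([10.0, -2.0, 0.0]), method="Nelder-Mead")
a_hat, b_hat = result.x[0], result.x[1]
sigma_hat = np.exp(result.x[2])
print(f"MLE: a = {a_hat:.2f}, b = {b_hat:.2f}, sigma = {sigma_hat:.3f}")

# Median lifetime prediction at a new stress amplitude of 300 MPa.
print(f"median N(300 MPa) = {10 ** (a_hat + b_hat * np.log10(300.0)):.3g} cycles")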
APA, Harvard, Vancouver, ISO, and other styles
31

Rasheed, Md Muhibur. "Predicting multibody assembly of proteins." Thesis, 2014. http://hdl.handle.net/2152/26149.

Full text of the source
Abstract:
This thesis addresses the multi-body assembly (MBA) problem in the context of protein assemblies. [...] In this thesis, we chose the protein assembly domain because accurate and reliable computational modeling, simulation and prediction of such assemblies would clearly accelerate discoveries in understanding the complexities of metabolic pathways, identifying the molecular basis for normal health and diseases, and designing new drugs and other therapeutics. [...] [We developed] F²Dock (Fast Fourier Docking), which uses a multi-term scoring function combining a statistical thermodynamic approximation of molecular free energy with several knowledge-based terms. Parameters of the scoring model were learned from a large set of positive/negative examples, and when tested on 176 protein complexes of various types, showed excellent accuracy in ranking correct configurations higher (F²Dock ranks the correct solution as the top-ranked one in 22/176 cases, which is better than other unsupervised prediction software on the same benchmark). Most protein-protein interaction scoring terms can be expressed as integrals of distance-dependent decaying kernels over the occupied volume, the boundary, or a set of discrete points (atom locations). We developed a dynamic adaptive grid (DAG) data structure which computes smooth surface and volumetric representations of a protein complex in O(m log m) time, where m is the number of atoms, assuming that the smallest feature size h is Θ(r_max), where r_max is the radius of the largest atom; supports updates in O(log m) time; and uses O(m) memory. We also developed the dynamic packing grids (DPG) data structure, which supports quasi-constant-time updates (O(log w)) and spherical neighborhood queries (O(log log w)), where w is the word size of the RAM. Together, DPG and DAG yield O(k)-time approximation of the scoring terms, where k << m is the size of the contact region between proteins. [...] [W]e consider the symmetric spherical shell assembly case, where multiple copies of identical proteins tile the surface of a sphere. Though this is a restricted subclass of MBA, it is an important one, since it would accelerate the development of drugs and antibodies that prevent viruses from forming capsids, which have such spherical symmetry in nature. We proved that it is possible to characterize the space of possible symmetric spherical layouts using a small number of representative local arrangements (called tiles) and their global configurations (tilings). We further show that the tilings, and the mapping of proteins to tilings on shells of arbitrary size, are parameterized by 3 discrete parameters and 6 continuous degrees of freedom; the 3 discrete DOF can be restricted to a constant number of cases if the size of the shell is known (in terms of the number of proteins n). We also consider the case where a coarse model of the whole protein complex is available. We show that even when such coarse models do not resolve atomic positions, they can be sufficient to identify a general location for each protein and its neighbors, thereby restricting the configurational space. We developed an iterative refinement search protocol that leverages such multi-resolution structural data to predict accurate high-resolution models of protein complexes, and successfully applied the protocol to model gp120, a protein on the spike of HIV and currently the most feasible target for anti-HIV drug design.
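The spherical neighborhood query at the heart of the DPG data structure can be illustrated with a simple uniform-grid spatial hash: atoms are bucketed into cells sized by the query radius, so a query only inspects the 27 surrounding cells. The sketch below is a plain Python approximation with assumed atom identifiers and coordinates; the actual DPG structure achieves the stronger quasi-constant bounds quoted above.

from collections import defaultdict
from math import floor, dist

class PackingGrid:
    """Uniform-cell spatial hash for spherical neighborhood queries (illustrative)."""

    def __init__(self, cell_size):
        self.h = cell_size
        self.cells = defaultdict(list)  # cell index -> list of (atom_id, point)

    def _cell(self, p):
        return tuple(floor(c / self.h) for c in p)

    def insert(self, atom_id, p):
        self.cells[self._cell(p)].append((atom_id, p))

    def neighbors_within(self, p, r):
        """Return ids of all atoms within distance r of p (assumes r <= cell_size)."""
        cx, cy, cz = self._cell(p)
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for atom_id, q in self.cells.get((cx + dx, cy + dy, cz + dz), []):
                        if dist(p, q) <= r:
                            found.append(atom_id)
        return found

# Hypothetical usage: cell size of ~3 Angstrom, roughly twice the largest atomic radius.
grid = PackingGrid(cell_size=3.0)
grid.insert("CA_12", (1.0, 2.0, 0.5))
grid.insert("CB_40", (9.0, 9.0, 9.0))
print(grid.neighbors_within((0.5, 2.0, 0.0), r=2.0))  # -> ['CA_12']

Insertion and query touch only a constant number of cells per atom, which is why grid-based schemes of this kind scale to contact-region evaluations involving only k << m atoms.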
APA, Harvard, Vancouver, ISO, and other styles
32

Xu, Chicheng. "Reservoir description with well-log-based and core-calibrated petrophysical rock classification." 2013. http://hdl.handle.net/2152/21315.

Full text of the source
Abstract:
Rock type is a key concept in modern reservoir characterization that straddles multiple scales and bridges multiple disciplines. Reservoir rock classification (or simply rock typing) has been recognized as one of the most effective description tools to facilitate large-scale reservoir modeling and simulation. This dissertation aims to integrate core data and well logs to enhance reservoir description by classifying reservoir rocks in a geologically and petrophysically consistent manner. The main objective is to develop scientific approaches for utilizing multi-physics rock data at different time and length scales to describe reservoir rock-fluid systems. Emphasis is placed on transferring physical understanding of rock types from limited ground-truthing core data to abundant well logs using fast log simulations in a multi-layered earth model. Bimodal log-normal pore-size distribution functions derived from mercury injection capillary pressure (MICP) data are first introduced to characterize complex pore systems in carbonate and tight-gas sandstone reservoirs. Six pore-system attributes are interpreted and integrated to define the petrophysical orthogonality, or dissimilarity, between two pore systems with bimodal log-normal distributions. A simple three-dimensional (3D) cubic pore network model constrained by nuclear magnetic resonance (NMR) and MICP data is developed to quantify fluid distributions and phase connectivity for predicting saturation-dependent relative permeability during two-phase drainage. There is rich petrophysical information in the spatial fluid distributions resulting from vertical fluid flow on a geologic time scale and radial mud-filtrate invasion on a drilling time scale. Log attributes elicited by such fluid distributions are captured to quantify dynamic reservoir petrophysical properties and define reservoir flow capacity. A new rock classification workflow that reconciles reservoir saturation-height behavior and mud-filtrate invasion for more accurate dynamic reservoir modeling is developed and verified in both clastic and carbonate fields. Rock types vary and mix at the sub-foot scale in heterogeneous reservoirs due to depositional control or diagenetic overprints. Conventional well logs are limited in their ability to probe the details of each individual bed or rock type as seen in outcrops or cores. A bottom-up Bayesian rock typing method is developed to efficiently test multiple working hypotheses against well logs and to quantify the uncertainty of rock types and their associated petrophysical properties in thinly bedded reservoirs. Concomitantly, a top-down reservoir description workflow is implemented to characterize intermixed or hybrid rock classes from the flow-unit (or seismic) scale down to the pore scale, based on a multi-scale orthogonal rock class decomposition approach. Correlations between petrophysical rock types and geological facies in reservoirs originating from deltaic and turbidite depositional systems are investigated in detail. Emphasis is placed on the cause-and-effect relationship between pore geometry and geological rock attributes such as grain size and bed thickness. Well-log responses to those geological attributes and the associated pore geometries are examined through numerical log simulations. The sensitivity of various physical logs to the petrophysical orthogonality between rock classes is investigated to identify the most diagnostic log attributes for log-based rock typing.
Field cases of different reservoir types from various geological settings are used to verify the application of petrophysical rock classification to assist reservoir characterization, including facies interpretation, permeability prediction, saturation-height analysis, dynamic petrophysical modeling, uncertainty quantification, petrophysical upscaling, and production forecasting.
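As an illustration of the bimodal log-normal pore-size description mentioned in this abstract, the sketch below converts a mercury injection capillary pressure to a pore-throat radius with the Washburn equation and evaluates a two-mode log-normal mixture density. The surface tension, contact angle, and mixture parameters are typical assumed values, not values from the dissertation.

import numpy as np
from scipy.stats import lognorm

def washburn_radius(pc_psi, sigma=480.0, theta_deg=140.0):
    """Pore-throat radius in micrometres from Hg capillary pressure in psi.
    sigma: Hg surface tension (dyn/cm); theta_deg: Hg contact angle (degrees).
    These defaults are commonly assumed values, not values from the dissertation."""
    pc_dyn_cm2 = pc_psi * 6.894757e4          # psi -> dyn/cm^2
    r_cm = 2.0 * sigma * abs(np.cos(np.radians(theta_deg))) / pc_dyn_cm2
    return r_cm * 1.0e4                       # cm -> micrometres

def bimodal_lognormal_pdf(r, w, mu1, s1, mu2, s2):
    """Two-component log-normal mixture density over pore-throat radius r (micrometres).
    w: weight of the macro-pore mode; mu, s: log-mean and log-std of each mode."""
    return (w * lognorm.pdf(r, s=s1, scale=np.exp(mu1))
            + (1.0 - w) * lognorm.pdf(r, s=s2, scale=np.exp(mu2)))

# Example: a 100 psi MICP entry pressure corresponds to roughly a 1-micrometre throat,
# and the density below describes a hypothetical carbonate with macro- and micro-pore modes.
r = washburn_radius(100.0)
print(f"r(100 psi) = {r:.2f} um")
print(f"pdf(r) = {bimodal_lognormal_pdf(r, w=0.6, mu1=np.log(1.0), s1=0.5, mu2=np.log(0.05), s2=0.7):.3f}")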
APA, Harvard, Vancouver, ISO, and other styles
