Follow this link to see other types of publications on the topic: Uncertainty Quantification model.

Theses / dissertations on the topic "Uncertainty Quantification model"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the top 50 works (theses / dissertations) for studies on the subject "Uncertainty Quantification model".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is available in the metadata.

Browse theses / dissertations from a wide range of scientific fields and compile a correct bibliography.

1

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Full text of the source
Abstract:
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn from observations of the corresponding computer simulation model. These computer models are calibrated against limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated executions of a stochastic simulation produce different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two approaches that address the non-Gaussian behavior of an emulator: (1) incorporating quantile regression in a GP for multivariate output, and (2) approximating the output with a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme that periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
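To make the emulation idea concrete, the following is a minimal, self-contained sketch (not taken from the dissertation) of a Gaussian process emulator fitted to replicated runs of a toy stochastic simulator; the simulator, kernel choice, and hyperparameter values are illustrative assumptions only.

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def stochastic_simulator(x, rng):
    """Hypothetical expensive stochastic simulator: smooth trend plus replication noise."""
    return np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x_train = np.repeat(np.linspace(0.0, 1.0, 8), 5)          # 8 designs, 5 replicates each
y_train = stochastic_simulator(x_train, rng)

noise_var = 0.1 ** 2                                       # assumed replication variance
K = rbf_kernel(x_train, x_train) + noise_var * np.eye(x_train.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

x_test = np.linspace(0.0, 1.0, 101)
K_s = rbf_kernel(x_train, x_test)
mean = K_s.T @ alpha                                       # emulator predictive mean
v = np.linalg.solve(L, K_s)
var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(v ** 2, axis=0)  # predictive variance

print(mean[:3], np.sqrt(np.maximum(var[:3], 0.0)))
```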
Doctor of Philosophy
Mathematical models are versatile and often provide an accurate description of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model alone. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as an inverse problem, i.e., inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model, which simulates contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City, USA.
2

White, Jeremy. "Computer Model Inversion and Uncertainty Quantification in the Geosciences". Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5329.

Full text of the source
Abstract:
The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.
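As a generic illustration of how the formulation of the inversion objective function influences the estimated parameters, the sketch below solves a hypothetical, linearized, Tikhonov-regularized inverse problem for several regularization weights; it is not the inversion framework used in the dissertation, and the forward operator and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 20, 50                        # ill-posed: fewer observations than parameters
G = rng.standard_normal((n_obs, n_par))      # hypothetical linearized forward operator
m_true = np.zeros(n_par); m_true[::10] = 1.0
d_obs = G @ m_true + 0.05 * rng.standard_normal(n_obs)

def invert(beta):
    """Minimize ||G m - d||^2 + beta * ||m - m_prior||^2 (m_prior = 0 here)."""
    A = G.T @ G + beta * np.eye(n_par)
    return np.linalg.solve(A, G.T @ d_obs)

for beta in (1e-3, 1e-1, 1e1):               # stronger regularization pulls m toward the prior
    m_est = invert(beta)
    misfit = np.linalg.norm(G @ m_est - d_obs)
    print(f"beta={beta:g}  data misfit={misfit:.3f}  ||m||={np.linalg.norm(m_est):.3f}")
```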
3

Park, Inseok. "Quantification of Multiple Types of Uncertainty in Physics-Based Simulation". Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1348702461.

Full text of the source
4

Blumer, Joel David. "Cross-scale model validation with aleatory and epistemic uncertainty". Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53571.

Full text of the source
Abstract:
Nearly every decision must be made with a degree of uncertainty regarding the outcome. Decision making based on modeling and simulation predictions needs to incorporate and aggregate uncertain evidence. To validate multiscale simulation models, it may be necessary to consider evidence collected at a length scale that is different from the one at which a model predicts. In addition, traditional methods of uncertainty analysis do not distinguish between two types of uncertainty: uncertainty due to inherently random inputs, and uncertainty due to lack of information about the inputs. This thesis examines and applies a Bayesian approach for model parameter validation that uses generalized interval probability to separate these two types of uncertainty. A generalized interval Bayes’ rule (GIBR) is used to combine the evidence and update belief in the validity of parameters. The sensitivity of completeness and soundness for interval range estimation in GIBR is investigated. Several approaches to represent complete ignorance of probabilities’ values are tested. The result from the GIBR method is verified using Monte Carlo simulations. The method is first applied to validate the parameter set for a molecular dynamics simulation of defect formation due to radiation. Evidence is supplied by the comparison with physical experiments. Because the simulation includes variables whose effects are not directly observable, an expanded form of GIBR is implemented to incorporate the uncertainty associated with measurement in belief update. In a second example, the proposed method is applied to combining the evidence from two models of crystal plasticity at different length scales.
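The following toy example (not the generalized interval Bayes' rule developed in the thesis) illustrates the basic idea of propagating interval-valued probabilities through Bayes' rule for a binary hypothesis; the interval endpoints are hypothetical, and the bounds follow from the monotonicity of the posterior in each argument.

```python
def posterior(prior, like_h, like_not_h):
    """Standard Bayes' rule for a binary hypothesis H given evidence E."""
    return like_h * prior / (like_h * prior + like_not_h * (1.0 - prior))

# Hypothetical epistemic intervals: (lower, upper) bounds on each probability.
prior_h      = (0.30, 0.50)   # P(H)
like_e_h     = (0.70, 0.90)   # P(E | H)
like_e_not_h = (0.10, 0.30)   # P(E | not H)

# The posterior increases with P(H) and P(E|H) and decreases with P(E|not H),
# so the extreme endpoint combinations give the posterior bounds.
post_lo = posterior(prior_h[0], like_e_h[0], like_e_not_h[1])
post_hi = posterior(prior_h[1], like_e_h[1], like_e_not_h[0])
print(f"P(H | E) lies in [{post_lo:.3f}, {post_hi:.3f}]")
```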
5

Ezvan, Olivier. "Multilevel model reduction for uncertainty quantification in computational structural dynamics". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1109/document.

Full text of the source
Abstract:
This work deals with an extension of the classical construction of reduced-order models (ROMs) that are obtained through modal analysis in computational linear structural dynamics. It is based on a multilevel projection strategy and devoted to complex structures with uncertainties. Nowadays, it is well recognized that the predictions in structural dynamics over a broad frequency band obtained using a finite element model must be improved by taking into account the model uncertainties induced by modeling errors, whose role increases with frequency. In such a framework, the nonparametric probabilistic approach to uncertainties is used, which requires the introduction of a ROM. Consequently, these two aspects, the frequency evolution of the uncertainties and reduced-order modeling, lead us to consider the development of a multilevel ROM in computational structural dynamics, which has the capability to adapt the level of uncertainties to each part of the frequency band. In this thesis, we are interested in the dynamical analysis of complex structures in a broad frequency band. By a complex structure we mean a structure with complex geometry, made of heterogeneous materials and, more specifically, characterized by the presence of several structural levels, for instance, a structure that is made up of a stiff main part embedding various flexible sub-parts. For such structures, it is possible to have, in addition to the usual global-displacement elastic modes associated with the stiff skeleton, the appearance of numerous local elastic modes, which correspond to predominant vibrations of the flexible sub-parts. For such complex structures, the modal density may increase substantially starting in the low frequencies, leading to high-dimensional ROMs with the modal analysis method (with potentially thousands of elastic modes at low frequencies). In addition, such ROMs may suffer from a lack of robustness with respect to uncertainty, because of the presence of the numerous local displacements, which are known to be very sensitive to uncertainties. It should be noted that, in contrast to the usual long-wavelength global displacements of the low-frequency (LF) band, the local displacements associated with the structural sub-levels, which can then also appear in the LF band, are characterized by short wavelengths, similarly to high-frequency (HF) displacements. As a result, for the complex structures considered, there is an overlap of the three vibration regimes, LF, MF, and HF, and numerous local elastic modes are intertwined with the usual global elastic modes. This implies two major difficulties, pertaining to uncertainty quantification and to computational efficiency. The objective of this thesis is thus twofold. First, to provide a multilevel stochastic ROM that is able to take into account the heterogeneous variability introduced by the overlap of the three vibration regimes. Second, to provide a predictive ROM whose dimension is decreased with respect to the classical ROM of the modal analysis method. A general method is presented for the construction of a multilevel ROM, based on three orthogonal reduced-order bases (ROBs) whose displacements are either LF-, MF-, or HF-type displacements (associated with the overlapping LF, MF, and HF vibration regimes). The construction of these ROBs relies on a filtering strategy that is based on the introduction of global shape functions for the kinetic energy (in contrast to the local shape functions of the finite elements).
Implementing the nonparametric probabilistic approach in the multilevel ROM allows each type of displacement to be assigned its own level of uncertainty. The method is applied to a car, for which the multilevel stochastic ROM is identified with respect to experiments by solving a statistical inverse problem. The proposed ROM achieves a reduced dimension as well as an improved prediction with respect to a classical stochastic ROM.
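For orientation, here is a toy sketch of the classical modal-analysis ROM that the multilevel approach extends: a spring-mass chain is reduced by projecting onto its lowest-frequency modes. The chain, matrices, and truncation order are illustrative assumptions, not the automotive model of the thesis.

```python
import numpy as np
from scipy.linalg import eigh

n = 50                                        # hypothetical number of degrees of freedom
k, m = 1.0e4, 1.0                             # spring stiffness and lumped mass
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)   # fixed-fixed chain
M = m * np.eye(n)

w2, Phi = eigh(K, M)                          # generalized eigenproblem K x = w^2 M x
n_modes = 6                                   # keep the lowest-frequency modes (the ROB)
Phi_r = Phi[:, :n_modes]

# Reduced matrices: with mass-normalized modes these are identity and diag(w^2).
K_r = Phi_r.T @ K @ Phi_r
M_r = Phi_r.T @ M @ Phi_r

f = np.zeros(n); f[n // 2] = 1.0              # load applied at mid-span
u_full = np.linalg.solve(K, f)
u_rom = Phi_r @ np.linalg.solve(K_r, Phi_r.T @ f)
print("relative error of the 6-mode ROM:",
      np.linalg.norm(u_full - u_rom) / np.linalg.norm(u_full))
```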
6

Chiang, Shen. "Hydrological model comparison and refinement through uncertainty recognition and quantification". 京都大学 (Kyoto University), 2005. http://hdl.handle.net/2433/144539.

Full text of the source
7

Riley, Matthew E. "Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design". Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1314895435.

Full text of the source
8

Rashidi, Mehrabadi Niloofar. "Power Electronics Design Methodologies with Parametric and Model-Form Uncertainty Quantification". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82934.

Full text of the source
Abstract:
Modeling and simulation have become fully ingrained into the set of design and development tools that are broadly used in the field of power electronics. Simply stated, they represent the fastest and safest way to study a circuit or system, thus aiding in the research, design, diagnosis, and debugging phases of power converter development. Advances in computing technologies have also enabled the ability to conduct reliability and production yield analyses to ensure that the system performance can meet given requirements despite the presence of inevitable manufacturing variability and variations in the operating conditions. However, the trustworthiness of all model-based design techniques depends entirely on the accuracy of the simulation models used, which, thus far, has not been fully considered. Prior to this research, heuristic safety factors were used to compensate for the deviation of real system performance from the predictions made using modeling and simulation. This approach invariably resulted in a more conservative design process. In this research, a modeling and design approach with parametric and model-form uncertainty quantification is formulated to bridge the modeling and simulation accuracy and reliance gaps that have hindered the full exploitation of model-based design techniques. Prior to this research, a few design approaches were developed to account for variability in the design process; these approaches have not been shown to be applicable to complex systems. This research, however, demonstrates that the implementation of the proposed modeling approach is able to handle complex power converters and systems. A systematic study for developing a simplified test bed for uncertainty quantification analysis is introduced accordingly. For illustrative purposes, the proposed modeling approach is applied to the switching model of a modular multilevel converter to improve the existing modeling practice and validate the model used in the design of this large-scale power converter. The proposed modeling and design methodology is also extended to design optimization, where a robust multi-objective design and optimization approach with parametric and model-form uncertainty quantification is proposed. A sensitivity index is defined accordingly as a quantitative measure of system design robustness with regard to manufacturing variability and modeling inaccuracies in the design of systems with multiple performance functions. The optimum design solution is realized by exploring the Pareto front of the enhanced performance space, where the model-form error associated with each design is used to modify the estimated performance measures. The parametric sensitivity of each design point is also considered to discern between cases and help identify the most parametrically robust of the Pareto-optimal design solutions. To demonstrate the benefits of incorporating uncertainty quantification analysis into the design optimization from a more practical standpoint, a Vienna-type rectifier is used as a case study to compare the theoretical analysis with a comprehensive experimental validation. This research shows that the model-form error and sensitivity of each design point can potentially change the performance space and the resultant Pareto front. As a result, ignoring these main sources of uncertainty in the design will result in incorrect decision-making and the choice of a design that is not an optimum design solution in practice.
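As a simplified illustration of adjusting estimated performance by a model-form error margin before selecting Pareto-optimal designs, the sketch below filters hypothetical two-objective design candidates; the designs, objectives (both minimized), and error bounds are invented for illustration and are unrelated to the converters studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(2)
perf = rng.random((200, 2))                   # estimated performance of 200 candidate designs
model_form_err = 0.05 * rng.random((200, 2))  # hypothetical per-design model-form error bound
perf_adj = perf + model_form_err              # pessimistic adjustment (both objectives minimized)

def pareto_mask(points):
    """True for points not dominated by any other point (minimization in every column)."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominators = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominators.any():
            mask[i] = False
    return mask

front = np.where(pareto_mask(perf_adj))[0]
print("designs on the adjusted Pareto front:", front)
```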
Ph. D.
9

Xie, Yimeng. "Advancements in Degradation Modeling, Uncertainty Quantification and Spatial Variable Selection". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71687.

Full text of the source
Abstract:
This dissertation focuses on three research projects: 1) construction of simultaneous prediction intervals/bounds for at least k out of m future observations; 2) a semi-parametric degradation model for accelerated destructive degradation test (ADDT) data; and 3) spatial variable selection and application to Lyme disease data in Virginia. Following the general introduction in Chapter 1, the rest of the dissertation consists of three main chapters. Chapter 2 presents the construction of two-sided simultaneous prediction intervals (SPIs) or one-sided simultaneous prediction bounds (SPBs) to contain at least k out of m future observations, based on complete or right-censored data from the (log)-location-scale family of distributions. An SPI/SPB calculated by the proposed procedure has exact coverage probability for complete and Type II censored data. In the Type I censoring case, it has asymptotically correct coverage probability and reasonably good results for small samples. The proposed procedures can be extended to multiply-censored data or randomly censored data. Chapter 3 focuses on the analysis of ADDT data. We use a general degradation path model with a correlated covariance structure to describe ADDT data. Monotone B-splines are used to model the underlying degradation process. A likelihood-based iterative procedure for parameter estimation is developed. The confidence intervals of parameters are calculated using the nonparametric bootstrap procedure. Both simulated data and real datasets are used to compare the semi-parametric model with the existing parametric models. Chapter 4 studies the Lyme disease emergence in Virginia. The objective is to find important environmental and demographical covariates that are associated with Lyme disease emergence. To address the high-dimensional integral problem in the log-likelihood function, we consider the penalized quasi-log-likelihood and the approximated log-likelihood based on the Laplace approximation. We impose the adaptive elastic net penalty to obtain sparse estimation of parameters and thus achieve variable selection of important variables. The proposed methods are investigated in simulation studies. We also apply the proposed methods to the Lyme disease data in Virginia. Finally, Chapter 5 contains general conclusions and discussions of future work.
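The following minimal sketch illustrates the nonparametric bootstrap used for parameter confidence intervals, applied here to a simple sample mean rather than to B-spline coefficients; the data and settings are hypothetical and are not from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.gamma(shape=2.0, scale=1.5, size=60)   # hypothetical degradation measurements

n_boot = 2000
boot_stats = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)  # sample with replacement
    boot_stats[b] = resample.mean()                            # statistic of interest

lo, hi = np.percentile(boot_stats, [2.5, 97.5])                # percentile bootstrap interval
print(f"mean = {data.mean():.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```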
Ph. D.
10

Karlén, Johan. "Uncertainty Quantification of a Large 1-D Dynamic Aircraft System Simulation Model". Thesis, Linköpings universitet, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-120189.

Full text of the source
Abstract:
A 1-D dynamic simulation model of a new cooling system for the upcoming Gripen E aircraft has been developed in the Modelica-based tool Dymola in order to examine the cooling performance. These types of low-dimensional simulation models, which generally are described by ordinary differential equations or differential-algebraic equations, are often used to describe entire fluid systems. These equations are easier to solve than the partial differential equations used in 2-D and 3-D simulation models. Some approximations and assumptions about the physical system have to be made when developing this type of 1-D dynamic simulation model. The impact of these approximations and assumptions can be examined with an uncertainty analysis in order to increase the understanding of the simulation results. Most uncertainty analysis methods are not practically feasible when analyzing large 1-D dynamic simulation models with many uncertainties, which makes it important to simplify these methods so that they become practically feasible. This study was aimed at finding a method that is easy to realize with low computational expense and engineering workload. The evaluated simulation model consists of several sub-models that are linked together. These sub-models run much faster when simulated as standalone models, compared to running the total simulation model as a whole. It has been found that this feature of the sub-models can be utilized in an interval-based uncertainty analysis in which the uncertainty parameter settings that give the minimum and maximum simulation model response can be derived. The number of runs of the total simulation model needed to perform an uncertainty analysis is thereby significantly reduced. The interval-based method has been found to be sufficient for most simulations, since the control software in the simulation model regulates the liquid cooling temperature to a specific reference value. The control system might be able to keep this reference value even for the worst-case uncertainty combinations, in which case there is no need to further analyze these simulations with a more refined uncertainty propagation, such as a probabilistic propagation approach, where different uncertainty combinations are examined. While the interval-based uncertainty analysis method lacks probability information, it can still increase the understanding of the simulation results. It is also computationally inexpensive and does not rely on an accurate and time-consuming characterization of the probability distributions of the uncertainties. Uncertainties from all sub-models in the evaluated simulation model have not been included in the uncertainty analysis made in this thesis; these neglected sub-model uncertainties can be included using the interval-based method as future work. Also, a method for combining the interval-based method with aleatory uncertainties is proposed at the end of this thesis and can be examined further.
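A minimal sketch of the interval-based idea described above: a fast, hypothetical stand-in for a Dymola sub-model is evaluated at every corner combination of its epistemic uncertainty intervals to obtain the minimum and maximum response. The sub-model, parameters, and intervals are assumptions for illustration only.

```python
import itertools

def submodel_response(heat_load, coolant_flow, efficiency):
    """Hypothetical fast-running sub-model: steady-state coolant temperature rise [K]."""
    return heat_load / (coolant_flow * 4186.0 * efficiency)

# Epistemic uncertainty intervals (min, max) for each uncertain parameter.
intervals = {
    "heat_load":    (9.0e3, 11.0e3),   # W
    "coolant_flow": (0.18, 0.22),      # kg/s
    "efficiency":   (0.85, 0.95),      # -
}

responses = [
    submodel_response(*combo)
    for combo in itertools.product(*intervals.values())   # all 2^3 corner combinations
]
print(f"temperature rise lies in [{min(responses):.2f}, {max(responses):.2f}] K")
```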
11

Whiting, Nolan Wagner. "Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91903.

Full text of the source
Abstract:
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model-form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with a sparse amount of data and/or large extrapolations, while Bayesian calibration outperformed the others where an extensive amount of experimental data covers the application domain.
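For reference, the following is a small, self-contained sketch of the area validation metric: the area between the simulation and experimental empirical CDFs. The samples are synthetic placeholders, not the airfoil data of the thesis.

```python
import numpy as np

def ecdf(samples, x):
    """Empirical CDF of `samples` evaluated at points `x`."""
    s = np.sort(samples)
    return np.searchsorted(s, x, side="right") / s.size

def area_validation_metric(sim, exp):
    """Area between the two empirical CDFs (the AVM), integrated over the pooled support."""
    grid = np.sort(np.concatenate([sim, exp]))
    widths = np.diff(grid)
    gap = np.abs(ecdf(sim, grid[:-1]) - ecdf(exp, grid[:-1]))  # CDFs are constant on each interval
    return np.sum(gap * widths)

rng = np.random.default_rng(4)
sim_lift = rng.normal(1.20, 0.05, size=500)    # hypothetical simulation outcomes (lift coefficient)
exp_lift = rng.normal(1.25, 0.04, size=8)      # hypothetical sparse experimental replicates
print(f"area validation metric = {area_validation_metric(sim_lift, exp_lift):.4f}")
```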
Master of Science
Uncertainty often exists when conducting physical experiments, and whether it stems from input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties (for which a probability distribution describes the likelihood of drawing values) or epistemic uncertainties (lack of knowledge, with inputs drawn from within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainty in the simulation outside the region in which experimental observations were made and where model-form uncertainties could be observed.
12

Tavares, Ivo Alberto Valente. "Uncertainty quantification with a Gaussian Process Prior : an example from macroeconomics". Doctoral thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21444.

Full text of the source
Abstract:
Doctorate in Mathematics Applied to Economics and Management
This thesis may be broadly divided into four parts. In the first part, we review the state of the art in misspecification in macroeconomics and the contribution that a relatively new area of research, Uncertainty Quantification, has so far made to the subject. These reviews are essential to contextualize the contribution of this thesis to research dedicated to correcting non-linear misspecification and to accounting for several other sources of uncertainty when modelling from an economic perspective. In the next three parts, we give an example, using the same simple DSGE model from macroeconomic theory, of how researchers may quantify uncertainty in a state-space model using a discrepancy term with a Gaussian Process prior. In the second part of the thesis, we use a full Gaussian Process (GP) prior on the discrepancy term. Our experiments showed that, despite the heavy computational constraints of our full GP method, we still obtained very interesting forecasting performance with such a restricted sample size, when compared with similar uncorrected DSGE models or DSGE models corrected using state-of-the-art methods for time series, such as imposing a VAR on the observation error of the state-space model. In the third part of our work, we improved the computational performance of our previous method using what has been referred to in the literature as a Hilbert Reduced Rank GP. This method has close links to functional analysis, the spectral theorem for normal operators, and partial differential equations. It did improve the computational processing time, albeit only slightly, and was accompanied by a similarly slight decrease in forecasting performance. The fourth part of our work examined how our method accounts for model uncertainty just prior to, and during, the great financial crisis of 2007-2009. Our technique allowed us to capture the crisis, albeit with reduced applicability, possibly due to computational constraints. This latter part was also used to deepen our understanding of the model uncertainty quantification technique with a GP. Identifiability issues were also studied. One of our overall conclusions was that more research is needed, especially regarding the computational performance of either method, before this uncertainty quantification technique can be used as part of the toolbox of central bankers and researchers for forecasting economic fluctuations.
13

Balch, Michael Scott. "Methods for Rigorous Uncertainty Quantification with Application to a Mars Atmosphere Model". Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30115.

Full text of the source
Abstract:
The purpose of this dissertation is to develop and demonstrate methods appropriate for the quantification and propagation of uncertainty in large, high-consequence engineering projects. The term "rigorous uncertainty quantification" refers to methods equal to the proposed task. The motivating practical example is uncertainty in a Mars atmosphere model due to the incompletely characterized presence of dust. The contributions made in this dissertation, though primarily mathematical and philosophical, are driven by the immediate needs of engineers applying uncertainty quantification in the field. Arguments are provided to explain how the practical needs of engineering projects like Mars lander missions motivate the use of the objective probability bounds approach, as opposed to the subjectivist theories which dominate uncertainty quantification in many research communities. An expanded formalism for Dempster-Shafer structures is introduced, allowing for the representation of continuous random variables and fuzzy variables as Dempster-Shafer structures. Then, the correctness and incorrectness of probability bounds analysis and the Cartesian product propagation method for Dempster-Shafer structures under certain dependency conditions are proven. It is also conclusively demonstrated that there exist some probability bounds problems in which the best-possible bounds on probability can not be represented using Dempster-Shafer structures. Nevertheless, Dempster-Shafer theory is shown to provide a useful mathematical framework for a wide range of probability bounds problems. The dissertation concludes with the application of these new methods to the problem of propagating uncertainty from the dust parameters in a Mars atmosphere model to uncertainty in that model's prediction of atmospheric density. A thirty-day simulation of the weather at Holden Crater on Mars is conducted using a meso-scale atmosphere model, MRAMS. Although this analysis only addresses one component of Mars atmosphere uncertainty, it demonstrates the applicability of probability bounds methods in practical engineering work. More importantly, the Mars atmosphere uncertainty analysis provides a framework in which to conclusively establish the practical importance of epistemology in rigorous uncertainty quantification.
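As a simple illustration of the probability-bounds machinery discussed above, the sketch below converts a finite Dempster-Shafer structure (focal intervals with masses) into lower and upper bounds on a CDF, i.e., a p-box; the focal elements are hypothetical and unrelated to the Mars atmosphere application.

```python
import numpy as np

# Hypothetical Dempster-Shafer structure: (interval, mass) pairs, masses summing to 1.
focal = [((0.0, 2.0), 0.3), ((1.0, 3.0), 0.5), ((2.5, 4.0), 0.2)]

def cdf_bounds(x):
    """Lower/upper bounds on P(X <= x) implied by the Dempster-Shafer structure."""
    lower = sum(m for (a, b), m in focal if b <= x)   # belief: focal element certainly below x
    upper = sum(m for (a, b), m in focal if a <= x)   # plausibility: focal element possibly below x
    return lower, upper

for x in np.linspace(0.0, 4.0, 9):
    lo, hi = cdf_bounds(x)
    print(f"x = {x:4.1f}   P(X <= x) in [{lo:.2f}, {hi:.2f}]")
```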
Ph. D.
14

Amarchinta, Hemanth. "Uncertainty Quantification of Residual Stresses Induced By Laser Peening Simulation". Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1278028187.

Full text of the source
15

Conrad, Yvonne [Verfasser]. "Model-based quantification of nitrate-nitrogen leaching considering sources of uncertainty / Yvonne Conrad". Kiel : Universitätsbibliothek Kiel, 2017. http://d-nb.info/1128149249/34.

Full text of the source
16

Kamilis, Dimitrios. "Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31415.

Full text of the source
Abstract:
Uncertainty Quantification (UQ) has been an active area of research in recent years with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty stems from an unknown parameter in the model. In physical and engineering systems for example, the parameters of the partial differential equation (PDE) that model the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining the aforementioned forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM) that aims to detect and image hydrocarbon reservoirs by using electromagnetic field (EM) measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, algorithms for deterministic models have been employed to solve the inverse problem in CSEM by optimisation and regularisation methods, which aside from the image reconstruction provide no quantitative information on the credibility of its features. This work employs instead stochastic models where the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions. For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measure. One of the main challenges is thus the approximation of these integrals, with the standard choice being some variant of the Monte-Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here and under certain assumptions, we prove that for forward UQ, Sparse Quadrature can attain dimension-independent convergence rates that out-perform MC. Typical CSEM models are large-scale and thus additional effort is made in this work to reduce the cost of obtaining forward solutions for each sampling parameter by utilising the weighted Reduced Basis method (RB) and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, thus providing a viable methodology for practical applications.
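The following one-dimensional illustration shows why structured quadrature can outperform Monte Carlo for smooth integrands: the expectation of a hypothetical quantity of interest over a lognormal parameter is estimated with Gauss-Hermite quadrature and with plain MC. The integrand and parameter values are assumptions; the Sparse Quadrature of the thesis generalizes this idea to high-dimensional parameter spaces.

```python
import numpy as np

sigma = 0.5                                   # hypothetical lognormal conductivity: exp(sigma * Z)
g = lambda kappa: 1.0 / (1.0 + kappa)         # hypothetical smooth quantity of interest

# Reference value of E[g(exp(sigma Z))], Z ~ N(0,1), via a fine Gauss-Hermite rule.
x_ref, w_ref = np.polynomial.hermite.hermgauss(60)
reference = np.sum(w_ref * g(np.exp(sigma * np.sqrt(2.0) * x_ref))) / np.sqrt(np.pi)

# Gauss-Hermite with only a handful of nodes.
x, w = np.polynomial.hermite.hermgauss(8)
quad_est = np.sum(w * g(np.exp(sigma * np.sqrt(2.0) * x))) / np.sqrt(np.pi)

# Monte Carlo with many more model evaluations.
rng = np.random.default_rng(5)
mc_est = g(np.exp(sigma * rng.standard_normal(10_000))).mean()

print(f"8-node quadrature error : {abs(quad_est - reference):.2e}")
print(f"10000-sample MC error   : {abs(mc_est - reference):.2e}")
```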
17

Smit, Jacobus Petrus Johannes. "The quantification of prediction uncertainty associated with water quality models using Monte Carlo Simulation". Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85814.

Full text of the source
Abstract:
Thesis (MEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Water Quality Models are mathematical representations of ecological systems and they play a major role in the planning and management of water resources and aquatic environments. Important decisions concerning capital investment and environmental consequences often rely on the results of Water Quality Models, and it is therefore very important that decision makers are aware of and understand the uncertainty associated with these models. The focus of this study was on the use of Monte Carlo Simulation for the quantification of prediction uncertainty associated with Water Quality Models. Two types of uncertainty exist: epistemic uncertainty and aleatory uncertainty. Epistemic uncertainty is a result of a lack of knowledge and aleatory uncertainty is due to the natural variability of an environmental system. It is very important to distinguish between these two types of uncertainty because the analysis of a model's uncertainty depends on it. Three different configurations of Monte Carlo Simulation in the analysis of uncertainty were discussed and illustrated: Single Phase Monte Carlo Simulation (SPMCS), Two Phase Monte Carlo Simulation (TPMCS) and Parameter Monte Carlo Simulation (PMCS). Each configuration of Monte Carlo Simulation has its own objective in the analysis of a model's uncertainty and depends on the distinction between the types of uncertainty. As an experiment, a hypothetical river was modelled using the Streeter-Phelps model and synthetic data was generated for the system. The generation of the synthetic data allowed the experiment to be performed under controlled conditions. The modelling protocol followed in the experiment included two uncertainty analyses. All three configurations of Monte Carlo Simulation were used in these uncertainty analyses to quantify the model's prediction uncertainty in fulfilment of their different objectives. The first uncertainty analysis, known as the preliminary uncertainty analysis, was performed to take stock of the model's situation concerning uncertainty before any effort was made to reduce the model's prediction uncertainty. The idea behind the preliminary uncertainty analysis was that it would help in further modelling decisions with regard to calibration and parameter estimation experiments. Parameter uncertainty was reduced by the calibration of the model. Once parameter uncertainty was reduced, the second uncertainty analysis, known as the confirmatory uncertainty analysis, was performed to confirm that the uncertainty associated with the model was indeed reduced. The two uncertainty analyses were conducted in exactly the same way. To conclude the experiment, it was illustrated how the quantification of the model's prediction uncertainty aided in the calculation of a Total Maximum Daily Load (TMDL). The Margin of Safety (MOS) included in the TMDL could be determined based on scientific information provided by the uncertainty analysis. The total MOS assigned to the TMDL was -35% of the mean load allocation for the point source. For the sake of simplicity, load allocations from non-point sources were disregarded.
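A minimal single-phase Monte Carlo sketch in the spirit described above, propagating parameter uncertainty through the classical Streeter-Phelps dissolved-oxygen deficit equation; the distributions and parameter values are hypothetical and are not those of the study.

```python
import numpy as np

def streeter_phelps_deficit(t, L0, D0, kd, kr):
    """Dissolved-oxygen deficit [mg/L] at travel time t [d] (classical Streeter-Phelps)."""
    return kd * L0 / (kr - kd) * (np.exp(-kd * t) - np.exp(-kr * t)) + D0 * np.exp(-kr * t)

rng = np.random.default_rng(6)
n = 10_000
L0 = rng.normal(10.0, 1.0, n)              # initial BOD [mg/L], hypothetical uncertainty
D0 = rng.normal(1.0, 0.2, n)               # initial deficit [mg/L]
kd = rng.lognormal(np.log(0.3), 0.1, n)    # deoxygenation rate [1/d]
kr = rng.lognormal(np.log(0.7), 0.1, n)    # reaeration rate [1/d]

t = 2.0                                    # travel time of interest [d]
deficit = streeter_phelps_deficit(t, L0, D0, kd, kr)
print(f"mean deficit = {deficit.mean():.2f} mg/L, "
      f"95th percentile = {np.percentile(deficit, 95):.2f} mg/L")
```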
18

Gatian, Katherine N. "A quantitative, model-driven approach to technology selection and development through epistemic uncertainty reduction". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53636.

Full text of the source
Abstract:
When aggressive aircraft performance goals are set, the integration of new, advanced technologies into next-generation aircraft concepts is required to bridge the gap between current capabilities and required capabilities. A large number of technologies exist that can be pursued, and only a subset may practically be selected to reach the chosen objectives. Additionally, the appropriate numerical and physical experimentation must be identified to further develop the selected technologies. These decisions must be made under a large amount of uncertainty because developing technologies introduce phenomena that have not been previously characterized. Traditionally, technology selection decisions are made based on deterministic performance assessments that do not capture the uncertainty of the technology impacts. Model-driven environments and new, advanced uncertainty quantification techniques provide the ability to characterize technology impact uncertainties and pinpoint how they drive the system performance, which aids technology selection decisions. Moreover, the probabilistic assessments can be used to plan experimentation that facilitates uncertainty reduction by targeting uncertainty sources with large performance impacts. The thesis formulates and implements a process that allows for risk-informed decision making throughout technology development. It focuses on quantifying technology readiness risk and performance risk by synthesizing quantitative, probabilistic performance information with qualitative readiness assessments. The Quantitative Uncertainty Modeling, Management, and Mitigation (QuantUM3) methodology was tested through the use of an environmentally motivated aircraft design case study based upon NASA's Environmentally Responsible Aviation (ERA) technology development program. A physics-based aircraft design environment was created that has the ability to provide quantitative system-level performance assessments and was employed to model the technology impacts as probability distributions, to facilitate the development of the overall process required to enable risk-informed technology and experimentation decisions. The outcome of the experimental efforts was a detailed outline of the entire methodology and a confirmation that the methodology enables risk-informed technology development decisions with respect to both readiness risk and performance risk. Furthermore, a new process for communicating technology readiness through morphological analysis was created, as well as an experiment design process that utilizes the readiness information and quantitative uncertainty analysis to simultaneously increase readiness and decrease technology performance uncertainty.
19

Kim, Jee Yun. "Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials". Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85210.

Full text of the source
Abstract:
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design, providing voxel-level control of material properties. This vast freedom in the design space also unlocks possibilities for optimization of the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on efficient search of the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven to be insufficient, which necessitates novel and effective approaches to calibration. A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. Calibration results provide insights into what values these parameters converge to, as well as which material parameters the model output depends on most, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is meant to be a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model is intended to account for the difference between the simulation output and the physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty.
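The following compact sketch illustrates Bayesian calibration of a single unknown parameter by random-walk Metropolis sampling, combining a cheap stand-in model with noisy synthetic "experimental" data; the model, prior, and data are hypothetical, and no surrogate or discrepancy model is included here.

```python
import numpy as np

def model(theta, x):
    """Hypothetical cheap forward model (stands in for an FEA-based surrogate)."""
    return theta * x ** 2

rng = np.random.default_rng(7)
x_obs = np.linspace(0.1, 1.0, 10)
theta_true, noise_sd = 2.5, 0.05
y_obs = model(theta_true, x_obs) + noise_sd * rng.standard_normal(x_obs.size)

def log_post(theta):
    """Gaussian likelihood plus a wide Gaussian prior on theta."""
    resid = y_obs - model(theta, x_obs)
    return -0.5 * np.sum((resid / noise_sd) ** 2) - 0.5 * ((theta - 2.0) / 5.0) ** 2

samples, theta = [], 2.0
for _ in range(20_000):                      # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])              # discard burn-in
print(f"posterior mean = {post.mean():.3f}, 95% CI = {np.percentile(post, [2.5, 97.5])}")
```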
Master of Science
A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer manufacturing process of additive manufacturing can allow for this type of design because of its control over where material can be deposited. This possibility then raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction; however, this approach presents many issues, such as long run times and uncertainty in context-specific inputs of the simulation. We have instead adopted a framework using advanced statistical methodology that is able to combine both experimental and simulation data to significantly reduce run times and to quantify the various uncertainties associated with running simulations.
20

Ricciardi, Denielle E. "Uncertainty Quantification and Propagation in Materials Modeling Using a Bayesian Inferential Framework". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587473424147276.

Full text of the source
21

Shi, Hongxiang. "Hierarchical Statistical Models for Large Spatial Data in Uncertainty Quantification and Data Fusion". University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504802515691938.

Full text of the source
22

Andersson, Hjalmar. "Inverse Uncertainty Quantification using deterministic sampling : An intercomparison between different IUQ methods". Thesis, Uppsala universitet, Tillämpad kärnfysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447070.

Texto completo da fonte
Resumo:
In this thesis, two novel methods for Inverse Uncertainty Quantification are benchmarked against the more established methods of Monte Carlo sampling of output parameters (MC) and Maximum Likelihood Estimation (MLE). Inverse Uncertainty Quantification (IUQ) is the process of estimating the values of the input parameters in a simulation, and the uncertainty of that estimate, given a measurement of the output parameters. The two new methods are Deterministic Sampling (DS) and Weight Fixing (WF). Deterministic Sampling uses a set of sampled points chosen so that the set has the same statistics as the output; for each output point, the corresponding input point is found, which makes it possible to calculate the statistics of the input. Weight Fixing uses random samples from the rough region around the input to set up a linear problem of finding weights such that the output has the right statistics. The benchmarking of the four methods shows that both DS and WF are comparable in accuracy to MC and MLE in most cases tested in this thesis. It was also found that DS and WF use approximately the same number of function calls as MLE, and that all three methods require far fewer calls to the simulation than MC. It was discovered that WF is not always able to find a solution, probably because the methods currently used within WF are not optimal for their task; finding better methods for WF is a topic for further investigation.
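A minimal sketch of the deterministic-sampling idea described above, for a scalar model: an ensemble of output points reproducing the measured mean and standard deviation is mapped back to input points by root finding, and the input statistics are read off the ensemble. The two-point ensemble, the toy forward model f, the bracketing interval, and the measured statistics are illustrative assumptions, not the thermal-hydraulics cases of the thesis.

    import numpy as np
    from scipy.optimize import brentq

    def f(x):                      # toy forward model (assumed for illustration)
        return x**2 + 0.5 * x

    y_mean, y_std = 3.0, 0.2       # measured output statistics (assumed)

    # Two-point deterministic ensemble reproducing the output mean and variance
    y_points = np.array([y_mean - y_std, y_mean + y_std])

    # Invert the forward model for each ensemble member (bracket assumed known)
    x_points = np.array([brentq(lambda x, y=y: f(x) - y, 0.0, 10.0) for y in y_points])

    x_mean = x_points.mean()
    x_std = x_points.std(ddof=0)   # ensemble spread as the input uncertainty estimate
    print(x_mean, x_std)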
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Kacker, Shubhra. "The Role of Constitutive Model in Traumatic Brain Injury Prediction". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563874757653453.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Vilhelmsson, Markus, e Isac Strömberg. "Investigating Validation of a Simulation Model for Development and Certification of Future Fighter Aircraft Fuel Systems". Thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129300.

Texto completo da fonte
Resumo:
In this thesis, a method for verification, validation and uncertainty quantification (VV&UQ) has been tested and evaluated on a fuel transfer application in the fuel rig currently used at Saab. A simplified model has been developed for the limited part of the fuel system in the rig that is affected by the transfer, and VV&UQ has been performed on this model. The scope of the thesis has been to investigate whether and how simulation models can be used for certification of the fuel system in a fighter aircraft. The VV&UQ analysis was performed with the limitation that no probability distributions for uncertainties were considered; instead, all uncertainties were described using intervals (so-called epistemic uncertainties). Simulations were performed at five different operating points in terms of fuel flow to the engine, with five different initial conditions for each, resulting in 25 different operating modes. For each of the 25 cases, the VV&UQ resulted in a minimum and a maximum limit for how much fuel could be transferred. Six cases were chosen for validation measurements, and the resulting amounts of fuel transferred fell within the corresponding epistemic intervals. Performing VV&UQ is a time-demanding and computationally heavy task, and the effort grows quickly as the model becomes more complex. Our conclusion is that a pilot study, in which time and costs are evaluated, is necessary before choosing to use a simulation model and perform VV&UQ for certification. Further investigation of different methods for increasing confidence in simulation models is also needed, and VV&UQ is one suitable option among them.
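A minimal sketch of the interval (epistemic) propagation used in this kind of VV&UQ study: for each operating case, the transferred fuel mass is minimised and maximised over the box of uncertain parameters. The simple transfer model, the parameter intervals, and the operating points are illustrative assumptions, not Saab's rig model.

    import numpy as np
    from scipy.optimize import minimize

    def fuel_transferred(p, flow_demand):
        # Toy transfer model: p = (pump_efficiency, leak_coeff), assumed for illustration
        pump_eff, leak = p
        return pump_eff * flow_demand * 600.0 - leak * 50.0   # kg over a 10-minute transfer

    bounds = [(0.85, 0.95), (0.0, 0.2)]          # epistemic intervals for the two parameters

    for flow_demand in [0.5, 1.0, 1.5]:          # operating points in kg/s (assumed)
        x0 = np.array([np.mean(b) for b in bounds])
        lo = minimize(lambda p: fuel_transferred(p, flow_demand), x0, bounds=bounds).fun
        hi = -minimize(lambda p: -fuel_transferred(p, flow_demand), x0, bounds=bounds).fun
        print(f"flow {flow_demand}: transferred fuel in [{lo:.1f}, {hi:.1f}] kg")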
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Cheng, Xi. "Quantification of the parametric uncertainty in the specific absorption rate calculation of a mobile phone". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS258/document.

Texto completo da fonte
Resumo:
This thesis focuses on parameter uncertainty quantification (UQ) in specific absorption rate (SAR) calculation using a computer-aided design (CAD) mobile phone model. The impact of uncertainty, e.g., lack of detailed knowledge about material electrical properties, system geometrical features, etc., on the SAR calculation is quantified by three computationally efficient non-intrusive UQ methods: unscented transformation (UT), stochastic collocation (SC) and non-intrusive polynomial chaos (NIPC). They are called non-intrusive methods because the simulation process is simply treated as a black box, without changing the code of the simulation solver. Their performances for the cases of one and two random variables are analysed. In contrast to the traditional uncertainty analysis method, the Monte Carlo method, the computation time becomes acceptable. To simplify the UQ procedure for the case of multiple uncertain inputs, it is demonstrated that uncertainties can be combined to evaluate the parameter uncertainty of the output. Combining uncertainties is an approach generally used in the field of measurement; in this thesis, it is applied to SAR calculations in the complex situation. One of the necessary steps in the framework of uncertainty analysis is sensitivity analysis (SA), which aims at quantifying the relative importance of each uncertain input parameter with respect to the uncertainty of the output. A polynomial chaos (PC) based Sobol' indices method, in which the SA indices are evaluated by PC expansion instead of the Monte Carlo method, is used in the SAR calculation. The results of the investigations are presented and discussed. In order to make the reading easier, elementary notions of SAR, modelling, uncertainty in modelling, and probability theory are given in the introduction (chapter 1). The main content of the thesis is then presented in chapters 2 and 3. In chapter 4, another approach to using PC expansion is given, applied within the finite-difference time-domain (FDTD) code; since the FDTD code in the simulation solver has to be changed, this is the so-called intrusive PC expansion, which has already been investigated in detail elsewhere. In chapter 5, conclusions and future work are given.
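A minimal sketch of the unscented transformation mentioned above, for one Gaussian input variable: 2n+1 sigma points are pushed through the solver and re-weighted to estimate the output mean and variance. The toy "SAR model", the input distribution, and the scaling parameter kappa are illustrative assumptions; the real computation would call the electromagnetic solver.

    import numpy as np

    def sar_model(perm):            # stand-in for the EM solver (assumed for illustration)
        return 1.2 / (1.0 + 0.3 * perm)

    mu, sigma2 = 40.0, 4.0          # mean and variance of the uncertain permittivity (assumed)
    n, kappa = 1, 2.0               # dimension and scaling parameter

    # Sigma points and weights of the classical unscented transformation
    spread = np.sqrt((n + kappa) * sigma2)
    points = np.array([mu, mu + spread, mu - spread])
    weights = np.array([kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)])

    y = np.array([sar_model(p) for p in points])
    y_mean = np.sum(weights * y)
    y_var = np.sum(weights * (y - y_mean) ** 2)
    print(y_mean, y_var)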
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Wang, Jianxun. "Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/77035.

Texto completo da fonte
Resumo:
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although an increased availability of computational resources has enabled high-fidelity simulations (e.g. large eddy simulation and direct numerical simulation) of turbulent flows, models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both on-line and off-line data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows: First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse on-line measurement data and empirical prior knowledge for a full-field inversion. The merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random matrix theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. The maximum entropy principle is employed to identify the probability distribution that satisfies the given constraints without introducing artificial information. Objective prior perturbations of RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between the physics-based and random matrix theoretic approaches. Finally, a physics-informed machine learning framework for predictive RANS turbulence modeling is proposed. The functional forms of the model discrepancies with respect to mean flow features are extracted from an off-line database of closely related flows using machine learning algorithms. The RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, which is an important step towards predictive turbulence modeling.
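A minimal sketch of the kind of iterative ensemble Kalman update described above: an ensemble of uncertain parameters is corrected toward sparse observations using the ensemble cross-covariances. The linear toy observation operator, ensemble size, noise level, and synthetic data are illustrative assumptions, not the RANS setup of the dissertation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_ens, n_state, n_obs = 50, 4, 2
    H = rng.normal(size=(n_obs, n_state))        # toy observation operator (assumed)
    R = 0.05 * np.eye(n_obs)                     # observation-error covariance (assumed)
    d = np.array([1.0, -0.5])                    # sparse observations (assumed)

    X = rng.normal(size=(n_state, n_ens))        # prior ensemble of uncertain parameters

    for _ in range(5):                           # a few Kalman iterations
        Y = H @ X                                # predicted observations per member
        Xa, Ya = X - X.mean(1, keepdims=True), Y - Y.mean(1, keepdims=True)
        Cxy = Xa @ Ya.T / (n_ens - 1)
        Cyy = Ya @ Ya.T / (n_ens - 1)
        K = Cxy @ np.linalg.inv(Cyy + R)         # Kalman gain
        D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
        X = X + K @ (D - Y)                      # update each member toward the data

    print(X.mean(axis=1))                        # posterior parameter estimate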
Ph. D.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Bazargan, Hamid. "An efficient polynomial chaos-based proxy model for history matching and uncertainty quantification of complex geological structures". Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2757.

Texto completo da fonte
Resumo:
A novel polynomial chaos proxy-based history matching and uncertainty quantification method is presented that can be employed for complex geological structures in inverse problems. For complex geological structures, when there are many unknown geological parameters with highly nonlinear correlations, typically more than 10^6 full reservoir simulation runs might be required to accurately probe the posterior probability space given the production history of the reservoir. This is not practical for high-resolution geological models. One solution is to use a "proxy model" that replicates the simulation model for selected input parameters. The main advantage of the polynomial chaos proxy compared to other proxy models and response surfaces is that it is generally applicable and converges systematically as the order of the expansion increases. The Cameron-Martin theorem states that the convergence rate of standard polynomial chaos expansions is exponential for Gaussian random variables. To improve the convergence rate for non-Gaussian random variables, the generalized polynomial chaos is implemented, which uses the Askey scheme to choose the optimal basis for polynomial chaos expansions [199]. Additionally, for non-Gaussian distributions that can be effectively approximated by a mixture of Gaussian distributions, we use a mixture-modeling based clustering approach in which, under each cluster, the polynomial chaos proxy converges exponentially fast and the overall posterior distribution can be estimated more efficiently using different polynomial chaos proxies. The main disadvantage of the polynomial chaos proxy is that, for high-dimensional problems, the number of polynomial chaos terms increases drastically as the order of the expansion increases. Although different non-intrusive methods have been developed in the literature to address this issue, a large number of simulation runs is still required to compute the high-order terms of the polynomial chaos expansions. This work resolves this issue by proposing the reduced-terms polynomial chaos expansion, which preserves only the relevant terms in the polynomial chaos representation. We demonstrate that the sparsity pattern in the polynomial chaos expansion, when used with the Karhunen-Loève decomposition method or kernel PCA, can be systematically captured. A probabilistic framework based on the polynomial chaos proxy is also suggested in the context of Bayesian model selection to study the plausibility of different geological interpretations of the sedimentary environments. The proposed surrogate-accelerated Bayesian inverse analysis can be coherently used in practical reservoir optimization workflows and uncertainty assessments.
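A minimal sketch of a non-intrusive polynomial chaos proxy of the kind discussed above: probabilists' Hermite polynomials in a standard-normal germ are fitted to simulator outputs by least squares and then evaluated cheaply in place of the simulator. The toy simulator, expansion order, and training size are illustrative assumptions, not the reservoir model of the thesis.

    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    def simulator(xi):                          # toy expensive model (assumed)
        return np.exp(0.3 * xi) + 0.1 * xi**2

    order, n_train = 4, 40
    xi_train = np.random.default_rng(1).normal(size=n_train)   # standard-normal germ
    y_train = simulator(xi_train)

    Psi = hermevander(xi_train, order)          # Hermite basis matrix, shape (n_train, order+1)
    coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

    def proxy(xi):                              # cheap polynomial chaos surrogate
        return hermevander(np.atleast_1d(xi), order) @ coeffs

    print(proxy(0.5), simulator(0.5))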
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Holland, Troy Michael. "A Comprehensive Coal Conversion Model Extended to Oxy-Coal Conditions". BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6525.

Texto completo da fonte
Resumo:
CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. However, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. This work extends a comprehensive char conversion code (the Carbon Conversion Kinetics or CCK model), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work, the CCK model was thoroughly investigated with a global sensitivity analysis. The sensitivity analysis highlighted several submodels in the CCK code, which were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include a greatly extended annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the Chemical Percolation Devolatilization (CPD) model. The resultant Carbon Conversion Kinetics for oxy-coal combustion (CCK/oxy) model predictions were compared to oxy-coal data, and further compared to parallel data sets obtained at near conventional conditions.
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Wu, Jinlong. "Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85129.

Texto completo da fonte
Resumo:
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled.
Ph. D.
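A minimal sketch of the machine-learning-assisted idea described above: a regressor is trained offline to map mean-flow features to the Reynolds-stress discrepancy and then evaluated on a new flow. The random-forest choice, the three-feature input, and the synthetic training data are illustrative assumptions, not the dissertation's specific algorithm or database.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)

    # Offline database: mean-flow features (e.g. strain rate, pressure gradient) and the
    # Reynolds-stress discrepancy relative to high-fidelity data (synthetic here).
    features = rng.normal(size=(500, 3))
    discrepancy = 0.4 * features[:, 0] - 0.2 * features[:, 1] ** 2 + 0.05 * rng.normal(size=500)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features, discrepancy)

    # Prediction flow: correct the RANS-modelled Reynolds stress with the learned discrepancy
    rans_stress = np.full(10, 0.8)               # baseline RANS values (assumed)
    new_features = rng.normal(size=(10, 3))
    corrected = rans_stress + model.predict(new_features)
    print(corrected)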
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Galbally, David. "Nonlinear model reduction for uncertainty quantification in large-scale inverse problems : application to nonlinear convection-diffusion-reaction equation". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43079.

Texto completo da fonte
Resumo:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008.
Includes bibliographical references (p. 147-152).
There are multiple instances in science and engineering where quantities of interest are evaluated by solving one or several nonlinear partial differential equations (PDEs) that are parametrized in terms of a set of inputs. Even though well-established numerical techniques exist for solving these problems, their computational cost often precludes their use in cases where the outputs of interest must be evaluated repeatedly for different values of the input parameters such as probabilistic analysis applications. In this thesis we present a model reduction methodology that combines efficient representation of the nonlinearities in the governing PDE with an efficient model-constrained, greedy algorithm for sampling the input parameter space. The nonlinearities in the PDE are represented using a coefficient-function approximation that enables the development of an efficient offline-online computational procedure where the online computational cost is independent of the size of the original high-fidelity model. The input space sampling algorithm used for generating the reduced space basis adaptively improves the quality of the reduced order approximation by solving a PDE-constrained continuous optimization problem that targets the output error between the reduced and full order models in order to determine the optimal sampling point at every greedy cycle. The resulting model reduction methodology is applied to a highly nonlinear combustion problem governed by a convection-diffusion-reaction PDE with up to 3 input parameters. The reduced basis approximation developed for this problem is up to 50,000 times faster to solve than the original high-fidelity finite element model with an average relative error in prediction of outputs of interest of 2.5 x 10^-6 over the input parameter space. The reduced order model developed in this thesis is used in a novel probabilistic methodology for solving inverse problems.
(cont) The extreme computational cost of the Bayesian framework approach for inferring the values of the inputs that generated a given set of empirically measured outputs often precludes its use in practical applications. In this thesis we show that using a reduced order model for running the Markov
by David Galbally.
S.M.
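A minimal sketch of the projection-based reduction underlying the offline-online split described above: a reduced basis V, built from solution snapshots, collapses a large linear system to a small one that can be solved repeatedly for new parameter values. The Laplacian-like toy operator, the affine parameter dependence, and the POD-style basis are illustrative assumptions; the thesis treats nonlinear PDEs via a coefficient-function approximation, and in practice the parameter dependence of the operator is also precomputed offline rather than reassembled as done here.

    import numpy as np

    n = 200
    A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    b = np.ones(n)

    # Offline stage: snapshots at a few parameter values, reduced basis from their SVD
    train_params = [0.5, 1.0, 2.0, 4.0]
    snapshots = np.column_stack([np.linalg.solve(A + mu * np.eye(n), b) for mu in train_params])
    V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :3]   # keep 3 modes

    # Online stage: for a new parameter value, assemble and solve only a 3x3 system
    mu_new = 1.7
    A_r = V.T @ (A + mu_new * np.eye(n)) @ V
    u_r = V @ np.linalg.solve(A_r, V.T @ b)

    u_full = np.linalg.solve(A + mu_new * np.eye(n), b)
    print(np.linalg.norm(u_r - u_full) / np.linalg.norm(u_full))  # reduced-basis error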
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Yuan, Mengfei. "Machine Learning-Based Reduced-Order Modeling and Uncertainty Quantification for "Structure-Property" Relations for ICME Applications". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555580083945861.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Huang, Chao-Min. "Robust Design Framework for Automating Multi-component DNA Origami Structures with Experimental and MD coarse-grained Model Validation". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159051496861178.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Carmassi, Mathieu. "Uncertainty quantification and calibration of a photovoltaic plant model : warranty of performance and robust estimation of the long-term production". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLA042/document.

Texto completo da fonte
Resumo:
Field experiments are often difficult and expensive to carry out. To bypass these issues, industrial companies have developed computational codes. These codes are intended to be representative of the physical phenomena at stake, but they bring a series of problems of their own. The first problem stems from the wish to predict reality from a computer model: the code must be representative of the phenomenon and therefore able to simulate data close to reality. It turns out that, despite continuous code development, the difference between the code outputs and experiments can remain significant. Two kinds of error are observed. The first comes from the difference between the physical phenomenon and the values recorded experimentally; the second concerns the gap between the code and the physical system. To reduce this difference, often called model bias, discrepancy, or model error, computer codes are generally made more complex in order to make them more realistic, and these improvements lead to time-consuming codes. Moreover, a code often depends on parameters to be set by the user, which must be chosen to match field data as closely as possible. This estimation task is called calibration. This thesis first proposes a review of the statistical methods necessary to understand Bayesian calibration. Then, a review of the main calibration methods is presented, together with a comparative example based on a numerical code used to predict the power of a photovoltaic plant. The package CaliCo, which allows a Bayesian calibration of many numerical codes to be performed quickly, is then presented. Finally, a real case study of a large photovoltaic power plant is introduced and the calibration is carried out as part of a performance monitoring framework. This particular industrial code introduces calibration specificities that are discussed, and two statistical models are presented for it.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

John, David Nicholas [Verfasser], e Vincent [Akademischer Betreuer] Heuveline. "Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge / David Nicholas John ; Betreuer: Vincent Heuveline". Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/122909265X/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Braun, Mathias. "Reduced Order Modelling and Uncertainty Propagation Applied to Water Distribution Networks". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0050/document.

Texto completo da fonte
Resumo:
Water distribution systems are large, spatially distributed infrastructures that ensure the distribution of potable water of sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameters. Two major challenges are the time constraints for the solution and the uncertain character of the model parameters. The main objectives of this thesis are therefore the investigation of projection-based reduced order modelling techniques for the time-efficient solution of the hydraulic system, and the spectral propagation of parameter uncertainties for improved uncertainty quantification. The thesis first gives an overview of the mathematical methods used. This is followed by the definition and discussion of the hydraulic network model, for which a new method for deriving the sensitivities is presented based on the adjoint method. The specific objectives for the development of reduced order models are the application of projection-based methods, the development of more efficient adaptive sampling strategies, and the use of hyper-reduction methods for the fast evaluation of non-linear residual terms. For the propagation of uncertainties, spectral methods are introduced into the hydraulic model and an intrusive hydraulic model is formulated. With the objective of a more efficient analysis of the parameter uncertainties, the spectral propagation is then evaluated on the basis of the reduced model. The results show that projection-based reduced order models give a considerable benefit with respect to the computational effort. While the use of adaptive sampling resulted in a more efficient use of pre-calculated system states, the use of hyper-reduction methods could not improve the computational burden and has to be explored further. The propagation of the parameter uncertainties on the basis of the spectral methods is shown to be comparable to Monte Carlo simulations in accuracy, while significantly reducing the computational effort.
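A compact statement of the adjoint-based sensitivity computation mentioned above, in generic notation assumed here rather than taken from the thesis: for a steady hydraulic residual R(u, p) = 0 with state u and parameters p, and an output of interest J(u, p), one adjoint solve

    (\partial R / \partial u)^T \lambda = (\partial J / \partial u)^T

gives the full parameter gradient

    dJ/dp = \partial J / \partial p - \lambda^T (\partial R / \partial p),

so the cost of computing the sensitivities is essentially independent of the number of parameters.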
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Carozzi, M. "AMMONIA EMISSIONS FROM ARABLE LANDS IN PO VALLEY: METHODOLOGIES, DYNAMICS AND QUANTIFICATION". Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/170268.

Texto completo da fonte
Resumo:
Although the Po Valley (northern Italy) is considered one of the most important ammonia (NH3) emitting regions in Europe, few data are available for an evaluation of the ammonia budget at field level in arable lands. Here the NH3 losses were quantified, considering different measurement and estimation approaches, fertilisers and agronomic management practices. The outputs of two concentration-based inverse dispersion models, together with a mechanistic model, were assessed against direct measurements of ammonia fluxes obtained with the eddy covariance micrometeorological technique, at hourly, daily and seasonal scales. A discussion of the advantages, disadvantages and performance of each model is given in order to determine the most suitable method for evaluating ammonia emission in the Po Valley at field scale. The selected inverse dispersion models were assessed for their uncertainty in quantifying ammonia emission rates, and for their significance with regard to the Italian context. Moreover, emissions from cattle slurry and urea application were measured in seven field trials at three different locations in the Po Valley, in order to evaluate the best practices for reducing NH3 losses from arable land. The emission factors relative to the different agronomic practices (slurry injection, slurry surface spreading with and without incorporation, urea surface spreading) are given, taking into account the main factors affecting NH3 volatilization and describing its dynamics.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Krifa, Mohamed. "Amortissement virtuel pour la conception vibroacoustique des lanceurs futurs". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD058.

Texto completo da fonte
Resumo:
In the design of space launchers, controlling damping is a major issue. In the absence of tests on the real structure, which are very costly before the final qualification phase, damping modeling can lead to over-sizing of the structure, whereas the goal is to reduce the cost of launching a rocket while guaranteeing the vibratory comfort of the payload. Our contributions are the following. First, a method is proposed for predicting, by computation, the vibration levels in launcher structures using a virtual testing strategy that makes it possible to predict the damping at low frequencies. This method is based on the use of meta-models built from numerical designs of experiments with detailed models of the joints. These meta-models can be obtained through dedicated computations using a 3D finite element resolution that accounts for contact. Using these meta-models, the modal damping over a vibration cycle can be computed as the ratio between the dissipated energy and the strain energy. The approach gives an accurate and inexpensive approximation of the solution; the global nonlinear computation, which is out of reach for complex structures, becomes accessible through this surrogate-based virtual approach. Second, a validation of the virtual tests on the Ariane 5 launcher structure was carried out, taking into account the bolted joints between stages, in order to illustrate the proposed approach. When the generalized damping matrix is not diagonal (because of localized dissipation), modal methods do not allow the off-diagonal generalized damping terms to be computed or estimated; the question is then to quantify the error made when these off-diagonal terms are neglected in the computation of the vibration levels, with a good accuracy-to-cost ratio. Third, the validity of the diagonality assumption for the generalized damping matrix was examined, and a very inexpensive method for a posteriori quantification of the modal damping estimation error, based on the perturbation method, was proposed. Finally, the last contribution of this thesis is a decision-support tool that quantifies the impact of the lack of knowledge about damping in the joints on the global behavior of launchers, using the info-gap method.
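A compact statement of the energy-based modal damping estimate described above, using the standard equivalent-viscous-damping relation (assumed here as the intended form): for mode i, with energy \Delta W_i dissipated in the joints per vibration cycle and peak strain energy U_i,

    \zeta_i \approx \Delta W_i / (4 \pi U_i),

where \Delta W_i is evaluated from the detailed 3D contact models of the bolted joints (via the meta-models) and U_i from the modal strain energy of the launcher model.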
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Reimer, Joscha [Verfasser], Thomas [Akademischer Betreuer] Slawig e Andreas [Gutachter] Oschlies. "Optimization of model parameters, uncertainty quantification and experimental designs in climate research / Joscha Reimer ; Gutachter: Andreas Oschlies ; Betreuer: Thomas Slawig". Kiel : Universitätsbibliothek Kiel, 2020. http://nbn-resolving.de/urn:nbn:de:gbv:8-mods-2020-00067-3.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Reimer, Joscha [Verfasser], Thomas [Akademischer Betreuer] Slawig e Andreas [Gutachter] Oschlies. "Optimization of model parameters, uncertainty quantification and experimental designs in climate research / Joscha Reimer ; Gutachter: Andreas Oschlies ; Betreuer: Thomas Slawig". Kiel : Universitätsbibliothek Kiel, 2020. http://d-nb.info/1205735364/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Sittichok, Ketvara. "Improving Seasonal Rainfall and Streamflow Forecasting in the Sahel Region via Better Predictor Selection, Uncertainty Quantification and Forecast Economic Value Assessment". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34229.

Texto completo da fonte
Resumo:
The Sahel region located in Western Africa is well known for its high rainfall variability. Severe and recurring droughts plagued the region during the last three decades of the 20th century, while heavy precipitation events (with return periods of up to 1,200 years) were reported between 2007 and 2014. Vulnerability to extreme events is partly due to the fact that people are not prepared to cope with them. It would be of great benefit to farmers if information about the magnitude of precipitation and streamflow in the upcoming rainy season were available a few months in advance; they could then switch to better adapted crops and farm management systems if required. Such information would also be useful for other sectors of the economy, such as hydropower production, domestic/industrial water consumption, fishing and navigation. A logical solution to this problem is seasonal rainfall and streamflow forecasting, which generates knowledge about the upcoming rainy season from information available before its beginning. The research in this thesis sought to improve seasonal rainfall and streamflow forecasting in the Sahel by developing statistical seasonal forecasting models. Sea surface temperatures (SSTs) were used as the pool of predictors. The developed method allows a systematic search for the best period over which to average the predictor before it is used to predict average rainfall or streamflow in the upcoming rainy season. Eight statistical models, built from various statistical methods including linear and polynomial regressions, were developed in this study. Two main approaches for seasonal streamflow forecasting were developed: 1) a two-step approach (the indirect method), which first links the average SST over a period prior to the date of forecast to the average rainfall amount in the upcoming rainy season using the eight statistical models, and then links the rainfall amount to streamflow using a rainfall-runoff model (the Soil and Water Assessment Tool, SWAT); in this approach, the forecasted rainfall is disaggregated to a daily time step using a simple approach (the fragment method) before being fed into SWAT; 2) a one-step approach (the direct method), which links the average SST over a period prior to the date of forecast directly to the average streamflow in the upcoming rainy season using the eight statistical models. To decrease the uncertainty due to model selection, Bayesian Model Averaging (BMA) was also applied. This method can combine all available potential predictors instead of selecting one based on an arbitrary criterion, and it produces the probability density of the forecast, which allows end-users to visualize the density of the expected value and assess the level of uncertainty of the generated forecast. Finally, the economic value of the forecast system was estimated using a simple economic approach (the cost/loss ratio method). Each developed method was evaluated using three well-known model efficiency criteria: the Nash-Sutcliffe coefficient (Ef), the coefficient of determination (R2) and the Hit score (H). The proposed models showed rainfall forecasting skill equivalent to or better than most research conducted in the Sahel region. The linear model driven by the Pacific SST produced the best rainfall forecasts (Ef = 0.82, R2 = 0.83, and H = 82%) at a lead time of up to 12 months.
The rainfall forecasting model based on polynomial regression and forced by the Atlantic Ocean SST can be used with a lead time of up to 5 months and had a slightly lower performance (Ef = 0.80, R2 = 0.81, and H = 82%). Despite the fact that the natural relationship between rainfall and SST is nonlinear, this study found that good results can be achieved using linear models. For streamflow forecasting, the direct method using polynomial regression performed slightly better than the indirect method (Ef = 0.74, R2 = 0.76, and H = 84% for the direct method; Ef = 0.70, R2 = 0.69, and H = 77% for the indirect method). The direct method was driven by the Pacific SST and had a five-month lead time; the indirect method was driven by the Atlantic SST and had a six-month lead time. No significant difference in performance was found between BMA and the linear regression models based on a single predictor for streamflow forecasting; however, BMA was able to provide a probabilistic forecast that accounts for model selection uncertainty, while the linear regression model had a longer lead time. The economic value of the forecasts developed with the direct and indirect methods was estimated using the cost/loss ratio method. The direct method had a better value than the indirect method, and the value of the forecast declined with higher return periods for all methods. Results also showed that, for the particular watershed under investigation, the direct method provided better information for flood protection. This research has demonstrated the possibility of useful seasonal streamflow forecasting in the Sirba watershed, using the tropical Pacific and Atlantic SSTs as predictors. The findings of this study can be used to improve the performance of seasonal streamflow forecasting in the Sahel. A package implementing the statistical models developed in this study was created so that end users can apply them for seasonal rainfall or streamflow forecasting in any region of interest, using any predictor they may want to try.
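A minimal sketch of the static cost/loss relative economic value referred to above, computed from a binary contingency table; the formulation is the standard textbook one, and the counts and cost/loss ratio used below are illustrative assumptions rather than results from the thesis.

    import numpy as np

    def relative_value(hits, misses, false_alarms, quiet, cost_loss_ratio, climatology):
        # Relative economic value of a binary forecast under the static cost/loss model;
        # cost_loss_ratio = C/L, climatology = observed frequency of the adverse event.
        n = hits + misses + false_alarms + quiet
        h, m, f = hits / n, misses / n, false_alarms / n
        r = cost_loss_ratio
        expense_forecast = (h + f) * r + m          # protect whenever the event is forecast
        expense_climate = min(r, climatology)       # best of "always protect" / "never protect"
        expense_perfect = climatology * r           # protect only when the event occurs
        return (expense_climate - expense_forecast) / (expense_climate - expense_perfect)

    print(relative_value(hits=18, misses=4, false_alarms=6, quiet=52,
                         cost_loss_ratio=0.3, climatology=22 / 80))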
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Sevieri, Giacomo [Verfasser], Hermann G. [Akademischer Betreuer] Matthies e Falco Anna [Akademischer Betreuer] De. "The seismic assessment of existing concrete gravity dams : FE model uncertainty quantification and reduction / Giacomo Sevieri ; Hermann G. Matthies, Anna De Falco". Braunschweig : Technische Universität Braunschweig, 2021. http://d-nb.info/1225038251/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.

Texto completo da fonte
Resumo:
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been approached at the component level, without considering the interactions between components and the effects of the environment, which can lead to a mispredicted failure time for complex systems. In this work, a system-level prognostics approach is proposed. This approach is based on a new modeling framework, the inoperability input-output model (IIM), which makes it possible to account for the interactions between components and the effects of the mission profile, and which can be applied to heterogeneous systems. A new methodology is then developed for online joint system RUL (SRUL) prediction and model parameter estimation, based on particle filtering (PF) and gradient descent (GD). In detail, the state of health of the system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system, the Tennessee Eastman Process; the results highlight its effectiveness in predicting the SRUL within reasonable computing time.
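A minimal sketch of the particle-filter step underlying this kind of RUL prediction: particles representing a degradation state are propagated, weighted against each new measurement, and resampled, and the remaining useful life follows from extrapolating each particle to a failure threshold. The linear degradation model, the noise levels, the measurement sequence, and the threshold are illustrative assumptions, not the IIM of the thesis.

    import numpy as np

    rng = np.random.default_rng(3)
    n_particles, threshold, dt = 1000, 10.0, 1.0

    state = rng.normal(1.0, 0.2, n_particles)        # initial degradation level
    rate = rng.normal(0.30, 0.05, n_particles)       # per-step degradation rate
    weights = np.full(n_particles, 1.0 / n_particles)

    for z in [1.4, 1.7, 2.1, 2.3]:                   # incoming measurements (assumed)
        state = state + rate * dt + rng.normal(0.0, 0.02, n_particles)   # predict
        weights *= np.exp(-0.5 * ((z - state) / 0.1) ** 2)               # Gaussian likelihood
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)            # resample
        state, rate = state[idx], rate[idx]
        weights.fill(1.0 / n_particles)

    rul = (threshold - state) / rate                 # extrapolate each particle to failure
    print(np.percentile(rul, [5, 50, 95]))           # probabilistic RUL estimate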
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Muhammad, Ruqiah. "A new dynamic model for non-viral multi-treatment gene delivery systems for bone regeneration: parameter extraction, estimation, and sensitivity". Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6996.

Texto completo da fonte
Resumo:
In this thesis we develop new mathematical models, using dynamical systems, to represent localized gene delivery of bone morphogenetic protein 2 into bone marrow-derived mesenchymal stem cells and rat calvarial defects. We examine two approaches, using pDNA or cmRNA treatments, respectively, towards the production of calcium deposition and bone regeneration in in vitro and in vivo experiments. We first review the relevant scientific literature and survey existing mathematical representations of similar treatment approaches. We then motivate and develop our new models and determine model parameters from the literature, heuristic approaches, and estimation from sparse data. We next conduct a qualitative analysis using dynamical systems theory. Because of the nature of the parameter estimation, it was important to obtain local and global sensitivity analyses of the model outputs with respect to changes in the model inputs. Finally, we compare results from different treatment protocols. Our model suggests that cmRNA treatments may perform better than pDNA treatments for bone fracture healing. This work is intended to be a foundation for predictive models of non-viral local gene delivery systems.
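A minimal sketch of the kind of dynamical-system treatment model described above, with a local sensitivity of one output to one parameter computed by finite differences. The two-compartment ODE (delivered nucleic acid driving protein production), its rate constants, and the perturbation size are illustrative assumptions, not the thesis model.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, k_uptake, k_deg, k_prod):
        dna, protein = y                      # delivered nucleic acid, expressed protein (toy)
        return [-k_uptake * dna - k_deg * dna,
                k_prod * dna - 0.1 * protein]

    def protein_at_day7(k_prod):
        sol = solve_ivp(rhs, (0.0, 7.0), [1.0, 0.0], args=(0.5, 0.2, k_prod))
        return sol.y[1, -1]

    # Local sensitivity of the day-7 protein level to the production rate (central difference)
    k0, h = 2.0, 1e-4
    sens = (protein_at_day7(k0 + h) - protein_at_day7(k0 - h)) / (2 * h)
    print(protein_at_day7(k0), sens)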
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Mesado, Melia Carles. "Uncertainty Quantification and Sensitivity Analysis for Cross Sections and Thermohydraulic Parameters in Lattice and Core Physics Codes. Methodology for Cross Section Library Generation and Application to PWR and BWR". Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86167.

Texto completo da fonte
Resumo:
This PhD study, developed at Universitat Politècnica de València (UPV), aims to cover the first phase of the benchmark released by the expert group on Uncertainty Analysis in Modeling (UAM-LWR). The main contribution to the benchmark, made by the thesis' author, is the development of a MATLAB program requested by the benchmark organizers. This is used to generate neutronic libraries to distribute among the benchmark participants. The UAM benchmark pretends to determine the uncertainty introduced by coupled multi-physics and multi-scale LWR analysis codes. The benchmark is subdivided into three phases: 1. Neutronic phase: obtain collapsed and homogenized problem-dependent cross sections and criticality analyses. 2. Core phase: standalone thermohydraulic and neutronic codes. 3. System phase: coupled thermohydraulic and neutronic code. In this thesis the objectives of the first phase are covered. Specifically, a methodology is developed to propagate the uncertainty of cross sections and other neutronic parameters through a lattice physics code and core simulator. An Uncertainty and Sensitivity (U&S) analysis is performed over the cross sections contained in the ENDF/B-VII nuclear library. Their uncertainty is propagated through the lattice physics code SCALE6.2.1, including the collapse and homogenization phase, up to the generation of problem-dependent neutronic libraries. Afterward, the uncertainty contained in these libraries can be further propagated through a core simulator, in this study PARCSv3.2. The module SAMPLER -available in the latest release of SCALE- and DAKOTA 6.3 statistical tool are used for the U&S analysis. As a part of this process, a methodology to obtain neutronic libraries in NEMTAB format -to be used in a core simulator- is also developed. A code-to-code comparison with CASMO-4 is used as a verification. The whole methodology is tested using a Boiling Water Reactor (BWR) reactor type. Nevertheless, there is not any concern or limitation regarding its use in any other type of nuclear reactor. The Gesellschaft für Anlagen und Reaktorsicherheit (GRS) stochastic methodology for uncertainty quantification is used. This methodology makes use of the high-fidelity model and nonparametric sampling to propagate the uncertainty. As a result, the number of samples (determined using the revised Wilks' formula) does not depend on the number of input parameters but only on the desired confidence and uncertainty of output parameters. Moreover, the output Probability Distribution Functions (PDFs) are not subject to normality. The main disadvantage is that each input parameter must have a pre-defined PDF. If possible, input PDFs are defined using information found in the related literature. Otherwise, the uncertainty definition is based on expert judgment. A second scenario is used to propagate the uncertainty of different thermohydraulic parameters through the coupled code TRACE5.0p3/PARCSv3.0. In this case, a PWR reactor type is used and a transient control rod drop occurrence is simulated. As a new feature, the core is modeled chan-by-chan following a fully 3D discretization. No other study is found using a detailed 3D core. This U&S analysis also makes use of the GRS methodology and DAKOTA 6.3.
Este trabajo de doctorado, desarrollado en la Universitat Politècnica de València (UPV), tiene como objetivo cubrir la primera fase del benchmark presentado por el grupo de expertos Uncertainty Analysis in Modeling (UAM-LWR). La principal contribución al benchmark, por parte del autor de esta tesis, es el desarrollo de un programa de MATLAB solicitado por los organizadores del benchmark, el cual se usa para generar librerías neutrónicas a distribuir entre los participantes del benchmark. El benchmark del UAM pretende determinar la incertidumbre introducida por los códigos multifísicos y multiescala acoplados de análisis de reactores de agua ligera. El citado benchmark se divide en tres fases: 1. Fase neutrónica: obtener los parámetros neutrónicos y secciones eficaces del problema específico colapsados y homogenizados, además del análisis de criticidad. 2. Fase de núcleo: análisis termo-hidráulico y neutrónico por separado. 3. Fase de sistema: análisis termo-hidráulico y neutrónico acoplados. En esta tesis se completan los principales objetivos de la primera fase. Concretamente, se desarrolla una metodología para propagar la incertidumbre de secciones eficaces y otros parámetros neutrónicos a través de un código lattice y un simulador de núcleo. Se lleva a cabo un análisis de incertidumbre y sensibilidad para las secciones eficaces contenidas en la librería neutrónica ENDF/B-VII. Su incertidumbre se propaga a través del código lattice SCALE6.2.1, incluyendo las fases de colapsación y homogenización, hasta llegar a la generación de una librería neutrónica específica del problema. Luego, la incertidumbre contenida en dicha librería puede continuar propagándose a través de un simulador de núcleo, para este estudio PARCSv3.2. Para el análisis de incertidumbre y sensibilidad se ha usado el módulo SAMPLER -disponible en la última versión de SCALE- y la herramienta estadística DAKOTA 6.3. Como parte de este proceso, también se ha desarrollado una metodología para obtener librerías neutrónicas en formato NEMTAB para ser usadas en simuladores de núcleo. Se ha realizado una comparación con el código CASMO-4 para obtener una verificación de la metodología completa. Esta se ha probado usando un reactor de agua en ebullición del tipo BWR. Sin embargo, no hay ninguna preocupación o limitación respecto a su uso con otro tipo de reactor nuclear. Para la cuantificación de la incertidumbre se usa la metodología estocástica Gesellschaft für Anlagen und Reaktorsicherheit (GRS). Esta metodología hace uso del modelo de alta fidelidad y un muestreo no paramétrico para propagar la incertidumbre. Como resultado, el número de muestras (determinado con la fórmula revisada de Wilks) no depende del número de parámetros de entrada, sólo depende del nivel de confianza e incertidumbre deseados de los parámetros de salida. Además, las funciones de distribución de probabilidad no están limitadas a normalidad. El principal inconveniente es que se ha de disponer de las distribuciones de probabilidad de cada parámetro de entrada. Si es posible, las distribuciones de probabilidad de entrada se definen usando información encontrada en la literatura relacionada. En caso contrario, la incertidumbre se define en base a la opinión de un experto. Se usa un segundo escenario para propagar la incertidumbre de diferentes parámetros termo-hidráulicos a través del código acoplado TRACE5.0p3/PARCSv3.0. En este caso, se utiliza un reactor tipo PWR para simular un transitorio de una caída de barra. 
Como nueva característica, el núcleo se modela elemento a elemento siguiendo una discretización totalmente en 3D. No se ha encontrado ningún otro estudio que use un núcleo tan detallado en 3D. También se usa la metodología GRS y el DAKOTA 6.3 para este análisis de incertidumbre y sensibilidad.
Mesado Melia, C. (2017). Uncertainty Quantification and Sensitivity Analysis for Cross Sections and Thermohydraulic Parameters in Lattice and Core Physics Codes. Methodology for Cross Section Library Generation and Application to PWR and BWR [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86167
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

El, Bouti Tamara. "Optimisation robuste et application à la reconstruction du réseau artériel humain". Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS018V/document.

Texto completo da fonte
Resumo:
Cardiovascular diseases are currently the leading cause of mortality in developed countries, due to the constant increase in risk factors in the population. Several prospective and retrospective studies have shown that arterial stiffness is an important predictive factor for these diseases. Unfortunately, this parameter is difficult to measure experimentally. We propose a numerical approach to determine the arterial stiffness of an arterial network using a patient-specific one-dimensional model of the temporal variation of the cross-section and blood flow of the arteries. The proposed approach estimates the optimal parameters of the reduced model, including the arterial stiffness, from non-invasive measurements such as MRI, echotracking and applanation tonometry. Different optimization results on experimental cases are presented. In order to determine the robustness of the model with respect to its parameters, an uncertainty analysis has also been carried out to measure the contribution of the model input parameters, alone or through interaction with other inputs, to the variation of the model output, here the arterial pulse pressure. This study has shown that the numerical pulse pressure is a reliable indicator that can help diagnose arterial hypertension. We can then provide the practitioner with a robust, inexpensive, patient-specific tool allowing an early and reliable diagnosis of cardiovascular risk from a simple non-invasive exam.
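The sensitivity study summarized above attributes the variation of the pulse pressure to the model inputs, alone or in interaction. One common way to do this is variance-based (Sobol') sensitivity analysis; the sketch below estimates first-order indices with the Saltelli pick-freeze estimator, using a purely illustrative toy model in place of the 1D arterial network (the thesis may well use a different estimator).

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=100_000, seed=0):
    """First-order Sobol' indices via the Saltelli (2010) pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_inputs))
    B = rng.uniform(size=(n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_inputs):
        AB = A.copy()
        AB[:, i] = B[:, i]          # replace column i of A by column i of B
        yAB = model(AB)
        indices.append(np.mean(yB * (yAB - yA)) / var_y)
    return indices

# Illustrative stand-in for the pulse-pressure model; the three inputs could
# represent stiffness, resistance, etc. (hypothetical, not the thesis' model).
toy = lambda x: 3.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]
print(first_order_sobol(toy, n_inputs=3))
```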
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Ait, Mamoun Khadija. "Vehicle routing problem under uncertainty: case of pharmaceutical supply chain". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR08.

Texto completo da fonte
Resumo:
The enhancement of logistics distribution performance and the optimization of transportation have emerged as critical concerns in recent years. The pharmaceutical distribution sector faces significant challenges in route planning and transport network optimization, with uncertainties often leading to delays and losses. These multifaceted challenges encompass the imperative to raise product quality, reduce costs, minimize total travel distance, and streamline transportation time for effective planning. Within this context, the Vehicle Routing Problem (VRP) stands out as one of the most extensively analysed problems in the realms of transportation, distribution, and logistics. Achieving a delicate equilibrium between cost considerations and delivering high-quality pharmaceutical products is a primary objective in pharmaceutical distribution. This research delves into both the Static Vehicle Routing Problem (SVRP) and the Dynamic Vehicle Routing Problem (DVRP). Real-world logistical planning frequently encounters uncertainties at the outset, including uncertain customer demand, delivery quantities, time constraints, and more. This thesis introduces the "temperature condition" as a fundamental constraint in pharmaceutical distribution, representing a source of uncertainty that directly impacts drug quality and thereby influences logistics distribution and overall supply chain performance. Furthermore, the thesis incorporates uncertainty quantification to model uncertain travel times in both recurrent and non-recurrent congestion scenarios. The methodology employed for this purpose is the collocation method, validated against Monte Carlo Simulation (MCS). By addressing these multifaceted challenges and uncertainties, this research seeks to contribute to the development of robust strategies in pharmaceutical distribution, ensuring the optimization of routes, the reduction of costs, and the maintenance of high product-quality standards. The findings of this study offer valuable insights for logistics managers and planners aiming to navigate the complexities of pharmaceutical distribution, fostering efficiency and resilience in the face of uncertainties.
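The collocation idea mentioned above replaces random sampling of the uncertain travel times with a small set of deterministic quadrature nodes. The sketch below illustrates this for a single normally distributed travel time pushed through an illustrative route-cost function, cross-checked against Monte Carlo as in the thesis; the cost function, mean and standard deviation are assumptions for the example, not values from the work itself.

```python
import numpy as np

def collocation_mean(g, mu, sigma, n_nodes=7):
    """E[g(T)] for T ~ N(mu, sigma) via Gauss-Hermite stochastic collocation."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    t = mu + np.sqrt(2.0) * sigma * nodes          # map nodes to N(mu, sigma)
    return np.sum(weights * g(t)) / np.sqrt(np.pi)

# Illustrative route cost: travel time plus a late-delivery penalty (minutes).
cost = lambda t: t + 10.0 * np.maximum(t - 45.0, 0.0)
mu, sigma = 40.0, 5.0

print(collocation_mean(cost, mu, sigma))           # a handful of model runs

# Monte Carlo cross-check (many more runs for a comparable estimate)
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, 200_000)
print(cost(samples).mean())
```

Because the example cost has a kink, a few extra quadrature nodes may be needed before the two estimates agree closely; for smooth costs the collocation estimate converges with very few model evaluations, which is the practical appeal of the method.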
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Rubio, Paul-Baptiste. "Stratégies numériques innovantes pour l’assimilation de données par inférence bayésienne". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN055/document.

Texto completo da fonte
Resumo:
This work is placed in the framework of data assimilation in structural mechanics. It aims at developing new numerical tools to permit real-time and robust data assimilation that could then be used in various engineering activities. A specific targeted activity is the implementation of DDDAS (Dynamic Data Driven Application System) applications, in which a continuous exchange between simulation tools and experimental measurements is envisioned in order to create feedback control loops on connected mechanical systems. In this context, and in order to take various sources of uncertainty (modeling error, measurement noise, ...) into account, a powerful and general stochastic methodology based on Bayesian inference is considered. However, a well-known drawback of such an approach is its computational complexity, which makes real-time simulation and sequential data assimilation difficult. The PhD work thus proposes to couple Bayesian inference with attractive and advanced numerical techniques so that real-time and sequential assimilation become feasible. First, PGD model reduction is introduced to facilitate the computation of the likelihood function, uncertainty propagation through complex models, and the sampling of the posterior density. Then, Transport Map sampling is investigated as a substitute for classical MCMC procedures for posterior sampling. It is shown that this technique leads to deterministic computations, with clear convergence criteria, and that it is particularly suited to sequential data assimilation. Here again, the use of PGD model reduction greatly facilitates the process by recovering gradient and Hessian information in a straightforward manner. Eventually, and to increase robustness, on-the-fly correction of model bias is addressed using data-based enrichment terms, and the selection of the data most relevant to the assimilation objective is also considered. The overall cost-effective methodology is applied and illustrated on several academic and real-life test cases, including the real-time updating of models for the control of welding processes, and mechanical tests involving damageable concrete structures with full-field measurements.
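For reference, the classical MCMC sampling that the Transport Map approach is intended to replace can be sketched as a random-walk Metropolis sampler for a single model parameter given noisy measurements; the forward model, prior, noise level and data below are illustrative assumptions, not the thesis' welding or concrete test cases.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=20_000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for k in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain

# Illustrative inverse problem: forward model y = theta**2, Gaussian noise,
# standard normal prior on theta (all hypothetical).
forward = lambda theta: theta ** 2
data = np.array([4.1, 3.8, 4.3])
sigma = 0.2

def log_post(theta):
    misfit = np.sum((data - forward(theta)) ** 2) / (2.0 * sigma ** 2)
    return -misfit - 0.5 * theta ** 2              # log-likelihood + log-prior

chain = metropolis(log_post, theta0=1.0)
print(chain[5_000:].mean(), chain[5_000:].std())   # posterior mean and spread
```

Each iteration calls the forward model once, which is exactly why the thesis couples the sampler with PGD model reduction: without a cheap surrogate, this loop is far too slow for real-time or sequential assimilation.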
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

DuFour, Mark R. "Hydroacoustic Quantification of Lake Erie Walleye (Sander vitreus) Distribution and Abundance". University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1483715286731694.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Resseguier, Valentin. "Mixing and fluid dynamics under location uncertainty". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S004/document.

Texto completo da fonte
Resumo:
This thesis develops, analyzes and demonstrates several valuable applications of randomized fluid dynamics models referred to as models under location uncertainty. The velocity is decomposed into large-scale components and random, time-uncorrelated small-scale components. This assumption leads to a modification of the material derivative and hence of every fluid dynamics model. Throughout the thesis, the mixing induced by deterministic low-resolution flows is also investigated. We first apply this decomposition to reduced-order models (ROMs). The fluid velocity is expressed on a finite-dimensional basis and its evolution law is projected onto each of these modes. We derive two types of ROMs of the Navier-Stokes equations. A deterministic LES-like model is able to stabilize ROMs and to better analyze the influence of the residual velocity on the resolved component. The random one additionally maintains the variability of stable modes and quantifies the model errors. We derive random versions of several geophysical models and numerically study transport under location uncertainty through a simplified one. A single realization of our model retrieves the small-scale tracer structures better than a deterministic simulation. Furthermore, a small ensemble of simulations accurately predicts and describes the extreme events, the bifurcations, and the amplitude and position of the ensemble errors. Another of our simplified models quantifies frontolysis and frontogenesis in the upper ocean. This thesis also studies the mixing of tracers generated by smooth fluid flows after a finite time. We propose a simple model to describe the stretching as well as the spatial and spectral structures of advected tracers. With a toy flow but also with satellite images, we apply our model to describe the mixing locally and globally, to specify the advection time and the filter width of the Lagrangian advection method, and to estimate the turbulent diffusivity in numerical simulations.
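The reduced-order models discussed above start from a Proper Orthogonal Decomposition (POD) of velocity snapshots. The sketch below shows the usual SVD-based mode extraction on synthetic snapshot data, not the thesis' flows; the energy threshold and the toy travelling-wave field are assumptions for the example.

```python
import numpy as np

def pod_modes(snapshots, energy=0.99):
    """POD of a (n_space, n_time) snapshot matrix: return the spatial modes
    capturing the requested fraction of the fluctuation energy."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)       # cumulative energy content
    r = int(np.searchsorted(cum, energy)) + 1       # smallest basis reaching it
    return U[:, :r], mean, r

# Synthetic snapshots: two travelling waves plus a little noise (illustrative).
x = np.linspace(0, 2 * np.pi, 200)[:, None]
t = np.linspace(0, 10, 80)[None, :]
rng = np.random.default_rng(0)
snaps = np.sin(x - t) + 0.3 * np.sin(2 * x + 3 * t) + 0.01 * rng.standard_normal((200, 80))

modes, mean, r = pod_modes(snaps)
print(r, modes.shape)   # number of retained modes and their shape
```

A Galerkin projection of the governing equations onto these few modes then replaces the PDE by a small coupled ODE system, which is the reduced model the thesis makes deterministic (LES-like) or random (under location uncertainty).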
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models". University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
