Journal articles on the topic 'Predictive uncertainty quantification'

Consult the top 50 journal articles for your research on the topic 'Predictive uncertainty quantification.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Cacuci, Dan Gabriel. "Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling of Nuclear Energy Systems." Energies 15, no. 17 (September 1, 2022): 6379. http://dx.doi.org/10.3390/en15176379.

Abstract:
The Special Issue “Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling of Nuclear Energy Systems” comprises nine articles that present important applications of concepts for performing sensitivity analyses and uncertainty quantifications of models of nuclear energy systems [...]
2

Csillag, Daniel, Lucas Monteiro Paes, Thiago Ramos, João Vitor Romano, Rodrigo Schuller, Roberto B. Seixas, Roberto I. Oliveira, and Paulo Orenstein. "AmnioML: Amniotic Fluid Segmentation and Volume Prediction with Uncertainty Quantification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15494–502. http://dx.doi.org/10.1609/aaai.v37i13.26837.

Abstract:
Accurately predicting the volume of amniotic fluid is fundamental to assessing pregnancy risks, though the task usually requires many hours of laborious work by medical experts. In this paper, we present AmnioML, a machine learning solution that leverages deep learning and conformal prediction to output fast and accurate volume estimates and segmentation masks from fetal MRIs with Dice coefficient over 0.9. Also, we make available a novel, curated dataset for fetal MRIs with 853 exams and benchmark the performance of many recent deep learning architectures. In addition, we introduce a conformal prediction tool that yields narrow predictive intervals with theoretically guaranteed coverage, thus aiding doctors in detecting pregnancy risks and saving lives. A successful case study of AmnioML deployed in a medical setting is also reported. Real-world clinical benefits include up to 20x segmentation time reduction, with most segmentations deemed by doctors as not needing any further manual refinement. Furthermore, AmnioML's volume predictions were found to be highly accurate in practice, with mean absolute error below 56mL and tight predictive intervals, showcasing its impact in reducing pregnancy complications.
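
For readers who want the mechanics behind the conformal prediction tool mentioned above, the following is a minimal sketch of split conformal regression, the generic technique, not the AmnioML implementation; the function names and synthetic data are hypothetical.

```python
# Split conformal regression: calibrate a residual quantile on held-out data,
# then widen point predictions into intervals with guaranteed marginal coverage.
import numpy as np

def split_conformal_interval(y_cal, yhat_cal, yhat_test, alpha=0.1):
    scores = np.abs(y_cal - yhat_cal)                 # nonconformity scores
    n = len(scores)
    # Finite-sample corrected quantile level: ceil((n + 1)(1 - alpha)) / n
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    return yhat_test - qhat, yhat_test + qhat         # (1 - alpha) coverage

# Hypothetical usage with synthetic calibration data
rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)
yhat_cal = y_cal + rng.normal(scale=0.3, size=500)    # an imperfect model
lo, hi = split_conformal_interval(y_cal, yhat_cal, np.array([0.0, 1.0]))
```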
3

Lew, Jiann-Shiun, and Jer-Nan Juang. "Robust Generalized Predictive Control with Uncertainty Quantification." Journal of Guidance, Control, and Dynamics 35, no. 3 (May 2012): 930–37. http://dx.doi.org/10.2514/1.54510.

4

Karimi, Hamed, and Reza Samavi. "Quantifying Deep Learning Model Uncertainty in Conformal Prediction." Proceedings of the AAAI Symposium Series 1, no. 1 (October 3, 2023): 142–48. http://dx.doi.org/10.1609/aaaiss.v1i1.27492.

Abstract:
Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in the context of medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing model uncertainty by providing well-calibrated confidence levels for individual predictions. However, the quantification of model uncertainty in conformal prediction remains an active research area, yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the prediction sets produced in conformal prediction and provide certified boundaries for the computed uncertainty. By doing so, we allow model uncertainty measured by CP to be compared with other uncertainty quantification methods such as Bayesian (e.g., MC-Dropout and DeepEnsemble) and Evidential approaches.
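
To make the link between prediction sets and model uncertainty concrete, here is a minimal classification sketch: conformal sets are built from softmax scores, and their size is mapped to a [0, 1] uncertainty proxy. This only illustrates the general idea; the paper's certified bounds are more involved, and all names and the score function here are assumptions.

```python
import numpy as np

def conformal_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    """Prediction sets from softmax outputs; score = 1 - p(true class)."""
    scores = 1.0 - probs_cal[np.arange(len(labels_cal)), labels_cal]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    return probs_test >= 1.0 - qhat       # boolean set membership per class

def set_size_uncertainty(pred_sets):
    """Map set sizes to [0, 1]: singletons -> 0, all-classes sets -> 1."""
    k = pred_sets.shape[1]
    return np.clip((pred_sets.sum(axis=1) - 1) / (k - 1), 0.0, 1.0)
```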
5

Akitaya, Kento, and Masaatsu Aichi. "Land Subsidence Model Inversion with the Estimation of Both Model Parameter Uncertainty and Predictive Uncertainty Using an Evolutionary-Based Data Assimilation (EDA) and Ensemble Model Output Statistics (EMOS)." Water 16, no. 3 (January 28, 2024): 423. http://dx.doi.org/10.3390/w16030423.

Abstract:
The nonlinear nature of land subsidence and limited observations cause premature convergence in typical data assimilation methods, leading to both underestimation and miscalculation of uncertainty in model parameters and prediction. This study focuses on a promising approach, the combination of evolutionary-based data assimilation (EDA) and ensemble model output statistics (EMOS), to investigate its performance in land subsidence modeling, using EDA with a smoothing approach for parameter uncertainty quantification and EMOS for predictive uncertainty quantification. The methodology was tested on a one-dimensional subsidence model in Kawajima (Japan). The results confirmed the EDA's robust capability: model diversity was maintained even after 1000 assimilation cycles on the same dataset, and the obtained parameter distributions were consistent with the soil types. The ensemble predictions were converted to Gaussian predictions with EMOS using past observations statistically. The Gaussian predictions outperformed the ensemble predictions in predictive performance because EMOS compensated for the over/under-dispersive prediction spread and the short-term bias, a potential weakness of the smoothing approach. This case study demonstrates that combining EDA and EMOS contributes to groundwater management for land subsidence control, considering both the model parameter uncertainty and the predictive uncertainty.
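
The EMOS step above converts an ensemble into a calibrated Gaussian forecast. A minimal sketch, assuming the standard N(a + b·mean, c + d·variance) form fitted by maximum likelihood on past ensemble/observation pairs (not the authors' exact estimation scheme; all arrays are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_emos(ens_mean, ens_var, obs):
    """Fit N(a + b*ens_mean, c + d*ens_var) by minimizing the negative
    log-likelihood over past ensemble/observation pairs."""
    def nll(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-8))  # keep variance positive
        return -norm.logpdf(obs, a + b * ens_mean, sigma).sum()
    return minimize(nll, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

# a, b, c, d = fit_emos(past_means, past_vars, past_obs)  # hypothetical data
```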
6

Singh, Rishabh, and Jose C. Principe. "Toward a Kernel-Based Uncertainty Decomposition Framework for Data and Models." Neural Computation 33, no. 5 (April 13, 2021): 1164–98. http://dx.doi.org/10.1162/neco_a_01372.

Abstract:
This letter introduces a new framework for quantifying predictive uncertainty for both data and models that relies on projecting the data into a Gaussian reproducing kernel Hilbert space (RKHS) and transforming the data probability density function (PDF) in a way that quantifies the flow of its gradient as a topological potential field (quantified at all points in the sample space). This enables the decomposition of the PDF gradient flow by formulating it as a moment decomposition problem using operators from quantum physics, specifically Schrödinger's formulation. We experimentally show that the higher-order moments systematically cluster the different tail regions of the PDF, thereby providing unprecedented discriminative resolution of data regions having high epistemic uncertainty. In essence, this approach decomposes local realizations of the data PDF in terms of uncertainty moments. We apply this framework as a surrogate tool for predictive uncertainty quantification of point-prediction neural network models, overcoming various limitations of conventional Bayesian-based uncertainty quantification methods. Experimental comparisons with some established methods illustrate the performance advantages our framework exhibits.
7

Chen, Peng, and Nicholas Zabaras. "Adaptive Locally Weighted Projection Regression Method for Uncertainty Quantification." Communications in Computational Physics 14, no. 4 (October 2013): 851–78. http://dx.doi.org/10.4208/cicp.060712.281212a.

Abstract:
We develop an efficient, adaptive locally weighted projection regression (ALWPR) framework for uncertainty quantification (UQ) of systems governed by ordinary and partial differential equations. The algorithm adaptively selects the new input points with the largest predictive variance and decides when and where to add new local models. It effectively learns the local features and accurately quantifies the uncertainty in the prediction of the statistics. The developed methodology provides predictions and confidence intervals at any query input and can deal with multi-output cases. Numerical examples are presented to show the accuracy and efficiency of the ALWPR framework, including problems with non-smooth local features such as discontinuities in the stochastic space.
8

Omagbon, Jericho, John Doherty, Angus Yeh, Racquel Colina, John O'Sullivan, Julian McDowell, Ruanui Nicholson, Oliver J. Maclaren, and Michael O'Sullivan. "Case studies of predictive uncertainty quantification for geothermal models." Geothermics 97 (December 2021): 102263. http://dx.doi.org/10.1016/j.geothermics.2021.102263.

9

Nitschke, C. T., P. Cinnella, D. Lucor, and J. C. Chassaing. "Model-form and predictive uncertainty quantification in linear aeroelasticity." Journal of Fluids and Structures 73 (August 2017): 137–61. http://dx.doi.org/10.1016/j.jfluidstructs.2017.05.007.

10

Mirzayeva, A., N. A. Slavinskaya, M. Abbasi, J. H. Starcke, W. Li, and M. Frenklach. "Uncertainty Quantification in Chemical Modeling." Eurasian Chemico-Technological Journal 20, no. 1 (March 31, 2018): 33. http://dx.doi.org/10.18321/ectj706.

Abstract:
A module of the PrIMe automated data-centric infrastructure, Bound-to-Bound Data Collaboration (B2BDC), was used for the analysis of systematic uncertainty and data consistency of the H2/CO reaction model (73/17). For this purpose, a dataset of 167 experimental targets (ignition delay time and laminar flame speed) and 55 active model parameters (pre-exponent factors in the Arrhenius form of the reaction rate coefficients) was constructed. Consistency analysis of experimental data from the composed dataset revealed disagreement between models and data. Two consistency measures were applied to identify the quality of experimental targets (Quantities of Interest, QoI): a scalar consistency measure, which quantifies the tightening index of the constraints while still ensuring the existence of a set of model parameter values whose associated modeling output predicts the experimental QoIs within the uncertainty bounds; and a newly developed method of computing the vector consistency measure (VCM), which determines the minimal bound changes for QoIs initially identified as inconsistent, each bound by its own extent, under the same existence requirement. The consistency analysis suggested that elimination of 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. The feasible parameter set was then constructed by decreasing the uncertainty of several reaction rate coefficients, and this dataset was subjected to model optimization and analysis within the B2BDC framework. Four methods of parameter optimization were applied, including those unique to the B2BDC framework. The optimized models showed improved agreement with experimental values compared to the initially assembled model. Moreover, predictions for experiments not included in the initial dataset were investigated. The results demonstrate the benefits of applying the B2BDC methodology to the development of predictive kinetic models.
11

Albi, Giacomo, Lorenzo Pareschi, and Mattia Zanella. "Uncertainty Quantification in Control Problems for Flocking Models." Mathematical Problems in Engineering 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/850124.

Abstract:
The optimal control of flocking models with random inputs is investigated from a numerical point of view. The effect of uncertainty in the interaction parameters is studied for a Cucker-Smale type model using a generalized polynomial chaos (gPC) approach. Numerical evidence of threshold effects in the alignment dynamic due to the random parameters is given. The use of a selective model predictive control permits steering of the system towards the desired state even in unstable regimes.
12

Kumar, Bhargava, Tejaswini Kumar, Swapna Nadakuditi, Hitesh Patel, and Karan Gupta. "Comparing Conformal and Quantile Regression for Uncertainty Quantification: An Empirical Investigation." International Journal of Computing and Engineering 5, no. 5 (May 27, 2024): 1–8. http://dx.doi.org/10.47941/ijce.1925.

Abstract:
Purpose: This research assesses the efficacy of conformal regression and standard quantile regression in uncertainty quantification for predictive modeling. Quantile regression estimates various quantiles within the conditional distribution, while conformal regression constructs prediction intervals with guaranteed coverage. Methodology: By training models on multiple quantile pairs and varying error rates, the analysis evaluates each method's performance. Findings: Results indicate consistent trends in coverage and prediction interval lengths, with no significant differences in performance. Quantile regression intervals lengthen toward the distribution tails, while conformal regression intervals lengthen with higher coverage. Unique contribution to theory, policy and practice: On the tested dataset, both methods perform similarly, but further testing is necessary to validate these findings across diverse datasets and conditions, considering computational efficiency and implementation ease to determine the best method for specific applications.
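
To ground the comparison, quantile regression fits the two interval endpoints directly with the pinball loss, and both methods are then judged on empirical coverage and interval length. A schematic sketch using scikit-learn's quantile-loss gradient boosting; the dataset and model choice are placeholders, not those of the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def quantile_interval(X_tr, y_tr, X_te, alpha=0.1):
    """Fit lower/upper quantile models for a central (1 - alpha) interval."""
    lo_m = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
    hi_m = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)
    return lo_m.predict(X_te), hi_m.predict(X_te)

def coverage_and_width(y, lo, hi):
    """The two criteria compared in the study: coverage and interval length."""
    return np.mean((y >= lo) & (y <= hi)), np.mean(hi - lo)
```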
13

Gorle, Catherine. "Improving the predictive capability of building simulations using uncertainty quantification." Science and Technology for the Built Environment 28, no. 5 (May 28, 2022): 575–76. http://dx.doi.org/10.1080/23744731.2022.2079261.

14

Delottier, Hugo, John Doherty, and Philip Brunner. "Data space inversion for efficient uncertainty quantification using an integrated surface and sub-surface hydrologic model." Geoscientific Model Development 16, no. 14 (July 26, 2023): 4213–31. http://dx.doi.org/10.5194/gmd-16-4213-2023.

Abstract:
It is incumbent on decision-support hydrological modelling to make predictions of uncertain quantities in a decision-support context. In implementing decision-support modelling, data assimilation and uncertainty quantification are often the most difficult and time-consuming tasks. This is because the imposition of history-matching constraints on model parameters usually requires a large number of model runs. Data space inversion (DSI) provides a highly model-run-efficient method for predictive uncertainty quantification. It does this by evaluating covariances between model outputs used for history matching (e.g. hydraulic heads) and model predictions, based on model runs that sample the prior parameter probability distribution. By directly focusing on the relationship between model outputs under historical conditions and predictions of system behaviour under future conditions, DSI avoids the need to estimate or adjust model parameters. This is advantageous when using integrated surface and sub-surface hydrologic models (ISSHMs) because these models are associated with long run times, numerical instability and, ideally, complex parameterization schemes that are designed to respect geological realism. This paper demonstrates that DSI provides a robust and efficient means of quantifying the uncertainties of complex model predictions. At the same time, DSI provides a basis for complementary linear analysis that allows the worth of available observations to be explored, as well as that of observations which are yet to be acquired. This allows for the design of highly efficient future data acquisition campaigns. DSI is applied in conjunction with an ISSHM representing a synthetic but realistic river–aquifer system. Predictions of interest are fast travel times and surface water infiltration. Linear and non-linear estimates of predictive uncertainty based on DSI are validated against a more traditional uncertainty quantification which requires the adjustment of a large number of parameters. A DSI-generated surrogate model is then used to investigate the effectiveness and efficiency of existing and possible future monitoring networks. The example demonstrates the benefits of using DSI in conjunction with a complex numerical model to quantify predictive uncertainty and support data worth analysis in complex hydrogeological environments.
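
The core of DSI is conditioning a prediction on historical model outputs through their sample covariances, with no parameter adjustment. A joint-Gaussian toy version of that conditioning step is sketched below; the paper's implementation goes well beyond this (transformations, non-linearity, linear analysis for data worth), and all shapes and names are assumptions.

```python
import numpy as np

def dsi_gaussian(D, s, d_obs):
    """D: (n_runs, n_obs) historical-period outputs from prior parameter samples;
    s: (n_runs,) the prediction of interest from the same runs;
    d_obs: (n_obs,) field observations.
    Returns the conditional mean and variance of s given d_obs."""
    d_mean, s_mean = D.mean(axis=0), s.mean()
    Cdd = np.cov(D, rowvar=False)                        # data-data covariance
    Cds = (D - d_mean).T @ (s - s_mean) / (len(s) - 1)   # data-prediction covariance
    w = np.linalg.solve(Cdd, Cds)
    mu = s_mean + w @ (d_obs - d_mean)
    var = s.var(ddof=1) - Cds @ w                        # Schur complement
    return mu, var
```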
15

Gerber, Eric A. E., and Bruce A. Craig. "A mixed effects multinomial logistic-normal model for forecasting baseball performance." Journal of Quantitative Analysis in Sports 17, no. 3 (January 6, 2021): 221–39. http://dx.doi.org/10.1515/jqas-2020-0007.

Abstract:
Prediction of player performance is a key component in the construction of baseball team rosters. As a result, most prediction models are the proprietary property of team or industrial sports entities, and little is known about them. Of those models that have been published, the main focus has been to separately model each outcome with nearly no emphasis on uncertainty quantification. This research introduces a joint modeling approach to predict seasonal plate appearance outcome vectors using a mixed-effects multinomial logistic-normal model. This model accounts for positive and negative correlations between outcomes, both across and within player seasons, and provides a joint posterior predictive outcome distribution from which uncertainty can be quantified. It is applied to the important, yet unaddressed, problem of predicting performance for players moving between the Japanese (NPB) and American (MLB) major leagues.
16

Wells, S., A. Plotkowski, J. Coleman, M. Rolchigo, R. Carson, and M. J. M. Krane. "Uncertainty quantification for computational modelling of laser powder bed fusion." IOP Conference Series: Materials Science and Engineering 1281, no. 1 (May 1, 2023): 012024. http://dx.doi.org/10.1088/1757-899x/1281/1/012024.

Abstract:
Additive manufacturing (AM) may have many advantages over traditional casting and wrought methods, but our understanding of the various processes is still limited. Computational models are useful to study and isolate underlying physics and improve our understanding of the AM process-microstructure-property relations. However, these models necessarily rely on simplifications and parameters of uncertain value. These assumptions reduce the overall reliability of the predictive capabilities of these models, so it is important to estimate the uncertainty in model output. In doing so, we quantify the effect of model limitations and identify potential areas of improvement, a procedure made possible by uncertainty quantification (UQ). Here we highlight recent work which coupled and propagated statistical and systematic uncertainties from a melt pool transport model based in OpenFOAM through a grain-scale cellular automaton code. We demonstrate how a UQ framework can identify the model parameters which most significantly impact the reliability of model predictions through both models, and thus provide insight for future improvements in the models and suggest measurements to reduce output uncertainty.
17

Ma, Junwei, Xiao Liu, Xiaoxu Niu, Yankun Wang, Tao Wen, Junrong Zhang, and Zongxing Zou. "Forecasting of Landslide Displacement Using a Probability-Scheme Combination Ensemble Prediction Technique." International Journal of Environmental Research and Public Health 17, no. 13 (July 3, 2020): 4788. http://dx.doi.org/10.3390/ijerph17134788.

Abstract:
Data-driven models have been extensively employed in landslide displacement prediction. However, predictive uncertainty, which consists of input uncertainty, parameter uncertainty, and model uncertainty, is usually disregarded in deterministic data-driven modeling, and point estimates are separately presented. In this study, a probability-scheme combination ensemble prediction that employs quantile regression neural networks and kernel density estimation (QRNNs-KDE) is proposed for robust and accurate prediction and uncertainty quantification of landslide displacement. In the ensemble model, QRNNs serve as base learning algorithms to generate multiple base learners. Final ensemble prediction is obtained by integration of all base learners through a probability combination scheme based on KDE. The Fanjiaping landslide in the Three Gorges Reservoir area (TGRA) was selected as a case study to explore the performance of the ensemble prediction. Based on long-term (2006–2018) and near real-time monitoring data, a comprehensive analysis of the deformation characteristics was conducted for fully understanding the triggering factors. The experimental results indicate that the QRNNs-KDE approach can perform predictions with perfect performance and outperform the traditional backpropagation (BP), radial basis function (RBF), extreme learning machine (ELM), support vector machine (SVM) methods, bootstrap-extreme learning machine-artificial neural network (bootstrap-ELM-ANN), and Copula-kernel-based support vector machine quantile regression (Copula-KSVMQR). The proposed QRNNs-KDE approach has significant potential in medium-term to long-term horizon forecasting and quantification of uncertainty.
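
A stripped-down version of the probability-combination step reads as follows: pool the base learners' predictions for one forecast step and smooth them into a full density with KDE, from which intervals follow. The QRNN base learners and bandwidth tuning are omitted; this is an illustrative sketch, not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def combine_by_kde(base_preds, q=(0.05, 0.95), grid_size=512):
    """base_preds: 1-D array of base-learner predictions for one step."""
    kde = gaussian_kde(base_preds)                  # Scott's-rule bandwidth
    pad = 0.5 * np.ptp(base_preds)
    xs = np.linspace(base_preds.min() - pad, base_preds.max() + pad, grid_size)
    pdf = kde(xs)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    lo, hi = (xs[np.searchsorted(cdf, p)] for p in q)
    return xs, pdf, (lo, hi)                        # density and 90% interval
```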
18

Feng, Jinchao, Joshua L. Lansford, Markos A. Katsoulakis, and Dionisios G. Vlachos. "Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences." Science Advances 6, no. 42 (October 2020): eabc3204. http://dx.doi.org/10.1126/sciadv.abc3204.

Abstract:
Data science has primarily focused on big data, but for many physics, chemistry, and engineering applications, data are often small, correlated and, thus, low dimensional, and sourced from both computations and experiments with various levels of noise. Typical statistics and machine learning methods do not work for these cases. Expert knowledge is essential, but a systematic framework for incorporating it into physics-based models under uncertainty is lacking. Here, we develop a mathematical and computational framework for probabilistic artificial intelligence (AI)–based predictive modeling combining data, expert knowledge, multiscale models, and information theory through uncertainty quantification and probabilistic graphical models (PGMs). We apply PGMs to chemistry specifically and develop predictive guarantees for PGMs generally. Our proposed framework, combining AI and uncertainty quantification, provides explainable results leading to correctable and, eventually, trustworthy models. The proposed framework is demonstrated on a microkinetic model of the oxygen reduction reaction.
19

Banerjee, Sourav. "Uncertainty Quantification Driven Predictive Multi-Scale Model for Synthesis of Mycotoxins." Computational Biology and Bioinformatics 2, no. 1 (2014): 7. http://dx.doi.org/10.11648/j.cbb.20140201.12.

20

Riley, Matthew E., and Ramana V. Grandhi. "Quantification of model-form and predictive uncertainty for multi-physics simulation." Computers & Structures 89, no. 11-12 (June 2011): 1206–13. http://dx.doi.org/10.1016/j.compstruc.2010.10.004.

21

Zgraggen, Jannik, Gianmarco Pizza, and Lilach Goren Huber. "Uncertainty Informed Anomaly Scores with Deep Learning: Robust Fault Detection with Limited Data." PHM Society European Conference 7, no. 1 (June 29, 2022): 530–40. http://dx.doi.org/10.36001/phme.2022.v7i1.3342.

Abstract:
Quantifying the predictive uncertainty of a model is an important ingredient in data-driven decision making. Uncertainty quantification has been gaining interest especially for deep learning models, which are often hard to justify or explain. Various techniques for deep learning based uncertainty estimates have been developed, primarily for image classification and segmentation, but also for regression and forecasting tasks. Uncertainty quantification for anomaly detection tasks is still rather limited for image data and has not yet been demonstrated for machine fault detection in PHM applications. In this paper we suggest an approach to derive an uncertainty-informed anomaly score for regression models trained with normal data only. The score is derived using a deep ensemble of probabilistic neural networks for uncertainty quantification. Using an example of wind-turbine fault detection, we demonstrate the superiority of the uncertainty-informed anomaly score over the conventional score. The advantage is particularly clear in an "out-of-distribution" scenario, in which the model is trained with limited data which does not represent all normal regimes that are observed during model deployment.
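
The essence of an uncertainty-informed score can be written in a few lines: normalise the residual by the ensemble's total predictive spread, so that operating regimes the model is unsure about do not trigger spurious alarms. A sketch assuming a deep ensemble of Gaussian-output networks; the variance decomposition is the standard one, and the paper's exact score may differ.

```python
import numpy as np

def uncertainty_informed_score(y, member_means, member_vars):
    """member_means, member_vars: (n_members, n_samples) predictive Gaussians
    from a probabilistic deep ensemble; y: (n_samples,) observed targets."""
    mu = member_means.mean(axis=0)
    # Total variance = mean aleatoric variance + epistemic variance of means
    total_var = member_vars.mean(axis=0) + member_means.var(axis=0)
    return np.abs(y - mu) / np.sqrt(total_var)   # conventional score: |y - mu|
```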
22

Kefalas, Marios, Bas van Stein, Mitra Baratchi, Asteris Apostolidis, and Thomas Baeck. "End-to-End Pipeline for Uncertainty Quantification and Remaining Useful Life Estimation: An Application on Aircraft Engines." PHM Society European Conference 7, no. 1 (June 29, 2022): 245–60. http://dx.doi.org/10.36001/phme.2022.v7i1.3317.

Abstract:
Estimating the remaining useful life (RUL) of an asset lies at the heart of prognostics and health management (PHM) of many operations-critical industries such as aviation. Modern methods of RUL estimation adopt techniques from deep learning (DL). However, most of these contemporary techniques deliver only single-point estimates for the RUL without reporting on the confidence of the prediction. This practice usually provides overly confident predictions that can have severe consequences in operational disruptions or even safety. To address this issue, we propose a technique for uncertainty quantification (UQ) based on Bayesian deep learning (BDL). The hyperparameters of the framework are tuned using a novel bi-objective Bayesian optimization method with objectives the predictive performance and predictive uncertainty. The method also integrates the data pre-processing steps into the hyperparameter optimization (HPO) stage, models the RUL as a Weibull distribution, and returns the survival curves of the monitored assets to allow informed decision-making. We validate this method on the widely used C-MAPSS dataset against a single-objective HPO baseline that aggregates the two objectives through the harmonic mean (HM). We demonstrate the existence of trade-offs between the predictive performance and the predictive uncertainty and observe that the bi-objective HPO returns a larger number of hyperparameter configurations compared to the single-objective baseline. Furthermore, we see that with the proposed approach, it is possible to configure models for RUL estimation that exhibit better or comparable performance to the single-objective baseline when validated on the test sets.
23

Sætrom, Jon, Joakim Hove, Jan-Arild Skjervheim, and Jon Gustav Vabø. "Improved Uncertainty Quantification in the Ensemble Kalman Filter Using Statistical Model-Selection Techniques." SPE Journal 17, no. 01 (January 31, 2012): 152–62. http://dx.doi.org/10.2118/145192-pa.

Abstract:
The ensemble Kalman filter (EnKF) is a sequential Monte Carlo method for solving nonlinear spatiotemporal inverse problems, such as petroleum-reservoir evaluation, in high dimensions. Although the EnKF has seen successful applications in numerous areas, the classical EnKF algorithm can severely underestimate the prediction uncertainty. This can lead to biased production forecasts and an ensemble collapsing into a single realization. In this paper, we combine a previously suggested EnKF scheme based on dimension reduction in the data space with an automatic cross-validation (CV) scheme to select the subspace dimension. The properties of both the dimension reduction and the CV scheme are well known in the statistical literature. In an EnKF setting, the former can reduce the effects caused by collinear ensemble members, while the latter can guard against model overfitting by evaluating the predictive capabilities of the EnKF scheme. The model-selection criterion traditionally used for determining the subspace dimension, on the other hand, does not take the predictive power of the EnKF scheme into account, and can potentially lead to severe problems of model overfitting. A reservoir case study is used to demonstrate that the CV scheme can substantially improve the reservoir predictions with associated uncertainty estimates.
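
For orientation, below is a bare-bones ensemble Kalman analysis step with an optional truncated SVD in the data space; the truncation rank is precisely the model-selection knob that the paper chooses by cross-validation. The shapes and the perturbed-observation scheme are assumptions of this sketch.

```python
import numpy as np

def enkf_analysis(X, Y, d_obs, R, rank=None, seed=0):
    """X: (n_par, n_ens) parameter ensemble; Y: (n_obs, n_ens) simulated data;
    d_obs: (n_obs,) observations; R: (n_obs, n_obs) observation-error covariance."""
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    C = Ya @ Ya.T / (n_ens - 1) + R
    if rank is not None:                        # data-space dimension reduction
        U, s, _ = np.linalg.svd(C)
        C_inv = U[:, :rank] @ np.diag(1.0 / s[:rank]) @ U[:, :rank].T
    else:
        C_inv = np.linalg.inv(C)
    K = Xa @ Ya.T / (n_ens - 1) @ C_inv         # Kalman gain
    rng = np.random.default_rng(seed)
    D = d_obs[:, None] + rng.multivariate_normal(np.zeros(len(d_obs)), R, n_ens).T
    return X + K @ (D - Y)                      # updated parameter ensemble
```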
24

Olalusi, Oladimeji B., and Panagiotis Spyridis. "Probabilistic Studies on the Shear Strength of Slender Steel Fiber Reinforced Concrete Structures." Applied Sciences 10, no. 19 (October 4, 2020): 6955. http://dx.doi.org/10.3390/app10196955.

Abstract:
Shear failure is a brittle and undesirable mode of failure in reinforced concrete structures. Many of the existing shear design equations for steel fiber reinforced concrete (SFRC) beams include significant uncertainty due to their failure to accurately predict the true shear capacity. Given this, adequate quantification and description of model uncertainties, considering the systematic variation between model prediction and measured shear capacity, is crucial for reliability-based investigation. Reliability analysis must account for model uncertainties in order to predict the probability of failure under prescribed limit states. This study focuses on the quantification and description of model uncertainty related to current shear resistance predictive models for SFRC beams without shear reinforcement. The German (DAfStB) model displayed the lowest bias and dispersion, whereas the fib Model 2010 and the Bernat et al. model displayed the highest bias and dispersion. The inconsistencies observed in the resistance model uncertainties with variation of the shear span to effective depth ratio are a major cause for concern, and differentiation with respect to this parameter is advised. Finally, in line with the EN 1990 semi-probabilistic approach for reliability-based design, global partial safety factors related to model uncertainties in the shear resistance prediction of SFRC beams are proposed.
25

Ding, Jing, Yizhuang David Wang, Saqib Gulzar, Youngsoo Richard Kim, and B. Shane Underwood. "Uncertainty Quantification of Simplified Viscoelastic Continuum Damage Fatigue Model using the Bayesian Inference-Based Markov Chain Monte Carlo Method." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 4 (March 13, 2020): 247–60. http://dx.doi.org/10.1177/0361198120910149.

Abstract:
The simplified viscoelastic continuum damage model (S-VECD) has been widely accepted as a computationally efficient and a rigorous mechanistic model to predict the fatigue resistance of asphalt concrete. It operates in a deterministic framework, but in actual practice, there are multiple sources of uncertainty such as specimen preparation errors and measurement errors which need to be probabilistically characterized. In this study, a Bayesian inference-based Markov Chain Monte Carlo method is used to quantify the uncertainty in the S-VECD model. The dynamic modulus and cyclic fatigue test data from 32 specimens are used for parameter estimation and predictive envelope calculation of the dynamic modulus, damage characterization and failure criterion model. These parameter distributions are then propagated to quantify the uncertainty in fatigue prediction. The predictive envelope for each model is further used to analyze the decrease in variance with the increase in the number of replicates. Finally, the proposed methodology is implemented to compare three asphalt concrete mixtures from standard testing. The major findings of this study are: (1) the parameters in the dynamic modulus and damage characterization model have relatively strong correlation which indicates the necessity of Bayesian techniques; (2) the uncertainty of the damage characteristic curve for a single specimen propagated from parameter uncertainties of the dynamic modulus model is negligible compared to the difference in the replicates; (3) four replicates of the cyclic fatigue test are recommended considering the balance between the uncertainty of fatigue prediction and the testing efficiency; and (4) more replicates are needed to confidently detect the difference between different mixtures if their fatigue performance is close.
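
In its simplest form, the Bayesian machinery used here reduces to a random-walk Metropolis sampler over the model parameters, whose chain is then pushed through the fatigue model to obtain predictive envelopes. A generic sketch; the log-posterior is a placeholder the user must supply, and the paper's sampler and tuning may differ.

```python
import numpy as np

def random_walk_metropolis(log_post, x0, n_iter=20_000, step=0.1, seed=0):
    """Sample from a posterior given its log-density up to an additive constant."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + step * rng.normal(size=x.size)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain   # e.g. discard the first half as burn-in
```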
26

Dogulu, N., P. López López, D. P. Solomatine, A. H. Weerts, and D. L. Shrestha. "Estimation of predictive hydrologic uncertainty using quantile regression and UNEEC methods and their comparison on contrasting catchments." Hydrology and Earth System Sciences Discussions 11, no. 9 (September 10, 2014): 10179–233. http://dx.doi.org/10.5194/hessd-11-10179-2014.

Abstract:
In operational hydrology, estimation of the predictive uncertainty of hydrological models used for flood modelling is essential for risk-based decision making for flood warning and emergency management. In the literature, there exists a variety of methods for analyzing and predicting uncertainty. However, case studies comparing the performance of these methods, most particularly predictive uncertainty methods, are limited. This paper focuses on two predictive uncertainty methods that differ in their methodological complexity: quantile regression (QR) and UNcertainty Estimation based on local Errors and Clustering (UNEEC), aiming at identifying possible advantages and disadvantages of these methods (both estimating residual uncertainty) based on their comparative performance. We test these two methods on several catchments (from the UK) that vary in their hydrological characteristics and models. Special attention is given to the errors for high flow/water level conditions. Furthermore, normality of model residuals is discussed in view of the clustering approach employed within the framework of the UNEEC method. It is found that basin lag time and forecast lead time have great impact on the quantification of uncertainty (in the form of two quantiles) and the achievement of normality in the model residuals' distribution. In general, uncertainty analysis results from different case studies indicate that both methods give similar results. However, it is also shown that the UNEEC method provides better performance than QR for small catchments with changing hydrological dynamics, i.e. rapid response catchments. We recommend that more case studies of catchments from regions of distinct hydrologic behaviour, with diverse climatic conditions, and having various hydrological features be tested.
27

Karimanzira, Divas. "Probabilistic Uncertainty Consideration in Regionalization and Prediction of Groundwater Nitrate Concentration." Knowledge 4, no. 4 (September 25, 2024): 462–80. http://dx.doi.org/10.3390/knowledge4040025.

Abstract:
In this study, we extend our previous work on a two-dimensional convolutional neural network (2DCNN) for spatial prediction of groundwater nitrate, focusing on improving uncertainty quantification. Our enhanced model incorporates a fully probabilistic Bayesian framework and a structure aimed at optimizing both specific value predictions and predictive intervals (PIs). We implemented the Prediction Interval Validation and Estimation Network based on Quality Definition (2DCNN-QD) to refine the accuracy of probabilistic predictions and reduce the width of the prediction intervals. Applied to a model region in Germany, our results demonstrate an 18% improvement in the prediction interval width. While traditional Bayesian CNN models may yield broader prediction intervals to adequately capture uncertainties, the 2DCNN-QD method prioritizes quality-driven interval optimization, resulting in narrower prediction intervals without sacrificing coverage probability. Notably, this approach is nonparametric, allowing it to be effectively utilized across a range of real-world scenarios.
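
The two quantities this abstract trades off, coverage and interval width, are conventionally scored as below. This is only the evaluation side; the network and its quality-driven training loss are beyond this sketch.

```python
import numpy as np

def picp(y, lo, hi):
    """Prediction Interval Coverage Probability: fraction of targets captured."""
    return np.mean((y >= lo) & (y <= hi))

def mpiw(lo, hi):
    """Mean Prediction Interval Width: the quantity reported as improved by 18%."""
    return np.mean(hi - lo)
```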
28

Cacuci, Dan G. "TOWARDS OVERCOMING THE CURSE OF DIMENSIONALITY IN PREDICTIVE MODELLING AND UNCERTAINTY QUANTIFICATION." EPJ Web of Conferences 247 (2021): 00002. http://dx.doi.org/10.1051/epjconf/202124700002.

Abstract:
This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd-, and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase-space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes the current data adjustment/assimilation methodologies. Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality which would affect the inversion of Hessian and higher-order matrices in the parameter space.
29

Cacuci, Dan G. "TOWARDS OVERCOMING THE CURSE OF DIMENSIONALITY IN PREDICTIVE MODELLING AND UNCERTAINTY QUANTIFICATION." EPJ Web of Conferences 247 (2021): 20005. http://dx.doi.org/10.1051/epjconf/202124720005.

30

Slavinskaya, N. A., M. Abbasi, J. H. Starcke, R. Whitside, A. Mirzayeva, U. Riedel, W. Li, et al. "Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion." Energy & Fuels 31, no. 3 (February 14, 2017): 2274–97. http://dx.doi.org/10.1021/acs.energyfuels.6b02319.

31

Tran, Vinh Ngoc, and Jongho Kim. "Quantification of predictive uncertainty with a metamodel: toward more efficient hydrologic simulations." Stochastic Environmental Research and Risk Assessment 33, no. 7 (July 2019): 1453–76. http://dx.doi.org/10.1007/s00477-019-01703-0.

32

Walz, Eva-Maria, Alexander Henzi, Johanna Ziegel, and Tilmann Gneiting. "Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output." SIAM Review 66, no. 1 (February 2024): 91–122. http://dx.doi.org/10.1137/22m1541915.

33

Heringhaus, Monika E., Yi Zhang, André Zimmermann, and Lars Mikelsons. "Towards Reliable Parameter Extraction in MEMS Final Module Testing Using Bayesian Inference." Sensors 22, no. 14 (July 20, 2022): 5408. http://dx.doi.org/10.3390/s22145408.

Abstract:
In micro-electro-mechanical systems (MEMS) testing high overall precision and reliability are essential. Due to the additional requirement of runtime efficiency, machine learning methods have been investigated in recent years. However, these methods are often associated with inherent challenges concerning uncertainty quantification and guarantees of reliability. The goal of this paper is therefore to present a new machine learning approach in MEMS testing based on Bayesian inference to determine whether the estimation is trustworthy. The overall predictive performance as well as the uncertainty quantification are evaluated with four methods: Bayesian neural network, mixture density network, probabilistic Bayesian neural network and BayesFlow. They are investigated under the variation in training set size, different additive noise levels, and an out-of-distribution condition, namely the variation in the damping factor of the MEMS device. Furthermore, epistemic and aleatoric uncertainties are evaluated and discussed to encourage thorough inspection of models before deployment striving for reliable and efficient parameter estimation during final module testing of MEMS devices. BayesFlow consistently outperformed the other methods in terms of the predictive performance. As the probabilistic Bayesian neural network enables the distinction between epistemic and aleatoric uncertainty, their share of the total uncertainty has been intensively studied.
34

Incorvaia, Gabriele, Darryl Hond, and Hamid Asgari. "Uncertainty Quantification of Machine Learning Model Performance via Anomaly-Based Dataset Dissimilarity Measures." Electronics 13, no. 5 (February 29, 2024): 939. http://dx.doi.org/10.3390/electronics13050939.

Abstract:
The use of Machine Learning (ML) models as predictive tools has increased dramatically in recent years. However, data-driven systems (such as ML models) exhibit a degree of uncertainty in their predictions. In other words, they could produce unexpectedly erroneous predictions if the uncertainty stemming from the data, choice of model and model parameters is not taken into account. In this paper, we introduce a novel method for quantifying the uncertainty of the performance levels attained by ML classifiers. In particular, we investigate and characterize the uncertainty of model accuracy when classifying out-of-distribution data that are statistically dissimilar from the data employed during training. A main element of this novel Uncertainty Quantification (UQ) method is a measure of the dissimilarity between two datasets. We introduce an innovative family of data dissimilarity measures based on anomaly detection algorithms, namely the Anomaly-based Dataset Dissimilarity (ADD) measures. These dissimilarity measures process feature representations that are derived from the activation values of neural networks when supplied with dataset items. The proposed UQ method for classification performance employs these dissimilarity measures to estimate the classifier accuracy for unseen, out-of-distribution datasets, and to give an uncertainty band for those estimates. A numerical analysis of the efficacy of the UQ method is conducted using standard Artificial Neural Network (ANN) classifiers and public domain datasets. The results obtained generally demonstrate that the amplitude of the uncertainty band associated with the estimated accuracy values tends to increase as the data dissimilarity measure increases. Overall, this research contributes to the verification and run-time performance prediction of systems composed of ML-based elements.
35

Ma, Junwei, Xiaoxu Niu, Huiming Tang, Yankun Wang, Tao Wen, and Junrong Zhang. "Displacement Prediction of a Complex Landslide in the Three Gorges Reservoir Area (China) Using a Hybrid Computational Intelligence Approach." Complexity 2020 (January 28, 2020): 1–15. http://dx.doi.org/10.1155/2020/2624547.

Abstract:
Displacement prediction of reservoir landslides remains inherently uncertain since a complete understanding of the complex nonlinear, dynamic landslide system is still lacking. An appropriate quantification of predictive uncertainties is a key underpinning of displacement prediction and mitigation for reservoir landslides. A density prediction, offering a full estimation of the probability density for future outputs, is promising for quantification of the uncertainty of landslide displacement. In the present study, a hybrid computational intelligence approach is proposed to build a density prediction model of landslide displacement and quantify the associated predictive uncertainties. The hybrid computational intelligence approach consists of two steps: first, the input variables are selected through copula analysis; second, kernel-based support vector machine quantile regression (KSVMQR) is employed to perform density prediction. The copula-KSVMQR approach is demonstrated on a complex landslide in the Three Gorges Reservoir Area (TGRA), China. The experimental study suggests that the copula-KSVMQR approach is capable of constructing density predictions by providing full probability density distributions of the prediction with perfect performance. In addition, different types of predictions, including interval predictions and point predictions, can be derived from the obtained density predictions with excellent performance. The results show that the mean prediction interval widths of the proposed approach at ZG287 and ZG289 are 27.30 and 33.04, respectively, which are approximately 60 percent lower than those obtained using the traditional bootstrap-extreme learning machine-artificial neural network (Bootstrap-ELM-ANN). Moreover, the obtained point predictions show great consistency with the observations, with correlation coefficients of 0.9998. Given the satisfactory performance, the presented copula-KSVMQR approach shows a great ability to predict landslide displacement.
36

Namadchian, Ali, Mehdi Ramezani, and Yuanyuan Zou. "Uncertainty quantification of model predictive control for nonlinear systems with parametric uncertainty using hybrid pseudo-spectral method." Cogent Engineering 6, no. 1 (January 1, 2019): 1691803. http://dx.doi.org/10.1080/23311916.2019.1691803.

37

Chen, Ming, Xinhu Zhang, Kechun Shen, and Guang Pan. "Sparse Polynomial Chaos Expansion for Uncertainty Quantification of Composite Cylindrical Shell with Geometrical and Material Uncertainty." Journal of Marine Science and Engineering 10, no. 5 (May 14, 2022): 670. http://dx.doi.org/10.3390/jmse10050670.

Abstract:
The geometrical dimensions and mechanical properties of composite materials exhibit inherent variation and uncertainty in practical engineering. Uncertainties in geometrical dimensions and mechanical properties propagate to the structural performance of composite cylindrical shells under hydrostatic pressure. However, traditional methods for quantification of uncertainty, such as Monte Carlo simulation and the response surface method, are either time consuming with low convergence rates or unable to deal with high-dimensional problems. In this study, the quantification of the high-dimensional uncertainty of the critical buckling pressure of a composite cylindrical shell with geometrical and material uncertainties was investigated by means of sparse polynomial chaos expansion (PCE). With limited design samples, sparse PCE was built and validated for predictive accuracy. Statistical moments (mean and standard deviation) and global sensitivity analysis results were obtained based on the sparse PCE. The mean and standard deviation of critical buckling pressure were 3.5777 MPa and 0.3149 MPa, with a coefficient of variation of 8.801%. Global sensitivity analysis results from Sobol’ indices and the Morris method showed that the uncertainty of longitudinal modulus has a massive influence on the critical buckling pressure of composite cylindrical shell, whereas the uncertainties of transverse modulus, shear modulus, and Poisson’s ratio have a weak influence. When the coefficient of variation of ply thickness and orientation angle does not surpass 2%, the uncertainties of ply thickness and orientation angle have a weak influence on the critical buckling pressure. The study shows that the sparse PCE is effective at resolving the problem of high-dimensional uncertainty quantification of composite cylindrical shell with geometrical and material uncertainty.
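
To ground the sensitivity-analysis vocabulary: first-order Sobol' indices can be estimated by Monte Carlo pick-freeze sampling, as sketched below. The paper instead obtains them from the sparse PCE coefficients; this sketch only illustrates the quantity being computed, on independent inputs scaled to [0, 1].

```python
import numpy as np

def sobol_first_order(f, dim, n=100_000, seed=0):
    """Saltelli-style pick-freeze estimate of first-order indices on [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # vary only input i between fA and f(ABi)
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy check: for f(x) = 4*x0 + x1, the index for x0 should dominate (~0.94)
# print(sobol_first_order(lambda X: 4 * X[:, 0] + X[:, 1], dim=2))
```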
38

Shrestha, Durga L., Nagendra Kayastha, Dimitri Solomatine, and Roland Price. "Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method." Journal of Hydroinformatics 16, no. 1 (July 25, 2013): 95–113. http://dx.doi.org/10.2166/hydro.2013.242.

Abstract:
Monte Carlo simulation-based uncertainty analysis techniques have been applied successfully in hydrology for quantification of the model output uncertainty. They are flexible, conceptually simple and straightforward, but provide only average measures of uncertainty based on past data. However, if one needs to estimate uncertainty of a model in a particular hydro-meteorological situation in real time application of complex models, Monte Carlo simulation becomes impractical because of the large number of model runs required. This paper presents a novel approach to encapsulating and predicting parameter uncertainty of hydrological models using machine learning techniques. Generalised likelihood uncertainty estimation method (a version of the Monte Carlo method) is first used to assess the parameter uncertainty of a hydrological model, and then the generated data are used to train three machine learning models. Inputs to these models are specially identified representative variables. The trained models are then employed to predict the model output uncertainty which is specific for the new input data. This method has been applied to two contrasting catchments. The experimental results demonstrate that the machine learning models are quite accurate. An important advantage of the proposed method is its efficiency allowing for assessing uncertainty of complex models in real time.
39

Ye, Yanan, Alvaro Ruiz-Martinez, Peng Wang, and Daniel M. Tartakovsky. "Quantification of Predictive Uncertainty in Models of FtsZ ring assembly in Escherichia coli." Journal of Theoretical Biology 484 (January 2020): 110006. http://dx.doi.org/10.1016/j.jtbi.2019.110006.

40

Hasselman, Timothy, and George Lloyd. "A top-down approach to calibration, validation, uncertainty quantification and predictive accuracy assessment." Computer Methods in Applied Mechanics and Engineering 197, no. 29-32 (May 2008): 2596–606. http://dx.doi.org/10.1016/j.cma.2007.07.031.

41

Xie, Shulian, Feng Xue, Weimin Zhang, and Jiawei Zhu. "Data-Driven Predictive Maintenance Policy Based on Dynamic Probability Distribution Prediction of Remaining Useful Life." Machines 11, no. 10 (September 25, 2023): 923. http://dx.doi.org/10.3390/machines11100923.

Abstract:
As the reliability, availability, maintainability, and safety of industrial equipment have become crucial in the context of intelligent manufacturing, there are increasing expectations and requirements for maintenance policies. Compared with traditional methods, data-driven Predictive Maintenance (PdM), a superior approach to equipment and system maintenance, has been paid considerable attention by scholars in this field due to its high applicability and accuracy with a highly reliable quantization basis provided by big data. However, current data-driven methods typically provide only point estimates of the state rather than quantification of uncertainty, impeding effective maintenance decision-making. In addition, few studies have conducted further research on maintenance decision-making based on state predictions to achieve the full functionality of PdM. A PdM policy is proposed in this work to obtain the continuous probability distribution of system states dynamically and make maintenance decisions. The policy utilizes the Long Short-Term Memory (LSTM) network and Kernel Density Estimation with a Single Globally-optimized Bandwidth (KDE-SGB) method to dynamic predicting of the continuous probability distribution of the Remaining Useful Life (RUL). A comprehensive optimization target is introduced to establish the maintenance decision-making approach acquiring recommended maintenance time. Finally, the proposed policy is validated through a bearing case study, indicating that it allows for obtaining the continuous probability distribution of RUL centralized over a range of ±10 sampling cycles. In comparison to the other two policies, it could reduce the maintenance costs by 24.49~70.02%, raise the availability by 0.46~1.90%, heighten the reliability by 0.00~27.50%, and promote more stable performance with various maintenance cost and duration. The policy has offered a new approach without priori hypotheses for RUL prediction and its uncertainty quantification and provided a reference for constructing a complete PdM policy integrating RUL prediction with maintenance decision-making.
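
The single globally optimized bandwidth idea can be approximated with cross-validated likelihood, for instance in scikit-learn as below. This is an analogue under assumed defaults, not the paper's KDE-SGB procedure, and the bandwidth grid is a hypothetical choice.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

def rul_density(rul_samples):
    """Turn sampled RUL values into a continuous density with one global
    bandwidth chosen by 5-fold cross-validated log-likelihood."""
    search = GridSearchCV(
        KernelDensity(kernel="gaussian"),
        {"bandwidth": np.logspace(-2, 1, 20)},
        cv=5,
    )
    search.fit(np.asarray(rul_samples).reshape(-1, 1))
    return search.best_estimator_   # .score_samples(x) returns log-density
```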
APA, Harvard, Vancouver, ISO, and other styles
42

Zhu, Hong-Yu, Gang Wang, Yi Liu, and Ze-Kun Zhou. "Numerical investigation of transonic buffet on supercritical airfoil considering uncertainties in wind tunnel testing." International Journal of Modern Physics B 34, no. 14n16 (April 20, 2020): 2040083. http://dx.doi.org/10.1142/s0217979220400834.

Full text
Abstract:
To improve the predictive ability of computational fluid dynamics (CFD) for the transonic buffet phenomenon, the NASA SC(2)-0714 supercritical airfoil is numerically investigated using a nonintrusive probabilistic collocation method for uncertainty quantification. Distributions of the uncertain parameters are established according to the NASA wind tunnel report. The effects of the uncertainties on lift, drag, mean pressure and root-mean-square pressure are discussed. To represent the stochastic solution, the mean and standard deviation of flow quantities such as the lift and drag coefficients are computed. Furthermore, the mean and root-mean-square pressure distributions on the upper surface are displayed with uncertainty bounds containing 95% of all possible values. It is shown that the part of the flow most sensitive to the uncertain parameters is near the shock wave motion region. Comparison of the uncertainty bounds with experimental data shows that the numerical results reliably predict the reduced frequency and the mean pressure distribution. For the root-mean-square pressure distribution, however, the numerical results are higher than the experimental data in the trailing edge region.
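
A nonintrusive collocation scheme of this kind can be sketched in one uncertain dimension: the deterministic solver is run at quadrature nodes and the outputs are combined with quadrature weights. The quadratic lift response and the Mach-number uncertainty below are illustrative assumptions standing in for the unsteady CFD solver and the wind tunnel report.

```python
import numpy as np

def lift_coefficient(mach):
    # Toy aerodynamic response; in the study each evaluation is a CFD run.
    return 0.71 + 2.5 * (mach - 0.72) - 40.0 * (mach - 0.72) ** 2

mach_mean, mach_std = 0.72, 0.003  # assumed uncertain tunnel Mach number

# Gauss-Hermite nodes/weights for a Gaussian input: E[f(X)] with
# X ~ N(mu, sigma^2) maps to nodes mu + sqrt(2)*sigma*t, weights w/sqrt(pi).
nodes, weights = np.polynomial.hermite.hermgauss(8)
mach_samples = mach_mean + np.sqrt(2.0) * mach_std * nodes
weights = weights / np.sqrt(np.pi)

cl = np.array([lift_coefficient(m) for m in mach_samples])
cl_mean = np.sum(weights * cl)
cl_std = np.sqrt(np.sum(weights * (cl - cl_mean) ** 2))

# Approximate bounds containing ~95% of the possible lift values.
print(f"CL = {cl_mean:.4f} +/- {1.96 * cl_std:.4f}")
```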
APA, Harvard, Vancouver, ISO, and other styles
43

Boso, F., and D. M. Tartakovsky. "Learning on dynamic statistical manifolds." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2239 (July 2020): 20200213. http://dx.doi.org/10.1098/rspa.2020.0213.

Full text
Abstract:
Hyperbolic balance laws with uncertain (random) parameters and inputs are ubiquitous in science and engineering. Quantification of uncertainty in predictions derived from such laws, and reduction of predictive uncertainty via data assimilation, remain an open challenge. That is due to the nonlinearity of the governing equations, whose solutions are highly non-Gaussian and often discontinuous. To ameliorate these issues in a computationally efficient way, we use the method of distributions, which here takes the form of a deterministic equation for the spatio-temporal evolution of the cumulative distribution function (CDF) of the random system state, as a means of forward uncertainty propagation. Uncertainty reduction is achieved by recasting the standard loss function, i.e. the discrepancy between observations and model predictions, in distributional terms. This step exploits the equivalence between minimization of the squared-error discrepancy and of the Kullback–Leibler divergence. The loss function is regularized by adding a Lagrangian constraint that enforces fulfilment of the CDF equation. Minimization is performed sequentially, progressively updating the parameters of the CDF equation as more measurements are assimilated.
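
A toy version of the distributional recasting of the loss might look as follows. Empirical CDFs and a squared-error CDF discrepancy are used purely for illustration; the paper evolves the CDF with a deterministic equation rather than by sampling, and its exact loss and regularization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo samples of the random system state at one space-time point,
# standing in for the forward-propagated CDF of the state.
prior_state = rng.normal(1.0, 0.5, size=5000)

# Observations of the same quantity, with measurement noise.
observations = rng.normal(1.3, 0.2, size=50)

grid = np.linspace(-1.0, 3.5, 300)

def empirical_cdf(samples, grid):
    # Fraction of samples at or below each grid point.
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

F_model = empirical_cdf(prior_state, grid)
F_obs = empirical_cdf(observations, grid)

# Distributional loss: discrepancy between distribution functions rather
# than between point values; a squared-error version is shown, which the
# paper links to the Kullback-Leibler divergence.
loss = np.sum((F_model - F_obs) ** 2) * (grid[1] - grid[0])
print(f"distributional loss = {loss:.4f}")
```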
APA, Harvard, Vancouver, ISO, and other styles
44

Dogulu, N., P. López López, D. P. Solomatine, A. H. Weerts, and D. L. Shrestha. "Estimation of predictive hydrologic uncertainty using the quantile regression and UNEEC methods and their comparison on contrasting catchments." Hydrology and Earth System Sciences 19, no. 7 (July 23, 2015): 3181–201. http://dx.doi.org/10.5194/hess-19-3181-2015.

Full text
Abstract:
In operational hydrology, estimation of the predictive uncertainty of hydrological models used for flood modelling is essential for risk-based decision making in flood warning and emergency management. In the literature, there exists a variety of methods for analysing and predicting uncertainty. However, studies devoted to comparing the performance of these methods in predicting uncertainty are limited. This paper focuses on methods for predicting model residual uncertainty that differ in methodological complexity: quantile regression (QR) and UNcertainty Estimation based on local Errors and Clustering (UNEEC). The comparison of the methods is aimed at investigating how well a simpler method using fewer input data performs relative to a more complex method with more predictors. We test these two methods on several catchments from the UK that vary in hydrological characteristics and in the models used. Special attention is given to the methods' performance under different hydrological conditions. Furthermore, the normality of model residuals in data clusters (identified by UNEEC) is analysed. It is found that basin lag time and forecast lead time have a large impact on the quantification of uncertainty and on the normality of the model residuals' distribution. In general, both methods give similar results. At the same time, the UNEEC method performs better than QR for small catchments with rapidly changing hydrological dynamics, i.e. rapid-response catchments. It is recommended that more case studies be considered, covering catchments with distinct hydrological behaviour, diverse climatic conditions, and various hydrological features.
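
A minimal sketch of residual-uncertainty estimation via quantile regression is shown below, with a synthetic heteroscedastic residual series standing in for real hydrological model errors; the gradient-boosted implementation and the 5%/95% quantile levels are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Toy setup: a predictor available at forecast time (simulated flow), with
# heteroscedastic model residuals, larger errors on high flows.
flow = rng.gamma(2.0, 30.0, size=2000)
residual = rng.normal(0.0, 0.05 * flow + 1.0)
X = flow.reshape(-1, 1)

# Quantile regression of the residuals: one model per quantile gives the
# lower and upper bounds of a 90% predictive interval for new forecasts.
q_lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, residual)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, residual)

new_flow = np.array([[20.0], [150.0]])
print("lower residual bound:", q_lo.predict(new_flow))
print("upper residual bound:", q_hi.predict(new_flow))
```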
APA, Harvard, Vancouver, ISO, and other styles
45

Pandey, Deep Shankar, and Qi Yu. "Evidential Conditional Neural Processes." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9389–97. http://dx.doi.org/10.1609/aaai.v37i8.26125.

Full text
Abstract:
The Conditional Neural Process (CNP) family of models offers a promising direction for tackling few-shot problems by achieving better scalability and competitive predictive performance. However, current CNP models capture only the overall uncertainty of the prediction made on a target data point. They lack a systematic, fine-grained quantification of the distinct sources of uncertainty that are essential for model training and decision-making in the few-shot setting. We propose Evidential Conditional Neural Processes (ECNP), which replace the standard Gaussian distribution used by CNP with a much richer hierarchical Bayesian structure through evidential learning to achieve an epistemic-aleatoric uncertainty decomposition. The evidential hierarchical structure also leads to a theoretically justified robustness to noisy training tasks. Theoretical analysis of the proposed ECNP establishes its relationship with CNP while offering deeper insights into the roles of the evidential parameters. Extensive experiments conducted on both synthetic and real-world data demonstrate the effectiveness of the proposed model in various few-shot settings.
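
The epistemic-aleatoric decomposition that evidential learning provides can be illustrated with the Normal-Inverse-Gamma (NIG) parameterization used in the evidential regression literature; the fixed parameter values below are illustrative, whereas in a model like ECNP they would be produced by the network per target point.

```python
# Evidential regression places a Normal-Inverse-Gamma prior over the Gaussian
# likelihood parameters; a network outputs (gamma, nu, alpha, beta). The
# closed-form moments below follow the standard NIG decomposition.
gamma, nu, alpha, beta = 0.8, 2.0, 3.0, 1.5  # illustrative values

prediction = gamma                       # E[mu]: the point prediction
aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: irreducible data noise
epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model (knowledge) uncertainty

print(f"prediction={prediction:.3f}, "
      f"aleatoric={aleatoric:.3f}, epistemic={epistemic:.3f}")
```

Note how epistemic uncertainty shrinks as the evidence parameter nu grows, while aleatoric uncertainty does not, which is what makes the two sources separable.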
APA, Harvard, Vancouver, ISO, and other styles
46

Davis, Gary A., and Christopher Cheong. "Pedestrian Injury Severity vs. Vehicle Impact Speed: Uncertainty Quantification and Calibration to Local Conditions." Transportation Research Record: Journal of the Transportation Research Board 2673, no. 11 (June 16, 2019): 583–92. http://dx.doi.org/10.1177/0361198119851747.

Full text
Abstract:
This paper describes a method for fitting predictive models that relate vehicle impact speeds to pedestrian injuries, in which results from a national sample are calibrated to reflect local injury statistics. Three methodological issues identified in the literature (outcome-based sampling, uncertainty regarding estimated impact speeds, and uncertainty quantification) are addressed by (i) implementing Bayesian inference using Markov chain Monte Carlo sampling and (ii) applying multiple imputation to conditional maximum likelihood estimation. The methods are illustrated using crash data from the NHTSA Pedestrian Crash Data Study coupled with an exogenous sample of pedestrian crashes from Minnesota's Twin Cities. The two approaches produced similar results and, given a reliable characterization of impact speed uncertainty, either approach can be applied in a jurisdiction having an exogenous sample of pedestrian crash severities.
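
The Bayesian MCMC ingredient can be sketched with a random-walk Metropolis sampler for a logistic injury-severity model. The data are synthetic, the flat priors and proposal scales are assumptions, and the multiple-imputation treatment of impact-speed uncertainty is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: impact speeds (km/h) and a binary severe-injury outcome.
speed = rng.uniform(10, 80, size=300)
p_true = 1.0 / (1.0 + np.exp(-(-6.0 + 0.12 * speed)))
severe = rng.binomial(1, p_true)

def log_posterior(a, b):
    # Flat priors; Bernoulli log-likelihood of the logistic model.
    z = a + b * speed
    return np.sum(severe * z - np.log1p(np.exp(z)))

# Random-walk Metropolis sampling of the posterior over (a, b).
chain, cur = [], np.array([0.0, 0.05])
cur_lp = log_posterior(*cur)
for _ in range(20000):
    prop = cur + rng.normal(0, [0.3, 0.005])
    prop_lp = log_posterior(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    chain.append(cur)
chain = np.array(chain)[5000:]  # discard burn-in

# Posterior uncertainty band for P(severe | impact speed = 50 km/h).
p50 = 1.0 / (1.0 + np.exp(-(chain[:, 0] + chain[:, 1] * 50.0)))
print(np.percentile(p50, [2.5, 50, 97.5]))
```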
APA, Harvard, Vancouver, ISO, and other styles
47

Gupta, Ishank, Deepak Devegowda, Vikram Jayaram, Chandra Rai, and Carl Sondergeld. "Machine learning regressors and their metrics to predict synthetic sonic and mechanical properties." Interpretation 7, no. 3 (August 1, 2019): SF41—SF55. http://dx.doi.org/10.1190/int-2018-0255.1.

Full text
Abstract:
Planning and optimizing completion design for hydraulic fracturing require a quantifiable understanding of the spatial distribution of the brittleness of the rock and other geomechanical properties. Eventually, the goal is to maximize the stimulated reservoir volume with minimal cost overhead. The compressional and shear velocities (Vp and Vs, respectively) can also be used to calculate Young's modulus, Poisson's ratio, and other mechanical properties. In the field, sonic logs are not commonly acquired, and operators often resort to regression to predict synthetic sonic logs. We have compared several machine learning regression techniques for their ability to predict synthetic sonic logs (Vp and Vs) and a brittleness indicator, namely hardness, using laboratory core data. We used techniques such as multilinear regression (MLR), least absolute shrinkage and selection operator regression, support vector regression, random forest (RF), gradient boosting (GB), and alternating conditional expectation. We found that the commonly used MLR is suboptimal, with less-than-satisfactory predictive accuracy. Other techniques, particularly RF and GB, have greater predictive capabilities. We also used Gaussian process simulation for uncertainty quantification because it provides uncertainty estimates on the predicted values for a wide range of inputs. The random forest and extreme GB techniques also show low uncertainties in prediction.
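
The Gaussian-process ingredient can be sketched as follows; the porosity/density predictors and the synthetic Vp values below are assumptions standing in for the laboratory core measurements used in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Toy stand-ins for core measurements: porosity (frac) and density (g/cc)
# as predictors of compressional velocity Vp (km/s).
X = rng.uniform([0.02, 2.2], [0.20, 2.7], size=(80, 2))
vp = 5.5 - 8.0 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.05, 80)

# A Gaussian process returns a predictive mean and standard deviation,
# i.e. an uncertainty estimate, for every new input.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, vp)

mean, std = gp.predict(np.array([[0.10, 2.5]]), return_std=True)
print(f"Vp = {mean[0]:.3f} +/- {1.96 * std[0]:.3f} km/s")
```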
APA, Harvard, Vancouver, ISO, and other styles
48

Guerra, Gabriel, Fernando A. Rochinha, Renato Elias, Daniel de Oliveira, Eduardo Ogasawara, Jonas Furtado Dias, Marta Mattoso, and Alvaro L. G. A. Coutinho. "Uncertainty Quantification in Computational Predictive Models for Fluid Dynamics Using a Workflow Management Engine." International Journal for Uncertainty Quantification 2, no. 1 (2012): 53–71. http://dx.doi.org/10.1615/int.j.uncertaintyquantification.v2.i1.50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Peltz, James J., Dan G. Cacuci, Aurelian F. Badea, and Madalina C. Badea. "Predictive Modeling Applied to a Spent Fuel Dissolver Model—II: Uncertainty Quantification and Reduction." Nuclear Science and Engineering 183, no. 3 (July 1, 2016): 332–46. http://dx.doi.org/10.13182/nse15-99.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kasiviswanathan, K. S., and K. P. Sudheer. "Quantification of the predictive uncertainty of artificial neural network based river flow forecast models." Stochastic Environmental Research and Risk Assessment 27, no. 1 (June 28, 2012): 137–46. http://dx.doi.org/10.1007/s00477-012-0600-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles