Journal articles on the topic 'Aleatoric uncertainty'


Consult the top 50 journal articles for your research on the topic 'Aleatoric uncertainty.'


Abstracts are included with each entry whenever they are available in the publication metadata.


1

Pamungkas, Yayi Wira. "Penggunaan Aturan Ular Tangga dalam Musik Aleatorik Berbasis Serialisme Integral." Journal of Music Science, Technology, and Industry 3, no. 2 (October 21, 2020): 201–22. http://dx.doi.org/10.31091/jomsti.v3i2.1157.

Abstract:
Purpose: The author experiments with the rules of snakes and ladders to find out and understand how the concept of uncertainty can work in serialism-based aleatoric music, by testing it with the most stringent serialism system, namely integral serialism. Research methods: The composition in this artistic research work was created in five stages, namely the exploration stage, the concept preparation stage, the concept analysis stage, the macro-structure preparation stage, and the concept application stage. Results and discussion: The snakes-and-ladders concept can optimize the concept of uncertainty in serialism-based aleatoric music. The integral serialism system dominates the formation of melody and harmony, while the aleatoric snakes-and-ladders concept is used to control phrasing. Implication: Two phenomena stimulated the idea for this artistic research work, namely the stiffness and the weak character of the concept of uncertainty in serialism-based aleatoric music.
2

Hong, Ming, Jianzhuang Liu, Cuihua Li, and Yanyun Qu. "Uncertainty-Driven Dehazing Network." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 906–13. http://dx.doi.org/10.1609/aaai.v36i1.19973.

Abstract:
Deep learning has made remarkable achievements in single image haze removal. However, existing deep dehazing models only give deterministic results without discussing their uncertainty. There exist two types of uncertainty in dehazing models: aleatoric uncertainty, which comes from noise inherent in the observations, and epistemic uncertainty, which accounts for uncertainty in the model. In this paper, we propose a novel uncertainty-driven dehazing network (UDN) that improves the dehazing results by exploiting the relationship between the uncertain and confident representations. We first introduce an Uncertainty Estimation Block (UEB) to predict the aleatoric and epistemic uncertainty together. Then, we propose an Uncertainty-aware Feature Modulation (UFM) block to adaptively enhance the learned features. UFM predicts a convolution kernel and channel-wise modulation coefficients conditioned on the uncertainty-weighted representation. Moreover, we develop an uncertainty-driven self-distillation loss to improve the uncertain representation by transferring the knowledge from the confident one. Extensive experimental results on synthetic datasets and real-world images show that UDN achieves significant quantitative and qualitative improvements, outperforming the state of the art.
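
Aleatoric uncertainty of the kind described above is commonly learned by giving the network a second output head for a per-pixel variance and training with a heteroscedastic negative log-likelihood. A minimal sketch of that generic loss, not the authors' UDN code (all names here are hypothetical):

```python
import torch

def heteroscedastic_nll(pred, log_var, target):
    # Gaussian negative log-likelihood with a predicted per-pixel variance.
    # Predicting the log-variance keeps the optimisation numerically stable.
    return (0.5 * torch.exp(-log_var) * (pred - target) ** 2
            + 0.5 * log_var).mean()

# Toy usage: a dehazing network would produce both heads from the same backbone.
pred = torch.randn(4, 3, 64, 64, requires_grad=True)
log_var = torch.zeros(4, 3, 64, 64, requires_grad=True)
target = torch.randn(4, 3, 64, 64)
heteroscedastic_nll(pred, log_var, target).backward()
```

The exp(-log_var) factor lets the network discount pixels it flags as noisy, while the +log_var term penalises claiming high uncertainty everywhere.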
3

Lyu, Yufeng, Zhenyu Liu, Xiang Peng, Jianrong Tan, and Chan Qiu. "Unified Reliability Measure Method Considering Uncertainties of Input Variables and Their Distribution Parameters." Applied Sciences 11, no. 5 (March 4, 2021): 2265. http://dx.doi.org/10.3390/app11052265.

Abstract:
Aleatoric and epistemic uncertainties can be represented probabilistically in mechanical systems. However, the distribution parameters of epistemic uncertainties are themselves uncertain due to sparse or inaccurate uncertainty information. Therefore, a unified reliability measure method is proposed that considers the uncertainties of input variables and of their distribution parameters simultaneously. The uncertainty information for the distribution parameters of epistemic uncertainties may consist of insufficient data or interval information, which is represented with evidence theory. The probability density function of the uncertain distribution parameters is constructed by fusing insufficient data and interval information with a Gaussian interpolation algorithm, and the epistemic uncertainties are represented as a weighted sum of probability variables based on discrete distribution parameters. The reliability index considering aleatoric and epistemic uncertainties is calculated around the most probable point. The effectiveness of the proposed algorithm is demonstrated through comparison with the Monte Carlo method in the engineering examples of a crank-slider mechanism and a composite laminated plate.
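
The nesting of the two uncertainty layers can be illustrated with a double-loop Monte Carlo estimate: an outer loop samples uncertain distribution parameters (epistemic), an inner loop samples the input variables themselves (aleatoric). A self-contained sketch with made-up numbers, not the paper's evidence-theory formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # Hypothetical performance function: failure when g < 0.
    return 6.0 - x.sum(axis=-1)

n_outer, n_inner = 200, 10_000
pf_samples = []
for _ in range(n_outer):                        # epistemic loop: uncertain parameters
    mu = rng.normal(1.0, 0.1, size=2)           # uncertain means of the inputs
    x = rng.normal(mu, 0.5, size=(n_inner, 2))  # aleatoric loop: input variables
    pf_samples.append((limit_state(x) < 0).mean())

print(f"failure probability: {np.mean(pf_samples):.4f} "
      f"(epistemic spread: {np.std(pf_samples):.4f})")
```

The spread of the outer-loop estimates is precisely the contribution of parameter uncertainty to the reliability measure.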
4

Mehltretter, M. "JOINT ESTIMATION OF DEPTH AND ITS UNCERTAINTY FROM STEREO IMAGES USING BAYESIAN DEEP LEARNING." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2022 (May 17, 2022): 69–78. http://dx.doi.org/10.5194/isprs-annals-v-2-2022-69-2022.

Abstract:
The necessity to identify errors in the context of image-based 3D reconstruction has motivated the development of various methods for the estimation of uncertainty associated with depth estimates in recent years. Most of these methods exclusively estimate aleatoric uncertainty, which describes stochastic effects. On the other hand, epistemic uncertainty, which accounts for simplifications or incorrect assumptions with respect to the formulated model hypothesis, is often neglected. However, to accurately quantify the uncertainty inherent in a process, it is necessary to consider all potential sources of uncertainty and to model their stochastic behaviour appropriately. To approach this objective, a holistic method to jointly estimate disparity and uncertainty is presented in this work, taking into account both aleatoric and epistemic uncertainty. For this purpose, the proposed method is based on a Bayesian Neural Network, which is trained with variational inference using a probabilistic loss formulation. To evaluate the performance of the proposed method, extensive experiments are carried out on three datasets considering real-world indoor and outdoor scenes. The results of these experiments demonstrate that the proposed method is able to estimate the uncertainty accurately, while showing a similar and in some scenarios improved depth estimation capability compared to the dense stereo matching approach used as a deterministic baseline. Moreover, the evaluation reveals the importance of considering both aleatoric and epistemic uncertainty in order to achieve an accurate estimation of the overall uncertainty related to a depth estimate.
5

Rajbhandari, E., N. L. Gibson, and C. R. Woodside. "Quantifying uncertainty with stochastic collocation in the kinematic magnetohydrodynamic framework." Journal of Physics: Conference Series 2207, no. 1 (March 1, 2022): 012007. http://dx.doi.org/10.1088/1742-6596/2207/1/012007.

Abstract:
We discuss an efficient numerical method for the uncertain kinematic magnetohydrodynamic system. We include aleatoric uncertainty in the parameters, and then describe a stochastic collocation method to handle this randomness. Numerical demonstrations of this method are discussed. We find that the shape of the parameter distributions affects not only the mean and variance, but also the shape of the solution distributions.
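
Stochastic collocation replaces random sampling with deterministic solver runs at quadrature nodes of the uncertain parameter. A minimal sketch for a single Gaussian parameter, with a toy stand-in for the MHD solver (everything here is illustrative):

```python
import numpy as np

def solve(alpha):
    # Stand-in for a deterministic solve with uncertain parameter alpha.
    return np.tanh(alpha)

# Gauss-Hermite quadrature for alpha ~ N(mu, sigma^2).
mu, sigma = 0.5, 0.2
nodes, weights = np.polynomial.hermite_e.hermegauss(7)  # probabilists' Hermite
alphas = mu + sigma * nodes
vals = np.array([solve(a) for a in alphas])
w = weights / weights.sum()          # normalise: raw weights sum to sqrt(2*pi)

mean = np.dot(w, vals)
var = np.dot(w, (vals - mean) ** 2)
print(mean, var)
```

With n nodes, the output statistics are exact for polynomial responses up to degree 2n-1, which is why collocation can need orders of magnitude fewer solver runs than plain Monte Carlo.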
6

Zhong, Z., and M. Mehltretter. "MIXED PROBABILITY MODELS FOR ALEATORIC UNCERTAINTY ESTIMATION IN THE CONTEXT OF DENSE STEREO MATCHING." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2021 (June 17, 2021): 17–26. http://dx.doi.org/10.5194/isprs-annals-v-2-2021-17-2021.

Abstract:
The ability to identify erroneous depth estimates is of fundamental interest. Information regarding the aleatoric uncertainty of depth estimates can be, for example, used to support the process of depth reconstruction itself. Consequently, various methods for the estimation of aleatoric uncertainty in the context of dense stereo matching have been presented in recent years, with deep learning-based approaches being particularly popular. Among these deep learning-based methods, probabilistic strategies are increasingly attracting interest, because the estimated uncertainty can be quantified in pixels or in metric units due to the consideration of real error distributions. However, existing probabilistic methods usually assume a unimodal distribution to describe the error distribution while simply neglecting cases in real-world scenarios that could violate this assumption. To overcome this limitation, we propose two novel mixed probability models consisting of Laplacian and Uniform distributions for the task of aleatoric uncertainty estimation. In this way, we explicitly address commonly challenging regions in the context of dense stereo matching and outlier measurements, respectively. To allow a fair comparison, we adapt a common neural network architecture to investigate the effects of the different uncertainty models. In an extensive evaluation using two datasets and two common dense stereo matching methods, the proposed methods demonstrate state-of-the-art accuracy.
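
The Laplacian-plus-Uniform idea can be written directly as a negative log-likelihood: a Laplacian inlier term centred on the predicted disparity plus a Uniform outlier term over the disparity range. A toy sketch assuming a 192-pixel disparity range and hypothetical tensor names, not the authors' network:

```python
import torch

def laplace_uniform_nll(pred, log_b, pi, target, support=192.0):
    # Mixture likelihood: inliers ~ Laplace(pred, b), outliers ~ Uniform(0, support).
    b = torch.exp(log_b)
    laplace = torch.exp(-torch.abs(target - pred) / b) / (2 * b)
    uniform = 1.0 / support
    mix = pi * laplace + (1 - pi) * uniform
    return -torch.log(mix + 1e-12).mean()

pred = torch.full((8,), 30.0, requires_grad=True)   # predicted disparities
log_b = torch.zeros(8, requires_grad=True)          # predicted Laplacian scale
pi = torch.sigmoid(torch.zeros(8))                  # inlier weight in (0, 1)
target = torch.tensor([30.5, 29.8, 31.0, 30.2, 90.0, 30.1, 29.9, 30.4])
laplace_uniform_nll(pred, log_b, pi, target).backward()
```

The fifth target (90.0) is an outlier; the uniform component absorbs it instead of inflating the Laplacian scale everywhere.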
7

Pham, Nam, and Sergey Fomel. "Uncertainty and interpretability analysis of encoder-decoder architecture for channel detection." GEOPHYSICS 86, no. 4 (July 1, 2021): O49–O58. http://dx.doi.org/10.1190/geo2020-0409.1.

Abstract:
We have adopted a method to understand the uncertainty and interpretability of a Bayesian convolutional neural network for detecting 3D channel geobodies in seismic volumes. We measure heteroscedastic aleatoric uncertainty and epistemic uncertainty. Epistemic uncertainty captures the uncertainty of the network parameters, whereas heteroscedastic aleatoric uncertainty accounts for noise in the seismic volumes. We train a network modified from the U-Net architecture on 3D synthetic seismic volumes, and then we apply it to field data. Tests on 3D field data sets from the Browse Basin, offshore Australia, and from Parihaka in New Zealand prove that uncertainty volumes are related to geologic uncertainty, model mispicks, and input noise. We analyze model interpretability on these data sets by creating saliency volumes with gradient-weighted class activation mapping. We find that the model takes a global-to-local approach to localize channel geobodies, and we assess the importance of different model components in the overall strategy. Using channel probability, uncertainty, and saliency volumes, interpreters can accurately identify channel geobodies in 3D seismic volumes and also understand the model predictions.
8

Chowdhary, Kamaljit, and Paul Dupuis. "Distinguishing and integrating aleatoric and epistemic variation in uncertainty quantification." ESAIM: Mathematical Modelling and Numerical Analysis 47, no. 3 (March 29, 2013): 635–62. http://dx.doi.org/10.1051/m2an/2012038.

9

Senge, Robin, Stefan Bösner, Krzysztof Dembczyński, Jörg Haasenritter, Oliver Hirsch, Norbert Donner-Banzhoff, and Eyke Hüllermeier. "Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty." Information Sciences 255 (January 2014): 16–29. http://dx.doi.org/10.1016/j.ins.2013.07.030.

10

Hüllermeier, Eyke, and Willem Waegeman. "Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods." Machine Learning 110, no. 3 (March 2021): 457–506. http://dx.doi.org/10.1007/s10994-021-05946-3.

Abstract:
The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
11

Khanzhina, N. E. "Bayesian losses for homoscedastic aleatoric uncertainty modeling in pollen image detection." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 21, no. 4 (August 1, 2021): 535–44. http://dx.doi.org/10.17586/2226-1494-2021-21-4-535-544.

12

Ghasemi-Naraghi, Zeinab, Ahmad Nickabadi, and Reza Safabakhsh. "LogSE: An Uncertainty-Based Multi-Task Loss Function for Learning Two Regression Tasks." JUCS - Journal of Universal Computer Science 28, no. 2 (February 28, 2022): 141–59. http://dx.doi.org/10.3897/jucs.70549.

Abstract:
Multi-task learning (MTL) is a popular method in machine learning which utilizes the related information of multiple tasks to learn each task more efficiently and accurately. Naively, one can benefit from MTL by using a weighted linear sum of the different tasks' loss functions. Manual specification of appropriate weights is difficult and typically does not improve performance, so it is critical to find an automatic weighting strategy for MTL. Also, there are three types of uncertainties that are captured in deep learning. Epistemic uncertainty is related to the lack of data. Heteroscedastic aleatoric uncertainty depends on the input data and differs from one input to another. In this paper, we focus on the third type, homoscedastic aleatoric uncertainty, which is constant for different inputs and is task-dependent. There are some methods for learning uncertainty-based weights as the parameters of a model. In this paper, we introduce a novel multi-task loss function to capture homoscedastic uncertainty in multiple-regression-task models without increasing the complexity of the network. As the experiments show, the proposed loss function aids in learning a multiple-regression-task network fairly, with higher accuracy in fewer training steps.
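
For context, the standard homoscedastic-uncertainty weighting that losses like LogSE build on learns one log-variance per task and uses it to balance the task losses. A sketch of that baseline scheme (Kendall-and-Gal-style), not the LogSE loss itself:

```python
import torch

def uncertainty_weighted_loss(losses, log_vars):
    # Each task loss is scaled by exp(-log_var) and regularised by log_var,
    # so tasks with high homoscedastic uncertainty are down-weighted
    # automatically instead of by hand-tuned coefficients.
    total = 0.0
    for loss, s in zip(losses, log_vars):
        total = total + torch.exp(-s) * loss + s
    return total

log_vars = torch.zeros(2, requires_grad=True)   # one learnable log-variance per task
task_losses = [torch.tensor(0.8), torch.tensor(2.3)]
uncertainty_weighted_loss(task_losses, log_vars).backward()
```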
13

Feng, Runhai, Dario Grana, and Niels Balling. "Uncertainty quantification in fault detection using convolutional neural networks." GEOPHYSICS 86, no. 3 (March 19, 2021): M41–M48. http://dx.doi.org/10.1190/geo2020-0424.1.

Abstract:
Segmentation of faults based on seismic images is an important step in reservoir characterization. With the recent developments of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence for a fault can be quantified using a sigmoid function. Our goal is to quantify the fault model uncertainty that is generally not captured by deep-learning tools. We have used the dropout approach, a regularization technique to prevent overfitting and coadaptation in hidden units, to approximate the Bayesian inference and estimate the principled uncertainty over functions. Particularly, the variance of the learned model has been decomposed into aleatoric and epistemic parts. Our method is applied to a real data set from the Netherlands F3 block with two different dropout ratios in convolutional neural networks. The aleatoric uncertainty is irreducible because it relates to the stochastic dependency within the input observations. As the number of Monte Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases because the variability of the model parameters is better simulated or explained with a larger sample size. This analysis can quantify the confidence to use fault predictions with less uncertainty. In addition, the analysis suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
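
The dropout-based decomposition described above is typically obtained by keeping dropout active at test time and running many stochastic forward passes: the variance of the sampled fault probabilities is the epistemic part, and the mean Bernoulli variance is the aleatoric part. A minimal sketch with a placeholder model, one common estimator rather than the authors' exact code:

```python
import torch

def mc_dropout_uncertainty(model, x, T=50):
    # Keep dropout active at test time and sample T stochastic forward passes.
    model.train()                     # enables dropout layers
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(T)])
    mean_p = probs.mean(0)
    epistemic = probs.var(0)                     # spread across sampled models
    aleatoric = (probs * (1 - probs)).mean(0)    # expected Bernoulli variance
    return mean_p, epistemic, aleatoric

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(),
    torch.nn.Dropout(0.5), torch.nn.Linear(32, 1))
p, ep, al = mc_dropout_uncertainty(model, torch.randn(4, 16))
```

As T grows, the epistemic estimate converges, mirroring the asymptotic behaviour the abstract reports for increasing numbers of Monte Carlo realizations.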
14

Gurevich, Pavel, and Hannes Stuke. "Pairing an arbitrary regressor with an artificial neural network estimating aleatoric uncertainty." Neurocomputing 350 (July 2019): 291–306. http://dx.doi.org/10.1016/j.neucom.2019.03.031.

15

Mehltretter, Max, and Christian Heipke. "Aleatoric uncertainty estimation for dense stereo matching via CNN-based cost volume analysis." ISPRS Journal of Photogrammetry and Remote Sensing 171 (January 2021): 63–75. http://dx.doi.org/10.1016/j.isprsjprs.2020.11.003.

16

Granados-Ortiz, F. J., and J. Ortega-Casanova. "Quantifying & analysing mixed aleatoric and structural uncertainty in complex turbulent flow simulations." International Journal of Mechanical Sciences 188 (December 2020): 105953. http://dx.doi.org/10.1016/j.ijmecsci.2020.105953.

17

Li, Hua, and Kejiang Zhang. "Development of a fuzzy-stochastic nonlinear model to incorporate aleatoric and epistemic uncertainty." Journal of Contaminant Hydrology 111, no. 1-4 (January 2010): 1–12. http://dx.doi.org/10.1016/j.jconhyd.2009.10.004.

18

Vassaux, Maxime, Shunzhou Wan, Wouter Edeling, and Peter V. Coveney. "Ensembles Are Required to Handle Aleatoric and Parametric Uncertainty in Molecular Dynamics Simulation." Journal of Chemical Theory and Computation 17, no. 8 (July 19, 2021): 5187–97. http://dx.doi.org/10.1021/acs.jctc.1c00526.

19

Huang, Yingsong, Bing Bai, Shengwei Zhao, Kun Bai, and Fei Wang. "Uncertainty-Aware Learning against Label Noise on Imbalanced Datasets." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6960–69. http://dx.doi.org/10.1609/aaai.v36i6.20654.

Abstract:
Learning against label noise is a vital topic to guarantee a reliable performance for deep neural networks. Recent research usually refers to dynamic noise modeling with model output probabilities and loss values, and then separates clean and noisy samples. These methods have gained notable success. However, unlike cherry-picked data, existing approaches often cannot perform well when facing imbalanced datasets, a common scenario in the real world. We thoroughly investigate this phenomenon and point out two major issues that hinder the performance, i.e., inter-class loss distribution discrepancy and misleading predictions due to uncertainty. The first issue is that existing methods often perform class-agnostic noise modeling. However, loss distributions show a significant discrepancy among classes under class imbalance, and class-agnostic noise modeling can easily get confused with noisy samples and samples in minority classes. The second issue is that models may output misleading predictions due to epistemic uncertainty and aleatoric uncertainty, thus existing methods that rely solely on the output probabilities may fail to distinguish confident samples. Inspired by our observations, we propose an Uncertainty-aware Label Correction framework (ULC) to handle label noise on imbalanced datasets. First, we perform epistemic uncertainty-aware class-specific noise modeling to identify trustworthy clean samples and refine/discard highly confident true/corrupted labels. Then, we introduce aleatoric uncertainty in the subsequent learning process to prevent noise accumulation in the label noise modeling process. We conduct experiments on several synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed method, especially on imbalanced datasets.
20

Kausik, Ravinath, Augustin Prado, Vasileios-Marios Gkortsas, Lalitha Venkataramanan, Harish Datir, and Yngve Bolstad Johansen. "Dual Neural Network Architecture for Determining Permeability and Associated Uncertainty." Petrophysics – The SPWLA Journal of Formation Evaluation and Reservoir Description 62, no. 1 (February 1, 2021): 122–34. http://dx.doi.org/10.30632/pjv62n1-2021a8.

Abstract:
The computation of permeability is vital for reservoir characterization because it is a key parameter in the reservoir models used for estimating and optimizing hydrocarbon production. Permeability is routinely predicted as a correlation from near-wellbore formation properties measured through wireline logs. Several such correlations, namely Schlumberger-Doll Research (SDR) permeability and Timur-Coates permeability models using nuclear magnetic resonance (NMR) measurements, K-lambda using mineralogy, and other variants, have often been used, with moderate success. In addition to permeability, the determination of the uncertainties, both epistemic (model) and aleatoric (data), is important for interpreting variations in the predictions of the reservoir models. In this paper, we demonstrate a novel dual deep neural network framework encompassing a Bayesian neural network (BNN) and an artificial neural network (ANN) for determining accurate permeability values along with associated uncertainties. Deep-learning techniques have been shown to be effective for regression problems, but quantifying the uncertainty of their predictions and separating it into the epistemic and aleatoric fractions is still considered challenging. This is especially vital for petrophysical answer products because these algorithms need the ability to flag data from new geological formations that the model was not trained on as “out of distribution” and assign them higher uncertainty. Additionally, the model outputs need sensitivity to heteroscedastic aleatoric noise in the feature space arising due to tool and geological origins. Reducing these uncertainties is key to designing intelligent logging tools and applications, such as automated log interpretation. In this paper, we train a BNN with NMR and mineralogy data to determine permeability with associated epistemic uncertainty, obtained by determining the posterior weight distributions of the network by using variational inference. This provides us the ability to differentiate in- and out-of-distribution predictions, thereby identifying the suitability of the trained models for application in new geological formations. The errors in the prediction of the BNN are fed into a second ANN trained to correlate the predicted uncertainty to the error of the first BNN. Both networks are trained simultaneously and therefore optimized together to estimate permeability and associated uncertainty. The machine-learning permeability model is trained on a “ground-truth” core database and demonstrates considerable improvement over traditional SDR and Timur-Coates permeability models on wells from the Ivar Aasen Field. We also demonstrate the value of information (VOI) of different logging measurements by replacing the logs with their median values from nearby wells and studying the increase in the mean square errors.
21

Paseka, Stanislav, and Daniel Marton. "The Impact of the Uncertain Input Data of Multi-Purpose Reservoir Volumes under Hydrological Extremes." Water 13, no. 10 (May 16, 2021): 1389. http://dx.doi.org/10.3390/w13101389.

Abstract:
The topic of uncertainties in water management tasks is a very extensive and highly discussed one. It is generally based on the theory that uncertainties comprise epistemic uncertainty and aleatoric uncertainty. This work deals with the comprehensive determination of the functional water volumes of a reservoir during extreme hydrological events under conditions of aleatoric uncertainty described as input data uncertainties. In this case, the input data uncertainties were constructed using the Monte Carlo method and applied to the data employed in the water management solution of the reservoir: (i) average monthly water inflows, (ii) hydrographs, (iii) bathygraphic curves and (iv) water losses by evaporation and dam seepage. To determine the storage volume of the reservoir, a simulation-optimization model of the reservoir was developed, which uses the balance equation of the reservoir to determine its optimal storage volume. For the second hydrological extreme, a simulation model for the transformation of flood discharges was developed, which works on the principle of the first-order reservoir differential equation. By linking the two models, it is possible to comprehensively determine the functional volumes of the reservoir in terms of input data uncertainties. The practical application of the models was applied to a case study of the Vír reservoir in the Czech Republic, which fulfils the purpose of water storage and flood protection. The obtained results were analyzed in detail to verify whether the reservoir is sufficiently resistant to current hydrological extremes and also to suggest a redistribution of functional volumes of the reservoir under conditions of measurement uncertainty.
22

Brake, M. R. "The role of epistemic uncertainty of contact models in the design and optimization of mechanical systems with aleatoric uncertainty." Nonlinear Dynamics 77, no. 3 (April 6, 2014): 899–922. http://dx.doi.org/10.1007/s11071-014-1350-0.

23

Wu, S., M. Heitzler, and L. Hurni. "A CLOSER LOOK AT SEGMENTATION UNCERTAINTY OF SCANNED HISTORICAL MAPS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2022 (June 1, 2022): 189–94. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2022-189-2022.

Abstract:
Before modern earth observation techniques came into being, historical maps were almost the exclusive source from which to retrieve geo-spatial information on Earth. In recent years, the use of deep learning for historical map processing has gained popularity to replace tedious manual labor. However, neural networks, often referred to as “black boxes”, usually generate predictions that are not well calibrated for indicating whether the predictions are trustworthy. Considering the diversity in designs and the graphic defects of scanned historical maps, uncertainty estimates can benefit us in deciding when and how to trust the extracted information. In this paper, we compare the effectiveness of different uncertainty indicators for segmenting hydrological features from scanned historical maps. Those uncertainty indicators can be categorized into two major types, namely aleatoric uncertainty (uncertainty in the observations) and epistemic uncertainty (uncertainty in the model). Specifically, we compare their effectiveness in indicating erroneous predictions, detecting noisy and out-of-distribution designs, and refining segmentation in a two-stage architecture.
24

Alharbi, Mohammed, and Hassan A. Karimi. "Context-Aware Sensor Uncertainty Estimation for Autonomous Vehicles." Vehicles 3, no. 4 (October 25, 2021): 721–35. http://dx.doi.org/10.3390/vehicles3040042.

Abstract:
Sensor uncertainty significantly affects the performance of autonomous vehicles (AVs). Sensor uncertainty is predominantly linked to sensor specifications, and because sensor behaviors change dynamically, the machine learning approach is not suitable for learning them. This paper presents a novel learning approach for predicting sensor performance in challenging environments. The design of our approach incorporates both epistemic uncertainties, which are related to the lack of knowledge, and aleatoric uncertainties, which are related to the stochastic nature of the data acquisition process. The proposed approach combines a state-based model with a predictive model, where the former estimates the uncertainty in the current environment and the latter finds the correlations between the source of the uncertainty and its environmental characteristics. The proposed approach has been evaluated on real data to predict the uncertainties associated with global navigation satellite systems (GNSSs), showing that our approach can predict sensor uncertainty with high confidence.
25

Wang, Guotai, Wenqi Li, Michael Aertsen, Jan Deprest, Sébastien Ourselin, and Tom Vercauteren. "Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks." Neurocomputing 338 (April 2019): 34–45. http://dx.doi.org/10.1016/j.neucom.2019.01.103.
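
The test-time augmentation named in this title estimates aleatoric uncertainty by transforming the input, predicting, inverting the transform, and measuring the spread across the augmented predictions. A toy sketch restricted to horizontal flips (the paper uses a richer family of spatial and intensity transforms; the model here is a placeholder):

```python
import torch

def tta_uncertainty(model, image, n_aug=20):
    # Aleatoric uncertainty via test-time augmentation: transform the input,
    # predict, undo the transform, and measure the spread of the predictions.
    preds = []
    with torch.no_grad():
        for _ in range(n_aug):
            flip = bool(torch.rand(1) < 0.5)
            aug = torch.flip(image, dims=[-1]) if flip else image
            out = torch.sigmoid(model(aug))
            preds.append(torch.flip(out, dims=[-1]) if flip else out)
    preds = torch.stack(preds)
    return preds.mean(0), preds.var(0)   # segmentation and its uncertainty map

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in segmenter
seg, unc = tta_uncertainty(model, torch.randn(1, 1, 64, 64))
```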

26

Busk, Jonas, Peter Bjørn Jørgensen, Arghya Bhowmik, Mikkel N. Schmidt, Ole Winther, and Tejs Vegge. "Calibrated uncertainty for molecular property prediction using ensembles of message passing neural networks." Machine Learning: Science and Technology 3, no. 1 (December 22, 2021): 015012. http://dx.doi.org/10.1088/2632-2153/ac3eb3.

Abstract:
Data-driven methods based on machine learning have the potential to accelerate computational analysis of atomic structures. In this context, reliable uncertainty estimates are important for assessing confidence in predictions and enabling decision making. However, machine learning models can produce badly calibrated uncertainty estimates and it is therefore crucial to detect and handle uncertainty carefully. In this work we extend a message passing neural network designed specifically for predicting properties of molecules and materials with a calibrated probabilistic predictive distribution. The method presented in this paper differs from previous work by considering both aleatoric and epistemic uncertainty in a unified framework, and by recalibrating the predictive distribution on unseen data. Through computer experiments, we show that our approach results in accurate models for predicting molecular formation energies with well calibrated uncertainty in and out of the training data distribution on two public molecular benchmark datasets, QM9 and PC9. The proposed method provides a general framework for training and evaluating neural network ensemble models that are able to produce accurate predictions of properties of molecules with well calibrated uncertainty estimates.
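
A unified aleatoric/epistemic treatment in ensembles commonly follows the law of total variance: each member predicts a mean and a variance, the average predicted variance is the aleatoric part, and the disagreement between member means is the epistemic part. A tiny numerical sketch (made-up numbers, not the paper's data):

```python
import numpy as np

# member_means[m, i]: mean predicted by ensemble member m for molecule i.
member_means = np.array([[1.02, 0.98, 1.05],
                         [1.00, 1.01, 0.97],
                         [0.99, 1.03, 1.01]])
member_vars = np.full_like(member_means, 0.01)  # each member's predicted variance

aleatoric = member_vars.mean(axis=0)        # average predicted noise
epistemic = member_means.var(axis=0)        # disagreement between members
total = aleatoric + epistemic               # law of total variance
print(total)
```

Recalibration, as the abstract describes, would then rescale this total variance against errors observed on held-out data.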
27

Urbina, Angel, Sankaran Mahadevan, and Thomas L. Paez. "Quantification of margins and uncertainties of complex systems in the presence of aleatoric and epistemic uncertainty." Reliability Engineering & System Safety 96, no. 9 (September 2011): 1114–25. http://dx.doi.org/10.1016/j.ress.2010.08.010.

28

Sensoy, Murat, Lance Kaplan, Federico Cerutti, and Maryam Saleki. "Uncertainty-Aware Deep Classifiers Using Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5620–27. http://dx.doi.org/10.1609/aaai.v34i04.6015.

Abstract:
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for the data samples close to class boundaries or from the outside of the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selection or creation of such an auxiliary data set is non-trivial, especially for high dimensional data such as images. In this work we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in- and out-of-distribution samples, and adversarial examples on well-known data sets against state-of-the-art approaches including recent Bayesian approaches for neural networks and anomaly detection methods.
29

Francis-Xavier, Fenila, Fabian Kubannek, and René Schenkendorf. "Hybrid Process Models in Electrochemical Syntheses under Deep Uncertainty." Processes 9, no. 4 (April 16, 2021): 704. http://dx.doi.org/10.3390/pr9040704.

Abstract:
Chemical process engineering and machine learning are merging rapidly, and hybrid process models have shown promising results in process analysis and process design. However, uncertainties in first-principles process models have an adverse effect on extrapolations and inferences based on hybrid process models. Parameter sensitivities are an essential tool to understand better the underlying uncertainty propagation and hybrid system identification challenges. Still, standard parameter sensitivity concepts may fail to address comprehensive parameter uncertainty problems, i.e., deep uncertainty with aleatoric and epistemic contributions. This work shows a highly effective and reproducible sampling strategy to calculate simulation uncertainties and global parameter sensitivities for hybrid process models under deep uncertainty. We demonstrate the workflow with two electrochemical synthesis simulation studies, including the synthesis of furfuryl alcohol and 4-aminophenol. Compared with Monte Carlo reference simulations, the CPU-time was significantly reduced. The general findings of the hybrid model sensitivity studies under deep uncertainty are twofold. First, epistemic uncertainty has a significant effect on uncertainty analysis. Second, the predicted parameter sensitivities of the hybrid process models add value to the interpretation and analysis of the hybrid models themselves but are not suitable for predicting the real process/full first-principles process model’s sensitivities.
30

Pires, Catarina, Marília Barandas, Letícia Fernandes, Duarte Folgado, and Hugo Gamboa. "Towards Knowledge Uncertainty Estimation for Open Set Recognition." Machine Learning and Knowledge Extraction 2, no. 4 (October 30, 2020): 505–32. http://dx.doi.org/10.3390/make2040028.

Abstract:
Uncertainty is ubiquitous and happens in every single prediction of Machine Learning models. The ability to estimate and quantify the uncertainty of individual predictions is arguably relevant, all the more in safety-critical applications. Real-world recognition poses multiple challenges since a model’s knowledge about physical phenomena is not complete, and observations are incomplete by definition. However, Machine Learning algorithms often assume that train and test data distributions are the same and that all testing classes are present during training. A more realistic scenario is Open Set Recognition, where unknown classes can be submitted to an algorithm during testing. In this paper, we propose a Knowledge Uncertainty Estimation (KUE) method to quantify knowledge uncertainty and reject out-of-distribution inputs. Additionally, we quantify and distinguish aleatoric and epistemic uncertainty with the classical information-theoretical measures of entropy by means of ensemble techniques. We performed experiments on four datasets with different data modalities and compared our results with distance-based classifiers, SVM-based approaches and ensemble techniques using entropy measures. Overall, the effectiveness of KUE in distinguishing in- and out-of-distribution inputs obtained better results in most cases and was at least comparable in others. Furthermore, classification with a rejection option based on a proposed strategy for combining different measures of uncertainty is an application of uncertainty with proven results.
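
The classical entropy-based decomposition mentioned above splits total predictive entropy into the expected entropy of the ensemble members (aleatoric) and the mutual information between prediction and model (epistemic). A small sketch with made-up ensemble probabilities:

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

# probs[m, c]: class probabilities from ensemble member m.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.6, 0.3, 0.1]])

total = entropy(probs.mean(axis=0))       # entropy of the averaged prediction
aleatoric = entropy(probs).mean()         # expected entropy of the members
epistemic = total - aleatoric             # mutual information
print(total, aleatoric, epistemic)
```

High epistemic values flag inputs the members disagree on, which is the signal used to reject out-of-distribution inputs.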
31

Saberi, Nastaran, Katharine Andrea Scott, and Claude Duguay. "Incorporating Aleatoric Uncertainties in Lake Ice Mapping Using RADARSAT–2 SAR Images and CNNs." Remote Sensing 14, no. 3 (January 29, 2022): 644. http://dx.doi.org/10.3390/rs14030644.

Abstract:
With the increasing availability of SAR imagery in recent years, more research is being conducted using deep learning (DL) for the classification of ice and open water; however, ice and open water classification using conventional DL methods such as convolutional neural networks (CNNs) is not yet accurate enough to replace manual analysis for operational ice chart mapping. Understanding the uncertainties associated with CNN model predictions can help to quantify errors and, therefore, guide efforts on potential enhancements using more–advanced DL models and/or synergistic approaches. This paper evaluates an approach for estimating the aleatoric uncertainty [a measure used to identify the noise inherent in data] of CNN probabilities to map ice and open water with a custom loss function applied to RADARSAT–2 HH and HV observations. The images were acquired during the 2014 ice season of Lake Erie and Lake Ontario, two of the five Laurentian Great Lakes of North America. Operational image analysis charts from the Canadian Ice Service (CIS), which are based on visual interpretation of SAR imagery, are used to provide training and testing labels for the CNN model and to evaluate the accuracy of the model predictions. Bathymetry, as a variable that has an impact on the ice regime of lakes, was also incorporated during model training in supplementary experiments. Adding aleatoric loss and bathymetry information improved the accuracy of mapping water and ice. Results are evaluated quantitatively (accuracy metrics) and qualitatively (visual comparisons). Ice and open water scores were improved in some sections of the lakes by using aleatoric loss and including bathymetry. In Lake Erie, the ice score was improved by ∼2 on average in the shallow near–shore zone as a result of better mapping of dark ice (low backscatter) in the western basin. As for Lake Ontario, the open water score was improved by ∼6 on average in the deepest profundal off–shore zone.
32

Doicu, Adrian, Alexandru Doicu, Dmitry S. Efremenko, Diego Loyola, and Thomas Trautmann. "An Overview of Neural Network Methods for Predicting Uncertainty in Atmospheric Remote Sensing." Remote Sensing 13, no. 24 (December 13, 2021): 5061. http://dx.doi.org/10.3390/rs13245061.

Abstract:
In this paper, we present neural network methods for predicting uncertainty in atmospheric remote sensing. These include methods for solving the direct and the inverse problem in a Bayesian framework. In the first case, a method based on a neural network for simulating the radiative transfer model and a Bayesian approach for solving the inverse problem is proposed. In the second case, (i) a neural network, in which the output is the convolution of the output for a noise-free input with the input noise distribution; and (ii) a Bayesian deep learning framework that predicts input aleatoric and model uncertainties, are designed. In addition, a neural network that uses assumed density filtering and interval arithmetic to compute uncertainty is employed for testing purposes. The accuracy and the precision of the methods are analyzed by considering the retrieval of cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR).
33

Siddique, Talha, Md Mahmud, Amy Keesee, Chigomezyo Ngwira, and Hyunju Connor. "A Survey of Uncertainty Quantification in Machine Learning for Space Weather Prediction." Geosciences 12, no. 1 (January 7, 2022): 27. http://dx.doi.org/10.3390/geosciences12010027.

Abstract:
With the availability of data and computational technologies in the modern world, machine learning (ML) has emerged as a preferred methodology for data analysis and prediction. While ML holds great promise, the results from such models are not fully reliable due to the challenges introduced by uncertainty. An ML model generates an optimal solution based on its training data. However, if the uncertainty in the data and the model parameters are not considered, such optimal solutions have a high risk of failure in real-world deployment. This paper surveys the different approaches used in ML to quantify uncertainty. The paper also exhibits the implications of quantifying uncertainty when using ML by performing two case studies with space physics in focus. The first case study consists of the classification of auroral images into predefined labels. In the second case study, the horizontal component of the perturbed magnetic field measured at the Earth’s surface was predicted for the study of Geomagnetically Induced Currents (GICs) by training the model using time series data. In both cases, a Bayesian Neural Network (BNN) was trained to generate predictions, along with epistemic and aleatoric uncertainties. Finally, the pros and cons of both Gaussian Process Regression (GPR) models and Bayesian Deep Learning (DL) are weighed. The paper also provides recommendations for the models that need exploration, focusing on space weather prediction.
34

Eachempati, Prashanti, Roland Brian Büchter, Kiran Kumar KS, Sally Hanks, John Martin, and Mona Nasser. "Developing an integrated multilevel model of uncertainty in health care: a qualitative systematic review and thematic synthesis." BMJ Global Health 7, no. 5 (May 2022): e008113. http://dx.doi.org/10.1136/bmjgh-2021-008113.

Abstract:
Introduction: Uncertainty is an inevitable part of healthcare and a source of confusion and challenge to decision-making. Several taxonomies of uncertainty have been developed, but they mainly focus on decisions in clinical settings. Our goal was to develop a holistic model of uncertainty that can be applied to clinical as well as public and global health scenarios. Methods: We searched Medline, Embase, CINAHL, Scopus and Google Scholar in March 2021 for literature reviews, qualitative studies and case studies related to classifications or models of uncertainty in healthcare. Empirical articles were assessed for study limitations using the Critical Appraisal Skills Programme (CASP) checklist. We synthesised the literature using a thematic analysis and developed a dynamic multilevel model of uncertainty. We sought patient input to assess the relatability of the model and applied it to two case examples. Results: We screened 4125 studies and included 15 empirical studies, 13 literature reviews and 5 case studies. We identified 77 codes and organised these into 26 descriptive and 11 analytical themes of uncertainty. The themes identified are global, public health, healthcare system, clinical, ethical, relational, personal, knowledge exchange, epistemic, aleatoric and parameter uncertainty. The themes were included in a model, which captures the macro, meso and micro levels and the inter-relatedness of uncertainty. We successfully piloted the model on one public health example and an environmental topic. The main limitations are that the research input into our model predominantly came from North America and Europe, and that we have not yet tested the model in a real-life setting. Conclusion: We developed a model that can comprehensively capture uncertainty in public and global health scenarios. It builds on models that focus solely on clinical settings by including social and political contexts and emphasising the dynamic interplay between different areas of uncertainty.
35

Dohopolski, Michael, Liyuan Chen, David Sher, and Jing Wang. "Predicting lymph node metastasis in patients with oropharyngeal cancer by using a convolutional neural network with associated epistemic and aleatoric uncertainty." Physics in Medicine & Biology 65, no. 22 (November 12, 2020): 225002. http://dx.doi.org/10.1088/1361-6560/abb71c.

36

Lukasczyk, Jonas, Garrett Aldrich, Michael Steptoe, Guillaume Favelier, Charles Gueunet, Julien Tierny, Ross Maciejewski, Bernd Hamann, and Heike Leitte. "Viscous Fingering: A Topological Visual Analytic Approach." Applied Mechanics and Materials 869 (August 2017): 9–19. http://dx.doi.org/10.4028/www.scientific.net/amm.869.9.

Abstract:
We present a methodology to analyze and visualize an ensemble of finite pointset method (FPM) simulations that model the viscous fingering process of salt solutions inside water. In the course of the simulations, the solutions form structures with increased salt concentration, called viscous fingers. These structures are of primary interest to domain scientists, since it is not deterministic when and where viscous fingers appear and how they evolve. To explore the aleatoric uncertainty embedded in the simulations, we analyze an ensemble of simulation runs which differ due to stochastic effects. To detect and track the viscous fingers, we derive a voxel volume for each simulation where fingers are identified as subvolumes that satisfy geometrical and topological constraints. Properties and the evolution of fingers are illustrated through tracking graphs that visualize when fingers form, dissolve, merge, and split. We provide multiple linked views to compare, browse, and analyze the ensemble in real time.
37

Gilda, Sankalp, Stark C. Draper, Sébastien Fabbro, William Mahoney, Simon Prunet, Kanoa Withington, Matthew Wilson, Yuan-Sen Ting, and Andrew Sheinis. "Uncertainty-aware learning for improvements in image quality of the Canada–France–Hawaii Telescope." Monthly Notices of the Royal Astronomical Society 510, no. 1 (November 11, 2021): 870–902. http://dx.doi.org/10.1093/mnras/stab3243.

Abstract:
We leverage state-of-the-art machine learning methods and a decade’s worth of archival data from Canada–France–Hawaii Telescope (CFHT) to predict observatory image quality (IQ) from environmental conditions and observatory operating parameters. Specifically, we develop accurate and interpretable models of the complex dependence between data features and observed IQ for CFHT’s wide-field camera, MegaCam. Our contributions are several-fold. First, we collect, collate, and reprocess several disparate data sets gathered by CFHT scientists. Second, we predict probability distribution functions of IQ and achieve a mean absolute error of ∼0.07 arcsec for the predicted medians. Third, we explore the data-driven actuation of the 12 dome ‘vents’ installed in 2013–14 to accelerate the flushing of hot air from the dome. We leverage epistemic and aleatoric uncertainties in conjunction with probabilistic generative modelling to identify candidate vent adjustments that are in-distribution (ID); for the optimal configuration for each ID sample, we predict the reduction in required observing time to achieve a fixed signal-to-noise ratio. On average, the reduction is approximately 12 per cent. Finally, we rank input features by their Shapley values to identify the most predictive variables for each observation. Our long-term goal is to construct reliable and real-time models that can forecast optimal observatory operating parameters to optimize IQ. We can then feed such forecasts into scheduling protocols and predictive maintenance routines. We anticipate that such approaches will become standard in automating observatory operations and maintenance by the time CFHT’s successor, the Maunakea Spectroscopic Explorer, is installed in the next decade.
38

Kong, Zhan, Yaqi Cui, Wei Xiong, Fucheng Yang, Zhenyu Xiong, and Pingliang Xu. "Ship Target Identification via Bayesian-Transformer Neural Network." Journal of Marine Science and Engineering 10, no. 5 (April 24, 2022): 577. http://dx.doi.org/10.3390/jmse10050577.

Abstract:
Ship target identification is of great significance in both military and civilian fields. Many methods have been proposed to identify targets using track information. However, most existing studies can only identify two or three types of targets, and the accuracy of identification needs to be further improved. Meanwhile, they do not provide a reliable probability of the identification result under a high-noise environment. To address these issues, a Bayesian-Transformer Neural Network (BTNN) is proposed to complete the ship target identification task using track information. The aim of the research is to improve the ability of ship target identification in order to enhance maritime situation awareness and strengthen the protection of maritime traffic safety. Firstly, a Bayesian-Transformer Encoder (BTE) module that contains four different Bayesian-Transformer Encoders is used to extract discriminative features of tracks. Then, a Bayesian fully connected layer and a SoftMax layer complete the classification. Benefiting from the superiority of the Bayesian neural network, BTNN can provide a reliable probability of the result, capturing both aleatoric uncertainty and epistemic uncertainty. The experiments show that the proposed method can successfully identify nine types of ship targets. Compared with traditional methods, the identification accuracy of BTNN increases by 3.8% from 90.16%. In addition, compared with a non-Bayesian Transformer Neural Network, BTNN can provide a more reliable probability of the identification result under a high-noise environment.
39

Kirkwood, Charlie, Theo Economou, Nicolas Pugeault, and Henry Odbert. "Bayesian Deep Learning for Spatial Interpolation in the Presence of Auxiliary Information." Mathematical Geosciences 54, no. 3 (January 17, 2022): 507–31. http://dx.doi.org/10.1007/s11004-021-09988-0.

Abstract:
Earth scientists increasingly deal with ‘big data’. For spatial interpolation tasks, variants of kriging have long been regarded as the established geostatistical methods. However, kriging and its variants (such as regression kriging, in which auxiliary variables or derivatives of these are included as covariates) are relatively restrictive models and lack capabilities provided by deep neural networks. Principal among these is feature learning: the ability to learn filters to recognise task-relevant patterns in gridded data such as images. Here, we demonstrate the power of feature learning in a geostatistical context by showing how deep neural networks can automatically learn the complex high-order patterns by which point-sampled target variables relate to gridded auxiliary variables (such as those provided by remote sensing) and in doing so produce detailed maps. In order to cater for the needs of decision makers who require well-calibrated probabilities, we also demonstrate how both aleatoric and epistemic uncertainty can be quantified in our deep learning approach via a Bayesian approximation known as Monte Carlo dropout. In our example, we produce a national-scale probabilistic geochemical map from point-sampled observations with auxiliary data provided by a terrain elevation grid. By combining location information with automatically learned terrain derivatives, our deep learning approach achieves an excellent coefficient of determination (R² = 0.74) and near-perfect probabilistic calibration on held-out test data. Our results indicate the suitability of Bayesian deep learning and its feature-learning capabilities for large-scale geostatistical applications where uncertainty matters.
40

Weijs, S. V., N. van de Giesen, and M. B. Parlange. "Data compression to define information content of hydrological time series." Hydrology and Earth System Sciences 17, no. 8 (August 6, 2013): 3171–87. http://dx.doi.org/10.5194/hess-17-3171-2013.

Abstract:
When inferring models from hydrological data or calibrating hydrological models, we are interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory, (A)IT, to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction and learning (understanding is compression). The analysis is performed on time series of a set of catchments. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the following question: "How much information is contained in this data set?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) information about which unknown quantities? and (2) what is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in the current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively without specifying prior beliefs.
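
The "understanding is compression" analogy is easy to demonstrate: a general-purpose compressor gives a crude upper bound on the information content of a discretised series, and structured series compress far better than noise. A toy illustration with synthetic data, not the paper's catchment records:

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
noise = rng.integers(0, 256, 4096, dtype=np.uint8)                # incompressible
smooth = (128 + 60 * np.sin(np.arange(4096) / 50)).astype(np.uint8)  # structured

for name, series in [("noise", noise), ("smooth", smooth)]:
    ratio = len(zlib.compress(series.tobytes(), 9)) / series.nbytes
    print(f"{name}: compressed to {ratio:.2%} of original size")
```

The smooth series compresses to a small fraction of its size while the noise barely shrinks, mirroring the paper's point that compressibility tracks learnable structure.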
APA, Harvard, Vancouver, ISO, and other styles
42

Styron, Richard. "The impact of earthquake cycle variability on neotectonic and paleoseismic slip rate estimates." Solid Earth 10, no. 1 (January 8, 2019): 15–25. http://dx.doi.org/10.5194/se-10-15-2019.

Full text
Abstract:
Abstract. Because of the natural (aleatoric) variability in earthquake recurrence intervals and coseismic displacements on a fault, cumulative slip on a fault does not increase linearly or perfectly step-wise with time; instead, some amount of variability in shorter-term slip rates results. Though this variability could greatly affect the accuracy of neotectonic (i.e., late Quaternary) and paleoseismic slip rate estimates, these effects have not been quantified. In this study, idealized faults with four different, representative, earthquake recurrence distributions are created with equal mean recurrence intervals (1000 years) and coseismic slip distributions, and the variability in slip rate estimates over 500- to 100 000-year measurement windows is calculated for all faults through Monte Carlo simulations. Slip rates are calculated as net offset divided by elapsed time, as in a typical neotectonic study. The recurrence distributions used are quasi-periodic, unclustered and clustered lognormal distributions, and an unclustered exponential distribution. The results demonstrate that the most important parameter is the coefficient of variation (CV = standard deviation ∕ mean) of the recurrence distributions rather than the shape of the distribution itself. Slip rate variability over short timescales (< 5000 years or 5 mean earthquake cycles) is quite high, varying by a factor of 3 or more from the mean, but decreases with time and is close to stable after ∼40 000 years (40 mean earthquake cycles). This variability is higher for recurrence distributions with a higher CV. The natural variability in the slip rate estimates compared to the true value is then used to estimate the epistemic uncertainty in a single slip rate measurement (as one would make in a geological study) in the absence of any measurement uncertainty. This epistemic uncertainty is very high (a factor of 2 or more) for measurement windows of a few mean earthquake cycles (as in a paleoseismic slip rate estimate), but decreases rapidly to a factor of 1–2 with > 5 mean earthquake cycles (as in a neotectonic slip rate study). These uncertainties are independent of, and should be propagated with, uncertainties in fault displacement and geochronologic measurements used to estimate slip rates. They may then aid in the comparison of slip rates from different methods or the evaluation of potential slip rate changes over time.
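The Monte Carlo experiment described in this abstract is easy to re-create in outline. The sketch below draws lognormal recurrence intervals with a chosen coefficient of variation, accumulates coseismic slip over a finite measurement window, and reports the spread of the apparent slip rates; all parameter values are invented rather than taken from the study.

```python
# Illustrative Monte Carlo re-creation of the experiment described above:
# lognormal recurrence intervals with a chosen coefficient of variation,
# slip accumulated over a finite window, and the spread of the apparent
# slip rate. Parameter values are invented, not taken from the paper.
import numpy as np

def apparent_slip_rates(mean_rec=1000.0, cv=0.5, slip_per_event=2.0,
                        window_yr=5000.0, n_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    # Choose lognormal parameters so the mean and CV match the inputs.
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean_rec) - 0.5 * sigma2
    rates = np.empty(n_trials)
    for i in range(n_trials):
        t, n_events = 0.0, 0
        while True:
            t += rng.lognormal(mu, np.sqrt(sigma2))
            if t >= window_yr:
                break
            n_events += 1
        rates[i] = n_events * slip_per_event / window_yr  # e.g. mm/yr
    return rates

true_rate = 2.0 / 1000.0  # slip_per_event / mean_rec
ratios = apparent_slip_rates() / true_rate
print(np.percentile(ratios, [5, 50, 95]))  # spread around the true rate
```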
APA, Harvard, Vancouver, ISO, and other styles
43

Weijs, S. V., N. van de Giesen, and M. B. Parlange. "Data compression to define information content of hydrological time series." Hydrology and Earth System Sciences Discussions 10, no. 2 (February 14, 2013): 2029–65. http://dx.doi.org/10.5194/hessd-10-2029-2013.

Full text
Abstract:
Abstract. When inferring models from hydrological data or calibrating hydrological models, we might be interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory (AIT) to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction, and learning (understanding is compression). The analysis is performed on time series of a set of catchments, searching for the mechanisms behind compressibility. We discuss the deeper foundation from algorithmic information theory, some practical results, and the inherent difficulties in answering the question: "How much information is contained in this data?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) Information about which unknown quantities? (2) What is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively without specifying prior beliefs. These beliefs are related to the maximum complexity one is willing to accept as a law and what is considered as random.
APA, Harvard, Vancouver, ISO, and other styles
44

Pedroni, Nicola, and Enrico Zio. "Empirical Comparison of Methods for the Hierarchical Propagation of Hybrid Uncertainty in Risk Assessment, in Presence of Dependences." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 20, no. 04 (August 2012): 509–57. http://dx.doi.org/10.1142/s0218488512500250.

Full text
Abstract:
Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, …) that are epistemically-uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)dependence relationships between epistemically-uncertain parameters. When a probabilistic representation of epistemic uncertainty is considered, uncertainty propagation is carried out by a two-dimensional (or double) Monte Carlo (MC) simulation approach; instead, when possibility distributions are used, two approaches are undertaken: the hybrid MC and Fuzzy Interval Analysis (FIA) method and the MC-based Dempster-Shafer (DS) approach employing Independent Random Sets (IRSs). The objectives are: i) studying the effects of (in)dependence between the epistemically-uncertain parameters of the aleatory probability distributions (when a probabilistic/possibilistic representation of epistemic uncertainty is adopted) and ii) studying the effect of the probabilistic/possibilistic representation of epistemic uncertainty (when the state of dependence between the epistemic parameters is defined). The Dependency Bound Convolution (DBC) approach is then undertaken within a hierarchical setting of hybrid (probabilistic and possibilistic) uncertainty propagation, in order to account for all kinds of (possibly unknown) dependences between the random variables. The analyses are carried out with reference to two toy examples, built in such a way as to allow a fair quantitative comparison between the methods and an evaluation of their rationale and appropriateness in relation to risk analysis.
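The two-dimensional (double) Monte Carlo scheme referred to here nests an aleatory sampling loop inside an epistemic one. A minimal sketch follows, with an invented exponential time-to-failure model and an illustrative interval for the epistemically uncertain failure rate; it is not the paper's case study.

```python
# Minimal two-dimensional (double) Monte Carlo sketch: the outer loop
# samples the epistemically uncertain failure rate, the inner loop
# propagates aleatory randomness conditional on it. The exponential
# lifetime model and the rate interval are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
N_EPISTEMIC, N_ALEATORY, MISSION_TIME = 200, 5000, 1000.0

failure_probs = []
for _ in range(N_EPISTEMIC):
    lam = rng.uniform(1e-4, 5e-4)                 # uncertain failure rate
    ttf = rng.exponential(1.0 / lam, N_ALEATORY)  # aleatory times to failure
    failure_probs.append(np.mean(ttf < MISSION_TIME))

# Each outer draw yields one aleatory failure probability; the spread of
# these values across outer draws expresses the epistemic uncertainty.
print(np.percentile(failure_probs, [5, 50, 95]))
```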
APA, Harvard, Vancouver, ISO, and other styles
45

Zhou, Shuang, Jianguo Zhang, Lingfei You, and Qingyuan Zhang. "Uncertainty propagation in structural reliability with implicit limit state functions under aleatory and epistemic uncertainties." Eksploatacja i Niezawodnosc - Maintenance and Reliability 23, no. 2 (February 4, 2021): 231–41. http://dx.doi.org/10.17531/ein.2021.2.3.

Full text
Abstract:
Uncertainty propagation plays a pivotal role in structural reliability assessment. This paper introduces a novel uncertainty propagation method for structural reliability under different knowledge stages based on probability theory, uncertainty theory and chance theory. Firstly, a surrogate model combining the uniform design and least-squares method is presented to simulate the implicit limit state function with random and uncertain variables. Then, a novel quantification method based on chance theory is derived herein to calculate the structural reliability under mixed aleatory and epistemic uncertainties. The concepts of chance reliability and chance reliability index (CRI) are defined to express the degree of reliability of the structure. Besides, the selection principles of uncertainty propagation types and the corresponding reliability estimation methods are given according to the different knowledge stages. The proposed methods are finally applied to a practical structural reliability problem, which illustrates the effectiveness and advantages of the techniques presented in this work.
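Only the surrogate-plus-sampling part of this workflow lends itself to a generic sketch; the chance-theory quantification is specific to the paper. Below, a least-squares quadratic response surface stands in for an implicit limit state function g, and the failure probability P(g < 0) is then estimated cheaply on the surrogate. The function g_true and all design choices are invented, and the paper uses a uniform design rather than random design points.

```python
# Sketch of the surrogate idea only (chance-theory bookkeeping omitted):
# fit a least-squares quadratic response surface to a handful of
# evaluations of an otherwise implicit limit state function g, then
# estimate P(g < 0) by cheap Monte Carlo on the surrogate.
import numpy as np

def g_true(x1, x2):  # stand-in for an expensive, implicit limit state
    return 3.0 - x1**2 / 10.0 - x2

def basis(X):  # quadratic polynomial basis in two variables
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(2)
X = rng.uniform(-3.0, 3.0, size=(50, 2))      # design points
coef, *_ = np.linalg.lstsq(basis(X), g_true(X[:, 0], X[:, 1]), rcond=None)

Xs = rng.normal(0.0, 1.0, size=(200_000, 2))  # aleatory inputs
print("estimated P(g < 0):", np.mean(basis(Xs) @ coef < 0.0))
```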
APA, Harvard, Vancouver, ISO, and other styles
46

Xiao, N.-C., H.-Z. Huang, Z. Wang, Y. Li, and Y. Liu. "Reliability analysis of series systems with multiple failure modes under epistemic and aleatory uncertainties." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 226, no. 3 (October 10, 2011): 295–304. http://dx.doi.org/10.1177/1748006x11421266.

Full text
Abstract:
Uncertainty exists widely in engineering practice. An engineering system may have multiple failure criteria. In the current paper, system reliability analysis with multiple failure modes under both epistemic and aleatory uncertainties is presented. Epistemic uncertainty is modelled using p-boxes, while aleatory uncertainty is modelled using probability distributions. A first-order reliability method is developed, and non-linear performance functions are linearized by the sampling method instead of the commonly used Taylor’s expansion at the most probable point. Furthermore, multiple failure modes in a system are often correlated because they depend on the same uncertain variables. In order to consider these correlated failure modes, the methods proposed by Feng and Frank are extended in this paper in order to calculate the joint probability of failure for two arbitrary failure modes under both aleatory and epistemic uncertainties. The Pearson correlation coefficient of two arbitrary failure modes is determined by the sampling method. Since two types of uncertainty exist in the system, the probability of system failure is an interval rather than a point value. The probability of failure of the system can be obtained by combining the extended ‘narrow’ bound method with interval arithmetic. A numerical example is presented to demonstrate the applicability of the proposed method.
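The reason a p-box yields an interval rather than a point failure probability can be seen in a few lines: when an epistemic parameter of the aleatory distribution is only known to lie in an interval, evaluating the same aleatory model at the interval endpoints brackets P(failure), provided the response is monotone in that parameter. The sketch below is a deliberately simplified stand-in with invented numbers; it omits the paper's FORM linearisation and correlated failure modes.

```python
# Why a p-box gives an interval-valued failure probability: the load mean
# is only known to lie in [mu_lo, mu_hi], and P(failure) is monotone in
# it, so the endpoints bracket the true value. Numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
CAPACITY = 10.0
mu_lo, mu_hi, sigma = 7.0, 8.0, 1.5  # epistemic interval on the load mean

def p_fail(mu, n=500_000):
    load = rng.normal(mu, sigma, n)  # aleatory variability of the load
    return np.mean(load > CAPACITY)

print("P(failure) interval:", [p_fail(mu_lo), p_fail(mu_hi)])
```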
APA, Harvard, Vancouver, ISO, and other styles
47

Packard, Mark D., and Brent B. Clark. "Mitigating versus Managing Epistemic and Aleatory Uncertainty." Academy of Management Review 45, no. 4 (October 2020): 872–76. http://dx.doi.org/10.5465/amr.2020.0266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Ugata, Takeshi. "Load Factor in Case of Separating Aleatory Uncertainty and Epistemic Uncertainty." Journal of Structural and Construction Engineering (Transactions of AIJ) 73, no. 630 (2008): 1245–50. http://dx.doi.org/10.3130/aijs.73.1245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

You, Lingwan, Yeou-Koung Tung, and Chulsang Yoo. "Probabilistic assessment of hydrologic retention performance of green roof considering aleatory and epistemic uncertainties." Hydrology Research 51, no. 6 (October 14, 2020): 1377–96. http://dx.doi.org/10.2166/nh.2020.086.

Full text
Abstract:
Abstract. Green roofs (GRs) are well known for source control of runoff quantity in sustainable urban stormwater management. By considering the inherent randomness of rainfall characteristics, this study derives the probability distribution of the rainfall retention ratio and its statistical moments. The distribution function of the retention ratio can be used to establish a unique relationship between the target retention ratio, the achievable reliability AR, and the substrate depth h for the aleatory-based probabilistic (AP) GR design. However, uncertainties of an epistemic nature also exist in the AP GR model, which makes AR uncertain. In the paper, the treatment of epistemic uncertainty in the AP GR model is presented and implemented for the uncertainty quantification of AR. It is shown that a design that does not consider epistemic uncertainties in the AP GR model yields only about 50% confidence of meeting the target retention ratio. A procedure is presented to determine the design substrate depth having the stipulated confidence to satisfy the target retention ratio and the target achievable reliability.
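The aleatory core of this design problem can be mimicked with a few lines of Monte Carlo: sample event rainfall depths, cap retention at the substrate's storage capacity, and read off the achievable reliability AR as the fraction of events whose retention ratio meets the target. The exponential rainfall model, the depth-to-capacity shortcut, and every number below are illustrative assumptions, not the paper's derived-distribution method.

```python
# Toy Monte Carlo version of the aleatory part of the green roof design
# problem: AR is estimated as the fraction of rainfall events whose
# retention ratio meets the target. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
rain = rng.exponential(scale=12.0, size=100_000)  # event rainfall depth, mm

def achievable_reliability(capacity_mm, target_ratio):
    retention_ratio = np.minimum(rain, capacity_mm) / rain
    return np.mean(retention_ratio >= target_ratio)

for capacity in (10.0, 20.0, 40.0):  # stand-in for substrate depth h
    print(capacity, achievable_reliability(capacity, target_ratio=0.8))
```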
APA, Harvard, Vancouver, ISO, and other styles
50

Engelhardt, Ellen G., Arwen H. Pieterse, Paul K. J. Han, Nanny van Duijn-Bakker, Frans Cluitmans, Ed Maartense, Monique M. E. M. Bos, et al. "Disclosing the Uncertainty Associated with Prognostic Estimates in Breast Cancer." Medical Decision Making 37, no. 3 (September 29, 2016): 179–92. http://dx.doi.org/10.1177/0272989x16670639.

Full text
Abstract:
Background. Treatment decision making is often guided by evidence-based probabilities, which may be presented to patients during consultations. These probabilities are intrinsically imperfect and embody 2 types of uncertainties: aleatory uncertainty arising from the unpredictability of future events and epistemic uncertainty arising from limitations in the reliability and accuracy of probability estimates. Risk communication experts have recommended disclosing uncertainty. We examined whether uncertainty was discussed during cancer consultations and whether and how patients perceived uncertainty. Methods. Consecutive patient consultations with medical oncologists discussing adjuvant treatment in early-stage breast cancer were audiotaped, transcribed, and coded. Patients were interviewed after the consultation to gain insight into their perceptions of uncertainty. Results. In total, 198 patients were included by 27 oncologists. Uncertainty was disclosed in 49% (97/197) of consultations. In those 97 consultations, 23 allusions to epistemic uncertainty were made and 84 allusions to aleatory uncertainty. Overall, the allusions to the precision of the probabilities were somewhat ambiguous. Interviewed patients mainly referred to aleatory uncertainty if not prompted about epistemic uncertainty. Even when specifically asked about epistemic uncertainty, 1 in 4 utterances referred to aleatory uncertainty. When talking about epistemic uncertainty, many patients contradicted themselves. In addition, 1 in 10 patients seemed not to realize that the probabilities communicated during the consultation are imperfect. Conclusions. Uncertainty is conveyed in only half of patient consultations. When uncertainty is communicated, oncologists mainly refer to aleatory uncertainty. This is also the type of uncertainty that most patients perceive and seem comfortable discussing. Given that it is increasingly common for clinicians to discuss outcome probabilities with their patients, guidance on whether and how to best communicate uncertainty is urgently needed.
APA, Harvard, Vancouver, ISO, and other styles