Journal articles on the topic 'Forecast probability density function'

Consult the top 50 journal articles for your research on the topic 'Forecast probability density function.'

1

Denholm-Price, J. C. W. "Can an ensemble give anything more than Gaussian probabilities?" Nonlinear Processes in Geophysics 10, no. 6 (December 31, 2003): 469–75. http://dx.doi.org/10.5194/npg-10-469-2003.

Abstract:
Can a relatively small numerical weather prediction ensemble produce any more forecast information than can be reproduced by a Gaussian probability density function (PDF)? This question is examined using site-specific probability forecasts from the UK Met Office. These forecasts are based on the 51-member Ensemble Prediction System of the European Centre for Medium-range Weather Forecasts. Verification using Brier skill scores suggests that there can be statistically significant skill in the ensemble forecast PDF compared with a Gaussian fit to the ensemble. The most significant increases in skill were achieved from bias-corrected, calibrated forecasts and for probability forecasts of thresholds that are located well inside the climatological limits at the examined sites. Forecast probabilities for more climatologically extreme thresholds, where the verification more often lies within the tails or outside of the PDF, showed little difference in skill between the forecast PDF and the Gaussian forecast.
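
As a rough illustration of the comparison described above, the sketch below scores event probabilities taken directly from a toy ensemble against probabilities from a Gaussian fit to it, using Brier skill scores. All data and the threshold are synthetic assumptions, not the Met Office forecasts used in the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_days, n_members, threshold = 500, 51, 2.0

# Synthetic forecast-verification archive: 51-member ensembles and outcomes.
signal = rng.normal(0.0, 1.5, n_days)
ensemble = signal[:, None] + rng.normal(0.0, 1.0, (n_days, n_members))
obs = signal + rng.normal(0.0, 1.0, n_days)
event = (obs > threshold).astype(float)

# Exceedance probability from the raw ensemble vs from a Gaussian fit to it.
p_ens = (ensemble > threshold).mean(axis=1)
p_gauss = norm.sf(threshold, loc=ensemble.mean(axis=1),
                  scale=ensemble.std(axis=1, ddof=1))

def brier(p):
    return np.mean((p - event) ** 2)

b_clim = brier(np.full(n_days, event.mean()))   # climatological reference
print(f"BSS ensemble: {1 - brier(p_ens) / b_clim:.3f}, "
      f"BSS Gaussian fit: {1 - brier(p_gauss) / b_clim:.3f}")
```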
2

Smith, Leonard A., Hailiang Du, and Sarah Higgins. "Designing Multimodel Applications with Surrogate Forecast Systems." Monthly Weather Review 148, no. 6 (May 5, 2020): 2233–49. http://dx.doi.org/10.1175/mwr-d-19-0061.1.

Abstract:
Probabilistic forecasting is common in a wide variety of fields including geoscience, social science, and finance. It is sometimes the case that one has multiple probability forecasts for the same target. How is the information in these multiple nonlinear forecast systems best “combined”? Assuming stationarity, in the limit of a very large forecast–outcome archive, each model-based probability density function can be weighted to form a “multimodel forecast” that will, in expectation, provide at least as much information as the most informative single model forecast system. If one of the forecast systems yields a probability distribution that reflects the distribution from which the outcome will be drawn, Bayesian model averaging will identify this forecast system as the preferred system in the limit as the number of forecast–outcome pairs goes to infinity. In many applications, like those of seasonal weather forecasting, data are precious; the archive is often limited to fewer than 26 entries. In addition, no perfect model is in hand. It is shown that in this case forming a single “multimodel probabilistic forecast” can be expected to prove misleading. These issues are investigated in the surrogate model (here a forecast system) regime, where using probabilistic forecasts of a simple mathematical system allows many limiting behaviors of forecast systems to be quantified and compared with those under more realistic conditions.
3

Eckel, F. Anthony, Mark S. Allen, and Matthew C. Sittel. "Estimation of Ambiguity in Ensemble Forecasts." Weather and Forecasting 27, no. 1 (February 1, 2012): 50–69. http://dx.doi.org/10.1175/waf-d-11-00015.1.

Abstract:
Ambiguity is uncertainty in the prediction of forecast uncertainty, or in the forecast probability of a specific event, associated with random error in an ensemble forecast probability density function. In ensemble forecasting ambiguity arises from finite sampling and deficient simulation of the various sources of forecast uncertainty. This study introduces two practical methods of estimating ambiguity and demonstrates them on 5-day, 2-m temperature forecasts from the Japan Meteorological Agency’s Ensemble Prediction System. The first method uses the error characteristics of the calibrated ensemble as well as the ensemble spread to predict likely errors in forecast probability. The second method applies bootstrap resampling on the ensemble members to produce multiple likely values of forecast probability. Both methods include forecast calibration since ambiguity results from random and not systematic errors, which must be removed to reveal the ambiguity. Additionally, use of a more robust calibration technique (improving beyond just correcting average errors) is shown to reduce ambiguity. Validation using a low-order dynamical system reveals that both estimation methods have deficiencies but exhibit some skill, making them candidates for application to decision making—the subject of a companion paper.
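
The second estimation method described above lends itself to a short sketch: bootstrap the members to obtain a distribution of plausible forecast probabilities. The member values and threshold below are invented:

```python
import numpy as np

def probability_ambiguity(members, threshold, n_boot=1000, seed=0):
    """Bootstrap the (calibrated) ensemble members to get a distribution of
    plausible values of the forecast probability itself."""
    rng = np.random.default_rng(seed)
    members = np.asarray(members, float)
    resamples = rng.choice(members, size=(n_boot, members.size), replace=True)
    p_boot = (resamples > threshold).mean(axis=1)
    return p_boot.mean(), np.percentile(p_boot, [5, 95])

members = [1.2, 2.9, 3.1, 0.4, 2.2, 3.8, 1.7, 2.5]  # toy 2-m temperature anomalies
p, (lo, hi) = probability_ambiguity(members, threshold=2.0)
print(f"P(event) ≈ {p:.2f}, 90% ambiguity interval [{lo:.2f}, {hi:.2f}]")
```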
4

Schmid, W., S. Mecklenburg, and J. Joss. "Short-term risk forecasts of heavy rainfall." Water Science and Technology 45, no. 2 (January 1, 2002): 121–25. http://dx.doi.org/10.2166/wst.2002.0036.

Abstract:
Methodologies for risk forecasts of severe weather hardly exist on the scale of nowcasting (0–3 hours). Here we discuss short-term risk forecasts of heavy precipitation associated with local thunderstorms. We use COTREC/RainCast: a procedure to extrapolate radar images into the near future. An error density function is defined using the estimated error of location of the extrapolated radar patterns. The radar forecast is folded (“smeared”) with the density function, leading to a probability distribution of radar intensities. An algorithm to convert the radar intensities into values of precipitation intensity provides the desired probability (or risk) of heavy rainfall at any position within the considered window in space and time. We discuss, as an example, a flood event from summer 2000.
5

Liu, Liyang, Junji Wu, and Shaoliang Meng. "A Statistical Model for Wind Power Forecast Error Based on Kernel Density Estimation." Open Electrical & Electronic Engineering Journal 8, no. 1 (December 31, 2014): 501–7. http://dx.doi.org/10.2174/1874129001408010501.

Abstract:
Wind power has developed rapidly as a clean energy source in recent years. The forecast error of wind power, however, makes it difficult to use wind power effectively. In some earlier statistical models, the forecast error was usually assumed to follow a Gaussian distribution, which has proven unreliable under statistical analysis. In this paper, a more suitable probability density function for wind power forecast error, based on kernel density estimation, is proposed. The proposed model is a non-parametric statistical algorithm that obtains the probability density function directly from the error data, without requiring any distributional assumptions. This paper also presents an optimal bandwidth algorithm for kernel density estimation using particle swarm optimization, and employs a Chi-squared test to validate the model. Compared with the Gaussian and Beta distributions, the mean squared error and the Chi-squared test show that the proposed model is more effective and reliable.
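
The core idea, an error density estimated directly from the data without distributional assumptions, can be sketched as follows. Note that the paper optimizes the bandwidth with particle swarm optimization; this sketch substitutes a fixed bandwidth factor and SciPy's Gaussian-kernel estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy wind power forecast errors (MW): skewed and bimodal, the kind of shape
# that breaks the Gaussian assumption criticized in the abstract.
rng = np.random.default_rng(1)
errors = np.concatenate([rng.normal(-5.0, 3.0, 700), rng.normal(8.0, 6.0, 300)])

# Fixed bandwidth factor stands in for the paper's PSO-optimized bandwidth.
kde = gaussian_kde(errors, bw_method=0.2)
grid = np.linspace(errors.min(), errors.max(), 200)
pdf = kde(grid)                                  # nonparametric error density
print(f"P(|error| < 5 MW) ≈ {kde.integrate_box_1d(-5.0, 5.0):.3f}")
```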
6

Thorarinsdottir, Thordis L., and Matthew S. Johnson. "Probabilistic Wind Gust Forecasting Using Nonhomogeneous Gaussian Regression." Monthly Weather Review 140, no. 3 (February 1, 2012): 889–97. http://dx.doi.org/10.1175/mwr-d-11-00075.1.

Abstract:
A joint probabilistic forecasting framework is proposed for maximum wind speed, the probability of gust, and, conditional on gust being observed, the maximum gust speed in a setting where only the maximum wind speed forecast is available. The framework employs the nonhomogeneous Gaussian regression (NGR) statistical postprocessing method with appropriately truncated Gaussian predictive distributions. For wind speed, the distribution is truncated at zero, the location parameter is a linear function of the wind speed ensemble forecast, and the scale parameter is a linear function of the ensemble variance. The gust forecasts are derived from the wind speed forecast using a gust factor, and the predictive distribution for gust speed is truncated according to its definition. The framework is applied to 48-h-ahead forecasts of wind speed over the North American Pacific Northwest obtained from the University of Washington mesoscale ensemble. The resulting density forecasts for wind speed and gust speed are calibrated and sharp, and offer substantial improvement in predictive performance over the raw ensemble or climatological reference forecasts.
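
A hedged sketch of the truncated-Gaussian NGR predictive distribution described above: location linear in the ensemble mean, squared scale linear in the ensemble variance, truncated at zero. The coefficients are invented; the paper estimates them from training data:

```python
import numpy as np
from scipy.stats import truncnorm

def ngr_wind_dist(ens_mean, ens_var, a, b, c, d):
    """Zero-truncated Gaussian predictive distribution for wind speed:
    location linear in the ensemble mean, squared scale linear in the
    ensemble variance (a, b, c, d are fitted on training data in the paper)."""
    mu = a + b * ens_mean
    sigma = np.sqrt(c + d * ens_var)
    return truncnorm((0.0 - mu) / sigma, np.inf, loc=mu, scale=sigma)

# Invented coefficients and ensemble statistics, purely for illustration.
dist = ngr_wind_dist(ens_mean=8.5, ens_var=4.0, a=0.3, b=0.95, c=0.5, d=1.1)
print(f"P(wind > 12 m/s) = {dist.sf(12.0):.3f}, median = {dist.median():.2f} m/s")
```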
7

Schuhen, Nina, Thordis L. Thorarinsdottir, and Tilmann Gneiting. "Ensemble Model Output Statistics for Wind Vectors." Monthly Weather Review 140, no. 10 (October 1, 2012): 3204–19. http://dx.doi.org/10.1175/mwr-d-12-00028.1.

Abstract:
A bivariate ensemble model output statistics (EMOS) technique for the postprocessing of ensemble forecasts of two-dimensional wind vectors is proposed, where the postprocessed probabilistic forecast takes the form of a bivariate normal probability density function. The postprocessed means and variances of the wind vector components are linearly bias-corrected versions of the ensemble means and ensemble variances, respectively, and the conditional correlation between the wind components is represented by a trigonometric function of the ensemble mean wind direction. In a case study on 48-h forecasts of wind vectors over the North American Pacific Northwest with the University of Washington Mesoscale Ensemble, the bivariate EMOS density forecasts were calibrated and sharp, and showed considerable improvement over the raw ensemble and reference forecasts, including ensemble copula coupling.
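
A sketch of the bivariate EMOS predictive density under stated assumptions: the affine mean and variance links follow the abstract, while the specific trigonometric form for the correlation (rho = r0 + r1 cos theta) and all coefficient values are illustrative stand-ins for the fitted model:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_emos(ens_u, ens_v, c):
    """Bivariate normal predictive PDF for the (u, v) wind vector: means and
    variances affine in the ensemble statistics, u-v correlation a
    trigonometric function of the ensemble-mean wind direction."""
    mu_u = c["a_u"] + c["b_u"] * ens_u.mean()
    mu_v = c["a_v"] + c["b_v"] * ens_v.mean()
    var_u = c["c_u"] + c["d_u"] * ens_u.var(ddof=1)
    var_v = c["c_v"] + c["d_v"] * ens_v.var(ddof=1)
    theta = np.arctan2(mu_v, mu_u)                 # ensemble-mean direction
    rho = c["r0"] + c["r1"] * np.cos(theta)        # assumed trigonometric form
    cov_uv = rho * np.sqrt(var_u * var_v)
    return multivariate_normal([mu_u, mu_v], [[var_u, cov_uv], [cov_uv, var_v]])

rng = np.random.default_rng(8)
u, v = rng.normal(5, 2, 8), rng.normal(-1, 2, 8)   # toy 8-member (u, v) forecasts
coef = dict(a_u=0.1, b_u=0.95, a_v=0.1, b_v=0.95,
            c_u=0.3, d_u=1.1, c_v=0.3, d_v=1.1, r0=0.1, r1=0.5)
dist = bivariate_emos(u, v, coef)
print(f"density at the mean vector: {dist.pdf([u.mean(), v.mean()]):.4f}")
```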
8

Qi, Haixia, Xiefei Zhi, Tao Peng, Yongqing Bai, and Chunze Lin. "Comparative Study on Probabilistic Forecasts of Heavy Rainfall in Mountainous Areas of the Wujiang River Basin in China Based on TIGGE Data." Atmosphere 10, no. 10 (October 9, 2019): 608. http://dx.doi.org/10.3390/atmos10100608.

Abstract:
Based on ensemble precipitation forecast data for the summers of 2014–2018 from the Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE), a comparative study of two multi-model ensemble methods, Bayesian model averaging (BMA) and logistic regression (LR), was conducted. Forecasts of heavy precipitation from the two models over the Wujiang River Basin in China for the summer of 2018 were compared to verify their performance. The training period sensitivity tests show that a training period of 2 years was best for the BMA probability forecast model. Compared with the BMA method, the LR model required more statistical samples, and its optimal training period length was 5 years. According to the Brier score (BS), for precipitation events exceeding 10 mm with lead times of 1–7 days, the BMA outperformed the LR and the raw ensemble prediction system forecasts (RAW), except for forecasts with a lead time of 1 day. For heavy rainfall events exceeding 25 and 50 mm, the RAW and the BMA performed much the same. Reliability diagrams showed that the two multi-model ensembles (i.e., BMA and LR) were more reliable than the RAW for heavy and moderate rainfall forecasts, with the BMA model performing best. The BMA probabilistic forecast produces a highly concentrated probability density function (PDF) curve and can also provide deterministic forecasts through analysis of percentile forecast results. For heavy rainfall forecasts in mountainous areas, it is recommended to refer to a forecast at a larger percentile, between the 75th and 90th percentiles. Nevertheless, extreme events with low forecast probabilities may still occur and cannot be ignored.
9

Veenhuis, Bruce A. "Spread Calibration of Ensemble MOS Forecasts." Monthly Weather Review 141, no. 7 (July 1, 2013): 2467–82. http://dx.doi.org/10.1175/mwr-d-12-00191.1.

Abstract:
Ensemble forecasting systems often contain systematic biases and spread deficiencies that can be corrected by statistical postprocessing. This study presents an improvement to an ensemble statistical postprocessing technique, called ensemble kernel density model output statistics (EKDMOS). EKDMOS uses model output statistics (MOS) equations and spread–skill relationships to generate calibrated probabilistic forecasts. The MOS equations are multiple linear regression equations developed by relating observations to ensemble mean-based predictors. The spread–skill relationships are one-term linear regression equations that predict the expected accuracy of the ensemble mean given the ensemble spread. To generate an EKDMOS forecast, the MOS equations are applied to each ensemble member. Kernel density fitting is used to create a probability density function (PDF) from the ensemble MOS forecasts. The PDF spread is adjusted to match the spread predicted by the spread–skill relationship, producing a calibrated forecast. The improved EKDMOS technique was used to produce probabilistic 2-m temperature forecasts from the North American Ensemble Forecast System (NAEFS) over the period 1 October 2007–31 March 2010. The results were compared with an earlier spread adjustment technique, as well as forecasts generated by rank sorting the bias-corrected ensemble members. Compared to the other techniques, the new EKDMOS forecasts were more reliable, had a better calibrated spread–error relationship, and showed increased day-to-day spread variability.
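
The EKDMOS pipeline described above (MOS regression applied to each member, kernel density fit, spread rescaled to the spread-skill prediction) might be sketched as follows; the linear forms and all coefficients are assumptions for illustration:

```python
import numpy as np
from scipy.stats import gaussian_kde

def ekdmos_pdf(members, mos_coef, spread_skill, grid):
    """EKDMOS-style sketch: apply the MOS regression to every member, rescale
    the corrected members so their spread matches the spread-skill
    prediction, then kernel-density fit the result (the KDE bandwidth adds a
    little extra spread beyond the target)."""
    m = mos_coef[0] + mos_coef[1] * np.asarray(members)
    target_sd = spread_skill[0] + spread_skill[1] * m.std(ddof=1)
    scaled = m.mean() + (m - m.mean()) * target_sd / m.std(ddof=1)
    return gaussian_kde(scaled)(grid)

grid = np.linspace(-5.0, 15.0, 201)
pdf = ekdmos_pdf([3.1, 4.0, 2.5, 5.2, 3.8],
                 mos_coef=(0.2, 0.9), spread_skill=(0.8, 0.6), grid=grid)
print(f"PDF peak near {grid[pdf.argmax()]:.1f} °C")
```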
10

Rodríguez, Lissette Guzmán, Vagner Anabor, Franciano Scremin Puhales, and Everson Dal Piva. "Estimativa da probabilidade de ocorrência de precipitação, a partir de técnicas estatísticas não paramétricas aplicadas a simulações numéricas de WRF. Um caso de estudo [Estimation of the probability of precipitation occurrence using nonparametric statistical techniques applied to WRF numerical simulations: a case study]." Ciência e Natura 38 (July 20, 2016): 491. http://dx.doi.org/10.5902/2179460x20193.

Abstract:
Kernel density estimation (KDE), a nonparametric method for estimating the probability density function of a random variable, was used to obtain a probabilistic precipitation forecast from an ensemble prediction with the WRF model. The nine members of the prediction were obtained by varying the convective parameterization of the model for a heavy precipitation event in southern Brazil. The estimated probabilities obtained for periods of 3 and 24 hours and various precipitation thresholds were compared with the TRMM precipitation estimates, without showing a clear morphological correspondence between them. For 24-h accumulations, it was possible to compare against specific INMET observations, finding better agreement between the observations and the predicted probabilities. Skill scores were calculated from contingency tables for different ranges of probability; the forecast of heavy rain had the highest proportion correct in all probability ranges, and precipitation forecast with a probability of 75%, for any threshold, did not produce false alarms. Lower-intensity precipitation forecast with marginal probability was over-forecast and showed a higher false alarm rate.
11

Cao, Li Jun, Hui Bin Hu, Gui Bo Yu, and Shu Hai Wang. "Reliability Simulation and Forecast Based on Virtual Prototype for the Running System of Complicated Equipments." Applied Mechanics and Materials 543-547 (March 2014): 195–98. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.195.

Abstract:
The running system is the key component for completing the training and combat tasks of complex equipment, but harsh working conditions hamper the measurement of load spectra, making it difficult to analyze and forecast the running system's reliability. Actual vehicle experiments and a virtual prototype are first combined to obtain the complete load spectrum of the running system. From the material's S-N curve, stress and strain spectra can be computed. The nominal stress method and the local stress-strain method are combined with probability-density cumulative damage theory to compute the probability density distribution function. The reliability of the running system can then be forecast, providing a sound reference for determining the maintenance cycle and predicting mission reliability.
12

Wilson, Laurence J., Stephane Beauregard, Adrian E. Raftery, and Richard Verret. "Calibrated Surface Temperature Forecasts from the Canadian Ensemble Prediction System Using Bayesian Model Averaging." Monthly Weather Review 135, no. 4 (April 1, 2007): 1364–85. http://dx.doi.org/10.1175/mwr3347.1.

Abstract:
Bayesian model averaging (BMA) has recently been proposed as a way of correcting underdispersion in ensemble forecasts. BMA is a standard statistical procedure for combining predictive distributions from different sources. The output of BMA is a probability density function (pdf), which is a weighted average of pdfs centered on the bias-corrected forecasts. The BMA weights reflect the relative contributions of the component models to the predictive skill over a training sample. The variance of the BMA pdf is made up of two components, the between-model variance, and the within-model error variance, both estimated from the training sample. This paper describes the results of experiments with BMA to calibrate surface temperature forecasts from the 16-member Canadian ensemble system. Using one year of ensemble forecasts, BMA was applied for different training periods ranging from 25 to 80 days. The method was trained on the most recent forecast period, then applied to the next day’s forecasts as an independent sample. This process was repeated through the year, and forecast quality was evaluated using rank histograms, the continuous rank probability score, and the continuous rank probability skill score. An examination of the BMA weights provided a useful comparative evaluation of the component models, both for the ensemble itself and for the ensemble augmented with the unperturbed control forecast and the higher-resolution deterministic forecast. Training periods around 40 days provided a good calibration of the ensemble dispersion. Both full regression and simple bias-correction methods worked well to correct the bias, except that the full regression failed to completely remove seasonal trend biases in spring and fall. Simple correction of the bias was sufficient to produce positive forecast skill out to 10 days with respect to climatology, which was improved by the BMA. The addition of the control forecast and the full-resolution model forecast to the ensemble produced modest improvement in the forecasts for ranges out to about 7 days. Finally, BMA produced significantly narrower 90% prediction intervals compared to a simple Gaussian bias correction, while achieving similar overall accuracy.
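
The BMA predictive PDF described above is a weighted mixture of kernels centered on bias-corrected member forecasts. A minimal Gaussian-kernel sketch (weights, kernel width, and bias coefficients would come from training on a recent archive; the values below are toys):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def bma_pdf(x, member_forecasts, weights, sigma, bias=(0.0, 1.0)):
    """BMA predictive density: a weighted sum of Gaussian kernels centered on
    the bias-corrected member forecasts."""
    a, b = bias
    centers = a + b * np.asarray(member_forecasts)
    w = np.asarray(weights, float) / np.sum(weights)
    return np.sum(w[:, None] * norm.pdf(x[None, :], centers[:, None], sigma), axis=0)

x = np.linspace(-10.0, 25.0, 351)
pdf = bma_pdf(x, member_forecasts=[4.1, 5.0, 3.2, 6.7],
              weights=[0.4, 0.3, 0.2, 0.1], sigma=1.8)
print(f"density integrates to ≈ {trapezoid(pdf, x):.3f}")
```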
13

Glahn, Bob, Matthew Peroutka, Jerry Wiedenfeld, John Wagner, Greg Zylstra, Bryan Schuknecht, and Bryan Jackson. "MOS Uncertainty Estimates in an Ensemble Framework." Monthly Weather Review 137, no. 1 (January 1, 2009): 246–68. http://dx.doi.org/10.1175/2008mwr2569.1.

Abstract:
It is being increasingly recognized that the uncertainty in weather forecasts should be quantified and furnished to users along with the single-value forecasts usually provided. Probabilistic forecasts of “events” have been made in special cases; for instance, probabilistic forecasts of the event defined as 0.01 in. or more of precipitation at a point over a specified time period [i.e., the probability of precipitation (PoP)] have been disseminated to the public by the Weather Bureau/National Weather Service since 1966. Within the past decade, ensembles of operational numerical weather prediction models have been produced and used to some degree to provide probabilistic estimates of events easily dealt with, such as the occurrence of specific amounts of precipitation. In most such applications, the number of ensembles restricts this “enumeration” method, and the ensembles are characteristically underdispersive. However, fewer attempts have been made to provide a probability density function (PDF) or cumulative distribution function (CDF) for a continuous variable. The Meteorological Development Laboratory (MDL) has used the error estimation capabilities of the linear regression framework and kernel density fitting applied to individual and aggregate ensemble members of the Global Ensemble Forecast System of the National Centers for Environmental Prediction to develop PDFs and CDFs. This paper describes the method and results for temperature, dewpoint, daytime maximum temperature, and nighttime minimum temperature. The method produces reliable forecasts with accuracy exceeding the raw ensembles. Points on the CDF for 1650 stations have been mapped to the National Digital Forecast Database 5-km grid and an example is provided.
14

Gu, Wentao, Zhongdi Liu, Cui Dong, Jian He, and Ming-Chuan Hsieh. "Forecasting Realized Volatility in Financial Markets Based on a Time-Varying Non-Parametric Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 4 (July 20, 2019): 641–48. http://dx.doi.org/10.20965/jaciii.2019.p0641.

Abstract:
This paper proposes a new non-parametric adaptive combination model for the prediction of realized volatility, applying and extending time-varying probability density function theory. We first construct an adaptive time-varying weight mechanism for the combination forecast. To compare the predictive power of the models, we use the SPA test, with bootstrap as the evaluation criterion, and employ a rolling-window strategy for out-of-sample forecasting. The empirical study shows that the non-parametric TVF model forecasts more accurately than the HAR-RV model. In addition, the average combination forecast model does not have a significant advantage over any single model, while our adaptive combination model does.
15

Peel, Syd, and Laurence J. Wilson. "Modeling the Distribution of Precipitation Forecasts from the Canadian Ensemble Prediction System Using Kernel Density Estimation." Weather and Forecasting 23, no. 4 (August 1, 2008): 575–95. http://dx.doi.org/10.1175/2007waf2007023.1.

Abstract:
Kernel density estimation is employed to fit smooth probabilistic models to precipitation forecasts of the Canadian ensemble prediction system. An intuitive nonparametric technique, kernel density estimation has become a powerful tool widely used in the approximation of probability density functions. The density estimators were constructed using the gamma kernels prescribed by S.-X. Chen, confined as they are to the nonnegative real axis, which constitutes the support of the random variable representing precipitation accumulation. Performance of kernel density estimators for several different smoothing bandwidths is compared with the discrete probabilistic model obtained as the fraction of member forecasts predicting the events, which for this study consisted of threshold exceedances. A propitious choice of the smoothing bandwidth yields smooth forecasts comparable, or sometimes superior, to the discrete probabilistic forecast, depending on the character of the raw ensemble forecasts. At the same time more realistic models of the probability density are achieved, particularly in the tail of the distribution, yielding forecasts that can be optimally calibrated for extreme events.
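
A sketch of a gamma-kernel estimator in the spirit of S.-X. Chen's, assuming his first form (shape x/b + 1, scale b): each grid point averages gamma densities evaluated at the data, so no probability mass leaks below zero:

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import trapezoid

def gamma_kde(x_grid, data, b):
    """Gamma-kernel density estimate on [0, inf): at each grid point x,
    average gamma(shape = x/b + 1, scale = b) densities evaluated at the
    data points."""
    x = np.asarray(x_grid)[:, None]
    d = np.asarray(data)[None, :]
    return gamma.pdf(d, a=x / b + 1.0, scale=b).mean(axis=1)

precip = np.array([0.0, 0.2, 0.0, 1.4, 3.6, 0.8, 0.1, 5.2, 2.3, 0.0])  # toy mm
grid = np.linspace(0.0, 8.0, 161)
pdf = gamma_kde(grid, precip, b=0.5)            # b is the smoothing bandwidth
mask = grid > 2.0
print(f"P(precip > 2 mm) ≈ {trapezoid(pdf[mask], grid[mask]):.2f}")
```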
16

Gneiting, Tilmann, Adrian E. Raftery, Anton H. Westveld, and Tom Goldman. "Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation." Monthly Weather Review 133, no. 5 (May 1, 2005): 1098–118. http://dx.doi.org/10.1175/mwr2904.1.

Abstract:
Ensemble prediction systems typically show positive spread-error correlation, but they are subject to forecast bias and dispersion errors, and are therefore uncalibrated. This work proposes the use of ensemble model output statistics (EMOS), an easy-to-implement postprocessing technique that addresses both forecast bias and underdispersion and takes into account the spread-skill relationship. The technique is based on multiple linear regression and is akin to the superensemble approach that has traditionally been used for deterministic-style forecasts. The EMOS technique yields probabilistic forecasts that take the form of Gaussian predictive probability density functions (PDFs) for continuous weather variables and can be applied to gridded model output. The EMOS predictive mean is a bias-corrected weighted average of the ensemble member forecasts, with coefficients that can be interpreted in terms of the relative contributions of the member models to the ensemble, and provides a highly competitive deterministic-style forecast. The EMOS predictive variance is a linear function of the ensemble variance. For fitting the EMOS coefficients, the method of minimum continuous ranked probability score (CRPS) estimation is introduced. This technique finds the coefficient values that optimize the CRPS for the training data. The EMOS technique was applied to 48-h forecasts of sea level pressure and surface temperature over the North American Pacific Northwest in spring 2000, using the University of Washington mesoscale ensemble. When compared to the bias-corrected ensemble, deterministic-style EMOS forecasts of sea level pressure had root-mean-square error 9% less and mean absolute error 7% less. The EMOS predictive PDFs were sharp, and much better calibrated than the raw ensemble or the bias-corrected ensemble.
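
The two ingredients of the method, the closed-form CRPS of a Gaussian and its minimization over the EMOS coefficients, can be sketched as below. For brevity the predictive mean here uses only the ensemble mean rather than the paper's weighted combination of individual members:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Minimum-CRPS fit of mu = a + b*mean, sigma^2 = c + d*var."""
    def loss(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
        return crps_normal(a + b * ens_mean, sigma, obs).mean()
    return minimize(loss, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

rng = np.random.default_rng(2)
ens_mean = rng.normal(15.0, 5.0, 300)      # toy training archive
ens_var = rng.uniform(0.5, 4.0, 300)
obs = ens_mean + rng.normal(0.5, np.sqrt(1.0 + 1.2 * ens_var))
print(fit_emos(ens_mean, ens_var, obs))    # ~ [0.5, 1.0, 1.0, 1.2] by design
```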
17

Ivarsson, Karl-Ivar. "A Score for Probability Forecasts of Binary Events Based on the User’s Cost–Loss Relations." Weather and Forecasting 35, no. 1 (January 15, 2020): 113–27. http://dx.doi.org/10.1175/waf-d-19-0013.1.

Abstract:
A score for verifying probabilistic forecasts is presented. It is called the continuous specific score (CSS) and is intended for binary events only. The score is based on the user’s cost–loss relations and their relative importance. The relative importance is determined by a continuous function of the user’s density of loss for various cost–loss relations. One may also consider CSS as the result of the expected mean value based on a probability function of the various loss values for all possible cost–loss relations for one single user. The CSS is a negatively oriented score and has the following properties: perfect forecasts yield the score zero, and 100% probability for adverse weather (AW) when AW does not occur leads to the score one. The result of a 0% forecast of AW when AW occurs is the inverted value of the ratio of the average cost to the average loss minus one. Different possible usages of the score are discussed. An effective cost–loss ratio (ECLR) is defined. It measures how important low cost–loss ratios are compared to higher ones.
18

Cruz, Miguel G. "Monte Carlo-based ensemble method for prediction of grassland fire spread." International Journal of Wildland Fire 19, no. 4 (2010): 521. http://dx.doi.org/10.1071/wf08195.

Abstract:
The operational prediction of fire spread to support fire management operations relies on a deterministic approach where a single ‘best-guess’ forecast is produced from the best estimate of the environmental conditions driving the fire. Although fire can be considered a phenomenon of low predictability and the estimation of input conditions for fire behaviour models is fraught with uncertainty, no error component is associated with these forecasts. At best, users will derive an uncertainty bound to the model outputs based on their own personal experience. A simple ensemble method that considers the uncertainty in the estimation of model input values and Monte Carlo sampling was applied with a grassland fire-spread model to produce a probability density function of rate of spread. This probability density function was then used to describe the uncertainty in the fire behaviour prediction and to produce probability-based outputs. The method was applied to a grassland wildfire case study dataset. The ensemble method did not improve the general statistics describing model fit but provided complementary information describing the uncertainty associated with the predictions and a probabilistic output for the occurrence of threshold levels of fire behaviour.
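
The ensemble method reduces to sampling the model inputs from distributions expressing their uncertainty and pushing each draw through the spread model. Everything below is a made-up stand-in (the study uses an operational grassland fire-spread model), so it only illustrates the Monte Carlo mechanics:

```python
import numpy as np

def ros_model(wind_kmh, fuel_moisture_pct, curing_pct):
    """Made-up stand-in for a grassland rate-of-spread model (the study uses
    an operational model; this toy just has qualitatively sensible behavior)."""
    return 0.1 * wind_kmh**1.2 * np.exp(-0.1 * fuel_moisture_pct) * (curing_pct / 100.0)

rng = np.random.default_rng(3)
n = 10_000
# Sample each input from a distribution expressing its estimation uncertainty.
wind = rng.normal(25.0, 5.0, n).clip(min=0.0)        # km/h
moisture = rng.normal(6.0, 1.5, n).clip(2.0, 20.0)   # %
curing = rng.normal(90.0, 5.0, n).clip(0.0, 100.0)   # %

ros = ros_model(wind, moisture, curing)  # Monte Carlo ensemble of spread rates
print(f"median ROS = {np.median(ros):.2f} km/h, "
      f"P(ROS > 3 km/h) = {(ros > 3.0).mean():.2f}")
```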
19

Tang, Brian H., and Nick P. Bassill. "Point Downscaling of Surface Wind Speed for Forecast Applications." Journal of Applied Meteorology and Climatology 57, no. 3 (March 2018): 659–74. http://dx.doi.org/10.1175/jamc-d-17-0144.1.

Abstract:
A statistical downscaling algorithm is introduced to forecast surface wind speed at a location. The downscaling algorithm consists of resolved and unresolved components to yield a time series of synthetic wind speeds at high time resolution. The resolved component is a bias-corrected numerical weather prediction model forecast of the 10-m wind speed at the location. The unresolved component is a simulated time series of the high-frequency component of the wind speed that is trained to match the variance and power spectral density of wind observations at the location. Because of the stochastic nature of the unresolved wind speed, the downscaling algorithm may be repeated to yield an ensemble of synthetic wind speeds. The ensemble may be used to generate probabilistic predictions of the sustained wind speed or wind gusts. Verification of the synthetic winds produced by the downscaling algorithm indicates that it can accurately predict various features of the observed wind, such as the probability distribution function of wind speeds, the power spectral density, daily maximum wind gust, and daily maximum sustained wind speed. Thus, the downscaling algorithm may be broadly applicable to any application that requires a computationally efficient, accurate way of generating probabilistic forecasts of wind speed at various time averages or forecast horizons.
20

Pérez, B., R. Brouwer, J. Beckers, D. Paradis, C. Balseiro, K. Lyons, M. Cure, et al. "ENSURF: multi-model sea level forecast – implementation and validation results for the IBIROOS and Western Mediterranean regions." Ocean Science Discussions 8, no. 2 (April 8, 2011): 761–800. http://dx.doi.org/10.5194/osd-8-761-2011.

Abstract:
ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of existing storm surge or circulation models operational in Europe today, as well as near-real-time tide gauge data in the region, with the following main goals:
– providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool;
– generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA).
The system was developed and implemented within the ECOOP (C.No. 036355) European Project for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado, respectively. The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts' PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. Validation results for the different models and the BMA implementation at the main harbours are presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences between baroclinic and barotropic models for sea level applications, and to confirm the general improvement of the BMA forecasts.
21

Rupšys, Petras, and Edmundas Petrauskas. "Analysis of Longitudinal Forest Data on Individual-Tree and Whole-Stand Attributes Using a Stochastic Differential Equation Model." Forests 13, no. 3 (March 8, 2022): 425. http://dx.doi.org/10.3390/f13030425.

Abstract:
This paper focuses on individual-tree and whole-stand growth models for uneven-aged and mixed-species stands in Lithuania. All the growth models were derived using a single trivariate diffusion process defined by a mixed-effect parameters trivariate stochastic differential equation describing the tree diameter, potentially available area, and height. The mixed-effect parameters of the newly developed trivariate transition probability density function were estimated using an approximate maximum likelihood procedure. Using the relationship between the multivariate probability density and univariate marginal (conditional) densities, the growth equations were derived to predict or forecast the individual-tree and whole-stand variables, such as diameter, potentially available area, height, basal area, and stand density. All the results are illustrated using an observed dataset from 53 permanent experimental plots remeasured from 1 to 7 times. The computed statistical measures showed high predictive and forecast accuracy compared with validation data that were not used to find parameter estimates. All the results were implemented in the Maple computer algebra system.
22

Yuan, Yingzhong, Zhilin Qi, Zhangxing Chen, Wende Yan, and Zhiheng Zhao. "Production decline analysis of shale gas based on a probability density distribution function." Journal of Geophysics and Engineering 17, no. 2 (February 10, 2020): 365–76. http://dx.doi.org/10.1093/jge/gxz122.

Abstract:
Production decline analysis is a simple and efficient method to forecast the production dynamics of shale gas. The traditional Arps decline model is widely used in production decline analysis of shale gas, but it often produces an obvious error. Based on the Weibull and χ2 probability density distribution functions, monotonically decreasing production prediction equations for shale gas are established. It is deduced that the recently and widely used Duong model is essentially a Weibull probability density distribution model. Decline analysis of production data from an actual shale gas well and numerical simulations indicates that the fitting results of the Weibull (Duong) model and the χ2 distribution model are better than those of the Arps model, whose deviation for early data is large. For a shale gas reservoir with very low permeability, the pressure conformance area is small and is strongly influenced by fractures. The early production rate, contributed mainly by fractures, declines quickly, while the later production rate, contributed mainly by the matrix, declines slowly over time. The production decline curve has obvious long-tail distribution characteristics and is better fit by the χ2 distribution model. As permeability increases, the fitting accuracy of the Weibull (Duong) model gradually becomes better than that of the χ2 distribution model. These results provide theoretical guidance for choosing a reasonable production decline model for shale gas reservoirs of different permeabilities.
23

Cassettari, Lucia, Ilaria Bendato, Marco Mosca, and Roberto Mosca. "A new stochastic multi source approach to improve the accuracy of the sales forecasts." foresight 19, no. 1 (March 13, 2017): 48–64. http://dx.doi.org/10.1108/fs-07-2016-0036.

Abstract:
Purpose: The aim of this paper is to suggest a new approach to the problem of sales forecasting that improves forecast accuracy. The proposed method combines, by means of appropriate weights, both the responses supplied by the best-performing conventional algorithms, which base their output on historical data, and the insights of the company's forecasters, which can take into account future events that are impossible to predict with traditional mathematical methods.
Design/methodology/approach: The authors propose a six-step methodology using multiple forecasting sources. Each of these forecasts, to account for the uncertainty of the variables involved, is expressed in the form of a suitable probability density function. A proper use of Monte Carlo simulation allows the best fit among these different sources to be obtained, yielding a forecast value accompanied by a probability of error known a priori.
Findings: The proposed approach allows the company's demand forecasters to respond promptly to market dynamics and to make a choice of weights, gradually ever more accurate, triggering a continuous process of forecast improvement. Application to a real business case proves the validity and practical utility of the methodology.
Originality/value: Forecast definition is normally entrusted to the company's demand forecasters, who may radically modify the information suggested by conventional prediction algorithms or, conversely, be too influenced by their output. This issue motivates the proposed methodological approach, which aims to improve forecast accuracy by merging, with appropriate weights and accounting for the stochasticity involved, the outputs of sales forecast algorithms with the contributions of the company's forecasters.
24

Pérez, B., R. Brouwer, J. Beckers, D. Paradis, C. Balseiro, K. Lyons, M. Cure, et al. "ENSURF: multi-model sea level forecast – implementation and validation results for the IBIROOS and Western Mediterranean regions." Ocean Science 8, no. 2 (March 30, 2012): 211–26. http://dx.doi.org/10.5194/os-8-211-2012.

Abstract:
ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real time tide gauge data in the region, with the following main goals:
1. providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool;
2. generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA).
The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts' PDFs; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and Western Mediterranean coast based on the MATROOS visualization tool developed by Deltares. Results of validation of the different models and the BMA implementation for the main harbours are presented for these regions, where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved to be useful in the detection of calibration problems in some of the circulation models, in the identification of the systematic differences between baroclinic and barotropic models for sea level forecasts, and in demonstrating the feasibility of providing an overall probabilistic forecast based on the BMA method.
25

Luen Liou, Jeng, and Jen Fin Lin. "A New Microcontact Model Developed for Variable Fractal Dimension, Topothesy, Density of Asperity, and Probability Density Function of Asperity Heights." Journal of Applied Mechanics 74, no. 4 (April 19, 2006): 603–13. http://dx.doi.org/10.1115/1.2338059.

Abstract:
In the present study, the fractal theory is applied to modify the conventional model (the Greenwood and Williamson model) established in the statistical form for the microcontacts of two contact surfaces. The mean radius of curvature (R) and the density of asperities (η) are no longer taken as constants, but taken as variables as functions of the related parameters including the fractal dimension (D), the topothesy (G), and the mean separation of two contact surfaces. The fractal dimension and the topothesy varied by differing the mean separation of two contact surfaces are completely obtained from the theoretical model. Then the mean radius of curvature and the density of asperities are also varied by differing the mean separation. A numerical scheme is thus developed to determine the convergent values of the fractal dimension and topothesy corresponding to a given mean separation. The topographies of a surface obtained from the theoretical prediction of different separations show the probability density function of asperity heights to be no longer the Gaussian distribution. Both the fractal dimension and the topothesy are elevated by increasing the mean separation. The density of asperities is reduced by decreasing the mean separation. The contact load and the total contact area results predicted by variable D, G*, and η as well as non-Gaussian distribution are always higher than those forecast with constant D, G*, η, and Gaussian distribution.
26

Shiu, Chein-Jung, Yi-Chi Wang, Huang-Hsiung Hsu, Wei-Ting Chen, Hua-Lu Pan, Ruiyu Sun, Yi-Hsuan Chen, and Cheng-An Chen. "GTS v1.0: a macrophysics scheme for climate models based on a probability density function." Geoscientific Model Development 14, no. 1 (January 12, 2021): 177–204. http://dx.doi.org/10.5194/gmd-14-177-2021.

Abstract:
Cloud macrophysics schemes are unique parameterizations for general circulation models. We propose an approach based on a probability density function (PDF) that utilizes cloud condensates and saturation ratios to replace the assumption of critical relative humidity (RH). We test this approach, called the Global Forecast System (GFS) – Taiwan Earth System Model (TaiESM) – Sundqvist (GTS) scheme, using the macrophysics scheme within the Community Atmosphere Model version 5.3 (CAM5.3) framework. Via single-column model results, the new approach simulates the cloud fraction (CF)–RH distributions closer to those of the observations when compared to those of the default CAM5.3 scheme. We also validate the impact of the GTS scheme on global climate simulations with satellite observations. The simulated CF is comparable to CloudSat/Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data. Comparisons of the vertical distributions of CF and cloud water content (CWC), as functions of large-scale dynamic and thermodynamic parameters, with the CloudSat/CALIPSO data suggest that the GTS scheme can closely simulate observations. This is particularly noticeable for thermodynamic parameters, such as RH, upper-tropospheric temperature, and total precipitable water, implying that our scheme can simulate variation in CF associated with RH more reliably than the default scheme. Changes in CF and CWC would affect climatic fields and large-scale circulation via cloud–radiation interaction. Both climatological means and annual cycles of many of the GTS-simulated variables are improved compared with the default scheme, particularly with respect to water vapor and RH fields. Different PDF shapes in the GTS scheme also significantly affect global simulations.
27

Loken, Eric D., Adam J. Clark, Amy McGovern, Montgomery Flora, and Kent Knopfmeier. "Postprocessing Next-Day Ensemble Probabilistic Precipitation Forecasts Using Random Forests." Weather and Forecasting 34, no. 6 (December 1, 2019): 2017–44. http://dx.doi.org/10.1175/waf-d-19-0109.1.

Abstract:
Most ensembles suffer from underdispersion and systematic biases. One way to correct for these shortcomings is via machine learning (ML), which is advantageous due to its ability to identify and correct nonlinear biases. This study uses a single random forest (RF) to calibrate next-day (i.e., 12–36-h lead time) probabilistic precipitation forecasts over the contiguous United States (CONUS) from the Short-Range Ensemble Forecast System (SREF) with 16-km grid spacing and the High-Resolution Ensemble Forecast version 2 (HREFv2) with 3-km grid spacing. Random forest forecast probabilities (RFFPs) from each ensemble are compared against raw ensemble probabilities over 496 days from April 2017 to November 2018 using 16-fold cross validation. RFFPs are also compared against spatially smoothed ensemble probabilities since the raw SREF and HREFv2 probabilities are overconfident and undersample the true forecast probability density function. Probabilistic precipitation forecasts are evaluated at four precipitation thresholds ranging from 0.1 to 3 in. In general, RFFPs are found to have better forecast reliability and resolution, fewer spatial biases, and significantly greater Brier skill scores and areas under the relative operating characteristic curve compared to corresponding raw and spatially smoothed ensemble probabilities. The RFFPs perform best at the lower thresholds, which have a greater observed climatological frequency. Additionally, the RF-based postprocessing technique benefits the SREF more than the HREFv2, likely because the raw SREF forecasts contain more systematic biases than those from the raw HREFv2. It is concluded that the RFFPs provide a convenient, skillful summary of calibrated ensemble output and are computationally feasible to implement in real time. Advantages and disadvantages of ML-based postprocessing techniques are discussed.
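
A toy sketch of the setup: ensemble-derived predictors, an observed-exceedance label, and the forest's class probability as the calibrated forecast (RFFP). The predictors, data, and hyperparameters are invented and far simpler than the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the paper's data: a few ensemble-derived predictors
# per grid point and an observed-exceedance label.
rng = np.random.default_rng(4)
n = 5000
ens_mean = rng.gamma(2.0, 2.0, n)                     # ensemble mean precip (mm)
ens_max = ens_mean * rng.uniform(1.0, 3.0, n)         # ensemble max precip (mm)
raw_prob = (rng.uniform(0, 1, n) + (ens_mean > 5.0)) / 2.0  # raw member fraction
X = np.column_stack([ens_mean, ens_max, raw_prob])
y = (ens_mean + rng.normal(0.0, 2.0, n) > 5.0).astype(int)  # synthetic event

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X[:4000], y[:4000])
rffp = rf.predict_proba(X[4000:])[:, 1]   # calibrated "RFFP" probabilities
print(f"mean RFFP = {rffp.mean():.2f}, observed base rate = {y[4000:].mean():.2f}")
```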
28

Chitanvis, Shirish M. "Dynamical model for social distancing in the U.S. during the COVID-19 epidemic." Computational and Mathematical Biophysics 8, no. 1 (November 18, 2020): 141–49. http://dx.doi.org/10.1515/cmb-2020-0107.

Abstract:
Background: Social distancing has led to a “flattening of the curve” in many states across the U.S. This is part of a novel, massive, global social experiment which has served to mitigate the COVID-19 pandemic in the absence of a vaccine or effective anti-viral drugs. Hence it is important to be able to forecast hospitalizations reasonably accurately.
Methods: We propose on phenomenological grounds a random walk/generalized diffusion equation which incorporates the effect of social distancing to describe the temporal evolution of the probability of having a given number of hospitalizations. The probability density function is log-normal in the number of hospitalizations, which is useful in describing pandemics where the number of hospitalizations is very high.
Findings: We used this insight and data to make forecasts for states using Monte Carlo methods. Back-testing validates our approach, which yields good results about a week into the future. States are beginning to reopen at the time of submission of this paper, and our forecasts indicate possible precursors of increased hospitalizations. However, the trends we forecast for hospitalizations as well as infections thus far show moderate growth. Additionally, we studied the reproduction number R0 in New York (Italian strain) and California (Wuhan strain). We find that even if there is a difference in the transmission of the two strains, social distancing has been able to control the progression of COVID-19.
29

Hamill, Thomas M., and Jeffrey S. Whitaker. "Probabilistic Quantitative Precipitation Forecasts Based on Reforecast Analogs: Theory and Application." Monthly Weather Review 134, no. 11 (November 1, 2006): 3209–29. http://dx.doi.org/10.1175/mwr3237.1.

Abstract:
A general theory is proposed for the statistical correction of weather forecasts based on observed analogs. An estimate is sought for the probability density function (pdf) of the observed state, given today’s numerical forecast. Assume that an infinite set of reforecasts (hindcasts) and associated observations are available and that the climate is stable. Assume that it is possible to find a set of past model forecast states that are nearly identical to the current forecast state. With the dates of these past forecasts, the asymptotically correct probabilistic forecast can be formed from the distribution of observed states on those dates. Unfortunately, this general theory of analogs is not useful for estimating the global pdf with a limited set of reforecasts, for the chance of finding even one effectively identical forecast analog in that limited set is vanishingly small, and the climate is not stable. Nonetheless, approximations can be made to this theory to make it useful for statistically correcting weather forecasts. For instance, when estimating the state in a local region, choose the dates of analogs based on a pattern match of the local weather forecast; with a few decades of reforecasts, there are usually many close analogs. Several approximate analog techniques are then tested for their ability to skillfully calibrate probabilistic forecasts of 24-h precipitation amount. A 25-yr set of reforecasts from a reduced-resolution global forecast model is used. The analog techniques find past ensemble-mean forecasts in a local region that are similar to today’s ensemble-mean forecasts in that region. Probabilistic forecasts are formed from the analyzed weather on the dates of the past analogs. All of the analog techniques provide dramatic improvements in the Brier skill score relative to basing probabilities on the raw ensemble counts or the counts corrected for bias. However, the analog techniques did not produce guidance that was much more skillful than that produced by a logistic regression technique. Among the analog techniques tested, it was determined that small improvements to the baseline analog technique that matches ensemble-mean precipitation forecasts are possible. Forecast skill can be improved slightly by matching the ranks of the mean forecasts rather than the raw mean forecasts by using highly localized search regions for shorter-term forecasts and larger search regions for longer forecasts, by matching precipitable water in addition to precipitation amount, and by spatially smoothing the probabilities.
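
The localized analog approximation might be sketched as follows: rank past ensemble-mean forecasts by their pattern distance to today's forecast over a local region, then form the event probability from the verifying observations on the analog dates (toy data; the paper's matching and smoothing details are richer):

```python
import numpy as np

def analog_probabilities(current_fcst, past_fcsts, past_obs, threshold, k=50):
    """Find the k past forecasts closest to today's forecast pattern (RMS
    distance over the local region) and form the event probability from the
    observations on those dates."""
    dists = np.sqrt(((past_fcsts - current_fcst) ** 2).mean(axis=1))
    nearest = np.argsort(dists)[:k]
    return (past_obs[nearest] > threshold).mean()

rng = np.random.default_rng(5)
past_fcsts = rng.gamma(2.0, 3.0, (9000, 16))     # toy 25-yr archive, 16 grid points
past_obs = past_fcsts.mean(axis=1) + rng.normal(0.0, 2.0, 9000)
today = rng.gamma(2.0, 3.0, 16)
print(f"P(>10 mm) = {analog_probabilities(today, past_fcsts, past_obs, 10.0):.2f}")
```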
30

Unger, David A., Huug van den Dool, Edward O’Lenic, and Dan Collins. "Ensemble Regression." Monthly Weather Review 137, no. 7 (July 2009): 2365–79. http://dx.doi.org/10.1175/2008mwr2605.1.

Abstract:
A regression model was developed for use with ensemble forecasts. Ensemble members are assumed to represent a set of equally likely solutions, one of which will best fit the observation. If standard linear regression assumptions apply to the best member, then a regression relationship can be derived between the full ensemble and the observation without explicitly identifying the best member for each case. The ensemble regression equation is equivalent to linear regression between the ensemble mean and the observation, but is applied to each member of the ensemble. The “best member” error variance is defined in terms of the correlation between the ensemble mean and the observations, their respective variances, and the ensemble spread. A probability density function representing the ensemble prediction is obtained from the normalized sum of the best-member error distribution applied to the regression forecast from each ensemble member. Ensemble regression was applied to National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) forecasts of seasonal mean Niño-3.4 SSTs on historical forecasts for the years 1981–2005. The skill of the ensemble regression was about the same as that of the linear regression on the ensemble mean when measured by the continuous ranked probability score (CRPS), and both methods produced reliable probabilities. The CFS spread appears slightly too high for its skill, and the CRPS of the CFS predictions can be slightly improved by reducing its ensemble spread to about 0.8 of its original value prior to regression calibration.
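
A minimal sketch of the ensemble-regression PDF as described: the regression between ensemble mean and observation applied to every member, with best-member error kernels summed and normalized. Gaussian kernels and all numbers below are assumptions:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def ensemble_regression_pdf(x, members, a, b, s_best):
    """Apply the mean-on-observation regression (y ≈ a + b*mean) to every
    member and return the normalized sum of best-member error kernels of
    width s_best centered on the regressed members."""
    centers = a + b * np.asarray(members)
    return norm.pdf(x[None, :], centers[:, None], s_best).mean(axis=0)

x = np.linspace(24.0, 30.0, 301)
pdf = ensemble_regression_pdf(x, members=[26.8, 27.1, 26.5, 27.6, 26.9],
                              a=0.4, b=0.98, s_best=0.35)  # illustrative values
mask = x > 27.0
print(f"P(Nino-3.4 SST > 27 °C) ≈ {trapezoid(pdf[mask], x[mask]):.2f}")
```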
31

Regonda, S., B. Rajagopalan, U. Lall, M. Clark, and Y. I. Moon. "Local polynomial method for ensemble forecast of time series." Nonlinear Processes in Geophysics 12, no. 3 (March 17, 2005): 397–406. http://dx.doi.org/10.5194/npg-12-397-2005.

Abstract:
We present a nonparametric approach based on local polynomial regression for ensemble forecasting of time series. The state space is first reconstructed by embedding the univariate time series of the response variable in a space of dimension D with a delay time τ. To obtain a forecast from a given time point t, three steps are involved: (i) the current state of the system is mapped onto the state space as the feature vector; (ii) a small number K = αn of neighbors (and their future evolution) to the feature vector are identified in the state space, where α ∈ (0, 1] is a fraction of the data and n is the data length; and (iii) a polynomial of order p is fitted to the identified neighbors and used for prediction. A suite of parameter combinations (D, τ, α, p) is selected based on an objective criterion, the Generalized Cross Validation (GCV). All of the selected parameter combinations are then used to issue a T-step iterated forecast starting from the current time t, thus generating an ensemble forecast from which the forecast probability density function (PDF) can be obtained. The ensemble approach improves upon the traditional method of providing a single mean forecast by quantifying the forecast uncertainty, and for short, noisy data it can provide better forecasts. We demonstrate the utility of this approach on two synthetic (Henon and Lorenz attractors) and two real datasets (Great Salt Lake biweekly volume and NINO3 index). This framework can also be used to forecast a vector of response variables based on a vector of predictors.
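
A sketch of one forecast step for a single parameter combination (D, tau, alpha) with a local linear fit (the p = 1 case); looping over a GCV-selected suite of combinations and iterating T steps would yield the ensemble described above:

```python
import numpy as np

def local_poly_forecast(x, D=3, tau=1, alpha=0.1):
    """One-step local-linear forecast: embed the series with dimension D and
    delay tau, pick the K = alpha*n nearest neighbors of the current state,
    and regress their one-step evolution on the neighbor states."""
    x = np.asarray(x, float)
    t = np.arange((D - 1) * tau, len(x) - 1)             # usable time indices
    states = np.column_stack([x[t - j * tau] for j in range(D)])
    targets = x[t + 1]                                   # one-step evolution
    current = np.array([x[len(x) - 1 - j * tau] for j in range(D)])
    k = max(D + 1, int(alpha * len(t)))
    nn = np.argsort(np.linalg.norm(states - current, axis=1))[:k]
    A = np.column_stack([np.ones(k), states[nn]])        # local linear design
    coef, *_ = np.linalg.lstsq(A, targets[nn], rcond=None)
    return coef[0] + current @ coef[1:]

series = np.sin(np.linspace(0, 60, 600)) + 0.05 * np.random.default_rng(6).normal(size=600)
print(f"one-step forecast: {local_poly_forecast(series):.3f}")
```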
32

Sinha, Sonalika, and Bandi Kamaiah. "Estimating Option-implied Risk Aversion for Indian Markets." IIM Kozhikode Society & Management Review 6, no. 1 (January 2017): 90–97. http://dx.doi.org/10.1177/2277975216677600.

Abstract:
What do nearly 1.5 lakh (150,000) observations of options data say about the risk preferences of Indian investors? This paper explores a nonparametric technique to compute probability density functions (PDFs) directly from NIFTY 50 option prices in India, based on the utility preferences of the representative investor. The use of probability density functions to estimate investor expectations of the distribution of future levels of the underlying assets has gained tremendous popularity over the last decade. Studying option prices provides information about the market participants' probability assessment of the future outcome of the underlying asset. We compare the forecast ability of the risk-neutral PDF and risk-adjusted density functions to arrive at a unique index of relative risk aversion for Indian markets. Results indicate that risk-adjusted PDFs are reasonably better forecasts of investor expectations of future levels of the underlying assets. We find that Indian investors are not neutral to risk, contrary to the theoretical assumption of risk-neutrality among investors. The computed time series of relative risk aversion overcomes the limitations of the VIX (implied volatility index) to yield a more reliable index, particularly useful for the Indian markets. Validity of the computed index is established by comparison with existing measures of risk, and the relationships are found to be consistent with market expectations.
APA, Harvard, Vancouver, ISO, and other styles
33

Madadgar, Shahrbanou, and Hamid Moradkhani. "A Bayesian Framework for Probabilistic Seasonal Drought Forecasting." Journal of Hydrometeorology 14, no. 6 (November 22, 2013): 1685–705. http://dx.doi.org/10.1175/jhm-d-13-010.1.

Full text
Abstract:
Abstract Seasonal drought forecasting is presented within a multivariate probabilistic framework. The standardized streamflow index (SSI) is used to characterize hydrologic droughts with different severities across the Gunnison River basin in the upper Colorado River basin. Since streamflow, and subsequently hydrologic droughts, are autocorrelated variables in time, this study presents a multivariate probabilistic approach using copula functions to perform drought forecasting within a Bayesian framework. The spring flow (April–June) is considered as the forecast variable and found to have the highest correlations with the previous winter (January–March) and fall (October–December). Incorporating copula functions into the Bayesian framework, two different forecast models are established to estimate the hydrologic drought of spring given either the previous winter (first-order conditional model) or previous winter and fall (second-order conditional model). Conditional probability density functions (PDFs) and cumulative distribution functions (CDFs) are generated to characterize the significant probabilistic features of spring droughts. According to forecasts, the spring drought is more sensitive to the winter status than the fall status, which approves the results of prior correlation analysis. The 90% predictive bound of the spring-flow forecast indicates the efficiency of the proposed model in estimating the spring droughts. The proposed model is compared with the conventional forecast model, the ensemble streamflow prediction (ESP), and it is found that their forecasts are generally in agreement with each other. However, the forecast uncertainty of the new method is more reliable than the ESP method. The new probabilistic forecast model can provide insights to water resources managers and stakeholders to facilitate the decision making and developing drought mitigation plans.
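As a concrete, simplified illustration of the first-order conditional model: suppose the winter–spring dependence is captured by a Gaussian copula. Since standardized indices such as the SSI have standard-normal margins by construction, the conditional forecast PDF is then available in closed form. The copula family and correlation value below are assumptions for illustration, not the paper's fitted choices.

```python
import numpy as np
from scipy import stats

rho = 0.6            # winter-spring dependence (assumed)
ssi_winter = -1.2    # observed winter SSI (moderate drought)

# With standard-normal margins, the Gaussian-copula conditional is the
# bivariate-normal conditional: z_spring | z_winter ~ N(rho*z_w, 1 - rho^2).
cond = stats.norm(loc=rho * ssi_winter, scale=np.sqrt(1 - rho ** 2))
print("P(spring drought, SSI < -0.8):", cond.cdf(-0.8))
print("90% predictive bound:", cond.ppf([0.05, 0.95]))
```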
APA, Harvard, Vancouver, ISO, and other styles
34

Aruna, S., and L. V. Nandakishore. "Analysis of Time-Series Trends and ARIMA Models to Forecast COVID-19 Cases." Psychology and Education Journal 58, no. 2 (February 20, 2021): 6621–28. http://dx.doi.org/10.17762/pae.v58i2.3196.

Full text
Abstract:
COVID-19, caused by a novel coronavirus, originated in Wuhan, China, and turned into a pandemic resulting in a large number of deaths and loss of livelihoods. It is vital to determine how the number of cases propagates so that future pandemics can be tackled scientifically; the pandemic can then be controlled systematically using efficient health care systems. It is difficult to predict pandemic propagation over a long period of time due to various factors. In this paper an analysis is made for short periods using statistical tools such as the fitted probability curve and the probability density function. Forecasting of COVID-19 cases is done using time series trend analysis and ARIMA models. Tests of hypotheses for the difference of means and of standard deviations between the actual and forecasted values, at 99% confidence, showed no significant difference between them.
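A minimal sketch of the ARIMA forecasting step, using statsmodels; the order (1, 1, 1), the 14-day horizon, and the synthetic case series are assumptions for illustration, not the paper's fitted choices.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
cases = np.cumsum(rng.poisson(50, size=120)).astype(float)  # synthetic cumulative cases

model = ARIMA(cases, order=(1, 1, 1)).fit()  # order assumed; select via AIC in practice
forecast = model.get_forecast(steps=14)
print(forecast.predicted_mean)               # point forecast
print(forecast.conf_int(alpha=0.01))         # 99% interval, matching the abstract's CI
```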
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Dan-Bi, Hye-Yeong Chun, and Jung-Hoon Kim. "Evaluation of Multimodel-Based Ensemble Forecasts for Clear-Air Turbulence." Weather and Forecasting 35, no. 2 (April 1, 2019): 507–21. http://dx.doi.org/10.1175/waf-d-19-0155.1.

Full text
Abstract:
Abstract To provide more consistent and reliable upper-level turbulence forecasts, seven global numerical weather prediction (NWP) model outputs are used to construct multimodel-based ensemble forecasts for clear-air turbulence (CAT). We used the updated version of the well-known Ellrod index, the Ellrod–Knox index (EKI), which is currently an operational CAT diagnostic for the significant weather chart at one of the World Area Forecast Centers. In this study, we tested two types of ensemble forecasts. The first is an ensemble mean of all EKI forecasts from the NWP models. The second is a probabilistic forecast computed by counting how many individual EKI values from the seven NWP models exceed a certain EKI threshold at each grid point. Here, to calibrate the best EKI thresholds for moderate-or-greater CAT intensity, the individual EKI thresholds, which vary with the resolutions and configurations of the NWP models, are selected using the 95th, 98th, and 98th percentiles of the probability density functions for the EKIs derived from the seven NWP models over a 6-month period. Finally, the performance of both the ensemble mean and probabilistic forecasts is evaluated against observations of in situ aircraft eddy dissipation rate and pilot reports. The ensemble mean forecast shows better performance skill than the individual EKI forecasts. The reliability diagram for the probabilistic forecast shows better reliability when high-percentile EKI values are used as the threshold, although the forecast still overestimates CAT events, likely owing to limited observations and ensemble spread.
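The two ensemble products are straightforward to express in code. The sketch below uses synthetic stand-in EKI fields; per-model thresholds come from a high climatological percentile, and the probabilistic product is the fraction of models exceeding their own threshold.

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n_grid = 7, 1000
eki = rng.gamma(2.0, 1.0, size=(n_models, n_grid))  # stand-in for EKI fields

ens_mean = eki.mean(axis=0)                         # product 1: ensemble mean

# Product 2: per-model thresholds from a climatological percentile (e.g. 98th);
# the forecast probability is the fraction of models above their own threshold.
thresholds = np.percentile(eki, 98, axis=1)         # one threshold per model
prob_mog = (eki > thresholds[:, None]).mean(axis=0) # probability of MOG turbulence
print(prob_mog.max())
```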
APA, Harvard, Vancouver, ISO, and other styles
36

Zhu, Jiangshan, Fanyou Kong, Lingkun Ran, and Hengchi Lei. "Bayesian Model Averaging with Stratified Sampling for Probabilistic Quantitative Precipitation Forecasting in Northern China during Summer 2010." Monthly Weather Review 143, no. 9 (August 31, 2015): 3628–41. http://dx.doi.org/10.1175/mwr-d-14-00301.1.

Full text
Abstract:
Abstract To study the impact of training sample heterogeneity on the performance of Bayesian model averaging (BMA), two BMA experiments were performed on probabilistic quantitative precipitation forecasts (PQPFs) over the northern China region in July and August 2010, generated from an 11-member short-range ensemble forecasting system. One experiment, as in many conventional BMA studies, used an overall training sample that consisted of all available cases in the training period, while the second experiment used stratified sampling BMA, first dividing all available training cases into subsamples according to their ensemble spread and then performing BMA on each subsample. The results showed that ensemble spread is a good criterion for dividing ensemble precipitation cases into subsamples, and that the subsamples have different statistical properties. Pooling the subsamples together forms a heterogeneous overall sample. Conventional BMA is incapable of interpreting heterogeneous samples and produces unreliable PQPFs, underestimating the forecast probability for high thresholds and the local rainfall maxima in BMA percentile forecasts. BMA with stratified sampling according to ensemble spread overcomes the problem reasonably well, producing sharper predictive probability density functions and BMA percentile forecasts, and more reliable PQPFs than the conventional BMA approach. The continuous ranked probability scores, Brier skill scores, and reliability diagrams of the two BMA experiments were examined for all available forecast days, along with a logistic regression experiment. Stratified sampling BMA outperformed the raw ensemble and conventional BMA in all verifications, and also showed better skill than logistic regression in low-threshold forecasts.
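A sketch of the stratification idea: split training cases by ensemble spread, then fit a separate BMA model per stratum. For brevity, a Gaussian BMA kernel fitted by EM stands in for the precipitation-appropriate kernels used in the paper, and all data are synthetic; this shows the sampling design, not the paper's estimator.

```python
import numpy as np
from scipy.stats import norm

def fit_gaussian_bma(F, y, n_iter=200):
    """EM for p(y) = sum_k w_k N(y; F[:, k], sigma^2). F has shape (n_cases, K)."""
    n, K = F.shape
    w, sigma = np.full(K, 1.0 / K), y.std() + 1e-6
    for _ in range(n_iter):
        dens = w * norm.pdf(y[:, None], loc=F, scale=sigma)  # (n, K)
        z = dens / dens.sum(axis=1, keepdims=True)           # E-step: memberships
        w = z.mean(axis=0)                                   # M-step: weights
        sigma = np.sqrt((z * (y[:, None] - F) ** 2).sum() / n)
    return w, sigma

rng = np.random.default_rng(3)
F = rng.gamma(2, 5, size=(600, 11))                 # synthetic 11-member forecasts
y = F.mean(axis=1) + rng.normal(0, F.std(axis=1))   # synthetic obs, spread-dependent error

spread = F.std(axis=1)
edges = np.quantile(spread, [0, 1 / 3, 2 / 3, 1])   # stratify cases by ensemble spread
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (spread >= lo) & (spread <= hi)
    w, sigma = fit_gaussian_bma(F[m], y[m])
    print(f"spread in [{lo:.1f}, {hi:.1f}]: fitted sigma = {sigma:.2f}")
```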
APA, Harvard, Vancouver, ISO, and other styles
37

Arthern, Robert J. "Exploring the use of transformation group priors and the method of maximum relative entropy for Bayesian glaciological inversions." Journal of Glaciology 61, no. 229 (2015): 947–62. http://dx.doi.org/10.3189/2015jog15j050.

Full text
Abstract:
Abstract Ice-sheet models can be used to forecast ice losses from Antarctica and Greenland, but to fully quantify the risks associated with sea-level rise, probabilistic forecasts are needed. These require estimates of the probability density function (PDF) for various model parameters (e.g. the basal drag coefficient and ice viscosity). To infer such parameters from satellite observations it is common to use inverse methods. Two related approaches are in use: (1) minimization of a cost function that describes the misfit to the observations, often accompanied by explicit or implicit regularization, or (2) use of Bayes’ theorem to update prior assumptions about the probability of parameters. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. One way to specify prior PDFs more objectively is by deriving transformation group priors that are invariant to symmetries of the problem, and then maximizing relative entropy, subject to any additional constraints. Here we investigate the application of these methods to the derivation of priors for a Bayesian approach to an idealized glaciological inverse problem.
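For reference, the maximum-relative-entropy step the abstract alludes to has a standard form: given an invariant measure m fixed by the transformation-group argument and a set of moment constraints, the prior is an exponential tilt of m. A compact statement (the constraint functions f_j are generic placeholders):

```latex
% Maximum relative entropy: choose the prior p given the invariant measure m
% and moment constraints \int p(x) f_j(x)\,dx = F_j. The stationary solution
% is an exponential tilt of m with Lagrange multipliers \lambda_j.
\max_{p}\; -\int p(x)\,\ln\frac{p(x)}{m(x)}\,dx
\quad\text{s.t.}\quad \int p(x)\,dx = 1,\qquad \int p(x)\,f_j(x)\,dx = F_j,
\qquad\Longrightarrow\qquad
p(x) \;\propto\; m(x)\,\exp\!\Big(\textstyle\sum_j \lambda_j f_j(x)\Big).
```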
APA, Harvard, Vancouver, ISO, and other styles
38

Min, Changgi. "Investigating the Effect of Uncertainty Characteristics of Renewable Energy Resources on Power System Flexibility." Applied Sciences 11, no. 12 (June 10, 2021): 5381. http://dx.doi.org/10.3390/app11125381.

Full text
Abstract:
This study investigates the effect of the uncertainty characteristics of renewable energy resources on the flexibility of a power system. The more renewable energy resources are introduced, the greater the imbalance between load and generation, so securing system flexibility is becoming important. The degree of flexibility cannot be independent of the uncertainty of the power system, yet most existing studies on flexibility have not explicitly considered the effects of uncertainty characteristics. Therefore, this study proposes a method to quantitatively analyze the effect of uncertainty characteristics on power system flexibility. Here, the uncertainty of the power system is the net load forecast error, which can be represented as a probability distribution; of its characteristics, skewness and kurtosis were considered. The net load forecast error was modeled with a Pearson distribution, which has been widely used to generate probability density functions with specified skewness and kurtosis. Scenarios for the forecast net load, skewness, and kurtosis were generated, and their effects on flexibility were evaluated. The simulation results for scenarios based on a modified IEEE-RTS-96 revealed that skewness has a stronger effect on flexibility than kurtosis. The proposed method can help system operators respond efficiently to changes in the uncertainty characteristics of renewable energy resources.
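A reduced sketch of the scenario sweep: scipy exposes only the type III member of the Pearson family (parameterized by skewness), so the example below illustrates skewed net-load error scenarios only; matching kurtosis as well, as in the study, requires the full Pearson system, which scipy does not provide. The MW scale and the reserve quantile are assumptions.

```python
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(4)
mu, sd = 0.0, 100.0                                  # MW-scale error (assumed)
for skew in (-1.0, 0.0, 1.0):                        # scenario sweep over skewness
    err = pearson3.rvs(skew, loc=mu, scale=sd, size=10_000, random_state=rng)
    # Flexibility proxy: upward reserve needed to cover 97.5% of errors.
    print(f"skew {skew:+.1f}: reserve = {np.quantile(err, 0.975):.1f} MW")
```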
APA, Harvard, Vancouver, ISO, and other styles
39

He, Shaokun, Shenglian Guo, Zhangjun Liu, Jiabo Yin, Kebing Chen, and Xushu Wu. "Uncertainty analysis of hydrological multi-model ensembles based on CBP-BMA method." Hydrology Research 49, no. 5 (March 1, 2018): 1636–51. http://dx.doi.org/10.2166/nh.2018.160.

Full text
Abstract:
Abstract Quantification of the inherent uncertainty in hydrologic forecasting is essential for flood control and water resources management. Existing approaches, such as Bayesian model averaging (BMA), the hydrologic uncertainty processor (HUP), and copula-BMA (CBMA), aim to develop reliable probabilistic forecasts that characterize the uncertainty induced by model structure. In the probability forecast framework, these approaches either assume the probability density function (PDF) follows a certain distribution or are unable to reduce bias effectively for complex hydrological forecasts. To overcome these limitations, a copula Bayesian processor associated with BMA (CBP-BMA) is proposed with ensemble lumped hydrological models. Compared with the BMA and CBMA methods, the CBP-BMA method relaxes any assumption on the distribution of conditional PDFs. Several evaluation criteria, such as containing ratio, average bandwidth, and average deviation amplitude, are utilized to evaluate model performance. The case study results demonstrate that the CBP-BMA method can improve hydrological forecasting precision, with containing ratios above 90% that exceed those of BMA and CBMA by 4.4% and 2.2% during the calibration period and by 3.2% and 1.7% during the validation period, respectively. The proposed CBP-BMA method provides an alternative approach for uncertainty estimation of hydrological multi-model forecasts.
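A sketch of the interval-based evaluation criteria named above, as they are commonly defined in the hydrologic uncertainty literature: containing ratio (fraction of observations inside the predictive bounds), average bandwidth (mean interval width), and average deviation amplitude (mean absolute distance between the interval midpoint and the observation). All inputs are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
obs = rng.gamma(3, 100, size=365)               # synthetic daily flows
lower = obs * rng.uniform(0.6, 0.9, obs.size)   # synthetic 5% bound
upper = obs * rng.uniform(1.1, 1.6, obs.size)   # synthetic 95% bound

cr = np.mean((obs >= lower) & (obs <= upper))       # containing ratio
bandwidth = np.mean(upper - lower)                  # average bandwidth
dev = np.mean(np.abs(0.5 * (upper + lower) - obs))  # average deviation amplitude
print(f"CR = {cr:.1%}, B = {bandwidth:.1f}, D = {dev:.1f}")
```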
APA, Harvard, Vancouver, ISO, and other styles
40

Bishop, Craig H., and Kevin T. Shanley. "Bayesian Model Averaging’s Problematic Treatment of Extreme Weather and a Paradigm Shift That Fixes It." Monthly Weather Review 136, no. 12 (December 1, 2008): 4641–52. http://dx.doi.org/10.1175/2008mwr2565.1.

Full text
Abstract:
Abstract Methods of ensemble postprocessing in which continuous probability density functions are constructed from ensemble forecasts by centering functions around each of the ensemble members have come to be called Bayesian model averaging (BMA) or “dressing” methods. Here idealized ensemble forecasting experiments are used to show that these methods are liable to produce systematically unreliable probability forecasts of climatologically extreme weather. It is argued that the failure of these methods is linked to an assumption that the distribution of truth given the forecast can be sampled by adding stochastic perturbations to state estimates, even when these state estimates have a realistic climate. It is shown that this assumption is incorrect, and it is argued that such dressing techniques better describe the likelihood distribution of historical ensemble-mean forecasts given the truth for certain values of the truth. This paradigm shift leads to an approach that incorporates prior climatological information into BMA ensemble postprocessing through Bayes’s theorem. This new approach is shown to cure BMA’s ill treatment of extreme weather by providing a posterior BMA distribution whose probabilistic forecasts are reliable for both extreme and nonextreme weather forecasts.
APA, Harvard, Vancouver, ISO, and other styles
41

Legrand, R., Y. Michel, and T. Montmerle. "Diagnosing non-Gaussianity of forecast and analysis errors in a convective scale model." Nonlinear Processes in Geophysics Discussions 2, no. 4 (July 18, 2015): 1061–90. http://dx.doi.org/10.5194/npgd-2-1061-2015.

Full text
Abstract:
Abstract. In numerical weather prediction, the problem of estimating initial conditions is usually based on a Bayesian framework. Two common derivations respectively lead to the Kalman filter and to variational approaches. They rely on either assumptions of linearity or assumptions of Gaussianity of the probability density functions of both observation and background errors. In practice, linearity and Gaussianity of errors are tied to one another, in the sense that a nonlinear model will yield non-Gaussian probability density functions, and that standard methods may perform poorly in the context of non-Gaussian probability density functions. This study aims to describe some aspects of non-Gaussianity of forecast and analysis errors in a convective scale model using a Monte Carlo approach based on an ensemble of data assimilations. For this purpose, an ensemble of 90 members of cycled perturbed assimilations has been run over a highly precipitating case of interest. Non-Gaussianity is measured using the K2 statistic from the D'Agostino test, which is related to the sum of the squares of univariate skewness and kurtosis. Results confirm that specific humidity is the least Gaussian variable according to that measure, and also that non-Gaussianity is generally more pronounced in the boundary layer and in cloudy areas. The mass control variables used in our data assimilation, namely vorticity and divergence, also show distinct non-Gaussian behavior. It is shown that while non-Gaussianity increases with forecast lead time, it is efficiently reduced by the data assimilation step, especially in areas well covered by observations. Our findings may have implications for the choice of the control variables.
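The K2 statistic used above is exactly what scipy's D'Agostino–Pearson normality test returns: K2 = Z(skewness)² + Z(kurtosis)². A minimal sketch contrasting a Gaussian sample with a positively skewed, humidity-like one, both sized like the 90-member ensemble:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
gaussian = rng.normal(size=90)                # 90 members, as in the ensemble
humidity_like = rng.gamma(2.0, 1.0, size=90)  # bounded, positively skewed

for name, sample in [("gaussian", gaussian), ("humidity-like", humidity_like)]:
    k2, p = stats.normaltest(sample)          # D'Agostino-Pearson K2 test
    print(f"{name}: K2 = {k2:.2f}, p = {p:.3f}")
```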
APA, Harvard, Vancouver, ISO, and other styles
42

Ranjan, Rakesh, Subrata Kumar Ghosh, and Manoj Kumar. "Modelling of wear debris in planetary gear drive." Industrial Lubrication and Tribology 71, no. 2 (March 11, 2019): 199–204. http://dx.doi.org/10.1108/ilt-03-2018-0121.

Full text
Abstract:
Purpose The probability distributions of the major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and modelled. The paper aims to find an appropriate probability distribution model to forecast the kind of wear particles present at different running hours of the machine. Design/methodology/approach Used gear oil of the planetary gear box of a slab caster was drained out and the box was charged with fresh oil of grade EP-460. Six chronological oil samples were collected at different time intervals between 480 and 1,992 h of machine running. The oil samples were filtered to separate wear particles, and a microscopic study of the wear debris was carried out at 100X magnification. Statistical modelling of the wear debris distributions was done using Weibull and exponential probability distribution models, and a comparison was made among the actual, Weibull, and exponential probability distributions of the major length and aspect ratio of the wear particles. Findings The distribution of the major length of the wear particles was found to be closer to the exponential probability density function, whereas the Weibull probability density function fitted the distribution of the aspect ratio better. Originality/value The developed model can be used to analyse the distribution of the major length and aspect ratio of wear debris present in the planetary gear box of a slab caster machine.
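A sketch of the model-comparison step: fit exponential and Weibull distributions to the measured major lengths and compare log-likelihoods. The length data here are synthetic stand-ins for the debris measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
major_len = 5.0 + rng.exponential(scale=40.0, size=300)  # microns, synthetic

# Pin the location at zero (lengths are positive) and fit both candidate models.
exp_params = stats.expon.fit(major_len, floc=0)
wei_params = stats.weibull_min.fit(major_len, floc=0)

ll_exp = stats.expon.logpdf(major_len, *exp_params).sum()
ll_wei = stats.weibull_min.logpdf(major_len, *wei_params).sum()
print(f"exponential logL = {ll_exp:.1f}, Weibull logL = {ll_wei:.1f}")
```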
APA, Harvard, Vancouver, ISO, and other styles
43

Hu, Yiming, Maurice J. Schmeits, Schalk Jan van Andel, Jan S. Verkade, Min Xu, Dimitri P. Solomatine, and Zhongmin Liang. "A Stratified Sampling Approach for Improved Sampling from a Calibrated Ensemble Forecast Distribution." Journal of Hydrometeorology 17, no. 9 (September 1, 2016): 2405–17. http://dx.doi.org/10.1175/jhm-d-15-0205.1.

Full text
Abstract:
Abstract Before using the Schaake shuffle or empirical copula coupling (ECC) to reconstruct the dependence structure for postprocessed ensemble meteorological forecasts, a necessary step is to draw discrete samples from each postprocessed continuous probability density function (pdf), which is the focus of this paper. In addition to the equidistant quantiles (EQ) and independent random (IR) sampling methods commonly used at present, the stratified sampling (SS) method is proposed. The performance of the three sampling methods is compared using calibrated GFS ensemble precipitation reforecasts over the Xixian basin in China. The ensemble reforecasts are first calibrated using heteroscedastic extended logistic regression (HELR), and then the three sampling methods are used to sample the calibrated pdfs with a varying number of discrete samples. Finally, the effect of the sampling method on the reconstruction of ensemble members with preserved space dependence structure is analyzed by using EQ, IR, and SS in ECC to reconstruct postprocessed ensemble members for four stations in the Xixian basin. There are three main results. 1) The HELR model has a significant improvement over the raw ensemble forecast. It clearly improves the mean and dispersion of the predictive distribution. 2) Compared to EQ and IR, SS better covers the tails of the calibrated pdfs and yields better dispersion of the calibrated ensemble forecasts. In terms of probabilistic verification metrics like the ranked probability skill score (RPSS), SS is slightly better than EQ and clearly better than IR, while in terms of the deterministic verification metric, root-mean-square error, EQ is slightly better than SS. 3) ECC-SS, ECC-EQ, and ECC-IR all calibrate the raw ensemble forecast, but ECC-SS shows better dispersion than ECC-EQ and ECC-IR in this study.
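The three sampling schemes reduce to three ways of choosing probability levels before inverting the calibrated CDF: EQ uses fixed mid-quantiles, IR draws unconstrained uniforms, and SS draws one uniform per equal-probability stratum, which is why it covers the tails better than EQ. A minimal sketch with an illustrative gamma pdf standing in for a calibrated forecast distribution:

```python
import numpy as np
from scipy import stats

dist = stats.gamma(a=2.0, scale=5.0)  # stand-in for a calibrated predictive pdf
m = 10                                # number of discrete samples
rng = np.random.default_rng(8)

u_eq = (np.arange(m) + 0.5) / m                   # EQ: fixed mid-quantiles
u_ir = rng.uniform(0, 1, size=m)                  # IR: unconstrained uniforms
u_ss = (np.arange(m) + rng.uniform(0, 1, m)) / m  # SS: one uniform per stratum

for name, u in [("EQ", u_eq), ("IR", u_ir), ("SS", u_ss)]:
    samples = dist.ppf(u)                         # inverse-CDF sampling
    print(f"{name}: min = {samples.min():5.2f}, max = {samples.max():5.2f}")
```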
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Jared A., Walter C. Kolczynski, Tyler C. McCandless, and Sue Ellen Haupt. "An Objective Methodology for Configuring and Down-Selecting an NWP Ensemble for Low-Level Wind Prediction." Monthly Weather Review 140, no. 7 (July 1, 2012): 2270–86. http://dx.doi.org/10.1175/mwr-d-11-00065.1.

Full text
Abstract:
Abstract Ensembles of numerical weather prediction (NWP) model predictions are used for a variety of forecasting applications. Such ensembles quantify the uncertainty of the prediction because the spread in the ensemble predictions is correlated to forecast uncertainty. For atmospheric transport and dispersion and wind energy applications in particular, the NWP ensemble spread should accurately represent uncertainty in the low-level mean wind. To adequately sample the probability density function (PDF) of the forecast atmospheric state, it is necessary to account for several sources of uncertainty. Limited computational resources constrain the size of ensembles, so choices must be made about which members to include. No known objective methodology exists to guide users in choosing which combinations of physics parameterizations to include in an NWP ensemble, however. This study presents such a methodology. The authors build an NWP ensemble using the Advanced Research Weather Research and Forecasting Model (ARW-WRF). This 24-member ensemble varies physics parameterizations for 18 randomly selected 48-h forecast periods in boreal summer 2009. Verification focuses on 2-m temperature and 10-m wind components at forecast lead times from 12 to 48 h. Various statistical guidance methods are employed for down-selection, calibration, and verification of the ensemble forecasts. The ensemble down-selection is accomplished with principal component analysis. The ensemble PDF is then statistically dressed, or calibrated, using Bayesian model averaging. The postprocessing techniques presented here result in a recommended down-selected ensemble that is about half the size of the original ensemble yet produces similar forecast performance, and still includes critical diversity in several types of physics schemes.
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Di, Christa Peters-Lidard, Wei-Kuo Tao, and Walter Petersen. "Evaluation of NU-WRF Rainfall Forecasts for IFloodS." Journal of Hydrometeorology 17, no. 5 (April 14, 2016): 1317–35. http://dx.doi.org/10.1175/jhm-d-15-0134.1.

Full text
Abstract:
Abstract The Iowa Flood Studies (IFloodS) campaign was conducted in eastern Iowa as a pre-GPM-launch campaign from 1 May to 15 June 2013. During the campaign period, real-time forecasts were conducted utilizing the NASA-Unified Weather Research and Forecasting (NU-WRF) Model to support the daily weather briefing. In this study, two sets of NU-WRF rainfall forecasts are conducted with different soil initializations, one from the spatially interpolated North American Mesoscale Forecast System (NAM) and the other produced by the Land Information System (LIS) using daily analysis of bias-corrected stage IV data. Both forecasts are then compared with NAM, stage IV, and Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimation (QPE) to understand the impact of land surface initialization on the predicted precipitation. In general, both NU-WRF runs are able to reproduce individual peaks of precipitation at the right time. NU-WRF also reproduces the spatial distribution of rainfall better than NAM. Further sensitivity tests show that the high-resolution runs (1 and 3 km) capture the precipitation events better than their coarser-resolution counterpart (9 km). Finally, the two sets of NU-WRF simulations produce very similar rainfall characteristics in bias, spatial and temporal correlation scores, and probability density function. The land surface initialization does not show a significant impact on short-term rainfall forecasts, largely because of high soil moisture during the field campaign period.
APA, Harvard, Vancouver, ISO, and other styles
46

Sloughter, J. Mc Lean, Adrian E. Raftery, Tilmann Gneiting, and Chris Fraley. "Probabilistic Quantitative Precipitation Forecasting Using Bayesian Model Averaging." Monthly Weather Review 135, no. 9 (September 1, 2007): 3209–20. http://dx.doi.org/10.1175/mwr3441.1.

Full text
Abstract:
Abstract Bayesian model averaging (BMA) is a statistical way of postprocessing forecast ensembles to create predictive probability density functions (PDFs) for weather quantities. It represents the predictive PDF as a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are posterior probabilities of the models generating the forecasts and reflect the forecasts’ relative contributions to predictive skill over a training period. It was developed initially for quantities whose PDFs can be approximated by normal distributions, such as temperature and sea level pressure. BMA does not apply in its original form to precipitation, because the predictive PDF of precipitation is nonnormal in two major ways: it has a positive probability of being equal to zero, and it is skewed. In this study BMA is extended to probabilistic quantitative precipitation forecasting. The predictive PDF corresponding to one ensemble member is a mixture of a discrete component at zero and a gamma distribution. Unlike methods that predict the probability of exceeding a threshold, BMA gives a full probability distribution for future precipitation. The method was applied to daily 48-h forecasts of 24-h accumulated precipitation in the North American Pacific Northwest in 2003–04 using the University of Washington mesoscale ensemble. It yielded predictive distributions that were calibrated and sharp. It also gave probability of precipitation forecasts that were much better calibrated than those based on consensus voting of the ensemble members. It gave better estimates of the probability of high-precipitation events than logistic regression on the cube root of the ensemble mean.
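A sketch of the per-member predictive distribution described above: a discrete component at zero plus a gamma density for positive accumulations, mixed with BMA weights. The weights, probability-of-precipitation (PoP) values, and gamma parameters below are assumed, not fitted.

```python
import numpy as np
from scipy import stats

def member_cdf(y, pop, shape, scale):
    """P(Y <= y) for one member: mass (1 - pop) at zero, gamma above it."""
    y = np.asarray(y, dtype=float)
    return np.where(y < 0, 0.0, (1 - pop) + pop * stats.gamma.cdf(y, shape, scale=scale))

weights = np.array([0.5, 0.3, 0.2])                           # assumed BMA weights
params = [(0.7, 1.2, 4.0), (0.5, 1.5, 3.0), (0.9, 1.0, 6.0)]  # (PoP, shape, scale)
y = np.linspace(0, 40, 5)                                     # accumulation thresholds (mm)
bma_cdf = sum(w * member_cdf(y, *p) for w, p in zip(weights, params))
print(1 - bma_cdf)   # full exceedance probabilities, not just one threshold
```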
APA, Harvard, Vancouver, ISO, and other styles
47

Raftery, Adrian E., Tilmann Gneiting, Fadoua Balabdaoui, and Michael Polakowski. "Using Bayesian Model Averaging to Calibrate Forecast Ensembles." Monthly Weather Review 133, no. 5 (May 1, 2005): 1155–74. http://dx.doi.org/10.1175/mwr2906.1.

Full text
Abstract:
Abstract Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of any quantity of interest is a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the models' relative contributions to predictive skill over the training period. The BMA weights can be used to assess the usefulness of ensemble members, and this can be used as a basis for selecting ensemble members; this can be useful given the cost of running large ensembles. The BMA PDF can be represented as an unweighted ensemble of any desired size, by simulating from the BMA predictive distribution. The BMA predictive variance can be decomposed into two components, one corresponding to the between-forecast variability, and the second to the within-forecast variability. Predictive PDFs or intervals based solely on the ensemble spread incorporate the first component but not the second. Thus BMA provides a theoretical explanation of the tendency of ensembles to exhibit a spread-error correlation but yet be underdispersive. The method was applied to 48-h forecasts of surface temperature in the Pacific Northwest in January–June 2000 using the University of Washington fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5) ensemble. The predictive PDFs were much better calibrated than the raw ensemble, and the BMA forecasts were sharp in that 90% BMA prediction intervals were 66% shorter on average than those produced by sample climatology. As a by-product, BMA yields a deterministic point forecast, and this had root-mean-square errors 7% lower than the best of the ensemble members and 8% lower than the ensemble mean. Similar results were obtained for forecasts of sea level pressure. Simulation experiments show that BMA performs reasonably well when the underlying ensemble is calibrated, or even overdispersed.
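The variance decomposition mentioned in the abstract is worth making explicit: with predictive PDF p(y) = Σ_k w_k N(y; f_k, σ²), the predictive variance splits into a between-forecast (ensemble spread) term and a within-forecast (kernel width) term; only the first is captured by intervals based on raw ensemble spread. A minimal sketch with assumed values:

```python
import numpy as np

w = np.array([0.4, 0.35, 0.25])   # posterior model weights (assumed)
f = np.array([12.1, 13.4, 11.2])  # bias-corrected member forecasts, degC (assumed)
sigma = 1.3                       # common kernel standard deviation (assumed)

mean = w @ f                          # BMA deterministic point forecast
between = w @ (f - mean) ** 2         # between-forecast (spread) variance
within = sigma ** 2                   # within-forecast (kernel) variance
print(f"mean = {mean:.2f}, total var = {between + within:.2f} "
      f"(between = {between:.2f}, within = {within:.2f})")
```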
APA, Harvard, Vancouver, ISO, and other styles
48

Johansson, Åke. "Prediction Skill of the NAO and PNA from Daily to Seasonal Time Scales." Journal of Climate 20, no. 10 (May 15, 2007): 1957–75. http://dx.doi.org/10.1175/jcli4072.1.

Full text
Abstract:
Abstract The skill of state-of-the-art operational dynamical models in predicting the two most important modes of variability in the Northern Hemisphere extratropical atmosphere, the North Atlantic Oscillation (NAO) and Pacific–North American (PNA) teleconnection patterns, is investigated at time scales ranging from daily to seasonal. Two uncoupled atmospheric models used for deterministic forecasting in the short to medium range as well as eight fully coupled atmosphere–land–ocean forecast models used for monthly and seasonal forecasting are examined and compared. For the short to medium range, the level of forecast skill for the two indices is higher than that for the entire Northern Hemisphere extratropical flow. The forecast skill of the PNA is higher than that of the NAO. The forecast skill increases with the magnitude of the NAO and PNA indices, but the relationship is not pronounced. The probability density function (PDF) of the NAO and PNA indices is negatively skewed, in agreement with the distribution of skewness of the geopotential field. The models maintain approximately the observed PDF, including the negative skewness, for the first week. Extreme negative NAO/PNA events have larger absolute values than positive extremes in agreement with the negative skewness of the two indices. Recent large extreme events are generally well forecasted by the models. On the intraseasonal time scale it is found that both NAO and PNA have lingering forecast skill, in contrast to the Northern Hemisphere extratropical flow as a whole. This fact offers some hope for extended range forecasting, even though the skill is quite low. No conclusive positive benefit is seen from using higher horizontal resolution or coupling to the oceans. On the monthly and seasonal time scales, the level of forecast skill for the two indices is generally quite low, with the exception of winter predictions at short lead times.
APA, Harvard, Vancouver, ISO, and other styles
49

Emanuel, Kerry, Fabian Fondriest, and James Kossin. "Potential Economic Value of Seasonal Hurricane Forecasts." Weather, Climate, and Society 4, no. 2 (April 1, 2012): 110–17. http://dx.doi.org/10.1175/wcas-d-11-00017.1.

Full text
Abstract:
Abstract This paper explores the potential utility of seasonal Atlantic hurricane forecasts to a hypothetical property insurance firm whose insured properties are broadly distributed along the U.S. Gulf and East Coasts. Using a recently developed hurricane synthesizer driven by large-scale meteorological variables derived from global reanalysis datasets, 1000 artificial 100-yr time series are generated containing both active and inactive hurricane seasons. The hurricanes thus produced damage to the property insurer’s portfolio of insured property, according to an aggregate wind-damage function. The potential value of seasonal hurricane forecasts is assessed by comparing the overall probability density of the company’s profits from a control experiment, in which the insurer purchases the same reinsurance coverage each year, to various test strategies in which the amount of risk retained by the primary insurer, and the corresponding premium paid to the reinsurer, varies according to whether the season is active or quiet, holding the risk of ruin constant. Under the highly idealized conditions of this experiment, there is a clear advantage to the hypothetical property insurance firm of using seasonal hurricane forecasts to adjust the amount of reinsurance it purchases each year. Under a strategy that optimizes the company’s profits by holding the risk of ruin constant, the probability distribution of profit clearly separates from that of the control strategy after less than 10 yr when the seasonal forecasts are perfect. But when a more realistic seasonal forecast skill is assumed, the potential value of forecasts becomes significant only after more than a decade.
APA, Harvard, Vancouver, ISO, and other styles
50

Davis, Justin R., Vladimir A. Paramygin, David Forrest, and Y. Peter Sheng. "Toward the Probabilistic Simulation of Storm Surge and Inundation in a Limited-Resource Environment." Monthly Weather Review 138, no. 7 (July 1, 2010): 2953–74. http://dx.doi.org/10.1175/2010mwr3136.1.

Full text
Abstract:
Abstract To create more useful storm surge and inundation forecast products, probabilistic elements are being incorporated. To achieve the highest levels of confidence in these products, it is essential that as many simulations as possible are performed during the limited amount of time available. This paper develops a framework by which probabilistic storm surge and inundation forecasts within the Curvilinear Hydrodynamics in 3D (CH3D) Storm Surge Modeling System and the Southeastern Universities Research Association Coastal Ocean Observing and Prediction Program’s forecasting systems are initiated with specific focus on the application of these methods in a limited-resource environment. Ensemble sets are created by dividing probability density functions (PDFs) of the National Hurricane Center model forecast error into bins, which are then grouped into priority levels (PLs) such that each subsequent level relies on results computed earlier and has an increasing confidence associated with it. The PDFs are then used to develop an ensemble of analytic wind and pressure fields for use by storm surge and inundation models. Using this approach applied with official National Hurricane Center (OFCL) forecast errors, an analysis of Hurricane Charley is performed. After first validating the simulation of storm surge, a series of ensemble simulations are performed representing the forecast errors for the 72-, 48-, 24-, and 12-h forecasts. Analysis of the aggregated products shows that PL4 (27 members) is sufficient to resolve 90% of the inundation within the domain and appears to represent the best balance between accuracy and timeliness of computed products for this case study. A 5-day forecast using the PL4 set is shown to complete in 83 min, while the intermediate PL2 and PL3 products, representing slightly less confidence, complete in 14 and 28 min, respectively.
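A sketch of the binning-and-priority idea: discretize a forecast-error PDF into bins, then group bins into priority levels so the highest-probability members run first and each added level raises the cumulative confidence. The Gaussian error model, bin edges, and number of levels below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

err = stats.norm(loc=0.0, scale=80.0)  # e.g. cross-track error in n mi (assumed)
edges = np.linspace(-240, 240, 13)     # 12 bins out to +/- 3 sigma (assumed)
mass = np.diff(err.cdf(edges))         # probability mass per bin
order = np.argsort(mass)[::-1]         # run high-probability bins first

levels = np.array_split(order, 4)      # 4 priority levels (assumed)
for i, lev in enumerate(levels, start=1):
    covered = mass[np.concatenate(levels[:i])].sum()
    print(f"PL{i}: bins {sorted(lev.tolist())}, cumulative mass {covered:.2f}")
```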
APA, Harvard, Vancouver, ISO, and other styles