Journal articles on the topic "Run-Time bias-Correction"

To view other types of publications on this topic, follow the link: Run-Time bias-Correction.

Consult the top 49 journal articles for your research on the topic "Run-Time bias-Correction".

Next to every entry in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided these are available in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Okui, Ryo. "Asymptotically Unbiased Estimation of Autocovariances and Autocorrelations with Panel Data in the Presence of Individual and Time Effects." Journal of Time Series Econometrics 6, no. 2 (July 1, 2014): 129–81. http://dx.doi.org/10.1515/jtse-2013-0017.

Abstract:
This article proposes asymptotically unbiased estimators of autocovariances and autocorrelations for panel data with both individual and time effects. We show that the conventional autocovariance estimators suffer from the bias caused by the elimination of individual and time effects. The bias related to individual effects is proportional to the long-run variance, and that related to time effects is proportional to the value of the estimated autocovariance. For the conventional autocorrelation estimators, the elimination of time effects does not cause a bias while the elimination of individual effects does. We develop methods to estimate the long-run variance and propose bias-corrected estimators based on the proposed long-run variance estimator. We also consider the half-panel jackknife estimation for bias correction. The theoretical results are given by employing double asymptotics under which both the number of observations and the length of the time series tend to infinity. Monte Carlo simulations show that the asymptotic theory provides a good approximation to the actual bias and that the proposed bias-correction methods work well.
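
The half-panel jackknife mentioned in this abstract is easy to illustrate. The sketch below is not the authors' estimator but a minimal toy version, assuming a balanced panel stored as a NumPy array of shape (individuals, periods) and a lag-1 autocovariance computed after sweeping out individual and time means; the split-panel formula 2*stat(full) - mean(stat(half panels)) removes the leading bias term.

import numpy as np

def autocov1(panel):
    """Lag-1 autocovariance after removing individual and time means."""
    x = panel - panel.mean(axis=1, keepdims=True)   # sweep out individual effects
    x = x - x.mean(axis=0, keepdims=True)           # sweep out time effects
    return np.mean(np.mean(x[:, 1:] * x[:, :-1], axis=1))

def half_panel_jackknife(panel, stat):
    """Split-panel jackknife: 2*stat(full panel) - average of stat(half panels)."""
    half = panel.shape[1] // 2
    return 2.0 * stat(panel) - 0.5 * (stat(panel[:, :half]) + stat(panel[:, half:]))

# Simulated AR(1) panel with individual effects (purely illustrative).
rng = np.random.default_rng(0)
n, t, rho = 500, 16, 0.5
y = np.zeros((n, t))
for s in range(1, t):
    y[:, s] = rho * y[:, s - 1] + rng.standard_normal(n)
y += rng.standard_normal((n, 1))                     # individual effects
print("naive:", autocov1(y), " jackknifed:", half_panel_jackknife(y, autocov1))
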
2

Yan, Changxiang, and Jiang Zhu. "A Simple Bias Correction Scheme in Ocean Data Assimilation." Journal of Marine Science and Engineering 11, no. 1 (January 12, 2023): 205. http://dx.doi.org/10.3390/jmse11010205.

Abstract:
The model bias is present and time-dependent due to imperfect configurations. Data assimilation is the process by which observations are used to correct the model forecast, and it is affected by the bias. How to reduce the bias is an important issue. This paper investigates the role of a simple bias correction scheme in ocean data assimilation. In this scheme, the misfits between the modeled fields and the monthly temperature and salinity with interannual variability from the Met Office Hadley Centre subsurface temperature and salinity data set (EN4.2.2) are used for the innovations in assimilation via the Ensemble Optimal Interpolation method. Two assimilation experiments are implemented to evaluate the impacts of bias correction. The first experiment is a data assimilation system without bias correction. In the second experiment, the bias correction is applied in assimilation. For comparison, a nature run with no assimilation and no bias correction is also conducted. When the bias correction is not applied, the assimilation alone leads to a rising trend in the heat and salt content that is not found in the observations; this spurious temporal variability is due to the effect of the bias on the data assimilation. Meanwhile, the assimilation experiment without bias correction also produces significant negative impacts on the subsurface salinity. The experiment with bias correction performs best, with notable improvements over the results of the other two experiments.
3

von Auer, Ludwig, and Alena Shumskikh. "Substitution Bias in the Measurement of Import and Export Price Indices: Causes and Correction." Journal of Official Statistics 38, no. 1 (March 1, 2022): 107–26. http://dx.doi.org/10.2478/jos-2022-0006.

Abstract:
Abstract The import and export price indices of an economy are usually compiled with some Laspeyres-type index. It is well known that such an index formula is prone to substitution bias. Therefore, the terms of trade (the ratio of the export and import price indices) are also likely to be distorted. The underlying substitution bias accumulates over time. The present article introduces a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study is conducted that demonstrates the efficacy and versatility of the correction approach.
4

Krinner, Gerhard, Julien Beaumet, Vincent Favier, Michel Déqué, and Claire Brutel-Vuilmet. "Empirical Run-Time Bias Correction for Antarctic Regional Climate Projections With a Stretched-Grid AGCM." Journal of Advances in Modeling Earth Systems 11, no. 1 (January 2019): 64–82. http://dx.doi.org/10.1029/2018ms001438.

5

Osuch, M., R. J. Romanowicz, D. Lawrence, and W. K. Wong. "Assessment of the influence of bias correction on meteorological drought projections for Poland." Hydrology and Earth System Sciences Discussions 12, no. 10 (October 12, 2015): 10331–77. http://dx.doi.org/10.5194/hessd-12-10331-2015.

Abstract:
Abstract. Possible future climate change effects on drought severity in Poland are estimated for six ENSEMBLES climate projections using the Standardized Precipitation Index (SPI). The time series of precipitation represent six different RCM/GCM runs under the A1B SRES scenario for the period 1971–2099. Monthly precipitation values were used to estimate the SPI for multiple time scales (1, 3, 6, 12 and 24 months) at a spatial resolution of 25 km × 25 km for the whole country. Trends in SPI were analysed using a Mann–Kendall test with Sen's slope estimator for each 25 km × 25 km grid cell for each RCM/GCM projection and timescale, and results obtained for uncorrected and bias-corrected precipitation were compared. Bias correction was achieved using a distribution-based quantile mapping (QM) method in which the climate model precipitation series were adjusted relative to gridded E-OBS precipitation data for Poland. The results show that the spatial pattern of the trend depends on the climate model, the time scale considered and the bias correction. The change in the projected trend due to bias correction is small compared to the variability among climate models. We also summarise the mechanisms underlying the influence of bias correction on trends using a simple example of a linear bias correction procedure. In the case of precipitation, the bias correction by QM does not change the direction of changes but can change the slope of the trend. We have also noticed that results for the same GCM with differing RCMs are characterized by a similar pattern of changes, although this behaviour is not seen at all time scales and seasons.
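
The distribution-based quantile mapping (QM) step can be sketched in a few lines. This is a simplified empirical variant, not the exact procedure of the paper: it assumes a common calibration period and maps each raw model value to the observed value with the same empirical quantile; all series and parameters are hypothetical.

import numpy as np

def quantile_map(model_cal, obs_cal, model_raw):
    """Empirical quantile mapping: replace each raw model value with the
    observed value that occupies the same quantile in the calibration period."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_cal, q)
    obs_q = np.quantile(obs_cal, q)
    ranks = np.interp(model_raw, model_q, q)   # quantile of each raw value
    return np.interp(ranks, q, obs_q)          # matching observed magnitude

# Illustrative monthly precipitation (mm): the model is systematically too wet.
rng = np.random.default_rng(1)
obs_cal = rng.gamma(2.0, 40.0, 360)
mod_cal = rng.gamma(2.0, 55.0, 360)
print(quantile_map(mod_cal, obs_cal, mod_cal[:5]).round(1))
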
6

Rogers, Bruce, David Giles, Nick Draper, Laurent Mourot, and Thomas Gronwald. "Influence of Artefact Correction and Recording Device Type on the Practical Application of a Non-Linear Heart Rate Variability Biomarker for Aerobic Threshold Determination." Sensors 21, no. 3 (January 26, 2021): 821. http://dx.doi.org/10.3390/s21030821.

Abstract:
A recent study points to the value of a non-linear heart rate variability (HRV) biomarker using detrended fluctuation analysis (DFA a1) for aerobic threshold determination (HRVT). The significance of recording artefacts, correction methods and device bias for DFA a1 during exercise and for the HRVT is unclear. Gas exchange and HRV data were obtained from 17 participants during an incremental treadmill run using both ECG and Polar H7 as recording devices. First, artefacts were randomly placed in the ECG time series to equal 1, 3 and 6% missed beats, with correction by Kubios software's automatic and medium threshold method. Based on linear regression, Bland–Altman analysis and Wilcoxon paired testing, bias was present and grew with increasing artefact quantity. Regardless of artefact correction method, 1 to 3% missed-beat artefact introduced small but discernible bias in raw DFA a1 measurements. At 6% artefact using medium correction, proportional bias was found (maximum 19%). Despite this bias, the mean HRVT determination was within 1 bpm across all artefact levels and correction modalities. Second, the HRVT ascertained from synchronous ECG vs. Polar H7 recordings did show an average bias of minus 4 bpm. The Polar H7 results suggest that device-related bias is possible, but in the reverse direction to the artefact-related bias.
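
The device comparison reported here rests on Bland–Altman statistics, which are simple to reproduce. The snippet below is a generic sketch with made-up paired DFA a1 values (not the study data); it returns the mean difference (bias) and the 95% limits of agreement.

import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical paired DFA a1 readings from two recording set-ups.
ecg = [1.02, 0.85, 0.74, 0.61, 0.55, 0.49]
polar = [1.00, 0.88, 0.70, 0.64, 0.52, 0.47]
bias, lower, upper = bland_altman(ecg, polar)
print(f"bias={bias:+.3f}, limits of agreement=({lower:.3f}, {upper:.3f})")
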
7

Phan Van, Tan, Hiep Van Nguyen, Long Trinh Tuan, Trung Nguyen Quang, Thanh Ngo-Duc, Patrick Laux, and Thanh Nguyen Xuan. "Seasonal Prediction of Surface Air Temperature across Vietnam Using the Regional Climate Model Version 4.2 (RegCM4.2)." Advances in Meteorology 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/245104.

Abstract:
To investigate the ability of dynamical seasonal climate predictions for Vietnam, RegCM4.2 is employed to perform seasonal prediction of 2 m mean (T2m), maximum (Tx), and minimum (Tn) air temperature for the period from January 2012 to November 2013 by downscaling the NCEP Climate Forecast System (CFS) data. For model bias correction, the model and observed climatologies are constructed using the CFS reanalysis and observed temperatures over Vietnam for the period 1980–2010, respectively. The RegCM4.2 forecast is run four times per month from the current month up to the next six months. A model ensemble prediction initialized from the current month is computed from the mean of the four runs within the month. The results showed that, without any bias correction (CTL), the RegCM4.2 forecast has very little or no skill in both tercile and value predictions. With bias correction (BAS), model predictions show improved skill. The experiment in which the BAS results are further successively adjusted (SUC) with the model bias of the previous run at one-month lead time showed further improvement compared to CTL and BAS. Skill scores of the tercile probability forecasts were found to exceed 0.3 for most of the target months.
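
The BAS-type correction described above boils down to replacing the model climatology with the observed one for the target month. A minimal sketch, with hypothetical arrays standing in for the 1980–2010 monthly climatologies, is:

import numpy as np

def bias_correct(forecast, model_clim, obs_clim, month_index):
    """Remove the model's monthly climatology and add the observed one, i.e.
    keep the forecast anomaly but express it relative to observations."""
    return forecast - model_clim[month_index] + obs_clim[month_index]

# Hypothetical July (index 6) T2m forecast at four stations, in deg C.
model_clim = np.tile(np.arange(20.0, 32.0).reshape(12, 1), (1, 4))   # 12 months x 4 stations
obs_clim = model_clim - 1.5                                          # model runs 1.5 K warm
forecast = np.array([31.0, 30.2, 29.8, 31.4])
print(bias_correct(forecast, model_clim, obs_clim, 6))
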
8

Gupta, Abhimanyu, and Myung Hwan Seo. "Robust Inference on Infinite and Growing Dimensional Time‐Series Regression." Econometrica 91, no. 4 (2023): 1333–61. http://dx.doi.org/10.3982/ecta17918.

Abstract:
We develop a class of tests for time‐series models such as multiple regression with growing dimension, infinite‐order autoregression, and nonparametric sieve regression. Examples include the Chow test and general linear restriction tests of growing rank p. Employing such increasing p asymptotics, we introduce a new scale correction to conventional test statistics, which accounts for a high‐order long‐run variance (HLV), which emerges as p grows with sample size. We also propose a bias correction via a null‐imposed bootstrap to alleviate finite‐sample bias without sacrificing power unduly. A simulation study shows the importance of robustifying testing procedures against the HLV even when p is moderate. The tests are illustrated with an application to the oil regressions in Hamilton (2003).
9

Wei, Linyong, Shanhu Jiang, Liliang Ren, Linqi Zhang, Menghao Wang, Yi Liu, and Zheng Duan. "Bias correction of GPM IMERG Early Run daily precipitation product using near real-time CPC global measurements." Atmospheric Research 279 (December 2022): 106403. http://dx.doi.org/10.1016/j.atmosres.2022.106403.

10

Ma, Qiumei, Lihua Xiong, Jun Xia, Bin Xiong, Han Yang, and Chong-Yu Xu. "A Censored Shifted Mixture Distribution Mapping Method to Correct the Bias of Daily IMERG Satellite Precipitation Estimates." Remote Sensing 11, no. 11 (June 4, 2019): 1345. http://dx.doi.org/10.3390/rs11111345.

Abstract:
Satellite precipitation estimates (SPE) provide useful input for hydrological modeling. However, hydrological modeling is frequently hindered by large bias and errors in SPE, making bias correction necessary. Traditional distribution mapping bias correction of daily precipitation commonly uses Bernoulli and gamma distributions to separately model the probability and intensities of precipitation and is insufficient towards extremes. This study developed an improved distribution mapping bias correction method, which established a censored shifted mixture distribution (CSMD) as a transfer function when mapping raw precipitation to the reference data. CSMD couples a censored shifted statistical distribution, which jointly models the precipitation occurrence probability and intensity, with a mixture of gamma and generalized Pareto distributions to enhance extreme-value modeling. The CSMD approach was applied to correct the up-to-date SPE of the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) near-real-time “Early” run (IMERG-E) over the Yangtze River basin. To verify the hydrological response of bias-corrected IMERG-E, the streamflow of the Wujiang River basin was simulated using the Génie Rural with 6 parameters (GR6J) and Coupled Routing and Excess Storage (CREST) models. The results showed that bias correction using both BerGam (traditional bias correction combining Bernoulli with gamma distributions) and the improved CSMD could reduce the systematic errors of IMERG-E. Furthermore, CSMD outperformed BerGam in correcting overall precipitation (with a median mean absolute error of 2.46 mm versus 2.81 mm for CSMD and BerGam respectively, and a median modified Nash–Sutcliffe efficiency of 0.39 versus 0.29) and especially in extreme values, thanks to its uniform format and the particular attention paid to extremes. In addition, the hydrological improvement that CSMD correction brought to IMERG-E when driving GR6J and CREST rainfall–runoff modeling exceeded that of the BerGam correction. This study provides a promising integrated distribution mapping framework to correct biased daily SPE, contributing to more reliable hydrological forecasts by providing accurate precipitation forcing.
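
For orientation, the traditional BerGam baseline that CSMD is compared against can be sketched as follows. This is a deliberately reduced illustration (SciPy gamma fits on wet days only, a fixed wet-day threshold, no wet-day frequency adjustment and no generalized Pareto tail), with synthetic series standing in for IMERG-E and gauge data.

import numpy as np
from scipy import stats

def bergam_correct(sat_cal, gauge_cal, sat_raw, wet=0.1):
    """Bernoulli-gamma style mapping: values below the wet-day threshold stay
    zero; wet-day intensities are mapped between fitted gamma distributions."""
    g_sat = stats.gamma.fit(sat_cal[sat_cal > wet], floc=0.0)
    g_obs = stats.gamma.fit(gauge_cal[gauge_cal > wet], floc=0.0)
    out = np.zeros_like(sat_raw)
    mask = sat_raw > wet
    cdf = stats.gamma.cdf(sat_raw[mask], *g_sat)   # quantile in the satellite distribution
    out[mask] = stats.gamma.ppf(cdf, *g_obs)       # same quantile in the gauge distribution
    return out

rng = np.random.default_rng(2)
gauge = np.where(rng.random(3000) < 0.4, rng.gamma(0.8, 8.0, 3000), 0.0)
sat = np.where(rng.random(3000) < 0.4, rng.gamma(0.8, 11.0, 3000), 0.0)   # wet-biased
print(bergam_correct(sat, gauge, sat[:8]).round(2))
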
11

Sperna Weiland, F. C., L. P. H. van Beek, J. C. J. Kwadijk, and M. F. P. Bierkens. "The ability of a GCM-forced hydrological model to reproduce global discharge variability." Hydrology and Earth System Sciences Discussions 7, no. 1 (January 29, 2010): 687–724. http://dx.doi.org/10.5194/hessd-7-687-2010.

Abstract:
Abstract. Data from General Circulation Models (GCMs) are often used in studies investigating hydrological impacts of climate change. However, GCM data are known to have large biases, especially for precipitation. In this study the usefulness of GCM data for hydrological studies was tested by applying bias-corrected daily climate data of the 20C3M control experiment from an ensemble of twelve GCMs as input to the global hydrological model PCR-GLOBWB. Results are compared with discharges calculated from a model run based on a reference meteorological dataset constructed from the CRU TS2.1 data and ERA-40 reanalysis time series. Bias correction was limited to monthly mean values, as our focus was on the reproduction of runoff variability. The bias-corrected GCM-based runs resemble the reference run reasonably well, especially for rivers with strong seasonal patterns. However, GCM-derived discharge quantities are overall too low. Furthermore, the arctic regimes show that a few deviating GCMs can bias the ensemble mean. Moreover, the GCMs do not represent intra- and inter-year variability well, as exemplified by limited persistence. This makes them less suitable for the projection of future runoff extremes.
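
Because the correction here was limited to monthly means, a close analogue is a simple multiplicative monthly scaling of daily GCM precipitation toward the reference climatology. The sketch below uses hypothetical pandas series; the scaling preserves day-to-day variability and only removes the monthly-mean bias.

import numpy as np
import pandas as pd

def monthly_scaling(gcm_daily, ref_daily):
    """Scale daily GCM precipitation so that each calendar month's mean matches
    the reference dataset; sub-monthly variability is left untouched."""
    factors = (ref_daily.groupby(ref_daily.index.month).mean()
               / gcm_daily.groupby(gcm_daily.index.month).mean())
    return gcm_daily * gcm_daily.index.month.map(factors).to_numpy()

days = pd.date_range("1971-01-01", "1980-12-31", freq="D")
rng = np.random.default_rng(3)
ref = pd.Series(rng.gamma(0.7, 4.0, len(days)), index=days)
gcm = pd.Series(rng.gamma(0.7, 5.5, len(days)), index=days)   # wet-biased GCM
corrected = monthly_scaling(gcm, ref)
print(corrected.groupby(corrected.index.month).mean().round(2))
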
12

Keppenne, C. L., M. M. Rienecker, N. P. Kurkowski, and D. A. Adamec. "Ensemble Kalman filter assimilation of temperature and altimeter data with bias correction and application to seasonal prediction." Nonlinear Processes in Geophysics 12, no. 4 (May 17, 2005): 491–503. http://dx.doi.org/10.5194/npg-12-491-2005.

Abstract:
Abstract. To compensate for a poorly known geoid, satellite altimeter data is usually analyzed in terms of anomalies from the time mean record. When such anomalies are assimilated into an ocean model, the bias between the climatologies of the model and data is problematic. An ensemble Kalman filter (EnKF) is modified to account for the presence of a forecast-model bias and applied to the assimilation of TOPEX/Poseidon (T/P) altimeter data. The online bias correction (OBC) algorithm uses the same ensemble of model state vectors to estimate biased-error and unbiased-error covariance matrices. Covariance localization is used but the bias covariances have different localization scales from the unbiased-error covariances, thereby accounting for the fact that the bias in a global ocean model could have much larger spatial scales than the random error. The method is applied to a 27-layer version of the Poseidon global ocean general circulation model with about 30 million state variables. Experiments in which T/P altimeter anomalies are assimilated show that the OBC reduces the RMS observation minus forecast difference for sea-surface height (SSH) over a similar EnKF run in which OBC is not used. Independent in situ temperature observations show that the temperature field is also improved. When the T/P data and in situ temperature data are assimilated in the same run and the configuration of the ensemble at the end of the run is used to initialize the ocean component of the GMAO coupled forecast model, seasonal SSH hindcasts made with the coupled model are generally better than those initialized with optimal interpolation of temperature observations without altimeter data. The analysis of the corresponding sea-surface temperature hindcasts is not as conclusive.
13

Beaumet, Julien, Michel Déqué, Gerhard Krinner, Cécile Agosta, Antoinette Alias, and Vincent Favier. "Significant additional Antarctic warming in atmospheric bias-corrected ARPEGE projections with respect to control run." Cryosphere 15, no. 8 (August 6, 2021): 3615–35. http://dx.doi.org/10.5194/tc-15-3615-2021.

Abstract:
Abstract. In this study, we use run-time bias correction to correct for the Action de Recherche Petite Echelle Grande Echelle (ARPEGE) atmospheric model systematic errors on large-scale atmospheric circulation. The bias-correction terms are built using the climatological mean of the adjustment terms on tendency errors in an ARPEGE simulation relaxed towards ERA-Interim reanalyses. The bias reduction with respect to the Atmospheric Model Intercomparison Project (AMIP)-style uncorrected control run for the general atmospheric circulation in the Southern Hemisphere is significant for mean state and daily variability. Comparisons for the Antarctic Ice Sheet with the polar-oriented regional atmospheric models MAR and RACMO2 and in situ observations also suggest substantial bias reduction for near-surface temperature and precipitation in coastal areas. Applying the method to climate projections for the late 21st century (2071–2100) leads to large differences in the projected changes of the atmospheric circulation in the southern high latitudes and of the Antarctic surface climate. The projected poleward shift and strengthening of the southern westerly winds are greatly reduced. These changes result in a significant 0.7 to 0.9 K additional warming and a 6 % to 9 % additional increase in precipitation over the grounded ice sheet. The sensitivity of precipitation increase to temperature increase (+7.7 % K−1 and +9 % K−1) found is also higher than previous estimates. The highest additional warming rates are found over East Antarctica in summer. In winter, there is a dipole of weaker warming and weaker precipitation increase over West Antarctica, contrasted by a stronger warming and a concomitant stronger precipitation increase from Victoria to Adélie Land, associated with a weaker intensification of the Amundsen Sea Low.
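
The run-time bias-correction idea, i.e. adding the climatological mean of the nudging (adjustment) tendencies back into the free-running model at every time step, can be caricatured with a scalar toy model. The sketch below is purely schematic and has nothing to do with the ARPEGE implementation beyond the basic recipe; all numbers are invented.

import numpy as np

def integrate(x0, tendency, dt, nsteps, correction=0.0):
    """Euler time stepping of a toy scalar model; `correction` is a constant
    bias-correction term added to the tendency at every step (run-time correction)."""
    x = np.empty(nsteps + 1)
    x[0] = x0
    for n in range(nsteps):
        x[n + 1] = x[n] + dt * (tendency(x[n]) + correction)
    return x

biased = lambda x: -0.5 * (x - 8.0)    # the model drifts toward 8, "truth" sits at 10

# Step 1: a run strongly nudged toward the reference state 10 supplies the
# adjustment tendencies; their time mean becomes the correction term.
nudge = lambda x: 5.0 * (10.0 - x)
x_nudged = integrate(10.0, lambda x: biased(x) + nudge(x), 0.1, 500)
correction = np.mean(nudge(x_nudged))

# Step 2: free runs with and without the run-time correction.
print("uncorrected end state:", integrate(10.0, biased, 0.1, 500)[-1])
print("corrected end state:  ", integrate(10.0, biased, 0.1, 500, correction)[-1])
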
14

Balmaseda, Magdalena A., Arthur Vidard, and David L. T. Anderson. "The ECMWF Ocean Analysis System: ORA-S3." Monthly Weather Review 136, no. 8 (August 1, 2008): 3018–34. http://dx.doi.org/10.1175/2008mwr2433.1.

Abstract:
Abstract A new operational ocean analysis/reanalysis system (ORA-S3) has been implemented at ECMWF. The reanalysis, started from 1 January 1959, is continuously maintained up to 11 days behind real time and is used to initialize seasonal forecasts as well as to provide a historical representation of the ocean for climate studies. It has several innovative features, including an online bias-correction algorithm, the assimilation of salinity data on temperature surfaces, and the assimilation of altimeter-derived sea level anomalies and global sea level trends. It is designed to reduce spurious climate variability in the resulting ocean reanalysis due to the nonstationary nature of the observing system, while still taking advantage of the observation information. The new analysis system is compared with the previous operational version; the equatorial temperature biases are reduced and equatorial currents are improved. The impact of assimilation in the ocean state is discussed by diagnosis of the assimilation increment and bias correction terms. The resulting analysis not only improves the fit to the data, but also improves the representation of the interannual variability. In addition to the basic analysis, a real-time analysis is produced (RT-S3). This is needed for monthly forecasts and in the future may be needed for shorter-range forecasts. It is initialized from the near-real-time ORA-S3 and run each day from it.
15

Leeuwenburgh, Olwijn. "Validation of an EnKF System for OGCM Initialization Assimilating Temperature, Salinity, and Surface Height Measurements." Monthly Weather Review 135, no. 1 (January 1, 2007): 125–39. http://dx.doi.org/10.1175/mwr3272.1.

Abstract:
Abstract Results are presented from a decade-long assimilation run with a 64-member OGCM ensemble in a global configuration. The assimilation system can be used to produce ocean initial conditions for seasonal forecasts. The ensemble is constructed with the Max Planck Institute Ocean Model, where each member is forced by differently perturbed 40-yr European Centre for Medium-Range Weather Forecasts Re-Analysis atmospheric fields over sequential 10-day intervals. Along-track altimetric data from the European Remote Sensing and the Ocean Topography Experiment (TOPEX)/Poseidon satellites, as well as quality-controlled subsurface temperature and salinity profiles, are subsequently assimilated using the standard formulation of the ensemble Kalman filter. The applied forcing perturbation method and data selection and processing procedures are described, as well as a framework for the construction of appropriate data constraint error models for all three data types. The results indicate that the system is stable, does not experience a tendency toward ensemble collapse, and provides smooth analyses that are closer to withheld data than an unconstrained control run. Subsurface bias and time-dependent errors are reduced by the assimilation but not entirely removed. Time series of assimilation and ensemble statistics also indicate that the model is not very strongly constrained by the data because of an overspecification of the data errors. A comparison of equatorial zonal velocity profiles with in situ current meter data shows mixed results. A shift in the time-mean profile in the central Pacific is primarily associated with an assimilation-induced bias. The use of an adaptive bias correction scheme is suggested as a solution to this problem.
16

Martinaitis, Steven M., Heather M. Grams, Carrie Langston, Jian Zhang, and Kenneth Howard. "A Real-Time Evaporation Correction Scheme for Radar-Derived Mosaicked Precipitation Estimations." Journal of Hydrometeorology 19, no. 1 (January 1, 2018): 87–111. http://dx.doi.org/10.1175/jhm-d-17-0093.1.

Abstract:
Abstract Precipitation values estimated by radar are assumed to be the amount of precipitation that occurred at the surface, yet this notion is inaccurate. Numerous atmospheric and microphysical processes can alter the precipitation rate between the radar beam elevation and the surface. One such process is evaporation. This study determines the applicability of integrating an evaporation correction scheme for real-time radar-derived mosaicked precipitation rates to reduce quantitative precipitation estimate (QPE) overestimation and to reduce the coverage of false surface precipitation. An evaporation technique previously developed for large-scale numerical modeling is applied to Multi-Radar Multi-Sensor (MRMS) precipitation rates through the use of 2D and 3D numerical weather prediction (NWP) atmospheric parameters as well as basic radar properties. Hourly accumulated QPE with evaporation adjustment compared against gauge observations saw an average reduction of the overestimation bias by 57%–76% for rain events and 42%–49% for primarily snow events. The removal of false surface precipitation also reduced the number of hourly gauge observations that were considered as “false zero” observations by 52.1% for rain and 38.2% for snow. Optimum computational efficiency was achieved through the use of simplified equations and hourly 10-km horizontal resolution NWP data. The run time for the evaporation correction algorithm is 6–7 s.
17

Saravanan, P., A. R. Prethivirajan, A. S. Sivaprasanna, K. Udhayakumar, and C. Sivapragasam. "Performance Evaluation of an ANN based Bias Correction algorithm in Monthly and Daily Precipitation Time Series of La Farge Station, USA." Disaster Advances 16, no. 4 (March 15, 2023): 27–33. http://dx.doi.org/10.25303/1604da027033.

Abstract:
Understanding how future precipitation changes over the long run is highly necessary in climate change impact studies. Simulated future precipitation series are usually found to be substantially biased relative to the historically observed precipitation series and need to be corrected before use in any impact study. Many conventional and data-driven methods are available to correct this bias. In this study, an Artificial Neural Network (ANN) based method is applied to bias correct the monthly and daily precipitation series and is compared with the conventional methods. The normalized root mean squared errors obtained for the monthly and daily series are 0.786 and 2.55, respectively. The performance of the ANN-based method is found to be poor for the daily series and good only for the monthly series; the reason for the poor daily performance is analysed. In addition, the superiority of the ANN-based method over the conventional method is established for the monthly precipitation time series.
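
As a rough illustration of the general idea (not the configuration used in the paper), the sketch below trains a small scikit-learn neural network to map simulated precipitation onto observed precipitation and reports a normalized RMSE; the synthetic data, the network size and the choice to normalize by the observed mean are all assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
sim = rng.gamma(2.0, 30.0, 600)                      # simulated monthly precipitation (mm)
obs = 0.8 * sim + rng.normal(0.0, 8.0, 600) + 5.0    # "observed" series with a systematic offset

# Fit a small ANN mapping simulated values to observed values on a training split.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(sim[:480].reshape(-1, 1), obs[:480])
corrected = model.predict(sim[480:].reshape(-1, 1))

def nrmse(pred, target):
    """Root-mean-square error normalized by the mean of the target series."""
    return np.sqrt(np.mean((pred - target) ** 2)) / np.mean(target)

print("NRMSE raw:      ", round(nrmse(sim[480:], obs[480:]), 3))
print("NRMSE corrected:", round(nrmse(corrected, obs[480:]), 3))
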
18

Kumar, Dipankar, and Satoshi Takewaka. "Automatic Shoreline Position and Intertidal Foreshore Slope Detection from X-Band Radar Images using Modified Temporal Waterline Method with Corrected Wave Run-up." Journal of Marine Science and Engineering 7, no. 2 (February 12, 2019): 45. http://dx.doi.org/10.3390/jmse7020045.

Abstract:
Automatic and accurate shoreline position and intertidal foreshore slope detection are challenging and significantly important for coastal dynamics. In the present study, a time series shoreline position and intertidal foreshore slope have been automatically detected using modified Temporal Waterline Method (mTWM) from time-averaged X-band radar images captured throughout the course of two-week tidal cycle variation over an area spanning 5.6 km on the Hasaki coast between 12 April 2005 and 31 December 2008. The methodology is based on the correlation map between the pixel intensity variation of the time-averaged X-band radar images and the binary signal of the tide level ranging from −0.8 m to 0.8 m. In order to ensure the binary signal represented each of the water levels in the intertidal shore profile, determining the water level direction-wise bottom elevation is considered as the modification. Random gaps were detected in the captured images owing to the unclear or oversaturation of the waterline signal. A horizontal shift in the detected shoreline positions was observed compared to the survey data previously collected at Hasaki Oceanographical Research Station (HORS). This horizontal shift can be attributed to wave breaking and high wave conditions. Wave set-up and run-up are the effects of wave breaking and high wave conditions, respectively. The correction of the wave set-up and run-up is considered to allow the upward shift of the water level position, as well as shoreline position, to the landward direction. The findings indicate that the shoreline positions derived by mTWM with the corrected wave run-up reasonably agree with the survey data. The mean absolute bias (MAB) between the survey data and the shoreline positions detected using mTWM with the corrected wave run-up is approximately 5.9 m, which is theoretically smaller than the spatial resolution of the radar measurements. The random gaps in the mTWM-derived shoreline positions are filled by Garcia’s data filling algorithm which is a Penalized Least Squares regression method by means of the Discrete Cosine Transform (PLS-DCT). The MAB between survey data and the gap filled shoreline positions detected using TWM with corrected wave run-up is approximately 5.9 m. The obtained results indicate the accuracy of the mTWM with corrected wave run-up integrated with Garcia’s method compared to the survey observations. The executed approach in this study is considered as an efficient and robust tool to automatically detect shoreline positions and intertidal foreshore slopes extracted from X-band radar images with the consideration of wave run-up correction.
19

Tsaurai, Kunofiwa. "Personal remittances, banking sector development and economic growth in Israel: A trivariate causality test." Corporate Ownership and Control 13, no. 1 (2015): 1014–27. http://dx.doi.org/10.22495/cocv13i1c9p5.

Abstract:
The current study investigates the causal relationship between personal remittances and economic growth using Israel time series data from 1975 to 2011. In a bid to contain the omission-of-variable bias not addressed in many past studies on this topic, this study included banking sector development as a third variable in the relationship between personal remittances and economic growth to create a tri-variate causality framework. Personal remittances as a ratio of GDP, domestic credit to private sector by banks as a ratio of GDP and GDP per capita were used as proxies for personal remittances, banking sector development and economic growth respectively for the purposes of this study. It used the Johansen co-integration test to examine the existence of the long run relationship and vector error correction model (VECM) to determine the direction of causality between personal remittances, banking sector development and economic growth both in the long and short run. The findings reveal that: (1) there is a significant long run causality relationship running from GDP per capita and banking sector development towards personal remittances, (2) there is an insignificant long run causality relationship running from personal remittances and GDP per capita towards banking sector development, (3) there is no long run causality relationship running from personal remittances and banking sector development towards GDP per capita and there is no short run causality relationship between the three variables that were under study in Israel. The author therefore recommends the authorities of Israel to speed up the implementation of banking sector development and economic growth programmes in order to increase the quantity of personal remittances inflows
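
The estimation sequence described here (a Johansen cointegration test followed by a vector error correction model) can be outlined with statsmodels. The sketch below runs on synthetic stand-ins for the three variables, which share one artificial stochastic trend; it is only meant to show the API (coint_johansen and VECM), not to reproduce the study's results.

import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Synthetic proxies for remittances/GDP, credit/GDP and GDP per capita (37 years).
rng = np.random.default_rng(5)
trend = np.cumsum(rng.normal(0, 1, 37))
data = pd.DataFrame({
    "remit": 0.5 * trend + rng.normal(0, 0.5, 37),
    "credit": 0.8 * trend + rng.normal(0, 0.5, 37),
    "gdppc": 1.0 * trend + rng.normal(0, 0.5, 37),
})

# Johansen trace test for the number of cointegrating relations.
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1.round(2))
print("95% critical values:", joh.cvt[:, 1].round(2))

# VECM with one cointegrating vector; alpha holds the long-run (error-correction)
# adjustment coefficients, short-run dynamics sit in the gamma matrices.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("adjustment coefficients (alpha):")
print(res.alpha.round(3))
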
20

van der A, R. J., M. A. F. Allaart, and H. J. Eskes. "Multi sensor reanalysis of total ozone." Atmospheric Chemistry and Physics Discussions 10, no. 4 (April 28, 2010): 11401–48. http://dx.doi.org/10.5194/acpd-10-11401-2010.

Abstract:
Abstract. A single coherent total ozone dataset, called the Multi Sensor Reanalysis (MSR), has been created from all available ozone column data measured by polar orbiting satellites in the near-ultraviolet Huggins band in the last thirty years. Fourteen total ozone satellite retrieval datasets from the instruments TOMS (on the satellites Nimbus-7 and Earth Probe), SBUV (Nimbus-7, NOAA-9, NOAA-11 and NOAA-16), GOME (ERS-2), SCIAMACHY (Envisat), OMI (EOS-Aura), and GOME-2 (Metop-A) have been used in the MSR. As first step a bias correction scheme is applied to all satellite observations, based on independent ground-based total ozone data from the World Ozone and Ultraviolet Data Center. The correction is a function of solar zenith angle, viewing angle, time (trend), and stratospheric temperature. As second step data assimilation was applied to create a global dataset of total ozone analyses. The data assimilation method is a sub-optimal implementation of the Kalman filter technique, and is based on a chemical transport model driven by ECMWF meteorological fields. The chemical transport model provides a detailed description of (stratospheric) transport and uses parameterisations for gas-phase and ozone hole chemistry. The MSR dataset results from a 30-year data assimilation run with the 14 corrected satellite datasets as input, and is available on a grid of 1×1½ degrees with a sample frequency of 6 h for the complete time period (1978–2008). The Observation-minus-Analysis (OmA) statistics show that the bias of the MSR analyses is less than 1 percent with an RMS standard deviation of about 2 percent as compared to the corrected satellite observations used.
21

van der A, R. J., M. A. F. Allaart, and H. J. Eskes. "Multi sensor reanalysis of total ozone." Atmospheric Chemistry and Physics 10, no. 22 (November 30, 2010): 11277–94. http://dx.doi.org/10.5194/acp-10-11277-2010.

Abstract:
Abstract. A single coherent total ozone dataset, called the Multi Sensor Reanalysis (MSR), has been created from all available ozone column data measured by polar orbiting satellites in the near-ultraviolet Huggins band in the last thirty years. Fourteen total ozone satellite retrieval datasets from the instruments TOMS (on the satellites Nimbus-7 and Earth Probe), SBUV (Nimbus-7, NOAA-9, NOAA-11 and NOAA-16), GOME (ERS-2), SCIAMACHY (Envisat), OMI (EOS-Aura), and GOME-2 (Metop-A) have been used in the MSR. As first step a bias correction scheme is applied to all satellite observations, based on independent ground-based total ozone data from the World Ozone and Ultraviolet Data Center. The correction is a function of solar zenith angle, viewing angle, time (trend), and effective ozone temperature. As second step data assimilation was applied to create a global dataset of total ozone analyses. The data assimilation method is a sub-optimal implementation of the Kalman filter technique, and is based on a chemical transport model driven by ECMWF meteorological fields. The chemical transport model provides a detailed description of (stratospheric) transport and uses parameterisations for gas-phase and ozone hole chemistry. The MSR dataset results from a 30-year data assimilation run with the 14 corrected satellite datasets as input, and is available on a grid of 1× 1 1/2° with a sample frequency of 6 h for the complete time period (1978–2008). The Observation-minus-Analysis (OmA) statistics show that the bias of the MSR analyses is less than 1% with an RMS standard deviation of about 2% as compared to the corrected satellite observations used.
22

Yakubu, Ibrahim Nandom, Aziza Hashi Abokor, and Iklim Gedik Balay. "Re-examining the impact of financial intermediation on economic growth: evidence from Turkey." Journal of Economics and Development 23, no. 2 (March 21, 2021): 116–27. http://dx.doi.org/10.1108/jed-09-2020-0139.

Abstract:
Purpose: This study seeks to investigate the impact of financial intermediation on economic growth in Turkey using annual data spanning 1970–2017. Design/methodology/approach: Based on the results of the augmented Dickey–Fuller and Phillips–Perron unit root tests for stationarity, the authors employ the Autoregressive Distributed Lag (ARDL) bounds testing approach to cointegration to establish the long-run impact of financial intermediation alongside other control factors on economic growth. The study also examines the short-run relationship between financial intermediation and economic growth by estimating the Error Correction Model (ECM). Findings: The authors' findings indicate that financial intermediation significantly influences economic growth in both the short and long run. However, the effect is positive only in the short run, lending support to the supply-leading hypothesis. Regarding the control variables, the authors observe that while financial openness shows a positive significant impact on economic growth in the long run, gross fixed capital formation matters only in the short run. The results further infer that, regardless of the time period, inflation impedes economic growth. Originality/value: In the empirical analysis of the relationship between financial intermediation and economic growth, financial intermediation is always measured using a single variable. The authors argue that such studies could produce biased and misleading results, given that a single proxy does not adequately reflect financial intermediation activities. Likewise, such findings may mislead policy implementation. To provide a more vivid and robust analysis, the authors employ Principal Component Analysis (PCA) to construct a composite index for financial intermediation based on three broad measures. The researchers are unaware of any study on the financial intermediation–economic growth nexus using a composite index of financial intermediation. Thus, this paper fills this lacuna in the literature.
23

Storto, Andrea, and Simona Masina. "C-GLORSv5: an improved multipurpose global ocean eddy-permitting physical reanalysis." Earth System Science Data 8, no. 2 (November 29, 2016): 679–96. http://dx.doi.org/10.5194/essd-8-679-2016.

Abstract:
Abstract. Global ocean reanalyses combine in situ and satellite ocean observations with a general circulation ocean model to estimate the time-evolving state of the ocean, and they represent a valuable tool for a variety of applications, ranging from climate monitoring and process studies to downstream applications, initialization of long-range forecasts and regional studies. The purpose of this paper is to document the recent upgrade of C-GLORS (version 5), the latest ocean reanalysis produced at the Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC) that covers the meteorological satellite era (1980–present) and it is being updated in delayed time mode. The reanalysis is run at eddy-permitting resolution (1∕4° horizontal resolution and 50 vertical levels) and consists of a three-dimensional variational data assimilation system, a surface nudging and a bias correction scheme. With respect to the previous version (v4), C-GLORSv5 contains a number of improvements. In particular, background- and observation-error covariances have been retuned, allowing a flow-dependent inflation in the globally averaged background-error variance. An additional constraint on the Arctic sea-ice thickness was introduced, leading to a realistic ice volume evolution. Finally, the bias correction scheme and the initialization strategy were retuned. Results document that the new reanalysis outperforms the previous version in many aspects, especially in representing the variability of global heat content and associated steric sea level in the last decade, the top 80 m ocean temperature biases and root mean square errors, and the Atlantic Ocean meridional overturning circulation; slight worsening in the high-latitude salinity and deep ocean temperature emerge though, providing the motivation for further tuning of the reanalysis system. The dataset is available in NetCDF format at doi:10.1594/PANGAEA.857995.
24

Ishaq, Maryam, Ghulam Ghouse, and Muhammad Ishaq Bhatti. "Another Prospective on Real Exchange Rate and the Traded Goods Prices: Revisiting Balassa–Samuelson Hypothesis." Sustainability 14, no. 13 (June 21, 2022): 7529. http://dx.doi.org/10.3390/su14137529.

Abstract:
This paper proposes a new variant and reinvestigates the validity of the Balassa–Samuelson (BS) hypothesis for nine East and South Asian countries under new specifications. The BS hypothesis is often criticized for one of its fundamental but oversimplified assumptions: that Purchasing Power Parity (PPP) holds for cross-country tradables' prices, implying that nontraded-sector prices are solely responsible for inducing trend deviations in the real exchange rate. The assumption, when empirically tested, does not always hold valid, revealing a price difference in tradables for Asian countries against the world (U.S.), a potential driver of their trend in real exchange rate deviations (appreciation). A new approach based on Fully Modified OLS (FMOLS) and Dynamic OLS (DOLS) is used to estimate the long-run BS coefficients, while the error correction mechanism is employed to obtain the short-run estimates. These results motivated us to allow for the inexistence of PPP for cross-country tradables; the standard form of the BS model is then tested in its relaxed form using time-series and panel data econometric tests. Despite a relaxing of the BS model in favor of tradables' price deviation from PPP, the results are not sufficiently supportive of the BS hypothesis. These findings hold strong economic implications for Asia, suggesting that intercountry sectoral productivity bias of regional economies with the world does not necessarily exert substantial effects on their long-run real exchange rates. Additionally, contrary to the core belief of the BS model, intercountry tradables' price differentials are found to substantially explain real exchange rate movements away from their long-run equilibrium.
25

Emad, Anas, and Lukas Siebicke. "True eddy accumulation – Part 2: Theory and experiment of the short-time eddy accumulation method." Atmospheric Measurement Techniques 16, no. 1 (January 4, 2023): 41–55. http://dx.doi.org/10.5194/amt-16-41-2023.

Abstract:
Abstract. A new variant of the eddy accumulation method for measuring atmospheric exchange is derived, and a prototype sampler is evaluated. The new method, termed short-time eddy accumulation (STEA), overcomes the requirement of fixed accumulation intervals in the true eddy accumulation method (TEA) and enables the sampling system to run in a continuous flow-through mode. STEA enables adaptive time-varying accumulation intervals, which improves the system's dynamic range and brings many advantages to flux measurement and calculation. The STEA method was successfully implemented and deployed to measure CO2 fluxes over an agricultural field in Braunschweig, Germany. The measured fluxes matched very well against a conventional eddy covariance system (slope of 1.04, R2 of 0.86). We provide a detailed description of the setup and operation of the STEA system in the continuous flow-through mode, devise an empirical correction for the effect of buffer volumes, and describe the important considerations for the successful operation of the STEA method. The STEA method reduces the bias and uncertainty in the measured fluxes compared to conventional TEA and creates new ways to design eddy accumulation systems with finer control over sampling and accumulation. The results encourage the application of STEA for measuring fluxes of more challenging atmospheric constituents such as reactive species. This paper is Part 2 of a two-part series on true eddy accumulation.
26

Tobin, Kenneth J., and Marvin E. Bennett. "Impact of model complexity and precipitation data products on modeled streamflow." Journal of Hydroinformatics 16, no. 3 (September 25, 2013): 588–99. http://dx.doi.org/10.2166/hydro.2013.056.

Abstract:
With the proliferation of remote sensing platforms as well as numerous ground products based on weather radar estimation, there are now multiple options for precipitation data beyond traditional rain gauges for which most hydrologic models were originally designed. This study evaluates four precipitation products as input for generating streamflow simulations using two hydrologic models that significantly vary in complexity. The four precipitation products include two ground products from the National Weather Service: the Multi-sensor Precipitation Estimator (MPE) and rain gauge data. The two satellite products come from NASA's Tropical Rainfall Measurement Mission (TRMM) and include the TRMM 3B42 Research Version 6, which has a built-in ground bias correction, and the real-time TRMM Multi-Satellite Precipitation Analysis. The two hydrologic models utilized include the Soil and Water Assessment Tool (SWAT) and Gridded Surface and Subsurface Hydrologic Analysis (GSSHA). Simulations were conducted in three, moderate- to large-sized basins across the southern United States, the San Casimiro (South Texas), Skuna (northern Mississippi), Alapaha (southern Georgia), and were run for over 2 years. This study affirms the realization that input precipitation is at least as important as the choice of hydrologic model.
27

Bianco, Laura, Irina V. Djalalova, James M. Wilczak, Joseph B. Olson, Jaymes S. Kenyon, Aditya Choukulkar, Larry K. Berg, et al. "Impact of model improvements on 80 m wind speeds during the second Wind Forecast Improvement Project (WFIP2)." Geoscientific Model Development 12, no. 11 (November 21, 2019): 4803–21. http://dx.doi.org/10.5194/gmd-12-4803-2019.

Abstract:
Abstract. During the second Wind Forecast Improvement Project (WFIP2; October 2015–March 2017, held in the Columbia River Gorge and Basin area of eastern Washington and Oregon states), several improvements to the parameterizations used in the High Resolution Rapid Refresh (HRRR – 3 km horizontal grid spacing) and the High Resolution Rapid Refresh Nest (HRRRNEST – 750 m horizontal grid spacing) numerical weather prediction (NWP) models were tested during four 6-week reforecast periods (one for each season). For these tests the models were run in control (CNT) and experimental (EXP) configurations, with the EXP configuration including all the improved parameterizations. The impacts of the experimental parameterizations on the forecast of 80 m wind speeds (wind turbine hub height) from the HRRR and HRRRNEST models are assessed, using observations collected by 19 sodars and three profiling lidars for comparison. Improvements due to the experimental physics (EXP vs. CNT runs) and those due to finer horizontal grid spacing (HRRRNEST vs. HRRR) and the combination of the two are compared, using standard bulk statistics such as mean absolute error (MAE) and mean bias error (bias). On average, the HRRR 80 m wind speed MAE is reduced by 3 %–4 % due to the experimental physics. The impact of the finer horizontal grid spacing in the CNT runs also shows a positive improvement of 5 % on MAE, which is particularly large at nighttime and during the morning transition. Lastly, the combined impact of the experimental physics and finer horizontal grid spacing produces larger improvements in the 80 m wind speed MAE, up to 7 %–8 %. The improvements are evaluated as a function of the model's initialization time, forecast horizon, time of the day, season of the year, site elevation, and meteorological phenomena. Causes of model weaknesses are identified. Finally, bias correction methods are applied to the 80 m wind speed model outputs to measure their impact on the improvements due to the removal of the systematic component of the errors.
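
The bulk statistics used throughout this comparison (mean absolute error and mean bias error), and the simplest possible bias correction built on them, look like this in code; the wind speed values below are invented, and removing the mean bias in-sample is only for illustration.

import numpy as np

def bulk_stats(forecast, observed):
    """Mean absolute error and mean bias error of a forecast series."""
    err = np.asarray(forecast) - np.asarray(observed)
    return np.mean(np.abs(err)), np.mean(err)

# Hypothetical 80 m wind speeds (m/s) from a sodar and a model.
obs = np.array([6.1, 7.4, 8.0, 5.2, 9.3, 10.1, 7.7])
fcst = np.array([6.9, 8.1, 8.4, 6.0, 9.9, 11.2, 8.3])

mae, bias = bulk_stats(fcst, obs)
print(f"MAE={mae:.2f} m/s, bias={bias:.2f} m/s")

# The simplest correction: subtract the mean bias estimated on past cases.
mae_c, bias_c = bulk_stats(fcst - bias, obs)
print(f"after mean-bias removal: MAE={mae_c:.2f} m/s, bias={bias_c:.2f} m/s")
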
28

Lee, Seunghee, Ganghan Kim, Myong-In Lee, Yonghan Choi, Chang-Keun Song, and Hyeon-Kook Kim. "Seasonal Dependence of Aerosol Data Assimilation and Forecasting Using Satellite and Ground-Based Observations." Remote Sensing 14, no. 9 (April 28, 2022): 2123. http://dx.doi.org/10.3390/rs14092123.

Abstract:
This study examines the performance of a data assimilation and forecasting system that simultaneously assimilates satellite aerosol optical depth (AOD) and ground-based PM10 and PM2.5 observations into the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem). The data assimilation case for the surface PM10 and PM2.5 concentrations exhibits a higher consistency with the observed data by showing more correlation coefficients than the no-assimilation case. The data assimilation also shows beneficial impacts on the PM10 and PM2.5 forecasts for South Korea for up to 24 h from the updated initial condition. This study also finds deficiencies in data assimilation and forecasts, as the model shows a pronounced seasonal dependence of forecasting accuracy, on which the seasonal changes in regional atmospheric circulation patterns have a significant impact. In spring, the forecast accuracy decreases due to large uncertainties in natural dust transport from the continent by north-westerlies, while the model performs reasonably well in terms of anthropogenic emission and transport in winter. When the south-westerlies prevail in summer, the forecast accuracy increases with the overall reduction in ambient concentration. The forecasts also show significant accuracy degradation as the lead time increases because of systematic model biases. A simple statistical correction that adjusts the mean and variance of the forecast outputs to resemble those in the observed distribution can maintain the forecast skill at a practically useful level for lead times of more than a day. For a categorical forecast, the skill score of the data assimilation run increased by up to 37% compared to that of the case with no assimilation, and the skill score was further improved by 10% through bias correction.
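
The "simple statistical correction that adjusts the mean and variance of the forecast outputs" can be sketched as a standard mean/variance rescaling against a training sample of observations. The series below are synthetic stand-ins for PM2.5, and the split into training data and new forecasts is assumed.

import numpy as np

def mean_variance_adjust(forecast, obs_train, fcst_train):
    """Rescale forecasts so their mean and standard deviation match the
    observed training distribution (simple mean/variance adjustment)."""
    z = (forecast - fcst_train.mean()) / fcst_train.std(ddof=1)
    return obs_train.mean() + z * obs_train.std(ddof=1)

rng = np.random.default_rng(6)
obs_train = rng.gamma(4.0, 10.0, 365)                     # observed PM2.5, ug/m3 (synthetic)
fcst_train = 0.6 * obs_train + rng.normal(0, 5, 365)      # model is low-biased and too smooth
new_fcst = np.array([28.0, 45.0, 60.0, 15.0])
print(mean_variance_adjust(new_fcst, obs_train, fcst_train).round(1))
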
29

Barron, Charlie N., A. Birol Kara, Harley E. Hurlburt, C. Rowley, and Lucy F. Smedstad. "Sea Surface Height Predictions from the Global Navy Coastal Ocean Model during 1998–2001*." Journal of Atmospheric and Oceanic Technology 21, no. 12 (December 1, 2004): 1876–93. http://dx.doi.org/10.1175/jtech-1680.1.

Abstract:
Abstract A ⅛° global version of the Navy Coastal Ocean Model (NCOM), operational at the Naval Oceanographic Office (NAVOCEANO), is used for prediction of sea surface height (SSH) on daily and monthly time scales during 1998–2001. Model simulations that use 3-hourly wind and thermal forcing obtained from the Navy Operational Global Atmospheric Prediction System (NOGAPS) are performed with/without data assimilation to examine indirect/direct effects of atmospheric forcing in predicting SSH. Model–data evaluations are performed using the extensive database of daily averaged SSH values from tide gauges in the Atlantic, Pacific, and Indian Oceans obtained from the Joint Archive for Sea Level (JASL) center during 1998–2001. Model–data comparisons are based on observations from 282 tide gauge locations. An inverse barometer correction was applied to SSH time series from tide gauges for model–data comparisons, and a sensitivity study is undertaken to assess the impact of the inverse barometer correction on the SSH validation. A set of statistical metrics that includes conditional bias (Bcond), root-mean-square (rms) difference, correlation coefficient (R), and nondimensional skill score (SS) is used to evaluate the model performance. It is shown that global NCOM has skill in representing SSH even in a free-running simulation, with general improvement when SSH from satellite altimetry and sea surface temperature (SST) from satellite IR are assimilated via synthetic temperature and salinity profiles derived from climatological correlations. When the model was run from 1998 to 2001 with NOGAPS forcing, daily model SSH comparisons from 612 yearlong daily tide gauge time series gave a median rms difference of 5.98 cm (5.77 cm), an R value of 0.72 (0.76), and an SS value of 0.45 (0.51) for the ⅛° free-running (assimilative) NCOM. Similarly, error statistics based on the 30-day running averages of SSH time series for 591 yearlong daily tide gauge time series over the time frame 1998–2001 give a median rms difference of 3.63 cm (3.36 cm), an R value of 0.83 (0.85), and an SS value of 0.60 (0.64) for the ⅛° free-running (assimilated) NCOM. Model– data comparisons show that skill in 30-day running average SSH time series is as much as 30% higher than skill for daily SSH. Finally, SSH predictions from the free-running and assimilative ⅛° NCOM simulations are validated against sea level data from the tide gauges in two different ways: 1) using original detided sea level time series from tide gauges and 2) using the detided data with an inverse barometer correction derived using daily mean sea level pressure extracted from NOGAPS at each location. Based on comparisons with 612 yearlong daily tide gauge time series during 1998–2001, the inverse barometer correction lowered the median rms difference by about 1 cm (15%–20%). Results presented in this paper reveal that NCOM is able to predict SSH with reasonable accuracies, as evidenced by model simulations performed during 1998–2001. In an extension of the validation over broader ocean regions, the authors find good agreement in amplitude and distribution of SSH variability between NCOM and other operational model products.
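
The inverse barometer correction applied to the tide-gauge series follows the usual hydrostatic relation eta_IB = -(p - p_ref) / (rho * g), roughly -1 cm of sea level per hPa of pressure above the reference. The sketch below applies it to made-up detided sea level and sea level pressure values; the seawater density and the use of the time-mean pressure as reference are assumptions.

import numpy as np

RHO_SEA = 1025.0   # kg m-3, assumed seawater density
G = 9.81           # m s-2

def inverse_barometer(sea_level_m, slp_hpa, p_ref_hpa=None):
    """Remove the inverse-barometer signal from detided tide-gauge sea level so
    it can be compared with a model that has no atmospheric pressure loading."""
    slp = np.asarray(slp_hpa, dtype=float)
    p_ref = slp.mean() if p_ref_hpa is None else p_ref_hpa
    ib = -(slp - p_ref) * 100.0 / (RHO_SEA * G)   # hPa -> Pa, IB sea level in metres
    return np.asarray(sea_level_m) - ib

# Hypothetical daily detided sea level (m) and NOGAPS-style sea level pressure (hPa).
eta = np.array([0.12, 0.05, -0.03, 0.08])
slp = np.array([1008.0, 1013.0, 1021.0, 1011.0])
print(inverse_barometer(eta, slp).round(3))
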
30

Mooring, Todd A., Isaac M. Held, and R. John Wilson. "Effects of the Mean Flow on Martian Transient Eddy Activity: Studies with an Idealized General Circulation Model." Journal of the Atmospheric Sciences 76, no. 8 (July 16, 2019): 2375–97. http://dx.doi.org/10.1175/jas-d-18-0247.1.

Abstract:
Abstract The extent to which the eddy statistics of the Martian atmosphere can be inferred from the mean state and highly simplified assumptions about diabatic and frictional processes is investigated using an idealized general circulation model (GCM) with Newtonian relaxation thermal forcing. An iterative technique, adapted from previous terrestrial studies, is used to generate radiative equilibrium temperatures such that the three-dimensional time-mean temperature fields of the idealized model match means computed from the Mars Analysis Correction Data Assimilation (MACDA). Focusing on a period of strong Northern Hemisphere eddy activity prior to winter solstice, it is found that the idealized model reproduces some key features of the spatial patterns of the MACDA eddy temperature variance and kinetic energy fields. The idealized model can also simulate aspects of MACDA’s seasonal cycle of spatial patterns of low-level eddy meridional wind and temperature variances. The most notable weakness of the model is its eddy amplitudes—both their absolute values and seasonal variations are quite unrealistic, for reasons unclear. The idealized model was also run with a mean flow based on output from the Geophysical Fluid Dynamics Laboratory (GFDL) full-physics Mars GCM. The idealized model captures the difference in mean flows between MACDA and the GFDL Mars GCM and reproduces a bias in the more complex model’s eddy zonal wavenumber distribution. This implies that the mean flow is an important influence on transient eddy wavenumbers and that improving the GFDL Mars GCM’s mean flow would make its eddy scales more realistic.
31

Marçal, Emerson Fernandes, and Eli Hadad Junior. "Is It Possible to Beat the Random Walk Model in Exchange Rate Forecasting? More Evidence for Brazilian Case." Brazilian Review of Finance 14, no. 1 (April 22, 2016): 65. http://dx.doi.org/10.12660/rbfin.v14n1.2016.59329.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract The seminal study of Meese et al. (1983) on exchange rate forecastability had a great impact on the international finance literature. The authors showed that exchange rate forecasts based on structural models are worse than a naive random walk. This result is known as the Meese--Rogoff (MR) puzzle. Although the validity of this result has been checked for many currencies, studies for the Brazilian currency are not common. In 1999, Brazil adopted the dirty floating exchange rate regime. Rossi (2013) ran an extensive study on the MR puzzle but did not analyse Brazilian data. Our goal is to run a “pseudo real-time experiment” to investigate whether forecasts based on econometric models that use the fundamentals suggested by the exchange rate monetary theory of the 80s can beat the random model for the case of the Brazilian currency. Our work has three main differences with respect to Rossi (2013). We use a bias correction technique and forecast combination in an attempt to improve the forecast accuracy of our projections. We also combine the random walk projections with the projections of the structural models to investigate if it is possible to further improve the accuracy of the random walk forecasts. However, our results are quite in line with Rossi (2013). We show that it is not difficult to beat the forecasts generated by the random walk with drift using Brazilian data, but that it is quite difficult to beat the random walk without drift. Our results suggest that it is advisable to use the random walk without drift, not only the random walk with drift, as a benchmark in exercises that claim the MR result is not valid.
32

Batté, Lauriane, and Michel Déqué. "Randomly correcting model errors in the ARPEGE-Climate v6.1 component of CNRM-CM: applications for seasonal forecasts." Geoscientific Model Development 9, no. 6 (June 7, 2016): 2055–76. http://dx.doi.org/10.5194/gmd-9-2055-2016.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract. Stochastic methods are increasingly used in global coupled model climate forecasting systems to account for model uncertainties. In this paper, we describe in more detail the stochastic dynamics technique introduced by Batté and Déqué (2012) in the ARPEGE-Climate atmospheric model. We present new results with an updated version of CNRM-CM using ARPEGE-Climate v6.1, and show that the technique can be used both as a means of analyzing model error statistics and accounting for model inadequacies in a seasonal forecasting framework. The perturbations are designed as corrections of model drift errors estimated from a preliminary weakly nudged re-forecast run over an extended reference period of 34 boreal winter seasons. A detailed statistical analysis of these corrections is provided, and shows that they are mainly made of intra-month variance, thereby justifying their use as in-run perturbations of the model in seasonal forecasts. However, the interannual and systematic error correction terms cannot be neglected. Time correlation of the errors is limited, but some consistency is found between the errors of up to 3 consecutive days. These findings encourage us to test several settings of the random draws of perturbations in seasonal forecast mode. Perturbations are drawn randomly but consistently for all three prognostic variables perturbed. We explore the impact of using monthly mean perturbations throughout a given forecast month in a first ensemble re-forecast (SMM, for stochastic monthly means), and test the use of 5-day sequences of perturbations in a second ensemble re-forecast (S5D, for stochastic 5-day sequences). Both experiments are compared in the light of a REF reference ensemble with initial perturbations only. Results in terms of forecast quality are contrasted depending on the region and variable of interest, but very few areas exhibit a clear degradation of forecasting skill with the introduction of stochastic dynamics. We highlight some positive impacts of the method, mainly on Northern Hemisphere extra-tropics. The 500 hPa geopotential height bias is reduced, and improvements project onto the representation of North Atlantic weather regimes. A modest impact on ensemble spread is found over most regions, which suggests that this method could be complemented by other stochastic perturbation techniques in seasonal forecasting mode.
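The two perturbation settings compared in the abstract (SMM and S5D) can be pictured with the following sketch, in which in-run corrections are drawn from an archive of daily drift-correction terms. The archive layout, variable handling and random-number details are illustrative assumptions, not the ARPEGE-Climate implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def draw_monthly_mean(archive):
    """SMM-like draw: pick one archived year at random and apply its
    monthly-mean correction on every day of the forecast month.
    archive has shape (n_years, n_days, ...) for one calendar month."""
    year = rng.integers(archive.shape[0])
    return archive[year].mean(axis=0)

def draw_5day_sequences(archive, n_days):
    """S5D-like draw: build the forecast month from consecutive 5-day
    sequences of archived corrections, drawn at random."""
    n_years, n_arch_days = archive.shape[:2]
    chunks = []
    while 5 * len(chunks) < n_days:
        year = rng.integers(n_years)
        start = rng.integers(n_arch_days - 4)
        chunks.append(archive[year, start:start + 5])
    # the same draws would be applied consistently to all perturbed variables
    return np.concatenate(chunks)[:n_days]

# Example with a toy archive of 34 winters x 90 days on a small grid
archive = rng.normal(size=(34, 90, 16, 32))
smm = draw_monthly_mean(archive[:, :30])
s5d = draw_5day_sequences(archive, n_days=30)
```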
33

Cheevaprasert, Sirikanya, Rajeshwar Mehrotra, Sansarith Thianpopirug, and Nutchanart Sriwongsitanon. "An Evaluation of Statistical Downscaling Techniques for Simulating Daily Rainfall Occurrences in the Upper Ping River Basin." Hydrology 7, no. 3 (September 2, 2020): 63. http://dx.doi.org/10.3390/hydrology7030063.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study presents an exhaustive evaluation of the performance of three statistical downscaling techniques for generating daily rainfall occurrences at 22 rainfall stations in the upper Ping river basin (UPRB), Thailand. The three downscaling techniques considered are the modified Markov model (MMM), a stochastic model, and two variants of regression models, statistical models, one with single relationship for all days of the year (RegressionYrly) and the other with individual relationships for each of the 366 days (Regression366). A stepwise regression is applied to identify the significant atmospheric (ATM) variables to be used as predictors in the downscaling models. Aggregated wetness state indicators (WIs), representing the recent past wetness state for the previous 30, 90 or 365 days, are also considered as additional potential predictors since they have been effectively used to represent the low-frequency variability in the downscaled sequences. Grouping of ATM and all possible combinations of WI is used to form eight predictor sets comprising ATM, ATM-WI30, ATM-WI90, ATM-WI365, ATM-WI30&90, ATM-WI30&365, ATM-WI90&365 and ATM-WI30&90&365. These eight predictor sets were used to run the three downscaling techniques to create 24 combination cases. These cases were first applied at each station individually (single site simulation) and thereafter collectively at all sites (multisite simulations) following multisite downscaling models leading to 48 combination cases in total that were run and evaluated. The downscaling models were calibrated using atmospheric variables from the National Centers for Environmental Prediction (NCEP) reanalysis database and validated using representative General Circulation Models (GCM) data. Identification of meaningful predictors to be used in downscaling, calibration and setting up of downscaling models, running all 48 possible predictor combinations and a thorough evaluation of results required considerable efforts and knowledge of the research area. The validation results show that the use of WIs remarkably improves the accuracy of downscaling models in terms of simulation of standard deviations of annual, monthly and seasonal wet days. By comparing the overall performance of the three downscaling techniques keeping common sets of predictors, MMM provides the best results of the simulated wet and dry spells as well as the standard deviation of monthly, seasonal and annual wet days. These findings are consistent across both single site and multisite simulations. Overall, the MMM multisite model with ATM and wetness indicators provides the best results. Upon evaluating the combinations of ATM and sets of wetness indicators, ATM-WI30&90 and ATM-WI30&365 were found to perform well during calibration in reproducing the overall rainfall occurrence statistics while ATM-WI30&365 was found to significantly improve the accuracy of monthly wet spells over the region. However, these models perform poorly during validation at annual time scale. The use of multi-dimension bias correction approaches is recommended for future research.
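A minimal sketch of how the aggregated wetness state indicators used above can be constructed is given below; the wet-day threshold and naming are illustrative, and the actual MMM and regression formulations in the study are considerably richer.

```python
import pandas as pd

def wetness_indicators(daily_rain_mm, threshold=0.1):
    """Aggregated wetness state indicators: wet-day totals over the previous
    30, 90 and 365 days, shifted by one day so only past information enters
    the predictor set (e.g., to build ATM-WI30&90 or ATM-WI30&365)."""
    rain = pd.Series(daily_rain_mm)
    wet = (rain > threshold).astype(float)
    return pd.DataFrame(
        {f"WI{w}": wet.rolling(window=w).sum().shift(1) for w in (30, 90, 365)}
    )
```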
34

Eldering, Annmarie, Thomas E. Taylor, Christopher W. O'Dell, and Ryan Pavlick. "The OCO-3 mission: measurement objectives and expected performance based on 1 year of simulated data." Atmospheric Measurement Techniques 12, no. 4 (April 15, 2019): 2341–70. http://dx.doi.org/10.5194/amt-12-2341-2019.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract. The Orbiting Carbon Observatory-3 (OCO-3) is NASA's next instrument dedicated to extending the record of the dry-air mole fraction of column carbon dioxide (XCO2) and solar-induced fluorescence (SIF) measurements from space. The current schedule calls for a launch from the Kennedy Space Center no earlier than April 2019 via a Space-X Falcon 9 and Dragon capsule. The instrument will be installed as an external payload on the Japanese Experimental Module Exposed Facility (JEM-EF) of the International Space Station (ISS) with a nominal mission lifetime of 3 years. The precessing orbit of the ISS will allow for viewing of the Earth at all latitudes less than approximately 52°, with a ground repeat cycle that is much more complicated than the polar-orbiting satellites that so far have carried all of the instruments capable of measuring carbon dioxide from space. The grating spectrometer at the core of OCO-3 is a direct copy of the OCO-2 spectrometer, which was launched into a polar orbit in July 2014. As such, OCO-3 is expected to have similar instrument sensitivity and performance characteristics to OCO-2, which provides measurements of XCO2 with precision better than 1 ppm at 3 Hz, with each viewing frame containing eight footprints approximately 1.6 km by 2.2 km in size. However, the physical configuration of the instrument aboard the ISS, as well as the use of a new pointing mirror assembly (PMA), will alter some of the characteristics of the OCO-3 data compared to OCO-2. Specifically, there will be significant differences from day to day in the sampling locations and time of day. In addition, the flexible PMA system allows for a much more dynamic observation-mode schedule. This paper outlines the science objectives of the OCO-3 mission and, using a simulation of 1 year of global observations, characterizes the spatial sampling, time-of-day coverage, and anticipated data quality of the simulated L1b. After application of cloud and aerosol prescreening, the L1b radiances are run through the operational L2 full physics retrieval algorithm, as well as post-retrieval filtering and bias correction, to examine the expected coverage and quality of the retrieved XCO2 and to show how the measurement objectives are met. In addition, results of the SIF from the IMAP–DOAS algorithm are analyzed. This paper focuses only on the nominal nadir–land and glint–water observation modes, although on-orbit measurements will also be made in transition and target modes, similar to OCO-2, as well as the new snapshot area mapping (SAM) mode.
35

Fouotsa Manfouo, Noé Carème, Linke Potgieter, Andrew Watson, and Johanna H. Nel. "A Comparison of the Statistical Downscaling and Long-Short-Term-Memory Artificial Neural Network Models for Long-Term Temperature and Precipitations Forecasting." Atmosphere 14, no. 4 (April 12, 2023): 708. http://dx.doi.org/10.3390/atmos14040708.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
General circulation models (GCMs) run at regional resolution or at a continental scale. Therefore, their results cannot be used directly for local temperature and precipitation prediction. Downscaling techniques are required to calibrate GCMs. Statistical downscaling models (SDSM) are the most widely used for bias correction of GCMs. However, few studies have compared SDSM with multi-layer perceptron artificial neural networks, and in most of these studies the results indicate that SDSM outperforms other approaches. This paper investigates an alternative architecture of neural networks, namely the long-short-term memory (LSTM), to forecast two critical climate variables, namely temperature and precipitation, with an application to five climate gauging stations in the Lake Chad Basin. Lake Chad is a data-scarce area which has been impacted by severe drought, where water resources have been influenced by climate change and recent agricultural expansion. SDSM was used as the benchmark in this paper for temperature and precipitation downscaling for monthly time-scale weather prediction, using grid-resolution GCM output on a 5 degrees latitude × 5 degrees longitude global grid. Three performance indicators were used in this study, namely the root mean square error (RMSE), to measure the sensitivity of the model to outliers, the mean absolute percentage error (MAPE), to estimate the overall performance of the predictions, as well as the Nash–Sutcliffe efficiency (NSE), which is a standard measure used in the field of climate forecasting. Results on the validation set for SDSM and the test set for LSTM indicated that LSTM produced better accuracy on average compared to SDSM. For precipitation forecasting, the average RMSE and MAPE for LSTM were 33.21 mm and 24.82% respectively, while the average RMSE and MAPE for SDSM were 53.32 mm and 34.62% respectively. In terms of three-year-ahead minimum temperature forecasts, LSTM presents an average RMSE of 4.96 degrees Celsius and an average MAPE of 27.16%, while SDSM presents an average RMSE of 8.58 degrees Celsius and an average MAPE of 12.83%. For maximum temperature forecasts, LSTM presents an average RMSE of 4.27 degrees Celsius and an average MAPE of 11.09%, while SDSM presents an average RMSE of 9.93 degrees Celsius and an average MAPE of 12.07%. Given the results, LSTM may be a suitable alternative approach to downscale the output of global climate simulation models, improving water management and long-term temperature and precipitation forecasting at the local level.
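The three verification scores quoted above have standard definitions; a compact sketch (assumed standard forms, not code from the study) is:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error, in the units of the variable."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def mape(obs, sim):
    """Mean absolute percentage error, in percent (undefined where obs == 0)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs((sim - obs) / obs))

def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```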
36

Kinnard, Christophe, Olivier Larouche, Michael N. Demuth, and Brian Menounos. "Modelling glacier mass balance and climate sensitivity in the context of sparse observations: application to Saskatchewan Glacier, western Canada." Cryosphere 16, no. 8 (August 2, 2022): 3071–99. http://dx.doi.org/10.5194/tc-16-3071-2022.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract. Glacier mass balance models are needed at sites with scarce long-term observations to reconstruct past glacier mass balance and assess its sensitivity to future climate change. In this study, North American Regional Reanalysis (NARR) data were used to force a physically based, distributed glacier mass balance model of Saskatchewan Glacier for the historical period 1979–2016 and assess its sensitivity to climate change. A 2-year record (2014–2016) from an on-glacier automatic weather station (AWS) and historical precipitation records from nearby permanent weather stations were used to downscale air temperature, relative humidity, wind speed, incoming solar radiation and precipitation from the NARR to the station sites. The model was run with fixed (1979, 2010) and time-varying (dynamic) geometry using a multitemporal digital elevation model dataset. The model showed a good performance against recent (2012–2016) direct glaciological mass balance observations as well as with cumulative geodetic mass balance estimates. The simulated mass balance was not very sensitive to the NARR spatial interpolation method, as long as station data were used for bias correction. The simulated mass balance was however sensitive to the biases in NARR precipitation and air temperature, as well as to the prescribed precipitation lapse rate and ice aerodynamic roughness lengths, showing the importance of constraining these two parameters with ancillary data. The glacier-wide simulated energy balance regime showed a large contribution (57 %) of turbulent (sensible and latent) heat fluxes to melting in summer, higher than typical mid-latitude glaciers in continental climates, which reflects the local humid “icefield weather” of the Columbia Icefield. The static mass balance sensitivity to climate was assessed for prescribed changes in regional mean air temperature between 0 and 7 °C and precipitation between −20 % and +20 %, which comprise the spread of ensemble Representative Concentration Pathway (RCP) climate scenarios for the mid (2041–2070) and late (2071–2100) 21st century. The climate sensitivity experiments showed that future changes in precipitation would have a small impact on glacier mass balance, while the temperature sensitivity increases with warming, from −0.65 to −0.93 m w.e. a⁻¹ °C⁻¹. The mass balance response to warming was driven by a positive albedo feedback (44 %), followed by direct atmospheric warming impacts (24 %), a positive air humidity feedback (22 %) and a positive precipitation phase feedback (10 %). Our study underlines the key role of albedo and air humidity in modulating the response of winter-accumulation type mountain glaciers and upland icefield-outlet glacier settings to climate.
37

Brocca, Luca, Paolo Filippucci, Sebastian Hahn, Luca Ciabatta, Christian Massari, Stefania Camici, Lothar Schüller, Bojan Bojkov, and Wolfgang Wagner. "SM2RAIN–ASCAT (2007–2018): global daily satellite rainfall data from ASCAT soil moisture observations." Earth System Science Data 11, no. 4 (October 22, 2019): 1583–601. http://dx.doi.org/10.5194/essd-11-1583-2019.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract. Long-term gridded precipitation products are crucial for several applications in hydrology, agriculture and climate sciences. Currently available precipitation products suffer from space and time inconsistency due to the non-uniform density of ground networks and the difficulties in merging multiple satellite sensors. The recent “bottom-up” approach that exploits satellite soil moisture observations for estimating rainfall through the SM2RAIN (Soil Moisture to Rain) algorithm is suited to build a consistent rainfall data record as a single polar orbiting satellite sensor is used. Here we exploit the Advanced SCATterometer (ASCAT) on board three Meteorological Operational (MetOp) satellites, launched in 2006, 2012, and 2018, as part of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Polar System programme. The continuity of the scatterometer sensor is ensured until the mid-2040s through the MetOp Second Generation Programme. Therefore, by applying the SM2RAIN algorithm to ASCAT soil moisture observations, a long-term rainfall data record will be obtained, starting in 2007 and lasting until the mid-2040s. The paper describes the recent improvements in data pre-processing, SM2RAIN algorithm formulation, and data post-processing for obtaining the SM2RAIN–ASCAT quasi-global (only over land) daily rainfall data record at a 12.5 km spatial sampling from 2007 to 2018. The quality of the SM2RAIN–ASCAT data record is assessed on a regional scale through comparison with high-quality ground networks in Europe, the United States, India, and Australia. Moreover, an assessment on a global scale is provided by using the triple-collocation (TC) technique allowing us also to compare these data with the latest, fifth-generation European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA5), the Early Run version of the Integrated Multi-Satellite Retrievals for Global Precipitation Measurement (IMERG), and the gauge-based Global Precipitation Climatology Centre (GPCC) products. Results show that the SM2RAIN–ASCAT rainfall data record performs relatively well at both a regional and global scale, mainly in terms of root mean square error (RMSE) when compared to other products. Specifically, the SM2RAIN–ASCAT data record provides performance better than IMERG and GPCC in data-scarce regions of the world, such as Africa and South America. In these areas, we expect larger benefits in using SM2RAIN–ASCAT for hydrological and agricultural applications. The limitations of the SM2RAIN–ASCAT data record consist of the underestimation of peak rainfall events and the presence of spurious rainfall events due to high-frequency soil moisture fluctuations that might be corrected in the future with more advanced bias correction techniques. The SM2RAIN–ASCAT data record is freely available at https://doi.org/10.5281/zenodo.3405563 (Brocca et al., 2019) (recently extended to the end of August 2019).
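The "bottom-up" idea behind SM2RAIN is an inversion of the soil-water balance: rainfall is inferred from the rise of relative soil moisture plus a bulk loss term. A minimal sketch follows; the parameter values are placeholders (in practice they are calibrated against a reference rainfall product), and the operational algorithm includes pre- and post-processing steps not reproduced here.

```python
import numpy as np

def sm2rain(s, dt_days=1.0, Z=80.0, a=10.0, b=1.5):
    """Minimal SM2RAIN-type inversion of the soil-water balance:
        p(t) ≈ Z * ds/dt + a * s(t)**b
    where s is relative saturation (0-1), Z an effective soil water capacity
    in mm, and a*s**b a bulk drainage/loss term (mm per day)."""
    s = np.asarray(s, float)
    p = Z * np.gradient(s, dt_days) + a * s ** b
    return np.clip(p, 0.0, None)  # negative estimates are treated as no rain
```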
38

Landgraf, Steven, and Abdur Chowdhury. "Factoring emerging markets into the relationship between global liquidity and commodities." Journal of Economic Studies 42, no. 4 (September 14, 2015): 622–40. http://dx.doi.org/10.1108/jes-11-2013-0171.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Purpose – What caused the mid-2000s world commodity price “bubble” and the recent commodity price growth? Some have suggested that rapid global industrial growth over the past decade is the key driver of price growth. Others have argued that high commodity prices are a result of excessively loose monetary policy. The purpose of this paper is to extend the current research in this area by incorporating emerging economies, the BRIC (Brazil, Russia, India, and China) nations specifically, into global measures. Design/methodology/approach – The paper uses a vector error correction (VEC) model and computes variance decomposition and impulse response functions (IRFs). Findings – The empirical analysis suggest that the “demand channel” plays a large part in explaining commodity price growth whether BRIC countries are included or excluded from the analysis. However, excess liquidity may also play a part in explaining price growth. In addition, factoring in BRIC country data leads to the conclusion that unexpected movements in liquidity eventually explain more of the variation in commodity prices than unexpected demand shocks. This specific result is not caught in the sample that only incorporates advanced economies. Research limitations/implications – Despite the theory of Frankel (1986) and the findings of previous global vector autoregression (VAR)/VEC analyses, interest rates, especially shocks, have a minimal impact on consumer and commodity prices. Perhaps future studies should include an interest rate in their analysis that more closely reflects interest rates associated with information used by commodity consumers, producers, and investors. Some analyses such as Hua (1998) use the LIBOR rate, which is highly associated with developed financial markets in the advanced economies. Data quality and availability in the BRIC countries severely limited the length of the time period analyzed and the frequency of the data. Finding longer sample periods or higher frequency data can help to minimize bias in future research. In this paper, monetary aggregates and short-term interest rates were loosely connected to monetary policy. It would also be interesting to directly examine how special programs like quantitative easing influenced global liquidity. Practical implications – The results of the IRFs and variance decompositions confirm some of the previous findings reported in Belke et al. (2010), Hua (1998), and Swaray (2008) that suggest that positive shocks to liquidity positively impact commodity prices. In particular, both samples suggest that this is a short-run impact that occurs after two quarters. However, in the sample that includes information about liquidity from BRIC countries, excess liquidity positively affects commodity prices after six and seven quarters as well. The insignificant results of Granger causality tests of the effect of monetary variables on commodity prices suggests that this relationship is limited to movements in liquidity that is unexpected by agents in the system. These “shocks” could be attributed to a number of factors including exogenous monetary policy changes such as the unprecedented responses by the Federal Reserve during and after the 2008 global financial crisis. Social implications – First, empirical research that claims to analyze relationships at a “global” level needs to account for the growing influence of emerging economies and not simply the advanced economies. 
Otherwise, results may be biased as they were when too much of the forecast error variance in commodity prices was attributed to shocks to output when it should have been attributed to shocks to excess liquidity. Second, those who criticize expansionary monetary policy in the advanced countries, especially by the Federal Reserve, for pushing up commodity prices should also direct their attention toward monetary authorities elsewhere, especially the BRIC countries, since information on excess liquidity from these countries adds to the influence that global excess liquidity has on commodity prices. Third, monetary policymakers in the advanced countries need to closely monitor liquidity in the BRIC countries, since the discrepancies between the ALL and ADV samples suggests that BRIC excess liquidity affects commodity prices in a way that cannot be captured by examining advanced country data alone. Originality/value – No other paper in this area looked at the BRIC countries.
39

Qin, Jieye, Christopher J. Green, and Kavita Sirichand. "Spot–Futures Price Adjustments in the Nikkei 225: Linear or Smooth Transition? Financial Centre Leadership or Home Bias?" Journal of Risk and Financial Management 16, no. 2 (February 12, 2023): 117. http://dx.doi.org/10.3390/jrfm16020117.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This paper studies price discovery in Nikkei 225 markets through the nonlinear smooth transition price adjustments between spot and future prices and across all three futures markets. We test for smooth transition nonlinearity and employ an exponential smooth transition error correction model (ESTECM) with exponential generalised autoregressive conditional heteroscedasticity (EGARCH), allowing for the effects of transaction costs, heterogeneity, and asymmetry in Nikkei price adjustments. We show that the ESTECM-EGARCH is the appropriate model as it offers new insights into Nikkei price dynamics and information transmission across international markets. For spot–futures price dynamics, we find that futures led spot prices before the crisis, but spot prices led afterwards. This can be explained by the lower level of heterogeneity in the underlying spot transaction costs after the crisis. For cross-border futures prices, the foreign exchanges (Chicago and Singapore) lead in price discovery, which can be attributed to their roles as global information centres and their flexible trading conditions, such as a more heterogeneous structure of transaction costs. The foreign leadership is robust to the use of linear or nonlinear models, the time differences between Chicago and the other markets, and the long-run liquidity conditions of the Nikkei futures markets, and strongly supports the international centre hypothesis.
40

Balhane, Saloua, Frédérique Cheruy, Fatima Driouech, Khalid El Rhaz, Abderrahmane Idelkadi, Adriana Sima, Étienne Vignon, Philippe Drobinski, and Abdelghani Chehbouni. "Towards an advanced representation of precipitation over Morocco in a global climate model with resolution enhancement and empirical run‐time bias corrections." International Journal of Climatology, February 26, 2024. http://dx.doi.org/10.1002/joc.8405.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract Morocco, as a Mediterranean and North African country, is acknowledged as a climate change hotspot, where increased drought and related water resource shortages present a real challenge for human and natural systems. However, its geographic position and regional characteristics make the simulation of the distribution and variability of precipitation particularly challenging in the region. In this study, we propose an approach where the Laboratoire de Météorologie Dynamique Zoom (LMDZ) GCM is run with a stretched grid configuration developed with enhanced resolution (35 km) over the region, and we apply run‐time bias correction to deal with the atmospheric model's systematic errors on large‐scale circulation. The bias‐correction terms for wind and temperature are built using the climatological mean of the adjustment terms on tendency errors in an LMDZ simulation relaxed towards ERA5 reanalyses. The free reference run with the zoomed configuration is compared to two bias‐corrected runs. The free run exhibits noticeable improvements in mean low‐level circulation, high frequency variability and moisture transport and compares favourably to precipitation observations at the local scale. The mean simulated climate is substantially improved after bias correction with respect to the uncorrected runs. At the regional scale, the bias‐correction showed improvements in moisture transport and precipitation distribution, but no noticeable effect was observed in mean precipitation amounts, interannual variability and extreme events. To address the latter, model tuning after grid refinement and developing more “scale‐aware” parameterizations are necessary. The observed improvements on the large‐scale circulation suggest that the run‐time bias correction can be used to drive regional climate models for a better representation of regional and local climate. It can also be combined with “a posteriori” bias correction methods to improve local precipitation simulation, including extreme events.
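The run-time correction described here amounts to adding a pre-computed, climatological correction tendency for temperature and the wind components to the model's own tendencies at every time step. A schematic sketch is given below; the array layout, calendar handling and variable names are illustrative assumptions rather than LMDZ code.

```python
class RunTimeBiasCorrection:
    """Add a climatological correction tendency, estimated beforehand from a
    nudged (relaxed-to-reanalysis) run, to the model tendencies each step."""

    def __init__(self, clim_correction):
        # clim_correction[var][day] holds the climatological mean of the
        # nudging adjustment terms (tendency units, e.g. K/s or m/s^2)
        self.clim_correction = clim_correction

    def apply(self, tendencies, day_of_year):
        d = (day_of_year - 1) % 365
        for var in ("temperature", "u_wind", "v_wind"):
            tendencies[var] = tendencies[var] + self.clim_correction[var][d]
        return tendencies
```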
41

Tiwari, Amar Deep, Parthasarathi Mukhopadhyay, and Vimal Mishra. "Influence of bias correction of meteorological and streamflow forecast on hydrological prediction in India." Journal of Hydrometeorology, January 25, 2021. http://dx.doi.org/10.1175/jhm-d-20-0235.1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract The efforts to develop a hydrologic model-based operational streamflow forecast in India are limited. We evaluate the role of bias correction of meteorological forecast and streamflow post-processing on hydrological prediction skill in India. We use the Variable Infiltration Capacity (VIC) model to simulate runoff and root zone soil moisture in the Narmada basin (drainage area: 97,410 km²), which was used as a testbed to examine the forecast skill along with the observed streamflow. We evaluated meteorological and hydrological forecasts during the monsoon (June-September) season for the 2000–2018 period. The raw meteorological forecast displayed relatively low skill against the observed precipitation at 1-3 day lead time during the monsoon season. Similarly, the forecast skill was low with mean normalized root mean squared error (NRMSE) more than 0.9 and mean absolute bias larger than 60% for extreme precipitation at the 1-3-day lead time. We used Empirical Quantile Mapping (EQM) to bias correct precipitation forecast. The bias correction of precipitation forecast resulted in significant improvement in the precipitation forecast skill. Runoff and root zone soil moisture forecasts were also significantly improved due to bias correction of precipitation forecast where the forecast evaluation is performed against the reference model run. However, bias correction of precipitation forecast did not cause considerable improvement in the streamflow prediction. Bias correction of streamflow forecast performs better than the streamflow forecast simulated using the bias-corrected meteorological forecast. The combination of the bias correction of precipitation forecast and post-processing of streamflow resulted in a significant improvement in the streamflow prediction (reduction in bias from 40% to 5%).
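Empirical quantile mapping, the bias-correction step named above, maps each forecast value from the quantile it occupies in the forecast climatology to the same quantile of the observed climatology. A minimal sketch follows; the handling of dry days, extremes beyond the calibration range and lead-time dependence used in the study is not reproduced.

```python
import numpy as np

def eqm_correct(fcst, fcst_clim, obs_clim, n_quantiles=100):
    """Empirical quantile mapping for precipitation forecasts."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    fcst_q = np.quantile(fcst_clim, q)          # forecast climatological quantiles
    obs_q = np.quantile(obs_clim, q)            # observed climatological quantiles
    corrected = np.interp(fcst, fcst_q, obs_q)  # piecewise-linear transfer function
    return np.clip(corrected, 0.0, None)        # keep precipitation non-negative
```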
42

Krinner, Gerhard, Viatcheslav Kharin, Romain Roehrig, John Scinocca, and Francis Codron. "Historically-based run-time bias corrections substantially improve model projections of 100 years of future climate change." Communications Earth & Environment 1, no. 1 (October 14, 2020). http://dx.doi.org/10.1038/s43247-020-00035-0.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract Climate models and/or their output are usually bias-corrected for climate impact studies. The underlying assumption of these corrections is that climate biases are essentially stationary between historical and future climate states. Under very strong climate change, the validity of this assumption is uncertain, so the practical benefit of bias corrections remains an open question. Here, this issue is addressed in the context of bias correcting the climate models themselves. Employing the ARPEGE, LMDZ and CanAM4 atmospheric models, we undertook experiments in which one centre’s atmospheric model takes another centre’s coupled model as observations during the historical period, to define the bias correction, and as the reference under future projections of strong climate change, to evaluate its impact. This allows testing of the stationarity assumption directly from the historical through future periods for three different models. These experiments provide evidence for the validity of the new bias-corrected model approach. In particular, temperature, wind and pressure biases are reduced by 40–60% and, with few exceptions, more than 50% of the improvement obtained over the historical period is on average preserved after 100 years of strong climate change. Below 3 °C global average surface temperature increase, these corrections globally retain 80% of their benefit.
43

Wade, Madeline, Aaron Daniel Viets, Theresa Chmiel, Madeline Stover та Leslie Wade. "Improving LIGO calibration accuracy by using time-dependent filters to compensate for temporal variations". Classical and Quantum Gravity, 15 грудня 2022. http://dx.doi.org/10.1088/1361-6382/acabf6.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract The response of the Advanced LIGO interferometers is known to vary with time [1]. Accurate calibration of the interferometers must therefore track and compensate for temporal variations in calibration model parameters. These variations were tracked during the first three Advanced LIGO observing runs, and compensation for some of them has been implemented in the calibration procedure. During the second observing run, multiplicative corrections to the interferometer response were applied while producing calibrated strain data both in real time and in high latency. In a high-latency calibration produced after the second observing run and during the entirety of the third observing run, a correction requiring periodic filter updates was applied to the calibration: the time dependence of the coupled cavity pole frequency f_cc. This paper describes the methods developed to compensate for variations in the interferometer response requiring time-dependent filters, including variable zeros, poles, gains, and time delays. The described methods were used to provide compensation for well-modeled time dependence of the interferometer response, which has helped to reduce systematic errors in the calibration to < 2% in magnitude and < 2 degrees in phase across LIGO’s most sensitive frequency band of 20–2000 Hz [2, 3]. Additionally, this paper shows how such compensation is relevant for astrophysical inference studies by reducing uncertainty and bias in the sky localization for a simulated binary neutron star merger.
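To picture one of the time-dependent corrections mentioned above, the coupled cavity pole can be modelled as a single-pole response, and a drift of its frequency is compensated by a multiplicative frequency-domain correction. The sketch below is schematic (the pole frequencies are made-up round numbers) and is not the LIGO calibration pipeline code.

```python
import numpy as np

def cavity_pole_response(f, f_cc):
    """Single-pole (coupled cavity pole) frequency response."""
    return 1.0 / (1.0 + 1j * f / f_cc)

def pole_update_correction(f, f_cc_ref, f_cc_now):
    """Multiplicative correction replacing a filter designed for the reference
    pole frequency with one matching the currently tracked value."""
    return cavity_pole_response(f, f_cc_now) / cavity_pole_response(f, f_cc_ref)

# Magnitude and phase adjustment at a few frequencies if the tracked pole
# drifts from an assumed 410 Hz reference down to 395 Hz.
f = np.array([20.0, 100.0, 1000.0])
corr = pole_update_correction(f, f_cc_ref=410.0, f_cc_now=395.0)
print(np.abs(corr), np.angle(corr, deg=True))
```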
44

S, Jayaram, G. Manavaalan, and S. Gunasekaran. "VLSI Architecture for Designing a True Random Number Generator with Modified Parallel Run Length Encoding." International Journal of Advanced Research in Science, Communication and Technology, April 22, 2021, 290–300. http://dx.doi.org/10.48175/ijarsct-1017.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Secured communication provides privacy and security for data being transmitted. Cryptographic systems have thus become a vital platform for achieving data security in everyday life, from the generation of one-time passwords and session keys to signature parameters and ephemeral keys. The strength of the encryption depends entirely on the unpredictability of the digital bit streams. This paper focuses on generating true random number sequences in hardware, so as to safeguard encryption key patterns for digital communications. The sequences are generated using purely digital components supported by an efficient VLSI architecture. The proposed model is implemented using Mojo-V3, supported by the Xilinx ISE software platform. The generated random sequences then undergo two post-processing operations, Von Neumann correction (VNC) and Parallel Run Length Encoding (PRLE), to eliminate bias in the bit stream and to compensate for the high power dissipation, respectively.
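The Von Neumann correction mentioned above is a standard de-biasing step: the raw stream is read in non-overlapping pairs, '01' emits 0, '10' emits 1, and '00'/'11' pairs are discarded. A software sketch of that step (the hardware VNC/PRLE blocks themselves are not reproduced here) is:

```python
def von_neumann_correct(bits):
    """Von Neumann de-biasing of a raw bit stream."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):  # non-overlapping pairs
        if a != b:
            out.append(a)  # pair '01' -> 0, pair '10' -> 1; '00'/'11' dropped
    return out

# A biased stream still yields an unbiased (though shorter) output sequence.
raw = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
print(von_neumann_correct(raw))  # -> [1, 0, 1]
```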
45

Chakraborty, Srija, Simona Gallerani, Tommaso Zana, Alberto Sesana, Milena Valentini, David Izquierdo-Villalba, Fabio Di Mascia, Fabio Vito, and Paramita Barai. "Probing z ≳ 6 massive black holes with gravitational waves." Monthly Notices of the Royal Astronomical Society, May 16, 2023. http://dx.doi.org/10.1093/mnras/stad1493.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract We investigate the coalescence of massive black hole ($M_{\rm BH}\gtrsim 10^{6}~\rm M_{\odot }$) binaries (MBHBs) at 6 < z < 10 by adopting a suite of cosmological hydrodynamical simulations of galaxy formation, zoomed-in on biased (>3σ) overdense regions (Mh ∼ 10¹² M⊙ dark matter halos at z = 6) of the Universe. We first analyse the impact of different resolutions and AGN feedback prescriptions on the merger rate, assuming instantaneous mergers. Then, we compute the halo bias correction factor due to the overdense simulated region. Our simulations predict merger rates that range between 3 and 15 $\rm yr^{-1}$ at z ∼ 6, depending on the run considered, and after correcting for a bias factor of ∼20–30. For our fiducial model, we further consider the effect of delay in the MBHB coalescence due to dynamical friction. We find that 83 per cent of MBHBs will merge within the Hubble time, and 21 per cent within 1 Gyr, namely the age of the Universe at z > 6. We finally compute the expected properties of the gravitational wave (GW) signals and find the fraction of LISA detectable events with high signal-to-noise ratio (SNR > 5) to range between 66 and 69 per cent. However, identifying the electro-magnetic counterpart of these events remains challenging due to the poor LISA sky localization that, for the loudest signals ($\mathcal {M}_c\sim 10^6\, M_{\odot }$ at z = 6), is around 10 $\rm deg^2$.
46

"Evaluation of Global Forecast System (GFS) Medium-Range Precipitation Forecasts in the Nile River Basin." Journal of Hydrometeorology 23, no. 1 (January 2022): 101–16. http://dx.doi.org/10.1175/jhm-d-21-0110.1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract Reliable weather forecasts are valuable in a number of applications, such as agriculture, hydropower, and weather-related disease outbreaks. Global weather forecasts are widely used, but detailed evaluation over specific regions is paramount for users and operational centers to enhance the usability of forecasts and improve their accuracy. This study presents an evaluation of the Global Forecast System (GFS) medium-range (1–15 day) precipitation forecasts in the nine subbasins of the Nile basin using NASA’s Integrated Multisatellite Retrievals (IMERG) Final Run satellite–gauge merged rainfall observations. The GFS products are available at a temporal resolution of 3–6 h and a spatial resolution of 0.25°, and the version-15 products are available since 12 June 2019. GFS forecasts are evaluated at a temporal scale of 1–15 days, a spatial scale from 0.25° all the way to the subbasin scale, and for a period of one year (15 June 2019–15 June 2020). The results show that the performance of the 1-day lead daily basin-averaged GFS forecast, as measured through the modified Kling–Gupta efficiency (KGE), is poor (0 < KGE < 0.5) for most of the subbasins. The factors contributing to the low performance are 1) large overestimation bias in watersheds located in wet climate regimes of the Northern Hemisphere (Millennium watershed, Upper Atbara and Setit watershed, and Khashm El Gibra watershed), and 2) lower ability in capturing the temporal dynamics of watershed-averaged rainfall for watersheds with smaller areas (Roseires at 14 110 km² and Sennar at 13 895 km²). GFS has smaller bias for watersheds located in the dry parts of the Northern Hemisphere or wet parts of the Southern Hemisphere, and better ability in capturing the temporal dynamics of watershed-average rainfall for large watershed areas. IMERG Early has smaller bias than the GFS forecast for the Millennium watershed, but comparable or worse bias for the Upper Atbara and Setit and Khashm El Gibra watersheds. The variation in the performance of the IMERG Early could be partly explained by the number of rain gauges used in the reference IMERG Final product, as 16 rain gauges were used for the Millennium watershed but only one rain gauge over each of the Upper Atbara and Setit and Khashm El Gibra watersheds. A simple climatological bias correction reduces the bias in IMERG Early over most, but not all, watersheds. We recommend exploring methods to increase the performance of GFS forecasts, including postprocessing techniques through the use of both near-real-time and research-version satellite rainfall products.
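The modified Kling–Gupta efficiency used as the headline score, and the simple climatological (mean-ratio) bias correction mentioned for IMERG Early, can be written compactly as below. These follow the standard formulations (the Kling et al., 2012 form of KGE'); details of the study's implementation may differ.

```python
import numpy as np

def kge_modified(obs, sim):
    """Modified Kling–Gupta efficiency:
    KGE' = 1 - sqrt((r-1)^2 + (beta-1)^2 + (gamma-1)^2),
    with beta the mean ratio and gamma the ratio of coefficients of variation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

def climatological_bias_correction(sat_rain, gauge_clim_mean, sat_clim_mean):
    """Scale a satellite rainfall estimate by the ratio of climatological means."""
    return sat_rain * (gauge_clim_mean / sat_clim_mean)
```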
47

Volpi, Danila, Virna L. Meccia, Virginie Guemas, Pablo Ortega, Roberto Bilbao, Francisco J. Doblas-Reyes, Arthur Amaral, Pablo Echevarria, Rashed Mahmood, and Susanna Corti. "A Novel Initialization Technique for Decadal Climate Predictions." Frontiers in Climate 3 (June 17, 2021). http://dx.doi.org/10.3389/fclim.2021.681127.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Model initialization is a matter of transferring the observed information available at the start of a forecast to the model. An optimal initialization is generally recognized to be able to improve climate predictions up to a few years ahead. However, systematic errors in models make the initialization process challenging. When the observed information is transferred to the model at the initialization time, the discrepancy between the observed and model mean climate causes the drift of the prediction toward the model-biased attractor. Although such drifts can be generally accounted for with a posteriori bias correction techniques, the bias evolving along the prediction might affect the variability that we aim at predicting, and disentangling the small magnitude of the climate signal from the initial drift to be removed represents a challenge. In this study, we present an innovative initialization technique that aims at reducing the initial drift by performing a quantile matching between the observed state at the initialization time and the model state distribution. The adjusted initial state belongs to the model attractor and the observed variability amplitude is scaled toward the model one. Multi-annual climate predictions integrated for 5 years and run with the EC-Earth3 Global Coupled Model have been initialized with this novel methodology, and their prediction skill has been compared with the non-initialized historical simulations from CMIP6 and with the same decadal prediction system but based on full-field initialization. We perform a skill assessment of the surface temperature, the heat content in the ocean upper layers, the sea level pressure, and the barotropic ocean circulation. The added value of the quantile matching initialization is shown in the North Atlantic subpolar region and over the North Pacific surface temperature as well as for the ocean heat content up to 5 years. Improvements are also found in the predictive skill of the Atlantic Meridional Overturning Circulation and the barotropic stream function in the Labrador Sea throughout the 5 forecast years when compared to the full field method.
48

Chapman, William E., and Judith Berner. "Deterministic and Stochastic Tendency Adjustments Derived from Data Assimilation and Nudging." Quarterly Journal of the Royal Meteorological Society, December 27, 2023. http://dx.doi.org/10.1002/qj.4652.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
We develop and compare model‐error representation schemes derived from data assimilation increments and nudging tendencies in multi‐decadal simulations of the community atmosphere model, version 6. Each scheme applies a bias correction during simulation run‐time to the zonal and meridional winds. We quantify to which extent such online adjustment schemes improve the model climatology and variability on daily to seasonal timescales. Generally, we observe a ca. 30% improvement to annual upper‐level zonal winds, with largest improvements in boreal spring (ca. 35%) and winter (ca. 47%). Despite only adjusting the wind fields, we additionally observe a ca. 20% improvement to annual precipitation over land, with the largest improvements in boreal fall (ca. 36%) and winter (ca. 25%), and a ca. 50% improvement to annual sea level pressure, globally. With mean state adjustments alone, the dominant pattern of boreal low‐frequency variability over the Atlantic (the North Atlantic Oscillation) is significantly improved. Additional stochasticity further increases the modal explained variances, which brings it closer to the observed value. A streamfunction tendency decomposition reveals that the improvement is due to an adjustment to the high‐ and low‐frequency eddy‐eddy interaction terms. In the Pacific, the mean state adjustment alone led to an erroneous deepening of the Aleutian low, but this was remedied with the addition of stochastically selected tendencies. Finally, from a practical standpoint, we discuss the performance of using data assimilation increments versus nudging tendencies for an online model‐error representation.
49

Lien, Nguyen Phuong. "How Does Governance Modify the Relationship between Public Finance and Economic Growth: A Global Analysis." VNU Journal of Science: Economics and Business 34, no. 5E (December 25, 2018). http://dx.doi.org/10.25073/2588-1108/vnueab.4165.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Aiming to investigate the role of governance in modifying the relationship between public finance and economic growth, this study applied a seemingly unrelated regression model for the panel data of 38 developed and 44 developing countries from 1996 to 2016. It is easy to see that this research measures public finance by two parts of the subcomponents: total tax revenue and general government expenditure. We also call governance the “control of corruption indicator”. The finding indicates that governance always positively affects the economy. However, when it interacts with public finance, this interaction has a diverse effect on economic growth in developed countries, depending on tax revenue or government expenditure. Nevertheless, in developing countries, this interaction has a beneficial impact on the growth of an economy. Keywords: Governance, public finance, economic growth, developed and developing countries.

До бібліографії