Journal articles on the topic 'Ensemble source width'


Consult the top 50 journal articles for your research on the topic 'Ensemble source width.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Imamura, Hidetaka, and Sungyoung Kim. "Efficacy of a new spatial ear training program for “Ensemble width” and “Individual source width”." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 2987. http://dx.doi.org/10.1121/1.4969251.

2

Allen, Douglas R., Karl W. Hoppel, Gerald E. Nedoluha, Stephen D. Eckermann, and Cory A. Barton. "Ensemble-Based Gravity Wave Parameter Retrieval for Numerical Weather Prediction." Journal of the Atmospheric Sciences 79, no. 3 (March 2022): 621–48. http://dx.doi.org/10.1175/jas-d-21-0191.1.

Abstract:
Gravity wave (GW) momentum and energy deposition are large components of the momentum and heat budgets of the stratosphere and mesosphere, affecting predictability across scales. Since weather and climate models cannot resolve the entire GW spectrum, GW parameterizations are required. Tuning these parameterizations is time-consuming and must be repeated whenever model configurations are changed. We introduce a self-tuning approach, called GW parameter retrieval (GWPR), applied when the model is coupled to a data assimilation (DA) system. A key component of GWPR is a linearized model of the sensitivity of model wind and temperature to the GW parameters, which is calculated using an ensemble of nonlinear forecasts with perturbed parameters. GWPR calculates optimal parameters using an adaptive grid search that reduces DA analysis increments via a cost-function minimization. We test GWPR within the Navy Global Environmental Model (NAVGEM) using three latitude-dependent GW parameters: peak momentum flux, phase-speed width of the Gaussian source spectrum, and phase-speed weighting relative to the source-level wind. Compared to a baseline experiment with fixed parameters, GWPR reduces analysis increments and improves 5-day mesospheric forecasts. Relative to the baseline, retrieved parameters reveal enhanced source-level fluxes and westward shift of the wave spectrum in the winter extratropics, which we relate to seasonal variations in frontogenesis. The GWPR reduces stratospheric increments near 60°S during austral winter, compensating for excessive baseline nonorographic GW drag. Tropical sensitivity is weaker due to significant absorption of GW in the stratosphere, resulting in less confidence in tropical GWPR values.
3

Connelly, Ryan, and Brian A. Colle. "Validation of Snow Multibands in the Comma Head of an Extratropical Cyclone Using a 40-Member Ensemble." Weather and Forecasting 34, no. 5 (September 11, 2019): 1343–63. http://dx.doi.org/10.1175/waf-d-18-0182.1.

Abstract:
This paper investigates the ability of the Weather Research and Forecasting (WRF) Model in simulating multiple small-scale precipitation bands (multibands) within the extratropical cyclone comma head using four winter storm cases from 2014 to 2017. Using the model output, some physical processes are explored to investigate band prediction. A 40-member WRF ensemble was constructed down to 2-km grid spacing over the Northeast United States using different physics, stochastic physics perturbations, different initial/boundary conditions from the first five perturbed members of the Global Forecast System (GFS) Ensemble Reforecast (GEFSR), and a stochastic kinetic energy backscatter scheme (SKEBS). It was found that 2-km grid spacing is adequate to resolve most snowbands. A feature-based verification is applied to hourly WRF reflectivity fields from each ensemble member and the WSR-88D radar reflectivity at 2-km height above sea level. The Method for Object-Based Diagnostic Evaluation (MODE) tool is used for identifying multibands, which are defined as two or more bands that are 5–20 km in width and that also exhibit a >2:1 aspect ratio. The WRF underpredicts the number of multibands and has a slight eastward position bias. There is no significant difference in frontogenetical forcing, vertical stability, moisture, and vertical shear between the banded versus nonbanded members. Underpredicted band members tend to have slightly stronger frontogenesis than observed, which may be consolidating the bands, but overall there is no clear linkage in ambient condition errors and band errors, thus leaving the source for the band underprediction motivation for future work.
4

Jongaramrungruang, Siraput, Christian Frankenberg, Georgios Matheou, Andrew K. Thorpe, David R. Thompson, Le Kuai, and Riley M. Duren. "Towards accurate methane point-source quantification from high-resolution 2-D plume imagery." Atmospheric Measurement Techniques 12, no. 12 (December 17, 2019): 6667–81. http://dx.doi.org/10.5194/amt-12-6667-2019.

Abstract:
Abstract. Methane is the second most important anthropogenic greenhouse gas in the Earth climate system but emission quantification of localized point sources has been proven challenging, resulting in ambiguous regional budgets and source category distributions. Although recent advancements in airborne remote sensing instruments enable retrievals of methane enhancements at an unprecedented resolution of 1–5 m at regional scales, emission quantification of individual sources can be limited by the lack of knowledge of local wind speed. Here, we developed an algorithm that can estimate flux rates solely from mapped methane plumes, avoiding the need for ancillary information on wind speed. The algorithm was trained on synthetic measurements using large eddy simulations under a range of background wind speeds of 1–10 m s−1 and source emission rates ranging from 10 to 1000 kg h−1. The surrogate measurements mimic plume mapping performed by the next-generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG) and provide an ensemble of 2-D snapshots of column methane enhancements at 5 m spatial resolution. We make use of the integrated total methane enhancement in each plume, denoted as integrated methane enhancement (IME), and investigate how this IME relates to the actual methane flux rate. Our analysis shows that the IME corresponds to the flux rate nonlinearly and is strongly dependent on the background wind speed over the plume. We demonstrate that the plume width, defined based on the plume angular distribution around its main axis, provides information on the associated background wind speed. This allows us to invert source flux rate based solely on the IME and the plume shape itself. On average, the error estimate based on randomly generated plumes is approximately 30 % for an individual estimate and less than 10 % for an aggregation of 30 plumes. 
A validation against a natural gas controlled-release experiment agrees to within 32 %, supporting the basis for the applicability of this technique to quantifying point sources over large geographical areas in airborne field campaigns and future space-based observations.
5

Kang, Sarah M., Clara Deser, and Lorenzo M. Polvani. "Uncertainty in Climate Change Projections of the Hadley Circulation: The Role of Internal Variability." Journal of Climate 26, no. 19 (September 24, 2013): 7541–54. http://dx.doi.org/10.1175/jcli-d-12-00788.1.

Abstract:
The uncertainty arising from internal climate variability in climate change projections of the Hadley circulation (HC) is presently unknown. In this paper it is quantified by analyzing a 40-member ensemble of integrations of the Community Climate System Model, version 3 (CCSM3), under the Special Report on Emissions Scenarios (SRES) A1B scenario over the period 2000–60. An additional set of 100-yr-long time-slice integrations with the atmospheric component of the same model [Community Atmosphere Model, version 3.0 (CAM3)] is also analyzed. Focusing on simple metrics of the HC—its strength, width, and height—three key results emerge from the analysis of the CCSM3 ensemble. First, the projected weakening of the HC is almost entirely confined to the Northern Hemisphere, and is stronger in winter than in summer. Second, the projected widening of the HC occurs only in the winter season but in both hemispheres. Third, the projected rise of the tropical tropopause occurs in both hemispheres and in all seasons and is, by far, the most robust of the three metrics. This paper shows further that uncertainty in future trends of the HC width is largely controlled by extratropical variability, while those of HC strength and height are associated primarily with tropical dynamics. Comparison of the CCSM3 and CAM3 integrations reveals that ocean–atmosphere coupling is the dominant source of uncertainty in future trends of HC strength and height and of the tropical mean meridional circulation in general. Finally, uncertainty in future trends of the hydrological cycle is largely captured by the uncertainty in future trends of the mean meridional circulation.
6

UNNIKRISHNAN, C. S., and C. P. SAFVAN. "EXPERIMENTAL TEST OF A QUANTUM-LIKE THEORY: MOTION OF ELECTRONS IN A UNIFORM MAGNETIC FIELD, IN A VARIABLE POTENTIAL WELL." Modern Physics Letters A 14, no. 07 (March 7, 1999): 479–90. http://dx.doi.org/10.1142/s0217732399000535.

Abstract:
We describe an experiment to test a quantum-like theory which predicts quantum-like behavior for an ensemble of electrons in a classical configuration with static magnetic and electric fields. Some of the earlier experiments had supporting evidence for anomalous, quantum-like effects in such a situation showing systematic modulations of electron current when a retarding potential is varied, even though the quantum wavelength of the electrons in such a configuration was less than a billionth of the spatial width of the potential well. Our experiment conclusively rules out any nonclassical, quantum-like behavior in electron transmission through simple electric barriers, when magnetic fields are present. We identify secondary electrons generated at various electrodes as the main source of apparent anomalous behavior. We also present a classical derivation of the quantum-like equation describing the modulations.
7

Trier, Stanley B., Glen S. Romine, David A. Ahijevych, Robert J. Trapp, Russ S. Schumacher, Michael C. Coniglio, and David J. Stensrud. "Mesoscale Thermodynamic Influences on Convection Initiation near a Surface Dryline in a Convection-Permitting Ensemble." Monthly Weather Review 143, no. 9 (August 31, 2015): 3726–53. http://dx.doi.org/10.1175/mwr-d-15-0133.1.

Abstract:
In this study, the authors examine initiation of severe convection along a daytime surface dryline in a 10-member ensemble of convection-permitting simulations. Results indicate that the minimum buoyancy Bmin of PBL air parcels must be small (Bmin > −0.5°C) for successful deep convection initiation (CI) to occur along the dryline. Comparing different ensemble members reveals that CAPE magnitudes (allowing for entrainment) and the width of the zone of negligible Bmin extending eastward from the dryline act together to influence CI. Since PBL updrafts that initiate along the dryline move rapidly northeast in the vertically sheared flow as they grow into the free troposphere, a wider zone of negligible Bmin helps ensure adequate time for incipient storms to mature, which, itself, is hastened by larger CAPE. Local Bmin budget calculations and trajectory analysis are used to quantify physical processes responsible for the reduction of negative buoyancy prior to CI. Here, the grid-resolved forcing and forcing from temperature and moisture tendencies in the PBL scheme (arising from surface fluxes) contribute about equally in ensemble composites. However, greater spatial variability in grid-resolved forcing focuses the location of the greatest net forcing along the dryline. The grid-resolved forcing is influenced by a thermally direct vertical circulation, where time-averaged ascent at the east edge of the dryline results in locally deeper moisture and cooler conditions near the PBL top. Horizontal temperature advection spreads the cooler air eastward above higher equivalent potential temperature air at source levels of convecting air parcels, resulting in a wider zone of negligible Bmin that facilitates sustained CI.
8

Rieznik, Olena. "Children’s orchestral set “Harmonika” by H. T. Statyvkin as a source of developing initial skills of ensemble and orchestral music playing for preschool children." Problems of Interaction Between Arts, Pedagogy and the Theory and Practice of Education 64, no. 64 (December 7, 2022): 75–91. http://dx.doi.org/10.34064/khnum1-64.05.

Abstract:
The figure of Hennadii Tymofiyovych Statyvkin as a reformer of the methodology of teaching the button accordion has always interested researchers for its versatility: ideas of restructuring the educational process of primary music education; experimental introduction of new methodological principles in the educational process; introduction of a seven-year period of training for button accordionists in children’s music schools; production of special training children’s selectable and ready-made button accordions; development and publication of a new curriculum and teaching aids. The above-mentioned ranges of H. Statyvkin’s activity have been covered by scholars at different times exclusively from the point of view of changing the methodology of teaching the button accordion from a selectable to a ready-made instrument. The problem of studying the structure of children’s accordion instruments created by H. Statyvkin has never acquired the format of a special separate study. The proposed article aims to reveal the design activity H. Statyvkin by analysing the design features of children’s musical instruments created by him for ensemble and orchestral music for preschool children. The scientific novelty of the research results is due to the introduction of some facts about the design features of the children’s orchestra set “Harmonika” into the scientific circulation of musicology, which made it possible to reveal the specifics of H. Statyvkin’s design thinking. Based on the analysis of the system of sound disposition at the right keyboards of the “Solo-1” and “Solo-2” instruments, we can conclude that H. Statyvkin’s system corresponds to the right keyboard of the piano-type accordion, but in the button version. 
The disposition of the sounds at the right keyboard, unusual for the button accordion, and the grouping of major and minor triads in one row of the left keyboard are based on ergonomic and physiological data of preschool children (height and width of the chest, length of hands and forearms). It was this design that made it possible to reduce the overall size of the instruments and to make them in accordance with the physiology of preschool children.
9

Timme, Nicholas M., David Linsenbardt, and Christopher C. Lapish. "A Method to Present and Analyze Ensembles of Information Sources." Entropy 22, no. 5 (May 21, 2020): 580. http://dx.doi.org/10.3390/e22050580.

Abstract:
Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate if the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine if two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine if the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine if the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided.
10

Bécar, Ramón, P. A. González, Joel Saavedra, Yerko Vásquez, and Bin Wang. "Phase transitions in four-dimensional AdS black holes with a nonlinear electrodynamics source." Communications in Theoretical Physics 73, no. 12 (November 12, 2021): 125402. http://dx.doi.org/10.1088/1572-9494/ac3073.

Abstract:
In this work we consider black hole solutions to Einstein’s theory coupled to a nonlinear power-law electromagnetic field with a fixed exponent value. We study the extended phase space thermodynamics in canonical and grand canonical ensembles, where the varying cosmological constant plays the role of an effective thermodynamic pressure. We examine thermodynamical phase transitions in such black holes and find that both first- and second-order phase transitions can occur in the canonical ensemble while, for the grand canonical ensemble, Hawking–Page and second-order phase transitions are allowed.
11

Rubin, J. I., J. S. Reid, J. A. Hansen, J. L. Anderson, N. Collins, T. J. Hoar, T. Hogan, et al. "Development of the Ensemble Navy Aerosol Analysis Prediction System (ENAAPS) and its application of the Data Assimilation Research Testbed (DART) in support of aerosol forecasting." Atmospheric Chemistry and Physics Discussions 15, no. 20 (October 20, 2015): 28069–132. http://dx.doi.org/10.5194/acpd-15-28069-2015.

Abstract:
Abstract. An ensemble-based forecast and data assimilation system has been developed for use in Navy aerosol forecasting. The system makes use of an ensemble of the Navy Aerosol Analysis Prediction System (ENAAPS) at 1° × 1°, combined with an Ensemble Adjustment Kalman Filter from NCAR's Data Assimilation Research Testbed (DART). The base ENAAPS-DART system discussed in this work utilizes the Navy Operational Global Analysis Prediction System (NOGAPS) meteorological ensemble to drive offline NAAPS simulations coupled with the DART Ensemble Kalman Filter architecture to assimilate bias-corrected MODIS Aerosol Optical Thickness (AOT) retrievals. This work outlines the optimization of the 20-member ensemble system, including consideration of meteorology and source-perturbed ensemble members as well as covariance inflation. Additional tests with 80 meteorological and source members were also performed. An important finding of this work is that an adaptive covariance inflation method, which has not been previously tested for aerosol applications, was found to perform better than a temporally and spatially constant covariance inflation. Problems were identified with the constant inflation in regions with limited observational coverage. The second major finding of this work is that combined meteorology and aerosol source ensembles are superior to either in isolation and that both are necessary to produce a robust system with sufficient spread in the ensemble members as well as realistic correlation fields for spreading observational information. The inclusion of aerosol source ensembles improves correlation fields for large aerosol source regions such as smoke and dust in Africa, by statistically separating freshly emitted from transported aerosol species. However, the source ensembles have limited efficacy during long range transport. 
Conversely, the meteorological ensemble produces sufficient spread at the synoptic scale to enable observational impact through the ensemble data assimilation. The optimized ensemble system was compared to the Navy's current operational aerosol forecasting system which makes use of NAVDAS-AOD (NRL Atmospheric Variational Data Assimilation System for aerosol optical depth), a 2D VARiational data assimilation system. Overall, the two systems had statistically insignificant differences in RMSE, bias and correlation, relative to AERONET observed AOT. However, the ensemble system is clearly able to better capture sharp gradients in aerosol features compared to the variational system, which has a tendency to smooth out aerosol events. Such skill is not easily observable in bulk metrics. Further, the ENAAPS-DART system will allow for new avenues of model development, such as more efficient lidar and surface station assimilation as well as adaptive source functions. At this early stage of development, the parity with the current variational system is encouraging.
12

Rubin, Juli I., Jeffrey S. Reid, James A. Hansen, Jeffrey L. Anderson, Nancy Collins, Timothy J. Hoar, Timothy Hogan, et al. "Development of the Ensemble Navy Aerosol Analysis Prediction System (ENAAPS) and its application of the Data Assimilation Research Testbed (DART) in support of aerosol forecasting." Atmospheric Chemistry and Physics 16, no. 6 (March 23, 2016): 3927–51. http://dx.doi.org/10.5194/acp-16-3927-2016.

Abstract:
Abstract. An ensemble-based forecast and data assimilation system has been developed for use in Navy aerosol forecasting. The system makes use of an ensemble of the Navy Aerosol Analysis Prediction System (ENAAPS) at 1 × 1°, combined with an ensemble adjustment Kalman filter from NCAR's Data Assimilation Research Testbed (DART). The base ENAAPS-DART system discussed in this work utilizes the Navy Operational Global Analysis Prediction System (NOGAPS) meteorological ensemble to drive offline NAAPS simulations coupled with the DART ensemble Kalman filter architecture to assimilate bias-corrected MODIS aerosol optical thickness (AOT) retrievals. This work outlines the optimization of the 20-member ensemble system, including consideration of meteorology and source-perturbed ensemble members as well as covariance inflation. Additional tests with 80 meteorological and source members were also performed. An important finding of this work is that an adaptive covariance inflation method, which has not been previously tested for aerosol applications, was found to perform better than a temporally and spatially constant covariance inflation. Problems were identified with the constant inflation in regions with limited observational coverage. The second major finding of this work is that combined meteorology and aerosol source ensembles are superior to either in isolation and that both are necessary to produce a robust system with sufficient spread in the ensemble members as well as realistic correlation fields for spreading observational information. The inclusion of aerosol source ensembles improves correlation fields for large aerosol source regions, such as smoke and dust in Africa, by statistically separating freshly emitted from transported aerosol species. However, the source ensembles have limited efficacy during long-range transport. 
Conversely, the meteorological ensemble generates sufficient spread at the synoptic scale to enable observational impact through the ensemble data assimilation. The optimized ensemble system was compared to the Navy's current operational aerosol forecasting system, which makes use of NAVDAS-AOD (NRL Atmospheric Variational Data Assimilation System for aerosol optical depth), a 2-D variational data assimilation system. Overall, the two systems had statistically insignificant differences in root-mean-squared error (RMSE), bias, and correlation relative to AERONET-observed AOT. However, the ensemble system is able to better capture sharp gradients in aerosol features compared to the 2DVar system, which has a tendency to smooth out aerosol events. Such skill is not easily observable in bulk metrics. Further, the ENAAPS-DART system will allow for new avenues of model development, such as more efficient lidar and surface station assimilation as well as adaptive source functions. At this early stage of development, the parity with the current variational system is encouraging.
13

Lee, Okjeong, Jeonghyeon Choi, Jeongeun Won, and Sangdan Kim. "Uncertainty in nonstationary frequency analysis of South Korea's daily rainfall peak over threshold excesses associated with covariates." Hydrology and Earth System Sciences 24, no. 11 (November 3, 2020): 5077–93. http://dx.doi.org/10.5194/hess-24-5077-2020.

Abstract:
Abstract. Several methods have been proposed to analyze the frequency of nonstationary anomalies. The applicability of the nonstationary frequency analysis has been mainly evaluated based on the agreement between the time series data and the applied probability distribution. However, since the uncertainty in the parameter estimate of the probability distribution is the main source of uncertainty in frequency analysis, the uncertainty in the correspondence between samples and probability distribution is inevitably large. In this study, an extreme rainfall frequency analysis is performed that fits the peak over threshold series to the covariate-based nonstationary generalized Pareto distribution. By quantitatively evaluating the uncertainty of daily rainfall quantile estimates at 13 sites of the Korea Meteorological Administration using the Bayesian approach, we tried to evaluate the applicability of the nonstationary frequency analysis with a focus on uncertainty. The results indicated that the inclusion of dew point temperature (DPT) or surface air temperature (SAT) generally improved the goodness of fit of the model for the observed samples. The uncertainty of the estimated rainfall quantiles was evaluated by the confidence interval of the ensemble generated by the Markov chain Monte Carlo. The results showed that the width of the confidence interval of quantiles could be greatly amplified due to extreme values of the covariate. In order to compensate for the weakness of the nonstationary model exposed by the uncertainty, a method of specifying a reference value of a covariate corresponding to a nonexceedance probability has been proposed. The results of the study revealed that the reference covariate plays an important role in the reliability of the nonstationary model. 
In addition, when the reference covariate was given, it was confirmed that the uncertainty reduction in quantile estimates for the increase in the sample size was more pronounced in the nonstationary model. Finally, it was discussed how information on a global temperature rise could be integrated with a DPT or SAT-based nonstationary frequency analysis. Thus, a method to quantify the uncertainty of the rate of change in future quantiles due to global warming, using rainfall quantile ensembles obtained in the uncertainty analysis process, has been formulated.
14

Sekiyama, Tsuyoshi Thomas, Mizuo Kajino, and Masaru Kunii. "Ensemble Dispersion Simulation of a Point-Source Radioactive Aerosol Using Perturbed Meteorological Fields over Eastern Japan." Atmosphere 12, no. 6 (May 22, 2021): 662. http://dx.doi.org/10.3390/atmos12060662.

Abstract:
We conducted single-model initial-perturbed ensemble simulations to quantify uncertainty in aerosol dispersion modeling, focusing on a point-source radioactive aerosol emitted from the Fukushima Daiichi Nuclear Power Plant (FDNPP) in March 2011. The ensembles of the meteorological variables were prepared using a data assimilation system that consisted of a non-hydrostatic weather-forecast model with a 3-km horizontal resolution and a four-dimensional local ensemble transform Kalman filter (4D-LETKF) with 20 ensemble members. The emission of radioactive aerosol was not perturbed. The weather and aerosol simulations were validated with in-situ measurements at Hitachi and Tokai, respectively, approximately 100 km south of the FDNPP. The ensemble simulations provided probabilistic information and multiple case scenarios for the radioactive aerosol plumes. Some of the ensemble members successfully reproduced the arrival time and intensity of the radioactive aerosol plumes, even when the deterministic simulation failed to reproduce them. We found that a small ensemble spread of wind speed produced large uncertainties in aerosol concentrations.
15

Rezazadeh, Arezou, Josep Font-Segura, Alfonso Martinez, and Albert Guillén i Fàbregas. "Multi-Class Cost-Constrained Random Coding for Correlated Sources over the Multiple-Access Channel." Entropy 23, no. 5 (May 3, 2021): 569. http://dx.doi.org/10.3390/e23050569.

Abstract:
This paper studies a generalized version of multi-class cost-constrained random-coding ensemble with multiple auxiliary costs for the transmission of N correlated sources over an N-user multiple-access channel. For each user, the set of messages is partitioned into classes and codebooks are generated according to a distribution depending on the class index of the source message and under the constraint that the codewords satisfy a set of cost functions. Proper choices of the cost functions recover different coding schemes including message-dependent and message-independent versions of independent and identically distributed, independent conditionally distributed, constant-composition and conditional constant composition ensembles. The transmissibility region of the scheme is related to the Cover-El Gamal-Salehi region. A related family of correlated-source Gallager source exponent functions is also studied. The achievable exponents are compared for correlated and independent sources, both numerically and analytically.
16

Malhotra, Ruchika, and Kusum Lata. "Using Ensembles for Class-Imbalance Problem to Predict Maintainability of Open Source Software." International Journal of Reliability, Quality and Safety Engineering 27, no. 05 (March 6, 2020): 2040011. http://dx.doi.org/10.1142/s0218539320400112.

Abstract:
To facilitate software maintenance and save the maintenance cost, numerous machine learning (ML) techniques have been studied to predict the maintainability of software modules or classes. An abundant amount of effort has been put by the research community to develop software maintainability prediction (SMP) models by relating software metrics to the maintainability of modules or classes. When software classes demanding the high maintainability effort (HME) are less as compared to the low maintainability effort (LME) classes, the situation leads to imbalanced datasets for training the SMP models. The imbalanced class distribution in SMP datasets could be a dilemma for various ML techniques because, in the case of an imbalanced dataset, minority class instances are either misclassified by the ML techniques or get discarded as noise. The recent development in predictive modeling has ascertained that ensemble techniques can boost the performance of ML techniques by collating their predictions. Ensembles themselves do not solve the class-imbalance problem much. However, aggregation of ensemble techniques with the certain techniques to handle class-imbalance problem (e.g., data resampling) has led to several proposals in research. This paper evaluates the performance of ensembles for the class-imbalance in the domain of SMP. The ensembles for class-imbalance problem (ECIP) are the modification of ensembles which pre-process the imbalanced data using data resampling before the learning process. This study experimentally compares the performance of several ECIP using performance metrics Balance and g-Mean over eight Apache software datasets. The results of the study advocate that for imbalanced datasets, ECIP improves the performance of SMP models as compared to classic ensembles.
APA, Harvard, Vancouver, ISO, and other styles
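The resampling-before-learning step that ECIP variants apply can be illustrated with a minimal pure-Python sketch (function and variable names are illustrative, not from the paper; a real ECIP would feed the balanced set to a bagging or boosting ensemble):

```python
import random
from collections import Counter

def oversample_minority(samples, seed=0):
    # Randomly duplicate minority-class rows until every class matches the
    # largest class -- the data-resampling pre-processing step of ECIP.
    rng = random.Random(seed)
    by_class = {}
    for features, label in samples:
        by_class.setdefault(label, []).append((features, label))
    target = max(len(rows) for rows in by_class.values())
    balanced = []
    for rows in by_class.values():
        balanced.extend(rows)
        balanced.extend(rng.choice(rows) for _ in range(target - len(rows)))
    return balanced

# Toy imbalanced dataset: 10 low-effort (LME, 0) vs 2 high-effort (HME, 1) classes.
data = [((i,), 0) for i in range(10)] + [((100,), 1), ((101,), 1)]
print(Counter(label for _, label in oversample_minority(data)))  # both classes now size 10
```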
17

Adejo, Olugbenga Wilson, and Thomas Connolly. "Predicting student academic performance using multi-model heterogeneous ensemble approach." Journal of Applied Research in Higher Education 10, no. 1 (February 5, 2018): 61–75. http://dx.doi.org/10.1108/jarhe-09-2017-0113.

Full text
Abstract:
Purpose The purpose of this paper is to empirically investigate and compare the use of multiple data sources, different classifiers and ensemble-of-classifiers techniques in predicting student academic performance. The study compares the performance and efficiency of ensemble techniques that make use of different combinations of data sources with that of base classifiers using a single data source. Design/methodology/approach Using a quantitative research methodology, data samples of 141 learners enrolled in the University of the West of Scotland were extracted from the institution’s databases and also collected through a survey questionnaire. The research focused on three data sources: student record system, learning management system and survey, and used three state-of-the-art data mining classifiers, namely decision tree, artificial neural network and support vector machine, for the modeling. In addition, ensembles of these base classifiers were used in student performance prediction, and the performances of the seven models developed were compared using six different evaluation metrics. Findings The results show that the approach of using multiple data sources along with heterogeneous ensemble techniques is efficient and accurate in predicting student performance and helps in properly identifying students at risk of attrition. Practical implications The approach proposed in this study will help educational administrators and policy makers working within the educational sector to develop new policies and curricula on higher education that are relevant to student retention. In addition, the general implication of this research for practice is its ability to help identify, early and from a combination of data sources, students at risk of dropping out of HE so that the necessary support and interventions can be provided.
Originality/value The research empirically investigated and compared the performance accuracy and efficiency of single classifiers and ensembles of classifiers that make use of single and multiple data sources. The study developed a novel hybrid model for predicting student performance that is both accurate and efficient. Generally, this research advances the understanding of the application of ensemble techniques to predicting student performance using learner data and successfully addresses two fundamental questions: What combination of variables will accurately predict student academic performance? What is the potential of stacking ensemble techniques for accurately predicting student academic performance?
APA, Harvard, Vancouver, ISO, and other styles
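The multi-source stacking idea can be sketched in a few lines of hypothetical Python: one level-0 learner per data source, with their outputs fed to a level-1 combiner. Here a fixed majority rule stands in for the trained meta-learner a real stacking model would use, and the thresholds and scores are made up:

```python
def make_stump(threshold):
    # Trivial level-0 learner: flags a student when its source's score exceeds a threshold.
    return lambda x: 1 if x > threshold else 0

def stack_predict(base_models, meta_rule, record):
    # record holds one value per data source (e.g. records system, LMS, survey).
    level0_votes = [model(x) for model, x in zip(base_models, record)]
    return meta_rule(level0_votes)

majority = lambda votes: 1 if sum(votes) > len(votes) / 2 else 0

bases = [make_stump(50), make_stump(0.6), make_stump(3)]
print(stack_predict(bases, majority, (72, 0.4, 5)))  # two of three sources vote 1 -> 1
```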
18

Aleshin, A. N., and P. B. Straumal. "Diffusion in an Ensemble of Intersecting Grain Boundaries." Defect and Diffusion Forum 354 (June 2014): 121–27. http://dx.doi.org/10.4028/www.scientific.net/ddf.354.121.

Full text
Abstract:
Grain boundary (GB) diffusion in an ensemble of three grain boundaries intersecting at a GB triple junction is described on the basis of the quasi-steady Fisher model. Two configurations of the ensemble are considered, namely, with different numbers of GBs adjacent to the surface covered with a diffusant source and with different angles between the GBs and the surface. Analytical expressions for the distribution of diffusant concentration along each GB of an ensemble are derived, supposing that the GB diffusion fluxes are equal at the GB triple junction. The expressions for the diffusant concentration distribution along the GBs in both ensembles include not only diffusion constants (such as the GB and bulk diffusion coefficients) but also structural characteristics of the ensemble of grain boundaries (i.e., the depth of the triple junction point under the surface and the angle between the GBs at the triple junction point). The specific features of diffusion kinetics in ensembles of different configurations with an angle of 120° (the equilibrium angle in a polycrystal) were revealed by comparing the diffusant concentration distributions in the ensembles and in a single GB.
APA, Harvard, Vancouver, ISO, and other styles
19

Day, Cherie K., Adam T. Deller, Ryan M. Shannon, Hao Qiu (邱昊), Keith W. Bannister, Shivani Bhandari, Ron Ekers, et al. "High time resolution and polarization properties of ASKAP-localized fast radio bursts." Monthly Notices of the Royal Astronomical Society 497, no. 3 (July 24, 2020): 3335–50. http://dx.doi.org/10.1093/mnras/staa2138.

Full text
Abstract:
ABSTRACT Combining high time and frequency resolution full-polarization spectra of fast radio bursts (FRBs) with knowledge of their host galaxy properties provides an opportunity to study both the emission mechanism generating them and the impact of their propagation through their local environment, host galaxy, and the intergalactic medium. The Australian Square Kilometre Array Pathfinder (ASKAP) telescope has provided the first ensemble of bursts with this information. In this paper, we present the high time and spectral resolution, full polarization observations of five localized FRBs to complement the results published for the previously studied ASKAP FRB 181112. We find that every FRB is highly polarized, with polarization fractions ranging from 80 to 100 per cent, and that they are generally dominated by linear polarization. While some FRBs in our sample exhibit properties associated with an emerging archetype (i.e. repeating or apparently non-repeating), others exhibit characteristic features of both, implying the existence of a continuum of FRB properties. When examined at high time resolution, we find that all FRBs in our sample have evidence for multiple subcomponents and for scattering at a level greater than expected from the Milky Way. We find no correlation between the diverse range of FRB properties (e.g. scattering time, intrinsic width, and rotation measure) and any global property of their host galaxy. The most heavily scattered bursts reside in the outskirts of their host galaxies, suggesting that the source-local environment rather than the host interstellar medium is likely the dominant origin of the scattering in our sample.
APA, Harvard, Vancouver, ISO, and other styles
20

Lee, Jared A., Walter C. Kolczynski, Tyler C. McCandless, and Sue Ellen Haupt. "An Objective Methodology for Configuring and Down-Selecting an NWP Ensemble for Low-Level Wind Prediction." Monthly Weather Review 140, no. 7 (July 1, 2012): 2270–86. http://dx.doi.org/10.1175/mwr-d-11-00065.1.

Full text
Abstract:
Abstract Ensembles of numerical weather prediction (NWP) model predictions are used for a variety of forecasting applications. Such ensembles quantify the uncertainty of the prediction because the spread in the ensemble predictions is correlated to forecast uncertainty. For atmospheric transport and dispersion and wind energy applications in particular, the NWP ensemble spread should accurately represent uncertainty in the low-level mean wind. To adequately sample the probability density function (PDF) of the forecast atmospheric state, it is necessary to account for several sources of uncertainty. Limited computational resources constrain the size of ensembles, so choices must be made about which members to include. No known objective methodology exists to guide users in choosing which combinations of physics parameterizations to include in an NWP ensemble, however. This study presents such a methodology. The authors build an NWP ensemble using the Advanced Research Weather Research and Forecasting Model (ARW-WRF). This 24-member ensemble varies physics parameterizations for 18 randomly selected 48-h forecast periods in boreal summer 2009. Verification focuses on 2-m temperature and 10-m wind components at forecast lead times from 12 to 48 h. Various statistical guidance methods are employed for down-selection, calibration, and verification of the ensemble forecasts. The ensemble down-selection is accomplished with principal component analysis. The ensemble PDF is then statistically dressed, or calibrated, using Bayesian model averaging. The postprocessing techniques presented here result in a recommended down-selected ensemble that is about half the size of the original ensemble yet produces similar forecast performance, and still includes critical diversity in several types of physics schemes.
APA, Harvard, Vancouver, ISO, and other styles
21

Peltier, Leonard J., Sue Ellen Haupt, John C. Wyngaard, David R. Stauffer, Aijun Deng, Jared A. Lee, Kerrie J. Long, and Andrew J. Annunzio. "Parameterizing Mesoscale Wind Uncertainty for Dispersion Modeling." Journal of Applied Meteorology and Climatology 49, no. 8 (August 1, 2010): 1604–14. http://dx.doi.org/10.1175/2010jamc2396.1.

Full text
Abstract:
Abstract A parameterization of numerical weather prediction uncertainty is presented for use by atmospheric transport and dispersion models. The theoretical development applies Taylor dispersion concepts to diagnose dispersion metrics from numerical wind field ensembles, where the ensemble variability approximates the wind field uncertainty. This analysis identifies persistent wind direction differences in the wind field ensemble as a leading source of enhanced “virtual” dispersion, and thus enhanced uncertainty for the ensemble-mean contaminant plume. This dispersion is characterized by the Lagrangian integral time scale for the grid-resolved, large-scale, “outer” flow that is imposed through the initial and boundary conditions and by the ensemble deviation-velocity variance. Excellent agreement is demonstrated between an explicit ensemble-mean contaminant plume generated from a Gaussian plume model applied to the individual wind field ensemble members and the modeled ensemble-mean plume formed from the one Gaussian plume simulation enhanced with the new ensemble dispersion metrics.
APA, Harvard, Vancouver, ISO, and other styles
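The Taylor-dispersion reasoning in this entry can be sketched numerically (an illustrative formula, not the authors' code, with made-up numbers): the ensemble-derived "virtual" variance grows with the deviation-velocity variance and the Lagrangian integral time scale, and is added in quadrature to the plume's turbulent spread.

```python
import math

def taylor_sigma2(sigma_v2, t_lagr, t):
    # Taylor (1921) single-particle dispersion: variance grows as sigma_v2 * t**2
    # for t << t_lagr and as 2 * sigma_v2 * t_lagr * t for t >> t_lagr.
    return 2.0 * sigma_v2 * t_lagr**2 * (t / t_lagr - 1.0 + math.exp(-t / t_lagr))

# Effective plume width: turbulent spread plus ensemble 'virtual' dispersion
# (hypothetical values; units: m for sigma, m2 s-2 for variance, s for times).
sigma_turb = 25.0
sigma_eff = math.sqrt(sigma_turb**2 + taylor_sigma2(0.5, 600.0, 1800.0))
print(round(sigma_eff, 1))
```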
22

Korsakissok, I., R. Périllat, S. Andronopoulos, P. Bedwell, E. Berge, T. Charnock, G. Geertsema, et al. "Uncertainty propagation in atmospheric dispersion models for radiological emergencies in the pre- and early release phase: summary of case studies." Radioprotection 55 (May 2020): S57—S68. http://dx.doi.org/10.1051/radiopro/2020013.

Full text
Abstract:
In the framework of the European project CONFIDENCE, Work Package 1 (WP1) focused on the uncertainties in the pre- and early phase of a radiological emergency, when environmental observations are not available and the assessment of the environmental and health impact of the accident largely relies on atmospheric dispersion modelling. The latter is subject to large uncertainties coming from, in particular, meteorological and release data. In WP1, several case studies were identified, including hypothetical accident scenarios in Europe and the Fukushima accident, for which participants propagated input uncertainties through their atmospheric dispersion and subsequent dose models. This resulted in several ensembles of results (consisting of tens to hundreds of simulations) that were compared to each other and to radiological observations (in the Fukushima case). These ensembles were analysed in order to answer questions such as: among meteorology, source term and model-related uncertainties, which are the predominant ones? Are uncertainty assessments very different between the participants and can this inter-ensemble variability be explained? What are the optimal ways of characterizing and presenting the uncertainties? Is the ensemble modelling sufficient to encompass the observations, or are there sources of uncertainty not (sufficiently) taken into account? This paper describes the case studies of WP1 and presents some illustrations of the results, with a summary of the main findings.
APA, Harvard, Vancouver, ISO, and other styles
23

Velázquez, J. A., F. Anctil, M. H. Ramos, and C. Perrin. "Can a multi-model approach improve hydrological ensemble forecasting? A study on 29 French catchments using 16 hydrological model structures." Advances in Geosciences 29 (February 28, 2011): 33–42. http://dx.doi.org/10.5194/adgeo-29-33-2011.

Full text
Abstract:
Abstract. An operational hydrological ensemble forecasting system based on a meteorological ensemble prediction system (M-EPS) coupled with a hydrological model seeks to capture the uncertainties associated with the meteorological prediction to better predict river flows. However, the structure of the hydrological model is also an important source of uncertainty that has to be taken into account. This study aims at evaluating and comparing the performance and the reliability of different types of hydrological ensemble prediction systems (H-EPS), when ensemble weather forecasts are combined with a multi-model approach. The study is based on 29 catchments in France and 16 lumped hydrological model structures, driven by the weather forecasts from the European centre for medium-range weather forecasts (ECMWF). Results show that the ensemble predictions produced by a combination of several hydrological model structures and meteorological ensembles have higher skill and reliability than ensemble predictions given either by one single hydrological model fed by weather ensemble predictions or by several hydrological models and a deterministic meteorological forecast.
APA, Harvard, Vancouver, ISO, and other styles
24

Ge, Shengguo, Siti Nurulain Mohd Rum, Hamidah Ibrahim, Erzam Marsilah, and Thinagaran Perumal. "A Source Number Enumeration Method at Low SNR Based on Ensemble Learning." International Journal of Emerging Technology and Advanced Engineering 13, no. 3 (March 1, 2023): 81–90. http://dx.doi.org/10.46338/ijetae0323_08.

Full text
Abstract:
Source number estimation is one of the important research directions in array signal processing. To solve the difficulty of estimating the number of signal sources under a low signal-to-noise ratio (SNR), a source number enumeration method based on ensemble learning is proposed. This method first preprocesses the signal data: the original signal is decomposed into several intrinsic mode functions (IMF) using Complementary Ensemble Empirical Mode Decomposition (CEEMD), and a covariance matrix is then constructed and eigenvalue decomposition performed to obtain samples. Finally, a source number enumeration model based on ensemble learning is used to predict the number of sources. This model is divided into two layers: first, the primary learner is trained on the dataset; the primary learner's predictions are then used as input to train the secondary learner, which produces the final prediction. Simulated signals and real measured signals are both used to verify the proposed source number enumeration method. Experiments show that this method performs better than other methods at low SNR and is more suitable for real environments. Keywords: Source number estimation; Array signal processing; SNR; IMF; CEEMD; Ensemble learning.
APA, Harvard, Vancouver, ISO, and other styles
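The eigenvalue step this entry relies on can be illustrated with a deliberately simplified heuristic (not the paper's learned two-layer model): after decomposition of the covariance matrix, signal eigenvalues stand out above the noise floor, and counting them estimates the source number.

```python
def estimate_num_sources(eigvals, noise_factor=2.0):
    # Crude gap heuristic: treat the mean of the smallest half of the
    # eigenvalues as the noise floor and count eigenvalues well above it.
    vals = sorted(eigvals, reverse=True)
    tail = vals[len(vals) // 2:]
    noise_floor = sum(tail) / len(tail)
    return sum(v > noise_factor * noise_floor for v in vals)

# Six-sensor toy spectrum: two strong signal eigenvalues over a noisy floor.
print(estimate_num_sources([9.1, 0.6, 7.8, 0.5, 0.4, 0.5]))  # -> 2
```

At low SNR this gap blurs, which is why the paper replaces the fixed threshold with a trained ensemble operating on the eigenvalue features.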
25

Clark, Adam J., William A. Gallus, and Tsing-Chang Chen. "Contributions of Mixed Physics versus Perturbed Initial/Lateral Boundary Conditions to Ensemble-Based Precipitation Forecast Skill." Monthly Weather Review 136, no. 6 (June 1, 2008): 2140–56. http://dx.doi.org/10.1175/2007mwr2029.1.

Full text
Abstract:
Abstract An experiment is described that is designed to examine the contributions of model, initial condition (IC), and lateral boundary condition (LBC) errors to the spread and skill of precipitation forecasts from two regional eight-member 15-km grid-spacing Weather Research and Forecasting (WRF) ensembles covering a 1575 km × 1800 km domain. It is widely recognized that a skillful ensemble [i.e., an ensemble with a probability distribution function (PDF) that generates forecast probabilities with high resolution and reliability] should account for both error sources. Previous work suggests that model errors make a larger contribution than IC and LBC errors to forecast uncertainty in the short range before synoptic-scale error growth becomes nonlinear. However, in a regional model with unperturbed LBCs, the infiltration of the lateral boundaries will negate increasing spread. To obtain a better understanding of the contributions to the forecast errors in precipitation and to examine the window of forecast lead time before unperturbed ICs and LBCs begin to cause degradation in ensemble forecast skill, the “perfect model” assumption is made in an ensemble that uses perturbed ICs and LBCs (PILB ensemble), and the “perfect analysis” assumption is made in another ensemble that uses mixed physics–dynamic cores (MP ensemble), thus isolating the error contributions. For the domain and time period used in this study, unperturbed ICs and LBCs in the MP ensemble begin to negate increasing spread around forecast hour 24, and ensemble forecast skill as measured by relative operating characteristic curves (ROC scores) becomes lower in the MP ensemble than in the PILB ensemble, with statistical significance beginning after forecast hour 69. However, degradation in forecast skill in the MP ensemble relative to the PILB ensemble is not observed in an analysis of deterministic forecasts calculated from each ensemble using the probability matching method. 
Both ensembles were found to lack statistical consistency (i.e., to be underdispersive), with the PILB ensemble (MP ensemble) exhibiting more (less) statistical consistency with respect to forecast lead time. Spread ratios in the PILB ensemble are greater than those in the MP ensemble at all forecast lead times and thresholds examined; however, ensemble variance in the MP ensemble is greater than that in the PILB ensemble during the first 24 h of the forecast. This discrepancy in spread measures likely results from greater bias in the MP ensemble leading to an increase in ensemble variance and decrease in spread ratio relative to the PILB ensemble.
APA, Harvard, Vancouver, ISO, and other styles
26

Giles, Daniel, Brian McConnell, and Frédéric Dias. "Modelling with Volna-OP2—Towards Tsunami Threat Reduction for the Irish Coastline." Geosciences 10, no. 6 (June 10, 2020): 226. http://dx.doi.org/10.3390/geosciences10060226.

Full text
Abstract:
Tsunamis are infrequent events that have the potential to be extremely destructive. The last major tsunami to affect the Irish coastline was the Lisbon 1755 event. That event acts as a candidate worst-case scenario for hazard assessment, and its impacts on the Irish coastline are presented here. As there is no general consensus on the 1755 earthquake source, multiple sources highlighted in the literature are investigated. These sources are used to generate the initial conditions, and the resultant tsunami waves are simulated with the massively parallelised Volna-OP2 finite volume tsunami code. The hazard associated with the event is captured on three gradated levels. A reduced faster-than-real-time tsunami ensemble is produced for the North-East Atlantic on a regional level in 93 s using two Nvidia V100 GPUs. By identifying the most vulnerable sections of the Irish coastline from this regional forecast, some locally refined simulations are further carried out in a faster-than-real-time setting. As arrival times on the coastline can be on the order of minutes, these faster-than-real-time reduced ensembles are of great benefit for tsunami warning. Volna-OP2’s capabilities in this respect are clearly demonstrated here. Finally, high resolution inundation simulations, which build upon the ensemble results, are carried out. To date, this study provides the best available assessment of the hazard associated with a Lisbon-type tsunami event for the Irish coastline. The results of the inundation mapping highlight that along the vulnerable sections of coastline, inundation is constrained to low-lying areas, with maximum run-up heights of 3.4 m.
APA, Harvard, Vancouver, ISO, and other styles
27

Plu, Matthieu, Barbara Scherllin-Pirscher, Delia Arnold Arias, Rocio Baro, Guillaume Bigeard, Luca Bugliaro, Ana Carvalho, et al. "An ensemble of state-of-the-art ash dispersion models: towards probabilistic forecasts to increase the resilience of air traffic against volcanic eruptions." Natural Hazards and Earth System Sciences 21, no. 10 (October 5, 2021): 2973–92. http://dx.doi.org/10.5194/nhess-21-2973-2021.

Full text
Abstract:
Abstract. High-quality volcanic ash forecasts are crucial to minimize the economic impact of volcanic hazards on air traffic. Decision-making is usually based on numerical dispersion modelling with only one model realization. Given the inherent uncertainty of such an approach, a multi-model multi-source term ensemble has been designed and evaluated for the Eyjafjallajökull eruption in May 2010. Its use for flight planning is discussed. Two multi-model ensembles were built: the first is based on the output of four dispersion models and their own implementation of ash ejection. All a priori model source terms were constrained by observational evidence of the volcanic ash cloud top as a function of time. The second ensemble is based on the same four dispersion models, which were run with three additional source terms: (i) a source term obtained from a model background constrained with satellite data (a posteriori source term), (ii) its lower-bound estimate and (iii) its upper-bound estimate. The a priori ensemble gives valuable information about the probability of ash dispersion during the early phase of the eruption, when observational evidence is limited. However, its evaluation with observational data reveals lower quality compared to the second ensemble. While the second ensemble ash column load and ash horizontal location compare well to satellite observations, 3D ash concentrations are negatively biased. This might be caused by the vertical distribution of ash, which is too diluted in all model runs, probably due to deficiencies in the a posteriori source term and vertical transport and/or diffusion processes in all models. Relevant products for air traffic management are horizontal maps of ash concentration quantiles (median, 75 %, 99 %) on a finely resolved flight-level grid as well as cross sections. 
These maps enable cost-optimized consideration of volcanic hazards and could result in much fewer flight cancellations, reroutings and traffic flow congestions. In addition, they could be used for route optimization in the areas where ash does not pose a direct and urgent threat to aviation, including the aspect of aeroplane maintenance.
APA, Harvard, Vancouver, ISO, and other styles
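The quantile maps described in this entry reduce, at each grid cell, to an order statistic over the member values; a minimal nearest-rank sketch with hypothetical member loads:

```python
def ensemble_quantile(member_values, q):
    # Nearest-rank quantile of the ensemble member values at one grid cell.
    ordered = sorted(member_values)
    index = min(len(ordered) - 1, int(q * len(ordered)))
    return ordered[index]

# Ash column loads (g m-2) from eight hypothetical members at one cell:
loads = [0.1, 0.4, 0.2, 0.9, 0.3, 0.5, 0.2, 0.8]
print([ensemble_quantile(loads, q) for q in (0.50, 0.75, 0.99)])  # [0.4, 0.8, 0.9]
```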
29

Setiyowati, Eka, Agus Rusgiyono, and Tarno Tarno. "Model Kombinasi ARIMA dalam Peramalan Harga Minyak Mentah Dunia" [Combined ARIMA Models for Forecasting World Crude Oil Prices]. Jurnal Gaussian 7, no. 1 (February 28, 2018): 54–63. http://dx.doi.org/10.14710/j.gauss.v7i1.26635.

Full text
Abstract:
Oil is one of the most important commodities in everyday life because it is one of the main sources of energy that people need. Changes in crude oil prices greatly affect the economic conditions of a country. Therefore, the aim of this study is to develop an appropriate model for forecasting crude oil prices based on ARIMA models and their ensembles. In this study, the ensemble method uses several ARIMA models to create ensemble members, which are then combined with averaging and stacking techniques. The data used are world crude oil prices for the period 2003–2017. The results show that the ARIMA(1,1,0) model produces the smallest RMSE values for forecasting the next thirty-six months. Keywords: Ensemble, ARIMA, Averaging, Stacking, Crude Oil Price
APA, Harvard, Vancouver, ISO, and other styles
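The "averaging" combination mentioned in this entry is simply the member-wise mean of the individual ARIMA forecasts (a stacked combiner would instead be fitted on held-out forecasts); a sketch with made-up numbers:

```python
def average_ensemble(member_forecasts):
    # Combine member forecasts step by step along the horizon with a plain mean.
    horizon = len(member_forecasts[0])
    n = len(member_forecasts)
    return [sum(f[step] for f in member_forecasts) / n for step in range(horizon)]

# Three hypothetical ARIMA members forecasting three months ahead (USD/barrel):
members = [[61.0, 62.0, 63.0], [59.0, 60.0, 61.0], [60.0, 61.0, 62.0]]
print(average_ensemble(members))  # [60.0, 61.0, 62.0]
```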
30

Komaromi, William A., Patrick A. Reinecke, James D. Doyle, and Jonathan R. Moskaitis. "The Naval Research Laboratory’s Coupled Ocean–Atmosphere Mesoscale Prediction System-Tropical Cyclone Ensemble (COAMPS-TC Ensemble)." Weather and Forecasting 36, no. 2 (April 2021): 499–517. http://dx.doi.org/10.1175/waf-d-20-0038.1.

Full text
Abstract:
Abstract The 11-member Coupled Ocean–Atmosphere Mesoscale Prediction System-Tropical Cyclones (COAMPS-TC) ensemble has been developed by the Naval Research Laboratory (NRL) to produce probabilistic forecasts of tropical cyclone (TC) track, intensity and structure. All members run with a storm-following inner grid at convection-permitting 4-km horizontal resolution. The COAMPS-TC ensemble is constructed via a combination of perturbations to initial and boundary conditions, the initial vortex, and model physics to account for a variety of different sources of uncertainty that affect track and intensity forecasts. Unlike global model ensembles, which do a reasonable job capturing track uncertainty but not intensity, mesoscale ensembles such as the COAMPS-TC ensemble are necessary to provide a realistic intensity forecast spectrum. The initial and boundary condition perturbations are responsible for generating the majority of track spread at all lead times, as well as the intensity spread from 36 to 120 h. The vortex and physics perturbations are necessary to produce meaningful spread in the intensity prediction from 0 to 36 h. In a large sample of forecasts from 2014 to 2017, the ensemble-mean track and intensity forecast is superior to the unperturbed control forecast at all lead times, demonstrating a clear advantage to running an ensemble versus a deterministic forecast. The spread–skill relationship of the ensemble is also examined, and is found to be very well calibrated for track, but is underdispersive for intensity. Using a mixture of lateral boundary conditions derived from different global models is found to improve upon the spread–skill score for intensity, but it is hypothesized that additional physics perturbations will be necessary to achieve realistic ensemble spread.
APA, Harvard, Vancouver, ISO, and other styles
31

Younas, Waqar, and Youmin Tang. "PNA Predictability at Various Time Scales." Journal of Climate 26, no. 22 (October 29, 2013): 9090–114. http://dx.doi.org/10.1175/jcli-d-12-00609.1.

Full text
Abstract:
Abstract In this study, the predictability of the Pacific–North American (PNA) pattern is evaluated on time scales from days to months using state-of-the-art dynamical multiple-model ensembles including the Canadian Historical Forecast Project (HFP2) ensemble, the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) ensemble, and the Ensemble-Based Predictions of Climate Changes and their Impacts (ENSEMBLES). Some interesting findings in this study include (i) multiple-model ensemble (MME) skill was better than most of the individual models; (ii) both actual prediction skill and potential predictability increased as the averaging time scale increased from days to months; (iii) there is no significant difference in actual skill between coupled and uncoupled models, in contrast with the potential predictability where coupled models performed better than uncoupled models; (iv) relative entropy (REA) is an effective measure in characterizing the potential predictability of individual prediction, whereas the mutual information (MI) is a reliable indicator of overall prediction skill; and (v) compared with conventional potential predictability measures of the signal-to-noise ratio, the MI-based measures characterized more potential predictability when the ensemble spread varied over initial conditions. Further analysis found that the signal component dominated the dispersion component in REA for PNA potential predictability from days to seasons. Also, the PNA predictability is highly related to the signal of the tropical sea surface temperature (SST), and SST–PNA correlation patterns resemble the typical ENSO structure, suggesting that ENSO is the main source of PNA seasonal predictability. 
The predictable component analysis (PrCA) of atmospheric variability further confirmed the above conclusion; that is, PNA is one of the most predictable patterns in the climate variability over the Northern Hemisphere, which originates mainly from the ENSO forcing.
APA, Harvard, Vancouver, ISO, and other styles
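For intuition on the relative-entropy measure used in this entry: when both the forecast distribution p and the climatological distribution q are approximated as univariate Gaussians, the relative entropy has the closed form below (a standard identity for Gaussian distributions, not taken from the paper). The variance-ratio terms correspond to the dispersion (spread) component and the squared-mean-difference term to the signal component.

```latex
R = \int p(x)\,\ln\frac{p(x)}{q(x)}\,dx
  = \frac{1}{2}\left[\ln\frac{\sigma_q^{2}}{\sigma_p^{2}}
  + \frac{\sigma_p^{2}}{\sigma_q^{2}}
  + \frac{(\mu_p-\mu_q)^{2}}{\sigma_q^{2}} - 1\right]
```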
32

El-Ouartassy, Youness, Irène Korsakissok, Matthieu Plu, Olivier Connan, Laurent Descamps, and Laure Raynaud. "Combining short-range dispersion simulations with fine-scale meteorological ensembles: probabilistic indicators and evaluation during a 85Kr field campaign." Atmospheric Chemistry and Physics 22, no. 24 (December 16, 2022): 15793–816. http://dx.doi.org/10.5194/acp-22-15793-2022.

Full text
Abstract:
Abstract. Numerical atmospheric dispersion models (ADMs) are used for predicting the health and environmental consequences of nuclear accidents in order to anticipate countermeasures necessary to protect the populations. However, these simulations suffer from significant uncertainties, arising in particular from input data: weather conditions and source term. Meteorological ensembles are already used operationally to characterize uncertainties in weather predictions. Combined with dispersion models, these ensembles produce different scenarios of radionuclide dispersion, called “members”, representative of the variety of possible forecasts. In this study, the fine-scale operational weather ensemble AROME-EPS (Applications of Research to Operations at Mesoscale-Ensemble Prediction System) from Météo-France is coupled with the Gaussian puff model pX developed by the IRSN (French Institute for Radiation Protection and Nuclear Safety). The source term data are provided at 10 min resolution by the Orano La Hague reprocessing plant (RP) that regularly discharges 85Kr during the spent nuclear fuel reprocessing process. In addition, a continuous measurement campaign of 85Kr air concentration was recently conducted by the Laboratory of Radioecology in Cherbourg (LRC) of the IRSN, within 20 km of the RP in the North-Cotentin peninsula, and is used for model evaluation. This paper presents a probabilistic approach to study the meteorological uncertainties in dispersion simulations at local and medium distances (2–20 km). First, the quality of AROME-EPS forecasts is confirmed by comparison with observations from both Météo-France and the IRSN. Then, the probabilistic performance of the atmospheric dispersion simulations was evaluated by comparison to the 85Kr measurements carried out during a period of 2 months, using two probabilistic scores: relative operating characteristic (ROC) curves and Peirce skill score (PSS). 
The sensitivity of dispersion results to the method used for the calculation of atmospheric stability and associated Gaussian dispersion standard deviations is also discussed. A desirable feature for a model used in emergency response is the ability to correctly predict exceedance of a given value (for instance, a dose guide level). When using an ensemble of simulations, the “decision threshold” is the number of members predicting an event above which this event should be considered probable. In the case of the 16-member dispersion ensemble used here, the optimal decision threshold was found to be 3 members, above which the ensemble better predicts the observed peaks than the deterministic simulation. These results highlight the added value of ensemble forecasts compared to a single deterministic one and their potential interest in the decision process during crisis situations.
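The decision-threshold logic summarized in this abstract can be sketched in a few lines: forecast an exceedance when at least k of the 16 members predict it, and score each k with the Peirce skill score (hit rate minus false alarm rate). The member counts and observations below are illustrative placeholders, not the campaign's 85Kr data.

```python
def peirce_skill_score(member_counts, observed, decision_threshold):
    """PSS = hit rate - false alarm rate for a yes/no exceedance forecast.

    member_counts[i] -- ensemble members predicting the event at case i
    observed[i]      -- whether the exceedance was actually observed
    The event is forecast when at least decision_threshold members predict it.
    """
    hits = misses = false_alarms = correct_negatives = 0
    for count, obs_i in zip(member_counts, observed):
        forecast = count >= decision_threshold
        if forecast and obs_i:
            hits += 1
        elif not forecast and obs_i:
            misses += 1
        elif forecast:
            false_alarms += 1
        else:
            correct_negatives += 1
    hit_rate = hits / (hits + misses) if hits + misses else 0.0
    far = (false_alarms / (false_alarms + correct_negatives)
           if false_alarms + correct_negatives else 0.0)
    return hit_rate - far

# Hypothetical cases: members (out of 16) predicting an exceedance, and truth.
counts = [0, 2, 5, 12, 16, 1, 8, 3]
obs = [False, False, True, True, True, False, True, False]
# Scan decision thresholds for the one maximizing PSS.
best = max(range(1, 17), key=lambda k: peirce_skill_score(counts, obs, k))
print(best)
```

With real data the optimal threshold trades hits against false alarms, as in the paper's finding of 3 members out of 16.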
APA, Harvard, Vancouver, ISO, and other styles
33

Vialard, Jérôme, Frédéric Vitart, Magdalena A. Balmaseda, Timothy N. Stockdale, and David L. T. Anderson. "An Ensemble Generation Method for Seasonal Forecasting with an Ocean–Atmosphere Coupled Model." Monthly Weather Review 133, no. 2 (February 1, 2005): 441–53. http://dx.doi.org/10.1175/mwr-2863.1.

Full text
Abstract:
Abstract Seasonal forecasts are subject to various types of errors: amplification of errors in oceanic initial conditions, errors due to the unpredictable nature of the synoptic atmospheric variability, and coupled model error. Ensemble forecasting is usually used in an attempt to sample some or all of these various sources of error. How to build an ensemble forecasting system in the seasonal range remains a largely unexplored area. In this paper, various ensemble generation methodologies for the European Centre for Medium-Range Weather Forecasts (ECMWF) seasonal forecasting system are compared. A series of experiments using wind perturbations (applied when generating the oceanic initial conditions), sea surface temperature (SST) perturbations to those initial conditions, and random perturbation to the atmosphere during the forecast, individually and collectively, is presented and compared with the more usual lagged-average approach. SST perturbations are important during the first 2 months of the forecast to ensure a spread at least equal to the uncertainty level on the SST measure. From month 3 onward, all methods give a similar spread. This spread is significantly smaller than the rms error of the forecasts. There is also no clear link between the spread of the ensemble and the ensemble mean forecast error. These two facts suggest that factors not presently sampled in the ensemble, such as model error, act to limit the forecast skill. Methods that allow sampling of model error, such as multimodel ensembles, should be beneficial to seasonal forecasting.
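The spread-versus-error diagnostic this abstract relies on reduces to two statistics: the standard deviation across members and the rms error of the ensemble mean. A minimal sketch with invented SST anomaly values (nothing here comes from the ECMWF system):

```python
import math

def spread(members):
    """Ensemble standard deviation at one verification time."""
    m = sum(members) / len(members)
    return math.sqrt(sum((x - m) ** 2 for x in members) / len(members))

def rmse(forecasts, truths):
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(forecasts, truths)) / len(truths))

# Hypothetical 5-member SST anomaly forecasts (K) for three start dates.
ensembles = [
    [0.40, 0.50, 0.45, 0.55, 0.50],
    [0.10, 0.20, 0.15, 0.05, 0.10],
    [-0.30, -0.20, -0.25, -0.35, -0.30],
]
truth = [0.9, -0.2, 0.1]
means = [sum(e) / len(e) for e in ensembles]
mean_spread = sum(spread(e) for e in ensembles) / len(ensembles)
error = rmse(means, truth)
# An under-dispersive system, as diagnosed in the paper, has spread well below error.
print(round(mean_spread, 3), round(error, 3))
```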
APA, Harvard, Vancouver, ISO, and other styles
34

Groenemeijer, P., and G. C. Craig. "Ensemble forecasting with a stochastic convective parametrization based on equilibrium statistics." Atmospheric Chemistry and Physics Discussions 11, no. 11 (November 11, 2011): 30457–85. http://dx.doi.org/10.5194/acpd-11-30457-2011.

Full text
Abstract:
Abstract. The stochastic Plant-Craig scheme for deep convection was implemented in the COSMO mesoscale model and used for ensemble forecasting. Ensembles consisting of 100 48-h forecasts at 7 km horizontal resolution were generated for a 2000 × 2000 km domain covering central Europe. Forecasts were made for seven case studies characterized by different large-scale meteorological environments. Each 100 member ensemble consisted of 10 groups of 10 members, with each group driven by boundary and initial conditions from a selected member from the global ECMWF Ensemble Prediction System. The precipitation variability within and among these groups of members was computed, and it was found that the relative contribution to the ensemble variance introduced by the stochastic convection scheme was substantial, amounting to as much as 76% of the total variance in the ensemble in one of the studied cases. The impact of the scheme was not confined to the grid scale, and typically contributed 25–50% of the total variance even after the precipitation fields had been smoothed to a resolution of 35 km. The variability of precipitation introduced by the scheme was approximately proportional to the total amount of convection that occurred, while the variability due to large-scale conditions changed from case to case, being highest in cases exhibiting strong mid-tropospheric flow and pronounced meso- to synoptic scale vorticity extrema. The stochastic scheme was thus found to be an important source of variability in precipitation cases of weak large-scale flow lacking strong vorticity extrema, but high convective activity.
APA, Harvard, Vancouver, ISO, and other styles
35

Groenemeijer, P., and G. C. Craig. "Ensemble forecasting with a stochastic convective parametrization based on equilibrium statistics." Atmospheric Chemistry and Physics 12, no. 10 (May 24, 2012): 4555–65. http://dx.doi.org/10.5194/acp-12-4555-2012.

Full text
Abstract:
Abstract. The stochastic Plant-Craig scheme for deep convection was implemented in the COSMO mesoscale model and used for ensemble forecasting. Ensembles consisting of 100 48-h forecasts at 7 km horizontal resolution were generated for a 2000×2000 km domain covering central Europe. Forecasts were made for seven case studies characterized by different large-scale meteorological environments. Each 100 member ensemble consisted of 10 groups of 10 members, with each group driven by boundary and initial conditions from a selected member from the global ECMWF Ensemble Prediction System. The precipitation variability within and among these groups of members was computed, and it was found that the relative contribution to the ensemble variance introduced by the stochastic convection scheme was substantial, amounting to as much as 76% of the total variance in the ensemble in one of the studied cases. The impact of the scheme was not confined to the grid scale, and typically contributed 25–50% of the total variance even after the precipitation fields had been smoothed to a resolution of 35 km. The variability of precipitation introduced by the scheme was approximately proportional to the total amount of convection that occurred, while the variability due to large-scale conditions changed from case to case, being highest in cases exhibiting strong mid-tropospheric flow and pronounced meso- to synoptic scale vorticity extrema. The stochastic scheme was thus found to be an important source of variability in precipitation cases of weak large-scale flow lacking strong vorticity extrema, but high convective activity.
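The within-group/among-group variance partition used in this study (10 groups of 10 members sharing driving conditions) can be sketched with a toy three-groups-of-four precipitation ensemble; all numbers are invented:

```python
def variance_partition(groups):
    """Fraction of total ensemble variance arising within groups.

    groups: list of lists; each inner list holds precipitation values from
    members that share one driving global member, so within-group spread is
    attributable to the stochastic convection scheme alone.
    """
    all_members = [x for g in groups for x in g]
    n = len(all_members)
    grand_mean = sum(all_members) / n
    total_var = sum((x - grand_mean) ** 2 for x in all_members) / n
    within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups) / n
    return within / total_var

groups = [
    [10.0, 14.0, 12.0, 16.0],   # members sharing driving member 1 (mm)
    [11.0, 15.0, 9.0, 13.0],    # driving member 2
    [20.0, 24.0, 22.0, 18.0],   # driving member 3
]
frac = variance_partition(groups)
print(round(frac, 2))
```

The remainder of the variance (here about 76%) is attributable to the differing boundary and initial conditions.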
APA, Harvard, Vancouver, ISO, and other styles
36

Yip, Stan, Christopher A. T. Ferro, David B. Stephenson, and Ed Hawkins. "A Simple, Coherent Framework for Partitioning Uncertainty in Climate Predictions." Journal of Climate 24, no. 17 (September 2011): 4634–43. http://dx.doi.org/10.1175/2011jcli4085.1.

Full text
Abstract:
A simple and coherent framework for partitioning uncertainty in multimodel climate ensembles is presented. The analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty, and internal variability. This approach requires fewer assumptions than existing methods and can be easily used to quantify uncertainty related to model–scenario interaction—the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble and results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050 and internal variability remains approximately constant over the twenty-first century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multimodel ensembles. For example, three models show a diverging pattern over the twenty-first century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
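A minimal sketch of the additive ANOVA partition described above, for a balanced toy ensemble of two models, two scenarios, and two runs each (the temperature values are invented). The four components sum exactly to the total variation across runs:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Toy projections (K): y[model][scenario] -> list of runs (internal variability).
y = {
    "modelA": {"low": [1.0, 1.2], "high": [2.0, 2.2]},
    "modelB": {"low": [1.4, 1.6], "high": [2.8, 3.0]},
}
models = sorted(y)
scenarios = sorted(next(iter(y.values())))
runs = [r for m in models for s in scenarios for r in y[m][s]]
grand = mean(runs)

cell = {(m, s): mean(y[m][s]) for m in models for s in scenarios}
mmean = {m: mean([cell[m, s] for s in scenarios]) for m in models}
smean = {s: mean([cell[m, s] for m in models]) for s in scenarios}

scenario_var = mean([(smean[s] - grand) ** 2 for s in scenarios])
model_var = mean([(mmean[m] - grand) ** 2 for m in models])
interaction_var = mean([(cell[m, s] - mmean[m] - smean[s] + grand) ** 2
                        for m in models for s in scenarios])
internal_var = mean([(r - cell[m, s]) ** 2
                     for m in models for s in scenarios for r in y[m][s]])
total = scenario_var + model_var + interaction_var + internal_var
print(round(scenario_var, 3), round(model_var, 3),
      round(interaction_var, 3), round(internal_var, 3))
```

The interaction term is the model-scenario contribution the paper highlights: variation across scenarios of each model's deviation from the ensemble mean.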
APA, Harvard, Vancouver, ISO, and other styles
37

Siswoyo, Bambang, Zuraida Abal Abas, Ahmad Naim Che Pee, Rita Komalasari, and Nano Suryana. "Ensemble machine learning algorithm optimization of bankruptcy prediction of bank." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 2 (June 1, 2022): 679. http://dx.doi.org/10.11591/ijai.v11.i2.pp679-686.

Full text
Abstract:
An ensemble consists of a set of individually trained models whose predictions are combined when classifying new cases; building a good classification model requires diversity among the single models. Logistic regression, support vector machine, random forest, and neural network algorithms are single models that serve as alternative sources of diversity. Previous research has shown that ensembles are more accurate than single models. The single model and the modified ensemble bagging model are the techniques we study in this paper. We experimented with the banking industry’s financial ratios. Our observations are as follows. First, an ensemble is always more accurate than a single model. Second, modified ensemble bagging models show improved classification performance on balanced datasets, as they can adjust their behavior to suit relatively small datasets. The accuracy rate is 97% for the bagging ensemble learning model, an increase in accuracy of up to 16% compared to other models that use unbalanced datasets.
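A minimal sketch of the bagging idea on a balanced toy dataset with one financial-ratio feature (label 1 = bankrupt). A one-feature threshold stump stands in for the single models; this is not the authors' implementation, and all data are hypothetical:

```python
import random

random.seed(0)

def train_stump(sample):
    """Pick the threshold on the single feature minimizing training error."""
    best = None
    for x, _ in sample:
        errs = sum(1 for xi, yi in sample if (xi >= x) != (yi == 1))
        if best is None or errs < best[1]:
            best = (x, errs)
    return best[0]

def bagging_predict(stumps, x):
    """Majority vote over the bootstrap-trained stumps."""
    votes = sum(1 for t in stumps if x >= t)
    return 1 if 2 * votes >= len(stumps) else 0

# Hypothetical financial-ratio values; balanced classes, label 1 = bankrupt.
data = [(0.2, 0), (0.3, 0), (0.35, 0), (0.4, 0),
        (0.7, 1), (0.8, 1), (0.85, 1), (0.9, 1)]
# Bagging: train one stump per bootstrap resample of the data.
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]
accuracy = sum(bagging_predict(stumps, x) == y for x, y in data) / len(data)
print(accuracy)
```

Bootstrap resampling supplies the diversity among otherwise identical weak learners, which is the property the abstract credits for the ensemble's gain over a single model.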
APA, Harvard, Vancouver, ISO, and other styles
38

Lauvaux, Thomas, Liza I. Díaz-Isaac, Marc Bocquet, and Nicolas Bousserez. "Diagnosing spatial error structures in CO<sub>2</sub> mole fractions and XCO<sub>2</sub> column mole fractions from atmospheric transport." Atmospheric Chemistry and Physics 19, no. 18 (September 26, 2019): 12007–24. http://dx.doi.org/10.5194/acp-19-12007-2019.

Full text
Abstract:
Abstract. Atmospheric inversions inform us about the magnitude and variations of greenhouse gas (GHG) sources and sinks from global to local scales. Deployment of observing systems such as spaceborne sensors and ground-based instruments distributed around the globe has started to offer an unprecedented amount of information to estimate surface exchanges of GHG at finer spatial and temporal scales. However, all inversion methods still rely on imperfect atmospheric transport models whose error structures directly affect the inverse estimates of GHG fluxes. The impact of spatial error structures on the retrieved fluxes increases concurrently with the density of the available measurements. In this study, we diagnose the spatial structures due to transport model errors affecting modeled in situ carbon dioxide (CO2) mole fractions and total-column dry air mole fractions of CO2 (XCO2). We implement a cost-effective filtering technique recently developed in the meteorological data assimilation community to describe spatial error structures using a small-size ensemble. This technique can enable ensemble-based error analysis for multiyear inversions of sources and sinks. The removal of noisy structures due to sampling errors in our small-size ensembles is evaluated by comparison with larger-size ensembles. A second filtering approach for error covariances is proposed (Wiener filter), producing similar results over the 1-month simulation period compared to a Schur filter. Based on a comparison to a reference 25-member calibrated ensemble, we demonstrate that error variances and spatial error correlation structures are recoverable from small-size ensembles of about 8 to 10 members, improving the representation of transport errors in mesoscale inversions of CO2 fluxes. Moreover, error variances of in situ near-surface and free-tropospheric CO2 mole fractions differ significantly from total-column XCO2 error variances.
We conclude that error variances for remote-sensing observations need to be quantified independently of in situ CO2 mole fractions due to the complexity of spatial error structures at different altitudes. However, we show the potential use of meteorological error structures such as the mean horizontal wind speed, directly available from ensemble prediction systems, to approximate spatial error correlations of in situ CO2 mole fractions, with similarities in seasonal variations and characteristic error length scales.
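The Schur-filter idea referenced above can be sketched as an element-wise product of a small-ensemble sample covariance with a distance-dependent localization function, which damps spurious long-range covariances while leaving the diagonal error variances untouched. The 1-D fields and length scales below are synthetic stand-ins:

```python
import math
import random

random.seed(1)
n_grid, n_members, corr_length = 30, 8, 3.0

def sample_field():
    """Smooth random 1-D field: white noise convolved with a Gaussian kernel."""
    noise = [random.gauss(0.0, 1.0) for _ in range(n_grid)]
    return [sum(noise[j] * math.exp(-0.5 * ((i - j) / corr_length) ** 2)
                for j in range(n_grid)) for i in range(n_grid)]

# Small ensemble and its sample error covariance.
members = [sample_field() for _ in range(n_members)]
means = [sum(m[i] for m in members) / n_members for i in range(n_grid)]
cov = [[sum((m[i] - means[i]) * (m[j] - means[j]) for m in members) / (n_members - 1)
        for j in range(n_grid)] for i in range(n_grid)]

# Schur (element-wise) product with a Gaussian localization function.
loc_length = 6.0
filtered = [[cov[i][j] * math.exp(-0.5 * ((i - j) / loc_length) ** 2)
             for j in range(n_grid)] for i in range(n_grid)]
print(round(cov[0][n_grid - 1], 4), round(filtered[0][n_grid - 1], 6))
```

Far-apart grid points, whose sample covariance with 8 members is dominated by sampling noise, are driven toward zero; the variances on the diagonal are multiplied by one and so preserved.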
APA, Harvard, Vancouver, ISO, and other styles
39

Shinei, Chikara, Yuta Masuyama, Masashi Miyakawa, Hiroshi Abe, Shuya Ishii, Seiichi Saiki, Shinobu Onoda, Takashi Taniguchi, Takeshi Ohshima, and Tokuyuki Teraji. "Nitrogen related paramagnetic defects: Decoherence source of ensemble of NV center." Journal of Applied Physics 132, no. 21 (December 7, 2022): 214402. http://dx.doi.org/10.1063/5.0103332.

Full text
Abstract:
We investigated spin-echo coherence times T2 of negatively charged nitrogen vacancy center (NV−) ensembles in single-crystalline diamond synthesized by either the high-pressure and high-temperature or the chemical vapor deposition method. This study specifically examined the magnetic dipole–dipole interaction (DDI) from the various electronic spin baths, which are the source of T2 decoherence. Diamond samples with NV− center concentration [NV−] comparable to those of neutral substitutional nitrogen concentration [Ns0] were used for DDI estimation. Results show that the T2 of the ensemble NV− center decreased in inverse proportion to the concentration of nitrogen-related paramagnetic defects [NPM], being the sum of [Ns0], [NV−], and [NV0], which is a neutrally charged state NV center. This inversely proportional relation between T2 and [NPM] indicates that the nitrogen-related paramagnetic defects of three kinds are the main decoherence source of the ensemble NV− center in the single-crystalline diamond. We found that the DDI coefficient of the NVH− center was significantly smaller than that of Ns0, the NV0 center, or the NV− center. We ascertained the DDI coefficient of the NV− center through experimentation using a linear summation of the decoherence rates of each nitrogen-related paramagnetic defect. The obtained value of 89 μs ppm for this coefficient corresponds well to the value estimated from the relation between DDI coefficient and spin multiplicity.
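The linear-summation model in this abstract, 1/T2 = Σᵢ [Xᵢ]/Aᵢ with Aᵢ the DDI coefficient of defect species i, can be sketched as follows. Only the 89 μs ppm coefficient for the NV− center is quoted from the paper; the other coefficients and all concentrations are placeholders:

```python
def t2_us(conc_ppm, ddi_us_ppm):
    """T2 (microseconds) from the linear rate sum 1/T2 = sum_i [X_i] / A_i."""
    rate = sum(conc_ppm[s] / ddi_us_ppm[s] for s in conc_ppm)  # rates add linearly
    return 1.0 / rate

coeffs = {"Ns0": 100.0, "NV0": 95.0, "NV-": 89.0}  # A_i in us*ppm; only NV- from the paper
conc = {"Ns0": 5.0, "NV0": 1.0, "NV-": 2.0}        # hypothetical concentrations, ppm

t2 = t2_us(conc, coeffs)
half = t2_us({s: 2 * c for s, c in conc.items()}, coeffs)
# Doubling every defect concentration halves T2 (inverse proportionality).
print(round(t2, 2), round(half / t2, 2))
```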
APA, Harvard, Vancouver, ISO, and other styles
40

Davolio, S., M. M. Miglietta, T. Diomede, C. Marsigli, and A. Montani. "A flood episode in northern Italy: multi-model and single-model mesoscale meteorological ensembles for hydrological predictions." Hydrology and Earth System Sciences 17, no. 6 (June 5, 2013): 2107–20. http://dx.doi.org/10.5194/hess-17-2107-2013.

Full text
Abstract:
Abstract. Numerical weather prediction models can be coupled with hydrological models to generate streamflow forecasts. Several ensemble approaches have been recently developed in order to take into account the different sources of errors and provide probabilistic forecasts feeding a flood forecasting system. Within this framework, the present study aims at comparing two high-resolution limited-area meteorological ensembles, covering short and medium range, obtained via different methodologies, but implemented with similar number of members, horizontal resolution (about 7 km), and driving global ensemble prediction system. The former is a multi-model ensemble, based on three mesoscale models (BOLAM, COSMO, and WRF), while the latter, following a single-model approach, is the operational ensemble forecasting system developed within the COSMO consortium, COSMO-LEPS (limited-area ensemble prediction system). The meteorological models are coupled with a distributed rainfall-runoff model (TOPKAPI) to simulate the discharge of the Reno River (northern Italy), for a recent severe weather episode affecting northern Apennines. The evaluation of the ensemble systems is performed both from a meteorological perspective over northern Italy and in terms of discharge prediction over the Reno River basin during two periods of heavy precipitation between 29 November and 2 December 2008. For each period, ensemble performance has been compared at two different forecast ranges. It is found that, for the intercomparison undertaken in this specific study, both mesoscale model ensembles outperform the global ensemble for application at basin scale. Horizontal resolution is found to play a relevant role in modulating the precipitation distribution. Moreover, the multi-model ensemble provides a better indication concerning the occurrence, intensity and timing of the two observed discharge peaks, with respect to COSMO-LEPS. 
This seems to be ascribable to the different behaviour of the involved meteorological models. Finally, a different behaviour comes out at different forecast ranges. For short ranges, the impact of boundary conditions is weaker and the spread can be mainly attributed to the different characteristics of the models. At longer forecast ranges, the similar behaviour of the multi-model members forced by the same large-scale conditions indicates that the systems are governed mainly by the boundary conditions, although the different limited area models' characteristics may still have a non-negligible impact.
APA, Harvard, Vancouver, ISO, and other styles
41

Davolio, S., M. M. Miglietta, T. Diomede, C. Marsigli, and A. Montani. "A flood episode in Northern Italy: multi-model and single-model mesoscale meteorological ensembles for hydrological predictions." Hydrology and Earth System Sciences Discussions 9, no. 12 (December 4, 2012): 13415–50. http://dx.doi.org/10.5194/hessd-9-13415-2012.

Full text
Abstract:
Abstract. Numerical weather prediction models can be coupled with hydrological models to generate streamflow forecasts. Several ensemble approaches have been recently developed in order to take into account the different sources of errors and provide probabilistic forecasts feeding a flood forecasting system. Within this framework, the present study aims at comparing two high-resolution limited-area meteorological ensembles, covering short and medium range, obtained via different methodologies, but implemented with similar number of members, horizontal resolution (about 7 km), and driving global ensemble prediction system. The former is a multi-model ensemble, based on three mesoscale models (BOLAM, COSMO, and WRF), while the latter, following a single-model approach, is the operational ensemble forecasting system developed within the COSMO consortium, COSMO-LEPS (Limited-area Ensemble Prediction System). The meteorological models are coupled with a distributed rainfall-runoff model (TOPKAPI) to simulate the discharge of the Reno River (Northern Italy), for a recent severe weather episode affecting Northern Apennines. The evaluation of the ensemble systems is performed both from a meteorological perspective over the entire Northern Italy and in terms of discharge prediction over the Reno River basin during two periods of heavy precipitation between 29 November and 2 December 2008. For each period, ensemble performance has been compared at two different forecast ranges. It is found that both mesoscale model ensembles remarkably outperform the global ensemble for application at basin scale as the horizontal resolution plays a relevant role in modulating the precipitation distribution. Moreover, the multi-model ensemble provides more informative probabilistic predictions with respect to COSMO-LEPS, since it is characterized by a larger spread especially at short lead times. 
A thorough analysis of the multi-model results shows that this behaviour is due to the different characteristics of the involved meteorological models and represents the added value of the multi-model approach. Finally, a different behaviour comes out at different forecast ranges. For short ranges, the impact of boundary conditions is weaker and the spread can be mainly attributed to the different characteristics of the models. At longer forecast ranges, the similar behaviour of the multi-model members, forced by the same large scale conditions, indicates that the systems are governed mainly by the large scale boundary conditions.
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Chelsie Chia-Hsin, Christina W. Tsai, and Yu-Ying Huang. "Development of a Backward–Forward Stochastic Particle Tracking Model for Identification of Probable Sedimentation Sources in Open Channel Flow." Mathematics 9, no. 11 (May 31, 2021): 1263. http://dx.doi.org/10.3390/math9111263.

Full text
Abstract:
As reservoirs are subject to sedimentation, a dam gradually loses its ability to store water. The identification of the sources of deposited sediments is an effective and efficient means of tackling sedimentation problems. A state-of-the-art Lagrangian stochastic particle tracking model with backward–forward tracking methods is applied to identify the probable source regions of deposited sediments. An influence function is introduced into the models to represent the influence of a particular upstream area on the sediment deposition area. One can then verify if a specific area might be a probable source by cross-checking the values of influence functions calculated backward and forward, respectively. In these models, the probable sources of the deposited sediments are considered to be in a grid instead of at a point for derivation of the values of influence functions. The sediment concentrations in upstream regions must be known a priori to determine the influence functions. In addition, the accuracy of the different types of diffusivity at the water surface is discussed in the study. According to the results of the case study of source identification, the regions with higher sediment concentrations computed by only backward simulations do not necessarily imply a higher likelihood of sources. It is also shown from the ensemble results that when the ensemble mean of the concentration is higher, the ensemble standard deviation of the concentration is also increased.
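The backward-forward cross-check can be illustrated with a crude 1-D stochastic particle-tracking toy: influence values are estimated as visit frequencies in both directions, and a grid cell is flagged as a probable source only where both are high. The drift, diffusion, and grid are invented and far simpler than the paper's model:

```python
import random

random.seed(2)
n_cells, n_particles, n_steps = 10, 400, 30
drift, sigma = 0.3, 0.5  # mean transport per step and random-walk amplitude (toy values)

def track(start_cell, direction):
    """Final cell of one particle; direction=+1 tracks forward, -1 backward."""
    x = start_cell + 0.5
    for _ in range(n_steps):
        x += direction * drift + random.gauss(0.0, sigma)
        x = min(max(x, 0.0), n_cells - 1e-9)  # keep particles inside the toy domain
    return int(x)

deposit_cell = 8
# Forward influence: fraction of particles released from each candidate cell
# that land in the deposition cell.
forward = [sum(track(c, +1) == deposit_cell for _ in range(n_particles)) / n_particles
           for c in range(n_cells)]
# Backward influence: where particles released at the deposit arrive when tracked upstream.
hits = [0] * n_cells
for _ in range(n_particles):
    hits[track(deposit_cell, -1)] += 1
backward = [h / n_particles for h in hits]

# Cross-check: probable sources need high influence in both directions.
probable = [c for c in range(n_cells) if forward[c] > 0.05 and backward[c] > 0.05]
print(probable)
```

This mirrors the paper's point that a high backward value alone does not establish a source; the forward value must corroborate it.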
APA, Harvard, Vancouver, ISO, and other styles
43

Wootten, A., A. Terando, B. J. Reich, R. P. Boyles, and F. Semazzi. "Characterizing Sources of Uncertainty from Global Climate Models and Downscaling Techniques." Journal of Applied Meteorology and Climatology 56, no. 12 (December 2017): 3245–62. http://dx.doi.org/10.1175/jamc-d-17-0087.1.

Full text
Abstract:
Abstract. In recent years, climate model experiments have been increasingly oriented toward providing information that can support local and regional adaptation to the expected impacts of anthropogenic climate change. This shift has magnified the importance of downscaling as a means to translate coarse-scale global climate model (GCM) output to a finer scale that more closely matches the scale of interest. Applying this technique, however, introduces a new source of uncertainty into any resulting climate model ensemble. Here a method is presented, on the basis of a previously established variance decomposition method, to partition and quantify the uncertainty in climate model ensembles that is attributable to downscaling. The method is applied to the southeastern United States using five downscaled datasets that represent both statistical and dynamical downscaling techniques. The combined ensemble is highly fragmented, in that only a small portion of the complete set of downscaled GCMs and emission scenarios is typically available. The results indicate that the uncertainty attributable to downscaling approaches ~20% for large areas of the Southeast for precipitation and ~30% for extreme heat days (>35°C) in the Appalachian Mountains. However, attributable quantities are significantly lower for time periods when the full ensemble is considered but only a subsample of all models is available, suggesting that overconfidence could be a serious problem in studies that employ a single set of downscaled GCMs. This article concludes with recommendations to advance the design of climate model experiments so that the uncertainty that accrues when downscaling is employed is more fully and systematically considered.
APA, Harvard, Vancouver, ISO, and other styles
44

Torn, Ryan D. "Evaluation of Atmosphere and Ocean Initial Condition Uncertainty and Stochastic Exchange Coefficients on Ensemble Tropical Cyclone Intensity Forecasts." Monthly Weather Review 144, no. 9 (September 2016): 3487–506. http://dx.doi.org/10.1175/mwr-d-16-0108.1.

Full text
Abstract:
Tropical cyclone (TC) intensity forecasts are impacted by errors in atmosphere and ocean initial conditions and the model formulation, which motivates using an ensemble approach. This study evaluates the impact of uncertainty in atmospheric and oceanic initial conditions, as well as stochastic representations of the drag Cd and enthalpy Ck exchange coefficients on ensemble Advanced Hurricane WRF (AHW) TC intensity forecasts of multiple Atlantic TCs from 2008 to 2011. Each ensemble experiment is characterized by different combinations of either deterministic or ensemble atmospheric and/or oceanic initial conditions, as well as fixed or stochastic representations of Cd or Ck. Among those experiments with a single uncertainty source, atmospheric uncertainty produces the largest standard deviation in TC intensity. While ocean uncertainty leads to continuous growth in ensemble standard deviation, the ensemble standard deviation in the experiments with Cd and Ck uncertainty levels off by 48 h. Combining atmospheric and oceanic uncertainty leads to larger intensity standard deviation than atmosphere or ocean uncertainty alone and preferentially adds variability outside of the TC core. By contrast, combining Cd or Ck uncertainty with any other source leads to negligible increases in standard deviation, which is mainly due to the lack of spatial correlation in the exchange coefficient perturbations. All of the ensemble experiments are deficient in ensemble standard deviation; however, the experiments with combinations of uncertainty sources generally have an ensemble standard deviation closer to the ensemble-mean errors.
APA, Harvard, Vancouver, ISO, and other styles
45

Meng, Zhiyong, and Fuqing Zhang. "Tests of an Ensemble Kalman Filter for Mesoscale and Regional-Scale Data Assimilation. Part III: Comparison with 3DVAR in a Real-Data Case Study." Monthly Weather Review 136, no. 2 (February 1, 2008): 522–40. http://dx.doi.org/10.1175/2007mwr2106.1.

Full text
Abstract:
Abstract The feasibility of using an ensemble Kalman filter (EnKF) for mesoscale and regional-scale data assimilation has been demonstrated in the authors’ recent studies via observing system simulation experiments (OSSEs) both under a perfect-model assumption and in the presence of significant model error. The current study extends the EnKF to assimilate real-data observations for a warm-season mesoscale convective vortex (MCV) event on 10–12 June 2003. Direct comparison between the EnKF and a three-dimensional variational data assimilation (3DVAR) system, both implemented in the Weather Research and Forecasting model (WRF), is carried out. It is found that the EnKF consistently performs better than the 3DVAR method by assimilating either individual or multiple data sources (i.e., sounding, surface, and wind profiler) for this MCV event. Background error covariance plays an important role in the performance of both the EnKF and the 3DVAR system. Proper covariance inflation and the use of different combinations of physical parameterization schemes in different ensemble members (the so-called multischeme ensemble) can significantly improve the EnKF performance. The 3DVAR system can benefit substantially from using short-term ensembles to improve the prior estimate (with the ensemble mean). Noticeable improvement is also achieved by including some flow dependence in the background error covariance of 3DVAR.
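For readers unfamiliar with the EnKF, a stochastic (perturbed-observation) analysis step for a single observed scalar shows how the ensemble-estimated background covariance sets the weight given to the observation; the numbers are toy values, not the WRF configuration of the paper:

```python
import random

random.seed(3)

def enkf_update(ensemble, obs, obs_err_var):
    """One perturbed-observation EnKF analysis step for a directly observed scalar."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    bg_var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # flow-dependent background variance
    gain = bg_var / (bg_var + obs_err_var)                     # scalar Kalman gain (H = 1)
    # Each member assimilates the observation plus an independent perturbation.
    return [x + gain * (obs + random.gauss(0.0, obs_err_var ** 0.5) - x)
            for x in ensemble]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

prior = [random.gauss(20.0, 2.0) for _ in range(40)]  # hypothetical background temperatures
analysis = enkf_update(prior, obs=23.0, obs_err_var=1.0)
print(round(var(prior), 2), round(var(analysis), 2))
```

The flow-dependent background variance estimated from the ensemble is exactly what 3DVAR's static covariance lacks, which is the comparison at the heart of this paper.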
APA, Harvard, Vancouver, ISO, and other styles
46

Gallippi, Caterina M., and Gregg E. Trahey. "Adaptive Clutter Filtering via Blind Source Separation for Two-Dimensional Ultrasonic Blood Velocity Measurement." Ultrasonic Imaging 24, no. 4 (October 2002): 193–214. http://dx.doi.org/10.1177/016173460202400401.

Full text
Abstract:
A method for adaptive clutter rejection via blind source separation (BSS) using principal and independent component analyses is presented in application to blood velocity measurement in the carotid artery. In particular, the filtering method's efficacy for eliminating clutter and preserving lateral blood flow signal components is presented. The performance of IIR filters is compromised by short data ensembles (10 to 20 temporal samples) as implemented for color-flow and high frame-rate imaging due to initialization requirements. Further, the ultrasonic imaging system's transfer function maps axial wall and lateral blood motion to overlapping spectra. As such, frequency domain-based approaches to wall filtering are ineffective for distinguishing wall from blood motion signals. Rather than operating in the frequency domain, BSS performs clutter rejection by decomposing the input data ensemble into N constitutive source signals in time, where N is the ensemble length. Source signal energy coupled with respective signal depth and time course profiles reveals which source signals correspond to blood, noise and clutter components. Clutter components may then be removed without disruption of lateral blood flow information needed for two-dimensional blood velocity measurement. A simplistic data simulation is employed to offer an intuitive understanding of BSS methods for signal separation. The adaptive BSS filter is further demonstrated using a Field II simulation of blood flow through the carotid artery including tissue motion. BSS clutter filter performance is compared to the performance of FIR, IIR and polynomial regression clutter filters. Finally, the filter is employed for clinical application using a Siemens Elegra scanner, carotid artery data with lateral blood flow collected from healthy volunteers, and Speckle Tracking; velocity magnitude and angle profiles are shown.
Once again, the BSS clutter filter is contrasted to FIR, IIR and polynomial regression clutter filters using clinical examples. Velocities computed with Speckle Tracking after BSS wall filtering are highest in the center of the artery and diminish to low velocities near the vessel walls, with velocity magnitudes consistent with physiological expectations. These results demonstrate that the BSS adaptive filter sufficiently suppresses wall motion signal for clinical lateral blood velocity measurement using data ensembles suitable for color-flow and high frame-rate imaging.
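The PCA stage of a BSS-style clutter filter can be sketched as: find the dominant component of the slow-time ensemble (here via power iteration, standing in for a full SVD), attribute it to high-energy, slowly varying wall clutter, and project it out at every depth. The synthetic signals below are stand-ins for real ultrasound data:

```python
import math

# depths x slow-time ensemble; the clutter shares one time course across depth,
# so a single dominant component captures it.
n_depth, ensemble_len = 6, 16
clutter = [[(5.0 + d) * math.cos(0.1 * t) for t in range(ensemble_len)]
           for d in range(n_depth)]
blood = [[0.5 * math.sin(2.0 * t + d) for t in range(ensemble_len)]
         for d in range(n_depth)]
data = [[clutter[d][t] + blood[d][t] for t in range(ensemble_len)]
        for d in range(n_depth)]

def first_component(rows, iters=100):
    """Dominant slow-time singular vector via power iteration (stand-in for a full SVD)."""
    v = [1.0] * len(rows[0])
    for _ in range(iters):
        u = [sum(r[t] * v[t] for t in range(len(v))) for r in rows]
        v = [sum(rows[d][t] * u[d] for d in range(len(rows))) for t in range(len(v))]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v

def energy(rows):
    return sum(x * x for r in rows for x in r)

v = first_component(data)
filtered = []
for row in data:
    proj = sum(row[t] * v[t] for t in range(ensemble_len))  # clutter amplitude at this depth
    filtered.append([row[t] - proj * v[t] for t in range(ensemble_len)])
print(round(energy(filtered) / energy(data), 3))
```

Because the removal acts on an energy-ranked component rather than a frequency band, the weak, faster blood signal survives even though its spectrum overlaps the clutter's.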
APA, Harvard, Vancouver, ISO, and other styles
47

SCASE, M. M., C. P. CAULFIELD, and S. B. DALZIEL. "Temporal variation of non-ideal plumes with sudden reductions in buoyancy flux." Journal of Fluid Mechanics 600 (March 26, 2008): 181–99. http://dx.doi.org/10.1017/s0022112008000487.

Full text
Abstract:
We model the behaviour of isolated sources of finite radius and volume flux which experience a sudden drop in buoyancy flux, generalizing the previous theory presented in Scase et al. (J. Fluid Mech., vol. 563, 2006, p. 443). In particular, we consider the problem of the source of an established plume suddenly increasing in area to provide a much wider plume source. Our calculations predict that, while our model remains applicable, the plume never fully pinches off into individual rising thermals. We report the results of a large number of experiments, which provide an ensemble to compare to theoretical predictions. We find that provided the source conditions are weakened in such a way that the well-known entrainment assumption remains valid, the established plume is not observed to pinch off into individual thermals. Further, not only is pinch-off not observed in the ensemble of experiments, it cannot be observed in any of the individual experiments. We consider both the temporal evolution of the plume profile and a concentration of passive tracer, and show that our model predictions compare well with our experimental observations.
APA, Harvard, Vancouver, ISO, and other styles
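The unsteady plume model of the entry above builds on the classic steady top-hat (Morton–Taylor–Turner) plume equations. A minimal numerical sketch of that steady backbone, assuming a standard entrainment coefficient and variable names of my own choosing, shows the qualitative point: reducing the source buoyancy flux weakens the far-field velocity without disconnecting the plume.

```python
import numpy as np

ALPHA = 0.1  # top-hat entrainment coefficient (typical value; an assumption)

def integrate_plume(Q0, M0, F0, z_max=5.0, dz=1e-3):
    """Euler integration of the steady top-hat plume equations
       dQ/dz = 2*alpha*sqrt(M), dM/dz = F*Q/M, dF/dz = 0 (unstratified),
    with Q the volume flux, M the momentum flux, F the buoyancy flux."""
    z = np.arange(0.0, z_max, dz)
    Q = np.empty_like(z)
    M = np.empty_like(z)
    Q[0], M[0] = Q0, M0
    for i in range(len(z) - 1):
        Q[i + 1] = Q[i] + dz * 2.0 * ALPHA * np.sqrt(M[i])
        M[i + 1] = M[i] + dz * F0 * Q[i] / M[i]
    b = Q / np.sqrt(M)   # plume radius
    w = M / Q            # top-hat vertical velocity
    return z, b, w

# Halving the source buoyancy flux weakens the far-field velocity, but in
# this steady picture the plume stays connected: b(z) remains positive.
z, b_strong, w_strong = integrate_plume(Q0=0.01, M0=0.01, F0=1.0)
_, b_weak, w_weak = integrate_plume(Q0=0.01, M0=0.01, F0=0.5)
```

The paper's contribution is the time-dependent generalization of these equations; the sketch above is only the steady limit they reduce to.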
48

Miron, Marius, Julio J. Carabias-Orti, Juan J. Bosch, Emilia Gómez, and Jordi Janer. "Score-Informed Source Separation for Multichannel Orchestral Recordings." Journal of Electrical and Computer Engineering 2016 (2016): 1–19. http://dx.doi.org/10.1155/2016/8363507.

Full text
Abstract:
This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores, so a reliable separation requires a good alignment of the score with the audio of the performance. To that end, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a highly reverberant image, large ensembles with rich polyphony, and a large variety of instruments recorded with a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with distant-microphone orchestral recordings. We then propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments in an orchestral ensemble. The evaluation analyzes how important parts of the separation framework interact to affect the quality of separation. Results show that we are able to align the original score with the audio of the performance and to separate the sources corresponding to the instrument sections.
APA, Harvard, Vancouver, ISO, and other styles
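One common ingredient of score-informed separation, sketched generically below (this is not the authors' framework), is using aligned note activations from the score to gate harmonic templates and build Wiener-style time-frequency masks; the source names, fixed fundamentals, and comb shape here are all illustrative assumptions.

```python
import numpy as np

def score_informed_masks(mix_mag, gates, f0s, sr, n_fft, n_harm=5, width=2):
    """Wiener-style masks gated by score-derived note activations.

    mix_mag: (n_bins, n_frames) magnitude spectrogram of the mixture.
    gates:   dict source -> (n_frames,) 0/1 activation from the aligned score.
    f0s:     dict source -> fundamental frequency in Hz (fixed per source here).
    """
    n_bins, _ = mix_mag.shape
    models = {}
    for src, gate in gates.items():
        comb = np.zeros(n_bins)
        for h in range(1, n_harm + 1):            # harmonic comb template
            k = int(round(h * f0s[src] * n_fft / sr))
            if k < n_bins:
                comb[max(0, k - width):k + width + 1] = 1.0 / h
        models[src] = np.outer(comb, gate) * mix_mag + 1e-12
    total = sum(models.values())
    return {src: m / total for src, m in models.items()}

# Toy mixture: each source contributes energy at its harmonics while active.
sr, n_fft, frames = 8000, 1024, 10
n_bins = n_fft // 2 + 1
gates = {"violin": np.array([1.0] * 5 + [0.0] * 5),
         "cello": np.array([0.0] * 5 + [1.0] * 5)}
f0s = {"violin": 440.0, "cello": 110.0}
mix = np.zeros((n_bins, frames))
for src in gates:
    for h in range(1, 4):
        mix[int(round(h * f0s[src] * n_fft / sr))] += gates[src]
masks = score_informed_masks(mix, gates, f0s, sr, n_fft)
```

Applying each mask to the mixture spectrogram yields the per-source estimates; the score gating resolves harmonic collisions (here the cello's fourth harmonic coincides with the violin's fundamental) whenever the score says only one source is active.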
49

Landel, Julien R., C. P. Caulfield, and Andrew W. Woods. "Streamwise dispersion and mixing in quasi-two-dimensional steady turbulent jets." Journal of Fluid Mechanics 711 (September 12, 2012): 212–58. http://dx.doi.org/10.1017/jfm.2012.388.

Full text
Abstract:
We investigate experimentally and theoretically the streamwise transport and dispersion properties of steady quasi-two-dimensional plane turbulent jets discharged vertically from a slot of width $d$ into a fluid confined between two relatively close rigid boundaries with gap $W \sim O(d)$. We model the evolution in time and space of the concentration of passive tracers released in these jets using a one-dimensional time-dependent effective advection–diffusion equation. We make a mixing length hypothesis to model the streamwise turbulent eddy diffusivity such that it scales like $b(z)\,\overline{w}_m(z)$, where $z$ is the streamwise coordinate, $b$ is the jet width and $\overline{w}_m$ is the maximum time-averaged vertical velocity. Under these assumptions, the effective advection–diffusion equation for $\phi(z,t)$, the horizontal integral of the ensemble-averaged concentration, is of the form $\partial_t \phi + K_a M_0^{1/2}\, \partial_z\!\left(\phi/z^{1/2}\right) = K_d M_0^{1/2}\, \partial_z\!\left(z^{1/2}\, \partial_z \phi\right)$, where $t$ is time, $K_a$ (the advection parameter) and $K_d$ (the dispersion parameter) are empirical dimensionless parameters which quantify the importance of advection and dispersion, respectively, and $M_0$ is the source momentum flux. We find analytical solutions to this equation for $\phi$ in the cases of a constant-flux release and an instantaneous finite-volume release. We also give an integral formulation for the more general case of a time-dependent release, which we solve analytically when tracers are released at a constant flux over a finite period of time. From our experimental results, whose concentration distributions agree with the model, we find that $K_a = 1.65 \pm 0.10$ and $K_d = 0.09 \pm 0.02$, for both finite-volume releases and constant-flux releases using either dye or virtual passive tracers. The experiments also show that streamwise dispersion increases in time as $t^{2/3}$. As a result, in the case of finite-volume releases more than 50% of the total volume of tracers is transported ahead of the purely advective front (i.e. the front location of the tracer distribution if all dispersion mechanisms are ignored, considering a 'top-hat' mean velocity profile in the jet); and in the case of constant-flux releases, at each instant in time, approximately 10% of the total volume of tracers is transported ahead of the advective front.
APA, Harvard, Vancouver, ISO, and other styles
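The effective advection–diffusion equation quoted in the abstract above can also be integrated numerically. The sketch below uses a simple explicit upwind/central finite-difference scheme with the reported values of the advection and dispersion parameters; the grid, time step, release shape, and units are my own choices, not the paper's.

```python
import numpy as np

K_A, K_D = 1.65, 0.09   # advection/dispersion parameters from the abstract
M0 = 1.0                # source momentum flux (arbitrary units)

def step(phi, z, dt):
    """One explicit step of
       d(phi)/dt + K_A*sqrt(M0)*d/dz(phi/z^(1/2))
                 = K_D*sqrt(M0)*d/dz(z^(1/2)*d(phi)/dz),
    using upwind advection (flow is downstream) and central diffusion."""
    dz = z[1] - z[0]
    flux = K_A * np.sqrt(M0) * phi / np.sqrt(z)
    adv = np.zeros_like(phi)
    adv[1:] = (flux[1:] - flux[:-1]) / dz
    zh = np.sqrt(0.5 * (z[:-1] + z[1:]))          # sqrt(z) at cell faces
    grad = (phi[1:] - phi[:-1]) / dz
    diff = np.zeros_like(phi)
    diff[1:-1] = K_D * np.sqrt(M0) * (zh[1:] * grad[1:] - zh[:-1] * grad[:-1]) / dz
    return phi + dt * (diff - adv)

z = np.linspace(0.1, 20.0, 400)
dz = z[1] - z[0]
phi = np.exp(-(((z - 2.0) / 0.2) ** 2))           # finite-volume release
mass0 = phi.sum() * dz                            # initial tracer "mass"
for _ in range(2000):                             # integrate to t = 2
    phi = step(phi, z, dt=1e-3)
```

As the abstract describes, the pulse is advected downstream while spreading; the flux-difference form keeps the total tracer content essentially conserved away from the boundaries.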
50

Beg, Md Nazmul Azim, Jorge Leandro, Punit Bhola, Iris Konnerth, Winfried Willems, Rita F. Carvalho, and Markus Disse. "Discharge Interval method for uncertain flood forecasts using a flood model chain: city of Kulmbach." Journal of Hydroinformatics 21, no. 5 (July 18, 2019): 925–44. http://dx.doi.org/10.2166/hydro.2019.131.

Full text
Abstract:
Real-time flood forecasting can help authorities provide reliable warnings to the public. Ensemble prediction systems (EPS) have been progressively adopted for operational flood forecasting by European hydrometeorological agencies in recent years. Ensemble forecasting, however, is non-deterministic, so sources of uncertainty need to be considered before issuing forecasts. In this study, a new methodology for flood forecasting, named the Discharge Interval method, is proposed. The method uses hindcast data from at least one historical event, run as several ensembles, and selects the pair of best-matching ensemble discharge results for each discharge level. It then uses the parameter settings of the chosen ensemble discharge pair to forecast a given flood discharge level. The methodology was implemented within the FloodEvac tool, which handles calibration/validation of the hydrological model (LARSIM) and produces real-time flood forecasts with the associated uncertainty of the flood discharges. The proposed methodology is computationally efficient and suitable for real-time forecasts with uncertainty. The results of the Discharge Interval method were found comparable to the 90th-percentile forecasted discharge range obtained with the Ensemble method.
APA, Harvard, Vancouver, ISO, and other styles
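The member-selection step of the Discharge Interval method can be sketched as follows. This is a hypothetical reading of the abstract (the paper's actual selection criteria may differ): for a target discharge level, pick the hindcast ensemble members whose peak discharges bracket it most tightly from below and above, then reuse their parameter settings for the forecast.

```python
import numpy as np

def select_bracketing_members(peak_discharges, target):
    """Return indices of the ensemble members whose hindcast peak
    discharges most tightly bracket the target discharge level."""
    peaks = np.asarray(peak_discharges, dtype=float)
    below = np.where(peaks <= target)[0]
    above = np.where(peaks >= target)[0]
    if below.size == 0 or above.size == 0:
        raise ValueError("target lies outside the hindcast ensemble range")
    lo = below[np.argmax(peaks[below])]   # closest member from below
    hi = above[np.argmin(peaks[above])]   # closest member from above
    return int(lo), int(hi)

# Hindcast ensemble peak discharges (m^3/s), e.g. from perturbed model runs:
peaks = [80.0, 95.0, 110.0, 130.0, 160.0]
lo, hi = select_bracketing_members(peaks, target=120.0)
```

Only the two selected members need to be re-run in forecast mode, which is why the method is cheaper than carrying the full ensemble forward.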
