Journal articles on the topic "Flux-metric methods"

To see the other types of publications on this topic, follow the link: Flux-metric methods.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 31 journal articles for your research on the topic "Flux-metric methods".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its online abstract whenever the corresponding details are available in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Anderson, Bruce T., Guido Salvucci, Alex C. Ruane, John O. Roads, and Masao Kanamitsu. "A New Metric for Estimating the Influence of Evaporation on Seasonal Precipitation Rates." Journal of Hydrometeorology 9, no. 3 (June 1, 2008): 576–88. http://dx.doi.org/10.1175/2007jhm968.1.

Full text
Abstract:
Abstract. The objective of this paper is to introduce a diagnostic metric—termed the local-convergence ratio—that can be used to quantify the contribution of evaporation (and transpiration) to the atmospheric hydrologic cycle, and precipitation in particular, over a given region. Previous research into regional moisture (or precipitation) recycling has produced numerous methods for estimating the contributions of “local” (i.e., evaporated) moisture to climatological precipitation and its variations. In general, these metrics quantify the evaporative contribution to the mass of precipitable water within an atmospheric column by comparing the vertically integrated atmospheric fluxes of moisture across a region with the fluxes via evaporation. Here a new metric is proposed, based on the atmospheric moisture tendency equation, which quantifies the evaporative contribution to the rate of precipitation by comparing evaporative convergence into the column with large-scale moisture-flux convergence. Using self-consistent, model-derived estimates of the moisture-flux fields and the atmospheric moisture tendency terms, the authors compare estimates of the flux-based moisture-recycling ratio with the newly introduced local-convergence ratio. Differences between the two ratios indicate that they can be considered complementary, but independent, descriptors of the atmospheric hydroclimatology for a given region.
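The contrast between the two ratios can be sketched numerically. The formulas below are illustrative assumptions reconstructed from the abstract's description (evaporation compared against total moisture influx versus against large-scale moisture-flux convergence); they are not the paper's exact definitions.

```python
def recycling_ratio(evap, influx):
    """Flux-based recycling ratio: share of column moisture supplied by
    local evaporation versus total lateral moisture influx.
    Illustrative form, not the paper's exact definition."""
    return evap / (evap + influx)

def local_convergence_ratio(evap, mfc):
    """Local-convergence ratio: evaporative convergence relative to the
    total (evaporative + large-scale) moisture-flux convergence.
    Illustrative form, not the paper's exact definition."""
    return evap / (evap + mfc)

# Hypothetical values: evaporation 3 mm/day, lateral influx 9 mm/day,
# large-scale moisture-flux convergence 2 mm/day.
print(recycling_ratio(3.0, 9.0))
print(local_convergence_ratio(3.0, 2.0))
```

The example shows how the two ratios can diverge for the same region, which is why the abstract calls them complementary but independent descriptors.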
2

Mõistus, Marta, and Mait Lang. "Leaf area index mapping with optical methods and allometric models in SMEAR flux tower footprint at Järvselja, Estonia." Forestry Studies 63, no. 1 (December 1, 2015): 85–99. http://dx.doi.org/10.1515/fsmu-2015-0010.

Full text
Abstract:
Abstract. Leaf area index (LAI) characterizes the amount of photosynthetically active tissue in plant canopies. LAI is one of the key factors determining ecosystem net primary production and gas and energy exchange between the canopy and the atmosphere. The aim of the present study was to test different methods for LAI and effective plant area index (PAIe) estimation in mixed hemiboreal forests in the Järvselja SMEAR Estonia (Station for Measuring Ecosystem-Atmosphere Relations) flux tower footprint. We used digital hemispherical images from sample plots, forest management inventory data, allometric foliage mass models, airborne discrete-return recording laser scanner (ALS) data and multispectral satellite images. The freeware program HemiSpherical Project Manager (HSP) was used to calculate canopy gap fraction from digital hemispherical photographs taken in 25 sample plots. PAIe was calculated from the gap fraction for up-scaling based on ALS point cloud metrics. Canopy transmission based on all ALS pulse returns was found to be the most suitable lidar metric to estimate PAIe in Järvselja forests. The 95th percentile (H95) of the lidar point cloud height distribution correlates very well with LAI from allometric regression models, and in birch stands the relationship was fitted with a residual error of 0.7 m2 m−2. However, the relationship was specific to each allometric foliage mass model, and systematic discrepancies between the models were detected at large LAI values. Relationships between spectral reflectance and allometric LAI were not good enough to be used for LAI mapping. Therefore, an airborne laser scanning data-based PAIe map was created for areas near the SMEAR tower. We recommend establishing a network of permanent sample plots for forest growth and gap fraction measurements in the flux footprint of the SMEAR Estonia flux tower in Järvselja to provide consistent, up-to-date data for the interpretation of the flux measurements.
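The step from canopy gap fraction to effective plant area index is commonly a Beer–Lambert inversion; the sketch below assumes that standard form with an extinction coefficient G = 0.5 (spherical leaf-angle distribution) and a hypothetical view zenith angle, not the exact HSP implementation.

```python
import math

def pai_effective(gap_fraction, view_zenith_deg=57.5, G=0.5):
    """Effective plant area index from canopy gap fraction, assuming a
    Beer-Lambert inversion: T = exp(-G * PAIe / cos(theta)).
    G = 0.5 corresponds to a spherical leaf-angle distribution."""
    theta = math.radians(view_zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / G

# A denser canopy (smaller gap fraction) yields a larger PAIe.
print(round(pai_effective(0.20), 2))
print(round(pai_effective(0.05), 2))
```

The 57.5° view angle is a common choice because the extinction coefficient there is nearly independent of leaf-angle distribution.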
3

Kadam, Sunil A., Claudio O. Stöckle, Mingliang Liu, Zhongming Gao, and Eric S. Russell. "Suitability of Earth Engine Evaporation Flux (EEFlux) Estimation of Evapotranspiration in Rainfed Crops." Remote Sensing 13, no. 19 (September 28, 2021): 3884. http://dx.doi.org/10.3390/rs13193884.

Full text
Abstract:
This study evaluated evapotranspiration (ET) estimated using the Earth Engine Evapotranspiration Flux (EEFlux), an automated version of the widely used Mapping Evapotranspiration at High Spatial Resolution with Internalized Calibration (METRIC) model, via comparison with ET measured using eddy covariance flux towers at two U.S. sites (St. John, WA, USA and Genesee, ID, USA) and for two years (2018 and 2019). Crops included spring wheat, winter pea, and winter wheat, all grown under rainfed conditions. The performance indices for daily EEFlux ET estimations combined for all sites and years dramatically improved when the cold pixel alfalfa reference ET fraction (ETrF) in METRIC was reduced from 1.05 (typically used for irrigated crops) to 0.85, with further improvement when the periods of early growth and canopy senescence were excluded. Large EEFlux ET overestimation during crop senescence was consistent in all sites and years. The seasonal absolute departure error was 51% (cold pixel ETrF = 1.05) and 23% (cold pixel ETrF = 0.85), the latter reduced to 12% when the early growth and canopy senescence periods were excluded. Departures of 10% are a reasonable expectation for methods of ET estimation, which EEFlux could achieve with more frequent satellite images, better daily weather data sources, automated adjustment of daily ETrF values during crop senescence, and a better understanding of the selection of adequate cold pixel ETrF values for rainfed crops.
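The role of the cold-pixel reference ET fraction can be illustrated with a toy calculation. The METRIC-style scaling (ET = ETrF × ETr) and the departure formula are assumptions based on the abstract, and all numbers are hypothetical.

```python
def daily_et(etrf, etr):
    """METRIC-style daily ET: reference-ET fraction times alfalfa
    reference ET (mm/day). Illustrative, not the EEFlux code."""
    return etrf * etr

def seasonal_departure(estimated, measured):
    """Seasonal absolute departure (%) between estimated and measured
    ET totals. Assumed form of the evaluation metric."""
    return 100.0 * abs(sum(estimated) - sum(measured)) / sum(measured)

measured = [4.0, 5.0, 4.5, 3.5]                 # hypothetical EC tower ET, mm/day
overestimated = [daily_et(1.05, 5.0)] * 4       # cold-pixel ETrF = 1.05 (irrigated default)
adjusted = [daily_et(0.85, 5.0)] * 4            # cold-pixel ETrF = 0.85 (rainfed adjustment)
print(seasonal_departure(overestimated, measured))
print(seasonal_departure(adjusted, measured))
```

With these toy numbers, lowering the cold-pixel ETrF removes the systematic overestimation entirely; in the study it reduced the seasonal departure from 51% to 23%.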
4

De Haan, Kevin, Myroslava Khomik, Adam Green, Warren Helgason, Merrin L. Macrae, Mazda Kompanizare, and Richard M. Petrone. "Assessment of Different Water Use Efficiency Calculations for Dominant Forage Crops in the Great Lakes Basin." Agriculture 11, no. 8 (August 4, 2021): 739. http://dx.doi.org/10.3390/agriculture11080739.

Full text
Abstract:
Water use efficiency (WUE) can be calculated using a range of methods differing in carbon uptake and water use variable selection. Consequently, inconsistencies arise between WUE calculations due to complex physical and physiological interactions. The purpose of this study was to quantify and compare WUE estimates (harvest or flux-based) for alfalfa (C3 plant) and maize (C4 plant) and determine effects of input variables, plant physiology and farming practices on estimates. Four WUE calculations were investigated: two “harvest-based” methods, using above ground carbon content and either precipitation or evapotranspiration (ET), and two “flux-based” methods, using gross primary productivity (GPP) and either ET or transpiration. WUE estimates differed based on method used at both half-hourly and seasonal scales. Input variables used in calculations affected WUE estimates, and plant physiology led to different responses in carbon assimilation and water use variables. WUE estimates were also impacted by different plant physiological responses and processing methods, even when the same carbon assimilation and water use variables were considered. This study highlights a need to develop a metric of measuring cropland carbon-water coupling that accounts for all water use components, plant carbon responses, and biomass production.
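The four calculations differ only in which carbon and water variables enter the ratio; a minimal sketch with hypothetical seasonal totals:

```python
def wue_harvest(carbon_yield, water):
    """Harvest-based WUE: above-ground carbon per unit water
    (precipitation or evapotranspiration)."""
    return carbon_yield / water

def wue_flux(gpp, water_flux):
    """Flux-based WUE: gross primary productivity per unit
    evapotranspiration or transpiration."""
    return gpp / water_flux

# Hypothetical seasonal totals for one field (g C m-2 and mm):
carbon = 450.0
precip, et, transp, gpp = 500.0, 420.0, 300.0, 900.0

estimates = {
    "harvest/P":   wue_harvest(carbon, precip),
    "harvest/ET":  wue_harvest(carbon, et),
    "flux GPP/ET": wue_flux(gpp, et),
    "flux GPP/T":  wue_flux(gpp, transp),
}
for name, value in estimates.items():
    print(f"{name}: {value:.2f} g C per mm")
```

Even with identical field data, the four definitions give markedly different numbers, which is the inconsistency the study quantifies.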
5

Niemimäki, Ossi, and Stefan Kurz. "Quasi 3D modelling and simulation of axial flux machines." COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 33, no. 4 (July 1, 2014): 1220–32. http://dx.doi.org/10.1108/compel-11-2012-0352.

Full text
Abstract:
Purpose – The purpose of this paper is to investigate the theoretical foundation of the so-called quasi 3D modelling method of axial flux machines, and the means for the simulation of the resulting models. Design/methodology/approach – Starting from the first principles, a 3D magnetostatic problem is geometrically decomposed into a coupled system of 2D problems. Genuine 2D problems are derived by decoupling the system. The construction of the 2D simulation models is discussed, and their applicability is evaluated by comparing a finite element implementation to an existing industry-used model. Findings – The quasi 3D method relies on the assumption of vanishing radial magnetic flux. The validity of this assumption is reflected in a residual gained from the 3D coupled system. Moreover, under a modification of the metric of the 2D models, an axial flux machine can be presented as a family of radial flux machines. Research limitations/implications – The evaluation and interpretation of the residual has not been carried out. Furthermore, the inclusion of eddy currents has not been detailed in the present study. Originality/value – A summary of existing modelling and simulation methods of axial flux machines is provided. As a novel result, proper mathematical context for the quasi 3D method is given and the underlying assumptions are laid out. The implementation of the 2D models is approached from a general angle, strengthening the foundation for future research.
6

Park, Hyungwon John, Jeffrey S. Reid, Livia S. Freire, Christopher Jackson, and David H. Richter. "In situ particle sampling relationships to surface and turbulent fluxes using large eddy simulations with Lagrangian particles." Atmospheric Measurement Techniques 15, no. 23 (December 13, 2022): 7171–94. http://dx.doi.org/10.5194/amt-15-7171-2022.

Full text
Abstract:
Abstract. Source functions for mechanically driven coarse-mode sea spray and dust aerosol particles span orders of magnitude owing to a combination of physical sensitivity in the system and large measurement uncertainty. Outside special idealized settings (such as wind tunnels), aerosol particle fluxes are largely inferred from a host of methods, including local eddy correlation, gradient methods, and dry deposition methods. In all of these methods, it is difficult to relate point measurements from towers, ships, or aircraft to a general representative flux of aerosol particles. This difficulty arises from the particles' inhomogeneous distribution across the multiple spatiotemporal scales of an evolving marine environment. We hypothesize that the current representation of a point in situ measurement of sea spray or dust particles is a likely contributor to the unrealistic range of flux and concentration outcomes in the literature. This paper aims to help the interpretation of field data: we conduct a series of high-resolution, cloud-free large eddy simulations (LESs) with Lagrangian particles to better understand the temporal evolution and volumetric variability of coarse- to giant-mode marine aerosol particles and their relationship to turbulent transport. The study begins by describing the Lagrangian LES model framework and simulates flux measurements that were made using numerical analogs to field practices such as the eddy covariance method. Using these methods, turbulent flux sampling is quantified based on key features such as coherent structures within the marine atmospheric boundary layer (MABL) and aerosol particle size. We show that for an unstable atmospheric stability, the MABL exhibits large coherent eddy structures, and as a consequence, the flux measurement outcome becomes strongly tied to spatial length scales and to the relative orientation of crosswise and streamwise sampling. For example, through the use of ogive curves, a given sampling duration of a fixed numerical sampling instrument is found to capture 80 % of the aerosol flux at a sampling rate of zf/w∗ ∼ 0.2, whereas a spanwise-moving instrument captures 95 %. These coherent structures and other canonical features contribute to the lack of convergence to the true aerosol vertical flux at any height. As expected, sampling all of the flow features results in a statistically robust flux signal. Analysis of a neutral boundary layer configuration results in a lower predictive range due to weak or no vertical roll structures compared to the unstable boundary layer setting. Finally, we take the results of each approach and compare their surface flux variability, a baseline metric used in regional and global aerosol models.
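An ogive analysis of the kind described above is the normalized cumulative integral of the flux cospectrum; the spectral shape below is a toy stand-in, not the LES output.

```python
import numpy as np

def ogive(frequencies, cospectrum):
    """Ogive: running cumulative integral of the w'c' cospectrum,
    normalized by the total flux. The fraction of flux captured up to
    a given frequency is read directly off this curve."""
    cumulative = np.cumsum(cospectrum * np.gradient(frequencies))
    return cumulative / cumulative[-1]

# Toy cospectrum peaking at the large, low-frequency eddies:
f = np.linspace(0.001, 5.0, 500)
co = f * np.exp(-3.0 * f)
og = ogive(f, co)

# Fraction of the flux carried below 1 Hz in this toy example:
captured = og[np.searchsorted(f, 1.0)]
print(captured)
```

Because most of the flux in this toy spectrum is carried by low-frequency (large-eddy) contributions, a sampling strategy that misses those scales under-captures the flux, mirroring the fixed-versus-moving instrument comparison above.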
7

Rywotycki, M., Z. Malinowski, J. Falkus, K. Sołek, A. Szajding, and K. Miłkowska-Piszczek. "Modelling of Heat Transfer at the Solid to Solid Interface." Archives of Metallurgy and Materials 61, no. 1 (March 1, 2016): 341–46. http://dx.doi.org/10.1515/amm-2016-0063.

Full text
Abstract:
In the technological processes of the steel industry, heat transfer is a very important factor. Heat transfer plays an essential role especially in rolling and forging processes. The heat flux between a tool and a workpiece is a function of temperature, pressure and time. A methodology for determining the heat transfer at a solid-to-solid interface has been developed. It involves a physical experiment and numerical methods. The first requires measurements of the temperature variations at specified points in the two samples brought into contact. Samples made of C45 and NC6 steels were employed in the physical experiment. One of the samples was heated to an initial temperature of 800°C, 1000°C or 1100°C. The second sample was kept at room temperature. The numerical part makes use of the inverse method for calculating the heat flux at the interface. The method involves simulation of the temperature field in the axially symmetrical samples. The objective function is built up as a dimensionless error norm between measured and computed temperatures. The variable metric method is employed in the objective function minimization. The variation in time of the heat transfer coefficient at the boundary surface is approximated by cubic spline functions. The influence of pressure and temperature on the heat flux has been analysed. The problem has been solved by applying the inverse procedure and the finite element method for the temperature field simulations. Self-developed software has been used. The simulation results, along with their analysis, have been presented.
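A minimal sketch of the inverse formulation: the "variable metric" family refers to quasi-Newton minimizers, stood in for here by a simple one-dimensional search, and the finite-element forward model is replaced by a hypothetical exponential cooling curve. Everything below is illustrative, not the study's software.

```python
import math

def forward_model(h, times):
    """Hypothetical stand-in for the finite-element simulation: sample
    temperature relaxes toward ambient at a rate set by the heat
    transfer coefficient h (the real model solves the axially
    symmetrical temperature field)."""
    return [20.0 + 780.0 * math.exp(-h * t / 1000.0) for t in times]

def error_norm(h, times, measured):
    """Dimensionless error norm between measured and computed
    temperatures, as minimized in the inverse method."""
    computed = forward_model(h, times)
    num = sum((m - c) ** 2 for m, c in zip(measured, computed))
    den = sum(m ** 2 for m in measured)
    return (num / den) ** 0.5

def golden_section(f, a, b, tol=1e-6):
    """Simple 1-D minimizer standing in for the variable-metric
    (quasi-Newton) step of the original study."""
    g = (5 ** 0.5 - 1) / 2
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Synthetic "measurements" generated with a known coefficient,
# then recovered by minimizing the error norm:
times = [0.0, 0.5, 1.0, 2.0, 4.0]
measured = forward_model(350.0, times)
h_est = golden_section(lambda h: error_norm(h, times, measured), 10.0, 1000.0)
print(h_est)
```

The same structure scales up to the real problem: only the forward model (here a one-line formula) changes, not the minimization loop.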
8

Park, Byung Kyu, and Charn-Jung Kim. "Unsteady Heat Flux Measurement and Predictions Using Long Short-Term Memory Networks." Buildings 13, no. 3 (March 8, 2023): 707. http://dx.doi.org/10.3390/buildings13030707.

Full text
Abstract:
Energy consumption modeling has evolved along with building technology. Modeling techniques can be broadly classified into white-box, gray-box, and black-box approaches. In this study, the thermal behavior characteristics of building components were identified through time-series data analysis using long short-term memory (LSTM) neural networks. Sensors were installed inside and outside the test room to measure physical quantities. As a result of calculating the overall heat transfer coefficient according to the international standard ISO 9869-1, the U value of the multi-window with antireflection coating was 1.84 W/(m²·K). To understand the thermal behavior of multiple windows, we constructed a neural network using an LSTM architecture and used the measured dataset to predict and evaluate the heat flux through deep learning. From the measurement data, a wavelet transform was used to extract features and to find appropriate control time-step intervals. Performance was evaluated according to multistep measurement intervals using the error metric method. The multistep time interval for control monitoring is preferably no more than 240 s. In addition, multivariate analysis with several input variables was performed. In particular, heat flux and temperature measurements in the transient state make it possible to analyze the thermal behavior of pre-installed building components, which was difficult to access with conventional steady-state measurement methods.
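The ISO 9869-1 "average method" behind the reported U value divides accumulated heat flux by accumulated temperature difference; the readings below are hypothetical.

```python
def u_value(heat_flux, t_in, t_out):
    """ISO 9869-1 'average method': U = sum(q_j) / sum(dT_j),
    accumulated over the measurement campaign, in W/(m2 K)."""
    num = sum(heat_flux)
    den = sum(i - o for i, o in zip(t_in, t_out))
    return num / den

# Hypothetical hourly readings for a window element:
q = [27.6, 25.8, 29.4, 26.0]        # heat flux, W/m2
t_inside = [21.0, 21.2, 20.8, 21.0]  # indoor temperature, degC
t_outside = [6.0, 7.0, 5.5, 6.5]     # outdoor temperature, degC
print(round(u_value(q, t_inside, t_outside), 2))
```

In practice the standard requires the running estimate to stabilize over several days of data; the four samples here only illustrate the arithmetic.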
9

Wu, Li, Tao Zhang, Yi Qin, and Wei Xue. "An effective parameter optimization with radiation balance constraint in CAM5 (version 5.3)." Geoscientific Model Development 13, no. 1 (January 3, 2020): 41–53. http://dx.doi.org/10.5194/gmd-13-41-2020.

Full text
Abstract:
Abstract. Uncertain parameters in physical parameterizations of general circulation models (GCMs) greatly impact model performance. In recent years, automatic parameter optimization has been introduced for tuning the performance of GCMs, but most of the optimization methods are unconstrained optimization methods under a given performance indicator. Therefore, the calibrated model may violate essential constraints that models must satisfy, such as the radiation balance at the top of the model. The radiation balance is known for its importance in the conservation of model energy. In this study, an automated and efficient parameter optimization with the radiation balance constraint is presented and applied to the Community Atmospheric Model (CAM5) in terms of a synthesized performance metric using the normalized mean square error of radiation, precipitation, relative humidity, and temperature. The tuned parameters are from the parameterization schemes of convection and cloud. The radiation constraint is defined as an absolute difference between the net longwave flux at the top of the model (FLNT) and the net solar flux at the top of the model (FSNT) of less than 1 W m−2. Results show that the synthesized performance under the optimal parameters is 6.3 % better than the control run (CNTL) and the radiation imbalance is as low as 0.1 W m−2. The proposed method provides insight into physics-guided optimization, and it can be easily applied to optimization problems with other prerequisite constraints in GCMs.
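The constraint handling can be sketched as a rejection rule wrapped around the performance metric; the plain-mean aggregation and all numbers below are assumptions, not the paper's tuning setup.

```python
def synthesized_metric(nmse_terms):
    """Synthesized performance: aggregate of normalized mean square
    errors over radiation, precipitation, humidity and temperature.
    The plain mean used here is an assumed aggregation."""
    return sum(nmse_terms) / len(nmse_terms)

def feasible(fsnt, flnt, tolerance=1.0):
    """Radiation-balance constraint: |FSNT - FLNT| < 1 W m-2
    at the top of the model."""
    return abs(fsnt - flnt) < tolerance

def score(candidate):
    """Reject parameter sets violating the constraint; otherwise
    return the synthesized metric (lower is better)."""
    if not feasible(candidate["fsnt"], candidate["flnt"]):
        return float("inf")
    return synthesized_metric(candidate["nmse"])

balanced   = {"fsnt": 240.2, "flnt": 240.1, "nmse": [0.9, 1.1, 1.0, 0.8]}
unbalanced = {"fsnt": 243.0, "flnt": 240.0, "nmse": [0.7, 0.9, 0.8, 0.6]}
print(score(balanced))     # finite: constraint satisfied
print(score(unbalanced))   # infinite: rejected despite a better raw metric
```

The second candidate has the better raw NMSE but is discarded, which is exactly the behavior an unconstrained tuner would miss.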
10

Shapiro, Griffin, David V. Stark, and Karen L. Masters. "Testing Algorithms for Identifying Source Confusion in the H i-MaNGA Survey." Research Notes of the AAS 6, no. 1 (January 5, 2022): 1. http://dx.doi.org/10.3847/2515-5172/ac4743.

Full text
Abstract:
Abstract Astronomical observations of neutral atomic hydrogen (H i) are an important tracer of several key processes of galaxy evolution, but face significant difficulties with terrestrial telescopes. Among these is source confusion, or the inability to distinguish between emission from multiple nearby sources separated by distances smaller than the telescope’s spatial resolution. Confusion can compromise the data for the primary target if the flux from the secondary galaxy is sufficient. This paper presents an assessment of the confusion-flagging methods of the H i-MaNGA survey, using higher-resolution H i data from the Westerbork Synthesis Radio Telescope-Apertif survey. We find that removing potentially confused observations using a confusion probability metric—calculated from the relationship between galaxy color, surface brightness, and H i content—successfully eliminates all significantly confused observations in our sample, although roughly half of the eliminated observations are not significantly confused.
11

Wang, Yanyan, and Sean D. Willett. "Escarpment retreat rates derived from detrital cosmogenic nuclide concentrations." Earth Surface Dynamics 9, no. 5 (September 30, 2021): 1301–22. http://dx.doi.org/10.5194/esurf-9-1301-2021.

Full text
Abstract:
Abstract. High-relief great escarpments at passive margins present a paradox: high-relief topography coexists with low erosion rates, suggesting low rates of landscape change. However, vertical erosion rates do not offer a straightforward metric of horizontal escarpment retreat rates, so we attempt to address this problem in this paper. We show that detrital cosmogenic nuclide concentrations can be interpreted as a directionally dependent mass flux to characterize patterns of non-vertical landscape evolution, e.g., an escarpment characterized by horizontal retreat. We present two methods for converting cosmogenic nuclide concentrations into escarpment retreat rates and calculate the retreat rates of escarpments with published cosmogenic 10Be concentrations from the Western Ghats of India. Escarpment retreat rates of the Western Ghats inferred from this study vary within a range of hundreds to thousands of meters per Myr. We show that the current position and morphology of the Western Ghats are consistent with an escarpment retreating at a near-constant rate from the coastline since rifting.
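Under strong simplifying assumptions (steady-state 10Be accumulation, negligible radioactive decay, a planar escarpment face), the chain from concentration to retreat rate can be sketched as follows; the parameter values are illustrative, not the paper's calibrations, and the geometric conversion is a simplified stand-in for the paper's flux-based derivation.

```python
import math

def denudation_rate(concentration, production=5.0,
                    attenuation=160.0, density=2.7):
    """Steady-state denudation rate (cm/yr) from a 10Be concentration
    (atoms/g), ignoring decay: N = P * Lambda / (rho * eps), hence
    eps = P * Lambda / (rho * N). Production rate (atoms/g/yr),
    attenuation length (g/cm2) and rock density (g/cm3) are
    illustrative defaults, not site calibrations."""
    return production * attenuation / (density * concentration)

def retreat_rate(vertical_rate_m_per_myr, escarpment_slope_deg):
    """Geometric conversion of a vertical lowering rate into a
    horizontal retreat rate for an escarpment face of given slope.
    A simplified assumption, not the paper's method."""
    return vertical_rate_m_per_myr / math.tan(math.radians(escarpment_slope_deg))

eps_cm_yr = denudation_rate(5.0e4)     # hypothetical concentration, atoms/g
eps_m_myr = eps_cm_yr * 1.0e4          # cm/yr -> m/Myr
print(eps_m_myr)
print(retreat_rate(eps_m_myr, 20.0))
```

Even this crude geometry shows why a gentle escarpment slope turns a modest vertical denudation rate into a horizontal retreat rate several times larger, in the hundreds of meters per Myr.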
12

Max, Ndoumbé Matéké, Nyobe Yomé Jean Maurice, Eke Samuel, Mouné Cédric Jordan, Alain Biboum, and Bitjoka Laurent. "DTC with fuzzy logic for multi-machine systems: traction applications." International Journal of Power Electronics and Drive Systems (IJPEDS) 12, no. 4 (December 1, 2021): 2044. http://dx.doi.org/10.11591/ijpeds.v12.i4.pp2044-2058.

Full text
Abstract:
In this work, a direct torque control (DTC) method for multi-machine systems is applied to electric vehicles (EVs). Initially, the DTC control method associated with the model reference adaptive system (MRAS) is used for speed control, and management of the magnetic quantities is ensured by the variable master-slave control (VMSC). In order to increase the technical performance of the studied system, the DTC method has been combined with a fuzzy logic approach. These two control methods are applied to the traction chain of an electric vehicle to highlight their speed, precision, stability, and robustness during particular stress tests imposed on the wheel motor. The results obtained in MATLAB/Simulink software enabled a comparison of the two proposed methods based on their technical performance. It should be noted that the direct fuzzy logic torque control (DFTC) shows better performance than the DTC associated with the MRAS system, with a rise-time reduction of 1.4%, torque and flux amplitude oscillations of less than 9%, and a static steady-state error near zero. The DFTC control method responds favorably to electric vehicle traction chain systems in terms of the comfort and safety provided.
13

Chantry, L., V. Cayatte, C. Sauty, N. Vlahakis, and K. Tsinganos. "Nonradial and nonpolytropic astrophysical outflows." Astronomy & Astrophysics 612 (April 2018): A63. http://dx.doi.org/10.1051/0004-6361/201731793.

Full text
Abstract:
Context. High-resolution radio imaging of active galactic nuclei (AGN) has revealed that the jets of some sources present superluminal knots and transverse stratification. Recent observational projects, such as ALMA, and γ-ray telescopes, such as HESS and HESS2, have provided new observational constraints on the central regions of rotating black holes in AGN, suggesting that there is an inner- or spine-jet surrounded by a disk wind. This relativistic spine-jet is likely to be composed of electron-positron pairs extracting energy from the black hole and will be explored by the future γ-ray telescope CTA. Aims. In this article we present an extension and generalization to relativistic jets in a Kerr metric of the Newtonian meridional self-similar mechanism. We aim at modeling the inner spine-jet of AGN as a relativistic light outflow emerging from a spherical corona surrounding a Kerr black hole and its inner accretion disk. Methods. The model is built by expanding the metric and the forces with colatitude to first order in the magnetic flux function. As a result of the expansion, all colatitudinal variations of the physical quantities are quantified by a unique parameter. Unlike previous models, effects of the light cylinder are not neglected. Results. Solutions with high Lorentz factors are obtained and provide spine-jet models up to the polar axis. As in previous publications, we calculate the magnetic collimation efficiency parameter, which measures the variation of the available energy across the field lines. This collimation efficiency is an integral part of the model, generalizing the classical magnetic rotator efficiency criterion to the Kerr metric. We study the variation of the magnetic efficiency and acceleration with the spin of the black hole and show their high sensitivity to this integral. Conclusions. These new solutions model collimated or radial, relativistic or ultra-relativistic outflows in AGN or γ-ray bursts. In particular, we discuss the relevance of our solutions to modeling the M 87 spine-jet. We study the efficiency of the central black hole spin in collimating a spine-jet and show that the jet power is of the same order as that determined by numerical simulations.
14

Guevara-Escobar, Aurelio, Enrique González-Sosa, Mónica Cervantes-Jiménez, Humberto Suzán-Azpiri, Mónica Elisa Queijeiro-Bolaños, Israel Carrillo-Ángeles, and Víctor Hugo Cambrón-Sandoval. "Machine learning estimates of eddy covariance carbon flux in a scrub in the Mexican highland." Biogeosciences 18, no. 2 (January 18, 2021): 367–92. http://dx.doi.org/10.5194/bg-18-367-2021.

Full text
Abstract:
Abstract. Arid and semiarid ecosystems contain relatively high species diversity and are subject to intense use, in particular extensive cattle grazing, which has favored the expansion and encroachment of perennial thorny shrubs into the grasslands, thus decreasing the value of the rangeland. However, these environments have been shown to positively impact global carbon dynamics. Machine learning and remote sensing have enhanced our knowledge about carbon dynamics, but they need to be further developed and adapted to particular analyses. We measured the net ecosystem exchange (NEE) of C with the eddy covariance (EC) method and estimated gross primary production (GPP) in a thorny scrub at Bernal in Mexico. We tested the agreement between EC estimates and remotely sensed GPP estimates from the Moderate Resolution Imaging Spectroradiometer (MODIS), and also with two alternative modeling methods: ordinary-least-squares (OLS) regression and ensembles of machine learning algorithms (EMLs). The variables used as predictors were MODIS spectral bands, vegetation indices and products, and gridded environmental variables. The Bernal site was a carbon sink even though it was overgrazed; the average NEE during 15 months of 2017 and 2018 was −0.78 g C m−2 d−1, and the flux was negative or neutral during the measured months. The probability of agreement (θs) represented the agreement between observed and estimated values of GPP across the range of measurement. According to the mean value of θs, agreement was highest for the EML (0.6), followed by OLS (0.5) and then MODIS (0.24). This graphic metric was more informative than r2 (0.98, 0.67 and 0.58, respectively) for evaluating model performance. This was particularly true for MODIS, because the maximum θs of 4.3 occurred for measurements of 0.8 g C m−2 d−1 and then decreased steadily to below 1 for measurements above 6.5 g C m−2 d−1 for this scrub vegetation. In the case of EML and OLS, θs was stable across the range of measurement. We used an EML for the AmeriFlux site US-SRM, which is similar in vegetation and climate, to predict GPP at Bernal, but θs was low (0.16), indicating the local specificity of this model. Although cacti were an important component of the vegetation, the nighttime flux was characterized by positive NEE, suggesting that the photosynthetic dark-cycle flux of cacti was lower than ecosystem respiration. The discrepancy between MODIS and EC GPP estimates stresses the need to understand the limitations of both methods.
15

Ellien, A., E. Slezak, N. Martinet, F. Durret, C. Adami, R. Gavazzi, C. R. Rabaça, C. Da Rocha, and D. N. Epitácio Pereira. "DAWIS: a detection algorithm with wavelets for intracluster light studies." Astronomy & Astrophysics 649 (May 2021): A38. http://dx.doi.org/10.1051/0004-6361/202038419.

Full text
Abstract:
Context. Large numbers of deep optical images will be available in the near future, allowing statistically significant studies of low surface brightness structures such as intracluster light (ICL) in galaxy clusters. The detection of these structures requires efficient algorithms dedicated to this task, which traditional methods find difficult to solve. Aims. We present our new detection algorithm with wavelets for intracluster light studies (DAWIS), which we developed and optimized for the detection of low surface brightness sources in images, in particular (but not limited to) ICL. Methods. DAWIS follows a multiresolution vision based on wavelet representation to detect sources. It is embedded in an iterative procedure called the synthesis-by-analysis approach to restore the unmasked light distribution of these sources with very good quality. The algorithm is built so that sources can be classified based on criteria depending on the analysis goal. We present the case of ICL detection and the measurement of ICL fractions. We test the efficiency of DAWIS on 270 mock images of galaxy clusters with various ICL profiles and compare its efficiency to more traditional ICL detection methods such as the surface brightness threshold method. We also run DAWIS on a real galaxy cluster image, and compare the output to results obtained with previous multiscale analysis algorithms. Results. We find in simulations that DAWIS is on average able to separate galaxy light from ICL more efficiently, and to detect a greater quantity of ICL flux because of the way sky background noise is treated. We also show that the ICL fraction, a metric used on a regular basis to characterize ICL, is subject to several measurement biases on galaxy and ICL fluxes. In the real galaxy cluster image, DAWIS detects a faint and extended source with an absolute magnitude two orders brighter than previous multiscale methods found.
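The ICL fraction discussed above is a simple flux ratio, which makes its sensitivity to flux biases easy to see; the numbers below are hypothetical.

```python
def icl_fraction(icl_flux, galaxy_flux):
    """ICL fraction: intracluster light flux over the total
    (galaxies + ICL) cluster flux."""
    return icl_flux / (icl_flux + galaxy_flux)

# Baseline with hypothetical fluxes (arbitrary units):
baseline = icl_fraction(15.0, 85.0)

# If 20% of the true ICL flux is mistakenly assigned to galaxies,
# both numerator and denominator shift and the metric is biased low:
biased = icl_fraction(15.0 * 0.8, 85.0 + 15.0 * 0.2)

print(baseline)
print(biased)
```

A misclassification of only a fifth of the ICL flux already shifts the fraction noticeably, illustrating the measurement biases the paper reports for both galaxy and ICL fluxes.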
16

Lhermitte, Julien R., Cheng Tian, Aaron Stein, Atikur Rahman, Yugang Zhang, Lutz Wiegart, Andrei Fluerasu, Oleg Gang, and Kevin G. Yager. "Robust X-ray angular correlations for the study of meso-structures." Journal of Applied Crystallography 50, no. 3 (May 8, 2017): 805–19. http://dx.doi.org/10.1107/s1600576717003946.

Full text
Abstract:
As self-assembling nanomaterials become more sophisticated, it is becoming increasingly important to measure the structural order of finite-sized assemblies of nano-objects. These mesoscale clusters represent an acute challenge to conventional structural probes, owing to the range of implicated size scales (10 nm to several micrometres), the weak scattering signal and the dynamic nature of meso-clusters in native solution environments. The high X-ray flux and coherence of modern synchrotrons present an opportunity to extract structural information from these challenging systems, but conventional ensemble X-ray scattering averages out crucial information about local particle configurations. Conversely, a single meso-cluster scatters too weakly to recover the full diffraction pattern. Using X-ray angular cross-correlation analysis, it is possible to combine multiple noisy measurements to obtain robust structural information. This paper explores the key theoretical limits and experimental challenges that constrain the application of these methods to probing structural order in real nanomaterials. A metric is presented to quantify the signal-to-noise ratio of angular correlations, and it is used to identify several experimental artifacts that arise. In particular, it is found that background scattering, data masking and inter-cluster interference profoundly affect the quality of correlation analyses. A robust workflow is demonstrated for mitigating these effects and extracting reliable angular correlations from realistic experimental data.
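Angular correlation analysis of a detector ring can be sketched with the Wiener-Khinchin relation; the 4-fold symmetric toy pattern below stands in for real scattering data, and the normalization choice is an assumption.

```python
import numpy as np

def angular_correlation(intensity_ring):
    """Angular autocorrelation C(delta) of the intensity around a ring
    of constant q, computed via FFTs (Wiener-Khinchin theorem).
    Averaging C over many noisy snapshots recovers the underlying
    symmetry signal that a single weak pattern cannot reveal."""
    i = intensity_ring - intensity_ring.mean()
    power = np.abs(np.fft.fft(i)) ** 2
    corr = np.fft.ifft(power).real / len(i)
    return corr / corr[0]              # normalize so that C(0) = 1

# A noisy ring with 4-fold symmetry: correlation peaks every 90 degrees.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ring = 1.0 + 0.5 * np.cos(4 * theta) + 0.2 * rng.standard_normal(360)
c = angular_correlation(ring)
print(c[90])    # delta = 90 deg: strong positive correlation
print(c[45])    # delta = 45 deg: anticorrelation
```

In practice, the background subtraction, masking and inter-cluster interference effects discussed in the paper all perturb this estimator, which is why a signal-to-noise metric for the correlations is needed.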
17

Tipka, Anne, Leopold Haimberger, and Petra Seibert. "Flex_extract v7.1.2 – a software package to retrieve and prepare ECMWF data for use in FLEXPART." Geoscientific Model Development 13, no. 11 (November 5, 2020): 5277–310. http://dx.doi.org/10.5194/gmd-13-5277-2020.

Abstract:
Abstract. Flex_extract is an open-source software package to efficiently retrieve and prepare meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF) as input for the widely used Lagrangian particle dispersion model FLEXPART and the related trajectory model FLEXTRA. ECMWF provides a variety of data sets which differ in a number of parameters (available fields, spatial and temporal resolution, forecast start times, level types, etc.). Therefore, the selection of the right data for a specific application and the settings needed to obtain them are not trivial. Consequently, the data sets which can be retrieved through flex_extract by both member-state users and public users, as well as their properties, are explained. Flex_extract 7.1.2 is a substantially revised version with completely restructured code, mainly written in Python 3, which is introduced with all its input and output files and an explanation of the four application modes. Software dependencies and the methods for calculating the native vertical velocity η̇, the handling of flux data and the preparation of the final FLEXPART input files are documented. Considerations for applications give guidance with respect to the selection of data sets, caveats related to the land–sea mask and orography, etc. Formal software quality-assurance methods have been applied to flex_extract. A set of unit and regression tests as well as code metric data are also supplied. A short description of the installation and usage of flex_extract is provided in the Appendix. The paper also points to online documentation which will be kept up to date with respect to future versions.
18

Chan, H. G., M. D. King, and M. M. Frey. "The impact of parameterising light penetration into snow on the photochemical production of NO<sub><i>x</i></sub> and OH radicals in snow." Atmospheric Chemistry and Physics 15, no. 14 (July 17, 2015): 7913–27. http://dx.doi.org/10.5194/acp-15-7913-2015.

Abstract:
Abstract. Snow photochemical processes drive production of chemical trace gases in snowpacks, including nitrogen oxides (NOx = NO + NO2) and hydrogen oxide radicals (HOx = OH + HO2), which are then released to the lower atmosphere. Coupled atmosphere–snow modelling of these processes on global scales requires simple parameterisations of actinic flux in snow to reduce computational cost. The disagreement between a physical radiative-transfer (RT) method and a parameterisation based upon the e-folding depth of actinic flux in snow is evaluated. In particular, the photolysis of the nitrate anion (NO3-), the nitrite anion (NO2-) and hydrogen peroxide (H2O2) in snow and nitrogen dioxide (NO2) in the snowpack interstitial air are considered. The emission flux from the snowpack is estimated as the product of the depth-integrated photolysis rate coefficient, v, and the concentration of photolysis precursors in the snow. The depth-integrated photolysis rate coefficient is calculated (a) explicitly with an RT model (TUV), vTUV, and (b) with a simple parameterisation based on e-folding depth, vze. The metric for the evaluation is based upon the deviation of the ratio of the depth-integrated photolysis rate coefficients determined by the two methods, vTUV/vze, from unity. The ratio depends primarily on the position of the peak in the photolysis action spectrum of the chemical species, the solar zenith angle and the physical properties of the snowpack, i.e. a strong dependence on the light-scattering cross section and the mass ratio of light-absorbing impurities (i.e. black carbon and HULIS), with a weak dependence on density. For the photolysis of NO2, the NO2- anion, the NO3- anion and H2O2 the ratio vTUV/vze varies within the range of 0.82–1.35, 0.88–1.28, 0.93–1.27 and 0.91–1.28 respectively. The e-folding depth parameterisation underestimates the depth-integrated photolysis rate coefficient at small solar zenith angles and overestimates it at solar zenith angles around 60°, compared to the RT method.
A simple algorithm has been developed to improve the parameterisation which reduces the ratio vTUV/vze to 0.97–1.02, 0.99–1.02, 0.99–1.03 and 0.98–1.06 for photolysis of NO2, the NO2- anion, the NO3- anion and H2O2 respectively. The e-folding depth parameterisation may give acceptable results for the photolysis of the NO3- anion and H2O2 in cold polar snow with large solar zenith angles, but it can be improved by a correction based on solar zenith angle and for cloudy skies.
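The e-folding parameterisation evaluated here admits a closed form: if J(z) = J(0)·exp(−z/z_e), then the depth integral is J(0)·z_e. The sketch below (illustrative values of z_e and the surface photolysis rate; this is not the TUV model) checks a vTUV/vze-style ratio against a numerical depth integration, which is unity by construction when the actinic-flux profile really is exponential; the paper's deviations from unity arise when the RT profile is not:

```python
import math

def v_efold(j_surface, z_e):
    """Depth-integrated photolysis rate coefficient for
    J(z) = J(0) * exp(-z / z_e): closed form J(0) * z_e."""
    return j_surface * z_e

def v_numeric(j_of_z, z_max, n=10000):
    """Trapezoidal depth integration of an arbitrary J(z) profile,
    standing in for an explicit radiative-transfer calculation."""
    dz = z_max / n
    total = 0.0
    for i in range(n):
        z0, z1 = i * dz, (i + 1) * dz
        total += 0.5 * (j_of_z(z0) + j_of_z(z1)) * dz
    return total

z_e = 0.10   # e-folding depth in metres (illustrative)
j0 = 1e-5    # surface photolysis rate coefficient, s^-1 (illustrative)
ratio = v_numeric(lambda z: j0 * math.exp(-z / z_e), 2.0) / v_efold(j0, z_e)
```

For a genuinely exponential profile the ratio is 1 to numerical precision; the 0.82–1.35 range quoted above reflects how far real snowpack profiles depart from this idealisation.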
19

Kwon, Yuna G., Ludmilla Kolokolova, Jessica Agarwal, and Johannes Markkanen. "An update of the correlation between polarimetric and thermal properties of cometary dust." Astronomy & Astrophysics 650 (June 2021): L7. http://dx.doi.org/10.1051/0004-6361/202141199.

Abstract:
Context. Comets are conglomerates of ice and dust particles, the latter of which encode information on changes in the radiative and thermal environments. Dust displays distinctive scattered and thermal radiation in the visible and mid-infrared (MIR) wavelengths, respectively, based on its inherent characteristics. Aims. We aim to identify a possible correlation between the properties of scattered and thermal radiation from dust and the principal dust characteristics responsible for this relationship, and therefrom gain insights into comet evolution. Methods. We use the NASA/PDS archival polarimetric data on cometary dust in the red (0.62−0.73 μm) and K (2.00−2.39 μm) domains to leverage the relative excess of the polarisation degree of a comet to the average trend at the given phase angle (Pexcess) as a metric of the dust’s scattered light characteristics. The flux excess of silicate emissions to the continuum around 10 μm (FSi/Fcont) is adopted from previous studies as a metric of the dust’s MIR feature. Results. The two observables – Pexcess and FSi/Fcont – show a positive correlation when Pexcess is measured in the K domain (Spearman’s rank correlation coefficient ρ = 0.71 (+0.10, −0.19)). No significant correlation was identified in the red domain (ρ = 0.13 (+0.16, −0.15)). The gas-rich comets have systematically weaker FSi/Fcont than the dust-rich ones, and yet both groups retain the same overall tendency with different slope values. Conclusions. The observed positive correlation between the two metrics indicates that composition is a peripheral factor in characterising the dust’s polarimetric and silicate emission properties. The systematic difference in FSi/Fcont for gas-rich versus dust-rich comets would instead correspond to the difference in their dust size distribution. 
Hence, our results suggest that the current MIR spectral models of cometary dust, which search for a minimum χ2 fit by considering various dust properties simultaneously, should prioritise the dust size and porosity over the composition. With light scattering being sensitive to different size scales in two wavebands, we expect the K-domain polarimetry to be sensitive to the properties of dust aggregates, such as size and porosity, which might have been influenced by evolutionary processes. On the other hand, the red-domain polarimetry reflects the characteristics of sub-micrometre constituents in the aggregate.
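The Spearman rank correlation quoted above is the Pearson correlation of the ranks. A minimal self-contained version (assuming distinct values, so ties need no special handling):

```python
def rank(xs):
    """Ranks 0..n-1 by sorted order (adequate for distinct values)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rnk, i in enumerate(order):
        r[i] = float(rnk)
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A perfectly monotone increasing relation gives ρ = 1 and a monotone decreasing one gives ρ = −1, regardless of the shape of the relation, which is why it suits the heteroscedastic excess metrics used here.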
20

Perez, Gabriel M. P., Pier Luigi Vidale, Nicholas P. Klingaman, and Thomas C. M. Martin. "Atmospheric convergence zones stemming from large-scale mixing." Weather and Climate Dynamics 2, no. 2 (June 9, 2021): 475–88. http://dx.doi.org/10.5194/wcd-2-475-2021.

Abstract:
Abstract. Organised cloud bands are important features of tropical and subtropical rainfall. These structures are often regarded as convergence zones, alluding to an association with coherent atmospheric flow. However, the flow kinematics is not usually taken into account in classification methods for this type of event, as large-scale lines are rarely evident in instantaneous diagnostics such as Eulerian convergence. Instead, existing convergence zone definitions rely on heuristic rules of shape, duration and size of cloudiness fields. Here we investigate the role of large-scale turbulence in shaping atmospheric moisture in South America. We employ the finite-time Lyapunov exponent (FTLE), a metric of deformation among neighbouring trajectories, to define convergence zones as attracting Lagrangian coherent structures (LCSs). Attracting LCSs frequent tropical and subtropical South America, with climatologies consistent with the South Atlantic Convergence Zone (SACZ), the South American Low-Level Jet (SALLJ) and the Intertropical Convergence Zone (ITCZ). In regions under the direct influence of the ITCZ and the SACZ, rainfall is significantly positively correlated with large-scale mixing measured by the FTLE. Attracting LCSs in south and southeast Brazil are associated with significant positive rainfall and moisture flux anomalies. Geopotential height composites suggest that the occurrence of attracting LCSs in these regions is related with teleconnection mechanisms such as the Pacific–South Atlantic. We believe that this kinematical approach can be used as an alternative to region-specific convergence zone classification algorithms; it may help advance the understanding of underlying mechanisms of tropical and subtropical rain bands and their role in the hydrological cycle.
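For a two-dimensional flow, the FTLE used above reduces to the logarithm of the largest singular value of the flow-map gradient, divided by the integration time. A minimal sketch (in practice the Jacobian comes from finite differences of integrated trajectories, which is omitted here):

```python
import math

def ftle(jacobian, t):
    """Finite-time Lyapunov exponent from the 2x2 flow-map gradient
    dF = ((a, b), (c, d)) over integration time t:
    FTLE = ln(sqrt(lambda_max(dF^T dF))) / |t|."""
    (a, b), (c, d) = jacobian
    # Cauchy-Green strain tensor C = dF^T dF (symmetric 2x2)
    cxx = a * a + c * c
    cxy = a * b + c * d
    cyy = b * b + d * d
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam_max = 0.5 * (tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0)))
    return math.log(math.sqrt(lam_max)) / abs(t)

# Pure stretching x -> e^t * x, y -> e^-t * y after t = 1:
# the flow-map gradient is diag(e, 1/e), so the FTLE is exactly 1.
```

Ridges of large FTLE mark the attracting (in backward time) Lagrangian coherent structures that the paper identifies with convergence zones.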
21

Mullins, Darragh, Derek Coburn, Louise Hannon, Edward Jones, Eoghan Clifford, and Martin Glavin. "A novel image processing-based system for turbidity measurement in domestic and industrial wastewater." Water Science and Technology 77, no. 5 (January 19, 2018): 1469–82. http://dx.doi.org/10.2166/wst.2018.030.

Abstract:
Abstract Wastewater treatment facilities are continually challenged to meet both environmental regulations and reduce running costs (particularly energy and staffing costs). Improving the efficiency of operational monitoring at wastewater treatment plants (WWTPs) requires the development and implementation of appropriate performance metrics; particularly those that are easily measured, strongly correlate to WWTP performance, and can be easily automated, with a minimal amount of maintenance or intervention by human operators. Turbidity is the measure of the relative clarity of a fluid. It is an expression of the optical property that causes light to be scattered and absorbed by fine particles in suspension (rather than transmitted with no change in direction or flux level through a fluid sample). In wastewater treatment, turbidity is often used as an indicator of effluent quality, rather than an absolute performance metric, although correlations have been found between turbidity and suspended solids. Existing laboratory-based methods to measure turbidity for WWTPs, while relatively simple, require human intervention and are labour intensive. Automated systems for on-site measuring of wastewater effluent turbidity are not commonly used, while those present are largely based on submerged sensors that require regular cleaning and calibration due to fouling from particulate matter in fluids. This paper presents a novel, automated system for estimating fluid turbidity. Effluent samples are imaged such that the light absorption characteristic is highlighted as a function of fluid depth, and computer vision processing techniques are used to quantify this characteristic. Results from the proposed system were compared with results from established laboratory-based methods and were found to be comparable. Tests were conducted using both synthetic dairy wastewater and effluent from multiple WWTPs, both municipal and industrial. 
This system has an advantage over current methods as it provides a multipoint analysis that can be easily repeated for large volumes of wastewater effluent. Although the system was specifically designed and tested for wastewater treatment applications, it could also be applied in drinking water treatment and in other areas where fluid turbidity is an important measurement.
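A plausible sketch of the depth-based idea described above (not the authors' actual image-processing pipeline): if imaged intensity decays as I(z) = I0·exp(−c·z) with fluid depth, then the attenuation coefficient c, which grows with turbidity, can be recovered by a log-linear fit:

```python
import math

def attenuation_coefficient(depths, intensities):
    """Estimate c in I(z) = I0 * exp(-c * z) by least squares on
    log-intensity versus depth (a log-linear Beer-Lambert-style fit)."""
    logs = [math.log(v) for v in intensities]
    n = len(depths)
    mz = sum(depths) / n
    ml = sum(logs) / n
    num = sum((z - mz) * (l - ml) for z, l in zip(depths, logs))
    den = sum((z - mz) ** 2 for z in depths)
    return -num / den

# Synthetic "image column": intensity sampled at 20 depths in a fluid
# whose true attenuation coefficient is 3.0 per metre (illustrative).
depths = [0.01 * k for k in range(1, 21)]
column = [100.0 * math.exp(-3.0 * z) for z in depths]
c_hat = attenuation_coefficient(depths, column)
```

Because the fit uses many depth samples at once, it is a multipoint estimate in the spirit of the system described, rather than a single-point reading like a submerged nephelometric sensor.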
22

Skaf, Nour, Olivier Guyon, Éric Gendron, Kyohoon Ahn, Arielle Bertrou-Cantou, Anthony Boccaletti, Jesse Cranney, et al. "On-sky validation of image-based adaptive optics wavefront sensor referencing." Astronomy & Astrophysics 659 (March 2022): A170. http://dx.doi.org/10.1051/0004-6361/202141514.

Abstract:
Context. Differentiating between a true exoplanet signal and residual speckle noise is a key challenge in high-contrast imaging (HCI). Speckles result from a combination of fast, slow, and static wavefront aberrations introduced by atmospheric turbulence and instrument optics. While wavefront control techniques developed over the last decade have shown promise in minimizing fast atmospheric residuals, slow and static aberrations such as non-common path aberrations (NCPAs) remain a key limiting factor for exoplanet detection. NCPAs are not seen by the wavefront sensor (WFS) of the adaptive optics (AO) loop, hence the difficulty in correcting them. Aims. We propose to improve the identification and rejection of slow and static speckles in AO-corrected images. The algorithm known as the Direct Reinforcement Wavefront Heuristic Optimisation (DrWHO) performs a frequent compensation operation on static and quasi-static aberrations (including NCPAs) to boost image contrast. It is applicable to general-purpose AO systems as well as HCI systems. Methods. By changing the WFS reference at every iteration of the algorithm (a few tens of seconds), DrWHO changes the AO system point of convergence to lead it towards a compensation mechanism for the static and slow aberrations. References are calculated using an iterative lucky-imaging approach, where each iteration updates the WFS reference, ultimately favoring high-quality focal plane images. Results. We validated this concept through both numerical simulations and on-sky testing on the SCExAO instrument at the 8.2-m Subaru telescope. Simulations show a rapid convergence towards the correction of 82% of the NCPAs. On-sky tests were performed over a 10 min run in the visible (750 nm). We introduced a flux concentration (FC) metric to quantify the point spread function (PSF) quality and measure a 15.7% improvement compared to the pre-DrWHO image. Conclusions. 
The DrWHO algorithm is a robust focal-plane wavefront sensing calibration method that has been successfully demonstrated on-sky. It does not rely on a model and does not require wavefront sensor calibration or linearity. It is compatible with different wavefront control methods, and can be further optimized for speed and efficiency. The algorithm is ready to be incorporated in scientific observations, enabling better PSF quality and stability during observations.
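One simple reading of a flux concentration (FC) metric, the fraction of image flux within a small radius of the point spread function peak, can be sketched as follows. This definition is an assumption for illustration; the abstract does not give the paper's exact FC formula:

```python
def flux_concentration(image, radius):
    """Fraction of total image flux falling within `radius` pixels of the
    brightest pixel (one plausible flux-concentration definition)."""
    h, w = len(image), len(image[0])
    total = sum(sum(row) for row in image)
    # Locate the PSF peak as the brightest pixel.
    py, px = max(((y, x) for y in range(h) for x in range(w)),
                 key=lambda p: image[p[0]][p[1]])
    core = sum(image[y][x]
               for y in range(h) for x in range(w)
               if (y - py) ** 2 + (x - px) ** 2 <= radius ** 2)
    return core / total
```

A sharper PSF puts more flux into the core aperture, so an iteration that raises this number (as DrWHO's lucky-imaging reference updates aim to do) is improving image quality.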
23

Roelofs, Freek, Christian M. Fromm, Yosuke Mizuno, Jordy Davelaar, Michael Janssen, Ziri Younsi, Luciano Rezzolla, and Heino Falcke. "Black hole parameter estimation with synthetic very long baseline interferometry data from the ground and from space." Astronomy & Astrophysics 650 (June 2021): A56. http://dx.doi.org/10.1051/0004-6361/202039745.

Abstract:
Context. The Event Horizon Telescope (EHT) has imaged the shadow of the supermassive black hole in M 87. A library of general relativistic magnetohydrodynamics (GRMHD) models was fit to the observational data, providing constraints on black hole parameters. Aims. We investigate how much better future experiments can realistically constrain these parameters and test theories of gravity. Methods. We generated realistic synthetic 230 GHz data from representative input models taken from a GRMHD image library for M 87, using the 2017, 2021, and an expanded EHT array. The synthetic data were run through an automated data reduction pipeline used by the EHT. Additionally, we simulated observations at 230, 557, and 690 GHz with the Event Horizon Imager (EHI) Space VLBI concept. Using one of the EHT parameter estimation pipelines, we fit the GRMHD library images to the synthetic data and investigated how the black hole parameter estimations are affected by different arrays and repeated observations. Results. Repeated observations play an important role in constraining black hole and accretion parameters as the varying source structure is averaged out. A modest expansion of the EHT already leads to stronger parameter constraints in our simulations. High-frequency observations from space with the EHI rule out all but ∼15% of the GRMHD models in our library, strongly constraining the magnetic flux and black hole spin. The 1σ constraints on the black hole mass improve by a factor of five with repeated high-frequency space array observations as compared to observations with the current ground array. If the black hole spin, magnetization, and electron temperature distribution can be independently constrained, the shadow size for a given black hole mass can be tested to ∼0.5% with the EHI space array, which allows tests of deviations from general relativity. 
With such a measurement, high-precision tests of the Kerr metric become within reach from observations of the Galactic Center black hole Sagittarius A*.
24

Berridge, C. T., L. H. Hadju, and A. J. Dolman. "How well can we predict soil respiration with climate indicators, now and in the future?" Biogeosciences Discussions 11, no. 2 (February 4, 2014): 1977–99. http://dx.doi.org/10.5194/bgd-11-1977-2014.

Abstract:
Abstract. Soils contain the largest terrestrial store of carbon; three times greater than present atmospheric concentrations, whilst the annual soil-atmosphere exchange of carbon is an order of magnitude larger than all anthropogenic effluxes. Quantifying future pool sizes and fluxes is therefore sensitive to small methodological errors, yet unfortunately remains the second largest area of uncertainty in Intergovernmental Panel on Climate Change projections. The flux of carbon from heterotrophic decomposition of soil organic matter is parameterized as a rate constant. This parameter is calculated from observed total soil carbon efflux and contemporaneously observed temperature and soil moisture. This metric is then used to simulate future rates of heterotrophic respiration, as driven by projections of future climate (temperature and precipitation). We examine two underlying assumptions: how well current climate (mean temperature and precipitation) can account for contemporary soil respiration, and whether an observational parameter derived from this data will be valid in the future. We find mean climate values to be of some use in capturing total soil respiration to the 95% confidence interval, but note an inability to distinguish between subtropical and Mediterranean fluxes, or wetland-grassland and wetland-forest fluxes. Regarding the future, we present a collection of CO2 enrichment studies demonstrating a strong agreement in soil respiration response (a 25% increase) independent of changes in temperature and moisture; however, these data are spatially limited to the northern mid-latitudes. In order to “future-proof” simple statistical parameters used to calculate the output from heterotrophic soil respiration, we propose a correction factor derived from empirical observations, but note the spatial and temporal limitations. 
In conclusion, there seems to be no sound basis to assume that models with the best fit to contemporary data will produce the best estimates of future fluxes, given the methods, future dynamics and the nature of the observational constraints. Only through long-term field observations and appropriate, perhaps novel, data collection can we improve statistical respiration modelling, without adding mechanistic details at a computational cost.
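The rate-constant parameterisation described above is commonly written in Q10 form; a minimal sketch with a multiplicative correction factor of the kind the text proposes (the ~25% CO2-enrichment response corresponding to a factor of 1.25; the functional form and values are illustrative, not taken from the paper):

```python
def heterotrophic_respiration(r_ref, q10, temp_c, t_ref=10.0, co2_factor=1.0):
    """Classic Q10-style respiration model,
    R = R_ref * Q10^((T - T_ref) / 10) * co2_factor,
    where co2_factor is an empirical correction (e.g. 1.25 for the
    ~25% CO2-enrichment response discussed in the text)."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0) * co2_factor
```

With Q10 = 2, a 10 °C warming doubles the modelled flux, and applying the enrichment correction scales the result by a further 25%, which is exactly the kind of compounding that makes small parameter errors costly in long projections.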
25

Dhungel, Sulochan, and Michael Barber. "Estimating Calibration Variability in Evapotranspiration Derived from a Satellite-Based Energy Balance Model." Remote Sensing 10, no. 11 (October 26, 2018): 1695. http://dx.doi.org/10.3390/rs10111695.

Abstract:
Computing evapotranspiration (ET) with satellite-based energy balance models such as METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) requires internal calibration of sensible heat flux using anchor pixels (“hot” and “cold” pixels). Despite the development of automated anchor pixel selection methods that classify a pool of candidate pixels using the amount of vegetation (normalized difference vegetation index, NDVI) and surface temperature (Ts), final pixel selection still relies heavily on operator experience. Yet, differences in final ET estimates resulting from subjectivity in selecting the final “hot” and “cold” pixel pair (from within the candidate pixel pool) have not yet been investigated. This is likely because surface properties of these candidate pixels, as quantified by NDVI and surface temperature, are generally assumed to have low variability that can be attributed to random noise. In this study, we test the assumption of low variability by first applying an automated calibration pixel selection process to 42 nearly cloud-free Landsat images of the San Joaquin area in California taken between 2013 and 2015. We then compute dT (the vertical near-surface temperature difference) vs. Ts relationships at all pixels that could potentially be used for model calibration in order to explore ET variance between the results from multiple calibration schemes where NDVI and Ts variability is intrinsically negligible. Our results show significant variability in ET (ranging from 5% to 20%) and a high, and statistically consistent, variability in dT values, indicating that there are additional surface properties affecting the calibration process not captured when using only NDVI and Ts. Our findings further highlight the potential for calibration improvements by showing that the dT vs. 
Ts calibration relationship between the cold anchor pixel (with lowest dT) and the hot anchor pixel (with highest dT) consistently provides the best daily ET estimates. This approach of quantifying ET variability based on candidate pixel selection and the accompanying results illustrate an approach to quantify the biases inadvertently introduced by user subjectivity and can be used to inform improvements on model usability and performance.
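The internal calibration at the heart of METRIC-style models fits a linear dT = a + b·Ts relation through the two anchor pixels, which is where the operator subjectivity studied above enters: a different anchor pair gives a different line. A minimal sketch with illustrative anchor values:

```python
def dt_vs_ts_calibration(cold_pixel, hot_pixel):
    """Linear dT = a + b * Ts calibration through two anchor pixels,
    each given as (Ts in K, dT in K). Pixel selection itself, the hard
    part discussed above, is outside this sketch."""
    (ts_c, dt_c), (ts_h, dt_h) = cold_pixel, hot_pixel
    b = (dt_h - dt_c) / (ts_h - ts_c)
    a = dt_c - b * ts_c
    return a, b

# Cold anchor: well-watered crop (dT ~ 0); hot anchor: dry bare soil.
# The Ts and dT numbers here are illustrative, not from the paper.
a, b = dt_vs_ts_calibration((295.0, 0.0), (315.0, 8.0))
dt_at_305 = a + b * 305.0  # dT predicted for an intermediate pixel
```

Shifting either anchor shifts a and b, and hence the sensible heat flux and ET at every pixel in the scene, which is exactly the 5–20% ET spread the study quantifies.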
26

Pazmiño, Andrea, Sophie Godin-Beekmann, Alain Hauchecorne, Chantal Claud, Sergey Khaykin, Florence Goutail, Elian Wolfram, Jacobo Salvador, and Eduardo Quel. "Multiple symptoms of total ozone recovery inside the Antarctic vortex during austral spring." Atmospheric Chemistry and Physics 18, no. 10 (May 31, 2018): 7557–72. http://dx.doi.org/10.5194/acp-18-7557-2018.

Abstract:
Abstract. The long-term evolution of total ozone column inside the Antarctic polar vortex is investigated over the 1980–2017 period. Trend analyses are performed using a multilinear regression (MLR) model based on various proxies for the evaluation of ozone interannual variability (heat flux, quasi-biennial oscillation, solar flux, Antarctic oscillation and aerosols). Annual total ozone column measurements corresponding to the mean monthly values inside the vortex in September and during the period of maximum ozone depletion from 15 September to 15 October are used. Total ozone columns from the Multi-Sensor Reanalysis version 2 (MSR-2) dataset and from a combined record based on TOMS and OMI satellite datasets with gaps filled by MSR-2 (1993–1995) are considered in the study. Ozone trends are computed by a piece-wise trend (PWT) proxy that includes two linear functions before and after the turnaround year in 2001 and a parabolic function to account for the saturation of the polar ozone destruction. In order to evaluate average total ozone within the vortex, two classification methods are used, based on the potential vorticity gradient as a function of equivalent latitude. The first standard one considers this gradient at a single isentropic level (475 or 550 K), while the second one uses a range of isentropic levels between 400 and 600 K. The regression model includes a new proxy (GRAD) linked to the gradient of potential vorticity as a function of equivalent latitude and representing the stability of the vortex during the studied month. The determination coefficient (R2) between observations and modelled values increases by ∼ 0.05 when this proxy is included in the MLR model. Highest R2 (0.92–0.95) and minimum residuals are obtained for the second classification method for both datasets and months. 
Trends in September over the 2001–2017 period are statistically significant at the 2σ level, with values ranging between 1.84 ± 1.03 and 2.83 ± 1.48 DU yr−1 depending on the methods and considered proxies. This result confirms the recent studies of Antarctic ozone healing during that month. Trends from 2001 are 2 to 3 times smaller than before the turnaround year, as expected from the response to the slow decrease of ozone-depleting substances in polar regions. For the first time, significant trends are found for the period of maximum ozone depletion. Estimated trends for the 15 September–15 October period over 2001–2017 vary from 1.21 ± 0.83 to 1.96 ± 0.99 DU yr−1 and are significant at the 2σ level. MLR analysis is also applied to the ozone mass deficit (OMD) metric for both periods, considering a threshold at 220 DU and total ozone columns south of 60° S. Significant trend values are observed for all cases and periods. A decrease of OMD of 0.86 ± 0.36 and 0.65 ± 0.33 Mt yr−1 since 2001 is observed in September and 15 September–15 October, respectively. Ozone recovery is also confirmed by a steady decrease of the relative area of total ozone values lower than 175 DU within the vortex in the 15 September–15 October period since 2010 and a delay in the occurrence of ozone levels below 125 DU since 2005.
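The piece-wise trend (PWT) proxy described above can be sketched as a pair of ramp functions hinged at the turnaround year; regressing ozone on them yields the pre- and post-2001 linear trends. The parabolic saturation term is omitted and the coefficients below are illustrative, not the paper's fitted values:

```python
def pwt_proxy(year, turnaround=2001.0):
    """Piece-wise trend basis: two ramps meeting at the turnaround year.
    In y = c0 + c1 * p1 + c2 * p2, c1 is the pre-turnaround trend and
    c2 the post-turnaround trend."""
    pre = min(year, turnaround) - turnaround   # nonzero before 2001
    post = max(year, turnaround) - turnaround  # nonzero after 2001
    return pre, post

def model(year, c0=250.0, c1=-4.0, c2=2.0):
    """Illustrative ozone column (DU): decline of 4 DU/yr before 2001,
    recovery of 2 DU/yr afterwards."""
    p1, p2 = pwt_proxy(year)
    return c0 + c1 * p1 + c2 * p2
```

With these toy coefficients the column sits at 290 DU in 1991, bottoms out at 250 DU in 2001 and recovers to 270 DU by 2011, reproducing the turnaround shape the MLR is designed to capture.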
27

Belov, Aleksandr A., and Yuriy A. Sobchenko. "Modeling the Distribution of the Electromagnetic Field in the Device for Adding the Power of Microwave Magnetrons." Elektrotekhnologii i elektrooborudovanie v APK 67, no. 1 (March 28, 2020): 11–15. http://dx.doi.org/10.22314/2658-4859-2020-67-1-11-15.

Abstract:
Substantiation of the physical nature of adding the power of magnetron generators in the device’s waveguide, assessment of the electromagnetic properties of objects without experimental verification, and testing of scientific and technical hypotheses for adequacy without creating prototypes are possible by modeling the distribution of the electromagnetic field in the CST Studio program. (Research purpose) The research purpose is to simulate the distribution of the electromagnetic field in the device for adding the power of microwave magnetrons. (Materials and methods) The CST Studio software was used to design models of the distribution of the electromagnetic field in the device for adding the power of microwave magnetrons. (Results and discussion) Models of the distribution of the electromagnetic field in the device for adding the power of microwave magnetrons were designed using the CST Studio program. It was found that at operating frequencies of 1, 3, 4 and 5 GHz the electromagnetic field of varying density is dispersed throughout the volume of the waveguide, which does not allow the use of a waveguide of the corresponding size to perform the basic functions of transmission and distribution of microwave energy, while at the operating frequency of 2 GHz the basic operating conditions of the waveguide are met. It was also found that, within the boundaries of the waveguide of the developed device, the force lines of the electromagnetic field propagate from the surface of the emitters into the space of the waveguide, are reflected from the walls, spread over the volume of the waveguide, and travel toward the open output end. 
(Conclusions) The generated pulses of ultra-high-frequency electromagnetic power from the two magnetrons, corresponding to a power flux density of 116 volt-amperes per square metre, propagate throughout the volume of the waveguide at time intervals set by the operating frequency. Their field lines travel one way, from the short-circuited end toward the open part of the waveguide, and are localized with the highest concentration in the horn-shaped output end. This is explained by power addition once a travelling wave is established through matching of the metric dimensions of the waveguide to the working wavelength.
28

Vodala, Sadanand, Andrew Nguyen, Noe Rodriguez, Peter Sieling, Charles Joseph Vaske, Jon Van Lew, Kayvan Niazi, John H. Lee, Patrick Soon-Shiong, and Shahrooz Rabizadeh. "TCR repertoire analysis from peripheral blood for prognostic assessment of patients during treatment." Journal of Clinical Oncology 37, no. 15_suppl (May 20, 2019): e14040-e14040. http://dx.doi.org/10.1200/jco.2019.37.15_suppl.e14040.

Abstract:
e14040 Background: Immune checkpoint inhibitors offer substantial clinical advantage to a subset of patients, but predictive and novel prognostic indicators are still scarce. T cell receptors (TCRs) play a crucial role in adaptive immunity and anti-tumor immune responses. The net diversity of TCR repertoires is altered in patients receiving immune checkpoint inhibitors. To study the prognostic significance of T cell repertoires as a biomarker of immune responses in cancer patients, we characterized TCR repertoires from peripheral blood using high throughput sequencing. Methods: Total RNA from peripheral blood mononuclear cells (PBMCs) was extracted and used to generate sequencing libraries from five pancreatic cancer patients, four triple negative breast cancer (TNBC) patients and two squamous cell carcinoma (SCC+) patients. Blood draws were collected pre- and post-treatment, and target lesion analysis was performed using immune-related response criteria (irRC) and Recist1.1. TCR alpha and beta CDR3s were clonotyped for each sample, and profiles of clonotype proportions were tracked through time/serial biopsies. Additionally, the Shannon-Wiener diversity metric was calculated for each time point. Results: Patient samples showing consistent positive responses as measured by irRC and Recist1.1 showed TCR clones persisting throughout all time points. A TNBC super responder showed dramatic increases in the mean Shannon-Wiener index from 74 prior to treatment to 1177 at the first biopsy post-treatment (34% decrease by irRC and 26% by Recist1.1 analysis) and achieved an index as high as 3516 in a subsequent biopsy (83% and 64% decrease by irRC and Recist 1.1, respectively). Patients that showed poor response by irRC and Recist1.1 showed TCR clones that were in constant flux. Loss of clonality and/or a decrease in absolute numbers was observed at earlier and later time points. 
Conclusions: Patients that show positive response had TCR clones that were stable, which may indicate an existing immune related response towards their tumor. Therapy would allow these existing T-cells to overcome blockade by tumor cells. Patients showing poor response show a TCR repertoire that is constantly changing potentially indicating that the tumor cells are not eliciting a strong T cell specific response. Further functional studies of T cell populations would expand our understanding of T cell based immune therapies.
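The Shannon-Wiener diversity metric used here is the standard entropy of clonotype frequencies. A minimal sketch follows; note as an assumption that the large index values quoted in the abstract look closer to an effective clonotype number than to raw entropy, so exp(H) is shown as well:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity H = -sum(p * ln p) over clonotype counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def effective_clonotypes(counts):
    """exp(H): diversity expressed as the equivalent number of equally
    abundant clonotypes (the 'effective number' reading of the index)."""
    return math.exp(shannon_wiener(counts))
```

A repertoire of four equally abundant clonotypes gives H = ln 4 and an effective number of exactly 4; a repertoire dominated by one expanded clone gives a much lower value, which is how stable versus fluctuating repertoires separate in this analysis.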
APA, Harvard, Vancouver, ISO, and other citation styles
29

Savill, Tim, Eifion Jewell, and Peter Barker. "Development of Techniques and Non-Destructive Methods for in-Situ Performance Monitoring of Organically Coated Pre-Finished Cladding Used in the Construction Sector." ECS Meeting Abstracts MA2022-01, no. 16 (July 7, 2022): 1016. http://dx.doi.org/10.1149/ma2022-01161016mtgabs.

Full text source
Abstract:
Asset corrosion is a huge problem for the construction and other industries, with an estimated cost of approximately GBP 300 billion in the EU in 2013 [1]. To mitigate this cost and protect metal substrates from corrosion, organic coatings are often used. In 2017 the EU produced 4 million metric tonnes of organically coated steel, a large quantity of which is used for the production of building cladding material [2]. Cladding material is widely used in the construction of commercial, industrial, and residential buildings due to its convenience and speed of construction, as well as its aesthetic and weather-resistant properties. Architects and customers are increasingly using pre-finished coated steel panels to provide a sleek, modern design. In order to maintain the required aesthetic value offered by these panels, it is of crucial importance that the coatings provide appropriate protection from the harsh conditions faced by building facades, and it is paramount that manufacturers of the cladding can provide reassurances of long-term coating performance to give confidence to the end customer. Despite this, coating performance is currently estimated only by accelerated lab-based tests and some short-term outdoor exposure testing. These tests are carried out in conditions that produce results that are often not representative of real life, leading to earlier-than-expected failure of the product in some conditions. The ability to monitor the environments that the coatings are exposed to, as well as the actual real-time performance of the coating itself, would provide a far better basis for determining the expected lifetime of the coated product, as well as for maintenance scheduling and failure prevention. Furthermore, it would reduce the requirement for human inspection and allow remedial maintenance before the damage becomes significant enough to warrant replacement.
The advantages of in-situ, real-time monitoring have long been recognized by the oil and gas industry; however, at this point in time it is the only sector deploying significant corrosion and coating monitoring techniques. As we move to a more connected world, with an increase in devices and IoT systems, there is growing interest in sensing within the construction sector. There has been significant research effort to develop corrosion sensing of concrete-embedded rebar [3–5], and it is clear there is an appetite to grow the field of asset monitoring. The research undertaken develops novel deployments of existing techniques, as well as new techniques, to detect both corrosion of metallic substrates and degradation and failure of the organic coatings. The overall aim is to produce a sensor system that can work autonomously over long periods. This presented difficulties in terms of powering, communication, durability, deployment, and sensitivity. The ideas explored include capacitive sensing, magnetic flux leakage, RFID EMI-based corrosion sensing, and radiofrequency-based dielectric sensing. The designed sensors show promise in detecting early stages of corrosion and coating failure, as well as in indicating the severity of such changes. The work presented will discuss the challenges faced and how they were, or are being, overcome, as well as the current sensor development and results.
References:
1. Koch GH, Varney J, Thompson N, Moghissi O, Gould M, et al. International measures of prevention, application, and economics of corrosion technologies study. NACE International, Houston; 2012.
2. Eurofer. European Steel in Figures 2008-2017. 2018.
3. James A, Bazarchi E, Chiniforush AA, Panjebashi Aghdam P, Hosseini MR, Akbarnezhad A, et al. Rebar corrosion detection, protection, and rehabilitation of reinforced concrete structures in coastal environments: A review. Constr Build Mater. 2019;224:1026–39. Available from: https://www.sciencedirect.com/science/article/pii/S0950061819319208
4. Xie L, Zhu X, Liu Z, Liu X, Wang T, Xing J. A rebar corrosion sensor embedded in concrete based on surface acoustic wave. Measurement. 2020;165:108118. Available from: https://www.sciencedirect.com/science/article/pii/S0263224120306564
5. Fan L, Shi X. Techniques of corrosion monitoring of steel rebar in reinforced concrete structures: A review. Struct Health Monit. 0(0):14759217211030912. Available from: https://doi.org/10.1177/14759217211030911
APA, Harvard, Vancouver, ISO, and other citation styles
30

Irshad, Liu, Arshad, Sohail, Murthy, Khokhar, and Uba. "A Novel Localization Technique Using Luminous Flux." Applied Sciences 9, no. 23 (November 21, 2019): 5027. http://dx.doi.org/10.3390/app9235027.

Full text source
Abstract:
As global navigation satellite system (GNSS) signals are unable to penetrate indoor spaces, substitute methods such as indoor localization based on visible light communication (VLC) are gaining the attention of researchers. In this paper, a systematic investigation of a VLC channel is performed for both direct and indirect line of sight (LoS) by utilizing the impulse response of indoor optical wireless channels. In order to examine the localization scenario, two light-emitting diode (LED) grid patterns are used. The received signal strength (RSS) is observed based on the positional dilution of precision (PDoP), a subset of the dilution of precision (DoP) used in GNSS positioning. In total, 31 × 31 possible positional tags are set for a given PDoP configuration. Positional error is evaluated in terms of root mean square error (RMSE) and the sum of squared errors (SSE). The performance of the proposed approach is validated by simulation results for the selected indoor space, which show that position accuracy is enhanced at short range by 24% when the PDoP metric is utilized. As confirmation, the modeled accuracy is compared with perceived accuracy results. This study informs the application and design of future optical wireless systems, specifically for indoor localization.
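The RMSE and SSE error measures used in this abstract can be sketched as follows for a set of estimated versus true 2D tag positions. The sample coordinates are hypothetical; this illustrates only the error metrics themselves, not the paper's RSS/PDoP position estimator.

```python
import math

def sse(estimated, true):
    """Sum of squared Euclidean position errors over all positional tags."""
    return sum((ex - tx) ** 2 + (ey - ty) ** 2
               for (ex, ey), (tx, ty) in zip(estimated, true))

def rmse(estimated, true):
    """Root mean square positional error across the tag grid."""
    return math.sqrt(sse(estimated, true) / len(true))

# Hypothetical subset of the 31 x 31 positional tags (metres) and RSS-based estimates
true_tags = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
estimates = [(0.1, 0.0), (1.0, 0.9), (2.2, 2.0)]

print(rmse(estimates, true_tags))
```

In the paper these metrics would be accumulated over all 31 × 31 candidate tag positions for each PDoP configuration.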
APA, Harvard, Vancouver, ISO, and other citation styles
31

Teschner, F., and Ch Mundt. "Fully conservative overset mesh to overcome the blunt body metric singularity in finite difference methods." Shock Waves, May 12, 2022. http://dx.doi.org/10.1007/s00193-022-01078-2.

Full text source
Abstract:
A fully conservative overset mesh method is proposed and applied to overcome the metric singularity at the symmetry line for blunt bodies, e.g., capsules and blunted cones, in general curvilinear coordinates. The overset mesh is placed automatically at the symmetry line, avoiding the collapse of the grid lines by using a hexahedral structure in contrast to the prismatic structure of a body-orientated mesh. In addition, the grid points of the overset mesh coincide with those of the body-orientated mesh, so no interpolation is needed to interchange the flow variables between the two meshes. This coincidence ensures the conservation of the flow variables and avoids uncertainties at the shock, as the method is naturally conservative. The thin-layer Navier–Stokes equations for high-Reynolds-number flows are solved using an AUSM+ or an AUSMPW+ flux vector splitting in combination with a mesh adaption to capture the shock accurately. For verification of the proposed method, a supersonic 2D axisymmetric hemisphere-cylinder is chosen and the results along the wall are verified. Furthermore, the conservative properties of the applied overset mesh method are shown and the results on the stagnation line are presented. In addition, a supersonic 3D calculation is investigated to show the applicability of the presented method for simulations with an angle of attack.
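For readers unfamiliar with flux vector splitting, a minimal 1D sketch of the AUSM+ interface flux mentioned in the abstract is given below, using the split Mach and pressure polynomials of Liou's AUSM+ scheme (α = 3/16, β = 1/8). The arithmetic-mean interface sound speed is a simplification of this sketch, and the paper's own 3D curvilinear thin-layer Navier-Stokes implementation is of course far more involved.

```python
import math

GAMMA = 1.4
ALPHA, BETA = 3.0 / 16.0, 1.0 / 8.0

def split_mach(M, sign):
    # AUSM+ fourth-order split Mach polynomials; sign=+1 left state, -1 right state
    if abs(M) >= 1.0:
        return 0.5 * (M + sign * abs(M))
    return sign * 0.25 * (M + sign) ** 2 + sign * BETA * (M * M - 1.0) ** 2

def split_pressure(M, sign):
    # AUSM+ fifth-order split pressure polynomials
    if abs(M) >= 1.0:
        return 0.5 * (1.0 + sign * math.copysign(1.0, M))
    return 0.25 * (M + sign) ** 2 * (2.0 - sign * M) \
        + sign * ALPHA * M * (M * M - 1.0) ** 2

def ausm_plus_flux(rho_l, u_l, p_l, rho_r, u_r, p_r):
    """1D AUSM+ interface flux (mass, momentum, energy) for the Euler equations."""
    a_l = math.sqrt(GAMMA * p_l / rho_l)
    a_r = math.sqrt(GAMMA * p_r / rho_r)
    a_half = 0.5 * (a_l + a_r)  # simplified interface sound speed
    m_half = split_mach(u_l / a_half, +1.0) + split_mach(u_r / a_half, -1.0)
    p_half = (split_pressure(u_l / a_half, +1.0) * p_l
              + split_pressure(u_r / a_half, -1.0) * p_r)
    # Upwind the convected vector Psi = (rho, rho*u, rho*H) by the interface Mach sign
    if m_half >= 0.0:
        rho, u, p = rho_l, u_l, p_l
    else:
        rho, u, p = rho_r, u_r, p_r
    H = GAMMA / (GAMMA - 1.0) * p / rho + 0.5 * u * u  # total enthalpy
    mdot = a_half * m_half * rho
    return (mdot, mdot * u + p_half, mdot * H)
```

For a uniform subsonic state the split polynomials recombine exactly (M⁺ + M⁻ = M, P⁺ + P⁻ = 1), so the scheme reproduces the exact Euler flux; the upwinding only activates across discontinuities such as the bow shock discussed in the paper.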
APA, Harvard, Vancouver, ISO, and other citation styles