Journal articles on the topic 'Temporal window estimation'

Listed below are the top 50 journal articles for research on the topic 'Temporal window estimation.'


1

Heagerty, Patrick, Michael D. Ward, and Kristian Skrede Gleditsch. "Windows of Opportunity: Window Subseries Empirical Variance Estimators in International Relations." Political Analysis 10, no. 3 (2002): 304–17. http://dx.doi.org/10.1093/pan/10.3.304.

Abstract:
We show that temporal, spatial, and dyadic dependencies among observations complicate the estimation of covariance structures in panel databases. Ignoring these dependencies results in covariance estimates that are often too small and inferences that may be more confident about empirical patterns than is justified by the data. In this article, we detail the development of a nonparametric approach, window subseries empirical variance estimators (WSEV), that can more fully capture the impact of these dependencies on the covariance structure. We illustrate this approach in a simulation as well as with a statistical model of international conflict similar to many applications in the international relations literature.
2

Taroni, Matteo, Giorgio Vocalelli, and Andrea De Polis. "Gutenberg–Richter B-Value Time Series Forecasting: A Weighted Likelihood Approach." Forecasting 3, no. 3 (August 6, 2021): 561–69. http://dx.doi.org/10.3390/forecast3030035.

Abstract:
We introduce a novel approach to estimate the temporal variation of the b-value parameter of the Gutenberg–Richter law, based on the weighted likelihood approach. This methodology allows estimating the b-value based on the full history of the available data, within a data-driven setting. We test this methodology against the classical “rolling window” approach using a high-definition Italian seismic catalogue as well as a global catalogue of high magnitudes. The weighted likelihood approach outperforms competing methods, and measures the optimal amount of past information relevant to the estimation.
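The contrast between the classical rolling-window estimator and a weighted-likelihood estimator can be sketched with Aki's maximum-likelihood b-value formula. This is an illustrative reconstruction, not the paper's exact scheme: the synthetic catalogue, completeness magnitude, and exponential decay scale below are all invented for the example.

```python
import numpy as np

LOG10_E = np.log10(np.e)

def b_value(mags, mc):
    """Aki's maximum-likelihood b-value for magnitudes at or above completeness mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return LOG10_E / (m.mean() - mc)

def b_value_weighted(mags, mc, weights):
    """Weighted-likelihood variant: the plain mean becomes a weighted mean."""
    m = np.asarray(mags, dtype=float)
    w = np.asarray(weights, dtype=float)
    keep = m >= mc
    m, w = m[keep], w[keep]
    return LOG10_E / (np.sum(w * m) / np.sum(w) - mc)

# Synthetic catalogue with a true b-value of 1.0 (the Gutenberg-Richter law implies
# exponentially distributed magnitudes above the completeness level).
rng = np.random.default_rng(0)
mc = 2.0
mags = mc + rng.exponential(scale=LOG10_E / 1.0, size=5000)

# Classical rolling window: only the most recent n events contribute.
b_roll = b_value(mags[-500:], mc)

# Weighted likelihood: every event contributes, older ones with decaying weight
# (the exponential decay scale here is an arbitrary placeholder).
ages = np.arange(len(mags))[::-1]      # 0 = most recent event
b_wl = b_value_weighted(mags, mc, np.exp(-ages / 500.0))
```

Both estimators recover a b-value near 1.0 here; they differ in how much past information enters the estimate, which is exactly what the weighted-likelihood approach tunes in a data-driven way.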
3

Agron, Danielle Jaye S., Jae-Min Lee, and Dong-Seong Kim. "Nozzle Thermal Estimation for Fused Filament Fabricating 3D Printer Using Temporal Convolutional Neural Networks." Applied Sciences 11, no. 14 (July 12, 2021): 6424. http://dx.doi.org/10.3390/app11146424.

Abstract:
A preventive-maintenance system embedded in the fused deposition modeling (FDM) printing process is proposed. An integrated monitoring and control system is developed to reduce the risk of thermal degradation of the fabricated products and to prevent a common printing failure: nozzle clogging. For monitoring, the proposed temporal convolutional network with a two-stage sliding-window strategy (TCN-TS-SW) is used to accurately predict the thermal values at the nozzle tip. These estimates serve as the stimulus for the control system, which performs countermeasures to prevent the anomaly that is bound to happen. The performance of the proposed TCN-TS-SW is presented in three case studies. In the first, the proposed system outperforms other existing machine learning algorithms, namely multi-look-back LSTM, GRU, LSTM, and the generic TCN architecture, achieving the highest training accuracy and lowest training loss; TCN-TS-SW also outperformed these algorithms in prediction accuracy as measured by performance metrics such as RMSE, MAE, and R2 scores. The second case examines the effect of varying the window length and the length of the forecasting horizon; this experiment reveals the optimized parameters for the network to produce an accurate nozzle thermal estimation.
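The sliding-window preparation of thermal time series can be illustrated by the standard window/horizon split used to train such forecasters. This is a generic sketch, not the paper's TCN-TS-SW pipeline; the nozzle-temperature series and the window and horizon lengths are placeholders.

```python
import numpy as np

def sliding_windows(series, window, horizon):
    """Split a 1-D series into (input window, forecast target) training pairs."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start : start + window])
        y.append(series[start + window : start + window + horizon])
    return np.array(X), np.array(y)

# Hypothetical nozzle-temperature trace; window and horizon lengths are placeholders.
temps = np.linspace(200.0, 210.0, 50)
X, y = sliding_windows(temps, window=8, horizon=2)
```

Each row of `X` is one input window and the matching row of `y` is the forecast target, so varying `window` and `horizon` (the paper's second case study) simply reshapes this training set.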
4

Tadić, Jovan M., Xuemei Qiu, Scot Miller, and Anna M. Michalak. "Spatio-temporal approach to moving window block kriging of satellite data v1.0." Geoscientific Model Development 10, no. 2 (February 15, 2017): 709–20. http://dx.doi.org/10.5194/gmd-10-709-2017.

Abstract:
Abstract. Numerous existing satellites observe physical or environmental properties of the Earth system. Many of these satellites provide global-scale observations, but these observations are often sparse and noisy. By contrast, contiguous, global maps are often most useful to the scientific community (i.e., Level 3 products). We develop a spatio-temporal moving window block kriging method to create contiguous maps from sparse and/or noisy satellite observations. This approach exhibits several advantages over existing methods: (1) it allows for flexibility in setting the spatial resolution of the Level 3 map, (2) it is applicable to observations with variable density, (3) it produces a rigorous uncertainty estimate, (4) it exploits both spatial and temporal correlations in the data, and (5) it facilitates estimation in real time. Moreover, this approach only requires the assumption that the observable quantity exhibits spatial and temporal correlations that are inferable from the data. We test this method by creating Level 3 products from satellite observations of CO2 (XCO2) from the Greenhouse Gases Observing Satellite (GOSAT), CH4 (XCH4) from the Infrared Atmospheric Sounding Interferometer (IASI) and solar-induced chlorophyll fluorescence (SIF) from the Global Ozone Monitoring Experiment-2 (GOME-2). We evaluate and analyze the difference in performance of spatio-temporal vs. recently developed spatial kriging methods.
5

Yang, Zhijia, Wujiao Dai, Rock Santerre, Cuilin Kuang, and Qiang Shi. "A Spatiotemporal Deformation Modelling Method Based on Geographically and Temporally Weighted Regression." Mathematical Problems in Engineering 2019 (December 27, 2019): 1–11. http://dx.doi.org/10.1155/2019/4352396.

Abstract:
The geographically and temporally weighted regression (GTWR) model is a dynamic model which considers the spatiotemporal correlation and the spatiotemporal nonstationarity. Taking into account these advantages, we proposed a spatiotemporal deformation modelling method based on GTWR. In order to further improve the modelling accuracy and efficiency and considering the application characteristics of deformation modelling, the inverse window transformation method is used to search the optimal fitting window width and furthermore the local linear estimation method is used in the fitting coefficient function. Moreover, a comprehensive model for the statistical tests method is proposed in GTWR. The results of a dam deformation modelling application show that the GTWR model can establish a unified spatiotemporal model which can represent the whole deformation trend of the dam and furthermore can predict the deformation of any point in time and space, with stronger flexibility and applicability. Finally, the GTWR model improves the overall temporal prediction accuracy by 43.6% compared to the single-point time-weighted regression (TWR) model.
6

Cho, Woon Hyun, and Terry W. Spencer. "Estimation of polarization and slowness in mixed wavefields." GEOPHYSICS 57, no. 6 (June 1992): 805–14. http://dx.doi.org/10.1190/1.1443294.

Abstract:
A new algorithm is developed for estimating the moveout velocities and polarization states in mixed wavefields recorded on multicomponent array data in the presence of random noise. The algorithm is applicable to a spatial and temporal data window in which more than two events are present. Three fundamental attributes of the waves are determined: polarization angle, apparent slowness, and the change in amplitude between adjacent detectors. In implementing the method, it is assumed that data is recorded at equispaced geophones located in a spatial window in which the three parameters are constant. Robustness is achieved by averaging the transfer matrix over all combinations of the subarrays that have the same transfer matrix. Application of a least‐squares criterion reduces the mathematics to an eigenvalue problem. The eigenvalues are complex, and their magnitude determines the amplitude change factor. The phase is a linear function of frequency with slope that determines the vertical slowness. The eigenvectors are the polarizations. The input data consists of the cross‐power spectra between subarrays that contain the same number of elements and are shifted by zero or one geophone separation. Examples illustrate the application of the algorithm to synthetic data. Numerical test results show that the performance of the method is not sensitive either to the time overlap between events or to the degree of similarity between waveforms.
7

Alammari, Ammar, Ammar Ahmed Alkahtani, Mohd Riduan Ahmad, Ahmed Aljanad, Fuad Noman, and Zen Kawasaki. "Cross-Correlation Wavelet-Domain-Based Particle Swarm Optimization for Lightning Mapping." Applied Sciences 11, no. 18 (September 16, 2021): 8634. http://dx.doi.org/10.3390/app11188634.

Abstract:
Several processing methods have been proposed for estimating the real pattern of the temporal location and spatial map of the lightning strikes. However, due to the complexity of lightning signals, providing accurate lightning maps estimation remains a challenging task. This paper presents a cross-correlation wavelet-domain-based particle swarm optimization (CCWD-PSO) technique for an accurate and robust representation of lightning mapping. The CCWD method provides an initial estimate of the lightning map, while the PSO attempts to optimize the trajectory of the lightning map by finding the optimal sliding window of the cross-correlation. The technique was further enhanced through the introduction of a novel lightning event extraction method that enables faster processing of the lightning mapping. The CCWD-PSO method was validated and verified using three narrow bipolar events (NBEs) flashes. The observed results demonstrate that this technique offers high accuracy in representing the real lightning mapping with low estimation errors.
8

Peng, Hui, Juhong Tie, and Dequan Guo. "An Iterative Axial and Lateral Ultrasound Strain Estimator Using Subband Division." Journal of Medical Imaging and Health Informatics 10, no. 5 (May 1, 2020): 1057–68. http://dx.doi.org/10.1166/jmihi.2020.3024.

Abstract:
Conventional ultrasound strain imaging usually calculates only the axial strain. Although axial strain is the main component of the two-dimensional strain field, lateral displacement and strain estimation can provide additional information about human tissue mechanical properties. Shear strain and Poisson's ratio can be estimated using lateral strain estimation techniques. The low lateral sampling rate and the decorrelation noise of the lateral radio frequency (RF) signal caused by axial displacement motion increase the difficulty of lateral strain estimation. Subband division is a technique that divides a broadband signal into several narrowband signals. In this paper, the application of subband division to axial and lateral strain estimation is studied, and an iterative method for estimating axial and lateral strains based on the subband technique is proposed. The subband division is carried out along the axial direction, so that the bandwidth of the lateral subband signal is maintained and the quality of the lateral sub-strain image is not reduced. In this paper, the number of subbands is three; the compounded lateral strain image is obtained by averaging the sub-strain images. In each iteration, the temporal stretching technique is used to align the axial and lateral RF signals using the axial and lateral displacement estimates, which reduces the decorrelation noise of the RF signals. The length of the temporal stretching window decreases with the number of iterations, gradually improving the accuracy of the temporal stretching. The phase zero algorithm is used to estimate the axial and lateral displacements. The effectiveness of this method is tested by simulations.
The simulation results show that the elastographic signal-to-noise ratio (SNRe) of lateral strain image is increased by about 50%, the elastographic contrast noise ratio (CNRe) of lateral strain image is increased by about 120%, the SNRe of axial strain image is increased by about 4%, the CNRe of axial strain image is increased by 8%, and the signal-to-noise ratio of Poisson’s ratio image is increased by about 40%.
9

Aldrin, Magne, Bjørnar Mortensen, Geir Storvik, Kjell Nedreaas, Asgeir Aglen, and Sondre Aanes. "Improving management decisions by predicting fish bycatch in the Barents Sea shrimp fishery." ICES Journal of Marine Science 69, no. 1 (November 20, 2011): 64–74. http://dx.doi.org/10.1093/icesjms/fsr172.

Full text
Abstract:
Abstract Aldrin, M., Mortensen, B., Storvik, G., Nedreaas, K., Aglen, A., and Aanes, S. 2012. Improving management decisions by predicting fish bycatch in the Barents Sea shrimp fishery. – ICES Journal of Marine Science, 69: 64–74. When the bycatch of juvenile fish within the Barents Sea shrimp fishery is too large, the area is closed to fishing for a certain period. Bycatch is estimated from sampled trawl hauls, for which the shrimp yield is recorded, along with the total number of various bycatch fish species. At present, bycatch estimation is based on a simple estimator, the sum of the number of fish caught within the area of interest within a small time window, divided by the corresponding shrimp yield (in weight). No historical data are used. A model-based estimation is proposed in which spatio-temporal models are constructed for the variation in both the yield of shrimp and the amount of bycatch in space and time. The main effects are described through generalized additive models, and local dependence structures are specified through correlated random effects. Model estimation includes historical and recent data. Experiments with both simulated and real data show that the model-based estimator outperforms the present simple estimator when a low or moderate number of samples (e.g. <20) is available, whereas the two estimators are equally good when the number of samples is high.
10

Atencia, A., M. C. Llasat, L. Garrote, and L. Mediero. "Effect of radar rainfall time resolution on the predictive capability of a distributed hydrologic model." Hydrology and Earth System Sciences Discussions 7, no. 5 (October 13, 2010): 7995–8043. http://dx.doi.org/10.5194/hessd-7-7995-2010.

Full text
Abstract:
Abstract. The performance of distributed hydrological models depends on the resolution, both spatial and temporal, of the rainfall surface data introduced. The estimation of quantitative precipitation from meteorological radar or satellite can improve hydrological model results, thanks to an indirect estimation at higher spatial and temporal resolution. In this work, composed radar data from a network of three C-band radars, with 6-minutal temporal and 2 × 2 km2 spatial resolution, provided by the Catalan Meteorological Service, is used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (gage-adjustment method) is applied to four cases of heavy rainfall to improve the observed rainfall sub-estimation in both convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction, based on cross-correlation between two consecutive images, was introduced to get several time resolutions from 1 min to 30 min. Each different resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty, such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using probabilistic methodology. A sensitivity analysis of time resolutions was implemented by comparing the various results with real values from stream-flow measurement stations.
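The advection correction described here rests on locating the peak of the cross-correlation between two consecutive radar images. A minimal FFT-based sketch, assuming purely translational motion on a periodic grid; the field and shift below are synthetic:

```python
import numpy as np

def advection_shift(f1, f2):
    """Estimate the integer displacement between two consecutive fields by locating
    the peak of their circular cross-correlation, computed via the FFT."""
    cc = np.fft.ifft2(np.fft.fft2(f2) * np.conj(np.fft.fft2(f1))).real
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape))
    shape = np.array(cc.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]   # map to signed shifts
    return peak

# Synthetic "radar image" and a purely translated copy of it.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(32, 32))
f2 = np.roll(f1, shift=(3, -2), axis=(0, 1))
shift = advection_shift(f1, f2)
```

With the estimated displacement per radar time step, intermediate rainfall fields can be interpolated along the motion vector, which is how different time resolutions are generated from the 6-min composites.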
11

McCabe, M. F., J. D. Kalma, and S. W. Franks. "Spatial and temporal patterns of land surface fluxes from remotely sensed surface temperatures within an uncertainty modelling framework." Hydrology and Earth System Sciences 9, no. 5 (October 13, 2005): 467–80. http://dx.doi.org/10.5194/hess-9-467-2005.

Full text
Abstract:
Abstract. Characterising the development of evapotranspiration through time is a difficult task, particularly when utilising remote sensing data, because retrieved information is often spatially dense, but temporally sparse. Techniques to expand these essentially instantaneous measures are not only limited, they are restricted by the general paucity of information describing the spatial distribution and temporal evolution of evaporative patterns. In a novel approach, temporal changes in land surface temperatures, derived from NOAA-AVHRR imagery and a generalised split-window algorithm, are used as a calibration variable in a simple land surface scheme (TOPUP) and combined within the Generalised Likelihood Uncertainty Estimation (GLUE) methodology to provide estimates of areal evapotranspiration at the pixel scale. Such an approach offers an innovative means of transcending the patch or landscape scale of SVAT type models, to spatially distributed estimates of model output. The resulting spatial and temporal patterns of land surface fluxes and surface resistance are used to more fully understand the hydro-ecological trends observed across a study catchment in eastern Australia. The modelling approach is assessed by comparing predicted cumulative evapotranspiration values with surface fluxes determined from Bowen ratio systems and using auxiliary information such as in-situ soil moisture measurements and depth to groundwater to corroborate observed responses.
12

McCabe, M. F., J. D. Kalma, and S. W. Franks. "Spatial and temporal patterns of land surface fluxes from remotely sensed surface temperatures within an uncertainty modelling framework." Hydrology and Earth System Sciences Discussions 2, no. 2 (April 20, 2005): 569–603. http://dx.doi.org/10.5194/hessd-2-569-2005.

Full text
Abstract:
Abstract. Characterising the development of evapotranspiration through time is a difficult task, particularly when utilising remote sensing data, because retrieved information is often spatially dense, but temporally sparse. Techniques to expand these essentially instantaneous measures are not only limited, they are restricted by the general paucity of information describing the spatial distribution and temporal evolution of evaporative patterns. In a novel approach, temporal changes in land surface temperatures, derived from NOAA-AVHRR imagery and a generalised split-window algorithm, are used as a calibration variable in a simple land surface scheme (TOPUP) and combined within the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, to provide estimates of areal evapotranspiration at the pixel scale. Such an approach offers an innovative means of transcending the patch or landscape scale of SVAT type models, to spatially distributed estimates of model output. The resulting spatial and temporal patterns of land surface fluxes and surface resistance are used to more fully understand the hydro-ecological trends observed across a study catchment in eastern Australia. The modelling approach is assessed by comparing predicted cumulative evapotranspiration values with surface fluxes determined from Bowen ratio systems and using auxiliary information such as in-situ soil moisture measurements and depth to groundwater to corroborate observed responses.
13

Epskamp, Sacha, Claudia D. van Borkulo, Date C. van der Veen, Michelle N. Servaas, Adela-Maria Isvoranu, Harriëtte Riese, and Angélique O. J. Cramer. "Personalized Network Modeling in Psychopathology: The Importance of Contemporaneous and Temporal Connections." Clinical Psychological Science 6, no. 3 (January 19, 2018): 416–27. http://dx.doi.org/10.1177/2167702617744325.

Full text
Abstract:
Recent literature has introduced (a) the network perspective to psychology and (b) collection of time series data to capture symptom fluctuations and other time varying factors in daily life. Combining these trends allows for the estimation of intraindividual network structures. We argue that these networks can be directly applied in clinical research and practice as hypothesis generating structures. Two networks can be computed: a temporal network, in which one investigates if symptoms (or other relevant variables) predict one another over time, and a contemporaneous network, in which one investigates if symptoms predict one another in the same window of measurement. The contemporaneous network is a partial correlation network, which is emerging in the analysis of cross-sectional data but is not yet utilized in the analysis of time series data. We explain the importance of partial correlation networks and exemplify the network structures on time series data of a psychiatric patient.
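The two network types can be sketched directly: a temporal network from lag-1 regressions, and a contemporaneous network from the precision matrix of the within-window residuals. The simulated VAR(1) data and its coefficients are invented for illustration, and plain least squares stands in for whatever regularized estimation a real analysis would use.

```python
import numpy as np

def temporal_network(X):
    """Lag-1 (VAR(1)) coefficients: entry [i, j] says whether variable j predicts
    variable i at the next measurement occasion."""
    Xt = X[1:]
    Xlag = np.column_stack([np.ones(len(X) - 1), X[:-1]])   # add intercept
    B, *_ = np.linalg.lstsq(Xlag, Xt, rcond=None)
    return B[1:].T                                          # drop the intercept row

def contemporaneous_network(residuals):
    """Partial correlations within the same window, from the precision matrix."""
    P = np.linalg.inv(np.cov(residuals, rowvar=False))
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)
    np.fill_diagonal(pcor, 0.0)
    return pcor

# Simulated time series from a VAR(1) model with an invented coefficient matrix:
# variable 1 drives variable 0 over time; contemporaneous noise is independent.
rng = np.random.default_rng(0)
Btrue = np.array([[0.3, 0.4, 0.0],
                  [0.0, 0.3, 0.0],
                  [0.0, 0.0, 0.3]])
X = np.zeros((5000, 3))
for t in range(1, 5000):
    X[t] = Btrue @ X[t - 1] + rng.normal(size=3)

Bhat = temporal_network(X)
pcor = contemporaneous_network(X[1:] - X[:-1] @ Bhat.T)
```

The temporal network recovers the directed lagged effect, while the contemporaneous (partial correlation) network of the residuals stays near zero here because the simulated innovations are independent.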
14

Dandekar, Sangita, Claudio Privitera, Thom Carney, and Stanley A. Klein. "Neural saccadic response estimation during natural viewing." Journal of Neurophysiology 107, no. 6 (March 15, 2012): 1776–90. http://dx.doi.org/10.1152/jn.00237.2011.

Full text
Abstract:
Studying neural activity during natural viewing conditions is not often attempted. Isolating the neural response of a single saccade is necessary to study neural activity during natural viewing; however, the close temporal spacing of saccades that occurs during natural viewing makes it difficult to determine the response to a single saccade. Herein, a general linear model (GLM) approach is applied to estimate the EEG neural saccadic response for different segments of the saccadic main sequence separately. It is determined that, in visual search conditions, neural responses estimated by conventional event-related averaging are significantly and systematically distorted relative to GLM estimates due to the close temporal spacing of saccades during visual search. Before the GLM is applied, analyses are applied that demonstrate that saccades during visual search with intersaccadic spacings as low as 100–150 ms do not exhibit significant refractory effects. Therefore, saccades displaying different intersaccadic spacings during visual search can be modeled using the same regressor in a GLM. With the use of the GLM approach, neural responses were separately estimated for five different ranges of saccade amplitudes during visual search. Occipital responses time locked to the onsets of saccades during visual search were found to account for, on average, 79 percent of the variance of EEG activity in a window 90–200 ms after the onsets of saccades for all five saccade amplitude ranges that spanned a range of 0.2–6.0 degrees. A GLM approach was also used to examine the lateralized ocular artifacts associated with saccades. Possible extensions of the methods presented here to account for the superposition of microsaccades in event-related EEG studies conducted in nominal fixation conditions are discussed.
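The core of the GLM deconvolution idea, recovering a single-saccade response from overlapping responses by regressing on lagged event indicators, can be sketched as follows. The response shape, onset statistics, and noise level are synthetic; this is not the paper's EEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 4000, 30                                   # samples; response length
true_resp = np.sin(np.linspace(0.0, np.pi, L))    # hypothetical saccadic response
onsets = np.sort(rng.choice(T - L, size=150, replace=False))

# EEG-like trace: responses to closely spaced events overlap, plus noise.
sig = rng.normal(0.0, 0.1, T)
for o in onsets:
    sig[o:o + L] += true_resp

# GLM design matrix: one regressor per post-onset lag, marking events at that lag.
X = np.zeros((T, L))
for o in onsets:
    X[o + np.arange(L), np.arange(L)] = 1.0
est, *_ = np.linalg.lstsq(X, sig, rcond=None)     # deconvolved response
```

Unlike event-related averaging, which sums the contributions of temporally adjacent events into each estimate, the least-squares solution attributes overlapping activity to the correct lags.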
15

Margrave, Gary F. "Seismic signal band estimation by interpretation of f-x spectra." GEOPHYSICS 64, no. 1 (January 1999): 251–60. http://dx.doi.org/10.1190/1.1444522.

Full text
Abstract:
The signal band of reflection seismic data is that portion of the temporal Fourier spectrum which is dominated by reflected source energy. The signal bandwidth directly determines the spatial and temporal resolving power and is a useful measure of the value of such data. The realized signal band, which is the signal band of seismic data as optimized in processing, may be estimated by the interpretation of appropriately constructed f-x spectra. A temporal window, whose length has a specified random fluctuation from trace to trace, is applied to an ensemble of seismic traces, and the temporal Fourier transform is computed. The resultant f-x spectra are then separated into amplitude and phase sections, viewed as conventional seismic displays, and interpreted. The signal is manifested through the lateral continuity of spectral events; noise causes lateral incoherence. The fundamental assumption is that signal is correlated from trace to trace while noise is not. A variety of synthetic data examples illustrate that reasonable results are obtained even when the signal decays with time (i.e., is nonstationary) or geologic structure is extreme. Analysis of real data from a 3-C survey shows an easily discernible signal band for both P-P and P-S reflections, with the former being roughly twice the latter. The potential signal band, which may be regarded as the maximum possible signal band, is independent of processing techniques. An estimator for this limiting case is the corner frequency (the frequency at which a decaying signal drops below background noise levels) as measured on ensemble‐averaged amplitude spectra from raw seismic data. A comparison of potential signal band with realized signal band for the 3-C data shows good agreement for P-P data, which suggests the processing is nearly optimal. For P-S data, the realized signal band is about half of the estimated potential. 
This may indicate a relative immaturity of P-S processing algorithms or it may be due to P-P energy on the raw radial component records.
16

Kuusela, Mikael, and Michael L. Stein. "Locally stationary spatio-temporal interpolation of Argo profiling float data." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2220 (December 2018): 20180400. http://dx.doi.org/10.1098/rspa.2018.0400.

Abstract:
Argo floats measure seawater temperature and salinity in the upper 2000 m of the global ocean. Statistical analysis of the resulting spatio-temporal dataset is challenging owing to its non-stationary structure and large size. We propose mapping these data using locally stationary Gaussian process regression where covariance parameter estimation and spatio-temporal prediction are carried out in a moving-window fashion. This yields computationally tractable non-stationary anomaly fields without the need to explicitly model the non-stationary covariance structure. We also investigate Student t -distributed fine-scale variation as a means to account for non-Gaussian heavy tails in ocean temperature data. Cross-validation studies comparing the proposed approach with the existing state of the art demonstrate clear improvements in point predictions and show that accounting for the non-stationarity and non-Gaussianity is crucial for obtaining well-calibrated uncertainties. This approach also provides data-driven local estimates of the spatial and temporal dependence scales for the global ocean, which are of scientific interest in their own right.
17

Kechik, D. A., Yu P. Aslamov, and I. G. Davydov. "Method of estimation of frequency variation relying on estimation of shift of spectral peaks." «System analysis and applied information science», no. 1 (April 26, 2021): 53–61. http://dx.doi.org/10.21122/2309-4923-2021-1-53-61.

Abstract:
The problem of estimating the time-varying frequency of the components of polyharmonic signals is addressed. Three-dimensional time-frequency representations are usually used for this purpose, but a simple and reliable method of instantaneous frequency tracking is needed. A frequency-tracking method based on estimating the shifts of spectrogram peaks is proposed in this paper. It is assumed that the shift of the spectral peaks of the signal components is proportional to the variation of the fundamental frequency. Logarithmic scaling of the time-frequency representation is used to make the spectral peaks equidistant. The temporal dependence of the spectral-peak shift is obtained by correlating the windowed spectrum of the first frame with the spectrum of the signal in the current window; the resulting track is then translated back to a linear scale. The proposed method does not estimate the instantaneous or central frequency of a signal component, only its variation. Its advantage is that it can estimate the frequency track even when the range of frequency variation and its central value are known only roughly or not at all, and multiple components do not interfere with estimating the fundamental-frequency variation. Reducing the bandwidth is recommended to increase the accuracy of the frequency track, but analysis of a time-frequency representation containing a few components is also possible. The method's performance on synthetic signals was evaluated at various signal-to-noise ratios under different conditions, and its applicability to vibration diagnostics of rotary equipment was checked using the spectral interference method.
18

Potapov, Alexei. "Characteristic Scales of Reconstruction Distortions." International Journal of Bifurcation and Chaos 08, no. 04 (April 1998): 835–41. http://dx.doi.org/10.1142/s0218127498000632.

Abstract:
We consider the distortions of reconstructed chaotic attractors related with the choice of reconstruction parameters (embedding dimension m and delay τ) and characteristic scales of these distortions. The "spatial" scale may be interpreted as the upper bound for neighborhood size, which is used in many algorithms such as the calculation of Lyapunov exponents. The corresponding "temporal scale" gives the upper bound for the window length spanned by reconstructed vector w=(m-1)τ — the so-called "irrelevance time" [Casdagli et al., 1991]. The analysis enabled us to propose the new algorithm of time series processing for the direct estimation of generalized entropy from time series without calculating the correlation integral.
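A minimal sketch of the delay reconstruction these scales refer to: each reconstructed vector spans a temporal window of w = (m - 1)τ samples. The signal and parameters below are arbitrary.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-coordinate reconstruction: row t is (x[t], x[t+tau], ..., x[t+(m-1)tau]),
    so each vector spans a temporal window of w = (m - 1) * tau samples."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

x = np.sin(0.1 * np.arange(1000))
V = delay_embed(x, m=4, tau=5)
w = (4 - 1) * 5          # window length spanned by each reconstructed vector
```

The "irrelevance time" discussed in the abstract is an upper bound on this window length w, just as the spatial scale bounds the neighborhood size used in algorithms such as Lyapunov-exponent estimation.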
19

Baskoutas, I., G. Panopoulou, and G. Papadopoulos. "LONG TEMPORAL VARIATION OF SEISMIC PARAMETERS FOR SEISMIC PATTERNS IDENTIFICATION IN GREECE." Bulletin of the Geological Society of Greece 36, no. 3 (January 1, 2004): 1362. http://dx.doi.org/10.12681/bgsg.16489.

Abstract:
A new approach of detailed spatio-temporal variation analysis of seismic data is proposed by means of FastBEE (Fast estimation of Big Expected Earthquake), aiming at the regional monitoring of seismic activity for the identification of premonitory seismic patterns. For the investigation of temporal variation, a set of seismic parameters is used: the logarithm of the number of earthquakes logN, estimates of the b-value obtained by the maximum likelihood estimation model, the time clustering of seismic activity AR(t), and the released energy E^(2/3), since they can be considered precursory seismological indicators. Earthquake catalog data used in this approach were elaborated to construct the time series for each parameter within a time window large enough to guarantee statistically meaningful results. The Hellenic trench-arc region under investigation was chosen on the basis of its seismotectonic characteristics, in relation to the spatial extent of the seismogenic zone. The tools were tested for long temporal variation features in the Ionian Islands Sea and North Aegean Sea regions, and their successful applicability is presented. The rise of irregularity along these temporal profiles was formulated as a specific quantitative premonitory seismic pattern. In most cases, the FastBEE premonitory pattern found shows significant changes from the background values of each parameter. The parameter logN shows a valley-shaped curve that starts to increase before the expected earthquake occurrence, as does the energy parameter E^(2/3), while temporal b-value estimates form a mountain-shaped curve before the occurrence of a big earthquake. In contrast, the parameter AR(t) presents rapid fluctuations without any premonitory character.
APA, Harvard, Vancouver, ISO, and other styles
20

Quintana-Diaz, Gara, Torbjörn Ekman, José Miguel Lago Agra, Diego Hurtado de Mendoza, Alberto González Muíño, and Fernando Aguado Agelet. "In-Orbit Measurements and Analysis of Radio Interference in the UHF Amateur Radio Band from the LUME-1 Satellite." Remote Sensing 13, no. 16 (August 17, 2021): 3252. http://dx.doi.org/10.3390/rs13163252.

Full text
Abstract:
Radio interference in the uplink makes communication to satellites in the UHF amateur radio band (430–440 MHz) challenging for any satellite application. Interference measurements and characterisation can improve the robustness and reliability of the communication system design. Most published results focus on average power spectrum measurements and heatmaps. We apply a low complexity estimator on an SDR (Software-Defined Radio) to study the interference’s dispersion and temporal variation on-board a small satellite as an alternative. Measuring the Local Mean Envelope (LME) variability with different averaging window lengths enables the estimation of time variability of the interference. The coefficient of variation for the LME indicates how much the signals vary in time and the spread in magnitudes. In this article, theoretical analysis, simulations, and laboratory results were used to validate this measurement method. In-orbit measurements were performed on-board the LUME-1 satellite. Band-limited interference with pulsed temporal behaviour and a high coefficient of variation was detected over North America, Europe, and the Arctic, where space-tracking radars are located. Wide-band pulsed interference with high time variability was also detected over Europe. These measurements show why operators that use a communication system designed for Additive White Gaussian Noise (AWGN) at power levels obtained from heatmaps struggle to command their satellites.
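The coefficient of variation of the Local Mean Envelope (LME) described in this abstract can be sketched as follows. This is only an illustration of the idea, not the low-complexity estimator implemented on the LUME-1 SDR:

```python
import statistics

def lme_cv(envelope, window):
    """Coefficient of variation of the Local Mean Envelope (LME).

    envelope: magnitude samples of the received signal; window: number
    of samples averaged per LME value. Pulsed interference (e.g. from
    space-tracking radars) yields a high CV; stationary noise-like
    interference yields a low one.
    """
    n = len(envelope) // window
    lme = [statistics.fmean(envelope[i * window:(i + 1) * window])
           for i in range(n)]
    return statistics.pstdev(lme) / statistics.fmean(lme)
```

Comparing the CV across several averaging window lengths, as the paper does, indicates the timescale on which the interference varies.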
APA, Harvard, Vancouver, ISO, and other styles
21

Zhou, Ke, Hailei Liu, Xiaobo Deng, Hao Wang, and Shenglan Zhang. "Comparison of Machine-Learning Algorithms for Near-Surface Air-Temperature Estimation from FY-4A AGRI Data." Advances in Meteorology 2020 (October 6, 2020): 1–14. http://dx.doi.org/10.1155/2020/8887364.

Full text
Abstract:
Six machine-learning approaches, including multivariate linear regression (MLR), gradient boosting decision tree, k-nearest neighbors, random forest, extreme gradient boosting (XGB), and deep neural network (DNN), were compared for near-surface air-temperature (Tair) estimation from the new generation of Chinese geostationary meteorological satellite Fengyun-4A (FY-4A) observations. The brightness temperatures in split-window channels from the Advanced Geostationary Radiation Imager (AGRI) of FY-4A and numerical weather prediction data from the global forecast system were used as the predictor variables for Tair estimation. The performance of each model and the temporal and spatial distribution of the estimated Tair errors were analyzed. The results showed that the XGB model had better overall performance, with R2 of 0.902, bias of −0.087°C, and root-mean-square error of 1.946°C. The spatial variation characteristics of the Tair error of the XGB method were less obvious than those of the other methods. The XGB model can provide more stable and high-precision Tair for a large-scale Tair estimation over China and can serve as a reference for Tair estimation based on machine-learning models.
APA, Harvard, Vancouver, ISO, and other styles
22

Celecia, Alimed, Karla Figueiredo, Marley Vellasco, and René González. "A Portable Fuzzy Driver Drowsiness Estimation System." Sensors 20, no. 15 (July 23, 2020): 4093. http://dx.doi.org/10.3390/s20154093.

Full text
Abstract:
The adequate automatic detection of driver fatigue is a very valuable approach for the prevention of traffic accidents. Devices that can determine drowsiness conditions accurately must inherently be portable, adaptable to different vehicles and drivers, and robust to conditions such as illumination changes or visual occlusion. With the advent of a new generation of computationally powerful embedded systems such as the Raspberry Pi, a new category of real-time and low-cost portable drowsiness detection systems could become standard tools. Usually, the proposed solutions using this platform are limited to the definition of thresholds for some defined drowsiness indicator or the application of computationally expensive classification models that limits their use in real-time. In this research, we propose the development of a new portable, low-cost, accurate, and robust drowsiness recognition device. The proposed device combines complementary drowsiness measures derived from a temporal window of eyes (PERCLOS, ECD) and mouth (AOT) states through a fuzzy inference system deployed in a Raspberry Pi with the capability of real-time response. The system provides three degrees of drowsiness (Low-Normal State, Medium-Drowsy State, and High-Severe Drowsiness State), and was assessed in terms of its computational performance and efficiency, resulting in a significant accuracy of 95.5% in state recognition that demonstrates the feasibility of the approach.
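The PERCLOS indicator used by this system is simply the percentage of frames within the temporal window in which the eyes are closed. A minimal sketch of that single indicator (the paper additionally fuses PERCLOS with ECD and AOT through a fuzzy inference system, which is not reproduced here):

```python
def perclos(closed_flags):
    """PERCLOS: percentage of frames in the window with eyes closed.

    closed_flags: booleans produced by a per-frame eye-state
    classifier over the temporal window.
    """
    return 100.0 * sum(closed_flags) / len(closed_flags)

# Example: 30-frame window with 6 closed-eye frames
print(perclos([True] * 6 + [False] * 24))  # → 20.0
```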
APA, Harvard, Vancouver, ISO, and other styles
23

Eroglu, Orhan, Mehmet Kurum, Dylan Boyd, and Ali Cafer Gurbuz. "High Spatio-Temporal Resolution CYGNSS Soil Moisture Estimates Using Artificial Neural Networks." Remote Sensing 11, no. 19 (September 28, 2019): 2272. http://dx.doi.org/10.3390/rs11192272.

Full text
Abstract:
This paper presents a learning-based, physics-aware soil moisture (SM) retrieval algorithm for NASA’s Cyclone Global Navigation Satellite System (CYGNSS) mission. The goal of the proposed novel method is to advance CYGNSS-based SM estimations, exploiting the spatio-temporal resolution of the GNSS reflectometry (GNSS-R) signals to its highest potential within a machine learning framework. The methodology employs a fully connected Artificial Neural Network (ANN) regression model to perform SM predictions through learning the nonlinear relations of SM and other land geophysical parameters to the CYGNSS observables. In situ SM measurements from several International SM Network (ISMN) sites are used as reference labels; CYGNSS incidence angles, derived reflectivity and trailing edge slope (TES) values, as well as ancillary data, are exploited as input features for training and validation of the ANN model. In particular, the utilized ancillary data consist of normalized difference vegetation index (NDVI), vegetation water content (VWC), terrain elevation, terrain slope, and h-parameter (surface roughness). Land cover classification and inland water body masks are also used for the intermediate derivations and quality control purposes. The proposed algorithm assumes uniform SM over a 0.0833° × 0.0833° (approximately 9 km × 9 km around the equator) lat/lon grid for any CYGNSS observation that falls within this window. The proposed technique is capable of generating sub-daily and high-resolution SM predictions as it does not rely on time-series or spatial averaging of the CYGNSS observations. Once trained on the data from ISMN sites, the model is independent from other SM sources for retrieval. The estimation results obtained over unseen test data are promising: SM predictions with an unbiased root mean squared error of 0.0544 cm³/cm³ and Pearson correlation coefficient of 0.9009 are reported for 2017 and 2018.
APA, Harvard, Vancouver, ISO, and other styles
24

Nguyen, Van Quan, Tien Nguyen Anh, and Hyung-Jeong Yang. "Real-time event detection using recurrent neural network in social sensors." International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771985649. http://dx.doi.org/10.1177/1550147719856492.

Full text
Abstract:
We proposed an approach for temporal event detection using deep learning and multi-embedding on a set of text data from social media. First, a convolutional neural network augmented with multiple word-embedding architectures is used as a text classifier for the pre-processing of the input textual data. Second, an event detection model using a recurrent neural network is employed to learn time series data features by extracting temporal information. Recently, convolutional neural networks have been used in natural language processing problems and have obtained excellent results when operating on available embedding vectors. In this article, word-embedding features at the embedding layer are combined and fed to the convolutional neural network. The proposed method has no limitation on embedding size, supports more embeddings than standard multichannel-based approaches, and obtains similar performance (accuracy score) on some benchmark data sets, especially on imbalanced data sets. For event detection, a long short-term memory network is used as a predictor that learns higher-level temporal features so as to predict future values. An error distribution estimation model is built to calculate the anomaly score of each observation. Events are detected using a window-based method on the anomaly scores.
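The final two steps of this pipeline, scoring prediction errors against an estimated error distribution and detecting events over windows of scores, can be sketched as follows. This assumes a simple Gaussian error model for illustration; the function names and thresholds are hypothetical, not from the paper:

```python
import statistics

def anomaly_scores(errors, train_errors):
    """Standardize prediction errors against an error distribution
    (here a simple Gaussian) estimated on training residuals."""
    mu = statistics.fmean(train_errors)
    sd = statistics.stdev(train_errors)
    return [abs(e - mu) / sd for e in errors]

def detect_events(scores, window, threshold):
    """Window-based detection: return start indices of windows whose
    mean anomaly score exceeds the threshold."""
    return [i for i in range(len(scores) - window + 1)
            if statistics.fmean(scores[i:i + window]) > threshold]
```

A sudden burst of large prediction errors (e.g. a spike in posting activity) then shows up as one or more flagged windows.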
APA, Harvard, Vancouver, ISO, and other styles
25

Kondrashov, D., and M. Ghil. "Spatio-temporal filling of missing points in geophysical data sets." Nonlinear Processes in Geophysics 13, no. 2 (May 24, 2006): 151–59. http://dx.doi.org/10.5194/npg-13-151-2006.

Full text
Abstract:
Abstract. The majority of data sets in the geosciences are obtained from observations and measurements of natural systems, rather than in the laboratory. These data sets are often full of gaps, due to the conditions under which the measurements are made. Missing data give rise to various problems, for example in spectral estimation or in specifying boundary conditions for numerical models. Here we use Singular Spectrum Analysis (SSA) to fill the gaps in several types of data sets. For a univariate record, our procedure uses only temporal correlations in the data to fill in the missing points. For a multivariate record, multi-channel SSA (M-SSA) takes advantage of both spatial and temporal correlations. We iteratively produce estimates of missing data points, which are then used to compute a self-consistent lag-covariance matrix; cross-validation allows us to optimize the window width and number of dominant SSA or M-SSA modes to fill the gaps. The optimal parameters of our procedure depend on the distribution in time (and space) of the missing data, as well as on the variance distribution between oscillatory modes and noise. The algorithm is demonstrated on synthetic examples, as well as on data sets from oceanography, hydrology, atmospheric sciences, and space physics: global sea-surface temperature, flood-water records of the Nile River, the Southern Oscillation Index (SOI), and satellite observations of relativistic electrons.
APA, Harvard, Vancouver, ISO, and other styles
26

Garcia, João V. C., Stephan Stephany, and Augusto B. d'Oliveira. "Estimation of convective precipitation mass from lightning data using a temporal sliding-window for a series of thunderstorms in Southeastern Brazil." Atmospheric Science Letters 14, no. 4 (August 9, 2013): 281–86. http://dx.doi.org/10.1002/asl2.453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Atencia, A., L. Mediero, M. C. Llasat, and L. Garrote. "Effect of radar rainfall time resolution on the predictive capability of a distributed hydrologic model." Hydrology and Earth System Sciences 15, no. 12 (December 21, 2011): 3809–27. http://dx.doi.org/10.5194/hess-15-3809-2011.

Full text
Abstract:
Abstract. The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective of the current paper is the proposal of a methodology that improves the radar rainfall estimation at a higher spatial and temporal resolution. Composite radar data from a network of three C-band radars with 6-min temporal and 2 × 2 km2 spatial resolution were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation in the observed rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological basin behaviour.
APA, Harvard, Vancouver, ISO, and other styles
28

Duchene, Sebastian, Philippe Lemey, Tanja Stadler, Simon Y. W. Ho, David A. Duchene, Vijaykrishna Dhanasekaran, and Guy Baele. "Bayesian Evaluation of Temporal Signal in Measurably Evolving Populations." Molecular Biology and Evolution 37, no. 11 (July 6, 2020): 3363–79. http://dx.doi.org/10.1093/molbev/msaa163.

Full text
Abstract:
Abstract Phylogenetic methods can use the sampling times of molecular sequence data to calibrate the molecular clock, enabling the estimation of evolutionary rates and timescales for rapidly evolving pathogens and data sets containing ancient DNA samples. A key aspect of such calibrations is whether a sufficient amount of molecular evolution has occurred over the sampling time window, that is, whether the data can be treated as having come from a measurably evolving population. Here, we investigate the performance of a fully Bayesian evaluation of temporal signal (BETS) in sequence data. The method involves comparing the fit to the data of two models: a model in which the data are accompanied by the actual (heterochronous) sampling times, and a model in which the samples are constrained to be contemporaneous (isochronous). We conducted simulations under a wide range of conditions to demonstrate that BETS accurately classifies data sets according to whether they contain temporal signal or not, even when there is substantial among-lineage rate variation. We explore the behavior of this classification in analyses of five empirical data sets: modern samples of A/H1N1 influenza virus, the bacterium Bordetella pertussis, coronaviruses from mammalian hosts, ancient DNA from Hepatitis B virus, and mitochondrial genomes of dog species. Our results indicate that BETS is an effective alternative to other tests of temporal signal. In particular, this method has the key advantage of allowing a coherent assessment of the entire model, including the molecular clock and tree prior which are essential aspects of Bayesian phylodynamic analyses.
APA, Harvard, Vancouver, ISO, and other styles
29

Koo, YoungHyun, Myeongchan Oh, Sung-Min Kim, and Hyeong-Dong Park. "Estimation and Mapping of Solar Irradiance for Korea by Using COMS MI Satellite Images and an Artificial Neural Network Model." Energies 13, no. 2 (January 7, 2020): 301. http://dx.doi.org/10.3390/en13020301.

Full text
Abstract:
The power capacity of solar photovoltaics (PVs) in Korea has grown dramatically in recent years, and an accurate estimation of solar resources is crucial for the efficient management of these solar PV systems. Since the number of solar irradiance measurement sites is insufficient for Korea, satellite images can be useful sources for estimating solar irradiance over a wide area of Korea. In this study, an artificial neural network (ANN) model was constructed to calculate hourly global horizontal solar irradiance (GHI) from Korea Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) images. Solar position variables and five COMS MI channels were used as inputs for the ANN model. The basic ANN model was determined to have a window size of five for the input satellite images and two hidden layers, with 30 nodes on each hidden layer. After these ANN parameters were determined, the temporal and spatial applicability of the ANN model for solar irradiance mapping was validated. The final ANN ensemble model, which calculated the hourly GHI from 10 independent ANN models, exhibited a correlation coefficient (R) of 0.975 and root mean square error (RMSE) of 54.44 W/m² (12.93%), which were better results than for other remote-sensing based works for Korea. Finally, GHI maps for Korea were generated using the final ANN ensemble model. This COMS-based ANN model can contribute to the efficient estimation of solar resources and the improvement of the operational efficiency of solar PV systems for Korea.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Ke, Xin Huang, JunLan Chen, Chuan Cao, Zhoubing Xiong, and Long Chen. "Forward and Backward Visual Fusion Approach to Motion Estimation with High Robustness and Low Cost." Remote Sensing 11, no. 18 (September 13, 2019): 2139. http://dx.doi.org/10.3390/rs11182139.

Full text
Abstract:
We present a novel low-cost visual odometry method of estimating the ego-motion (self-motion) of ground vehicles by detecting the changes that motion induces on the images. Unlike traditional localization methods that use a differential global positioning system (GPS), a precise inertial measurement unit (IMU), or 3D Lidar, the proposed method leverages only data from inexpensive visual sensors of forward and backward onboard cameras. Starting with the spatial-temporal synchronization, the scale factor of backward monocular visual odometry was estimated based on the MSE optimization method in a sliding window. Then, in trajectory estimation, an improved two-layer Kalman filter was proposed, including orientation fusion and position fusion. In the orientation fusion step, we utilized the trajectory error space represented by unit quaternions as the state of the filter. The resulting system enables high-accuracy, low-cost ego-pose estimation, along with providing robustness to camera module degradation by automatically reducing the confidence of the failed sensor in the fusion pipeline. Therefore, it can operate in the presence of complex and highly dynamic motion, such as entering and exiting tunnels, texture-less or illumination-change environments, bumpy roads, and even the failure of one of the cameras. The experiments carried out in this paper prove that our algorithm achieves the best performance on the evaluation indexes of average error in distance (AED), average error in the X direction (AEX), average error in the Y direction (AEY), and root mean square error (RMSE) compared to other state-of-the-art algorithms, which indicates that the output results of our approach are superior to those of other methods.
APA, Harvard, Vancouver, ISO, and other styles
31

Clementini, Chiara, Andrea Pomente, Daniele Latini, Hideki Kanamaru, Maria Raffaella Vuolo, Ana Heureux, Mariko Fujisawa, Giovanni Schiavon, and Fabio Del Frate. "Long-Term Grass Biomass Estimation of Pastures from Satellite Data." Remote Sensing 12, no. 13 (July 6, 2020): 2160. http://dx.doi.org/10.3390/rs12132160.

Full text
Abstract:
The general consensus on future climate projections poses new and increased concerns about climate change and its impacts. Droughts are primarily worrying, since they contribute to altering the composition, distribution, and abundance of species. Grasslands, for example, are the primary source for grazing mammals and modifications in climate determine variation in the available yields for cattle. To support the agriculture sector, international organizations such as the Food and Agriculture Organization (FAO) of the United Nations are promoting the development of dedicated monitoring initiatives, with particular attention for undeveloped and disadvantaged countries. The temporal scale is very important in this context, where long time series of data are required to compute consistent analyses. In this research, we discuss the results regarding long-term grass biomass estimation in an extended African region. The results are obtained by means of a procedure that is mostly automatic and replicable in other contexts. Zambia has been identified as a significant test area due to its vulnerability to the adverse impacts of climate change as a result of its geographic location, socioeconomic stresses, and low adaptive capacity. In fact, analysis and estimations were performed over a long time window (21 years) to identify correlations with climate variables, such as precipitation, to clarify sensitivity to climate change and possible effects already in place. From the analysis, decline in both grass quality and quantity was not currently evident in the study area. However, pastures in the considered area were found to be vulnerable to changing climate and, in particular, to the water shortages accompanying drought periods.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhao, L., Q. Duan, J. Schaake, A. Ye, and J. Xia. "A hydrologic post-processor for ensemble streamflow predictions." Advances in Geosciences 29 (February 28, 2011): 51–59. http://dx.doi.org/10.5194/adgeo-29-51-2011.

Full text
Abstract:
Abstract. This paper evaluates the performance of a statistical post-processor for imperfect hydrologic model forecasts. Assuming that the meteorological forecasts are well-calibrated, we employ a "General Linear Model (GLM)" to post-process simulations produced by a hydrologic model. For a particular forecast date, the observations and simulations from an "analysis window" and hydrologic model forecasts for a "forecast window", the GLM Post-Processor (GLMPP) is used to produce an ensemble of predictions of the streamflow observations that will occur during the "forecast window". The objectives of the GLMPP are to: (1) preserve any skill in the original hydrologic ensemble forecast; (2) correct systematic model biases; (3) retain the equal-likelihood assumption for the ensemble; (4) preserve temporal scale dependency relationships in streamflow hydrographs and the uncertainty in the predictions; and, (5) produce reliable ensemble predictions. Observed and simulated daily streamflow data from the Second Workshop on Model Parameter Estimation Experiment (MOPEX) are used to test how well these objectives are met when the GLMPP is applied to ensemble hydrologic forecasts driven by well calibrated meteorological forecasts. A 39-year hydrologic dataset from the French Broad basin is split into calibration and verification periods. The results show that the GLMPP built using data from the calibration period removes the mean bias when applied to hydrologic model simulations from both the calibration and verification periods. Probability distributions of the post-processed model simulations are shown to be closer to the climatological probability distributions of observed streamflow than the distributions of the unadjusted simulated flows. A number of experiments with different GLMPP configurations were also conducted to examine the effects of different configurations for forecast and analysis window lengths on the robustness of the results.
APA, Harvard, Vancouver, ISO, and other styles
33

Garand, Louis, Mark Buehner, and Nicolas Wagneur. "Background Error Correlation between Surface Skin and Air Temperatures: Estimation and Impact on the Assimilation of Infrared Window Radiances." Journal of Applied Meteorology 43, no. 12 (December 1, 2004): 1853–63. http://dx.doi.org/10.1175/jam2175.1.

Full text
Abstract:
Abstract This paper makes use of ensemble forecasts to infer the correlation between surface skin temperature Ts and air temperature Ta model errors. The impact of this correlation in data assimilation is then investigated. In the process of assimilating radiances that are sensitive to the surface skin temperature, the Ts–Ta error correlation becomes important because it allows statistically optimal corrections to the background temperature profile in the boundary layer. Conversely, through this correlation, surface air temperature data can substantially influence the analysis of skin temperature. One difficulty is that the Ts–Ta correlation depends on the local static stability conditions that link the two variables. Therefore, a correlation estimate based on spatial or temporal averages is not appropriate. Ensembles of forecasts valid at the analysis time provide a novel means to infer the correlation dynamically at each model grid point. Geostationary Operational Environmental Satellite (GOES)-8 and -10 surface-sensitive imager radiances are assimilated with and without the inferred correlations in a 3D variational analysis system. The impact of the correlation on analyses is assessed using independent radiosonde data. The impact on 6-h forecasts is also evaluated using surface synoptic reports. The influence of the correlation extends from the surface to about 1.5 km. Temperature differences in the resulting analyses on the order of 0.3–0.6 K are typical in the boundary layer and may extend over broad regions. These difference patterns persist beyond 6 h into the forecasts.
APA, Harvard, Vancouver, ISO, and other styles
34

Rautenberg, Alexander, Martin Graf, Norman Wildmann, Andreas Platis, and Jens Bange. "Reviewing Wind Measurement Approaches for Fixed-Wing Unmanned Aircraft." Atmosphere 9, no. 11 (October 28, 2018): 422. http://dx.doi.org/10.3390/atmos9110422.

Full text
Abstract:
One of the biggest challenges in probing the atmospheric boundary layer with small unmanned aerial vehicles is the turbulent 3D wind vector measurement. Several approaches have been developed to estimate the wind vector without using multi-hole flow probes. This study compares commonly used wind speed and direction estimation algorithms with the direct 3D wind vector measurement using multi-hole probes. This was done using the data of a fully equipped system and by applying several algorithms to the same data set. To cover as many aspects as possible, a wide range of meteorological conditions and common flight patterns were considered in this comparison. The results from the five-hole probe measurements were compared to the pitot tube algorithm, which only requires a pitot-static tube and a standard inertial navigation system measuring aircraft attitude (Euler angles), while the position is measured with global navigation satellite systems. Even less complex is the so-called no-flow-sensor algorithm, which only requires a global navigation satellite system to estimate wind speed and wind direction. These algorithms require temporal averaging. Two averaging periods were applied in order to see the influence and show the limitations of each algorithm. For a window of 4 min, both simplifications work well, especially with the pitot-static tube measurement. When reducing the averaging period to 1 min and thereby increasing the temporal resolution, it becomes evident that only circular flight patterns with full racetracks inside the averaging window are applicable for the no-flow-sensor algorithm and that the additional flow information from the pitot-static tube improves precision significantly.
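The core idea behind the pitot-tube algorithm compared in this study, taking the horizontal wind as the vector difference between GNSS ground velocity and the air velocity reconstructed from true airspeed and INS heading, can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
import math

def pitot_wind_estimate(ground_vel, tas, heading_rad):
    """Per-sample horizontal wind as ground velocity minus air velocity.

    ground_vel: (east, north) velocity in m/s from GNSS;
    tas: true airspeed from the pitot-static tube;
    heading_rad: aircraft heading from the INS (0 = north, pi/2 = east).
    In practice these per-sample estimates are averaged over windows
    such as the 1-4 min periods discussed in the paper.
    """
    air = (tas * math.sin(heading_rad), tas * math.cos(heading_rad))
    return (ground_vel[0] - air[0], ground_vel[1] - air[1])

# Flying due east at 20 m/s over ground with 15 m/s true airspeed:
# the residual 5 m/s eastward component is attributed to the wind.
wind = pitot_wind_estimate((20.0, 0.0), 15.0, math.pi / 2)
```

The no-flow-sensor algorithm drops the airspeed term as well, which is why it needs full circular racetracks inside the averaging window to separate wind from airspeed.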
APA, Harvard, Vancouver, ISO, and other styles
35

Rajani, A., and S. Varadarajan. "Estimation and Validation of Land Surface Temperature by using Remote Sensing & GIS for Chittoor District, Andhra Pradesh." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 607–17. http://dx.doi.org/10.17762/turcomat.v12i5.1059.

Full text
Abstract:
Land Surface Temperature (LST) quantification is needed in various applications such as temporal analysis, identification of global warming, land use or land cover, water management, soil moisture estimation, and natural disasters. The objective of this study is the estimation as well as validation of temperature data at 14 Automatic Weather Stations (AWS) in Chittoor District of Andhra Pradesh against LST extracted by using remote sensing and a Geographic Information System (GIS). The satellite data considered for estimation purposes are from LANDSAT 8. The sensor data used for assessment of LST are OLI (Operational Land Imager) and TIR (Thermal Infrared). Thermal bands 10 and 11 were considered for evaluating LST independently by using an algorithm called the Mono Window Algorithm (MWA). Land Surface Emissivity (LSE) is the vital parameter for calculating LST. The LSE estimation requires NDVI (Normalized Difference Vegetation Index), which is computed by using the band 4 (visible red) and band 5 (near-infrared) spectral radiance bands. Thermal band images having wavelengths of 11.2 µm and 12.5 µm from 30th May, 2015 and 21st October, 2015 were processed for the analysis of LST. Validation of the estimated LST was then carried out using in-situ temperature data obtained from the 14 AWS stations in Chittoor district. The end results showed that the LST retrieved by using the proposed method achieved a 5 per cent greater correlation coefficient (r) compared to the LST retrieved by using the existing method, which is based on band 10.
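The NDVI and the NDVI-thresholds emissivity step that feed a mono-window LST retrieval can be sketched as follows. The emissivity constants and thresholds below are illustrative values from the general NDVI-thresholds method, not necessarily those used in this study:

```python
def ndvi(nir, red):
    """NDVI from Landsat 8 band 5 (near-infrared) and band 4 (red)."""
    return (nir - red) / (nir + red)

def emissivity_from_ndvi(v, ndvi_soil=0.2, ndvi_veg=0.5,
                         eps_soil=0.966, eps_veg=0.986):
    """NDVI-thresholds land surface emissivity (illustrative constants).

    Bare soil below ndvi_soil, full vegetation above ndvi_veg, and a
    mixture in between weighted by the vegetation proportion pv.
    """
    if v < ndvi_soil:
        return eps_soil
    if v > ndvi_veg:
        return eps_veg
    pv = ((v - ndvi_soil) / (ndvi_veg - ndvi_soil)) ** 2  # vegetation proportion
    return eps_veg * pv + eps_soil * (1.0 - pv)
```

The resulting per-pixel emissivity is then combined with the band-10 (or band-11) brightness temperature and atmospheric parameters inside the mono-window equation.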
APA, Harvard, Vancouver, ISO, and other styles
36

Hannes, M., U. Wollschläger, F. Schrader, W. Durner, S. Gebler, T. Pütz, J. Fank, G. von Unold, and H. J. Vogel. "High-resolution estimation of the water balance components from high-precision lysimeters." Hydrology and Earth System Sciences Discussions 12, no. 1 (January 14, 2015): 569–608. http://dx.doi.org/10.5194/hessd-12-569-2015.

Full text
Abstract:
Abstract. Lysimeters offer the opportunity to determine precipitation, evapotranspiration and groundwater recharge with high accuracy. In contrast to other techniques, like Eddy-flux systems or evaporation pans, lysimeters provide a direct measurement of evapotranspiration from a clearly defined surface area at the scale of a soil profile via the built-in weighing system. In particular, the estimation of precipitation can benefit from the much higher surface area compared to typical rain-gauge systems. Nevertheless, lysimeters are exposed to several external influences that could falsify the calculated fluxes. Therefore, the estimation of the relevant fluxes requires appropriate data processing with respect to various error sources. Most lysimeter studies account for noise in the data by averaging. However, the effect of smoothing by averaging on the accuracy of the estimated water balance is rarely investigated. In this study, we present a filtering scheme, which is designed to deal with the various kinds of possible errors. We analyze the influence of averaging times and thresholds on the calculated water balance. We further investigate the ability of two adaptive filtering methods (the Adaptive Window and Adaptive Threshold filter (AWAT-filter) (Peters et al., 2014) and the consecutively described synchro-filter) to further reduce the filtering error. On the basis of the data sets of 18 simultaneously running lysimeters of the TERENO SoilCan research site in Bad Lauchstädt, we show that the estimation of the water balance with high temporal resolution and good accuracy is possible.
APA, Harvard, Vancouver, ISO, and other styles
37

Salcedo-Castro, Julio, Natália Pillar da Silva, Ricardo de Camargo, Eduardo Marone, and Héctor H. Sepúlveda. "Estimation of extreme wave height return periods from short-term interpolation of multi-mission satellite data: application to the South Atlantic." Ocean Science 14, no. 4 (August 30, 2018): 911–21. http://dx.doi.org/10.5194/os-14-911-2018.

Full text
Abstract:
Abstract. We analyzed the spatial pattern of wave extremes in the South Atlantic Ocean by using multiple altimeter platforms spanning the period 1993–2015. Unlike the traditional approach adopted by previous studies, consisting of computing the monthly mean, median or maximum values inside a bin of certain size, we tackled the problem with a different procedure in order to capture more information from short-term events. All satellite tracks occurring during a 2-day temporal window were gathered in the whole area and then gridded data were generated onto a mesh size of 2° × 2° through optimal interpolation. The peaks over threshold (POT) method was applied, along with the generalized Pareto distribution (GPD). The results showed a spatial distribution comparable to previous studies and, additionally, this method allowed for capturing more information on shorter timescales without compromising spatial coverage. A comparison with buoy observations demonstrated that this approach improves the representativeness of short-term events in an extreme events analysis.
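The peaks-over-threshold workflow described in this abstract can be sketched with SciPy's generalized Pareto distribution. The threshold, exceedance rate and synthetic wave heights below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy import stats

def pot_return_level(series, threshold, return_period_years, events_per_year):
    """Estimate a return level via peaks-over-threshold with a GPD fit.

    series: observations (e.g. significant wave heights, metres)
    threshold: POT threshold (same units as series)
    return_period_years: desired return period
    events_per_year: mean number of threshold exceedances per year
    """
    excesses = series[series > threshold] - threshold
    # Fit the generalized Pareto distribution to the excesses (location fixed at 0)
    shape, _, scale = stats.genpareto.fit(excesses, floc=0)
    m = return_period_years * events_per_year  # expected exceedances in T years
    if abs(shape) > 1e-6:
        level = threshold + scale / shape * (m**shape - 1)
    else:  # exponential limit as the shape parameter tends to 0
        level = threshold + scale * np.log(m)
    return level

rng = np.random.default_rng(0)
waves = rng.gumbel(loc=2.0, scale=0.5, size=20_000)  # synthetic wave heights
rl100 = pot_return_level(waves, threshold=3.5, return_period_years=100,
                         events_per_year=10)
```

The 100-year return level exceeds the threshold by roughly `scale * log(λT)` when the fitted shape is near zero, which is the expected behaviour for light-tailed synthetic data like this.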
APA, Harvard, Vancouver, ISO, and other styles
38

Xin, Jingwei, Nannan Wang, Jie Li, Xinbo Gao, and Zhifeng Li. "Video Face Super-Resolution with Motion-Adaptive Feedback Cell." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12468–75. http://dx.doi.org/10.1609/aaai.v34i07.6934.

Full text
Abstract:
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs). Current state-of-the-art CNN methods usually treat the VSR problem as a large number of separate multi-frame super-resolution tasks, in which a batch of low-resolution (LR) frames is utilized to generate a single high-resolution (HR) frame, and running a sliding window to select LR frames over the entire video yields a series of HR frames. However, due to the complex temporal dependency between frames, as the number of LR input frames increases, the quality of the reconstructed HR frames becomes worse. The reason is that these methods lack the ability to model complex temporal dependencies and struggle to give accurate motion estimation and compensation for the VSR process, which makes performance degrade drastically when the motion in frames is complex. In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way. Our approach efficiently utilizes the information of the inter-frame motion, so the network's dependence on explicit motion estimation and compensation methods can be avoided. In addition, benefiting from the excellent nature of MAFC, the network can achieve better performance in the case of extremely complex motion scenarios. Extensive evaluations and comparisons validate the strengths of our approach, and the experimental results demonstrate that the proposed framework outperforms the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
39

Minkwitz, David, Karl Gerald van den Boogaart, Tatjana Gerzen, Mainul Hoque, and Manuel Hernández-Pajares. "Ionospheric tomography by gradient-enhanced kriging with STEC measurements and ionosonde characteristics." Annales Geophysicae 34, no. 11 (November 14, 2016): 999–1010. http://dx.doi.org/10.5194/angeo-34-999-2016.

Full text
Abstract:
Abstract. The estimation of the ionospheric electron density by kriging is based on the optimization of a parametric measurement covariance model. First, the extension of kriging with slant total electron content (STEC) measurements based on a spatial covariance to kriging with a spatial–temporal covariance model, assimilating STEC data of a sliding window, is presented. Secondly, a novel tomography approach by gradient-enhanced kriging (GEK) is developed. Beyond the ingestion of STEC measurements, GEK assimilates ionosonde characteristics, providing peak electron density measurements as well as gradient information. Both approaches deploy the 3-D electron density model NeQuick as a priori information and estimate the covariance parameter vector within a maximum likelihood estimation for the dedicated tomography time stamp. The methods are validated in the European region for two periods covering quiet and active ionospheric conditions. The kriging with spatial and spatial–temporal covariance model is analysed regarding its capability to reproduce STEC, differential STEC and foF2. Therefore, the estimates are compared to the NeQuick model results, the 2-D TEC maps of the International GNSS Service and the DLR's Ionospheric Monitoring and Prediction Center, and in the case of foF2 to two independent ionosonde stations. Moreover, simulated STEC and ionosonde measurements are used to investigate the electron density profiles estimated by the GEK in comparison to a kriging with STEC only. The results indicate a crucial improvement in the initial guess by the developed methods and point out the potential compensation for a bias in the peak height hmF2 by means of GEK.
APA, Harvard, Vancouver, ISO, and other styles
40

Mal, Eshita, Rajendhar Junjuri, Manoj Kumar Gundawar, and Alika Khare. "Temporal characterization of laser-induced plasma of tungsten in air." Laser and Particle Beams 38, no. 1 (January 17, 2020): 14–24. http://dx.doi.org/10.1017/s0263034619000788.

Full text
Abstract:
Abstract. In this manuscript, time-resolved laser-induced breakdown spectroscopy (LIBS) on a tungsten target in air, the coexistence of LTE among atoms and ions, and the fulfillment of the optically thin plasma condition are reported. The laser-induced plasma (LIP) of tungsten is generated by focusing the second harmonic of a Q-switched Nd:YAG laser of pulse width ~7 ns and repetition rate of 1 Hz on the tungsten target. The temporal evolution of the LIP of tungsten is recorded at four different incident laser fluences of 60, 120, 180, and 270 J/cm2. Several atomic and singly ionized lines of tungsten are identified in the LIP. For the estimation of the plasma temperature via the Boltzmann plot, the atomic tungsten (WI) transitions at 430.7, 449.4, 468.0, 484.3, 505.3, and 524.2 nm and the singly ionized tungsten (WII) transitions at 251.0, 272.9, and 357.2 nm are selected. The electron density is estimated using the Stark-broadened profile of the WI line at 430.2 nm. The McWhirter criterion for the local thermodynamic equilibrium (LTE) condition is verified under the present experimental conditions, and the relaxation time and diffusion length are estimated to account for the transient and inhomogeneous nature of the plasma. The optically thin plasma condition is studied by assessing the experimental intensity ratio of atomic lines and comparing it with the theoretical intensity ratio (branching ratio). The signal-to-noise ratio (SNR) is also obtained as a function of time with respect to the laser pulse and of incident laser fluence. All these observations indicate that the spectra should be recorded within the temporal window of 1–3.5 µs with respect to the laser pulse, where the plasma can be treated as optically thin and under LTE simultaneously, along with a large SNR.
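The Boltzmann-plot temperature estimation mentioned above reduces to a linear fit of ln(Iλ/gA) against the upper-level energy, with the slope giving -1/(kT). The line data below are synthetic placeholders (only the wavelengths echo those in the abstract), not the measured tungsten line parameters:

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, a_ki, e_upper_ev):
    """Excitation temperature from the slope of a Boltzmann plot.

    For an optically thin LTE plasma, ln(I*lambda / (g*A)) = -E_upper/(kT) + const,
    so a linear fit over several lines of one species yields T from the slope.
    All inputs are NumPy arrays over the selected emission lines.
    """
    y = np.log(intensity * wavelength_nm / (g_upper * a_ki))
    slope, _ = np.polyfit(e_upper_ev, y, 1)
    return -1.0 / (slope * K_B_EV)  # temperature in kelvin

# Synthetic check: generate line intensities for a 12 000 K plasma and recover T
T_true = 12_000.0
e_up = np.array([2.8, 3.1, 3.4, 3.9, 4.2, 4.5])             # eV (illustrative)
g = np.array([5, 7, 9, 7, 5, 9], dtype=float)               # illustrative
a = np.array([1.2e7, 3.4e7, 2.1e7, 5.5e7, 1.8e7, 4.0e7])    # s^-1 (illustrative)
lam = np.array([430.7, 449.4, 468.0, 484.3, 505.3, 524.2])  # nm
intensity = g * a / lam * np.exp(-e_up / (K_B_EV * T_true))
T_est = boltzmann_plot_temperature(intensity, lam, g, a, e_up)
```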
APA, Harvard, Vancouver, ISO, and other styles
41

Valler, Veronika, Jörg Franke, and Stefan Brönnimann. "Impact of different estimations of the background-error covariance matrix on climate reconstructions based on data assimilation." Climate of the Past 15, no. 4 (August 2, 2019): 1427–41. http://dx.doi.org/10.5194/cp-15-1427-2019.

Full text
Abstract:
Abstract. Data assimilation has been adapted in paleoclimatology to reconstruct past climate states. A key component of some assimilation systems is the background-error covariance matrix, which controls how the information from observations spreads into the model space. In ensemble-based approaches, the background-error covariance matrix can be estimated from the ensemble. Due to the usually limited ensemble size, the background-error covariance matrix is subject to the so-called sampling error. We test different methods to reduce the effect of sampling error in a published paleoclimate data assimilation setup. For this purpose, we conduct a set of experiments, where we assimilate early instrumental data and proxy records stored in trees, to investigate the effect of (1) the applied localization function and localization length scale; (2) multiplicative and additive inflation techniques; (3) temporal localization of monthly data, which applies if several time steps are estimated together in the same assimilation window. We find that the estimation of the background-error covariance matrix can be improved by additive inflation where the background-error covariance matrix is not only calculated from the sample covariance but blended with a climatological covariance matrix. Implementing a temporal localization for monthly resolved data also led to a better reconstruction.
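The additive-inflation idea the abstract finds most effective, blending the noisy ensemble sample covariance with a climatological covariance, can be sketched as follows. The blend weight, state dimension and ensemble size are illustrative, not those of the published setup:

```python
import numpy as np

def blended_covariance(ensemble, b_clim, alpha=0.5):
    """Blend the ensemble sample covariance with a climatological covariance.

    ensemble: (n_members, n_state) array of model states
    b_clim:   (n_state, n_state) climatological background-error covariance
    alpha:    weight on the climatological part (0 = pure ensemble)
    A small ensemble gives a noisy, rank-deficient sample covariance; mixing in
    a full-rank climatological matrix damps the sampling error.
    """
    b_ens = np.cov(ensemble, rowvar=False)
    return (1 - alpha) * b_ens + alpha * b_clim

rng = np.random.default_rng(1)
n_state = 20
members = rng.multivariate_normal(np.zeros(n_state), np.eye(n_state), size=5)
b = blended_covariance(members, b_clim=np.eye(n_state), alpha=0.5)
```

Because the climatological part here is 0.5 times the identity, the blended matrix is guaranteed full rank even though the 5-member sample covariance has rank at most 4.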
APA, Harvard, Vancouver, ISO, and other styles
42

Osuch, Marzena, Tomasz Wawrzyniak, and Adam Nawrot. "Diagnosis of the hydrology of a small Arctic permafrost catchment using HBV conceptual rainfall-runoff model." Hydrology Research 50, no. 2 (January 4, 2019): 459–78. http://dx.doi.org/10.2166/nh.2019.031.

Full text
Abstract:
Abstract Changes in active layer thickness (ALT) over Arctic and permafrost regions have an important impact on rainfall-runoff transformation. General warming is observed across Svalbard Archipelago and corresponds to increases in ground temperatures. Permafrost thaw and changes in ALT due to climate warming alter how water is routed and stored in catchments, and thus impact both surface and subsurface processes. The overall aim of the present study is to examine the relationships between temporal changes of active layer depth and hydrological model parameters, together with variation in the catchment response. The analysis was carried out for the small unglaciated catchment Fuglebekken, located in the vicinity of the Polish Polar Station Hornsund on Spitsbergen. For hydrological modelling, the conceptual rainfall-runoff HBV (Hydrologiska Byråns Vattenbalansavdelning) model was used. The model was calibrated and validated on runoff within subperiods. A moving window approach (3 weeks long) was applied to derive temporal variation of parameters. Model calibration, together with an estimation of parametric uncertainty, was carried out using the Shuffled Complex Evolution Metropolis algorithm. This allowed the dependence of HBV model parameters on ALT to be analysed. Also, we tested the influence of model simplification, correction of precipitation, and initial conditions on the modelling results.
APA, Harvard, Vancouver, ISO, and other styles
43

Paz Pellat, Fernando. "Minimización de los efectos atmosféricos en el índice espectral de la vegetación IVIS." REVISTA TERRA LATINOAMERICANA 36, no. 1 (January 22, 2018): 31. http://dx.doi.org/10.28940/terra.v36i1.230.

Full text
Abstract:
It is essential to minimize atmospheric effects on spectral information of remote sensors from space platforms to avoid underestimation of biophysical variables associated with satellite image data. In this paper, a generic algorithm was developed, based on sound theoretical arguments, to analyze time series of the ISVI spectral vegetation index (vegetation index based on iso-soil curves), thus avoiding the problems associated with the classic design of vegetation indices, where the spectral signal saturates quickly. The results, when applying the algorithm to pixel time series of AVHRR satellite images, showed that reduction and standardization of atmospheric effects in the ISVI was achieved. Using ISVI maximum values in time series (temporal window), a reasonable approximation to atmospheric conditions with minimum or standardized effects was obtained. In conclusion, although the scheme developed failed to eliminate the atmospheric effect on ISVI entirely, it was reduced to a minimum. The algorithm developed was simple enough for operational use, compared with atmospheric correction methods using radiative model inversions.
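The temporal-window maximum compositing used above to approximate minimally disturbed atmospheric conditions can be sketched as follows; the window length and synthetic index values are illustrative, not from the study:

```python
import numpy as np

def max_value_composite(index_series, window):
    """Maximum-value compositing of a vegetation-index time series.

    index_series: (n_dates, ny, nx) stack of per-date index values. Atmospheric
    contamination is assumed to depress the index, so the per-pixel maximum
    inside each temporal window approximates minimally disturbed conditions.
    Returns a (n_windows, ny, nx) composited stack (trailing dates that do not
    fill a whole window are dropped).
    """
    n = index_series.shape[0] // window * window
    trimmed = index_series[:n]
    return trimmed.reshape(-1, window, *trimmed.shape[1:]).max(axis=1)

rng = np.random.default_rng(2)
clean = np.full((12, 4, 4), 0.8)                # "true" index value
noise = rng.uniform(0.0, 0.5, clean.shape)      # atmospheric depression
composited = max_value_composite(clean - noise, window=4)
```

With four dates per window, each composited pixel keeps the least contaminated observation, so the result sits close to the clean value of 0.8.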
APA, Harvard, Vancouver, ISO, and other styles
44

Du, Wenhui, Zhihao Qin, Jinlong Fan, Maofang Gao, Fei Wang, and Bilawal Abbasi. "An Efficient Approach to Remove Thick Cloud in VNIR Bands of Multi-Temporal Remote Sensing Images." Remote Sensing 11, no. 11 (May 29, 2019): 1284. http://dx.doi.org/10.3390/rs11111284.

Full text
Abstract:
Cloud-free remote sensing images are required for many applications, such as land cover classification, land surface temperature retrieval and agricultural-drought monitoring. Cloud cover in remote sensing images can be pervasive, dynamic and often unavoidable. Current techniques of cloud removal for the VNIR (visible and near-infrared) bands still encounter the problem of pixel values estimated for the cloudy area being incomparable and inconsistent with the cloud-free region in the target image. In this paper, we propose an efficient approach to remove thick clouds and their shadows in VNIR bands using multi-temporal images with good maintenance of DN (digital number) value consistency. We constructed the spectral similarity between the target image and a reference one for DN value estimation of the cloudy pixels. The information reconstruction was done with the 10 neighboring cloud-free pair-pixels with the highest similarity over a small window centering the cloudy pixel between target and reference images. Four Landsat5 TM images around Nanjing city of Jiangsu Province in Eastern China were used to validate the approach over four representative surface patterns (mountain, plain, water and city) for diverse sizes of cloud cover. Comparison with the conventional approaches indicates high accuracy of the approach in cloud removal for the VNIR bands. The approach was applied to the Landsat8 OLI (Operational Land Imager) image on 29 April 2016 in the Nanjing area using two reference images. Very good consistency was achieved in the resulting images, which confirms that the proposed approach could serve as an alternative for cloud removal in the VNIR bands using multi-temporal images.
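The pair-pixel idea, estimating a cloudy pixel from the most similar cloud-free pixels shared between target and reference images, can be sketched as below. The similarity measure and the DN values are simplified assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def fill_cloudy_pixel(target_win, reference_win, ref_center, k=10):
    """Estimate a cloudy pixel's DN from the k most similar cloud-free pair-pixels.

    target_win, reference_win: 1-D arrays of cloud-free DN values from the same
    window positions in the target and reference images; ref_center is the
    reference image's DN at the cloudy pixel. Similarity is measured in the
    reference image, and the mean target/reference offset of the k best pairs
    transfers the reference DN into the target image.
    """
    similarity = np.abs(reference_win - ref_center)
    best = np.argsort(similarity)[:k]
    offset = np.mean(target_win[best] - reference_win[best])
    return ref_center + offset

rng = np.random.default_rng(3)
ref = rng.uniform(50, 200, size=100)            # reference-image window DNs
tgt = ref + 12.0 + rng.normal(0, 1.0, 100)      # target differs by an offset
estimate = fill_cloudy_pixel(tgt, ref, ref_center=120.0, k=10)
```

Because the target image here is the reference plus a constant offset of 12 DN (plus noise), the estimate lands near 132, consistent with the cloud-free part of the target image.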
APA, Harvard, Vancouver, ISO, and other styles
45

Hannaford, J., N. Mastrantonas, G. Vesuviano, and S. Turner. "An updated national-scale assessment of trends in UK peak river flow data: how robust are observed increases in flooding?" Hydrology Research 52, no. 3 (March 26, 2021): 699–718. http://dx.doi.org/10.2166/nh.2021.156.

Full text
Abstract:
Abstract A cluster of recent floods in the UK has prompted significant interest in the question of whether floods are becoming more frequent or severe over time. Many trend assessments have addressed this in recent decades, typically concluding that there is evidence for positive trends in flood magnitude at the national scale. However, trend testing is a contentious area, and the resilience of such conclusions must be tested rigorously. Here, we provide a comprehensive assessment of flood magnitude trends using the UK national flood dataset (NRFA Peak Flows). Importantly, we assess trends using this full dataset as well as a subset of near-natural catchments with high-quality flood data. While headline conclusions are useful for advancing national flood-risk policy, for on-the-ground flood-risk estimation it is important to unpack these local changes to determine how climate-driven trends compare with those from the wider dataset that are subject to a wide range of human disturbances and data limitations. We also examine the sensitivity of reported trends to changes in study time window using a ‘multitemporal’ analysis. We find that the headline claim of increased flooding generally holds up regionally to nationally, although we show a much more complicated picture of spatio-temporal variability. While some reported trends, such as increased flooding in northern and western Britain, appear to be robust, trends in other regions are more mixed spatially and temporally – for example, trends in recent decades are not necessarily representative of longer-term change, and within regions (e.g. in southeast England) increasing and decreasing trends can be found in close proximity. The results and methodological toolkit provided here could provide such supporting information to practitioners.
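A 'multitemporal' sensitivity analysis of this kind, testing how a detected trend depends on the chosen start and end year, can be sketched with a moving-window Mann-Kendall test. The synthetic peak-flow series and window length below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import kendalltau

def multitemporal_trends(years, values, min_length=30):
    """Mann-Kendall trend tests over all sufficiently long start/end windows.

    Returns a list of (start_year, end_year, tau, p_value) so the sensitivity
    of a detected trend to the chosen study period can be inspected.
    """
    out = []
    n = len(years)
    for i in range(n - min_length + 1):
        for j in range(i + min_length, n + 1):
            tau, p = kendalltau(years[i:j], values[i:j])
            out.append((years[i], years[j - 1], tau, p))
    return out

rng = np.random.default_rng(4)
yrs = np.arange(1970, 2021)
peaks = 100 + 0.8 * (yrs - 1970) + rng.normal(0, 5, yrs.size)  # upward trend
results = multitemporal_trends(yrs, peaks, min_length=40)
```

Plotting tau (or its sign and significance) against start and end year gives the kind of triangular sensitivity diagram commonly used in multitemporal trend studies.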
APA, Harvard, Vancouver, ISO, and other styles
46

Duricy, Erin, Brian Coffman, Mark Curtis, Rebekah Farris, Vanessa Fishel, Xi Ren, Dylan Seebold, Natasha Torrence, Yiming Wang, and Dean Salisbury. "S137. LEFT HEMISPHERE MEG DEFICIT IN PITCH AND DURATION MISMATCH NEGATIVITY IN FIRST EPISODE PSYCHOSIS." Schizophrenia Bulletin 46, Supplement_1 (April 2020): S87—S88. http://dx.doi.org/10.1093/schbul/sbaa031.203.

Full text
Abstract:
Abstract Background Mismatch negativity (MMN) is an event-related potential (ERP) that is severely impaired in long-term schizophrenia. However, though largely debated, numerous electroencephalography (EEG) studies suggest that MMN is less reduced in individuals at their first episode of psychosis (FEP). Pitch MMN (pMMN) is not reduced, although duration MMN (dMMN) may be reduced compared to healthy controls (HC), and may show left hemisphere deficits early on. Few MMN studies in FEP have utilized magnetoencephalography (MEG). MEG could potentially be more sensitive to circumscribed differences, given its ability to more effectively localize MMN sources within the auditory cortex. Using MEG, we compared pMMN and dMMN between FEP and HC to identify potential source location- and timing-dependent differences in early disease course. Methods Simultaneous MEG and EEG recordings of pMMN and dMMN were obtained with an oddball paradigm from 15 FEP and 19 HC. Of those, 12 FEP and 11 HC had usable EEG recordings, with the remainder lost due to a circuit board failure. MMN was calculated by subtraction of the sensor-space average to the standard tones from the average to the deviant tones. For EEG, mean amplitudes across fronto-central electrodes (F1, Fz, F2, FC1, FCz, FC2) were calculated over a 30 msec window at group peak latency. For MEG data, source activity was determined in Brainstorm using minimum norm estimation, multisphere head models, and individual MRI cortical surface reconstructions. Using the HCP Brain Atlas, regions of interest (ROIs) were determined based on areas of peak activity at two time windows for each task: an Early time window of 105–115 ms for both pMMN and dMMN, and a Late time window of 125–135 ms for pMMN and 155–165 ms for dMMN. Mean amplitudes were calculated over six auditory cortex ROIs: A1, Brodmann’s Area 43, Brodmann’s Area 52, Lateral Belt, Medial Belt, and Parabelt. 
Mixed-model ANOVA was then used to compare groups, regions, hemispheres, time windows, and tasks using SPSS software. Results For EEG peak MMNs, there were no significant differences between groups in pMMN or dMMN (p’s > 0.1). For MEG sources, however, ANOVA indicated a Region x Hemisphere x Group interaction (p = 0.038), but no significant regional differences between Pitch and Duration or for either Early or Late time windows (p’s > 0.1). When ROI data were averaged across tasks and time windows, the right hemisphere showed no difference between groups, whereas the left hemisphere showed a trend for reduced MMN activity in FEP (p = 0.069). Post-hoc comparisons of the six left hemisphere ROIs revealed that all regions but Brodmann’s Area 43 showed significant reduction in FEP (p’s < 0.05). Discussion These findings support previous literature that indicate relatively preserved EEG scalp measures of MMN in FEP. However, the MEG source data indicate a deficit in the left hemisphere auditory cortex of FEP. This left hemisphere reduction was observed in 5 out of 6 source regions and at both time windows. This suggests broad left hemisphere auditory cortex pathophysiology early in disease course to which EEG may be relatively less sensitive by comparison with MEG. In conclusion, FEP may be associated with a selective left hemisphere MMN deficit in auditory regions on the superior temporal plane, but not the opercular cortex (i.e. Brodmann’s Area 43).
APA, Harvard, Vancouver, ISO, and other styles
47

Solabarrieta, Lohitzune, Sergey Frolov, Mike Cook, Jeff Paduan, Anna Rubio, Manuel González, Julien Mader, and Guillaume Charria. "Skill Assessment of HF Radar–Derived Products for Lagrangian Simulations in the Bay of Biscay." Journal of Atmospheric and Oceanic Technology 33, no. 12 (December 2016): 2585–97. http://dx.doi.org/10.1175/jtech-d-16-0045.1.

Full text
Abstract:
Abstract. Since January 2009, two long-range high-frequency (HF) radar systems have been collecting hourly high-spatial-resolution surface current data in the southeastern corner of the Bay of Biscay. The temporal resolution of the HF radar surface currents permits simulating drifter trajectories with the same time step as that of real drifters deployed in the region in 2009. The main goal of this work is to compare real drifter trajectories with trajectories computed from HF radar currents obtained using different methods, including forecast currents. Open-boundary modal analysis (OMA) is applied to the radar radial velocities, and then a linear autoregressive model on the empirical orthogonal function (EOF) decomposition of a historical data series is used to forecast OMA currents. Additionally, the accuracy of the forecast method in terms of the spatial and temporal distribution of the Lagrangian distances between observations and forecasts is investigated for a 4-yr period (2009–12). The skills of the different HF radar products are evaluated within a 48-h window. The mean distances between real trajectories and their radar-derived counterparts range from 4 to 5 km for real-time and forecast currents after 12 hours of simulations. The forecast model improves on persistence (i.e., the simulations obtained by using the last available OMA fields as a constant variable) after 6 hours of simulation and improves the estimation of trajectories by up to 28% after 48 hours. The performance of the forecast is observed to be variable in space and time, related to the different ocean processes governing the local ocean circulation.
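Simulating drifter trajectories from gridded surface currents amounts to integrating virtual particles through the velocity field. The forward-Euler, nearest-cell sketch below is a deliberate simplification of an operational OMA-based simulator (which would interpolate currents in space and time); the grid, current field and step sizes are illustrative:

```python
import numpy as np

def advect_drifter(u, v, x0, y0, dt_hours, n_steps, dx_km):
    """Advect a virtual drifter through a gridded surface-current field.

    u, v: (ny, nx) eastward/northward current components in m/s, held steady
    over the integration. Positions are in grid-cell units; dx_km is the cell
    size. Forward-Euler stepping with nearest-cell velocity sampling.
    """
    x, y = float(x0), float(y0)
    traj = [(x, y)]
    for _ in range(n_steps):
        i = min(max(int(round(y)), 0), u.shape[0] - 1)
        j = min(max(int(round(x)), 0), u.shape[1] - 1)
        # metres moved in dt_hours, converted to grid cells
        x += u[i, j] * 3600 * dt_hours / (dx_km * 1000)
        y += v[i, j] * 3600 * dt_hours / (dx_km * 1000)
        traj.append((x, y))
    return traj

# Uniform 0.25 m/s eastward current: 0.9 km/h, i.e. 0.9 cells/h on a 1 km grid
u = np.full((10, 10), 0.25)
v = np.zeros((10, 10))
path = advect_drifter(u, v, x0=2.0, y0=5.0, dt_hours=1.0, n_steps=4, dx_km=1.0)
```

Comparing such simulated paths against GPS-tracked drifters, position by position over a fixed window, gives exactly the kind of Lagrangian separation distances reported in the abstract.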
APA, Harvard, Vancouver, ISO, and other styles
48

Babenhauserheide, A., S. Basu, S. Houweling, W. Peters, and A. Butz. "Comparing the CarbonTracker and TM5-4DVar data assimilation systems for CO2 surface flux inversions." Atmospheric Chemistry and Physics Discussions 15, no. 6 (March 25, 2015): 8883–932. http://dx.doi.org/10.5194/acpd-15-8883-2015.

Full text
Abstract:
Abstract. Data assimilation systems allow for estimating surface fluxes of greenhouse gases from atmospheric concentration measurements. Good knowledge about fluxes is essential to understand how climate change affects ecosystems and to characterize feedback mechanisms. Based on assimilation of more than one year of atmospheric in-situ concentration measurements, we compare the performance of two established data assimilation models, CarbonTracker and TM5-4DVar, for CO2 flux estimation. CarbonTracker uses an Ensemble Kalman Filter method to optimize fluxes on ecoregions. TM5-4DVar employs a 4-D variational method and optimizes fluxes on a 6° × 4° longitude/latitude grid. Harmonizing the input data allows analyzing the strengths and weaknesses of the two approaches by direct comparison of the modelled concentrations and the estimated fluxes. We further assess the sensitivity of the two approaches to the density of observations and operational parameters such as temporal and spatial correlation lengths. Our results show that both models provide optimized CO2 concentration fields of similar quality. In Antarctica, CarbonTracker underestimates the wintertime CO2 concentrations, since its 5-week assimilation window does not allow for adjusting the far-away surface fluxes in response to the detected concentration mismatch. Flux estimates by CarbonTracker and TM5-4DVar are consistent and robust for regions with good observation coverage, while regions with low observation coverage reveal significant differences. In South America, the fluxes estimated by TM5-4DVar suffer from limited representativeness of the few observations. For the North American continent, mimicking the historical increase of measurement network density shows improving agreement between CarbonTracker and TM5-4DVar flux estimates for increasing observation density.
APA, Harvard, Vancouver, ISO, and other styles
49

Goto, Daisuke, Yu Morino, Toshimasa Ohara, Tsuyoshi Thomas Sekiyama, Junya Uchida, and Teruyuki Nakajima. "Application of linear minimum variance estimation to the multi-model ensemble of atmospheric radioactive Cs-137 with observations." Atmospheric Chemistry and Physics 20, no. 6 (March 25, 2020): 3589–607. http://dx.doi.org/10.5194/acp-20-3589-2020.

Full text
Abstract:
Abstract. Great efforts have been made to simulate atmospheric pollutants, but their spatial and temporal distributions are still highly uncertain. Observations can measure their concentrations with high accuracy but cannot estimate their spatial distributions due to the sporadic locations of sites. Here, we propose an ensemble method by applying a linear minimum variance estimation (LMVE) between multi-model ensemble (MME) simulations and measurements to derive a more realistic distribution of atmospheric pollutants. The LMVE is a classical and basic version of data assimilation, although the estimation itself is still useful for obtaining the best estimates by combining simulations and observations without a large amount of computer resources, even for high-resolution models. In this study, we adopt the proposed methodology for atmospheric radioactive caesium (Cs-137) in atmospheric particles emitted from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in March 2011. The uniqueness of this approach includes (1) the availability of observed Cs-137 concentrations near the surface at approximately 100 sites, thus providing dense coverage over eastern Japan; (2) the simplicity of identifying the emission source of Cs-137 due to the point source of FDNPS; (3) the novelty of MME with the high-resolution model (3 km horizontal grid) over complex terrain in eastern Japan; and (4) the strong need to better estimate the Cs-137 distribution due to its inhalation exposure among residents in Japan. The ensemble size is six, including two atmospheric transport models: the Weather Research and Forecasting – Community Multi-scale Air Quality (WRF-CMAQ) model and non-hydrostatic icosahedral atmospheric model (NICAM). 
The results showed that the MME that estimated Cs-137 concentrations using all available sites had the lowest geometric mean bias (GMB) against the observations (GMB =1.53), the lowest uncertainties based on the root mean square error (RMSE) against the observations (RMSE =9.12 Bq m−3), the highest Pearson correlation coefficient (PCC) with the observations (PCC =0.59) and the highest fraction of data within a factor of 2 (FAC2) with the observations (FAC2 =54 %) compared to the single-model members, which provided higher biases (GMB =1.83–4.29, except for 1.20 obtained from one member), higher uncertainties (RMSE =19.2–51.2 Bq m−3), lower correlation coefficients (PCC =0.29–0.45) and lower precision (FAC2 =10 %–29 %). At the model grid, excluding the measurements, the MME-estimated Cs-137 concentration was estimated by a spatial interpolation of the variance used in the LMVE equation using the inverse distance weights between the nearest two sites. To test this assumption, the available measurements were divided into two categories, i.e. learning and validation data; thus, the assumption for the spatial interpolation was found to guarantee a moderate PCC value (> 0.4) within an approximate distance of at least 70 km. Extra sensitivity tests for several parameters, i.e. the site number and the weighting coefficients in the spatial interpolation, the time window in the LMVE and the ensemble size, were performed. In conclusion, the important assumptions were the time window and the ensemble size; i.e. a shorter time window (the minimum in this study was 1 h, which is the observation interval) and a larger ensemble size (the maximum in this study was six, but five is also acceptable if the members are effectively selected) generated better results.
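The linear minimum variance estimate at a single location is a variance-weighted blend of model and observation. The sketch below shows the scalar case with illustrative numbers; the paper's method additionally interpolates the variances in space between measurement sites:

```python
def lmve_estimate(model_value, model_var, obs_value, obs_var):
    """Linear minimum variance estimate combining a model value and an observation.

    For a single location, the optimal linear blend weights each source by the
    inverse of its error variance; the analysis variance is always smaller than
    either input variance, which is why blending beats either source alone.
    """
    w = model_var / (model_var + obs_var)   # weight given to the observation
    analysis = model_value + w * (obs_value - model_value)
    analysis_var = model_var * obs_var / (model_var + obs_var)
    return analysis, analysis_var

# Example: an MME mean Cs-137 concentration (Bq/m^3) vs a nearby measurement;
# the variances are illustrative, not values from the study
analysis, avar = lmve_estimate(model_value=8.0, model_var=4.0,
                               obs_value=12.0, obs_var=1.0)
```

Here the observation is four times more precise than the model, so the analysis (11.2) sits much closer to the observation, and the analysis variance (0.8) is below both input variances.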
APA, Harvard, Vancouver, ISO, and other styles
50

Rochoux, M. C., C. Emery, S. Ricci, B. Cuenot, and A. Trouvé. "Towards predictive data-driven simulations of wildfire spread – Part 2: Ensemble Kalman Filter for the state estimation of a front-tracking simulator of wildfire spread." Natural Hazards and Earth System Sciences Discussions 2, no. 5 (May 28, 2014): 3769–820. http://dx.doi.org/10.5194/nhessd-2-3769-2014.

Full text
Abstract:
Abstract. This paper is the second part in a series of two articles, which aims at presenting a data-driven modeling strategy for forecasting wildfire spread scenarios based on the assimilation of observed fire front location and on the sequential correction of model parameters or model state. This model relies on an estimation of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's ROS formulation, in order to propagate the fire front with a level-set-based front-tracking simulator. In Part I, a data assimilation system based on an ensemble Kalman filter (EnKF) was implemented to provide a spatially-uniform correction of biomass fuel and wind parameters and thereby produce an improved forecast of the wildfire behavior (addressing uncertainties in the input parameters of the ROS model only). In Part II, the objective of the EnKF algorithm is to sequentially update the two-dimensional coordinates of the markers along the discretized fire front, in order to provide a spatially-distributed correction of the fire front location and thereby a more reliable initial condition for further model time-integration (addressing all sources of uncertainties in the ROS model). The resulting prototype data-driven wildfire spread simulator is first evaluated in a series of verification tests using synthetically-generated observations; tests include representative cases with spatially-varying biomass properties and temporally-varying wind conditions. In order to properly account for uncertainties during the EnKF update step and to accurately represent error correlations along the fireline, it is shown that members of the EnKF ensemble must be generated through variations in estimates of the fire initial location as well as through variations in the parameters of the ROS model.
The performance of the prototype simulator based on state estimation or parameter estimation is then evaluated by comparison with data taken from a controlled grassland fire experiment. Results indicate that data-driven simulations are capable of correcting inaccurate predictions of the fire front location and of subsequently providing an optimized forecast of the wildfire behavior at future lead-times. The complementary benefits of both parameter estimation and state estimation approaches, in terms of analysis and forecast performance, are also emphasized. In particular, it is found that the size of the assimilation window must be specified adequately with the persistence of the model initial condition and/or with the temporal and spatial variability of the environmental conditions in order to track sudden changes in wildfire behavior.
APA, Harvard, Vancouver, ISO, and other styles
