Journal articles on the topic "Regular grid weighted smoothing"

To see other types of publications on this topic, follow the link: Regular grid weighted smoothing.

Format your source citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Regular grid weighted smoothing".

Next to every entry in the list there is an "Add to bibliography" button. Click it, and a bibliographic reference to the selected work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles from various disciplines and compile your bibliography correctly.

1

Lauritzen, P. H., J. T. Bacmeister, P. F. Callaghan, and M. A. Taylor. "NCAR global model topography generation software for unstructured grids." Geoscientific Model Development Discussions 8, no. 6 (June 22, 2015): 4623–51. http://dx.doi.org/10.5194/gmdd-8-4623-2015.

Abstract:
Abstract. It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 – Spectral Elements dynamical core) are shown.
2

Estuti, Abdallah A., and Elemér Litvai. "Post-extrapolation for specified time-step results without interpolation in MOC-based 1D hydraulic transients and gas release computations." Journal of Computational and Applied Mechanics 18, no. 1 (2023): 85–95. http://dx.doi.org/10.32973/jcam.2023.003.

Abstract:
The goal of the paper is to present a supplementary step called post-extrapolation. When applied to the well-known method of characteristics (MOC), it ensures the continuous use of the specified time steps, or regular numerical grid, without interpolations during computations of transients in 1D two-phase flow in straight elastic pipes. The new method consists of two steps, the first being a typical MOC step in which the C− and C+ characteristics start from regular nodal points, so that the point of intersection may differ from a regular node. After defining the variables there, the method transforms them to the nearby regular grid point, using the first derivatives contained in the original nonlinear governing equations, evaluated numerically from the variables obtained earlier at the neighboring nodes. The procedure needs no interpolations; it deals with grid-point values only. Instead of Courant-type stability conditions, shock-wave catching and smoothing techniques help ensure numerical stability over broad ranges of parameters such as the closing time of a valve and the initial gas content of the fluid. Comparisons with runs of traditional codes under itemized boundary conditions, and measurements on a simple TPV (tank-pipe-valve) setup, show acceptable scatter.
3

Böhm, Gualtiero, and Aldo L. Vesnaver. "In quest of the grid." GEOPHYSICS 64, no. 4 (July 1999): 1116–25. http://dx.doi.org/10.1190/1.1444618.

Abstract:
The possible nonuniqueness and inaccuracy of tomographic inversion solutions may be the result of an inadequate discretization of the model space with respect to the acquisition geometry and the velocity field sought. Void pixels and linearly dependent equations are introduced if the grid shape does not match the spatial distribution of rays, originating the well‐known null space. This is a common drawback when using regular pixels. By definition, the null space does not depend on the picked traveltimes, and so we cannot eliminate it by minimising the traveltime residuals. We show that the inversion quality can be improved by following a trial and error approach, that is, by adapting the pixels’ shape and distribution to the layer interfaces and velocity field. The resolution can be increased or decreased locally to search for an optimal grid, although this introduces a personal bias. On the other hand, we can so decide where, why, and which a priori information is introduced in the sought velocity field, which is hardly feasible by managing other stabilising tools such as damping factors and smoothing filters.
4

Billings, Stephen D., Garry N. Newsam, and Rick K. Beatson. "Smooth fitting of geophysical data using continuous global surfaces." GEOPHYSICS 67, no. 6 (November 2002): 1823–34. http://dx.doi.org/10.1190/1.1527082.

Abstract:
Continuous global surfaces (CGS) are a general framework for interpolation and smoothing of geophysical data. The first of two smoothing techniques we consider in this paper is generalized cross validation (GCV), which is a bootstrap measure of the predictive error of a surface that requires no prior knowledge of noise levels. The second smoothing technique is to define the CGS surface with fewer centers than data points, and compute the fit by least squares (LSQR); the noise levels are implicitly estimated by the number and placement of the centers relative to the data points. We show that both smoothing methods can be implemented using extensions to the existing fast framework for interpolation, so that it is now possible to construct realistic smooth fits to the very large data sets typically collected in geophysics. Thin‐plate spline and kriging surfaces with GCV smoothing appear to produce realistic fits to noisy radiometric data. The resulting surfaces are similar, yet the thin‐plate spline required less parameterization. Given the simplicity and parsimony of GCV, this makes a combination of the two methods a reasonable default choice for the smoothing problem. LSQR smooth fitting with sinc functions defined on a regular grid of centers, effectively low‐pass filters the data and produces a reasonable surface, although one not as visually appealing as for splines and kriging.
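As a rough illustration of the second smoothing route described above (a least-squares fit with fewer centers than data points), the sketch below fits noisy 1-D samples with sinc basis functions on a coarse regular grid of centers; the data, center spacing, and basis choice are assumptions of this illustration, not the paper's fast CGS implementation.

```python
# Minimal 1-D sketch of least-squares smoothing with fewer basis centers than data
# points; illustrative only, not the paper's continuous-global-surface framework.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 500))            # irregular sample locations
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)   # noisy observations

centers = np.linspace(0.0, 10.0, 25)                 # coarse regular grid of centers
h = centers[1] - centers[0]                          # spacing sets the implicit cutoff

# Design matrix of sinc basis functions; fewer columns than rows => implicit smoothing.
A = np.sinc((x[:, None] - centers[None, :]) / h)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the smooth, effectively low-pass-filtered fit on a regular output grid.
xg = np.linspace(0.0, 10.0, 200)
fit = np.sinc((xg[:, None] - centers[None, :]) / h) @ coef
```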
5

Chevrot, Sébastien, and Maximilien Lehujeur. "Eikonal surface wave tomography with smoothing splines—application to Southern California." Geophysical Journal International 229, no. 3 (January 29, 2022): 1927–41. http://dx.doi.org/10.1093/gji/ggac034.

Abstract:
SUMMARY The densification of both permanent and temporary seismic networks has raised new interest in surface wave eikonal tomography from which phase velocity maps can be obtained without resolving a tomographic inverse problem. However, eikonal tomography requires to reconstruct traveltime surfaces from a discrete number of measurements obtained at the station locations, which can be challenging. We present a new method to reconstruct these traveltime surfaces with smoothing splines discretized in a regular 2-D Cartesian grid. We impose Neumann boundary conditions so that the phase gradients on the edges of the grid are equal to the apparent slownesses of the average plane wave along the normal direction measured by beamforming. Using the eikonal equation, phase velocity maps are then derived from the norm of the gradient of the interpolated traveltime maps. The method is applied to Rayleigh waves recorded by the Southern California Seismic Network to derive phase velocity surfaces. Robust, stable and finely resolved phase velocity maps at 25 and 33 s period are obtained after averaging the phase velocity maps derived from the analysis of a selection of recent large (Mw ≥ 6.5) teleseismic events. The phase velocity map at 25 s mainly constrains the thickness of the Southern California crust, with results that are in excellent agreement with previous tomographic studies.
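A minimal sketch of the eikonal step described above: once a traveltime surface has been interpolated onto a regular Cartesian grid, the phase velocity follows from c = 1/|∇T|. The grid spacing and the synthetic traveltime plane below are assumptions; the smoothing-spline fit and boundary conditions of the paper are not reproduced.

```python
# Sketch: phase velocity from the gradient of a gridded traveltime map (eikonal equation).
import numpy as np

dx = dy = 1.0                                  # grid spacing in km (assumed)
# Stand-in for an interpolated traveltime surface T(y, x), in seconds.
T = np.fromfunction(lambda i, j: 0.30 * i * dy + 0.25 * j * dx, (200, 300))

dT_dy, dT_dx = np.gradient(T, dy, dx)          # numerical gradient along y, then x
slowness = np.hypot(dT_dx, dT_dy)              # |grad T|, apparent slowness in s/km
phase_velocity = 1.0 / slowness                # km/s, from c = 1 / |grad T|
```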
6

Tai, Chang-Kou. "On the Aliasing and Resolving Power of Sea Level Low-Pass Filtered onto a Regular Grid from Along-Track Altimeter Data of Uncoordinated Satellites: The Smoothing Strategy." Journal of Atmospheric and Oceanic Technology 25, no. 4 (April 1, 2008): 617–24. http://dx.doi.org/10.1175/2007jtecho514.1.

Abstract:
Abstract It is shown that smoothing (low-pass filtering) along-track altimeter data of uncoordinated satellites onto a regular space–time grid helps reduce the overall energy level of the aliasing from the aliasing levels of the individual satellites. The rough rule of thumb is that combining N satellites reduces the energy of the overall aliasing to 1/N of the average aliasing level of the N satellites. Assuming the aliasing levels of these satellites are roughly of the same order of magnitude (i.e., assuming that no special signal spectral content significantly favors one satellite over others at certain locations), combining data from uncoordinated satellites is clearly the right strategy. Moreover, contrary to the case of coordinated satellites, this reduction of aliasing is not achieved by the enhancement of the overall resolving power. In fact (by the strict definition of the resolving power as the largest bandwidths within which a band-limited signal remains free of aliasing), the resolving power is reduced to its smallest possible extent. If one characterizes the resolving power of each satellite as a spectral space within which all band-limited signals are resolved by the satellite, then the combined resolving power of the N satellite is characterized by the spectral space that is the intersection of all N spectral spaces (i.e., the spectral space that is common to all the resolved spectral spaces of the N satellites, hence the smallest). It is also shown that the least squares approach is superior to the smoothing approach in reducing the aliasing and upholding the resolving power of the raw data. To remedy one of the shortcomings of the smoothing approach, the author recommends a multismoother smoothing strategy that tailors the smoother to the sampling characteristics of each satellite. Last, a strategy based on the least squares approach is also described for combining data from uncoordinated satellites.
7

Zhang, Jie, Ping Duan, Jia Li, and Jiajia Liu. "Electromagnetic Radiation Space Field Construction Collected along the Road Based on Layered Radial Basis Function." Applied Sciences 13, no. 10 (May 17, 2023): 6153. http://dx.doi.org/10.3390/app13106153.

Abstract:
The electromagnetic radiation (EMR) data collected along a road have a largely empty region overall, while they have a linear distribution locally. Moreover, the traditional spatial interpolation method is not suitable for the electromagnetic radiation space field (EMR-SF) construction collected along the road. In this paper, a layered radial basis function (LRBF) method is proposed to generate the EMR-SF, which interpolates from outside to inside in a layered strategy. First, the regular grid points are constructed based on RBF within the range of sampling data and then are layered based on Ripley’s K function. Second, on the basis of layering, the EMR of grid points is generated layer by layer using the LRBF method. Finally, EMR-SF is constructed by using the sampling data and grid points. The LRBF method is applied to EMR data from an area of Yunnan Normal University in Kunming, China. The results show that the LRBF accuracy is higher than that of the ordinary kriging (OK) and inverse-distance-weighted (IDW) interpolation methods. The LRBF interpolation accuracy can be improved through the strategy of regular grid point construction and layering, and the EMR-SF constructed by LRBF is more realistic than OK and IDW.
8

Chen, Xuejun, Jing Zhao, Wenchao Hu, and Yufeng Yang. "Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method." Abstract and Applied Analysis 2014 (2014): 1–21. http://dx.doi.org/10.1155/2014/984268.

Abstract:
As one of the most promising renewable resources in electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates with strong variation, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for the regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.
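The five-three-Hanning (53H) smoother mentioned above is commonly described as a running median of five, then a running median of three, then a Hanning-weighted (1/4, 1/2, 1/4) running mean; the sketch below follows that common description and is not taken from the paper.

```python
# Sketch of a 53H smoother: median-of-5, median-of-3, then Hanning (0.25, 0.5, 0.25) weights.
import numpy as np

def running_median(x, width):
    half = width // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + width]) for i in range(x.size)])

def smooth_53h(x):
    x = np.asarray(x, dtype=float)
    m5 = running_median(x, 5)
    m3 = running_median(m5, 3)
    p = np.pad(m3, 1, mode="edge")
    return 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]

# Example: pre-smooth a noisy ten-minute wind-speed series before decomposition.
rng = np.random.default_rng(1)
wind = 8.0 + np.cumsum(0.2 * rng.standard_normal(144))
smoothed = smooth_53h(wind)
```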
9

Litvinchev, Igor, and Edith Lucero Ozuna Espinosa. "Integer Programming Formulations for Approximate Packing Circles in a Rectangular Container." Mathematical Problems in Engineering 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/317697.

Abstract:
A problem of packing a limited number of unequal circles in a fixed size rectangular container is considered. The aim is to maximize the (weighted) number of circles placed into the container or minimize the waste. This problem has numerous applications in logistics, including production and packing for the textile, apparel, naval, automobile, aerospace, and food industries. Frequently the problem is formulated as a nonconvex continuous optimization problem which is solved by heuristic techniques combined with local search procedures. New formulations are proposed for approximate solution of packing problem. The container is approximated by a regular grid and the nodes of the grid are considered as potential positions for assigning centers of the circles. The packing problem is then stated as a large scale linear 0-1 optimization problem. The binary variables represent the assignment of centers to the nodes of the grid. Nesting circles inside one another is also considered. The resulting binary problem is then solved by commercial software. Numerical results are presented to demonstrate the efficiency of the proposed approach and compared with known results.
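One way to write the grid-based 0-1 model sketched in this abstract (ignoring the nesting option) is shown below; the symbols are introduced here for illustration and are not copied from the paper.

```latex
% x_{ik} = 1 if a circle of type k (radius r_k, weight w_k) is centred at grid node i;
% d_{ij} is the distance between nodes i and j, n_k the number of circles of type k,
% and N_k the set of nodes lying at least r_k away from the container walls.
\begin{align*}
\max\ & \sum_{k}\sum_{i\in N_k} w_k\,x_{ik}\\
\text{s.t.}\ & x_{ik}+x_{jl}\le 1 \quad \text{for all } (i,k)\ne(j,l) \text{ with } i\in N_k,\ j\in N_l,\ d_{ij}<r_k+r_l,\\
& \sum_{i\in N_k} x_{ik}\le n_k \quad \text{for every circle type } k,\\
& x_{ik}\in\{0,1\}.
\end{align*}
```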
10

Tait, Andrew, and Ross Woods. "Spatial Interpolation of Daily Potential Evapotranspiration for New Zealand Using a Spline Model." Journal of Hydrometeorology 8, no. 3 (June 1, 2007): 430–38. http://dx.doi.org/10.1175/jhm572.1.

Abstract:
Abstract Potential evapotranspiration (PET) is an important component of water balance calculations, and these calculations form an equally important role in applications such as irrigation scheduling, pasture productivity forecasts, and groundwater recharge and streamflow modeling. This paper describes a method of interpolating daily PET data calculated at climate stations throughout New Zealand onto a regular 0.05° latitude–longitude grid using a thin-plate smoothing spline model. Maximum use is made of observational data by combining both Penman and Priestley–Taylor PET calculations and raised pan evaporation measurements. An analysis of the interpolation error using 20 validation sites shows that the average root-mean-square error varies between about 1 mm in the summer months to about 0.4 mm in winter. It is advised that interpolated data for areas above 500-m elevation should be used with caution, however, due to the paucity of input data from high-elevation sites.
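A minimal sketch of the interpolation step described above, using SciPy's thin-plate-spline radial basis interpolator as a stand-in for the spline model of the paper; the station locations, PET values, 0.05° grid, and smoothing value are illustrative assumptions.

```python
# Sketch: thin-plate smoothing spline interpolation of station PET onto a regular grid.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
lon = rng.uniform(166.0, 179.0, 150)                              # station longitudes (made up)
lat = rng.uniform(-47.0, -34.0, 150)                              # station latitudes (made up)
pet = 3.0 + 0.1 * (lat + 40.0) + 0.2 * rng.standard_normal(150)   # daily PET in mm

spline = RBFInterpolator(np.column_stack([lon, lat]), pet,
                         kernel="thin_plate_spline", smoothing=1.0)

glon, glat = np.meshgrid(np.arange(166.0, 179.0, 0.05),
                         np.arange(-47.0, -34.0, 0.05))
pet_grid = spline(np.column_stack([glon.ravel(), glat.ravel()])).reshape(glon.shape)
```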
11

Morgan, Ernest F., Omar Abdel-Rahim, Tamer F. Megahed, Junya Suehiro, and Sobhy M. Abdelkader. "Fault Ride-Through Techniques for Permanent Magnet Synchronous Generator Wind Turbines (PMSG-WTGs): A Systematic Literature Review." Energies 15, no. 23 (December 1, 2022): 9116. http://dx.doi.org/10.3390/en15239116.

Abstract:
Global warming and rising energy demands have increased renewable energy (RE) usage globally. Wind energy has become the most technologically advanced renewable energy source. Wind turbines (WTs) must ride through faults to ensure power system stability. On the flip side, permanent magnet synchronous generators (PMSG)-based wind turbine power plants (WTPPs) are susceptible to grid voltage fluctuations and require extra regulations to maintain regular operations. Due to recent changes in grid code standards, it has become vital to explore alternate fault ride-through (FRT) methods to ensure their capabilities. This research will ensure that FRT solutions available via the Web of Science (WoS) database are vetted and compared in hardware retrofitting, internal software control changes, and hybrid techniques. In addition, a bibliometric analysis is provided, which reveals an ever-increasing volume of works dedicated to the topic. After that, a literature study of FRT techniques for PMSG WTs is carried out, demonstrating the evolution of these techniques over time. This paper concludes that additional research is required to enhance FRT capabilities in PMSG wind turbines and that further attention to topics, such as machine learning tools and the combination of FRT and wind power smoothing approaches, should arise in the following years.
12

Naghizadeh, Mostafa, and Mauricio D. Sacchi. "Multistep autoregressive reconstruction of seismic records." GEOPHYSICS 72, no. 6 (November 2007): V111—V118. http://dx.doi.org/10.1190/1.2771685.

Abstract:
Linear prediction filters in the f-x domain are widely used to interpolate regularly sampled data. We study the problem of reconstructing irregularly missing data on a regular grid using linear prediction filters. We propose a two-stage algorithm. First, we reconstruct the unaliased part of the data spectrum using a Fourier method (minimum-weighted norm interpolation). Then, prediction filters for all the frequencies are extracted from the reconstructed low frequencies. The latter is implemented via a multistep autoregressive (MSAR) algorithm. Finally, these prediction filters are used to reconstruct the complete data in the f-x domain. The applicability of the proposed method is examined using synthetic and field data examples.
13

Lee, Jinwook, Jongyun Byun, Seunghyun Hwang, Changhyun Jun, and Jongjin Baik. "Spatiotemporal Analysis of Variability in Domestic PM10 Data Using Grid Based Spatial Interpolation Method." Journal of the Korean Society of Hazard Mitigation 22, no. 1 (February 28, 2022): 7–19. http://dx.doi.org/10.9798/kosham.2022.22.1.7.

Abstract:
This study analyzed spatiotemporal variability in domestic PM10 data from 2001 to 2019. With the annual number of stations ranging between 175 and 484, the point data at each station were spatially interpolated using the inverse distance weighted method. Periodic variability in the daily mean data was examined through wavelet analysis, which showed a clear annual pattern with the periodic change following a regular cycle. The Mann-Kendall test for monthly and annual mean data showed a decreasing trend of about 1 µg/m³ per year. The spatial pattern of the gridded annual mean data showed that concentrations were relatively higher in the northern regions than in the southern regions, and that their mean and deviation decreased significantly over time. For the entire observation period, the annual mean and standard deviation of PM10 concentrations were relatively high in the region near the metropolitan area.
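A minimal inverse-distance-weighted gridding sketch of the kind used above; the station coordinates, grid, and power parameter are illustrative assumptions rather than the study's configuration.

```python
# Sketch of inverse distance weighted (IDW) gridding of station PM10 values.
import numpy as np

def idw_grid(stations_xy, values, grid_xy, power=2.0, eps=1e-12):
    """stations_xy: (n, 2); values: (n,); grid_xy: (m, 2) -> (m,) interpolated values."""
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)             # inverse-distance weights
    return (w @ values) / w.sum(axis=1)      # weighted average at each grid node

rng = np.random.default_rng(3)
stations = rng.uniform(0.0, 100.0, (300, 2))       # station coordinates in km (made up)
pm10 = 45.0 + 10.0 * rng.standard_normal(300)      # daily mean PM10 values

gx, gy = np.meshgrid(np.arange(0.0, 100.0, 1.0), np.arange(0.0, 100.0, 1.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pm10_grid = idw_grid(stations, pm10, grid).reshape(gx.shape)
```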
14

Lavrov, Alexandre, Paal Skjetne, Bjørnar Lund, Erik Bjønnes, Finn Olav Bjørnson, Jan Ove Busklein, Térence Coudert, et al. "Density-consistent Initialization of SPH on a Regular Cartesian Grid: Comparative Numerical Study of 10 Smoothing Kernels in 1, 2 and 3 Dimensions." Procedia IUTAM 18 (2015): 85–95. http://dx.doi.org/10.1016/j.piutam.2015.11.009.

15

Li, Qing, and Steven Liang. "Incipient Fault Diagnosis of Rolling Bearings Based on Impulse-Step Impact Dictionary and Re-Weighted Minimizing Nonconvex Penalty Lq Regular Technique." Entropy 19, no. 8 (August 18, 2017): 421. http://dx.doi.org/10.3390/e19080421.

Abstract:
The periodical transient impulses caused by localized faults are sensitive and important characteristic information for rotating machinery fault diagnosis. However, it is very difficult to accurately extract transient impulses at the incipient fault stage because the fault impulse features are rather weak and always corrupted by heavy background noise. In this paper, a new transient impulse extraction methodology is proposed based on impulse-step dictionary and re-weighted minimizing nonconvex penalty Lq regular (R-WMNPLq, q = 0.5) for the incipient fault diagnosis of rolling bearings. Prior to the sparse representation, the original vibration signal is preprocessed by the variational mode decomposition (VMD) technique. Due to the physical mechanism of periodic double impacts, including step-like and impulse-like impacts, an impulse-step impact dictionary atom could be designed to match the natural waveform structure of vibration signals. On the other hand, the traditional sparse reconstruction approaches such as orthogonal matching pursuit (OMP), L1-norm regularization treat all vibration signal values equally and thus ignore the fact that the vibration peak value may have more useful information about periodical transient impulses and should be preserved at a larger weight value. Therefore, penalty and smoothing parameters are introduced on the reconstructed model to guarantee the reasonable distribution consistence of peak vibration values. Lastly, the proposed technique is applied to accelerated lifetime testing of rolling bearings, where it achieves a more noticeable and higher diagnostic accuracy compared with OMP, L1-norm regularization and traditional spectral Kurtogram (SK) method.
16

Sun, Kang, Lei Zhu, Karen Cady-Pereira, Christopher Chan Miller, Kelly Chance, Lieven Clarisse, Pierre-François Coheur, et al. "A physics-based approach to oversample multi-satellite, multispecies observations to a common grid." Atmospheric Measurement Techniques 11, no. 12 (December 18, 2018): 6679–701. http://dx.doi.org/10.5194/amt-11-6679-2018.

Abstract:
Abstract. Satellite remote sensing of the Earth's atmospheric composition usually samples irregularly in space and time, and many applications require spatially and temporally averaging the satellite observations (level 2) to a regular grid (level 3). When averaging level 2 data over a long period to a target level 3 grid that is significantly finer than the sizes of level 2 pixels, this process is referred to as “oversampling”. An agile, physics-based oversampling approach is developed to represent each satellite observation as a sensitivity distribution on the ground, instead of a point or a polygon as assumed in previous methods. This sensitivity distribution can be determined by the spatial response function of each satellite sensor. A generalized 2-D super Gaussian function is proposed to characterize the spatial response functions of both imaging grating spectrometers (e.g., OMI, OMPS, and TROPOMI) and scanning Fourier transform spectrometers (e.g., GOSAT, IASI, and CrIS). Synthetic OMI and IASI observations were generated to compare the errors due to simplifying satellite fields of view (FOVs) as polygons (tessellation error) and the errors due to discretizing the smooth spatial response function on a finite grid (discretization error). The balance between these two error sources depends on the target grid size, the ground size of the FOV, and the smoothness of spatial response functions. Explicit consideration of the spatial response function is favorable for fine-grid oversampling and smoother spatial response. For OMI, it is beneficial to oversample using the spatial response functions for grids finer than ∼16 km. The generalized 2-D super Gaussian function also enables smoothing of the level 3 results by decreasing the shape-determining exponents, which is useful for a high noise level or sparse satellite datasets. This physical oversampling approach is especially advantageous during smaller temporal windows and shows substantially improved visualization of trace gas distribution and local gradients when applied to OMI NO2 products and IASI NH3 products. There is no appreciable difference in the computational time when using the physical oversampling versus other oversampling methods.
17

Volontyr, Liudmyla. "Information Support for the Forecasting of Sugar-Beet Production Development." Economy. Finances. Management: Topical Issues of Science and Practical Activity, no. 1 (41) (January 2019): 71–82. http://dx.doi.org/10.37128/2411-4413-2019-1-6.

Abstract:
The development of modern economic trends within the conceptual foundations for improving the sugar beet production sector has necessitated new approaches to managing commodity, financial and information flows based on methods of economic and mathematical modeling. The main idea behind these methods is to develop forecasts through their formalization, systematization, optimization and adaptation with the help of new information technologies. The quality of management decision-making depends on the accuracy and reliability of the long-term estimates produced. In this regard, one of the most important research areas in economics is forecasting the parameters of the beet industry's development and obtaining predictive decisions that form the basis for effective activity in achieving tactical and strategic goals. When the levels of a time series are widely dispersed, a variety of smoothing procedures are used to detect and isolate the trend: direct level equalization by ordinary least squares, ordinary and weighted moving averages, exponential smoothing, spectral methods, splines, and running median smoothing. The most common among them are regular and weighted moving averages and exponential smoothing. Investigating methods for forecasting the parameters of the beet-growing industry's development, taking into account the peculiarities of constructing quantitative and qualitative forecasts, requires solving the following tasks: investigating the specifics of applying statistical methods of time series analysis in beet growing; studying the specifics of using forecasting methods to estimate long-term decisions in beet growing; and carrying out a practical implementation of the methods, exemplified by forecasts of sugar beet yields at Ukrainian enterprises. The exponential smoothing method proposed by R. G. Brown gives the most accurate approximation to the original statistical series, as it takes the variation of prices into account. The essence of this method is that the statistical series is smoothed with a weighted moving average whose weights follow an exponential law. Computing the smoothed value at time t always requires the smoothed value at the previous moment of time, so the first step is to determine some value S(n-1) that precedes S(n). In practice, there is no single approach to defining the initial approximation; it is set in accordance with the conditions of the economic research. Quite often, the arithmetic mean of all levels of the statistical series is used as S(n-1). A particular problem in forecasting with exponential smoothing is choosing the optimal value of the parameter α, on which the accuracy of the forecast largely depends. If the parameter α is close to one, the forecast model takes into account only the most recent observations, and if it approaches zero, almost all previous observations are taken into account. However, scientific and methodological approaches to determining the optimal value of the smoothing parameter have not yet been developed.
In practice, the value of α is chosen so as to minimise the dispersion of the deviations of the predicted values of the statistical series from its actual levels. Exponential smoothing gives good results when the statistical series contains a large number of observations and it can be assumed that the socioeconomic processes in the forecast period will occur under approximately the same conditions as in the base period. A correctly selected growth-curve model must correspond to the nature of the trend of the phenomenon under study. The procedure for developing a forecast using growth curves involves the following steps: choosing one or several curves whose shape corresponds to the nature of the time series changes; estimating the parameters of the selected curves; verifying the adequacy of the selected curves to the process being forecast; evaluating the accuracy of the models and making the final choice of growth curve; and calculating point and interval forecasts. The functions most commonly used in forecasting describe processes with a monotonic trend of development and no growth boundaries. On the basis of the studied models, the statistical series of sugar beet gross yields in Ukraine was smoothed. Statistical data from 1990 to 2017 were used for the survey. Forecasts of sugar beet yields for 2012-2017 were used to determine the approximation error of ordinary moving averages with smoothing interval lengths of 5 and 12 years, as well as of exponential smoothing with α = 0.3 and α = 0.7. The quality of the forecasts was assessed using the mean absolute deviation, which is smallest for the forecast obtained by exponential smoothing with α = 0.7. This method is therefore used to determine the forecast for the next 5 years.
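A minimal sketch of the simple (Brown) exponential smoothing recursion described above, S_t = αy_t + (1 − α)S_{t−1}, initialised with the series mean and scored by the mean absolute deviation; the yield figures are placeholders, not the study's data.

```python
# Simple exponential smoothing: S_t = alpha * y_t + (1 - alpha) * S_{t-1}.
import numpy as np

def exponential_smoothing(y, alpha):
    y = np.asarray(y, dtype=float)
    s = np.empty_like(y)
    s[0] = y.mean()                          # common choice for the initial level
    for t in range(1, y.size):
        s[t] = alpha * y[t] + (1.0 - alpha) * s[t - 1]
    return s

def mean_absolute_deviation(actual, forecast):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

gross_yield = [44.3, 36.2, 28.1, 33.7, 17.5, 22.4, 15.7, 16.0, 13.2, 15.9, 13.0, 14.2]
for alpha in (0.3, 0.7):
    s = exponential_smoothing(gross_yield, alpha)
    # One-step-ahead forecast for period t is the smoothed level at t-1.
    print(f"alpha={alpha}: MAD={mean_absolute_deviation(gross_yield[1:], s[:-1]):.2f}")
```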
18

Wang, Yiyang, Zhiying Jiang, and Zhixun Su. "A Nonuniform Method for Extracting Attractive Structures From Images." International Journal of Grid and High Performance Computing 10, no. 3 (July 2018): 14–28. http://dx.doi.org/10.4018/ijghpc.2018070102.

Abstract:
This article describes how attractive structures always correspond to objects of interest in human perception; extracting attractive structures is thus a fundamental problem in many image analysis tasks and is of great practical importance. In this article, the authors propose a novel nonuniform method to maintain the attractive structures of images while removing their meaningless details. Unlike existing norm-based operators, which are uniform methods defined on regular image grids, the proposed nonuniform method is not limited to a special type of data or grid structure and performs better in image analysis tasks. In addition, a strategy based on proximal algorithms is put forward to obtain fast convergence in practice, given the nonconvex and nonsmooth properties of the corresponding optimization. Though the model with the proposed nonuniform operator can be used for various applications, the authors chose the tasks of image smoothing and saliency detection to demonstrate the good performance of the nonuniform method and show its superiority against other state-of-the-art alternatives.
19

Chen, Chuanfa, and Yanyan Li. "A Fast Global Interpolation Method for Digital Terrain Model Generation from Large LiDAR-Derived Data." Remote Sensing 11, no. 11 (June 2, 2019): 1324. http://dx.doi.org/10.3390/rs11111324.

Abstract:
Airborne light detection and ranging (LiDAR) datasets with a large volume pose a great challenge to the traditional interpolation methods for the production of digital terrain models (DTMs). Thus, a fast, global interpolation method based on thin plate spline (TPS) is proposed in this paper. In the methodology, a weighted version of finite difference TPS is first developed to deal with the problem of missing data in the grid-based surface construction. Then, the interpolation matrix of the weighted TPS is deduced and found to be largely sparse. Furthermore, the values and positions of each nonzero element in the matrix are analytically determined. Finally, to make full use of the sparseness of the interpolation matrix, the linear system is solved with an iterative manner. These make the new method not only fast, but also require less random-access memory. Tests on six simulated datasets indicate that compared to recently developed discrete cosine transformation (DCT)-based TPS, the proposed method has a higher speed and accuracy, lower memory requirement, and less sensitivity to the smoothing parameter. Real-world examples on 10 public and 1 private dataset demonstrate that compared to the DCT-based TPS and the locally weighted interpolation methods, such as linear, natural neighbor (NN), inverse distance weighting (IDW), and ordinary kriging (OK), the proposed method produces visually good surfaces, which overcome the problems of peak-cutting, coarseness, and discontinuity of the aforementioned interpolators. More importantly, the proposed method has a similar performance to the simple interpolation methods (e.g., IDW and NN) with respect to computing time and memory cost, and significantly outperforms OK. Overall, the proposed method with low memory requirement and computing cost offers great potential for the derivation of DTMs from large-scale LiDAR datasets.
20

Grinyak, V. M., and A. V. Shulenina. "Marine Traffic Data Clustering for Ships Route Planning." Informacionnye Tehnologii 27, no. 11 (November 11, 2021): 607–15. http://dx.doi.org/10.17587/it.27.607-615.

Abstract:
This paper is about maritime safety. The system of vessel traffic schemes is one of the key elements of sea traffic control in areas with heavy traffic. Such a system is based on a set of rules and guidelines defined by the traffic schemes of particular water areas. From the classical standpoint, vessels that do not follow the guidelines do not necessarily create alarming situations at the moment, but with time they can lead to complex, dangerous navigation situations. The problem of planning a ship's route through an area with highly intensive traffic is considered in this paper. The importance of the problem has grown with the development of self-navigating autonomous vessels. The area's navigation restrictions are expected to be respected when planning a vessel's path through areas with an identified traffic scheme. One way to identify these restrictions is trajectory pattern recognition for a certain sea area based on retrospective traffic analysis. A model representation of such a task can be based on clustering of vessel motion parameters. The presented model is based on solving the shortest path problem on a weighted graph. Several ways to create such weighted graphs are suggested in the paper: a regular grid of vertices and edges, a layered grid of vertices and edges, a random grid of vertices and edges, and vertices and edges identified from retrospective data. All edge weights are defined as a function of the "desirability" of one or another vessel course at each location of the sea area, taking the identified trajectory patterns into account. For that, the area is divided into subareas in which course and velocity clustering is evaluated. Possible clustering methods are discussed in the paper, and the choice is made in favor of subtractive clustering, which does not require the number of clusters to be predefined. Automatic Identification System (AIS) data can serve as a data source for the traffic of a certain sea area. The possibility of using AIS data available on specialized public Internet resources is shown in the paper. Although such data typically have low density, they can still represent the vessel traffic features of a certain sea area well. Examples of route planning for the Tsugaru Strait and Tokyo Bay are presented in the paper.
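A minimal sketch of the route-planning core described above: a regular grid of vertices and edges with a per-edge "desirability" weight, solved as a shortest-path problem. The weight function here is a placeholder; in the paper it is derived from the clustered course and speed patterns of the water area.

```python
# Sketch: route planning as a shortest path on a weighted regular grid graph.
import math
import networkx as nx

def edge_weight(u, v):
    """Placeholder cost: edge length plus a penalty for deviating from a preferred
    course of 45 degrees (purely illustrative)."""
    du, dv = v[0] - u[0], v[1] - u[1]
    course = math.atan2(dv, du)
    preferred = math.radians(45.0)
    return math.hypot(du, dv) + 2.0 * abs(math.sin((course - preferred) / 2.0))

G = nx.grid_2d_graph(60, 40)                       # regular grid of vertices and edges
for u, v in G.edges():
    G[u][v]["weight"] = edge_weight(u, v)

route = nx.shortest_path(G, source=(0, 0), target=(59, 39), weight="weight")
```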
21

Torres-Blanco, Pablo, and José Ángel Sánchez-Fernández. "Design and Analysis of a Decoupling Buoyancy Wave Energy Converter." Journal of Marine Science and Engineering 11, no. 8 (July 27, 2023): 1496. http://dx.doi.org/10.3390/jmse11081496.

Abstract:
This study presents a new wave energy converter that operates in two phases. During the first phase, wave energy is stored, raising a mass up to a design height. During the second phase, the mass goes down. When going down, it compresses air that moves a turbine that drives an electrical generator. Because of this decoupling, generators that move much faster than seawater can be used. This allows using “off-the-shelf” electrical generators. The performance of the proposed design was evaluated via simulations. As the device operates in two phases, a different simulation model was built for each phase. The mass-rising simulation model assumes regular waves. The simulation results suggest that energy harvesting is near the theoretical maximum. Mass falling is braked by air compression. Simulations of this system showed oscillatory behavior. These oscillations are lightly damped by the drag against the walls and air. These oscillations translate into generated power. Therefore, smoothing is needed to avoid perturbing the grid. A possible solution, in the case of farms comprising dozens of these devices, is to delay the generation among individual devices. In this manner, the combined generation can be significantly smoothed.
22

Kirilenko, Daniil, Anton Andreychuk, Aleksandr Panov, and Konstantin Yakovlev. "TransPath: Learning Heuristics for Grid-Based Pathfinding via Transformers." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12436–43. http://dx.doi.org/10.1609/aaai.v37i10.26465.

Abstract:
Heuristic search algorithms, e.g. A*, are the commonly used tools for pathfinding on grids, i.e. graphs of regular structure that are widely employed to represent environments in robotics, video games, etc. Instance-independent heuristics for grid graphs, e.g. Manhattan distance, do not take the obstacles into account, and thus the search led by such heuristics performs poorly in obstacle-rich environments. To this end, we suggest learning the instance-dependent heuristic proxies that are supposed to notably increase the efficiency of the search. The first heuristic proxy we suggest to learn is the correction factor, i.e. the ratio between the instance-independent cost-to-go estimate and the perfect one (computed offline at the training phase). Unlike learning the absolute values of the cost-to-go heuristic function, which was known before, learning the correction factor utilizes the knowledge of the instance-independent heuristic. The second heuristic proxy is the path probability, which indicates how likely the grid cell is lying on the shortest path. This heuristic can be employed in the Focal Search framework as the secondary heuristic, allowing us to preserve the guarantees on the bounded sub-optimality of the solution. We learn both suggested heuristics in a supervised fashion with the state-of-the-art neural networks containing attention blocks (transformers). We conduct a thorough empirical evaluation on a comprehensive dataset of planning tasks, showing that the suggested techniques i) reduce the computational effort of the A* up to a factor of 4x while producing the solutions, whose costs exceed those of the optimal solutions by less than 0.3% on average; ii) outperform the competitors, which include the conventional techniques from the heuristic search, i.e. weighted A*, as well as the state-of-the-art learnable planners. The project web-page is: https://airi-institute.github.io/TransPath/.
23

Su, Shoubao, Wei Zhao, and Chishe Wang. "Parallel Swarm Intelligent Motion Planning with Energy-Balanced for Multirobot in Obstacle Environment." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–16. http://dx.doi.org/10.1155/2021/8902328.

Abstract:
Multirobot motion planning is one of the critical techniques in edge intelligent systems and involves a variety of algorithms, such as map modeling, path search, and trajectory optimization and smoothing. To overcome slow running speed and imbalanced energy consumption, a swarm intelligence solution based on parallel computing is proposed to plan motion paths for multiple robots with many task nodes in a complex scene containing multiple irregularly shaped obstacles; the objective is to find a smooth trajectory under the constraints of the shortest total distance and energy-balanced consumption for all robots traveling between nodes. In a practical scenario, imbalanced task allocation will inevitably lead to some robots stopping along the way. Thus, we first model a gridded scene as a weighted MTSP (multi-traveling salesman problem) in which the weights are the energies of obstacle constraints and path length. Then, a hybridization of particle swarm and ant colony optimization (GPSO-AC) based on the Compute Unified Device Architecture (CUDA) platform is presented to find the optimal path for the weighted MTSPs. Next, we improve the A* algorithm to generate a weighted obstacle-avoidance path on the gridded map, but this path still contains many sharp turns. Therefore, an improved smooth grid path algorithm that integrates dynamic constraints is proposed in this paper to smooth the trajectory so that it better follows the laws of robot motion and more realistically simulates multiple robots in a real scene. Finally, experimental comparisons with other methods on the designed GPU platform demonstrate the applicability of the proposed algorithm in different scenarios; our method strikes a good balance between energy consumption and optimality, with significantly faster and better performance than the other approaches considered, and the effects of the adjustment coefficient q on the performance of the algorithm are also discussed in the experiments.
24

Ibrahim, Ayad Assad, Ikhlas Mahmoud Farhan, and Mohammed Ehasn Safi. "A nonlinearities inverse distance weighting spatial interpolation approach applied to the surface electromyography signal." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 2 (April 1, 2022): 1530. http://dx.doi.org/10.11591/ijece.v12i2.pp1530-1539.

Abstract:
Spatial interpolation of a surface electromyography (sEMG) signal from a set of signals recorded by a multi-electrode array is a challenge in biomedical signal processing. It could be useful to increase the effective electrode density for detecting the skeletal muscles' motor units in areas where no electrode is present. This paper used two types of spatial interpolation methods for estimation: inverse distance weighted (IDW) and kriging. Furthermore, a new technique is proposed using a modified nonlinearity formula based on IDW. A set of EMG signals recorded with a noninvasive multi-electrode grid from subjects of different sexes, ages, and muscle types has been studied while the muscles were under regular tension activity. A goodness-of-fit measure (R²) is used to evaluate the proposed technique. The interpolated signals are compared with the actual signals; the goodness-of-fit value is almost 99%, with a processing time of 100 ms. The resulting technique shows high accuracy and close matching of the spatially interpolated signals to the actual signals compared with the IDW and kriging techniques.
25

Suraiya, Sayma, and M. Babul Hasan. "Identifying an Appropriate Forecasting Technique for Predicting Future Demand: A Case Study on a Private University of Bangladesh." Dhaka University Journal of Science 66, no. 1 (January 31, 2018): 15–19. http://dx.doi.org/10.3329/dujs.v66i1.54539.

Abstract:
Demand forecasting and inventory control of printing paper, which is used every day for different purposes in all parts of the education sector, especially universities, is crucial. A case study was conducted in a university storehouse to collect all historical demand data for printing paper over the last 6 years (18 trimesters), from January (Spring) 2011 to December (Fall) 2016. We use different time series forecasting models, which always offer a steady base-level forecast and handle regular demand patterns well. The aim of the paper is to find the least error-prone forecasting technique for the demand for printing paper over a given period, using quantitative (time series) forecasting models such as the weighted moving average, 3-point single moving average, 3-point double moving average, 5-point moving average, exponential smoothing, regression analysis/linear trend, Holt's method, and Winter's method. According to the forecasting error measurements, we observe that the best forecasting technique is the linear trend model. By using these quantities of data and drawing conclusions with acceptable accuracy, our analysis will help the university decide how much inventory is actually needed for the planning horizon.
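A small sketch comparing two of the techniques listed above (a 3-point weighted moving average and a linear trend) by mean absolute deviation; the demand figures and weights are placeholders, not the case-study data.

```python
# Sketch: one-step-ahead forecasts by weighted moving average vs. linear trend, scored by MAD.
import numpy as np

demand = np.array([120, 135, 128, 150, 160, 148, 170, 182, 175, 190, 205, 198], float)

w = np.array([0.2, 0.3, 0.5])                       # weights, oldest -> newest (assumed)
wma = np.array([demand[t - 3:t] @ w for t in range(3, demand.size)])
mad_wma = np.mean(np.abs(demand[3:] - wma))

t = np.arange(demand.size)
slope, intercept = np.polyfit(t, demand, 1)         # linear trend fitted to all periods
trend = intercept + slope * t[3:]
mad_trend = np.mean(np.abs(demand[3:] - trend))

print(f"MAD, 3-point weighted moving average: {mad_wma:.1f}")
print(f"MAD, linear trend:                    {mad_trend:.1f}")
```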
26

Westgate, John N., Uwayemi M. Sofowote, Pat Roach, Phil Fellin, Ivy D'Sa, Ed Sverko, Yushan Su, Hayley Hung, and Frank Wania. "In search of potential source regions of semi-volatile organic contaminants in air in the Yukon Territory, Canada from 2007 to 2009 using hybrid receptor models." Environmental Chemistry 10, no. 1 (2013): 22. http://dx.doi.org/10.1071/en12164.

Abstract:
Environmental context Some long-lived organic contaminants, such as chlorinated organics, brominated flame retardants and polycyclic aromatic hydrocarbons, can undergo transport through the atmosphere to remote regions. A series of measurements of these compounds taken over almost 3 years in the air at a remote location was combined with meteorological data to try to reveal potential source areas. After adjusting several parameters to optimise the method’s ability to identify sources it was found that for most contaminants no definitive sources are revealed. Abstract A suite of brominated flame retardants, chlorinated organic pesticides and some metabolites thereof were analysed in week-long and day-long air samples collected at Little Fox Lake in Canada’s Yukon Territory from 2007 to 2009. Several trajectory-based methods for source region identification were applied to this dataset, as well as to polycyclic aromatic hydrocarbon (PAH) concentrations in those same samples reported previously. A type of concentration weighted trajectory (CWT) analysis, using a modified grid to avoid difficulties near the Earth’s poles, and removing trajectory endpoints at altitudes greater than 700m did not identify distinct source regions for most analytes. Decreasing the spatial resolution of the grid made interpretation simpler but reinforced patterns that may have stemmed from single trajectories. The potential source contribution function (PSCF) is similar to CWT but treats the concentration data categorically, rather than numerically. PSCF provides more distinct results, highlighting the Arctic Ocean as a potential source of para,para′-dichlorodiphenyldichloroethene and both northern Siberia and Canada’s Yukon and Northwest Territories as potential sources of PAHs. To simulate the uncertainty associated with individual trajectories, a set of trajectories was also generated for six points surrounding the sampling station and included in the trajectory analyses. This had the effect of smoothing the CWT and PSCF values for those analytes with no clearly definable sources, and highlighting the source regions for the two that did. For the bulk of the analytes discussed here, Little Fox Lake is well positioned to act as a background monitoring site.
27

Zhang, Jian, Kenneth Howard, and J. J. Gourley. "Constructing Three-Dimensional Multiple-Radar Reflectivity Mosaics: Examples of Convective Storms and Stratiform Rain Echoes." Journal of Atmospheric and Oceanic Technology 22, no. 1 (January 1, 2005): 30–42. http://dx.doi.org/10.1175/jtech-1689.1.

Abstract:
Abstract The advent of Internet-2 and effective data compression techniques facilitates the economic transmission of base-level radar data from the Weather Surveillance Radar-1988 Doppler (WSR-88D) network to users in real time. The native radar spherical coordinate system and large volume of data make the radar data processing a nontrivial task, especially when data from several radars are required to produce composite radar products. This paper investigates several approaches to remapping and combining multiple-radar reflectivity fields onto a unified 3D Cartesian grid with high spatial (≤1 km) and temporal (≤5 min) resolutions. The purpose of the study is to find an analysis approach that retains physical characteristics of the raw reflectivity data with minimum smoothing or introduction of analysis artifacts. Moreover, the approach needs to be highly efficient computationally for potential operational applications. The appropriate analysis can provide users with high-resolution reflectivity data that preserve the important features of the raw data, but in a manageable size with the advantage of a Cartesian coordinate system. Various interpolation schemes were evaluated and the results are presented here. It was found that a scheme combining a nearest-neighbor mapping on the range and azimuth plane and a linear interpolation in the elevation direction provides an efficient analysis scheme that retains high-resolution structure comparable to the raw data. A vertical interpolation is suited for analyses of convective-type echoes, while vertical and horizontal interpolations are needed for analyses of stratiform echoes, especially when large vertical reflectivity gradients exist. An automated brightband identification scheme is used to recognize stratiform echoes. When mosaicking multiple radars onto a common grid, a distance-weighted mean scheme can smooth possible discontinuities among radars due to calibration differences and can provide spatially consistent reflectivity mosaics. These schemes are computationally efficient due to their mathematical simplicity. Therefore, the 3D multiradar mosaic scheme can serve as a good candidate for providing high-spatial- and high-temporal-resolution base-level radar data in a Cartesian framework in real time.
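A minimal sketch of the distance-weighted mean mosaicking step described above: where several radars cover the same Cartesian grid cell, their remapped reflectivities are averaged with weights that decay with distance from each radar. The weight form and length scale below are assumptions, and the reflectivity fields are placeholders for remapped WSR-88D data.

```python
# Sketch: distance-weighted mean mosaic of reflectivity from several radars on a common grid.
import numpy as np

ny, nx = 200, 200
gx, gy = np.meshgrid(np.arange(nx, dtype=float), np.arange(ny, dtype=float))

radar_sites = [(50.0, 60.0), (150.0, 120.0)]       # radar locations in grid coordinates
length_scale = 80.0                                # e-folding distance of the weight

rng = np.random.default_rng(4)
# Remapped reflectivity (dBZ) from each radar on the common grid; NaN = no coverage.
fields = [20.0 + 5.0 * rng.standard_normal((ny, nx)) for _ in radar_sites]

num = np.zeros((ny, nx))
den = np.zeros((ny, nx))
for (x0, y0), refl in zip(radar_sites, fields):
    w = np.exp(-np.hypot(gx - x0, gy - y0) / length_scale)
    valid = ~np.isnan(refl)
    num[valid] += (w * np.nan_to_num(refl))[valid]
    den[valid] += w[valid]

mosaic = np.where(den > 0, num / den, np.nan)      # distance-weighted mean where covered
```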
28

Liu, Shuai, Jieyong Zhu, Dehu Yang, and Bo Ma. "Comparative Study of Geological Hazard Evaluation Systems Using Grid Units and Slope Units under Different Rainfall Conditions." Sustainability 14, no. 23 (December 2, 2022): 16153. http://dx.doi.org/10.3390/su142316153.

Abstract:
The selection of evaluation units in geological hazard evaluation systems is crucial for the evaluation results. In an evaluation system, relevant geological evaluation factors are selected and the study area is divided into multiple regular or irregular independent units, such as grids, slopes, and basins. Each evaluation unit, which includes evaluation factor attributes and hazard point distribution data, is placed as an independent individual in a corresponding evaluation model for use in a calculation, and finally a risk index for the entire study area is obtained. In order to compare the influence of the selection of grid units or slope units—two units frequently used in geological hazard evaluation studies—on the accuracy of evaluation results, this paper takes Yuanyang County, Yunnan Province, China, as a case study area. The area was divided into 7851 slope units by the catchment basin method and 12,985,257 grid units by means of an optimal grid unit algorithm. Nine evaluation factors for geological hazards were selected, including elevation, slope, aspect, curvature, land-use type, distance from a fault, distance from a river, engineering geological rock group, and landform type. In order to ensure the objective comparison of evaluation results for geological hazard susceptibility with respect to grid units and slope units, the weighted information model combining the subjective weighting AHP (analytic hierarchy process) and the objective statistical ICM (information content model) were used to evaluate susceptibility with both units. Geological risk evaluation results for collapses and landslides under heavy rain (25–50 mm), rainstorm (50–100 mm), heavy rainstorm (150–250 mm), and extraordinary rainstorm (>250 mm) conditions were obtained. The results showed that the zoning results produced under the slope unit system were better than those produced under the grid unit system in terms of the distribution relationship between hazard points and hazard levels. In addition, ROC (receiver operating characteristic) curves were used to test the results of susceptibility and risk assessments. The AUC (area under the curve) values of the slope unit system were higher than those of the grid unit system. Finally, the evaluation results obtained with slope units were more reasonable and accurate. Compared with the results from an actual geological hazard susceptibility and risk survey, the evaluation results for collapse and landslide geological hazards under the slope unit system were highly consistent with the actual survey results.
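A minimal sketch of a weighted information-content evaluation of the kind described above: each factor class receives the standard information value ln((hazard density in the class)/(overall hazard density)), and the per-factor values are combined with AHP weights. The class counts and weights are invented for illustration, and the formulation is the generic information value method, not the paper's exact model.

```python
# Sketch: AHP-weighted information value (information content) hazard index.
import numpy as np

def information_value(hazard_counts, cell_counts):
    """Per-class information value: ln(class hazard density / overall hazard density)."""
    hazard_counts = np.asarray(hazard_counts, float)
    cell_counts = np.asarray(cell_counts, float)
    class_density = hazard_counts / cell_counts
    overall_density = hazard_counts.sum() / cell_counts.sum()
    return np.log(class_density / overall_density)

# Invented counts for two factors (slope classes, lithology classes).
iv_slope = information_value([5, 30, 80, 40], [4000, 3000, 2000, 1000])
iv_litho = information_value([60, 70, 25], [5000, 3000, 2000])

ahp_weights = {"slope": 0.6, "lithology": 0.4}     # assumed AHP factor weights

# Hazard index of one evaluation unit falling in slope class 2 and lithology class 0.
unit_index = ahp_weights["slope"] * iv_slope[2] + ahp_weights["lithology"] * iv_litho[0]
```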
29

Grinyak, V. M., A. V. Shulenina, and A. S. Devyatisilnyi. "Clustering of Marine Vessel Trajectory Data for Routes Planning through Water Areas with Heavy Traffic." IOP Conference Series: Earth and Environmental Science 988, no. 2 (February 1, 2022): 022054. http://dx.doi.org/10.1088/1755-1315/988/2/022054.

Abstract:
Abstract The article is devoted to the problem of ensuring the safety of vessel traffic. One of the elements of traffic organization in areas with heavy navigation is the system of established vessel routes. This system is a set of restrictions imposed by a certain traffic pattern and the rules adopted in a particular water area. The paper considers the problem of planning a transition route for water areas with heavy marine traffic. The planning of a vessel's transition route through a water area with established routes must be carried out taking the specified restrictions into account. A possible way to identify these restrictions is to isolate the movement patterns of a particular sea area from retrospective information about its traffic. Model representations of such a problem can be formulated on the basis of clustering the traffic parameters. The route planning model is based on finding the shortest path on a weighted graph. Several ways of constructing such a graph are proposed: a regular grid of vertices and edges; a layered or random grid of vertices and edges; and vertices and edges based on retrospective data. The weight of the edges is proposed to be set as a function of the "desirability" of a particular vessel course at each point of the water area, taking into account the identified movement patterns. The paper discusses possible clustering methods and makes a choice in favor of subtractive clustering.
30

Garzón Barrero, Julián, Carlos Eduardo Cubides Burbano, and Gonzalo Jiménez-Cleves. "Quantifying the Effect of LiDAR Data Density on DEM Quality." Ciencia e Ingeniería Neogranadina 31, no. 2 (December 31, 2021): 149–69. http://dx.doi.org/10.18359/rcin.5776.

Abstract:
LiDAR sensors capture three-dimensional point clouds with high accuracy and density; since the points are irregularly distributed, interpolation methods are required to generate a regular grid. Given the large size of the resulting files, processing becomes a challenge for researchers without very powerful workstations. This work aims to balance the sampling density and the volume of data while preserving the sensitivity of representation of complex topographic shapes as a function of three surface descriptors: slope, curvature, and roughness. This study explores the effect of the density of LiDAR data on the accuracy of the Digital Elevation Model (DEM), using a ground point cloud of 32 million measurements obtained from a LiDAR flight over a complex topographic area of 156 ha. Digital elevation models were produced at different densities relative to the total point dataset (100, 75, 50, 25, 10, and 1 %) and at different grid sizes (23, 27, 33, 46, 73, and 230 cm). Accuracy was evaluated using the Inverse Distance Weighted and Kriging interpolation algorithms, obtaining 72 surfaces from which error statistics were calculated: root mean square error, mean absolute error, mean square error, and prediction effectiveness index; these were used to evaluate the quality of the results against validation data corresponding to 10 % of the original sample. The results indicated that Kriging was the most efficient algorithm, reducing data to 1 % without statistically significant differences from the original dataset, and curvature was the morphometric parameter with the most significant negative impact on interpolation accuracy.
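As a rough sketch of the workflow compared in this kind of study, scattered points are interpolated with inverse distance weighting and scored against a 10 % hold-out; the synthetic point cloud and surface below stand in for the LiDAR data, which are assumptions for illustration only:

```python
import numpy as np

def idw(xy, z, grid_xy, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation of scattered points onto target locations."""
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z) / w.sum(axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(500, 2))                  # synthetic ground points
elev = 50 + 0.1 * pts[:, 0] + np.sin(pts[:, 1] / 10)      # synthetic terrain heights

# hold out 10 % of the sample for validation, mirroring the study design
n_val = len(pts) // 10
val_xy, val_z = pts[:n_val], elev[:n_val]
trn_xy, trn_z = pts[n_val:], elev[n_val:]

pred = idw(trn_xy, trn_z, val_xy)
rmse = np.sqrt(np.mean((pred - val_z) ** 2))
mae = np.mean(np.abs(pred - val_z))
print(f"RMSE = {rmse:.3f} m, MAE = {mae:.3f} m")
```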
31

Fissore, E., and F. Pirotti. "DSM AND DTM FOR EXTRACTING 3D BUILDING MODELS: ADVANTAGES AND LIMITATIONS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1539–44. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1539-2019.

Abstract:
Abstract. Using multiple sources of 3D information over buildings to go from building footprints (LOD0) to higher LODs in CityGML models is a widely investigated topic. In this investigation we propose to use a very common 2.5D product, i.e. digital terrain and surface models (DTMs and DSMs), to test how much they can contribute to improving a CityGML model. The minimal information required to represent a 3-dimensional space in an urban environment is the combination of a DTM, the footprints of buildings and their heights; in this way a representation of the urban environment defining LOD1 CityGML is guaranteed. In this paper we discuss the following research questions: can DTMs and DSMs provide significant information for modelling buildings at higher LODs? What characteristics can be extracted depending on the ground sampling distance (GSD) of the DTM/DSM? Results show that the DTM/DSM used, at 1 m GSD, provides potentially significant information for higher LODs and that the conversion of the unstructured point cloud to a regular grid helps in defining single buildings using connected component analysis. Regularization of the original point cloud does lose accuracy of the source information due to smoothing or interpolation, but has the advantage of providing a predictable distance between points, thus allowing points belonging to the same building to be joined and providing initial primitives for further modelling.
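The regular-grid step described above (differencing DSM and DTM and isolating building candidates by connected component analysis) can be sketched as follows; the rasters and the 3 m height threshold are assumptions for illustration only:

```python
import numpy as np
from scipy import ndimage

# Assumed 1 m GSD rasters of the same extent (synthetic here).
dsm = np.zeros((60, 60))
dtm = np.zeros((60, 60))
dsm[10:20, 10:25] = 8.0      # a fake building block
dsm[35:50, 30:40] = 12.0     # another one

ndsm = dsm - dtm                      # normalized surface: heights above terrain
mask = ndsm > 3.0                     # keep objects taller than ~3 m (assumed threshold)
labels, n = ndimage.label(mask)       # connected components = candidate buildings

for i in range(1, n + 1):
    cells = labels == i
    print(f"building {i}: footprint {cells.sum()} cells, mean height {ndsm[cells].mean():.1f} m")
```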
32

Song, Hongxia, Jixian Zhang, Jianzhang Zuo, Xinlian Liang, Wenli Han, and Juan Ge. "Subsidence Detection for Urban Roads Using Mobile Laser Scanner Data." Remote Sensing 14, no. 9 (May 7, 2022): 2240. http://dx.doi.org/10.3390/rs14092240.

Abstract:
Pavement subsidence detection based on point cloud data acquired by mobile measurement systems is very challenging. First, the uncertainty and disorderly nature of object point data results in difficulties in point cloud comparison. Second, acquiring data with kinematic laser scanners introduces errors into the system during data acquisition, resulting in a reduction in data accuracy. Third, the high-precision measurement standard for pavement subsidence raises the requirements for data processing. In this article, a data processing method is proposed to detect subcentimeter-level subsidence of urban pavements using point cloud data comparisons across multiple time phases. The method mainly includes the following steps: First, the original data are preprocessed, which includes point cloud matching and pavement point segmentation. Second, the pavement points are interpolated onto a regular grid to solve the problem of point cloud comparison. Third, given the high density of the pavement points and the appearance of the pavement in the raw point cloud, the pavement point cloud data are smoothed using a Gaussian kernel convolution, with the aim of reducing the comparison error. Finally, the subsidence area is determined by calculating the height difference and comparing it with a threshold value. The experimental results show that the smoothing process can substantially improve the accuracy of the point cloud comparison results, effectively reducing the false detection rate and showing that subcentimeter-level pavement subsidence can be effectively detected.
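A sketch of the comparison core of such a pipeline: two epochs of gridded pavement heights are smoothed with a Gaussian kernel before differencing and thresholding. The grids, the noise levels, the kernel width and the 5 mm threshold are all assumed values, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
epoch1 = 0.002 * rng.standard_normal((200, 200))           # gridded heights, metres
epoch2 = epoch1 + 0.002 * rng.standard_normal((200, 200))  # second survey with its own noise
epoch2[80:120, 90:130] -= 0.008                            # simulated 8 mm subsidence patch

# Gaussian smoothing suppresses ranging noise before the epochs are compared.
s1 = gaussian_filter(epoch1, sigma=3)
s2 = gaussian_filter(epoch2, sigma=3)

diff = s2 - s1
subsidence_mask = diff < -0.005                            # assumed 5 mm detection threshold
print(f"flagged cells: {subsidence_mask.sum()}")
```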
33

Tamás, János, István Buzás, and Ildikó Nagy. "Spatially Discrete GIS Analysis of Sampling Points Based on Yield and Quality Analysis of Sugar Beet (Beta vulgaris L.)." Acta Agraria Debreceniensis, no. 19 (March 4, 2006): 32–37. http://dx.doi.org/10.34101/actaagrar/19/3144.

Abstract:
Fulfilment of the increasing quality requirements of sugar beet production can be analysed by sampling plants and soil across the cultivated area. Analyses of the spatial characteristics of samples require exact geodetic positioning, which is applied in practice using GPS in precision agriculture. The examinations were made in a sample area located in north-western Hungary with sugar beet as the test plant. According to the traditional sample-taking procedure, N=60 samples were taken in a regular 20 x 20 m grid, where, besides the plant micro and macro elements, the sugar industrial quality parameters (Equations 1-2) and the agro-chemical parameters of the soils were analysed. To obtain mean, weighted mean and standard variance values, geometric analogues used in geography were adapted, which correspond to the mean centre (Equation 3), the spatially weighted mean centre (Equation 4), the standard distance (Equation 5), and the standard distance circle values. Robust spatial statistical values provide abstractions which can be estimated visually and immediately, and applied to analyse several parameters in parallel or in time series (Figure 1). This interpretation technique considers the spatial position of each point relative to the others individually (distance and direction), and the value of the plant and soil parameters. Mapping the sample area in a GIS environment, the coordinates of the spatially weighted mean centre values of the measured plant and soil parameters, relative to the mean centre values, showed a northwest direction. Exceptions were the total salt and calcium-carbonate contents, and the molybdenum concentration of the soil samples (Table 1). As a new visual analysis, the spatially weighted mean centre values of the parameters were projected as eigenvectors onto the mean centre values as origin. To characterize the production yield and the raw and digested sugar contents of the sample area, the absolute rotation angles of the generated vectors were determined, which indicate numerically the inhomogeneity of the area (Figure 2). The generated spatial analogues are applicable for characterising, visually and quantitatively, the spatial positions of sampling points and the measured parameters in a quick way. However, their disadvantage is that they do not provide information on the tightness and direction of the spatial correlation in the way the original statistical parameters do.
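The geometric analogues cited as Equations 3-5 (mean centre, spatially weighted mean centre and standard distance) follow standard definitions and can be sketched as below; the coordinates and the attribute values are made up, standing in for the measured parameters:

```python
import math

def mean_centre(x, y):
    n = len(x)
    return sum(x) / n, sum(y) / n

def weighted_mean_centre(x, y, w):
    sw = sum(w)
    return (sum(xi * wi for xi, wi in zip(x, w)) / sw,
            sum(yi * wi for yi, wi in zip(y, w)) / sw)

def standard_distance(x, y):
    cx, cy = mean_centre(x, y)
    return math.sqrt(sum((xi - cx) ** 2 + (yi - cy) ** 2 for xi, yi in zip(x, y)) / len(x))

# Illustrative sampling points on a 20 m grid and one attribute (e.g. digested sugar content).
xs = [0, 20, 40, 0, 20, 40]
ys = [0, 0, 0, 20, 20, 20]
attr = [14.2, 15.1, 16.0, 13.8, 15.5, 16.4]

print("mean centre:          ", mean_centre(xs, ys))
print("weighted mean centre: ", weighted_mean_centre(xs, ys, attr))
print("standard distance:    ", round(standard_distance(xs, ys), 2))
```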
34

Chacón, Juan, Jairo Soriano, and Omar Salazar. "Sistema de Inferencia Difusa basado en Relaciones Booleanas y Kleeneanas con Combinador Convexo." Ingeniería 23, no. 1 (January 10, 2018): 7. http://dx.doi.org/10.14483/23448393.11138.

Abstract:
Context: In the design process of Fuzzy Inference Systems based on Boolean and Kleenean Relations (FIS-BKR) there is a dilemma in choosing the regular Kleenean extensions of a given Boolean function. The set of possible Kleenean extensions of a Boolean function has a lattice structure under the usual partial order of functions. The fuzzy convex combination proposed by Zadeh guarantees some properties related to this order. Method: The addition of a convex combiner just before the defuzzifier offers a solution to the above situation. The ISE (Integral Squared Error) and ITSE (Integral Time-weighted Squared Error) performance indexes were used in an application for tuning a liquid level control system. Results: The tuning process carried out on the FIS-BKR controller with the fuzzy convex combiner using constant coefficients implied an improvement of the controlled system of up to 1.427% for the ISE index and up to 21.99% for ITSE with respect to the extreme extensions. Conclusions: New evidence of the convenient characteristics of FIS-BKR controllers with a fuzzy convex combiner was presented when the performance indexes ISE and ITSE were evaluated. On the other hand, although in this work parameter tuning for the convex combination was done by grid search (brute force), it would be interesting to study more effective optimization methods for this purpose.
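The two tuning indexes are time integrals of the squared error, ISE = ∫e² dt and ITSE = ∫t·e² dt. A small sketch approximating them from a sampled error signal; the signal itself is assumed, not taken from the paper:

```python
import numpy as np

def ise_itse(t, error):
    """Integral Squared Error and Integral Time-weighted Squared Error,
    approximated with the trapezoidal rule over sampled data."""
    ise = np.trapz(error ** 2, t)
    itse = np.trapz(t * error ** 2, t)
    return ise, itse

t = np.linspace(0, 10, 1001)
error = np.exp(-0.8 * t) * np.cos(2 * t)     # assumed closed-loop error signal

ise, itse = ise_itse(t, error)
print(f"ISE = {ise:.4f}, ITSE = {itse:.4f}")
```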
35

Yu, Hairuo, Huili Gong, and Beibei Chen. "Analysis of the Superposition Effect of Land Subsidence and Sea-Level Rise in the Tianjin Coastal Area and Its Emerging Risks." Remote Sensing 15, no. 13 (June 30, 2023): 3341. http://dx.doi.org/10.3390/rs15133341.

Abstract:
Tianjin is a coastal city in China. However, the continuous rise of the relative sea level has brought huge hidden dangers to Tianjin’s economic and social development. Land subsidence is the most important factor that influences relative sea-level rise. By analyzing the current subsidence situation in Tianjin through PS-InSAR, it was found that the subsidence rate of the southern plain of Tianjin is slowing down as a whole. In addition, the Wuqing and Jinghai sedimentary areas, as well as several other subsidence centers, have formed. By establishing a regular grid of land subsidence and groundwater data to construct a geographically weighted regression (GWR) model, it was found that the Wuqing sedimentary area as a whole is positively correlated with TCA. According to the relative sea-level change, it can be predicted that the natural coastline of Tianjin will recede by about 87 km² in 20 years. Based on the research results above, this paper has evaluated Tianjin’s urban safety using a machine-learning method (XGBoost) and analyzed high-risk areas and the main contributing factors. Potential risks to urban safety brought about by relative sea-level rise have been analyzed, which will improve the resilience of coastal areas to disasters.
36

Zakaria, Yakubu Saaka, Abdul-Ganiyu Shaibu, and Bernard N. Baatuuwie. "Assessment of Physical Suitability of Soils for Vegetable Production in the Libga Irrigation Scheme, Northern Region, Ghana Using the Analytic Hierarchy Process and Weighted Overlay Analysis." Turkish Journal of Agriculture - Food Science and Technology 10, no. 8 (August 24, 2022): 1395–403. http://dx.doi.org/10.24925/turjaf.v10i8.1395-1403.5004.

Abstract:
Assessing the suitability of soils for agricultural production is critical in promoting sustainable agriculture. Knowledge gained from soil suitability analysis provides the sound basis for making informed decisions about soil management and crop selection in a given area. In view of this, this study was carried out to assess the physical suitability of soils in the Libga Irrigation Scheme for the sustainable cultivation of jute mallow (Corchorus olitorius), tomato (Solanum lycoperscum L.) and cabbage (Brassica oleracea var capitata). Soil samples were collected at 0–30 cm and 30–60 cm depths from 50 geo-referenced points located at the nodes of a 100 m × 100 m regular grid. Particle size distribution, bulk density, total porosity, field capacity, permanent wilting point, available water capacity, saturated hydraulic conductivity, electrical conductivity and pH were determined following standard laboratory protocols at the AGSSIP Laboratory of the University for Development Studies, Nyankpala campus, Ghana. Weighting of soil properties was achieved through the analytic hierarchy process (AHP). Soil suitability maps for the selected crops were produced using weighted overlay analysis in ArcGIS (10.5). The results showed that generally about 44.3 ha (76.4 %), 44.7 ha (82.2 %) and 55.7 ha (96.0 %) of the irrigation field are moderately suitable for jute mallow, tomato and cabbage production respectively. The major limiting factors for the crops were high BD and acidity levels. The AHP proved to be a very useful tool for the incorporation of farmers’ views into decision making about the suitability of soils for crop production.
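The AHP weighting step can be sketched as deriving priority weights from a pairwise comparison matrix via its principal eigenvector and then applying them in a weighted overlay; the 3x3 judgement matrix and the suitability scores below are assumptions, not the study's values:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    (principal eigenvector, normalised to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Assumed judgements for three criteria, e.g. texture vs. pH vs. available water capacity.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

w = ahp_weights(A)
print("criterion weights:", np.round(w, 3))

# Weighted overlay for one grid cell: suitability scores (1-4) times the AHP weights.
scores = np.array([3, 2, 4])
print("cell suitability index:", float(scores @ w))
```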
37

Koren, Zvi, and Igor Ravve. "Constrained Dix inversion." GEOPHYSICS 71, no. 6 (November 2006): R113—R130. http://dx.doi.org/10.1190/1.2348763.

Abstract:
We propose a stable inversion method to create geologically constrained instantaneous velocities from a set of sparse, irregularly picked stacking- or rms-velocity functions in vertical time. The method is primarily designed for building initial velocity models for curved-ray time migration and initial macromodels for depth migration and tomography. It is mainly applicable in regions containing compacted sediments, in which the velocity gradually increases with depth and can be laterally varying. Inversion is done in four stages: establishing a global initial background-velocity trend, applying an explicit unconstrained inversion, performing a constrained least-squares inversion, and finally, fine gridding. The method can be applied to create a new velocity field (create mode) or to update an existing one (update mode). In the create mode, initially, the velocity trend is assumed an exponential, asymptotically bounded function, defined locally by three parameters at each lateral node and calculated from a reference datum surface. Velocity picks related to nonsediment rocks, such as salt flanks or basalt boundaries, require different trend functions and therefore are treated differently. In the update mode, the velocity trend is a background-velocity field, normally used for time or depth imaging. The unconstrained inversion results in a piecewise-constant, residual instantaneous velocity with respect to the velocity trend and is mainly used for regularizing the input data. The constrained inversion is performed individually for each rms-velocity function in vertical time, and the lateral and vertical continuities are controlled by the global velocity-trend function. A special damping technique suppresses vertical oscillations of the results. Finally, smoothing and gridding (interpolation) are done for the resulting instantaneous velocity to generate a regular, fine grid in space and time. This method leads to a stable and geologically plausible velocity model, even in cases of noisy input rms-velocity or residual rms-velocity data.
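The classical Dix relation that underlies this kind of rms-to-interval velocity inversion can be sketched as follows; this is the plain unconstrained conversion, without the trend, damping and smoothing that the paper adds, and the picks are assumed values:

```python
import math

def dix_interval_velocities(t, v_rms):
    """Dix conversion: interval velocity of each layer from rms picks,
    v_int_i = sqrt((t_i*v_i^2 - t_{i-1}*v_{i-1}^2) / (t_i - t_{i-1}))."""
    v_int = []
    for i in range(1, len(t)):
        num = t[i] * v_rms[i] ** 2 - t[i - 1] * v_rms[i - 1] ** 2
        v_int.append(math.sqrt(num / (t[i] - t[i - 1])))
    return v_int

# Assumed rms-velocity picks (vertical two-way time in s, velocity in m/s).
t = [0.0, 0.8, 1.6, 2.4]
v_rms = [1500.0, 1800.0, 2100.0, 2350.0]
print([round(v) for v in dix_interval_velocities(t, v_rms)])
```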
38

Monjo, Robert, Dominic Royé, and Javier Martin-Vide. "Meteorological drought lacunarity around the world and its classification." Earth System Science Data 12, no. 1 (March 27, 2020): 741–52. http://dx.doi.org/10.5194/essd-12-741-2020.

Abstract:
Abstract. The measure of drought duration strongly depends on the definition considered. In meteorology, dryness is habitually measured by means of fixed thresholds (e.g. 0.1 or 1 mm usually define dry spells) or climatic mean values (as is the case of the standardised precipitation index), but this also depends on the aggregation time interval considered. However, robust measurements of drought duration are required for analysing the statistical significance of possible changes. Herein we climatically classified the drought duration around the world according to its similarity to the voids of the Cantor set. Dryness time structure can be concisely measured by the n index (from the regular or irregular alternation of dry or wet spells), which is closely related to the Gini index and to a Cantor-based exponent. This enables the world’s climates to be classified into six large types based on a new measure of drought duration. To conclude, outcomes provide the ability to determine when droughts start and finish. We performed the dry-spell analysis using the full global gridded daily Multi-Source Weighted-Ensemble Precipitation (MSWEP) dataset. The MSWEP combines gauge-, satellite-, and reanalysis-based data to provide reliable precipitation estimates. The study period comprises the years 1979–2016 (total of 45 165 d), and a spatial resolution of 0.5∘, with a total of 259 197 grid points. The dataset is publicly available at https://doi.org/10.5281/zenodo.3247041 (Monjo et al., 2019).
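The basic ingredient of such dry-spell statistics is extracting the lengths of consecutive dry days under a fixed threshold; a minimal sketch with an assumed 1 mm threshold and a made-up daily series:

```python
def dry_spell_lengths(precip_mm, threshold=1.0):
    """Lengths (in days) of consecutive runs with precipitation below the threshold."""
    spells, run = [], 0
    for p in precip_mm:
        if p < threshold:
            run += 1
        elif run:
            spells.append(run)
            run = 0
    if run:
        spells.append(run)
    return spells

series = [0.0, 0.2, 5.1, 0.0, 0.0, 0.0, 12.4, 0.7, 0.0, 3.3, 0.0, 0.0]  # assumed daily totals, mm
spells = dry_spell_lengths(series)
print(spells, "mean length:", sum(spells) / len(spells))
```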
39

Tai, Chang-Kou. "Aliasing of Sea Level Sampled by a Single Exact-Repeat Altimetric Satellite or a Coordinated Constellation of Satellites: Analytic Aliasing Formulas." Journal of Atmospheric and Oceanic Technology 23, no. 2 (February 1, 2006): 252–67. http://dx.doi.org/10.1175/jtech1849.1.

Abstract:
Abstract. The aliasing problem for exact-repeat altimetric satellite sampling is solved analytically by the least squares method. To make the problem tractable, the latitudinal extent of the problem needs to be moderate for the satellite ground tracks to appear as two sets of parallel straight lines, and the along-track sampling is assumed to be dense enough to resolve any along-track features of interest. The aliasing formulas thus derived confirm the previously discovered resolving power, which is characterized by the Nyquist frequency ωc = π/T (where T is the repeat period of the satellite) and by (in local Cartesian coordinates) the zonal and meridional Nyquist wavenumbers kc = 2π/X and lc = 2π/Y, respectively (where X and Y are the east–west and north–south separations between adjacent parallel ground tracks). There are three major differences with the textbook aliasing. First, instead of the one-to-one correspondence, an outside spectral component is usually aliased into more than one inside spectral component (i.e., those inside the resolved spectral range with |ω| < ωc, |k| < kc, and |l| < lc). Second, instead of power conservation, the aliased components have less power than their corresponding outside spectral components. Third, not all outside components are aliased into the resolved range. Rather, only outside components inside certain well-defined regions in the spectral space are aliased inside. Numerical confirmation of these formulas has been achieved. Moreover, the soundness of these formulas is demonstrated through real examples of tidal aliasing. Furthermore, it is shown that these results can be generalized easily to the case with a coordinated constellation of satellites. The least squares methodology yields the optimal solution, that is, the best fitting, as well as yielding the least aliasing. However, the usual practice is to smooth (i.e., low-pass filter) the altimeter data onto a regular space–time grid. The framework for computing the aliasing of smoothed altimeter data is provided. The smoothing approach produces two major differences with the least squares results. First, the one-to-one correspondence of aliasing is mostly restored. Second, and more important, smoothing reduces the effective Nyquist wavenumbers to π/X = kc/2 and π/Y = lc/2, respectively (i.e., the resolved spectral space is reduced to a quarter of the size that is obtained by the least squares methodology). Ironically, the more one tries to filter out the small-scale features, the more precise the above statement becomes. However, like the least squares results, only outside components inside certain well-defined regions in the spectral space are aliased inside, and this occurs with less power. How much less depends on the characteristics of the smoother.
40

Li, Peng, Naiqian Zhang, Larry Wagner, Fred Fox, Darrell Oard, Hubert Lagae, and Mingqiang Han. "A Vehicle-Based Laser System for High-Resolution DEM Development – Performance in Micro-topography Measurement." Canadian Biosystems Engineering 63, no. 1 (December 31, 2021): 2.33–2.40. http://dx.doi.org/10.7451/cbe.2021.63.2.33.

Abstract:
A vehicle-based laser measurement system was developed to measure the surface microtopography and to generate high-resolution digital elevation models (DEM). The accuracy of the system in microtopography measurement was evaluated in the laboratory by comparing the DEM data generated by this system with that generated by a more accurate, stationary laser profile meter for several surfaces, including an artificial sand-stone-ridged surface. DEM data was created by interpolating the 3D raw data into a regular, square grid using a two-dimensional, distance-weighted interpolation algorithm. The DEMs were compared using an image-matching method to calculate the correlation coefficient. A test to study the effect of ambient light on elevation measurement under indoor and outdoor environments was also conducted. Correlation coefficients greater than 0.935 were achieved between the DEMs measured by the vehicle-based system and the stationary laser profile meter. The correlation coefficients among the four replications of the DEMs measured by the vehicle-based system were greater than 0.988, indicating that the vehicle-based laser system can provide consistent elevation measurements. Correlation coefficients among the DEMs of the sand-stone-ridged surface measured by the vehicle-based system at different times of the day and under different indoor fluorescent lighting conditions were all above 0.982. Correlation coefficients among DEMs taken at different times of the day and under different outdoor sunlight conditions were all above 0.971. These results indicated that neither the fluorescent light nor the sunlight had a significant effect on the measurements obtained by the vehicle-based laser system. The system provided consistent elevation measurements under both indoor and outdoor lighting conditions.
41

Sousa, Raul Fortes, Ismênia Ribeiro de Oliveira, Rubens Alves de Oliveira, and Job Teixeira de Oliveira. "ANÁLISE DE TRILHA DE ATRIBUTOS DE UM LATOSSOLO MANEJADO SOB SEMEADURA DIRETA." Nativa 10, no. 3 (August 26, 2022): 366–72. http://dx.doi.org/10.31413/nativa.v10i3.13345.

Abstract:
Path analysis of attributes of an Oxisol managed under no-till. ABSTRACT: Soil quality indicators are correlated, affecting each other both positively and negatively. The path analysis technique is widely used to determine the direct and indirect correlations between soil attributes. Considering that knowledge of the association between these attributes is of great importance in soil science, the objective of the authors was to evaluate the attributes of an Oxisol under no-till through path analysis. A georeferenced grid composed of 50 points with a regular spacing of 40 m x 40 m was assembled in the production area. The soil samples were collected at a depth of 0.00-0.20 m. The variables analyzed were electrical conductivity, altitude, humidity, soil density, porosity, organic matter, pH, clay, silt, total sand, very coarse sand, coarse sand, medium sand, fine sand, very fine sand, percentage of aggregates larger than 2.00, 1.00, 0.50, 0.25 and 0.125 mm, weighted average diameter and geometric average diameter. From the path analysis it was possible to identify that the pH attribute positively influenced the electrical conductivity. Pearson's correlation showed that the attributes porosity, organic matter, percentage of aggregates larger than 2.00, 1.00, 0.50 and 0.25 mm, and the weighted average and geometric average diameters exhibit a high positive correlation. The path analysis showed that, among the attributes analyzed, pH is the one that best determines the electrical conductivity in the Oxisol in a direct and significant way. Keywords: multivariate analysis; soil physics; multicollinearity.
42

Rosset, Philippe, Adil Takahashi, and Luc Chouinard. "Vs30 Mapping of the Greater Montreal Region Using Multiple Data Sources." Geosciences 13, no. 9 (August 23, 2023): 256. http://dx.doi.org/10.3390/geosciences13090256.

Abstract:
The metropolitan community of Montreal (MMC) is located in Eastern Canada and included in the western Quebec seismic zone, characterized by shallow crustal earthquakes and moderate seismicity. Most of the urbanized areas are settled close to the Saint-Lawrence River and its tributaries and within the region delimiting the extension of the clay deposits from the Champlain Sea. The influence of these recent and soft deposits on seismic waves was observed after the 1988 M5.8 Saguenay earthquake and has proven to be crucial in seismic hazard analysis. The shear-wave velocity Vs averaged over the top 30 m of soil, abbreviated Vs30, is one of the most used parameters to characterize the site condition and its influence on seismic waves. Since 2000, a site condition model has been developed for the municipalities of Montreal and Laval, combining seismic and borehole data for risk mitigation purposes. The paper presents an extended version of the Vs30 mapping for the entire region of the MMC, which accounts for half of the population of Quebec, including additional ambient noise recordings, recently updated borehole datasets, a geological vector map and unpublished seismic refraction data to derive Vs profiles. The estimated Vs30 values for thousands of sites are then interpolated onto a regular grid of 0.01 degrees using the inverse distance weighted interpolation approach. Regions with the lowest estimated Vs30 values, where site amplification of seismic waves could be expected, are in the northeastern part and in the southwest of the MMC. The map, expressed in terms of site classes, is compared with intensity values derived from citizen observations after recent felt events. In general, the highest reported intensity values are found in regions with the lowest Vs30 values on the map. Areas where this rule does not apply should be investigated further. This site condition model can be used in seismic hazard and risk analysis.
43

Jiang, Peng, Shirong Ye, Yinhao Lu, Yanyan Liu, Dezhong Chen, and Yanlan Wu. "Development of time-varying global gridded Ts–Tm model for precise GPS–PWV retrieval." Atmospheric Measurement Techniques 12, no. 2 (February 27, 2019): 1233–49. http://dx.doi.org/10.5194/amt-12-1233-2019.

Abstract:
Abstract. Water-vapor-weighted mean temperature, Tm, is the key variable for estimating the mapping factor between GPS zenith wet delay (ZWD) and precipitable water vapor (PWV). For the near-real-time GPS–PWV retrieval, estimating Tm from surface air temperature Ts is a widely used method because of its high temporal resolution and fair degree of accuracy. Based on the estimations of Tm and Ts at each reanalysis grid node of the ERA-Interim data, we analyzed the relationship between Tm and Ts without data smoothing. The analyses demonstrate that the Ts–Tm relationship has significant spatial and temporal variations. Static and time-varying global gridded Ts–Tm models were established and evaluated by comparisons with the radiosonde data at 723 radiosonde stations in the Integrated Global Radiosonde Archive (IGRA). Results show that our global gridded Ts–Tm equations have prominent advantages over the other globally applied models. At over 17 % of the stations, errors larger than 5 K exist in the Bevis equation (Bevis et al., 1992) and in the latitude-related linear model (Y. B. Yao et al., 2014), while these large errors are removed in our time-varying Ts–Tm models. Multiple statistical tests at the 5 % significance level show that the time-varying global gridded model is superior to the other models at 60.03 % of the radiosonde sites. The second-best model is the 1∘ × 1∘ GPT2w model, which is superior at only 12.86 % of the sites. More accurate Tm can reduce the contribution of the uncertainty associated with Tm to the total uncertainty in GPS–PWV, and the reduction augments with the growth of GPS–PWV. Our theoretical analyses with high PWV and small uncertainty in surface pressure indicate that the uncertainty associated with Tm can contribute more than 50 % of the total GPS–PWV uncertainty when using the Bevis equation, and it can decline to less than 25 % when using our time-varying Ts–Tm model. However, the uncertainty associated with surface pressure dominates the error budget of PWV (more than 75 %) when the surface pressure has an error larger than 5 hPa. GPS–PWV retrievals using different Tm estimates were compared at 74 International GNSS Service (IGS) stations. At 74.32 % of the IGS sites, the relative differences of GPS–PWV are within 1 % by applying the static or the time-varying global gridded Ts–Tm equations, while the Bevis model, the latitude-related model and the GPT2w model perform the same at 37.84 %, 41.89 % and 29.73 % of the sites. Compared with the radiosonde PWV, the error reduction in the GPS–PWV retrieval can be around 1–2 mm when using a more accurate Tm parameterization, which accounts for around 30 % of the total GPS–PWV error.
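The role of Tm in the retrieval can be sketched with the Bevis et al. (1992) relation Tm ≈ 70.2 + 0.72·Ts and the standard dimensionless conversion factor between zenith wet delay and PWV; the refractivity constants below are commonly quoted values (in SI, per-Pa units) and the input numbers are assumptions for illustration:

```python
def bevis_tm(ts_kelvin):
    """Bevis et al. (1992) approximation of the water-vapor-weighted mean temperature."""
    return 70.2 + 0.72 * ts_kelvin

def pwv_from_zwd(zwd_m, tm):
    """Dimensionless conversion factor Pi and PWV = Pi * ZWD.
    rho_w: water density, R_v: vapour gas constant,
    k2p, k3: wet refractivity constants in per-Pa units."""
    rho_w, R_v = 1000.0, 461.5
    k2p, k3 = 0.221, 3.739e3
    pi = 1.0e6 / (rho_w * R_v * (k3 / tm + k2p))
    return pi, pi * zwd_m

ts = 293.15                          # assumed surface temperature, K
tm = bevis_tm(ts)
pi, pwv = pwv_from_zwd(0.150, tm)    # assumed ZWD of 150 mm
print(f"Tm = {tm:.1f} K, Pi = {pi:.3f}, PWV = {pwv * 1000:.1f} mm")
```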
44

Klepers, Andris, and Pēteris Lakovskis. "IDENTIFICATION OF LANDSCAPES OF NATIONAL IMPORTANCE USING GIS." SOCIETY. TECHNOLOGY. SOLUTIONS. Proceedings of the International Scientific Conference 2 (April 8, 2022): 28. http://dx.doi.org/10.35363/via.sts.2022.91.

Abstract:
INTRODUCTION One of the aims of recognising landscapes of national importance is to encourage public authorities to adopt policies and measures at the local, regional and national level for protecting, managing and planning landscapes throughout national states. It covers unique and outstanding landscapes among the ordinary ones, that not only determine the quality of people’s living environment but also contribute to national identity. Different approaches have been used so far internationally in identifying landscapes of national importance, assessing their characteristics, structure and landscape elements, recognising that both – quantitative assessment and expert judgement should be involved for this task. Within this study, the focus is on the quantitative part of the study, using GIS and revealing the traceable sequence of steps and criteria used. MATERIALS AND METHODS GIS approach was used to determine landscape areas of national importance, using a hexagon grid - (each in an area of 100 ha, 68,407 hexagons), which covers the territory of Latvia. The aggregation of spatial data in regular grids provides an opportunity to normalise different types of spatial data, as well as to address the use of irregularly shaped polygons (e.g., in the case of politically defined boundaries). The hexagon network, due to the shape, forms continuous coverage of the area, while at the same time the hexagon has a similar shape to a circle, which accordingly provides advantages in terms of defining and representing different spatial relationships. Territories of the most valuable landscapes of national significance are spatially separated, assigning values to hexagons in accordance with the landscape values in their territory. Each hexagon is assigned a value according to whether it overlaps with an area that meets one or more of the criteria for the most valuable landscapes of national importance. In the case of larger, continuous area units, the coincidence of areas is determined by the hexagon centroid, but in the case of smaller, individual area units (also point units), the intersect function is used. The criteria for the research part to be quantified include five thematic sections: natural heritage, cultural heritage and historical evidence, identity and community involvement, uniqueness and landscape quality, which can be quantified from the infrastructure created to highlight the visual aspects and aesthetics of landscape. RESULTS The part of the quantitative analysis data used to determine the value of the landscape by GIS has been realised in several sequent stages. First, after analysing the main criteria for the identification of landscapes of national importance from existing literature and research thematic areas, they were split into concrete criteria: 8 for natural heritage, 5 for cultural heritage and historical evidence, 6 for identity and community involvement, 4 for uniqueness and 1 for landscape quality. Each of the criteria was given an appropriate weight of 0.5 to 1.5 points (using 0.25 points as a step). Several of the criteria are exclusive and do not overlap; the total amount for most outstanding landscapes would be 12 points. This was followed by a phase of structuring and categorising large amounts of data to allow GIS analysis to be performed. Minor adjustments were made to the weights assigned to the criteria in the methodology during the analysis. 
Each of the 68,407 hexagons of 100 ha received a weighted value, and the territories where concentrations of the highest values were identified were reconsidered during the next stage as landscapes of national importance. As there were more than 100 such places of concentration, a discussion was carried out on joining territories that have less valuable hexagons in between.
46

Acs, Gabor, Sandor Doleschall, and Eva Farkas. "General Purpose Compositional Model." Society of Petroleum Engineers Journal 25, no. 04 (August 1, 1985): 543–53. http://dx.doi.org/10.2118/10515-pa.

Abstract:
Abstract A direct sequential method has been developed to simulate isothermal compositional systems. The solution technique is the same as that of the implicit pressure, explicit saturation (IMPES) method: one pressure is treated implicitly and (instead of the phase saturation) the component masses/moles are treated explicitly. A "volume balance" equation is used to obtain the pressure equation. A weighted sum of the conservation equations is used to eliminate the nonlinear saturation/concentration terms from the accumulation term of the pressure equation. The partial mass/mole volumes are used as "constants" to partial mass/mole volumes are used as "constants" to weight the mass/mole conservation equations. The method handles uniformly a range of cases from the simplified compositional (i.e., black-oil) models to the most complicated multiphase compositional models of incompressible and compressible fluid systems. The numerical solution is based on the integrated finite-difference method that allows one- (1D), two- (2D), and three-dimensional (3D) grids of regular or irregular volume elements to be handled with the same ease. The mathematical model makes it possible to develop modular versatile computer realizations; thus the model is highly suitable as a basis for general-purpose models. Introduction During the last three decades reservoir simulators have been well developed. The enormous progress in computer techniques has strongly contributed to the development of increasingly effective and sophisticated computer models. The key numerical techniques of modeling conventional displacement methods had been elaborated upon by the beginning of the 1970's, and it was possible to develop a single simulation model capable of addressing most reservoir problems encountered. Since the 1970's, however, because of the sharp rise in oil prices, the need for new enhanced recovery processes has forced reservoir-simulation experts to develop newer computer models that account for completely unknown effects of the new displacement mechanisms. The proliferation of recovery methods since the 1970's has resulted in a departure from the single-model concept because individual models tend to be developed to simulate each of the new recovery schemes. This proliferation of models, however, seems to be a less than ideal situation because of the expense involved in the development, maintenance, and applications training for the multiple new models. In addition, when different models are applied to simulate various enhanced recovery methods, no common basis exists to help survey, compare, and thus understand the different recovery mechanisms. The importance of a single, general simulator capable of modeling all or most recovery processes of interest was emphasized by Coats, who worked out a model as a step in this direction. Economic restrictions have also forced various companies to develop multiple-application reservoir models. The multiple-application reservoir simulator (MARS) program presented by Kendall et al. is one realization of the goal: a single program for multiple application. From a mathematical point of view, reservoir simulators consist of a set of partial differential equations and a set of algebraic equations, both with the appropriate initial and boundary conditions. In isothermal cases the partial differential equations, taking into account Darcy's law, describe the mass/mole/normal-volume conservation for each component of the reservoir fluid system. 
Phase and/or component transport caused by capillarity, gravity, and/or diffusion also can be taken into account. The algebraic equations describe the thermodynamic properties of the reservoir fluid/rock system. The existence of properties of the reservoir fluid/rock system. The existence of local and instant thermodynamic equilibria is a generally accepted assumption of reservoir simulation. This means that the number of mass/mole/normal-volume conservation equations is equal to the number of components used to describe the reservoir fluid/rock system. During the simulation the reservoir examined is divided into volume elements by a 1D, 2D, or 3D grid. Each of the volume elements is characterized by the appropriate reservoir properties and the displacement process is described by properties and the displacement process is described by a series of thermodynamic equilibria for each volume element. The difference between the simulators of conventional and enhanced recovery methods essentially arises from how many components are chosen as a means of appropriately describing the displacement process, and how the thermodynamic equilibria (thermodynamic properties) of the reservoir fluid/rock system are characterized. In cases of conventional technologies a simplified (black-oil) approach of the hydrocarbon system by a pseudogas and a pseudo-oil component generally is accepted, and the pseudo-oil component generally is accepted, and the thermodynamic properties of the given system depend only on the pressure. This approximation made it possible to develop the direct sequential IMPES solution technique, taking into account the advantage of black-oil models wherein the number of components is equal to the number of phases and thus the number of phases is equal to the number of conservation equations. SPEJ P. 543
47

Francis, Bibin, Sanjay Viswanath, and Muthuvel Arigovindan. "Scattered data approximation by regular grid weighted smoothing." Sādhanā 43, no. 1 (January 2018). http://dx.doi.org/10.1007/s12046-017-0765-y.

48

Yunfeng, Guo, and Li Jing. "Recognition of teaching method effects based on grid model simplification and artificial intelligence." Journal of Intelligent & Fuzzy Systems, December 21, 2020, 1–11. http://dx.doi.org/10.3233/jifs-189505.

Abstract:
In order to improve the performance of the teaching method evaluation model, this paper constructs an artificial intelligence model based on the grid model. Moreover, this paper proposes a hexahedral grid structure simplification method based on weighted sorting, which comprehensively sorts the elimination order of candidate base complexes in the grid using three sorting criteria: width, deformation and price improvement. At the same time, for the elimination order of basic complex strings, this paper also proposes a corresponding priority sorting algorithm. In addition, this paper proposes a smoothing regularization method based on the local parameterization method of the improved SLIM algorithm, which uses the regularized unit as the reference unit in the local mapping in the SLIM algorithm. Furthermore, this paper proposes an adaptive refinement method that maintains the uniformity of the grid and reduces the surface error, which can better slow down the occurrence of geometric constraints caused by an insufficient number of elements in the process of grid simplification. Finally, this paper designs experiments to study the performance of the model. The research results show that the model constructed in this paper is effective.
49

Yang, Tiejun, Lu Tang, Qi Tang, and Lei Li. "Sparse angle CT reconstruction with weighted dictionary learning algorithm based on adaptive group-sparsity regularization." Journal of X-Ray Science and Technology, April 2, 2021, 1–18. http://dx.doi.org/10.3233/xst-210839.

Abstract:
OBJECTIVE: In order to address the blurred structural details and over-smoothing effects of sparse-representation dictionary learning reconstruction algorithms, this study aims to test sparse angle CT reconstruction with a weighted dictionary learning algorithm based on adaptive Group-Sparsity Regularization (AGSR-SART). METHODS: First, a new similarity measure is defined in which covariance is introduced into the Euclidean distance; non-local image patches are adaptively divided into groups of different sizes as the basic unit of sparse representation. Second, the weight factor of the regularization constraint terms is designed through the residuals represented by the dictionary, so that the algorithm applies different smoothing effects to different regions of the image during the iterative process. The sparse reconstructed image is modified according to the difference between the estimated value and the intermediate image. Last, the SBI (Split Bregman Iteration) iterative algorithm is used to solve the objective function. An abdominal image, a pelvic image and a thoracic image are employed to evaluate the performance of the proposed method. RESULTS: In terms of quantitative evaluations, experimental results show that the new algorithm yields a PSNR of 48.20, a maximum SSIM of 99.06% and a minimum MAE of 0.0028. CONCLUSIONS: This study demonstrates that the new algorithm can better preserve structural details in reconstructed CT images. It eliminates the effect of excessive smoothing in sparse angle reconstruction and enhances the sparseness and non-local self-similarity of the image, and thus it is superior to several existing reconstruction algorithms.
50

"Smoothing of 70-Year Daily Rainfall Data Based on Moving Average and Weighted average Technique." International Journal of Engineering and Advanced Technology 8, no. 6 (August 30, 2019): 1407–10. http://dx.doi.org/10.35940/ijeat.f8108.088619.

Abstract:
An extreme value analysis of rainfall for Thanjavur town in Tamil Nadu was carried out using 70 years of daily rainfall data. The moving average method is a simple way to understand the rainfall trend of the selected station. The analysis was carried out for monthly, seasonal and annual rainfall. Other graphical methods, such as the ordinate graph, bar diagram and chronological chart, do not describe the trend or cyclic pattern. A curve that smooths out the extreme variations and indicates the trend or cyclic pattern is known as a moving average curve. Through this moving average curve it is possible to understand the trend, which can be used for future years. From the results of the annual rainfall analysis, based on the 3-year, 5-year and 10-year moving averages, no persistent regular cycle is visible, whereas in the 30-year, 40-year and 50-year moving averages a horizontal linear trend has been observed. In the season-wise analysis (winter, summer, North-East monsoon and South-West monsoon), the 3-year, 5-year and 10-year moving averages show no apparent trend or cyclicity, whereas in the 30-year, 40-year and 50-year moving averages a horizontal linear cycle has been noticed. It is clear from the study that no large variation of rainfall has occurred in Thanjavur city over the selected years of analysis.
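A minimal sketch of the centred k-year moving average described above, applied to an assumed series of annual rainfall totals (a weighted variant would simply replace the uniform mean with normalised weights):

```python
def moving_average(values, window):
    """Centred simple moving average; returns None where the window does not fit."""
    half = window // 2
    out = []
    for i in range(len(values)):
        if i < half or i + half >= len(values):
            out.append(None)
        else:
            chunk = values[i - half:i + half + 1]
            out.append(sum(chunk) / len(chunk))
    return out

annual_rain = [812, 1040, 930, 1110, 870, 990, 1205, 760, 1015, 940, 1080, 895]  # mm, assumed
print(moving_average(annual_rain, 5))
```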