A selection of scholarly literature on the topic "Regular grid weighted smoothing"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Regular grid weighted smoothing".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant data are present in the item's metadata.

Journal articles on the topic "Regular grid weighted smoothing"

1. Lauritzen, P. H., J. T. Bacmeister, P. F. Callaghan, and M. A. Taylor. "NCAR global model topography generation software for unstructured grids." Geoscientific Model Development Discussions 8, no. 6 (June 22, 2015): 4623–51. http://dx.doi.org/10.5194/gmdd-8-4623-2015.

Abstract:
Abstract. It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 – Spectral Elements dynamical core) are shown.
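As a hedged illustration of the kind of computation such software performs, the sketch below bins high-resolution point elevations (numpy arrays) into the boxes of a regular latitude-longitude grid and returns the grid-box mean elevation and sub-grid variance; the names and the flat binning scheme are illustrative assumptions, not the NCAR code.

```python
import numpy as np

def gridbox_stats(lat, lon, elev, nlat=180, nlon=360):
    """Grid-box mean elevation and sub-grid variance on a regular lat-lon grid."""
    i = np.clip(((lat + 90.0) / 180.0 * nlat).astype(int), 0, nlat - 1)
    j = np.clip(((lon % 360.0) / 360.0 * nlon).astype(int), 0, nlon - 1)
    flat = i * nlon + j
    count = np.bincount(flat, minlength=nlat * nlon).astype(float)
    s1 = np.bincount(flat, weights=elev, minlength=nlat * nlon)
    s2 = np.bincount(flat, weights=elev**2, minlength=nlat * nlon)
    with np.errstate(invalid="ignore", divide="ignore"):
        mean = s1 / count                  # grid-box mean elevation (nan if empty)
        var = s2 / count - mean**2         # sub-grid variance, as used by GWD/TMS schemes
    return mean.reshape(nlat, nlon), var.reshape(nlat, nlon)
```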
2. Estuti, Abdallah A., and Elemér Litvai. "Post-extrapolation for specified time-step results without interpolation in MOC-based 1D hydraulic transients and gas release computations." Journal of Computational and Applied Mechanics 18, no. 1 (2023): 85–95. http://dx.doi.org/10.32973/jcam.2023.003.

Abstract:
The goal of the paper is to present a supplementary step called post-extrapolation. When applied to the well-known method of characteristics (MOC), it assures the continuous use of the specified time steps or regular numerical grid, without interpolations, during computations of transients in 1D two-phase flow in straight elastic pipes. The new method consists of two steps, the first being a typical MOC step in which the C− and C+ characteristics start from regular nodal points, allowing the point of intersection to differ from a regular one. After defining the variables there, the method transforms them to the nearest regular grid point, using the first derivatives contained in the original nonlinear governing equations, evaluated numerically from the variables obtained earlier in the neighboring nodes. The procedure needs no interpolations; it deals with grid-point values only. Instead of Courant-type stability conditions, shock-wave catching and smoothing techniques help to assure numerical stability between broad limits of parameters such as the closing time of a valve and the initial gas content of the fluid. Comparisons with runs of traditional codes under itemized boundary conditions, and with measurements on a simple TPV (tank-pipe-valve) setup, show acceptable scatter.
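For orientation, the classical MOC interior-node update that the paper's post-extrapolation step supplements looks roughly as follows (the textbook Wylie-Streeter single-phase form, where B is the pipe characteristic impedance and R the resistance coefficient; a sketch, not the authors' two-phase scheme):

```python
def moc_interior_node(HA, QA, HB, QB, B, R):
    """Head HP and flow QP where the C+ (from node A) and C- (from node B) meet."""
    CP = HA + B * QA - R * QA * abs(QA)   # C+ characteristic constant
    CM = HB - B * QB + R * QB * abs(QB)   # C- characteristic constant
    QP = (CP - CM) / (2.0 * B)
    HP = 0.5 * (CP + CM)
    return HP, QP
```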
3. Böhm, Gualtiero, and Aldo L. Vesnaver. "In quest of the grid." GEOPHYSICS 64, no. 4 (July 1999): 1116–25. http://dx.doi.org/10.1190/1.1444618.

Abstract:
The possible nonuniqueness and inaccuracy of tomographic inversion solutions may be the result of an inadequate discretization of the model space with respect to the acquisition geometry and the velocity field sought. Void pixels and linearly dependent equations are introduced if the grid shape does not match the spatial distribution of rays, giving rise to the well-known null space. This is a common drawback when using regular pixels. By definition, the null space does not depend on the picked traveltimes, and so we cannot eliminate it by minimising the traveltime residuals. We show that the inversion quality can be improved by following a trial-and-error approach, that is, by adapting the pixels' shape and distribution to the layer interfaces and velocity field. The resolution can be increased or decreased locally to search for an optimal grid, although this introduces a personal bias. On the other hand, we can thus decide where, why, and which a priori information is introduced into the sought velocity field, which is hardly feasible with other stabilising tools such as damping factors and smoothing filters.
4. Billings, Stephen D., Garry N. Newsam, and Rick K. Beatson. "Smooth fitting of geophysical data using continuous global surfaces." GEOPHYSICS 67, no. 6 (November 2002): 1823–34. http://dx.doi.org/10.1190/1.1527082.

Abstract:
Continuous global surfaces (CGS) are a general framework for interpolation and smoothing of geophysical data. The first of two smoothing techniques we consider in this paper is generalized cross validation (GCV), which is a bootstrap measure of the predictive error of a surface that requires no prior knowledge of noise levels. The second smoothing technique is to define the CGS surface with fewer centers than data points and compute the fit by least squares (LSQR); the noise levels are implicitly estimated by the number and placement of the centers relative to the data points. We show that both smoothing methods can be implemented using extensions to the existing fast framework for interpolation, so that it is now possible to construct realistic smooth fits to the very large data sets typically collected in geophysics. Thin-plate spline and kriging surfaces with GCV smoothing appear to produce realistic fits to noisy radiometric data. The resulting surfaces are similar, yet the thin-plate spline required less parameterization. Given the simplicity and parsimony of GCV, this makes a combination of the two methods a reasonable default choice for the smoothing problem. LSQR smooth fitting with sinc functions defined on a regular grid of centers effectively low-pass filters the data and produces a reasonable surface, although one not as visually appealing as for splines and kriging.
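A minimal sketch of GCV in action, using a simple 1-D quadratic (Whittaker-type) smoother in place of the paper's spline and kriging surfaces; the penalty matrix and names are illustrative assumptions:

```python
import numpy as np

def gcv_smooth(y, lams=np.logspace(-2, 4, 61)):
    """Pick the smoothing parameter minimizing the GCV score for a 1-D signal y."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)               # second-difference penalty
    best = None
    for lam in lams:
        A = np.linalg.inv(np.eye(n) + lam * D.T @ D)  # influence matrix A(lam)
        f = A @ y
        score = n * np.sum((y - f) ** 2) / (n - np.trace(A)) ** 2
        if best is None or score < best[0]:
            best = (score, lam, f)
    return best                                       # (GCV score, lambda, fit)
```

The dense inverse is fine for a toy signal; the point of the paper is precisely that such smoothers can instead be evaluated within a fast fitting framework for very large data sets.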
5. Chevrot, Sébastien, and Maximilien Lehujeur. "Eikonal surface wave tomography with smoothing splines—application to Southern California." Geophysical Journal International 229, no. 3 (January 29, 2022): 1927–41. http://dx.doi.org/10.1093/gji/ggac034.

Abstract:
The densification of both permanent and temporary seismic networks has raised new interest in surface wave eikonal tomography, from which phase velocity maps can be obtained without solving a tomographic inverse problem. However, eikonal tomography requires reconstructing traveltime surfaces from a discrete number of measurements obtained at the station locations, which can be challenging. We present a new method to reconstruct these traveltime surfaces with smoothing splines discretized on a regular 2-D Cartesian grid. We impose Neumann boundary conditions so that the phase gradients on the edges of the grid are equal to the apparent slownesses of the average plane wave along the normal direction measured by beamforming. Using the eikonal equation, phase velocity maps are then derived from the norm of the gradient of the interpolated traveltime maps. The method is applied to Rayleigh waves recorded by the Southern California Seismic Network to derive phase velocity surfaces. Robust, stable and finely resolved phase velocity maps at 25 and 33 s period are obtained after averaging the phase velocity maps derived from the analysis of a selection of recent large (Mw ≥ 6.5) teleseismic events. The phase velocity map at 25 s mainly constrains the thickness of the Southern California crust, with results that are in excellent agreement with previous tomographic studies.
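The eikonal step itself is compact. Assuming the traveltimes T have already been interpolated onto a regular grid with spacings dx and dy, a sketch of the phase-velocity computation is:

```python
import numpy as np

def phase_velocity(T, dx, dy):
    """c = 1/|grad T| from a gridded traveltime surface (eikonal equation)."""
    dTdy, dTdx = np.gradient(T, dy, dx)    # gradients along rows, then columns
    slowness = np.hypot(dTdx, dTdy)        # |grad T| = apparent slowness
    return np.where(slowness > 0, 1.0 / slowness, np.nan)
```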
6. Tai, Chang-Kou. "On the Aliasing and Resolving Power of Sea Level Low-Pass Filtered onto a Regular Grid from Along-Track Altimeter Data of Uncoordinated Satellites: The Smoothing Strategy." Journal of Atmospheric and Oceanic Technology 25, no. 4 (April 1, 2008): 617–24. http://dx.doi.org/10.1175/2007jtecho514.1.

Abstract:
Abstract It is shown that smoothing (low-pass filtering) along-track altimeter data of uncoordinated satellites onto a regular space–time grid helps reduce the overall energy level of the aliasing from the aliasing levels of the individual satellites. The rough rule of thumb is that combining N satellites reduces the energy of the overall aliasing to 1/N of the average aliasing level of the N satellites. Assuming the aliasing levels of these satellites are roughly of the same order of magnitude (i.e., assuming that no special signal spectral content significantly favors one satellite over others at certain locations), combining data from uncoordinated satellites is clearly the right strategy. Moreover, contrary to the case of coordinated satellites, this reduction of aliasing is not achieved by the enhancement of the overall resolving power. In fact (by the strict definition of the resolving power as the largest bandwidths within which a band-limited signal remains free of aliasing), the resolving power is reduced to its smallest possible extent. If one characterizes the resolving power of each satellite as a spectral space within which all band-limited signals are resolved by the satellite, then the combined resolving power of the N satellite is characterized by the spectral space that is the intersection of all N spectral spaces (i.e., the spectral space that is common to all the resolved spectral spaces of the N satellites, hence the smallest). It is also shown that the least squares approach is superior to the smoothing approach in reducing the aliasing and upholding the resolving power of the raw data. To remedy one of the shortcomings of the smoothing approach, the author recommends a multismoother smoothing strategy that tailors the smoother to the sampling characteristics of each satellite. Last, a strategy based on the least squares approach is also described for combining data from uncoordinated satellites.
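Restating the rule of thumb above in symbols, with $E_i$ the aliasing energy of satellite $i$ alone:

```latex
E_{\mathrm{comb}} \;\approx\; \frac{\bar{E}}{N},
\qquad
\bar{E} \;=\; \frac{1}{N}\sum_{i=1}^{N} E_i .
```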
7. Zhang, Jie, Ping Duan, Jia Li, and Jiajia Liu. "Electromagnetic Radiation Space Field Construction Collected along the Road Based on Layered Radial Basis Function." Applied Sciences 13, no. 10 (May 17, 2023): 6153. http://dx.doi.org/10.3390/app13106153.

Abstract:
The electromagnetic radiation (EMR) data collected along a road have a largely empty region overall, while they have a linear distribution locally. Moreover, the traditional spatial interpolation method is not suitable for the electromagnetic radiation space field (EMR-SF) construction collected along the road. In this paper, a layered radial basis function (LRBF) method is proposed to generate the EMR-SF, which interpolates from outside to inside in a layered strategy. First, the regular grid points are constructed based on RBF within the range of sampling data and then are layered based on Ripley’s K function. Second, on the basis of layering, the EMR of grid points is generated layer by layer using the LRBF method. Finally, EMR-SF is constructed by using the sampling data and grid points. The LRBF method is applied to EMR data from an area of Yunnan Normal University in Kunming, China. The results show that the LRBF accuracy is higher than that of the ordinary kriging (OK) and inverse-distance-weighted (IDW) interpolation methods. The LRBF interpolation accuracy can be improved through the strategy of regular grid point construction and layering, and the EMR-SF constructed by LRBF is more realistic than OK and IDW.
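A simplified sketch of the layered strategy, with scipy's RBFInterpolator standing in for the paper's RBF and the layer labels assumed to be precomputed (the Ripley's-K layering is not reproduced here):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def layered_rbf(pts, vals, grid_pts, layer_ids):
    """Interpolate grid_pts layer by layer, outermost layer first, feeding each
    completed layer back in as data for the next (inner) layer."""
    out = np.empty(len(grid_pts))
    for layer in np.sort(np.unique(layer_ids)):
        sel = layer_ids == layer
        rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
        out[sel] = rbf(grid_pts[sel])
        pts = np.vstack([pts, grid_pts[sel]])      # completed layer becomes
        vals = np.concatenate([vals, out[sel]])    # data for the next one
    return out
```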
8. Chen, Xuejun, Jing Zhao, Wenchao Hu, and Yufeng Yang. "Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method." Abstract and Applied Analysis 2014 (2014): 1–21. http://dx.doi.org/10.1155/2014/984268.

Abstract:
As one of the most promising renewable resources in electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates with strong variation, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for the regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.
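A sketch of the five-three-Hanning (53H) smoother named above: a 5-point running median, a 3-point running median, then Hanning weights (1/4, 1/2, 1/4); endpoint handling is the simplest possible choice here:

```python
import numpy as np
from scipy.signal import medfilt

def smooth_53h(x):
    y = medfilt(x, kernel_size=5)                     # "5": running median of 5
    y = medfilt(y, kernel_size=3)                     # "3": running median of 3
    return np.convolve(y, [0.25, 0.5, 0.25], "same")  # "H": Hanning weights
```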
9. Litvinchev, Igor, and Edith Lucero Ozuna Espinosa. "Integer Programming Formulations for Approximate Packing Circles in a Rectangular Container." Mathematical Problems in Engineering 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/317697.

Abstract:
A problem of packing a limited number of unequal circles in a fixed size rectangular container is considered. The aim is to maximize the (weighted) number of circles placed into the container or minimize the waste. This problem has numerous applications in logistics, including production and packing for the textile, apparel, naval, automobile, aerospace, and food industries. Frequently the problem is formulated as a nonconvex continuous optimization problem which is solved by heuristic techniques combined with local search procedures. New formulations are proposed for approximate solution of packing problem. The container is approximated by a regular grid and the nodes of the grid are considered as potential positions for assigning centers of the circles. The packing problem is then stated as a large scale linear 0-1 optimization problem. The binary variables represent the assignment of centers to the nodes of the grid. Nesting circles inside one another is also considered. The resulting binary problem is then solved by commercial software. Numerical results are presented to demonstrate the efficiency of the proposed approach and compared with known results.
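Schematically (the paper's full model also handles the container walls and the nesting of circles), the grid-based formulation assigns circle k, with weight w_k and radius r_k, to node p_i through binaries x_{ki}:

```latex
\max \sum_{k}\sum_{i} w_k\, x_{ki}
\quad \text{s.t.} \quad
\sum_{i} x_{ki} \le 1 \ \ \forall k,
\qquad
x_{ki} + x_{lj} \le 1 \ \ \text{whenever } \|p_i - p_j\| < r_k + r_l,\ k \ne l,
\qquad
x_{ki} \in \{0,1\}.
```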
10. Tait, Andrew, and Ross Woods. "Spatial Interpolation of Daily Potential Evapotranspiration for New Zealand Using a Spline Model." Journal of Hydrometeorology 8, no. 3 (June 1, 2007): 430–38. http://dx.doi.org/10.1175/jhm572.1.

Abstract:
Abstract Potential evapotranspiration (PET) is an important component of water balance calculations, and these calculations form an equally important role in applications such as irrigation scheduling, pasture productivity forecasts, and groundwater recharge and streamflow modeling. This paper describes a method of interpolating daily PET data calculated at climate stations throughout New Zealand onto a regular 0.05° latitude–longitude grid using a thin-plate smoothing spline model. Maximum use is made of observational data by combining both Penman and Priestley–Taylor PET calculations and raised pan evaporation measurements. An analysis of the interpolation error using 20 validation sites shows that the average root-mean-square error varies between about 1 mm in the summer months to about 0.4 mm in winter. It is advised that interpolated data for areas above 500-m elevation should be used with caution, however, due to the paucity of input data from high-elevation sites.
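A sketch of the interpolation step, with scipy's thin-plate-spline RBFInterpolator standing in for the paper's spline model; the New Zealand bounding box is approximate, and the station coordinates, values, and smoothing level are assumed inputs:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def pet_surface(station_xy, pet, smoothing=1.0):
    """Fit a thin-plate smoothing spline to station PET; evaluate on a 0.05-deg grid."""
    spline = RBFInterpolator(station_xy, pet,
                             kernel="thin_plate_spline", smoothing=smoothing)
    lon, lat = np.meshgrid(np.arange(166.0, 179.0, 0.05),   # approx. NZ longitudes
                           np.arange(-47.5, -34.0, 0.05))   # approx. NZ latitudes
    grid = np.column_stack([lon.ravel(), lat.ravel()])
    return spline(grid).reshape(lat.shape)
```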

Dissertations on the topic "Regular grid weighted smoothing"

1. Francis, Biben. "Fast and Robust Biomedical Image Reconstruction from Nonuniform Samples." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4960.

Abstract:
We consider the problem of reconstructing images from non-uniformly under-sampled spatial point measurements, with emphasis on robustness to noise. The computational methods that deal with this problem are known as scattered data approximation (SDA) methods. Among these, well-performing methods achieve the reconstruction by minimizing a cost that is a weighted sum of a data-fidelity term, measuring the accuracy of the fit at the measurement locations, and a regularization term. The latter term incorporates a certain smoothness and is constructed by summing the squared derivative values of a chosen order. The relative weight between these two terms is known as the smoothing parameter. Prominent methods in this category are the thin-plate spline (TPS) and radial basis function (RBF) methods, and they require solving large, numerically ill-conditioned and/or dense linear systems of equations. The subspace variational method alleviates the numerical instability and the computational complexity associated with the TPS and RBF methods; however, it involves solving a large, sparse linear system of equations, requiring specialized numerical methods. In the first part of the thesis, we propose a novel method for SDA that eliminates the need for solving dense linear systems of equations, and even the need for storing the matrix representing the linear system. This is achieved by handling the reconstruction problem in two stages. In the first stage, the given non-uniform data are transformed into a pair of regular grid images, where one image represents the measured samples and the other represents the sample density map. In the second stage, the required image is computed as the minimizer of a cost that is completely expressed in terms of regular grid discrete operations: a sum of a weighted quadratic data-fitting term involving the transformed image pair and a discrete quadratic roughness functional. Computing the minimizer of this cost involves solving a well-conditioned sparse linear system of equations, where the system matrix is represented in terms of filtering and array multiplications without the need to store it explicitly. We demonstrate that the proposed method, named regular grid weighted smoothing (RGWS), has much lower computational complexity than the TPS and RBF methods, with only a little compromise in reconstruction quality. RGWS uses quadratic regularization, which is known to yield over-smoothed images in the presence of noise. We extend the RGWS method by incorporating non-quadratic regularization, constructed by applying a square root to the sum of squares of derivative values (known as ℓ1 regularization). We propose a reconstruction method using this ℓ1 regularization, which we name ℓ1-RGWS. We perform an extensive set of reconstruction experiments with various levels of under-sampling and noise and compare the performances of ℓ1-RGWS and the original RGWS, which we also call ℓ2-RGWS. When the sampling density becomes low, the performance of ℓ1-RGWS degrades abruptly and becomes worse than that of ℓ2-RGWS. This behavior is known as the phase transition in the literature. We analyze this from a probabilistic viewpoint and infer that the prior probability model corresponding to ℓ1 regularization is based on the assumption that the probability of a pixel location taking a certain derivative value is independent of the derivative values of its neighboring pixel locations, which is clearly not true.
We developed a probability model in which the error incurred by this independence assumption is compensated by means of a multi-resolution-based re-weighting scheme. In this scheme, the desired reconstruction is formulated as a series of coarse-to-fine multi-resolution reconstructions, and the re-weighting of the prior probability for each resolution level is derived from the reconstruction of the previous resolution level. We demonstrate that the new method, which we name multi-resolution-based scattered data approximation (MSDA), performs better than ℓ1-RGWS and ℓ2-RGWS under a wide range of sampling densities, with slightly increased computational complexity. We then developed an extended method where, instead of re-weighting the form of the prior probability model corresponding to ℓ1 regularization, the probability model itself is determined using the maximum entropy principle. Specifically, at each resolution level in the multi-resolution reconstruction, the required probability model is determined as the maximizer of entropy subject to the information extracted from the lower-resolution reconstruction as constraints. To further enhance the performance, we use directional second-derivative operators to define the probability model. Moreover, to control the variance of this probability model, we also propose a modified multi-resolution scheme in which the image sizes increase by a fractional factor instead of doubling. We demonstrate that the new method, which we call maximum entropy regularized reconstruction (MERR), outperforms both MSDA and ℓ1-RGWS for a wide range of sampling densities and noise levels.
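A minimal sketch of the two-stage RGWS idea with quadratic (ℓ2) regularization: stage one bins the scattered samples into a measurement image and a sampling-density image; stage two solves (W + λLᵀL)f = b by conjugate gradients, with LᵀL applied purely as filtering. Sample locations are assumed pre-rounded to integer pixel indices; the names and solver choice are illustrative, not the thesis code:

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.sparse.linalg import LinearOperator, cg

def rgws_l2(xs, ys, values, shape, lam=1.0):
    # Stage 1: regular-grid measurement image b and sampling-density image w.
    w = np.zeros(shape)
    b = np.zeros(shape)
    np.add.at(w, (ys, xs), 1.0)
    np.add.at(b, (ys, xs), values)

    lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])  # discrete Laplacian

    def apply_A(f_flat):
        # (W + lam * L^T L) f, realized with array products and filtering only.
        f = f_flat.reshape(shape)
        ltlf = convolve(convolve(f, lap, mode="nearest"), lap, mode="nearest")
        return (w * f + lam * ltlf).ravel()

    A = LinearOperator((w.size, w.size), matvec=apply_A)
    f, _ = cg(A, b.ravel(), maxiter=500)   # Stage 2: well-conditioned sparse solve
    return f.reshape(shape)
```

Note that the system matrix is never stored: the LinearOperator only applies it, which is exactly the filtering-plus-array-multiplication structure the abstract describes.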

Conference papers on the topic "Regular grid weighted smoothing"

1. Frobin, W., and E. Hierholzer. "Transformation Of Irregularly Sampled Surface Data Points Into A Regular Grid And Aspects Of Surface Interpolation, Smoothing And Accuracy." In 1985 International Technical Symposium/Europe, edited by A. M. Coblentz and Robin E. Herron. SPIE, 1986. http://dx.doi.org/10.1117/12.956298.

2. Bursal, Faruk H., and Benson H. Tongue. "Interpolated Mapping System Identification From Time Series Data." In ASME 1993 Design Technical Conferences. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/detc1993-0111.

Abstract:
Abstract In this paper, a system identification algorithm based on Interpolated Mapping (IM) that was introduced in a previous paper is generalized to the case of data stemming from arbitrary time series. The motivation for the new algorithm is the need to identify nonlinear dynamics in continuous time from discrete-time data. This approach has great generality and is applicable to problems arising in many areas of science and engineering. In the original formulation, a map defined on a regular grid in the state space of a dynamical system was assumed to be given. For the formulation to become practically viable, however, the requirement of initial conditions being taken from such a regular grid needs to be dropped. In particular, one would like to use time series data, where the time interval between samples is identified with the mapping time step T. This paper is concerned with the resulting complications. Various options for extending the formulation are examined, and a choice is made in favor of a pre-processing algorithm for estimating the FS map based on local fits to the data set. The suggested algorithm also has smoothing properties that are desirable from the standpoint of noise reduction.
3. Kumar, Ashok V., and Ravi K. Burla. "Finite Element Analysis Using Uniform B-Spline Basis." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49696.

Abstract:
Implicit boundary finite element method uses structured grids for analysis instead of a conforming finite element mesh. The geometry of the structure is represented independently using curve / surface equations. These equations are used to apply boundary conditions even though there may not be nodes available on the boundary. In this paper, this method is applied for analysis using uniform B-spline basis defined over structured grids. Solutions can be constructed that are C1 or C2 continuous throughout the analysis domain using B-spline basis functions. Therefore, the computed stress and strain are continuous in the analysis domain thus eliminating the need for smoothing stress/strain results. Compared to conforming mesh, it is easier to generate structured grids that overlap the geometry and the elements in the grid are regular shaped and undistorted. Numerical examples are presented to demonstrate the performance of these B-spline elements. The results are compared with analytical solutions as well as traditional finite element solutions. Convergence studies for several examples show that B-spline elements provide accurate solutions with fewer elements and nodes as compared to traditional finite element method (FEM).
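The smoothness claim is easy to inspect for the 1-D uniform cubic basis (tensor products give the 2-D/3-D elements). A small sketch using scipy's BSpline, with illustrative sizes:

```python
import numpy as np
from scipy.interpolate import BSpline

deg, n = 3, 8                                  # cubic basis, 8 basis functions
knots = np.arange(n + deg + 1, dtype=float)    # uniform knot vector 0..11
x = np.linspace(knots[deg], knots[n], 200)     # base interval [3, 8]
B = np.array([BSpline(knots, np.eye(n)[i], deg)(x) for i in range(n)])
print(np.allclose(B.sum(axis=0), 1.0))         # partition of unity -> True
```

Each basis function is C2-continuous across the uniform knots, which is why stress and strain computed from such a solution are continuous without post-hoc smoothing.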
4. Brusiani, Federico, Gian Marco Bianchi, Tommaso Lucchini, and Gianluca D’Errico. "Implementation of a Finite-Element Based Mesh Motion Technique in an Open Source CFD Code." In ASME 2009 Internal Combustion Engine Division Spring Technical Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/ices2009-76108.

Abstract:
Nowadays, Computational Fluid Dynamics (CFD) codes are widely used in different industrial fields. Despite hardware and numerical model improvements, mesh generation remains one of the key points for a successful CFD simulation. Mesh quality is influenced by the adopted mesh generator tool and, ultimately, by the designer’s experience, and it becomes very important when moving meshes are required. In fact, mesh skewness, aspect ratio, and non-orthogonality have to be controlled during the deforming process, since their wrong evolution could produce an unphysical behavior of the computed flow field. Mesh motion can be performed by different strategies: dynamic smoothing and dynamic re-meshing are today two of the most widely used approaches, and they can be combined to guarantee the correct reproduction of the motion profile and a good mesh quality level. In this context, the authors have implemented a moving mesh methodology in the open source CFD code OpenFOAM®. Multiple meshes are used to cover the whole simulation period, and the grid point motion is accommodated by an automatic mesh motion technique with polyhedral cell support. The Laplace equation is chosen to govern the mesh motion. This guarantees that an initially valid mesh remains valid for arbitrary boundary motion. Mesh-to-mesh interpolation is performed using a cell-based, distance-weighted interpolation technique. The proposed approach was tested on a real IC-engine geometry. In particular, the mesh quality evolution during motion, the numerical results and the computational costs were evaluated.
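On a structured grid, Laplace-governed mesh motion reduces to a familiar iteration: boundary nodes carry the prescribed displacement, and interior displacements satisfy a discrete Laplace equation. A Jacobi-style sketch (the paper's polyhedral finite-element setting is more general):

```python
import numpy as np

def laplace_motion(disp, n_iter=2000):
    """disp: (ny, nx, 2) displacements; boundary entries hold the prescribed motion."""
    d = disp.copy()
    for _ in range(n_iter):
        # Interior nodes relax toward the average of their four neighbors.
        d[1:-1, 1:-1] = 0.25 * (d[:-2, 1:-1] + d[2:, 1:-1] +
                                d[1:-1, :-2] + d[1:-1, 2:])
    return d   # add to the node coordinates to move the mesh
```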
5. Darabi, Hamed, Amir Kianinejad, and Amir Salehi. "Physics-Informed Spatio-Temporal Graph Neural Network for Waterflood Management." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211284-ms.

Abstract:
Conventional approaches such as operational spreadsheets and reservoir simulation are ineffective for actively managing waterflooding: they are either too simplistic or too challenging to (re)calibrate in a short timeframe for operational decision-making. This paper presents a framework that optimally blends physics-based and data-driven approaches for fast and reliable subsurface modeling. The formulation is based on a graph neural network (GNN), capable of capturing spatial and temporal patterns, and leverages critical physics to improve model accuracy and generalization. We represent the reservoir by a heterogeneous, dynamic, directed graph with injector and producer nodes, where directed edges connect nearby nodes. We selected a GNN for modeling the waterflood network since other deep-learning approaches (CNNs and RNNs) are generally suited to regular Euclidean data (2D grids and 1D sequences). We use message passing with attention to leverage the graph's topology and reduce the number of trainable parameters. Production in each producer is equal to the weighted summation of signals received by nearby injector/aquifer nodes, where the connection's strength (well allocation factor) and efficiency (oil-cut function) represent weights. Strength quantifies the hydraulic communication between the node pairs and is estimated by solving single-phase pressure and tracer equations on an adaptive 3D unstructured PEBI grid. Efficiency indicates the percentage of total strength that contributes to oil production and is characterized by a sigmoid-like function with trainable parameters estimated by minimizing a regression loss using gradient-descent-based optimization. Also, a Markov-Chain Monte-Carlo (MCMC)-based uncertainty quantification framework is developed to quantify the model parameters' uncertainty. The proposed methodology was successfully applied to many reservoirs across the world. This paper shows the results for a carbonate field with more than 150 wells, 60 years of history, and a 50% water cut. The asset team's objective was to increase oil production while maintaining water production. The GNN model was trained to a test-set accuracy of 90% (the last 12 months of production history were held out as a test set) and then used to optimize the waterflooding strategy for the next six months. After applying a non-linear constrained pattern search method, the optimized strategy resulted in a 26,100 STB/D increase in oil production without a drastic change in water production level. This outcome was achieved only by adjusting the injection rates and producers' operational conditions, without drilling or major workovers. The presented modeling approach has many benefits for actively managing waterflooding: a) more than 90% speed-up for model building and (re)training compared to conventional workflows, b) super-fast simulations with the GNN, c) improved model accuracy/generalization leveraging physics-informed machine learning, d) more robust decision making through uncertainty quantification, and e) significantly shorter decision cycles in waterflood operations for well control optimization to increase oil recovery and/or reduce water production.
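A toy version of the forward model described above: each producer's oil rate is a weighted sum over injector signals, weighted by connection strength (well allocation factor) times a sigmoid-like oil-cut efficiency. The decaying-sigmoid form and all shapes here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def producer_oil_rates(q_inj, cum_inj, waf, a, b):
    """q_inj, cum_inj: (n_inj,); waf, a, b: (n_inj, n_prod)."""
    eff = 1.0 / (1.0 + np.exp(a * (cum_inj[:, None] - b)))  # oil cut decays as
    return np.sum(waf * q_inj[:, None] * eff, axis=0)       # injection accumulates
```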