Journal articles on the topic 'Computational geochemistry'

To see the other types of publications on this topic, follow the link: Computational geochemistry.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Computational geochemistry.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Buccianti, Antonella. "Advanced computational geochemistry." Computers & Geosciences 37, no. 5 (May 2011): 645. http://dx.doi.org/10.1016/j.cageo.2011.03.012.

2

Gao, Wenlei, Gian Matharu, and Mauricio D. Sacchi. "Fast least-squares reverse time migration via a superposition of Kronecker products." GEOPHYSICS 85, no. 2 (March 1, 2020): S115–S134. http://dx.doi.org/10.1190/geo2019-0254.1.

Abstract:
Least-squares reverse time migration (LSRTM) has become increasingly popular for complex wavefield imaging due to its ability to equalize image amplitudes, attenuate migration artifacts, handle incomplete and noisy data, and improve spatial resolution. The major drawback of LSRTM is the considerable computational cost incurred by performing migration/demigration at each iteration of the optimization. To ameliorate the computational cost, we introduced a fast method to solve the LSRTM problem in the image domain. Our method is based on a new factorization that approximates the Hessian using a superposition of Kronecker products. The Kronecker factors are small matrices relative to the size of the Hessian. Crucially, the factorization is able to honor the characteristic block-band structure of the Hessian. We have developed a computationally efficient algorithm to estimate the Kronecker factors via low-rank matrix completion. The completion algorithm uses only a small percentage of preferentially sampled elements of the Hessian matrix. Element sampling requires computation of the source and receiver Green’s functions but avoids explicitly constructing the entire Hessian. Our Kronecker-based factorization leads to an imaging technique that we name Kronecker-LSRTM (KLSRTM). The iterative solution of the image-domain KLSRTM is fast because we replace computationally expensive migration/demigration operations with fast matrix multiplications involving small matrices. We first validate the efficacy of our method by explicitly computing the Hessian for a small problem. Subsequent 2D numerical tests compare LSRTM with KLSRTM for several benchmark models. We observe that KLSRTM achieves near-identical images to LSRTM at a significantly reduced computational cost (approximately 5–15× faster); however, KLSRTM has an increased, yet manageable, memory cost.
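The computational saving behind the abstract's factorization can be illustrated with a small sketch (an illustrative toy in plain Python, not the authors' code; the matrices and sizes are invented): for a Hessian approximated by a Kronecker product, H ≈ B ⊗ C, the identity (B ⊗ C) vec(X) = vec(C X B^T) replaces a product with the large explicit operator by two multiplications with the small factors.

```python
# Column-major vectorization and the identity (B kron C) vec(X) = vec(C X B^T).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kron(A, B):
    """Explicit Kronecker product (only feasible for tiny matrices)."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def vec(X):
    """Column-major vectorization of a matrix."""
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

B = [[2.0, 1.0], [0.0, 3.0]]      # invented small Kronecker factors
C = [[1.0, 4.0], [5.0, 1.0]]
X = [[1.0, 2.0], [3.0, 4.0]]      # invented "image"

slow = matmul(kron(B, C), [[x] for x in vec(X)])   # apply the full operator
fast = vec(matmul(matmul(C, X), transpose(B)))     # apply only the small factors
assert all(abs(slow[i][0] - fast[i]) < 1e-12 for i in range(4))
```

For an n-by-n image the explicit Hessian would be n^2-by-n^2, while the factored form only ever touches n-by-n matrices, which is the kind of fast small-matrix multiplication the abstract describes.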
3

Liu, Faqi, Guanquan Zhang, Scott A. Morton, and Jacques P. Leveille. "An optimized wave equation for seismic modeling and reverse time migration." GEOPHYSICS 74, no. 6 (November 2009): WCA153–WCA158. http://dx.doi.org/10.1190/1.3223678.

Abstract:
The acoustic wave equation has been widely used for the modeling and reverse time migration of seismic data. Numerical implementation of this equation via finite-difference techniques has established itself as a valuable approach and has long been a favored choice in the industry. To ensure quality results, accurate approximations are required for spatial and time derivatives. Traditionally, they are achieved numerically by using either very fine computational grids or very long finite-difference operators; otherwise, numerical error, known as numerical dispersion, is present in the data and contaminates the signals. However, either approach results in a considerable increase in computational cost. A simple and computationally low-cost modification to the standard acoustic wave equation is presented to suppress numerical dispersion. This dispersion attenuator is analogous to the antialiasing operator widely applied in Kirchhoff migration. When the new wave equation is solved numerically using finite-difference schemes, numerical dispersion in the original wave equation is attenuated significantly, leading to a much more accurate finite-difference scheme with little additional computational cost. Numerical tests on both synthetic and field data sets in two and three dimensions demonstrate that the optimized wave equation dramatically improves image quality by successfully attenuating dispersive noise. The adaptive application of this new wave equation increases the computational cost only slightly.
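The dispersion the abstract targets can be quantified with a standard textbook relation (a sketch, not the paper's modified equation): for the second-order central difference in space, the semi-discrete numerical phase velocity satisfies v_num/v = sin(k*dx/2)/(k*dx/2), which falls below 1 as the number of grid points per wavelength shrinks.

```python
# Sketch: numerical dispersion of the 2nd-order central difference for u_xx.
# The FD operator has symbol -(2/dx)^2 sin(k dx / 2)^2 instead of -k^2, so the
# numerical phase velocity lags the true one for poorly sampled wavelengths.
import math

def phase_velocity_ratio(k_dx):
    """Semi-discrete phase-velocity ratio v_num/v for wavenumber-grid product k*dx."""
    half = 0.5 * k_dx
    return math.sin(half) / half

# Fewer grid points per wavelength -> stronger lag of the numerical phase velocity.
for ppw in (20, 10, 4):
    k_dx = 2.0 * math.pi / ppw
    print(f"{ppw:2d} points/wavelength: v_num/v = {phase_velocity_ratio(k_dx):.4f}")
```

This lag is exactly what forces either finer grids or longer operators in the standard scheme, and what the paper's modified equation is designed to attenuate cheaply.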
4

Rustad, J. R., W. Dzwinel, and D. A. Yuen. "Computational Approaches to Nanomineralogy." Reviews in Mineralogy and Geochemistry 44, no. 1 (January 1, 2001): 191–216. http://dx.doi.org/10.2138/rmg.2001.44.06.

5

Yong, Peng, Romain Brossier, and Ludovic Métivier. "Parsimonious truncated Newton method for time-domain full-waveform inversion based on the Fourier-domain full-scattered-field approximation." GEOPHYSICS 87, no. 1 (December 22, 2021): R123–R146. http://dx.doi.org/10.1190/geo2021-0164.1.

Abstract:
To exploit Hessian information in full-waveform inversion (FWI), the matrix-free truncated Newton method can be used. In such a method, Hessian-vector product computation is one of the major concerns due to the huge memory requirements and demanding computational cost. Using the adjoint-state method, the Hessian-vector product can be estimated by zero-lag crosscorrelation of the first-/second-order incident wavefields and the second-/first-order adjoint wavefields. Different from the implementation in frequency-domain FWI, Hessian-vector product construction in the time domain becomes much more challenging because it is not affordable to store all of the time-dependent wavefields. The widely used wavefield recomputation strategy leads to computationally intensive tasks. We have developed an efficient alternative approach to computing the Hessian-vector product for time-domain FWI. In our method, discrete Fourier transform is applied to extract frequency-domain components of involved wavefields, which are used to compute wavefield crosscorrelation in the frequency domain. This makes it possible to avoid reconstructing the first- and second-order incident wavefields. In addition, a full-scattered-field approximation is proposed to efficiently simplify the second-order incident and adjoint wavefield computation, which enables us to refrain from repeatedly solving the first-order incident and adjoint equations for the second-order incident and adjoint wavefields (re)computation. With our method, the computational time can be reduced by 70% and 80% in viscous media for Gauss-Newton and full-Newton Hessian-vector product construction, respectively. The effectiveness of our method is also verified in the frame of a 2D multiparameter inversion, in which our method almost reaches the same iterative convergence of the conventional time-domain implementation.
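The memory-saving ingredient, extracting frequency-domain components of a time-stepped wavefield on the fly, can be sketched as follows (an illustrative toy with an invented scalar "wavefield" at one grid point; not the authors' code). Each time step contributes u(t_n) exp(-i omega t_n) dt to a running sum, so the full time history never has to be stored.

```python
# Sketch: on-the-fly discrete Fourier transform of a time-stepped field.
import cmath
import math

def accumulate_dft(field_at, nt, dt, omega):
    """Running frequency-domain sum; no time history is ever stored."""
    U = 0.0 + 0.0j
    for n in range(nt):
        u_n = field_at(n * dt)                 # stand-in for one FD time step
        U += u_n * cmath.exp(-1j * omega * n * dt) * dt
    return U

# Toy "wavefield" at a single grid point: a 5 Hz cosine.
f0, dt, nt = 5.0, 0.001, 1000
field = lambda t: math.cos(2.0 * math.pi * f0 * t)

U = accumulate_dft(field, nt, dt, 2.0 * math.pi * f0)
# Same value as the DFT computed from the fully stored history:
stored = [field(n * dt) for n in range(nt)]
U_ref = sum(s * cmath.exp(-2j * math.pi * f0 * n * dt) * dt
            for n, s in enumerate(stored))
assert abs(U - U_ref) < 1e-12
```

Only the accumulated complex values for a handful of frequencies are kept per grid point, which is why the crosscorrelations for the Hessian-vector product can be formed without reconstructing or storing the time-domain wavefields.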
6

Ignetik, Rainer. "Response by R. Ignetik to the author's reply." GEOPHYSICS 54, no. 11 (November 1989): 1502. http://dx.doi.org/10.1190/1.1486970.

Abstract:
It would be interesting to compare the two computational algorithms proposed by Raiche with Boerner and West's approach, to see which of the three is computationally fastest under similar conditions. I also note that although the semi-analytic algorithm proposed by Raiche is more elegant and may require less storage, the paper does not make it entirely clear whether we get a real reduction in computing time; reduced computational time was the motivation for the procedure in the first place.
7

Bensdorp, Silvian, Steen A. Petersen, Peter M. van den Berg, and Jacob T. Fokkema. "An approximate 3D computational method for real-time computation of induction logging responses." GEOPHYSICS 79, no. 3 (May 1, 2014): E137–E148. http://dx.doi.org/10.1190/geo2013-0233.1.

Abstract:
Over many years, induction logging systems have been used to create well formation logs. The major drawback of these tools is the long simulation time for a single forward computation. We proposed an efficient computational method based on a contrast-type integral-equation formulation, in which we applied an approximation for the 3D electromagnetic field. We assumed that the dominant contribution in the integral equation comes from the region around the singularity of the Green’s kernel. The approximation is expected to yield reliable results when the (homogeneous) background conductivity around the logging tool is close to the actual conductivity at the location of the tool. We developed a data-driven method to determine this background conductivity from the dominant part of the measured coaxial magnetic fields, which are mainly influenced by the conductivity at the tool sensors. For a synthetic model, the results were compared with those of a rigorous solution of the integral equation and showed a good response to small-scale variations in the medium. Further, the method was used to simulate the response of a realistic reservoir model created by a geological modeling program. We concluded that our approximate method improves on the Born approximation in highly heterogeneous structures and provides an effective medium gradient around the tool. Our method, based on the wavefield approximation, also estimates its own error, and hence issues a warning when it becomes unreliable.
8

Carcione, José M., Christina Morency, and Juan E. Santos. "Computational poroelasticity — A review." GEOPHYSICS 75, no. 5 (September 2010): 75A229–75A243. http://dx.doi.org/10.1190/1.3474602.

Abstract:
Computational physics has become an essential research and interpretation tool in many fields. Particularly in reservoir geophysics, ultrasonic and seismic modeling in porous media is used to study the properties of rocks and to characterize the seismic response of geologic formations. We provide a review of the most common numerical methods used to solve the partial differential equations describing wave propagation in fluid-saturated rocks, i.e., finite-difference, pseudospectral, and finite-element methods, including the spectral-element technique. The modeling is based on Biot-type theories of dynamic poroelasticity, which constitute a general framework to describe the physics of wave propagation. We explain the various techniques and discuss numerical implementation aspects for application to seismic modeling and rock physics, as, for instance, the role of the Biot diffusion wave as a loss mechanism and interface waves in porous media.
9

Goes, Saskia. "Computational methods for geodynamics." Geophysical Journal International 184, no. 2 (December 22, 2010): 974. http://dx.doi.org/10.1111/j.1365-246x.2010.04898.x.

10

Zhang, Yang, and M. Nafi Toksöz. "Impact of the cracks lost in the imaging process on computing linear elastic properties from 3D microtomographic images of Berea sandstone." GEOPHYSICS 77, no. 2 (March 2012): R95–R104. http://dx.doi.org/10.1190/geo2011-0126.1.

Abstract:
With the current developments in imaging/computational techniques and resources, computational rock physics has been emerging as a new field of study. Properties of rocks are examined by carrying out extensive numerical simulations on rocks that have been digitized using high-resolution X-ray CT scans. The ultimate goal of computational rock physics is to supplement the traditional laboratory measurements, which are time consuming, with faster numerical simulations that allow the parameter space to be explored more thoroughly. We applied the finite-element method to compute the static effective elastic properties from 3D microtomographic images of Berea sandstone saturated with different fluids. From the computations, we found discrepancies between the numerical results and the laboratory measurements. The reason for this discrepancy is the loss of small features, such as fine cracks and micropores, in the digitized matrix during the imaging and aggregation process. We used a hybrid approach, combining the numerical computation and the effective media theories — the differential effective medium model and the Kuster-Toksöz model — to deduce the lost cracks by a very fast simulated annealing method. We analyzed the sensitivity of the inverted results — the distributions of crack aspect ratios and concentrations — to the clay content. We found that the inverted crack distribution is not very sensitive to clay content. Compared with the effect of cracks on the computed effective elastic properties, clay has only a secondary effect. Our approach can recover the lost cracks and is capable of predicting the effective elastic properties of the rocks from the microtomographic images for different fluid saturations. Compared with the traditional inversion schemes, based only on the effective media theories, this hybrid scheme has the advantage of utilizing the complex microstructures that are resolved in the imaging process, and it helps define the inversion space for crack distribution.
11

Momma, Koichi, Toshiro Nagase, Yasuhiro Kudoh, and Takahiro Kuribayashi. "Computational simulations of the structure of Japan twin boundaries in quartz." European Journal of Mineralogy 21, no. 2 (April 22, 2009): 373–83. http://dx.doi.org/10.1127/0935-1221/2009/0021-1893.

12

Löer, Katrin, Andrew Curtis, and Giovanni Angelo Meles. "Relating source-receiver interferometry to an inverse-scattering series to derive a new method to estimate internal multiples." GEOPHYSICS 81, no. 3 (May 2016): Q27–Q40. http://dx.doi.org/10.1190/geo2015-0330.1.

Abstract:
We have evaluated an explicit relationship between the representations of internal multiples by source-receiver interferometry and an inverse-scattering series. This provides a new insight into the interaction of different terms in each of these internal multiple prediction equations and explains why amplitudes of estimated multiples are typically incorrect. A downside of the existing representations is that their computational cost is extremely high, which can be a precluding factor especially in 3D applications. Using our insight from source-receiver interferometry, we have developed an alternative, computationally more efficient way to predict internal multiples. The new formula is based on crosscorrelation and convolution: two operations that are computationally cheap and routinely used in interferometric methods. We have compared the results of the standard and the alternative formulas qualitatively in terms of the constructed wavefields and quantitatively in terms of the computational cost using examples from a synthetic data set.
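The kinematics behind building a predictor from crosscorrelation and convolution can be sketched with spike traces (a toy with invented event times, not the authors' formula): convolution adds delays while crosscorrelation subtracts them, so a first-order internal multiple is predicted at t1 + t2 - t3.

```python
# Sketch: delays add under convolution and subtract under crosscorrelation.
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def correlate(a, b):
    # Crosscorrelation via convolution with the time-reversed trace;
    # output index len(b) - 1 corresponds to zero lag.
    return convolve(a, list(reversed(b)))

def spike(n, t):
    s = [0.0] * n
    s[t] = 1.0
    return s

n = 64
t1, t2, t3 = 30, 26, 18                        # invented event times (samples)
conv = convolve(spike(n, t1), spike(n, t2))    # delays add: peak at t1 + t2
ref = spike(len(conv), t3)
pred = correlate(conv, ref)                    # delays subtract: peak at t1 + t2 - t3
peak = max(range(len(pred)), key=pred.__getitem__)
assert peak - (len(ref) - 1) == t1 + t2 - t3
```

Both operations are O(n log n) with FFTs in practice, which is the source of the computational saving the abstract reports over the standard prediction formulas.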
13

Boonyasiriwat, Chaiwoot, Paul Valasek, Partha Routh, Weiping Cao, Gerard T. Schuster, and Brian Macy. "An efficient multiscale method for time-domain waveform tomography." GEOPHYSICS 74, no. 6 (November 2009): WCC59–WCC68. http://dx.doi.org/10.1190/1.3151869.

Abstract:
This efficient multiscale method for time-domain waveform tomography incorporates filters that are more efficient than Hamming-window filters. A strategy for choosing optimal frequency bands is proposed to achieve computational efficiency in the time domain. A staggered-grid, explicit finite-difference method with fourth-order accuracy in space and second-order accuracy in time is used for forward modeling and the adjoint calculation. The adjoint method is used in the inversion for efficient computation of the gradient directions. In the multiscale approach, multifrequency data and multiple grid sizes are used to partly overcome the severe local-minima problem of waveform tomography. The method is applied successfully to 1D and 2D heterogeneous models; it can accurately recover low- and high-wavenumber components of the velocity models. The inversion result for the 2D model demonstrates that the multiscale method is computationally efficient and converges faster than a conventional, single-scale method.
14

Chen, Jing-Bo. "High-order time discretizations in seismic modeling." GEOPHYSICS 72, no. 5 (September 2007): SM115–SM122. http://dx.doi.org/10.1190/1.2750424.

Abstract:
Seismic modeling plays an important role in exploration geophysics. High-order modeling schemes are in demand for practical reasons. In this context, I present three kinds of high-order time discretizations: Lax-Wendroff methods, Nyström methods, and splitting methods. Lax-Wendroff methods are based on the Taylor expansion and the replacement of high-order temporal derivatives by spatial derivatives, Nyström methods are simplified Runge-Kutta algorithms, and splitting methods comprise substeps for one-step computation. Based on these methods, three schemes with third-order and fourth-order accuracy in time and pseudospectral discretizations in space are presented. I also compare their accuracy, stability, and computational complexity, and discuss advantages and shortcomings of these algorithms. Numerical experiments show that the fourth-order Lax-Wendroff scheme is more efficient for short-time simulations while the fourth-order Nyström scheme and the third-order splitting scheme are more efficient for long-term computations.
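The Lax-Wendroff construction can be checked on a single Fourier mode (a sketch under stated assumptions, not code from the paper): replacing the fourth temporal derivative of u_tt = L u by L^2 gives u^{n+1} = 2u^n - u^{n-1} + dt^2 L u^n + (dt^4/12) L^2 u^n, and for one mode L acts as multiplication by -omega^2, so the scheme reduces to a scalar recursion that is easy to compare against the exact solution cos(omega t).

```python
# Sketch: 2nd-order vs. 4th-order (Lax-Wendroff) time stepping for one mode.
import math

def march(omega, dt, nsteps, fourth_order):
    """March w'' = -omega^2 w; return max error against cos(omega t)."""
    w_prev, w = 1.0, math.cos(omega * dt)      # exact values at t=0 and t=dt
    max_err = 0.0
    for n in range(2, nsteps + 1):
        w_next = 2.0 * w - w_prev - dt**2 * omega**2 * w
        if fourth_order:                       # Lax-Wendroff dt^4 correction
            w_next += dt**4 / 12.0 * omega**4 * w
        w_prev, w = w, w_next
        max_err = max(max_err, abs(w - math.cos(omega * n * dt)))
    return max_err

omega, dt, nsteps = 2.0 * math.pi, 0.01, 100
err2 = march(omega, dt, nsteps, fourth_order=False)
err4 = march(omega, dt, nsteps, fourth_order=True)
assert err4 < err2 / 100.0                     # higher time order, far smaller error
```

The dt^4 term costs one extra application of the spatial operator per step but removes the leading temporal truncation error, which is the trade-off the paper's efficiency comparison examines.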
15

Wei, Wei, Li-yun Fu, and Gerrit Blacquière. "Fast multifrequency focal beam analysis for 3D seismic acquisition geometry." GEOPHYSICS 77, no. 2 (March 2012): P11–P21. http://dx.doi.org/10.1190/geo2010-0327.1.

Abstract:
A method for the efficient computation of multifrequency focal beams for 3D seismic acquisition geometry analysis has been developed. Single-frequency focal beams can be extended to multifrequency focal beams by computing them for all frequency components of the seismic data. However, this straightforward approach involves considerable computer time and memory, especially in complex media. Therefore, we propose a rapid 3D multifrequency focal beam method in which only a few single-frequency focal beam computations are followed by a number of smart interpolations. The 3D wavefield extrapolation in the focal beam analysis is conducted by the combined application of a 3D degenerate Fourier migrator and a 3D Born-Kirchhoff interpolation operator, a process that reduces the computational cost for complex media. The multifrequency focal beam analysis is applied to a 3D model from an oil field in China, demonstrating how spatial sampling differences affect seismic imaging.
16

van Duin, Adri C. T., and Jaap S. Sinninghe Damsté. "Computational chemical investigation into isorenieratene cyclisation." Organic Geochemistry 34, no. 4 (April 2003): 515–26. http://dx.doi.org/10.1016/s0146-6380(02)00247-4.

17

Dvorkin, Jack, Naum Derzhi, Elizabeth Diaz, and Qian Fang. "Relevance of computational rock physics." GEOPHYSICS 76, no. 5 (September 2011): E141–E153. http://dx.doi.org/10.1190/geo2010-0352.1.

Abstract:
To validate the transport (fluid and electrical) and elastic properties computed on CT-scan pore-scale volumes of natural rock, we first compare these values with physical laboratory measurements. We find that computational and physical data obtained from the same rock material often differ from each other. This mismatch, however, does not preclude the validity of either data type; it only implies that expecting a direct match between the effective properties of two volumes of very different sizes taken from the same heterogeneous material is generally incorrect. To address this situation, instead of directly comparing data points generated by different methods of measurement, we compare trends formed by such data points. These trends include permeability versus porosity; electrical formation factor versus porosity; and elastic moduli (elastic-wave velocity) versus porosity. In the physical laboratory, these trends are generated by measuring a significant number of samples. In the computational laboratory, by contrast, these trends are often hidden inside a very small digital sample and can be derived by subsampling it. Hence, we base our validation paradigm on the assumption that if these computational trends match relevant physical trends and/or theoretical rock physics transforms, the computational results are correct. We present examples of such validation for clastic and carbonate samples, including drill cuttings.
18

Donnellan, Andrea, Peter Mora, Mitsuhiro Matsu’ura, and Xiang-Chu Yin. "Computational Earthquake Science Part I." Pure and Applied Geophysics 161, no. 9-10 (August 2004): 1823–25. http://dx.doi.org/10.1007/s00024-004-2533-2.

19

Donnellan, Andrea, Peter Mora, Mitsuhiro Matsu’ura, and Xiang-chu Yin. "Computational Earthquake Science Part II." Pure and Applied Geophysics 161, no. 11-12 (December 2004): 2119–22. http://dx.doi.org/10.1007/s00024-004-2552-z.

20

Hu, Jiangtao, Junxing Cao, Huazhong Wang, Shaoyong Liu, and Xingjian Wang. "3D traveltime computation for quasi-P-wave in orthorhombic media using dynamic programming." GEOPHYSICS 83, no. 1 (January 1, 2018): C27–C35. http://dx.doi.org/10.1190/geo2016-0558.1.

Abstract:
A fractured area, such as a fault area, usually induces orthorhombic anisotropy. Ignoring orthorhombic anisotropy may degrade the subsurface image by creating a well mistie and blurring the image. Traveltime computation is essential for many processing techniques, such as depth imaging and tomography. Solving the ray-tracing system and the eikonal equation are two popular methods for traveltime computation in isotropic media. However, because the ray-tracing system becomes complex and the eikonal equation becomes highly nonlinear, their application in orthorhombic media is complicated. We have developed an alternative 3D traveltime computation method in orthorhombic media based on dynamic programming. To avoid solving the complex ray-tracing system and the nonlinear eikonal equation, it adopts an explicitly expressed group velocity from the moveout approximation to describe the propagation along the wavepath and computes the traveltime by Fermat’s principle. Similar to depth extrapolation, it computes the traveltime from one depth to the next and does not suffer from a shadow zone. In addition, three strategies of traveltime computation are proposed to deal with different geologic scenarios. Because classic dynamic programming (i.e., the first strategy) computes all possible wavepaths (i.e., 24 wavepaths) across one spatial location, it may be computationally intensive. Based on the idea of wavefield decomposition (e.g., upgoing and downgoing), the second and third strategies simplify the traveltime computation and reduce the computational cost. Numerical examples on vertical and tilted orthorhombic models indicate that the traveltime contours obtained by our method match well with the wavefront extrapolated from the wave equation. Our method can be applied in depth imaging and tomography.
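The depth-by-depth recursion at the core of a dynamic-programming traveltime solver can be sketched for the much simpler isotropic case (an illustrative toy, not the orthorhombic method; velocities and grid sizes are invented): the traveltime at each node of level z+1 is the minimum, over nodes of level z, of the accumulated time plus the straight-segment time across the layer, which is Fermat's principle applied layer by layer.

```python
# Sketch: layer-by-layer dynamic-programming first-arrival traveltimes.
import math

def dp_traveltime(vel, dx, dz, src_ix):
    """First-arrival traveltimes on a grid, marching one depth level at a time."""
    nz, nx = len(vel), len(vel[0])
    t = [[math.inf] * nx for _ in range(nz)]
    t[0][src_ix] = 0.0
    for z in range(nz - 1):
        for x in range(nx):
            for xp in range(nx):                     # all candidates on level z
                d = math.hypot((x - xp) * dx, dz)    # straight segment length
                slowness = 2.0 / (vel[z][xp] + vel[z + 1][x])
                cand = t[z][xp] + d * slowness
                if cand < t[z + 1][x]:
                    t[z + 1][x] = cand
    return t

# Homogeneous-medium check: the DP minimum reproduces the straight ray.
v, dx, dz = 2000.0, 10.0, 10.0
vel = [[v] * 21 for _ in range(11)]
t = dp_traveltime(vel, dx, dz, src_ix=10)
assert abs(t[10][10] - 10 * dz / v) < 1e-9                     # vertical ray
assert abs(t[10][20] - math.hypot(100.0, 100.0) / v) < 1e-9    # 45-degree ray
```

The exhaustive inner loop illustrates why classic dynamic programming can be computationally intensive, which is the issue the paper's simplified second and third strategies address; the anisotropic case replaces the scalar velocity with a direction-dependent group velocity.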
21

Gao, Kai, and Lianjie Huang. "An efficient vector elastic reverse time migration method in the hybrid time and frequency domain for anisotropic media." GEOPHYSICS 84, no. 6 (November 1, 2019): S511–S522. http://dx.doi.org/10.1190/geo2018-0644.1.

Abstract:
Vector elastic reverse time migration (ERTM) produces subsurface elastic images with correct polarities using multicomponent seismic data. However, the decomposition of elastic wavefields into vector P- and S-wavefields is computationally expensive, particularly in heterogeneous and complex anisotropic media. We have developed a computationally efficient vector ERTM method in the hybrid time and frequency domain by combining three existing techniques. Rather than decomposing elastic wavefields into vector qP- and qS-wavefields during time-domain wavefield propagation, we conduct the wavefield decomposition in the frequency domain for several selected frequencies. In general, the number of selected frequencies needed for migration imaging is much smaller than the number of time steps during forward and backward wavefield propagation, leading to greatly reduced computational costs associated with the qP-/qS-wavefield vector separation in complex heterogeneous anisotropic media. We further combine an implicit directional wavefield separation into the vector ERTM to enhance the image quality. The numerical results demonstrate that our method produces high-quality elastic-wave migration images with notably reduced computational costs compared to the conventional vector ERTM method.
22

Zhang, Wensheng, and Hui Zheng. "A multiscale method for wave propagation in 3D heterogeneous poroelastic media." GEOPHYSICS 84, no. 4 (July 1, 2019): T237–T257. http://dx.doi.org/10.1190/geo2018-0482.1.

Abstract:
A new multiscale method for wave simulation in 3D heterogeneous poroelastic media is developed. Wave propagation in inhomogeneous media involves many different scales of media. The physical parameters in real media usually vary greatly within a very small scale. For direct numerical methods of wave simulation, a refined grid is required in mesh generation to maintain the match between the mesh size and the material variations in the spatial scale. This greatly increases the computational cost and computer memory requirements. The multiscale method can overcome this difficulty by exploiting the separation of scales. The basic idea of our multiscale method is to construct computational schemes on two sets of meshes, i.e., coarse grids and fine grids. The finite-volume method is applied on the coarse grids, whereas the multiscale basis functions are computed with the finite-element method by solving a local problem on the fine grids. Moreover, the local problem only needs to be solved once before time stepping. This allows us to use a coarse grid while still capturing the information of the physical property variations at the small scale. Therefore, it has better accuracy than the single-scale method on the same coarse grids. The theory of the method and its dispersion properties are investigated. Numerical computations with perfectly matched layer boundary conditions are completed for 3D inhomogeneous poroelastic models with randomly distributed small scatterers. The results indicate that our multiscale method can effectively simulate wave propagation in 3D heterogeneous poroelastic media with a significant reduction in computational cost.
23

Song, Guojie, Xinmin Zhang, Zhiliang Wang, Yali Chen, and Puchun Chen. "The asymptotic local finite-difference method of the fractional wave equation and its viscous seismic wavefield simulation." GEOPHYSICS 85, no. 3 (April 16, 2020): T179–T189. http://dx.doi.org/10.1190/geo2019-0066.1.

Abstract:
Viscous seismic wave propagation simulation using the fractional-order equation has attracted much recent attention. However, conventional finite-difference (FD) methods for the fractional partial differential equation adopt a global difference operator to approximate the fractional derivatives, which reduces the computational efficiency dramatically. To improve the efficiency of the FD method, we have developed a reasonable truncated stencil pattern by strict mathematical derivation and adopted an asymptotic local FD (ALFD) method. Theoretical analysis and numerical results indicate that the ALFD method is accurate and efficient. In fact, our numerical results illustrate that the numerical solution obtained by the ALFD method has a maximum relative error not exceeding 0.014% compared to the reference solution (computed on a finely meshed computational domain). The computation speed of the ALFD method is also significantly faster than that of the original FD method: the computational times of the three ALFD methods satisfying different preset accuracies are only approximately 2.71%, 1.26%, and 0.78% of that of the original fractional wave equation FD method. The ALFD method provides a useful tool for viscoelastic seismic wavefield propagation simulation.
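Why truncating the global fractional-difference operator to a local stencil is reasonable can be illustrated with the standard Grünwald-Letnikov weights (a generic sketch, not the ALFD construction itself): the weights g_0 = 1, g_k = g_{k-1} (1 - (alpha + 1)/k) decay like k^(-alpha-1), so distant samples contribute little to the operator.

```python
# Sketch: decay of Grunwald-Letnikov weights motivates a truncated stencil.
def gl_weights(alpha, n):
    """Grunwald-Letnikov finite-difference weights for a fractional derivative."""
    g = [1.0]
    for k in range(1, n):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

g = gl_weights(0.5, 200)
# The weights decay monotonically in magnitude ...
assert all(abs(g[k + 1]) < abs(g[k]) for k in range(199))
# ... so the far tail that a truncated (local) stencil drops is small.
head = sum(abs(w) for w in g[:50])
tail = sum(abs(w) for w in g[50:])
assert tail < 0.1 * head
```

The paper's contribution is choosing the truncation rigorously so that a preset accuracy is met; this sketch only shows the decay that makes any such truncation viable.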
24

Xiong, Zonghou. "Symmetry properties of the scattering matrix in 3-D electromagnetic modeling using the integral equation method." GEOPHYSICS 57, no. 9 (September 1992): 1199–202. http://dx.doi.org/10.1190/1.1443334.

Abstract:
Modeling large three-dimensional (3-D) earth conductivity structures continues to pose challenges. Although the theories of electromagnetic modeling are well understood, the basic computational problems are practical: computer-storage requirements grow quadratically, and computation time grows cubically, with the number of cells required to discretize the body being modeled.
25

Redfern, Simon A. T. "Advances in computer modelling of mineral properties." Mineralogical Magazine 59, no. 397 (December 1995): 585–87. http://dx.doi.org/10.1180/minmag.1995.059.397.01.

Abstract:
The following six papers were presented at a meeting, held in September 1994, which reviewed some of the recent advances in the application of computational methods to mineralogy. Talks covered the developing and challenging field of ab initio quantum mechanical computations as well as new applications and insights afforded by the use and refinement of the more established methods of empirical simulation and modelling. The former attempt to solve Schrödinger's equation for the material in question, and in doing so determine the energy surface and electronic structure. The latter use parameterized interatomic potentials to describe the energy of interaction between pairs or groups of atoms, the parameters typically derived either by fitting to the results of quantum mechanical calculations of small clusters, or empirically determined from fitting to the known physical properties of crystals (e.g. to the elastic constants, structural parameters or dielectric constants). A certain rivalry and antipathy occasionally surfaces between computational theorists from either camp.
26

Ferriss, Elizabeth D. A., Eric J. Essene, and Udo Becker. "Computational study of the effect of pressure on the Ti-in-zircon geothermometer." European Journal of Mineralogy 20, no. 5 (November 5, 2008): 745–55. http://dx.doi.org/10.1127/0935-1221/2008/0020-1860.

27

Boehm, Christian, Mauricio Hanzich, Josep de la Puente, and Andreas Fichtner. "Wavefield compression for adjoint methods in full-waveform inversion." GEOPHYSICS 81, no. 6 (November 2016): R385—R397. http://dx.doi.org/10.1190/geo2015-0653.1.

Full text
Abstract:
Adjoint methods are a key ingredient of gradient-based full-waveform inversion schemes. While being conceptually elegant, they face the challenge of massive memory requirements caused by the opposite time directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously for the computation of the sensitivity kernel. To overcome this bottleneck, we have developed lossy compression techniques that significantly reduce the memory requirements with only a small computational overhead. Our approach is tailored to adjoint methods and uses the fact that the computation of a sufficiently accurate sensitivity kernel does not require the fully resolved forward wavefield. The collection of methods comprises reinterpolation with a coarse temporal grid as well as adaptively chosen polynomial degree and floating-point precision to represent spatial snapshots of the forward wavefield on hierarchical grids. Furthermore, the first arrivals of adjoint waves are used to identify “shadow zones” that do not contribute to the sensitivity kernel. Numerical experiments show the high potential of this approach achieving an effective compression factor of three orders of magnitude with only a minor reduction in the rate of convergence. Moreover, it is computationally cheap and straightforward to integrate in finite-element wave propagation codes with possible extensions to finite-difference methods.
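The compression strategy summarized above can be sketched in miniature: store forward-wavefield snapshots on a coarser temporal grid and in reduced floating-point precision, then reinterpolate them when the sensitivity kernel is formed. The signal, subsampling factor, and precisions below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx = 2000, 200
t = np.linspace(0.0, 1.0, nt)
# A smooth band-limited stand-in for a forward wavefield: one sinusoid in
# time, random amplitude at each of nx spatial points.
wavefield = np.sin(2 * np.pi * 10 * t)[:, None] * rng.standard_normal(nx)[None, :]

subsample = 8                                        # coarse temporal grid
stored = wavefield[::subsample].astype(np.float16)   # lossy snapshot storage
coarse_t = t[::subsample]

# Reinterpolate each spatial point back onto the fine time axis.
recovered = np.empty_like(wavefield)
for ix in range(nx):
    recovered[:, ix] = np.interp(t, coarse_t, stored[:, ix].astype(np.float64))

compression = wavefield.nbytes / stored.nbytes   # 8x temporal * 4x precision
rel_error = np.linalg.norm(recovered - wavefield) / np.linalg.norm(wavefield)
```

The point of the trade-off is visible even in this toy: a 32× reduction in stored bytes costs only a percent-level error in the reconstructed field, which is the regime in which kernel accuracy is barely affected.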
APA, Harvard, Vancouver, ISO, and other styles
28

Ehlers, T. A. "Computational Tools for Low-Temperature Thermochronometer Interpretation." Reviews in Mineralogy and Geochemistry 58, no. 1 (January 1, 2005): 589–622. http://dx.doi.org/10.2138/rmg.2005.58.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Michelena, R. J., and J. M. Harris. "Tomographic traveltime inversion using natural pixels." GEOPHYSICS 56, no. 5 (May 1991): 635–44. http://dx.doi.org/10.1190/1.1443080.

Full text
Abstract:
Traditionally in the problem of tomographic traveltime inversion, the model is divided into a number of rectangular cells of constant slowness. Inversion consists of finding these constant values using the measured traveltimes. The inversion process can demand a large computational effort if a high‐resolution result is desired. We show how to use a different kind of parameterization of the model based on beam propagation paths. This parameterization is obtained within the framework of reconstruction in Hilbert spaces by minimizing the error between the true model and the estimated model. The traveltimes are interpreted as the projections of the slowness along the beampaths. Although the actual beampaths are described by complicated spatial functions, we simplify the computations by approximating these functions with functions of constant width and height, i.e., “fat” rays, which collectively form a basis set of natural pixels. With a simple numerical example we demonstrate that the main advantage of this parameterization, compared with the traditional decomposition of the model in rectangular pixels, is that 2-D reconstructed images of similar quality can be obtained with considerably less computational effort. This result suggests that the natural pixels can provide considerable computational advantage for 3-D problems.
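The projection relation at the heart of the abstract above, where traveltimes are inner products of the slowness model with beam-shaped basis functions, reduces to a small linear system. The toy below invents the geometry (random "fat ray" path lengths per cell) purely for illustration; real beampaths come from modeling.

```python
import numpy as np

n_cells, n_rays = 16, 40
rng = np.random.default_rng(1)

# Each row holds the path length of one fat ray in each cell.
G = rng.uniform(0.0, 1.0, size=(n_rays, n_cells))
true_slowness = 0.5 + 0.1 * rng.standard_normal(n_cells)

# Traveltimes are projections of slowness along the rays: t_i = sum_j L_ij s_j.
traveltimes = G @ true_slowness

# Invert for slowness by least squares (damped/regularized in practice).
est_slowness, *_ = np.linalg.lstsq(G, traveltimes, rcond=None)
```

Whether the columns of G represent rectangular pixels or natural pixels changes only how G is built; the inversion machinery is identical, which is why the parameterization alone can change the computational cost so much.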
APA, Harvard, Vancouver, ISO, and other styles
30

Sun, H., and G. T. Schuster. "2‐D wavepath migration." GEOPHYSICS 66, no. 5 (September 2001): 1528–37. http://dx.doi.org/10.1190/1.1487099.

Full text
Abstract:
Prestack Kirchhoff migration (KM) is computationally intensive for iterative velocity analysis. This is partly because each time sample in a trace must be smeared along a quasi‐ellipsoid in the model. As a less costly alternative, we use the stationary phase approximation to the KM integral so that the time sample is smeared along a small Fresnel zone portion of the quasi‐ellipsoid. This is equivalent to smearing the time samples in a trace over a 1.5‐D fat ray (i.e., wavepath), so we call this “wavepath migration” (WM). This compares to standard KM, which smears the energy in a trace along a 3‐D volume of quasi‐concentric ellipsoids. In principle, single trace migration with WM has a computational count of [Formula: see text] compared to KM, which has a computational count of [Formula: see text], where N is the number of grid points along one side of a cubic velocity model. Our results with poststack data show that WM produces an image that in some places contains fewer migration artifacts and is about as well resolved as the KM image. For a 2‐D poststack migration example, the computation time of WM is less than one‐third that of KM. Our results with prestack data show that WM images contain fewer migration artifacts and can define the complex structure more accurately. It is also shown that WM can be significantly faster than KM if a slant stack technique is used in the migration. The drawback with WM is that it is sometimes less robust than KM because of its sensitivity to errors in estimating the incidence angles of the reflections.
APA, Harvard, Vancouver, ISO, and other styles
31

Egbert, Gary D., and Anna Kelbert. "Computational recipes for electromagnetic inverse problems." Geophysical Journal International 189, no. 1 (January 31, 2012): 251–67. http://dx.doi.org/10.1111/j.1365-246x.2011.05347.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yin, Xiang-chu, Peter Mora, Andrea Donnellan, and Mitsuhiro Matsuura. "Computational Earthquake Physics PART I: Introduction." Pure and Applied Geophysics 163, no. 9 (August 31, 2006): 1737–40. http://dx.doi.org/10.1007/s00024-006-0112-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yin, Xiang-chu, Peter Mora, Andrea Donnellan, and Mitsuhiro Matsu'ura. "Computational Earthquake Physics PART II: Introduction." Pure and Applied Geophysics 163, no. 11-12 (December 2006): 2259–61. http://dx.doi.org/10.1007/s00024-006-0138-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Reshef, Moshe, and David Kessler. "Practical implementation of three‐dimensional poststack depth migration." GEOPHYSICS 54, no. 3 (March 1989): 309–18. http://dx.doi.org/10.1190/1.1442656.

Full text
Abstract:
This work deals with the practical aspects of three‐dimensional (3-D) poststack depth migration. A method, based on depth extrapolation in the frequency domain, is used for the migration. This method is suitable for structures with arbitrary velocity variation, and the number of computations required can be directly related to the complexity of the given velocity function. We demonstrate the superior computational efficiency of this method for 3-D depth migration relative to the reverse‐time migration method. The computational algorithm used for the migration is designed for a multi‐processor machine (Cray-XMP/48) and takes advantage of advanced disk technologies to overcome the input/output (I/O) problem. The method is demonstrated with both synthetic and field data. The migration of a typical 3-D data volume can be accomplished in only a few hours.
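The depth extrapolation in the frequency domain that the abstract above relies on reduces, for a laterally constant velocity, to a phase shift applied to each (frequency, wavenumber) component of the wavefield. The sketch below shows one such depth step for a single temporal frequency; grid sizes and velocity are arbitrary, and evanescent components are simply zeroed.

```python
import numpy as np

nx, dz, v = 128, 10.0, 2000.0     # illustrative grid step (m) and velocity (m/s)
omega = 2.0 * np.pi * 25.0        # one temporal frequency (rad/s)
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=12.5)   # horizontal wavenumbers

rng = np.random.default_rng(2)
wavefield = rng.standard_normal(nx) + 1j * rng.standard_normal(nx)

# Vertical wavenumber from the dispersion relation; negative kz^2 marks
# evanescent energy, which is suppressed rather than propagated.
kz_sq = (omega / v)**2 - kx**2
propagating = kz_sq > 0.0
kz = np.sqrt(np.where(propagating, kz_sq, 0.0))

# Downward continuation by one depth step: a pure phase shift, so the
# amplitude spectrum of every propagating component is preserved.
extrapolated = np.where(propagating, wavefield * np.exp(1j * kz * dz), 0.0)
```

Because the cost per depth step is an FFT plus an elementwise multiply, the total work scales with the complexity of the velocity function, which is the efficiency argument the abstract makes against reverse-time alternatives.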
APA, Harvard, Vancouver, ISO, and other styles
35

Shragge, Jeffrey. "Solving the 3D acoustic wave equation on generalized structured meshes: A finite-difference time-domain approach." GEOPHYSICS 79, no. 6 (November 1, 2014): T363—T378. http://dx.doi.org/10.1190/geo2014-0172.1.

Full text
Abstract:
The key computational kernel of most advanced 3D seismic imaging and inversion algorithms used in exploration seismology involves calculating solutions of the 3D acoustic wave equation, most commonly with a finite-difference time-domain (FDTD) methodology. Although well suited for regularly sampled rectilinear computational domains, FDTD methods seemingly have limited applicability in scenarios involving irregular 3D domain boundary surfaces and mesh interiors best described by non-Cartesian geometry (e.g., surface topography). Using coordinate mapping relationships and differential geometry, an FDTD approach can be developed for generating solutions to the 3D acoustic wave equation that is applicable to generalized 3D coordinate systems and (quadrilateral-faced hexahedral) structured meshes. The developed numerical implementation is similar to the established Cartesian approaches, save for a necessary introduction of weighted first- and mixed second-order partial-derivative operators that account for spatially varying geometry. The approach was validated on three different types of computational meshes: (1) an “internal boundary” mesh conforming to a dipping water bottom layer, (2) analytic “semiorthogonal cylindrical” coordinates, and (3) analytic semiorthogonal and numerically specified “topographic” coordinate meshes. Impulse response tests and numerical analysis demonstrated the viability of the approach for kernel computations for 3D seismic imaging and inversion experiments for non-Cartesian geometry scenarios.
APA, Harvard, Vancouver, ISO, and other styles
36

Biondi, Biondo, and Gopal Palacharla. "3-D prestack migration of common‐azimuth data." GEOPHYSICS 61, no. 6 (November 1996): 1822–32. http://dx.doi.org/10.1190/1.1444098.

Full text
Abstract:
In principle, downward continuation of 3-D prestack data should be carried out in the 5-D space of full 3-D prestack geometry (recording time, source surface location, and receiver surface location), even when the data sets to be migrated have fewer dimensions, as in the case of common‐azimuth data sets that are only four dimensional. This increase in dimensionality of the computational space causes a severe increase in the amount of computations required for migrating the data. Unless this computational efficiency issue is solved, 3-D prestack migration methods based on downward continuation cannot compete with Kirchhoff methods. We address this problem by presenting a method for downward continuing common‐azimuth data in the original 4-D space of the common‐azimuth data geometry. The method is based on a new common‐azimuth downward‐continuation operator derived by a stationary‐phase approximation of the full 3-D prestack downward‐continuation operator expressed in the frequency‐wavenumber domain. Although the new common‐azimuth operator is exact only for constant velocity, a ray‐theoretical interpretation of the stationary‐phase approximation enables us to derive an accurate generalization of the method to media with both vertical and lateral velocity variations. The proposed migration method successfully imaged a synthetic data set that was generated assuming strong lateral and vertical velocity gradients. The common‐azimuth downward‐continuation theory also can be applied to the derivation of a computationally efficient constant‐velocity Stolt migration of common‐azimuth data. The Stolt migration formulation leads to the important theoretical result that constant‐velocity common‐azimuth migration can be split into two exact sequential migration processes: 2-D prestack migration along the inline direction, followed by 2-D zero‐offset migration along the cross‐line direction.
APA, Harvard, Vancouver, ISO, and other styles
37

Robertsson, Johan O. A., and Chris H. Chapman. "An efficient method for calculating finite‐difference seismograms after model alterations." GEOPHYSICS 65, no. 3 (May 2000): 907–18. http://dx.doi.org/10.1190/1.1444787.

Full text
Abstract:
Seismic modeling, processing, and inversion often require the calculation of the seismic response resulting from a suite of closely related seismic models. Even though changes to the model may be restricted to a small subvolume, we need to perform simulations for the full model. We present a new finite‐difference method that circumvents the need to resimulate the complete model for local changes. By requiring only calculations in the subvolume and its neighborhood, our method makes possible significant reductions in computational cost and memory requirements. In general, each source/receiver location requires one full simulation on the complete model. Following these pre‐computations, recalculation of the altered wavefield can be limited to the region around the subvolume and its neighborhood. We apply our method to a 2-D time‐lapse seismic problem, thereby achieving a factor of 15 reduction in computational cost. Potential savings for 3-D are far greater.
APA, Harvard, Vancouver, ISO, and other styles
38

Brown, R. James. "Computational errors in closed‐form expressions with an electromagnetic example: The conducting‐sphere response." GEOPHYSICS 53, no. 8 (August 1988): 1122–25. http://dx.doi.org/10.1190/1.1442550.

Full text
Abstract:
Errors are expected to result from the numerical evaluation of approximate mathematical expressions such as truncated series and finite‐element solutions, from exact expressions based on approximate theory (e.g., perfect gas law), or from analysis based on field data. Many theoretical problems in science—and geophysics is no exception—involve a wholly correct mathematical expression that is beset with computational difficulties arising solely from numerical calculations using the expression. For example, even with modern computers, computation from a slowly convergent series expression can demand an amount of computation time for convergence that is impractically great. Recurrence formulas themselves sometimes propagate errors, often in a nonlinear manner; Acton (1970) provides further insight into this and other aspects of numerical methods generally. Another potential trouble area is the application of slowly convergent iterative matrix methods to least‐squares problems (Olson, 1987). For such cases, numerical difficulty should be expected.
APA, Harvard, Vancouver, ISO, and other styles
39

Feng, Xiaolei, Zvi Steiner, and Simon A. T. Redfern. "Fluorine incorporation into calcite, aragonite and vaterite CaCO3: Computational chemistry insights and geochemistry implications." Geochimica et Cosmochimica Acta 308 (September 2021): 384–92. http://dx.doi.org/10.1016/j.gca.2021.05.029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pafeng, Josiane, Subhashis Mallick, and Hema Sharma. "Prestack waveform inversion of three-dimensional seismic data — An example from the Rock Springs Uplift, Wyoming, USA." GEOPHYSICS 82, no. 1 (January 1, 2017): B1—B12. http://dx.doi.org/10.1190/geo2016-0079.1.

Full text
Abstract:
Applying seismic inversion to estimate subsurface elastic earth properties for reservoir characterization is a challenge in exploration seismology. In recent years, waveform-based seismic inversions have gained popularity, but due to high computational costs, their applications are limited, and amplitude-variation-with-offset/angle inversion is still the current state-of-the-art. We have developed a genetic-algorithm-based prestack seismic waveform inversion methodology. By parallelizing at multiple levels and assuming a locally 1D structure such that forward computation of wave equation synthetics is computationally efficient, this method is capable of inverting 3D prestack seismic data on parallel computers. Applying this inversion to a real prestack seismic data volume from the Rock Springs Uplift (RSU) located in Wyoming, USA, we determined that our method is capable of inverting the data in a reasonable runtime and producing much higher quality results than amplitude-variation-with-offset/angle inversion. Because the primary purpose for seismic data acquisition at the RSU was to characterize the subsurface for potential targets for carbon dioxide sequestration, we also identified and analyzed some potential primary and secondary storage formations and their associated sealing lithologies from our inversion results.
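The genetic-algorithm machinery underlying the inversion described above can be sketched at toy scale: a population of candidate models is evolved by selection, crossover, and mutation against a data-misfit objective. The two-parameter "forward model" below is a hypothetical stand-in; the real method computes wave-equation synthetics for locally 1D earth models and parallelizes the population evaluation.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)

def forward(params):
    """Hypothetical forward model: a damped cosine set by two parameters."""
    freq, decay = params
    return np.cos(2 * np.pi * freq * t) * np.exp(-decay * t)

true_params = np.array([8.0, 3.0])
observed = forward(true_params)

def misfit(params):
    return np.sum((forward(params) - observed)**2)

# Initial population: uniform draws within the search bounds.
pop = rng.uniform([1.0, 0.0], [15.0, 6.0], size=(60, 2))
init_misfit = min(misfit(p) for p in pop)

for _ in range(80):
    fitness = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]                 # selection
    a = parents[rng.integers(0, 20, size=60)]
    b = parents[rng.integers(0, 20, size=60)]
    children = 0.5 * (a + b)                                # crossover
    children += rng.normal(0.0, 0.1, size=children.shape)   # mutation
    pop = children

best = pop[np.argmin([misfit(p) for p in pop])]
```

Because every misfit evaluation is independent, the population loop parallelizes trivially across processors, which is the property the authors exploit to make 3D prestack waveform inversion tractable.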
APA, Harvard, Vancouver, ISO, and other styles
41

Bleistein, Norman, Jack K. Cohen, and Frank G. Hagin. "Computational and asymptotic aspects of velocity inversion." GEOPHYSICS 50, no. 8 (August 1985): 1253–65. http://dx.doi.org/10.1190/1.1441996.

Full text
Abstract:
We discuss computational and asymptotic aspects of the Born inversion method and show how asymptotic analysis is exploited to reduce the number of integrations in an f-k like solution formula for the velocity variation. The output of this alternative algorithm produces the reflectivity function of the surface. This is an array of singular functions—Dirac delta functions which peak on the reflecting surfaces—each scaled by the normal reflection strength at the surface. Thus, imaging of a reflector is achieved by construction of its singular function and estimation of the reflection strength is deduced from the peak value of that function. By asymptotic analysis of the application of the algorithm to the Kirchhoff representation of the backscattered field, we show that the peak value of the output estimates the reflection strength even when the condition of small variation in velocity (an assumption of the original derivation) is violated. Furthermore, this analysis demonstrates that the method provides a migration algorithm when the amplitude has not been preserved in the data. The design of the computer algorithm is discussed, including such aspects as constraints due to causality and spatial aliasing. We also provide O‐estimates of computer time. This algorithm has been successfully implemented on both synthetic data and common‐midpoint stacked field data.
APA, Harvard, Vancouver, ISO, and other styles
42

Alekseev, A. S., M. M. Lavrentiev, V. G. Romanov, and M. E. Romanov. "Theoretical and computational aspects of seismic tomography." Surveys in Geophysics 11, no. 4 (December 1990): 395–409. http://dx.doi.org/10.1007/bf01902967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, Yan, Ningbo Mao, and Zhiming Ren. "A stable and efficient approach of Q reverse time migration." GEOPHYSICS 83, no. 6 (November 1, 2018): S557—S567. http://dx.doi.org/10.1190/geo2018-0022.1.

Full text
Abstract:
Amplitude attenuation and phase distortion of seismic waves caused by formation viscoelasticity reduce the resolution of reverse time migration (RTM) images. Q-RTM is often used to compensate for these attenuation effects and improve the resolution of seismic imaging. However, severe high-frequency noise and unbounded amplitude growth arise during the wavefield extrapolation of Q-RTM, making stable imaging impossible. Many Q-RTM algorithms suppress this instability through low-pass filtering in the wavenumber domain, but that approach is computationally inefficient and introduces a truncation effect in the wavefield. We have developed a stable and efficient Q-RTM method in which a regularization term is introduced into the viscoacoustic wave equation to suppress the high-frequency noise, and the finite-difference method is used to solve the viscoacoustic wave equation with the regularization term. We use a model example to demonstrate the instability of wavefield extrapolation in Q-RTM and to compare the effectiveness and computational efficiency of the two stabilization methods, low-pass filtering and regularization. Moreover, because our method avoids solving fractional derivatives with the pseudospectral method, computational efficiency is further improved. We tested the Q-RTM approach on a simple layered model, the Marmousi model, and real seismic data. The numerical examples demonstrate that the Q-RTM method solves the instability problem effectively and obtains a higher resolution image at lower computational cost.
APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Dikun, and Douglas W. Oldenburg. "Survey decomposition: A scalable framework for 3D controlled-source electromagnetic inversion." GEOPHYSICS 81, no. 2 (March 1, 2016): E69—E87. http://dx.doi.org/10.1190/geo2015-0217.1.

Full text
Abstract:
Numerical modeling and inversion of electromagnetic (EM) data is a computationally intensive task. To achieve efficiency, we have developed algorithms that were constructed from a smallest practical computational unit. This “atomic” building block, which yields the solution of Maxwell’s equations for a single time or frequency datum due to an infinitesimal current or magnetic dipole, is a self-contained EM problem that can be solved independently and inexpensively on a single core of CPU. Any EM data set can be composed from these units through assembling or superposition. This approach takes advantage of the rapidly expanding capability of multiprocessor computation. Our decomposition has allowed us to handle the computational complexity that arises because of the physical size of the survey, the large number of transmitters, and the large range of time or frequency in a data set; we did this by modeling every datum separately on customized local meshes and local time-stepping schemes. The counterpart to efficiency with atomic decomposition was that the number of independent subproblems could become very large. We have realized that not all of the data need to be considered at all stages of the inversion. Rather, the data can be significantly downsampled at late times or low frequencies and at the early stages of inversion when only long-wavelength signals are sought. We have therefore developed a random data subsampling approach, in conjunction with cross-validation, that selects data in accordance to the spatial scales of the EM induction and the degree of regularization. Alternatively, for many EM surveys, the atomic units can be combined into larger subproblems, thus reducing the number of subproblems needed. These trade-offs were explored for airborne and ground large-loop systems with specific survey configurations being considered. Our synthetic and field examples showed that the proposed framework can produce 3D inversion results of uncompromised quality in a more scalable manner.
APA, Harvard, Vancouver, ISO, and other styles
45

Amestoy, Patrick, Romain Brossier, Alfredo Buttari, Jean-Yves L’Excellent, Theo Mary, Ludovic Métivier, Alain Miniussi, and Stephane Operto. "Fast 3D frequency-domain full-waveform inversion with a parallel block low-rank multifrontal direct solver: Application to OBC data from the North Sea." GEOPHYSICS 81, no. 6 (November 2016): R363—R383. http://dx.doi.org/10.1190/geo2016-0052.1.

Full text
Abstract:
Wide-azimuth long-offset ocean bottom cable (OBC)/ocean bottom node surveys provide a suitable framework to perform computationally efficient frequency-domain full-waveform inversion (FWI) with a few discrete frequencies. Frequency-domain seismic modeling is performed efficiently with moderate computational resources for a large number of sources with a sparse multifrontal direct solver (Gauss-elimination techniques for sparse matrices). Approximate solutions of the time-harmonic wave equation are computed using a block low-rank (BLR) approximation, leading to a significant reduction in the operation count and in the volume of communication during the lower upper (LU) factorization as well as offering great potential for reduction in the memory demand. Moreover, the sparsity of the seismic source vectors is exploited to speed up the forward elimination step during the computation of the monochromatic wavefields. The relevance and the computational efficiency of the frequency-domain FWI performed in the viscoacoustic vertical transverse isotropic (VTI) approximation was tested with a real 3D OBC case study from the North Sea. The FWI subsurface models indicate a dramatic resolution improvement relative to the initial model built by reflection traveltime tomography. The amplitude errors introduced in the modeled wavefields by the BLR approximation for different low-rank thresholds have a negligible footprint in the FWI results. With respect to a standard multifrontal sparse direct factorization, and without compromise of the accuracy of the imaging, the BLR approximation can bring a reduction of the LU factor size by a factor of up to three. This reduction is not yet exploited to reduce the effective memory usage (ongoing work). The flop reduction can be larger than a factor of 10 and can bring a factor of time reduction of around three. Moreover, this reduction factor tends to increase with frequency, namely with the matrix size. Frequency-domain viscoacoustic VTI FWI can be viewed as an efficient tool to build an initial model for elastic FWI of 4C OBC data.
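The block low-rank (BLR) idea above can be shown in miniature: matrix blocks coupling well-separated parts of a wave problem are numerically low rank, so replacing a block by a truncated factorization cuts storage and flops at an accuracy set by the low-rank threshold. The synthetic kernel and threshold below are illustrative only (a plain truncated SVD stands in for the BLR compression used inside the multifrontal solver).

```python
import numpy as np

n = 200
x = np.linspace(1.0, 2.0, n)
y = np.linspace(5.0, 6.0, n)
# A smooth interaction block between two well-separated point sets.
block = 1.0 / np.abs(x[:, None] - y[None, :])

U, s, Vt = np.linalg.svd(block, full_matrices=False)
tol = 1e-7 * s[0]                    # relative truncation threshold
rank = int(np.sum(s > tol))
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

rel_error = np.linalg.norm(approx - block) / np.linalg.norm(block)
storage_ratio = (2 * n * rank + rank) / (n * n)   # factor storage vs full block
```

The singular values of such blocks decay rapidly, so a tight accuracy threshold still yields a rank far below n; loosening the threshold trades accuracy for further compression, exactly the knob the abstract's "low-rank thresholds" refer to.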
APA, Harvard, Vancouver, ISO, and other styles
46

Sen, Mrinal K., and Indrajit G. Roy. "Computation of differential seismograms and iteration adaptive regularization in prestack waveform inversion." GEOPHYSICS 68, no. 6 (November 2003): 2026–39. http://dx.doi.org/10.1190/1.1635056.

Full text
Abstract:
Seismic waveform inversion is a highly challenging task. Nonlinearity, nonuniqueness, and robustness issues tend to make the problem computationally intractable. We have developed a simple regularized Gauss‐Newton–type algorithm for the inversion of seismic data that addresses several of these issues. The salient features of our algorithm include an efficient approach to sensitivity computation, a strategy for band‐limiting the Jacobian matrix, and a novel approach to computing regularization weight that is iteration adaptive. In this paper, we first review various forward modeling and differential seismogram computation algorithms and then evaluate different strategies for choosing the regularization weight. Under the assumption of locally 1D earth models, we design an efficient algorithm by rearranging recursion formula in the reflection matrix approach to compute plane wave seismograms and the Fréchet derivative matrix as a by‐product of forward modeling. We then demonstrate that in a gradient‐descent–type optimization scheme, regularization is critical for obtaining stable and geologically realistic solutions. Although, in most applications, the regularization weight (relative importance between data and model misfit) is chosen in an ad‐hoc manner; the robustness in model estimation and computational stability improve significantly by allowing adaptivity in the choice of the regularization weight in each iterative step. We evaluate performances of several methods, namely, an L‐curve approach, generalized cross‐validation technique, and methods based on a discrepancy principle with application to field ocean‐bottom‐cable data, and we propose a new hybrid approach in computing iteration adaptive regularization weight for prestack inversion.
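The benefit of an iteration-adaptive regularization weight, the central point of the abstract above, can be seen even on a linear toy problem: damped Gauss-Newton steps with a weight that is relaxed each iteration are stable early and accurate late. The operator, noise-free data, and geometric schedule below are invented for illustration and are not the authors' hybrid weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.standard_normal((80, 30))    # stand-in for the (banded) Jacobian
m_true = rng.standard_normal(30)
d_obs = G @ m_true

m = np.zeros(30)
lam = 100.0
for _ in range(25):
    # Damped Gauss-Newton step (G is the exact Jacobian for a linear problem).
    residual = d_obs - G @ m
    dm = np.linalg.solve(G.T @ G + lam * np.eye(30), G.T @ residual)
    m = m + dm
    lam *= 0.5          # iteration-adaptive: relax the damping each step

rel_model_error = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```

With a fixed large weight the iteration would stall far from the answer; with a fixed tiny weight a noisy nonlinear problem would be unstable. Adapting the weight per iteration, whether by a schedule as here or by the L-curve/cross-validation/discrepancy criteria the paper compares, captures both regimes.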
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Hongwei, and Houzhu Zhang. "Reducing computation cost by Lax-Wendroff methods with fourth-order temporal accuracy." GEOPHYSICS 84, no. 3 (May 1, 2019): T109—T119. http://dx.doi.org/10.1190/geo2018-0196.1.

Full text
Abstract:
Explicit time-marching finite-difference stencils have been extensively used for simulating seismic wave propagation, and they are the most computationally intensive part of seismic forward modeling, reverse time migration, and full-waveform inversion. The time-marching step, determined by both the stability condition and numerical dispersion, is a key factor in the computational cost. In contrast with the widely used second-order temporal stencil, the Lax-Wendroff stencil is more cost effective because the time-marching step can be much larger. Both theory and numerical tests confirm that the Lax-Wendroff stencil enables larger time steps. In terms of numerical dispersion, the time steps for second-order and Lax-Wendroff stencils are functions of the number of shortest wavelengths away from the source location. These functions are derived by evaluating the relative L2-norm of the differences between the analytical and numerical solutions. The method for determining the time-marching step is model adaptive and easy to implement. We use the pseudospectral method for the computation of spatial derivatives, and the wave equations that we solved are for isotropic media only, but the described principles can be easily implemented for more complicated types of media.
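The Lax-Wendroff temporal stencil discussed above augments the standard second-order leapfrog update with a (Δt⁴/12)·L² term, where L = v²∂²/∂x², raising the temporal accuracy to fourth order. The 1D sketch below uses a second-order finite-difference Laplacian rather than the paper's pseudospectral derivatives, and the grid parameters are illustrative.

```python
import numpy as np

nx, dx, dt, v = 300, 10.0, 0.002, 2000.0   # illustrative grid and velocity

def laplacian(u):
    """Second spatial derivative with fixed (zero) boundaries."""
    d = np.zeros_like(u)
    d[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    return d

def step_lax_wendroff(u_now, u_prev):
    """One time step: second-order leapfrog plus the dt^4/12 correction."""
    Lu = v**2 * laplacian(u_now)
    LLu = v**2 * laplacian(Lu)
    return 2.0 * u_now - u_prev + dt**2 * Lu + dt**4 / 12.0 * LLu

# Propagate a Gaussian pulse for a few steps (CFL number v*dt/dx = 0.4 here).
x = np.arange(nx) * dx
u_prev = np.exp(-((x - 1500.0) / 100.0)**2)
u_now = u_prev.copy()            # zero initial particle velocity
for _ in range(50):
    u_now, u_prev = step_lax_wendroff(u_now, u_prev), u_now
```

Dropping the LLu term recovers the plain second-order stencil; the extra Laplacian application per step is the cost paid for the larger stable and dispersion-limited time step the abstract quantifies.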
APA, Harvard, Vancouver, ISO, and other styles
48

Furumura, Takashi, B. L. N. Kennett, and Hiroshi Takenaka. "Parallel 3-D pseudospectral simulation of seismic wave propagation." GEOPHYSICS 63, no. 1 (January 1998): 279–88. http://dx.doi.org/10.1190/1.1444322.

Full text
Abstract:
Three‐dimensional pseudospectral modeling for a realistic scale problem is still computationally very intensive, even when using current powerful computers. To overcome this, we have developed a parallel pseudospectral code for calculating the 3-D wavefield by concurrent use of a number of processors. The parallel algorithm is based on a partition of the computational domain, where the field quantities are distributed over a number of processors and the calculation is concurrently done in each subdomain with interprocessor communications. Experimental performance tests using three different styles of parallel computers achieved a fairly good speed‐up compared with conventional computation on a single processor: maximum speed‐up rate of 26 using 32 processors of a Thinking Machines CM-5 parallel computer, 1.6 using a Digital Equipment DEC‐Alpha two‐CPU workstation, and 4.6 using a cluster of eight Sun Microsystems SPARC-Station 10 (SPARC-10) workstations connected by an Ethernet. The result of this test agrees well with the performance theoretically predicted for each system. To demonstrate the feasibility of our parallel algorithm, we show three examples: 3-D acoustic and elastic modeling of fault‐zone trapped waves and the calculation of elastic wave propagation in a 3-D syncline model.
APA, Harvard, Vancouver, ISO, and other styles
49

Blatter, Daniel, Anandaroop Ray, and Kerry Key. "Two-dimensional Bayesian inversion of magnetotelluric data using trans-dimensional Gaussian processes." Geophysical Journal International 226, no. 1 (March 25, 2021): 548–63. http://dx.doi.org/10.1093/gji/ggab110.

Full text
Abstract:
Bayesian inversion of electromagnetic data produces crucial uncertainty information on inferred subsurface resistivity. Due to their high computational cost, however, Bayesian inverse methods have largely been restricted to computationally expedient 1-D resistivity models. In this study, we successfully demonstrate, for the first time, a fully 2-D, trans-dimensional Bayesian inversion of magnetotelluric (MT) data. We render this problem tractable from a computational standpoint by using a stochastic interpolation algorithm known as a Gaussian process (GP) to achieve a parsimonious parametrization of the model vis-a-vis the dense parameter grids used in numerical forward modelling codes. The GP links a trans-dimensional, parallel tempered Markov chain Monte Carlo sampler, which explores the parsimonious model space, to MARE2DEM, an adaptive finite element forward solver. MARE2DEM computes the model response using a dense parameter mesh with resistivity assigned via the GP model. We demonstrate the new trans-dimensional GP sampler by inverting both synthetic and field MT data for 2-D models of electrical resistivity, with the field data example converging within 10 d on 148 cores, a non-negligible but tractable computational cost. For a field data inversion, our algorithm achieves a parameter reduction of over 32× compared to the fixed parameter grid used for the MARE2DEM regularized inversion. Resistivity probability distributions computed from the ensemble of models produced by the inversion yield credible intervals and interquartile plots that quantitatively show the non-linear 2-D uncertainty in model structure. This uncertainty could then be propagated to other physical properties that impact resistivity including bulk composition, porosity and pore-fluid content.
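The parsimonious Gaussian-process parametrization described above can be sketched in 1D: a handful of control points carries the model, and a GP mean with a squared-exponential kernel fills the dense mesh the forward solver needs. The depths, values, length scale, and nugget below are invented for illustration; the real scheme also lets the number and positions of control points vary trans-dimensionally.

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=0.5):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)

rng = np.random.default_rng(3)
ctrl_z = np.linspace(0.0, 5.0, 8)            # sparse control-point depths (km)
ctrl_log_rho = rng.uniform(0.0, 3.0, 8)      # log10 resistivity at controls

dense_z = np.linspace(0.0, 5.0, 400)         # dense forward-modeling mesh
K_cc = sq_exp_kernel(ctrl_z, ctrl_z) + 1e-10 * np.eye(8)   # small nugget
K_dc = sq_exp_kernel(dense_z, ctrl_z)

# GP mean prediction: smoothly interpolates the control values onto the
# dense mesh that the forward solver evaluates.
dense_log_rho = K_dc @ np.linalg.solve(K_cc, ctrl_log_rho)
```

The MCMC sampler then only needs to propose moves in the 8-parameter control space rather than the 400-cell mesh, which is the source of the large parameter reduction the abstract reports.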
APA, Harvard, Vancouver, ISO, and other styles
50

Harkrider, David G. "The early years of computational seismology at Caltech." Bulletin of the Seismological Society of America 78, no. 6 (December 1, 1988): 2105–9. http://dx.doi.org/10.1785/bssa0780062105.

Full text
APA, Harvard, Vancouver, ISO, and other styles