Journal articles on the topic 'Computational geoscience'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Computational geoscience.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhao, Chongbin. "Computational Methods for Simulating Some Typical Problems in Computational Geosciences." International Journal of Computational Methods 13, no. 02 (March 2016): 1640016. http://dx.doi.org/10.1142/s0219876216400168.

Abstract:
The main purpose of this paper is to present computational methods for simulating some typical problems in the emerging field of computational geoscience. Because of remarkable differences between engineering systems and Earth systems, existing computational methods designed for solving engineering problems cannot be used to solve geoscience problems without modification. However, the fundamental philosophy behind the development of computational methods applies to the computational simulation of both geoscience and engineering problems. Because of their inherent approximation, computational methods must be verified before being put into application. After briefly introducing several computational methods and algorithms developed for simulating typical problems in this emerging field, a typical geoscience problem, known as the chemical dissolution-front instability problem in ore-forming systems of supercritical Zhao numbers, is selected to demonstrate how computational methods can be used to solve geoscience problems.
2

Savelonas, Michalis A., Christos N. Veinidis, and Theodoros K. Bartsokas. "Computer Vision and Pattern Recognition for the Analysis of 2D/3D Remote Sensing Data in Geoscience: A Survey." Remote Sensing 14, no. 23 (November 27, 2022): 6017. http://dx.doi.org/10.3390/rs14236017.

Abstract:
Historically, geoscience has been a prominent domain for applications of computer vision and pattern recognition. The numerous challenges associated with geoscience-related imaging data, which include poor imaging quality, noise, missing values, lack of precise boundaries defining various geoscience objects and processes, as well as non-stationarity in space and/or time, provide an ideal test bed for advanced computer vision techniques. On the other hand, developments in pattern recognition, especially the rapid evolution of powerful graphical processing units (GPUs) and the subsequent deep learning breakthrough, enable valuable computational tools, which can aid geoscientists in important problems such as land cover mapping, target detection, pattern mining in imaging data, boundary extraction and change detection. In this landscape, classical computer vision approaches, such as active contours, superpixels, or descriptor-guided classification, provide alternatives that remain relevant when domain-expert labelling of large sample collections is not feasible. This issue persists despite efforts toward the standardization of geoscience datasets, such as Microsoft's AI for Earth initiative or Google Earth. This work covers developments in applications of computer vision and pattern recognition on geoscience-related imaging data, following both pre-deep-learning and post-deep-learning paradigms. Various imaging modalities are addressed, including multispectral images, hyperspectral images (HSIs), synthetic aperture radar (SAR) images, and point clouds obtained from light detection and ranging (LiDAR) sensors or digital elevation models (DEMs).
3

Abokhodair, Abdulwahab A. "Numerical tools for geoscience computations: Semiautomatic differentiation—SD." Computational Geosciences 11, no. 4 (June 5, 2007): 283–96. http://dx.doi.org/10.1007/s10596-007-9052-z.

4

Blais, J. A. Rod. "Optimal Modeling and Filtering of Stochastic Time Series for Geoscience Applications." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/895061.

Abstract:
Sequences of observations or measurements are often modeled as realizations of stochastic processes with some stationary properties in the first and second moments. In practice, however, the noise biases and variances are likely to differ between epochs in time or regions in space, and hence such stationarity assumptions are often questionable. In the case of strict stationarity with equally spaced data, the Wiener-Hopf equations can readily be solved with fast Fourier transforms (FFTs) with optimal computational efficiency. In more general contexts, covariance matrices can also be diagonalized using Karhunen-Loève transforms (KLTs), or more generally using empirical orthogonal and biorthogonal expansions, which are unfortunately much more demanding in terms of computational effort. In cases with increment stationarity, the mathematical modeling can be modified and generalized covariances can be used, with some computational advantages. The general nonlinear solution methodology is also briefly reviewed, along with its practical limitations. These different formulations are discussed with special emphasis on the spectral properties of covariance matrices and are illustrated with some numerical examples. General recommendations are included for practical geoscience applications.
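For readers who want to experiment with the FFT route mentioned in this abstract, here is a minimal Python sketch of Wiener-style filtering for an equally spaced, strictly stationary series. It is illustrative only and not the author's code: the signal and noise spectra are assumed known, which is the idealization that makes the FFT solution of the Wiener-Hopf equations so cheap.

```python
import numpy as np

# Wiener-style smoothing of an equally spaced, stationary series via FFTs.
# Knowing the signal and noise power spectra is assumed here; estimating
# them is the hard part in practice.
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64.0)              # smooth target signal
noise_var = 0.25
data = signal + rng.normal(0.0, noise_var**0.5, n)

S = np.abs(np.fft.fft(signal))**2 / n              # signal power spectrum (assumed known)
N = np.full(n, noise_var)                          # white-noise power spectrum
H = S / (S + N)                                    # Wiener filter transfer function
estimate = np.real(np.fft.ifft(H * np.fft.fft(data)))

print("rms error before:", np.sqrt(np.mean((data - signal)**2)))
print("rms error after: ", np.sqrt(np.mean((estimate - signal)**2)))
```

The whole filter costs O(n log n), which is the computational advantage the abstract contrasts with the much more demanding KLT route.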
5

Zhao, Chongbin, Bruce E. Hobbs, and Alison Ord. "Advances in computational geoscience: Numerical methods and algorithms for simulating geofluid flow related problems." Journal of Geochemical Exploration 101, no. 1 (April 2009): 127. http://dx.doi.org/10.1016/j.gexplo.2008.11.022.

6

Leung, Raymond. "Subsurface Boundary Geometry Modeling: Applying Computational Physics, Computer Vision, and Signal Processing Techniques to Geoscience." IEEE Access 7 (2019): 161680–96. http://dx.doi.org/10.1109/access.2019.2951605.

7

Frings, Patrick J., and Heather L. Buss. "The Central Role of Weathering in the Geosciences." Elements 15, no. 4 (August 1, 2019): 229–34. http://dx.doi.org/10.2138/gselements.15.4.229.

Abstract:
Weathering is the chemical and physical alteration of rock at the surface of the Earth, but its importance is felt well beyond the rock itself. The repercussions of weathering echo throughout the Earth sciences, from ecology to climatology, from geomorphology to geochemistry. This article outlines how weathering interacts with various geoscience disciplines across a huge range of scales, both spatial and temporal. It traces the evolution of scientific thinking about weathering and man's impact on weathering itself—for better and for worse. Future computational, conceptual and methodological advances are set to cement weathering's status as a central process in the Earth sciences.
8

Kumi-Boateng, Bernard, and Yao Yevenyo Ziggah. "FEASIBILITY OF USING GROUP METHOD OF DATA HANDLING (GMDH) APPROACH FOR HORIZONTAL COORDINATE TRANSFORMATION." Geodesy and cartography 46, no. 2 (July 9, 2020): 55–66. http://dx.doi.org/10.3846/gac.2020.10486.

Abstract:
Machine learning algorithms have emerged as a new paradigm shift in geoscience computations and applications. The present study aims to assess the suitability of the Group Method of Data Handling (GMDH) for coordinate transformation. The data used for the coordinate transformation constitute the Ghana national triangulation network, which is based on the two horizontal geodetic datums (Accra 1929 and Leigon 1977) utilised for geospatial applications in Ghana. The GMDH result was compared with other standard methods such as the Backpropagation Neural Network (BPNN), the Radial Basis Function Neural Network (RBFNN), the 2D conformal, and the 2D affine transformation. It was observed that the proposed GMDH approach is very efficient in transforming coordinates from the Leigon 1977 datum to the official mapping datum of Ghana, i.e. the Accra 1929 datum. It was also found that GMDH could produce results comparable and satisfactory relative to the widely used BPNN and RBFNN. However, the classical transformation methods (2D affine and 2D conformal) performed poorly when compared with the machine learning models (GMDH, BPNN and RBFNN). The computational strength of the machine learning models is attributed to their self-adaptive capability to detect patterns in a data set without assuming functional relationships between the input and output variables. To this end, the proposed GMDH model could be used as a supplementary computational tool to the existing transformation procedures used in the Ghana geodetic reference network.
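The classical 2D conformal (four-parameter) transformation used as a baseline in this study is easy to reproduce. The sketch below fits it by least squares to synthetic coordinate pairs; the coordinates and parameter values are placeholders, not Ghana network data.

```python
import numpy as np

# 2D conformal (four-parameter Helmert) transformation fitted by least squares:
#   E' = a*E - b*N + c,   N' = b*E + a*N + d
a0, b0, c0, d0 = 0.9996, 0.0100, 120.0, -80.0      # synthetic true parameters
src = np.array([[1000.0, 2000.0], [1500.0, 2100.0],
                [1200.0, 2600.0], [1800.0, 2400.0]])
dst = np.column_stack([a0 * src[:, 0] - b0 * src[:, 1] + c0,
                       b0 * src[:, 0] + a0 * src[:, 1] + d0])

rows, obs = [], []
for (e, n), (E, N) in zip(src, dst):
    rows += [[e, -n, 1.0, 0.0], [n, e, 0.0, 1.0]]
    obs += [E, N]
a, b, c, d = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)[0]
print(a, b, c, d)   # recovers the synthetic scale/rotation and shifts
```

A machine learning model such as GMDH replaces this fixed functional form with one learned from the data, which is the flexibility the abstract credits for the better results.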
9

Bhatt, Asti, Todd Valentic, Ashton Reimer, Leslie Lamarche, Pablo Reyes, and Russell Cosgrove. "Reproducible Software Environment: a tool enabling computational reproducibility in geospace sciences and facilitating collaboration." Journal of Space Weather and Space Climate 10 (2020): 12. http://dx.doi.org/10.1051/swsc/2020011.

Abstract:
The Reproducible Software Environment (Resen) is an open-source software tool enabling computationally reproducible scientific results in the geospace science community. Resen was developed as part of a larger project called the Integrated Geoscience Observatory (InGeO), which aims to help geospace researchers bring together diverse datasets from disparate instruments and data repositories, with software tools contributed by instrument providers and community members. The main goals of InGeO are to remove barriers in accessing, processing, and visualizing geospatially resolved data from multiple sources using methodologies and tools that are reproducible. The architecture of Resen combines two mainstream open source software tools, Docker and JupyterHub, to produce a software environment that not only facilitates computationally reproducible research results, but also facilitates effective collaboration among researchers. In this technical paper, we discuss some challenges for performing reproducible science and a potential solution via Resen, which is demonstrated using a case study of a geospace event. Finally we discuss how the usage of mainstream, open-source technologies seems to provide a sustainable path towards enabling reproducible science compared to proprietary and closed-source software.
10

Vacher, H. L. "Computational Geology 13 – Geological-Mathematical Activities for College Students in the Journal of Geoscience Education, 1990–1999." Journal of Geoscience Education 48, no. 5 (November 2000): 682–91. http://dx.doi.org/10.5408/1089-9995-48.5.682.

11

Paoletti, Valeria, Per Christian Hansen, Mads Friis Hansen, and Maurizio Fedi. "A computationally efficient tool for assessing the depth resolution in large-scale potential-field inversion." GEOPHYSICS 79, no. 4 (July 1, 2014): A33—A38. http://dx.doi.org/10.1190/geo2014-0017.1.

Abstract:
In potential-field inversion, careful management of singular value decomposition components is crucial for obtaining information about the source distribution with respect to depth. In principle, the depth-resolution plot provides a convenient visual tool for this analysis, but its computational complexity has hitherto prevented application to large-scale problems. To analyze depth resolution in such problems, we developed a variant, ApproxDRP, which is based on an iterative algorithm and is therefore suited to large-scale problems because it avoids matrix factorizations and the associated demands on memory and computing time. We used the ApproxDRP to study retrievable depth resolution in inversion of the gravity field of the Neapolitan Volcanic Area. Our main contribution is the combined use of the Lanczos bidiagonalization algorithm, established in the scientific computing community, and the depth-resolution plot, defined in the geoscience community.
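The core numerical ingredient, a Lanczos-type iterative computation of a few singular triplets without any full matrix factorization, can be sketched with SciPy. The random matrix below is a stand-in for a real potential-field sensitivity kernel; this is not the authors' ApproxDRP implementation.

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 800))    # placeholder sensitivity matrix

k = 20
U, s, Vt = svds(A, k=k)                 # Lanczos-type iterative solver
order = np.argsort(s)[::-1]             # svds returns ascending singular values
U, s, Vt = U[:, order], s[order], Vt[order]

# Fraction of the kernel's energy captured by the leading components
print(np.cumsum(s**2) / np.linalg.norm(A, "fro")**2)
```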
12

Ba, Yuming, Jana de Wiljes, Dean S. Oliver, and Sebastian Reich. "Randomized maximum likelihood based posterior sampling." Computational Geosciences 26, no. 1 (December 20, 2021): 217–39. http://dx.doi.org/10.1007/s10596-021-10100-y.

Abstract:
Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show that sampling from multimodal distributions is improved by computing all critical points instead of only minimizers of the objective function. For applications to high-dimensional geoscience inverse problems, we demonstrate an efficient approximate weighting that uses a low-rank Gauss-Newton approximation of the determinant of the Jacobian. The method is applied to two toy problems with known posterior distributions and a Darcy flow problem with multiple modes in the posterior.
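In the linear-Gaussian special case noted in the abstract, the minimizer of the perturbed objective has a closed form and the resulting samples follow the posterior exactly. The toy sketch below illustrates that baseline; the paper's contribution concerns the nonlinear, multimodal setting, where critical points and Jacobian-based weights come into play.

```python
import numpy as np

# Randomized maximum likelihood (RML), linear-Gaussian toy: each sample solves
#   min_m ||G m - (d + eps)||^2_{Cd^-1} + ||m - (m_pr + eta)||^2_{Cm^-1}
rng = np.random.default_rng(2)
G = np.array([[1.0, 0.5], [0.2, 1.5], [1.0, -1.0]])
Cd, Cm = 0.05 * np.eye(3), np.eye(2)
m_pr = np.zeros(2)
d = G @ np.array([1.0, -0.5]) + rng.multivariate_normal(np.zeros(3), Cd)

Cd_inv, Cm_inv = np.linalg.inv(Cd), np.linalg.inv(Cm)
H = G.T @ Cd_inv @ G + Cm_inv                       # Gauss-Newton Hessian
samples = []
for _ in range(2000):
    d_pert = d + rng.multivariate_normal(np.zeros(3), Cd)   # perturb data
    m_pert = m_pr + rng.multivariate_normal(np.zeros(2), Cm) # perturb prior mean
    samples.append(np.linalg.solve(H, G.T @ Cd_inv @ d_pert + Cm_inv @ m_pert))
samples = np.array(samples)
print(samples.mean(axis=0))     # approximates the posterior mean
print(np.cov(samples.T))        # approximates the posterior covariance H^-1
```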
13

Jiang, Chunhua, Tianhe Xu, Wenfeng Nie, Zhenlong Fang, Shuaimin Wang, and Aigong Xu. "A Parallel Approach for Multi-GNSS Ultra-Rapid Orbit Determination." Remote Sensing 13, no. 17 (September 1, 2021): 3464. http://dx.doi.org/10.3390/rs13173464.

Abstract:
Global Navigation Satellite System (GNSS) ultra-rapid orbits are critical for geoscience and real-time engineering applications. To improve the computational efficiency and the accuracy of the predicted orbit, a parallel approach for multi-GNSS ultra-rapid orbit determination is proposed based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). Compared with earlier efficient methods, this approach can improve the efficiency of the multi-GNSS ultra-rapid orbit solution without changing the original observation data, while retaining the continuity and consistency of the original parameters to be estimated. To obtain high efficiency, three steps are involved. First, the normal equation construction is parallelized with MPI. Second, the equivalent reduction of the estimated parameters is parallelized with OpenMP. Third, multithreading is used for parallel orbit extrapolation. GNSS ultra-rapid orbit determination is thus comprehensively parallelized, and the computational efficiency is greatly improved. Experiments based on data from MGEX and IGS stations are carried out to analyze the performance of the proposed approach in terms of computational efficiency, accuracy and stability. The results show that the approach greatly improves the efficiency of satellite orbit determination and can achieve a 1-h update frequency for multi-GNSS ultra-rapid orbit determination using 88 stations with four-system observations. The accuracy of the GPS, GLONASS, Galileo and BDS ultra-rapid orbits with a 1-h update frequency using the parallel approach is approximately 33.4%, 31.4%, 40.1% and 32.8% higher, respectively, than that of the original orbits. The root mean squares (RMS) of the GPS, GLONASS, Galileo and BDS predicted orbits are about 3.2 cm, 5.1 cm, 5.6 cm and 11.8 cm. Moreover, the orbit provided by the proposed method has better stability. The precision loss of the parallel optimization is negligible, and the original correlation between the parameters is fully preserved.
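The first optimization step, MPI-parallel construction of the normal equations, follows a standard pattern: each rank accumulates partial normal equations from its share of the observations, and a sum-reduction assembles the full system. Below is a hedged mpi4py sketch with a random design matrix standing in for real GNSS observation equations (run with, e.g., mpiexec -n 4 python script.py).

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_obs_total, n_par = 10000, 50
rng = np.random.default_rng(rank)
A = rng.standard_normal((n_obs_total // size, n_par))  # local observation equations
l = rng.standard_normal(n_obs_total // size)           # local reduced observations

N_local = A.T @ A                                      # partial normal matrix
b_local = A.T @ l                                      # partial right-hand side

N, b = np.zeros_like(N_local), np.zeros_like(b_local)
comm.Reduce(N_local, N, op=MPI.SUM, root=0)            # assemble full normal equations
comm.Reduce(b_local, b, op=MPI.SUM, root=0)
if rank == 0:
    x = np.linalg.solve(N, b)                          # estimated parameters
    print("parameters estimated:", x.shape)
```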
14

Peyrot, Jean-Luc, Laurent Duval, Frédéric Payan, Lauriane Bouard, Lénaïc Chizat, Sébastien Schneider, and Marc Antonini. "HexaShrink, an exact scalable framework for hexahedral meshes with attributes and discontinuities: multiresolution rendering and storage of geoscience models." Computational Geosciences 23, no. 4 (May 3, 2019): 723–43. http://dx.doi.org/10.1007/s10596-019-9816-2.

15

Yang, Meng, Christian Hirt, and Roland Pail. "TGF: A New MATLAB-based Software for Terrain-related Gravity Field Calculations." Remote Sensing 12, no. 7 (March 26, 2020): 1063. http://dx.doi.org/10.3390/rs12071063.

Abstract:
With knowledge of the geometry and density distribution of topography, the residual terrain modelling (RTM) technique has been broadly applied in geodesy and geophysics for the determination of high-frequency gravity field signals. Depending on the size of the investigation area, challenges in computational efficiency are encountered when using an ultra-high-resolution digital elevation model (DEM) in the Newtonian integration. For efficient and accurate gravity forward modelling in the spatial domain, we developed a new MATLAB-based program called terrain gravity field (TGF). Our new software is capable of calculating the gravity field generated by an arbitrary topographic mass-density distribution. Depending on the attenuation of the gravity field with distance, the adaptive algorithm divides the integration masses into four zones, and adaptively combines four types of geometries (i.e., polyhedron, prism, tesseroid and point-mass) and DEMs with different spatial resolutions. Compared to some publicly available algorithms that depend on a single type of geometric approximation, this enables accurate modelling of the gravity field and greatly reduces the computation time. In addition, the TGF software can calculate ten independent gravity field functionals, supports two types of density inputs (constant density value and digital density map), and considers the curvature of the Earth through spherical and ellipsoidal approximations. Further to this, the TGF software is also capable of delivering the full-scale topographic gravity field implied by masses between the Earth's surface and mean sea level. In this contribution, the TGF software is introduced to the geoscience community and its capabilities are explained. Results from internal and external numerical validation experiments of TGF confirmed its accuracy at the sub-mGal level. Based on TGF and the trade-off between accuracy and efficiency, values for the spatial resolution and extent of topography models are recommended. The TGF software has been extensively tested and recently applied in the SRTM2gravity project to convert the global 3" SRTM topography to implied gravity effects at 28 billion computation points. This confirms the capability of TGF for dealing with large datasets. Together with this paper, the TGF software will be released in the public domain for free use in geodetic and geophysical forward modelling computations.
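Of the four geometries TGF combines, the point mass is the simplest, and sketching it clarifies what the adaptive zoning trades off. The snippet below sums vertical point-mass attractions in a planar approximation; it is a far-zone shortcut only, not TGF's algorithm (TGF is MATLAB; Python is used here for consistency with the other examples on this page).

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_masses(sources, masses, station):
    """Vertical attraction (in mGal) of point masses at a station; z is up."""
    d = sources - station
    r = np.linalg.norm(d, axis=1)
    return 1e5 * G * np.sum(masses * (-d[:, 2]) / r**3)  # m/s^2 -> mGal

# One DEM cell collapsed to a point mass: a 100 m cube of rock at 2670 kg/m^3,
# centered 500 m below the station.
mass = np.array([100.0**3 * 2670.0])
src = np.array([[0.0, 0.0, -500.0]])
print(gz_point_masses(src, mass, np.array([0.0, 0.0, 0.0])))  # ~0.07 mGal
```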
16

Pardo-Alonso, Samuel, Eusebio Solórzano, Jerome Vicente, Loes Brabant, Manuel L. Dierick, Ingo Manke, André Hilger, Ester Laguna, and Miguel Angel Rodriguez-Perez. "μCT-Based Analysis of the Solid Phase in Foams: Cell Wall Corrugation and other Microscopic Features." Microscopy and Microanalysis 21, no. 5 (August 20, 2015): 1361–71. http://dx.doi.org/10.1017/s1431927615014890.

Abstract:
This work presents a series of three-dimensional computational methods with the objective of analyzing and quantifying some important structural characteristics in a collection of low-density polyolefin-based foams. First, the solid phase tortuosity, local thickness, and surface curvature have been determined over the solid phase of the foam. These parameters were used to quantify the presence of wrinkles located at the cell walls of the foams under study. In addition, a novel segmentation technique has been applied to the continuous solid phase. This novel method allows performing a separate analysis of the constituting elements of this phase, that is, cell struts and cell walls. The methodology is based on a solid classification algorithm and evaluates the local topological dissimilarities existing between these elements. Thanks to this method it was possible to perform a separate analysis of curvature, local thickness, and corrugation ratio in the solid constituents that reveals additional differences that were not detected in the first analysis of the continuous structure. The methods developed in this work are applicable to other types of porous materials in fields such as geoscience or biomedicine.
17

Guo, Wei, Ramachandran D. Nair, and Jing-Mei Qiu. "A Conservative Semi-Lagrangian Discontinuous Galerkin Scheme on the Cubed Sphere." Monthly Weather Review 142, no. 1 (January 1, 2014): 457–75. http://dx.doi.org/10.1175/mwr-d-13-00048.1.

Abstract:
The discontinuous Galerkin (DG) methods designed for hyperbolic problems arising from a wide range of applications are known to enjoy many computational advantages. DG methods coupled with strong-stability-preserving explicit Runge–Kutta discontinuous Galerkin (RKDG) time discretizations provide a robust numerical approach suitable for geoscience applications including atmospheric modeling. However, a major drawback of the RKDG method is its stringent Courant–Friedrichs–Lewy (CFL) stability restriction associated with explicit time stepping. To address this issue, the authors adopt a dimension-splitting approach where a semi-Lagrangian (SL) time-stepping strategy is combined with the DG method. The resulting SLDG scheme employs a sequence of 1D operations for solving multidimensional transport equations. The SLDG scheme is inherently conservative and has the option to incorporate a local positivity-preserving filter for tracers. A novel feature of the SLDG algorithm is that it can be used for multitracer transport for global models employing spectral-element grids, without using an additional finite-volume grid system. The quality of the proposed method is demonstrated via benchmark tests on Cartesian and cubed-sphere geometry, which employs nonorthogonal, curvilinear coordinates.
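The semi-Lagrangian idea that frees the scheme from the CFL restriction is easy to demonstrate in 1D: trace each grid point back along the characteristic and interpolate. In the sketch below, plain linear interpolation stands in for the DG reconstruction, so this illustrates the time stepping only, not the SLDG scheme itself.

```python
import numpy as np

# 1D semi-Lagrangian stepping for u_t + a*u_x = 0 on a periodic grid.
n, a, dt = 200, 1.0, 0.05
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]                          # CFL = a*dt/dx = 10, far beyond
u = np.exp(-200.0 * (x - 0.25)**2)        # explicit RKDG limits, yet stable

for _ in range(100):                      # advance to t = 5 (five full periods)
    x_dep = (x - a * dt) % 1.0            # departure points of characteristics
    u = np.interp(x_dep, x, u, period=1.0)

print(np.max(np.abs(u - np.exp(-200.0 * (x - 0.25)**2))))  # error vs. exact
```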
18

He, Zhenwen, Chunfeng Zhang, Xiaogang Ma, and Gang Liu. "Hexadecimal Aggregate Approximation Representation and Classification of Time Series Data." Algorithms 14, no. 12 (December 2, 2021): 353. http://dx.doi.org/10.3390/a14120353.

Abstract:
Time series data are widely found in finance, health, environmental, social, mobile and other fields. A large amount of time series data has been produced due to the general use of smartphones, various sensors, RFID and other internet devices. How a time series is represented is key to the efficient and effective storage and management of time series data, and is also very important for time series classification. Two new time series representation methods, Hexadecimal Aggregate approXimation (HAX) and Point Aggregate approXimation (PAX), are proposed in this paper. The two methods represent each segment of a time series as a transformable interval object (TIO). Then, each TIO is mapped to a spatial point located on a two-dimensional plane. Finally, HAX maps each point to a hexadecimal digit so that a time series is converted into a hex string. The experimental results show that HAX has higher classification accuracy than Symbolic Aggregate approXimation (SAX) but lower accuracy than some SAX variants (SAX-TD, SAX-BD). HAX has the same space cost as SAX but a lower one than these variants. PAX has higher classification accuracy than HAX and is extremely close to the Euclidean distance (ED) measurement; however, the space cost of PAX is generally much lower than that of ED. HAX and PAX are general representation methods that can also support geoscience time series clustering, indexing and querying in addition to classification.
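The HAX construction via transformable interval objects is specific to the paper, but the generic symbolization pipeline it extends (z-normalize, piecewise aggregate, quantize to an alphabet, here of 16 hex symbols) can be sketched as follows. The Gaussian breakpoints are the SAX convention, used here as a simplifying assumption.

```python
import numpy as np
from scipy.stats import norm

def hex_symbolize(series, n_segments):
    """SAX-style symbolization with a 16-letter (hexadecimal) alphabet."""
    z = (series - series.mean()) / series.std()          # z-normalize
    paa = z.reshape(n_segments, -1).mean(axis=1)         # piecewise aggregates
    cuts = norm.ppf(np.linspace(0.0, 1.0, 17)[1:-1])     # 15 Gaussian breakpoints
    return "".join(format(d, "x") for d in np.searchsorted(cuts, paa))

rng = np.random.default_rng(3)
ts = np.cumsum(rng.standard_normal(256))                  # random-walk series
print(hex_symbolize(ts, 16))                              # 16-character hex string
```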
19

Yang, K. S., S. L. Li, J. Zhang, Z. Zeng, X. Y. Qin, and X. M. Zhou. "Phase transition of iron doped MgO under high pressure by first-principles study." International Journal of Modern Physics C 26, no. 02 (February 2015): 1550020. http://dx.doi.org/10.1142/s0129183115500205.

Abstract:
The (Mg,Fe)O solid solution is one of the major lower-mantle minerals, and studying its properties and structures under high pressure is a fundamental step toward understanding Earth's deep interior. Here, within the framework of density functional theory, we first discuss the relationship between the total energy and the iron doping positions of (Mg,Fe)O, and find that the doped iron favors a dispersive arrangement. Then the pressure-induced phase transitions of (Mg,Fe)O from the NaCl-type (B1) to the CsCl-type (B2) structure are probed. It is found that the phase transition pressure of (Mg,Fe)O decreases with damped oscillation as the iron concentration increases. This phenomenon is essentially determined by the iron concentration as well as the iron doping positions. The electronic structures of MgO and (Mg0.75Fe0.25)O at 436 GPa are calculated, and the results show that the doped iron plays a crucial role in the metallicity of (Mg0.75Fe0.25)O. Our results are in agreement with their experimental counterparts. This study provides useful information for understanding the behavior of pressure-induced phase transitions in geoscience.
20

Chen, Guoqiang, Huadong Guo, Da Liang, Chunming Han, Yixing Ding, Huiping Jiang, and Ke Zhang. "Theoretical Feasibility Analysis of Fast Back-Projection Algorithm for Moon-Based SAR in Time Domain." Applied Sciences 12, no. 8 (April 11, 2022): 3850. http://dx.doi.org/10.3390/app12083850.

Abstract:
Nowadays, Earth observation based on the Moon has attracted attention from many researchers and relevant departments, and there is considerable interest in monitoring large-scale, long-term geoscience phenomena using Moon-based SAR (MBS). However, Earth observation from MBS involves a long transmission time, and the relative motion of MBS with respect to its Earth ground target (EGT) differs greatly from that of space-borne SAR; for these reasons, the traditional stop-and-go model is no longer suitable for MBS imaging in the frequency domain. Here, a dual-path separate calculation method for a single pulse is presented to better match the real scenario, and the slant range is then fitted with a high-order polynomial series. The location of the MBS, the synthetic aperture time and other factors affect the length of the dual path and the fit bias. Because phase de-correlation processing in the frequency domain has not been thoroughly investigated, and to avoid the computational cost of the traditional back-projection (BP) algorithm, this paper first proposes a fast back-projection (FBP) algorithm in the time domain for MBS, a platform with long transmission time and long synthetic aperture time. In the FBP algorithm, the original method, which projected the echo onto all pixels in the imaging area, is changed to project the echo onto a centerline instead. A suitable interpolation for points on the centerline is adopted to reduce the projection error, and the synthetic aperture length and imaging area are divided into subsections to reduce the computation cost. The formulas indicate that the range error can be controlled as long as the product of the sub-imaging area's length and the sub-aperture's length stays constant. Theoretical analysis of the range differences, mainly at the apogee, perigee, ascending and descending nodes, shows the necessity of separately calculating the dual path for a single MBS pulse transmission in Earth-Moon motion when a real ephemeris is adopted; a high-order polynomial fit then better describes the motion trajectory. Lastly, the proposed FBP algorithm is simulated in a specific scenario at an acceptable resolution, and the result shows its feasibility for image compression.
21

Li, Bei, Yang Liu, Mrinal K. Sen, and Zhiming Ren. "Time-space-domain mesh-free finite difference based on least squares for 2D acoustic-wave modeling." GEOPHYSICS 82, no. 4 (July 1, 2017): T143—T157. http://dx.doi.org/10.1190/geo2016-0464.1.

Abstract:
Finite-difference (FD) methods approximate derivatives through a weighted summation of function values from neighboring nodes. Traditionally, these neighboring nodes are assumed to be distributed regularly, such as in square or rectangular lattices. To improve geometric flexibility, one option is to develop FD in a mesh-free discretization, in which scattered nodes can be placed suitably with respect to irregular boundaries or arbitrarily shaped anomalies without coordinate transformation or forming any triangles or tetrahedra, etc. These mesh-free FDs have had successful applications, especially in computational geoscience. However, they are all space-domain FD schemes, in which FD coefficients are derived by approximating spatial derivatives individually in the space domain. For acoustic-wave modeling, it has been proven that space-domain FD methods normally have higher dispersion error than time-space-domain FD methods, which determine FD coefficients by approximating the time-space-domain dispersion relation. Now, we have developed a time-space-domain mesh-free FD based on minimizing the absolute error of the dispersion relation by least-squares (LS) for 2D acoustic-wave modeling. The matrix used to solve for FD coefficients in our method is determined by the spatial distribution of the nodes in a local FD stencil, whereas the temporal step size and velocity information are considered in the right side of the linear system. This feature of considering both spatial and temporal effects allows our proposed mesh-free LS-based FD to obtain greater temporal accuracy adaptive to different Courant-Friedrichs-Lewy parameters than pure space-domain mesh-free FDs. Under several 2D acoustic scenarios, the advantage was proven by comparing our method with radial-basis-function-generated FD, which is one of the most popular mesh-free FDs and has been applied in elastic wave modeling.
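The scattered-node ingredient of mesh-free FD can be illustrated in 1D: weights are obtained by requiring exactness on monomials, solved in a least-squares sense. The paper's method additionally fits the time-space dispersion relation; this sketch covers only the conventional space-domain step it improves upon.

```python
import numpy as np
from math import factorial

def fd_weights_ls(offsets, deriv_order, poly_order):
    """Least-squares FD weights for d^k/dx^k at x=0 on scattered 1D nodes."""
    m = np.arange(poly_order + 1)
    fact = np.array([factorial(int(i)) for i in m])
    V = offsets[None, :] ** m[:, None] / fact[:, None]   # Taylor constraints
    rhs = (m == deriv_order).astype(float)
    return np.linalg.lstsq(V, rhs, rcond=None)[0]

# Irregular (mesh-free) stencil around 0; approximate exp''(0) = 1
nodes = 0.01 * np.array([-1.9, -0.7, -0.1, 0.6, 1.3])
w = fd_weights_ls(nodes, deriv_order=2, poly_order=4)
print(w @ np.exp(nodes))   # close to 1
```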
22

Cronin, V. "Computational Geosciences with Mathematica: (William C. Haneberg)." Environmental and Engineering Geoscience 12, no. 4 (November 1, 2006): 385–86. http://dx.doi.org/10.2113/gseegeosci.12.4.385.

23

Hengl, Tomislav, Madlene Nussbaum, Marvin N. Wright, Gerard B. M. Heuvelink, and Benedikt Gräler. "Random forest as a generic framework for predictive modeling of spatial and spatio-temporal variables." PeerJ 6 (August 29, 2018): e5518. http://dx.doi.org/10.7717/peerj.5518.

Abstract:
Random forest and similar machine learning techniques are already used to generate spatial predictions, but the spatial locations of points (geography) are often ignored in the modeling process. Spatial auto-correlation, especially if still present in the cross-validation residuals, indicates that the predictions may be biased, and this is suboptimal. This paper presents a random forest for spatial predictions framework (RFsp) in which buffer distances from observation points are used as explanatory variables, thus incorporating geographical proximity effects into the prediction process. The RFsp framework is illustrated with examples that use textbook datasets and apply spatial and spatio-temporal prediction to numeric, binary, categorical, multivariate and spatiotemporal variables. Performance of the RFsp framework is compared with state-of-the-art kriging techniques using fivefold cross-validation with refitting. The results show that RFsp can obtain predictions as accurate and unbiased as different versions of kriging. Advantages of using RFsp over kriging are that it needs no rigid statistical assumptions about the distribution and stationarity of the target variable, it is more flexible toward incorporating, combining and extending covariates of different types, and it possibly yields more informative maps characterizing the prediction error. RFsp appears to be especially attractive for building multivariate spatial prediction models that can be used as "knowledge engines" in various geoscience fields. Some disadvantages of RFsp are the exponentially growing computational intensity as calibration data and covariates increase, and the high sensitivity of predictions to input data quality. The key to the success of the RFsp framework might be the training data quality, especially the quality of spatial sampling (to minimize extrapolation problems and any type of bias in the data) and the quality of model validation (to ensure that accuracy is not affected by overfitting). For many datasets, especially those with a smaller number of points and covariates and close-to-linear relationships, model-based geostatistics can still lead to more accurate predictions than RFsp.
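A miniature of the RFsp idea fits in a few lines of scikit-learn: the buffer distances from every location to all observation points become the explanatory variables. The synthetic field below is only a toy; the published RFsp framework is an R implementation with additional covariates and careful cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
obs_xy = rng.uniform(0.0, 100.0, (60, 2))                 # observation locations
obs_val = np.sin(obs_xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(60)

def buffer_features(xy):
    # distance from each query point to every observation location
    return np.linalg.norm(xy[:, None, :] - obs_xy[None, :, :], axis=2)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(buffer_features(obs_xy), obs_val)

gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pred = rf.predict(buffer_features(grid))                  # spatial prediction map
print(pred.reshape(25, 25).shape)
```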
24

Eke, Zoltán, and István Havasi. "Development of Underwater Surface Measurement : Multibeam Sonar." Geosciences and Engineering 9, no. 14 (2021): 81–95. http://dx.doi.org/10.33030/geosciences.2021.14.081.

Abstract:
This study was written as a result of the cooperation between the staff of the two scientific institutions involved; its first author is also a PhD student of the IGGIS. This year the Bay Zoltán Applied Research Public Benefit Nonprofit Ltd. purchased a Norbit-iWBMSe multibeam ultrasonic system. The primary purpose of this paper is to describe this multibeam ultrasonic system and to examine its application for mine surveying purposes, particularly for measurements in underwater mining situations, such as the official determination of the amount of extracted mineral resources, which also serves as the basis for determining the payable annual mining rent. In our study we first briefly review the new surveying technologies that have recently appeared in Hungarian mine surveying and are likely to be used more and more in the future. We then deal with the history of underwater topographic mapping. Afterwards, the multibeam sonar survey device and its advantages are discussed, and two waterbed surveying examples from our own practice are shown. Finally, the legal background of mining volume computation is briefly described, and the role of multibeam sonar in the mining exploitation process is highlighted.
25

Behrens, Jörn, and Michael Bader. "Efficiency considerations in triangular adaptive mesh refinement." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1907 (November 28, 2009): 4577–89. http://dx.doi.org/10.1098/rsta.2009.0175.

Abstract:
Locally or adaptively refined meshes have been successfully applied to simulation applications involving multi-scale phenomena in the geosciences. In particular, for situations with complex geometries or domain boundaries, meshes with triangular or tetrahedral cells demonstrate their superior ability to accurately represent relevant realistic features. On the other hand, these methods require more complex data structures and are therefore less easily implemented, maintained and optimized. Acceptance in the Earth-system modelling community is still low. One of the major drawbacks is posed by indirect addressing due to unstructured or dynamically changing data structures and correspondingly lower efficiency of the related computations. In this paper, we will derive several strategies to circumvent the mentioned efficiency constraint. In particular, we will apply recent computational sciences methods in combination with results of classical mathematics (space-filling curves) in order to linearize the complex data and access structure.
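The space-filling-curve remedy for indirect addressing can be illustrated with the simplest such curve. Sorting cells by a Morton (Z-order) index keeps spatially nearby cells nearby in memory; the paper uses Sierpinski-type curves matched to triangular refinement, which this quad-grid toy does not reproduce.

```python
def morton_index(i, j, bits=16):
    """Interleave the bits of (i, j) into a Z-order (Morton) index."""
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b)        # even bit positions from i
        code |= ((j >> b) & 1) << (2 * b + 1)    # odd bit positions from j
    return code

# Linearize a 4x4 block of cells along the Z-order curve
cells = [(i, j) for i in range(4) for j in range(4)]
for cell in sorted(cells, key=lambda c: morton_index(*c)):
    print(cell, morton_index(*cell))
```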
26

Lee, Eun Young, Annarita D’Addabbo, and Dimitrios Piretzidis. "Editorial of Special Issue “Advances and Applications in Computational Geosciences”." Geosciences 12, no. 12 (December 18, 2022): 457. http://dx.doi.org/10.3390/geosciences12120457.

Abstract:
In geoscientific fields, mathematical modelling, numerical analysis, visualization, simulation, and other computational techniques have become necessary to improve our understanding of phenomena and evolution of the Earth [...]
27

Carneiro, Celso Dal Ré, Kauan Martins dos Santos, Thiago Rivaben Lopes, Filipe Constantino dos Santos, Jorge Vicente Lopes da Silva, and Ana Lucia Nogueira de Camargo Harris. "Three-dimensional physical models of sedimentary basins as a resource for teaching-learning of geology." Terrae Didatica 14, no. 4 (November 29, 2018): 379–84. http://dx.doi.org/10.20396/td.v14i4.8654098.

Abstract:
Three-dimensional modeling connects several fields of knowledge, both basic and applied. 3D models are relevant in educational research because the manipulation of 3D objects favors students' acquisition of spatial vision, but in the Geosciences there are few didactic publications in Portuguese on the subject. The authors develop an educational research project to produce three-dimensional models of didactic examples of sedimentary basins: the Paraná Basin (Silurian-Upper Cretaceous) and the Taubaté and São Paulo basins (Neogene). 3D-compatible files will be produced to compose didactic and display material, from maps and geological-structural profiles of certain regional stratigraphic levels of each basin. The research challenges are: (a) to obtain an overview of the available resources for 3D modeling; (b) to evaluate their potential, characteristics, advantages and limitations for applications in Geology and the Geosciences; (c) to create computational models of the basins; (d) to produce at least one physical model based on one of the computational models of each basin. The resources will support training workshops for in-service teachers, technical-scientific articles and Internet pages.
28

Heagy, Lindsey J., and Douglas W. Oldenburg. "Direct current resistivity with steel-cased wells." Geophysical Journal International 219, no. 1 (June 18, 2019): 1–26. http://dx.doi.org/10.1093/gji/ggz281.

Abstract:
The work in this paper is motivated by the increasing use of electrical and electromagnetic methods in geoscience problems where steel-cased wells are present. Applications of interest include monitoring carbon capture and storage and hydraulic fracturing operations. Also of interest is detecting flaws or breaks in degrading steel casings, as such wells pose serious environmental hazards. The general principles of electrical methods with steel-cased wells are understood, and several authors have demonstrated that the presence of steel-cased wells can be beneficial for detecting signal due to targets at depth. However, the success of a direct current (DC) resistivity survey lies in the details. Secondary signals might only be a few per cent of the primary signal. In designing a survey, the geometry of the source and receivers, and whether the source is at the top of the casing, inside of it, or beneath the casing, will impact measured responses. Also, the physical properties and geometry of the background geology, target and casing will have a large impact on the measured data. Because of the small values of the diagnostic signals, it is important to understand the detailed physics of the problem and also to be able to carry out accurate simulations. This latter task is computationally challenging because of the extreme geometry of the wells, which extend kilometers in depth but have millimeter variations in the radial direction, and the extreme variation in the electrical conductivity, which is typically 5–7 orders of magnitude larger than that of the background geology. In this paper, we adopt a cylindrical discretization for numerical simulations to investigate three important aspects of DC resistivity in settings with steel-cased wells. (1) We examine the feasibility of using a surface-based DC resistivity survey for diagnosing impairments along a well in a casing integrity experiment. This parameter study demonstrates the impact of the background conductivity, the conductivity of the casing, the depth of the flaw, and the proportion of the casing circumference that is compromised on the amplitude of the secondary electric fields measured at the surface. (2) Next, we consider elements of survey design for exciting a conductive or resistive target at depth. We show that conductive targets generate stronger secondary responses than resistive targets, and that having an electrical connection between the target and well can significantly increase the measured secondary responses. (3) Finally, we examine common strategies for approximating the fine-scale structure of a steel-cased well with a coarse-scale representation to reduce computational load. We show that for DC resistivity experiments, the product of the conductivity and the cross-sectional area of the casing is the important quantity for controlling the distribution of currents and charges along its length. To promote insight into the physics, we present results by plotting the currents, charges, and electric fields in each of the scenarios examined. All of the examples shown in this paper are built on open-source software and are available as Jupyter notebooks.
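The coarse-representation rule reported in point (3), preserving the product of conductivity and cross-sectional area, lends itself to a small worked example. The dimensions below are plausible placeholders, not values from the paper.

```python
import numpy as np

sigma_steel = 5.0e6                 # steel conductivity, S/m (placeholder)
r_out, r_in = 0.10, 0.09            # casing radii, m (1 cm wall, placeholder)

area = np.pi * (r_out**2 - r_in**2) # annular cross-section of the casing
product = sigma_steel * area        # the invariant: conductivity * area

# Equivalent solid cylinder of radius r_out with the same product
sigma_equiv = product / (np.pi * r_out**2)
print(f"sigma*A = {product:.1f} S m  ->  sigma_equiv = {sigma_equiv:.3e} S/m")
```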
29

Neves, Rui G. M., Maria C. Neves, and Vítor Duarte Teodoro. "Modellus: Interactive computational modelling to improve teaching of physics in the geosciences." Computers & Geosciences 56 (July 2013): 119–26. http://dx.doi.org/10.1016/j.cageo.2013.03.010.

30

Sambridge, M., T. Bodin, K. Gallagher, and H. Tkalčić. "Transdimensional inference in the geosciences." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, no. 1984 (February 13, 2013): 20110547. http://dx.doi.org/10.1098/rsta.2011.0547.

Abstract:
Seismologists construct images of the Earth's interior structure using observations, derived from seismograms, collected at the surface. A common approach to such inverse problems is to build a single ‘best’ Earth model, in some sense. This is despite the fact that the observations by themselves often do not require, or even allow, a single best-fit Earth model to exist. Interpretation of optimal models can be fraught with difficulties, particularly when formal uncertainty estimates become heavily dependent on the regularization imposed. Similar issues occur across the physical sciences with model construction in ill-posed problems. An alternative approach is to embrace the non-uniqueness directly and employ an inference process based on parameter space sampling. Instead of seeking a best model within an optimization framework, one seeks an ensemble of solutions and derives properties of that ensemble for inspection. While this idea has itself been employed for more than 30 years, it is now receiving increasing attention in the geosciences. Recently, it has been shown that transdimensional and hierarchical sampling methods have some considerable benefits for problems involving multiple parameter types, uncertain data errors and/or uncertain model parametrizations, as are common in seismology. Rather than being forced to make decisions on parametrization, the level of data noise and the weights between data types in advance, as is often the case in an optimization framework, the choice can be informed by the data themselves. Despite the relatively high computational burden involved, the number of areas where sampling methods are now feasible is growing rapidly. The intention of this article is to introduce concepts of transdimensional inference to a general readership and illustrate with particular seismological examples. A growing body of references provide necessary detail.
31

Degen, Denise, Karen Veroy, and Florian Wellmann. "Certified reduced basis method in geosciences." Computational Geosciences 24, no. 1 (December 13, 2019): 241–59. http://dx.doi.org/10.1007/s10596-019-09916-6.

32

Caers, Jef. "A special issue on benchmark problems, datasets and methodologies for the computational geosciences." Computers & Geosciences 50 (January 2013): 1–3. http://dx.doi.org/10.1016/j.cageo.2012.11.001.

33

Lai, Leo O., and Jed O. Kaplan. "A Fast Mean-Preserving Spline for Interpolating Interval Data." Journal of Atmospheric and Oceanic Technology 39, no. 4 (April 2022): 503–12. http://dx.doi.org/10.1175/jtech-d-21-0154.1.

Abstract:
Interpolation of interval data where the mean is preserved, e.g., estimating smoothed, pseudodaily meteorological variables based on monthly means, is a common problem in the geosciences. Existing methods for mean-preserving interpolation are computationally intensive and/or do not readily accommodate bounded interpolation, where the interpolated data cannot exceed a threshold value. Here we present a mean-preserving, continuous, easily implementable, and computationally efficient method for interpolating one-dimensional interval data. Our new algorithm provides a straightforward solution to the interpolation problem by utilizing Hermite cubic splines and midinterval control points to interpolate interval data into smaller partitions. We further include adjustment schemes to restrict the interpolated result to user-specified minimum and maximum bounds. Our method is fast, portable, and broadly applicable to a range of geoscientific data, including unbounded time series such as mean temperature and bounded data such as mean wind speed or cloud-cover fraction.
Significance Statement: Interpolation is often utilized to mathematically estimate smaller time step values when such data are not readily available, for example, the estimation of daily temperature when only monthly temperature values are available. We propose a novel interpolation method based on linking segments of flexible continuous curves that ensures that the average of the interpolated result is the same as the original value, which is important for minimizing interpolation errors. We find that our new method takes significantly less computational time than other existing methods, while retaining a similar degree of precision. Furthermore, we outline an additional procedure for users to specify the minimum and maximum bounds of the interpolated results if applicable.
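The paper's one-pass Hermite-spline construction is its contribution; as a point of comparison, the classic iterative smooth-and-correct scheme below (in the spirit of Rymes and Myers) shows what the mean-preservation constraint demands, at a much higher iteration cost than the spline approach.

```python
import numpy as np

monthly = np.array([2.0, 4.0, 9.0, 14.0, 18.0, 21.0])   # interval means
days = 30
daily = np.repeat(monthly, days).astype(float)

for _ in range(200):
    daily = np.convolve(daily, np.ones(3) / 3.0, mode="same")  # smooth
    for k in range(monthly.size):                               # re-impose means
        seg = slice(k * days, (k + 1) * days)
        daily[seg] += monthly[k] - daily[seg].mean()

print(daily.reshape(-1, days).mean(axis=1))   # interval means are preserved
```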
34

Guldan, Vladimir, and Mariana Marcokova. "Orthogonal Polynomials and Related Special Functions Applied in Geosciences and Engineering Computations." Communications - Scientific letters of the University of Zilina 12, no. 1 (March 31, 2010): 12–15. http://dx.doi.org/10.26552/com.c.2010.1.12-15.

35

Wong, Tony E., Alexander M. R. Bakker, Kelsey Ruckert, Patrick Applegate, Aimée B. A. Slangen, and Klaus Keller. "BRICK v0.2, a simple, accessible, and transparent model framework for climate and regional sea-level projections." Geoscientific Model Development 10, no. 7 (July 17, 2017): 2741–60. http://dx.doi.org/10.5194/gmd-10-2741-2017.

Abstract:
Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.
36

Rehme, Michael, Stephen Roberts, and Dirk Pflüger. "Uncertainty quantification for the Hokkaido Nansei-Oki tsunami using B-splines on adaptive sparse grids." ANZIAM Journal 62 (June 29, 2021): C30—C44. http://dx.doi.org/10.21914/anziamj.v62.16121.

Abstract:
Modeling uncertainties in the input parameters of computer simulations is an established way to account for inevitably limited knowledge. To overcome long run-times and high demand for computational resources, a surrogate model can replace the original simulation. We use spatially adaptive sparse grids for the creation of this surrogate model. Sparse grids are a discretization scheme designed to mitigate the curse of dimensionality, and spatial adaptivity further decreases the necessary number of expensive simulations. We combine this with B-spline basis functions, which provide gradients and are exactly integrable. We demonstrate the capability of this uncertainty quantification approach for a simulation of the Hokkaido Nansei-Oki tsunami with ANUGA. We develop a better understanding of the tsunami behavior by calculating key quantities such as the mean, percentiles and maximum run-up. We compare our approach to the popular Dakota toolbox and reach slightly better results for all quantities of interest.
37

Darvishi, M., and G. Ahmadi. "Data assimilation techniques and modelling uncertainty in geosciences." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-2/W3 (October 22, 2014): 85–90. http://dx.doi.org/10.5194/isprsarchives-xl-2-w3-85-2014.

Abstract:
"You cannot step into the same river twice". Perhaps this ancient quote is the best phrase to describe the dynamic nature of the earth system. If we regard the earth as a several mixed systems, we want to know the state of the system at any time. The state could be time-evolving, complex (such as atmosphere) or simple and finding the current state requires complete knowledge of all aspects of the system. On one hand, the Measurements (in situ and satellite data) are often with errors and incomplete. On the other hand, the modelling cannot be exact; therefore, the optimal combination of the measurements with the model information is the best choice to estimate the true state of the system. Data assimilation (DA) methods are powerful tools to combine observations and a numerical model. Actually, DA is an interaction between uncertainty analysis, physical modelling and mathematical algorithms. DA improves knowledge of the past, present or future system states. DA provides a forecast the state of complex systems and better scientific understanding of calibration, validation, data errors and their probability distributions. Nowadays, the high performance and capabilities of DA have led to extensive use of it in different sciences such as meteorology, oceanography, hydrology and nuclear cores. In this paper, after a brief overview of the DA history and a comparison with conventional statistical methods, investigated the accuracy and computational efficiency of two main classical algorithms of DA involving stochastic DA (BLUE and Kalman filter) and variational DA (3D and 4D-Var), then evaluated quantification and modelling of the errors. Finally, some of DA applications in geosciences and the challenges facing the DA are discussed.
APA, Harvard, Vancouver, ISO, and other styles
38

Revello Lami, Martina. "A Conversation with Elisabeth Niklasson." Ex Novo: Journal of Archaeology 6 (February 11, 2022): 253. http://dx.doi.org/10.32028/vol6isspp253.

Full text
Abstract:
Elisabeth Niklasson is a lecturer at the School of Geosciences at the University of Aberdeen. She holds two master's degrees, in Cultural Heritage and Computational Archaeology, from Gothenburg University, and a PhD in Archaeology from Stockholm University. She has long been interested in the ethical and political aspects of archaeological heritage, particularly relating to contemporary identity politics. A core part of her research has focused on European heritage policies, studying the influence of European Union (EU) funding schemes on archaeology. She has also reflected critically on the reception of aDNA in archaeology. In recent years, she has expanded her focus to analyse uses of the past by populist and extreme right movements in Europe, making vital contributions to our understanding of heritage in times of political polarization.
APA, Harvard, Vancouver, ISO, and other styles
39

Karpatne, Anuj, Imme Ebert-Uphoff, Sai Ravela, Hassan Ali Babaie, and Vipin Kumar. "Machine Learning for the Geosciences: Challenges and Opportunities." IEEE Transactions on Knowledge and Data Engineering 31, no. 8 (August 1, 2019): 1544–54. http://dx.doi.org/10.1109/tkde.2018.2861006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Azevedo, Leonardo. "Model reduction in geostatistical seismic inversion with functional data analysis." GEOPHYSICS 87, no. 1 (November 10, 2021): M1—M11. http://dx.doi.org/10.1190/geo2021-0096.1.

Full text
Abstract:
In subsurface modeling and characterization, predicting the spatial distribution of subsurface elastic properties is commonly achieved by seismic inversion. Stochastic seismic inversion methods, such as iterative geostatistical seismic inversion (GSI), are widely applied to this end. Global iterative GSI methods are computationally expensive because they require, at a given iteration, the stochastic sequential simulation of the entire inversion grid at once, multiple times. Functional data analysis (FDA) is a well-established statistical method suited to modeling long-term and noisy temporal series. This method allows us to summarize spatiotemporal series in a set of analytical functions with a low-dimensional representation. FDA has recently been extended to problems in the geosciences, but its application to geophysics is still limited. We develop the use of FDA as a model reduction technique during the model perturbation step in global iterative GSI, where FDA is used to collapse the vertical dimension of the inversion grid. We illustrate our hybrid inversion method with applications to 3D synthetic and real data sets. The results indicate the ability of our inversion methodology to predict smooth inverted subsurface models that match the observed data with convergence similar to that of a global iterative GSI, but at a considerably lower computational cost. Although the resolution of the inverted models might not be sufficient for a detailed subsurface characterization, the inverted models can be used as a starting point for global iterative GSI to speed up the inversion, or to test alternative geologic scenarios by changing the inversion parameterization and obtaining inverted models in a relatively short time.
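The model-reduction step is easy to illustrate: each vertical trace of the property grid is projected onto a few smooth basis functions, so subsequent perturbation acts on short coefficient vectors rather than on every grid cell. The sketch below uses a cosine basis and synthetic traces as stand-ins for the paper's FDA basis and elastic-property models.

```python
import numpy as np

nz, ntraces, ncoef = 200, 50, 8                 # depth samples, traces, basis size
z = np.linspace(0.0, 1.0, nz)

# Smooth cosine basis as a stand-in for the functional (e.g. B-spline) basis.
basis = np.column_stack([np.cos(k * np.pi * z) for k in range(ncoef)])  # (nz, ncoef)

# Synthetic "subsurface" traces: smooth trend plus small-scale noise.
rng = np.random.default_rng(1)
traces = (2.0 + 1.5 * z[:, None]
          + 0.3 * np.sin(4 * np.pi * z)[:, None]
          + 0.05 * rng.standard_normal((nz, ntraces)))

# Project every trace onto the basis by least squares: nz values -> ncoef numbers.
coef, *_ = np.linalg.lstsq(basis, traces, rcond=None)   # (ncoef, ntraces)
reconstructed = basis @ coef

rel_err = np.linalg.norm(traces - reconstructed) / np.linalg.norm(traces)
print(f"vertical dimension collapsed {nz} -> {ncoef}; relative error {rel_err:.3f}")
```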
APA, Harvard, Vancouver, ISO, and other styles
41

Soldatenko, Sergei, Peter Steinle, Chris Tingwell, and Denis Chichkine. "Some Aspects of Sensitivity Analysis in Variational Data Assimilation for Coupled Dynamical Systems." Advances in Meteorology 2015 (2015): 1–22. http://dx.doi.org/10.1155/2015/753031.

Full text
Abstract:
Variational data assimilation (VDA) remains one of the key topics in many fields of the geosciences, including numerical weather prediction. While the theory of VDA is well established, there are a number of issues with practical implementation that require additional consideration and study. However, the exploration of VDA requires considerable computational resources. For sufficiently simple low-order models, the computational cost is minor, and therefore models of this class are used as test instruments to emulate more complex systems. In this paper, the sensitivity with respect to variations in the parameters of one of the main components of VDA, the nonlinear forecasting model, is considered. For chaotic atmospheric dynamics, conventional methods of sensitivity analysis provide uninformative results, since the envelopes of the sensitivity functions grow with time and the sensitivity functions themselves exhibit oscillatory behaviour. The use of a sensitivity analysis method developed on the basis of the theory of shadowing pseudo-orbits in dynamical systems allows sensitivity functions to be calculated correctly. Sensitivity estimates for a simple coupled dynamical system are calculated and presented in the paper. To estimate the influence of model parameter uncertainties on the forecast, the relative error in the energy norm is applied.
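The failure mode described here can be reproduced in a few lines. The sketch below (using the Lorenz-63 system as a generic chaotic stand-in, not the paper's coupled model) computes a finite-difference sensitivity of the state with respect to a model parameter; its envelope grows roughly exponentially and oscillates, which is exactly why shadowing-based methods are needed. The shadowing computation itself is not sketched.

```python
import numpy as np

def lorenz(state, rho, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, rho, dt=0.01, nsteps=3000):
    """Classical fourth-order Runge-Kutta integration, returning the trajectory."""
    traj = np.empty((nsteps, 3))
    for i in range(nsteps):
        k1 = lorenz(state, rho)
        k2 = lorenz(state + 0.5 * dt * k1, rho)
        k3 = lorenz(state + 0.5 * dt * k2, rho)
        k4 = lorenz(state + dt * k3, rho)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = state
    return traj

x0 = np.array([1.0, 1.0, 1.0])
drho = 1e-6
base = integrate(x0, 28.0)
pert = integrate(x0, 28.0 + drho)
sens = (pert[:, 0] - base[:, 0]) / drho   # finite-difference sensitivity dx/drho

for step in (100, 500, 1000, 2000):       # envelope grows until nonlinear saturation
    print(f"t = {step * 0.01:5.1f}   dx/drho ~ {sens[step]: .3e}")
```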
APA, Harvard, Vancouver, ISO, and other styles
42

Peszynska, M., and M. F. Wheeler. "Special Issue: High-Performance Computing in Geosciences." Concurrency and Computation: Practice and Experience 17, no. 11 (September 2005): 1363–64. http://dx.doi.org/10.1002/cpe.896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hastings, J. B., L. Rivkin, and G. Aeppli. "Present and Future Accelerator-Based X-ray Sources: A Perspective." Reviews of Accelerator Science and Technology 10, no. 01 (August 2019): 33–48. http://dx.doi.org/10.1142/s1793626819300044.

Full text
Abstract:
Accelerator-based X-ray sources have contributed uniquely to the physical, engineering and life sciences. There has been a constant development of the sources themselves as well as of the necessary X-ray optics and detectors. These advances have combined to push X-ray science to the forefront in structural studies, achieving atomic resolution for complex protein molecules, to meV-scale dynamics addressing problems ranging from geoscience to high-temperature superconductors, and to spatial resolutions approaching 10 nm for elemental mapping as well as three-dimensional structures. Here we discuss accelerator-based photon science in the frame of imaging and highlight the importance of optics, detectors and computation/data science as well as the source technology. We look to a bright future for X-ray systems, integrating all components from accelerator sources to digital image production algorithms, and highlight aspects that make them unique scientific tools.
APA, Harvard, Vancouver, ISO, and other styles
44

Gibson, Thomas H., Lawrence Mitchell, David A. Ham, and Colin J. Cotter. "Slate: extending Firedrake's domain-specific abstraction to hybridized solvers for geoscience and beyond." Geoscientific Model Development 13, no. 2 (February 25, 2020): 735–61. http://dx.doi.org/10.5194/gmd-13-735-2020.

Full text
Abstract:
Within the finite element community, discontinuous Galerkin (DG) and mixed finite element methods have become increasingly popular in simulating geophysical flows. However, robust and efficient solvers for the resulting saddle point and elliptic systems arising from these discretizations continue to be an ongoing challenge. One possible approach for addressing this issue is to employ a method known as hybridization, where the discrete equations are transformed such that classic static condensation and local post-processing methods can be employed. However, it is challenging to implement hybridization as performant parallel code within complex models whilst maintaining a separation of concerns between applications scientists and software experts. In this paper, we introduce a domain-specific abstraction within the Firedrake finite element library that permits the rapid execution of these hybridization techniques within a code-generating framework. The resulting framework composes naturally with Firedrake's solver environment, allowing for the implementation of hybridization and static condensation as runtime-configurable preconditioners via the Python interface to the Portable, Extensible Toolkit for Scientific Computation (PETSc), petsc4py. We provide examples derived from second-order elliptic problems and geophysical fluid dynamics. In addition, we demonstrate that hybridization shows great promise for improving the performance of solvers for mixed finite element discretizations of equations related to large-scale geophysical flows.
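In practice, the hybridization machinery is exposed as a runtime solver option. The sketch below follows the style of the Firedrake documentation's hybridized mixed-Poisson example: a mixed RT-DG discretization is solved with `firedrake.HybridizationPC` selected through PETSc options. It assumes a working Firedrake installation, and option names should be checked against your Firedrake version.

```python
from firedrake import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "RT", 1)    # Raviart-Thomas fluxes
Q = FunctionSpace(mesh, "DG", 0)    # discontinuous pressures
W = V * Q

sigma, u = TrialFunctions(W)
tau, v = TestFunctions(W)
x, y = SpatialCoordinate(mesh)
f = Function(Q).interpolate(10.0 * exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02))

# Mixed Poisson: sigma = grad(u), div(sigma) = -f, with natural boundary conditions.
a = (dot(sigma, tau) + div(tau) * u + div(sigma) * v) * dx
L = -f * v * dx

w = Function(W)
params = {
    "mat_type": "matfree",
    "ksp_type": "preonly",
    "pc_type": "python",
    "pc_python_type": "firedrake.HybridizationPC",  # the Slate-based preconditioner
    "hybridization": {"ksp_type": "cg", "pc_type": "gamg"},
}
solve(a == L, w, solver_parameters=params)
sigma_h, u_h = w.subfunctions   # w.split() on older Firedrake versions
```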
APA, Harvard, Vancouver, ISO, and other styles
45

Zhao, ChongBin. "Advances in numerical algorithms and methods in computational geosciences with modeling characteristics of multiple physical and chemical processes." Science China Technological Sciences 58, no. 5 (March 14, 2015): 783–95. http://dx.doi.org/10.1007/s11431-015-5784-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tsyganenko, N. A. "Data-based modelling of the Earth's dynamic magnetosphere: a review." Annales Geophysicae 31, no. 10 (October 21, 2013): 1745–72. http://dx.doi.org/10.5194/angeo-31-1745-2013.

Full text
Abstract:
This paper reviews the main advances in the area of data-based modelling of the Earth's distant magnetic field achieved during the last two decades. The essence and the principal goal of the approach is to extract maximum information from available data, using physically realistic and flexible mathematical structures, parameterized by the most relevant and routinely accessible observables. Accordingly, the paper concentrates on three aspects of the modelling: (i) mathematical methods to develop a computational "skeleton" of a model, (ii) spacecraft databases, and (iii) parameterization of the magnetospheric models by the solar wind drivers and/or ground-based indices. The review is followed by a discussion of the main issues concerning further progress in the area, in particular, methods to assess the models' performance and the accuracy of the field line mapping. The material presented in the paper is organized along the lines of the author's Julius Bartels Medal Lecture at the 2013 General Assembly of the European Geosciences Union.
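The essence of the approach, stripped of all magnetospheric detail, is fitting the free coefficients of a flexible mathematical structure to spacecraft data as a function of solar wind drivers. The toy sketch below (entirely synthetic, and not any of the author's actual models) fits the amplitudes of such a "skeleton" by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
pdyn = rng.uniform(0.5, 5.0, n)    # solar wind dynamic pressure (nPa), driver 1
bz = rng.uniform(-8.0, 8.0, n)     # interplanetary magnetic field Bz (nT), driver 2

# Synthetic "spacecraft" measurements of an external field component (nT) with noise.
b_obs = (-20.0 - 6.0 * np.sqrt(pdyn) + 2.5 * np.minimum(bz, 0.0)
         + rng.normal(0.0, 3.0, n))

# Model skeleton: B = c0 + c1*sqrt(Pdyn) + c2*min(Bz, 0); fit the c's to the data.
A = np.column_stack([np.ones(n), np.sqrt(pdyn), np.minimum(bz, 0.0)])
c, *_ = np.linalg.lstsq(A, b_obs, rcond=None)
print("fitted coefficients:", np.round(c, 2))   # recovers ~(-20.0, -6.0, 2.5)
```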
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Xin, and Andrew Curtis. "Interrogating probabilistic inversion results for subsurface structural information." Geophysical Journal International 229, no. 2 (December 10, 2021): 750–57. http://dx.doi.org/10.1093/gji/ggab496.

Full text
Abstract:
The goal of a scientific investigation is to find answers to specific questions. In geosciences, this is typically achieved by solving an inference or inverse problem and interpreting the solution. However, the answer obtained is often biased because the solution to an inverse problem is non-unique and human interpretation is a biased process. Interrogation theory provides a systematic way to find optimal answers by considering their full uncertainty estimates and by designing an objective function that defines desirable qualities in the answer. In this study, we demonstrate interrogation theory by quantifying the size of a particular subsurface structure. The results show that interrogation theory provides an accurate estimate of the true answer, which cannot be obtained by direct, subjective interpretation of the solution mean and standard deviation. This demonstrates the value of interrogation theory. It also shows that fully nonlinear uncertainty assessments may be critical in order to address real-world scientific problems; this goes some way towards justifying their computational expense.
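The contrast between interpreting a mean model and interrogating the full posterior can be shown with a toy example. In the sketch below (synthetic numbers, not the paper's field example), the question "how large is the low-velocity structure?" is answered by evaluating the target functional over the whole posterior ensemble rather than on the posterior mean alone; because the functional is nonlinear, the two answers differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic posterior ensemble of a 1D velocity profile (n_samples x n_cells).
n_samples, n_cells = 2000, 100
cells = np.arange(n_cells)
mean_profile = 3.0 - 0.4 * np.exp(-((cells - 50) / 8.0) ** 2)  # low-velocity anomaly
ensemble = mean_profile + 0.15 * rng.standard_normal((n_samples, n_cells))

def structure_size(profile, threshold=2.8):
    """Target functional T(m): number of cells below the velocity threshold."""
    return np.sum(profile < threshold)

# Biased answer: apply T to the posterior mean model only.
naive = structure_size(ensemble.mean(axis=0))

# Interrogation-style answer: posterior expectation of T (optimal under squared loss).
optimal = np.mean([structure_size(m) for m in ensemble])

print("T(posterior mean):", naive)
print("E[T(m)]          :", optimal)   # differs, because T is nonlinear
```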
APA, Harvard, Vancouver, ISO, and other styles
48

Ni, Lin, Xue Zhang, Liangchao Zou, and Jinsong Huang. "Phase-field modeling of hydraulic fracture network propagation in poroelastic rocks." Computational Geosciences 24, no. 5 (April 19, 2020): 1767–82. http://dx.doi.org/10.1007/s10596-020-09955-4.

Full text
Abstract:
Modeling of hydraulic fracturing processes is of great importance in computational geosciences. In this paper, a phase-field model is developed and applied to investigate hydraulic fracture propagation in saturated poroelastic rocks with pre-existing fractures. The phase-field model replaces discrete, discontinuous fractures with a continuous, diffuse damage field, and is thus capable of simulating complex cracking phenomena such as crack branching and coalescence. Specifically, hydraulic fracture propagation in a rock sample with a single pre-existing natural fracture or with natural fracture networks is simulated using the proposed model. It is shown that the distance between fractures plays a significant role in determining the propagation direction of a hydraulic fracture. While the rock permeability has a limited influence on the final crack topology induced by hydraulic fracturing, it considerably impacts the distribution of fluid pressure in the rocks. The propagation of hydraulic fractures driven by the injected fluid increases the connectivity of the natural fracture networks, which consequently enhances the effective permeability of the rocks.
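The core regularization idea is compact enough to verify directly. The sketch below (the standard AT2 phase-field construction, not the paper's full poroelastic solver) represents a single crack at x = 0 by the diffuse damage profile d(x) = exp(-|x|/l) and checks numerically that the regularized crack-surface functional integrates to one unit of crack surface.

```python
import numpy as np

l = 0.05                                  # phase-field regularization length
x = np.linspace(-1.0, 1.0, 20001)
d = np.exp(-np.abs(x) / l)                # diffuse representation of a crack at x = 0

dd = np.gradient(d, x)                    # d'(x)
crack_density = d**2 / (2 * l) + (l / 2) * dd**2
gamma = np.sum(crack_density) * (x[1] - x[0])   # regularized crack-surface measure

print(f"regularized crack surface ~ {gamma:.3f} (exact value 1 in the limit l -> 0)")
```

As l shrinks, the damage band narrows and the discrete crack is recovered; for finite l the field stays smooth, which is what lets branching and coalescence emerge without explicit crack tracking.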
APA, Harvard, Vancouver, ISO, and other styles
49

Banwart, Steven A., Nikolaos P. Nikolaidis, Yong-Guan Zhu, Caroline L. Peacock, and Donald L. Sparks. "Soil Functions: Connecting Earth's Critical Zone." Annual Review of Earth and Planetary Sciences 47, no. 1 (May 30, 2019): 333–59. http://dx.doi.org/10.1146/annurev-earth-063016-020544.

Full text
Abstract:
Soil is the central interface of Earth's critical zone—the planetary surface layer extending from unaltered bedrock to the vegetation canopy—and is under intense pressure from human demand for biomass, water, and food resources. Soil functions are flows and transformations of mass, energy, and genetic information that connect soil to the wider critical zone, transmitting the impacts of human activity at the land surface and providing a control point for beneficial human intervention. Soil functions are manifest during bedrock weathering and, in fully developed soil profiles, correlate with the porosity architecture of soil structure and arise from the development of soil aggregates as fundamental ecological units. Advances in knowledge on the mechanistic processes of soil functions, their connection throughout the critical zone, and their quantitative representation in mathematical and computational models define research frontiers that address the major global challenges of critical zone resource provisioning for human benefit.
▪ Connecting the mechanisms of soil functions with critical zone processes defines integrating science to tackle challenges of climate change and food and water supply.
▪ Soil functions, which develop through formation of soil aggregates as fundamental ecological units, are manifest at the earliest stages of critical zone evolution.
▪ Global degradation of soil functions during the Anthropocene is reversible through positive human intervention in soil as a central control point in Earth's critical zone.
▪ Measurement and mathematical translation of soil functions and critical zone processes offer new computational approaches for basic and applied geosciences research.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhao, ChongBin, B. E. Hobbs, and A. Ord. "Investigating dynamic mechanisms of geological phenomena using methodology of computational geosciences: An example of equal-distant mineralization in a fault." Science in China Series D: Earth Sciences 51, no. 7 (July 2008): 947–54. http://dx.doi.org/10.1007/s11430-008-0070-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
