Journal articles on the topic '3D Depth Model'

To see the other types of publications on this topic, follow the link: 3D Depth Model.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic '3D Depth Model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Mukhtar, N. F., S. Azri, U. Ujang, M. G. Cuétara, G. M. Retortillo, and S. Mohd Salleh. "3D MODEL FOR INDOOR SPACES USING DEPTH SENSOR." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W16 (October 1, 2019): 471–79. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w16-471-2019.

Full text
Abstract:
In recent years, 3D models of indoor spaces have become highly sought after as technology has developed. Many approaches to 3D visualisation and modelling of indoor environments have been developed, including laser scanning, photogrammetry, computer vision, and image-based methods. However, most of these techniques rely on the experience of the operator to obtain the best result, and the equipment is quite expensive and time-consuming in terms of processing. This paper focuses on the data acquisition and visualisation of a 3D model of an indoor space using a depth sensor. In this study, the EyesMap3D Pro by Ecapture is used to collect 3D data of the indoor spaces. The EyesMap3D Pro depth sensor can generate 3D point clouds at high speed and with high mobility owing to the portability and light weight of the device. However, attention must be paid to the acquisition, processing, visualisation, and evaluation of the depth sensor data. Hence, this paper discusses the data processing pipeline from extracting features from 3D point clouds to building 3D indoor models. Afterwards, the 3D models are evaluated to ensure their suitability for indoor modelling and indoor mapping applications. In this study, the 3D model was exported to a 3D GIS-ready format for displaying and storing additional information about the indoor spaces.
2

Liu, Yong Guang, Ming Quan Zhou, and Ya Chun Fan. "Using Depth Image in 3D Model Retrieval System." Advanced Materials Research 268-270 (July 2011): 981–87. http://dx.doi.org/10.4028/www.scientific.net/amr.268-270.981.

Full text
Abstract:
For content-based 3D model retrieval, an improved depth image-based feature extraction algorithm is proposed. First, the 3D model is preprocessed. Second, six depth images are generated along the three principal directions in the normalized coordinate system. Third, the feature vectors of the 3D model are obtained through a 2D Fourier transform of the depth images. Finally, a new method is used for low-frequency sampling. Experiments show that the approach performs quite well despite its apparent simplicity. On our large 3D database, the approach handles models of varying resolution and has satisfactory computational cost.
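
A quick illustration of the retrieval descriptor summarised above (orthographic depth images along the principal axes, a 2D Fourier transform, low-frequency sampling): the sketch below is not the authors' code, and the rendering resolution, number of retained coefficients, and NumPy-only implementation are assumptions made purely for illustration.

```python
import numpy as np

def depth_images(points, res=64):
    """Six orthographic depth maps (+/- x, y, z views) of a centred, scaled point set."""
    pts = points - points.mean(axis=0)
    pts = pts / np.abs(pts).max()                 # scale into [-1, 1]^3 (PCA alignment omitted)
    maps = []
    for axis in range(3):
        for sign in (+1.0, -1.0):
            depth = sign * pts[:, axis]           # signed distance toward the viewing plane
            uv = np.delete(pts, axis, axis=1)     # the two remaining coordinates
            ij = np.clip(((uv + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
            img = np.full((res, res), -1.0)
            for (i, j), d in zip(ij, depth):
                img[i, j] = max(img[i, j], d)     # keep the sample nearest the viewer
            maps.append(img)
    return maps

def fourier_descriptor(img, keep=8):
    """Low-frequency magnitudes of the 2D FFT, flattened into a feature vector."""
    mag = np.abs(np.fft.fft2(img))
    return mag[:keep, :keep].ravel()

# Feature vector of one model: concatenate the descriptors of all six depth images, e.g.
#   feats = np.concatenate([fourier_descriptor(m) for m in depth_images(point_cloud)])
```
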
3

Lü, Qingtian, Guang Qi, and Jiayong Yan. "3D geologic model of Shizishan ore field constrained by gravity and magnetic interactive modeling: A case history." GEOPHYSICS 78, no. 1 (January 1, 2013): B25—B35. http://dx.doi.org/10.1190/geo2012-0126.1.

Full text
Abstract:
We performed a study on using an integrated geologic model in mineral exploration at depth. Shizishan ore field, in the western part of the Tongling ore district, Anhui Province in China, is well known for its polymetallic deposits and recent deep discovery of Dongguashan deposit at around 1000-m depth. Understanding the 3D structure and delineating the locations and variations of the intrusions and ore-controlling strata in the study area are essential for selecting deep mineral targets. A pilot 3D geologic model, covering an area of 11 × 16 km and extends to a depth of 3 km, has been constructed by interactive gravity and magnetic inversions to define the geometry, depth, and physical properties of geologic bodies at depths. The 3D visualization of the results assists in understanding the spatial relations between various intrusive units and the ore-bearing strata. The model has confirmed most previous knowledge, but also revealed new features of different folds and intrusions that are important for planning future exploration at large depths. Several deep targets have also been predicted by combining the conceptual mineralization model in the district with the 3D geologic model. Our study demonstrates the potential of using gravity and magnetic data with geologic constraints to build 3D models in structurally complex areas for the purpose of mineral exploration at depth and under cover.
4

Sintunata, Vicky, and Terumasa Aoki. "Color Segmentation Based Depth Adjustment for 3D Model Reconstruction from a Single Input Image." International Journal of Computer Theory and Engineering 8, no. 2 (2016): 171–76. http://dx.doi.org/10.7763/ijcte.2016.v8.1039.

Full text
5

Zhang, Huang, and Zhao. "A New Model of RGB-D Camera Calibration Based On 3D Control Field." Sensors 19, no. 23 (November 21, 2019): 5082. http://dx.doi.org/10.3390/s19235082.

Full text
Abstract:
With extensive application of RGB-D cameras in robotics, computer vision, and many other fields, accurate calibration becomes more and more critical to the sensors. However, most existing models for calibrating depth and the relative pose between a depth camera and an RGB camera are not universally applicable to many different kinds of RGB-D cameras. In this paper, by using the collinear equation and space resection of photogrammetry, we present a new model to correct the depth and calibrate the relative pose between depth and RGB cameras based on a 3D control field. We establish a rigorous relationship model between the two cameras; then, we optimize the relative parameters of two cameras by least-squares iteration. For depth correction, based on the extrinsic parameters related to object space, the reference depths are calculated by using a collinear equation. Then, we calibrate the depth measurements with consideration of the distortion of pixels in depth images. We apply Kinect-2 to verify the calibration parameters by registering depth and color images. We test the effect of depth correction based on 3D reconstruction. Compared to the registration results from a state-of-the-art calibration model, the registration results obtained with our calibration parameters improve dramatically. Likewise, the performances of 3D reconstruction demonstrate obvious improvements after depth correction.
6

Zhang, Fan, Junli Zhao, Liang Wang, and Fuqing Duan. "3D Face Model Super-Resolution Based on Radial Curve Estimation." Applied Sciences 10, no. 3 (February 5, 2020): 1047. http://dx.doi.org/10.3390/app10031047.

Full text
Abstract:
Consumer depth cameras bring about cheap and fast acquisition of 3D models. However, the precision and resolution of these consumer depth cameras cannot satisfy the requirements of some 3D face applications. In this paper, we present a super-resolution method for reconstructing a high resolution 3D face model from a low resolution 3D face model acquired from a consumer depth camera. We used a group of radial curves to represent a 3D face. For a given low resolution 3D face model, we first extracted radial curves on it, and then estimated their corresponding high resolution ones by radial curve matching, for which Dynamic Time Warping (DTW) was used. Finally, a reference high resolution 3D face model was deformed to generate a high resolution face model by using the radial curves as the constraining feature. We evaluated our method both qualitatively and quantitatively, and the experimental results validated our method.
7

Hollands, J. G., Heather A. Parker, and Andrew Morton. "Judgments of 3D Bars in Depth." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 17 (September 2002): 1565–69. http://dx.doi.org/10.1177/154193120204601708.

Full text
Abstract:
Twenty participants judged the relative size of two bars portrayed at different locations in a 3D bar graph. Bars were co-located, placed side by side (near adjacent), or the small bar occluded the large (near or far occluded). Error was greater for the far occluded condition and there was greater variability in bias scores. To account for observed error, we proposed a model that distinguishes between cyclical bias commonly observed in proportion judgments and bias resulting from improper size-distance scaling. The model was fit to data, and the results indicated that the absolute value of size-distance scaling parameter γ was greater in the far occluded condition. Inclusion of γ increased R2 for far and near occluded conditions only. Bar location did not affect cyclical bias. Thus, judgments of the relative sizes of bars in 3D bar graphs showed increased error when the bars were separated, due to inaccurate size-distance scaling.
8

Kwon, Ki Hoon, Munkh-Uchral Erdenebat, Nam Kim, Anar Khuderchuluun, Shariar Md Imtiaz, Min Young Kim, and Ki-Chul Kwon. "High-Quality 3D Visualization System for Light-Field Microscopy with Fine-Scale Shape Measurement through Accurate 3D Surface Data." Sensors 23, no. 4 (February 15, 2023): 2173. http://dx.doi.org/10.3390/s23042173.

Full text
Abstract:
We propose a light-field microscopy display system that provides improved image quality and realistic three-dimensional (3D) measurement information. Our approach acquires both high-resolution two-dimensional (2D) and light-field images of the specimen sequentially. We put forward a matting Laplacian-based depth estimation algorithm to obtain nearly realistic 3D surface data, allowing the calculation of depth data, which is relatively close to the actual surface, and measurement information from the light-field images of specimens. High-reliability area data of the focus measure map and spatial affinity information of the matting Laplacian are used to estimate nearly realistic depths. This process represents a reference value for the light-field microscopy depth range that was not previously available. A 3D model is regenerated by combining the depth data and the high-resolution 2D image. The element image array is rendered through a simplified direction-reversal calculation method, which depends on user interaction from the 3D model and is displayed on the 3D display device. We confirm that the proposed system increases the accuracy of depth estimation and measurement and improves the quality of visualization and 3D display images.
9

Zhang, X. Y., B. Zhou, H. Li, and W. Xin. "Depth detection of spar cap defects in large-scale wind turbine blades based on a 3D heat conduction model using step heating infrared thermography." Measurement Science and Technology 33, no. 5 (February 8, 2022): 055008. http://dx.doi.org/10.1088/1361-6501/ac41a8.

Full text
Abstract:
The defects dispersed in a spar cap often lead to the failure of large-scale wind turbine blades. To predict the residual service life of the blade and plan repairs, it is necessary to detect the depth of spar cap defects. Step-heating thermography (SHT) is a common infrared technique in this domain. However, existing SHT methods for defect depth detection are generally based on 1D models, which are unable to accurately detect the depth of spar cap defects because they ignore material anisotropy and in-plane heat flow. To improve the depth detection accuracy of spar cap defects, a 3D model based on the theory of heat transfer is established by using the equivalent source method (ESM), and a defect depth criterion is proposed based on the analytical solution of the heat conduction equation. The modeling process is as follows. The heat conduction model of SHT was established by ESM. Then, coordinate transformation, separation of variables, and the Laplace transform were utilized to solve the 3D heat conduction equation. A defect depth criterion was proposed based on emerging contrast Cr. A glass fiber reinforced plastic composite plate containing 12 square flat-bottom holes with different sizes and depths was manufactured to represent a spar cap with large thermal resistance defects, such as delamination and cracks. The experimental results demonstrate the validity of the 3D model. Then, the model was applied to an on-site SHT test of a 1.5 MW wind turbine blade. The test results prove that the depth detection accuracy of spar cap defects can be significantly improved by using the 3D model. In addition, by using an improved principal component analysis (PCA) method containing a contrast enhancement factor, artifacts can be reduced and the recognition time of defects can be shortened. The 3D model provides a tool for detecting the depth of deep-lying defects in a thick composite structure, and the SHT technology is optimized by improved PCA.
10

Yu, Jongsub, and Hyukdoo Choi. "YOLO MDE: Object Detection with Monocular Depth Estimation." Electronics 11, no. 1 (December 27, 2021): 76. http://dx.doi.org/10.3390/electronics11010076.

Full text
Abstract:
This paper presents an object detector with depth estimation using monocular camera images. Previous detection studies have typically focused on detecting objects with 2D or 3D bounding boxes. A 3D bounding box consists of the center point, its size parameters, and heading information. However, predicting such a complex output generally lowers a model's performance, and the full box is not necessary for risk assessment in autonomous driving. We focused on predicting a single depth per object, which is essential for risk assessment in autonomous driving. Our network architecture is based on YOLO v4, which is a fast and accurate one-stage object detector. We added an additional channel to the output layer for depth estimation. To train depth prediction, we extract the closest depth from the 3D bounding box coordinates of ground truth labels in the dataset. Our model is compared with the latest studies on 3D object detection using the KITTI object detection benchmark. As a result, we show that our model achieves higher detection performance and detection speed than existing models with comparable depth accuracy.
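
The depth supervision described here (taking the closest depth from a ground-truth 3D box) can be sketched as follows. The box parameterisation (camera-frame centre, height/width/length, yaw about the vertical axis) is a KITTI-style convention assumed purely for illustration, not taken from the paper.

```python
import numpy as np

def closest_box_depth(center, size, yaw):
    """Smallest forward (z) distance among the eight corners of a 3D bounding box.

    center: (x, y, z) in camera coordinates, size: (h, w, l), yaw: rotation about
    the vertical axis. A KITTI-style convention is assumed for illustration only.
    """
    h, w, l = size
    # corner offsets in the object frame (box origin at the bottom-centre face)
    x = np.array([ l,  l,  l,  l, -l, -l, -l, -l]) / 2.0
    y = np.array([ 0,  0, -h, -h,  0,  0, -h, -h], dtype=float)
    z = np.array([ w, -w,  w, -w,  w, -w,  w, -w]) / 2.0
    rot = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                    [ 0.0,         1.0, 0.0        ],
                    [-np.sin(yaw), 0.0, np.cos(yaw)]])
    corners = rot @ np.vstack([x, y, z]) + np.asarray(center, dtype=float).reshape(3, 1)
    return corners[2].min()   # nearest corner along the camera's viewing axis

# e.g. depth_target = closest_box_depth((1.8, 1.6, 14.2), (1.5, 1.7, 4.1), 0.3)
```
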
11

Zhang, Qiuwen, Liang Tian, Lixun Huang, Xiaobing Wang, and Haodong Zhu. "Rendering Distortion Estimation Model for 3D High Efficiency Depth Coding." Mathematical Problems in Engineering 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/940737.

Full text
Abstract:
A depth map represents three-dimensional (3D) scene geometry information and is used for depth image based rendering (DIBR) to synthesize arbitrary virtual views. Since the depth map is only used to synthesize virtual views and is not displayed directly, the depth map needs to be compressed in a certain way that can minimize distortions in the rendered views. In this paper, a modified distortion estimation model is proposed based on view rendering distortion instead of depth map distortion itself and can be applied to the high efficiency video coding (HEVC) rate distortion cost function process for rendering view quality optimization. Experimental results on various 3D video sequences show that the proposed algorithm provides about 31% BD-rate savings in comparison with HEVC simulcast and 1.3 dB BD-PSNR coding gain for the rendered view.
12

Freer, JJ, GA Tarling, MA Collins, JC Partridge, and MJ Genner. "Estimating circumpolar distributions of lanternfish using 2D and 3D ecological niche models." Marine Ecology Progress Series 647 (August 13, 2020): 179–93. http://dx.doi.org/10.3354/meps13384.

Full text
Abstract:
Ecological niche models (ENMs) can be a practical approach for investigating distributions and habitat characteristics of pelagic species. In principle, to reflect the ecological niche of a species well, ENMs should incorporate environmental predictors that consider its full vertical habitat, yet examples of such models are rare. Here we present the first application of ‘3D’ ENMs to 10 Southern Ocean lanternfish species. This 3D approach incorporates depth-specific environmental predictor data to identify the distribution of suitable habitat across multiple depth levels. Results were compared to those from the more common ‘2D’ approach, which uses only environmental data from the sea surface. Measures of model discriminatory ability and overfitting indicated that 2D models often outperform 3D methods, even when accounting for reduced available sample size in the 3D models. Nevertheless, models for species with a known affinity for deeper habitat benefitted from the 3D approach, and our results suggest that species can track their ecological niche in latitude and depth leading to equatorward or poleward range extensions beyond that expected from incorporating only surface data. However, since 3D models require comprehensive depth-specific data, both data availability and the need for depth-specific model outputs must be considered when choosing the appropriate modelling approach. We advocate increased effort to include depth-resolved environmental parameters within marine ENMs. This will require collection of mesopelagic species occurrence data using appropriate temporal and depth-stratified methods, and inclusion of accurate depth information when occurrence records are submitted to global biodiversity databases.
13

Cui, Meng-Yao, Shao-Ping Lu, Miao Wang, Yong-Liang Yang, Yu-Kun Lai, and Paul L. Rosin. "3D computational modeling and perceptual analysis of kinetic depth effects." Computational Visual Media 6, no. 3 (August 13, 2020): 265–77. http://dx.doi.org/10.1007/s41095-020-0180-x.

Full text
Abstract:
Humans have the ability to perceive kinetic depth effects, i.e., to perceive 3D shapes from 2D projections of rotating 3D objects. This process is based on a variety of visual cues such as lighting and shading effects. However, when such cues are weak or missing, perception can become faulty, as demonstrated by the famous silhouette illusion example of the spinning dancer. Inspired by this, we establish objective and subjective evaluation models of rotated 3D objects by taking their projected 2D images as input. We investigate five different cues: ambient luminance, shading, rotation speed, perspective, and color difference between the objects and background. In the objective evaluation model, we first apply 3D reconstruction algorithms to obtain an objective reconstruction quality metric, and then use quadratic stepwise regression analysis to determine the weights of the depth cues to represent the reconstruction quality. In the subjective evaluation model, we use a comprehensive user study to reveal correlations with reaction time and accuracy, rotation speed, and perspective. The two evaluation models are generally consistent, and potentially of benefit to inter-disciplinary research into visual perception and 3D reconstruction.
14

Zhu, Chuanbin, Fabrice Cotton, and Marco Pilz. "Testing the Depths to 1.0 and 2.5 km/s Velocity Isosurfaces in a Velocity Model for Japan and Implications for Ground‐Motion Modeling." Bulletin of the Seismological Society of America 109, no. 6 (September 24, 2019): 2710–21. http://dx.doi.org/10.1785/0120190016.

Full text
Abstract:
Abstract In the Next Generation Attenuation West2 (NGA‐West2) project, a 3D subsurface structure model (Japan Seismic Hazard Information Station [J‐SHIS]) was queried to establish depths to 1.0 and 2.5 km/s velocity isosurfaces for sites without depth measurement in Japan. In this article, we evaluate the depth parameters in the J‐SHIS velocity model by comparing them with their corresponding site‐specific depth measurements derived from selected KiK‐net velocity profiles. The comparison indicates that the J‐SHIS model underestimates site depths at shallow sites and overestimates depths at deep sites. Similar issues were also identified in the southern California basin model. Our results also show that these underestimations and overestimations have a potentially significant impact on ground‐motion prediction using NGA‐West2 ground‐motion models (GMMs). Site resonant period may be considered as an alternative to depth parameter in the site term of a GMM.
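
For orientation, Z1.0 and Z2.5 are the depths at which shear-wave velocity first reaches 1.0 and 2.5 km/s in a velocity profile. A minimal sketch of reading such a depth off a layered 1D profile is given below; the layer values are invented for illustration and are not KiK-net data.

```python
def depth_to_velocity_horizon(layer_tops_m, vs_km_s, target_km_s):
    """Depth (m) at which the shear-wave velocity first reaches the target value.

    layer_tops_m: top depth of each layer, vs_km_s: layer velocities (km/s).
    A simple illustration of how Z1.0 / Z2.5 can be read from a 1D profile;
    real site characterisation involves more care than this.
    """
    for top, vs in zip(layer_tops_m, vs_km_s):
        if vs >= target_km_s:
            return top
    return None  # profile never reaches the target velocity

# z1_0 = depth_to_velocity_horizon([0, 4, 30, 120, 400], [0.2, 0.4, 0.8, 1.1, 2.6], 1.0)  # -> 120
```
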
15

Wang, Hongwei, Jie Gao, and Jingjing Liu. "Research and Implementation of the Sports Analysis System Based on 3D Image Technology." Wireless Communications and Mobile Computing 2021 (October 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/4266417.

Full text
Abstract:
On the basis of existing research, this paper analyzes the algorithms and technologies of 3D image-based sports models in depth and proposes a fusion depth map in view of some of the shortcomings of the current hot spot sports model methods based on 3D images. We use the 3D space to collect the depth image, remove the background from the depth map, recover the 3D motion model from it, and then build the 3D model database. In this paper, based on the characteristics of continuity in space and smoothness in time of a rigid body moving target, a reasonable rigid body target motion hypothesis is proposed, and a three-dimensional motion model of a rigid body target based on the center of rotation of the moving target and corresponding motion is designed to solve the equation with parameters. In the case of unknown motion law, shape, structure, and size of the moving target, this algorithm can achieve accurate measurement of the three-dimensional rigid body motion target’s self-rotation center and related motion parameters. In the process of motion parameter calculation, the least square algorithm is used to process the feature point data, thereby reducing the influence of noise interference on the motion detection result and correctly completing the motion detection task. The paper gives the measurement uncertainty of the stereo vision motion measurement system through simulated and real experiments. We extract the human body motion trajectory according to the depth map and establish a motion trajectory database. For using the recognition algorithm of the sports model based on the 3D image, we input a set of depth map action sequences. After the above process, the 3D motion model is obtained and matched with the model in the 3D motion model database, and the sequence with the smallest distance is calculated. The corresponding motion trajectory is taken as the result of motion capture, and the efficiency of this system is verified through experiments.
16

Truong Giang, Khang, Soohwan Song, Daekyum Kim, and Sunghee Choi. "Sequential Depth Completion With Confidence Estimation for 3D Model Reconstruction." IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 327–34. http://dx.doi.org/10.1109/lra.2020.3043172.

Full text
17

Hu, Hui, Jianfeng Zhang, and Tao Li. "Dam-Break Flows: Comparison between Flow-3D, MIKE 3 FM, and Analytical Solutions with Experimental Data." Applied Sciences 8, no. 12 (December 2, 2018): 2456. http://dx.doi.org/10.3390/app8122456.

Full text
Abstract:
The objective of this study was to evaluate the applicability of flow models with different numbers of spatial dimensions in resolving hydraulic features such as the free surface profile, water depth variations, and averaged velocity evolution in a dam break under dry and wet bed conditions with different tailwater depths. Two similar three-dimensional (3D) hydrodynamic models (Flow-3D and MIKE 3 FM) were studied in a dam-break simulation by performing a comparison with published experimental data and the one-dimensional (1D) analytical solution. The results indicate that the Flow-3D model better captures the free surface profile of wavefronts for dry and wet beds than other methods. The MIKE 3 FM model also replicated the free surface profiles well, but it underestimated them during the initial stage under wet-bed conditions. However, it approached the measurements better over time. Measured and simulated water depth variations and velocity variations demonstrate that both 3D models predict the dam-break flow with a reasonable estimation and a root mean square error (RMSE) lower than 0.04, while MIKE 3 FM had a small memory footprint and was 24 times faster computationally than Flow-3D. Therefore, the MIKE 3 FM model is recommended for computations involving real-life dam-break problems in large domains, leaving the Flow-3D model for fine calculations in which knowledge of the 3D flow structure is required. The 1D analytical solution was only effective for dam-break wave propagation along an initially dry bed, and its applicability was fairly limited.
18

Valdez-Rodríguez, José E., Hiram Calvo, Edgardo Felipe-Riverón, and Marco A. Moreno-Armendáriz. "Improving Depth Estimation by Embedding Semantic Segmentation: A Hybrid CNN Model." Sensors 22, no. 4 (February 21, 2022): 1669. http://dx.doi.org/10.3390/s22041669.

Full text
Abstract:
Single image depth estimation works fail to separate foreground elements because they can easily be confounded with the background. To alleviate this problem, we propose the use of a semantic segmentation procedure that adds information to a depth estimator, in this case, a 3D Convolutional Neural Network (CNN)—segmentation is coded as one-hot planes representing categories of objects. We explore 2D and 3D models. Particularly, we propose a hybrid 2D–3D CNN architecture capable of obtaining semantic segmentation and depth estimation at the same time. We tested our procedure on the SYNTHIA-AL dataset and obtained σ3=0.95, which is an improvement of 0.14 points (compared with the state of the art of σ3=0.81) by using manual segmentation, and σ3=0.89 using automatic semantic segmentation, proving that depth estimation is improved when the shape and position of objects in a scene are known.
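
The idea of supplying segmentation to the depth network as one-hot planes can be shown in a few lines. This is a generic channel-stacking sketch with example shapes and class count; it does not reproduce the paper's hybrid 2D-3D CNN architecture.

```python
import numpy as np

def stack_rgb_with_segmentation(rgb, seg, num_classes):
    """Concatenate an image with one-hot planes of its semantic segmentation.

    rgb: (H, W, 3) float array, seg: (H, W) integer class map.
    Returns an (H, W, 3 + num_classes) tensor of the kind a depth network
    can consume when segmentation is provided as extra input channels.
    """
    one_hot = (seg[..., None] == np.arange(num_classes)).astype(rgb.dtype)
    return np.concatenate([rgb, one_hot], axis=-1)

# x = stack_rgb_with_segmentation(np.zeros((240, 320, 3)), np.zeros((240, 320), int), 13)
# x.shape -> (240, 320, 16)
```
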
19

Ngo, Duc Tuan, Minh-Quan Viet Bui, Duc Dung Nguyen, and Hoang-Anh Pham. "eGAC3D: enhancing depth adaptive convolution and depth estimation for monocular 3D object pose detection." PeerJ Computer Science 8 (November 3, 2022): e1144. http://dx.doi.org/10.7717/peerj-cs.1144.

Full text
Abstract:
Many alternative approaches for 3D object detection using a singular camera have been studied instead of leveraging high-precision 3D LiDAR sensors incurring a prohibitive cost. Recently, we proposed a novel approach for 3D object detection by employing a ground plane model that utilizes geometric constraints named GAC3D to improve the results of the deep-based detector. GAC3D adopts an adaptive depth convolution to replace the traditional 2D convolution to deal with the divergent context of the image’s feature, leading to a significant improvement in both training convergence and testing accuracy on the KITTI 3D object detection benchmark. This article presents an alternative architecture named eGAC3D that adopts a revised depth adaptive convolution with variant guidance to improve detection accuracy. Additionally, eGAC3D utilizes the pixel adaptive convolution to leverage the depth map to guide our model for detection heads instead of using an external depth estimator like other methods leading to a significant reduction of time inference. The experimental results on the KITTI benchmark show that our eGAC3D outperforms not only our previous GAC3D but also many existing monocular methods in terms of accuracy and inference time. Moreover, we deployed and optimized the proposed eGAC3D framework on an embedded platform with a low-cost GPU. To the best of the authors’ knowledge, we are the first to develop a monocular 3D detection framework on embedded devices. The experimental results on Jetson Xavier NX demonstrate that our proposed method can achieve nearly real-time performance with appropriate accuracy even with the modest hardware resource.
20

Sun, Robert, George A. McMechan, Chen-Shao Lee, Jinder Chow, and Chen-Hong Chen. "Prestack scalar reverse-time depth migration of 3D elastic seismic data." GEOPHYSICS 71, no. 5 (September 2006): S199—S207. http://dx.doi.org/10.1190/1.2227519.

Full text
Abstract:
Using two independent, 3D scalar reverse-time depth migrations, we migrate the reflected P- and S-waves in a prestack 3D, three-component (3-C), elastic seismic data volume generated with a P-wave source in a 3D model and recorded at the top of the model. Reflected P- and S-waves are extracted by divergence (a scalar) and curl (a 3-C vector) calculations, respectively, during shallow downward extrapolation of the elastic seismic data. The imaging time for the migrations of both the reflected P- and P-S converted waves at each point is the one-way P-wave traveltime from the source to that point. The divergence (the extracted P-waves) is reverse-time extrapolated using a finite-difference solution of the 3D scalar wave equation in a 3D P-velocity model and is imaged to obtain the migrated P-image. The curl (the extracted S-waves) is first converted into a scalar S-wavefield by taking the curl's absolute value as the absolute value of the scalar S-wavefield and assigning a positive sign if the curl is counterclockwise relative to the source or a negative sign otherwise. This scalar S-wavefield is then reverse-time extrapolated using a finite-difference solution of the 3D scalar wave equation in a 3D S-velocity model, and it is imaged with the same one-way P-wave traveltime imaging condition as that used for the P-wave. This achieves S-wave polarity uniformity and ensures constructive S-wave interference between data from adjacent sources. The algorithm gives satisfactory results on synthetic examples for 3D laterally inhomogeneous models.
21

Karpiah, Arvin Boutik, Maxwell Azuka Meju, Roger Vernon Miller, Xavier Legrand, Prabal Shankar Das, and Raja Natasha Bt Raja Musafarudin. "Crustal structure and basement-cover relationship in the Dangerous Grounds, offshore North-West Borneo, from 3D joint CSEM and MT imaging." Interpretation 8, no. 4 (November 1, 2020): SS97—SS111. http://dx.doi.org/10.1190/int-2019-0261.1.

Full text
Abstract:
Accurate mapping of crustal thickness variations and the boundary relationships between sedimentary cover rocks and the crystalline basement is very important for heat-flow prediction and petroleum system modeling of a basin. Using legacy industry 3D data sets, we investigated the potential of 3D joint inversion of marine controlled-source electromagnetic (CSEM) and magnetotelluric (MT) data incorporating resistivity anisotropy to map these parameters across subbasins in the Dangerous Grounds in the southwestern rifted margin of the South China Sea, where limited previous seismic and potential field basement interpretations are available for comparison. We have reconstructed 3D horizontal and vertical resistivity models from the seabed down to [Formula: see text] depth for a [Formula: see text] area. The resistivity-versus-depth profile extracted from our 3D joint inversion models satisfactorily matched the resistivity and lithologic well logs at a wildcat exploration well location chosen for model validation. We found that the maximum resistivity gradients in the computed first derivative of the 3D resistivity volumes predict a depth to basement that matches the acoustic basement. The models predict the presence of 2 to approximately 5 km thick electrically conductive ([Formula: see text]) sedimentary cover atop an electrically resistive ([Formula: see text]) crystalline crust that is underlain by an electrically conductive ([Formula: see text]) upper mantle at depths that vary laterally from approximately 25 to 30 km below sea level in our study area. Our resistivity variation with depth is found to be remarkably consistent with the density distribution at Moho depth from recent independent 3D gravity/gradiometry inversion studies in this region. We suggest that 3D joint inversion of CSEM-MT, seismic, and potential field data is the way forward for understanding the deep structure of such rifted margins.
22

Khalili, Khalil, Seyed Yousef Ahmadi-Brooghani, and M. Rakhshkhorshid. "CAD Model Generation Using 3D Scanning." Advanced Materials Research 23 (October 2007): 169–72. http://dx.doi.org/10.4028/www.scientific.net/amr.23.169.

Full text
Abstract:
3D scanners are used in industrial applications such as reverse engineering and inspection. Customizing an existing CAD system is one of the quickest ways to provide 3D scanning software. In this paper, AutoCAD has been customized using the AutoLisp and Visual Basic programming languages, and facilities for the automatic scanning of physical parts with free-form surfaces have been provided. Furthermore, capabilities such as control of the scanner motion system, representation of registered point clouds, and generation of polygon and/or NURBS models from raw or modified point clouds have been prepared. Triangulation and image processing techniques, along with a new fuzzy logic algorithm, have been used to extract the depth information more accurately. These, together with AutoCAD's capabilities, provide acceptable facilities for 3D scanning.
23

Korkalo, Otto, and Tapio Takala. "Measurement Noise Model for Depth Camera-Based People Tracking." Sensors 21, no. 13 (June 30, 2021): 4488. http://dx.doi.org/10.3390/s21134488.

Full text
Abstract:
Depth cameras are widely used in people tracking applications. They typically suffer from significant range measurement noise, which causes uncertainty in the detections made of the people. The data fusion, state estimation and data association tasks require that the measurement uncertainty is modelled, especially in multi-sensor systems. Measurement noise models for different kinds of depth sensors have been proposed, however, the existing approaches require manual calibration procedures which can be impractical to conduct in real-life scenarios. In this paper, we present a new measurement noise model for depth camera-based people tracking. In our tracking solution, we utilise the so-called plan-view approach, where the 3D measurements are transformed to the floor plane, and the tracking problem is solved in 2D. We directly model the measurement noise in the plan-view domain, and the errors that originate from the imaging process and the geometric transformations of the 3D data are combined. We also present a method for directly defining the noise models from the observations. Together with our depth sensor network self-calibration routine, the approach allows fast and practical deployment of depth-based people tracking systems.
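
The plan-view step described above (transforming 3D measurements onto the floor plane and tracking in 2D) can be sketched as a simple occupancy-grid projection. The extrinsics, grid extent, and cell size below are placeholders chosen for illustration, not values from the paper.

```python
import numpy as np

def to_plan_view(points_cam, R, t, cell_size=0.05, extent=10.0):
    """Project 3D camera-frame points onto the floor plane as a 2D occupancy grid.

    R, t map camera coordinates into a world frame whose z axis is vertical;
    cell_size and extent are in metres and chosen only for illustration.
    """
    pts_world = points_cam @ R.T + t              # (N, 3) in the floor-aligned frame
    xy = pts_world[:, :2]
    bins = int(extent / cell_size)
    ij = np.clip(((xy + extent / 2.0) / cell_size).astype(int), 0, bins - 1)
    grid = np.zeros((bins, bins), dtype=np.int32)
    np.add.at(grid, (ij[:, 1], ij[:, 0]), 1)      # count hits per floor cell
    return grid
```
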
24

Pambudi, Doni Setio, and Lailatul Hidayah. "Foot 3D Reconstruction and Measurement using Depth Data." Journal of Information Systems Engineering and Business Intelligence 6, no. 1 (April 27, 2020): 37. http://dx.doi.org/10.20473/jisebi.6.1.37-45.

Full text
Abstract:
Background: The need for shoes with non-standard sizes is increasing, but this is not matched by the ability to measure the foot effectively. The high cost of existing instruments on the market has led to the development of a precise yet affordable measurement system. Objective: This research attempts to solve the measurement problem by employing an automatic instrument utilizing a depth image sensor that is available on the market at an affordable price. Methods: Data from several RealSense sensors are preprocessed and combined using transformation techniques, and noise cleaning is performed afterward. Finally, the 3D model of the foot is ready and its length and width can be obtained. Results: The experiments show that the proposed method produces a measurement error of 0.351 cm in foot length and 0.355 cm in foot width. Conclusion: The results show that multiple angles of a static RealSense sensor can automatically produce a good 3D foot model. The proposed system configuration reduces complexity as well as being an affordable solution.
25

Bairamian, David, Shinuo Liu, and Behzad Eftekhar. "Virtual Reality Angiogram vs 3-Dimensional Printed Angiogram as an Educational tool—A Comparative Study." Neurosurgery 85, no. 2 (February 2, 2019): E343—E349. http://dx.doi.org/10.1093/neuros/nyz003.

Full text
Abstract:
Abstract BACKGROUND Three-dimensional (3D) visualization of the neurovascular structures has helped preoperative surgical planning. 3D printed models and virtual reality (VR) devices are 2 options to improve 3D stereovision and stereoscopic depth perception of cerebrovascular anatomy for aneurysm surgery. OBJECTIVE To investigate and compare the practicality and potential of 3D printed and VR models in a neurosurgical education context. METHODS The VR angiogram was introduced through the development and testing of a VR smartphone app. Ten neurosurgical trainees from Australia and New Zealand participated in a 2-part interactive exercise using 3 3D printed and VR angiogram models followed by a questionnaire about their experience. In a separate exercise to investigate the learning curve effect on VR angiogram application, a qualified neurosurgeon was subjected to 15 exercises involving manipulating VR angiograms models. RESULTS VR angiogram outperformed 3D printed model in terms of resolution. It had statistically significant advantage in ability to zoom, resolution, ease of manipulation, model durability, and educational potential. VR angiogram had a higher questionnaire total score than 3D models. The 3D printed models had a statistically significant advantage in depth perception and ease of manipulation. The results were independent of trainee year level, sequence of the tests, or anatomy. CONCLUSION In selected cases with challenging cerebrovascular anatomy where stereoscopic depth perception is helpful, VR angiogram should be considered as a viable alternative to the 3D printed models for neurosurgical training and preoperative planning. An immersive virtual environment offers excellent resolution and ability to zoom, potentiating it as an untapped educational tool.
26

Landa, Jaromír, and David Procházka. "Usage of Microsoft Kinect for augmented prototyping speed-up." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 60, no. 2 (2012): 175–80. http://dx.doi.org/10.11118/actaun201260020175.

Full text
Abstract:
A physical model is a common tool for testing product features during the design process. The model is usually made of clay or plastic because these materials are easy to modify, so the designer can easily adjust the model shape to enhance the look or ergonomics of the product. Nowadays, some companies use augmented reality to enhance their design process; this concept is called augmented prototyping. A common approach uses artificial markers to augment the product prototype with digital 3D models. These 3D models, shown at the marker positions, can represent, for example, car spare parts such as different lights, wheels, or spoilers. This allows the designer to change the look of the physical model interactively. It is also necessary to transfer physical adjustments made on the model surface back to the digital model on the computer. A well-known tool for this purpose is a professional 3D scanner; nevertheless, the cost of such a scanner is substantial. Therefore, we focused on a different solution: the Microsoft Kinect, a motion capture device used for computer games. This article outlines a new augmented prototyping approach that directly updates the digital model during the design process using the Kinect depth camera. This solution is a cost-effective alternative to professional 3D scanners. In particular, our article describes how depth data can be obtained with the Kinect and provides an evaluation of depth measurement precision.
27

Fang, Kai (方恺). "Ecological footprint depth and size: new indicators for a 3D model." Acta Ecologica Sinica 33, no. 1 (2013): 267–74. http://dx.doi.org/10.5846/stxb201111051670.

Full text
28

Galinium, Maulahikmah, Jason Yapri, and James Purnama. "Markerless motion capture for 3D human model animation using depth camera." TELKOMNIKA (Telecommunication Computing Electronics and Control) 17, no. 3 (June 1, 2019): 1300. http://dx.doi.org/10.12928/telkomnika.v17i3.8939.

Full text
29

Sheng, Lu, Jianfei Cai, Tat-Jen Cham, Vladimir Pavlovic, and King Ngi Ngan. "Visibility Constrained Generative Model for Depth-Based 3D Facial Pose Tracking." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 8 (August 1, 2019): 1994–2007. http://dx.doi.org/10.1109/tpami.2018.2877675.

Full text
30

Sook Cho, Young, Takuya Komatsu, Masayuki Takatera, Shigeru Inui, Yoshio Shimizu, and Hyejun Park. "Posture and depth adjustable 3D body model for individual pattern making." International Journal of Clothing Science and Technology 18, no. 2 (March 2006): 96–107. http://dx.doi.org/10.1108/09556220610645757.

Full text
31

Kumar, Atul, Yen-Yu Wang, Ching-Jen Wu, Kai-Che Liu, and Hurng-Sheng Wu. "Stereoscopic visualization of laparoscope image using depth information from 3D model." Computer Methods and Programs in Biomedicine 113, no. 3 (March 2014): 862–68. http://dx.doi.org/10.1016/j.cmpb.2013.12.013.

Full text
32

Xu, Zhengze, and Wenjun Zhang. "3D CNN hand pose estimation with end-to-end hierarchical model and physical constraints from depth images." Neural Network World 33, no. 1 (2023): 35–48. http://dx.doi.org/10.14311/nnw.2023.33.003.

Full text
Abstract:
Previous studies have mainly treated the depth image as a flat image, so the depth data tend to be mapped to gray values during convolution and feature extraction. To address this issue, an approach to 3D CNN hand pose estimation with an end-to-end hierarchical model and physical constraints is proposed. After reconstructing the 3D spatial structure of the hand from the depth image, the 3D model is converted into a voxel grid for hand pose estimation by a 3D CNN. The method improves on existing work by embedding the end-to-end hierarchical model and constraint algorithm into the network, allowing training to converge quickly while avoiding unrealistic hand poses. According to the experimental results, it reaches a mean accuracy of 87.98% and a mean absolute error (MAE) of 8.82 mm for all 21 joints within 24 ms of inference time, consistently outperforming several well-known gesture recognition algorithms.
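
A generic sketch of the depth-to-voxel conversion that feeds a 3D CNN of this kind is given below; the intrinsics, grid resolution, and normalisation are illustrative assumptions, not the authors' preprocessing.

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, grid=32):
    """Back-project a depth image into a binary voxel grid for a 3D CNN.

    depth: (H, W) array in metres; (fx, fy, cx, cy): pinhole camera intrinsics.
    """
    v, u = np.nonzero(depth > 0)                  # valid depth pixels
    z = depth[v, u]
    x = (u - cx) * z / fx                         # back-projection to camera coordinates
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    pts = pts - pts.min(axis=0)
    pts = pts / (pts.max() + 1e-9)                # fit the points into the unit cube
    idx = np.clip((pts * (grid - 1)).astype(int), 0, grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox
```
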
33

Schramm, Stefan, Alexander Dietzel, Maren-Christina Blum, Dietmar Link, and Sascha Klee. "Technical light-field setup for 3D imaging of the human nerve head validated with an eye model." Current Directions in Biomedical Engineering 7, no. 2 (October 1, 2021): 433–36. http://dx.doi.org/10.1515/cdbme-2021-2110.

Full text
Abstract:
Abstract With the new technology of 3D light field (LF) imaging, fundus photography can be expanded to provide depth information. This increases the diagnostic possibilities and additionally improves image quality by digitally refocusing. To provide depth information in the human optic nerve head such as in glaucoma diagnostics, a mydriatic fundus camera was upgraded with an LF imager. The aim of the study presented here was the validation of the technical setup and resulting depth estimations with an appropriate eye model. The technical setup consisted of a mydriatic fundus camera (FF450, Carl Zeiss Meditec AG, Jena, Germany) and an LF imager (R12, Raytrix GmbH, Kiel, Germany). The field of view was set to 30°. The eye model (24.65 mm total length) consisted of a two-lens optical system and interchangeable fundus models with papilla excavations from 0.2 to 1 mm in steps of 0.2 mm. They were coated with red acrylic lacquer and vessels were drawn with a thin brush. 15 images were taken for each papilla depth illuminated with green light (wavelength 520 nm ± 20 nm). Papilla depth was measured from the papilla ground to the surrounding flat region. All 15 measurements for each papilla depth were averaged and compared to the printed depth. It was possible to perform 3D fundus imaging in an eye model by means of a novel LF-based optical setup. All LF images could be digitally refocused subsequently. Depth estimation in the eye model was successfully performed over a 30° field of view. The measured virtual depth and the printed model papilla depth is linear correlated. The presented LF setup allowed high-quality 3D one-shot imaging and depth estimation of the optic nerve head in an eye model.
34

Wang, Cheng-Wei, and Chao-Chung Peng. "3D Face Point Cloud Reconstruction and Recognition Using Depth Sensor." Sensors 21, no. 8 (April 7, 2021): 2587. http://dx.doi.org/10.3390/s21082587.

Full text
Abstract:
Facial recognition has attracted increasing attention with the rapid growth of artificial intelligence (AI) techniques in recent years. However, most related work on facial reconstruction and recognition is based on large-scale data collection and image-based deep learning algorithms. These data-driven AI approaches inevitably increase CPU computational load and usually rely heavily on GPU capacity. One typical issue with RGB-based facial recognition is its limited applicability in low-light or dark environments. To solve this problem, this paper presents an effective procedure for facial reconstruction as well as facial recognition using a depth sensor. For each test candidate, the depth camera acquires multiple views of its 3D point clouds. The point cloud sets are stitched together for 3D model reconstruction using the iterative closest point (ICP) algorithm. Then, a segmentation procedure is designed to separate the model into a body part and a head part. Based on the segmented 3D face point clouds, facial features are then extracted for recognition scoring. Taking a single shot from the depth sensor, the point cloud data are registered against the stored 3D face models to determine the best-matching candidate. Using the proposed feature-based 3D facial similarity score algorithm, which combines normal, curvature, and registration similarities between point clouds, the person can be labeled correctly even in a dark environment. The proposed method is suitable for smart devices such as smartphones and tablets equipped with a small depth camera. Experiments with real-world data show that the proposed method can reconstruct denser models and achieve point cloud-based 3D face recognition.
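
The stitching step mentioned above uses the iterative closest point (ICP) algorithm. A compact, brute-force NumPy version of point-to-point ICP is sketched below for orientation; a production pipeline would add a k-d tree, outlier rejection, and convergence checks.

```python
import numpy as np

def icp(source, target, iterations=20):
    """Rigidly align source to target with point-to-point ICP (brute-force matching)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # nearest target point for every source point
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        matched = target[d2.argmin(axis=1)]
        # best rigid transform for the current correspondences (Kabsch / SVD)
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R = Vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total   # transform mapping the original source onto the target
```
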
35

Luijendijk, Arjen, Johan Henrotte, Dirk Jan Walstra, and Maarten Van Ormondt. "QUASI-3D MODELLING OF SURF ZONE DYNAMICS." Coastal Engineering Proceedings 1, no. 32 (January 26, 2011): 52. http://dx.doi.org/10.9753/icce.v32.currents.52.

Full text
Abstract:
A quasi-three-dimensional model (quasi-3D) has been developed through the implementation of an analytical 1DV flow model in existing depth-averaged shallow water equations. The model includes the effects of waves and wind on the vertical distribution of the horizontal velocities. Comparisons with data from both physical and field cases show that the quasi-3D approach is able to combine the effect of vertical structures with the efficiency of depth-averaged simulations. Inter-comparisons with three-dimensional simulations show that the quasi-3D approach can represent similar velocity profiles in the surf zone. Quasi-3D morphodynamic simulations show that the bed dynamics in the surf zone represent the relevant 3D effects in the surf zone much more than the depth-averaged computations. It was shown that the quasi-3D approach is computationally efficient as it only adds about 15-20% to the runtimes of a 2DH simulation which is minor compared to a run time increase of 250-800% when switching to a 3D simulation.
36

Singh, Brij, Michał Malinowski, Andrzej Górszczyk, Alireza Malehmir, Stefan Buske, Łukasz Sito, and Paul Marsden. "3D high-resolution seismic imaging of the iron oxide deposits in Ludvika (Sweden) using full-waveform inversion and reverse time migration." Solid Earth 13, no. 6 (June 29, 2022): 1065–85. http://dx.doi.org/10.5194/se-13-1065-2022.

Full text
Abstract:
Abstract. A sparse 3D seismic survey was acquired over the Blötberget iron oxide deposits of the Ludvika Mines in south-central Sweden. The main aim of the survey was to delineate the deeper extension of the mineralisation and to better understand its 3D nature and associated fault systems for mine planning purposes. To obtain a high-quality seismic image in depth, we applied time-domain 3D acoustic full-waveform inversion (FWI) to build a high-resolution P-wave velocity model. This model was subsequently used for pre-stack depth imaging with reverse time migration (RTM) to produce the complementary reflectivity section. We developed a data preprocessing workflow and inversion strategy for the successful implementation of FWI in the hardrock environment. We obtained a high-fidelity velocity model using FWI and assessed its robustness. We extensively tested and optimised the parameters associated with the RTM method for subsequent depth imaging using different velocity models: a constant velocity model, a model built using first-arrival travel-time tomography and a velocity model derived by FWI. We compare our RTM results with a priori data available in the area. We conclude that, from all tested velocity models, the FWI velocity model in combination with the subsequent RTM step provided the most focussed image of the mineralisation and we successfully mapped its 3D geometrical nature. In particular, a major reflector interpreted as a cross-cutting fault, which is restricting the deeper extension of the mineralisation with depth, and several other fault structures which were earlier not imaged were also delineated. We believe that a thorough analysis of the depth images derived with the combined FWI–RTM approach that we present here can provide more details which will help with better estimation of areas with high mineralisation, better mine planning and safety measures.
37

Luo, Guoliang, Guoming Xiong, Xiaojun Huang, Xin Zhao, Yang Tong, Qiang Chen, Zhiliang Zhu, Haopeng Lei, and Juncong Lin. "Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation." Sensors 23, no. 4 (February 9, 2023): 1937. http://dx.doi.org/10.3390/s23041937.

Full text
Abstract:
Despite progress in the past decades, 3D shape acquisition techniques are still a threshold for various 3D face-based applications and have therefore attracted extensive research. Moreover, advanced 2D data generation models based on deep networks may not be directly applicable to 3D objects because of the different dimensionality of 2D and 3D data. In this work, we propose two novel sampling methods to represent 3D faces as matrix-like structured data that can better fit deep networks, namely (1) a geometric sampling method for the structured representation of 3D faces based on the intersection of iso-geodesic curves and radial curves, and (2) a depth-like map sampling method using the average depth of grid cells on the front surface. The above sampling methods can bridge the gap between unstructured 3D face models and powerful deep networks for an unsupervised generative 3D face model. In particular, the above approaches can obtain the structured representation of 3D faces, which enables us to adapt the 3D faces to the Deep Convolution Generative Adversarial Network (DCGAN) for 3D face generation to obtain better 3D faces with different expressions. We demonstrated the effectiveness of our generative model by producing a large variety of 3D faces with different expressions using the two novel down-sampling methods mentioned above.
38

Zhdanov, Michael S., Michael Jorgensen, and Le Wan. "Three-Dimensional Gravity Inversion in the Presence of the Sediment-Basement Interface: A Case Study in Utah, USA." Minerals 12, no. 4 (April 6, 2022): 448. http://dx.doi.org/10.3390/min12040448.

Full text
Abstract:
We introduce a novel approach to three-dimensional gravity inversion in the presence of the sediment-basement interface with a strong density contrast. This approach makes it possible to incorporate the known information about the basement depth in the inversion. It also allows the user to determine the depth-to-basement in the initial inversion phase. One can then use this interface to constrain the final inversion phase. First, the inversion generates the depth-to-basement model based on the 3D Cauchy-type integral representation of the gravity field. Then, in the second phase, full 3D voxel-type inversion applies the depth-to-basement model determined in the first phase as an a priori constraint. We use this approach to the 3D inversion of the Bouguer gravity anomaly data observed in Utah, USA. The results of inversion generated a 3D density model of the top layers of the earth’s crust, including unconsolidated sediments and the top of the crystalline basement.
39

Yang, Fei, Yuanjian Wang, and Enhui Jiang. "Simplified Spectral Model of 3D Meander Flow." Water 13, no. 9 (April 28, 2021): 1228. http://dx.doi.org/10.3390/w13091228.

Full text
Abstract:
Most 2D (two-dimensional) models either take vertical velocity profiles as uniform, or consider secondary flow in momentum equations with presupposed velocity profiles, which weakly reflect the spatio-temporal characteristics of meander flow. To tackle meander flow in a more accurate 3D (three-dimensional) way while avoiding low computational efficiency, a new 3D model based on spectral methods is established and verified in this paper. In the present model, the vertical water flow field is expanded into polynomials. Governing equations are transformed by the Galerkin method and then advection terms are tackled with a semi-Lagrangian method. The simulated flow structures of an open channel bend are then compared with experimental results. Although a zero-equation turbulence model is used in this new 3D model, it shows reasonable flow structures, and calculation efficiency is comparable to a depth-averaged 2D model.
40

Daud, Hanita, Majid Niaz Akhtar, Noorhana Yahya, Nadeem Nasir, and Hasan Soleimani. "Effect of Frequency on Hydrocarbon (HC) Detection Using 3D Finite Integral Modeling." Defect and Diffusion Forum 326-328 (April 2012): 654–61. http://dx.doi.org/10.4028/www.scientific.net/ddf.326-328.654.

Full text
Abstract:
Detection of hydrocarbon in sea bed logging (SBL) is still a very challenging task for deep target reservoirs. The response of electromagnetic (EM) field from marine environment is very low and it is very difficult to predict deep target reservoirs below 2500 m from the sea floor. Straight antennas at 0.125 Hz and 0.0625 Hz are used for the detection of deep target hydrocarbon reservoirs below the seafloor. The finite integration method (FIM) is applied on 3D geological seabed models. The proposed area of the seabed model (16 km ×16 km) was simulated by using CST (computer simulation technology) EM studio. The comparison of different frequencies for different target depths was done in our proposed model. Total electric and magnetic fields were applied instead of scattered electric and magnetic fields, due to its accurate and precise measurements of resistivity contrast at the target depth up to 3000 m. From the results, it was observed that straight antenna at 0.0625 Hz shows 50.11% resistivity contrast at target depth of 1000 m whereas straight antenna at 0.125 Hz showed 42.30% resistivity contrast at the same target depth for the E-field. It was found that the E-field response decreased as the target depth increased gradually by 500 m from 1000 m to 3000 m at different values of frequencies with constant current (1250 A). It was also investigated that at frequency of 0.0625 Hz, straight antenna gave 7.10% better delineation of hydrocarbon at 3000 m target depth. It was speculated that an antenna at 0.0625 Hz may be able to detect hydrocarbon reservoirs at 4000 m target depth below the seafloor. This EM antenna may open a new frontier for oil and gas industry for the detection of deep target hydrocarbon reservoirs below the seafloor.
APA, Harvard, Vancouver, ISO, and other styles
41

Sun, Siyuan, Changchun Yin, and Xiuhe Gao. "3D Gravity Inversion on Unstructured Grids." Applied Sciences 11, no. 2 (January 13, 2021): 722. http://dx.doi.org/10.3390/app11020722.

Full text
Abstract:
Compared with structured grids, unstructured grids are more flexible for modelling arbitrarily shaped structures. However, on unstructured grids, gravity inversion results tend to be discontinuous and hollow because of variations in cell volume and depth. To solve this problem, we first analyze the gradient of the objective function in gradient-based inversion methods and develop a new gradient scheme defined as the derivative with respect to weighted model parameters. The new gradient scheme addresses the lack of depth resolution more effectively than traditional inversions, and the improvement is not affected by the regularization parameters. In addition, an improved fuzzy c-means clustering combined with spatial constraints is developed to measure the property distribution of inverted models in the spatial and parameter domains simultaneously. The new inversion method yields a more internally continuous model, as it encourages cells and their adjacent cells to tend toward the same property value. Finally, the smooth-constraint inversion, the focusing inversion, and the improved fuzzy c-means clustering inversion on unstructured grids are tested on synthetic and measured gravity data to compare and demonstrate the algorithms proposed in this paper.
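A sketch of the weighted-parameter gradient idea, under the assumption of a simple per-cell volume-and-depth weighting (the exact weighting in the paper is not reproduced here): differentiating with respect to the weighted parameters rescales the raw misfit gradient cell by cell, which counteracts the tendency of gravity inversion to concentrate density in shallow, large cells.

```python
import numpy as np

def weighted_gradient(raw_grad, cell_volume, cell_depth, beta=1.5, z0=1.0):
    """Gradient with respect to weighted parameters m_w = w * m, where the
    heuristic weight w grows with cell volume and decays with depth."""
    w = cell_volume * (cell_depth + z0) ** (-beta)
    return raw_grad / w   # chain rule: d(phi)/d(m_w) = (1/w) * d(phi)/dm

# usage with made-up values for a three-cell unstructured mesh
g = np.array([2.0e-3, 1.5e-3, 0.8e-3])   # raw misfit gradient
vol = np.array([120.0, 80.0, 300.0])     # cell volumes (m^3)
depth = np.array([50.0, 400.0, 1200.0])  # cell-centre depths (m)
print(weighted_gradient(g, vol, depth))
```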
APA, Harvard, Vancouver, ISO, and other styles
42

Bae, Min Soo, and In Kyu Park. "Content-based 3D model retrieval using a single depth image from a low-cost 3D camera." Visual Computer 29, no. 6-8 (April 23, 2013): 555–64. http://dx.doi.org/10.1007/s00371-013-0819-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Xu, Jie, Xiao Lei Jia, Yu Fan, Zhi Sun, An Min Liu, and Chong Hao Zhang. "A Comparison of 3D and Axi-Symmetric Models in Pipe Welding Simulation Process." Applied Mechanics and Materials 529 (June 2014): 277–81. http://dx.doi.org/10.4028/www.scientific.net/amm.529.277.

Full text
Abstract:
In this paper, a detailed comparison of axi-symmetric and 3D pipe finite element models was carried out under the same welding simulation parameters. The results showed that the axi-symmetric model shares a similar residual stress distribution with the 3D model when the same heat source shape parameters are used. However, the stress values of the two models differed, and the weld pool in the 3D model was almost twice as large as that in the axi-symmetric model. Both the welding experiment and the 3D simulation showed that the peak temperature of the weld pool along the welding path increased during welding, and that the weld pool width and depth also increased as the heat source moved.
APA, Harvard, Vancouver, ISO, and other styles
44

Kim, H., W. Yoon, and T. Kim. "AUTOMATED MOSAICKING OF MULTIPLE 3D POINT CLOUDS GENERATED FROM A DEPTH CAMERA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 269–72. http://dx.doi.org/10.5194/isprs-archives-xli-b3-269-2016.

Full text
Abstract:
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by the ToF (time-of-flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth with millimetre precision, while the generated intensity map contains texture data with considerable noise. We used the intensity (texture) maps for extracting tiepoints and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps and converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
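The 3D similarity transformation step described in this abstract can be estimated with the standard SVD-based (Umeyama/Procrustes) closed-form solution; the sketch below is a generic version of that step, not the authors' code, and assumes the tiepoints are already matched row by row.

```python
import numpy as np

def similarity_transform_3d(src, dst):
    """Estimate scale s, rotation R and translation t so that
    dst ~= s * R @ src + t, for matched N x 3 tiepoint arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```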
APA, Harvard, Vancouver, ISO, and other styles
45

Kim, H., W. Yoon, and T. Kim. "AUTOMATED MOSAICKING OF MULTIPLE 3D POINT CLOUDS GENERATED FROM A DEPTH CAMERA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 269–72. http://dx.doi.org/10.5194/isprsarchives-xli-b3-269-2016.

Full text
Abstract:
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by the ToF (time-of-flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth with millimetre precision, while the generated intensity map contains texture data with considerable noise. We used the intensity (texture) maps for extracting tiepoints and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps and converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
APA, Harvard, Vancouver, ISO, and other styles
46

Kim, Dae-Hong, and Patrick Lynett. "DEPTH-INTEGRATED NUMERICAL MODELING OF TURBULENT TRANSPORT BY LONG WAVES AND CURRENTS." Coastal Engineering Proceedings 1, no. 32 (February 2, 2011): 15. http://dx.doi.org/10.9753/icce.v32.currents.15.

Full text
Abstract:
In nature, flows are 3D phenomena, but in many geophysical settings the water-depth scale is small relative to the horizontal scale, so horizontal 2D (H2D) motions dominate the flow structure. In those cases, especially in large domains, an H2D numerical model can be a practical and accurate tool, provided the 3D physical properties are properly included in the H2D model. Widely used H2D approaches include the Boussinesq-type equations (BE) and shallow water equations (SWE), derived by a perturbation approach or by depth averaging. The BE can account for some of the dispersive, turbulent, and rotational flow properties frequently observed in nature (Kim et al., 2009). They can also couple currents and waves and can predict nonlinear water wave propagation over an uneven bottom from deep (or intermediate) water into shallow water. However, during the derivation of an H2D equation set, BE or SWE, some of the 3D flow properties, such as the dispersive stresses (Kuipers and Vreugdenhill, 1973) and the effects of unresolved small-scale 3D turbulence, are excluded. Consequently, there are limitations in predicting horizontal flow structures that can be generated through these neglected 3D effects. Naturally, any inaccuracy of the hydrodynamic flow model is reflected in the results of a coupled scalar transport model. Various approaches have been proposed to incorporate 3D turbulence effects into H2D flow models. Among many others, the stochastic backscatter model (BSM) proposed by Hinterberger et al. (2007) can account for the mechanism of inverse energy transfer from unresolved 3D turbulence to resolved 2D flow motions, and reasonable results were obtained with it. Similar to the flow model, for scalar transport it is desirable to develop an H2D model that can approximately account for the vertical deviations of concentration and velocity and the associated mixing. For accurate prediction of transport, a numerical solver that minimizes numerical dispersion, dissipation, and diffusion should be used. Recently, the finite volume method (FVM) using approximate Riemann solvers has been developed and applied successfully. In this study, a depth-integrated model including subgrid-scale mixing effects for turbulent transport by long waves and currents is presented. A fully nonlinear, depth-integrated set of equations for weakly dispersive and rotational flow is derived by the long-wave perturbation approach, and the same approach is applied to derive a depth-integrated scalar transport model. The proposed equations are solved by a fourth-order accurate FVM. The depth-integrated flow and transport models are applied to typical problems with different mixing mechanisms. Several important conclusions are obtained from the simulations: (i) a mixing layer simulation reveals that the dispersive stress implemented with a stochastic BSM plays an important role in energy transfer; (ii) the proposed transport model coupled with the depth-integrated flow model can predict passive scalar transport based on the turbulent intensity rather than relying on empirical constants; (iii) for near-field transport simulations, the inherent limitation of a two-dimensional horizontal model in capturing vertical structure is recognized; (iv) if the main mechanism of flow instability originates from relatively large-scale bottom topography features, the effects of the dispersive stresses are less important.
APA, Harvard, Vancouver, ISO, and other styles
47

Chakravarthi, Vishnubhotla, and Narasimman Sundararajan. "3D gravity inversion of basement relief — A depth-dependent density approach." GEOPHYSICS 72, no. 2 (March 2007): I23—I32. http://dx.doi.org/10.1190/1.2431634.

Full text
Abstract:
We present a 3D gravity inversion technique, based on the Marquardt algorithm, to analyze gravity anomalies attributable to basement interfaces above which the density contrast varies continuously with depth. A salient feature of this inversion is that an initial depth of the basement is not a required input. The proposed inversion simultaneously estimates the depth of the basement interface and the regional gravity background. The applicability and efficacy of the inversion are demonstrated with a synthetic model of a density interface. We analyze the synthetic gravity anomalies (1) solely due to the structure, (2) in the presence of a regional gravity background, and (3) in the presence of both random noise and a regional gravity background. The inverted structure remains more or less the same regardless of whether the regional background is simulated with a second-degree polynomial or a bilinear equation. The depth of the structure and the estimated regional background deviate only modestly from the assumed ones in the presence of random noise and a regional background. The analyses of two sets of real field data, one over the Chintalpudi subbasin, India, and another over the Pannonian basin, eastern Austria, yield geologically plausible models with estimated depths that compare well with drilling data.
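A generic sketch of one Marquardt (Levenberg-Marquardt) update for interface-depth unknowns, showing the shape of the iteration the abstract refers to; the forward operator and Jacobian here are placeholders, and the paper's depth-dependent density contrast formulation is not reproduced.

```python
import numpy as np

def marquardt_step(depths, g_obs, forward, jacobian, damping=0.1):
    """One damped least-squares update of the interface depths.

    forward(depths)  -> modelled gravity at the observation points
    jacobian(depths) -> sensitivity of the modelled gravity to each depth
    """
    r = g_obs - forward(depths)                # data residual
    J = jacobian(depths)
    JtJ = J.T @ J
    A = JtJ + damping * np.diag(np.diag(JtJ))  # Marquardt damping of the normal matrix
    return depths + np.linalg.solve(A, J.T @ r)
```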
APA, Harvard, Vancouver, ISO, and other styles
48

Kwak, Jeonghoon, and Yunsick Sung. "Automatic 3D Landmark Extraction System Based on an Encoder–Decoder Using Fusion of Vision and LiDAR." Remote Sensing 12, no. 7 (April 3, 2020): 1142. http://dx.doi.org/10.3390/rs12071142.

Full text
Abstract:
To provide a realistic environment for remote sensing applications, point clouds are used to realize a three-dimensional (3D) digital world for the user. Motion recognition of objects, e.g., humans, is required to provide realistic experiences in this 3D digital world. To recognize a user’s motions, 3D landmarks are obtained by analyzing a 3D point cloud collected with a light detection and ranging (LiDAR) system or a red-green-blue (RGB) image collected visually. However, manual supervision is required to extract 3D landmarks, whether they originate from the RGB image or the 3D point cloud, so a method for extracting 3D landmarks without manual supervision is needed. Herein, an RGB image and a 3D point cloud are used together to extract 3D landmarks. The 3D point cloud gives the relative distance between the LiDAR and the user. Because, owing to disparities, it cannot contain complete information about the user’s entire body, it cannot generate a dense depth image that provides the boundary of the user’s body. Therefore, up-sampling is performed to increase the density of the depth image generated from the 3D point cloud; the achievable density depends on the 3D point cloud. This paper proposes a system for extracting 3D landmarks using 3D point clouds and RGB images without manual supervision. A depth image that provides the boundary of a user’s motion is generated from the 3D point cloud and RGB image collected by a LiDAR and an RGB camera, respectively. To extract 3D landmarks automatically, an encoder–decoder model is trained with the generated depth images and the RGB images, and 3D landmarks are extracted from these images with the trained encoder model. The method of extracting 3D landmarks using RGB-depth (RGBD) images was verified experimentally, and 3D landmarks were extracted to evaluate the user’s motions with RGBD images. In this manner, landmarks could be extracted according to the user’s motions rather than from the RGB images alone. The depth images generated by the proposed method were 1.832 times denser than the up-sampling-based depth images generated with bilateral filtering.
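As a loose illustration of the point-cloud-to-depth-image step this abstract relies on (a sketch assuming a pinhole camera model, not the authors' pipeline), LiDAR points expressed in the camera frame can be projected into a sparse depth image that would subsequently be densified by up-sampling.

```python
import numpy as np

def project_to_depth_image(points_cam, K, height, width):
    """points_cam: N x 3 points in the camera frame; K: 3 x 3 intrinsics.
    Returns a sparse depth image (metres), zero where no point projects."""
    depth = np.zeros((height, width))
    z = points_cam[:, 2]
    valid = z > 0                              # keep points in front of the camera
    uv = (K @ points_cam[valid].T) / z[valid]  # perspective division
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth
```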
APA, Harvard, Vancouver, ISO, and other styles
49

Bui, Minh-Quan Viet, Duc Tuan Ngo, Hoang-Anh Pham, and Duc Dung Nguyen. "GAC3D: improving monocular 3D object detection with ground-guide model and adaptive convolution." PeerJ Computer Science 7 (October 6, 2021): e686. http://dx.doi.org/10.7717/peerj-cs.686.

Full text
Abstract:
Monocular 3D object detection has recently become prevalent in autonomous driving and navigation applications because of its cost-efficiency and ease of integration into existing vehicles. The most challenging task in monocular vision is estimating a reliable object location, because of the lack of depth information in RGB images. Many methods tackle this ill-posed problem by directly regressing the object’s depth or by taking a depth map as a supplementary input to enhance the model’s results. However, the performance then relies heavily on the quality of the estimated depth map, which is biased toward the training data. In this work, we propose a depth-adaptive convolution to replace the traditional 2D convolution and deal with the divergent context of the image’s features. This leads to significant improvements in both training convergence and testing accuracy. Second, we propose a ground plane model that utilizes geometric constraints in the pose estimation process. With the new method, named GAC3D, we achieve better detection results. We demonstrate our approach on the KITTI 3D Object Detection benchmark, where it outperforms existing monocular methods.
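The ground-plane constraint mentioned above rests on a simple geometric fact; the snippet below illustrates the basic version of it (flat road, zero camera pitch, made-up KITTI-like numbers) and is not the GAC3D formulation itself.

```python
def depth_from_ground_contact(v_bottom, fy, cy, camera_height):
    """Depth (m) of a ground-contact point seen at image row v_bottom,
    assuming a flat ground plane and zero camera pitch."""
    return fy * camera_height / (v_bottom - cy)

# illustrative values: focal length ~720 px, principal row ~185 px,
# camera mounted ~1.65 m above the road
print(depth_from_ground_contact(v_bottom=300.0, fy=720.0, cy=185.0,
                                camera_height=1.65))   # ~10.3 m
```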
APA, Harvard, Vancouver, ISO, and other styles
50

Irawan, Sudra, Yeni Rokhayati, and Satriya Bayu Aji. "An Analysis of the Accuracy of Time Domain 3D Image Geology Model Resulted from PSTM and Depth Domain 3D Image Geology Model Resulted from PSDM in Oil and Gas Exploration." Journal of Geoscience, Engineering, Environment, and Technology 4, no. 1 (March 1, 2019): 1. http://dx.doi.org/10.25299/jgeet.2019.4.1.2121.

Full text
Abstract:
This study aims to obtain a geological model that is close to the true subsurface and to compare the accuracy of the time-domain 3D image produced by PSTM with the depth-domain 3D image produced by PSDM. Three criteria determine the accuracy of an interval velocity model in producing a geological model: depth gathers that are flat, semblance that coincides with the zero residual move-out axis, and a depth image that conforms to the well markers (well-seismic tie). The analytical method employed is horizon-based tomography, which corrects seismic travel-time errors along the analyzed horizon; reducing travel-time errors in turn decreases depth errors. This improvement is expected to provide correct information about subsurface geological conditions. The results showed that the depth-domain image generated by the PSDM process represents the actual geological model better than the time-domain image produced by the PSTM process, as evidenced by sharper reflector continuity, a reduced pull-up effect, and higher resolution.
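A small, generic illustration of why the interval-velocity model is central when moving from the time domain (PSTM) to the depth domain (PSDM): two-way travel times are converted to depth layer by layer. The layer times and velocities below are invented for the example and are unrelated to the paper's data.

```python
def twt_to_depth(two_way_times_s, interval_velocities_mps):
    """Cumulative depth (m) at the base of each layer, from two-way times."""
    depth, depths, prev_t = 0.0, [], 0.0
    for t, v in zip(two_way_times_s, interval_velocities_mps):
        depth += 0.5 * (t - prev_t) * v   # one-way interval time x interval velocity
        depths.append(depth)
        prev_t = t
    return depths

print(twt_to_depth([0.8, 1.4, 2.1], [1800.0, 2400.0, 3200.0]))  # [720, 1440, 2560]
```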
APA, Harvard, Vancouver, ISO, and other styles