Follow this link to see other types of publications on the topic: Reconstruction 3D de la scene.

Journal articles on the topic "Reconstruction 3D de la scene"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Reconstruction 3D de la scene".

Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Wen, Mingyun, and Kyungeun Cho. "Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes". Mathematics 11, no. 2 (January 12, 2023): 403. http://dx.doi.org/10.3390/math11020403.

Full text
Abstract
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object, rather than individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the image conversion process from a 3D scene to a 2D image is irreversible, and the projection from 3D to 2D reduces a dimension. To alleviate the effects of dimension reduction, we proposed a module to generate depth features that can aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders that estimate 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach demonstrated superior performance in generating complete meshes compared to methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions), as well as producing more accurate shapes compared to previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it could effectively improve the object-aware 3D scene reconstruction performance over existing methods.
2

Guo, Rui Bin, Tao Guan, Dong Xiang Zhou, Ke Ju Peng, and Wei Hong Fan. "Efficient Multi-Scale Registration of 3D Reconstructions Based on Camera Center Constraint". Advanced Materials Research 998-999 (July 2014): 1018–23. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.1018.

Full text
Abstract
Recent approaches for reconstructing 3D scenes from image collections only produce single scene models. To build a unified scene model that contains multiple subsets, we present a novel method for registering 3D scene reconstructions at different scales. It first normalizes the scales of the models, building on similarity reconstruction constrained by the 3D positions of shared cameras. We then use the Cayley transform to fit the coordinate-transformation matrix for the models at normalized scales. The experimental results show the effectiveness and scalability of the proposed approach.
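
To make the scale-normalization step concrete, here is a minimal numpy sketch of the general idea: the relative scale is estimated from the spread of the shared camera centers, and a standard Procrustes/Umeyama alignment recovers rotation and translation. This is an illustrative stand-in, not the authors' implementation (which fits the transformation via the Cayley transform); all names are hypothetical.

    import numpy as np

    def align_reconstructions(ca, cb):
        """Align reconstruction B to A given the shared cameras' centers.

        ca, cb: (N, 3) arrays of the same cameras' centers in each model.
        Returns scale s, rotation R, translation t with ca ~ s * R @ cb + t.
        """
        mu_a, mu_b = ca.mean(0), cb.mean(0)
        A, B = ca - mu_a, cb - mu_b
        # Relative scale from the RMS spread of the shared centers.
        s = np.sqrt((A ** 2).sum() / (B ** 2).sum())
        # Rotation via SVD of the cross-covariance (Kabsch/Umeyama).
        U, _, Vt = np.linalg.svd(A.T @ (s * B))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
        t = mu_a - s * (R @ mu_b)
        return s, R, t
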
3

Jang, Hyeonjoong, Andréas Meuleman, Dahyun Kang, Donggun Kim, Christian Richardt, and Min H. Kim. "Egocentric scene reconstruction from an omnidirectional video". ACM Transactions on Graphics 41, no. 4 (July 2022): 1–12. http://dx.doi.org/10.1145/3528223.3530074.

Full text
Abstract
Omnidirectional videos capture environmental scenes effectively, but they have rarely been used for geometry reconstruction. In this work, we propose an egocentric 3D reconstruction method that can acquire scene geometry with high accuracy from a short egocentric omnidirectional video. To this end, we first estimate per-frame depth using a spherical disparity network. We then fuse per-frame depth estimates into a novel spherical binoctree data structure that is specifically designed to tolerate spherical depth estimation errors. By subdividing the spherical space into binary tree and octree nodes that represent spherical frustums adaptively, the spherical binoctree effectively enables egocentric surface geometry reconstruction for environmental scenes while simultaneously assigning high-resolution nodes for closely observed surfaces. This makes it possible to reconstruct an entire scene from a short video captured with a small camera trajectory. Experimental results validate the effectiveness and accuracy of our approach for reconstructing the 3D geometry of environmental scenes from short egocentric omnidirectional video inputs. We further demonstrate various applications using a conventional omnidirectional camera, including novel-view synthesis, object insertion, and relighting of scenes using reconstructed 3D models with texture.
4

Buck, Ursula. "3D crime scene reconstruction". Forensic Science International 304 (November 2019): 109901. http://dx.doi.org/10.1016/j.forsciint.2019.109901.

Full text
5

Gao, Huanbing, Lei Liu, Ya Tian, and Shouyin Lu. "3D Reconstruction for Road Scene with Obstacle Detection Feedback". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 12 (August 27, 2018): 1855021. http://dx.doi.org/10.1142/s0218001418550212.

Full text
Abstract
This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver-assistance systems, and car navigation systems. However, errors often arise during 3D reconstruction due to shadows cast by moving objects in the road scene. The presented method with obstacle-detection feedback avoids this problem. Firstly, this paper offers a framework for the 3D reconstruction of road scenes by laser scanning and vision. A calibration method based on the location of the horizon is proposed, and a method of attitude-angle measurement based on the vanishing point is proposed to revise the 3D reconstruction result. Secondly, the reconstruction framework is extended by integrating object recognition that automatically detects and discriminates obstacles in the input video streams by a RANSAC approach and a threshold filter, and localizes them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. The experimental results verify the feasibility and practicability of the proposed method.
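
As an illustration of the RANSAC-plus-threshold idea mentioned above for separating obstacles from the road surface, the following numpy sketch fits a dominant ground plane and flags off-plane points as obstacle candidates. It is a generic sketch under assumed conventions, not the paper's implementation.

    import numpy as np

    def ransac_ground_plane(points, iters=500, tol=0.05, seed=0):
        """Fit the dominant plane n.x + d = 0 to a (N, 3) point cloud.

        Points farther than tol (metres) from the plane are obstacle
        candidates. Returns (n, d, inlier_mask).
        """
        rng = np.random.default_rng(seed)
        best_count, best_plane, best_mask = -1, None, None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:                 # degenerate (collinear) sample
                continue
            n = n / norm
            d = -n @ p0
            mask = np.abs(points @ n + d) < tol
            if mask.sum() > best_count:
                best_count, best_plane, best_mask = mask.sum(), (n, d), mask
        n, d = best_plane
        return n, d, best_mask
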
6

Liu, Yilin, Liqiang Lin, Yue Hu, Ke Xie, Chi-Wing Fu, Hao Zhang, and Hui Huang. "Learning Reconstructability for Drone Aerial Path Planning". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–17. http://dx.doi.org/10.1145/3550454.3555433.

Full text
Abstract
We introduce the first learning-based reconstructability predictor to improve view and path planning for large-scale 3D urban scene acquisition using unmanned drones. In contrast to previous heuristic approaches, our method learns a model that explicitly predicts how well a 3D urban scene will be reconstructed from a set of viewpoints. To make such a model trainable and simultaneously applicable to drone path planning, we simulate the proxy-based 3D scene reconstruction during training to set up the prediction. Specifically, the neural network we design is trained to predict the scene reconstructability as a function of the proxy geometry, a set of viewpoints, and optionally a series of scene images acquired in flight. To reconstruct a new urban scene, we first build the 3D scene proxy, then rely on the predicted reconstruction quality and uncertainty measures by our network, based on the proxy geometry, to guide the drone path planning. We demonstrate that our data-driven reconstructability predictions are more closely correlated to the true reconstruction quality than prior heuristic measures. Further, our learned predictor can be easily integrated into existing path planners to yield improvements. Finally, we devise a new iterative view planning framework, based on the learned reconstructability, and show superior performance of the new planner when reconstructing both synthetic and real scenes.
7

Dong, Bo, Kaiqiang Chen, Zhirui Wang, Menglong Yan, Jiaojiao Gu, and Xian Sun. "MM-NeRF: Large-Scale Scene Representation with Multi-Resolution Hash Grid and Multi-View Priors Features". Electronics 13, no. 5 (February 22, 2024): 844. http://dx.doi.org/10.3390/electronics13050844.

Full text
Abstract
Reconstructing large-scale scenes using Neural Radiance Fields (NeRFs) is a research hotspot in 3D computer vision. Existing MLP (multi-layer perception)-based methods often suffer from issues of underfitting and a lack of fine details in rendering large-scale scenes. Popular solutions are to divide the scene into small areas for separate modeling or to increase the layer scale of the MLP network. However, the subsequent problem is that the training cost increases. Moreover, reconstructing large scenes, unlike object-scale reconstruction, involves a geometrically considerable increase in the quantity of view data if the prior information of the scene is not effectively utilized. In this paper, we propose an innovative method named MM-NeRF, which integrates efficient hybrid features into the NeRF framework to enhance the reconstruction of large-scale scenes. We propose employing a dual-branch feature capture structure, comprising a multi-resolution 3D hash grid feature branch and a multi-view 2D prior feature branch. The 3D hash grid feature models geometric details, while the 2D prior feature supplements local texture information. Our experimental results show that such integration is sufficient to render realistic novel views with fine details, forming a more accurate geometric representation. Compared with representative methods in the field, our method significantly improves the PSNR (Peak Signal-to-Noise Ratio) by approximately 5%. This remarkable progress underscores the outstanding contribution of our method in the field of large-scene radiance field reconstruction.
8

Tingdahl, David, and Luc Van Gool. "An Enhanced On-Line Service for 3D Model Construction from Photographs". International Journal of Heritage in the Digital Era 1, no. 2 (June 2012): 277–94. http://dx.doi.org/10.1260/2047-4970.1.2.277.

Full text
Abstract
We present a web service for image based 3D reconstruction. The system allows a cultural heritage professional to easily create a 3D model of a scene or object out of images taken from different viewpoints. The user uploads the images to our server on which all processing takes place, and the final result can be downloaded upon completion. Any consumer-class digital camera can be used, and the system is free to use for non-commercial purposes. The service includes a number of innovations to greatly simplify the process of taking pictures suitable for reconstruction. In particular, we are able to construct models of planar scenes and from photographs shot using a turntable, and at varying zoom levels. Although the first two may seem like particularly simple cases, they cause some mathematical issues with traditional self-calibration techniques. We handle these cases by taking advantage of a new automatic camera calibration method that uses meta-data stored with the images. For fixed-lens camera setups, we can also reuse previously computed calibrations to support otherwise degenerate scenes. Furthermore, we can automatically compute the relative scale and transformation between two reconstructions of the same scene, merging two reconstructions into one. We demonstrate the capabilities of the system by two case studies: turntable reconstruction of various objects and the reconstruction of a cave, with walls and roof integrated into a complete model.
9

Wang, Wei, Fengjiao Gao, and Yongliang Shen. "Res-NeuS: Deep Residuals and Neural Implicit Surface Learning for Multi-View Reconstruction". Sensors 24, no. 3 (January 29, 2024): 881. http://dx.doi.org/10.3390/s24030881.

Full text
Abstract
Surface reconstruction using neural networks has proven effective in reconstructing dense 3D surfaces through image-based neural rendering. Nevertheless, current methods are challenging when dealing with the intricate details of large-scale scenes. The high-fidelity reconstruction performance of neural rendering is constrained by the view sparsity and structural complexity of such scenes. In this paper, we present Res-NeuS, a method combining ResNet-50 and neural surface rendering for dense 3D reconstruction. Specifically, we present appearance embeddings: ResNet-50 is used to extract the appearance depth features of an image to further capture more scene details. We interpolate points near the surface and optimize their weights for the accurate localization of 3D surfaces. We introduce photometric consistency and geometric constraints to optimize 3D surfaces and eliminate geometric ambiguity existing in current methods. Finally, we design a 3D geometry automatic sampling to filter out uninteresting areas and reconstruct complex surface details in a coarse-to-fine manner. Comprehensive experiments demonstrate Res-NeuS’s superior capability in the reconstruction of 3D surfaces in complex, large-scale scenes, and the harmful distance of the reconstructed 3D model is 0.4 times that of general neural rendering 3D reconstruction methods and 0.6 times that of traditional 3D reconstruction methods.
10

Xia, Wei, Rongfeng Lu, Yaoqi Sun, Chenghao Xu, Kun Lv, Yanwei Jia, Zunjie Zhu, and Bolun Zheng. "3D Indoor Scene Completion via Room Layout Estimation". Journal of Physics: Conference Series 2025, no. 1 (September 1, 2021): 012102. http://dx.doi.org/10.1088/1742-6596/2025/1/012102.

Full text
Abstract
Recent advances in 3D reconstructions have shown impressive progress in 3D indoor scene reconstruction, enabling automatic scene modeling; however, holes in the 3D scans hinder the further usage of the reconstructed models. Thus, we propose the task of layout-based hole filling for the incomplete indoor scene scans: from the mesh of a scene model, we estimate the scene layout by detecting the principal planes of a scene and leverage the layout as the prior for the accurate completion of planar regions. Experiments show that guiding scene model completion through the scene layout prior significantly outperforms the alternative approach to the task of scene model completion.
11

Wang, Tengfei, Qingdong Wang, Haibin Ai, and Li Zhang. "Semantics-and-Primitives-Guided Indoor 3D Reconstruction from Point Clouds". Remote Sensing 14, no. 19 (September 27, 2022): 4820. http://dx.doi.org/10.3390/rs14194820.

Full text
Abstract
The automatic 3D reconstruction of indoor scenes is of great significance in the application of 3D-scene understanding. The existing methods have poor resilience to incomplete and noisy point clouds, which leads to low-quality results and tedious post-processing. Therefore, the objective of this work is to automatically reconstruct indoor scenes from an incomplete and noisy point cloud based on semantics and primitives. In this paper, we propose a semantics-and-primitives-guided indoor 3D reconstruction method. Firstly, a local, fully connected graph neural network is designed for semantic segmentation. Secondly, based on the enumerable features of indoor scenes, a primitive-based reconstruction method is proposed, which retrieves the most similar model in a 3D-ESF indoor model library by using ESF descriptors and semantic labels. Finally, a coarse-to-fine registration method is proposed to register the model into the scene. The results indicate that our method can achieve high-quality results while retaining better resilience to the incompleteness and noise of the point cloud. It is concluded that the proposed method is practical and is able to automatically reconstruct an indoor scene from a point cloud with incompleteness and noise.
12

Li, Yuan, and Jiangming Kan. "CGAN-Based Forest Scene 3D Reconstruction from a Single Image". Forests 15, no. 1 (January 18, 2024): 194. http://dx.doi.org/10.3390/f15010194.

Full text
Abstract
Forest scene 3D reconstruction serves as the fundamental basis for crucial applications such as forest resource inventory, forestry 3D visualization, and the perceptual capabilities of intelligent forestry robots in operational environments. However, traditional 3D reconstruction methods like LiDAR present challenges primarily because of their lack of portability. Additionally, they encounter complexities related to feature point extraction and matching within multi-view stereo vision sensors. In this research, we propose a new method that not only reconstructs the forest environment but also performs a more detailed tree reconstruction in the scene using conditional generative adversarial networks (CGANs) based on a single RGB image. Firstly, we introduced a depth estimation network based on a CGAN. This network aims to reconstruct forest scenes from images and has demonstrated remarkable performance in accurately reconstructing intricate outdoor environments. Subsequently, we designed a new tree silhouette depth map to represent the tree’s shape as derived from the tree prediction network. This network aims to accomplish a detailed 3D reconstruction of individual trees masked by instance segmentation. Our approach underwent validation using the Cityscapes and Make3D outdoor datasets and exhibited exceptional performance compared with state-of-the-art methods, such as GCNDepth. It achieved a relative error as low as 8% (with an absolute error of 1.76 cm) in estimating diameter at breast height (DBH). Remarkably, our method outperforms existing approaches for single-image reconstruction. It stands as a cost-effective and user-friendly alternative to conventional forest survey methods like LiDAR and SFM techniques. The significance of our method lies in its contribution to technical support, enabling the efficient and detailed utilization of 3D forest scene reconstruction for various applications.
13

Li, Yao, Yue Qi, Chen Wang, and Yongtang Bao. "A Cluster-Based 3D Reconstruction System for Large-Scale Scenes". Sensors 23, no. 5 (February 21, 2023): 2377. http://dx.doi.org/10.3390/s23052377.

Full text
Abstract
The reconstruction of realistic large-scale 3D scene models using aerial images or videos has significant applications in smart cities, surveying and mapping, the military and other fields. In the current state-of-the-art 3D-reconstruction pipeline, the massive scale of the scene and the enormous amount of input data are still considerable obstacles to the rapid reconstruction of large-scale 3D scene models. In this paper, we develop a professional system for large-scale 3D reconstruction. First, in the sparse point-cloud reconstruction stage, the computed matching relationships are used as the initial camera graph and divided into multiple subgraphs by a clustering algorithm. Multiple computational nodes execute the local structure-from-motion (SFM) technique, and local cameras are registered. Global camera alignment is achieved by integrating and optimizing all local camera poses. Second, in the dense point-cloud reconstruction stage, the adjacency information is decoupled from the pixel level by red-and-black checkerboard grid sampling. The optimal depth value is obtained using normalized cross-correlation (NCC). Additionally, during the mesh-reconstruction stage, feature-preserving mesh simplification, Laplace mesh-smoothing and mesh-detail-recovery methods are used to improve the quality of the mesh model. Finally, the above algorithms are integrated into our large-scale 3D-reconstruction system. Experiments show that the system can effectively improve the reconstruction speed of large-scale 3D scenes.
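
For reference, the normalized cross-correlation (NCC) score used to pick the optimal depth in the dense stage is standard; a minimal numpy version (illustrative only):

    import numpy as np

    def ncc(patch_ref, patch_src):
        """Normalized cross-correlation of two equally sized patches.

        Returns a score in [-1, 1]; the depth hypothesis that maximizes
        the score across source views is kept as photo-consistent.
        """
        a = patch_ref - patch_ref.mean()
        b = patch_src - patch_src.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 1e-12 else 0.0
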
14

Galanakis, George, Xenophon Zabulis, Theodore Evdaimon, Sven-Eric Fikenscher, Sebastian Allertseder, Theodora Tsikrika, and Stefanos Vrochidis. "A Study of 3D Digitisation Modalities for Crime Scene Investigation". Forensic Sciences 1, no. 2 (July 30, 2021): 56–85. http://dx.doi.org/10.3390/forensicsci1020008.

Full text
Abstract
A valuable aspect during crime scene investigation is the digital documentation of the scene. Traditional means of documentation include photography and in situ measurements from experts for further analysis. Although 3D reconstruction of pertinent scenes has already been explored as a complementary tool in investigation pipelines, such technology is considered unfamiliar and not yet widely adopted. This is explained by the expensive and specialised digitisation equipment that is available so far. However, the emergence of high-precision but low-cost devices capable of scanning scenes or objects in 3D has been proven as a reliable alternative to their counterparts. This paper summarises and analyses the state-of-the-art technologies in scene documentation using 3D digitisation and assesses the usefulness in typical police-related situations and the forensics domain in general. We present the methodology for acquiring data for 3D reconstruction of various types of scenes. Emphasis is placed on the applicability of each technique in a wide range of situations, ranging in type and size. The application of each reconstruction method is considered in this context and compared with respect to additional constraints, such as time availability and simplicity of operation of the corresponding scanning modality. To further support our findings, we release a multi-modal dataset obtained from a hypothetical indoor crime scene to the public.
15

Li, Jianwei, Wei Gao, Heping Li, Fulin Tang, and Yihong Wu. "Robust and Efficient CPU-Based RGB-D Scene Reconstruction". Sensors 18, no. 11 (October 28, 2018): 3652. http://dx.doi.org/10.3390/s18113652.

Full text
Abstract
3D scene reconstruction is an important topic in computer vision. A complete scene is reconstructed from views acquired along the camera trajectory, each view containing a small part of the scene. Tracking in textureless scenes is well known to be a Gordian knot of camera tracking, and how to obtain accurate 3D models quickly is a major challenge for existing systems. For the application of robotics, we propose a robust CPU-based approach to reconstruct indoor scenes efficiently with a consumer RGB-D camera. The proposed approach bridges feature-based camera tracking and volumetric-based data integration together and has a good reconstruction performance in terms of both robustness and efficiency. The key points in our approach include: (i) a robust and fast camera tracking method combining points and edges, which improves tracking stability in textureless scenes; (ii) an efficient data fusion strategy to select camera views and integrate RGB-D images on multiple scales, which enhances the efficiency of volumetric integration; (iii) a novel RGB-D scene reconstruction system, which can be quickly implemented on a standard CPU. Experimental results demonstrate that our approach reconstructs scenes with higher robustness and efficiency compared to state-of-the-art reconstruction systems.
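
The volumetric data integration in (ii) is commonly implemented as a truncated signed distance function (TSDF) update with a running weighted average. The sketch below shows that generic update under assumed conventions (metric depth, world-to-camera extrinsics); it is not the paper's exact implementation.

    import numpy as np

    def integrate_tsdf(tsdf, weight, vox_xyz, depth, K, T_wc, trunc=0.04):
        """Fuse one depth image into a TSDF volume.

        tsdf, weight: flat (M,) arrays over M voxels; vox_xyz: (M, 3) voxel
        centers in world coordinates; depth: (H, W) metric depth image;
        K: 3x3 intrinsics; T_wc: 4x4 world-to-camera pose; trunc: truncation (m).
        """
        cam = (T_wc[:3, :3] @ vox_xyz.T + T_wc[:3, 3:4]).T
        z = cam[:, 2]
        zs = np.maximum(z, 1e-9)               # guard against divide-by-zero
        u = np.round(K[0, 0] * cam[:, 0] / zs + K[0, 2]).astype(int)
        v = np.round(K[1, 1] * cam[:, 1] / zs + K[1, 2]).astype(int)
        H, W = depth.shape
        ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        d = np.zeros_like(z)
        d[ok] = depth[v[ok], u[ok]]
        ok &= d > 0
        sdf = np.clip((d - z) / trunc, -1.0, 1.0)
        upd = ok & (sdf > -1.0)                # skip voxels far behind surface
        tsdf[upd] = (tsdf[upd] * weight[upd] + sdf[upd]) / (weight[upd] + 1)
        weight[upd] += 1
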
16

Li, Xiaoli. "A KD-tree and random sample consensus-based 3D reconstruction model for 2D sports stadium images". Mathematical Biosciences and Engineering 20, no. 12 (2023): 21432–50. http://dx.doi.org/10.3934/mbe.2023948.

Full text
Abstract
The application of 3D reconstruction technology to building images is a novel research direction. In such scenes, reconstruction with proper building details remains challenging. To deal with this issue, I propose a KD-tree and random sample consensus-based 3D reconstruction model for 2D building images. Specifically, the improved KD-tree algorithm combined with the random sample consensus algorithm achieves a better matching rate for the two-dimensional image data extraction of the stadium scene. The number of discrete areas in the stadium scene increases with the number of images. The sparse 3D models can be transformed into dense 3D models to some extent using the screening method. In addition, we carry out simulation experiments to assess the performance of the proposed algorithm on stadium scenes. The results reflect that the error of the proposal is significantly lower than that of the comparison algorithms. Therefore, it is proven that the proposal is well suited for 3D reconstruction of building images.
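
A minimal sketch of the KD-tree matching stage, assuming float descriptor arrays and Lowe's ratio test; the paper's improved KD-tree variant and the subsequent RANSAC verification are not reproduced here.

    import numpy as np
    from scipy.spatial import cKDTree

    def match_descriptors(desc_a, desc_b, ratio=0.75):
        """KD-tree nearest-neighbour matching with a ratio test.

        desc_a, desc_b: (Na, D) and (Nb, D) descriptor arrays. Keeps pairs
        whose best match is clearly better than the second best; RANSAC
        would then prune the survivors geometrically.
        """
        tree = cKDTree(desc_b)
        dist, idx = tree.query(desc_a, k=2)    # two nearest neighbours each
        keep = dist[:, 0] < ratio * dist[:, 1]
        return [(int(i), int(idx[i, 0])) for i in np.nonzero(keep)[0]]
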
17

Liu, Yilin, Ruiqi Cui, Ke Xie, Minglun Gong, and Hui Huang. "Aerial path planning for online real-time exploration and offline high-quality reconstruction of large-scale urban scenes". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–16. http://dx.doi.org/10.1145/3478513.3480491.

Full text
Abstract
Existing approaches have shown that, through carefully planning flight trajectories, images captured by Unmanned Aerial Vehicles (UAVs) can be used to reconstruct high-quality 3D models for real environments. These approaches greatly simplify and cut the cost of large-scale urban scene reconstruction. However, to properly capture height discontinuities in urban scenes, all state-of-the-art methods require prior knowledge of scene geometry; hence, additional preprocessing steps are needed before performing the actual image acquisition flights. To address this limitation and to make urban modeling techniques even more accessible, we present a real-time explore-and-reconstruct planning algorithm that does not require any prior knowledge of the scenes. Using only captured 2D images, we estimate 3D bounding boxes for buildings on-the-fly and use them to guide online path planning for both scene exploration and building observation. Experimental results demonstrate that the aerial paths planned by our algorithm in real time for unknown environments support reconstructing 3D models of comparable quality and lead to shorter flight air time.
18

Zhang, Han, Yucong Yao, Ke Xie, Chi-Wing Fu, Hao Zhang, and Hui Huang. "Continuous aerial path planning for 3D urban scene reconstruction". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–15. http://dx.doi.org/10.1145/3478513.3480483.

Full text
Abstract
We introduce the first path-oriented drone trajectory planning algorithm, which performs continuous (i.e., dense) image acquisition along an aerial path and explicitly factors path quality into an optimization along with scene reconstruction quality. Specifically, our method takes as input a rough 3D scene proxy and produces a drone trajectory and image capturing setup, which efficiently yields a high-quality reconstruction of the 3D scene based on three optimization objectives: one to maximize the amount of 3D scene information that can be acquired along the entirety of the trajectory, another to optimize the scene capturing efficiency by maximizing the scene information that can be acquired per unit length along the aerial path, and the last one to minimize the total turning angles along the aerial path, so as to reduce the number of sharp turns. Our search scheme is based on the rapidly-exploring random tree framework, resulting in a final trajectory as a single path through the search tree. Unlike state-of-the-art works, our joint optimization for view selection and path planning is performed in a single step. We comprehensively evaluate our method not only on benchmark virtual datasets as in existing works but also on several large-scale real urban scenes. We demonstrate that the continuous paths optimized by our method can effectively reduce onsite acquisition cost using drones, while achieving high-fidelity 3D reconstruction, compared to existing planning methods and oblique photography, a mature and popular industry solution.
19

Sui, Haigang, Hao Zhang, Guohua Gou, Xuanhao Wang, Sheng Wang, Fei Li, and Junyi Liu. "Multi-UAV Cooperative and Continuous Path Planning for High-Resolution 3D Scene Reconstruction". Drones 7, no. 9 (August 22, 2023): 544. http://dx.doi.org/10.3390/drones7090544.

Full text
Abstract
Unmanned aerial vehicles (UAVs) are extensively employed for urban image captures and the reconstruction of large-scale 3D models due to their affordability and versatility. However, most commercial flight software lack support for the adaptive capture of multi-view images. Furthermore, the limited performance and battery capacity of a single UAV hinder efficient image capturing of large-scale scenes. To address these challenges, this paper presents a novel method for multi-UAV continuous trajectory planning aimed at the image captures and reconstructions of a scene. Our primary contribution lies in the development of a path planning framework rooted in task and search principles. Within this framework, we initially ascertain optimal task locations for capturing images by assessing scene reconstructability, thereby enhancing the overall quality of reconstructions. Furthermore, we curtail energy costs of trajectories by allocating task sequences, characterized by minimal corners and lengths, among multiple UAVs. Ultimately, we integrate considerations of energy costs, safety, and reconstructability into a unified optimization process, facilitating the search for optimal paths for multiple UAVs. Empirical evaluations demonstrate the efficacy of our approach in facilitating collaborative full-scene image captures by multiple UAVs, achieving low energy costs while attaining high-quality 3D reconstructions.
20

Nor'a, Muhammad Nur Affendy, Fazliaty Edora Fadzli, and Ajune Wanis Ismail. "A Review on Real-Time 3D Reconstruction Methods in Dynamic Scene". International Journal of Innovative Computing 12, no. 1 (November 16, 2021): 91–97. http://dx.doi.org/10.11113/ijic.v12n1.317.

Full text
Abstract
Advancements in consumer-grade, readily available RGB-D capture devices have sparked researchers' interest in 3D reconstruction, particularly of dynamic scenes, as well as in its quality and speed. Recent advances in such devices support the development of various applications such as teleportation, gaming, volumetric video, and CG films. This paper reviews real-time 3D reconstruction methods for dynamic scenes in virtual environments. It provides insight into how achievements in real-time 3D reconstruction enable reconstruction systems to be used in real-time technologies such as virtual reality and augmented reality applications.
21

Roessle, Barbara, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, and Matthias Niessner. "GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields". ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–14. http://dx.doi.org/10.1145/3618402.

Full text
Abstract
Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes. Our goal is to mitigate these imperfections from various sources with a joint solution: we take advantage of the ability of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs. To this end, we learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction, thus improving realism in a 3D-consistent fashion. Thereby, rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints. In addition, we condition a generator with multi-resolution NeRF renderings which is adversarially trained to further improve rendering quality. We demonstrate that our approach significantly improves rendering quality, e.g., nearly halving LPIPS scores compared to Nerfacto while at the same time improving PSNR by 1.4dB on the advanced indoor scenes of Tanks and Temples.
22

Zhang, Bao Feng, Jia Lu Li, and Xiao Ling Zhang. "Application of SIFT Algorithm in 3D Scene Reconstruction". Advanced Materials Research 616-618 (December 2012): 1956–60. http://dx.doi.org/10.4028/www.scientific.net/amr.616-618.1956.

Full text
Abstract
Applying the SIFT algorithm in 3D scene reconstruction can improve system accuracy. Firstly, the paper analysed the characteristics of the SIFT algorithm. Then the 3D scene reconstruction process was introduced briefly. At last, the experimental images were matched by the SIFT algorithm, and a suggested value range was given by comparing the matching results for different nearest-neighbour ratios. The experimental results show that matching by the SIFT algorithm has excellent accuracy, and it can be further applied to the 3D scene reconstruction system.
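
The nearest-neighbour-ratio matching the experiments compare can be reproduced with OpenCV's stock SIFT implementation; a minimal example, using the common 0.75 ratio as an assumed starting value (file names are placeholders):

    import cv2

    img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                   # OpenCV >= 4.4
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]  # ratio test
    print(f"{len(good)} matches kept")
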
23

Shen, Xi, and Wanlin Li. "P‐2.11: Research on Scene 3d Reconstruction Technology Based on Multi‐sensor Fusion". SID Symposium Digest of Technical Papers 54, S1 (April 2023): 517–21. http://dx.doi.org/10.1002/sdtp.16345.

Full text
Abstract
A 3D scene provides reliable prior information for the autonomous decision-making of a mobile robot, which is a prerequisite for intelligent mobile robots. 3D reconstruction technology realizes data fusion by sensing the environment and registering sensor data multiple times into the same coordinate system to generate an offline 3D scene. 3D reconstruction is widely used in cultural-relic reconstruction, AR tourism, automatic driving, smart homes and video entertainment. This paper mainly fuses lidar and IMU data to realize 3D reconstruction of a scene. A front-end odometer based on the fusion of lidar and IMU, and a global pose optimization and mapping system based on loop-closure detection, are designed. This improves the accuracy of the simultaneous localization and mapping system and ensures real-time performance.
24

Deng, Bao Song, Rong Huan Yu, Tie Qing Deng, and Ling Da Wu. "A 3D Reconstruction Framework from Image Sequences Based on Point and Line Features". Advanced Materials Research 317-319 (August 2011): 962–67. http://dx.doi.org/10.4028/www.scientific.net/amr.317-319.962.

Full text
Abstract
A novel three-dimensional reconstruction framework from wide-baseline images is proposed based on point and line features. After detecting and matching features, the relations between discrete images are computed and refined according to multi-view geometric constraints, and both the structure of the scene and the motion of the cameras are retrieved, where we employ a procedure of Euclidean reconstruction based on approximate camera internal parameters and bundle adjustment. Based on the retrieved motion and the correspondence of line features, a 3D line reconstruction scheme is put forward to help recover the regular structure and topology of the scene. With some manual interaction, mesh models of the scene are generated, and a rectification method for perspective images is used to acquire texture patches. Finally, an interactive modeling prototype system from multiple images is designed and implemented. Real scenes and augmented reality applications demonstrate the feasibility, correctness and accuracy of our framework.
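
The bundle adjustment used in the Euclidean-reconstruction step minimizes reprojection error over camera poses and 3D points. Below is a compact, generic residual sketch for scipy's least_squares (axis-angle pose parameterization and shared intrinsics K assumed; not the authors' code):

    import numpy as np
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs, K):
        """Observed minus projected pixels for every 2D observation.

        params packs 6 values per camera (rotation vector, translation)
        followed by 3 values per 3D point; obs is (M, 2) observed pixels.
        """
        cams = params[:n_cams * 6].reshape(n_cams, 6)
        pts = params[n_cams * 6:].reshape(n_pts, 3)
        R = Rotation.from_rotvec(cams[cam_idx, :3])
        cam_pts = R.apply(pts[pt_idx]) + cams[cam_idx, 3:]
        proj = (K @ cam_pts.T).T
        return (proj[:, :2] / proj[:, 2:3] - obs).ravel()

    # scipy.optimize.least_squares(reprojection_residuals, x0, method="trf",
    #     args=(n_cams, n_pts, cam_idx, pt_idx, obs, K))
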
25

Mahmoud, Mostafa, Wu Chen, Yang Yang, Tianxia Liu, and Yaxin Li. "Leveraging Deep Learning for Automated Reconstruction of Indoor Unstructured Elements in Scan-to-BIM". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1-2024 (May 10, 2024): 479–86. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-2024-479-2024.

Full text
Abstract
Achieving automatic 3D reconstruction for indoor scenes is extremely useful in the field of scene understanding. Building information modeling (BIM) models are essential for lowering project costs, assisting in building planning and renovations, as well as improving building management efficiency. However, nearly all current available scan-to-BIM approaches employ manual or semi-automatic methods. These approaches concentrate solely on significant structured objects, neglecting other unstructured elements such as furniture. The limitation arises from challenges in modeling incomplete point clouds of obstructed objects and capturing indoor scene details. Therefore, this research introduces an innovative and effective reconstruction framework based on deep learning semantic segmentation and model-driven techniques to address these limitations. The proposed framework utilizes wall segment recognition, feature extraction, opening detection, and automatic modeling to reconstruct 3D structured models of point clouds with different room layouts in both Manhattan and non-Manhattan architectures. Moreover, it provides 3D BIM models of actual unstructured elements by detecting objects, completing point clouds, establishing bounding boxes, determining type and orientation, and automatically generating 3D BIM models with a parametric algorithm implemented into the Revit software. We evaluated this framework using publicly available and locally generated point cloud datasets with varying furniture combinations and layout complexity. The results demonstrate the proposed framework's efficiency in reconstructing structured indoor elements, exhibiting completeness and geometric accuracy, and achieving precision and recall values greater than 98%. Furthermore, the generated unstructured 3D BIM models keep essential real-scene characteristics such as geometry, spatial locations, numerical aspects, various shapes, and orientations compared to literature methods.
26

Fan, Yiyan, Yang Zhou, and Zheng Yuan. "Interior Design Evaluation Based on Deep Learning: A Multi-Modal Fusion Evaluation Mechanism". Mathematics 12, no. 10 (May 16, 2024): 1560. http://dx.doi.org/10.3390/math12101560.

Full text
Abstract
The design of 3D scenes is of great significance, and one of the crucial areas is interior scene design. This study not only pertains to the living environment of individuals but also has applications in the design and development of virtual environments. Previous work on indoor scenes has focused on understanding and editing existing indoor scenes, such as scene reconstruction, segmentation tasks, texture, object localization, and rendering. In this study, we propose a novel task in the realm of indoor scene comprehension, amalgamating interior design principles with professional evaluation criteria: 3D indoor scene design assessment. Furthermore, we propose an approach using a transformer encoder–decoder architecture and a dual-graph convolutional network. Our approach facilitates users in posing text-based inquiries; accepts input in two modalities, point cloud representations of indoor scenes and textual queries; and ultimately generates a probability distribution indicating positive, neutral, and negative assessments of interior design. The proposed method uses separately pre-trained modules, including a 3D visual question-answering module and a dual-graph convolutional network for identifying emotional tendencies of text.
27

Hoegner, L., T. Abmayr, D. Tosic, S. Turzer, and U. Stilla. "FUSION OF 3D POINT CLOUDS WITH TIR IMAGES FOR INDOOR SCENE RECONSTRUCTION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 189–94. http://dx.doi.org/10.5194/isprs-archives-xlii-1-189-2018.

Full text
Abstract
Obtaining accurate 3D descriptions in the thermal infrared (TIR) is a quite challenging task due to the low geometric resolution of TIR cameras and the low number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of the limitations in 3D geometric accuracy. In the case of dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras and profile laser scanners is suitable. As a laser scanner is an active sensor in the visible red or near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This contribution focuses on the fusion of point clouds from terrestrial laser scanners and RGB cameras with thermal infrared images, mounted together on a robot, for indoor 3D reconstruction. The system is geometrically calibrated, including the lever arm between the different sensors. As the field of view differs between the sensors, they do not record the same scene points at exactly the same time. Thus, the 3D scene points of the laser scanner and the photogrammetric point cloud from the RGB camera have to be synchronized before point-cloud fusion and before adding the thermal channel to the 3D points.
28

Dmitriev, E. A., and V. V. Myasnikov. "Possibility estimation of 3D scene reconstruction from multiple images". Information Technology and Nanotechnology, no. 2391 (2019): 293–96. http://dx.doi.org/10.18287/1613-0073-2019-2391-293-296.

Full text
Abstract
This paper presents a pixel-by-pixel estimation of the possibility of 3D scene reconstruction from multiple images. The method estimates the number of conjugate pairs with convolutional neural networks for further 3D reconstruction using the classic approach. We considered neural networks that have shown good results on the semantic segmentation problem. The efficiency criterion of an algorithm is the resulting estimation accuracy. We conducted all experiments on images from the Unity 3D engine. The results of the experiments showed the effectiveness of our approach for the 3D scene reconstruction problem.
29

Lattanzi, David, and Gregory R. Miller. "3D Scene Reconstruction for Robotic Bridge Inspection". Journal of Infrastructure Systems 21, no. 2 (June 2015): 04014041. http://dx.doi.org/10.1061/(asce)is.1943-555x.0000229.

Full text
30

Bunschoten, Roland, and Ben Kröse. "3D scene reconstruction from cylindrical panoramic images". Robotics and Autonomous Systems 41, no. 2-3 (November 2002): 111–18. http://dx.doi.org/10.1016/s0921-8890(02)00257-9.

Full text
31

Wöhler, Christian, Pablo d’Angelo, Lars Krüger, Annika Kuhl, and Horst-Michael Groß. "Monocular 3D scene reconstruction at absolute scale". ISPRS Journal of Photogrammetry and Remote Sensing 64, no. 6 (November 2009): 529–40. http://dx.doi.org/10.1016/j.isprsjprs.2009.03.004.

Full text
32

Haitz, D., B. Jutzi, M. Ulrich, M. Jäger, and P. Hübner. "COMBINING HOLOLENS WITH INSTANT-NERFS: ADVANCED REAL-TIME 3D MOBILE MAPPING". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W1-2023 (May 25, 2023): 167–74. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w1-2023-167-2023.

Full text
Abstract
This work represents a large step into modern ways of fast 3D reconstruction based on RGB camera images. Utilizing a Microsoft HoloLens 2 as a multisensor platform that includes an RGB camera and an inertial measurement unit for SLAM-based camera-pose determination, we train a Neural Radiance Field (NeRF) as a neural scene representation in real-time with the acquired data from the HoloLens. The HoloLens is connected via Wifi to a high-performance PC that is responsible for the training and 3D reconstruction. After the data stream ends, the training is stopped and the 3D reconstruction is initiated, which extracts a point cloud of the scene. With our specialized inference algorithm, five million scene points can be extracted within 1 second. In addition, the point cloud also includes radiometry per point. Our method of 3D reconstruction outperforms grid point sampling with NeRFs by multiple orders of magnitude and can be regarded as a complete real-time 3D reconstruction method in a mobile mapping setup.
33

Xiong, Zi Ming, and Gang Wan. "An Approach to Automatic Great-Scene 3D Reconstruction Based on UAV Sequence Images". Applied Mechanics and Materials 229-231 (November 2012): 2294–97. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.2294.

Full text
Abstract
In this paper, we propose an approach to automatic great-scene 3D reconstruction based on UAV sequence images. In this method, Harris feature points and SIFT feature vectors are used to extract image features and achieve image matching; a quasi-perspective projection model and factorization are employed to calibrate the uncalibrated image sequences automatically; efficient suboptimal solutions to optimal triangulation are applied to obtain the coordinates of 3D points; a quasi-dense diffusion algorithm is employed to densify the 3D points; bundle adjustment is applied to improve the precision of the 3D points; and Poisson surface reconstruction is used to turn the 3D points into a mesh. This paper introduces the theory and technology of computer vision into great-scene 3D reconstruction, providing a new way to construct 3D scenes and a new way of applying UAV sequence images.
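
For reference, the triangulation step can be illustrated with the classic linear (DLT) two-view solver, which provides the least-squares starting point that bundle adjustment then refines; this generic sketch is not the paper's efficient suboptimal solver.

    import numpy as np

    def triangulate_dlt(P1, P2, x1, x2):
        """Linear triangulation of one point observed in two views.

        P1, P2: 3x4 projection matrices; x1, x2: matched pixels (u, v).
        Returns the 3D point in inhomogeneous coordinates.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]
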
34

Peng, Cheng, and Rama Chellappa. "PDRF: Progressively Deblurring Radiance Field for Fast Scene Reconstruction from Blurry Images". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 2029–37. http://dx.doi.org/10.1609/aaai.v37i2.25295.

Full text
Abstract
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstruct high-quality radiance fields from blurry images. While current State-of-The-Art (SoTA) scene reconstruction methods achieve photo-realistic renderings from clean source views, their performance suffers when the source views are affected by blur, which is commonly observed in the wild. Previous deblurring methods either do not account for 3D geometry, or are computationally intense. To address these issues, PDRF uses a progressively deblurring scheme for radiance field modeling, which can accurately model blur with 3D scene context. PDRF further uses an efficient importance sampling scheme that results in fast scene optimization. We perform extensive experiments and show that PDRF is 15X faster than previous SoTA while achieving better performance on both synthetic and real scenes.
35

Eldefrawy, Mahmoud, Scott A. King, and Michael Starek. "Partial Scene Reconstruction for Close Range Photogrammetry Using Deep Learning Pipeline for Region Masking". Remote Sensing 14, no. 13 (July 3, 2022): 3199. http://dx.doi.org/10.3390/rs14133199.

Full text
Abstract
3D reconstruction is a beneficial technique to generate 3D geometry of scenes or objects for various applications such as computer graphics, industrial construction, and civil engineering. There are several techniques to obtain the 3D geometry of an object. Close-range photogrammetry is an inexpensive, accessible approach to obtaining high-quality object reconstruction. However, state-of-the-art software systems need a stationary scene or a controlled environment (often a turntable setup with a black background), which can be a limiting factor for object scanning. This work presents a method that reduces the need for a controlled environment and allows the capture of multiple objects with independent motion. We achieve this by creating a preprocessing pipeline that uses deep learning to transform a complex scene from an uncontrolled environment into multiple stationary scenes with a black background that is then fed into existing software systems for reconstruction. Our pipeline achieves this by using deep learning models to detect and track objects through the scene. The detection and tracking pipeline uses semantic-based detection and tracking and supports using available pretrained or custom networks. We develop a correction mechanism to overcome some detection and tracking shortcomings, namely, object-reidentification and multiple detections of the same object. We show detection and tracking are effective techniques to address scenes with multiple motion systems and that objects can be reconstructed with limited or no knowledge of the camera or the environment.
36

Baligh Jahromi, A., and G. Sohn. "EDGE BASED 3D INDOOR CORRIDOR MODELING USING A SINGLE IMAGE". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W5 (August 20, 2015): 417–24. http://dx.doi.org/10.5194/isprsannals-ii-3-w5-417-2015.

Full text
Abstract
Reconstruction of the spatial layout of indoor scenes from a single image is inherently an ambiguous problem. However, indoor scenes are usually composed of orthogonal planes. The regularity of the planar configuration (scene layout) is often recognizable, which provides valuable information for understanding indoor scenes. Most current methods define the scene layout as a single cubic primitive. This domain-specific knowledge is often not valid in many indoor spaces where multiple corridors are linked to each other. In this paper, we aim to address this problem by hypothesizing and verifying multiple cubic primitives representing the indoor scene layout. The method utilizes middle-level perceptual organization and relies on finding the ground-wall and ceiling-wall boundaries using detected line segments and the orthogonal vanishing points. A comprehensive interpretation of these edge relations is often hindered by shadows and occlusions. To handle this problem, the proposed method introduces virtual rays, which aid in the creation of a physically valid cubic structure by using orthogonal vanishing points. The straight line segments are extracted from the single image, and the orthogonal vanishing points are estimated by employing the RANSAC approach. Many scene-layout hypotheses are created by intersecting random line segments and virtual rays of vanishing points. The created hypotheses are evaluated by a geometric-reasoning-based objective function to find the hypothesis that best fits the image. The model hypothesis with the highest score is then converted to a 3D model. The proposed method is fully automatic, and no human intervention is necessary to obtain an approximate 3D reconstruction.
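
A hedged sketch of RANSAC vanishing-point estimation as described above: segments are represented as homogeneous lines, a candidate vanishing point is the cross product of two sampled lines, and support is the number of segments angularly consistent with the candidate. Thresholds and names are illustrative.

    import numpy as np

    def ransac_vanishing_point(segments, iters=300, tol_deg=2.0, seed=0):
        """Estimate one vanishing point from (N, 4) segments (x1, y1, x2, y2)."""
        rng = np.random.default_rng(seed)
        p = np.hstack([segments[:, :2], np.ones((len(segments), 1))])
        q = np.hstack([segments[:, 2:], np.ones((len(segments), 1))])
        lines = np.cross(p, q)                 # homogeneous line coefficients
        mids = (p + q) / 2
        dirs = (q - p)[:, :2]
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        best_vp, best = None, -1
        for _ in range(iters):
            i, j = rng.choice(len(lines), 2, replace=False)
            vp = np.cross(lines[i], lines[j])
            if abs(vp[2]) < 1e-9:              # point at infinity; skip here
                continue
            vp = vp / vp[2]
            to_vp = vp[:2] - mids[:, :2]
            to_vp /= np.linalg.norm(to_vp, axis=1, keepdims=True)
            cosang = np.clip(np.abs((to_vp * dirs).sum(1)), 0.0, 1.0)
            score = (np.degrees(np.arccos(cosang)) < tol_deg).sum()
            if score > best:
                best, best_vp = score, vp
        return best_vp
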
37

Stathopoulou, E. K., S. Rigon, R. Battisti, and F. Remondino. "ENHANCING GEOMETRIC EDGE DETAILS IN MVS RECONSTRUCTION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 391–98. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-391-2021.

Full text
Abstract
Mesh models generated by multi-view stereo (MVS) algorithms often fail to represent adequately the sharp, natural edge details of the scene. The harsh depth discontinuities of edge regions remain a challenging task for dense reconstruction, while vertex displacement during mesh refinement frequently leads to smoothed edges that do not coincide with the fine details of the scene. Meanwhile, 3D edges have been used for scene representation, particularly for man-made built environments, which are dominated by regular planar and linear structures. Indeed, 3D edge detection and matching are commonly exploited either to constrain camera pose estimation, or to generate an abstract representation of the most salient parts of the scene, and even to support mesh reconstruction. In this work, we attempt to jointly use 3D edge extraction and MVS mesh generation to promote edge-detail preservation in the final result. Salient 3D edges of the scene are reconstructed with state-of-the-art algorithms and integrated into the dense point cloud to be further used to support the mesh triangulation step. Experimental results on benchmark dataset sequences, using metric and appearance-based measures, are presented in order to evaluate our hypothesis.
38

Liu, Zhendong, Chengcheng Zhang, Haolin Cai, Wenhu Qv, and Shuaizhe Zhang. "A Model Simplification Algorithm for 3D Reconstruction". Remote Sensing 14, no. 17 (August 26, 2022): 4216. http://dx.doi.org/10.3390/rs14174216.

Full text
Abstract
Mesh simplification is an effective way to solve the contradiction between 3D models and limited transmission bandwidth and smooth model rendering. The existing mesh simplification algorithms usually have problems of texture distortion, deformation of different degrees, and no texture simplification. In this paper, a model simplification algorithm suitable for 3D reconstruction is proposed by taking full advantage of the recovered 3D scene structure and calibrated images. First, the reference 3D model scene is constructed on the basis of the original mesh; second, the images are collected on the basis of the reference 3D model scene; then, the mesh and texture are simplified by using the reference image set combined with the QEM algorithm. Lastly, the 3D model data of a town in Tengzhou are used for experimental verification. The results show that the algorithm proposed in this paper basically has no texture distortion and deformation problems in texture simplification and can effectively reduce the amount of texture data, with good feasibility.
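
The QEM algorithm referenced here scores each candidate edge collapse with the quadric error metric of Garland and Heckbert. A minimal numpy sketch of that cost follows (generic QEM only; the paper's reference-image-guided texture simplification is not shown):

    import numpy as np

    def plane_quadric(n, d):
        """Fundamental quadric K = p p^T for the plane n.x + d = 0, |n| = 1."""
        p = np.append(n, d)
        return np.outer(p, p)

    def collapse_cost(Q1, Q2):
        """QEM cost of contracting an edge with endpoint quadrics Q1, Q2.

        The optimal placement v minimizes v^T (Q1 + Q2) v over homogeneous
        v = (x, y, z, 1); a vertex's quadric is the sum of its incident
        face-plane quadrics.
        """
        Q = Q1 + Q2
        A = Q.copy()
        A[3] = [0.0, 0.0, 0.0, 1.0]           # constrain the homogeneous row
        try:
            v = np.linalg.solve(A, [0.0, 0.0, 0.0, 1.0])
        except np.linalg.LinAlgError:
            v = np.array([0.0, 0.0, 0.0, 1.0])  # singular: fall back (e.g., midpoint)
        return float(v @ Q @ v), v[:3]
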
39

Lin, Xiaobo, and Shibiao Xu. "Implicit–Explicit Coupling Enhancement for UAV Scene 3D Reconstruction". Applied Sciences 14, no. 6 (March 13, 2024): 2425. http://dx.doi.org/10.3390/app14062425.

Full text
Abstract
In unmanned aerial vehicle (UAV) large-scale scene modeling, challenges such as missed shots, low overlap, and data gaps due to flight paths and environmental factors, such as variations in lighting, occlusion, and weak textures, often lead to incomplete 3D models with blurred geometric structures and textures. To address these challenges, an implicit–explicit coupling enhancement for a UAV large-scale scene modeling framework is proposed. Benefiting from the mutual promotion of implicit and explicit models, we initially address the issue of missing co-visibility clusters caused by environmental noise through large-scale implicit modeling with UAVs. This enhances the inter-frame photometric and geometric consistency. Subsequently, we enhance the multi-view point cloud reconstruction density via synthetic co-visibility clusters, effectively recovering missing spatial information and constructing a more complete dense point cloud. Finally, during the mesh modeling phase, high-quality 3D modeling of large-scale UAV scenes is achieved by inversely radiating and mapping additional texture details into 3D voxels. The experimental results demonstrate that our method achieves state-of-the-art modeling accuracy across various scenarios, outperforming existing commercial UAV aerial photography software (COLMAP 3.9, Context Capture 2023, PhotoScan 2023, Pix4D 4.5.6) and related algorithms.
40

Svistunov, Andrey S., Dmitry A. Rymov, Rostislav S. Starikov and Pavel A. Cheremkhin. "HoloForkNet: Digital Hologram Reconstruction via Multibranch Neural Network". Applied Sciences 13, no. 10 (17 May 2023): 6125. http://dx.doi.org/10.3390/app13106125.

Full text
Abstract
Reconstruction of 3D scenes from digital holograms is an important task in different areas of science, such as biology, medicine, and ecology. Many parameters, such as an object's shape, number, position, rate, and density, can be extracted. However, reconstruction of off-axis, and especially inline, holograms can be challenging due to the presence of optical noise, the zero-order image, and the twin image. We used a deep multibranch neural network model, which we call HoloForkNet, to reconstruct different 2D sections of a 3D scene from a single inline hologram. This paper describes the proposed method and analyzes its performance for different types of objects. Both computer-generated and optically registered digital holograms with resolutions up to 2048 × 2048 pixels were reconstructed. High-quality image reconstruction for scenes consisting of up to eight planes was achieved. The average structural similarity index (SSIM) for 3D test scenes with eight object planes was 0.94. HoloForkNet can thus be used to reconstruct 3D scenes consisting of micro- and macro-objects.
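The "fork" in HoloForkNet denotes a shared encoder feeding several decoder branches, one per reconstructed plane. The following is a minimal sketch of that idea in PyTorch; the layer widths, depths, and the 256 × 256 input are illustrative assumptions, not the architecture from the paper.

import torch
import torch.nn as nn

class ForkNetSketch(nn.Module):
    # Shared convolutional encoder with one upsampling branch per plane.
    def __init__(self, num_planes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            for _ in range(num_planes)
        ])

    def forward(self, hologram):
        feats = self.encoder(hologram)
        # Each branch predicts the image of one 2D section of the scene.
        return torch.stack([b(feats) for b in self.branches], dim=1)

planes = ForkNetSketch(num_planes=8)(torch.randn(1, 1, 256, 256))
print(planes.shape)  # torch.Size([1, 8, 1, 256, 256])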
41

Zhu, Tanbo, Die Wang, Yuhua Li and Wenjie Dong. "Three-Dimensional Image Reconstruction for Virtual Talent Training Scene". Traitement du Signal 38, no. 6 (31 December 2021): 1719–26. http://dx.doi.org/10.18280/ts.380615.

Full text
Abstract
In real training, conditions are often undesirable and the use of equipment is severely limited. These problems can be solved by virtual practical training, which removes spatial limits and lowers training costs while maintaining training quality. However, the existing methods perform poorly at image reconstruction because they fail to consider that the environmental perception of real scenes is strongly regular by nature. Therefore, this paper investigates three-dimensional (3D) image reconstruction for virtual talent training scenes. Specifically, a fusion network model was designed, and the deep correlation between target detection and semantic segmentation was exploited for images shot in two-dimensional (2D) scenes, in order to enhance feature extraction. Next, the vertical and horizontal parallaxes of the scene were solved, and the depth-based virtual talent training scene was reconstructed in three dimensions, based on the continuity of scene depth. Finally, the proposed algorithm was proven effective through experiments.
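The parallax-to-depth step rests on the standard stereo relation; as a hedged reminder, with the usual symbols (focal length $f$, baseline $B$, disparity $d$, none taken from the paper):

$$Z = \frac{f\,B}{d}$$

Solving the horizontal and vertical parallaxes per pixel, together with the depth-continuity assumption, then yields the dense depth map from which the training scene is reconstructed.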
42

Li, Changhao, Junfu Guo, Ruizhen Hu and Ligang Liu. "Online Scene CAD Recomposition via Autonomous Scanning". ACM Transactions on Graphics 42, no. 6 (5 December 2023): 1–16. http://dx.doi.org/10.1145/3618339.

Full text
Abstract
Autonomous surface reconstruction of 3D scenes has been intensely studied in recent years; however, it is still difficult to accurately reconstruct all the surface details of complex scenes with complicated object relations and severe occlusions, which makes the reconstruction results unsuitable for direct use in applications such as gaming and virtual reality. Therefore, instead of reconstructing detailed surfaces, we aim to recompose the scene with CAD models retrieved from a given dataset, so as to faithfully reflect the object geometry and arrangement of the given scene. Moreover, unlike most previous works on scene CAD recomposition, which require an offline reconstructed scene or a captured video as input and thus entail significant data redundancy, we propose a novel online scene CAD recomposition method with autonomous scanning that efficiently recomposes the scene under the guidance of an automatically optimized Next-Best-View (NBV) in a single online scanning pass. Based on the key observation that spatial relations in the scene can not only constrain object pose and layout optimization but also guide NBV generation, our system consists of two key modules: a relation-guided CAD recomposition module that uses relation-constrained global optimization to obtain accurate object pose and layout estimates, and a relation-aware NBV generation module that tailors the exploration during autonomous scanning to the composition task. Extensive experiments show the superiority of our method over previous methods in scanning efficiency and retrieval accuracy, as well as the importance of each key component.
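Stripped of the relation-aware scoring, NBV selection reduces to a greedy information-gain loop. The toy sketch below conveys only that generic idea: random visibility masks stand in for real ray casting, and the scoring is deliberately simpler than the paper's relation-aware criterion.

import numpy as np

rng = np.random.default_rng(0)
observed = np.zeros(1000, dtype=bool)  # voxels already covered by past scans
# Toy visibility masks: which voxels each candidate pose would see.
candidates = [rng.random(1000) < 0.2 for _ in range(10)]

def information_gain(mask, observed):
    # Count the voxels this view would reveal for the first time.
    return np.count_nonzero(mask & ~observed)

best = max(range(len(candidates)),
           key=lambda i: information_gain(candidates[i], observed))
print("next-best view:", best)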
43

Shao, Z., G. Cheng and Y. Yi. "INDOOR AND OUTDOOR STRUCTURED MONOMER RECONSTRUCTION OF CITY 3D REAL SCENE BASED ON NONLINEAR OPTIMIZATION AND INTEGRATION OF MULTI-SOURCE AND MULTI-MODAL DATA". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-3/W2-2022 (27 October 2022): 51–57. http://dx.doi.org/10.5194/isprs-archives-xlviii-3-w2-2022-51-2022.

Full text
Abstract
Abstract. 3D Real Scene is an important part of new infrastructure construction, providing a unified spatial base for economic and social development and for the informatization of various departments. Driven by the demands of 3D Real Scene digital construction, this paper studies methods for the indoor and outdoor structured monomer reconstruction of city-scale 3D Real Scene, develops a digital twin platform, and strives to lead the development of 3D Real Scene. The main research contents are as follows: 1) a spatio-temporal-spectral-angular remote sensing observation system and data fusion model; 2) a rapid construction method for vector monomer 3D models of indoor and outdoor entity objects; 3) 3D structural reconstruction technology at the urban component level based on vector images; 4) a universal digital twin platform, LuojiaDT. The technologies we have developed are widely used in the national 3D Real Scene construction of China, including smart cities, smart transportation, cultural heritage protection, public security and policing, urban underground pipe networks, and indoor and outdoor location services. This research will promote the continuous development of digital twin technology.
44

Mat Amin, M. A., S. Abdullah, S. N. Abdul Mukti, M. H. A. Mohd Zaidi and K. N. Tahar. "RECONSTRUCTION OF 3D ACCIDENT SCENE FROM MULTIROTOR UAV PLATFORM". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (12 August 2020): 451–58. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-451-2020.

Full text
Abstract
Abstract. Traffic accidents are one of the major causes of fatalities in developing countries. The aim of this study is to reconstruct accident scenes using UAV photogrammetry. The methodology is organised into four main phases: preliminary work, flight planning, 3D model processing, and analysis of the results. The 3D model was successfully generated using Point of Interest (POI) flight planning. The results show good 3D texture: the two vehicles have well-defined shapes and can be seen clearly from an oblique view. In addition, the tyre marks on the road are clearly visible and accurately shaped. The accuracy values obtained with the POI and waypoint techniques were 0.059 m and 0.043 m, respectively. Given the availability of UAVs on the market at reasonable cost, photogrammetry offers a strong alternative to other methods that have been used to reconstruct accident scenes.
45

ROBINSON, MARTIN, KURT KUBIK and BRIAN LOVELL. "A FIRST ORDER PREDICATE LOGIC FORMULATION OF THE 3D RECONSTRUCTION PROBLEM AND ITS SOLUTION SPACE". International Journal of Pattern Recognition and Artificial Intelligence 19, no. 01 (February 2005): 45–62. http://dx.doi.org/10.1142/s0218001405003910.

Full text
Abstract
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g., area-based matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images are taken of the scene, how they are oriented, or how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 × 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, because of the discrete nature of the formulation, the solution-space size can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
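The flavor of the formulation can be reproduced at toy scale. The brute-force sketch below enumerates every scene consistent with the observations, in the spirit of returning the full solution set rather than a single likely answer; the 3-voxel row, the two 1-pixel cameras, and the observed colors are all invented for illustration.

from itertools import product

COLORS = (None, "red", "blue")  # None means the voxel is empty

def render(scene, from_left):
    # A 1-pixel camera records the first non-empty voxel along its ray.
    ray = scene if from_left else scene[::-1]
    return next((c for c in ray if c is not None), None)

left_img, right_img = "red", "blue"  # assumed observations
solutions = [s for s in product(COLORS, repeat=3)
             if render(s, True) == left_img and render(s, False) == right_img]
print(len(solutions), "consistent scenes:", solutions)

Even this tiny instance admits five distinct consistent scenes, which is the non-uniqueness the paper quantifies at larger scales.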
46

Xu, Guangkai and Feng Zhao. "Towards 3D Scene Reconstruction from Locally Scale-Aligned Monocular Video Depth". JUSTC 53 (2023): 1. http://dx.doi.org/10.52396/justc-2023-0061.

Full text
Abstract
Monocular depth estimation methods have achieved excellent robustness on diverse scenes, usually by predicting affine-invariant depth (up to an unknown scale and shift) rather than metric depth, since it is much easier to collect large-scale affine-invariant depth training data. However, in video-based scenarios such as video depth estimation and 3D scene reconstruction, the unknown scale and shift residing in per-frame predictions may make the predicted depth inconsistent across frames. To tackle this problem, we propose a locally weighted linear regression method that recovers the scale and shift map from very sparse anchor points, ensuring consistency along consecutive frames. Extensive experiments show that our method can reduce the Rel error of existing state-of-the-art approaches by up to 50% over several zero-shot benchmarks. Moreover, we merge 6.3 million RGBD images to train robust depth models. By locally recovering scale and shift, our ResNet50-backbone model even outperforms the state-of-the-art DPT ViT-Large model. Combined with geometry-based reconstruction methods, we formulate a new dense 3D scene reconstruction pipeline that benefits from both the scale consistency of sparse points and the robustness of monocular methods. By performing simple per-frame prediction over a video, accurate 3D scene geometry can be recovered.
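The core alignment step can be written compactly: given an affine-invariant prediction $\hat{d}$ and sparse metric anchors, one seeks $s, t$ such that $d \approx s\,\hat{d} + t$. The sketch below fits a single global scale and shift by (optionally weighted) least squares; the paper's method instead solves locally weighted fits around sparse anchors to obtain a per-pixel scale/shift map, so treat this as a simplified stand-in.

import numpy as np

def scale_shift_from_anchors(pred, anchor_depth, anchor_idx, weights=None):
    # Solve min_{s,t} sum_i w_i * (s * pred_i + t - anchor_i)^2.
    x = pred.ravel()[anchor_idx]
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    A = np.stack([x, np.ones_like(x)], axis=1) * np.sqrt(w)[:, None]
    y = anchor_depth * np.sqrt(w)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t

pred = np.random.rand(4, 4)                 # affine-invariant depth prediction
idx = np.array([0, 5, 10])                  # sparse anchor pixels
metric = 2.0 * pred.ravel()[idx] + 0.5      # anchors with true s=2.0, t=0.5
s, t = scale_shift_from_anchors(pred, metric, idx)
print(round(float(s), 3), round(float(t), 3))  # prints approximately 2.0 0.5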
47

Kiriy, Semen A., Dmitry A. Rymov, Andrey S. Svistunov, Anna V. Shifrina, Rostislav S. Starikov and Pavel A. Cheremkhin. "Generative adversarial neural network for 3D-hologram reconstruction". Laser Physics Letters 21, no. 4 (14 February 2024): 045201. http://dx.doi.org/10.1088/1612-202x/ad26eb.

Full text
Abstract
Abstract. Neural-network-based reconstruction of digital holograms can improve the speed and quality of micro- and macro-object images, as well as reduce noise and suppress the twin image and the zero-order term. Usually, such methods aim to reconstruct the 2D object image or the amplitude and phase distributions. In this paper, we investigated the feasibility of using a generative adversarial neural network to reconstruct 3D scenes consisting of a set of cross-sections. The method was tested on computer-generated and optically registered digital inline holograms. It enabled the reconstruction of all layers of a scene from each hologram. Reconstruction quality, measured by the normalized standard deviation, improved by a factor of 1.8 compared to the U-Net architecture.
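For reference, reconstruction GANs of this kind typically optimize a conditional adversarial objective plus a reconstruction penalty. The following is the standard form, not notation from the paper: $G$ maps a hologram $h$ to cross-sections, $D$ judges them against ground-truth sections $x$, and $\lambda$ balances the two terms:

$$\min_G \max_D \; \mathbb{E}_{(h,x)}\big[\log D(x \mid h)\big] + \mathbb{E}_{h}\big[\log\big(1 - D(G(h) \mid h)\big)\big] + \lambda\,\mathbb{E}_{(h,x)}\big[\lVert x - G(h)\rVert_1\big]$$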
48

Salman, Nader and Mariette Yvinec. "Surface Reconstruction from Multi-View Stereo of Large-Scale Outdoor Scenes". International Journal of Virtual Reality 9, no. 1 (1 January 2010): 19–26. http://dx.doi.org/10.20870/ijvr.2010.9.1.2758.

Full text
Abstract
This article describes an original method to reconstruct a 3D scene from a sequence of images. Our approach uses both the dense 3D point cloud extracted by multi-view stereovision and the calibrated images. It combines depth-map construction in the image planes with surface reconstruction through restricted Delaunay triangulation. The method can handle very large-scale outdoor scenes. Its accuracy has been tested on numerous outdoor scenes, including the dense multi-view benchmark proposed by Strecha et al. Our results show that the proposed method compares favorably with the current state of the art.
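As a rough sketch of the surface-extraction stage: fuse the MVS points, estimate normals, and extract a surface. Open3D ships no restricted Delaunay triangulation, so Poisson reconstruction serves below as a plainly named stand-in, and the file names are hypothetical.

import open3d as o3d

pcd = o3d.io.read_point_cloud("mvs_points.ply")  # hypothetical dense MVS cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("surface.ply", mesh)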
49

Ding, Youli, Xianwei Zheng, Yan Zhou, Hanjiang Xiong and Jianya Gong. "Low-Cost and Efficient Indoor 3D Reconstruction Through Annotated Hierarchical Structure-from-Motion". Remote Sensing 11, no. 1 (29 December 2018): 58. http://dx.doi.org/10.3390/rs11010058.

Full text
Abstract
With the widespread application of location-based services, the appropriate representation of indoor spaces and efficient indoor 3D reconstruction have become essential tasks. Due to the complexity and closeness of indoor spaces, it is difficult to develop a versatile solution for large-scale indoor 3D scene reconstruction. In this paper, an annotated hierarchical Structure-from-Motion (SfM) method is proposed for low-cost and efficient indoor 3D reconstruction using unordered images collected with widely available smartphone or consumer-level cameras. Although the reconstruction of indoor models is often compromised by indoor complexity, we exploit the availability of complex semantic objects to classify the scenes and construct a hierarchical scene tree to recover the indoor space. Starting with the semantic annotation of the images, images that share the same object are detected and classified using visual words and the support vector machine (SVM) algorithm. The SfM method is then applied to hierarchically recover the atomic 3D point cloud model of each object, with the semantic information from the images attached. Finally, an improved random sample consensus (RANSAC) generalized Procrustes analysis (RGPA) method is employed to register and optimize the partial models into a complete indoor scene. The proposed approach incorporates image classification into the hierarchical SfM-based indoor reconstruction task, exploring semantic propagation from images to points. It also reduces the computational complexity of traditional SfM by avoiding exhaustive pair-wise image matching. The applicability and accuracy of the proposed method were verified on two image datasets collected with smartphone and consumer cameras. The results demonstrate that the proposed method efficiently and robustly produces semantically and geometrically correct indoor 3D point models.
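The annotation stage pairs a visual-word vocabulary with an SVM. The sketch below reproduces that pipeline on synthetic descriptors; the vocabulary size k = 32, the RBF kernel, and the two-class setup are illustrative assumptions, and a real system would extract SIFT or ORB descriptors from the images instead.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def fake_descriptors(center, n=200, dim=32):
    # Synthetic stand-in for an image's local feature descriptors.
    return center + rng.normal(0.0, 0.3, size=(n, dim))

train_imgs = [fake_descriptors(0.0) for _ in range(10)] + \
             [fake_descriptors(1.0) for _ in range(10)]
labels = [0] * 10 + [1] * 10

vocab = KMeans(n_clusters=32, n_init=10, random_state=0)
vocab.fit(np.vstack(train_imgs))              # build the visual vocabulary

def histogram(desc):
    # Quantize descriptors into words and normalize the word histogram.
    h = np.bincount(vocab.predict(desc), minlength=32).astype(float)
    return h / h.sum()

clf = SVC(kernel="rbf").fit([histogram(d) for d in train_imgs], labels)
print(clf.predict([histogram(fake_descriptors(1.0))]))  # expect [1]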
50

Qi, Yang and Yuan Li. "Indoor Key Point Reconstruction Based on Laser Illumination and Omnidirectional Vision". Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 7 (20 December 2020): 864–71. http://dx.doi.org/10.20965/jaciii.2020.p0864.

Full text
Abstract
Efficient and precise three-dimensional (3D) measurement is an important issue in the field of machine vision. In this paper, a measurement method for indoor key points based on structured light and an omnidirectional vision system is proposed; the system achieves a wide field of view and accurate results. The process of obtaining indoor key points is as follows. First, through analysis of the system imaging model, an omnidirectional vision system based on structured light is constructed. Second, a fully convolutional network is used to estimate the scene over the dataset. Then, according to the geometric relationship between a scene point and its reference point in the structured light, a method for obtaining the 3D coordinates of points not lying on the structured light is presented. Finally, combining the fully convolutional network model and the structured-light 3D vision model, the 3D mathematical representation of the key points of the indoor scene frame is completed. The experimental results show that the proposed method can accurately reconstruct indoor scenes, with a measurement error of about 2%.
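The geometric relationship used here is classic structured-light triangulation: a scene point lies both on the camera ray through its pixel and on the calibrated laser plane, so it is recovered as a ray-plane intersection. A minimal sketch with invented calibration values follows.

import numpy as np

def intersect_ray_plane(origin, direction, plane_n, plane_d):
    # Plane: n . X + d = 0.  Ray: X = origin + t * direction.
    t = -(plane_n @ origin + plane_d) / (plane_n @ direction)
    return origin + t * direction

cam_origin = np.zeros(3)
ray_dir = np.array([0.1, 0.0, 1.0])                 # back-projected pixel ray
ray_dir /= np.linalg.norm(ray_dir)
laser_n, laser_d = np.array([1.0, 0.0, -0.5]), 0.2  # calibrated laser plane
print(intersect_ray_plane(cam_origin, ray_dir, laser_n, laser_d))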