Selected scientific literature on the topic "Reconstruction 3D de la scene"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Reconstruction 3D de la scene".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Journal articles on the topic "Reconstruction 3D de la scene"

1

Wen, Mingyun, and Kyungeun Cho. "Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes". Mathematics 11, no. 2 (January 12, 2023): 403. http://dx.doi.org/10.3390/math11020403.

Full text of the source
Abstract:
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object, rather than individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the image conversion process from a 3D scene to a 2D image is irreversible, and the projection from 3D to 2D reduces a dimension. To alleviate the effects of dimension reduction, we proposed a module to generate depth features that can aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders that estimate 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach demonstrated superior performance in generating complete meshes compared to methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions), as well as producing more accurate shapes compared to previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it could effectively improve the object-aware 3D scene reconstruction performance over existing methods.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
2

Guo, Rui Bin, Tao Guan, Dong Xiang Zhou, Ke Ju Peng, and Wei Hong Fan. "Efficient Multi-Scale Registration of 3D Reconstructions Based on Camera Center Constraint". Advanced Materials Research 998-999 (July 2014): 1018–23. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.1018.

Abstract:
Recent approaches for reconstructing 3D scenes from image collections only produce single scene models. To build a unified scene model that contains multiple subsets, we present a novel method for registration of 3D scene reconstructions in different scales. It first normalizes the scales of the models building on similarity reconstruction by the constraint of the 3D position of shared cameras. Then we use Cayley transform to fit the matrix of coordinates transformation for the models in normalization scales. The experimental results show the effectiveness and scalability of the proposed approach.
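The camera-center constraint described in this abstract can be sketched in a few lines (a generic illustration, not the authors' implementation): if two reconstructions share several cameras, the ratio of inter-camera distances recovers their relative scale.

```python
import math

def relative_scale(centers_a, centers_b):
    """Estimate the uniform scale mapping reconstruction A onto B from
    the 3D positions of cameras shared by both reconstructions."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

    ratios = []
    for i in range(len(centers_a)):
        for j in range(i + 1, len(centers_a)):
            d_a = dist(centers_a[i], centers_a[j])
            if d_a > 0:
                ratios.append(dist(centers_b[i], centers_b[j]) / d_a)
    ratios.sort()
    return ratios[len(ratios) // 2]  # median: robust to a few bad matches

# Reconstruction B is reconstruction A scaled uniformly by 2.
cams_a = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (3, 1, 1)]
cams_b = [(0, 0, 0), (2, 0, 0), (0, 4, 0), (6, 2, 2)]
scale = relative_scale(cams_a, cams_b)
```

Once one model is normalized by this factor, a rigid transform (which the paper fits via the Cayley transform) aligns the two models.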
3

Jang, Hyeonjoong, Andréas Meuleman, Dahyun Kang, Donggun Kim, Christian Richardt, and Min H. Kim. "Egocentric scene reconstruction from an omnidirectional video". ACM Transactions on Graphics 41, no. 4 (July 2022): 1–12. http://dx.doi.org/10.1145/3528223.3530074.

Abstract:
Omnidirectional videos capture environmental scenes effectively, but they have rarely been used for geometry reconstruction. In this work, we propose an egocentric 3D reconstruction method that can acquire scene geometry with high accuracy from a short egocentric omnidirectional video. To this end, we first estimate per-frame depth using a spherical disparity network. We then fuse per-frame depth estimates into a novel spherical binoctree data structure that is specifically designed to tolerate spherical depth estimation errors. By subdividing the spherical space into binary tree and octree nodes that represent spherical frustums adaptively, the spherical binoctree effectively enables egocentric surface geometry reconstruction for environmental scenes while simultaneously assigning high-resolution nodes for closely observed surfaces. This allows to reconstruct an entire scene from a short video captured with a small camera trajectory. Experimental results validate the effectiveness and accuracy of our approach for reconstructing the 3D geometry of environmental scenes from short egocentric omnidirectional video inputs. We further demonstrate various applications using a conventional omnidirectional camera, including novel-view synthesis, object insertion, and relighting of scenes using reconstructed 3D models with texture.
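As a small geometric aside (generic spherical-camera math, not the paper's code), a per-pixel depth estimate from an omnidirectional frame is lifted to a 3D point by the usual spherical-to-Cartesian conversion:

```python
import math

def spherical_to_cartesian(theta, phi, depth):
    """Lift an omnidirectional pixel direction (azimuth theta, polar
    angle phi, both in radians) plus its estimated depth to a 3D point
    in the camera-centred frame."""
    return (depth * math.sin(phi) * math.cos(theta),
            depth * math.sin(phi) * math.sin(theta),
            depth * math.cos(phi))

# A point on the horizon (phi = 90 degrees), straight ahead, 3 m away.
p = spherical_to_cartesian(0.0, math.pi / 2, 3.0)
```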
4

Buck, Ursula. "3D crime scene reconstruction". Forensic Science International 304 (November 2019): 109901. http://dx.doi.org/10.1016/j.forsciint.2019.109901.

5

Gao, Huanbing, Lei Liu, Ya Tian, and Shouyin Lu. "3D Reconstruction for Road Scene with Obstacle Detection Feedback". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 12 (August 27, 2018): 1855021. http://dx.doi.org/10.1142/s0218001418550212.

Abstract:
This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver assistance systems, and car navigation systems. However, errors often arise during 3D reconstruction due to shadows cast by moving objects in the road scene. The presented 3D reconstruction method with obstacle detection feedback avoids this problem. Firstly, this paper offers a framework for the 3D reconstruction of road scenes by laser scanning and vision. A calibration method based on the location of the horizon is proposed, and a method of attitude angle measurement based on the vanishing point is proposed to revise the 3D reconstruction result. Secondly, the reconstruction framework is extended by integrating object recognition that can automatically detect and discriminate obstacles in the input video streams by a RANSAC approach and a threshold filter, and localizes them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. The experimental results verified the feasibility and practicability of the proposed method.
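RANSAC-style outlier filtering, as mentioned in the abstract, can be illustrated with a minimal 2D sketch on hypothetical data (not the paper's pipeline): sample a minimal model repeatedly and keep the hypothesis that agrees with the most points.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b by repeatedly sampling two points
    and keeping the hypothesis with the largest inlier count."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: cannot form y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Points on y = 2x + 1 plus two gross outliers (e.g. a moving obstacle).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
(a, b), n_in = ransac_line(pts)
```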
6

Liu, Yilin, Liqiang Lin, Yue Hu, Ke Xie, Chi-Wing Fu, Hao Zhang, and Hui Huang. "Learning Reconstructability for Drone Aerial Path Planning". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–17. http://dx.doi.org/10.1145/3550454.3555433.

Abstract:
We introduce the first learning-based reconstructability predictor to improve view and path planning for large-scale 3D urban scene acquisition using unmanned drones. In contrast to previous heuristic approaches, our method learns a model that explicitly predicts how well a 3D urban scene will be reconstructed from a set of viewpoints. To make such a model trainable and simultaneously applicable to drone path planning, we simulate the proxy-based 3D scene reconstruction during training to set up the prediction. Specifically, the neural network we design is trained to predict the scene reconstructability as a function of the proxy geometry, a set of viewpoints, and optionally a series of scene images acquired in flight. To reconstruct a new urban scene, we first build the 3D scene proxy, then rely on the predicted reconstruction quality and uncertainty measures by our network, based on the proxy geometry, to guide the drone path planning. We demonstrate that our data-driven reconstructability predictions are more closely correlated to the true reconstruction quality than prior heuristic measures. Further, our learned predictor can be easily integrated into existing path planners to yield improvements. Finally, we devise a new iterative view planning framework, based on the learned reconstructability, and show superior performance of the new planner when reconstructing both synthetic and real scenes.
7

Dong, Bo, Kaiqiang Chen, Zhirui Wang, Menglong Yan, Jiaojiao Gu, and Xian Sun. "MM-NeRF: Large-Scale Scene Representation with Multi-Resolution Hash Grid and Multi-View Priors Features". Electronics 13, no. 5 (February 22, 2024): 844. http://dx.doi.org/10.3390/electronics13050844.

Abstract:
Reconstructing large-scale scenes using Neural Radiance Fields (NeRFs) is a research hotspot in 3D computer vision. Existing MLP (multi-layer perception)-based methods often suffer from issues of underfitting and a lack of fine details in rendering large-scale scenes. Popular solutions are to divide the scene into small areas for separate modeling or to increase the layer scale of the MLP network. However, the subsequent problem is that the training cost increases. Moreover, reconstructing large scenes, unlike object-scale reconstruction, involves a geometrically considerable increase in the quantity of view data if the prior information of the scene is not effectively utilized. In this paper, we propose an innovative method named MM-NeRF, which integrates efficient hybrid features into the NeRF framework to enhance the reconstruction of large-scale scenes. We propose employing a dual-branch feature capture structure, comprising a multi-resolution 3D hash grid feature branch and a multi-view 2D prior feature branch. The 3D hash grid feature models geometric details, while the 2D prior feature supplements local texture information. Our experimental results show that such integration is sufficient to render realistic novel views with fine details, forming a more accurate geometric representation. Compared with representative methods in the field, our method significantly improves the PSNR (Peak Signal-to-Noise Ratio) by approximately 5%. This remarkable progress underscores the outstanding contribution of our method in the field of large-scene radiance field reconstruction.
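PSNR, the metric the abstract reports improving by roughly 5%, is simple to compute; a minimal sketch on hypothetical pixel data:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [0, 128, 255, 64]
noisy = [1, 127, 254, 65]  # every pixel off by 1, so MSE = 1
value = psnr(ref, noisy)   # 10 * log10(255^2) ~= 48.13 dB
```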
8

Tingdahl, David, and Luc Van Gool. "An Enhanced On-Line Service for 3D Model Construction from Photographs". International Journal of Heritage in the Digital Era 1, no. 2 (June 2012): 277–94. http://dx.doi.org/10.1260/2047-4970.1.2.277.

Abstract:
We present a web service for image based 3D reconstruction. The system allows a cultural heritage professional to easily create a 3D model of a scene or object out of images taken from different viewpoints. The user uploads the images to our server on which all processing takes place, and the final result can be downloaded upon completion. Any consumer-class digital camera can be used, and the system is free to use for non-commercial purposes. The service includes a number of innovations to greatly simplify the process of taking pictures suitable for reconstruction. In particular, we are able to construct models of planar scenes and from photographs shot using a turntable, and at varying zoom levels. Although the first two may seem like particularly simple cases, they cause some mathematical issues with traditional self-calibration techniques. We handle these cases by taking advantage of a new automatic camera calibration method that uses meta-data stored with the images. For fixed-lens camera setups, we can also reuse previously computed calibrations to support otherwise degenerate scenes. Furthermore, we can automatically compute the relative scale and transformation between two reconstructions of the same scene, merging two reconstructions into one. We demonstrate the capabilities of the system by two case studies: turntable reconstruction of various objects and the reconstruction of a cave, with walls and roof integrated into a complete model.
9

Wang, Wei, Fengjiao Gao, and Yongliang Shen. "Res-NeuS: Deep Residuals and Neural Implicit Surface Learning for Multi-View Reconstruction". Sensors 24, no. 3 (January 29, 2024): 881. http://dx.doi.org/10.3390/s24030881.

Abstract:
Surface reconstruction using neural networks has proven effective in reconstructing dense 3D surfaces through image-based neural rendering. Nevertheless, current methods are challenging when dealing with the intricate details of large-scale scenes. The high-fidelity reconstruction performance of neural rendering is constrained by the view sparsity and structural complexity of such scenes. In this paper, we present Res-NeuS, a method combining ResNet-50 and neural surface rendering for dense 3D reconstruction. Specifically, we present appearance embeddings: ResNet-50 is used to extract the appearance depth features of an image to further capture more scene details. We interpolate points near the surface and optimize their weights for the accurate localization of 3D surfaces. We introduce photometric consistency and geometric constraints to optimize 3D surfaces and eliminate geometric ambiguity existing in current methods. Finally, we design a 3D geometry automatic sampling to filter out uninteresting areas and reconstruct complex surface details in a coarse-to-fine manner. Comprehensive experiments demonstrate Res-NeuS’s superior capability in the reconstruction of 3D surfaces in complex, large-scale scenes, and the harmful distance of the reconstructed 3D model is 0.4 times that of general neural rendering 3D reconstruction methods and 0.6 times that of traditional 3D reconstruction methods.
10

Xia, Wei, Rongfeng Lu, Yaoqi Sun, Chenghao Xu, Kun Lv, Yanwei Jia, Zunjie Zhu, and Bolun Zheng. "3D Indoor Scene Completion via Room Layout Estimation". Journal of Physics: Conference Series 2025, no. 1 (September 1, 2021): 012102. http://dx.doi.org/10.1088/1742-6596/2025/1/012102.

Abstract:
Recent advances in 3D reconstructions have shown impressive progress in 3D indoor scene reconstruction, enabling automatic scene modeling; however, holes in the 3D scans hinder the further usage of the reconstructed models. Thus, we propose the task of layout-based hole filling for the incomplete indoor scene scans: from the mesh of a scene model, we estimate the scene layout by detecting the principal planes of a scene and leverage the layout as the prior for the accurate completion of planar regions. Experiments show that guiding scene model completion through the scene layout prior significantly outperforms the alternative approach to the task of scene model completion.

Theses / dissertations on the topic "Reconstruction 3D de la scene"

1

Boyling, Timothy A. "Active vision for autonomous 3D scene reconstruction". Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433622.

2

Nitschke, Christian. "3D reconstruction: real-time volumetric scene reconstruction from multiple views". Saarbrücken: VDM Verl. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2939698&prov=M&dok_var=1&dok_ext=htm.

3

Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.

Abstract:
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse point clouds of heterogeneous density, proposing different techniques to create a 3D model of the surroundings. In the first part, we study the use of 3-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-Maps applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also focus on single-frame environment perception through a 3D implicit surface reconstruction algorithm able to deal with heterogeneous-density data by employing an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. Finally, we turn to deep learning approaches for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus found in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. Our method employs a new hybrid pipeline based on a 2D CNN backbone branch to reduce computation overhead and 3D segmentation heads to predict the complete semantic scene at different scales, being significantly lighter and faster than existing approaches.
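The ray-path occupancy updates this abstract describes can be sketched with a standard log-odds occupancy grid (a generic illustration, not the thesis's sensor model): cells a ray traverses become more likely free, and the cell containing the return becomes more likely occupied.

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def update_ray(grid, traversed_cells, hit_cell, p_free=0.4, p_occ=0.7):
    """Apply one range measurement to a log-odds occupancy grid."""
    for c in traversed_cells:
        grid[c] = grid.get(c, 0.0) + logodds(p_free)   # evidence of free space
    grid[hit_cell] = grid.get(hit_cell, 0.0) + logodds(p_occ)  # evidence of a surface

def prob(grid, c):
    """Recover occupancy probability; unknown cells default to 0.5."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid.get(c, 0.0)))

grid = {}
# One ray passing through cells (0,0,0) and (1,0,0) before hitting (2,0,0).
update_ray(grid, [(0, 0, 0), (1, 0, 0)], (2, 0, 0))
```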
4

Goldman, Benjamin Joseph. "Broadband World Modeling and Scene Reconstruction". Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23094.

Abstract:
Perception is a key feature in how any creature or autonomous system relates to its environment. While there are many types of perception, this thesis focuses on the improvement of the visual robotics perception systems. By implementing a broadband passive sensing system in conjunction with current perception algorithms, this thesis explores scene reconstruction and world modeling.
The process involves two main steps. The first is stereo correspondence using block matching algorithms with filtering to improve the quality of this matching process. The disparity maps are then transformed into 3D point clouds. These point clouds are filtered again before the registration process is done. The registration uses a SAC-IA matching technique to align the point clouds with minimum error.  The registered final cloud is then filtered again to smooth and down sample the large amount of data. This process was implemented through software architecture that utilizes Qt, OpenCV, and Point Cloud Library. It was tested using a variety of experiments on each of the components of the process.  It shows promise for being able to replace or augment existing UGV perception systems in the future.
Master of Science
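The disparity-to-depth step at the core of the stereo pipeline described above follows the classic rectified-stereo relation Z = f·B/d; a small sketch with hypothetical rig parameters:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classic rectified-stereo relation: depth Z = f * B / d, with the
    focal length f and disparity d in pixels and the baseline B in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 20 px disparity.
z = disparity_to_depth(20.0, 700.0, 0.12)  # ~4.2 m
```

Applying this per pixel turns a disparity map into the 3D point cloud that is then filtered and registered.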
5

Booth, Roy. "Scene analysis and 3D object reconstruction using passive vision". Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295780.

6

Aufderheide, Dominik. "VISrec!: visual-inertial sensor fusion for 3D scene reconstruction". Thesis, University of Bolton, 2014. http://ubir.bolton.ac.uk/649/.

Abstract:
The self-acting generation of three-dimensional models, by analysing monocular image streams from standard cameras, is one fundamental problem in the field of computer vision. A prerequisite for the scene modelling is the computation of the camera pose for the different frames of the sequence. Several techniques and methodologies have been introduced during the last decade to solve this classical Structure from Motion (SfM) problem, which incorporates camera egomotion estimation and subsequent recovery of 3D scene structure. However the applicability of those approaches to real world devices and applications is still limited, due to non-satisfactorily properties in terms of computational costs, accuracy and robustness. Thus tactile systems and laser scanners are still the predominantly used methods in industry for 3D measurements. This thesis suggests a novel framework for 3D scene reconstruction based on visual-inertial measurements and a corresponding sensor fusion framework. The integration of additional modalities, such as inertial measurements, are useful to compensate for typical problems of systems which rely only on visual information. The complete system is implemented based on a generic framework for designing Multi-Sensor Data Fusion (MSDF) systems. It is demonstrated that the incorporation of inertial measurements into a visual-inertial sensor fusion scheme for scene reconstruction (VISrec!) outperforms classical methods in terms of robustness and accuracy. It can be shown that the combination of visual and inertial modalities for scene reconstruction allows a reduction of the mean reconstruction error of typical scenes by up to 30%. Furthermore, the number of 3D feature points, which can be successfully reconstructed can be nearly doubled. In addition range and RGB-D sensors have been successfully incorporated into the VISrec! scheme proving the general applicability of the framework. 
By this it is possible to increase the number of 3D points within the reconstructed point cloud by a factor of five hundred if compared to standard visual SfM. Finally the applicability of the VISrec!-sensor to a specific industrial problem, in corporation with a local company, for reverse engineering of tailor-made car racing components demonstrates the usefulness of the developed system.
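A minimal illustration of why fusing modalities helps (a generic inverse-variance fusion, i.e. the scalar Kalman update, not the thesis's MSDF framework): combining two independent estimates always yields a variance no larger than either input.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent estimates of the same quantity by
    inverse-variance weighting; the fused variance is never larger
    than either input variance."""
    w_a = var_b / (var_a + var_b)
    fused = w_a * est_a + (1.0 - w_a) * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# A visual pose estimate (accurate) fused with an inertial one (noisier).
pose, var = fuse(1.0, 0.01, 1.4, 0.04)
```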
7

Chandraker, Manmohan Krishna. "From pictures to 3D: global optimization for scene reconstruction". Diss., [La Jolla]: University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3369041.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed September 15, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 235-246).
8

Manessis, A. "3D reconstruction from video using a mobile robot". Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/844129/.

Abstract:
An autonomous robot able to navigate inside an unknown environment and reconstruct full 3D scene models using monocular video has been a long term goal in the field of Machine Vision. A key component of such a system is the reconstruction of surface models from estimated scene structure. Sparse 3D measurements of real scenes are readily estimated from N-view image sequences using structure-from-motion techniques. In this thesis we present a geometric theory for reconstruction of surface models from sparse 3D data captured from N camera views. Based on this theory we introduce a general N-view algorithm for reconstruction of 3D models of arbitrary scenes from sparse data. Using a hypothesise and verify strategy this algorithm reconstructs a surface model which interpolates the sparse data and is guaranteed to be consistent with the feature visibility in the N-views. To achieve efficient reconstruction independent of the number of views a simplified incremental algorithm is developed which integrates the feature visibility independently for each view. This approach is shown to converge to an approximation of the real scene structure and have a computational cost which is linear in the number of views. Surface hypothesis are generated based on a new incremental planar constrained Delaunay triangulation algorithm. We present a statistical geometric framework to explicitly consider noise inherent in estimates of 3D scene structure from any real vision system. This approach ensures that the reconstruction is reliable in the presence of noise and missing data. Results are presented for reconstruction of both real and synthetic scenes together with an evaluation of the reconstruction performance in the presence of noise.
9

Moodie, Daniel Thien-An. "Sensor Fused Scene Reconstruction and Surface Inspection". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/47453.

Abstract:
Optical three dimensional (3D) mapping routines are used in inspection robots to detect faults by creating 3D reconstructions of environments. To detect surface faults, sub millimeter depth resolution is required to determine minute differences caused by coating loss and pitting. Sensors that can detect these small depth differences cannot quickly create contextual maps of large environments. To solve the 3D mapping problem, a sensor fused approach is proposed that can gather contextual information about large environments with one depth sensor and a SLAM routine; while local surface defects can be measured with an actuated optical profilometer. The depth sensor uses a modified Kinect Fusion to create a contextual map of the environment. A custom actuated optical profilometer is created and then calibrated. The two systems are then registered to each other to place local surface scans from the profilometer into a scene context created by Kinect Fusion. The resulting system can create a contextual map of large scale features (0.4 m) with less than 10% error while the optical profilometer can create surface reconstructions with sub millimeter resolution. The combination of the two allows for the detection and quantification of surface faults with the profilometer placed in a contextual reconstruction.
Master of Science
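Placing a local profilometer scan into the global map frame, as described above, is a rigid-body transform; a small sketch with a hypothetical sensor pose:

```python
import math

def make_pose(yaw, t):
    """Rigid pose: rotation about Z by `yaw` radians plus translation `t`,
    as a 4x4 row-major homogeneous matrix."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, t[0]],
            [s,  c, 0.0, t[1]],
            [0.0, 0.0, 1.0, t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def transform(pose, p):
    """Map a local 3D point into the global frame."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(pose[r][k] * v[k] for k in range(4)) for r in range(3))

# A profilometer point 1 m ahead of a sensor that sits rotated 90 degrees
# and displaced (2, 0, 0.5) in the global scene reconstruction frame.
pose = make_pose(math.pi / 2, (2.0, 0.0, 0.5))
global_pt = transform(pose, (1.0, 0.0, 0.0))
```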
10

D'Angelo, Paolo. "3D scene reconstruction by integration of photometric and geometric methods". [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985352949.


Books on the topic "Reconstruction 3D de la scene"

1

Nitschke, Christian. 3D reconstruction: Real-time volumetric scene reconstruction from multiple views. Saarbrücken: VDM, Verlag Dr. Müller, 2007.

Find the full text of the source
2

Weinmann, Martin. Reconstruction and Analysis of 3D Scenes. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5.

3

Bellocchio, Francesco, N. Alberto Borghese, Stefano Ferrari, and Vincenzo Piuri. 3D Surface Reconstruction. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-5632-2.

4

Zhang, Zhengyou, and Olivier Faugeras. 3D Dynamic Scene Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-58148-9.

5

DeHaan, John D., 1948-, ed. Forensic fire scene reconstruction. Upper Saddle River, N.J.: Prentice-Hall, 2004.

6

DeHaan, John D., 1948-, ed. Forensic fire scene reconstruction. 2nd ed. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2009.

7

Icove, David J. Forensic fire scene reconstruction. 2nd ed. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2009.

8

Abdelguerfi, Mahdi, ed. 3D Synthetic Environment Reconstruction. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4419-8756-3.

9

International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (2nd: 1993: Snowbird, Utah), ed. Fully 3D image reconstruction. Bristol: IOP Publishing, 1994.

10

Abdelguerfi, Mahdi, ed. 3D synthetic environment reconstruction. Boston: Kluwer Academic Publishers, 2001.


Book chapters on the topic "Reconstruction 3D de la scene"

1

Lucas, Laurent, Céline Loscos, and Yannick Remion. "3D Scene Reconstruction and Structuring". In 3D Video, 157–72. Hoboken, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118761915.ch8.

2

Weinmann, Martin. "3D Scene Analysis". In Reconstruction and Analysis of 3D Scenes, 141–224. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_6.

3

Chen, Jian, Bingxi Jia, and Kaixiang Zhang. "Road Scene 3D Reconstruction". In Multi-View Geometry Based Visual Perception and Control of Robotic Systems, 73–94. Boca Raton, FL: CRC Press/Taylor & Francis Group, 2018. http://dx.doi.org/10.1201/9780429489211-5.

4

Zhang, Zhengyou, and Olivier Faugeras. "Reconstruction of 3D Line Segments". In 3D Dynamic Scene Analysis, 29–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-58148-9_3.

5

Morana, Marco. "3D Scene Reconstruction Using Kinect". In Advances in Intelligent Systems and Computing, 179–90. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03992-3_13.

6

Hartley, Richard, and Gilles Debunne. "Dualizing Scene Reconstruction Algorithms". In 3D Structure from Multiple Images of Large-Scale Environments, 14–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-49437-5_2.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7. Jiang, Cansen, Yohan Fougerolle, David Fofi, and Cédric Demonceaux. "Dynamic 3D Scene Reconstruction and Enhancement." In Image Analysis and Processing - ICIAP 2017, 518–29. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68560-1_46.
8. Lucas, Laurent, Céline Loscos, and Yannick Remion. "3D Reconstruction of Sport Scenes." In 3D Video, 405–20. Hoboken, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118761915.ch21.
9. Miller, Corey A., and Thomas J. Walls. "Passive 3D Scene Reconstruction via Hyperspectral Imagery." In Advances in Visual Computing, 413–22. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14249-4_39.
10. Denninger, Maximilian, and Rudolph Triebel. "3D Scene Reconstruction from a Single Viewport." In Computer Vision – ECCV 2020, 51–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58542-6_4.

Conference papers on the topic "Reconstruction 3D de la scene"

1. Little, Charles Q., Daniel E. Small, Ralph R. Peters, and J. B. Rigdon. "Forensic 3D scene reconstruction." In 28th AIPR Workshop: 3D Visualization for Data Exploration and Decision Making, edited by William R. Oliver. SPIE, 2000. http://dx.doi.org/10.1117/12.384885.
2. Liu, Juan, Shijie Zhang, and Haowen Ma. "Real-time Holographic Display based on Dynamic Scene Reconstruction and Rendering." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/3d.2023.dw5a.1.

Abstract:
We propose an end-to-end real-time holographic display based on real-time capture of real scenes, with simple system composition and affordable hardware requirements. The proposed technique will break the dilemma of existing real-scene holographic displays.
3. Shen, Yangping, Yoshitsugu Manabe, and Noriko Yata. "3D scene reconstruction and object recognition for indoor scene." In International Workshop on Advanced Image Technology, edited by Phooi Yee Lau, Kazuya Hayase, Qian Kemao, Wen-Nung Lie, Yung-Lyul Lee, Sanun Srisuk, and Lu Yu. SPIE, 2019. http://dx.doi.org/10.1117/12.2521492.
4. Sabharwal, Chaman L. "Stereoscopic projections and 3D scene reconstruction." In the 1992 ACM/SIGAPP symposium. New York, New York, USA: ACM Press, 1992. http://dx.doi.org/10.1145/130069.130155.
5. Shan, Qi, Riley Adams, Brian Curless, Yasutaka Furukawa, and Steven M. Seitz. "The Visual Turing Test for Scene Reconstruction." In 2013 International Conference on 3D Vision (3DV). IEEE, 2013. http://dx.doi.org/10.1109/3dv.2013.12.
6. Da Silveira, Thiago L. T., and Cláudio R. Jung. "Dense 3D Indoor Scene Reconstruction from Spherical Images." In Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sibgrapi.est.2020.12977.

Abstract:
Techniques for 3D reconstruction of scenes based on images are popular and support a number of secondary applications. Traditional approaches require several captures for covering whole environments due to the narrow field of view (FoV) of pinhole-based/perspective cameras. This paper summarizes the main contributions of the homonym Ph.D. Thesis, which addresses the 3D scene reconstruction problem by considering omnidirectional (spherical or 360°) cameras that present a 360° × 180° FoV. Although spherical imagery has the benefit of the full FoV, it is also challenging due to the inherent distortions involved in the capture and representation of such images, which might compromise the use of many well-established algorithms for image processing and computer vision. The referred Ph.D. Thesis introduces novel methodologies for estimating dense depth maps from two or more uncalibrated and temporally unordered 360° images. It also presents a framework for inferring depth from a single spherical image. We validate our approaches using both synthetic data and computer-generated imagery, showing competitive results concerning other state-of-the-art methods.
7. Huzaifa, Muhammad, Boyuan Tian, Yihan Pang, Henry Che, Shenlong Wang, and Sarita Adve. "ADAPTIVEFUSION: Low Power Scene Reconstruction." In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2023. http://dx.doi.org/10.1109/vrw58643.2023.00296.
8. Miksik, Ondrej, Yousef Amar, Vibhav Vineet, Patrick Perez, and Philip H. S. Torr. "Incremental dense multi-modal 3D scene reconstruction." In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015. http://dx.doi.org/10.1109/iros.2015.7353479.
9. Hane, Christian, Christopher Zach, Andrea Cohen, Roland Angst, and Marc Pollefeys. "Joint 3D Scene Reconstruction and Class Segmentation." In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013. http://dx.doi.org/10.1109/cvpr.2013.20.
10. McBride, Jonah C., Magnus S. Snorrason, Thomas R. Goodsell, Ross S. Eaton, and Mark R. Stevens. "3D scene reconstruction: why, when, and how?" In Defense and Security, edited by Grant R. Gerhart, Chuck M. Shoemaker, and Douglas W. Gage. SPIE, 2004. http://dx.doi.org/10.1117/12.542678.

Organization reports on the topic "Reconstruction 3D de la scene"

1. Defrise, Michel, and Grant T. Gullberg. 3D reconstruction of tensors and vectors. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/838184.
2. Lyckegaard, A., A. Alpers, W. Ludwig, R. W. Fonda, L. Margulies, A. Goetz, H. O. Soerensen, S. R. Dey, H. F. Poulsen, and E. M. Lauridsen. 3D Grain Reconstruction from Boxscan Data. Fort Belvoir, VA: Defense Technical Information Center, January 2010. http://dx.doi.org/10.21236/ada530190.
3. Weiss, Isaac. 3D Curve Reconstruction From Uncalibrated Cameras. Fort Belvoir, VA: Defense Technical Information Center, January 1996. http://dx.doi.org/10.21236/ada306610.
4. Rother, Diego, Kedar Patwardhan, Iman Aganj, and Guillermo Sapiro. 3D Priors for Scene Learning from a Single View. Fort Belvoir, VA: Defense Technical Information Center, May 2008. http://dx.doi.org/10.21236/ada513268.
5. KRUHL, Jörn H., Robert MARSCHALLINGER, Kai-Uwe HESS, Asher FLAWS, and Richard ZEFACK KHEMAKA. 3D fabric recording by neutron tomography: benchmarking with destructive 3D reconstruction. Cogeo@oeaw-giscience, June 2010. http://dx.doi.org/10.5242/cogeo.2010.0009.
6. KRUHL, Jörn H., Robert MARSCHALLINGER, Kai-Uwe HESS, Asher FLAWS, and Richard ZEFACK KHEMAKA. 3D fabric recording by neutron tomography: benchmarking with destructive 3D reconstruction. Cogeo@oeaw-giscience, June 2010. http://dx.doi.org/10.5242/cogeo.2010.0009.a01.
7. Rosales, Romer, Vassilis Athitsos, Leonid Sigal, and Stan Sclaroff. 3D Hand Pose Reconstruction Using Specialized Mappings. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada451286.
8. Blankenbecler, Richard. 3D Image Reconstruction: Determination of Pattern Orientation. Office of Scientific and Technical Information (OSTI), March 2003. http://dx.doi.org/10.2172/812988.
9. Scott, Logan, and Thomas Karnowski. An Evaluation of Three Dimensional Scene Reconstruction Tools for Safeguards Applications. Office of Scientific and Technical Information (OSTI), October 2023. http://dx.doi.org/10.2172/2205425.
10. Tyler, Christopher W., and Tai-Sing Lee. Encoding of 3D Structure in the Visual Scene: A New Conceptualization. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada580528.