A selection of scientific literature on the topic "Reconstruction 3D de la scene"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Reconstruction 3D de la scene".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Reconstruction 3D de la scene"

1

Wen, Mingyun, and Kyungeun Cho. "Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes." Mathematics 11, no. 2 (January 12, 2023): 403. http://dx.doi.org/10.3390/math11020403.

Abstract:
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object, rather than individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the image conversion process from a 3D scene to a 2D image is irreversible, and the projection from 3D to 2D reduces a dimension. To alleviate the effects of dimension reduction, we proposed a module to generate depth features that can aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders that estimate 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach demonstrated superior performance in generating complete meshes compared to methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions), as well as producing more accurate shapes compared to previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it could effectively improve the object-aware 3D scene reconstruction performance over existing methods.
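As a rough illustration of the two-decoder idea described in this abstract, the following PyTorch sketch pairs an implicit occupancy head with an explicit vertex-offset head on one shared image encoder. The layer sizes, names, and template-mesh setup are our own placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TwoDecoderReconstructor(nn.Module):
    """Shared image encoder with an implicit head and an explicit mesh head."""
    def __init__(self, feat_dim=256, n_verts=2562):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        # Decoder 1: implicit representation (occupancy for each 3D query point).
        self.implicit_head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(), nn.Linear(128, 1))
        # Decoder 2: explicit representation (offsets for template mesh vertices).
        self.vertex_head = nn.Linear(feat_dim, n_verts * 3)

    def forward(self, image, query_points):
        f = self.encoder(image)                          # (B, feat_dim)
        B, N, _ = query_points.shape
        f_rep = f[:, None, :].expand(B, N, f.shape[-1])
        occ = self.implicit_head(torch.cat([f_rep, query_points], dim=-1))
        offsets = self.vertex_head(f).view(B, -1, 3)
        return occ.squeeze(-1), offsets                  # (B, N), (B, n_verts, 3)

model = TwoDecoderReconstructor()
occ, offsets = model(torch.randn(2, 3, 128, 128), torch.rand(2, 1024, 3))
```

In a multitask setting, both heads would be supervised jointly so the shared features serve both shape representations.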
2

Guo, Rui Bin, Tao Guan, Dong Xiang Zhou, Ke Ju Peng, and Wei Hong Fan. "Efficient Multi-Scale Registration of 3D Reconstructions Based on Camera Center Constraint." Advanced Materials Research 998-999 (July 2014): 1018–23. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.1018.

Abstract:
Recent approaches for reconstructing 3D scenes from image collections only produce single scene models. To build a unified scene model that contains multiple subsets, we present a novel method for registration of 3D scene reconstructions in different scales. It first normalizes the scales of the models building on similarity reconstruction by the constraint of the 3D position of shared cameras. Then we use Cayley transform to fit the matrix of coordinates transformation for the models in normalization scales. The experimental results show the effectiveness and scalability of the proposed approach.
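A minimal numpy sketch of the scale-normalization step described above: given the 3D positions of cameras shared by two reconstructions, estimate the relative scale and then a similarity transform. Note that the paper fits the final transform with a Cayley parameterization, whereas this sketch substitutes a plain SVD-based orthogonal Procrustes solution.

```python
import numpy as np

def align_reconstructions(centers_a, centers_b):
    """Similarity transform (s, R, t) with s * R @ a + t ~ b for shared cameras."""
    mu_a, mu_b = centers_a.mean(axis=0), centers_b.mean(axis=0)
    A, B = centers_a - mu_a, centers_b - mu_b
    # Scale normalization from the RMS spread of the shared camera centers.
    s = np.sqrt((B ** 2).sum() / (A ** 2).sum())
    # Rotation by orthogonal Procrustes (SVD), with a reflection guard.
    U, _, Vt = np.linalg.svd(B.T @ (s * A))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = mu_b - s * (R @ mu_a)
    return s, R, t
```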
3

Jang, Hyeonjoong, Andréas Meuleman, Dahyun Kang, Donggun Kim, Christian Richardt, and Min H. Kim. "Egocentric scene reconstruction from an omnidirectional video." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–12. http://dx.doi.org/10.1145/3528223.3530074.

Abstract:
Omnidirectional videos capture environmental scenes effectively, but they have rarely been used for geometry reconstruction. In this work, we propose an egocentric 3D reconstruction method that can acquire scene geometry with high accuracy from a short egocentric omnidirectional video. To this end, we first estimate per-frame depth using a spherical disparity network. We then fuse per-frame depth estimates into a novel spherical binoctree data structure that is specifically designed to tolerate spherical depth estimation errors. By subdividing the spherical space into binary tree and octree nodes that represent spherical frustums adaptively, the spherical binoctree effectively enables egocentric surface geometry reconstruction for environmental scenes while simultaneously assigning high-resolution nodes for closely observed surfaces. This allows us to reconstruct an entire scene from a short video captured with a small camera trajectory. Experimental results validate the effectiveness and accuracy of our approach for reconstructing the 3D geometry of environmental scenes from short egocentric omnidirectional video inputs. We further demonstrate various applications using a conventional omnidirectional camera, including novel-view synthesis, object insertion, and relighting of scenes using reconstructed 3D models with texture.
4

Buck, Ursula. "3D crime scene reconstruction." Forensic Science International 304 (November 2019): 109901. http://dx.doi.org/10.1016/j.forsciint.2019.109901.

5

Gao, Huanbing, Lei Liu, Ya Tian, and Shouyin Lu. "3D Reconstruction for Road Scene with Obstacle Detection Feedback." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 12 (August 27, 2018): 1855021. http://dx.doi.org/10.1142/s0218001418550212.

Abstract:
This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver assistance systems, and car navigation systems. However, errors often arise during 3D reconstruction due to the shade from moving objects in the road scene. The presented 3D reconstruction method with obstacle detection feedback avoids this problem. Firstly, this paper offers a framework for the 3D reconstruction of road scenes by laser scanning and vision. A calibration method based on the location of the horizon is proposed, and a method of attitude angle measurement based on the vanishing point is proposed to revise the 3D reconstruction result. Secondly, the reconstruction framework is extended by integrating an object recognition module that can automatically detect and discriminate obstacles in the input video streams by a RANSAC approach and a threshold filter, and localize them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. The experimental results verified the feasibility and practicability of the proposed method.
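The RANSAC step mentioned above can be illustrated with a small sketch: fit the dominant road plane to a point cloud and flag off-plane points as obstacle candidates. The iteration count and inlier threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ransac_ground_plane(points, iters=200, thresh=0.15, seed=0):
    """Return a boolean mask: True = road-plane inlier, False = obstacle candidate."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:        # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

cloud = np.random.rand(1000, 3) * [20, 20, 0.1]   # mostly flat "road" for a smoke test
obstacle_mask = ~ransac_ground_plane(cloud)
```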
6

Liu, Yilin, Liqiang Lin, Yue Hu, Ke Xie, Chi-Wing Fu, Hao Zhang, and Hui Huang. "Learning Reconstructability for Drone Aerial Path Planning." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–17. http://dx.doi.org/10.1145/3550454.3555433.

Abstract:
We introduce the first learning-based reconstructability predictor to improve view and path planning for large-scale 3D urban scene acquisition using unmanned drones. In contrast to previous heuristic approaches, our method learns a model that explicitly predicts how well a 3D urban scene will be reconstructed from a set of viewpoints. To make such a model trainable and simultaneously applicable to drone path planning, we simulate the proxy-based 3D scene reconstruction during training to set up the prediction. Specifically, the neural network we design is trained to predict the scene reconstructability as a function of the proxy geometry, a set of viewpoints, and optionally a series of scene images acquired in flight. To reconstruct a new urban scene, we first build the 3D scene proxy, then rely on the predicted reconstruction quality and uncertainty measures by our network, based on the proxy geometry, to guide the drone path planning. We demonstrate that our data-driven reconstructability predictions are more closely correlated to the true reconstruction quality than prior heuristic measures. Further, our learned predictor can be easily integrated into existing path planners to yield improvements. Finally, we devise a new iterative view planning framework, based on the learned reconstructability, and show superior performance of the new planner when reconstructing both synthetic and real scenes.
7

Dong, Bo, Kaiqiang Chen, Zhirui Wang, Menglong Yan, Jiaojiao Gu, and Xian Sun. "MM-NeRF: Large-Scale Scene Representation with Multi-Resolution Hash Grid and Multi-View Priors Features." Electronics 13, no. 5 (February 22, 2024): 844. http://dx.doi.org/10.3390/electronics13050844.

Abstract:
Reconstructing large-scale scenes using Neural Radiance Fields (NeRFs) is a research hotspot in 3D computer vision. Existing MLP (multi-layer perceptron)-based methods often suffer from issues of underfitting and a lack of fine details in rendering large-scale scenes. Popular solutions are to divide the scene into small areas for separate modeling or to increase the layer scale of the MLP network. However, the subsequent problem is that the training cost increases. Moreover, reconstructing large scenes, unlike object-scale reconstruction, involves a geometrically considerable increase in the quantity of view data if the prior information of the scene is not effectively utilized. In this paper, we propose an innovative method named MM-NeRF, which integrates efficient hybrid features into the NeRF framework to enhance the reconstruction of large-scale scenes. We propose employing a dual-branch feature capture structure, comprising a multi-resolution 3D hash grid feature branch and a multi-view 2D prior feature branch. The 3D hash grid feature models geometric details, while the 2D prior feature supplements local texture information. Our experimental results show that such integration is sufficient to render realistic novel views with fine details, forming a more accurate geometric representation. Compared with representative methods in the field, our method significantly improves the PSNR (Peak Signal-to-Noise Ratio) by approximately 5%. This remarkable progress underscores the outstanding contribution of our method in the field of large-scene radiance field reconstruction.
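To make the hash-grid branch concrete, here is a toy, nearest-vertex version of a multi-resolution hash encoding in numpy. Real systems (e.g., Instant-NGP, which popularized such grids) trilinearly interpolate the eight surrounding vertices and learn the tables end to end, so treat this purely as an illustration of the lookup itself.

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # common hashing primes

def hash_encode(xyz, tables, resolutions):
    """xyz: (N, 3) in [0, 1); one feature table of shape (T, F) per resolution."""
    feats = []
    for table, res in zip(tables, resolutions):
        idx = np.minimum((xyz * res).astype(np.uint64), np.uint64(res - 1))
        h = np.bitwise_xor.reduce(idx * PRIMES, axis=-1) % np.uint64(len(table))
        feats.append(table[h])               # gather one feature vector per point
    return np.concatenate(feats, axis=-1)    # (N, F * num_levels)

rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)) for _ in range(2)]   # toy 2-level grid
features = hash_encode(rng.random((5, 3)), tables, resolutions=[16, 64])
print(features.shape)                        # (5, 4)
```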
8

Tingdahl, David, and Luc Van Gool. "An Enhanced On-Line Service for 3D Model Construction from Photographs." International Journal of Heritage in the Digital Era 1, no. 2 (June 2012): 277–94. http://dx.doi.org/10.1260/2047-4970.1.2.277.

Abstract:
We present a web service for image based 3D reconstruction. The system allows a cultural heritage professional to easily create a 3D model of a scene or object out of images taken from different viewpoints. The user uploads the images to our server on which all processing takes place, and the final result can be downloaded upon completion. Any consumer-class digital camera can be used, and the system is free to use for non-commercial purposes. The service includes a number of innovations to greatly simplify the process of taking pictures suitable for reconstruction. In particular, we are able to construct models of planar scenes and from photographs shot using a turntable, and at varying zoom levels. Although the first two may seem like particularly simple cases, they cause some mathematical issues with traditional self-calibration techniques. We handle these cases by taking advantage of a new automatic camera calibration method that uses meta-data stored with the images. For fixed-lens camera setups, we can also reuse previously computed calibrations to support otherwise degenerate scenes. Furthermore, we can automatically compute the relative scale and transformation between two reconstructions of the same scene, merging two reconstructions into one. We demonstrate the capabilities of the system by two case studies: turntable reconstruction of various objects and the reconstruction of a cave, with walls and roof integrated into a complete model.
9

Wang, Wei, Fengjiao Gao, and Yongliang Shen. "Res-NeuS: Deep Residuals and Neural Implicit Surface Learning for Multi-View Reconstruction." Sensors 24, no. 3 (January 29, 2024): 881. http://dx.doi.org/10.3390/s24030881.

Abstract:
Surface reconstruction using neural networks has proven effective in reconstructing dense 3D surfaces through image-based neural rendering. Nevertheless, current methods are challenging when dealing with the intricate details of large-scale scenes. The high-fidelity reconstruction performance of neural rendering is constrained by the view sparsity and structural complexity of such scenes. In this paper, we present Res-NeuS, a method combining ResNet-50 and neural surface rendering for dense 3D reconstruction. Specifically, we present appearance embeddings: ResNet-50 is used to extract the appearance depth features of an image to further capture more scene details. We interpolate points near the surface and optimize their weights for the accurate localization of 3D surfaces. We introduce photometric consistency and geometric constraints to optimize 3D surfaces and eliminate geometric ambiguity existing in current methods. Finally, we design a 3D geometry automatic sampling to filter out uninteresting areas and reconstruct complex surface details in a coarse-to-fine manner. Comprehensive experiments demonstrate Res-NeuS's superior capability in the reconstruction of 3D surfaces in complex, large-scale scenes, and the Chamfer distance of the reconstructed 3D model is 0.4 times that of general neural rendering 3D reconstruction methods and 0.6 times that of traditional 3D reconstruction methods.
10

Xia, Wei, Rongfeng Lu, Yaoqi Sun, Chenghao Xu, Kun Lv, Yanwei Jia, Zunjie Zhu, and Bolun Zheng. "3D Indoor Scene Completion via Room Layout Estimation." Journal of Physics: Conference Series 2025, no. 1 (September 1, 2021): 012102. http://dx.doi.org/10.1088/1742-6596/2025/1/012102.

Abstract:
Recent advances in 3D reconstructions have shown impressive progress in 3D indoor scene reconstruction, enabling automatic scene modeling; however, holes in the 3D scans hinder the further usage of the reconstructed models. Thus, we propose the task of layout-based hole filling for the incomplete indoor scene scans: from the mesh of a scene model, we estimate the scene layout by detecting the principal planes of a scene and leverage the layout as the prior for the accurate completion of planar regions. Experiments show that guiding scene model completion through the scene layout prior significantly outperforms the alternative approach to the task of scene model completion.

Dissertations on the topic "Reconstruction 3D de la scene"

1

Boyling, Timothy A. „Active vision for autonomous 3D scene reconstruction“. Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433622.

2

Nitschke, Christian. "3D reconstruction: Real-time volumetric scene reconstruction from multiple views." Saarbrücken: VDM Verl. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2939698&prov=M&dok_var=1&dok_ext=htm.

3

Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.

Abstract:
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse and heterogeneous-density point clouds, proposing different techniques to create a 3D model of the surroundings. In the first part, we study the use of 3-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-Maps applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also focus on single-frame environment perception by the introduction of a 3D implicit surface reconstruction algorithm capable of dealing with heterogeneous density data by employing an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. We dive into deep learning applications for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus found in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. Our method employs a new hybrid pipeline based on a 2D CNN backbone branch to reduce computation overhead and 3D segmentation heads to predict the complete semantic scene at different scales, being significantly lighter and faster than existing approaches.
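A minimal sketch of the ray-path occupancy idea from this abstract, assuming all points fall inside the grid volume (the thesis's actual sensor model is more careful about partially occupied cells and discretization error):

```python
import numpy as np

def update_occupancy(logodds, origin, hits, cell=0.2, l_free=-0.4, l_occ=0.85):
    """Log-odds update of a 3D grid: cells along each ray get 'free' evidence,
    the cell containing the ray endpoint gets 'occupied' evidence."""
    for hit in hits:
        ray = hit - origin
        steps = max(int(np.linalg.norm(ray) / (0.5 * cell)), 1)
        for s in range(steps):                            # march along the ray path
            i, j, k = ((origin + ray * s / steps) / cell).astype(int)
            logodds[i, j, k] += l_free
        i, j, k = (hit / cell).astype(int)
        logodds[i, j, k] += l_occ

grid = np.zeros((50, 50, 20))                             # 10 m x 10 m x 4 m volume
update_occupancy(grid, np.array([5.0, 5.0, 1.0]), [np.array([8.0, 6.0, 1.0])])
```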
4

Goldman, Benjamin Joseph. "Broadband World Modeling and Scene Reconstruction." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23094.

Abstract:
Perception is a key feature in how any creature or autonomous system relates to its environment. While there are many types of perception, this thesis focuses on the improvement of the visual robotics perception systems. By implementing a broadband passive sensing system in conjunction with current perception algorithms, this thesis explores scene reconstruction and world modeling.
The process involves two main steps. The first is stereo correspondence using block matching algorithms with filtering to improve the quality of this matching process. The disparity maps are then transformed into 3D point clouds. These point clouds are filtered again before the registration process is done. The registration uses a SAC-IA matching technique to align the point clouds with minimum error. The registered final cloud is then filtered again to smooth and down-sample the large amount of data. This process was implemented through a software architecture that utilizes Qt, OpenCV, and Point Cloud Library. It was tested using a variety of experiments on each of the components of the process. It shows promise for being able to replace or augment existing UGV perception systems in the future.
Master of Science
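The first two stages of this pipeline (block matching, then reprojection to a point cloud) can be sketched with OpenCV as follows; the file names and the reprojection matrix Q are placeholders standing in for calibrated inputs.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair assumed
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: stereo correspondence by block matching (result is 16.4 fixed point).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Stage 2: reproject to 3D. In a real rig, Q comes from cv2.stereoRectify;
# the values below are made-up stand-ins to keep the sketch self-contained.
f, cx, cy, baseline = 700.0, 320.0, 240.0, 0.12
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,   f],
                [0, 0, 1 / baseline, 0]])
points = cv2.reprojectImageTo3D(disparity, Q)
cloud = points[disparity > 0]                           # keep valid matches only
```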
5

Booth, Roy. "Scene analysis and 3D object reconstruction using passive vision." Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295780.

6

Aufderheide, Dominik. "VISrec!: Visual-inertial sensor fusion for 3D scene reconstruction." Thesis, University of Bolton, 2014. http://ubir.bolton.ac.uk/649/.

Abstract:
The self-acting generation of three-dimensional models, by analysing monocular image streams from standard cameras, is one fundamental problem in the field of computer vision. A prerequisite for the scene modelling is the computation of the camera pose for the different frames of the sequence. Several techniques and methodologies have been introduced during the last decade to solve this classical Structure from Motion (SfM) problem, which incorporates camera egomotion estimation and subsequent recovery of 3D scene structure. However, the applicability of those approaches to real-world devices and applications is still limited, due to unsatisfactory properties in terms of computational costs, accuracy and robustness. Thus, tactile systems and laser scanners are still the predominantly used methods in industry for 3D measurements. This thesis suggests a novel framework for 3D scene reconstruction based on visual-inertial measurements and a corresponding sensor fusion framework. The integration of additional modalities, such as inertial measurements, is useful to compensate for typical problems of systems which rely only on visual information. The complete system is implemented based on a generic framework for designing Multi-Sensor Data Fusion (MSDF) systems. It is demonstrated that the incorporation of inertial measurements into a visual-inertial sensor fusion scheme for scene reconstruction (VISrec!) outperforms classical methods in terms of robustness and accuracy. It can be shown that the combination of visual and inertial modalities for scene reconstruction allows a reduction of the mean reconstruction error of typical scenes by up to 30%. Furthermore, the number of 3D feature points which can be successfully reconstructed can be nearly doubled. In addition, range and RGB-D sensors have been successfully incorporated into the VISrec! scheme, proving the general applicability of the framework. By this it is possible to increase the number of 3D points within the reconstructed point cloud by a factor of five hundred if compared to standard visual SfM. Finally, the applicability of the VISrec!-sensor to a specific industrial problem, in cooperation with a local company, for reverse engineering of tailor-made car racing components demonstrates the usefulness of the developed system.
7

Chandraker, Manmohan Krishna. "From pictures to 3D: Global optimization for scene reconstruction." Diss., [La Jolla]: University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3369041.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed September 15, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 235-246).
8

Manessis, A. "3D reconstruction from video using a mobile robot." Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/844129/.

Abstract:
An autonomous robot able to navigate inside an unknown environment and reconstruct full 3D scene models using monocular video has been a long-term goal in the field of Machine Vision. A key component of such a system is the reconstruction of surface models from estimated scene structure. Sparse 3D measurements of real scenes are readily estimated from N-view image sequences using structure-from-motion techniques. In this thesis we present a geometric theory for reconstruction of surface models from sparse 3D data captured from N camera views. Based on this theory we introduce a general N-view algorithm for reconstruction of 3D models of arbitrary scenes from sparse data. Using a hypothesise-and-verify strategy, this algorithm reconstructs a surface model which interpolates the sparse data and is guaranteed to be consistent with the feature visibility in the N views. To achieve efficient reconstruction independent of the number of views, a simplified incremental algorithm is developed which integrates the feature visibility independently for each view. This approach is shown to converge to an approximation of the real scene structure and to have a computational cost which is linear in the number of views. Surface hypotheses are generated based on a new incremental planar constrained Delaunay triangulation algorithm. We present a statistical geometric framework to explicitly consider noise inherent in estimates of 3D scene structure from any real vision system. This approach ensures that the reconstruction is reliable in the presence of noise and missing data. Results are presented for reconstruction of both real and synthetic scenes together with an evaluation of the reconstruction performance in the presence of noise.
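The planar Delaunay step described above can be approximated in a few lines: project points belonging to a (near-)planar region onto the plane's 2D basis and triangulate there. The constrained edges and visibility checks that are central to the thesis are omitted in this simplification.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_planar_points(points):
    """Triangulate 3D points lying near a common plane via their 2D plane basis."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)   # Vt[2] is the plane normal
    uv = (points - centroid) @ Vt[:2].T           # 2D coordinates in the plane
    return Delaunay(uv).simplices                 # (M, 3) index triples into points

pts = np.column_stack([np.random.rand(30), np.random.rand(30), 0.01 * np.random.rand(30)])
triangles = triangulate_planar_points(pts)
```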
9

Moodie, Daniel Thien-An. "Sensor Fused Scene Reconstruction and Surface Inspection." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/47453.

Abstract:
Optical three-dimensional (3D) mapping routines are used in inspection robots to detect faults by creating 3D reconstructions of environments. To detect surface faults, sub-millimeter depth resolution is required to determine minute differences caused by coating loss and pitting. Sensors that can detect these small depth differences cannot quickly create contextual maps of large environments. To solve the 3D mapping problem, a sensor-fused approach is proposed that can gather contextual information about large environments with one depth sensor and a SLAM routine, while local surface defects can be measured with an actuated optical profilometer. The depth sensor uses a modified Kinect Fusion to create a contextual map of the environment. A custom actuated optical profilometer is created and then calibrated. The two systems are then registered to each other to place local surface scans from the profilometer into a scene context created by Kinect Fusion. The resulting system can create a contextual map of large-scale features (0.4 m) with less than 10% error, while the optical profilometer can create surface reconstructions with sub-millimeter resolution. The combination of the two allows for the detection and quantification of surface faults with the profilometer placed in a contextual reconstruction.
Master of Science
10

D'Angelo, Paolo. "3D scene reconstruction by integration of photometric and geometric methods." [S.l.]: [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985352949.


Books on the topic "Reconstruction 3D de la scene"

1

Nitschke, Christian. 3D reconstruction: Real-time volumetric scene reconstruction from multiple views. Saarbrücken: VDM, Verlag Dr. Müller, 2007.

2

Weinmann, Martin. Reconstruction and Analysis of 3D Scenes. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5.

3

Bellocchio, Francesco, N. Alberto Borghese, Stefano Ferrari, and Vincenzo Piuri. 3D Surface Reconstruction. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-5632-2.

4

Zhang, Zhengyou, and Olivier Faugeras. 3D Dynamic Scene Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-58148-9.

5

DeHaan, John D., ed. Forensic fire scene reconstruction. Upper Saddle River, N.J.: Prentice-Hall, 2004.

6

DeHaan, John D., ed. Forensic fire scene reconstruction. 2nd ed. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2009.

7

Icove, David J. Forensic fire scene reconstruction. 2nd ed. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2009.

8

Abdelguerfi, Mahdi, ed. 3D Synthetic Environment Reconstruction. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4419-8756-3.

9

International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (2nd: 1993: Snowbird, Utah), ed. Fully 3D image reconstruction. Bristol: IOP Publishing, 1994.

10

Abdelguerfi, Mahdi, ed. 3D synthetic environment reconstruction. Boston: Kluwer Academic Publishers, 2001.


Book chapters on the topic "Reconstruction 3D de la scene"

1

Lucas, Laurent, Céline Loscos, and Yannick Remion. "3D Scene Reconstruction and Structuring." In 3D Video, 157–72. Hoboken, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118761915.ch8.

2

Weinmann, Martin. "3D Scene Analysis." In Reconstruction and Analysis of 3D Scenes, 141–224. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_6.

3

Chen, Jian, Bingxi Jia, and Kaixiang Zhang. "Road Scene 3D Reconstruction." In Multi-View Geometry Based Visual Perception and Control of Robotic Systems, 73–94. Boca Raton, FL: CRC Press/Taylor & Francis Group, 2018. http://dx.doi.org/10.1201/9780429489211-5.

4

Zhang, Zhengyou, and Olivier Faugeras. "Reconstruction of 3D Line Segments." In 3D Dynamic Scene Analysis, 29–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-58148-9_3.

5

Morana, Marco. "3D Scene Reconstruction Using Kinect." In Advances in Intelligent Systems and Computing, 179–90. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03992-3_13.

6

Hartley, Richard, and Gilles Debunne. "Dualizing Scene Reconstruction Algorithms." In 3D Structure from Multiple Images of Large-Scale Environments, 14–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-49437-5_2.

7

Jiang, Cansen, Yohan Fougerolle, David Fofi, and Cédric Demonceaux. "Dynamic 3D Scene Reconstruction and Enhancement." In Image Analysis and Processing - ICIAP 2017, 518–29. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68560-1_46.

8

Lucas, Laurent, Céline Loscos, and Yannick Remion. "3D Reconstruction of Sport Scenes." In 3D Video, 405–20. Hoboken, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118761915.ch21.

9

Miller, Corey A., and Thomas J. Walls. "Passive 3D Scene Reconstruction via Hyperspectral Imagery." In Advances in Visual Computing, 413–22. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14249-4_39.

10

Denninger, Maximilian, and Rudolph Triebel. "3D Scene Reconstruction from a Single Viewport." In Computer Vision – ECCV 2020, 51–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58542-6_4.


Conference papers on the topic "Reconstruction 3D de la scene"

1

Little, Charles Q., Daniel E. Small, Ralph R. Peters, and J. B. Rigdon. "Forensic 3D scene reconstruction." In 28th AIPR Workshop: 3D Visualization for Data Exploration and Decision Making, edited by William R. Oliver. SPIE, 2000. http://dx.doi.org/10.1117/12.384885.

2

Liu, Juan, Shijie Zhang, and Haowen Ma. "Real-time Holographic Display based on Dynamic Scene Reconstruction and Rendering." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/3d.2023.dw5a.1.

Abstract:
We propose an end-to-end real-time holographic display based on real-time capture of real scenes, with simple system composition and affordable hardware requirements; the proposed technique will break the dilemma of existing real-scene holographic displays.
3

Shen, Yangping, Yoshitsugu Manabe, and Noriko Yata. "3D scene reconstruction and object recognition for indoor scene." In International Workshop on Advanced Image Technology, edited by Phooi Yee Lau, Kazuya Hayase, Qian Kemao, Wen-Nung Lie, Yung-Lyul Lee, Sanun Srisuk, and Lu Yu. SPIE, 2019. http://dx.doi.org/10.1117/12.2521492.

4

Sabharwal, Chaman L. "Stereoscopic projections and 3D scene reconstruction." In the 1992 ACM/SIGAPP symposium. New York, New York, USA: ACM Press, 1992. http://dx.doi.org/10.1145/130069.130155.

5

Shan, Qi, Riley Adams, Brian Curless, Yasutaka Furukawa, and Steven M. Seitz. "The Visual Turing Test for Scene Reconstruction." In 2013 International Conference on 3D Vision (3DV). IEEE, 2013. http://dx.doi.org/10.1109/3dv.2013.12.

6

Da Silveira, Thiago L. T., and Cláudio R. Jung. "Dense 3D Indoor Scene Reconstruction from Spherical Images." In Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sibgrapi.est.2020.12977.

Abstract:
Techniques for 3D reconstruction of scenes based on images are popular and support a number of secondary applications. Traditional approaches require several captures for covering whole environments due to the narrow field of view (FoV) of pinhole-based/perspective cameras. This paper summarizes the main contributions of the homonym Ph.D. Thesis, which addresses the 3D scene reconstruction problem by considering omnidirectional (spherical or 360◦) cameras that present a 360◦ × 180◦ FoV. Although spherical imagery has the benefit of the full FoV, it is also challenging due to the inherent distortions involved in the capture and representation of such images, which might compromise the use of many well-established algorithms for image processing and computer vision. The referred Ph.D. Thesis introduces novel methodologies for estimating dense depth maps from two or more uncalibrated and temporally unordered 360◦ images. It also presents a framework for inferring depth from a single spherical image. We validate our approaches using both synthetic data and computer-generated imagery, showing competitive results with respect to other state-of-the-art methods.
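As a small illustration of the spherical setting, the following helper back-projects an equirectangular depth map into a 3D point cloud; the coordinate conventions are one common choice, not necessarily the thesis's.

```python
import numpy as np

def spherical_depth_to_points(depth):
    """Back-project an equirectangular depth map (H x W, meters) to (H, W, 3)."""
    H, W = depth.shape
    lon = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2 - (np.arange(H) + 0.5) / H * np.pi        # latitude, +pi/2 at top
    lon, lat = np.meshgrid(lon, lat)
    rays = np.stack([np.cos(lat) * np.sin(lon),               # unit ray per pixel
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return rays * depth[..., None]

cloud = spherical_depth_to_points(np.ones((256, 512)))        # unit-sphere smoke test
```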
7

Huzaifa, Muhammad, Boyuan Tian, Yihan Pang, Henry Che, Shenlong Wang, and Sarita Adve. "ADAPTIVEFUSION: Low Power Scene Reconstruction." In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2023. http://dx.doi.org/10.1109/vrw58643.2023.00296.

8

Miksik, Ondrej, Yousef Amar, Vibhav Vineet, Patrick Perez, and Philip H. S. Torr. "Incremental dense multi-modal 3D scene reconstruction." In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015. http://dx.doi.org/10.1109/iros.2015.7353479.

9

Hane, Christian, Christopher Zach, Andrea Cohen, Roland Angst, and Marc Pollefeys. "Joint 3D Scene Reconstruction and Class Segmentation." In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013. http://dx.doi.org/10.1109/cvpr.2013.20.

10

McBride, Jonah C., Magnus S. Snorrason, Thomas R. Goodsell, Ross S. Eaton, and Mark R. Stevens. "3D scene reconstruction: why, when, and how?" In Defense and Security, edited by Grant R. Gerhart, Chuck M. Shoemaker, and Douglas W. Gage. SPIE, 2004. http://dx.doi.org/10.1117/12.542678.


Reports of organizations on the topic "Reconstruction 3D de la scene"

1

Defrise, Michel, and Grant T. Gullberg. 3D reconstruction of tensors and vectors. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/838184.

2

Lyckegaard, A., A. Alpers, W. Ludwig, R. W. Fonda, L. Margulies, A. Goetz, H. O. Soerensen, S. R. Dey, H. F. Poulsen, and E. M. Lauridsen. 3D Grain Reconstruction from Boxscan Data. Fort Belvoir, VA: Defense Technical Information Center, January 2010. http://dx.doi.org/10.21236/ada530190.

3

Weiss, Isaac. 3D Curve Reconstruction From Uncalibrated Cameras. Fort Belvoir, VA: Defense Technical Information Center, January 1996. http://dx.doi.org/10.21236/ada306610.

4

Rother, Diego, Kedar Patwardhan, Iman Aganj, and Guillermo Sapiro. 3D Priors for Scene Learning from a Single View. Fort Belvoir, VA: Defense Technical Information Center, May 2008. http://dx.doi.org/10.21236/ada513268.

5

KRUHL, Jörn H., Robert MARSCHALLINGER, Kai-Uwe HESS, Asher FLAWS, and Richard ZEFACK KHEMAKA. 3D fabric recording by neutron tomography: benchmarking with destructive 3D reconstruction. Cogeo@oeaw-giscience, June 2010. http://dx.doi.org/10.5242/cogeo.2010.0009.

6

KRUHL, Jörn H., Robert MARSCHALLINGER, Kai-Uwe HESS, Asher FLAWS, and Richard ZEFACK KHEMAKA. 3D fabric recording by neutron tomography: benchmarking with destructive 3D reconstruction. Cogeo@oeaw-giscience, June 2010. http://dx.doi.org/10.5242/cogeo.2010.0009.a01.

7

Rosales, Romer, Vassilis Athitsos, Leonid Sigal, and Stan Sclaroff. 3D Hand Pose Reconstruction Using Specialized Mappings. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada451286.

8

Blankenbecler, Richard. 3D Image Reconstruction: Determination of Pattern Orientation. Office of Scientific and Technical Information (OSTI), March 2003. http://dx.doi.org/10.2172/812988.

9

Scott, Logan, and Thomas Karnowski. An Evaluation of Three Dimensional Scene Reconstruction Tools for Safeguards Applications. Office of Scientific and Technical Information (OSTI), October 2023. http://dx.doi.org/10.2172/2205425.

10

Tyler, Christopher W., and Tai-Sing Lee. Encoding of 3D Structure in the Visual Scene: A New Conceptualization. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada580528.
