Dissertations / Theses on the topic 'Reconstruction 3D de la scene'

Consult the top 50 dissertations / theses for your research on the topic 'Reconstruction 3D de la scene.'

1

Boyling, Timothy A. "Active vision for autonomous 3D scene reconstruction." Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433622.

2

Nitschke, Christian. "3D reconstruction: real-time volumetric scene reconstruction from multiple views." Saarbrücken: VDM Verl. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2939698&prov=M&dok_var=1&dok_ext=htm.

3

Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.

Abstract:
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse and heterogeneous-density point clouds, proposing different techniques to create a 3D model of the surroundings. In the first part, we study the use of three-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-map applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also focus on single-frame environment perception by introducing a 3D implicit surface reconstruction algorithm capable of dealing with heterogeneous-density data through an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. Finally, we turn to deep learning for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus found in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. Our method employs a new hybrid pipeline based on a 2D CNN backbone branch to reduce computation overhead and 3D segmentation heads to predict the complete semantic scene at different scales, making it significantly lighter and faster than existing approaches.
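To make the ray-path idea concrete, here is a minimal log-odds occupancy update in Python. It is an illustrative sketch, not the author's sensor model: the voxel size, log-odds increments, and uniform ray sampling are all assumptions.

```python
import numpy as np

def update_occupancy(grid, origin, hit, voxel_size=0.2, l_free=-0.4, l_occ=0.85):
    """Walk the ray from the sensor origin to a LiDAR hit, decrementing
    log-odds for traversed (free) voxels and incrementing the end voxel."""
    o = np.asarray(origin) / voxel_size
    h = np.asarray(hit) / voxel_size
    n = int(np.ceil(np.linalg.norm(h - o))) + 1          # ~one sample per voxel
    for t in np.linspace(0.0, 1.0, n):
        idx = tuple(np.floor(o + t * (h - o)).astype(int))
        grid[idx] += l_free
    grid[tuple(np.floor(h).astype(int))] += l_occ - l_free  # end cell is occupied

grid = np.zeros((100, 100, 50))                          # log-odds occupancy volume
update_occupancy(grid, origin=(10.0, 10.0, 2.0), hit=(14.5, 12.0, 2.0))
```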
4

Goldman, Benjamin Joseph. "Broadband World Modeling and Scene Reconstruction." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23094.

Abstract:
Perception is a key feature in how any creature or autonomous system relates to its environment. While there are many types of perception, this thesis focuses on improving visual perception systems for robotics. By implementing a broadband passive sensing system in conjunction with current perception algorithms, this thesis explores scene reconstruction and world modeling.
The process involves two main steps. The first is stereo correspondence using block matching algorithms with filtering to improve the quality of the matching process. The disparity maps are then transformed into 3D point clouds. These point clouds are filtered again before registration. The registration uses a SAC-IA matching technique to align the point clouds with minimum error. The registered final cloud is then filtered again to smooth and downsample the large amount of data. This process was implemented through a software architecture that utilizes Qt, OpenCV, and the Point Cloud Library. It was tested using a variety of experiments on each of the components of the process. It shows promise for being able to replace or augment existing UGV perception systems in the future.
Master of Science
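The disparity-to-point-cloud step described in the abstract maps onto OpenCV's block matcher. This is a hedged illustration, not the thesis software: the image file names are placeholders, and the reprojection matrix Q, normally produced by stereo rectification, is replaced here by an identity stand-in.

```python
import cv2
import numpy as np

# Block matching on a rectified grayscale pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = bm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Q is the 4x4 disparity-to-depth matrix from stereo rectification; identity
# here only as a stand-in for the calibrated value.
Q = np.eye(4, dtype=np.float32)
points = cv2.reprojectImageTo3D(disparity, Q)    # HxWx3 point image
mask = disparity > 0                              # keep valid disparities only
cloud = points[mask]                              # Nx3 point cloud
```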
5

Booth, Roy. "Scene analysis and 3D object reconstruction using passive vision." Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295780.

6

Aufderheide, Dominik. "VISrec! : visual-inertial sensor fusion for 3D scene reconstruction." Thesis, University of Bolton, 2014. http://ubir.bolton.ac.uk/649/.

Abstract:
The self-acting generation of three-dimensional models by analysing monocular image streams from standard cameras is a fundamental problem in the field of computer vision. A prerequisite for scene modelling is the computation of the camera pose for the different frames of the sequence. Several techniques and methodologies have been introduced during the last decade to solve this classical Structure from Motion (SfM) problem, which incorporates camera egomotion estimation and subsequent recovery of 3D scene structure. However, the applicability of those approaches to real-world devices and applications is still limited, due to unsatisfactory properties in terms of computational cost, accuracy and robustness. Thus tactile systems and laser scanners are still the predominant methods used in industry for 3D measurements. This thesis suggests a novel framework for 3D scene reconstruction based on visual-inertial measurements and a corresponding sensor fusion framework. The integration of additional modalities, such as inertial measurements, is useful to compensate for typical problems of systems which rely only on visual information. The complete system is implemented based on a generic framework for designing Multi-Sensor Data Fusion (MSDF) systems. It is demonstrated that the incorporation of inertial measurements into a visual-inertial sensor fusion scheme for scene reconstruction (VISrec!) outperforms classical methods in terms of robustness and accuracy. It is shown that the combination of visual and inertial modalities for scene reconstruction reduces the mean reconstruction error of typical scenes by up to 30%, while the number of 3D feature points which can be successfully reconstructed is nearly doubled. In addition, range and RGB-D sensors have been successfully incorporated into the VISrec! scheme, proving the general applicability of the framework; compared to standard visual SfM, this increases the number of 3D points in the reconstructed point cloud by a factor of five hundred. Finally, the application of the VISrec! sensor to a specific industrial problem, in cooperation with a local company, for reverse engineering of tailor-made car racing components demonstrates the usefulness of the developed system.
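The VISrec! fusion framework itself is not reproduced here; the following one-dimensional Kalman filter only illustrates the general principle of combining an inertial prediction with a visual position update. All noise parameters and inputs are invented for the example.

```python
import numpy as np

# State: [position, velocity]. The IMU acceleration drives the prediction,
# a visual position fix drives the correction. Constants are illustrative.
dt = 0.01
F = np.array([[1, dt], [0, 1]])           # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])       # acceleration input
H = np.array([[1.0, 0.0]])                # vision measures position only
Q = 1e-4 * np.eye(2)                      # process noise (IMU drift)
R = np.array([[1e-2]])                    # visual measurement noise

x, P = np.zeros((2, 1)), np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel
    return x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    return x, (np.eye(2) - K @ H) @ P

x, P = predict(x, P, accel=0.3)                 # one IMU sample
x, P = update(x, P, z=np.array([[0.001]]))      # one visual position fix
```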
7

Chandraker, Manmohan Krishna. "From pictures to 3D: global optimization for scene reconstruction." Diss., [La Jolla]: University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3369041.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed September 15, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 235-246).
8

Manessis, A. "3D reconstruction from video using a mobile robot." Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/844129/.

Abstract:
An autonomous robot able to navigate inside an unknown environment and reconstruct full 3D scene models using monocular video has been a long-term goal in the field of machine vision. A key component of such a system is the reconstruction of surface models from estimated scene structure. Sparse 3D measurements of real scenes are readily estimated from N-view image sequences using structure-from-motion techniques. In this thesis we present a geometric theory for reconstruction of surface models from sparse 3D data captured from N camera views. Based on this theory we introduce a general N-view algorithm for reconstruction of 3D models of arbitrary scenes from sparse data. Using a hypothesise-and-verify strategy, this algorithm reconstructs a surface model which interpolates the sparse data and is guaranteed to be consistent with the feature visibility in the N views. To achieve efficient reconstruction independent of the number of views, a simplified incremental algorithm is developed which integrates the feature visibility independently for each view. This approach is shown to converge to an approximation of the real scene structure and to have a computational cost which is linear in the number of views. Surface hypotheses are generated based on a new incremental planar constrained Delaunay triangulation algorithm. We present a statistical geometric framework to explicitly consider noise inherent in estimates of 3D scene structure from any real vision system. This approach ensures that the reconstruction is reliable in the presence of noise and missing data. Results are presented for reconstruction of both real and synthetic scenes, together with an evaluation of the reconstruction performance in the presence of noise.
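A toy version of the visibility reasoning behind such surface reconstruction can be sketched with SciPy: tetrahedralise the sparse points, then mark as free space every tetrahedron pierced by a camera-to-point line of sight. The sampling-based ray test and the random data are illustrative simplifications, not the thesis algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def carve(points, cameras, visibility, samples=8):
    """Toy free-space carving. `visibility[i]` lists indices of cameras
    that observed point i; tetrahedra crossed by a sight line are free."""
    dt = Delaunay(points)
    free = np.zeros(len(dt.simplices), dtype=bool)
    for i, cams in enumerate(visibility):
        for c in cams:
            # Sample the open segment from camera c to point i; the hit
            # point itself belongs to matter, so stop short of t = 1.
            for t in np.linspace(0.05, 0.95, samples):
                s = dt.find_simplex(cameras[c] + t * (points[i] - cameras[c]))
                if s >= 0:
                    free[s] = True
    return dt, free

pts = np.random.rand(200, 3)
cams = np.array([[0.5, 0.5, 3.0]])
vis = [[0]] * len(pts)                    # every point seen from the one camera
dt, free = carve(pts, cams, vis)
```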
9

Moodie, Daniel Thien-An. "Sensor Fused Scene Reconstruction and Surface Inspection." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/47453.

Abstract:
Optical three-dimensional (3D) mapping routines are used in inspection robots to detect faults by creating 3D reconstructions of environments. To detect surface faults, sub-millimeter depth resolution is required to determine minute differences caused by coating loss and pitting. Sensors that can detect these small depth differences cannot quickly create contextual maps of large environments. To solve the 3D mapping problem, a sensor-fused approach is proposed that can gather contextual information about large environments with one depth sensor and a SLAM routine, while local surface defects are measured with an actuated optical profilometer. The depth sensor uses a modified Kinect Fusion to create a contextual map of the environment. A custom actuated optical profilometer is created and then calibrated. The two systems are then registered to each other to place local surface scans from the profilometer into a scene context created by Kinect Fusion. The resulting system can create a contextual map of large-scale features (0.4 m) with less than 10% error, while the optical profilometer can create surface reconstructions with sub-millimeter resolution. The combination of the two allows for the detection and quantification of surface faults with the profilometer placed in a contextual reconstruction.
Master of Science
10

D'Angelo, Paolo. "3D scene reconstruction by integration of photometric and geometric methods." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985352949.

11

Tola, Engin. "Multiview 3D Reconstruction of a Scene Containing Independently Moving Objects." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606411/index.pdf.

Abstract:
In this thesis, the structure-from-motion problem for calibrated scenes containing independently moving objects (IMOs) has been studied. For this purpose, the overall reconstruction process is partitioned into various stages. The first stage deals with the fundamental problem of estimating structure and motion by using only two views. This process starts with finding salient features using a sub-pixel version of the Harris corner detector. The features are matched with the help of a similarity- and neighborhood-based matcher. In order to reject the outliers and estimate the fundamental matrix of the two images, a robust estimation is performed via RANSAC and the normalized 8-point algorithm. Two-view reconstruction is finalized by decomposing the fundamental matrix and estimating the 3D point locations by triangulation. The second stage of the reconstruction is the generalization of the two-view algorithm to the N-view case. This goal is accomplished by first reconstructing an initial framework from the first stage and then relating the additional views by finding correspondences between the new view and the already reconstructed views. In this way, 3D-2D projection pairs are determined and the projection matrix of this new view is estimated by using a robust procedure. The final stage deals with scenes containing IMOs. In order to reject the correspondences due to moving objects, a parallax-based rigidity constraint is used. In utilizing this constraint, an automatic background pixel selection algorithm is developed and an IMO rejection algorithm is also proposed. The results of the proposed algorithm are compared against those of a robust outlier rejection algorithm and found to be quite promising in terms of execution time vs. reconstruction quality.
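The two-view stage (robust fundamental-matrix estimation, decomposition, triangulation) maps closely onto OpenCV primitives. The sketch below substitutes OpenCV's RANSAC and pose recovery for the thesis components and assumes matched floating-point pixel coordinates and a calibration matrix K are already available.

```python
import cv2
import numpy as np

def two_view_reconstruction(pts1, pts2, K):
    """pts1, pts2: Nx2 matched pixel coordinates (float); K: 3x3 intrinsics."""
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    m = inliers.ravel() == 1
    pts1, pts2 = pts1[m], pts2[m]                    # keep RANSAC inliers

    E = K.T @ F @ K                                  # essential from fundamental
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # relative camera motion

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X[:3] / X[3]).T                          # Nx3 scene points
```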
12

Kühner, Tilman. "Large-Scale Textured 3D Scene Reconstruction." Supervisor: C. Stiller. Karlsruhe: KIT-Bibliothek, 2020. http://d-nb.info/1221186965/34.

13

Villota, Juan Carlos Perafán. "Adaptive registration using 2D and 3D features for indoor scene reconstruction." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-17042017-090901/.

Abstract:
Pairwise alignment between point clouds is an important task in building 3D maps of indoor environments with partial information. The combination of 2D local features with depth information provided by RGB-D cameras is often used to improve such alignment. However, under varying lighting or low visual texture, indoor pairwise frame registration with sparse 2D local features is not a particularly robust method. In these conditions, features are hard to detect, leading to misalignment between consecutive pairs of frames. The use of 3D local features can be a solution, as such features come from the 3D points themselves and are resistant to variations in visual texture and illumination. Because varying conditions in real indoor scenes are unavoidable, we propose a new framework to improve pairwise frame alignment using an adaptive combination of sparse 2D and 3D features, based on the levels of geometric structure and visual texture contained in each scene. Experiments with datasets including unrestricted RGB-D camera motion and natural changes in illumination show that the proposed framework convincingly outperforms methods using 2D or 3D features separately, as reflected in a better level of alignment accuracy.
14

Tannouri, Anthony. "Using wireless multimedia sensor networks for 3D scene acquisition and reconstruction." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD053/document.

Abstract:
Nowadays, wireless multimedia sensor networks (WMSNs) are promising for different applications and fields, especially with the development of the IoT and cheap, efficient camera sensors. Stereo vision is also very important for multiple purposes such as cinematography, games, virtual reality, augmented reality, etc. This thesis aims to develop a 3D scene reconstruction system that proves the concept of using multiple-view stereo disparity maps in the context of WMSNs. Our work can be divided into three parts. The first concentrates on studying WMSN applications, components, topologies, constraints and limitations, together with stereo-vision disparity map computation methods, in order to choose the best method(s) for 3D reconstruction on WMSNs at low cost in terms of complexity and power consumption. In the second part, we experiment with and simulate different disparity map computations on a couple of nodes by changing scenarios (indoor and outdoor), coverage distances, angles, numbers of nodes and algorithms. In the third part, we propose a tree-based network model to compute accurate disparity maps on multi-layer camera sensor nodes that meets the server's needs for a 3D scene reconstruction of the scene or object of interest. The results are acceptable and confirm the proof of concept for using disparity maps in the context of WMSNs.
15

Imre, Evren. "Prioritized 3D Scene Reconstruction and Rate-Distortion Efficient Representation for Video Sequences." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608722/index.pdf.

Abstract:
In this dissertation, a novel scheme performing 3D reconstruction of a scene from a 2D video sequence is presented. To this aim, first, the trajectories of the salient features in the scene are determined as a sequence of displacements via the Kanade-Lucas-Tomasi tracker and a Kalman filter. Then, a tentative camera trajectory with respect to a metric reference reconstruction is estimated. All frame pairs are ordered with respect to their amenability to 3D reconstruction by a metric that utilizes the baseline distances and the number of tracked correspondences between the frames. The ordered frame pairs are processed via a sequential structure-from-motion algorithm to estimate the sparse structure and camera matrices. The metric and the associated reconstruction algorithm are shown via experiments to outperform their counterparts in the literature. Finally, a mesh-based, rate-distortion-efficient representation is constructed through a novel procedure driven by the error between a target image and its prediction from a reference image and the current mesh. At each iteration, the triangular patch whose projection on the predicted image has the largest error is identified. Within this projected region and its correspondence on the reference frame, feature matches are extracted. The pair with the least conformance to the planar model is used to determine the vertex to be added to the mesh. The procedure is shown to outperform the dense depth-map representation in all tested cases, and the block motion vector representation in scenes with large depth range, in the rate-distortion sense.
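The amenability metric is defined precisely in the dissertation; the snippet below is only a plausible stand-in that scores each frame pair by the product of its baseline length and its number of tracked correspondences.

```python
import numpy as np
from itertools import combinations

def order_pairs(cam_centers, n_matches):
    """Rank frame pairs for reconstruction: a wider baseline gives better
    triangulation, and more tracked correspondences give a better-conditioned
    estimate. The product score is an illustrative stand-in for the thesis
    metric. `n_matches[(i, j)]` counts correspondences between frames i, j."""
    scores = {}
    for i, j in combinations(range(len(cam_centers)), 2):
        baseline = np.linalg.norm(cam_centers[i] - cam_centers[j])
        scores[(i, j)] = baseline * n_matches.get((i, j), 0)
    return sorted(scores, key=scores.get, reverse=True)

centers = np.random.rand(5, 3)                      # toy camera positions
matches = {(0, 1): 120, (1, 2): 80, (0, 2): 45}
print(order_pairs(centers, matches)[:3])            # most amenable pairs first
```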
16

Leggett, I. C. "3D scene reconstruction and object recognition for use with AGV self positioning." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320495.

17

Grundberg, Måns, and Viktor Altintas. "Generating 3D Scenes From Single RGB Images in Real-Time Using Neural Networks." Thesis, Malmö universitet, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43091.

Abstract:
The ability to reconstruct 3D scenes of environments is of great interest in a number of fields such as autonomous driving, surveillance, and virtual reality. However, traditional methods often rely on multiple cameras or sensor-based depth measurements to accurately reconstruct 3D scenes. In this thesis we propose an alternative, deep learning-based approach to 3D scene reconstruction for objects of interest, using nothing but single RGB images. We evaluate our approach using the Deep Object Pose Estimation (DOPE) neural network for object detection and pose estimation, and the NVIDIA Deep learning Dataset Synthesizer for synthetic data generation. Using two unique objects, our results indicate that it is possible to reconstruct 3D scenes from single RGB images within an error margin of a few centimeters.
18

Boulch, Alexandre. "Reconstruction automatique de maquettes numériques 3D." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1099/document.

Abstract:
The interest in digital building models is growing rapidly in the construction industry. These models centralize all the information concerning the building and facilitate communication between the players of construction: cost evaluation, physical simulations, virtual presentations, building lifecycle management, site supervision, etc. Although building models now tend to be used for large projects of new constructions, no such models exist for most existing buildings. In particular, old buildings lack a digital 3D model and information, whereas they would benefit the most from them, e.g., to plan cost-effective renovation that achieves good thermal performance. Such 3D models are reconstructed from the real building. Lately a number of automatic reconstruction methods have been developed, either from laser or photogrammetric data. Lasers are precise and produce dense point clouds, and their price has dropped greatly in the past few years, making them affordable for industry. Photogrammetry, often less precise and prone to failure in uniform regions (e.g. bare walls), is a lot cheaper. However, most approaches only reconstruct a surface from point clouds, not a semantically rich building model. A building information model is the alliance of a geometry and a semantics for the scene elements. The main objective of this thesis is to define a framework for digital model production regarding both geometry and semantics, using point clouds as input. The reconstruction process is divided into four parts, gradually enriching information from the points to the final digital mockup. First, we define a normal estimator for unstructured point clouds based on a robust Hough transform. It estimates accurate normals, even near sharp edges and corners, and deals with the anisotropy inherent to laser scans. Then, primitives such as planes are extracted from the point cloud. To avoid over-segmentation issues, we develop a general and robust statistical criterion for shape merging, which only requires a distance function from points to shapes. A piecewise-planar surface is then reconstructed: plane hypotheses for visible and hidden parts of the scene are inserted into a 3D plane arrangement, and cells of the arrangement are labelled full or empty using a new regularization on corner count and edge length. A linear formulation allows us to efficiently solve this labelling problem with a continuous relaxation. Finally, we propose an approach based on constrained attribute grammars for 3D model semantization. This method is entirely bottom-up; we prevent the possible combinatorial explosion by introducing maximal operators and an order on variable instantiation.
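The flavour of the Hough-based normal estimator can be conveyed with a short sketch: random neighbour triplets vote for plane orientations, and the dominant direction is extracted from the votes. The triplet count and the eigenvector-based mode extraction are simplifications of the thesis method, which uses robust accumulator voting and handles anisotropy.

```python
import numpy as np

def hough_normal(neighbors, n_triplets=100, rng=np.random.default_rng(0)):
    """Estimate the normal at a point from its k nearest neighbors: each
    random neighbor triplet proposes a plane normal, and the dominant
    orientation of the votes is returned (simplified robust Hough idea)."""
    votes = []
    for _ in range(n_triplets):
        a, b, c = neighbors[rng.choice(len(neighbors), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                       # degenerate (collinear) triplet
        n /= norm
        if n[2] < 0:
            n = -n                         # fold antipodal votes together
        votes.append(n)
    votes = np.array(votes)
    # Dominant direction = principal eigenvector of the vote scatter matrix.
    w, v = np.linalg.eigh(votes.T @ votes)
    return v[:, -1]
```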
19

Del, Pero Luca. "Top-Down Bayesian Modeling and Inference for Indoor Scenes." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/297040.

Abstract:
People can understand the content of an image without effort. We can easily identify the objects in it, and figure out where they are in the 3D world. Automating these abilities is critical for many applications, like robotics, autonomous driving and surveillance. Unfortunately, despite recent advancements, fully automated vision systems for image understanding do not exist. In this work, we present progress restricted to the domain of images of indoor scenes, such as bedrooms and kitchens. These environments typically have the "Manhattan" property that most surfaces are parallel to three principal ones. Further, the 3D geometry of a room and the objects within it can be approximated with simple geometric primitives, such as 3D blocks. Our goal is to reconstruct the 3D geometry of an indoor environment while also understanding its semantic meaning, by identifying the objects in the scene, such as beds and couches. We separately model the 3D geometry, the camera, and an image likelihood, to provide a generative statistical model for image data. Our representation captures the rich structure of an indoor scene, by explicitly modeling the contextual relationships among its elements, such as the typical size of objects and their arrangement in the room, and simple physical constraints, such as 3D objects do not intersect. This ensures that the predicted image interpretation will be globally coherent geometrically and semantically, which allows tackling the ambiguities caused by projecting a 3D scene onto an image, such as occlusions and foreshortening. We fit this model to images using MCMC sampling. Our inference method combines bottom-up evidence from the data and top-down knowledge from the 3D world, in order to explore the vast output space efficiently. Comprehensive evaluation confirms our intuition that global inference of the entire scene is more effective than estimating its individual elements independently. Further, our experiments show that our approach is competitive and often exceeds the results of state-of-the-art methods.
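The inference machinery is standard Metropolis-Hastings over scene hypotheses; a generic accept/reject step looks as follows, with the posterior and proposal left as placeholders rather than the dissertation's actual model.

```python
import numpy as np

def metropolis_step(scene, log_posterior, propose, rng):
    """One Metropolis-Hastings move over a scene hypothesis (room box,
    camera, object blocks). `log_posterior` scores geometry + semantics
    against the image; `propose` perturbs one element. Both are placeholders
    standing in for the dissertation's generative model."""
    candidate = propose(scene, rng)
    log_alpha = log_posterior(candidate) - log_posterior(scene)
    if np.log(rng.random()) < log_alpha:
        return candidate                   # accept the proposed interpretation
    return scene                           # keep the current one
```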
20

Kirli, Mustafa Yavuz. "3D Reconstruction of Underwater Scenes from Uncalibrated Video Sequences." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609901/index.pdf.

Abstract:
The aim of this thesis is to reconstruct a 3D representation of underwater scenes from uncalibrated video sequences. Underwater visualization is important for underwater remotely operated vehicles, and the underwater environment has a complex structure because of inhomogeneous light absorption and light scattering. These factors make 3D reconstruction underwater more challenging. The reconstruction consists of the following stages: image enhancement, feature detection and matching, fundamental matrix estimation, auto-calibration, recovery of extrinsic parameters, rectification, stereo matching and triangulation. For image enhancement, a pre-processing filter is used to remove the effects of water and to enhance the images. Two feature extraction methods are examined: 1. Difference of Gaussians with the SIFT feature descriptor; 2. the Harris corner detector with grey levels around the feature point. Matching is performed by finding similarities of SIFT features and by finding correlated grey levels, respectively, for each feature extraction method. The results show that SIFT performs better than Harris with grey-level information. The RANSAC method with the normalized 8-point algorithm is used to estimate the fundamental matrix and to reject outliers. Because of the difficulties of calibrating cameras underwater, an auto-calibration process is examined. Rectification is also performed, since it makes epipolar lines coincide with image scan lines, which is helpful to stereo matching algorithms. The graph-cut stereo matching algorithm is used to compute the corresponding pixel of each pixel in the stereo image pair. In the last stage, triangulation is used to compute 3D points from the corresponding pixel pairs.
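The normalized 8-point algorithm used inside the RANSAC loop is compact enough to sketch in full; this NumPy version follows the standard Hartley formulation and omits the surrounding RANSAC iteration.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: center on the centroid, scale the mean
    distance to sqrt(2); returns homogeneous points and the transform."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(pts1, pts2):
    """Normalized 8-point fundamental matrix from Nx2 correspondences
    (N >= 8), so that x2^T F x1 = 0."""
    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    # Each correspondence gives one row of the homogeneous system A f = 0.
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt    # enforce rank 2
    return T2.T @ F @ T1                     # undo the normalization
```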
21

Vural, Elif. "Robust Extraction of Sparse 3D Points from Image Sequences." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609888/index.pdf.

Abstract:
In this thesis, the extraction of sparse 3D points from calibrated image sequences is studied. The presented method for sparse 3D reconstruction is examined in two steps, where the first part addresses the problem of two-view reconstruction, and the second part is the extension of the two-view reconstruction algorithm to multiple views. The examined two-view reconstruction method consists of some basic building blocks, such as feature detection and matching, epipolar geometry estimation, and the reconstruction of cameras and scene structure. Feature detection and matching are achieved by the Scale Invariant Feature Transform (SIFT) method. For the estimation of epipolar geometry, the 7-point and 8-point algorithms are examined for Fundamental matrix (F-matrix) computation, while RANSAC and PROSAC are utilized for the robustness and accuracy of model estimation. In the final stage of two-view reconstruction, the camera projection matrices are computed from the F-matrix, and the locations of 3D scene points are estimated by triangulation; hence, the scene structure and cameras are determined up to a projective transformation. The extension of the two-view reconstruction to multiple views is achieved by estimating the camera projection matrix of each additional view from the already reconstructed matches, and then adding new points to the scene structure by triangulating the unreconstructed matches. Finally, the reconstruction is upgraded from projective to metric by a rectifying homography computed from the camera calibration information. In order to obtain a refined reconstruction, two different methods are suggested for the removal of erroneous points from the scene structure. In addition to the examination of the solution to the reconstruction problem, experiments have been conducted that compare the performances of competing algorithms used in various stages of reconstruction. In connection with sparse reconstruction, a rate-distortion-efficient piecewise planar scene representation algorithm that generates mesh models of scenes from reconstructed point clouds is examined, and its performance is evaluated through experiments.
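The triangulation step that closes the two-view stage is the standard linear (DLT) method; a minimal version, assuming known 3x4 camera matrices, is:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence: each view
    contributes two rows of the homogeneous system A X = 0, and the 3D
    point is the smallest right singular vector of A."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]                    # dehomogenise to a 3D point
```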
22

Oesau, Sven. "Modélisation géométrique de scènes intérieures à partir de nuage de points." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4034/document.

Abstract:
Geometric modeling and semantization of indoor scenes from sampled point data is an emerging research topic. Recent advances in acquisition technologies provide highly accurate laser scanners and low-cost handheld RGB-D cameras for real-time acquisition. However, the processing of large data sets is hampered by high amounts of clutter and various defects such as missing data, outliers and anisotropic sampling. This thesis investigates three novel methods for efficient geometric modeling and semantization from unstructured point data: shape detection, classification and geometric modeling. Chapter 2 introduces two methods for abstracting the input point data with primitive shapes. First, we propose a line extraction method to detect wall segments from a horizontal cross-section of the input point cloud. Second, we introduce a region growing method that progressively detects and reinforces regularities of planar shapes. This method utilizes regularities common to man-made architecture, i.e. coplanarity, parallelism and orthogonality, to reduce complexity and improve data fitting in defect-laden data. Chapter 3 introduces a method based on statistical analysis for separating clutter from structure. We also contribute a supervised machine learning method for object classification based on sets of planar shapes. Chapter 4 introduces a method for 3D geometric modeling of indoor scenes. We first partition the space using primitive shapes detected from permanent structures. An energy formulation is then used to solve an inside/outside labeling of the space partitioning, providing robustness to missing data and outliers.
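The wall-segment extraction from a horizontal cross-section can be approximated by sequential RANSAC line fitting on a slice of the cloud; the snippet below is a stand-in for the thesis extractor, with all thresholds chosen arbitrarily.

```python
import numpy as np

def extract_wall_lines(cloud, z0, dz=0.05, thresh=0.02, iters=500, min_pts=50,
                       rng=np.random.default_rng(0)):
    """Detect wall segments in a horizontal slice of an indoor point cloud
    by sequential RANSAC line fitting (illustrative, not the thesis method)."""
    pts = cloud[np.abs(cloud[:, 2] - z0) < dz][:, :2]    # slice, keep x-y
    lines = []
    while len(pts) > min_pts:
        best = None
        for _ in range(iters):
            a, b = pts[rng.choice(len(pts), 2, replace=False)]
            d = b - a
            n = np.array([-d[1], d[0]])                  # line normal
            nn = np.linalg.norm(n)
            if nn < 1e-9:
                continue                                 # coincident sample
            inl = np.abs((pts - a) @ (n / nn)) < thresh  # point-line distance
            if best is None or inl.sum() > best.sum():
                best = inl
        if best is None or best.sum() < min_pts:
            break
        lines.append(pts[best])                          # one wall segment
        pts = pts[~best]                                 # remove and repeat
    return lines
```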
23

Mrkvička, Daniel. "Rekonstrukce 3D objektů z více pohledů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399560.

Abstract:
This thesis deals with the reconstruction of a scene from two or more images. It describes the whole reconstruction process, consisting of detecting points in the images, finding the appropriate geometry between images, and the resulting projection of these points into scene space. The thesis also includes a description of an application which demonstrates the described methods.
24

Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.

25

Schindler, Grant. "Unlocking the urban photographic record through 4D scene modeling." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34719.

Abstract:
Vast collections of historical photographs are being digitally archived and placed online, providing an objective record of the last two centuries that remains largely untapped. We propose that time-varying 3D models can pull together and index large collections of images while also serving as a tool of historical discovery, revealing new information about the locations, dates, and contents of historical images. In particular, our goal is to use computer vision techniques to tie together a large set of historical photographs of a given city into a consistent 4D model of the city: a 3D model with time as an additional dimension. To extract 4D city models from historical images, we must perform inference about the position of cameras and scene structure in both space and time. Traditional structure from motion techniques can be used to deal with the spatial problem, while here we focus on the problem of inferring temporal information: a date for each image and a time interval for which each structural element in the scene persists. We first formulate this task as a constraint satisfaction problem based on the visibility of structural elements in each image, resulting in a temporal ordering of images. Next, we present methods to incorporate real date information into the temporal inference solution. Finally, we present a general probabilistic framework for estimating all temporal variables in structure from motion problems, including an unknown date for each camera and an unknown time interval for each structural element. Given a collection of images with mostly unknown or uncertain dates, we can use this framework to automatically recover the dates of all images by reasoning probabilistically about the visibility and existence of objects in the scene. We present results for image collections consisting of hundreds of historical images of cities taken over decades of time, including Manhattan and downtown Atlanta.
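The core temporal constraint is that, once the images are ordered in time, each structural element should be visible over one contiguous interval. The check below tests a candidate ordering against a boolean visibility matrix; it ignores occlusion, which the dissertation's full formulation must handle, so it is only an illustrative simplification.

```python
import numpy as np

def ordering_is_consistent(visibility, order):
    """`visibility`: images x elements boolean matrix (True = element seen);
    `order`: candidate temporal permutation of the image indices. Each
    element must be seen over one gap-free run of consecutive images."""
    V = np.asarray(visibility)[np.asarray(order)]
    for col in V.T:
        on = np.flatnonzero(col)
        if on.size and (on[-1] - on[0] + 1) != on.size:
            return False                   # visibility has a temporal gap
    return True

vis = np.array([[1, 0], [1, 1], [0, 1]], dtype=bool)
print(ordering_is_consistent(vis, [0, 1, 2]))   # True: both runs contiguous
print(ordering_is_consistent(vis, [0, 2, 1]))   # False: element 0 has a gap
```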
26

Roubtsova, Nadejda S. "Accurate 3D reconstruction of dynamic scenes with complex reflectance properties." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/810256/.

Abstract:
Accurate 3D geometry modelling is an essential technology for many practical applications (computer generated imagery, assisted surgery, heritage preservation, automated quality control, robotics etc.). While the existing reconstruction methods mainly operate assuming the simplistic Lambertian model, real scenes, static or dynamic, are characterised by arbitrarily complex a priori unknown reflectance properties. The reflectance limitation of the state-of-the-art causes a gap between the practical demand for photometrically arbitrary scene modelling and the constrained applicability scope of existing methods. In response to the gap, this dissertation proposes a solution to the challenging problem of accurate geometric reconstruction of dynamic scenes with arbitrary a priori unknown reflectance. This is achieved by introducing a novel approach which generalises Helmholtz Stereopsis (HS) - a niche technique known to be independent of surface reflectance but till now limited to static scenes requiring sequential acquisition of a large number of input views. The undertaken generalisation extends the technique to dynamic scenes by two mutually tailored developments in response to the shortcomings of conventional HS. These developments are 1) a framework to fundamentally improve the geometric reconstruction accuracy from a small set of input images and 2) the design of a novel wavelength-multiplexing-based pipeline for dynamic scene modelling. Together these constitute a novel practical system which, for the first time, enables reconstruction of dynamic scenes with arbitrary surface properties. To improve the quality of geometric reconstruction by HS, a novel Bayesian formulation of the technique is proposed to replace its sub-optimal maximum likelihood formulation. Further a tailored prior enforcing consistency of per-point depth and normal estimates and related to integrability is developed. The prior purposely exploits the unique ability of HS to characterise the surface by both estimates. The formulation embedded into a coarse-to-fine framework without explicit surface integration achieves unprecedented accuracy and resolution of geometric modelling by HS regardless of reflectance, competitive with what the non-HS state-of-the-art achieves with strictly constrained reflectance. To generalise HS to dynamic scenes, Colour Helmholtz Stereopsis (CL HS) is proposed which utilises wavelength multiplexing for simultaneous acquisition of the minimal set of input images required for reconstruction. The challenges imposed by wavelength multiplexing in CL HS are addressed using a specially designed calibration consisting of two mutually dependent parts: one infers the photometric properties of the acquisition equipment while the other estimates the reconstructed surface chromaticity spatially and propagates it temporally to accommodate dynamic surface deformation. By integrating the proposed coarse-to-fine Bayesian HS with integrability prior into CL HS, remarkable accuracy and resolution of reconstruction are achieved with the minimal input using just three RGB cameras. Evaluation validates the approach by reconstruction of dynamic scenes with arbitrary a priori unknown reflectance, which includes unconstrained spatially varying chromaticity. The reconstructed dynamic sequences exhibit high per-frame geometric accuracy and resolution as well as temporal consistency.
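The reflectance independence of Helmholtz stereopsis rests on a per-point linear constraint derived from reciprocity: swapping a camera and a point light leaves the BRDF term unchanged, so each reciprocal pair yields one constraint w · n = 0 and, with at least three pairs, the normal falls out of a null-space computation. A minimal sketch follows (the Bayesian formulation and coarse-to-fine framework of the thesis are omitted):

```python
import numpy as np

def helmholtz_normal(pairs, p):
    """Estimate the surface normal at 3D point p from reciprocal pairs.
    Each pair gives (i_l, o_l, i_r, o_r): the two measured intensities and
    the two camera/light positions. Reciprocity yields the constraint
    (i_l * v_l / d_l^2 - i_r * v_r / d_r^2) . n = 0 per pair."""
    W = []
    for i_l, o_l, i_r, o_r in pairs:
        v_l, v_r = o_l - p, o_r - p
        d_l, d_r = np.linalg.norm(v_l), np.linalg.norm(v_r)
        # dividing by d^3 yields unit direction / squared distance
        W.append(i_l * v_l / d_l**3 - i_r * v_r / d_r**3)
    n = np.linalg.svd(np.array(W))[2][-1]  # null vector of the stacked W
    return n if n[2] > 0 else -n           # orient consistently
```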
27

Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.

Abstract:
In this thesis we explore the concepts and components which can be used as individual building blocks for producing immersive virtual reality (VR) content from a single RGB-D sensor. We identify the properties of immersive VR videos and propose a system composed of a foreground/background separator, a dynamic scene re-constructor and a shape completer. We initially explore the foreground/background separator component in the context of video summarization. More specifically, we examined how to extract trajectories of moving objects from video sequences captured with a static camera. We then present a new approach for video summarization via minimization of the spatial-temporal projections of the extracted object trajectories. New evaluation criterion are also presented for video summarization. These concepts of foreground/background separation can then be applied towards VR scene creation by extracting relative objects of interest. We present an approach for the dynamic scene re-constructor component using a single moving RGB-D sensor. By tracking the foreground objects and removing them from the input RGB-D frames we can feed the background only data into existing RGB-D SLAM systems. The result is a static 3D background model where the foreground frames are then super-imposed to produce a coherent scene with dynamic moving foreground objects. We also present a specific method for extracting moving foreground objects from a moving RGB-D camera along with an evaluation dataset with benchmarks. Lastly, the shape completer component takes in a single view depth map of an object as input and "fills in" the occluded portions to produce a complete 3D shape. We present an approach that utilizes a new data minimal representation, the additive depth map, which allows traditional 2D convolutional neural networks to accomplish the task. The additive depth map represents the amount of depth required to transform the input into the "back depth map" which would exist if there was a sensor exactly opposite of the input. We train and benchmark our approach using existing synthetic datasets and also show that it can perform shape completion on real world data without fine-tuning. Our experiments show that our data minimal representation can achieve comparable results to existing state-of-the-art 3D networks while also being able to produce higher resolution outputs.
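The additive depth map representation is simple to state in code: the network predicts, per pixel, the extra depth separating the visible front surface from the back surface that an opposite-facing sensor would see, so front plus additive recovers the back depth map. In this sketch the network output is faked with random values.

```python
import numpy as np

def complete_shape(front_depth, additive_depth):
    """Shape completion with the additive depth map representation: a 2D
    CNN (not shown) predicts the per-pixel additive depth; adding it to the
    input front depth yields the occluded back surface."""
    back_depth = front_depth + additive_depth
    valid = front_depth > 0                        # sensor holes carry no shape
    return np.where(valid, back_depth, 0.0)

front = np.random.rand(64, 64).astype(np.float32) + 1.0
additive = np.random.rand(64, 64).astype(np.float32) * 0.2  # stand-in CNN output
back = complete_shape(front, additive)             # front + back bound the shape
```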
28

Poulin-Girard, Anne-Sophie. "Paire stéréoscopique Panomorphe pour la reconstruction 3D d'objets d'intérêt dans une scène." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/27073.

Abstract:
A wide variety of panoramic lenses are available on the market. Exhibiting interesting characteristics, the Panomorph lens is a panoramic anamorphic optical system. Its highly non-uniform distortion profile creates areas of enhanced magnification across the field of view. For mobile robotic applications, a stereoscopic system for 3D reconstruction of objects of interest could greatly benefit from the unique features of these special lenses. Such a stereoscopic system would provide general information describing the environment surrounding its navigation; moreover, the areas of enhanced magnification give access to smaller details. However, Panomorph lenses are difficult to calibrate, and this is the main reason why no research had been carried out on this topic. The main goal of this thesis is the design and development of Panomorph stereoscopic systems as well as the evaluation of their performance. The calibration of the lenses was performed using plane targets and a well-established calibration toolbox. In addition, new mathematical techniques aiming to restore the symmetry of revolution in the image and to make the focal length uniform over the field of view were developed to simplify the calibration process. First, the field of view was divided into zones exhibiting a small variation of the focal length, and the calibration was performed for each zone. Then, a general calibration was performed for the entire field of view. The results showed that the calibration of each zone does not lead to a better 3D reconstruction than the general calibration method. However, this new approach allowed a study of the quality of the reconstruction over the entire field of view; indeed, it showed that it is possible to achieve good reconstruction in all zones of the field of view. In addition, the results for the mathematical techniques used to restore the symmetry of revolution were similar to the results obtained with the original data. These techniques could therefore be used to calibrate Panomorph lenses with calibration toolboxes that do not have two degrees of freedom relating to the focal length. The study of the performance of stereoscopic Panomorph systems also highlighted important factors that could influence the choice of lenses and configuration for similar systems. The challenge met during the calibration of Panomorph lenses led to the development of a virtual calibration technique using optical design software and a calibration toolbox. With this technique, simulations reproducing the operating conditions were made to evaluate their impact on the calibration parameters. The quality of 3D reconstruction of a volume was also evaluated for various calibration conditions. Similar experiments would be extremely tedious to perform in the laboratory, but the results are quite meaningful for the user. The virtual calibration of a traditional lens also showed that the mean reprojection error, often used to judge the quality of the calibration process, does not necessarily reflect the quality of the 3D reconstruction. It is therefore essential to have access to more information in order to assess the quality of a lens calibration.
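The plane-target calibration workflow referred to above corresponds, for a conventional lens, to the standard OpenCV routine below; a Panomorph lens would need a more flexible distortion model, and the board size and image folder are placeholders. Note that the routine's headline figure is exactly the mean reprojection error whose reliability the thesis questions.

```python
import cv2
import numpy as np
import glob

# Plane-target calibration with a 9x6 chessboard (sizes are placeholders).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.png"):           # placeholder image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("mean reprojection error:", rms)           # the metric questioned above
```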
APA, Harvard, Vancouver, ISO, and other styles
29

Abayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Litvinov, Vadim. "Reconstruction incrémentale d'une scène complexe à l'aide d'une caméra omnidirectionnelle." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22541/document.

Full text
Abstract:
A still-active research problem is the automatic reconstruction of a scene surface from the stream of images taken by a moving camera. It is usually solved in two steps: a geometry computation, where the camera poses and a sparse cloud of 3D scene points are estimated simultaneously, and a dense stereo computation, which yields a surface by estimating the depth of every pixel. The approach we propose stands out from previous ones by combining the following characteristics. The surface is a 2-manifold, which is useful for subsequent processing or use. It is computed directly from the sparse cloud produced by the first step, in order to avoid the costly second step and to obtain a compact model of a complex scene. The computation is incremental, so that a result is available while the video is being read. The principle is as follows. At each iteration, new 3D points are estimated and inserted into a 3D Delaunay triangulation. This triangulation partitions space into empty and occupied tetrahedra thanks to the visibility information also provided by the first step. A second partition, into interior and exterior tetrahedra whose boundary is the desired 2-manifold, is updated as well. Under certain assumptions, and unlike the only previous method with the same properties and assumptions, the complexity of an iteration is bounded. Our method was tested on synthetic and real sequences, including a 2.5 km sequence taken in an urban environment with an omnidirectional camera. The quality of the result is close to that obtained by the global (non-incremental) method that inspired it, but the computation time does not yet allow online use on a standard PC. We also studied the benefit of adding contours to the reconstruction process.
The automatic reconstruction of a scene surface from images taken by a moving camera is still an active research topic. This problem is usually solved in two steps: first estimate the camera poses and a sparse cloud of 3D points using Structure-from-Motion, then apply dense stereo to obtain the surface by estimating the depth for all pixels. Compared to previous approaches, ours accumulates the following properties. The output surface is a 2-manifold, which is useful for applications and post-processing. It is computed directly from the sparse point cloud provided by the first step, so as to avoid the second and time-consuming step and to obtain a compact model of a complex scene. The computation is incremental to allow access to intermediary results during processing. The principle is the following. At each iteration, new 3D points are estimated and added to a 3D Delaunay triangulation; the tetrahedra are labeled as free-space or matter thanks to the visibility information provided by the first step. We also update a second partition of outside and inside tetrahedra whose boundary is the target 2-manifold. Under some assumptions, the time complexity of one iteration is bounded (there is only one previous method with the same properties, but its complexity is greater). Our method is experimented on synthetic and real sequences, including a 2.5 km long urban sequence taken by an omnidirectional camera. The surface quality is similar to that of the batch method which inspired us. However, the computations are not yet real-time on a commodity PC. We also study the use of contours in the reconstruction process.
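The labeling step described above can be illustrated with a toy Python sketch: tetrahedra of a 3D Delaunay triangulation that are crossed by a camera-to-point visibility ray are marked as free space. This is our simplified illustration (ray traversal approximated by dense sampling), not the bounded-complexity incremental algorithm of the thesis.

```python
import numpy as np
from scipy.spatial import Delaunay

def label_free_space(points, cameras, visibility, samples=50):
    """points: (N,3) 3D points; cameras: (M,3) camera centers;
    visibility: list of (camera_index, point_index) pairs from step one."""
    tri = Delaunay(points)
    free = np.zeros(tri.nsimplex, dtype=bool)  # all tetrahedra start as matter
    for cam_idx, pt_idx in visibility:
        # Sample the segment from the camera center to the observed point
        # and mark every tetrahedron it traverses as free space.
        seg = np.linspace(cameras[cam_idx], points[pt_idx], samples)
        crossed = tri.find_simplex(seg)
        free[crossed[crossed >= 0]] = True
    return tri, free  # the matter/free-space interface approximates the surface
```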
APA, Harvard, Vancouver, ISO, and other styles
31

Huhle, Benjamin [Verfasser]. "Acquisition and Reconstruction of 3D Scenes with Range and Image Data / Benjamin Huhle." München : Verlag Dr. Hut, 2011. http://d-nb.info/1018982590/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

El, Natour Ghina. "Towards 3D reconstruction of outdoor scenes by mmw radar and a vision sensor fusion." Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22773/document.

Full text
Abstract:
The goal of this thesis is to develop methods for mapping large three-dimensional environments by combining a panoramic MMW radar with optical cameras. Unlike existing multi-sensor fusion methods such as SLAM, we aim to build an RGB-D-like sensor that directly provides depth measurements enriched with appearance (color, texture, etc.). After geometrically modeling the radar/camera system, we propose an original calibration method based on point correspondences. To obtain these correspondences, targets allowing a point-wise measurement by both the radar and the camera were designed. The proposed approach was conceived so that it can be carried out in an unconstrained environment by a non-expert operator. Second, a method for reconstructing three-dimensional points from radar and image point correspondences was developed. We show, through a theoretical analysis of the combined uncertainties of the two sensors and through experimental results, that the proposed method is more accurate than classical stereoscopic triangulation for distant points, as encountered when mapping outdoor environments. Finally, we propose an efficient strategy for automatically matching camera and radar data. This strategy uses two calibrated cameras. Taking into account the heterogeneity of radar and camera data, the algorithm starts by segmenting the radar data into polygonal regions. Thanks to the calibration, the hull of each region is projected into the two images to define restricted regions of interest. These regions are in turn segmented into polygonal regions, generating a short list of candidate matches. A criterion based on cross-correlation and the epipolar constraint is applied to validate or reject region pairs. As long as this criterion is not satisfied, the regions are themselves subdivided by further segmentation. This process favors the matching of large regions first. The goal of this approach is to obtain a map made of locally dense patches. The proposed methods were tested on both synthetic data and real experimental data. The results are encouraging and demonstrate, in our view, the feasibility of using these two sensors for mapping large-scale outdoor environments.
The main goal of this PhD work is to develop 3D mapping methods for large-scale environments by combining panoramic radar and cameras. Unlike existing sensor fusion methods, such as SLAM (simultaneous localization and mapping), we want to build an RGB-D sensor which directly provides depth measurements enhanced with texture and color information. After modeling the geometry of the radar/camera system, we propose a novel calibration method using point correspondences. To obtain these correspondences, we designed special targets allowing accurate point detection by both the radar and the camera. The proposed approach has been developed to be implemented by non-expert operators and in unconstrained environments. Secondly, a 3D reconstruction method is elaborated based on radar data and image point correspondences. A theoretical analysis is done to study the influence of the uncertainty zone of each sensor on the reconstruction method. This theoretical study, together with the experimental results, shows that the proposed method outperforms conventional stereoscopic triangulation for large-scale outdoor scenes. Finally, we propose an efficient strategy for automatic data matching. This strategy uses two calibrated cameras. Taking into account the heterogeneity of camera and radar data, the developed algorithm starts by segmenting the radar data into polygonal regions. The calibration process allows the restriction of the search by defining a region of interest in the pair of images. A similarity criterion based on both cross-correlation and the epipolar constraint is applied in order to validate or reject region pairs. As long as the similarity test is not met, the image regions are re-segmented iteratively into polygonal regions, generating thereby a shortlist of candidate matches. This process promotes the matching of large regions first, which allows obtaining maps with locally dense patches. The proposed methods were tested on both synthetic and real experimental data. The results are encouraging and prove the feasibility of radar and vision sensor fusion for the 3D mapping of large-scale urban environments.
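The region-pair validation criterion mentioned above (cross-correlation plus the epipolar constraint) can be sketched as follows; the fundamental matrix F, the thresholds, and the patch variables are our hypothetical names, not the thesis code.

```python
import numpy as np

def ncc(patch_a, patch_b):
    # Normalized cross-correlation between two same-sized image patches.
    a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-9)
    b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-9)
    return float(np.mean(a * b))  # in [-1, 1]

def epipolar_distance(x1, x2, F):
    # Distance from x2 to the epipolar line F @ x1 (pixel coordinates).
    l = F @ np.append(x1, 1.0)
    return abs(l @ np.append(x2, 1.0)) / np.hypot(l[0], l[1])

def accept_pair(patch_a, patch_b, c1, c2, F, ncc_min=0.8, epi_max=2.0):
    # Accept a candidate region pair only if both tests pass; otherwise the
    # regions would be re-segmented and the test repeated on the sub-regions.
    return ncc(patch_a, patch_b) >= ncc_min and epipolar_distance(c1, c2, F) <= epi_max
```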
APA, Harvard, Vancouver, ISO, and other styles
33

Rehman, Farzeed Ur. "3D reconstruction of architectural scenes from images and video captured with an uncalibrated camera." Thesis, University of Manchester, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.549000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Blanc, Jérôme. "Synthèse de nouvelles vues d'une scène 3D à partir d'images existantes." Phd thesis, Grenoble INPG, 1998. http://tel.archives-ouvertes.fr/tel-00004870.

Full text
Abstract:
The goal of image synthesis is to compute views, as realistic as possible, of a three-dimensional scene defined by a geometric model. This modeling is done manually, and realistically synthesizing a complex scene such as a landscape can require several person-months of tedious work. We propose to automate this task. Indeed, a few photographs of the landscape suffice to fully model its geometric and photometric information: 3D structure, colors, and textures. Thus, by applying image analysis and computer vision techniques, we can automatically generate a three-dimensional representation of the scene and visualize it from other viewpoints. The appropriate algorithms are evaluated and specially adapted to our problem. Detailed quantitative tests are carried out on synthetic and real data, and the final quality of the produced images is evaluated numerically.
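As a hedged illustration of the view-synthesis idea, the sketch below forward-maps the pixels of an image with known per-pixel depth into a virtual camera; the intrinsics and pose are assumed inputs, and this is our simplification rather than the pipeline of the thesis.

```python
import numpy as np

def reproject(depth, K_src, K_dst, R, t):
    """depth: HxW map for the source view; K_*: 3x3 intrinsics;
    R, t: pose of the virtual camera expressed in the source frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project each pixel to 3D, then project into the virtual view.
    X = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)
    x = K_dst @ (R @ X + t.reshape(3, 1))
    return (x[:2] / x[2]).T.reshape(h, w, 2)  # where each source pixel lands
```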
APA, Harvard, Vancouver, ISO, and other styles
35

Deschênes, François. "Estimation des jonctions, du mouvement apparent et du relief en vue d'une reconstruction 3D de la scène." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ56894.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Deschênes, François. "Estimation des jonctions, du mouvement apparent et du relief en vue d'une reconstruction 3D de la scène." Mémoire, Université de Sherbrooke, 1999. http://savoirs.usherbrooke.ca/handle/11143/4421.

Full text
Abstract:
The 3D reconstruction of a real scene from one or more images makes it possible to understand the three-dimensional relationship between the different objects. Several depth cues (corners, blur, disparity, motion, etc.) enable 3D perception. We studied, developed, and implemented four approaches: one for detecting line junctions, one for motion-based segmentation, one for blur estimation, and one for the simultaneous estimation of blur and disparity. The detection of junctions and line endings is based on a local curvature measure. Since these image features are robust, they are of particular interest for matching and 3D reconstruction. The layered segmentation of the images of a video sequence consists in determining the relative depth order of objects (regions) with respect to one another from an optical-flow homogeneity criterion. The estimation of the blur difference between two images of the same scene is based on the Hermite transform. To validate the resulting estimate, the depth of the scene objects is then computed from the blur difference. Finally, the unified model computes blur and disparity simultaneously and cooperatively; it is in fact a generalization of our blur estimation approach. The cues thus computed can then be used for the 3D reconstruction of complex scenes.
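For illustration only, the sketch below inverts a textbook thin-lens defocus model to recover depth from an estimated blur circle; it is our simplification, not the Hermite-transform estimator developed in the thesis, and all optical parameters are assumed values.

```python
def depth_from_blur(c, f=0.025, N=2.8, Zf=2.0, far_side=True):
    """c: blur-circle diameter on the sensor (metres); f: focal length;
    N: f-number; Zf: in-focus distance. Thin-lens geometry gives
    c = A*f*|Z - Zf| / (Z*(Zf - f)) with aperture diameter A = f/N."""
    A = f / N
    denom = A * f - c * (Zf - f) if far_side else A * f + c * (Zf - f)
    return A * f * Zf / denom  # two solutions: beyond or before the focus plane

# Example: a 20-micron blur circle on the far side of the focus plane.
print(depth_from_blur(20e-6))  # ~2.4 m for these assumed optics
```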
APA, Harvard, Vancouver, ISO, and other styles
37

Deschênes, François. "Estimation des jonctions, du mouvement apparent et du relief en vue d'une reconstruction 3D de la scène." Sherbrooke : Université de Sherbrooke, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
38

Müller, Franziska [Verfasser]. "Real-time 3D hand reconstruction in challenging scenes from a single color or depth camera / Franziska Müller." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1224883594/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

MANIERE, THIERRY. "Etude et realisation d'un systeme de prises de vues panoramiques binoculaires dedie a la reconstruction 3d de scenes." Paris 6, 1997. http://www.theses.fr/1997PA066127.

Full text
Abstract:
We propose a binocular panoramic vision system with a novel architecture, dedicated to the 3D reconstruction of scenes. In this manuscript, we review 3D vision techniques and survey the state of the art in the particular case of panoramic vision. After laying out the foundations of the proposed architecture, we study it in detail. Once the theoretical study is complete, we describe the different stages of building a prototype and then characterize it. We conclude with the first results obtained with our system and with the prospects for future developments of our prototype.
APA, Harvard, Vancouver, ISO, and other styles
40

OISEL, LIONEL. "Reconstruction 3d de scenes complexes a partir de sequences video non calibrees : estimation et maillage d'un champ de disparite." Rennes 1, 1998. http://www.theses.fr/1998REN10119.

Full text
Abstract:
This thesis falls within the field of analysis/synthesis of digital video image sequences. The objective is to build a 3D model of a complex scene containing no moving objects. Given two (or more) images, the task consists in jointly partitioning the two images into corresponding planar regions. We show that segmentation into planar facets can be viewed as a motion-model estimation problem; in our case, the model to be computed is a homography. We propose to split the estimation into two successive steps: first, estimating the displacement at each pixel, and second, estimating the models from the resulting dense field. The first part of this thesis proposes a new algorithm for computing a robust and regularized disparity field. This algorithm computes and then uses the epipolar geometry derived from the rigidity constraint of the scene. The search for the displacement vector at each point is thereby reduced from a two-dimensional problem to a one-dimensional one. It is solved by Markovian modeling coupled with robust multi-resolution estimation schemes. The second part of our work estimates the various homographic models (planar facets) recursively from the dense disparity map, starting from an initial triangulation of singular points automatically extracted and matched in the images. This triangulation is then refined so that the dense field known for each triangle matches the homographic model associated with it. The triangular model is then reconstructed in 3D, with rendering performed by mapping texture from the original image according to a homographic model.
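The per-facet homographic model described above can be fitted, for illustration, with the standard DLT solved by SVD; this sketch is ours, not the thesis's recursive estimator.

```python
import numpy as np

def fit_homography(src, dst):
    """src, dst: (N,2) corresponding pixels inside one facet, N >= 4,
    e.g. taken from the dense disparity field over a triangle."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear constraints per correspondence for u' = Hx (DLT).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]
```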
APA, Harvard, Vancouver, ISO, and other styles
41

Ismael, Muhannad. "Reconstruction de scène dynamique à partir de plusieurs vidéos mono- et multi-scopiques par hybridation de méthodes « silhouettes » et « multi-stéréovision »." Thesis, Reims, 2016. http://www.theses.fr/2016REIMS021/document.

Full text
Abstract:
The accurate reconstruction of a 3D scene from several cameras provides 3D synthetic content for many applications such as entertainment, television, and film production. This thesis proposes a new approach to multi-view 3D reconstruction based on the visual hull and multi-baseline stereovision. This approach takes as input the visual hull and several sets of rectified images from different multiscopic units, each consisting of several aligned and evenly spaced cameras. Our contributions lie at several levels. The first is our multi-baseline stereovision method, which is based on a new sampling of scene space and provides a materiality map expressing the probability of each 3D sample point belonging to the surface visible from the multiscopic unit. The second is the hybridization of this method with the information from the visual hull, and the third is the reconstruction pipeline based on the fusion of the different carved hulls while handling the contradictory information that may exist. The results confirm: i) the effectiveness of using the materiality map to handle problems that often occur in stereovision, in particular for partially occluded regions; ii) the benefit of merging the visual hull and multi-baseline stereovision methods to generate an accurate 3D model of the scene.
Accurate reconstruction of a 3D scene from multiple cameras offers 3D synthetic content to be used in many applications such as entertainment, TV, and cinema production. This thesis is placed in the context of the RECOVER3D collaborative project, whose aim is to provide efficient and quality innovative solutions for the 3D acquisition of actors. The RECOVER3D acquisition system is composed of several tens of synchronized cameras scattered around the observed scene within a chromakey studio in order to build the visual hull, with several groups laid out as multiscopic units dedicated to multi-baseline stereovision. A multiscopic unit is defined as a set of aligned and evenly distributed cameras. This thesis proposes a novel framework for multi-view 3D reconstruction relying on both multi-baseline stereovision and the visual hull. This method's inputs are a visual hull and several sets of multi-baseline views. For each such view set, a multi-baseline stereovision method yields a surface which is used to carve the visual hull. Carved visual hulls from different view sets are then fused iteratively to deliver the intended 3D model. Furthermore, we propose a framework for multi-baseline stereovision which provides, over the disparity space (DS), a materiality map expressing the probability for 3D sample points to lie on a visible surface. The results confirm i) the efficiency of using the materiality map to deal with commonly occurring problems in multi-baseline stereovision, in particular for semi- or partially occluded regions, and ii) the benefit of merging visual hull and multi-baseline stereovision methods to produce 3D object models with high precision.
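A toy sketch of the visual-hull side of this pipeline: a voxel is kept only if it projects inside the silhouette in every view. The projection matrices and silhouette masks are assumed inputs; the actual system additionally carves the hull with the stereo-derived surfaces.

```python
import numpy as np

def visual_hull(voxels, projections, silhouettes):
    """voxels: (N,3); projections: list of 3x4 matrices P;
    silhouettes: list of HxW boolean masks, one per camera."""
    keep = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(projections, silhouettes):
        x = P @ homo.T
        u = np.round(x[0] / x[2]).astype(int)
        v = np.round(x[1] / x[2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = sil[v[inside], u[inside]]
        keep &= ok  # carve away voxels falling outside any silhouette
    return voxels[keep]
```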
APA, Harvard, Vancouver, ISO, and other styles
42

Chausse, Frédéric. "Reconstruction 3d de courbes parametriques polynomiales par filtrage temporel. Approche par cooperation vision par ordinateur/infographie. Application aux scenes routieres." Clermont-Ferrand 2, 1994. http://www.theses.fr/1994CLF21678.

Full text
Abstract:
Ce memoire decrit l'aide mutuelle que peuvent s'apporter la vision par ordinateur et l'infographie (ou synthese d'image). Le concept de cooperation entre ces deux techniques duales est applique, dans le contexte de l'analyse d'images routieres, a la reconstruction tridimensionnelle d'une route. Le premier chapitre definit les objectifs et les methodes respectives de la vision par ordinateur et de l'infographie. Il presente aussi une etude bibliographique des travaux menes dans le contexte de cooperation vision par ordinateur/infographie. Le second chapitre presente le cadre de l'assistance a la conduite automobile par vision par ordinateur. La necessite de l'acquisition d'une modelisation complete et precise de l'environnement routier est soulignee. Il est propose d'obtenir un tel modele par cooperation vision par ordinateur/infographie. La mise en uvre de la cooperation s'effectue en quatre etapes: ? modelisation parametrique complete de la scene, ? reconstruction de ce modele par analyse d'image, ? rendu d'une image de synthese a partir de ce modele reconstruit, ? minimisation de l'erreur entre l'image reelle analysee et l'image de synthese calculee. Le troisieme chapitre presente une methode originale de reconstruction de courbes 3d qui repose sur une modelisation polynomiale et une reconstruction 3d de ce modele a partir de plusieurs projections perspectives integrees dans un processus de filtrage temporel de kalman. Cette methode est validee dans le cas d'une courbe synthetique ideale, puis sur images reelles simples. Le dernier chapitre utilise cette methode dans le cadre de la cooperation vision par ordinateur/infographie pour reconstruire la geometrie 3d de la route ainsi que l'attitude spatiale de la camera par rapport a la scene routiere
APA, Harvard, Vancouver, ISO, and other styles
43

Duan, Liuyun. "Modélisation géométrique de scènes urbaines par imagerie satellitaire." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4025.

Full text
Abstract:
Automatic city modeling from satellite images is one of the main challenges related to urban reconstruction. Its objective is to represent cities in 3D in a sufficiently compact and accurate way. It finds applications in various fields, ranging from urban planning to telecommunications and disaster management. Satellite imagery offers several advantages over classical aerial imagery, such as a low acquisition cost, worldwide coverage, and a good revisit frequency. It nevertheless imposes a number of technical constraints. Existing methods only allow the synthesis of DSMs (Digital Surface Models), whose accuracy is sometimes uneven. This dissertation describes a fully automatic method for producing compact, accurate, and semantically meaningful 3D models from two satellite images in stereo. The method relies on two main concepts. On the one hand, the geometric description of objects and their assignment to generic categories are performed simultaneously, providing robustness to partial occlusions and to low image quality. On the other hand, the method operates at a very fine geometric scale, which preserves the shape of objects and ultimately yields greater efficiency and scalability. To generate elementary regions, an algorithm for partitioning the image into convex polygons is presented.
Automatic city modeling from satellite imagery is one of the biggest challenges in urban reconstruction. The ultimate goal is to produce compact and accurate 3D city models that benefit many application fields such as urban planning, telecommunications and disaster management. Compared with aerial acquisition, satellite imagery provides appealing advantages such as low acquisition cost, worldwide coverage and high collection frequency. However, the satellite context also imposes a set of technical constraints, such as a lower pixel resolution, that challenge 3D city reconstruction. In this PhD thesis, we present a set of methodological tools for generating compact, semantically-aware and geometrically accurate 3D city models from stereo pairs of satellite images. The proposed pipeline relies on two key ingredients. First, geometry and semantics are retrieved simultaneously, providing robust handling of occlusion areas and low image quality. Second, it operates at the scale of geometric atomic regions, which allows the shape of urban objects to be well preserved, with a gain in scalability and efficiency. Images are first decomposed into convex polygons that capture geometric details via a Voronoi diagram. Semantic classes, elevations, and 3D geometric shapes are then retrieved in a joint classification and reconstruction process operating on polygons. Experimental results on various cities around the world show the robustness, scalability and efficiency of the proposed approach.
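The convex-polygon decomposition mentioned above can be illustrated with a Voronoi partition of the image domain; in the actual pipeline the seeds would come from a feature detector, whereas here they are random stand-ins.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical anchor points inside a 1024x768 image domain.
seeds = np.random.rand(200, 2) * [1024, 768]
vor = Voronoi(seeds)
# Each finite Voronoi region is a convex polygon usable as an atomic
# unit for the joint classification/reconstruction step.
cells = [vor.vertices[r] for r in vor.regions if r and -1 not in r]
print(f"{len(cells)} convex cells")
```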
APA, Harvard, Vancouver, ISO, and other styles
44

Alkhadour, Wissam M. "Reconstruction of 3D scenes from pairs of uncalibrated images. Creation of an interactive system for extracting 3D data points and investigation of automatic techniques for generating dense 3D data maps from pairs of uncalibrated images for remote sensing applications." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4933.

Full text
Abstract:
Much research effort has been devoted to producing algorithms that contribute directly or indirectly to the extraction of 3D information from a wide variety of types of scenes and conditions of image capture. The research work presented in this thesis is aimed at three distinct applications in this area: interactively extracting 3D points from a pair of uncalibrated images in a flexible way; finding corresponding points automatically in high resolution images, particularly those of archaeological scenes captured from a freely moving light aircraft; and improving a correlation approach to dense disparity mapping leading to 3D surface reconstructions. The fundamental concepts required to describe the principles of stereo vision, the camera models, and the epipolar geometry described by the fundamental matrix are introduced, followed by a detailed literature review of existing methods. An interactive system for viewing a scene via a monochrome or colour anaglyph is presented which allows the user to choose the level of compromise between the amount of colour and the ghosting perceived by controlling colour saturation, and to choose the depth plane of interest. An improved method of extracting 3D coordinates from disparity values when there is significant error is presented. Interactive methods, while very flexible, require significant effort from the user in finding and fusing corresponding points, and the thesis continues by presenting several variants of existing scale invariant feature transform methods to automatically find correspondences in uncalibrated high resolution aerial images with improved speed and memory requirements. In addition, a contribution to estimating lens distortion correction by a Levenberg-Marquardt-based method is presented, generating data strings for straight lines which are essential input for estimating lens distortion correction. The remainder of the thesis presents correlation-based methods for generating dense disparity maps based on single and multiple image rectifications using sets of automatically found correspondences and demonstrates improvements obtained using the latter method. Some example views of point clouds for 3D surfaces produced from pairs of uncalibrated images using the methods presented in the thesis are included.
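As a brief illustration of the disparity-to-3D step discussed above, the textbook relations for a rectified stereo pair are sketched below; the focal length, baseline, and principal point are assumed to be known from calibration.

```python
import numpy as np

def points_from_disparity(disparity, f, B, cx, cy):
    """disparity: HxW map (pixels); f: focal length (pixels);
    B: baseline (metres); (cx, cy): principal point."""
    v, u = np.nonzero(disparity > 0)       # keep pixels with a valid match
    d = disparity[v, u].astype(float)
    Z = f * B / d                          # depth grows as disparity shrinks,
    X = (u - cx) * Z / f                   # which is why distant points are
    Y = (v - cy) * Z / f                   # so sensitive to disparity error
    return np.stack([X, Y, Z], axis=1)
```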
Al-Baath University
The appendices files and images are not available online.
APA, Harvard, Vancouver, ISO, and other styles
45

Alkhadour, Wissam Mohamad. "Reconstruction of 3D scenes from pairs of uncalibrated images : creation of an interactive system for extracting 3D data points and investigation of automatic techniques for generating dense 3D data maps from pairs of uncalibrated images for remote sensing applications." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4933.

Full text
Abstract:
Much research effort has been devoted to producing algorithms that contribute directly or indirectly to the extraction of 3D information from a wide variety of types of scenes and conditions of image capture. The research work presented in this thesis is aimed at three distinct applications in this area: interactively extracting 3D points from a pair of uncalibrated images in a flexible way; finding corresponding points automatically in high resolution images, particularly those of archaeological scenes captured from a freely moving light aircraft; and improving a correlation approach to dense disparity mapping leading to 3D surface reconstructions. The fundamental concepts required to describe the principles of stereo vision, the camera models, and the epipolar geometry described by the fundamental matrix are introduced, followed by a detailed literature review of existing methods. An interactive system for viewing a scene via a monochrome or colour anaglyph is presented which allows the user to choose the level of compromise between the amount of colour and the ghosting perceived by controlling colour saturation, and to choose the depth plane of interest. An improved method of extracting 3D coordinates from disparity values when there is significant error is presented. Interactive methods, while very flexible, require significant effort from the user in finding and fusing corresponding points, and the thesis continues by presenting several variants of existing scale invariant feature transform methods to automatically find correspondences in uncalibrated high resolution aerial images with improved speed and memory requirements. In addition, a contribution to estimating lens distortion correction by a Levenberg-Marquardt-based method is presented, generating data strings for straight lines which are essential input for estimating lens distortion correction. The remainder of the thesis presents correlation-based methods for generating dense disparity maps based on single and multiple image rectifications using sets of automatically found correspondences and demonstrates improvements obtained using the latter method. Some example views of point clouds for 3D surfaces produced from pairs of uncalibrated images using the methods presented in the thesis are included.
APA, Harvard, Vancouver, ISO, and other styles
46

Verdie, Yannick. "Modélisation de scènes urbaines à partir de données aériennes." Thesis, Nice, 2013. http://www.theses.fr/2013NICE4078.

Full text
Abstract:
The analysis and automatic reconstruction of 3D urban scenes is a fundamental problem in computer vision and digital geometry processing. This thesis presents methodologies for solving the complex problem of reconstructing 3D urban elements from aerial Lidar data or from meshes generated by Multi-View Stereo (MVS) imagery. Our approaches produce an accurate and compact representation in the form of a 3D mesh carrying urban semantics. Two steps are required: identifying the different elements of the urban scene, and modeling these elements as a 3D mesh. Chapter 2 presents two methods for classifying urban elements into classes of interest, providing a thorough understanding of the urban scene and allowing different reconstruction strategies depending on the type of urban element. This idea of inserting both semantic and geometric information into urban scenes is presented in detail and validated through experiments. Chapter 3 presents an approach for detecting 'Vegetation' in Lidar data based on marked point processes, combined with a new optimization method. Chapter 4 describes approaches for building 3D meshes of 'Buildings' from both Lidar data and MVS data. Experiments on large and complex urban structures show the good performance of our systems.
Analysis and 3D reconstruction of urban scenes from physical measurements is a fundamental problem in computer vision and geometry processing. Within the last decades, an important demand has arisen for automatic methods generating urban scene representations. This thesis investigates the design of pipelines for solving the complex problem of reconstructing 3D urban elements from either aerial Lidar data or Multi-View Stereo (MVS) meshes. Our approaches generate accurate and compact mesh representations enriched with urban-related semantic labeling. In urban scene reconstruction, two important steps are necessary: an identification of the different elements of the scenes, and a representation of these elements with 3D meshes. Chapter 2 presents two classification methods which yield a segmentation of the scene into semantic classes of interest. The benefit is twofold. First, this brings awareness of the scene for better understanding. Second, different reconstruction strategies are adopted for each type of urban element. Our idea of inserting both semantic and structural information within urban scenes is discussed and validated through experiments. In Chapter 3, a top-down approach to detect 'Vegetation' elements from Lidar data is proposed using Marked Point Processes and a novel optimization method. In Chapter 4, bottom-up approaches are presented reconstructing 'Building' elements from Lidar data and from MVS meshes. Experiments on complex urban structures illustrate the robustness and scalability of our systems.
APA, Harvard, Vancouver, ISO, and other styles
47

Bagheri, Hossein [Verfasser], Xiaoxiang [Akademischer Betreuer] Zhu, Peter [Gutachter] Reinartz, Xiaoxiang [Gutachter] Zhu, and Michael [Gutachter] Schmitt. "Fusion of Multi-sensor-derived Data for the 3D Reconstruction of Urban Scenes / Hossein Bagheri ; Gutachter: Peter Reinartz, Xiaoxiang Zhu, Michael Schmitt ; Betreuer: Xiaoxiang Zhu." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1195708610/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Joubert, Eric. "Reconstruction de surfaces en trois dimensions par analyse de la polarisation de la lumière réfléchie par les objets de la scène." Rouen, 1993. http://www.theses.fr/1993ROUES052.

Full text
Abstract:
The work presented in this thesis addresses the problem of reconstructing surfaces in three dimensions by analyzing the polarization of light. Our method assumes that the light rays reflected by the objects of the scene are partially polarized; this polarization state is then a function of the orientation of the observed surface element. The dissertation is organized in three parts. The first part describes an original method capable of providing a representative value of the polarization state for every point of the full image of an ordinary scene. Results of this processing, presented for different types of scenes, show an average accuracy on the order of 1%. The second part uses this method in a reconstruction system that relies on a single viewpoint. The results obtained for two generic scenes clearly show the limits of such a principle and define a specific field of application. The last part adds a second viewpoint of the observed scene so as to create an original stereovision system. The results presented for two generic scenes show real reconstruction capabilities for shapes as varied as curved or flat surfaces. This last method perfectly illustrates the advantages of using polarization quantities in a reconstruction system.
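A standard way to estimate the per-pixel polarization state that the first part relies on is to fit the partial-polarization sinusoid to intensities measured behind a rotating linear polarizer; the sketch below is our illustration under that classical model, not the thesis's method.

```python
import numpy as np

def polarization_state(intensities, angles):
    """intensities: (K,) samples at polarizer angles `angles` (radians).
    Model: I(theta) = a0 + a1*cos(2*theta) + a2*sin(2*theta)."""
    angles = np.asarray(angles, dtype=float)
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)
    a0, a1, a2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    dop = np.hypot(a1, a2) / a0          # degree of polarization in [0, 1]
    phi = 0.5 * np.arctan2(a2, a1)       # angle of polarization
    return dop, phi                       # both serve as shape cues
```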
APA, Harvard, Vancouver, ISO, and other styles
49

Bauchet, Jean-Philippe. "Structures de données cinétiques pour la modélisation géométrique d’environnements urbains." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4091.

Full text
Abstract:
The geometric modeling of urban objects from physical measurements, and their efficient, compact, and accurate representation, is a difficult problem in computer vision and computer graphics. In the scientific literature, the geometric data structures at the interface between the input physical measurements and the output models rarely scale and do not allow the partitioning of closed 2D and 3D domains representing complex scenes. In this thesis, we study a new family of geometric data structures based on a kinetic formulation. More precisely, we partition closed domains by detecting geometric shapes, such as line segments or planes, and propagating them over time until they collide, creating polygonal cells. In particular, we propose two geometric modeling methods: one for vectorizing regions of interest in images, and another for reconstructing objects as concise polygonal meshes from 3D point clouds. Both approaches exploit kinetic data structures to efficiently decompose into cells either a 2D image domain or a closed 3D domain. Objects are then extracted from the partition using a binary cell-labeling procedure. Experiments conducted on a wide variety of data in terms of nature, content, complexity, size, and acquisition characteristics demonstrate the versatility of these two methods. In particular, we show their applicative potential on the problem of large-scale urban modeling from aerial and satellite data.
The geometric modeling of urban objects from physical measurements, and their representation in an accurate, compact and efficient way, is an enduring problem in computer vision and computer graphics. In the literature, the geometric data structures at the interface between physical measurements and output models typically suffer from scalability issues, and fail to partition 2D and 3D bounding domains of complex scenes. In this thesis, we propose a new family of geometric data structures that rely on kinetic frameworks. More precisely, we compute partitions of bounding domains by detecting geometric shapes such as line-segments and planes, and extending these shapes until they collide with each other. This process results in light partitions, containing a low number of polygonal cells. We propose two geometric modeling pipelines, one for the vectorization of regions of interest in images, another for the reconstruction of concise polygonal meshes from point clouds. Both approaches exploit kinetic data structures to decompose efficiently either a 2D image domain or a 3D bounding domain into cells. Then, we extract objects from the partitions by optimizing a binary labelling of cells. Conducted on a wide range of data in terms of contents, complexity, sizes and acquisition characteristics, our experiments demonstrate the scalability and the versatility of our methods. We show the applicative potential of our method by applying our kinetic formulation to the problem of urban modeling from remote sensing data
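As a simplified stand-in for the kinetic partition (ours, not the thesis algorithm), the sketch below polygonizes the arrangement induced by detected segments and the domain boundary using Shapely; the kinetic version instead stops each shape at its first collision, producing far lighter partitions.

```python
from shapely.geometry import LineString, box
from shapely.ops import polygonize, unary_union

domain = box(0, 0, 100, 100)
segments = [LineString([(10, 0), (90, 100)]),   # hypothetical detections,
            LineString([(0, 60), (100, 40)])]   # assumed to span the domain
# Node all lines together, then extract the polygonal cells they induce.
network = unary_union(segments + [domain.boundary])
cells = [c for c in polygonize(network)
         if domain.contains(c.representative_point())]
print(f"{len(cells)} polygonal cells")  # candidates for binary labeling
```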
APA, Harvard, Vancouver, ISO, and other styles
50

Bolognini, Damiano. "Improving the convergence speed of NeRFs with depth supervision and weight initialization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25656/.

Full text
Abstract:
Neural rendering is a new and developing field where computer graphics and deep learning techniques are combined to generate photo-realistic images using deep neural networks. In particular, Neural Radiance Fields (NeRF) can synthesise novel views of a scene with unprecedented quality by fitting a Multi-Layer Perceptron (MLP) to RGB images. However, training this network requires plenty of time and computation even on modern GPUs, making this new technology hardly employable in practical specialized applications. In this project, we show that employing the known depth of the scene as an additional supervision during training, and starting from pre-trained weights of other scenes with similar setups instead of from scratch, leads to a convergence speed 3 to 5 times faster.
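A drastically simplified sketch of the two ideas (depth supervision and weight initialization): a small MLP stands in for the radiance field and depth is treated as a direct output, whereas a real NeRF obtains both RGB and depth by volume rendering along each ray; the 0.1 loss weighting and the checkpoint file name are assumptions.

```python
import torch

# Stand-in "radiance field": 3D sample -> (R, G, B, depth-like scalar).
mlp = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 4))
# Warm start from a scene with a similar setup (hypothetical checkpoint):
# mlp.load_state_dict(torch.load("similar_scene.pt"))

xyz = torch.rand(1024, 3)          # fake ray samples
rgb_gt = torch.rand(1024, 3)       # ground-truth colors
depth_gt = torch.rand(1024)        # known scene depth used as supervision

out = mlp(xyz)
rgb_pred, depth_pred = out[:, :3], out[:, 3]
# Photometric RGB loss plus the extra depth-supervision term.
loss = torch.mean((rgb_pred - rgb_gt) ** 2) \
     + 0.1 * torch.mean((depth_pred - depth_gt) ** 2)
loss.backward()
```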
APA, Harvard, Vancouver, ISO, and other styles