Theses on the topic "Reconstruction 3D de la scene"
Browse the 50 best theses for research on the topic "Reconstruction 3D de la scene".
Boyling, Timothy A. "Active vision for autonomous 3D scene reconstruction". Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433622.
Nitschke, Christian. "3D reconstruction : real-time volumetric scene reconstruction from multiple views /". Saarbrücken : VDM Verl. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2939698&prov=M&dok_var=1&dok_ext=htm.
Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse point clouds of heterogeneous density, proposing different techniques to create a 3D model of the surroundings. In the first part, we study the use of 3-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-map applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also focus on single-frame environment perception by introducing a 3D implicit surface reconstruction algorithm capable of dealing with heterogeneous-density data through an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. Finally, we turn to deep learning for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus found in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. Our method employs a new hybrid pipeline based on a 2D CNN backbone branch to reduce computation overhead and 3D segmentation heads to predict the complete semantic scene at different scales, making it significantly lighter and faster than existing approaches.
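The ray-path occupancy update described in this abstract follows the familiar log-odds scheme: cells traversed by a sensor ray become more likely free, while the cell containing the ray endpoint becomes more likely occupied. A minimal generic sketch (not the thesis's own sensor model; cell size and log-odds increments are arbitrary placeholders) could look like this:

```python
# Minimal log-odds occupancy update along a sensor ray (illustrative sketch,
# not the thesis's sensor model; grid resolution and increments are placeholders).
import numpy as np

L_FREE, L_OCC = -0.4, 0.85   # log-odds increments (placeholders)
RES = 0.2                    # cell size in metres (placeholder)

def update_ray(log_odds, origin, hit, res=RES):
    """Mark cells traversed by the ray as free and the hit cell as occupied."""
    direction = hit - origin
    length = np.linalg.norm(direction)
    n_steps = max(int(length / (0.5 * res)), 1)
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
        i, j, k = np.floor((origin + t * direction) / res).astype(int)
        log_odds[i, j, k] += L_FREE          # free space along the ray
    i, j, k = np.floor(hit / res).astype(int)
    log_odds[i, j, k] += L_OCC               # occupied at the ray endpoint
    return log_odds

# usage: accumulate one scan into a 50 m cube of cells centred on the sensor
grid = np.zeros((250, 250, 250))
origin = np.array([25.0, 25.0, 25.0])
for point in np.random.uniform(10, 40, size=(100, 3)):   # stand-in for scan points
    update_ray(grid, origin, point)
```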
Goldman, Benjamin Joseph. "Broadband World Modeling and Scene Reconstruction". Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23094.
The process involves two main steps. The first is stereo correspondence using block-matching algorithms, with filtering to improve the quality of the matching process. The disparity maps are then transformed into 3D point clouds. These point clouds are filtered again before the registration step. The registration uses a SAC-IA matching technique to align the point clouds with minimum error. The registered final cloud is then filtered again to smooth and downsample the large amount of data. This process was implemented through a software architecture that utilizes Qt, OpenCV, and the Point Cloud Library, and it was tested using a variety of experiments on each component of the process. It shows promise for replacing or augmenting existing UGV perception systems in the future.
Master of Science
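As a rough illustration of the block-matching and reprojection steps mentioned in the abstract above, the following sketch uses OpenCV; the file names, matcher parameters and the reprojection matrix Q are placeholders, and the filtering and SAC-IA registration stages of the thesis are omitted:

```python
# Sketch of the disparity-to-point-cloud step (illustrative parameters only;
# in practice Q comes from the stereo calibration, e.g. cv2.stereoRectify).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

Q = np.eye(4, dtype=np.float32)           # placeholder reprojection matrix
points = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0                     # drop pixels with no match
cloud = points[valid].reshape(-1, 3)      # one 3D point per matched pixel
```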
Booth, Roy. "Scene analysis and 3D object reconstruction using passive vision". Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295780.
Aufderheide, Dominik. "VISrec! : visual-inertial sensor fusion for 3D scene reconstruction". Thesis, University of Bolton, 2014. http://ubir.bolton.ac.uk/649/.
Chandraker, Manmohan Krishna. "From pictures to 3D global optimization for scene reconstruction /". Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3369041.
Title from first page of PDF file (viewed September 15, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 235-246).
Manessis, A. "3D reconstruction from video using a mobile robot". Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/844129/.
Moodie, Daniel Thien-An. "Sensor Fused Scene Reconstruction and Surface Inspection". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/47453.
Master of Science
D'Angelo, Paolo. "3D scene reconstruction by integration of photometric and geometric methods". [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985352949.
Tola, Engin. "Multiview 3d Reconstruction Of A Scene Containing Independently Moving Objects". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606411/index.pdf.
Kühner, Tilman [Verfasser] and C. [Akademischer Betreuer] Stiller. "Large-Scale Textured 3D Scene Reconstruction / Tilman Kühner ; Betreuer: C. Stiller". Karlsruhe : KIT-Bibliothek, 2020. http://d-nb.info/1221186965/34.
Villota, Juan Carlos Perafán. "Adaptive registration using 2D and 3D features for indoor scene reconstruction". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-17042017-090901/.
The alignment of pairs of point clouds is an important task in building 3D maps of environments. Combining 2D local features with the depth information provided by RGB-D cameras is often used to improve such alignments. However, in indoor environments with low illumination or little visual texture, methods using only 2D local features are not particularly robust. Under these conditions, 2D features are hard to detect, leading to misalignment between consecutive frame pairs. Using local 3D features can be a solution, since such features are extracted directly from 3D points and are resistant to variations in visual texture and illumination. As variations in real indoor scenes are unavoidable, this thesis presents a new system developed to improve the alignment between frame pairs using an adaptive combination of sparse 2D and 3D features. This combination is based on the levels of geometric structure and visual texture contained in each scene. The system was tested with RGB-D datasets, including videos with unrestricted camera motion and natural changes in illumination. The experimental results show that our proposal outperforms methods that use 2D or 3D features separately, improving the alignment accuracy of scenes in real indoor environments.
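A generic frame-to-frame registration in the spirit of this abstract back-projects matched 2D features to 3D using the depth image and estimates a rigid transform from the resulting 3D-3D correspondences. The sketch below uses ORB features and a Kabsch fit; the intrinsics and the depth-validity test are assumptions, and the thesis's adaptive 2D/3D weighting is not reproduced:

```python
# Simplified RGB-D frame alignment from matched 2D features (hypothetical intrinsics).
import cv2
import numpy as np

fx = fy = 525.0
cx, cy = 319.5, 239.5

def backproject(u, v, z):
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def rigid_transform(src, dst):
    """Least-squares rotation/translation between matched 3D point sets (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def align(rgb1, depth1, rgb2, depth2):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(rgb1, None)
    k2, d2 = orb.detectAndCompute(rgb2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src, dst = [], []
    for m in matches:
        (u1, v1), (u2, v2) = k1[m.queryIdx].pt, k2[m.trainIdx].pt
        z1, z2 = depth1[int(v1), int(u1)], depth2[int(v2), int(u2)]
        if z1 > 0 and z2 > 0:                # keep matches with valid depth only
            src.append(backproject(u1, v1, z1))
            dst.append(backproject(u2, v2, z2))
    return rigid_transform(np.array(src), np.array(dst))
```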
Tannouri, Anthony. "Using Wireless multimedia sensor networks for 3D scene asquisition and reconstruction". Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD053/document.
Nowadays, WMSNs are promising for different applications and fields, especially with the development of the IoT and cheap, efficient camera sensors. Stereo vision is also very important for multiple purposes such as cinematography, games, virtual reality and augmented reality. This thesis aims to develop a 3D scene reconstruction system that proves the concept of using multiple-view stereo disparity maps in the context of WMSNs. Our work can be divided into three parts. The first concentrates on studying WMSN applications, components, topologies, constraints and limitations, together with stereo-vision disparity-map calculation methods, in order to choose the best method(s) for 3D reconstruction on WMSNs at low cost in terms of complexity and power consumption. In the second part, we experiment with and simulate different disparity-map calculations on a couple of nodes by changing scenarios (indoor and outdoor), coverage distances, angles, numbers of nodes and algorithms. In the third part, we propose a tree-based network model to compute accurate disparity maps on multi-layer camera sensor nodes that meets the server's needs for a 3D reconstruction of the scene or object of interest. The results are acceptable and prove the concept of using disparity maps in the context of WMSNs.
Imre, Evren. "Prioritized 3d Scene Reconstruction And Rate-distortion Efficient Representation For Video Sequences". Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608722/index.pdf.
Texto completoLeggett, I. C. "3D scene reconstruction and object recognition for use with AGV self positioning". Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320495.
Grundberg, Måns and Viktor Altintas. "Generating 3D Scenes From Single RGB Images in Real-Time Using Neural Networks". Thesis, Malmö universitet, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43091.
Boulch, Alexandre. "Reconstruction automatique de maquettes numériques 3D". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1099/document.
The interest in digital models in the building industry is growing rapidly. They centralize all the information concerning a building and facilitate communication between the players in construction: cost evaluation, physical simulations, virtual presentations, building lifecycle management, site supervision, etc. Although building models now tend to be used for large projects of new construction, no such models exist for existing buildings. In particular, old buildings do not enjoy a digital 3D model and information, whereas they would benefit the most from them, e.g., to plan cost-effective renovation that achieves good thermal performance. Such 3D models are reconstructed from the real building. Lately, a number of automatic reconstruction methods have been developed from either laser or photogrammetric data. Lasers are precise and produce dense point clouds, and their price has dropped considerably in the past few years, making them affordable for industry. Photogrammetry, often less precise and prone to failing in uniform regions (e.g. bare walls), is much cheaper than lasers. However, most approaches only reconstruct a surface from point clouds, not a semantically rich building model. A building information model is the alliance of a geometry and a semantics for the scene elements. The main objective of this thesis is to define a framework for digital model production regarding both geometry and semantics, using point clouds as input. The reconstruction process is divided into four parts, gradually enriching the information from the points to the final digital mockup. First, we define a normal estimator for unstructured point clouds based on a robust Hough transform. It estimates accurate normals, even near sharp edges and corners, and deals with the anisotropy inherent to laser scans. Then, primitives such as planes are extracted from the point cloud. To avoid over-segmentation issues, we develop a general and robust statistical criterion for shape merging that only requires a distance function from points to shapes. A piecewise-planar surface is then reconstructed. Plane hypotheses for visible and hidden parts of the scene are inserted in a 3D plane arrangement. Cells of the arrangement are labelled full or empty using a new regularization on corner count and edge length; a linear formulation allows us to efficiently solve this labelling problem with a continuous relaxation. Finally, we propose an approach based on constrained attribute grammars for 3D model semantization. This method is entirely bottom-up, and we prevent a possible combinatorial explosion by introducing maximal operators and an order on variable instantiation.
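For the primitive-extraction stage discussed above, a common baseline is iterative RANSAC plane detection. The sketch below uses Open3D's segment_plane as a stand-in (the thesis relies on its own robust Hough normal estimator and statistical merging criterion, which are not shown); the thresholds and file name are placeholders:

```python
# Illustrative iterative RANSAC plane extraction from a point cloud with Open3D.
import open3d as o3d

def extract_planes(cloud, max_planes=10, dist=0.02, min_inliers=500):
    planes, rest = [], cloud
    for _ in range(max_planes):
        model, inliers = rest.segment_plane(distance_threshold=dist,
                                            ransac_n=3,
                                            num_iterations=1000)
        if len(inliers) < min_inliers:       # stop when the remaining planes are too small
            break
        planes.append((model, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)
    return planes, rest

# usage (hypothetical file name):
# pcd = o3d.io.read_point_cloud("building_scan.ply")
# planes, residual = extract_planes(pcd)
```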
Del, Pero Luca. "Top-Down Bayesian Modeling and Inference for Indoor Scenes". Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/297040.
Kirli, Mustafa Yavuz. "3d Reconstruction Of Underwater Scenes From Uncalibrated Video Sequences". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609901/index.pdf.
Vural, Elif. "Robust Extraction Of Sparse 3d Points From Image Sequences". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609888/index.pdf.
… hence, determining the scene structure and cameras up to a projective transformation. The extension of the two-view reconstruction to multiple views is achieved by estimating the camera projection matrix of each additional view from the already reconstructed matches, and then adding new points to the scene structure by triangulating the unreconstructed matches. Finally, the reconstruction is upgraded from projective to metric by a rectifying homography computed from the camera calibration information. In order to obtain a refined reconstruction, two different methods are suggested for the removal of erroneous points from the scene structure. In addition to the examination of the solution to the reconstruction problem, experiments have been conducted that compare the performances of competing algorithms used in various stages of reconstruction. In connection with sparse reconstruction, a rate-distortion efficient piecewise planar scene representation algorithm that generates mesh models of scenes from reconstructed point clouds is examined, and its performance is evaluated through experiments.
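For reference, the calibrated counterpart of the two-view pipeline summarized above (epipolar geometry estimation, pose recovery, triangulation) can be sketched with OpenCV as follows; pts1 and pts2 are assumed to be matched pixel coordinates and K an assumed intrinsic matrix:

```python
# Calibrated two-view reconstruction sketch: essential matrix, pose, triangulation.
import cv2
import numpy as np

def two_view_points(pts1, pts2, K):
    """pts1, pts2: N x 2 float arrays of matched pixels; K: 3 x 3 intrinsics."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                           # second camera from the recovered pose
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)    # homogeneous 4 x N result
    return (X[:3] / X[3]).T                              # Euclidean 3D points, N x 3
```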
Oesau, Sven. "Modélisation géométrique de scènes intérieures à partir de nuage de points". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4034/document.
Geometric modeling and semantization of indoor scenes from sampled point data is an emerging research topic. Recent advances in acquisition technologies provide highly accurate laser scanners and low-cost handheld RGB-D cameras for real-time acquisition. However, the processing of large data sets is hampered by high amounts of clutter and various defects such as missing data, outliers and anisotropic sampling. This thesis investigates three novel methods for efficient geometric modeling and semantization from unstructured point data: shape detection, classification and geometric modeling. Chapter 2 introduces two methods for abstracting the input point data with primitive shapes. First, we propose a line extraction method to detect wall segments from a horizontal cross-section of the input point cloud. Second, we introduce a region growing method that progressively detects and reinforces regularities of planar shapes. This method utilizes regularities common to man-made architecture, i.e. coplanarity, parallelism and orthogonality, to reduce complexity and improve data fitting in defect-laden data. Chapter 3 introduces a method based on statistical analysis for separating clutter from structure. We also contribute a supervised machine learning method for object classification based on sets of planar shapes. Chapter 4 introduces a method for 3D geometric modeling of indoor scenes. We first partition the space using primitive shapes detected from permanent structures. An energy formulation is then used to solve an inside/outside labeling of the space partitioning, the latter providing robustness to missing data and outliers.
Mrkvička, Daniel. "Rekonstrukce 3D objektů z více pohledů". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399560.
Texto completoDiskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision". University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.
Texto completoSchindler, Grant. "Unlocking the urban photographic record through 4D scene modeling". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34719.
Texto completoRoubtsova, Nadejda S. "Accurate 3D reconstruction of dynamic scenes with complex reflectance properties". Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/810256/.
Texto completoLai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.
Texto completoPoulin-Girard, Anne-Sophie. "Paire stéréoscopique Panomorphe pour la reconstruction 3D d'objets d'intérêt dans une scène". Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/27073.
A wide variety of panoramic lenses are available on the market. Exhibiting interesting characteristics, the Panomorph lens is a panoramic anamorphic optical system. Its highly non-uniform distortion profile creates areas of enhanced magnification across the field of view. For mobile robotic applications, a stereoscopic system for 3D reconstruction of objects of interest could greatly benefit from the unique features of these special lenses. Such a stereoscopic system would provide general information describing the environment surrounding the robot during navigation. Moreover, the areas of enhanced magnification give access to smaller details. However, the downside is that Panomorph lenses are difficult to calibrate, and this is the main reason why no research has been carried out on this topic. The main goal of this thesis is the design and development of Panomorph stereoscopic systems as well as the evaluation of their performance. The calibration of the lenses was performed using plane targets and a well-established calibration toolbox. In addition, new mathematical techniques aiming to restore the symmetry of revolution in the image and to make the focal length uniform over the field of view were developed to simplify the calibration process. First, the field of view was divided into zones exhibiting a small variation of the focal length and the calibration was performed for each zone. Then, the general calibration was performed for the entire field of view. The results showed that the calibration of each zone does not lead to a better 3D reconstruction than the general calibration method. However, this new approach allowed a study of the quality of the reconstruction over the entire field of view. Indeed, it showed that it is possible to achieve good reconstruction for all zones of the field of view. In addition, the results for the mathematical techniques used to restore the symmetry of revolution were similar to the results obtained with the original data. These techniques could therefore be used to calibrate Panomorph lenses with calibration toolboxes that do not have two degrees of freedom relating to the focal length. The study of the performance of stereoscopic Panomorph systems also highlighted important factors that could influence the choice of lenses and configuration for similar systems. The challenges met during the calibration of Panomorph lenses led to the development of a virtual calibration technique that uses optical design software and a calibration toolbox. With this technique, simulations reproducing the operating conditions were made to evaluate their impact on the calibration parameters. The quality of 3D reconstruction of a volume was also evaluated for various calibration conditions. Similar experiments would be extremely tedious to perform in the laboratory, but the results are quite meaningful for the user. The virtual calibration of a traditional lens also showed that the mean reprojection error, often used to judge the quality of the calibration process, does not represent the quality of the 3D reconstruction. It is therefore essential to have access to more information in order to assess the quality of a lens calibration.
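The plane-target calibration and the mean reprojection error discussed above can be illustrated, for a conventional lens, with OpenCV's standard pinhole calibration; the sketch below is not applicable as-is to Panomorph lenses, and the board size and file names are placeholders:

```python
# Checkerboard calibration plus mean reprojection error (standard pinhole model).
import glob
import cv2
import numpy as np

cols, rows, square = 9, 6, 0.025                      # hypothetical checkerboard geometry
objp = np.zeros((cols * rows, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):                 # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)

# mean reprojection error over all views
errs = []
for op, ip, rv, tv in zip(obj_pts, img_pts, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
    errs.append(cv2.norm(ip, proj, cv2.NORM_L2) / len(proj))
print("mean reprojection error (px):", np.mean(errs))
```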
Abayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models". University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.
Texto completoLitvinov, Vadim. "Reconstruction incrémentale d'une scène complexe à l'aide d'une caméra omnidirectionnelle". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22541/document.
The automatic reconstruction of a scene surface from images taken by a moving camera is still an active research topic. This problem is usually solved in two steps: first estimate the camera poses and a sparse cloud of 3D points using Structure-from-Motion, then apply dense stereo to obtain the surface by estimating the depth of all pixels. Compared to previous approaches, ours accumulates the following properties. The output surface is a 2-manifold, which is useful for applications and post-processing. It is computed directly from the sparse point cloud provided by the first step, so as to avoid the second, time-consuming step and to obtain a compact model of a complex scene. The computation is incremental, allowing access to intermediary results during processing. The principle is the following. At each iteration, new 3D points are estimated and added to a 3D Delaunay triangulation; the tetrahedra are labeled as free-space or matter thanks to the visibility information provided by the first step. We also update a second partition of outside and inside tetrahedra whose boundary is the target 2-manifold. Under some assumptions, the time complexity of one iteration is bounded (only one previous method has the same properties, and its complexity is greater). Our method is experimented on synthetic and real sequences, including a 2.5 km long urban sequence taken by an omnidirectional camera. The surface quality is similar to that of the batch method which inspired us. However, the computations are not yet real-time on a commodity PC. We also study the use of contours in the reconstruction process.
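The free-space/matter labelling step described above can be approximated, in a much-simplified batch form, by building a 3D Delaunay triangulation of the sparse points and marking every tetrahedron crossed by a camera-to-point visibility ray as free space; the sketch below assumes SciPy and does not reproduce the thesis's incremental 2-manifold extraction:

```python
# Visibility-based free-space labelling of a 3D Delaunay triangulation (simplified).
import numpy as np
from scipy.spatial import Delaunay

def label_free_space(points, cameras, visibility, samples=50):
    """points: (N,3); cameras: (M,3); visibility: list of (cam_idx, pt_idx) pairs."""
    tri = Delaunay(points)
    free = np.zeros(len(tri.simplices), dtype=bool)
    for cam_idx, pt_idx in visibility:
        ray = np.linspace(cameras[cam_idx], points[pt_idx], samples)[:-1]
        hit = tri.find_simplex(ray)          # tetrahedron index per sample, -1 if outside
        free[hit[hit >= 0]] = True           # everything crossed by the ray is empty space
    return tri, free                         # unlabelled tetrahedra remain 'matter'
```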
Huhle, Benjamin [Verfasser]. "Acquisition and Reconstruction of 3D Scenes with Range and Image Data / Benjamin Huhle". München : Verlag Dr. Hut, 2011. http://d-nb.info/1018982590/34.
El Natour, Ghina. "Towards 3D reconstruction of outdoor scenes by mmw radar and a vision sensor fusion". Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22773/document.
The main goal of this PhD work is to develop 3D mapping methods for large-scale environments by combining panoramic radar and cameras. Unlike existing sensor fusion methods, such as SLAM (simultaneous localization and mapping), we want to build an RGB-D sensor which directly provides depth measurements enhanced with texture and color information. After modeling the geometry of the radar/camera system, we propose a novel calibration method using point correspondences. To obtain these point correspondences, we designed special targets allowing accurate point detection by both the radar and the camera. The proposed approach has been developed to be usable by non-expert operators and in unconstrained environments. Secondly, a 3D reconstruction method is elaborated based on radar data and image point correspondences. A theoretical analysis is done to study the influence of the uncertainty zone of each sensor on the reconstruction method. This theoretical study, together with the experimental results, shows that the proposed method outperforms conventional stereoscopic triangulation for large-scale outdoor scenes. Finally, we propose an efficient strategy for automatic data matching. This strategy uses two calibrated cameras. Taking into account the heterogeneity of camera and radar data, the developed algorithm starts by segmenting the radar data into polygonal regions. The calibration process allows the restriction of the search by defining a region of interest in the pair of images. A similarity criterion based on both cross-correlation and the epipolar constraint is applied in order to validate or reject region pairs. As long as the similarity test is not met, the image regions are re-segmented iteratively into polygonal regions, generating thereby a shortlist of candidate matches. This process promotes the matching of large regions first, which allows obtaining maps with locally dense patches. The proposed methods were tested on both synthetic and real experimental data. The results are encouraging and prove the feasibility of radar and vision sensor fusion for the 3D mapping of large-scale urban environments.
Rehman, Farzeed Ur. "3D reconstruction of architectural scenes from images and video captured with an uncalibrated camera". Thesis, University of Manchester, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.549000.
Texto completoBlanc, Jérôme. "Synthèse de nouvelles vues d'une scène 3D à partir d'images existantes". Phd thesis, Grenoble INPG, 1998. http://tel.archives-ouvertes.fr/tel-00004870.
Texto completoDeschênes, François. "Estimation des jonctions, du mouvement apparent et du relief en vue d'une reconstruction 3D de la scène". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ56894.pdf.
Texto completoDeschênes, François. "Estimation des jonctions, du mouvement apparent et du relief en vue d'une reconstruction 3D de la scène". Mémoire, Université de Sherbrooke, 1999. http://savoirs.usherbrooke.ca/handle/11143/4421.
Texto completoDeschênes, François. "Estimation des jonctions, du mouvement apparent et du relief en vue d'une reconstruction 3D de la scène". Sherbrooke : Université de Sherbrooke, 2000.
Buscar texto completoMüller, Franziska [Verfasser]. "Real-time 3D hand reconstruction in challenging scenes from a single color or depth camera / Franziska Müller". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1224883594/34.
Maniere, Thierry. "Etude et realisation d'un systeme de prises de vues panoramiques binoculaires dedie a la reconstruction 3d de scenes". Paris 6, 1997. http://www.theses.fr/1997PA066127.
Oisel, Lionel. "Reconstruction 3d de scenes complexes a partir de sequences video non calibrees : estimation et maillage d'un champ de disparite". Rennes 1, 1998. http://www.theses.fr/1998REN10119.
Ismael, Muhannad. "Reconstruction de scène dynamique à partir de plusieurs vidéos mono- et multi-scopiques par hybridation de méthodes « silhouettes » et « multi-stéréovision »". Thesis, Reims, 2016. http://www.theses.fr/2016REIMS021/document.
Accurate reconstruction of a 3D scene from multiple cameras offers 3D synthetic content to be used in many applications such as entertainment, TV, and cinema production. This thesis is placed in the context of the RECOVER3D collaborative project, whose aim is to provide efficient and high-quality innovative solutions for the 3D acquisition of actors. The RECOVER3D acquisition system is composed of several tens of synchronized cameras scattered around the observed scene within a chroma-key studio in order to build the visual hull, with several groups laid out as multiscopic units dedicated to multi-baseline stereovision. A multiscopic unit is defined as a set of aligned and evenly distributed cameras. This thesis proposes a novel framework for multi-view 3D reconstruction relying on both multi-baseline stereovision and the visual hull. This method's inputs are a visual hull and several sets of multi-baseline views. For each such view set, a multi-baseline stereovision method yields a surface which is used to carve the visual hull. Carved visual hulls from different view sets are then fused iteratively to deliver the intended 3D model. Furthermore, we propose a framework for multi-baseline stereovision which provides, upon the Disparity Space (DS), a materiality map expressing the probability for 3D sample points to lie on a visible surface. The results confirm i) the efficiency of using the materiality map to deal with commonly occurring problems in multi-baseline stereovision, in particular for semi- or partially occluded regions, and ii) the benefit of merging visual hull and multi-baseline stereovision methods to produce 3D object models with high precision.
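The visual-hull side of the pipeline above admits a simple voxel formulation: a voxel is kept only if it projects inside every silhouette. The sketch below assumes the projection matrices and binary silhouette masks are given, and leaves out the multi-baseline stereovision carving:

```python
# Minimal voxel-based visual hull from calibrated silhouettes.
import numpy as np

def visual_hull(voxels, projections, silhouettes):
    """voxels: (N,3); projections: list of 3x4 matrices; silhouettes: list of binary masks."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])      # homogeneous coordinates
    for P, mask in zip(projections, silhouettes):
        uvw = hom @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0          # falls on the silhouette
        keep &= hit                                           # carve away everything else
    return voxels[keep]
```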
Chausse, Frédéric. "Reconstruction 3d de courbes parametriques polynomiales par filtrage temporel. Approche par cooperation vision par ordinateur/infographie. Application aux scenes routieres". Clermont-Ferrand 2, 1994. http://www.theses.fr/1994CLF21678.
Texto completoDuan, Liuyun. "Modélisation géométrique de scènes urbaines par imagerie satellitaire". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4025.
Automatic city modeling from satellite imagery is one of the biggest challenges in urban reconstruction. The ultimate goal is to produce compact and accurate 3D city models that benefit many application fields such as urban planning, telecommunications and disaster management. Compared with aerial acquisition, satellite imagery provides appealing advantages such as low acquisition cost, worldwide coverage and high collection frequency. However, the satellite context also imposes a set of technical constraints, such as a lower pixel resolution and a wider [...], that challenge 3D city reconstruction. In this PhD thesis, we present a set of methodological tools for generating compact, semantically-aware and geometrically accurate 3D city models from stereo pairs of satellite images. The proposed pipeline relies on two key ingredients. First, geometry and semantics are retrieved simultaneously, providing robust handling of occlusion areas and low image quality. Second, it operates at the scale of geometric atomic regions, which allows the shape of urban objects to be well preserved, with a gain in scalability and efficiency. Images are first decomposed into convex polygons that capture geometric details via a Voronoi diagram. Semantic classes, elevations, and 3D geometric shapes are then retrieved in a joint classification and reconstruction process operating on polygons. Experimental results on various cities around the world show the robustness, scalability and efficiency of the proposed approach.
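As a toy illustration of the polygonal decomposition mentioned above, one can partition the image domain with a Voronoi diagram built from detected corners; the seed choice and file name below are assumptions, and the thesis's own partition is driven by richer geometric cues:

```python
# Voronoi partition of an image domain from detected corner points (toy example).
import cv2
import numpy as np
from scipy.spatial import Voronoi

img = cv2.imread("satellite_view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01, minDistance=10)
seeds = corners.reshape(-1, 2)

vor = Voronoi(seeds)             # convex cells covering the seeded image domain
# vor.regions / vor.vertices give the polygon of each cell, on which per-cell
# classification and elevation estimation could then operate.
```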
Alkhadour, Wissam M. "Reconstruction of 3D scenes from pairs of uncalibrated images. Creation of an interactive system for extracting 3D data points and investigation of automatic techniques for generating dense 3D data maps from pairs of uncalibrated images for remote sensing applications". Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4933.
Texto completoAl-Baath University
The appendix files and images are not available online.
Verdie, Yannick. "Modélisation de scènes urbaines à partir de données aériennes". Thesis, Nice, 2013. http://www.theses.fr/2013NICE4078/document.
Analysis and 3D reconstruction of urban scenes from physical measurements is a fundamental problem in computer vision and geometry processing. Within the last decades, an important demand has arisen for automatic methods generating urban scene representations. This thesis investigates the design of pipelines for solving the complex problem of reconstructing 3D urban elements from either aerial Lidar data or Multi-View Stereo (MVS) meshes. Our approaches generate accurate and compact mesh representations enriched with urban-related semantic labeling. In urban scene reconstruction, two important steps are necessary: an identification of the different elements of the scene, and a representation of these elements with 3D meshes. Chapter 2 presents two classification methods which yield a segmentation of the scene into semantic classes of interest. The benefit is twofold. First, this brings awareness of the scene for better understanding. Second, different reconstruction strategies are adopted for each type of urban element. Our idea of inserting both semantic and structural information within urban scenes is discussed and validated through experiments. In Chapter 3, a top-down approach to detect 'Vegetation' elements from Lidar data is proposed, using Marked Point Processes and a novel optimization method. In Chapter 4, bottom-up approaches are presented for reconstructing 'Building' elements from Lidar data and from MVS meshes. Experiments on complex urban structures illustrate the robustness and scalability of our systems.
Bagheri, Hossein [Verfasser], Xiaoxiang [Akademischer Betreuer] Zhu, Peter [Gutachter] Reinartz, Xiaoxiang [Gutachter] Zhu y Michael [Gutachter] Schmitt. "Fusion of Multi-sensor-derived Data for the 3D Reconstruction of Urban Scenes / Hossein Bagheri ; Gutachter: Peter Reinartz, Xiaoxiang Zhu, Michael Schmitt ; Betreuer: Xiaoxiang Zhu". München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1195708610/34.
Joubert, Eric. "Reconstruction de surfaces en trois dimensions par analyse de la polarisation de la lumière réfléchie par les objets de la scène". Rouen, 1993. http://www.theses.fr/1993ROUES052.
Bauchet, Jean-Philippe. "Structures de données cinétiques pour la modélisation géométrique d’environnements urbains". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4091.
The geometric modeling of urban objects from physical measurements, and their representation in an accurate, compact and efficient way, is an enduring problem in computer vision and computer graphics. In the literature, the geometric data structures at the interface between physical measurements and output models typically suffer from scalability issues, and fail to partition 2D and 3D bounding domains of complex scenes. In this thesis, we propose a new family of geometric data structures that rely on kinetic frameworks. More precisely, we compute partitions of bounding domains by detecting geometric shapes such as line-segments and planes, and extending these shapes until they collide with each other. This process results in light partitions, containing a low number of polygonal cells. We propose two geometric modeling pipelines, one for the vectorization of regions of interest in images, another for the reconstruction of concise polygonal meshes from point clouds. Both approaches exploit kinetic data structures to decompose efficiently either a 2D image domain or a 3D bounding domain into cells. Then, we extract objects from the partitions by optimizing a binary labelling of cells. Conducted on a wide range of data in terms of contents, complexity, sizes and acquisition characteristics, our experiments demonstrate the scalability and the versatility of our methods. We show the applicative potential of our method by applying our kinetic formulation to the problem of urban modeling from remote sensing data.
Bolognini, Damiano. "Improving the convergence speed of NeRFs with depth supervision and weight initialization". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25656/.
Texto completo