Dissertations / Theses on the topic '3d photography'
Consult the top 50 dissertations and theses listed below for research on the topic '3d photography'. Abstracts are included where they are available in the source metadata.
Scott-Murray, Amy. "Applications of 3D computational photography to marine science." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=233937.
Ferguson, Paul. "Development of a 3D audio panning and realtime visualisation toolset using emerging technologies." Thesis, Edinburgh Napier University, 2010. http://researchrepository.napier.ac.uk/Output/6698.
Slysz, Rémi. "Reconstruction de surface 3D d'objets vivants." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0022/document.
This thesis is part of the CPER BRAMSS project, one of whose objectives was to develop a surface-reconstruction method applied to the female bust. The work therefore aimed at the design, development and implementation of a three-dimensional measuring machine adapted to living objects. Among the large number of existing three-dimensional measurement methods, attention was paid to stereo matching as well as to the use of structured light. Matching in stereovision consists in finding homologous pixels in two images of the same scene taken from two different points of view. One way to achieve this mapping is to use correlation measures. The algorithms used come up against certain difficulties: changing light, noise, distortions, occlusions, low-textured areas and large homogeneous areas. In this work, structured light is used essentially to add information in homogeneous areas. Developing this approach, an original reconstruction method based on the exploitation of a particular pattern projected on the surface was designed. A matching scheme based on comparing the signatures of specific points in the pattern was implemented. This method allows sparse reconstruction from a single acquisition step and simplifies the handling of the point cloud when transforming it into a surface mesh.
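As an illustration of the correlation-based matching that this abstract refers to, here is a minimal Python sketch of window matching along an epipolar line using zero-mean normalized cross-correlation (ZNCC); the window size and disparity range are illustrative assumptions, not values from the thesis.

```python
# A minimal sketch (not the thesis implementation) of correlation-based stereo
# matching: for a pixel in the left image, the best-matching pixel on the same
# row of the right image is found by maximising ZNCC over a small window.
import numpy as np

def zncc(a, b, eps=1e-8):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def match_pixel(left, right, y, x, half=3, max_disp=32):
    """Return the disparity (in pixels) maximising ZNCC for pixel (y, x)."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_score = 0, -1.0
    for d in range(0, min(max_disp, x - half) + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
        score = zncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```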
Fausz, James K. "Exploring Public Involvement through Photography and Interactive Design." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1147629903.
Lee, Won Hee. "Bundle block adjustment using 3D natural cubic splines." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1211476222.
El-Hajjaji, Abdellah. "Traitement numérique en 3D d'un couple d'images stéréo du satellite SPOT." Rouen, 1993. http://www.theses.fr/1993ROUES028.
The aim of our research was to extract the elevation of a given landscape imaged from two different angles by the SPOT satellite. To do so, we modelled the motion of the satellite and its optical system in order to transform the two original images into epipolar images, which reduces the matching time and makes it possible to find the corresponding pixels reliably. For the matching, we used a technique based on correlation and dynamic programming. This method was very satisfactory and allowed us to match 96% of the corresponding pixels with an error of less than 5 meters, but the original problem still calls for complementary studies.
Abranches, Gonçalo Botelho de Sousa. "Determinação da qualidade geométrica de superfície refletoras com recurso à fotogrametria." Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/23893.
Silva, Roger Correia Pinheiro. "Desenvolvimento e análise de um digitalizador câmera-projetor de alta definição para captura de geometria e fotometria." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/3515.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A camera-projector system is capable of capturing three-dimensional geometric information about objects and real-world environments. The capture of geometry in such a system is based on the projection of structured light over an object by the projector and the capture of the modulated scene by the camera. With a calibrated system, the deformation of the projected light caused by the object provides the information needed to reconstruct its geometry through triangulation. The present work describes the development of a high-definition camera-projector scanner (with resolutions up to 1920x1080 and 1280x720). The steps and processes that lead to the reconstruction of geometry, such as camera-projector calibration, color calibration, image processing and triangulation, are detailed. The developed scanner uses the (b, s)-BCSL structured light coding, which employs the projection of a sequence of colored vertical stripes on the scene. This coding scheme offers a flexible number of stripes for projection: the higher the number of stripes, the more detailed the captured geometry. One of the objectives of this work is to estimate the limit number of (b, s)-BCSL stripes possible within current high-definition video resolutions. This limit number is the one that provides dense geometry reconstruction while keeping the error level low. To evaluate the geometry reconstructed by the scanner for different numbers of stripes, we propose an error-measurement protocol. The protocol uses planes as objects to measure the quality of the geometric reconstruction. From the point cloud generated by the scanner, the plane equation is estimated by least squares. For a fixed number of stripes, five independent scans of the plane are made: each scan leads to one equation, and the average plane, estimated from the union of the five point clouds, is also computed. A distance metric in projective space is used to evaluate the precision and the accuracy of each number of projected stripes. In addition to the quantitative evaluation, the geometry of several objects is presented for qualitative evaluation. The results show that the limit number of stripes for high-resolution video allows a high density of points even on surfaces with high color variation.
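As an illustration of the plane-based evaluation protocol summarized above, the following minimal Python sketch fits a least-squares plane to a point cloud (via SVD of the centred points) and reports the RMS point-to-plane distance; the array shapes, the synthetic data and the RMS statistic are assumptions for illustration, not the thesis's exact metric (which works in projective space).

```python
# A small sketch of the kind of evaluation described above: a plane is fitted
# to a scanned point cloud by least squares and point-to-plane distances
# quantify reconstruction quality.
import numpy as np

def fit_plane(points):
    """points: (N, 3) array. Returns (centroid, unit normal) of the LS plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def plane_rms_error(points, centroid, normal):
    """Root-mean-square orthogonal distance of the points to the plane."""
    d = (points - centroid) @ normal
    return float(np.sqrt((d ** 2).mean()))

# Example: evaluate one scan of a synthetic, nearly planar target.
pts = np.random.randn(1000, 3) * [50.0, 50.0, 0.2]
c, n = fit_plane(pts)
print(plane_rms_error(pts, c, n))
```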
Marques, Clarissa Codá dos Santos Cavalcanti. "Um sistema de calibração de câmera." Universidade Federal de Alagoas, 2007. http://repositorio.ufal.br/handle/riufal/1051.
Fundação de Amparo a Pesquisa do Estado de Alagoas
A camera calibration process consists of determining the digital, geometric and optical characteristics of the camera from a set of initial data. This problem can be divided into three stages: acquisition of the initial data, the calibration process itself, and optimization. This work proposes the development of a calibration tool based on a generic architecture suitable for any calibration process. For this purpose, the system presented in this work allows each stage of the calibration to be customized. New calibration methods can be added dynamically, providing greater integration and flexibility between the system modules.
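As a rough illustration of the three stages described above (initial data acquisition, calibration, optimization), the following Python sketch uses OpenCV's planar-target routine; the checkerboard dimensions, square size and file names are hypothetical, and the thesis's generic, customizable architecture is not reproduced here.

```python
# A minimal sketch of a calibration pipeline, assuming a checkerboard target.
import glob
import cv2
import numpy as np

pattern = (9, 6)                 # assumed inner-corner layout of the checkerboard
square = 25.0                    # assumed square size in millimetres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_*.png"):          # 1) data acquisition
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# 2) calibration and 3) optimization (calibrateCamera internally minimises the
#    reprojection error over all views).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error:", rms)
```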
Hudec, Jiří. "Vizualizace výrobních podkladů ve firmě IFE Brno." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229956.
Choi, Keum-Ran. "3D thermal mapping of cone calorimeter specimen and development of a heat flux mapping procedure utilizing an infrared camera." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-020205-215634/.
Keywords: temperature measurement; heat flux maps; Cone Calorimeter; three-dimensional heat conduction; fire growth models; retainer frame; ceramic fiberboard; edge effect; one-dimensional heat conduction; heat flux mapping procedure; infrared camera; specimen preparation; edge frame; one-dimensional heat conduction model; thermal properties. Includes bibliographical references (p. 202-204).
Banerjee, Natasha Kholgade. "3D Manipulation of Objects in Photographs." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/659.
Shlyakhter, Ilya (1975-), and Max Rozenoer (1976-). "Reconstruction of 3D tree models from instrumented photographs." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80136.
Also issued with order of names reversed on t.p.
Includes bibliographical references (leaves 33-36).
by Ilya Shlyakhter and Max Rozenoer.
M.Eng.
Stojaković, Vesna. "Generisanje prostora na osnovu perspektivnih slika i primena u oblasti graditeljskog nasleđa." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2011. http://dx.doi.org/10.2298/NS20110816STOJAKOVIC.
In this research a new semi-automated normative image-based modelling system is created. The system includes a number of procedures that are used to transform a two-dimensional medium, such as photographs, into a three-dimensional structure. The approach is adjusted to the properties of complex projects in the domain of visualization of cultural heritage. An application of the system is given, demonstrating its practical value.
Persson, Thom. "Building of a Stereo Camera System." Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3579.
This project consists of a stereo camera rig that can be fitted with two DSLR cameras, together with a multithreaded application written in C++ that can move the cameras on the rig, change photo settings and take pictures. The result is 3D images that can be viewed on an autostereoscopic display. The position of the cameras is controlled by a stepper motor, which in turn is driven by a PIC microcontroller. Communication between the PIC unit and the computer takes place over USB. The camera shutters are synchronized, so it is possible to photograph moving objects at a distance of 2.5 m or more. The results show that several points must be addressed on the prototype before it can be considered ready for the market; the most important is obtaining working callbacks from the cameras.
Lin, Wei-Ming. "Constructing a GIS-based 3D urban model using LiDAR and aerial photographs." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1495.
Parmentier, Alain. "Acquisition de cartes denses pour la génération et le contrôle de formes vestimentaires." Valenciennes, 1994. https://ged.uphf.fr/nuxeo/site/esupversions/96d52b8e-37f9-4146-8eaf-405f79a9426f.
Cooper, Joseph L. "Supporting Flight Control for UAV-Assisted Wilderness Search and Rescue Through Human Centered Interface Design." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2140.pdf.
Viktora, Jakub. "Využití fotogrammetrie pro realitní praxi." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-233074.
Schindler, Grant. "Unlocking the urban photographic record through 4D scene modeling." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34719.
Dupont de Dinechin, Grégoire. "Towards comfortable virtual reality viewing of virtual environments created from photographs of the real world." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM049.
There are many applications for capturing and digitally recreating real-world people and places for virtual reality (VR), such as preserving and promoting cultural heritage sites, placing users face-to-face with faraway family and friends, and creating photorealistic replicas of specific locations for therapy and training. This is typically done by transforming sets of input images, i.e. photographs and videos, into immersive 360° scenes and interactive 3D objects. However, such image-based virtual environments are often flawed in ways that prevent them from providing users with a comfortable viewing experience. In particular, accurately recovering the scene's 3D geometry is a difficult task, causing many existing approaches to make approximations that are likely to cause discomfort, e.g. as the scene appears distorted or seems to move with the viewer during head motion. In the same way, existing solutions most often fail to render the scene's visual appearance accurately and comfortably: standard 3D reconstruction pipelines commonly average out captured view-dependent effects such as specular reflections, whereas complex image-based rendering algorithms often fail to achieve VR-compatible framerates and are likely to cause distracting visual artifacts outside of a small range of head motion. Finally, further complications arise when the goal is to virtually recreate people, as inaccuracies in the appearance of the displayed 3D characters or unconvincing responsive behavior may be additional sources of unease. Therefore, in this thesis, we investigate the extent to which users can be made more comfortable when viewing digital replicas of the real world in VR, by enhancing, combining, and designing new solutions for creating virtual environments from input sets of photographs. We demonstrate and evaluate solutions for (1) providing motion parallax during the viewing of 360° images, using a VR interface for estimating depth information, (2) automatically generating responsive 3D virtual agents from 360° videos, by combining pre-trained deep learning networks, and (3) rendering captured view-dependent effects at high framerates in a game engine widely used for VR development, which we apply to digitally recreate a museum's mineralogy collection. We evaluate and discuss each approach by way of user studies, and make our codebase available as an open-source toolkit.
Tournaire, Olivier. "Extraction 3D de marquages routiers à partir d'images aériennes multi-vues et quelques applications." Marne-la-Vallée, 2007. http://www.theses.fr/2007MARN0368.
Road detection from aerial, satellite or RADAR images has been of great interest in the image processing and photogrammetric communities since the 1970s. The topic remains challenging today, as the need for accurate and up-to-date data keeps growing. In this work, we study the road network in a restricted setting, that of horizontal road signs, i.e. road markings, using high-resolution aerial images (25 cm GSD) in a multi-view framework, with particular emphasis on urban environments. We first focus on the road-marking objects themselves, which we propose to detect and reconstruct in three-dimensional space. The general strategy is: detection and selection of image features, then three-dimensional reconstruction. This first bottom-up approach is then compared with a top-down approach involving marked point processes, which copes with some of the problems inherent in the first methods through the introduction of a priori knowledge. From the obtained results, we present a methodology for increasing the geometric quality of the reconstructed objects. All these steps rely on a set of specifications from public road services that describes the geometric shapes of road markings clearly and precisely. In a second part, we highlight the usefulness of this work in various applications. Beyond the purely informative aspect of the road-network structure derived from the markings, data useful in other processes can be derived from them. We first present a method to obtain fine georeferencing of images acquired with mobile mapping systems in urban areas: the localization problems of such systems (masks and multipath of the GPS signal) can be reduced by matching objects detected in both aerial and terrestrial imagery, benefiting jointly from the geometric quality of the objects from terrestrial imagery and the localization quality of the objects reconstructed from aerial images. We then focus on the detection and delineation of roads from seeds computed automatically from the detected and reconstructed road markings; these seeds are then fed into a region-growing algorithm.
Deák, Jaromír. "Registrace fotografií do 3D modelu terénu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363795.
Slomp, Marcos Paulo Berteli. "Real-time photographic local tone reproduction using summed-area tables." Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/34766.
High dynamic range (HDR) rendering is becoming an increasingly popular technique in computer graphics. Its challenge consists in mapping the large range of intensities in the resulting images to the much narrower range of display devices in a way that preserves contrastive details. Local tone-mapping operators perform the required compression effectively by adapting the luminance level of each pixel with respect to its neighborhood. While they generate significantly better results than global operators, their computational costs are considerably higher, which prevents their use in real-time applications. This work presents a real-time technique for approximating the photographic local tone reproduction that runs entirely on the GPU and is significantly faster than existing implementations that produce similar results. Our approach is based on the use of summed-area tables for accelerating the convolution of local neighborhoods with a box filter, and provides an attractive solution for HDR rendering applications that require high performance without compromising image quality. A survey of prefix-sum algorithms and possible improvements is also presented.
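The core idea named in this abstract, using a summed-area table so that the mean over any box-shaped neighborhood costs only four lookups, can be sketched in a few lines of NumPy; this is a plain CPU illustration under assumed image sizes, not the thesis's GPU implementation.

```python
# A summed-area table makes box-filtered local averages O(1) per query, which
# is what makes box neighbourhoods cheap enough for real-time local tone mapping.
import numpy as np

def summed_area_table(lum):
    """Cumulative sums so that sat[y, x] = sum of lum[:y, :x]."""
    return np.pad(lum, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_mean(sat, y0, x0, y1, x1):
    """Mean of lum[y0:y1, x0:x1] via four table lookups."""
    total = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
    return total / ((y1 - y0) * (x1 - x0))

lum = np.random.rand(480, 640)            # synthetic HDR luminance image
sat = summed_area_table(lum)
print(box_mean(sat, 100, 100, 132, 132))   # 32x32 local average around a pixel
```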
Poulin-Girard, Anne-Sophie. "Paire stéréoscopique Panomorphe pour la reconstruction 3D d'objets d'intérêt dans une scène." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/27073.
A wide variety of panoramic lenses are available on the market. The Panomorph lens is a panoramic anamorphic optical system with interesting characteristics: its highly non-uniform distortion profile creates areas of enhanced magnification across the field of view. For mobile robotics applications, a stereoscopic system for 3D reconstruction of objects of interest could greatly benefit from the unique features of these special lenses. Such a stereoscopic system would provide general information describing the environment surrounding the robot during navigation, while the areas of enhanced magnification give access to smaller details. However, Panomorph lenses are difficult to calibrate, which is the main reason why no research had been carried out on this topic. The main goal of this thesis is the design and development of Panomorph stereoscopic systems as well as the evaluation of their performance. The calibration of the lenses was performed using plane targets and a well-established calibration toolbox. In addition, new mathematical techniques aiming to restore the symmetry of revolution in the image and to make the focal length uniform over the field of view were developed to simplify the calibration process. First, the field of view was divided into zones exhibiting a small variation of the focal length and the calibration was performed for each zone. Then, a general calibration was performed for the entire field of view. The results showed that calibrating each zone does not lead to a better 3D reconstruction than the general calibration method. However, this new approach allowed a study of the quality of the reconstruction over the entire field of view and showed that it is possible to achieve good reconstruction in all zones of the field of view. In addition, the results for the mathematical techniques used to restore the symmetry of revolution were similar to those obtained with the original data. These techniques could therefore be used to calibrate Panomorph lenses with calibration toolboxes that do not have two degrees of freedom relating to the focal length. The study of the performance of stereoscopic Panomorph systems also highlighted important factors that could influence the choice of lenses and configuration for similar systems. The challenges met during the calibration of Panomorph lenses led to the development of a virtual calibration technique that uses optical design software and a calibration toolbox. With this technique, simulations reproducing the operating conditions were made to evaluate their impact on the calibration parameters. The quality of 3D reconstruction of a volume was also evaluated for various calibration conditions. Similar experiments would be extremely tedious to perform in the laboratory, but the results are quite meaningful for the user. The virtual calibration of a traditional lens also showed that the mean reprojection error, often used to judge the quality of the calibration process, does not represent the quality of the 3D reconstruction. It is therefore essential to have access to more information in order to assess the quality of a lens calibration.
Rossi, Romain. "Reconstruction 3D volumétrique par vision omnidirectionnelle sur architecture massivement parallèle." Rouen, 2011. http://www.theses.fr/2011ROUES025.
3D reconstruction of an unknown scene is a classical computer vision problem. Usual solutions, which use a pair of cameras in a stereoscopic configuration and an algorithm relying on image disparities, do not allow creating a densely sampled 3D model. Moreover, processing this model in real time is a complex task which often needs an implementation on dedicated hardware (FPGA or DSP), very powerful but hard to use. In this thesis, we propose a volumetric reconstruction method aiming to produce a high-resolution 3D model of the scene surrounding a mobile robot. A pair of catadioptric cameras allows panoramic acquisition of the whole scene. The reconstruction algorithm, adapted to the massively parallel architecture of a very powerful and inexpensive Graphics Processing Unit (GPU), limits data dependencies to improve performance. The reconstruction method also benefits from additional pictures, taken as the robot moves in the scene, to incrementally improve the 3D model. The final results are qualitatively equivalent to the ones obtained with classical methods, but our approach allows a far better 3D resolution (500x500x200 voxels) with a very short running time (about 5 seconds for each reconstruction). The real-time objective (2 reconstructions per second) can even be reached at a lower resolution (150x150x150 voxels). Experimental results on real images validate the proposed approach.
Beauvivre, Stéphane. "Evaluation des performances de microcaméras réalisées en technologie 3D pour la mission spatiale cométaire ROSETTA." Montpellier 2, 1999. http://www.theses.fr/1999MON20037.
Chaibi, Yasmina. "Adaptation des méthodes de reconstruction 3D rapides par stéréoradiographie : Modélisation du membre inférieur et calcul des indices cliniques en présence de déformation structurale." Paris, ENSAM, 2010. http://www.theses.fr/2010ENAM0013.
Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.
Pitout, Cédric. "Conception et utilisation d'un système d'information géographique pour l'étude et le suivi de sites industriels pollués : Analyse spatiale 2D-3D. Analyse multiparamètre." Lille 1, 2000. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2000/50377-2000-23.pdf.
Buchholz, Bert. "Abstraction et traitement de masses de données 3D animées." Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00958339.
Attia, Youssef. "Interfaçage de bases de données photographiques et géographiques par appariement de lignes." Phd thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00944135.
Fernandez, Julia Laura. "Avancements dans l'estimation de pose et la reconstruction 3D de scènes à 2 et 3 vues." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1157/document.
The study of cameras and images has been a prominent subject since the beginning of computer vision, with pose estimation and 3D reconstruction among the main focuses. The goal of this thesis is to tackle and study some specific problems and methods of the structure-from-motion pipeline in order to provide improvements in accuracy, broad studies to comprehend the advantages and disadvantages of the state-of-the-art models, and useful implementations made available to the public. More specifically, we center our attention on stereo pairs and triplets of images and discuss some of the methods and models able to provide pose estimation and 3D reconstruction of the scene. First, we address the depth estimation task for stereo pairs using block matching. This approach implicitly assumes that all pixels in the patch have the same depth, producing the common artifact known as the "foreground fattening effect". In order to find a more appropriate support, Yoon and Kweon introduced the use of weights based on color similarity and spatial distance, analogous to those used in the bilateral filter. We present the theory of this method and the implementation we have developed with some improvements. We discuss some variants of the method and analyze its parameters and performance. Secondly, we consider the addition of a third view and study the trifocal tensor, which describes the geometric constraints linking the three views. We explore the advantages offered by this operator in the pose estimation of a triplet of cameras, as opposed to computing the relative poses pair by pair using the fundamental matrix. In addition, we present a study and implementation of several parameterizations of the tensor. We show that the initial improvement in accuracy of the trifocal tensor is not enough to have a remarkable impact on the pose estimation after bundle adjustment, and that using the fundamental matrix with image triplets remains relevant. Finally, we propose using a different projection model than the pinhole camera for the pose estimation of perspective cameras. We present a method based on the matrix factorization due to Tomasi and Kanade that relies on the orthographic projection. This method can be used in configurations where other methods fail, in particular when using cameras with long focal length lenses. The performance of our implementation of this method is compared to that given by perspective-based methods; we consider that the accuracy achieved and its robustness make it worth considering in any SfM procedure.
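To make the adaptive-support-weight idea mentioned above concrete, here is a small Python/NumPy sketch, not the thesis code: each pixel in a patch is weighted by its colour similarity and spatial proximity to the patch centre, following Yoon and Kweon's formulation; the gamma values and the simple absolute-difference cost are illustrative assumptions (the original works in the CIELab colour space with a truncated cost).

```python
# Adaptive support weights: pixels that look like the patch centre and lie
# close to it contribute more to the aggregated matching cost.
import numpy as np

def support_weights(patch, gamma_c=10.0, gamma_s=10.5):
    """patch: (H, W, 3) colour patch centred on p. Returns (H, W) weights."""
    h, w, _ = patch.shape
    cy, cx = h // 2, w // 2
    dc = np.linalg.norm(patch.astype(float) - patch[cy, cx].astype(float), axis=2)
    yy, xx = np.mgrid[0:h, 0:w]
    ds = np.hypot(yy - cy, xx - cx)
    return np.exp(-(dc / gamma_c + ds / gamma_s))

def weighted_cost(left_patch, right_patch):
    """Aggregated absolute-difference cost with combined left/right weights."""
    wl = support_weights(left_patch)
    wr = support_weights(right_patch)
    e = np.abs(left_patch.astype(float) - right_patch.astype(float)).sum(axis=2)
    return float((wl * wr * e).sum() / (wl * wr).sum())
```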
Guezzi, Messaoud Fadoua. "Analyse de l'apport des technologies d'intégration tri-dimensionnelles pour les imageurs CMOS : application aux imageurs à grande dynamique." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1022/document.
With the increase in system complexity, integrating different technologies together has become a major challenge. Another traditional limitation has been the throughput between different parts of a system, constrained by the interconnections. While two-dimensional integration solutions such as System in Package (SiP) bring heterogeneous technologies together, they are still limited by the restricted number and length of interconnections between the different system components. Three-dimensional (3D) stacking, by exploiting short vertical interconnections between circuits of mixed technologies, has the potential to overcome these limitations. Still, despite strong interest in 3D concepts, there is no advanced analysis of the benefits of 3D integration, especially in the field of imagers and smart image sensors. This thesis studies the potential benefits of 3D integration, with local processing and short feedback loops, for the realisation of a high dynamic range (HDR) image sensor. The dense vertical interconnections are used to locally adapt the integration time per group of pixels, called macro-pixels, while keeping a classic pixel architecture and hence a high fill factor. Stacking the pixel section and the circuit section enables a compact pixel and the integration of flexible and versatile functions. Since high dynamic range values produce a large quantity of data, data compression was chosen to reduce the required circuit throughput. A first level of compression is obtained by coding the pixel values in a floating-point format with a common exponent shared within each macro-pixel. A second level of compression is proposed, based on a simplified version of the discrete cosine transform (DCT). Using this two-level scheme, a compression of 93% can be obtained with a typical PSNR of 30 dB. The architecture was validated through the design, fabrication and test of a prototype in a 2D, 180 nm CMOS technology. A few pixels of each macro-pixel had to be sacrificed to implement the high-dynamic-range control signals and emulate the 3D integration. The test results are very promising, demonstrating the benefits that 3D integration will bring in terms of power consumption and image quality compared to a classic 2D integration. Future realisations of this architecture in a true 3D technology, separating sensing and processing onto different circuits communicating through vertical interconnections, will not need to sacrifice any pixels to adjust the integration time, improving power consumption, image quality and latency.
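As a rough sketch of the first compression level described above (a floating representation with one exponent shared per macro-pixel), the following Python fragment encodes and decodes a small block; the block size, mantissa width and rounding are assumptions for illustration, and the thesis's exact coding may differ.

```python
# Shared-exponent encoding: one exponent per macro-pixel, one mantissa per pixel.
import numpy as np

def encode_macro_pixel(block, mantissa_bits=8):
    """block: 2-D array of non-negative HDR values from one macro-pixel."""
    peak = float(block.max())
    exponent = max(int(np.ceil(np.log2(peak + 1e-12))), 0)   # shared exponent
    scale = (2 ** mantissa_bits - 1) / (2.0 ** exponent)
    mantissas = np.round(block * scale).astype(np.uint16)
    return exponent, mantissas

def decode_macro_pixel(exponent, mantissas, mantissa_bits=8):
    return mantissas.astype(float) * (2.0 ** exponent) / (2 ** mantissa_bits - 1)

block = np.array([[1200.0, 1310.0], [980.0, 1255.0]])        # synthetic 2x2 macro-pixel
e, m = encode_macro_pixel(block)
print(e, decode_macro_pixel(e, m))
```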
Batmaz, Anil Ufuk. "Speed, precision and grip force analysis of human manual operations with and without direct visual input." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAJ056/document.
The perceptual system of a surgeon must adapt to multisensory constraints in the planning, control and execution of image-guided surgical operations. Three experimental setups were designed to explore these visual and haptic constraints in image-guided training. Results show that subjects are faster and more precise with direct vision than with image guidance. Stereoscopic 3D viewing does not represent a performance advantage for complete beginners. In virtual reality, variations in object length, width, position and complexity affect motor performance. The grip force applied on a surgical robot system depends on the user's experience level. In conclusion, both time and precision matter critically, but ensuring that trainees become as precise as possible before becoming faster should be a priority. Study group homogeneity and background play a key role in surgical training research. The findings have direct implications for individual skill monitoring in image-guided applications.
Ting, Yu-Hisn, and 丁友信. "3D Imaging based on Integral Photography Technology." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/81689620600188640454.
Chang Gung University (長庚大學)
Graduate Institute of Semiconductor Technology (半導體科技研究所)
Academic year 91 (ROC calendar)
Integral photography (IP) can be regarded as a method of capturing and displaying the light rays passing through a plane. Because a three-dimensional (3-D) autostereoscopic image can be seen from a designed viewpoint without any special viewing glasses, IP is an ideal method for creating 3-D autostereoscopic images. In the conventional IP method, film or a CCD camera is placed immediately behind a lens array. In this thesis, the author proposes a method of analyzing the maximum resolution and viewing angle of IP images simulated with the ASAP software, which makes it possible to obtain optimal reconstructed image quality. The final simulation results show the potential of building the whole IP optical system using computer simulation technologies.
Jirathamopas, Jinwara, and 曾玲玲. "Female facial attractiveness Assessed by 2D Photography and 3D Face-scan." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/56485205244003294403.
Chang Gung University (長庚大學)
Graduate Institute of Craniofacial and Oral Medicine (顱顏口腔醫學研究所)
Academic year 102 (ROC calendar)
Background: This is a serial study 1) to evaluate the consistency of the perception of female facial attractiveness across gender, age and professional background, and 2) to determine whether contour lines can be used to evaluate facial attractiveness. Materials and methods: A series of 100 female 2D photographs (one frontal, two lateral views) was projected on a screen. Each photo was shown for 5 seconds and raters marked their impression of facial attractiveness on a 5-point Likert scale within 3 seconds. Raters included hospital staff and laypeople. The consistency of the perception of facial attractiveness was compared between raters according to gender, age and professional background. The same protocol was carried out with 100 contour-line images extracted from 3D images of the same subjects. This evaluation was performed twice, 2 weeks apart, with laypeople as the only raters. The consistency of the perception of the facial contour lines and the correlation between the mean attractiveness scores of the 2D photos and of the contour lines were calculated. Results: High consistency was found for all comparisons. In the evaluation of the 2D photos, females gave higher scores than males, and the difference was significant among laypeople (p = 0.011). There was no significant difference between the ratings of senior and junior raters (p = 0.457 and 0.781 for hospital staff and laypeople, respectively). Hospital staff rated significantly higher than laypeople (p = 0.005). In the evaluation of the contour lines, females gave higher scores than males, and the difference was significant in the second rating (p = 0.017). The correlation between contour lines and attractiveness was r = 0.576 and 0.574 for the first and second evaluation, respectively. Conclusion: The perception of female facial attractiveness from 2D photos or contour lines was very consistent. Only gender and professional background influenced the perception of female attractiveness. The correlation between facial attractiveness and contour lines was moderate.
Hasinoff, Samuel William. "Variable-aperture Photography." Thesis, 2008. http://hdl.handle.net/1807/16734.
Weng, Chia-Duo, and 翁家鐸. "Application of VR object Photography in on-line displaying of 3D products." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65684546498189632756.
National Taipei University of Education (國立臺北教育大學)
Department of Cultural and Creative Industries Management (文化創意產業經營學系)
Academic year 99 (ROC calendar)
Image-based object movies allow immediate interaction with virtual three-dimensional representations of real objects and convey rich visual information about them; they have been regarded as an effective means of recording and displaying heritage objects. This study drew on document analysis, marketing theory, implementation and verification, and expert interviews to explore the development of online three-dimensional product display. The results showed that the object-movie display and its applications were well received and considered very useful for online shopping, especially for the display of higher-priced fine goods. However, network bandwidth, production cost and convenience still leave room for improvement.
Wang, Jia-Hong, and 翁嘉鴻. "Achieving Floating 3D Image with Applying Integral Photography Theory in Oblique Viewing Angle." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/19481816671249812222.
National Chiao Tung University (國立交通大學)
Institute of Electro-Optical Engineering (光電工程研究所)
Academic year 103 (ROC calendar)
With the rapid growth of three-dimensional (3D) stereoscopic image technology, its applications and potential have become more and more important in modern life. However, compared with the "pop-out" 3D images presented in the direct viewing angle by flat-panel displays and cinema, the "floating" 3D images that people can perceive and interact with in science-fiction movies are far more attractive. The floating 3D image is therefore a landmark for the next generation of display technology. To achieve this target, 3D imaging at oblique viewing angles, in other words the floating 3D image, is the essential technology. Nevertheless, floating-image displays currently face issues of inconvenience, limited application and bulky, volumetric construction; the technology is not yet mature enough to be user-friendly. Hence, we apply a simple concept to achieve the floating 3D image and expect the concept to be applicable to various structures, including projectors and mobile devices. The concept is integral photography (IP) at an oblique viewing angle. Unlike the disparity method of current 3D technology, which presents different views to the left and right eyes to form the 3D image, IP reconstructs the light field, showing a true 3D image without causing visual fatigue. In our research, the effects and issues of projection-type and display-type integral imaging (InI) at oblique viewing angles using a micro-lens array and a pinhole array are discussed in detail. Moreover, a human-factors experiment was conducted, and the factors that may influence the image quality of the floating InI, such as floating height and viewing angle, are carefully analyzed. The work applies the IP concept to a projector and a mobile device, achieving good floating 3D images with a simple structure and convenient adjustment, and demonstrating the versatility of the concept. Furthermore, the true 3D image obtained by light-field reconstruction can satisfy interaction requirements, and rapid computational generation of the image content, without a capture stage, increases the freedom of the displayed content and shows potential for studying many other factors in the future. This research leads us step by step toward the fascinating world of floating 3D image technology.
Fiveash, Tina Dale. "The enigma of appearances: photography of the third dimension." Media Arts, College of Fine Arts, UNSW, 2007. http://handle.unsw.edu.au/1959.4/44259.
Almeida, Vítor Miguel Amorim de. "3D reconstruction through photographs." Master's thesis, 2014. http://hdl.handle.net/10400.13/1057.
Wang, Yu-Chi, and 王煜智. "Reconstructing 3D Model of Real Scenes from Photographs." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/10929116439838702092.
National Cheng Kung University (國立成功大學)
Institute of Computer and Communication Engineering (電腦與通信工程研究所)
Academic year 93 (ROC calendar)
In this paper, we present a system which automatically extracts 3D information and reconstructs a textured 3D model from a sequence of images of a real scene. No prior knowledge about the scene is needed to build the 3D models; all information, such as camera pose and orientation, is estimated during the process. The system therefore offers a high degree of flexibility when taking photographs; the only constraint is that the intrinsic camera parameters need to be obtained first. The 3D modeling task is decomposed into four successive steps. First, the camera intrinsic parameters are calibrated using a calibration board. Second, the camera pose and the epipolar geometry of a stereoscopic image pair are estimated from the corresponding points of the pair. Next, consecutive images of the sequence are treated as stereo pairs and disparity maps are computed by area matching. Finally, dense 3D points are estimated by linking matches through consecutive image pairs. These 3D points are then visualized as a 3D model, which is also texture-mapped for a photo-realistic appearance. The system has been tested on several real scenes, and some of the reconstructed models are shown in this paper.
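The pose-estimation and triangulation steps outlined in this abstract can be sketched with OpenCV as follows; the variables pts1, pts2 (matched image points) and the intrinsic matrix K are assumed to come from earlier calibration and matching stages, and this is an illustrative sketch rather than the system described in the thesis.

```python
# Two-view pose estimation and triangulation from matched points.
import cv2
import numpy as np

def two_view_reconstruction(pts1, pts2, K):
    """pts1, pts2: (N, 2) float arrays of corresponding points; K: 3x3 intrinsics."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)          # relative camera pose
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # first camera at origin
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)        # 4xN homogeneous points
    return R, t, (X[:3] / X[3]).T                            # Nx3 Euclidean points
```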
Wu, Sara, and 吳宜樺. "To construct 3D human model from 2D photograph." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/76995668107455197224.
Full textMillar, Usiskin Josh. "4D reconstruction of construction photographs." 2013. http://hdl.handle.net/1993/22057.
Full textAbson, Karl, Hassan Ugail, and Stanley S. Ipson. "A methodology for feature based 3D face modelling from photographs." 2008. http://hdl.handle.net/10454/2435.
Full textHsu, Lung-kai, and 徐瓏愷. "The Research and Creation of 3D Stereoscopic Animation Applying Stereoscopic Display -"The Photographer"." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/06049405591227833971.
National Yunlin University of Science and Technology (國立雲林科技大學)
Graduate School of Computational Design, master's program (設計運算研究所碩士班)
Academic year 100 (ROC calendar)
Since 2010, stereoscopic display technology has become mainstream, and many other stereoscopic-effect technologies have developed along with this trend. However, one problem has emerged: the production of digital content is far slower than the development of the new hardware. This 3D computer animation work therefore applies stereoscopic display, using a shutter-glasses system as the basic hardware, to create and present the 3D stereoscopic animated short film "The Photographer". The film is used to analyze and record the problems related to 3D display technology that arose during the production process. The results show that making a 3D animation is not only a matter of setting up two cameras; it involves stereoscopy and the analysis of the parameters that strongly affect the viewers' physical comfort, something all 3D animators should be aware of. After completing the film, the creator found that planning the whole production process matters most when making a stereoscopic animation, since it is more complicated and time-consuming than conventional films. Hence, while making the film, the creator also tried to establish a standard operating procedure for the creation of stereoscopic animation for future reference.
Ciou, Jian-Wei, and 邱健瑋. "The Application of 3D Reconstruction and Measurement Based on Single Image of Oblique Photograph by UAV." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/an3c94.
National Kaohsiung University of Applied Sciences (國立高雄應用科技大學)
Institute of Civil Engineering and Disaster Prevention Technology (土木工程與防災科技研究所)
Academic year 104 (ROC calendar)
In the early period it was not easy to take measurements from images obtained by traditional photography, and in many cases dimensions could not be measured from the images at all, so oblique photography received little further attention. However, since multi-view stereo (MVS) 3D reconstruction came into use, the flight requirements for UAV imagery have been relaxed. Combining a compact digital camera with an unmanned aerial vehicle (UAV) not only reduces the cost of image acquisition, but also provides resolution and stability much higher than conventional aerial photography. This study therefore covers camera calibration and 3D reconstruction and generates true orthophotos based on modern multi-view stereo processing. Before camera self-calibration was developed, pre-calibration was used to solve the intrinsic parameters and exterior orientation parameters; pre-calibration gives stable results, but its requirements are complex and time-consuming. Self-calibration improves on these problems, but its results can be unstable, so the accuracy of camera self-calibration is analyzed. To compare the applicability of the calibration methods, the experiment selects a region of National Kaohsiung University of Applied Sciences and uses different numbers of control points to explore the calibration results. Today's commercial oblique-photography processing systems and 3D reconstruction rely on traditional photogrammetric aerial triangulation with high-resolution metric cameras, which are expensive; this study instead uses a digital camera carried by a UAV and modern structure-from-motion (SfM) stereo-vision reconstruction, mainly discussing the completeness of the oblique-photography model and the accuracy of the 3D reconstruction, and providing a DEM for measurement on single oblique photographs. The experiment selects National Kaohsiung University of Applied Sciences and Taichung City for reconstruction and explores the completeness and accuracy of the 3D reconstructed models. In order to use the images quickly, this research applies the collinearity principle to measure single oblique photographs, mainly because oblique images are rich in lateral information and can fully display the characteristics and structure of objects. A region of National Kaohsiung University of Applied Sciences is selected to compare the differences between the dense point cloud, single-image measurement and field survey. For camera calibration, there is no significant difference between the two methods; self-calibration even outperforms pre-calibration. The 3D reconstruction experiments show that, thanks to the large number of observations, the accuracy of the oblique-photography method is better than that of traditional photography, and a more complete model is obtained. Single-image measurement obtains 3D information quickly with differences of less than 2 pixels; therefore, the direct measurement method can provide imagery for disaster response or as a reference for field surveys.
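For reference, the collinearity principle invoked above relates a ground point to its image coordinates through the exterior and interior orientation; a minimal Python sketch of the forward projection is given below. The rotation convention and symbol names are assumptions; single-image measurement inverts this relation using heights taken from the DEM.

```python
# Collinearity projection of a ground point (X, Y, Z) into image coordinates,
# given the projection centre Xs, the object-to-image rotation R, the focal
# length f and the principal point (x0, y0).
import numpy as np

def collinearity_project(X, R, Xs, f, x0=0.0, y0=0.0):
    """X, Xs: 3-vectors (object point, projection centre); R: 3x3 rotation."""
    u = R @ (np.asarray(X, float) - np.asarray(Xs, float))
    x = x0 - f * u[0] / u[2]
    y = y0 - f * u[1] / u[2]
    return x, y
```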
Chu, Te-Yuen, and 朱德原. "The application of terrestrial 3D Laser Scanner and Aerial Photographs to investigate the sea cliff changes over the last decade at Chi Lai Bi, Hualien." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/55773142122810733378.
National Hualien University of Education (國立花蓮教育大學)
Institute of Earth Sciences (地球科學研究所)
Academic year 96 (ROC calendar)
Abstract: Chi-Lai-Bi sea cliff is a natural sea cliff without any hard engineering protection against wave erosion. This study uses aerial photographs (taken on October 13, 1993, April 15, 2002 and September 5, 2002) and digital photographs (taken on June 27, 2005 and September 5, 2006) to establish five 2-meter-resolution Digital Surface Models (DSM) of the Chi-Lai-Bi sea cliff, Hualien, and the surrounding area, in order to investigate medium-term topographic change of the cliff. For short-term topographic change, a terrestrial 3D laser scanner (used on July 3, August 30 and November 16, 2007) was employed to reconstruct three 10-centimeter-resolution DSMs of the cliff. In addition, a LiDAR (Light Detection And Ranging) survey carried out by the 9th River Management Bureau on July 5, 2005 was used to create a Digital Terrain Model (DTM) of the Chi-Hsing-Tan area and of the Chi-Lai-Bi sea cliff for comparison of the short-term change. The medium-term results show: (1) recession rates measured over shorter time spans are higher than those measured over longer spans; for example, from June 2005 to September 2006 the average recession of the cliff was 1.62 meters, whereas from October 1993 to April 2002 the annual mean recession was 1.25 meters and from October 1993 to September 2006 it was 1.3 meters; (2) the Chi-Lai-Bi sea cliff shows both erosion and accumulation areas: the erosion areas at the cliff margin are mainly caused by waves and typhoons, while the accumulation areas result from human interference such as a landfill site and the storage of stone materials in a nearby open space. The short-term results from the terrestrial 3D laser scanner and the LiDAR data show: (1) over the 10,520 square meters of the study area there was a volume reduction of 5,600 cubic meters and an average height reduction of 0.53 m between July 3, 2007 and November 16, 2007, during the typhoon season; (2) ten west-east profiles of the sea cliff DSM from north to south show two types of erosion trend: in the first, the cliff above 5 meters from the sea surface has been eroded and the eroded material is either accumulated at the foot of the cliff or removed by waves; in the second, the cliff shows no obvious change; (3) comparison of the cliff change derived from the LiDAR data and the latest terrestrial 3D laser scanner data shows a pattern similar to that obtained using the terrestrial laser scanner data alone.
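The volume and mean-height changes reported above come from differencing co-registered DSMs; a minimal Python sketch of that computation is shown below, with the grid resolution and the absence of no-data handling as simplifying assumptions.

```python
# DSM differencing: per-cell height change integrated over the cell area.
import numpy as np

def dsm_change(dsm_before, dsm_after, cell_size=0.1):
    """Both DSMs are 2-D height grids on the same (assumed 10 cm) grid."""
    diff = dsm_after - dsm_before                       # per-cell height change (m)
    volume = float(diff.sum() * cell_size ** 2)         # net volume change (m^3)
    mean_height_change = float(diff.mean())             # average height change (m)
    return volume, mean_height_change
```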
Egoda, Gamage Ruwan Janapriya. "A high resolution 3D and color image acquisition system for long and shallow impressions in crime scenes." Thesis, 2014. http://hdl.handle.net/1805/5906.
Full textIn crime scene investigations it is necessary to capture images of impression evidence such as tire track or shoe impressions. Currently, such evidence is captured by taking two-dimensional (2D) color photographs or making a physical cast of the impression in order to capture the three-dimensional (3D) structure of the information. This project aims to build a digitizing device that scans the impression evidence and generates (i) a high resolution three-dimensional (3D) surface image, and (ii) a co-registered two-dimensional (2D) color image. The method is based on active structured lighting methods in order to extract 3D shape information of a surface. A prototype device was built that uses an assembly of two line laser lights and a high-definition video camera that is moved at a precisely controlled and constant speed along a mechanical actuator rail in order to scan the evidence. A prototype software was also developed which implements the image processing, calibration, and surface depth calculations. The methods developed in this project for extracting the digitized 3D surface shape and 2D color images include (i) a self-contained calibration method that eliminates the need for pre-calibration of the device; (ii) the use of two colored line laser lights projected from two different angles to eliminate problems due to occlusions; and (iii) the extraction of high resolution color image of the impression evidence with minimal distortion.The system results in sub-millimeter accuracy in the depth image and a high resolution color image that is registered with the depth image. The system is particularly suitable for high quality images of long tire track impressions without the need for stitching multiple images.