Dissertations on the topic "Scene depth"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Scene depth".
Next to every work in the list, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Oliver, Parera Maria. „Scene understanding from image and video : segmentation, depth configuration“. Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663870.
This thesis aims to analyze images and videos at the object level, with the goal of decomposing the scene into complete objects that move and interact with one another. The thesis is divided into three parts. First, we propose a segmentation method to decompose the scene into the shapes that compose it. Next, we propose a probabilistic method which considers the shapes or objects at two different scene depths and infers which objects are in front of the others, also completing the partially occluded objects. Finally, we propose two methods related to video inpainting. On the one hand, we propose a method for binary video inpainting that uses the optical flow of the video to complete the shapes over time, taking their motion into account. On the other hand, we propose a method for optical flow inpainting that takes into account the information coming from the frames.
Mitra, Bhargav Kumar. „Scene segmentation using similarity, motion and depth based cues“. Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.
Malleson, Charles D. „Dynamic scene modelling and representation from video and depth“. Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/809990/.
Stynsberg, John. „Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking“. Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.
Elezovikj, Semir. „FOREGROUND AND SCENE STRUCTURE PRESERVED VISUAL PRIVACY PROTECTION USING DEPTH INFORMATION“. Master's thesis, Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/259533.
M.S.
We propose the use of depth information to protect privacy in person-aware visual systems while preserving important foreground subjects and scene structures. We aim to preserve the identity of foreground subjects while hiding superfluous details in the background that may contain sensitive information. We achieve this goal by using depth information and relevant human detection mechanisms provided by the Kinect sensor. In particular, for an input color and depth image pair, we first create a sensitivity map which favors background regions (where privacy should be preserved) and low depth-gradient pixels (which often relate strongly to scene structure but little to identity). We then combine this per-pixel sensitivity map with an inhomogeneous image obscuration process for privacy protection. We tested the proposed method using data covering different scenarios, including various illumination conditions, numbers of subjects, and contexts. The experiments demonstrate the quality of preserving the identity of humans and the edges obtained from the depth information while obscuring privacy-intrusive information in the background.
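The core of the approach is a per-pixel sensitivity map built from a foreground mask and the depth gradient, followed by selective obscuration. The sketch below is a minimal illustration of that idea, not the authors' implementation; the `person_mask` input (e.g., from the Kinect body tracker) and both threshold parameters are assumptions.

```python
import cv2
import numpy as np

def privacy_obscure(color, depth, person_mask, grad_thresh=0.05, sigma=15):
    """Blur sensitive background regions while keeping foreground people
    and strong depth edges (scene structure) recognizable."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (np.ptp(d) + 1e-6)        # normalize depth to [0, 1]
    gx = cv2.Sobel(d, cv2.CV_32F, 1, 0, ksize=3)  # depth gradients
    gy = cv2.Sobel(d, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)
    # Sensitive: background pixels whose depth gradient is low, i.e.
    # texture detail that reveals identity but carries little structure.
    sensitive = (~person_mask) & (grad < grad_thresh)
    blurred = cv2.GaussianBlur(color, (0, 0), sigma)
    out = color.copy()
    out[sensitive] = blurred[sensitive]
    return out
```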
Temple University--Theses
Quiroga, Sepúlveda Julián. „Scene Flow Estimation from RGBD Images“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM057/document.
This thesis addresses the problem of reliably recovering a 3D motion field, or scene flow, from a temporal pair of RGBD images. We propose a semi-rigid estimation framework for the robust computation of scene flow, taking advantage of color and depth information, and an alternating variational minimization framework for recovering rigid and non-rigid components of the 3D motion field. Previous attempts to estimate scene flow from RGBD images have extended optical flow approaches without fully exploiting depth data or have formulated the estimation in 3D space disregarding the semi-rigidity of real scenes. We demonstrate that scene flow can be robustly and accurately computed in the image domain by solving for 3D motions consistent with color and depth, encouraging an adjustable combination between local and piecewise rigidity. Additionally, we show that solving for the 3D motion field can be seen as a specific case of a more general estimation problem of a 6D field of rigid motions. Accordingly, we formulate scene flow estimation as the search for an optimal field of twist motions, achieving state-of-the-art results.
Forne, Christopher Jes. „3-D Scene Reconstruction from Multiple Photometric Images“. Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1227.
Rehfeld, Timo [Verfasser], Stefan [Akademischer Betreuer] Roth and Carsten [Akademischer Betreuer] Rother. „Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding / Timo Rehfeld ; Stefan Roth, Carsten Rother“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1157011950/34.
Jaritz, Maximilian. „2D-3D scene understanding for autonomous driving“. Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and fusion of heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to directly map sensor input (camera image) to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward by interacting with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time is still very long, which is why we focus on perception to study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection: we compute image features with a 2D CNN over multiple views and then lift them to a global 3D point cloud for fusion, followed by a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
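The first fusion method starts by projecting LiDAR returns into the image plane to obtain the sparse depth input for the depth-completion network. A minimal sketch of that projection step follows, assuming the points have already been transformed into the camera frame and that `K` holds pinhole intrinsics; this is illustrative, not the thesis code.

```python
import numpy as np

def project_to_sparse_depth(points_cam, K, h, w):
    """Project Nx3 LiDAR points (camera frame) into image space,
    producing a sparse depth map (0 = no return)."""
    depth = np.zeros((h, w), dtype=np.float32)
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    valid = Z > 0                                    # keep points in front of the camera
    u = (K[0, 0] * X[valid] / Z[valid] + K[0, 2]).astype(int)
    v = (K[1, 1] * Y[valid] / Z[valid] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], Z[valid][inside]
    order = np.argsort(-z)                # write far points first, near points last,
    depth[v[order], u[order]] = z[order]  # so the nearest return wins per pixel
    return depth
```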
Diskin, Yakov. „Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision“. University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.
Macháček, Jan. „Fokusovací techniky optického měření 3D vlastností“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442511.
Kaiser, Adrien. „Analyse de scène temps réel pour l'interaction 3D“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT025/document.
This PhD thesis focuses on the problem of analyzing visual scenes captured by commodity depth sensors in order to convert their data into a high-level understanding of the scene. It explores the use of 3D geometry analysis tools on visual depth data in terms of enhancement, registration and consolidation. In particular, we aim to show how shape abstraction can generate lightweight representations of the data for fast analysis with low hardware requirements. This last property is important, as one of our goals is to design algorithms suitable for live embedded operation in, e.g., wearable devices, smartphones or mobile robots. The context of this thesis is the live operation of 3D interaction on a mobile device, which raises numerous issues, including placing 3D interaction zones in relation to real surrounding objects, tracking the interaction zones in space when the sensor moves, and providing a meaningful and understandable experience to non-expert users. Towards solving these problems, we make contributions where scene abstraction leads to fast and robust sensor localization as well as efficient frame data representation, enhancement and consolidation. While simple geometric surface shapes are not as faithful as heavy point sets or volumes for representing observed scenes, we show that they are an acceptable approximation and that their light weight makes them well balanced between accuracy and performance.
Clarou, Alphonse. „Catastrophe et répétition : une intelligence du théâtre“. Thesis, Paris 10, 2016. http://www.theses.fr/2016PA100053.
Catastrophe and Repetition make one think about theater. They give an intelligence of it. « An intelligence of theater »: one or several ideas of this last we can arrive at, using forms, notions or principles which don't essentially originate in this art itself. Thus, starting from Jean Genet's propositions in The Tightrope-Walker, notably viewed in the light of Georges Bataille's The Dead Man; from bullfighting and especially a sentence pronounced by the matador José Tomás, who claims he's leaving his « body at the hotel » before making his entrance unto the arena floor (akin to the « acteur en vrai » who committed suicide in Valère Novarina's Le Théâtre des paroles); as well as from several poetic, urban, or theoretical representations of the city as a scene operated upon by « Death » (the one from which we do not yet die: this preceding world, this « desperate and radiant region where the artist operates », as Genet put it); starting from these texts and images, Catastrophe and Repetition form a pensive couple, and sometimes their work has something to tell about those scenes where love and death have their entries. Something to tell about their specificities, about what grounds and structures them, what keeps them standing or makes them come undone. Something to tell about what makes a scene.
Firman, M. D. „Learning to complete 3D scenes from single depth images“. Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1532193/.
Clark, Laura. „"But ayenste deth may no man rebell" Death scenes as tools for characterization in Thomas Malory's "Morte d'Arthur" /“. Ann Arbor, Mich. : ProQuest, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1453248.
Title from PDF title page (viewed Mar. 16, 2009). Source: Masters Abstracts International, Volume: 46-06, page: 2989. Adviser: Bonnie Wheeler. Includes bibliographical references.
Deng, Zhuo. „RGB-DEPTH IMAGE SEGMENTATION AND OBJECT RECOGNITION FOR INDOOR SCENES“. Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/427631.
Ph.D.
With the advent of the Microsoft Kinect, the landscape of various vision-related tasks has changed. Firstly, using an active infrared structured-light sensor, the Kinect can directly provide depth information that is hard to infer from traditional RGB images. Secondly, RGB and depth information are generated synchronously and can be easily aligned, which makes their direct integration possible. In this thesis, I propose several algorithms and systems that focus on how to integrate depth information with traditional visual appearance for addressing different computer vision applications. These applications cover both low-level (image segmentation, class-agnostic object proposals) and high-level (object detection, semantic segmentation) computer vision tasks. To first understand whether and how depth information helps improve computer vision performance, I start with image segmentation, a fundamental problem that has been studied extensively in natural color images. We propose an unsupervised segmentation algorithm that is carefully crafted to balance the contribution of color and depth features in RGB-D images. The segmentation problem is then formulated as solving a Maximum Weight Independent Set (MWIS) problem. Given superpixels obtained from different layers of a hierarchical segmentation, the saliency of each superpixel is estimated based on a balanced combination of features originating from depth, gray-level intensity, and texture information. We evaluate the segmentation quality based on five standard measures on the commonly used NYU-v2 RGB-Depth dataset. A surprising finding from the experiments is that unsupervised segmentation of RGB-D images yields results comparable to supervised segmentation. In image segmentation, an image is partitioned into several groups of pixels (or superpixels). We take one step further to investigate the problem of assigning class labels to every pixel, i.e., semantic scene segmentation. We propose a novel image region labeling method which augments the CRF formulation with hard mutual exclusion (mutex) constraints. This way our approach can make use of the rich and accurate 3D geometric structure coming from the Kinect in a principled manner. The final labeling result must satisfy all mutex constraints, which allows us to eliminate configurations that violate common-sense physics, like placing a floor above a nightstand. Three classes of mutex constraints are proposed: a global object co-occurrence constraint, a relative height relationship constraint, and a local support relationship constraint. Segments obtained from image segmentation can be either too fine or too coarse. A full object region not only conveys global features but also arguably enriches contextual features, as confusing background is separated. We propose a novel unsupervised framework for automatically generating bottom-up, class-independent object candidates for detection and recognition in cluttered indoor environments. Utilizing the raw depth map, we propose a novel plane segmentation algorithm for dividing an indoor scene into predominant planar regions and non-planar regions. Based on this partition, we are able to effectively predict object locations and their spatial extents. Our approach automatically generates object proposals considering five different aspects: Non-planar Regions (NPR), Planar Regions (PR), Detected Planes (DP), Merged Detected Planes (MDP) and Hierarchical Clustering (HC) of 3D point clouds.
Object region proposals include both bounding boxes and instance segments. Although 2D computer vision tasks can roughly identify where objects are placed on the image plane, their true locations and poses in the physical 3D world are difficult to determine due to multiple factors such as occlusions and the uncertainty arising from perspective projection. However, it is very natural for human beings to understand how far objects are from the viewer, their poses, and their full extents from still images. These kinds of features are extremely desirable for many applications such as robot navigation, grasp estimation, and Augmented Reality (AR). In order to fill the gap, we address the problem of amodal 3D object detection. The task is not only to find object localizations in the 3D world, but also to estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have attempted to harness the point cloud from the depth channel to exploit 3D features directly in 3D space, and have demonstrated superiority over traditional 2D representation approaches. We revisit the amodal 3D detection problem by sticking to the 2D representation framework, directly relating 2D visual appearance to 3D objects. We propose a novel 3D object detection system that simultaneously predicts objects' 3D locations, physical sizes, and orientations in indoor scenes.
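A recurring ingredient in this line of work is the balanced combination of color and depth cues before any graph-based optimization. The snippet below is only an illustrative sketch of that balancing step (a weighted color/depth boundary map that a superpixel or graph-based segmenter could consume); it is not the MWIS or mutex-CRF machinery of the thesis, and the weight `alpha` is an assumed parameter.

```python
import cv2
import numpy as np

def combined_edge_map(bgr, depth, alpha=0.5):
    """Blend color-gradient and depth-gradient magnitudes into one
    boundary-strength map for an RGB-D segmenter."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    d = depth.astype(np.float32)
    d = (d - d.min()) / (np.ptp(d) + 1e-6)

    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        return cv2.magnitude(gx, gy)

    return alpha * grad_mag(gray) + (1.0 - alpha) * grad_mag(d)
```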
Temple University--Theses
Wishart, Keith A. „Cue combination for depth, brightness and lightness in 3-D scenes“. Thesis, University of Sheffield, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389616.
Labrie-Larrivée, Félix. „Depth texture synthesis for high resolution seamless reconstruction of large scenes“. Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30324.
Large scenes such as building facades are challenging environments for 3D reconstruction. These scenes often include repeating elements (windows, bricks, wood paneling) that can be exploited for the task of 3D reconstruction. Our approach, Depth Texture Synthesis, is based on that idea and aims to improve the quality of 3D model representation of large scenes. By scanning a sample of a repeating structure using a RGBD sensor, Depth Texture Synthesis can propagate the high resolution of that sample to similar parts of the scene. It does so following RGB and low resolution depth information of a SfM reconstruction. To handle this information the building facade is simplified into a planar primitive and serves as our canvas. The high resolution depth of the Kinect sample and low resolution depth of the SfM model as well as the RGB information are projected onto the canvas. Then, powerful image based texture synthesis algorithms are used to propagate the high resolution depth following cues in RGB and low resolution depth. The resulting synthesized high resolution depth is converted back into a 3D model that greatly improves on the SfM model with more detailed, more realistic looking geometry. Our approach is also much less labor intensive than RGBD sensors in large scenes and it is much more affordable than Lidar.
Kirschner, Aaron J. „Two ballet scenes for The Masque of the Red Death“. Thesis, Boston University, 2012. https://hdl.handle.net/2144/12453.
This project is part of an adaptation of Edgar Allan Poe's short story "The Masque of the Red Death" into a ballet. The music is scored for a chamber orchestra with single winds and brass, piano, two percussionists, and string quintet. The two scenes presented herein are the opening two of the ballet. The first scene ("The Red Death Had Long Devastated the Country") depicts the eponymous disease consuming the population. In the second scene ("The Prince Prospero & the March to the Walled Abbey") Prospero attempts to ignore the plague and retire to his walled abbey. While his message is beautiful on its own, it is in constant disharmony with the world in which he now lives. The music reflects this, with the Prince often in the wrong key. The scene concludes with the Prince and his knaves marching to the abbey.
Hassell, Sian Angharad. „The role of death in ancient Roman mythological epic : exploring death and death scenes in Virgil's Aeneid and Valerius Flaccus' Argonautica“. Thesis, University of Leeds, 2014. http://etheses.whiterose.ac.uk/7245/.
Bennett, Tracy. „Exploring the Medico-legal death scene investigation of sudden unexpected death of infants admitted to Salt River mortuary, Cape Town, South Africa“. Master's thesis, Faculty of Health Sciences, 2018. http://hdl.handle.net/11427/30064.
TANIMOTO, Masayuki, Toshiaki FUJII, Bunpei TOUJI, Tadahiko KIMOTO and Takashi IMORI. „A Segmentation-Based Multiple-Baseline Stereo (SMBS) Scheme for Acquisition of Depth in 3-D Scenes“. Institute of Electronics, Information and Communication Engineers, 1998. http://hdl.handle.net/2237/14997.
Allan, Janice Morag. „The writing of the primal scene(s), the death of God in the novels of Wilkie Collins“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ30128.pdf.
Müller, Franziska [Verfasser]. „Real-time 3D hand reconstruction in challenging scenes from a single color or depth camera / Franziska Müller“. Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1224883594/34.
Backhouse, George. „References to swords in the death scenes of Dido and Turnus in the Aeneid“. Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71764.
ENGLISH ABSTRACT: This thesis investigates the references to swords in key scenes in the Aeneid – particularly the scenes of Dido's and Turnus' death – in order to add new perspectives on these scenes and on the way in which they impact on the presentation of Aeneas' Roman mission in the epic. In Chapter Two I attempt to provide an outline of the mission of Aeneas. I also investigate the manner in which Dido and Turnus may be considered to be opponents of Aeneas' mission. In Chapter Three I investigate references to swords in select scenes in book four of the Aeneid. I highlight an ambiguity in the interpretation of the sword that Dido uses to commit suicide and I also provide a description of the sword as a weapon and its place in the epic. In Chapter Four I provide an analysis of the references to swords in Dido's and Turnus' death scenes alongside a number of other important scenes involving mention of swords. I preface my analyses of the references to swords that play a role in interpreting Dido and Turnus' deaths with an outline of the reasons for the deaths of each of these figures. The additional references to swords that I use in this chapter are the references to the sword in the scene of Deiphobus' death in book six and to the sword and Priam's act of arming himself on the night on which Troy is destroyed. At the end of Chapter Four I look at parallels between Dido and Turnus and their relationship to the mission of Aeneas. At the end of this thesis I am able to conclude that an investigation and analysis of the references to swords in select scenes in the Aeneid adds to existing scholarship in Dido's and Turnus' death in the following way: a more detailed investigation of the role of swords in the interpretation of Dido's death from an erotic perspective strengthens the existing notion in scholarship that Dido is an obstacle to the mission of Aeneas.
Jones, Dean. „Fatal call - getting away with murder : a study into influences of decision making at the initial scene of unexpected death“. Thesis, University of Portsmouth, 2016. https://researchportal.port.ac.uk/portal/en/theses/fatal-call--getting-away-with-murder(d911623a-4009-4a5c-bba1-5827fdf25798).html.
SIMOES, Francisco Paulo Magalhaes. „Object detection and pose estimation from natural features for augmented reality in complex scenes“. Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/22417.
Der volle Inhalt der QuelleMade available in DSpace on 2017-11-29T16:49:07Z (GMT). No. of bitstreams: 2 license_rdf: 811 bytes, checksum: e39d27027a6cc9cb039ad269a5db8e34 (MD5) TeseFinal_fpms.pdf: 108609391 bytes, checksum: c84c50e3c8588d6c85e44f9ac6343200 (MD5) Previous issue date: 2016-03-07
CNPQ
Alignment of virtual elements to real-world scenes (known as detection and tracking), relying on features that are naturally present in the scene, is one of the most important challenges in Augmented Reality. In complex scenes such as industrial scenarios, the problem is compounded by the lack of features and models, high specularity, and other factors. Based on these problems, this PhD thesis addresses the question "How can object detection and pose estimation from natural features for AR be improved when dealing with complex scenes?". To answer this question, we must also ask ourselves "What are the challenges that we face when developing a new tracker for real-world scenarios?". We begin to answer these questions by developing a complete tracking system that tackles some characteristics typically found in industrial scenarios. This system was validated in a tracking competition organized by the most important AR conference in the world, ISMAR. During the contest, two problems complementary to tracking were also discussed: calibration, the procedure that puts the virtual information in the same coordinate system as the real world, and 3D reconstruction, which is responsible for creating 3D models of the scene to be used for tracking. Because many trackers need a pre-acquired model of the target objects, the quality of the generated geometric model of the objects influences the tracker, as observed in the tracking contest. Sometimes these models are available, but in other cases their acquisition represents a great effort (manually) or cost (laser scanning). Because of this, we decided to analyze how difficult it is today to automatically recover 3D geometry from complex 3D scenes using only video. In our case, we considered an electrical substation as a complex 3D scene. Based on the knowledge acquired from previous experiments, we decided to first tackle the problem of improving tracking for scenes where recent RGB-D sensors can be used during model generation and tracking. We developed a technique called DARP, Depth Assisted Rectification of Patches, which can improve matching by using features rectified according to patch normals. We analyzed this technique on different synthetic and real scenes and improved results over traditional texture-based trackers like ORB, DAFT or SIFT. Since model generation is a difficult problem in complex scenes, our second proposed tracking approach does not depend on these geometric models and aims to track textured or textureless objects. We applied a supervised learning technique called Gradient Boosting Trees (GBTs) to solve tracking as a linear regression problem. We developed this technique by using image gradients and analyzing their relationship with tracking parameters. We also proposed an improvement to GBTs by combining them with traditional tracking approaches, like intensity- or edge-based features, which turned their piecewise-constant prediction function into a more robust piecewise-linear one. With the new approach, it was possible to track textureless objects, like a black-and-white map, for example.
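To make the regression-based tracking idea concrete, here is a self-contained toy in the same spirit: a gradient-boosted regressor learns to map image-gradient features of a shifted 1-D template to the shift itself. This is a deliberately simplified stand-in, assuming scikit-learn is available; it is not the thesis's 2-D formulation or its piecewise-linear extension.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
template = rng.random(64)                  # hypothetical 1-D "image" row
xs = np.arange(64)

X, y = [], []
for _ in range(500):
    shift = rng.uniform(-5, 5)             # ground-truth motion parameter
    warped = np.interp(xs + shift, xs, template)
    X.append(np.gradient(warped))          # image-gradient features
    y.append(shift)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(np.array(X), np.array(y))
print(model.predict(np.array(X[:3])), y[:3])   # predictions should roughly match
```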
Key, Jennifer Selina. „Death in Anglo-Saxon hagiography : approaches, attitudes, aesthetics“. Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6352.
Gurrieri, Luis E. „The Omnidirectional Acquisition of Stereoscopic Images of Dynamic Scenes“. Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30923.
Konradsson, Albin, and Gustav Bohman. „3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations“. Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.
McCracken, Michael. „Lowest of the Low: Scenes of Shame and Self-Deprecation in Contemporary Scottish Cinema“. Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9804/.
Boudrika, Mohammed Amin. „Jan Fabre : dialogue du corps et de la mort. Ecriture, scénographie et mise en scène“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR154/document.
Jan Fabre's dialogue of body and death is based on a corpus of theatrical texts. My work focused mainly on three aspects: first, the artistic and cultural heritages and inspirations; then, ritual and sacrifice in their historical, philosophical and artistic dimensions; and finally, the production of the performance, from genesis to scenic representation. The goal of my thesis is to demonstrate the artist's desire to overcome limits, to violate the codes of society and to create an authentic artistic language inspired by all possible materials in order to touch the vulnerability of contemporary man. To do this, Jan Fabre has developed a working method that rediscovers a raw and instinctive body, a body generating vital energy and producing strong sensations. In this working process I noticed that, for Fabre, the notion of research has a primordial place, based on a conceptual, philosophical and historical evolution. Jan Fabre builds his performances so that all their components interweave. A singular textual and visual writing constructs a sort of post-mortem state, in the sense that logic cedes to intuition, a writing that offers a scenic universe rich in images and allegories. A scenic composition where space, time and rhythm rest mainly on tension and the development of a ritual atmosphere. After all, body and death are united in his universe, and he manifests a recurring interplay between appearance and disappearance.
Desmet, Maud. „Les confessions silencieuses du cadavre : de la fiction d’autopsie aux figures du mort dans les séries et films policiers contemporains (1991-2013)“. Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT5001.
Without bodies, no stories. A vehicle of action, a narrative agent, and the support of a strong identification link between the audience and the character, the body is the main figure of the cinematic and televisual mediums. If cinema has, from its early stages, always glorified the endless liveliness of bodies, the reverse side of this exposure has simultaneously been lingering: the mute threat of death. However, in films or in television series, if the last breath before death is often synonymous with an ultimate communion with life and a resistance to death, what happens to the body and the character when death has seized them forever, and the living, characters and audience alike, are left facing only the corpse? As a parasite figure, the corpse is neither a character nor even an extra. Both an empty sign and a narrative core, the crime plot develops from the corpse and its examination, during the autopsy or at the crime scene. And whereas the corpse may seem secondary, even minor, if we look at crime fictions from the angle of its fixed and opaque non-look, it still allows us to see something of the crime and of its deeply unfair nature, and of the relations between the living and a death that appears in its most abject features on the autopsy table. In this study, we examine how crime fictions stage corpses as disturbingly precise reflections of a contemporary lack of perspective in the face of death. Like the philosopher Maxime Coulombe in his essay on zombies, we consider the fictional corpse as an "analyser of contemporary society" and as a "symptom of what is tormenting the consciousness of our time".
Caraballo, Norma Iris. „Identification of Characteristic Volatile Organic Compounds Released during the Decomposition Process of Human Remains and Analogues“. FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1391.
Angladon, Vincent. „Room layout estimation on mobile devices“. Phd thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20745/1/ANGLADON_Vincent.pdf.
Rosinski, Milosz Paul. „Cinema of the self : a theory of cinematic selfhood & practices of neoliberal portraiture“. Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/269409.
Torralba, Antonio, and Aude Oliva. „Global Depth Perception from Familiar Scene Structure“. 2001. http://hdl.handle.net/1721.1/7267.
Lahoud, Jean. „Indoor 3D Scene Understanding Using Depth Sensors“. Diss., 2020. http://hdl.handle.net/10754/665033.
Yang, Zih-Yi, and 楊子儀. „Scene Depth Reconstruction Based on Stereo Vision“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73128409175081453522.
Der volle Inhalt der Quelle國立勤益科技大學
電機工程系
100
In this work, we construct a depth map of objects from two images captured by a pair of cameras. With the proposed algorithms, objects in the depth map can be detected and the distances from the objects to the camera can be estimated. The cost of the cameras in the proposed system is much lower than that of sensors such as radar or laser rangefinders, and the scenes captured by the cameras (which may include target objects, non-target objects, and background) do not require various sensors to be installed for further applications. Furthermore, the proposed algorithm for reconstructing the disparity plane has low complexity. The depth information of objects is reconstructed from the left and right images captured by two horizontally mounted cameras. A disparity map is constructed by comparing the two images under different horizontal pixel shifts. Since an object usually has constant illumination and similar colors, the mean shift segmentation algorithm is applied to partition the CIELUV color image into segments. Given the color segments and the disparity map, the graph cut algorithm computes an energy function for each color segment and candidate disparity plane, and assigns each segment the disparity plane with the minimal energy. Finally, the distance to each object is estimated from multiple-view geometry and the camera parameters. In summary, our experimental results demonstrate that the proposed method is effective for reconstructing the disparity planes of a scene, and that the distances from the camera to objects in the scene can be measured by applying the inverse perspective method and two-view geometry.
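The pipeline (segment, assign a disparity plane per segment, then convert disparity to metric distance with two-view geometry) can be approximated in a few lines with standard OpenCV building blocks. The sketch below substitutes semi-global matching for the thesis's mean-shift-plus-graph-cut disparity step, and the file names, focal length and baseline are assumed values.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Dense disparity; SGBM stands in for the mean-shift + graph-cut assignment.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

f, B = 700.0, 0.12          # assumed focal length (pixels) and baseline (meters)
Z = np.zeros_like(disp)
valid = disp > 0
Z[valid] = f * B / disp[valid]   # two-view geometry: distance = f * baseline / disparity
```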
Liang, Yun-Hui, and 梁韻卉. „Depth-map Generation Based on Scene Classification“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/76244604150757399866.
Rehfeld, Timo. „Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding“. Phd thesis, 2018. https://tuprints.ulb.tu-darmstadt.de/7315/1/dissertation_timo_rehfeld_final_a4_color_refs_march10_2018_small.pdf.
Tai-pao, Chuang. „Use of Stereoscopic Photography to Distinguish Object Depth in Outdoor Scene“. 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2004200716265208.
Chuang, Tai-pao, and 莊臺寶. „Use of Stereoscopic Photography to Distinguish Object Depth in Outdoor Scene“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/80388658829038097161.
Der volle Inhalt der Quelle國立臺灣師範大學
資訊教育學系
94
Human stereoscopic vision arises naturally from synthesizing the two images produced by the parallax between the two eyes, which allows humans to judge the relative positions of objects. In related stereo-vision research, some studies focus on frameworks that capture simulated binocular images with one or two cameras, shooting sequentially or simultaneously, to obtain a pair of parallax images; others concentrate on the theoretical analysis of the relative positions in parallax images; still others use parallax images as material for image classification, comparison, and analysis. This study consists of two parts. First, we used one camera to take a shot from each of several positions to obtain sets of parallax images, and analyzed them to calculate the intrinsic and extrinsic parameters of the camera and to fit a regression equation. Second, we used the equation to estimate the relative positions of the objects in each set of parallax images. Two digital cameras, a Casio Z4 and a Pentax S5i, were used in the experiments to obtain each camera's parameters. The fitted regression equations, used to estimate object distance, are as follows. For the Casio, the regression equation is z·d = 24.028 × b, giving a focal constant of 24.028; for the Pentax, it is z·d = 25.637 × b, giving a focal constant of 25.637. Here z is the distance between the marker and the camera (m), d is the disparity of the corresponding point (pixels), and b is the baseline between the two shots (cm). We took images at the Exhibition Center of Fine Arts of the Library of National Taiwan Normal University and on the campus of its branch school, and analyzed the image depth of each object.
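Rearranging the fitted relation z·d = f·b gives the distance directly. A tiny helper, using the thesis's units (z in meters, b in centimeters, d in pixels) and its two fitted constants:

```python
def object_distance(d_pixels, b_cm, f=24.028):
    """z*d = f*b  =>  z = f*b/d. Default f is the Casio Z4 fit;
    use f=25.637 for the Pentax S5i."""
    return f * b_cm / d_pixels    # distance in meters

# Example: a 10 cm baseline and a 30-pixel disparity give about 8 m.
print(object_distance(30, 10))    # 8.009...
```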
Fan, Song-Yong, and 范淞詠. „The Single Depth Map Generation Method Based on Vanish Point and Scene Information“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/31514442633914622114.
Der volle Inhalt der Quelle玄奘大學
資訊管理學系碩士班
102
This study proposes a method for generating a depth map from a single image; such depth maps are the raw material for synthesizing 3D stereo images. Because depth cameras are not yet widespread, obtaining a depth map otherwise requires two or more images from multiple cameras, processed in software. In this thesis we use vanishing-point and vanishing-line characteristics to obtain relative distances in the image, and use scene information to refine the depth map and make it more accurate. Experimental results indicate that the method yields a good depth map without requiring a depth camera.
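A vanishing-point depth map of this kind can be prototyped with standard edge and line detection: find long straight edges, vote for their intersection, and let depth increase toward the vanishing point. The sketch below is a rough illustration under those assumptions (the Canny/Hough thresholds are arbitrary choices), not the thesis's algorithm.

```python
import cv2
import numpy as np

def vp_depth_map(gray):
    """Coarse depth from a single image: intersect detected straight
    edges to guess a vanishing point, then treat pixels near the VP
    as far and pixels near the border as near."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    h, w = gray.shape
    vp = np.array([w / 2.0, h / 2.0])               # fallback: image center
    if lines is not None and len(lines) >= 2:
        pts = []
        n = min(len(lines), 20)
        for i in range(n):
            for j in range(i + 1, n):
                r1, t1 = lines[i][0]
                r2, t2 = lines[j][0]
                A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
                if abs(np.linalg.det(A)) > 1e-3:     # skip near-parallel pairs
                    pts.append(np.linalg.solve(A, [r1, r2]))
        if pts:
            vp = np.median(np.array(pts), axis=0)    # robust intersection estimate
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])
    return 1.0 - dist / (dist.max() + 1e-6)          # 1 = far (at the VP), 0 = near
```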
Emerson, David R. „3-D Scene Reconstruction for Passive Ranging Using Depth from Defocus and Deep Learning“. Thesis, 2019. http://hdl.handle.net/1805/19900.
Depth estimation is becoming increasingly important in computer vision. The requirement for autonomous systems to gauge their surroundings is of the utmost importance in order to avoid obstacles, preventing damage to the system itself and/or to other systems or people. Depth measuring/estimation systems that use multiple cameras from multiple views can be expensive and extremely complex. And as these autonomous systems decrease in size and available power, the supporting sensors required to estimate depth must also shrink in size and power consumption. This research concentrates on a single passive method known as Depth from Defocus (DfD), which uses an in-focus and an out-of-focus image to infer the depth of objects in a scene. The major contribution of this research is the introduction of a new Deep Learning (DL) architecture to process the in-focus and out-of-focus images to produce a depth map for the scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph cuts algorithms applied to the synthetically blurred dataset, the DfD-Net produced a 34.30% improvement in the average Normalized Root Mean Square Error (NRMSE). Similarly, the DfD-Net architecture produced a 76.69% improvement in the average Normalized Mean Absolute Error (NMAE). Only the Structural Similarity Index (SSIM) had a small average decrease of 2.68% when compared to the graph cuts algorithm. This slight reduction in the SSIM value is a result of the SSIM metric penalizing images that appear to be noisy. In some instances the DfD-Net output is mottled, which is interpreted as noise by the SSIM metric. This research introduces two methods of deep learning architecture optimization. The first method employs a variant of the Particle Swarm Optimization (PSO) algorithm to improve the performance of the DfD-Net architecture. The PSO algorithm was able to find a combination of the number of convolutional filters, the size of the filters, the activation layers used, the use of a batch normalization layer between filters, and the size of the input image used during training to produce a network architecture whose average NRMSE was approximately 6.25% better than the baseline DfD-Net average NRMSE. This optimized architecture also resulted in an average NMAE that was 5.25% better than the baseline DfD-Net average NMAE. Only the SSIM metric did not see a gain in performance, dropping by 0.26% when compared to the baseline DfD-Net average SSIM value. The second method illustrates the use of a Self-Organizing Map clustering method to reduce the number of convolutional filters in the DfD-Net, reducing the overall run time of the architecture while still retaining the network performance exhibited prior to the reduction. This method produces a reduced DfD-Net architecture with a run-time decrease of between 14.91% and 44.85%, depending on the hardware architecture running the network. The final reduced DfD-Net resulted in a network architecture that had an overall decrease in the average NRMSE value of approximately 3.4% when compared to the baseline, unaltered DfD-Net mean NRMSE value. The NMAE and SSIM results for the reduced architecture were 0.65% and 0.13% below the baseline results, respectively. This illustrates that reducing the network architecture complexity does not necessarily reduce performance.
Finally, this research introduced a new real-world dataset that was captured using a camera with a voltage-controlled microfluidic lens for the visual data and a 2-D scanning LIDAR for the ground-truth data. The visual data consist of images captured at seven different exposure times and 17 discrete voltage steps per exposure time. The objects in this dataset were divided into four repeating scene patterns in which the same surfaces were used. These scenes were located between 1.5 and 2.5 meters from the camera and LIDAR. This was done so that any of the deep learning algorithms tested would see the same texture at multiple depths and multiple blurs. The DfD-Net architecture was employed in two separate tests using the real-world dataset. The first test synthetically blurred the real-world dataset and assessed the performance of the DfD-Net trained on the Middlebury dataset. For the scenes that were between 1.5 and 2.2 meters from the camera, the DfD-Net trained on the Middlebury dataset produced average NRMSE, NMAE and SSIM values that exceeded its test results on the Middlebury test set. The second test was training and testing solely on the real-world dataset. Analysis of the camera and lens behavior led to an optimal lens-voltage step configuration of 141 and 129. Using this configuration, training the DfD-Net resulted in an average NRMSE, NMAE and SSIM of 0.0660, 0.0517 and 0.8028, with standard deviations of 0.0173, 0.0186 and 0.0641, respectively.
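The evaluation revolves around three metrics. The abstract does not spell out the exact normalization used, so the helpers below assume the common range normalization (a labeled assumption); SSIM is available ready-made in scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity

def nrmse(pred, gt):
    """RMSE divided by the ground-truth range (assumed normalization)."""
    return np.sqrt(np.mean((pred - gt) ** 2)) / (gt.max() - gt.min())

def nmae(pred, gt):
    """Mean absolute error divided by the ground-truth range."""
    return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

def ssim(pred, gt):
    return structural_similarity(pred, gt, data_range=gt.max() - gt.min())
```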
HSU, SHUN-MING, and 許舜銘. „Based on Vanish Point and Scene from Focus to Generate A Single Depth Map“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/74457494033081135178.
Der volle Inhalt der Quelle玄奘大學
資訊管理學系碩士班
104
In this thesis we study techniques for generating the depth map needed to convert a 2D image into a high-quality 3D stereo image. We use vanishing points and vanishing lines together with the objects of the scene and the depth of focus to create a depth map from a single image. Since a single 2D image does not carry enough depth information to build a good depth map directly, we exploit the foreground, background, scene, and focus of the 2D image. We apply a Laplacian filter, the Hough transform, and vanishing-point detection to construct a coarse depth map, then use the scene and object information together with depth from focus to form a more accurate depth map. Experimental results indicate that the method yields a good depth map without requiring a depth camera.
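The depth-from-focus cue mentioned above is usually computed as a local sharpness (focus) measure. A minimal sketch, assuming the common Laplacian-energy measure rather than whatever specific measure the thesis uses:

```python
import cv2
import numpy as np

def focus_measure(gray, ksize=9):
    """Local Laplacian energy: in-focus regions respond strongly, giving a
    per-pixel focus cue that can refine a vanishing-point depth map."""
    lap = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)
    return cv2.GaussianBlur(lap ** 2, (ksize, ksize), 0)   # smooth to local energy
```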
Su, Che-Chun. „Applied statistical modeling of three-dimensional natural scene data“. Thesis, 2014. http://hdl.handle.net/2152/24878.
Bhat, Shariq. „Depth Estimation Using Adaptive Bins via Global Attention at High Resolution“. Thesis, 2021. http://hdl.handle.net/10754/668894.
Sun, Wei-Chih, and 孫偉智. „Using STM to Estimate Depth Map of A Scene from Two Different Defocused Images and Hardware Implementation“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/33709215779925821091.
Der volle Inhalt der Quelle國立清華大學
電機工程學系
98
Three-dimensional television (3D-TV) is the trend in television development, and much research focuses on it; we believe that three-dimensional (stereoscopic) television will succeed high-definition television (HD-TV). Recently, an advanced 3D-TV system has been built on a technology called Depth Image-Based Rendering (DIBR), also known as 2D-plus-depth. This representation is generally considered more efficient for coding, storage, transmission, and rendering than the traditional 3D video representation, which transmits a left image and a right image to the receiver. Among the many approaches to 3D depth recovery, focus-based methods can be divided into depth from focus (DFF) and depth from defocus (DFD). We choose the spatial domain transform method (STM) [13] to estimate depth from differently defocused images because STM is simpler and more direct than other methods. We use Verilog HDL to develop the hardware architecture of the STM algorithm and implement a prototype on a Xilinx FPGA board. To acquire the images, differently defocused images were recorded by applying different voltages to a liquid-crystal-lens camera. We then estimate the depth map from the blur degree of the images using the STM algorithm, and view the experimental results on a three-dimensional display.
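In a 2D-plus-depth (DIBR) system, the receiver synthesizes the second view by shifting each pixel horizontally in proportion to its depth. The loop below is a bare-bones illustration of that warping, with occlusion ordering and hole filling (normally handled by inpainting) left out; `max_disp` is an assumed parameter.

```python
import numpy as np

def dibr_render(color, depth, max_disp=32):
    """Warp a color image into a second view using its depth map:
    near pixels (large depth value) shift farther than distant ones."""
    h, w, _ = color.shape
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)   # 1 = near, 0 = far
    disp = (d * max_disp).astype(int)
    out = np.zeros_like(color)                        # unfilled pixels stay black (holes)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = color[y, x]
    return out
```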
WANG, WEI-HSIANG, and 王煒翔. „On the Relative Depth Estimation Techniques Used for Static Scene Videos Captured by Moving a Single Camera Lens“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/zfc8ma.
Der volle Inhalt der Quelle國立臺灣科技大學
資訊工程系
103
In recent years, depth maps have been used extensively. The real-time depth maps captured by Kinect make it easy to capture human motion, which is very important in human-computer interaction. Since the spread of smartphones, however, static-scene depth maps have also become popular: they are used to edit photos with special effects. To obtain better static-scene depth maps, some smartphone makers have spared no effort to produce phones with dual cameras, but dual cameras raise costs, so producing static-scene depth maps with a single camera is all the more important. Most popular photo effects need only relative depth information rather than absolute depth, so this thesis estimates relative depth. The approach rests on the observation that the distance an object moves across video frames depends on its depth: near objects move farther across the frame than distant ones, much as a person sitting in a moving car observes nearby objects rushing past while the sun never appears to move. We record a video with a single camera, jiggling the camera vertically and/or horizontally while recording, and produce the depth map from it. Keypoints are detected in the video and matched using the Scale-Invariant Feature Transform (SIFT); the distance between matched keypoints provides the depth information. Image segmentation then divides each frame into blocks, which lets the sparse depth values be expanded to the whole image by filling each block with the depth of its keypoints. Our experiments show that scenes with complex backgrounds yield more accurate results.
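The keypoint-displacement idea can be sketched with OpenCV's SIFT implementation: match features between two frames of the jiggled video and use the match displacement as a relative depth cue. The segmentation-based fill step is omitted, and the inputs are assumed to be grayscale frames; this is an illustration, not the thesis code.

```python
import cv2
import numpy as np

def relative_depth_cues(frame_a, frame_b):
    """Match SIFT keypoints between two jiggled frames; a larger
    displacement means a closer scene point (relative depth only)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame_a, None)
    kp2, des2 = sift.detectAndCompute(frame_b, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    points, closeness = [], []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            m = pair[0]                                # Lowe ratio test passed
            p1 = np.array(kp1[m.queryIdx].pt)
            p2 = np.array(kp2[m.trainIdx].pt)
            points.append(p1)
            closeness.append(np.linalg.norm(p2 - p1))  # displacement ~ closeness
    return np.array(points), np.array(closeness)
```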