To view the other types of publications on this topic, follow the link: Scene depth.

Dissertations on the topic "Scene depth"

Browse the top 50 dissertations for research on the topic "Scene depth".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the corresponding parameters are present in the metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Oliver, Parera Maria. „Scene understanding from image and video : segmentation, depth configuration“. Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663870.

Abstract:
In this thesis we aim at analyzing images and videos at the object level, with the goal of decomposing the scene into complete objects that move and interact among themselves. The thesis is divided into three parts. First, we propose a segmentation method to decompose the scene into shapes. Then, we propose a probabilistic method, which works with shapes or objects at two different depths, to infer which objects are in front of the others, while completing the ones which are partially occluded. Finally, we propose two video-related inpainting methods. On one hand, we propose a binary video inpainting method that relies on the optical flow of the video in order to complete the shapes across time, taking into account their motion. On the other hand, we propose an optical flow inpainting method that takes into account the information from the frames.
2

Mitra, Bhargav Kumar. „Scene segmentation using similarity, motion and depth based cues“. Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.

Abstract:
Segmentation of complex scenes to aid surveillance is still considered an open research problem. In this thesis a computational model (CM) has been developed to classify a scene into foreground, moving-shadow and background regions. It has been demonstrated how the CM, with the optional use of a channel ratio test, can be applied to demarcate foreground shadow regions in indoor scenes illuminated by a fixed incandescent source of light. A combined approach, involving the CM working in tandem with a traditional motion-cue-based segmentation method, has also been constructed. In the combined approach, the CM is applied to segregate the foreground shaded regions in a current frame based on a binary mask generated using a standard background subtraction process (BSP). Various popular outlier detection strategies have been investigated to assess their suitability for automatically generating the threshold required to derive a binary mask from a difference frame, the output of the BSP. To evaluate the full scope of the pixel-labelling capabilities of the CM and to estimate the associated time constraints, the model is deployed for foreground scene segmentation in recorded real-life video streams. The observations made validate the satisfactory performance of the model in most cases. In the second part of the thesis, depth-based cues have been exploited to perform the task of foreground scene segmentation. An active structured-light-based depth-estimating arrangement has been modelled in the thesis; the choice of modelling an active system over a passive stereovision one was made to alleviate some of the difficulties associated with the classical correspondence problem. The model developed not only facilitates use of the set-up but also makes possible a method to increase the working volume of the system without explicitly encoding the projected structured pattern.
Finally, it is explained how scene segmentation can be accomplished based solely on the structured-pattern disparity information, without generating explicit depth maps. To de-noise the difference frames generated using the developed method, two median filtering schemes have been implemented. The working of one of the schemes is advocated for practical use and is described in terms of discrete morphological operators, thus facilitating hardware realisation of the method to speed up the de-noising process.
3

Malleson, Charles D. „Dynamic scene modelling and representation from video and depth“. Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/809990/.

Abstract:
Recent advances in sensor technology have introduced low-cost video+depth sensors, such as the Microsoft Kinect, which enable simultaneous acquisition of colour and depth images at video rates. The aim of this research is to investigate representations which support integration of noisy, partial surface measurements over time to form more complete, temporally coherent models of dynamic scenes with enhanced detail and reduced noise. The initial focus of this work is on the restricted case of rigid geometry for which online GPU-accelerated volumetric fusion is implemented and tested. An alternative fusion approach based on dense surface elements (surfels) is also explored and compared to the volumetric approach. As a first step towards handling non-rigid scenes, the static volumetric approach is extended to treat articulated (semi-rigid) geometry with a focus on humans. The human body is segmented into piece-wise rigid volumetric parts and part tracking is aided by depth-based skeletal motion data. To address scenes containing more general non-rigid geometry beyond people and isolated rigid shapes, a more flexible approach is required. A piece-wise modelling approach using a sparse surfel graph and repeated alternation between part segmentation, motion and shape estimation is proposed. The method is designed to incorporate methods for noise reduction and handling of missing data. Finally, a hybrid approach is proposed which leverages the advantages of the surfel graph segmentation and coarse surface modelling with the higher-resolution surface reconstruction capability of volumetric fusion. The hybrid method is able to produce a seamless skinned mesh structure to efficiently represent a temporally consistent dynamic scene. The hybrid framework can be considered a unification of rigid and non-rigid reconstruction techniques, for which static scenes are a special case. 
It allows arbitrary dynamic scenes to be efficiently represented with enhanced levels of detail and completeness where possible, but gracefully falls back to raw measurements where no structure can be inferred. The representation is shown to facilitate creative manipulation of real scene data which would previously require more complex capture setups or extensive manual processing.
4

Stynsberg, John. „Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking“. Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.

Abstract:
Visual tracking is a computer vision problem where the task is to follow a target through a video sequence. Tracking has many important real-world applications in several fields such as autonomous vehicles and robot vision. Since visual tracking does not assume any prior knowledge about the target, it faces different challenges such as occlusion, appearance change, background clutter and scale change. In this thesis we try to improve the capabilities of tracking frameworks using discriminative correlation filters by incorporating scene depth information. We utilize scene depth information on three main levels. First, we use raw depth information to segment the target from its surroundings, enabling occlusion detection and scale estimation. Second, we investigate different visual features calculated from depth data to decide which features are good at encoding geometric information available solely in depth data. Third, we investigate handling missing data in the depth maps using a modified version of the normalized convolution framework. Finally, we introduce a novel approach for parameter search using genetic algorithms to find the best hyperparameters for our tracking framework. Experiments show that depth data can be used to estimate scale changes and handle occlusions. In addition, visual features calculated from depth are more representative when combined with color features. It is also shown that utilizing normalized convolution improves the overall performance in some cases. Lastly, the usage of genetic algorithms for hyperparameter search leads to accuracy gains as well as some insights on the performance of different components within the framework.
5

Elezovikj, Semir. „FOREGROUND AND SCENE STRUCTURE PRESERVED VISUAL PRIVACY PROTECTION USING DEPTH INFORMATION“. Master's thesis, Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/259533.

Abstract:
Computer and Information Science
M.S.
We propose the use of depth information to protect privacy in person-aware visual systems while preserving important foreground subjects and scene structures. We aim to preserve the identity of foreground subjects while hiding superfluous details in the background that may contain sensitive information. We achieve this goal by using depth information and the relevant human detection mechanisms provided by the Kinect sensor. In particular, for an input color and depth image pair, we first create a sensitivity map which favors background regions (where privacy should be preserved) and low depth-gradient pixels (which often relate strongly to scene structure but little to identity). We then combine this per-pixel sensitivity map with an inhomogeneous image obscuration process for privacy protection. We tested the proposed method using data involving different scenarios, including various illumination conditions, varying numbers of subjects, different contexts, etc. The experiments demonstrate the quality of preserving the identity of humans and the edges obtained from the depth information while obscuring privacy-intrusive information in the background.
Temple University--Theses
6

Quiroga, Sepúlveda Julián. „Scene Flow Estimation from RGBD Images“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM057/document.

Abstract:
This thesis addresses the problem of reliably recovering a 3D motion field, or scene flow, from a temporal pair of RGBD images. We propose a semi-rigid estimation framework for the robust computation of scene flow, taking advantage of color and depth information, and an alternating variational minimization framework for recovering rigid and non-rigid components of the 3D motion field. Previous attempts to estimate scene flow from RGBD images have extended optical flow approaches without fully exploiting depth data, or have formulated the estimation in 3D space disregarding the semi-rigidity of real scenes. We demonstrate that scene flow can be robustly and accurately computed in the image domain by solving for 3D motions consistent with color and depth, encouraging an adjustable combination between local and piecewise rigidity. Additionally, we show that solving for the 3D motion field can be seen as a specific case of a more general estimation problem of a 6D field of rigid motions. Accordingly, we formulate scene flow estimation as the search for an optimal field of twist motions, achieving state-of-the-art results.
7

Forne, Christopher Jes. „3-D Scene Reconstruction from Multiple Photometric Images“. Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1227.

Abstract:
This thesis deals with the problem of three-dimensional scene reconstruction from multiple camera images. This is a well-established problem in computer vision and has been significantly researched. In recent years some excellent results have been achieved; however, existing algorithms often fall short of many biological systems in terms of robustness and generality. The aim of this research was to develop improved algorithms for reconstructing 3D scenes, with a focus on accurate system modelling and correctly dealing with occlusions. With scene reconstruction the objective is to infer scene parameters describing the 3D structure of the scene from the data given by camera images. This is an ill-posed inverse problem, where an exact solution cannot be guaranteed. The use of a statistical approach to deal with the scene reconstruction problem is introduced and the differences between maximum a posteriori (MAP) and minimum mean square error (MMSE) estimates considered. It is discussed how traditional stereo matching can be performed using a volumetric scene model. An improved model describing the relationship between the camera data and a discrete model of the scene is presented. This highlights some of the common causes of modelling errors, enabling them to be dealt with objectively. The problems posed by occlusions are considered. Using a greedy algorithm the scene is progressively reconstructed to account for visibility interactions between regions and the idea of a complete scene estimate is established. Some simple and improved techniques for reliably assigning opaque voxels are developed, making use of prior information. Problems with variations in the imaging convolution kernel between images motivate the development of a pixel dissimilarity measure. Belief propagation is then applied to better utilise prior information and obtain an improved global optimum.
A new volumetric factor graph model is presented which represents the joint probability distribution of the scene and imaging system. By utilising the structure of the local compatibility functions, an efficient procedure for updating the messages is detailed. To help convergence, a novel approach of accentuating beliefs is shown. Results demonstrate the validity of this approach; however, the reconstruction error is similar to or slightly higher than that of the greedy algorithm. To simplify the volumetric model, a new approach to belief propagation is demonstrated by applying it to a dynamic model. This approach is developed as an alternative to the full volumetric model because it is less memory- and computation-intensive. Using a factor graph, a volumetric known-visibility model is presented which ensures the scene is complete with respect to all the camera images. Dynamic updating is also applied to a simpler single depth-map model. Results show this approach is unsuitable for the volumetric known-visibility model; however, improved results are obtained with the simple depth-map model.
8

Rehfeld, Timo. „Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding“. Doctoral thesis, supervised by Stefan Roth and Carsten Rother. Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1157011950/34.

9

Jaritz, Maximilian. „2D-3D scene understanding for autonomous driving“. Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.

Abstract:
In this thesis, we address the challenges of label scarcity and the fusion of heterogeneous 3D point clouds and 2D images. First, we adopt the strategy of end-to-end race driving, where a neural network is trained to directly map sensor input (camera image) to control output, which makes this strategy independent from annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time is still very long, which is why we focus on perception and study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection. To this end, we compute image features from multiple views with a 2D CNN and then lift them all to a global 3D point cloud for fusion, followed by a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
10

Diskin, Yakov. „Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision“. University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.

11

Macháček, Jan. „Fokusovací techniky optického měření 3D vlastností“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442511.

Abstract:
This thesis deals with optical distance measurement and the measurement of 3D scene properties using focusing techniques, with a focus on confocal microscopy, depth from focus and depth from defocus. The theoretical part of the thesis covers different approaches to depth map generation, as well as the micro-image defocusing technique for measuring the refractive index of transparent materials. Camera calibration for focus-based techniques is then described. The next part of the thesis describes the experimental verification of the depth from focus and depth from defocus techniques. For the first technique, results of depth map generation are presented; for the second, measured distance values are compared with the real distances. Finally, the discussed techniques are compared and evaluated.
12

Kaiser, Adrien. „Analyse de scène temps réel pour l'interaction 3D“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT025/document.

Abstract:
This PhD thesis focuses on the analysis of visual scenes captured by commodity depth sensors, converting their data into a high-level understanding of the scene. It explores the use of 3D geometry analysis tools on visual depth data in terms of enhancement, registration and consolidation. In particular, we aim to show how shape abstraction can generate lightweight representations of the data for fast analysis with low hardware requirements. This last property is important, as one of our goals is to design algorithms suitable for live embedded operation in, e.g., wearable devices, smartphones or mobile robots. The context of this thesis is the live operation of 3D interaction on a mobile device, which raises numerous issues, including placing 3D interaction zones in relation to real surrounding objects, tracking the interaction zones in space when the sensor moves, and providing a meaningful and understandable experience to non-expert users. Towards solving these problems, we make contributions in which scene abstraction leads to fast and robust sensor localization as well as efficient frame data representation, enhancement and consolidation. While simple geometric surface shapes are not as faithful as heavy point sets or volumes for representing observed scenes, we show that they are an acceptable approximation and that their light weight makes them well balanced between accuracy and performance.
13

Clarou, Alphonse. „Catastrophe et répétition : une intelligence du théâtre“. Thesis, Paris 10, 2016. http://www.theses.fr/2016PA100053.

Abstract:
Catastrophe and Repetition make one think about theater. They give an intelligence of it. "An intelligence of theater": one or several ideas of it that we can arrive at, using forms, notions or principles which do not essentially originate in this art itself. Thus, starting from Jean Genet's propositions in The Tightrope-Walker, notably viewed in the light of Georges Bataille's The Dead Man; from bullfighting, and especially a sentence pronounced by the matador José Tomás, who claims he leaves his "body at the hotel" before making his entrance onto the arena floor (akin to the "acteur en vrai" who committed suicide in Valère Novarina's Le Théâtre des paroles); as well as from several poetic, urban, or theoretical representations of the city as a scene operated upon by "Death" (the one from which we do not yet die: this preceding world, this "desperate and radiant region where the artist operates", as Genet put it); starting from these texts and images, Catastrophe and Repetition form a pensive couple, and their work has something to tell about those scenes where love and death have their entries. Something to tell about their specificities, about what grounds and structures them, what keeps them standing or makes them come undone. Something to tell about what makes a scene.
14

Firman, M. D. „Learning to complete 3D scenes from single depth images“. Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1532193/.

Abstract:
Building a complete 3D model of a scene given only a single depth image is underconstrained. To acquire a full volumetric model, one typically needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. In this thesis, we present alternative methods for inferring the hidden geometry of table-top scenes. We first introduce two depth-image datasets consisting of multiple scenes, each with a ground truth voxel occupancy grid. We then introduce three methods for predicting voxel occupancy. The first predicts the occupancy of each voxel using a novel feature vector which measures the relationship between the query voxel and surfaces in the scene observed by the depth camera. We use a Random Forest to map each voxel of unknown state to a prediction of occupancy. We observed that predicting the occupancy of each voxel independently can lead to noisy solutions. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Demonstrating this hypothesis, we propose an algorithm that can make structured completions of unobserved geometry. Finally, we propose an alternative framework for understanding the 3D geometry of scenes using the observation that individual objects can appear in multiple different scenes, but in different configurations. We introduce a supervised method to find regions corresponding to the same object across different scenes. We demonstrate that it is possible to then use these groupings of partially observed objects to reconstruct missing geometry. We then perform a critical review of the approaches we have taken, including an assessment of our metrics and datasets, before proposing extensions and future work.
15

Clark, Laura. „"But ayenste deth may no man rebell" Death scenes as tools for characterization in Thomas Malory's "Morte d'Arthur" /“. Ann Arbor, Mich. : ProQuest, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1453248.

Abstract:
Thesis (M.A. in Medieval Studies)--S.M.U.
Title from PDF title page (viewed Mar. 16, 2009). Source: Masters Abstracts International, Volume: 46-06, page: 2989. Adviser: Bonnie Wheeler. Includes bibliographical references.
16

Deng, Zhuo. „RGB-DEPTH IMAGE SEGMENTATION AND OBJECT RECOGNITION FOR INDOOR SCENES“. Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/427631.

Abstract:
Computer and Information Science
Ph.D.
With the advent of the Microsoft Kinect, the landscape of various vision-related tasks has changed. Firstly, using an active infrared structured-light sensor, the Kinect can directly provide depth information that is hard to infer from traditional RGB images. Secondly, RGB and depth information are generated synchronously and can be easily aligned, which makes their direct integration possible. In this thesis, I propose several algorithms and systems that focus on how to integrate depth information with traditional visual appearance to address different computer vision applications. Those applications cover both low-level (image segmentation, class-agnostic object proposals) and high-level (object detection, semantic segmentation) computer vision tasks. To first understand whether and how depth information helps improve computer vision performance, I start with image segmentation, a fundamental problem that has been studied extensively in natural color images. We propose an unsupervised segmentation algorithm that is carefully crafted to balance the contributions of color and depth features in RGB-D images. The segmentation problem is then formulated as solving the Maximum Weight Independent Set (MWIS) problem. Given superpixels obtained from different layers of a hierarchical segmentation, the saliency of each superpixel is estimated from a balanced combination of features originating from depth, gray-level intensity, and texture information. We evaluate segmentation quality with five standard measures on the commonly used NYU-v2 RGB-Depth dataset. A surprising message from these experiments is that unsupervised segmentation of RGB-D images yields results comparable to supervised segmentation. In image segmentation, an image is partitioned into several groups of pixels (or superpixels).
We take one step further and investigate the problem of assigning class labels to every pixel, i.e., semantic scene segmentation. We propose a novel image-region labeling method that augments the CRF formulation with hard mutual-exclusion (mutex) constraints. This way our approach can make use of the rich and accurate 3D geometric structure coming from the Kinect in a principled manner. The final labeling result must satisfy all mutex constraints, which allows us to eliminate configurations that violate common-sense physics, like placing a floor above a night stand. Three classes of mutex constraints are proposed: a global object co-occurrence constraint, a relative height relationship constraint, and a local support relationship constraint. Segments obtained from image segmentation can be either too fine or too coarse. A full object region not only conveys global features but arguably also enriches contextual features, as confusing background is separated out. We propose a novel unsupervised framework for automatically generating bottom-up, class-independent object candidates for detection and recognition in cluttered indoor environments. Utilizing the raw depth map, we propose a novel plane segmentation algorithm for dividing an indoor scene into predominant planar regions and non-planar regions. Based on this partition, we are able to effectively predict object locations and their spatial extents. Our approach automatically generates object proposals considering five different aspects: Non-planar Regions (NPR), Planar Regions (PR), Detected Planes (DP), Merged Detected Planes (MDP) and Hierarchical Clustering (HC) of 3D point clouds. Object region proposals include both bounding boxes and instance segments. 
Although 2D computer vision tasks can roughly identify where objects are placed on the image plane, their true locations and poses in the physical 3D world are difficult to determine due to factors such as occlusion and the uncertainty arising from perspective projection. Yet it is very natural for human beings to understand how far objects are from viewers, their poses, and their full extents from still images. Such capabilities are extremely desirable for many applications, such as robotic navigation, grasp estimation, and Augmented Reality (AR). To fill this gap, we address the problem of amodal perception for 3D object detection. The task is not only to find object localizations in the 3D world, but also to estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have attempted to harness the point cloud from the depth channel to exploit 3D features directly in 3D space, and have demonstrated superiority over traditional 2D-representation approaches. We revisit the amodal 3D detection problem by sticking to the 2D representation framework, directly relating 2D visual appearance to 3D objects. We propose a novel 3D object detection system that simultaneously predicts objects' 3D locations, physical sizes, and orientations in indoor scenes.
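The mutex idea can be illustrated with a tiny brute-force labeler. The rule tables, region names, heights, and scores below are invented for illustration, and exhaustive search stands in for the thesis's CRF inference:

```python
from itertools import product

# Hypothetical rules in the spirit of two of the three constraint classes:
CO_OCCURRENCE_MUTEX = {("bathtub", "sofa")}   # labels that never co-occur
MUST_BE_BELOW = {("floor", "night_stand")}    # relative-height ordering

def violates_mutex(labeling, heights):
    labels = set(labeling.values())
    for a, b in CO_OCCURRENCE_MUTEX:
        if a in labels and b in labels:
            return True
    for low, high in MUST_BE_BELOW:
        for r1, l1 in labeling.items():
            for r2, l2 in labeling.items():
                if l1 == low and l2 == high and heights[r1] > heights[r2]:
                    return True  # e.g. a floor above a night stand
    return False

def best_labeling(regions, scores, heights):
    # Brute-force stand-in for CRF inference: the highest-scoring
    # labeling that satisfies every mutex constraint wins.
    label_sets = [list(scores[r]) for r in regions]
    best, best_score = None, float("-inf")
    for combo in product(*label_sets):
        labeling = dict(zip(regions, combo))
        if violates_mutex(labeling, heights):
            continue
        total = sum(scores[r][l] for r, l in labeling.items())
        if total > best_score:
            best, best_score = labeling, total
    return best
```

With this toy, a labeling that puts a "floor" region above a "night_stand" region is discarded even when its raw score is highest, which is exactly the effect the hard constraints are meant to have.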
Temple University--Theses
17

Wishart, Keith A. „Cue combination for depth, brightness and lightness in 3-D scenes“. Thesis, University of Sheffield, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389616.

18

Labrie-Larrivée, Félix. „Depth texture synthesis for high resolution seamless reconstruction of large scenes“. Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30324.

Abstract:
Large scenes such as building facades are challenging environments for 3D reconstruction. These scenes often include repeating elements (windows, bricks, wood paneling) that can be exploited for the task of 3D reconstruction. Our approach, Depth Texture Synthesis, is based on that idea and aims to improve the quality of 3D models of large scenes. By scanning a sample of a repeating structure with an RGBD sensor, Depth Texture Synthesis can propagate the high resolution of that sample to similar parts of the scene. It does so by following the RGB and low-resolution depth information of an SfM reconstruction. To handle this information, the building facade is simplified into a planar primitive that serves as our canvas. The high-resolution depth of the Kinect sample, the low-resolution depth of the SfM model, and the RGB information are projected onto the canvas. Then, powerful image-based texture synthesis algorithms propagate the high-resolution depth following cues in the RGB and low-resolution depth. The resulting synthesized high-resolution depth is converted back into a 3D model that greatly improves on the SfM model, with more detailed, more realistic-looking geometry. Our approach is also much less labor-intensive than scanning the entire scene with RGBD sensors, and much more affordable than Lidar.
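A one-dimensional toy version of the synthesis step, with invented arrays: for each canvas position, the RGB guide decides which sample patch donates its high-resolution depth value. Real texture-synthesis algorithms (and the thesis's use of both RGB and low-resolution depth as guides) are considerably more elaborate:

```python
def synthesize_depth(guide_rgb, sample_rgb, sample_depth, patch=3):
    # Copy, for each output position, the depth under the sample patch
    # whose RGB best matches the local guide patch (sum of squared
    # differences over the overlap; borders compare shorter windows).
    half = patch // 2
    out = []
    for i in range(len(guide_rgb)):
        win = guide_rgb[max(0, i - half):i + half + 1]
        best_j, best_cost = half, float("inf")
        for j in range(half, len(sample_rgb) - half):
            ref = sample_rgb[j - half:j + half + 1]
            cost = sum((a - b) ** 2 for a, b in zip(win, ref))
            if cost < best_cost:
                best_j, best_cost = j, cost
        out.append(sample_depth[best_j])
    return out
```

Bright guide pixels pick up the depth recorded under the bright part of the sample, dark ones the depth under the dark part, which is the core of RGB-guided depth propagation.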
19

Kirschner, Aaron J. „Two ballet scenes for The Masque of the Red Death“. Thesis, Boston University, 2012. https://hdl.handle.net/2144/12453.

Abstract:
Thesis (M.M.)--Boston University PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
This project is part of an adaptation of Edgar Allan Poe's short story "The Masque of the Red Death" into a ballet. The music is scored for a chamber orchestra with single winds and brass, piano, two percussionists, and string quintet. The two scenes presented herein are the opening two of the ballet. The first scene ("The Red Death Had Long Devastated the Country") depicts the eponymous disease consuming the population. In the second scene ("The Prince Prospero & the March to the Walled Abbey") Prospero attempts to ignore the plague and retire to his walled abbey. While his message is beautiful on its own, it is in constant disharmony with the world in which he now lives. The music reflects this, with the Prince often in the wrong key. The scene concludes with the Prince and his knaves marching to the abbey.
20

Hassell, Sian Angharad. „The role of death in ancient Roman mythological epic : exploring death and death scenes in Virgil's Aeneid and Valerius Flaccus' Argonautica“. Thesis, University of Leeds, 2014. http://etheses.whiterose.ac.uk/7245/.

Abstract:
This thesis explores and analyses the narrative and thematic uses of death in two Latin mythological epics, in order to investigate the ways in which various deaths reflect or highlight the ideology inherent to each epic. Death is one of the fundamental realities of life, yet can occur in many different ways and be used for many different purposes in fiction. Its application and significance in epic is accordingly complex, reflecting both its literary and socio-historical contexts. Each chapter covers a different type of death (such as murder or war injury, for example), and, in each case, begins by concentrating on Virgil's Aeneid, before moving on to Valerius Flaccus' Argonautica. In doing so, the thesis explores how each author approached and utilised various forms of death for their various thematic, narrative and structural purposes within the poem, and then the extent to which the attitudes and thematic significance surrounding the deaths were affected by the contemporary social, historical and political landscape. Finally, how each author's use of death compares with the other is considered. I demonstrate that some of the similarities and differences between the depictions of death in the two epics are linked primarily to their respective thematic and narrative requirements. Other elements, however, such as a heightened focus on the (generally negative) consequences of absolute power in Valerius and Virgil’s thematic warnings against the assumption of too much power, can instead be traced directly to shifting socio-political and cultural influences within Roman society. It further becomes clear that, while both epics were written shortly after turbulent eras in history, the wider context of those periods ensures that each epic displays different approaches to the ideology and realities of death while nevertheless belonging to the same genre of mythological epic.
21

Bennett, Tracy. „Exploring the Medico-legal death scene investigation of sudden unexpected death of infants admitted to Salt River mortuary, Cape Town, South Africa“. Master's thesis, Faculty of Health Sciences, 2018. http://hdl.handle.net/11427/30064.

Abstract:
A death scene investigation (DSI) forms an integral part of the inquiry into death, particularly for sudden unexpected death of infants (SUDI). Global guidelines exist for DSI; however, it is unclear how many countries adhere to them, and to what extent they are followed. Therefore, a systematic literature review was undertaken to assess the scope of SUDI DSI performed internationally. It was found that national protocols have been established in some countries, and have shown value in guiding medico-legal examinations. Further, South Africa did not routinely perform DSI for SUDI cases, nor was there a protocol. This was largely attributed to the burden of SUDI cases as well as the lack of resources. Therefore, this study aimed to suggest realistic and feasible ways to improve DSI for local SUDI cases. This research study consisted of three phases: 1) A two-year review of medico-legal case files from SUDI cases investigated at Salt River Mortuary; 2) The prospective observation of DSI for ten SUDI cases, using a semi-structured checklist; and 3) The distribution and analysis of a survey regarding SUDI DSI to all registered, qualified forensic pathologists in South Africa. The results showed that the SUDI death scenes were assessed in 59.2% of cases at Salt River Mortuary, with inconsistent levels of documentation or photography. Death scenes were never investigated in cases where the infant was pronounced dead on arrival at a medical facility. In both scene observations (n=10) and retrospective analysis (n=454), only one case incorporated a re-enactment, but the majority of infants were moved prior to DSI. The findings support the need for a standardised approach to DSI, coupled with specialised training for staff. Based on the available resources, this should focus on the establishment of guidelines pertaining to photography, handling medicine and scene reconstruction, as well as accurate use of relevant documentation.
22

TANIMOTO, Masayuki, Toshiaki FUJII, Bunpei TOUJI, Tadahiko KIMOTO und Takashi IMORI. „A Segmentation-Based Multiple-Baseline Stereo (SMBS) Scheme for Acquisition of Depth in 3-D Scenes“. Institute of Electronics, Information and Communication Engineers, 1998. http://hdl.handle.net/2237/14997.

23

Allan, Janice Morag. „The writing of the primal scene(s), the death of God in the novels of Wilkie Collins“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ30128.pdf.

24

Müller, Franziska [Verfasser]. „Real-time 3D hand reconstruction in challenging scenes from a single color or depth camera / Franziska Müller“. Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1224883594/34.

25

Backhouse, George. „References to swords in the death scenes of Dido and Turnus in the Aeneid“. Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71764.

Abstract:
Thesis (MA)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: This thesis investigates the references to swords in key scenes in the Aeneid – particularly the scenes of Dido's and Turnus' deaths – in order to add new perspectives on these scenes and on the way in which they impact the presentation of Aeneas' Roman mission in the epic. In Chapter Two I attempt to provide an outline of the mission of Aeneas. I also investigate the manner in which Dido and Turnus may be considered opponents of Aeneas' mission. In Chapter Three I investigate references to swords in select scenes in book four of the Aeneid. I highlight an ambiguity in the interpretation of the sword that Dido uses to commit suicide, and I also provide a description of the sword as a weapon and its place in the epic. In Chapter Four I provide an analysis of the references to swords in Dido's and Turnus' death scenes alongside a number of other important scenes involving mention of swords. I preface my analyses of the references to swords that play a role in interpreting Dido's and Turnus' deaths with an outline of the reasons for the death of each of these figures. The additional references to swords that I use in this chapter are the references to the sword in the scene of Deiphobus' death in book six, and to the sword and Priam's act of arming himself on the night on which Troy is destroyed. At the end of Chapter Four I look at parallels between Dido and Turnus and their relationship to the mission of Aeneas. At the end of this thesis I am able to conclude that an investigation and analysis of the references to swords in select scenes in the Aeneid adds to existing scholarship on Dido's and Turnus' deaths in the following way: a more detailed investigation of the role of swords in the interpretation of Dido's death from an erotic perspective strengthens the existing notion in scholarship that Dido is an obstacle to the mission of Aeneas.
26

Jones, Dean. „Fatal call - getting away with murder : a study into influences of decision making at the initial scene of unexpected death“. Thesis, University of Portsmouth, 2016. https://researchportal.port.ac.uk/portal/en/theses/fatal-call--getting-away-with-murder(d911623a-4009-4a5c-bba1-5827fdf25798).html.

Abstract:
This thesis examined influences on the decision-making process of police officers attending the scenes of sudden and unexpected death in England and Wales. It was initiated following concerns raised by Home Office Registered Forensic Pathologists (HORFPs) in some parts of England and Wales that their services were not being appropriately utilised to assist in the decision as to whether a death was ‘suspicious' and possibly involving a third party, or a non-suspicious community death. Failure to properly assess the scene of the death can deny the investigation processes to forensically determine a cause of death, and can lose forensic trace evidence from the body. There were three parts to the research: i) an examination of homicide statistics and forensic post mortem data, which showed inconsistency in decision making between some police forces; ii) a case study of 32 real deaths where HORFPs had taken over the conduct of a post mortem procedure after the police had decided that the case was not suspicious but where the non-forensic pathologists felt that it was; and iii) focus group interviews with key individuals involved in the operational decision making at the scene of sudden and unexpected death, which revealed a lack of training and standardisation in dealing with such deaths. Overall it was found that homicide cases may be missed due to poor decision making, and that this phenomenon is not a new one. The mind-set of police officers dealing with these cases may influence the decision to treat cases as non-suspicious, so that the services of an HORFP are not utilised to give an expert medical opinion. A major factor appeared to be the vulnerability of the deceased, as well as budgetary pressures. Recommendations are made to address the quality of death investigations, including a national policy, training of front-line officers and supervisors, and a standard operating procedure. 
The wrong decision – a ‘fatal call’ – can lead to a failed investigation and someone ‘getting away with murder’.
27

SIMOES, Francisco Paulo Magalhaes. „Object detection and pose estimation from natural features for augmented reality in complex scenes“. Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/22417.

Abstract:
CNPQ
Aligning virtual elements with real-world scenes (known as detection and tracking) using features that are naturally present in the scene is one of the most important challenges in Augmented Reality. In complex scenes such as industrial scenarios, the problem is compounded by a lack of features and models, high specularity, and more. Based on these problems, this PhD thesis addresses the question "How to improve object detection and pose estimation from natural features for AR when dealing with complex scenes problems?". In order to answer this question, we need to ask ourselves "What are the challenges that we face when developing a new tracker for real world scenarios?". We begin to answer these questions by developing a complete tracking system that tackles some characteristics typically found in industrial scenarios. This system was validated in a tracking competition organized by the most important AR conference in the world, ISMAR. During the contest, two problems complementary to tracking were also discussed: calibration, the procedure that puts virtual information in the same coordinate system as the real world, and 3D reconstruction, which is responsible for creating 3D models of the scene to be used for tracking. Because many trackers need a pre-acquired model of the target objects, the quality of the generated geometric model influences the tracker, as observed in the tracking contest. Sometimes these models are available, but in other cases their acquisition represents a great effort (manually) or cost (laser scanning). Because of this, we decided to analyze how difficult it is today to automatically recover 3D geometry from complex 3D scenes using only video. In our case, we considered an electrical substation as a complex 3D scene. 
Based on the knowledge acquired from these experiments, we decided to first tackle the problem of improving tracking for scenes where recent RGB-D sensors can be used during model generation and tracking. We developed a technique called DARP, Depth Assisted Rectification of Patches, which improves matching by using features rectified according to patch normals. We analyzed this technique on different synthetic and real scenes and improved results over traditional texture-based trackers like ORB, DAFT or SIFT. Since model generation is a difficult problem in complex scenes, our second proposed tracking approach does not depend on these geometric models and aims to track textured or textureless objects. We applied a supervised learning technique called Gradient Boosting Trees (GBTs) to solve tracking as a linear regression problem. We developed this technique by using image gradients and analyzing their relationship with tracking parameters. We also proposed an improvement over GBTs by combining them with traditional tracking approaches, such as intensity- or edge-based features, turning their piecewise-constant prediction function into a more robust piecewise-linear one. With the new approach, it was possible to track textureless objects, such as a black-and-white map.
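The regression view of tracking can be sketched with a from-scratch gradient-boosted ensemble of one-dimensional regression stumps. The single gradient-derived feature and displacement targets below are invented, and plain squared loss is assumed rather than the thesis's exact formulation:

```python
def fit_stump(xs, ys):
    # Best single-threshold regression stump: mean response on each side.
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - (lm if x <= t else rm)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:  # all xs equal: predict the mean everywhere
        m = sum(ys) / len(ys)
        return (xs[0], m, m)
    return best[1:]

def fit_gbt(xs, ys, rounds=100, lr=0.5):
    # Gradient boosting for squared loss: each round fits a stump to the
    # current residuals, casting tracking as a regression problem.
    pred = [0.0] * len(ys)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        t, lm, rm = fit_stump(xs, resid)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if x <= t else rm) for x, p in zip(xs, pred)]
    return stumps

def predict_gbt(stumps, x, lr=0.5):
    # lr must match the value used in fit_gbt.
    return sum(lr * (lm if x <= t else rm) for t, lm, rm in stumps)
```

Stacking traditional intensity- or edge-based terms on top of the stumps, as the thesis proposes, is what would turn this piecewise-constant predictor into a piecewise-linear one.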
28

Key, Jennifer Selina. „Death in Anglo-Saxon hagiography : approaches, attitudes, aesthetics“. Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6352.

Abstract:
This thesis examines attitudes and approaches towards death, as well as aesthetic representations of death, in Anglo-Saxon hagiography. The thesis contributes to the discussion of the historical and intellectual contexts of hagiography and considers how saintly death-scenes are represented to form commentaries on exemplary behaviour. A comprehensive survey of death-scenes in Anglo-Saxon hagiography has been undertaken, charting typical and atypical motifs used in literary manifestations of both martyrdom and non-violent death. The clusters of literary motifs found in these texts, and what their use suggests about attitudes to exemplary death, are analysed in an exploration of whether Anglo-Saxon hagiography presents a consistent aesthetic of death. The thesis also considers how modern scholarly fields such as thanatology can provide fresh discourses on the attitudes to and depictions of ‘good’ and ‘bad’ deaths. Moreover, the thesis addresses the intersection of the hagiographic inheritance with discernibly Anglo-Saxon attitudes towards death and dying, and investigates whether or not the deaths of native Anglo-Saxon saints are presented differently compared with the deaths of universal saints. The thesis explores continuities and discontinuities in the presentations of physical and spiritual death, and assesses whether or not differences exist in the depiction of death-scenes based on an author's personal agenda, choice of terminology, approaches towards the body–soul dichotomy, or the gender of his or her subject, for example. Furthermore, the thesis investigates how hagiographic representations of death compare with portrayals in other literature of the Anglo-Saxon period, and whether any non-hagiographic paradigms provide alternative exemplars of the ‘good death’. The thesis also assesses gendered portrayals of death, the portrayal of last words in saints' lives, and the various motifs relating to the soul at the moment of death.
The thesis contains a Motif Index of saintly death-scenes as Appendix I.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Gurrieri, Luis E. „The Omnidirectional Acquisition of Stereoscopic Images of Dynamic Scenes“. Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30923.

Der volle Inhalt der Quelle
Annotation:
This thesis analyzes the problem of acquiring stereoscopic images in all gazing directions around a reference viewpoint in space with the purpose of creating stereoscopic panoramas of non-static scenes. The generation of immersive stereoscopic imagery suitable for stimulating human stereopsis requires images from two distinct viewpoints with horizontal parallax in all gazing directions, or the ability to simulate this situation in the generated imagery. The available techniques to produce omnistereoscopic imagery for human viewing are not suitable for capturing dynamic scenes stereoscopically. This is a non-trivial problem when the entire scene must be acquired at once while avoiding self-occlusion between multiple cameras. In this thesis, the term omnidirectional refers to all possible gazing directions in azimuth and a limited set of directions in elevation. The acquisition of dynamic scenes restricts the problem to those techniques suitable for collecting, in one simultaneous exposure, all the necessary visual information to recreate stereoscopic imagery in arbitrary gazing directions. The analysis of the problem starts by defining an omnistereoscopic viewing model for the physical magnitude to be measured by a panoramic image sensor intended to produce stereoscopic imagery for human viewing. Based on this model, a novel acquisition model is proposed, which is suitable to describe the omnistereoscopic techniques based on horizontal stereo. From this acquisition model, an acquisition method based on multiple cameras combined with the rendering by mosaicking of partially overlapped stereoscopic images is identified as a good candidate to produce omnistereoscopic imagery of dynamic scenes. Experimental acquisition and rendering tests were performed for different multiple-camera configurations.
Furthermore, a mosaicking criterion between partially overlapped stereoscopic images based on the continuity of the perceived depth and the prediction of the location and magnitude of unwanted vertical disparities in the final stereoscopic panorama are two main contributions of this thesis. In addition, two novel omnistereoscopic acquisition and rendering techniques were introduced. The main contributions to this field are to propose a general model for the acquisition of omnistereoscopic imagery, to devise novel methods to produce omnistereoscopic imagery, and more importantly, to contribute to the awareness of the problem of acquiring dynamic scenes within the scope of omnistereoscopic research.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Konradsson, Albin, und Gustav Bohman. „3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations“. Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.

Der volle Inhalt der Quelle
Annotation:
This thesis provides a comparison between instance segmentation methods using point clouds and depth images. Specifically, their performance on cluttered scenes of irregular objects in an industrial environment is investigated. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of datatype. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industry application. Two datasets are generated. One dataset has low complexity, containing only boxes. The other has higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks. We chose PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results support the idea that there may be benefits to using a point cloud representation over depth images. PointGroup performs better in terms of the chosen metric on both datasets. On low complexity scenes, the inference times are similar between the two methods tested. However, on higher complexity scenes, Mask R-CNN is significantly faster.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

McCracken, Michael. „Lowest of the Low: Scenes of Shame and Self-Deprecation in Contemporary Scottish Cinema“. Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9804/.

Der volle Inhalt der Quelle
Annotation:
This thesis explores the factors leading to the images of self-deprecation and shame in contemporary Scottish film. It would seem that these recurring motifs arise because the Scottish people are unable to escape from their past and are uneasy about the future of the nation. There is an internal struggle for both Scottish men and women, who try to adhere to their predetermined roles in Scottish culture, but these roles lead to violence, alcoholism, and shame. In addition, there is a fear for the future of the nation, represented in films that feature a connection between children and the creation of life on the one hand and the death of Scotland's past on the other. This thesis focuses on films created during a recent boom in film production in Scotland, beginning in 1994 and continuing to the present day.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Boudrika, Mohammed Amin. „Jan Fabre : dialogue du corps et de la mort. Ecriture, scénographie et mise en scène“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR154/document.

Der volle Inhalt der Quelle
Annotation:
Jan Fabre's dialogue of body and death is examined on the basis of a corpus of theatrical texts. My approach centres on three main aspects: first, artistic and cultural inheritances and inspirations; second, ritual and sacrifice in their historical, philosophical, and artistic dimensions; and finally, the making of the performance, from genesis to scenic representation. The goal of my thesis is to demonstrate the artist's desire to push back limits, to violate the codes of society, and to create an authentic artistic language inspired by every possible material in order to touch the vulnerability of contemporary man. To this end, Jan Fabre has developed a working method that rediscovers a raw and instinctive body, a body generating a vital energy that produces strong sensations. In this working process I noticed that, for Fabre, the notion of research holds a primordial place, grounded in a conceptual, philosophical, and historical evolution. Jan Fabre builds his performances so that all their components interweave. A singular textual and visual writing constructs a sort of post-mortem state, in the sense that logic cedes to intuition: a writing that offers a scenic universe rich in images and allegories, and a scenic composition in which space, time, and rhythm rest above all on tension and the elaboration of a ritual atmosphere. Finally, body and death form a unity in his universe, and his work manifests a recurring movement between appearance and disappearance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Desmet, Maud. „Les confessions silencieuses du cadavre : de la fiction d’autopsie aux figures du mort dans les séries et films policiers contemporains (1991-2013)“. Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT5001.

Der volle Inhalt der Quelle
Annotation:
Without bodies, no stories. A vehicle of action, a narrative agent, and the support of a strong identification link between the audience and the character, the body is the main figure of the cinematographic and televisual mediums. If cinema has, from its early stages, always glorified the endless liveliness of bodies, the reverse side of this exposure has simultaneously been lingering: the mute threat of death. However, in films and television series, if the last breath before death is often synonymous with an ultimate communion with life and a resistance to death, what happens to the body and the character when death has seized them forever, and the living, characters and audience alike, are left facing only the corpse? As a parasitic figure, the corpse is neither a character nor even an extra. Both an empty sign and a narrative core, it is from the corpse and its examination, during the autopsy or at the crime scene, that the crime plot feeds and develops. And whereas the corpse may seem secondary, even minor, looking at crime fictions from the angle of its fixed and opaque non-gaze still allows us to see something of the crime, of its deeply unjust nature, and of the relations between the living and a death that appears, in its most abject features, on the autopsy table. In this study, we examine how crime fictions stage the corpse as a disturbingly precise reflection of a contemporary failure to keep death at a distance. Following a principle similar to the one the philosopher Maxime Coulombe applies in his essay on zombies, we consider the fictional corpse as an "analyser of contemporary society" and as a "symptom of what is tormenting the consciousness of our time".
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Caraballo, Norma Iris. „Identification of Characteristic Volatile Organic Compounds Released during the Decomposition Process of Human Remains and Analogues“. FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1391.

Der volle Inhalt der Quelle
Annotation:
The manner in which remains decompose has been and is currently being researched around the world, yet little is still known about the generated scent of death. In fact, it was not until the Casey Anthony trial that research on the odor released from decomposing remains, and the compounds of which it is comprised, was brought to light. The Anthony trial marked the first admission of human decomposition odor as forensic evidence into the court of law; however, it was not “ready for prime time”, as the scientific research on the scent of death is still in its infancy. This research employed solid-phase microextraction (SPME) with gas chromatography-mass spectrometry (GC-MS) to identify the volatile organic compounds (VOCs) released from decomposing remains and to assess the impact that different environmental conditions had on the scent of death. Using human cadaver analogues, it was discovered that the environment to which the remains were exposed dramatically affected the odors released, either by modifying the compounds of which the odor was comprised or by enhancing/hindering the amount that was liberated. In addition, the VOCs released during the different stages of the decomposition process for both human remains and analogues were evaluated. Statistical analysis showed correlations between the stage of decay and the VOCs generated, such that each phase of decomposition was distinguishable based upon the type and abundance of compounds that comprised the odor. This study has provided new insight into the scent of death and the factors that can dramatically affect it, specifically frozen, aquatic, and soil environments. Moreover, the results revealed that different stages of decomposition were distinguishable based upon the type and total mass of each compound present.
Thus, based upon these findings, it is suggested that the training aids that are employed for human remains detection (HRD) canines should 1) be characteristic of remains that have undergone decomposition in different environmental settings, and 2) represent each stage of decay, to ensure that the HRD canines have been trained to the various odors that they are likely to encounter in an operational situation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Angladon, Vincent. „Room layout estimation on mobile devices“. Phd thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20745/1/ANGLADON_Vincent.pdf.

Der volle Inhalt der Quelle
Annotation:
Room layout generation is the problem of generating a drawing or a digital model of an existing room from a set of measurements such as laser data or images. The generation of floor plans can find application in the building industry to assess the quality and the correctness of an ongoing construction w.r.t. the initial model, or to quickly sketch the renovation of an apartment. The real estate industry can rely on automatic generation of floor plans to ease the process of checking the livable surface and to propose virtual visits to prospective customers. As for the general public, the room layout can be integrated into mixed reality games to provide a more immersive experience, or used in other related augmented reality applications such as room redecoration. The goal of this industrial thesis (CIFRE) is to investigate and take advantage of state-of-the-art mobile devices in order to automate the process of generating room layouts. Nowadays, modern mobile devices usually come with a wide range of sensors, such as an inertial measurement unit (IMU), RGB cameras and, more recently, depth cameras. Moreover, tactile touchscreens offer a natural and simple way to interact with the user, thus favoring the development of interactive applications in which the user can be part of the processing loop. This work aims at exploiting the richness of such devices to address the room layout generation problem. The thesis has three major contributions. We first show how the classic problem of detecting vanishing points in an image can benefit from an a priori constraint given by the IMU sensor. We propose a simple and effective algorithm for detecting vanishing points relying on the gravity vector estimated by the IMU. A new public dataset containing images and the relevant IMU data is introduced to help assess vanishing point algorithms and foster further studies in the field.
As a second contribution, we explored the state of the art in real-time localization and map optimization algorithms for RGB-D sensors. Real-time localization is a fundamental task for enabling augmented reality applications, and thus a critical component when designing interactive applications. We evaluate existing algorithms designed for the common desktop set-up with a view to employing them on a mobile device. For each considered method, we assess the accuracy of the localization as well as the computational performance when ported to a mobile device. Finally, we present a proof-of-concept application able to generate the room layout relying on a Project Tango tablet equipped with an RGB-D sensor. In particular, we propose an algorithm that incrementally processes and fuses the 3D data provided by the sensor in order to obtain the layout of the room. We show how our algorithm can rely on user interactions in order to correct the generated 3D model during the acquisition process.
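The gravity prior for vanishing point detection can be sketched under a standard pinhole model: the vertical vanishing point is the image of the 3D gravity direction, v ≃ K·g in homogeneous coordinates. The function and values below are an illustrative reconstruction of that textbook relation, not code or numbers from the thesis.

```python
import numpy as np

def vertical_vanishing_point(K, g_cam):
    """Image of the gravity direction: v ~ K @ g, dehomogenized.

    K is the 3x3 camera intrinsic matrix and g_cam the gravity vector
    expressed in the camera frame (as an IMU would provide after a
    frame change). Returns None when gravity is parallel to the image
    plane (vanishing point at infinity).
    """
    v = K @ g_cam
    if abs(v[2]) < 1e-12:
        return None
    return v[:2] / v[2]

# Hypothetical intrinsics and a camera pitched so gravity has a +z component.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
g = np.array([0.0, 1.0, 0.5])
v = vertical_vanishing_point(K, g)  # a point well below the principal point
```

Lines through this point in the image are candidates for vertical scene edges, which is what makes the IMU a useful prior for the detection step.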
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Rosinski, Milosz Paul. „Cinema of the self : a theory of cinematic selfhood & practices of neoliberal portraiture“. Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/269409.

Der volle Inhalt der Quelle
Annotation:
This thesis examines the philosophical notion of selfhood in visual representation. I introduce the self as a modern and postmodern concept and argue that there is a loss of selfhood in contemporary culture. Via Jacques Derrida, Jean-Luc Nancy, Gerhard Richter and the method of deconstruction of language, I theorise selfhood through the figurative and literal analysis of duration, the frame, and the mirror. In this approach, selfhood is understood as aesthetic-ontological relation and construction based on specific techniques of the self. In the first part of the study, I argue for a presentational rather than representational perspective concerning selfhood by translating the photograph Self in the Mirror (1964), the painting Las Meninas (1656), and the video Cornered (1988), into my conception of a cinematic theory of selfhood. Based on the presentation of selfhood in those works, the viewer establishes a cinematic relation to the visual self that extends and transgresses the boundaries of inside and outside, presence and absence, and here and there. In the second part, I interpret epistemic scenes of cinematic works as durational scenes in which selfhood is exposed with respect to the forces of time and space. My close readings of epistemic scenes of the films The Congress (2013), and Boyhood (2014) propose that cinema is a philosophical mirror collecting loss of selfhood over time for the viewer. Further, the cinematic concert A Trip to Japan, Revisited (2013), and the hyper-film Cool World (1992) disperse a spatial sense of selfhood for the viewer. In the third part, I examine moments of selfhood and the forces of death, survival, and love in the practice of contemporary cinematic portraiture in Joshua Oppenheimer’s, Michael Glawogger’s, and Yorgos Lanthimos’ work. 
While the force of death is interpreted in the portrait of perpetrators in The Act of Killing (2013) and The Look of Silence (2014), the force of survival in the longing for life is analysed in Megacities (1998), Workingman’s Death (2005), and Whores’ Glory (2011). Lastly, Dogtooth (2009), Alps (2011), and The Lobster (2015) present the contemporary human condition as a lost intuition of relationality epitomised in love.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Torralba, Antonio, und Aude Oliva. „Global Depth Perception from Familiar Scene Structure“. 2001. http://hdl.handle.net/1721.1/7267.

Der volle Inhalt der Quelle
Annotation:
In the absence of cues for absolute depth measurement such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges and junctions may provide a 3D model of the scene but it will not convey the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex due to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: we introduce a procedure for absolute depth estimation based on the recognition of the whole scene. The shape of the space of the scene and the structures present in the scene are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Lahoud, Jean. „Indoor 3D Scene Understanding Using Depth Sensors“. Diss., 2020. http://hdl.handle.net/10754/665033.

Der volle Inhalt der Quelle
Annotation:
One of the main goals in computer vision is to achieve a human-like understanding of images. Nevertheless, image understanding has been mainly studied in the 2D image frame, so more information is needed to relate them to the 3D world. With the emergence of 3D sensors (e.g. the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g. robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes that could be used for self-supervised object segmentation. These methods allow capturing of physical extents of 3D objects, such as their sizes and actual locations within a scene.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Yang, Zih-Yi, und 楊子儀. „Scene Depth Reconstruction Based on Stereo Vision“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73128409175081453522.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
國立勤益科技大學 (National Chin-Yi University of Technology)
電機工程系 (Department of Electrical Engineering)
100
In this work, we construct a depth map of objects from two images captured by a pair of cameras. Our proposed algorithms detect the objects in the depth map and estimate their distances from the cameras. Cameras are much cheaper than sensors such as radar or laser rangefinders, and the scenes they capture (which may include target objects, non-target objects, and background) require no additional sensors to be installed for further applications. Furthermore, our proposed algorithm for reconstructing the disparity planes has low complexity. The depth information of objects is reconstructed from the left and right images captured by two horizontally mounted cameras. The disparity map is constructed by comparing the two images under different horizontal pixel shifts. Since an object usually has nearly constant illumination and similar colors, the mean-shift segmentation algorithm is applied to partition the CIELUV color image into several segments. Given the color segments and the disparity map, the graph-cut algorithm computes an energy function for each color segment and disparity plane, and assigns the same label to the color segments with minimal energy. The distances from objects to the cameras are then estimated on the basis of multiple-view geometry, the viewing angles of the camera pair, and the camera parameters. In summary, our experimental results demonstrate that the proposed method effectively reconstructs the disparity planes of the scene, and that the distances from the cameras to objects in the scene can be measured by applying the inverse perspective method and two-view geometry.
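The disparity-search core of such a pipeline can be sketched with brute-force SSD block matching plus the standard triangulation Z = f·B/d. This is a toy stand-in: the thesis's actual method adds mean-shift segmentation and graph-cut optimization, which are omitted here.

```python
import numpy as np

def disparity_ssd(left, right, max_disp, win=3):
    """Per-pixel integer disparity by brute-force SSD matching along scanlines."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [np.sum((patch - right[y - pad:y + pad + 1,
                                           x - d - pad:x - d + pad + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Two-view triangulation: Z = f * B / d (infinite depth where d == 0)."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1), np.inf)

# Synthetic rectified pair: the right view sees every left pixel 4 columns earlier.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -4, axis=1)
disp = disparity_ssd(left, right, max_disp=8)
depth = depth_from_disparity(disp, focal_px=800.0, baseline_m=0.1)
```

With an 800-pixel focal length and a 10 cm baseline, a 4-pixel disparity corresponds to a depth of 20 m.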
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Liang, Yun-Hui, und 梁韻卉. „Depth-map Generation Based on Scene Classification“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/76244604150757399866.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Rehfeld, Timo. „Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding“. Phd thesis, 2018. https://tuprints.ulb.tu-darmstadt.de/7315/1/dissertation_timo_rehfeld_final_a4_color_refs_march10_2018_small.pdf.

Der volle Inhalt der Quelle
Annotation:
Computer vision plays a central role in autonomous vehicle technology, because cameras are comparatively cheap and capture rich information about the environment. In particular, object classes, i.e. whether a certain object is a pedestrian, cyclist or vehicle, can be extracted very well based on image data. Environment perception in urban city centers is a highly challenging computer vision problem, as the environment is very complex and cluttered: road boundaries and markings, traffic signs and lights and many different kinds of objects that can mutually occlude each other need to be detected in real-time. Existing automotive vision systems do not easily scale to these requirements, because every problem or object class is treated independently. Scene labeling on the other hand, which assigns object class information to every pixel in the image, is the most promising approach to avoid this overhead by sharing extracted features across multiple classes. Compared to bounding box detectors, scene labeling additionally provides richer and denser information about the environment. However, most existing scene labeling methods require a large amount of computational resources, which makes them infeasible for real-time in-vehicle applications. In addition, in terms of bandwidth, a dense pixel-level representation is not ideal to transmit the perceived environment to other modules of an autonomous vehicle, such as localization or path planning. This dissertation addresses the scene labeling problem in an automotive context by constructing a scene labeling concept around the "Stixel World" model of Pfeiffer (2011), which compresses dense information about the environment into a set of small "sticks" that stand upright, perpendicular to the ground plane. This work provides the first extension of the existing Stixel formulation that takes into account learned dense pixel-level appearance features.
In a second step, Stixels are used as primitive scene elements to build a highly efficient region-level labeling scheme. The last part of this dissertation finally proposes a model that combines both pixel-level and region-level scene labeling into a single model that yields state-of-the-art or better labeling accuracy and can be executed in real-time with typical camera refresh rates. This work further investigates how existing depth information, i.e. from a stereo camera, can help to improve labeling accuracy and reduce runtime.
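As a rough illustration of the Stixel compression idea (my sketch, not Pfeiffer's actual dynamic-programming formulation), a single image column of depth values can be reduced to one upright stick by separating it from an expected ground-depth profile:

```python
import numpy as np

def column_to_stixel(depth_col, ground_depth, tol=0.5):
    """Compress one column into a stick: contiguous rows whose depth deviates
    from the ground profile by more than tol become a single (top, bottom,
    depth) triple; the stick depth is the median over those rows."""
    obstacle = np.abs(depth_col - ground_depth) > tol
    idx = np.flatnonzero(obstacle)
    if idx.size == 0:
        return None  # free space: the whole column matches the ground
    top, bottom = int(idx.min()), int(idx.max())
    return top, bottom, float(np.median(depth_col[top:bottom + 1]))

# Ground plane receding from 30 m (top of column) to 2 m (bottom),
# with an upright obstacle at 5 m occupying rows 10..20.
ground = np.linspace(30.0, 2.0, 40)
col = ground.copy()
col[10:21] = 5.0
stixel = column_to_stixel(col, ground)
```

A full image thus shrinks from h×w depth values to at most one compact triple per column, which is the bandwidth advantage the dissertation builds on.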
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Tai-pao, Chuang. „Use of Stereoscopic Photography to Distinguish Object Depth in Outdoor Scene“. 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2004200716265208.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Chuang, Tai-pao, und 莊臺寶. „Use of Stereoscopic Photography to Distinguish Object Depth in Outdoor Scene“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/80388658829038097161.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
國立臺灣師範大學 (National Taiwan Normal University)
資訊教育學系 (Department of Information and Computer Education)
94
Human stereoscopic vision arises naturally from fusing the two images produced by the parallax between our two eyes, which allows us to judge the relative positions of objects. In stereovision research, some work focuses on frameworks for capturing simulated binocular images, using one or two cameras shooting from slightly different positions, sequentially or simultaneously, to obtain a parallax image pair; some concentrates on theoretical analysis of the relative positions in parallax images; and some uses parallax images as material for image classification, comparison, and analysis. This study has two main parts. First, we used one camera to take a shot from each of several positions to obtain sets of parallax images, analyzed them to estimate the camera's intrinsic and extrinsic parameters, and fitted a regression equation. Second, we used the equation to estimate the relative positions of the objects in each set of parallax images. The experiments used two digital cameras, a Casio Z4 and a Pentax S5i, to obtain each camera's parameters. The fitted regressions, used to estimate object distance, are: for the Casio, z·d = 24.028·b, with focal coefficient 24.028; for the Pentax, z·d = 25.637·b, with focal coefficient 25.637; where z is the distance between the marker and the camera (m), d is the disparity of the corresponding point (pixels), and b is the baseline between the two shots (cm). We took images at the Fine Arts Exhibition Center of the Library of National Taiwan Normal University and on the campus of its branch school, and analyzed the image depth of each object.
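The fitted relation z·d = coeff·b inverts directly to a distance estimate. A small helper under the units stated in the abstract (z in metres, d in pixels, b in centimetres); the coefficient 24.028 is the reported Casio Z4 fit, and 25.637 would be passed for the Pentax S5i:

```python
def object_distance(disparity_px, baseline_cm, coeff=24.028):
    """Distance in metres from the empirical fit z * d = coeff * b."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return coeff * baseline_cm / disparity_px

# A 10 cm baseline and a 12-pixel disparity with the Casio coefficient:
z = object_distance(12, 10)
```

As in any stereo setup, distance grows with the baseline and shrinks with the measured disparity, so nearby objects (large d) are estimated more precisely than distant ones.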
APA, Harvard, Vancouver, ISO and other citation styles
44

Fan, Song-Yong, and 范淞詠. „The Single Depth Map Generation Method Based on Vanish Point and Scene Information“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/31514442633914622114.

Full text of the source
Annotation:
Master's thesis
Hsuan Chuang University
Master's Program, Department of Information Management
102
This study proposes a method for generating a depth map from a single image. Depth maps are the raw material for synthesizing 3D stereo images, but depth cameras are not yet widespread, so obtaining a depth map usually requires two or more images captured with multiple cameras and processed in software. In this thesis we exploit the characteristics of vanishing points and vanishing lines to recover relative distances within the image, and use scene information to adjust the depth values for better accuracy. Experimental results indicate that the method can produce a good depth map without a depth camera.
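As a rough sketch of the vanishing-point cue described above (the linear fall-off with distance from the vanishing point is an assumption for illustration, not the thesis's exact model):

```python
import math

def vp_depth_map(width, height, vp):
    """Coarse depth map from a single vanishing point vp = (x, y).

    Pixels closer to the vanishing point are assumed to be farther
    away, so they receive larger depth values in [0, 1]. Real methods
    refine this with vanishing lines and scene information.
    """
    corners = [(0, 0), (width - 1, 0), (0, height - 1), (width - 1, height - 1)]
    max_d = max(math.dist(vp, c) for c in corners)  # farthest pixel from vp
    return [[1.0 - math.dist(vp, (x, y)) / max_d
             for x in range(width)] for y in range(height)]

dm = vp_depth_map(8, 6, (4, 1))
# The pixel at the vanishing point itself gets the maximum depth value.
assert dm[1][4] == 1.0
```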
APA, Harvard, Vancouver, ISO and other citation styles
45

Emerson, David R. „3-D Scene Reconstruction for Passive Ranging Using Depth from Defocus and Deep Learning“. Thesis, 2019. http://hdl.handle.net/1805/19900.

Full text of the source
Annotation:
Indiana University-Purdue University Indianapolis (IUPUI)
Depth estimation is becoming increasingly important in computer vision. Autonomous systems must gauge their surroundings in order to avoid obstacles, preventing damage to themselves and/or other systems or people. Depth measuring/estimation systems that use multiple cameras from multiple views can be expensive and extremely complex, and as these autonomous systems decrease in size and available power, the supporting sensors required to estimate depth must also shrink in size and power consumption. This research concentrates on a single passive method known as Depth from Defocus (DfD), which uses an in-focus and an out-of-focus image to infer the depth of objects in a scene. The major contribution of this research is the introduction of a new Deep Learning (DL) architecture that processes the in-focus and out-of-focus images to produce a depth map for the scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph cuts algorithm applied to the synthetically blurred dataset, the DfD-Net produced a 34.30% improvement in average Normalized Root Mean Square Error (NRMSE) and a 76.69% improvement in average Normalized Mean Absolute Error (NMAE). Only the Structural Similarity Index (SSIM) showed a small average decrease, of 2.68%, compared to the graph cuts algorithm. This slight reduction in SSIM results from the metric penalizing images that appear noisy; in some instances the DfD-Net output is mottled, which the SSIM metric interprets as noise. This research also introduces two methods of deep learning architecture optimization. The first employs a variant of the Particle Swarm Optimization (PSO) algorithm to improve the performance of the DfD-Net architecture.
The PSO algorithm found a combination of the number of convolutional filters, the filter sizes, the activation layers, the use of a batch normalization layer between filters, and the input image size used during training that produced a network architecture whose average NRMSE was approximately 6.25% better than the baseline DfD-Net's. This optimized architecture also achieved an average NMAE 5.25% better than the baseline. Only the SSIM metric did not gain, dropping by 0.26% relative to the baseline DfD-Net average SSIM value. The second method uses Self-Organizing Map clustering to reduce the number of convolutional filters in the DfD-Net, cutting the architecture's overall run time while retaining the performance exhibited before the reduction. The reduced DfD-Net runs between 14.91% and 44.85% faster, depending on the hardware running the network, and showed an overall decrease of approximately 3.4% in average NRMSE compared to the baseline, unaltered DfD-Net; its NMAE and SSIM results were 0.65% and 0.13% below the baseline results, respectively. This illustrates that reducing architectural complexity does not necessarily entail a commensurate loss of performance. Finally, this research introduces a new real-world dataset captured with a camera fitted with a voltage-controlled microfluidic lens for the visual data and a 2-D scanning LIDAR for the ground truth. The visual data consist of images captured at seven different exposure times and 17 discrete voltage steps per exposure time.
The objects in this dataset were divided into four repeating scene patterns using the same surfaces, located between 1.5 and 2.5 meters from the camera and LIDAR, so that any deep learning algorithm tested would see the same textures at multiple depths and multiple blurs. The DfD-Net architecture was evaluated in two separate tests on the real-world dataset. The first applied synthetic blurring to the real-world dataset and assessed a DfD-Net trained on the Middlebury dataset: for scenes between 1.5 and 2.2 meters from the camera, the Middlebury-trained DfD-Net produced average NRMSE, NMAE, and SSIM values that exceeded its results on the Middlebury test set. The second test trained and tested solely on the real-world dataset. Analysis of the camera and lens behavior led to an optimal lens voltage-step configuration of 141 and 129; with this configuration, training the DfD-Net yielded average NRMSE, NMAE, and SSIM of 0.0660, 0.0517, and 0.8028, with standard deviations of 0.0173, 0.0186, and 0.0641, respectively.
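The NRMSE and NMAE figures quoted throughout the abstract can be computed roughly as follows; normalizing by the ground-truth range is one common convention and may differ from the thesis's exact definition:

```python
def nrmse(pred, truth):
    """Root-mean-square error normalized by the ground-truth range
    (one common convention; the thesis may normalize differently)."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)
    return mse ** 0.5 / (max(truth) - min(truth))

def nmae(pred, truth):
    """Mean absolute error under the same range normalization."""
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)
    return mae / (max(truth) - min(truth))

# Illustrative depths in meters (not thesis data): every prediction is
# off by 0.1 m over a 1.0 m range, so both metrics come out to 0.1.
truth = [1.5, 2.0, 2.5, 2.2]
pred = [1.6, 1.9, 2.4, 2.3]
print(nrmse(pred, truth), nmae(pred, truth))
```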
APA, Harvard, Vancouver, ISO and other citation styles
46

HSU, SHUN-MING, and 許舜銘. „Based on Vanish Point and Scene from Focus to Generate A Single Depth Map“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/74457494033081135178.

Full text of the source
Annotation:
Master's thesis
Hsuan Chuang University
Master's Program, Department of Information Management
104
In this thesis we study techniques for generating the depth map needed to turn a 2D image into a high-quality 3D stereo image. We combine vanishing points and vanishing lines with the objects of the scene and the depth of focus to create a depth map from a single image. Because a single 2D image carries too little depth information to build a good depth map directly, we need its foreground, background, scene layout, and focus. We apply a Laplace filter, the Hough transform, and vanishing-point detection to construct a coarse depth map, then use the scene and object cues together with depth from focus to form a more accurate depth map. Experimental results indicate that the method can produce a good depth map without a depth camera.
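A sketch of the depth-from-focus cue that the abstract combines with vanishing-point geometry: a Laplacian focus measure scores local sharpness, and sharper regions are taken to lie near the focal plane (the 4-neighbour kernel below is an illustrative choice, not necessarily the one used in the thesis):

```python
def laplacian_focus(img):
    """Focus measure: sum of absolute responses of the 4-neighbour
    discrete Laplacian over the image interior. Sharper (in-focus)
    regions score higher, which is the cue depth-from-focus exploits.
    img is a list of rows of grayscale intensities.
    """
    h, w = len(img), len(img[0])
    score = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            score += abs(lap)
    return score

sharp = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]      # strong local contrast
blurred = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]    # the same edge, smoothed
assert laplacian_focus(sharp) > laplacian_focus(blurred)
```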
APA, Harvard, Vancouver, ISO and other citation styles
47

Su, Che-Chun. „Applied statistical modeling of three-dimensional natural scene data“. Thesis, 2014. http://hdl.handle.net/2152/24878.

Full text of the source
Annotation:
Natural scene statistics (NSS) have played an increasingly important role in both our understanding of the function and evolution of the human vision system, and in the development of modern image processing applications. Because depth/range, i.e., egocentric distance, is arguably the most important thing a visual system must compute (from an evolutionary perspective), the joint statistics between natural image and depth/range information are of particular interest. However, while there exist regular and reliable statistical models of two-dimensional (2D) natural images, there has been little work done on statistical modeling of natural luminance/chrominance and depth/disparity, and of their mutual relationships. One major reason is the dearth of high-quality three-dimensional (3D) image and depth/range databases. To facilitate research progress on 3D natural scene statistics, this dissertation first presents a high-quality database of color images and accurately co-registered depth/range maps, acquired using an advanced laser range scanner mounted with a high-end digital single-lens reflex camera. By utilizing this high-resolution, high-quality database, this dissertation performs reliable and robust statistical modeling of natural image and depth/disparity information, including new bivariate and spatial oriented correlation models. In particular, these new statistical models capture higher-order dependencies embedded in spatially adjacent bandpass responses projected from natural environments, which have not yet been well understood or explored in the literature. To demonstrate the efficacy and effectiveness of the advanced NSS models, this dissertation addresses two challenging yet very important problems: depth estimation from monocular images and no-reference stereoscopic/3D (S3D) image quality assessment.
A Bayesian depth estimation framework is proposed to consider the canonical depth/range patterns in natural scenes, and it forms priors and likelihoods using both univariate and bivariate NSS features. The no-reference S3D image quality index proposed in this dissertation exploits new bivariate and correlation NSS features to quantify different types of stereoscopic distortions. Experimental results show that the proposed framework and index achieve superior performance to state-of-the-art algorithms in both disciplines.
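The divisive normalization underlying many NSS features can be sketched in one dimension (real NSS models compute 2-D Gaussian-weighted local statistics over bandpass responses; this global version is a simplification for illustration):

```python
import statistics

def mscn_1d(signal, eps=1.0):
    """1-D sketch of mean-subtracted, contrast-normalized (MSCN)
    coefficients, the divisively normalized responses whose empirical
    distributions NSS models fit. eps stabilizes flat regions."""
    mu = statistics.fmean(signal)
    sigma = statistics.pstdev(signal)
    return [(s - mu) / (sigma + eps) for s in signal]

coeffs = mscn_1d([10, 12, 9, 30, 11, 10])
# Normalized coefficients are zero-mean by construction.
assert abs(sum(coeffs)) < 1e-9
```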
APA, Harvard, Vancouver, ISO and other citation styles
48

Bhat, Shariq. „Depth Estimation Using Adaptive Bins via Global Attention at High Resolution“. Thesis, 2021. http://hdl.handle.net/10754/668894.

Full text of the source
Annotation:
We address the problem of estimating a high quality dense depth map from a single RGB input image. We start out with a baseline encoder-decoder convolutional neural network architecture and pose the question of how the global processing of information can help improve overall depth estimation. To this end, we propose a transformer-based architecture block that divides the depth range into bins whose center value is estimated adaptively per image. The final depth values are estimated as linear combinations of the bin centers. We call our new building block AdaBins. Our results show a decisive improvement over the state-of-the-art on several popular depth datasets across all metrics. We also validate the effectiveness of the proposed block with an ablation study.
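The readout step described above, depth as a linear combination of adaptively estimated bin centers, can be sketched as follows (the transformer block that predicts the bins per image is omitted; all values are illustrative):

```python
import math

def adabins_depth(bin_centers, logits):
    """AdaBins-style readout for one pixel: softmax the per-bin scores
    and take the probability-weighted sum of the adaptively predicted
    bin centers. A sketch of the readout only, not the full network."""
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(p * c for p, c in zip(probs, bin_centers))

# One pixel, four adaptive bins (illustrative depths in meters).
d = adabins_depth([0.5, 1.2, 2.8, 6.0], [0.1, 2.0, 0.4, -1.0])
assert 0.5 < d < 6.0  # a convex combination stays within the bin range
```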
APA, Harvard, Vancouver, ISO and other citation styles
49

Sun, Wei-Chih, and 孫偉智. „Using STM to Estimate Depth Map of A Scene from Two Different Defocused Images and Hardware Implementation“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/33709215779925821091.

Full text of the source
Annotation:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
98
Three-dimensional television (3D-TV) is the future direction of television development, and much research focuses on it. We believe that three-dimensional (stereoscopic) television will replace high-definition television (HD-TV). Recently, an advanced 3D-TV system has been built on a technology called Depth Image-Based Rendering (DIBR), also known as 2D-plus-depth. This representation is generally considered more efficient for coding, storage, transmission, and rendering than the traditional 3D video representation, which transmits separate left and right images to the receiver. Among the approaches to 3D depth recovery based on the focus cue, the main ones are depth from focus (DFF) and depth from defocus (DFD). We chose the spatial domain transform method (STM) [13] to estimate depth from differently defocused images because STM is simpler and more direct than other methods. We developed the hardware architecture of the STM algorithm in Verilog HDL and implemented a prototype on a Xilinx FPGA board. To acquire images, differently defocused images were recorded by applying different voltages to a liquid-crystal-lens camera. We then estimated a depth map from the blur degree of the images using the STM algorithm, and viewed the experimental results on a three-dimensional display.
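The 2D-plus-depth (DIBR) representation mentioned above renders virtual views by shifting each pixel according to its depth. A minimal one-scan-line sketch (the linear nearness-to-disparity mapping and the later-write occlusion handling are simplifications, not the system's actual warping):

```python
def dibr_shift(row, nearness_row, max_disp):
    """Synthesize one scan line of a virtual view by shifting each
    pixel left by a disparity proportional to its nearness (inverse
    depth) in [0, 1]: nearer pixels shift more. Positions no source
    pixel lands on remain None (disocclusion holes a real DIBR
    pipeline would inpaint)."""
    out = [None] * len(row)
    for x, (pix, near) in enumerate(zip(row, nearness_row)):
        disp = round(near * max_disp)
        if 0 <= x - disp < len(out):
            out[x - disp] = pix  # later (nearer) writes overwrite earlier ones
    return out

# Two far pixels (nearness 0) stay put; two near pixels shift 2 left,
# occluding them and leaving holes on the right of the line.
line = dibr_shift([10, 20, 30, 40], [0.0, 0.0, 1.0, 1.0], 2)
```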
APA, Harvard, Vancouver, ISO and other citation styles
50

WANG, WEI-HSIANG, and 王煒翔. „On the Relative Depth Estimation Techniques Used for Static Scene Videos Captured by Moving a Single Camera Lens“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/zfc8ma.

Full text of the source
Annotation:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
103
In recent years, depth maps have been used extensively. The real-time depth maps captured by Kinect make it easy to track human motion, which is very important in human–computer interaction. Since the rise of the smartphone, static-scene depth maps have also become popular; they are used to edit photos with special effects. To obtain better static-scene depth maps, some smartphone makers have spared no effort to produce phones with dual cameras, but most companies need to cut costs, so producing static-scene depth maps with a single camera is all the more important. Most popular photo effects need only relative depth information rather than absolute depth, so this thesis estimates relative depth. Our approach is based on the observation that the distance an object moves across video frames reflects its depth: it is like sitting in a moving car, where objects close to us sweep past quickly while the sun never seems to move. We record a video with a single camera, jiggling it vertically and/or horizontally while recording. We detect keypoints in the video and match them across frames using the Scale-Invariant Feature Transform (SIFT); the displacement of each pair of matched keypoints carries the depth information. Image segmentation divides the image into blocks, and filling each block with the depth of the keypoints it contains extends the depth information to the whole image. Our experiments show that scenes with complex backgrounds yield more accurate results.
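The core idea, that a larger keypoint displacement between frames implies a nearer point, can be sketched as follows (SIFT detection and matching are assumed already done; only the relative-depth readout is shown, with illustrative coordinates):

```python
import math

def relative_depths(matches):
    """Relative depth from keypoint motion: for each matched pair
    ((x1, y1), (x2, y2)) across two frames of a jiggled camera,
    a larger displacement implies a nearer point. Values are
    normalized to (0, 1], with 1.0 = nearest (largest motion)."""
    disps = [math.dist(p, q) for p, q in matches]
    top = max(disps)
    return [d / top for d in disps]

matches = [((10, 10), (18, 10)),   # moves 8 px -> nearest
           ((50, 40), (52, 40)),   # moves 2 px
           ((90, 70), (91, 70))]   # moves 1 px -> farthest
print(relative_depths(matches))  # -> [1.0, 0.25, 0.125]
```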
APA, Harvard, Vancouver, ISO and other citation styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography