Academic literature on the topic 'Scene depth'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Scene depth.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Scene depth"

1

Fernandes, Suzette, and Monica S. Castelhano. "The Foreground Bias: Initial Scene Representations Across the Depth Plane." Psychological Science 32, no. 6 (May 21, 2021): 890–902. http://dx.doi.org/10.1177/0956797620984464.

Full text
Abstract:
When you walk into a large room, you perceive visual information that is both close to you in depth and farther in the background. Here, we investigated how initial scene representations are affected by information across depth. We examined the role of background and foreground information on scene gist by using chimera scenes (images with a foreground and background from different scene categories). Across three experiments, we found a foreground bias: Information in the foreground initially had a strong influence on the interpretation of the scene. This bias persisted when the initial fixation position was on the scene background and when the task was changed to emphasize scene information. We concluded that the foreground bias arises from initial processing of scenes for understanding and suggests that scene information closer to the observer is initially prioritized. We discuss the implications for theories of scene and depth perception.
APA, Harvard, Vancouver, ISO, and other styles
2

Nefs, Harold T. "Depth of Field Affects Perceived Depth-width Ratios in Photographs of Natural Scenes." Seeing and Perceiving 25, no. 6 (2012): 577–95. http://dx.doi.org/10.1163/18784763-00002400.

Full text
Abstract:
The aim of the study was to find out how much influence depth of field has on the perceived ratio of depth and width in photographs of natural scenes. Depth of field is roughly defined as the distance range that is perceived as sharp in the photograph. Four different semi-natural scenes consisting of a central and two flanking figurines were used. For each scene, five series of photos were made, in which the distance in depth between the central figurine and the flanking figurines increased. These series of photographs had different amounts of depth of field. In the first experiment participants adjusted the position of the two flanking figurines relative to a central figurine, until the perceived distance in the depth dimension equaled the perceived lateral distance between the two flanking figurines. Viewing condition was either monocular or binocular (non-stereo). In the second experiment, the participants did the same task but this time we varied the viewing distance. We found that the participants’ depth/width settings increased with increasing depth of field. As depth of field increased, the perceived depth in the scene was reduced relative to the perceived width. Perceived depth was reduced relative to perceived width under binocular viewing conditions compared to monocular viewing conditions. There was a greater reduction when the viewing distance was increased. As photographs of natural scenes contain many highly redundant or conflicting depth cues, we conclude therefore that local image blur is an important cue to depth. Moreover, local image blur is not only taken into account in the perception of egocentric distances, but also affects the perception of depth within the scene relative to lateral distances within the scene.
APA, Harvard, Vancouver, ISO, and other styles
3

Chlubna, T., T. Milet, and P. Zemčík. "Real-time per-pixel focusing method for light field rendering." Computational Visual Media 7, no. 3 (February 27, 2021): 319–33. http://dx.doi.org/10.1007/s41095-021-0205-0.

Full text
Abstract:
Light field rendering is an image-based rendering method that does not use 3D models but only images of the scene as input to render new views. Light field approximation, represented as a set of images, suffers from so-called refocusing artifacts due to different depth values of the pixels in the scene. Without information about depths in the scene, proper focusing of the light field scene is limited to a single focusing distance. The correct focusing method is addressed in this work and a real-time solution is proposed for focusing of light field scenes, based on statistical analysis of the pixel values contributing to the final image. Unlike existing techniques, this method does not need precomputed or acquired depth information. Memory requirements and streaming bandwidth are reduced and real-time rendering is possible even for high resolution light field data, yielding visually satisfactory results. Experimental evaluation of the proposed method, implemented on a GPU, is presented in this paper.
APA, Harvard, Vancouver, ISO, and other styles
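To make the statistical per-pixel focusing idea in the entry above concrete, here is a minimal NumPy sketch. It assumes a small 4D light field array and integer pixel shifts, and it illustrates a generic variance-based focus measure rather than the authors' real-time GPU implementation; the function and parameter names are hypothetical.

```python
import numpy as np

def per_pixel_refocus(lf, slopes):
    """Per-pixel refocusing of a 4D light field lf with shape (U, V, H, W).

    For each candidate disparity slope, every view is shifted toward the
    central view and the variance of the contributing samples serves as a
    focus measure: for each pixel, the slope with the lowest variance wins
    and the mean of the samples at that slope becomes the output value.
    Integer shifts via np.roll keep the sketch short; a real implementation
    would interpolate sub-pixel shifts.
    """
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    best_var = np.full((H, W), np.inf)
    refocused = np.zeros((H, W))
    for d in slopes:
        stack = np.empty((U * V, H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * d))
                dx = int(round((v - cv) * d))
                stack[u * V + v] = np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
        mean = stack.mean(axis=0)
        var = stack.var(axis=0)
        better = var < best_var
        best_var[better] = var[better]
        refocused[better] = mean[better]
    return refocused

# example with synthetic data:
# out = per_pixel_refocus(np.random.rand(5, 5, 64, 64), slopes=np.linspace(-2.0, 2.0, 9))
```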
4

Lee, Jaeho, Seungwoo Yoo, Changick Kim, and Bhaskaran Vasudev. "Estimating Scene-Oriented Pseudo Depth With Pictorial Depth Cues." IEEE Transactions on Broadcasting 59, no. 2 (June 2013): 238–50. http://dx.doi.org/10.1109/tbc.2013.2240131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sauer, Craig W., Myron L. Braunstein, Asad Saidpour, and George J. Andersen. "Propagation of Depth Information from Local Regions in 3-D Scenes." Perception 31, no. 9 (September 2002): 1047–59. http://dx.doi.org/10.1068/p3261.

Full text
Abstract:
The effects of regions with local linear perspective on judgments of the depth separation between two objects in a scene were investigated for scenes consisting of a ground plane, a quadrilateral region, and two poles separated in depth. The poles were either inside or outside the region. Two types of displays were used: motion-parallax dot displays, and a still photograph of a real scene on which computer-generated regions and objects were superimposed. Judged depth separations were greater for regions with greater linear perspective, both for objects inside and outside the region. In most cases, the effect of the region's shape was reduced for objects outside the region. Some systematic differences were found between the two types of displays. For example, adding a region with any shape increased judged depth in motion-parallax displays, but only high-perspective regions increased judged depth in real-scene displays. We conclude that depth information present in local regions affects perceived depth within the region, and that these effects propagate, to a lesser degree, outside the region.
APA, Harvard, Vancouver, ISO, and other styles
6

Torralba, A., and A. Oliva. "Depth perception from familiar scene structure." Journal of Vision 2, no. 7 (March 14, 2010): 494. http://dx.doi.org/10.1167/2.7.494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Groen, Iris I. A., Sennay Ghebreab, Victor A. F. Lamme, and H. Steven Scholte. "The time course of natural scene perception with reduced attention." Journal of Neurophysiology 115, no. 2 (February 1, 2016): 931–46. http://dx.doi.org/10.1152/jn.00896.2015.

Full text
Abstract:
Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention.
APA, Harvard, Vancouver, ISO, and other styles
8

Qiu, Yue, Yutaka Satoh, Ryota Suzuki, Kenji Iwata, and Hirokatsu Kataoka. "Indoor Scene Change Captioning Based on Multimodality Data." Sensors 20, no. 17 (August 23, 2020): 4761. http://dx.doi.org/10.3390/s20174761.

Full text
Abstract:
This study proposes a framework for describing a scene change using natural language text based on indoor scene observations conducted before and after a scene change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes. Most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometry primitives, making them unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that the models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust for change type understanding for datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding.
APA, Harvard, Vancouver, ISO, and other styles
9

Warrant, Eric. "The eyes of deep-sea fishes and the changing nature of visual scenes with depth." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355, no. 1401 (September 29, 2000): 1155–59. http://dx.doi.org/10.1098/rstb.2000.0658.

Full text
Abstract:
The visual scenes viewed by ocean animals change dramatically with depth. In the brighter epipelagic depths, daylight provides an extended field of illumination. In mesopelagic depths down to 1000 m the visual scene is semi-extended, with the downwelling daylight providing increasingly dim extended illumination with depth. In contrast, greater depths increase the prominence of point-source bioluminescent flashes. In bathypelagic depths (below 1000 m) daylight no longer penetrates, and the visual scene consists exclusively of point-source bioluminescent flashes. In this paper, I show that the eyes of fishes match this change from extended to point-source illumination, becoming increasingly foveate and spatially acute with increasing depth. A sharp fovea is optimal for localizing point sources. Quite contrary to their reputation as ‘degenerate’ and ‘regressed’, I show here that the remarkably prominent foveae and relatively large pupils of bathypelagic fishes give them excellent perception and localization of bioluminescent flashes up to a few tens of metres distant. In a world with almost no food, where fishes are weak and must swim very slowly, this range of detection (and interception) is energetically realistic, with distances greater than this physically beyond range. Larger and more sensitive eyes would give bathypelagic fishes little more than the useless ability to see flashes beyond reach.
APA, Harvard, Vancouver, ISO, and other styles
10

Madhuanand, L., F. Nex, and M. Y. Yang. "DEEP LEARNING FOR MONOCULAR DEPTH ESTIMATION FROM UAV IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 451–58. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-451-2020.

Full text
Abstract:
Depth is an essential component for various scene understanding tasks and for reconstructing the 3D geometry of the scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has been a topic of interest with the recent advancements in computer vision and deep learning techniques. This research has been widely focused on indoor scenes or outdoor scenes captured at ground level. Single image depth estimation from aerial images has been limited due to additional complexities arising from increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features and points of view. The single image depth estimation is based on image reconstruction techniques which use stereo images for learning to estimate depth from single images. Among the various available models for ground-level single image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from aerial images from UAVs. These models generate pixel-wise disparity images which can be converted into depth information. The generated disparity maps from these models are evaluated for their internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. It is found that the CNN model performs better than the GAN and produces depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
APA, Harvard, Vancouver, ISO, and other styles
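The conversion from the predicted pixel-wise disparity maps to depth mentioned in the entry above follows the standard stereo relation depth = focal length × baseline / disparity. The sketch below is a generic illustration of that step, not the authors' pipeline; the focal length and baseline values in the usage comment are made up.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (in pixels) to metric depth (in metres).

    depth = focal_length * baseline / disparity; pixels with (near-)zero
    disparity are left as NaN because their depth is undefined.
    """
    depth = np.full(disparity.shape, np.nan, dtype=np.float64)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# usage with hypothetical camera parameters:
# depth_m = disparity_to_depth(pred_disparity, focal_px=1200.0, baseline_m=0.3)
```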

Dissertations / Theses on the topic "Scene depth"

1

Oliver, Parera Maria. "Scene understanding from image and video: segmentation, depth configuration." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663870.

Full text
Abstract:
In this thesis we aim at analyzing images and videos at the object level, with the goal of decomposing the scene into complete objects that move and interact among themselves. The thesis is divided into three parts. First, we propose a segmentation method to decompose the scene into shapes. Then, we propose a probabilistic method, which works with shapes or objects at two different depths, to infer which objects are in front of the others, while completing the ones which are partially occluded. Finally, we propose two video-related inpainting methods. On the one hand, we propose a binary video inpainting method that relies on the optical flow of the video in order to complete the shapes across time, taking into account their motion. On the other hand, we propose an optical flow inpainting method that takes into account the information from the frames.
APA, Harvard, Vancouver, ISO, and other styles
2

Mitra, Bhargav Kumar. "Scene segmentation using similarity, motion and depth based cues." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.

Full text
Abstract:
Segmentation of complex scenes to aid surveillance is still considered an open research problem. In this thesis a computational model (CM) has been developed to classify a scene into foreground, moving-shadow and background regions. It has been demonstrated how the CM, with the optional use of a channel ratio test, can be applied to demarcate foreground shadow regions in indoor scenes illuminated by a fixed incandescent source of light. A combined approach, involving the CM working in tandem with a traditional motion cue based segmentation method, has also been constructed. In the combined approach, the CM is applied to segregate the foreground shaded regions in a current frame based on a binary mask generated using a standard background subtraction process (BSP). Various popular outlier detection strategies have been investigated to assess their suitability for automatically generating the threshold required to derive a binary mask from a difference frame, the outcome of the BSP. To evaluate the full scope of the pixel labeling capabilities of the CM and to estimate the associated time constraints, the model is deployed for foreground scene segmentation in recorded real-life video streams. The observations made validate the satisfactory performance of the model in most cases. In the second part of the thesis, depth based cues have been exploited to perform the task of foreground scene segmentation. An active structured light based depth-estimating arrangement has been modeled in the thesis; the choice of modeling an active system over a passive stereovision one has been made to alleviate some of the difficulties associated with the classical correspondence problem. The model developed not only facilitates use of the set-up but also makes possible a method to increase the working volume of the system without explicitly encoding the projected structured pattern. Finally, it is explained how scene segmentation can be accomplished based solely on the structured pattern disparity information, without generating explicit depth maps. To de-noise the difference frames generated using the developed method, two median filtering schemes have been implemented. The working of one of the schemes is advocated for practical use and is described in terms of discrete morphological operators, thus facilitating hardware realisation of the method to speed up the de-noising process.
APA, Harvard, Vancouver, ISO, and other styles
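The thesis above evaluates outlier-detection strategies for automatically thresholding a background-subtraction difference frame. As one generic example of such a strategy (not necessarily the one the thesis advocates), a median-plus-MAD rule can be sketched as follows.

```python
import numpy as np

def foreground_mask(frame, background, k=3.0):
    """Binary foreground mask from a background subtraction difference frame.

    The threshold is chosen automatically with a robust outlier rule:
    median + k * (scaled MAD) of the absolute difference image.
    """
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    med = np.median(diff)
    mad = np.median(np.abs(diff - med))
    threshold = med + k * 1.4826 * mad  # 1.4826 makes MAD comparable to a std
    return diff > threshold
```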
3

Malleson, Charles D. "Dynamic scene modelling and representation from video and depth." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/809990/.

Full text
Abstract:
Recent advances in sensor technology have introduced low-cost video+depth sensors, such as the Microsoft Kinect, which enable simultaneous acquisition of colour and depth images at video rates. The aim of this research is to investigate representations which support integration of noisy, partial surface measurements over time to form more complete, temporally coherent models of dynamic scenes with enhanced detail and reduced noise. The initial focus of this work is on the restricted case of rigid geometry for which online GPU-accelerated volumetric fusion is implemented and tested. An alternative fusion approach based on dense surface elements (surfels) is also explored and compared to the volumetric approach. As a first step towards handling non-rigid scenes, the static volumetric approach is extended to treat articulated (semi-rigid) geometry with a focus on humans. The human body is segmented into piece-wise rigid volumetric parts and part tracking is aided by depth-based skeletal motion data. To address scenes containing more general non-rigid geometry beyond people and isolated rigid shapes, a more flexible approach is required. A piece-wise modelling approach using a sparse surfel graph and repeated alternation between part segmentation, motion and shape estimation is proposed. The method is designed to incorporate methods for noise reduction and handling of missing data. Finally, a hybrid approach is proposed which leverages the advantages of the surfel graph segmentation and coarse surface modelling with the higher-resolution surface reconstruction capability of volumetric fusion. The hybrid method is able to produce a seamless skinned mesh structure to efficiently represent a temporally consistent dynamic scene. The hybrid framework can be considered a unification of rigid and non-rigid reconstruction techniques, for which static scenes are a special case. It allows arbitrary dynamic scenes to be efficiently represented with enhanced levels of detail and completeness where possible, but gracefully falls back to raw measurements where no structure can be inferred. The representation is shown to facilitate creative manipulation of real scene data which would previously require more complex capture setups or extensive manual processing.
APA, Harvard, Vancouver, ISO, and other styles
4

Stynsberg, John. "Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.

Full text
Abstract:
Visual tracking is a computer vision problem where the task is to follow a target through a video sequence. Tracking has many important real-world applications in several fields such as autonomous vehicles and robot vision. Since visual tracking does not assume any prior knowledge about the target, it faces different challenges such as occlusion, appearance change, background clutter and scale change. In this thesis we try to improve the capabilities of tracking frameworks using discriminative correlation filters by incorporating scene depth information. We utilize scene depth information on three main levels. First, we use raw depth information to segment the target from its surroundings, enabling occlusion detection and scale estimation. Second, we investigate different visual features calculated from depth data to decide which features are good at encoding geometric information available solely in depth data. Third, we investigate handling missing data in the depth maps using a modified version of the normalized convolution framework. Finally, we introduce a novel approach for parameter search using genetic algorithms to find the best hyperparameters for our tracking framework. Experiments show that depth data can be used to estimate scale changes and handle occlusions. In addition, visual features calculated from depth are more representative if they are combined with color features. It is also shown that utilizing normalized convolution improves the overall performance in some cases. Lastly, the usage of genetic algorithms for hyperparameter search leads to accuracy gains as well as some insights on the performance of different components within the framework.
APA, Harvard, Vancouver, ISO, and other styles
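One of the uses of raw depth described above is occlusion detection: if a large share of the search region suddenly lies clearly in front of the tracked target, the target is probably occluded. The sketch below illustrates that idea under simple assumptions (a median target depth and a fixed depth margin); it is not the thesis' actual tracker code.

```python
import numpy as np

def occlusion_score(depth_patch, target_depth, margin=0.3):
    """Fraction of valid pixels in the search region closer than the target.

    `target_depth` could be, for example, the median depth inside the previous
    bounding box; `margin` is a tolerance in the same units as the depth map.
    A score close to 1 suggests a nearer object is covering the target.
    """
    valid = np.isfinite(depth_patch) & (depth_patch > 0)
    if not valid.any():
        return 0.0
    in_front = depth_patch[valid] < (target_depth - margin)
    return float(in_front.mean())

# e.g. flag an occlusion when the score stays above ~0.4 for several frames
```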
5

Elezovikj, Semir. "FOREGROUND AND SCENE STRUCTURE PRESERVED VISUAL PRIVACY PROTECTION USING DEPTH INFORMATION." Master's thesis, Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/259533.

Full text
Abstract:
Computer and Information Science
M.S.
We propose the use of depth information to protect privacy in person-aware visual systems while preserving important foreground subjects and scene structures. We aim to preserve the identity of foreground subjects while hiding superfluous details in the background that may contain sensitive information. We achieve this goal by using depth information and relevant human detection mechanisms provided by the Kinect sensor. In particular, for an input color and depth image pair, we first create a sensitivity map which favors background regions (where privacy should be preserved) and low depth-gradient pixels (which often relate a lot to scene structure but little to identity). We then combine this per-pixel sensitivity map with an inhomogeneous image obscuration process for privacy protection. We tested the proposed method using data involving different scenarios, including various illumination conditions, varying numbers of subjects, different contexts, etc. The experiments demonstrate the quality of preserving the identity of humans and edges obtained from the depth information while obscuring privacy-intrusive information in the background.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
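The entry above combines a per-pixel sensitivity map (favouring background and low depth-gradient pixels) with an obscuration step. A minimal sketch of that combination is given below; the gradient-based weighting, the Gaussian blur used as the obscuration process, and all parameter values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def privacy_protect(rgb, depth, person_mask, blur_sigma=9.0):
    """Obscure privacy-sensitive background detail while keeping people and structure.

    rgb: float image (H, W, 3); depth: (H, W); person_mask: boolean (H, W).
    Sensitivity is high for background pixels with a weak depth gradient
    (little scene structure) and zero on detected people, and is then used as
    the blending weight toward a blurred copy of the image.
    """
    gy, gx = np.gradient(depth.astype(np.float64))
    structure = np.hypot(gx, gy)
    structure = structure / (structure.max() + 1e-6)       # 1.0 at strong depth edges
    sensitivity = (1.0 - structure) * (~person_mask)        # flat background regions
    blurred = np.stack(
        [gaussian_filter(rgb[..., c], blur_sigma) for c in range(3)], axis=-1
    )
    alpha = sensitivity[..., None]
    return (1.0 - alpha) * rgb + alpha * blurred
```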
6

Quiroga, Sepúlveda Julián. "Scene Flow Estimation from RGBD Images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM057/document.

Full text
Abstract:
This thesis addresses the problem of reliably recovering a 3D motion field, or scene flow, from a temporal pair of RGBD images. We propose a semi-rigid estimation framework for the robust computation of scene flow, taking advantage of color and depth information, and an alternating variational minimization framework for recovering rigid and non-rigid components of the 3D motion field. Previous attempts to estimate scene flow from RGBD images have extended optical flow approaches without fully exploiting depth data or have formulated the estimation in 3D space disregarding the semi-rigidity of real scenes. We demonstrate that scene flow can be robustly and accurately computed in the image domain by solving for 3D motions consistent with color and depth, encouraging an adjustable combination between local and piecewise rigidity. Additionally, we show that solving for the 3D motion field can be seen as a specific case of a more general estimation problem of a 6D field of rigid motions. Accordingly, we formulate scene flow estimation as the search for an optimal field of twist motions, achieving state-of-the-art results.
APA, Harvard, Vancouver, ISO, and other styles
7

Forne, Christopher Jes. "3-D Scene Reconstruction from Multiple Photometric Images." Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1227.

Full text
Abstract:
This thesis deals with the problem of three dimensional scene reconstruction from multiple camera images. This is a well established problem in computer vision and has been significantly researched. In recent years some excellent results have been achieved; however, existing algorithms often fall short of many biological systems in terms of robustness and generality. The aim of this research was to develop improved algorithms for reconstructing 3D scenes, with a focus on accurate system modelling and correctly dealing with occlusions. With scene reconstruction the objective is to infer scene parameters describing the 3D structure of the scene from the data given by camera images. This is an ill-posed inverse problem, where an exact solution cannot be guaranteed. The use of a statistical approach to deal with the scene reconstruction problem is introduced and the differences between maximum a posteriori (MAP) and minimum mean square error (MMSE) estimates considered. It is discussed how traditional stereo matching can be performed using a volumetric scene model. An improved model describing the relationship between the camera data and a discrete model of the scene is presented. This highlights some of the common causes of modelling errors, enabling them to be dealt with objectively. The problems posed by occlusions are considered. Using a greedy algorithm the scene is progressively reconstructed to account for visibility interactions between regions and the idea of a complete scene estimate is established. Some simple and improved techniques for reliably assigning opaque voxels are developed, making use of prior information. Problems with variations in the imaging convolution kernel between images motivate the development of a pixel dissimilarity measure. Belief propagation is then applied to better utilise prior information and obtain an improved global optimum. A new volumetric factor graph model is presented which represents the joint probability distribution of the scene and imaging system. By utilising the structure of the local compatibility functions, an efficient procedure for updating the messages is detailed. To help convergence, a novel approach of accentuating beliefs is shown. Results demonstrate the validity of this approach; however, the reconstruction error is similar to or slightly higher than that from the greedy algorithm. To simplify the volumetric model, a new approach to belief propagation is demonstrated by applying it to a dynamic model. This approach is developed as an alternative to the full volumetric model because it is less memory and computationally intensive. Using a factor graph, a volumetric known visibility model is presented which ensures the scene is complete with respect to all the camera images. Dynamic updating is also applied to a simpler single depth-map model. Results show this approach is unsuitable for the volumetric known visibility model; however, improved results are obtained with the simple depth-map model.
APA, Harvard, Vancouver, ISO, and other styles
8

Rehfeld, Timo [Verfasser], Stefan [Akademischer Betreuer] Roth, and Carsten [Akademischer Betreuer] Rother. "Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding / Timo Rehfeld ; Stefan Roth, Carsten Rother." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1157011950/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving." Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.

Full text
Abstract:
In this thesis, we address the challenges of label scarcity and fusion of heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving where a neural network is trained to directly map sensor input (camera image) to control output, which makes this strategy independent from annotations in the visual domain. We employ deep reinforcement learning where the algorithm learns from reward by interaction with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time is still very long which is why we focus on perception to study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion that enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection. Therefore, we compute image features with a 2D CNN of multiple views and then lift them all to a global 3D point cloud for fusion, followed by a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further showcase that our method is complementary to the existing uni-modal technique of pseudo-labeling
APA, Harvard, Vancouver, ISO, and other styles
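The first fusion route described above projects 3D LiDAR points into the 2D image plane to obtain a sparse depth map. A bare-bones version of that projection, assuming points already expressed in camera coordinates and a pinhole intrinsic matrix K, is sketched below; it is an illustration, not the thesis code.

```python
import numpy as np

def project_to_sparse_depth(points_cam, K, height, width):
    """Project Nx3 camera-frame points into a sparse depth map of shape (H, W).

    Uses a pinhole model; when several points land on the same pixel the
    nearest one is kept. Zero means 'no measurement'. LiDAR-to-camera
    extrinsics are assumed to have been applied beforehand.
    """
    depth_map = np.zeros((height, width))
    z = points_cam[:, 2]
    front = z > 0
    pts, z = points_cam[front], z[front]
    proj = (K @ pts.T).T                      # homogeneous image coordinates
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if depth_map[vi, ui] == 0 or zi < depth_map[vi, ui]:
            depth_map[vi, ui] = zi
    return depth_map
```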
10

Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Scene depth"

1

Maloney, Michael S. Death Scene Investigation. Second edition. Boca Raton: CRC Press, 2018. http://dx.doi.org/10.1201/9781315107271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wolf, S. V. Death scent. [U.S.]: Black Rose Writing, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ernst, Mary Fran, ed. Handbook for death scene investigators. Boca Raton, Fla: CRC Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Scent of death. London: Collins, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Scent of death. [Place of publication not identified]: Outskirts Press, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Page, Emma. Scent of death. Toronto: Worldwide, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Scent of death. Garden City, N.Y: Published for the Crime Club by Doubleday, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Death scene investigations: A field guide. Boca Raton: Taylor & Francis, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Islam, Khwaja Muhammad. The scene of death and what happens after death. New Delhi: Islamic Book Service, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Allom, Elizabeth Anne. Death scenes and other poems. Hackney: Caleb Turner, Church Street; and Simpkin and Marshall, Stationers' Court, London, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Scene depth"

1

Arnspang, Jens, Knud Henriksen, and Fredrik Bergholm. "Relating Scene Depth to Image Ratios." In Computer Analysis of Images and Patterns, 516–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48375-6_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mori, Hironori, Roderick Köhle, and Markus Kamm. "Scene Depth Profiling Using Helmholtz Stereopsis." In Computer Vision – ECCV 2016, 462–76. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46448-0_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "Scene Segmentation Assisted by Depth Data." In Time-of-Flight and Structured Light Depth Cameras, 199–230. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fernández, Miguel A., José M. López-Valles, Antonio Fernández-Caballero, María T. López, José Mira, and Ana E. Delgado. "Permanency Memories in Scene Depth Analysis." In Lecture Notes in Computer Science, 531–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11556985_69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "3D Scene Reconstruction from Depth Camera Data." In Time-of-Flight and Structured Light Depth Cameras, 231–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zheng, Yingbin, Jian Pu, Hong Wang, and Hao Ye. "Indoor Scene Classification by Incorporating Predicted Depth Descriptor." In Advances in Multimedia Information Processing – PCM 2017, 13–23. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77383-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pillai, Ignazio, Riccardo Satta, Giorgio Fumera, and Fabio Roli. "Exploiting Depth Information for Indoor-Outdoor Scene Classification." In Image Analysis and Processing – ICIAP 2011, 130–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24088-1_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mutto, Carlo Dal, Pietro Zanuttigh, and Guido M. Cortelazzo. "Scene Segmentation and Video Matting Assisted by Depth Data." In Time-of-Flight Cameras and Microsoft Kinect™, 93–105. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-3807-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jiang, Huaizu, Gustav Larsson, Michael Maire, Greg Shakhnarovich, and Erik Learned-Miller. "Self-Supervised Relative Depth Learning for Urban Scene Understanding." In Computer Vision – ECCV 2018, 20–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01252-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fukuoka, Mamiko, Shun’ichi Doi, Takahiko Kimura, and Toshiaki Miura. "Measurement of Depth Attention of Driver in Frontal Scene." In Engineering Psychology and Cognitive Ergonomics, 376–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02728-4_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Scene depth"

1

Alazawi, E., A. Aggoun, M. Abbod, M. R. Swash, O. Abdul Fatah, and J. Fernandez. "Scene depth extraction from Holoscopic Imaging technology." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jin, Bo, Leandro Cruz, and Nuno Goncalves. "Face Depth Prediction by the Scene Depth." In 2021 IEEE/ACIS 19th International Conference on Computer and Information Science (ICIS). IEEE, 2021. http://dx.doi.org/10.1109/icis51600.2021.9516598.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Xiaotian, Xuejin Chen, and Zheng-Jun Zha. "Structure-Aware Residual Pyramid Network for Monocular Depth Estimation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/98.

Full text
Abstract:
Monocular depth estimation is an essential task for scene understanding. The underlying structure of objects and stuff in a complex scene is critical to recovering accurate and visually-pleasing depth maps. Global structure conveys scene layouts, while local structure reflects shape details. Recently developed approaches based on convolutional neural networks (CNNs) significantly improve the performance of depth estimation. However, few of them take into account multi-scale structures in complex scenes. In this paper, we propose a Structure-Aware Residual Pyramid Network (SARPN) to exploit multi-scale structures for accurate depth prediction. We propose a Residual Pyramid Decoder (RPD) which expresses global scene structure in upper levels to represent layouts, and local structure in lower levels to present shape details. At each level, we propose Residual Refinement Modules (RRM) that predict residual maps to progressively add finer structures on the coarser structure predicted at the upper level. In order to fully exploit multi-scale image features, an Adaptive Dense Feature Fusion (ADFF) module, which adaptively fuses effective features from all scales for inferring structures of each scale, is introduced. Experiment results on the challenging NYU-Depth v2 dataset demonstrate that our proposed approach achieves state-of-the-art performance in both qualitative and quantitative evaluation. The code is available at https://github.com/Xt-Chen/SARPN.
APA, Harvard, Vancouver, ISO, and other styles
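The residual pyramid decoder described above predicts a coarse depth map that captures the global layout and then adds per-level residuals contributing finer structure. The toy sketch below shows only that coarse-to-fine accumulation with nearest-neighbour upsampling; in the actual network the upsampling and the residual maps are produced by learned modules.

```python
import numpy as np

def upsample2x(depth):
    """Nearest-neighbour 2x upsampling (stand-in for a learned upsampler)."""
    return np.repeat(np.repeat(depth, 2, axis=0), 2, axis=1)

def residual_pyramid_decode(coarse_depth, residuals):
    """Accumulate a depth map coarse-to-fine.

    coarse_depth: lowest-resolution estimate encoding the global scene layout.
    residuals: residual maps ordered coarse to fine, each with twice the
    resolution of the previous level, adding progressively finer structure.
    """
    depth = coarse_depth
    for res in residuals:
        depth = upsample2x(depth) + res
    return depth

# example: a 4x4 layout refined to 16x16 by two residual levels
# d = residual_pyramid_decode(np.ones((4, 4)), [np.zeros((8, 8)), np.zeros((16, 16))])
```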
4

Zhang, Wendong, Feng Gao, Bingbing Ni, Lingyu Duan, Yichao Yan, Jingwei Xu, and Xiaokang Yang. "Depth Structure Preserving Scene Image Generation." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240508.3240584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gillsjo, David, and Kalle Astrom. "In Depth Bayesian Semantic Scene Completion." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Lei, Zongqing Lu, Qingmin Liao, Haoyu Ma, and Jing-Hao Xue. "Disparity Estimation with Scene Depth Cues." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Yea-Shuan, Fang-Hsuan Cheng, and Yun-Hui Liang. "Creating Depth Map from 2D Scene Classification." In 2008 3rd International Conference on Innovative Computing Information and Control. IEEE, 2008. http://dx.doi.org/10.1109/icicic.2008.205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Yu-fei, Rui-dong Tang, Shao-hui Qian, Chuan-ruo Yu, Yu-jin Shi, and Wei-yu Yu. "Scene depth information based image saliency detection." In 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2015. http://dx.doi.org/10.1109/iaeac.2015.7428553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Letouzey, Antoine, Benjamin Petit, and Edmond Boyer. "Scene Flow from Depth and Color Images." In British Machine Vision Conference 2011. British Machine Vision Association, 2011. http://dx.doi.org/10.5244/c.25.46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ess, Andreas, Bastian Leibe, and Luc Van Gool. "Depth and Appearance for Mobile Scene Analysis." In 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007. http://dx.doi.org/10.1109/iccv.2007.4409092.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Scene depth"

1

Lieutenant suffers sudden cardiac death at scene of a brush fire - Missouri. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, March 2010. http://dx.doi.org/10.26616/nioshfffacef201001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Driver/engineer suffers sudden cardiac death at scene of motor vehicle crash - Georgia. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, August 2013. http://dx.doi.org/10.26616/nioshfffacef201318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lieutenant suffers sudden cardiac death at the scene of a structure fire - South Carolina. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, September 2005. http://dx.doi.org/10.26616/nioshfffacef200514.

Full text
APA, Harvard, Vancouver, ISO, and other styles