Academic literature on the topic 'Scene depth'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Scene depth.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Scene depth"
Fernandes, Suzette, and Monica S. Castelhano. "The Foreground Bias: Initial Scene Representations Across the Depth Plane." Psychological Science 32, no. 6 (May 21, 2021): 890–902. http://dx.doi.org/10.1177/0956797620984464.
Nefs, Harold T. "Depth of Field Affects Perceived Depth-width Ratios in Photographs of Natural Scenes." Seeing and Perceiving 25, no. 6 (2012): 577–95. http://dx.doi.org/10.1163/18784763-00002400.
Chlubna, T., T. Milet, and P. Zemčík. "Real-time per-pixel focusing method for light field rendering." Computational Visual Media 7, no. 3 (February 27, 2021): 319–33. http://dx.doi.org/10.1007/s41095-021-0205-0.
Lee, Jaeho, Seungwoo Yoo, Changick Kim, and Bhaskaran Vasudev. "Estimating Scene-Oriented Pseudo Depth With Pictorial Depth Cues." IEEE Transactions on Broadcasting 59, no. 2 (June 2013): 238–50. http://dx.doi.org/10.1109/tbc.2013.2240131.
Sauer, Craig W., Myron L. Braunstein, Asad Saidpour, and George J. Andersen. "Propagation of Depth Information from Local Regions in 3-D Scenes." Perception 31, no. 9 (September 2002): 1047–59. http://dx.doi.org/10.1068/p3261.
Torralba, A., and A. Oliva. "Depth perception from familiar scene structure." Journal of Vision 2, no. 7 (March 14, 2010): 494. http://dx.doi.org/10.1167/2.7.494.
Groen, Iris I. A., Sennay Ghebreab, Victor A. F. Lamme, and H. Steven Scholte. "The time course of natural scene perception with reduced attention." Journal of Neurophysiology 115, no. 2 (February 1, 2016): 931–46. http://dx.doi.org/10.1152/jn.00896.2015.
Qiu, Yue, Yutaka Satoh, Ryota Suzuki, Kenji Iwata, and Hirokatsu Kataoka. "Indoor Scene Change Captioning Based on Multimodality Data." Sensors 20, no. 17 (August 23, 2020): 4761. http://dx.doi.org/10.3390/s20174761.
Warrant, Eric. "The eyes of deep-sea fishes and the changing nature of visual scenes with depth." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355, no. 1401 (September 29, 2000): 1155–59. http://dx.doi.org/10.1098/rstb.2000.0658.
Madhuanand, L., F. Nex, and M. Y. Yang. "Deep Learning for Monocular Depth Estimation from UAV Images." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 451–58. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-451-2020.
Dissertations / Theses on the topic "Scene depth"
Oliver, Parera Maria. "Scene understanding from image and video: segmentation, depth configuration." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663870.
This thesis aims to analyze images and videos at the object level, with the goal of decomposing the scene into complete objects that move and interact with one another. The thesis is divided into three parts. First, we propose a segmentation method to decompose the scene into its constituent shapes. Next, we propose a probabilistic method that considers the shapes or objects at two different scene depths and infers which objects are in front of the others, also completing the partially occluded objects. Finally, we propose two methods related to video inpainting. On the one hand, we propose a method for binary video inpainting that uses the video's optical flow to complete shapes over time, taking their motion into account. On the other hand, we propose a method for optical-flow inpainting that takes into account the information coming from the frames.
Mitra, Bhargav Kumar. "Scene segmentation using similarity, motion and depth based cues." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.
Malleson, Charles D. "Dynamic scene modelling and representation from video and depth." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/809990/.
Stynsberg, John. "Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.
Elezovikj, Semir. "Foreground and Scene Structure Preserved Visual Privacy Protection Using Depth Information." Master's thesis, Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/259533.
We propose the use of depth information to protect privacy in person-aware visual systems while preserving important foreground subjects and scene structures. We aim to preserve the identity of foreground subjects while hiding superfluous details in the background that may contain sensitive information. We achieve this goal by using the depth information and human-detection mechanisms provided by the Kinect sensor. In particular, for an input color and depth image pair, we first create a sensitivity map that favors background regions (where privacy should be preserved) and low depth-gradient pixels (which often relate strongly to scene structure but little to identity). We then combine this per-pixel sensitivity map with an inhomogeneous image obscuration process for privacy protection. We tested the proposed method on data covering different scenarios, including various illumination conditions, numbers of subjects, and contexts. The experiments demonstrate the quality of preserving the identity of humans and the edges obtained from the depth information while obscuring privacy-intrusive information in the background.
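The sensitivity-map idea in the abstract above can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the thesis's actual implementation: the weight `alpha` and the blend-toward-mean obscuration are choices made for this example only.

```python
import numpy as np

def sensitivity_map(depth, foreground_mask, alpha=0.5):
    """Illustrative per-pixel sensitivity: high where obscuration should be
    strong. Background pixels and low depth-gradient pixels score high;
    `alpha` (an assumed weight) trades off the two cues."""
    gy, gx = np.gradient(depth.astype(float))
    grad = np.hypot(gx, gy)
    grad = grad / (grad.max() + 1e-8)              # normalize to [0, 1]
    background = (~foreground_mask).astype(float)  # 1 where privacy matters
    s = alpha * (1.0 - grad) + (1.0 - alpha) * background
    return np.clip(s, 0.0, 1.0)

def obscure(rgb, s):
    """Crude inhomogeneous obscuration: blend each pixel toward the global
    image mean in proportion to its sensitivity."""
    mean = rgb.reshape(-1, rgb.shape[-1]).mean(axis=0)
    return (1.0 - s[..., None]) * rgb + s[..., None] * mean
```

On a flat-depth scene this assigns sensitivity 1.0 to background pixels and `alpha` to foreground pixels, so the background is obscured more strongly while foreground identity survives.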
Quiroga, Sepúlveda Julián. "Scene Flow Estimation from RGBD Images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM057/document.
This thesis addresses the problem of reliably recovering a 3D motion field, or scene flow, from a temporal pair of RGBD images. We propose a semi-rigid estimation framework for the robust computation of scene flow, taking advantage of color and depth information, and an alternating variational minimization framework for recovering the rigid and non-rigid components of the 3D motion field. Previous attempts to estimate scene flow from RGBD images have extended optical flow approaches without fully exploiting depth data, or have formulated the estimation in 3D space while disregarding the semi-rigidity of real scenes. We demonstrate that scene flow can be robustly and accurately computed in the image domain by solving for 3D motions consistent with color and depth, encouraging an adjustable combination of local and piecewise rigidity. Additionally, we show that solving for the 3D motion field can be seen as a specific case of a more general estimation problem of a 6D field of rigid motions. Accordingly, we formulate scene flow estimation as the search for an optimal field of twist motions, achieving state-of-the-art results.
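The abstract's view of scene flow as a field of rigid (twist) motions can be illustrated with a minimal NumPy sketch: back-project pixels to 3D through the pinhole model, then apply a small rigid motion (ω, v) to obtain per-point 3D displacement. The small-angle approximation dX ≈ ω × X + v is standard; everything else here is a simplified assumption for illustration, not the thesis's estimation method.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift each pixel (u, v) with depth Z to a 3D point via the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)  # shape (h, w, 3)

def twist_displacement(points, omega, vel):
    """Small-motion (twist) approximation: dX = omega x X + v.
    A per-pixel field of such twists generalizes a plain 3D scene-flow field."""
    omega = np.asarray(omega, dtype=float)
    vel = np.asarray(vel, dtype=float)
    return np.cross(np.broadcast_to(omega, points.shape), points) + vel
```

A pure translation (ω = 0) yields a constant flow field equal to v, while a rotation about the z-axis displaces a point on the x-axis toward +y, as expected from the cross product.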
Forne, Christopher Jes. "3-D Scene Reconstruction from Multiple Photometric Images." Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1227.
Rehfeld, Timo [author], Stefan Roth [academic supervisor], and Carsten Rother [academic supervisor]. "Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding / Timo Rehfeld ; Stefan Roth, Carsten Rother." Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1157011950/34.
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving." Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and of fusing heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to map sensor input (camera images) directly to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time remains very long, which is why we focus on perception and study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection. To do so, we compute image features from multiple views with a 2D CNN and lift them all into a global 3D point cloud for fusion, followed by a point-based network that predicts 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
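The first fusion step described in the abstract, projecting a LiDAR point cloud into image space to obtain a sparse depth map, can be sketched as follows. The z-buffer loop and the convention that 0 marks missing depth are assumptions made for this example, not details taken from the thesis.

```python
import numpy as np

def lidar_to_sparse_depth(points_cam, K, h, w):
    """Project 3D points (already in the camera frame) through intrinsics K
    into an h x w sparse depth map, keeping the nearest point per pixel."""
    pts = points_cam[points_cam[:, 2] > 0]  # keep points in front of the camera
    uvw = pts @ K.T                          # pinhole projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.full((h, w), np.inf)
    for ui, vi, zi in zip(u[valid], v[valid], pts[valid, 2]):
        if zi < depth[vi, ui]:               # z-buffer: nearest point wins
            depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0             # 0 marks pixels with no LiDAR return
    return depth
```

The resulting map is dense in image coordinates but sparse in valid values, which is exactly what makes depth completion (fusing it with dense RGB) a useful downstream task.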
Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.
Full textBooks on the topic "Scene depth"
Maloney, Michael S. Death Scene Investigation. 2nd ed. Boca Raton: CRC Press, 2018. http://dx.doi.org/10.1201/9781315107271.
Wolf, S. V. Death Scent. [U.S.]: Black Rose Writing, 2012.
Ernst, Mary Fran, ed. Handbook for Death Scene Investigators. Boca Raton, Fla.: CRC Press, 1999.
Scent of Death. London: Collins, 1985.
Scent of Death. [Place of publication not identified]: Outskirts Press, 2015.
Page, Emma. Scent of Death. Toronto: Worldwide, 1989.
Scent of Death. Garden City, N.Y.: Published for the Crime Club by Doubleday, 1986.
Death Scene Investigations: A Field Guide. Boca Raton: Taylor & Francis, 2008.
Islam, Khwaja Muhammad. The Scene of Death and What Happens after Death. New Delhi: Islamic Book Service, 1991.
Allom, Elizabeth Anne. Death Scenes and Other Poems. Hackney: Caleb Turner, Church Street; and Simpkin and Marshall, Stationers' Court, London, 1988.
Book chapters on the topic "Scene depth"
Arnspang, Jens, Knud Henriksen, and Fredrik Bergholm. "Relating Scene Depth to Image Ratios." In Computer Analysis of Images and Patterns, 516–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48375-6_62.
Mori, Hironori, Roderick Köhle, and Markus Kamm. "Scene Depth Profiling Using Helmholtz Stereopsis." In Computer Vision – ECCV 2016, 462–76. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46448-0_28.
Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "Scene Segmentation Assisted by Depth Data." In Time-of-Flight and Structured Light Depth Cameras, 199–230. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_6.
Fernández, Miguel A., José M. López-Valles, Antonio Fernández-Caballero, María T. López, José Mira, and Ana E. Delgado. "Permanency Memories in Scene Depth Analysis." In Lecture Notes in Computer Science, 531–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11556985_69.
Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "3D Scene Reconstruction from Depth Camera Data." In Time-of-Flight and Structured Light Depth Cameras, 231–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_7.
Zheng, Yingbin, Jian Pu, Hong Wang, and Hao Ye. "Indoor Scene Classification by Incorporating Predicted Depth Descriptor." In Advances in Multimedia Information Processing – PCM 2017, 13–23. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77383-4_2.
Pillai, Ignazio, Riccardo Satta, Giorgio Fumera, and Fabio Roli. "Exploiting Depth Information for Indoor-Outdoor Scene Classification." In Image Analysis and Processing – ICIAP 2011, 130–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24088-1_14.
Mutto, Carlo Dal, Pietro Zanuttigh, and Guido M. Cortelazzo. "Scene Segmentation and Video Matting Assisted by Depth Data." In Time-of-Flight Cameras and Microsoft Kinect™, 93–105. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-3807-6_6.
Jiang, Huaizu, Gustav Larsson, Michael Maire, Greg Shakhnarovich, and Erik Learned-Miller. "Self-Supervised Relative Depth Learning for Urban Scene Understanding." In Computer Vision – ECCV 2018, 20–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01252-6_2.
Fukuoka, Mamiko, Shun’ichi Doi, Takahiko Kimura, and Toshiaki Miura. "Measurement of Depth Attention of Driver in Frontal Scene." In Engineering Psychology and Cognitive Ergonomics, 376–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02728-4_40.
Conference papers on the topic "Scene depth"
Alazawi, E., A. Aggoun, M. Abbod, M. R. Swash, O. Abdul Fatah, and J. Fernandez. "Scene depth extraction from Holoscopic Imaging technology." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676640.
Jin, Bo, Leandro Cruz, and Nuno Goncalves. "Face Depth Prediction by the Scene Depth." In 2021 IEEE/ACIS 19th International Conference on Computer and Information Science (ICIS). IEEE, 2021. http://dx.doi.org/10.1109/icis51600.2021.9516598.
Chen, Xiaotian, Xuejin Chen, and Zheng-Jun Zha. "Structure-Aware Residual Pyramid Network for Monocular Depth Estimation." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/98.
Zhang, Wendong, Feng Gao, Bingbing Ni, Lingyu Duan, Yichao Yan, Jingwei Xu, and Xiaokang Yang. "Depth Structure Preserving Scene Image Generation." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240508.3240584.
Gillsjo, David, and Kalle Astrom. "In Depth Bayesian Semantic Scene Completion." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412403.
Chen, Lei, Zongqing Lu, Qingmin Liao, Haoyu Ma, and Jing-Hao Xue. "Disparity Estimation with Scene Depth Cues." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428216.
Huang, Yea-Shuan, Fang-Hsuan Cheng, and Yun-Hui Liang. "Creating Depth Map from 2D Scene Classification." In 2008 3rd International Conference on Innovative Computing Information and Control. IEEE, 2008. http://dx.doi.org/10.1109/icicic.2008.205.
Sun, Yu-fei, Rui-dong Tang, Shao-hui Qian, Chuan-ruo Yu, Yu-jin Shi, and Wei-yu Yu. "Scene depth information based image saliency detection." In 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2015. http://dx.doi.org/10.1109/iaeac.2015.7428553.
Letouzey, Antoine, Benjamin Petit, and Edmond Boyer. "Scene Flow from Depth and Color Images." In British Machine Vision Conference 2011. British Machine Vision Association, 2011. http://dx.doi.org/10.5244/c.25.46.
Ess, Andreas, Bastian Leibe, and Luc Van Gool. "Depth and Appearance for Mobile Scene Analysis." In 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007. http://dx.doi.org/10.1109/iccv.2007.4409092.
Reports on the topic "Scene depth"
Lieutenant suffers sudden cardiac death at scene of a brush fire - Missouri. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, March 2010. http://dx.doi.org/10.26616/nioshfffacef201001.
Driver/engineer suffers sudden cardiac death at scene of motor vehicle crash - Georgia. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, August 2013. http://dx.doi.org/10.26616/nioshfffacef201318.
Lieutenant suffers sudden cardiac death at the scene of a structure fire - South Carolina. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, September 2005. http://dx.doi.org/10.26616/nioshfffacef200514.