Contents
Selection of scholarly literature on the topic "Scene depth"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, and reports, and other scholarly sources on the topic "Scene depth".
Journal articles on the topic "Scene depth"
Fernandes, Suzette, and Monica S. Castelhano. "The Foreground Bias: Initial Scene Representations Across the Depth Plane." Psychological Science 32, no. 6 (May 21, 2021): 890–902. http://dx.doi.org/10.1177/0956797620984464.
Nefs, Harold T. "Depth of Field Affects Perceived Depth-width Ratios in Photographs of Natural Scenes." Seeing and Perceiving 25, no. 6 (2012): 577–95. http://dx.doi.org/10.1163/18784763-00002400.
Chlubna, T., T. Milet, and P. Zemčík. "Real-time per-pixel focusing method for light field rendering." Computational Visual Media 7, no. 3 (February 27, 2021): 319–33. http://dx.doi.org/10.1007/s41095-021-0205-0.
Lee, Jaeho, Seungwoo Yoo, Changick Kim, and Bhaskaran Vasudev. "Estimating Scene-Oriented Pseudo Depth With Pictorial Depth Cues." IEEE Transactions on Broadcasting 59, no. 2 (June 2013): 238–50. http://dx.doi.org/10.1109/tbc.2013.2240131.
Sauer, Craig W., Myron L. Braunstein, Asad Saidpour, and George J. Andersen. "Propagation of Depth Information from Local Regions in 3-D Scenes." Perception 31, no. 9 (September 2002): 1047–59. http://dx.doi.org/10.1068/p3261.
Torralba, A., and A. Oliva. "Depth perception from familiar scene structure." Journal of Vision 2, no. 7 (March 14, 2010): 494. http://dx.doi.org/10.1167/2.7.494.
Groen, Iris I. A., Sennay Ghebreab, Victor A. F. Lamme, and H. Steven Scholte. "The time course of natural scene perception with reduced attention." Journal of Neurophysiology 115, no. 2 (February 1, 2016): 931–46. http://dx.doi.org/10.1152/jn.00896.2015.
Qiu, Yue, Yutaka Satoh, Ryota Suzuki, Kenji Iwata, and Hirokatsu Kataoka. "Indoor Scene Change Captioning Based on Multimodality Data." Sensors 20, no. 17 (August 23, 2020): 4761. http://dx.doi.org/10.3390/s20174761.
Warrant, Eric. "The eyes of deep-sea fishes and the changing nature of visual scenes with depth." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355, no. 1401 (September 29, 2000): 1155–59. http://dx.doi.org/10.1098/rstb.2000.0658.
Madhuanand, L., F. Nex, and M. Y. Yang. "DEEP LEARNING FOR MONOCULAR DEPTH ESTIMATION FROM UAV IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 451–58. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-451-2020.
Dissertations on the topic "Scene depth"
Oliver Parera, Maria. "Scene understanding from image and video: segmentation, depth configuration." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663870.
This thesis analyzes images and videos at the object level, with the goal of decomposing the scene into complete objects that move and interact with one another. The thesis is divided into three parts. First, we propose a segmentation method to decompose the scene into its constituent shapes. Next, we propose a probabilistic method that considers the shapes or objects at two different scene depths and infers which objects lie in front of the others, also completing the partially occluded ones. Finally, we propose two methods related to video inpainting. On the one hand, we propose a method for binary video inpainting that uses the video's optical flow to complete the shapes over time, taking their motion into account. On the other hand, we propose a method for optical-flow inpainting that takes into account the information coming from the frames.
Mitra, Bhargav Kumar. "Scene segmentation using similarity, motion and depth based cues." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.
Malleson, Charles D. "Dynamic scene modelling and representation from video and depth." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/809990/.
Stynsberg, John. "Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.
Elezovikj, Semir. "FOREGROUND AND SCENE STRUCTURE PRESERVED VISUAL PRIVACY PROTECTION USING DEPTH INFORMATION." Master's thesis, Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/259533.
M.S.
We propose the use of depth information to protect privacy in person-aware visual systems while preserving important foreground subjects and scene structures. We aim to preserve the identity of foreground subjects while hiding superfluous details in the background that may contain sensitive information. We achieve this goal by using depth information and the relevant human detection mechanisms provided by the Kinect sensor. In particular, for an input color and depth image pair, we first create a sensitivity map which favors background regions (where privacy should be preserved) and low depth-gradient pixels (which often relate strongly to scene structure but little to identity). We then combine this per-pixel sensitivity map with an inhomogeneous image obscuration process for privacy protection. We tested the proposed method on data covering different scenarios, including varying illumination conditions, varying numbers of subjects, and different contexts. The experiments demonstrate that the method preserves the identity of humans and the edges obtained from the depth information while obscuring privacy-intrusive information in the background.
Temple University--Theses
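The abstract above describes a per-pixel sensitivity map (favoring background regions and low depth-gradient pixels) combined with an inhomogeneous obscuration step. As a rough NumPy illustration only, not the thesis implementation (which relies on Kinect-based human detection; the function names, weightings, and box blur here are invented for the sketch), the idea can be mocked up like this:

```python
import numpy as np

def sensitivity_map(depth):
    """Toy per-pixel sensitivity: high for background (large depth)
    and for low depth-gradient pixels. Weights are illustrative."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    gy, gx = np.gradient(d)                 # depth gradients along rows/cols
    grad = np.hypot(gx, gy)
    g = 1.0 - grad / (grad.max() + 1e-8)    # low gradient -> high sensitivity
    return 0.5 * d + 0.5 * g                # blend the two cues, stays in [0, 1]

def obscure(image, depth, strength=5):
    """Inhomogeneous obscuration: blend each pixel toward a blurred copy,
    weighted by its sensitivity (background blurs more than foreground)."""
    s = sensitivity_map(depth)[..., None]   # broadcast over color channels
    blurred = image.astype(float)
    for _ in range(strength):               # crude iterative box blur
        blurred = (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0) +
                   np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 4.0
    return (1.0 - s) * image + s * blurred
```

Pixels deemed sensitive (far from the camera, or in low-gradient regions that convey structure rather than identity) are pushed toward the blurred copy, while foreground subjects stay comparatively sharp.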
Quiroga Sepúlveda, Julián. "Scene Flow Estimation from RGBD Images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM057/document.
This thesis addresses the problem of reliably recovering a 3D motion field, or scene flow, from a temporal pair of RGBD images. We propose a semi-rigid estimation framework for the robust computation of scene flow, taking advantage of color and depth information, and an alternating variational minimization framework for recovering the rigid and non-rigid components of the 3D motion field. Previous attempts to estimate scene flow from RGBD images have extended optical flow approaches without fully exploiting depth data, or have formulated the estimation in 3D space while disregarding the semi-rigidity of real scenes. We demonstrate that scene flow can be robustly and accurately computed in the image domain by solving for 3D motions consistent with color and depth, encouraging an adjustable combination between local and piecewise rigidity. Additionally, we show that solving for the 3D motion field can be seen as a specific case of a more general estimation problem of a 6D field of rigid motions. Accordingly, we formulate scene flow estimation as the search for an optimal field of twist motions, achieving state-of-the-art results.
Forne, Christopher Jes. "3-D Scene Reconstruction from Multiple Photometric Images." Thesis, University of Canterbury, Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1227.
Rehfeld, Timo [author], Stefan Roth and Carsten Rother [academic supervisors]. "Combining Appearance, Depth and Motion for Efficient Semantic Scene Understanding / Timo Rehfeld; Stefan Roth, Carsten Rother." Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1157011950/34.
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving." Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and of fusing heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to map sensor input (camera images) directly to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator, and we propose new training strategies and reward functions for better driving and faster convergence. However, training time remains very long, which is why we focus on perception and study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps, and propose a novel encoder-decoder architecture that fuses dense RGB with sparse depth for the task of depth completion, enhancing point cloud resolution to image level. Second, we fuse directly in 3D space to prevent the information loss caused by projection: we compute image features with a 2D CNN over multiple views, lift them all to a global 3D point cloud for fusion, and apply a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.
Books on the topic "Scene depth"
Maloney, Michael S. Death Scene Investigation. 2nd ed. Boca Raton: CRC Press, 2018. http://dx.doi.org/10.1201/9781315107271.
Wolf, S. V. Death scent. [U.S.]: Black Rose Writing, 2012.
Ernst, Mary Fran, ed. Handbook for death scene investigators. Boca Raton, Fla.: CRC Press, 1999.
Scent of death. London: Collins, 1985.
Scent of Death. [Place of publication not identified]: Outskirts Press, 2015.
Page, Emma. Scent of death. Toronto: Worldwide, 1989.
Scent of death. Garden City, N.Y.: Published for the Crime Club by Doubleday, 1986.
Death scene investigations: A field guide. Boca Raton: Taylor & Francis, 2008.
Islam, Khwaja Muhammad. The scene of death and what happens after death. New Delhi: Islamic Book Service, 1991.
Allom, Elizabeth Anne. Death scenes and other poems. Hackney: Caleb Turner, Church Street; and Simpkin and Marshall, Stationers' Court, London, 1988.
Book chapters on the topic "Scene depth"
Arnspang, Jens, Knud Henriksen, and Fredrik Bergholm. "Relating Scene Depth to Image Ratios." In Computer Analysis of Images and Patterns, 516–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48375-6_62.
Mori, Hironori, Roderick Köhle, and Markus Kamm. "Scene Depth Profiling Using Helmholtz Stereopsis." In Computer Vision – ECCV 2016, 462–76. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46448-0_28.
Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "Scene Segmentation Assisted by Depth Data." In Time-of-Flight and Structured Light Depth Cameras, 199–230. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_6.
Fernández, Miguel A., José M. López-Valles, Antonio Fernández-Caballero, María T. López, José Mira, and Ana E. Delgado. "Permanency Memories in Scene Depth Analysis." In Lecture Notes in Computer Science, 531–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11556985_69.
Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "3D Scene Reconstruction from Depth Camera Data." In Time-of-Flight and Structured Light Depth Cameras, 231–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_7.
Zheng, Yingbin, Jian Pu, Hong Wang, and Hao Ye. "Indoor Scene Classification by Incorporating Predicted Depth Descriptor." In Advances in Multimedia Information Processing – PCM 2017, 13–23. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77383-4_2.
Pillai, Ignazio, Riccardo Satta, Giorgio Fumera, and Fabio Roli. "Exploiting Depth Information for Indoor-Outdoor Scene Classification." In Image Analysis and Processing – ICIAP 2011, 130–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24088-1_14.
Mutto, Carlo Dal, Pietro Zanuttigh, and Guido M. Cortelazzo. "Scene Segmentation and Video Matting Assisted by Depth Data." In Time-of-Flight Cameras and Microsoft Kinect™, 93–105. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-3807-6_6.
Jiang, Huaizu, Gustav Larsson, Michael Maire, Greg Shakhnarovich, and Erik Learned-Miller. "Self-Supervised Relative Depth Learning for Urban Scene Understanding." In Computer Vision – ECCV 2018, 20–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01252-6_2.
Fukuoka, Mamiko, Shun'ichi Doi, Takahiko Kimura, and Toshiaki Miura. "Measurement of Depth Attention of Driver in Frontal Scene." In Engineering Psychology and Cognitive Ergonomics, 376–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02728-4_40.
Conference papers on the topic "Scene depth"
Alazawi, E., A. Aggoun, M. Abbod, M. R. Swash, O. Abdul Fatah, and J. Fernandez. "Scene depth extraction from Holoscopic Imaging technology." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676640.
Jin, Bo, Leandro Cruz, and Nuno Goncalves. "Face Depth Prediction by the Scene Depth." In 2021 IEEE/ACIS 19th International Conference on Computer and Information Science (ICIS). IEEE, 2021. http://dx.doi.org/10.1109/icis51600.2021.9516598.
Chen, Xiaotian, Xuejin Chen, and Zheng-Jun Zha. "Structure-Aware Residual Pyramid Network for Monocular Depth Estimation." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/98.
Zhang, Wendong, Feng Gao, Bingbing Ni, Lingyu Duan, Yichao Yan, Jingwei Xu, and Xiaokang Yang. "Depth Structure Preserving Scene Image Generation." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240508.3240584.
Gillsjo, David, and Kalle Astrom. "In Depth Bayesian Semantic Scene Completion." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412403.
Chen, Lei, Zongqing Lu, Qingmin Liao, Haoyu Ma, and Jing-Hao Xue. "Disparity Estimation with Scene Depth Cues." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428216.
Huang, Yea-Shuan, Fang-Hsuan Cheng, and Yun-Hui Liang. "Creating Depth Map from 2D Scene Classification." In 2008 3rd International Conference on Innovative Computing Information and Control. IEEE, 2008. http://dx.doi.org/10.1109/icicic.2008.205.
Sun, Yu-fei, Rui-dong Tang, Shao-hui Qian, Chuan-ruo Yu, Yu-jin Shi, and Wei-yu Yu. "Scene depth information based image saliency detection." In 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2015. http://dx.doi.org/10.1109/iaeac.2015.7428553.
Letouzey, Antoine, Benjamin Petit, and Edmond Boyer. "Scene Flow from Depth and Color Images." In British Machine Vision Conference 2011. British Machine Vision Association, 2011. http://dx.doi.org/10.5244/c.25.46.
Ess, Andreas, Bastian Leibe, and Luc Van Gool. "Depth and Appearance for Mobile Scene Analysis." In 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007. http://dx.doi.org/10.1109/iccv.2007.4409092.
Reports by organizations on the topic "Scene depth"
Lieutenant suffers sudden cardiac death at scene of a brush fire - Missouri. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, March 2010. http://dx.doi.org/10.26616/nioshfffacef201001.
Driver/engineer suffers sudden cardiac death at scene of motor vehicle crash - Georgia. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, August 2013. http://dx.doi.org/10.26616/nioshfffacef201318.
Lieutenant suffers sudden cardiac death at the scene of a structure fire - South Carolina. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, September 2005. http://dx.doi.org/10.26616/nioshfffacef200514.