Academic literature on the topic 'Monocular depth'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Monocular depth.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Monocular depth"
Seo, Beom-Su, Byungjae Park, and Hoon Choi. "Sensing Range Extension for Short-Baseline Stereo Camera Using Monocular Depth Estimation." Sensors 22, no. 12 (June 18, 2022): 4605. http://dx.doi.org/10.3390/s22124605.
Rychkova, S. I., and V. G. Likhvantseva. "Monocular Depth Estimation (Literature Review)." EYE GLAZ 24, no. 1 (April 2, 2022): 43–54. http://dx.doi.org/10.33791/2222-4408-2022-1-43-54.
Pan, Janice, and Alan C. Bovik. "Perceptual Monocular Depth Estimation." Neural Processing Letters 53, no. 2 (February 10, 2021): 1205–28. http://dx.doi.org/10.1007/s11063-021-10437-6.
Howard, I. P., and P. Duke. "Depth from monocular transparency." Journal of Vision 2, no. 10 (December 1, 2002): 82. http://dx.doi.org/10.1167/2.10.82.
Howard, I. P., and P. Duke. "Depth from monocular images." Journal of Vision 3, no. 9 (March 16, 2010): 463. http://dx.doi.org/10.1167/3.9.463.
Timney, Brian. "Effects of brief monocular deprivation on binocular depth perception in the cat: A sensitive period for the loss of stereopsis." Visual Neuroscience 5, no. 3 (September 1990): 273–80. http://dx.doi.org/10.1017/s0952523800000341.
Swaraja, K., V. Akshitha, K. Pranav, B. Vyshnavi, V. Sai Akhil, K. Meenakshi, Padmavathi Kora, Himabindu Valiveti, and Chaitanya Duggineni. "Monocular Depth Estimation using Transfer learning-An Overview." E3S Web of Conferences 309 (2021): 01069. http://dx.doi.org/10.1051/e3sconf/202130901069.
Munguia, Rodrigo, and Antoni Grau. "Delayed Inverse Depth Monocular SLAM." IFAC Proceedings Volumes 41, no. 2 (2008): 2365–70. http://dx.doi.org/10.3182/20080706-5-kr-1001.00399.
Swaraja, K., K. Naga Siva Pavan, S. Suryakanth Reddy, K. Ajay, P. Uday Kiran Reddy, Padmavathi Kora, K. Meenakshi, Duggineni Chaitanya, and Himabindu Valiveti. "CNN Based Monocular Depth Estimation." E3S Web of Conferences 309 (2021): 01070. http://dx.doi.org/10.1051/e3sconf/202130901070.
Howard, Ian P., and Philip A. Duke. "Monocular transparency generates quantitative depth." Vision Research 43, no. 25 (November 2003): 2615–21. http://dx.doi.org/10.1016/s0042-6989(03)00477-2.
Dissertations / Theses on the topic "Monocular depth"
Andraghetti, Lorenzo. "Monocular Depth Estimation enhancement by depth from SLAM Keypoints." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16626/.
Pinheiro de Carvalho, Marcela. "Deep Depth from Defocus: Neural Networks for Monocular Depth Estimation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS609.
Full textDepth estimation from a single image is a key instrument for several applications from robotics to virtual reality. Successful Deep Learning approaches in computer vision tasks as object recognition and classification also benefited the domain of depth estimation. In this thesis, we develop methods for monocular depth estimation with deep neural network by exploring different cues: defocus blur and semantics. We conduct several experiments to understand the contribution of each cue in terms of generalization and model performance. At first, we propose an efficient convolutional neural network for depth estimation along with a conditional Generative Adversarial framework. Our method achieves performances among the best on standard datasets for depth estimation. Then, we propose to explore defocus blur cues, which is an optical information deeply related to depth. We show that deep models are able to implicitly learn and use this information to improve performance and overcome known limitations of classical Depth-from-Defocus. We also build a new dataset with real focused and defocused images that we use to validate our approach. Finally, we explore the use of semantic information, which brings rich contextual information while learned jointly to depth on a multi-task approach. We validate our approaches with several datasets containing indoor, outdoor and aerial images
Cheda, Diego. "Monocular Depth Cues in Computer Vision Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/121644.
Depth perception is a key aspect of human vision. It is a routine and essential visual task that humans perform effortlessly in many daily activities. It has often been associated with stereo vision, but humans have a remarkable ability to perceive depth relations even in a single image, by using several monocular cues. In the computer vision field, if image depth information were available, many tasks could be posed from a different perspective for the sake of higher performance and robustness. Nevertheless, given a single image, this possibility is usually discarded, since obtaining depth information has frequently been done with three-dimensional reconstruction techniques, which require two or more images of the same scene taken from different viewpoints. Recently, some proposals have shown the feasibility of computing depth information from single images. In essence, the idea is to take advantage of a priori knowledge of the acquisition conditions and the observed scene to estimate depth from monocular pictorial cues. These approaches try to estimate precise scene depth maps by employing computationally demanding techniques. However, to assist many computer vision algorithms, it is not really necessary to compute a costly and detailed depth map of the image. Indeed, a rough depth description can be very valuable in many problems. In this thesis, we demonstrate how coarse depth information can be integrated into different tasks, following holistic and alternative strategies, to obtain more precise and robust results. In that sense, we propose a simple but sufficiently reliable technique whereby image scene regions are categorized into discrete depth ranges to build a coarse depth map. Based on this representation, we explore the potential usefulness of our method in three application domains from novel viewpoints: camera rotation estimation, background estimation, and pedestrian candidate generation. In the first case, we compute the rotation of a camera mounted on a moving vehicle with two novel methods that identify distant elements in the image, where the translation component of the image flow field is negligible. In background estimation, we propose a novel method to reconstruct the background by penalizing close regions in a cost function that integrates color, motion, and depth terms. Finally, we benefit from the geometric and depth information available in single images for pedestrian candidate generation, significantly reducing the number of generated windows to be further processed by a pedestrian classifier. In all cases, results show that our depth-based approaches contribute to better performance.
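As a rough illustration of the coarse depth representation described in this abstract, a minimal sketch (our own toy example with arbitrary bin edges, not the author's code) of categorizing a depth map into discrete ranges:

```python
import numpy as np

def coarse_depth_map(depth, edges=(2.0, 5.0, 10.0, 20.0)):
    """Quantize a metric depth map (in meters) into discrete range
    labels: 0 = nearest range, len(edges) = farthest. The bin edges
    here are illustrative values, not the thesis's choices."""
    return np.digitize(depth, edges)

# Toy 2x3 depth map in meters.
depth = np.array([[1.5, 4.0, 8.0],
                  [12.0, 25.0, 3.0]])
print(coarse_depth_map(depth))
# [[0 1 2]
#  [3 4 1]]
```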
Toschi, Marco. "Towards Monocular Depth Estimation for Robot Guidance." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Rovinelli, Marco. "Realtime Monocular Depth Estimation on Mobile Phones." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24159/.
Rivero Pindado, Víctor. "Monocular visual SLAM based on Inverse depth parametrization." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-10166.
The first objective of this research has been to carry out a study of visual SLAM (Simultaneous Localization and Mapping) techniques, specifically the monocular variant, which is less studied than the stereo one. These techniques have been well studied in the world of robotics; they focus on reconstructing a map of the robot's environment while maintaining information about the robot's position within that map. We chose to investigate a method that encodes points by the inverse of their depth, from the first time each feature is observed. This method permits an efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). At first, the study was to be consolidated by developing an application implementing this method. After various difficulties, it was decided to make use of a MATLAB platform developed by the author of the SLAM method in question. By that point, the tasks of calibration, feature extraction, and matching had been developed. From there, the application was adapted to the characteristics of our camera and video material. We recorded a video with our camera following a known trajectory in order to check the path calculated by the application, corroborating the results and studying the limitations and advantages of this method.
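For readers unfamiliar with the inverse depth parametrization this abstract refers to, a sketch of the standard formulation from the literature (see also the Civera, Davison, and Montiel entry under conference papers below; the notation is our summary, not the thesis's):

```latex
% A landmark first observed from camera center (x_c, y_c, z_c), with
% viewing-direction angles (theta, phi) and inverse depth rho = 1/d,
% is stored as the 6-vector
y = (x_c, \; y_c, \; z_c, \; \theta, \; \phi, \; \rho)^{\top}
% and its Euclidean position is recovered as
p = \begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix}
    + \frac{1}{\rho}\, m(\theta, \phi),
\qquad
m(\theta, \phi) = \begin{pmatrix}
  \cos\phi \sin\theta \\ -\sin\phi \\ \cos\phi \cos\theta
\end{pmatrix}
% Distant, low-parallax points map to rho near 0, which the EKF can
% represent with near-Gaussian uncertainty, enabling undelayed
% feature initialization.
```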
Chan, Kevin S. (Kevin Sao Wei). "Multiview monocular depth estimation using unsupervised learning methods." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119753.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 50-51).
Existing learned methods for monocular depth estimation use only a single view of a scene for depth evaluation, so they inherently overfit to their training scenes and cannot generalize well to new datasets. This thesis presents a neural network for multiview monocular depth estimation. Teaching a network to estimate depth via structure from motion allows it to generalize better to new environments with unfamiliar objects. This thesis extends recent work on unsupervised methods for single-view monocular depth estimation and uses the reconstruction losses for training posed in those works. Models and baseline models were evaluated on a variety of datasets, and the results indicate that multiview models generalize across datasets better than previous work. This work is unique in that it emphasizes cross-domain performance and the ability to generalize more than performance on the training set.
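The reconstruction losses referred to here are typically a weighted blend of SSIM and L1 photometric terms between the target frame and a source frame warped into the target view via the predicted depth and relative pose. A minimal PyTorch sketch of that loss, following the common formulation in this literature rather than any code from the thesis:

```python
import torch
import torch.nn.functional as F

def ssim_dissimilarity(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Per-pixel SSIM dissimilarity, (1 - SSIM) / 2, over 3x3 local
    windows, as commonly used in unsupervised depth training."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, warped, alpha=0.85):
    """Reconstruction loss between the target frame and a source frame
    already warped into the target view using predicted depth and pose
    (the warping step is omitted here). alpha = 0.85 is the SSIM/L1
    weighting commonly used in this literature."""
    l1 = (target - warped).abs().mean(1, keepdim=True)
    ssim = ssim_dissimilarity(target, warped).mean(1, keepdim=True)
    return (alpha * ssim + (1 - alpha) * l1).mean()

# Toy usage with random tensors standing in for (B, C, H, W) images.
t, w = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(photometric_loss(t, w).item())
```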
by Kevin S. Chan.
M. Eng.
Larsson, Susanna. "Monocular Depth Estimation Using Deep Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159981.
Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
In this work, we study a depth estimation problem in which a narrow field-of-view depth image and a wide field-of-view RGB image are provided to a deep network tasked with predicting depth for the entire RGB image. We show that providing the depth image to the network improves the results for the region outside the provided depth, compared to an existing method that uses only an RGB image to predict depth. We investigate several architectures and sizes of the depth-image field of view, and study the effect of adding noise and lowering the resolution of the depth image. We show that a larger field of view for the depth image gives a greater advantage, and also that the model's accuracy decreases with distance from the provided depth. Our results also show that the models using the noisy, low-resolution depth performed on a par with the models using the unmodified depth.
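One plausible way to feed such a narrow field-of-view depth patch to a network alongside the wide RGB image, sketched under our own assumptions about the input layout (the thesis may arrange its inputs differently):

```python
import torch

def build_input(rgb, narrow_depth, top, left):
    """Sketch: place a dense narrow-FOV depth patch inside a full-size
    depth channel (zeros elsewhere), add a validity mask marking where
    depth is provided, and concatenate with the RGB image as network
    input. This layout is an assumption, not the thesis's design."""
    b, _, h, w = rgb.shape
    depth = rgb.new_zeros(b, 1, h, w)
    mask = rgb.new_zeros(b, 1, h, w)
    ph, pw = narrow_depth.shape[-2:]
    depth[:, :, top:top + ph, left:left + pw] = narrow_depth
    mask[:, :, top:top + ph, left:left + pw] = 1.0
    return torch.cat([rgb, depth, mask], dim=1)  # B x 5 x H x W

# Toy usage: a 120x160 depth patch centered in a 240x320 RGB image.
rgb = torch.rand(1, 3, 240, 320)
narrow = torch.rand(1, 1, 120, 160)
print(build_input(rgb, narrow, top=60, left=80).shape)
```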
Pilzer, Andrea. "Learning Unsupervised Depth Estimation, from Stereo to Monocular Images." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/268252.
Books on the topic "Monocular depth"
Arden, P. L. C. Monocular Stereopsis: Seeing in Depth with One Eye. Palgrave Macmillan, 2017.
Ganeri, Jonardon. Postscript: Philosophy Without Borders. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198757405.003.0017.
Book chapters on the topic "Monocular depth"
Jun, Jinyoung, Jae-Han Lee, Chul Lee, and Chang-Su Kim. "Depth Map Decomposition for Monocular Depth Estimation." In Lecture Notes in Computer Science, 18–34. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20086-1_2.
House, Donald. "Monocular and Binocular Cooperation." In Depth Perception in Frogs and Toads, 31–56. New York, NY: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4684-6391-0_3.
Huynh, Lam, Phong Nguyen-Ha, Jiri Matas, Esa Rahtu, and Janne Heikkilä. "Guiding Monocular Depth Estimation Using Depth-Attention Volume." In Computer Vision – ECCV 2020, 581–97. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_35.
Zhang, Jinqing, Haosong Yue, Xingming Wu, Weihai Chen, and Changyun Wen. "Densely Connecting Depth Maps for Monocular Depth Estimation." In Computer Vision – ECCV 2020 Workshops, 149–65. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66823-5_9.
Clark, James J., and Alan L. Yuille. "Fusing Binocular and Monocular Depth Cues." In Data Fusion for Sensory Information Processing Systems, 137–46. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4757-2076-1_6.
Leroy, Jean-Vincent, Thierry Simon, and François Deschenes. "Real Time Monocular Depth from Defocus." In Lecture Notes in Computer Science, 103–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-69905-7_12.
Chaudhari, Shubham, Aaryamaan Rao, Rohit Vardam, and Mandar Sohani. "A Synopsis of Monocular Depth Estimation." In Lecture Notes in Electrical Engineering, 203–18. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3690-5_18.
Kim, Ue-Hwan, Gyeong-Min Lee, and Jong-Hwan Kim. "Revisiting Self-supervised Monocular Depth Estimation." In Robot Intelligence Technology and Applications 6, 336–50. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97672-9_30.
He, Mu, Le Hui, Yikai Bian, Jian Ren, Jin Xie, and Jian Yang. "RA-Depth: Resolution Adaptive Self-supervised Monocular Depth Estimation." In Lecture Notes in Computer Science, 565–81. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19812-0_33.
Zhang, Min, and Jianhua Li. "Efficient Unsupervised Monocular Depth Estimation with Inter-Frame Depth Interpolation." In Lecture Notes in Computer Science, 729–41. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87361-5_59.
Full textConference papers on the topic "Monocular depth"
Lee, Jae-Han, and Chang-Su Kim. "Monocular Depth Estimation Using Relative Depth Maps." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00996.
Atapour-Abarghouei, Amir, and Toby P. Breckon. "Monocular Segment-Wise Depth: Monocular Depth Estimation Based on a Semantic Segmentation Prior." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803551.
Dimiccoli, Mariella, Jean-Michel Morel, and Philippe Salembier. "Monocular Depth by Nonlinear Diffusion." In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing. IEEE, 2008. http://dx.doi.org/10.1109/icvgip.2008.97.
Watson, Jamie, Michael Firman, Gabriel Brostow, and Daniyar Turmukhambetov. "Self-Supervised Monocular Depth Hints." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00225.
Full text"MONOCULAR DEPTH-BASED BACKGROUND ESTIMATION." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0003816503230328.
Dukor, Obumneme Stanley, S. Mahdi H. Miangoleh, Mahesh Kumar Krishna Reddy, Long Mai, and Yağız Aksoy. "Interactive Editing of Monocular Depth." In SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3532719.3543235.
Civera, Javier, Andrew J. Davison, and J. M. M. Montiel. "Inverse Depth to Depth Conversion for Monocular SLAM." In 2007 IEEE International Conference on Robotics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/robot.2007.363892.
Kamath K.M., Shreyas, Srijith Rajeev, Karen Panetta, and Sos S. Agaian. "DTTNet: Depth transverse transformer network for monocular depth estimation." In Multimodal Image Exploitation and Learning 2022, edited by Sos S. Agaian, Sabah A. Jassim, Stephen P. DelMarco, and Vijayan K. Asari. SPIE, 2022. http://dx.doi.org/10.1117/12.2618535.
Yue, Haosong, Jinqing Zhang, Xingming Wu, Jianhua Wang, and Weihai Chen. "Edge Enhancement in Monocular Depth Prediction." In 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2020. http://dx.doi.org/10.1109/iciea48937.2020.9248336.
Zhang, Ji, Michael Kaess, and Sanjiv Singh. "Real-time depth enhanced monocular odometry." In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6943269.
Reports on the topic "Monocular depth"
Winterbottom, Marc D., Robert Patterson, Byron J. Pierce, Christine Covas, and Jennifer Winner. The Influence of Depth of Focus on Visibility of Monocular Head-Mounted Display Symbology in Simulation and Training Applications. Fort Belvoir, VA: Defense Technical Information Center, February 2007. http://dx.doi.org/10.21236/ada464044.