Ready bibliography on the topic "Visual Odometry"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Visual Odometry".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.
Journal articles on the topic "Visual Odometry"
Sun, Qian, Ming Diao, Yibing Li, and Ya Zhang. "An improved binocular visual odometry algorithm based on the Random Sample Consensus in visual navigation systems". Industrial Robot: An International Journal 44, no. 4 (June 19, 2017): 542–51. http://dx.doi.org/10.1108/ir-11-2016-0280.
Srinivasan, M., S. Zhang, and N. Bidwell. "Visually mediated odometry in honeybees". Journal of Experimental Biology 200, no. 19 (October 1, 1997): 2513–22. http://dx.doi.org/10.1242/jeb.200.19.2513.
Scaramuzza, Davide, and Friedrich Fraundorfer. "Visual Odometry [Tutorial]". IEEE Robotics & Automation Magazine 18, no. 4 (December 2011): 80–92. http://dx.doi.org/10.1109/mra.2011.943233.
Wang, Chenggong, Gen Li, Ruiqi Wang, and Lin Li. "Wheeled Robot Visual Odometer Based on Two-dimensional Iterative Closest Point Algorithm". Journal of Physics: Conference Series 2504, no. 1 (May 1, 2023): 012002. http://dx.doi.org/10.1088/1742-6596/2504/1/012002.
CIOCOIU, Titus, Florin MOLDOVEANU, and Caius SULIMAN. "CAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM". SCIENTIFIC RESEARCH AND EDUCATION IN THE AIR FORCE 18, no. 1 (June 24, 2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.
An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving". International Journal of Advanced Robotic Systems 14, no. 5 (September 1, 2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.
Wang, Jiabin, and Faqin Gao. "Improved visual inertial odometry based on deep learning". Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012016. http://dx.doi.org/10.1088/1742-6596/2078/1/012016.
Borges, Paulo Vinicius Koerich, and Stephen Vidas. "Practical Infrared Visual Odometry". IEEE Transactions on Intelligent Transportation Systems 17, no. 8 (August 2016): 2205–13. http://dx.doi.org/10.1109/tits.2016.2515625.
Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization". Robotica 30, no. 6 (October 5, 2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.
Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware". Applied Sciences 9, no. 24 (December 15, 2019): 5516. http://dx.doi.org/10.3390/app9245516.
Doctoral dissertations on the topic "Visual Odometry"
Pereira, Fabio Irigon. "High precision monocular visual odometry". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.
Pełny tekst źródłaRecovering three-dimensional information from bi-dimensional images is an important problem in computer vision that finds several applications in our society. Robotics, entertainment industry, medical diagnose and prosthesis, and even interplanetary exploration benefit from vision based 3D estimation. The problem can be divided in two interdependent operations: estimating the camera position and orientation when each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques, used to estimate the trajectory of a vehicle equipped camera, a problem known as visual odometry. In order to provide an objective measure of estimation efficiency and to compare the achieved results to the state-of-the-art works in visual odometry a high precision popular dataset was selected and used. In the course of this work new techniques for image feature tracking, camera pose estimation, point 3D position calculation and scale recovery are proposed. The achieved results outperform the best ranked results in the popular chosen dataset.
Masson, Clément. "Direction estimation using visual odometry". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.
Pełny tekst źródłaDetta masterarbete behandlar problemet med att mäta objekts riktningar från en fastobservationspunkt. En ny metod föreslås, baserad på en enda roterande kamera som kräverendast två (eller flera) landmärkens riktningar. I en första fas används multiperspektivgeometri,för att uppskatta kamerarotationer och nyckelelements riktningar utifrån en uppsättningöverlappande bilder. I en andra fas kan sedan riktningen hos vilket objekt som helst uppskattasgenom att kameran, associerad till en bild visande detta objekt, omsektioneras. En detaljeradbeskrivning av den algoritmiska kedjan ges, tillsammans med testresultat av både syntetisk dataoch verkliga bilder tagen med en infraröd kamera.
Johansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning". Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.
Pełny tekst źródłaVenturelli, Cavalheiro Guilherme. "Fusing visual odometry and depth completion". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input until we ultimately replace LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
by Guilherme Venturelli Cavalheiro.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
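The sparse-input, dense-output interface this thesis studies can be illustrated with a deliberately crude stand-in for the learned completion network: fill every unknown depth with its nearest known sample. This one-dimensional toy (entirely hypothetical, not the fusion framework from the thesis) only shows the shape of the problem: a few metric anchors, whether from LiDAR or from visual odometry triangulations, propagated to every pixel:

```python
def densify_row(sparse):
    """Fill each unknown depth (None) in a 1D scanline with the nearest
    known sample. Real depth-completion systems use a learned model and
    image context; this only illustrates sparse-in, dense-out."""
    known = [(i, d) for i, d in enumerate(sparse) if d is not None]
    return [min(known, key=lambda kd: abs(kd[0] - i))[1]
            for i in range(len(sparse))]

# Two metric anchors (e.g. triangulated depths) densified across a row.
row = densify_row([None, 2.0, None, None, 5.0])
print(row)  # [2.0, 2.0, 2.0, 5.0, 5.0]
```

Swapping the source of the anchors, from LiDAR returns to VO triangulations, leaves this interface unchanged, which is what makes the substitution studied in the thesis possible.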
Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.
Pełny tekst źródłaMonokulära kameror används ofta vid rörelseestimering av obemannade flygande farkoster. Med det ökade intresset för autonoma fordon har även användningen av monokulära kameror i fordon ökat. Detta är fram för allt fördelaktigt i situationer där satellitnavigering (Global Navigation Satellite System (GNSS)) äropålitlig, exempelvis i dagbrott. De flesta system som använder sig av monokulära kameror har problem med att estimera skalan. Denna estimering blir ännu svårare på grund av ett fordons större hastigheter och snabbare rörelser. Syftet med detta exjobb är att försöka estimera skalan baserat på bild data från en monokulär kamera, genom att komplettera med data från tröghetssensorer. Det visas att simultan estimering av position och skala för ett fordon är möjligt genom fusion av bild- och tröghetsdata från sensorer med hjälp av ett utökat Kalmanfilter (EKF). Estimeringens konvergens beror på flera faktorer, inklusive initialiseringsfel. En noggrann estimering av skalan möjliggör också en noggrann estimering av positionen. Detta möjliggör lokalisering av fordon vid avsaknad av GNSS och erbjuder därmed en ökad redundans.
Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.
Pełny tekst źródłaCampanholo, Guizilini Vitor. "Non-Parametric Learning for Monocular Visual Odometry". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9903.
Pełny tekst źródłaWuthrich, Tori(Tori Lee). "Learning visual odometry primitives for computationally constrained platforms". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Autonomous navigation for robotic platforms, particularly using techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and physical space for sensors. A computer vision, machine learning-based localization method was proposed by researchers investigating the automation of medical procedures. However, we believed the method to also be promising for low size, weight, and power (SWAP) budget robots. Unlike traditional odometry methods, in this case a machine learning model can be trained offline and can then generate odometry measurements quickly and efficiently. This thesis describes the implementation of the learning-based visual odometry method in the context of autonomous drones. We refer to the method as RetiNav due to its similarities with the way the human eye processes light signals from its surroundings. We make several modifications to the method relative to the initial design based on a detailed parameter study, and we test the method on a variety of challenging flight datasets. We show that over the course of a trajectory, RetiNav achieves as low as 1.4% error in predicting the distance traveled. We conclude that such a method is a viable component of a localization system, and propose next steps for work in this area.
by Tori Wuthrich.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.
Pełny tekst źródłaEn ny visuell registreringsalgoritm (Adaptive Iterative Closest Keypoint, AICK) testas och utvärderas som ett positioneringsverktyg på en Micro Aerial Vehicle (MAV). Tagna bilder från en Kinect liknande RGB-D kamera analyseras och en approximerad position av MAVen beräknas. Förhoppningen är att hitta en positioneringslösning för miljöer utan GPS förbindelse, där detta arbete fokuserar på kontorsmiljöer inomhus. MAVen flygs manuellt samtidigt som RGB-D bilder tas, dessa registreras sedan med hjälp av AICK. Resultatet analyseras för att kunna dra en slutsats om AICK är en rimlig metod eller inte för att åstadkomma autonom flygning med hjälp av den uppskattade positionen. Resultatet visar potentialen för en fungerande autonom MAV i miljöer utan GPS förbindelse, men det finns testade miljöer där AICK i dagsläget fungerar undermåligt. Bristen på visuella särdrag på t.ex. en vit vägg inför problem och osäkerheter i positioneringen, ännu mer besvärande är det när avståndet till omgivningen överskrider RGB-D kamerornas räckvidd. Med fortsatt arbete med dessa svagheter är en robust autonom MAV som använder AICK för positioneringen rimlig.
Clark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.
Pełny tekst źródłaKsiążki na temat "Visual Odometry"
Erdem, Uğur Murat, Nicholas Roy, John J. Leonard i Michael E. Hasselmo. Spatial and episodic memory. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0029.
Pełny tekst źródłaCzęści książek na temat "Visual Odometry"
Chien, Hsiang-Jen, Jr-Jiun Lin, Tang-Kai Yin i Reinhard Klette. "Multi-objective Visual Odometry". W Image and Video Technology, 62–74. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75786-5_6.
Pełny tekst źródłaGao, Xiang, i Tao Zhang. "Visual Odometry: Part II". W Introduction to Visual SLAM, 197–221. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_7.
Pełny tekst źródłaGao, Xiang, i Tao Zhang. "Practice: Stereo Visual Odometry". W Introduction to Visual SLAM, 331–46. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_12.
Pełny tekst źródłaGao, Xiang, i Tao Zhang. "Visual Odometry: Part I". W Introduction to Visual SLAM, 143–95. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_6.
Pełny tekst źródłaLianos, Konstantinos-Nektarios, Johannes L. Schönberger, Marc Pollefeys i Torsten Sattler. "VSO: Visual Semantic Odometry". W Computer Vision – ECCV 2018, 246–63. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_15.
Pełny tekst źródłaKalambe, Shrijay S., Elizabeth Rufus, Vinod Karar i Shashi Poddar. "Descriptor- Using Low- for Visual Odometry". W Proceedings of 3rd International Conference on Computer Vision and Image Processing, 1–11. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9291-8_1.
Pełny tekst źródłaRani, Prachi, Arpit Jangid, Vinay P. Namboodiri i K. S. Venkatesh. "Visual Odometry Based Omni-directional Hyperlapse". W Communications in Computer and Information Science, 3–13. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0020-2_1.
Pełny tekst źródłaMirabdollah, M. Hossein, i Bärbel Mertsching. "Fast Techniques for Monocular Visual Odometry". W Lecture Notes in Computer Science, 297–307. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24947-6_24.
Pełny tekst źródłaVan Hamme, David, Peter Veelaert i Wilfried Philips. "Robust Visual Odometry Using Uncertainty Models". W Advanced Concepts for Intelligent Vision Systems, 1–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23687-7_1.
Pełny tekst źródłaScaramuzza, Davide, i Zichao Zhang. "Aerial Robots, Visual-Inertial Odometry of". W Encyclopedia of Robotics, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-642-41610-1_71-1.
Pełny tekst źródłaStreszczenia konferencji na temat "Visual Odometry"
Kleinschmidt, Sebastian P., and Bernardo Wagner. "Visual Multimodal Odometry: Robust Visual Odometry in Harsh Environments". In 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2018. http://dx.doi.org/10.1109/ssrr.2018.8468653.
Lin, Minjie, Qixin Cao, and Haoruo Zhang. "PVO: Panoramic Visual Odometry". In 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2018. http://dx.doi.org/10.1109/icarm.2018.8610700.
Center, Julian L., Kevin H. Knuth, Ali Mohammad-Djafari, Jean-François Bercher, and Pierre Bessiére. "Bayesian Visual Odometry". In BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: Proceedings of the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2011. http://dx.doi.org/10.1063/1.3573659.
Flemmen, Henrik D., Rudolf Mester, Annette Stahl, Torleiv H. Bryne, and Edmund Førland Brekke. "Maritime radar odometry inspired by visual odometry". In 2023 26th International Conference on Information Fusion (FUSION). IEEE, 2023. http://dx.doi.org/10.23919/fusion52260.2023.10224142.
Huai, Zheng, and Guoquan Huang. "Robocentric Visual-Inertial Odometry". In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593643.
Zhu, Pengxiang, Yulin Yang, Wei Ren, and Guoquan Huang. "Cooperative Visual-Inertial Odometry". In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. http://dx.doi.org/10.1109/icra48506.2021.9561674.
Abdulov, Alexander, and Alexander Abramenkov. "Visual odometry system simulator". In 2017 International Siberian Conference on Control and Communications (SIBCON). IEEE, 2017. http://dx.doi.org/10.1109/sibcon.2017.7998584.
Ye, Weicai, Xinyue Lan, Shuo Chen, Yuhang Ming, Xingyuan Yu, Hujun Bao, Zhaopeng Cui, and Guofeng Zhang. "PVO: Panoptic Visual Odometry". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00924.
Klenk, Simon, Marvin Motzet, Lukas Koestler, and Daniel Cremers. "Deep Event Visual Odometry". In 2024 International Conference on 3D Vision (3DV). IEEE, 2024. http://dx.doi.org/10.1109/3dv62453.2024.00036.
Wei, Peng, Guoliang Hua, Weibo Huang, Fanyang Meng, and Hong Liu. "Unsupervised Monocular Visual-inertial Odometry Network". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/325.
Organizational reports on the topic "Visual Odometry"
Pirozzo, David M., Philip A. Frederick, Shawn Hunt, Bernard Theisen, and Mike Del Rose. Spectrally Queued Feature Selection for Robotic Visual Odometery. Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada535663.