Academic literature on the topic 'Visual Odometry'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual Odometry.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Visual Odometry"

1

Sun, Qian, Ming Diao, Yibing Li, and Ya Zhang. "An improved binocular visual odometry algorithm based on the Random Sample Consensus in visual navigation systems." Industrial Robot: An International Journal 44, no. 4 (June 19, 2017): 542–51. http://dx.doi.org/10.1108/ir-11-2016-0280.

Full text
Abstract:
Purpose: The purpose of this paper is to propose a binocular visual odometry algorithm based on Random Sample Consensus (RANSAC) for visual navigation systems. Design/methodology/approach: The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. First, features are detected with the FAST extractor. Second, the detected features are roughly matched using the distance ratio of the nearest neighbor to the second nearest neighbor. Finally, wrongly matched feature pairs are removed with RANSAC to reduce the interference of erroneous matches. Findings: The performance of the new algorithm was examined on actual experimental data. The results show that this binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces the positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm were also verified. Practical implications: This paper presents an improved binocular visual odometry algorithm that has been tested on real data and can be used for outdoor vehicle navigation. Originality/value: A binocular visual odometry algorithm based on the FAST extractor and RANSAC is proposed to improve positioning accuracy and robustness. Experimental results verify the effectiveness of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
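
The pipeline described in the abstract above (FAST detection, nearest/second-nearest distance-ratio matching, RANSAC outlier rejection) maps onto standard OpenCV calls. The sketch below is a generic illustration of that pipeline, not the authors' implementation; the image variables and the choice of ORB descriptors computed at the FAST corners are assumptions.

```python
import cv2
import numpy as np

def match_features_ransac(img_prev, img_curr, ratio=0.75):
    """Detect FAST corners, match with a nearest/second-nearest ratio test,
    then reject outliers with RANSAC on the fundamental matrix."""
    fast = cv2.FastFeatureDetector_create()
    orb = cv2.ORB_create()                      # binary descriptors at the FAST corners
    kp1 = fast.detect(img_prev, None)
    kp2 = fast.detect(img_curr, None)
    kp1, des1 = orb.compute(img_prev, kp1)
    kp2, des2 = orb.compute(img_curr, kp2)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        # Rough matching: nearest-neighbour distance ratio test.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC on the fundamental matrix removes remaining wrong matches.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = [m for m, ok in zip(good, mask.ravel()) if ok]
    return kp1, kp2, inliers
```

The surviving inlier pairs would then feed the pose-estimation stage of such an odometer.
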
2

Srinivasan, M., S. Zhang, and N. Bidwell. "Visually mediated odometry in honeybees." Journal of Experimental Biology 200, no. 19 (October 1, 1997): 2513–22. http://dx.doi.org/10.1242/jeb.200.19.2513.

Full text
Abstract:
The ability of honeybees to gauge the distances of short flights was investigated under controlled laboratory conditions where a variety of potential odometric cues such as flight duration, energy consumption, image motion, airspeed, inertial navigation and landmarks were manipulated. Our findings indicate that honeybees can indeed measure short distances travelled and that they do so solely by analysis of image motion. Visual odometry seems to rely primarily on the motion that is sensed by the lateral regions of the visual field. Computation of distance flown is re-commenced whenever a prominent landmark is encountered en route. 'Re-setting' the odometer (or starting a new one) at each landmark facilitates accurate long-range navigation by preventing excessive accumulation of odometric errors. Distance appears to be learnt on the way to the food source and not on the way back.
APA, Harvard, Vancouver, ISO, and other styles
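
The mechanism reported above, gauging distance flown by integrating laterally perceived image motion, can be expressed as a simple worked relation. The sketch below is a loose illustration of that idea under a strong simplifying assumption (pure translation parallel to a surface at a known, constant range), not a model from the paper.

```python
def integrate_visual_odometer(angular_velocities, surface_range, dt):
    """Accumulate distance travelled from lateral image angular velocity (rad/s).

    For pure translation parallel to a surface at distance `surface_range`,
    the image of that surface slips at omega = v / surface_range, so
    v = omega * surface_range and distance = sum(v * dt)."""
    return sum(omega * surface_range * dt for omega in angular_velocities)

# e.g. a constant 2 rad/s of image motion for 5 s at 0.5 m from the surface -> 5.0 m
print(integrate_visual_odometer([2.0] * 50, 0.5, 0.1))
```
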
3

Scaramuzza, Davide, and Friedrich Fraundorfer. "Visual Odometry [Tutorial]." IEEE Robotics & Automation Magazine 18, no. 4 (December 2011): 80–92. http://dx.doi.org/10.1109/mra.2011.943233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Chenggong, Gen Li, Ruiqi Wang, and Lin Li. "Wheeled Robot Visual Odometer Based on Two-dimensional Iterative Closest Point Algorithm." Journal of Physics: Conference Series 2504, no. 1 (May 1, 2023): 012002. http://dx.doi.org/10.1088/1742-6596/2504/1/012002.

Full text
Abstract:
According to the two-dimensional motion characteristics of a planar-motion wheeled robot, the visual odometer was dimensionally reduced in this study. In the feature point matching part of the visual odometer, a contour constraint was used to filter out mismatched feature point pairs (abbreviated as FPP). This method could also filter out matched FPP whose color-image matches were correct but whose depth-image error was large. This provided higher-quality matched FPP for the subsequent interframe motion estimation. Dimension reduction was applied in the interframe motion estimation part, and the two-dimensional Iterative Closest Point (ICP) algorithm was used for camera motion estimation. The experiments indicated that this dimension-reduced ICP processing effectively improves the computational speed and accuracy of planar-motion wheeled robot visual odometry, providing a good reference and data support for subsequent research on wheeled robot visual odometry.
APA, Harvard, Vancouver, ISO, and other styles
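
The dimension-reduced interframe estimation described above comes down to aligning two planar point sets. A minimal two-dimensional ICP iteration (nearest-neighbour association followed by a closed-form SVD alignment) might look like the sketch below; it is an illustrative reduction of the general technique, not the authors' code.

```python
import numpy as np

def icp_2d(src, dst, iterations=20):
    """Estimate the 2D rotation R and translation t aligning src to dst (both Nx2)."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iterations):
        # Nearest-neighbour data association.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid alignment of the associated pairs (Kabsch).
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the incremental motion
    return R, t
```
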
5

CIOCOIU, Titus, Florin MOLDOVEANU, and Caius SULIMAN. "CAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM." SCIENTIFIC RESEARCH AND EDUCATION IN THE AIR FORCE 18, no. 1 (June 24, 2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (September 1, 2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Full text
Abstract:
Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. These methods assume that the quantitative majority of candidate visual cues represents the true motion. In real urban traffic scenes, however, this assumption can be broken by many dynamic traffic participants: big trucks or buses may occupy most of the image of a front-view monocular camera and lead to wrong visual odometry estimates. Finding available visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments. Semantic attributes of pixels can be considered a more reasonable factor for candidate selection in that case. This article analyzes the availability of all visual cues with the help of pixel-level semantic information and proposes a new visual odometry method that combines feature-based and alignment-based visual odometry methods in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark datasets and our own dataset. Experimental results confirm that the new approach provides an effective improvement in both accuracy and robustness in complex dynamic scenes.
APA, Harvard, Vancouver, ISO, and other styles
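
One way to read the candidate-selection idea above is as a per-pixel mask that excludes likely-dynamic classes before features are sampled. The sketch below only illustrates that masking step, not the article's combined feature/alignment optimization pipeline; the label image `semantic_labels` and the class ids are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical class ids for movable traffic participants in the label image.
DYNAMIC_CLASSES = {"person": 11, "rider": 12, "car": 13, "truck": 14, "bus": 15}

def detect_static_features(gray, semantic_labels, max_corners=500):
    """Detect corners only on pixels whose semantic class is not a dynamic object."""
    mask = np.full(gray.shape, 255, dtype=np.uint8)
    for class_id in DYNAMIC_CLASSES.values():
        mask[semantic_labels == class_id] = 0          # suppress dynamic regions
    corners = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                      minDistance=7, mask=mask)
    return corners
```
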
7

Wang, Jiabin, and Faqin Gao. "Improved visual inertial odometry based on deep learning." Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012016. http://dx.doi.org/10.1088/1742-6596/2078/1/012016.

Full text
Abstract:
Traditional visual-inertial odometry extracts key points according to manually designed rules. However, manually designed extraction rules are easily disturbed and have poor robustness under illumination and viewpoint changes, which degrades positioning accuracy. Deep learning methods show strong robustness in key point extraction. In order to improve the positioning accuracy of visual-inertial odometry under illumination and viewpoint changes, deep learning is introduced into the visual-inertial odometry system for key point detection. The encoder of the MagicPoint network is improved with depthwise separable convolutions, and the network is then trained with a self-supervised method. A deep-learning-based visual-inertial odometry system is composed by using the trained network to replace the traditional key point detection algorithm on the basis of VINS. The key point detection network is tested on the HPatches dataset, and the odometry positioning performance is evaluated on the EuRoC dataset. The results show that the improved visual-inertial odometry based on deep learning can reduce the positioning error by more than 5% without affecting real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
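
The encoder modification mentioned above replaces standard convolutions with depthwise separable ones. As a rough illustration of that building block only (not the paper's network), a PyTorch-style sketch with assumed 3x3 kernels is shown below.

```python
import torch.nn as nn

def depthwise_separable_conv(in_ch, out_ch):
    """A 3x3 depthwise convolution followed by a 1x1 pointwise convolution.
    Weight count is roughly in_ch*9 + in_ch*out_ch, versus in_ch*out_ch*9
    for a standard 3x3 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
        nn.ReLU(inplace=True),
    )

# e.g. a 64 -> 128 channel block: about 64*9 + 64*128 = 8768 weights
# instead of 64*128*9 = 73728 for a standard 3x3 convolution (ignoring biases).
block = depthwise_separable_conv(64, 128)
```
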
8

Borges, Paulo Vinicius Koerich, and Stephen Vidas. "Practical Infrared Visual Odometry." IEEE Transactions on Intelligent Transportation Systems 17, no. 8 (August 2016): 2205–13. http://dx.doi.org/10.1109/tits.2016.2515625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (October 5, 2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Full text
Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which estimates the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed: one camera points at the ground under the robot, and the other looks at the surrounding environment. Comparisons with popular localization approaches, through physical experiments in off-road conditions, have shown the satisfactory behavior of the proposed strategy.
APA, Harvard, Vancouver, ISO, and other styles
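
The ground-facing camera's displacement estimate via template matching, as described above, can be sketched with OpenCV's matchTemplate. The snippet below is an illustrative fragment under simplifying assumptions (pure planar translation between frames and a known metres-per-pixel scale), not the authors' system.

```python
import cv2

def displacement_from_template(prev_gray, curr_gray, metres_per_pixel, margin=40):
    """Estimate planar displacement by locating a central patch of the previous
    ground image inside the current one."""
    h, w = prev_gray.shape
    template = prev_gray[margin:h - margin, margin:w - margin]   # central patch
    response = cv2.matchTemplate(curr_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(response)                      # best match location
    dx_px = best[0] - margin                                     # shift in pixels
    dy_px = best[1] - margin
    return dx_px * metres_per_pixel, dy_px * metres_per_pixel
```
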
10

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (December 15, 2019): 5516. http://dx.doi.org/10.3390/app9245516.

Full text
Abstract:
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor environments, namely agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers three core contributions to the scientific community. The first is an extension of state-of-the-art monocular visual odometry (Libviso2) to work with omnidirectional cameras and a single-axis gyroscope to increase system accuracy. The second is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses a Raspberry Pi GPU to improve visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in terms of runtime performance and motion estimation accuracy.
APA, Harvard, Vancouver, ISO, and other styles
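
The scale-recovery contribution described above, using planar LIDAR ranges to fix the unknown scale of a monocular estimate, can be illustrated with a very small sketch. It assumes LIDAR returns have already been associated with triangulated feature depths; variable names are hypothetical and the robust median ratio is just one reasonable choice, not the paper's exact estimator.

```python
import numpy as np

def estimate_scale(mono_depths, lidar_ranges):
    """Robustly estimate the metric scale factor of a monocular reconstruction.

    mono_depths:  up-to-scale depths of features triangulated by the visual odometer
    lidar_ranges: metric ranges of the same features measured by the planar LIDAR
    """
    ratios = np.asarray(lidar_ranges) / np.asarray(mono_depths)
    return float(np.median(ratios))        # median is robust to bad associations

# The metric translation is then: t_metric = estimate_scale(d_mono, d_lidar) * t_mono
```
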

Dissertations / Theses on the topic "Visual Odometry"

1

Pereira, Fabio Irigon. "High precision monocular visual odometry." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.

Full text
Abstract:
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision that finds several applications in our society. Robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation when each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a camera-equipped vehicle, a problem known as visual odometry. In order to provide an objective measure of estimation efficiency and to compare the achieved results to the state of the art in visual odometry, a popular high-precision dataset was selected and used. In the course of this work, new techniques for image feature tracking, camera pose estimation, 3D point position calculation, and scale recovery are proposed. The achieved results outperform the best-ranked results on the chosen dataset at the time of publication of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
2

Masson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.

Full text
Abstract:
This Master's thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method is proposed, based on a single rotating camera and requiring knowledge of the directions of only two (or more) landmarks. In a first phase, multi-view geometry is used to estimate camera rotations and the directions of key elements from a set of overlapping images. In a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
APA, Harvard, Vancouver, ISO, and other styles
3

Johansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning." Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.

Full text
Abstract:
In this master thesis a visual odometry system is implemented and explained. Visual odometry is a technique that can be used on autonomous vehicles to determine their current position, and it is preferably used indoors when GPS is not working. The only inputs to the system are the images from a stereo camera, and the output is the current location given as a relative position. In the C++ implementation, image features are found and matched between the stereo images and the previous stereo pair, which gives a range of 150-250 verified feature matches. The image coordinates are triangulated into a 3D point cloud. The distance between two subsequent point clouds is minimized with respect to rigid transformations, which gives the motion described by six parameters: three for the translation and three for the rotation. Noise in the image coordinates gives reconstruction errors, which makes the motion estimation very sensitive. The results from six experiments show that the weakness of the system is its ability to distinguish rotations from translations. However, if the system has additional knowledge of how it is moving, the minimization can be done with only three parameters and the system can estimate its position with less than 5% error.
APA, Harvard, Vancouver, ISO, and other styles
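
The triangulation step described in the abstract above, turning matched stereo image coordinates into a 3D point cloud, follows the standard rectified-stereo disparity relations. Below is a generic sketch with assumed intrinsics (focal length f, principal point (cx, cy), baseline), not the thesis code.

```python
import numpy as np

def triangulate_stereo(u_left, v_left, u_right, f, cx, cy, baseline):
    """Triangulate rectified stereo correspondences into camera-frame 3D points.

    Depth follows Z = f * baseline / d with disparity d = u_left - u_right, and the
    lateral coordinates follow the pinhole back-projection X = (u - cx) * Z / f."""
    disparity = np.asarray(u_left, dtype=float) - np.asarray(u_right, dtype=float)
    Z = f * baseline / disparity
    X = (np.asarray(u_left) - cx) * Z / f
    Y = (np.asarray(v_left) - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)      # N x 3 point cloud
```

Aligning two such point clouds with a rigid transformation then yields the six interframe motion parameters mentioned in the abstract.
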
4

Venturelli, Cavalheiro Guilherme. "Fusing visual odometry and depth completion." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input until we ultimately replace LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
APA, Harvard, Vancouver, ISO, and other styles
5

Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.

Full text
Abstract:
Monocular cameras are prominently used for estimating the motion of unmanned aerial vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where the Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most monocular camera based approaches suffer from obscure scale information. Ground vehicles impose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible by fusing visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale is sensitive to several factors, including the initialization error. An accurate estimate of the scale allows an accurate estimate of the pose, which facilitates the localization of ground vehicles in the absence of GNSS and provides a reliable fall-back option.
APA, Harvard, Vancouver, ISO, and other styles
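
A stripped-down version of the scale-estimation idea above treats the unknown monocular scale as a filter state that is corrected whenever an inertially derived displacement is available. The one-dimensional sketch below is purely didactic and makes strong assumptions (scalar displacement magnitudes, a linear measurement model); it is not the thesis's EKF formulation.

```python
def update_scale(scale, var, d_visual, d_inertial, meas_var):
    """One scalar Kalman update of the monocular scale.

    scale, var : current scale estimate and its variance
    d_visual   : up-to-scale displacement magnitude from the visual odometer
    d_inertial : metric displacement magnitude from integrating the IMU
    meas_var   : variance of the inertial displacement measurement
    Measurement model: d_inertial = scale * d_visual + noise (linear in scale)."""
    H = d_visual
    innovation = d_inertial - scale * H
    S = H * var * H + meas_var            # innovation covariance
    K = var * H / S                       # Kalman gain
    scale = scale + K * innovation
    var = (1.0 - K * H) * var
    return scale, var
```

Sensitivity to the initial value, as noted in the abstract, shows up here as a poor prior `scale`/`var` slowing or biasing convergence.
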
6

Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Campanholo, Guizilini Vitor. "Non-Parametric Learning for Monocular Visual Odometry." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9903.

Full text
Abstract:
This thesis addresses the problem of incremental localization from visual information, a scenario commonly known as visual odometry. Current visual odometry algorithms are heavily dependent on camera calibration, using a pre-established geometric model to provide the transformation between input (optical flow estimates) and output (vehicle motion estimates) information. A novel approach to visual odometry is proposed in this thesis where the need for camera calibration, or even for a geometric model, is circumvented by the use of machine learning principles and techniques. A non-parametric Bayesian regression technique, the Gaussian Process (GP), is used to elect the most probable transformation function hypothesis from input to output, based on training data collected prior to and during navigation. Other than eliminating the need for a geometric model and traditional camera calibration, this approach also allows for scale recovery even in a monocular configuration, and provides a natural treatment of uncertainties due to the probabilistic nature of GPs. Several extensions to the traditional GP framework are introduced and discussed in depth, and they constitute the core of the contributions of this thesis to the machine learning and robotics community. The proposed framework is tested in a wide variety of scenarios, ranging from urban and off-road ground vehicles to unconstrained 3D unmanned aircraft. The results show a significant improvement over traditional visual odometry algorithms, and also surpass results obtained using other sensors, such as laser scanners and IMUs. The incorporation of these results into a SLAM scenario, using an Exact Sparse Information Filter (ESIF), is shown to decrease global uncertainty by exploiting revisited areas of the environment. Finally, a technique for the automatic segmentation of dynamic objects is presented as a way to increase the robustness of image information and further improve visual odometry results.
APA, Harvard, Vancouver, ISO, and other styles
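
As a loose illustration of the non-parametric idea above, a Gaussian Process regressor can map an optical-flow summary directly to a motion increment without a geometric camera model. The sketch below uses scikit-learn with a placeholder feature construction; the thesis introduces several extensions beyond this basic regressor, so this is only the baseline concept.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train_motion_gp(X, Y):
    """Fit a GP mapping optical-flow descriptors to vehicle motion increments.

    X: one row per frame pair, e.g. an averaged/flattened optical-flow descriptor.
    Y: the corresponding motion (e.g. forward translation and yaw change)
       recorded by a reference sensor during the training run."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, Y)
    return gp

# At run time the GP replaces the calibrated geometric model, and its predictive
# uncertainty comes for free:
# mean, std = gp.predict(flow_descriptor[None, :], return_std=True)
```
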
8

Wuthrich, Tori(Tori Lee). "Learning visual odometry primitives for computationally constrained platforms." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Autonomous navigation for robotic platforms, particularly using techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and physical space for sensors. A computer vision, machine learning-based localization method was proposed by researchers investigating the automation of medical procedures. However, we believed the method to also be promising for low size, weight, and power (SWAP) budget robots. Unlike traditional odometry methods, in this case a machine learning model can be trained offline and can then generate odometry measurements quickly and efficiently. This thesis describes the implementation of the learning-based visual odometry method in the context of autonomous drones. We refer to the method as RetiNav due to its similarities with the way the human eye processes light signals from its surroundings. We make several modifications to the method relative to the initial design based on a detailed parameter study, and we test the method on a variety of challenging flight datasets. We show that over the course of a trajectory, RetiNav achieves as low as 1.4% error in predicting the distance traveled. We conclude that such a method is a viable component of a localization system, and propose the next steps for work in this area.
APA, Harvard, Vancouver, ISO, and other styles
9

Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.

Full text
Abstract:
A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Captured frames from a Kinect-like RGB-D camera are analyzed and an estimated position of the MAV is extracted. The hope is to find a positioning solution for GPS-denied environments; this thesis focuses on an indoor office environment. The MAV is flown manually, capturing in-flight RGB-D images which are registered with the AICK algorithm. The result is analyzed to determine whether AICK is viable for autonomous flight based on on-board positioning estimates. The results show potential for a working autonomous MAV in GPS-denied environments, although some surroundings have proven difficult. The lack of visual features on, for example, a white wall causes problems and uncertainties in the positioning, which is even more troublesome when the distance to the surroundings exceeds the RGB-D camera's depth range. With further work on these weaknesses we believe that a robust autonomous MAV using AICK for positioning is plausible.
APA, Harvard, Vancouver, ISO, and other styles
10

Clark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.

Full text
Abstract:
Precise pose information is a fundamental prerequisite for numerous applications in robotics, AI and mobile computing. Monocular cameras are the ideal sensor for this purpose - they are cheap, lightweight and ubiquitous. As such, monocular visual localization is widely regarded as a cornerstone requirement of machine perception. However, a large gap still exists between the performance that these applications require and that which is achievable through existing monocular perception algorithms. In this thesis we directly tackle the issue of robust egocentric visual localization and mapping through a data-centric approach. As a first major contribution, we propose novel learnt models for visual odometry which form the basis of the ego-motion estimates used in later chapters. The proposed approaches are less fragile and much more robust than existing approaches. We present experimental evidence that these approaches can not only approach the accuracy of standard methods but in many cases also show major improvements in computational and memory efficiency. To cope with the drift inherent to the odometry methods, we then introduce a novel learnt spatio-temporal model for performing global relocalization updates. The proposed approach allows one to efficiently infer the global location of an image stream at a fraction of the time of traditional feature-based approaches with minimal loss in localization accuracy. Finally, we present a novel SLAM system integrating our learnt priors for creating 3D maps from monocular image sequences. The approach is designed to harness multiple input sources, including prior depth and ego-motion estimates, and incorporates both loop-closure and relocalization updates. The approach, based on the well-established standard visual-inertial structure-from-motion process, allows us to perform accurate posterior inference of camera poses and scene structure to significantly boost the reconstruction robustness and fidelity. Through our qualitative and quantitative experimentation on a wide range of datasets, we conclude that the proposed methods can bring accurate visual localization to a wide class of consumer devices and robotic platforms.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Visual Odometry"

1

Erdem, Uğur Murat, Nicholas Roy, John J. Leonard, and Michael E. Hasselmo. Spatial and episodic memory. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0029.

Full text
Abstract:
The neuroscience of spatial memory is one of the most promising areas for developing biomimetic solutions to complex engineering challenges. Grid cells are neurons recorded in the medial entorhinal cortex that fire when rats are in an array of locations in the environment falling on the vertices of tightly packed equilateral triangles. Grid cells suggest an exciting new approach for enhancing robot simultaneous localization and mapping (SLAM) in changing environments and could provide a common map for situational awareness between human and robotic teammates. Current models of grid cells are well suited to robotics, as they utilize input from self-motion and sensory flow similar to inertial sensors and visual odometry in robots. Computational models, supported by in vivo neural activity data, demonstrate how grid cell representations could provide a substrate for goal-directed behavior using hierarchical forward planning that finds novel shortcut trajectories in changing environments.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Visual Odometry"

1

Chien, Hsiang-Jen, Jr-Jiun Lin, Tang-Kai Yin, and Reinhard Klette. "Multi-objective Visual Odometry." In Image and Video Technology, 62–74. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75786-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Xiang, and Tao Zhang. "Visual Odometry: Part II." In Introduction to Visual SLAM, 197–221. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gao, Xiang, and Tao Zhang. "Practice: Stereo Visual Odometry." In Introduction to Visual SLAM, 331–46. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gao, Xiang, and Tao Zhang. "Visual Odometry: Part I." In Introduction to Visual SLAM, 143–95. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lianos, Konstantinos-Nektarios, Johannes L. Schönberger, Marc Pollefeys, and Torsten Sattler. "VSO: Visual Semantic Odometry." In Computer Vision – ECCV 2018, 246–63. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kalambe, Shrijay S., Elizabeth Rufus, Vinod Karar, and Shashi Poddar. "Descriptor- Using Low- for Visual Odometry." In Proceedings of 3rd International Conference on Computer Vision and Image Processing, 1–11. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9291-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rani, Prachi, Arpit Jangid, Vinay P. Namboodiri, and K. S. Venkatesh. "Visual Odometry Based Omni-directional Hyperlapse." In Communications in Computer and Information Science, 3–13. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0020-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mirabdollah, M. Hossein, and Bärbel Mertsching. "Fast Techniques for Monocular Visual Odometry." In Lecture Notes in Computer Science, 297–307. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24947-6_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Van Hamme, David, Peter Veelaert, and Wilfried Philips. "Robust Visual Odometry Using Uncertainty Models." In Advanced Concepts for Intelligent Vision Systems, 1–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23687-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Scaramuzza, Davide, and Zichao Zhang. "Aerial Robots, Visual-Inertial Odometry of." In Encyclopedia of Robotics, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-642-41610-1_71-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Visual Odometry"

1

Kleinschmidt, Sebastian P., and Bernardo Wagner. "Visual Multimodal Odometry: Robust Visual Odometry in Harsh Environments." In 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2018. http://dx.doi.org/10.1109/ssrr.2018.8468653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lin, Minjie, Qixin Cao, and Haoruo Zhang. "PVO: Panoramic Visual Odometry." In 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2018. http://dx.doi.org/10.1109/icarm.2018.8610700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Center, Julian L., Kevin H. Knuth, Ali Mohammad-Djafari, Jean-François Bercher, and Pierre Bessiére. "Bayesian Visual Odometry." In BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: Proceedings of the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2011. http://dx.doi.org/10.1063/1.3573659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Flemmen, Henrik D., Rudolf Mester, Annette Stahl, Torleiv H. Bryne, and Edmund Førland Brekke. "Maritime radar odometry inspired by visual odometry." In 2023 26th International Conference on Information Fusion (FUSION). IEEE, 2023. http://dx.doi.org/10.23919/fusion52260.2023.10224142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Huai, Zheng, and Guoquan Huang. "Robocentric Visual-Inertial Odometry." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhu, Pengxiang, Yulin Yang, Wei Ren, and Guoquan Huang. "Cooperative Visual-Inertial Odometry." In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. http://dx.doi.org/10.1109/icra48506.2021.9561674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abdulov, Alexander, and Alexander Abramenkov. "Visual odometry system simulator." In 2017 International Siberian Conference on Control and Communications (SIBCON). IEEE, 2017. http://dx.doi.org/10.1109/sibcon.2017.7998584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ye, Weicai, Xinyue Lan, Shuo Chen, Yuhang Ming, Xingyuan Yu, Hujun Bao, Zhaopeng Cui, and Guofeng Zhang. "PVO: Panoptic Visual Odometry." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Klenk, Simon, Marvin Motzet, Lukas Koestler, and Daniel Cremers. "Deep Event Visual Odometry." In 2024 International Conference on 3D Vision (3DV). IEEE, 2024. http://dx.doi.org/10.1109/3dv62453.2024.00036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wei, Peng, Guoliang Hua, Weibo Huang, Fanyang Meng, and Hong Liu. "Unsupervised Monocular Visual-inertial Odometry Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/325.

Full text
Abstract:
Recently, unsupervised methods for monocular visual odometry (VO), with no need for quantities of expensive labeled ground truth, have attracted much attention. However, these methods are inadequate for long-term odometry tasks, due to the inherent limitation of only using monocular visual data and the inability to handle the error accumulation problem. By utilizing supplemental low-cost inertial measurements, and exploiting the multi-view geometric constraint and sequential constraint, an unsupervised visual-inertial odometry framework (UnVIO) is proposed in this paper. Our method is able to predict the per-frame depth map, as well as extracting and self-adaptively fusing visual-inertial motion features from the image-IMU stream to achieve long-term odometry. A novel sliding window optimization strategy, which consists of an intra-window and an inter-window optimization, is introduced to overcome the error accumulation and scale ambiguity problems. The intra-window optimization restrains the geometric inferences within the window by checking photometric consistency, and the inter-window optimization checks the 3D geometric consistency and trajectory consistency among predictions of separate windows. Extensive experiments have been conducted on the KITTI and Malaga datasets to demonstrate the superiority of UnVIO over other state-of-the-art VO / VIO methods. The codes are open-source.
APA, Harvard, Vancouver, ISO, and other styles
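
The intra-window photometric consistency mentioned above compares a source image warped into the target view against the target image. The very reduced sketch below assumes a precomputed dense pixel mapping (for example, derived from predicted depth and relative pose) and is only an illustration of the consistency check, not the UnVIO training code.

```python
import numpy as np
import cv2

def photometric_loss(target_gray, source_gray, map_x, map_y):
    """Mean absolute photometric error between the target image and the source
    image warped into the target view.

    map_x, map_y: for every target pixel, the (x, y) location in the source image
    it corresponds to (e.g. obtained from predicted depth and relative pose)."""
    warped = cv2.remap(source_gray, map_x.astype(np.float32),
                       map_y.astype(np.float32), interpolation=cv2.INTER_LINEAR)
    # Only score pixels whose correspondence falls inside the source image.
    valid = (map_x >= 0) & (map_x < source_gray.shape[1]) & \
            (map_y >= 0) & (map_y < source_gray.shape[0])
    diff = np.abs(target_gray.astype(np.float32) - warped.astype(np.float32))
    return float(diff[valid].mean())
```
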

Reports on the topic "Visual Odometry"

1

Pirozzo, David M., Philip A. Frederick, Shawn Hunt, Bernard Theisen, and Mike Del Rose. Spectrally Queued Feature Selection for Robotic Visual Odometery. Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada535663.

Full text
APA, Harvard, Vancouver, ISO, and other styles