Journal articles on the topic 'Visual Odometry'

Consult the top 50 journal articles for your research on the topic 'Visual Odometry.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Sun, Qian, Ming Diao, Yibing Li, and Ya Zhang. "An improved binocular visual odometry algorithm based on the Random Sample Consensus in visual navigation systems." Industrial Robot: An International Journal 44, no. 4 (June 19, 2017): 542–51. http://dx.doi.org/10.1108/ir-11-2016-0280.

Full text
Abstract:
Purpose: The purpose of this paper is to propose a binocular visual odometry algorithm based on Random Sample Consensus (RANSAC) for visual navigation systems. Design/methodology/approach: The authors propose a novel binocular visual odometry algorithm based on a features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. First, features are detected with the FAST extractor. Second, the detected features are roughly matched using the distance ratio of the nearest neighbor to the second-nearest neighbor. Finally, wrongly matched feature pairs are removed with the RANSAC method to reduce the interference of erroneous matches. Findings: The performance of the new algorithm was examined with real experimental data. The results show that the proposed binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces the positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm were also verified. Practical implications: This paper presents an improved binocular visual odometry algorithm that has been tested on real data and can be used for outdoor vehicle navigation. Originality/value: A binocular visual odometry algorithm based on the FAST extractor and RANSAC is proposed to improve positioning accuracy and robustness; experimental results verify the effectiveness of the proposed algorithm.
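For readers who want to see what such a pipeline looks like in practice, below is a minimal OpenCV (Python) sketch of the same three steps: FAST detection, nearest/second-nearest neighbour ratio matching, and RANSAC outlier rejection. It is an illustration only, not the authors' implementation; in particular, the use of ORB descriptors at the FAST corners and all thresholds are assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def match_features_ransac(img1, img2, ratio=0.75, ransac_thresh=1.0):
    """FAST detection + ratio-test matching + RANSAC outlier rejection."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()  # only used to compute descriptors at the FAST corners (assumption)
    kp1 = fast.detect(img1, None)
    kp2 = fast.detect(img2, None)
    kp1, des1 = orb.compute(img1, kp1)
    kp2, des2 = orb.compute(img2, kp2)

    # Rough matching: keep a pair only if its nearest neighbour is clearly
    # better than the second-nearest neighbour.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in raw if len(m) == 2 and m[0].distance < ratio * m[1].distance]

    # RANSAC on the epipolar geometry removes the remaining wrong matches.
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    _, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.99)
    return [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
```

The surviving inlier matches would then feed the stereo triangulation and motion estimation stages of a binocular visual odometry pipeline.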
APA, Harvard, Vancouver, ISO, and other styles
2

Srinivasan, M., S. Zhang, and N. Bidwell. "Visually mediated odometry in honeybees." Journal of Experimental Biology 200, no. 19 (October 1, 1997): 2513–22. http://dx.doi.org/10.1242/jeb.200.19.2513.

Full text
Abstract:
The ability of honeybees to gauge the distances of short flights was investigated under controlled laboratory conditions where a variety of potential odometric cues such as flight duration, energy consumption, image motion, airspeed, inertial navigation and landmarks were manipulated. Our findings indicate that honeybees can indeed measure short distances travelled and that they do so solely by analysis of image motion. Visual odometry seems to rely primarily on the motion that is sensed by the lateral regions of the visual field. Computation of distance flown is re-commenced whenever a prominent landmark is encountered en route. 'Re-setting' the odometer (or starting a new one) at each landmark facilitates accurate long-range navigation by preventing excessive accumulation of odometric errors. Distance appears to be learnt on the way to the food source and not on the way back.
APA, Harvard, Vancouver, ISO, and other styles
3

Scaramuzza, Davide, and Friedrich Fraundorfer. "Visual Odometry [Tutorial]." IEEE Robotics & Automation Magazine 18, no. 4 (December 2011): 80–92. http://dx.doi.org/10.1109/mra.2011.943233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Chenggong, Gen Li, Ruiqi Wang, and Lin Li. "Wheeled Robot Visual Odometer Based on Two-dimensional Iterative Closest Point Algorithm." Journal of Physics: Conference Series 2504, no. 1 (May 1, 2023): 012002. http://dx.doi.org/10.1088/1742-6596/2504/1/012002.

Full text
Abstract:
According to the two-dimensional motion characteristics of a planar-motion wheeled robot, the visual odometry was dimensionally reduced in this study. In the feature point matching part of the visual odometry, a contour constraint was used to filter out mismatched feature point pairs (abbreviated as FPP). This method could also filter out matched FPP whose color-image matches were correct but whose depth-image error was large, which offered higher-quality matched FPP for the subsequent inter-frame motion estimation. Dimension reduction was performed in the inter-frame motion estimation part, and the two-dimensional Iterative Closest Point (ICP) algorithm was used for camera motion estimation. The experiments indicated that the proposed algorithm effectively improved the computational speed and precision of the planar-motion wheeled robot's visual odometry, and that the dimension-reduced ICP processing provides a useful reference and data support for subsequent research on wheeled-robot visual odometry.
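As a generic illustration of the two-dimensional inter-frame estimation step mentioned above (not the paper's implementation, and omitting the contour-constraint filtering), a point-to-point 2D ICP can be sketched as follows; the iteration count and the use of a k-d tree for correspondences are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=20):
    """Estimate the planar rotation R (2x2) and translation t (2,) aligning
    the Nx2 point set `src` onto the Mx2 point set `dst` (point-to-point ICP)."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)              # nearest-neighbour correspondences
        p, q = moved, dst[idx]
        p_c, q_c = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(p_c.T @ q_c)   # 2x2 cross-covariance (Kabsch)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:               # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = q.mean(0) - dR @ p.mean(0)
        R, t = dR @ R, dR @ t + dt              # compose with the running estimate
    return R, t
```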
APA, Harvard, Vancouver, ISO, and other styles
5

CIOCOIU, Titus, Florin MOLDOVEANU, and Caius SULIMAN. "CAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM." SCIENTIFIC RESEARCH AND EDUCATION IN THE AIR FORCE 18, no. 1 (June 24, 2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (September 1, 2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Full text
Abstract:
Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. Both approaches assume that the quantitative majority of candidate visual cues represents the true motion. In real urban traffic scenes, however, this assumption can be broken by many dynamic traffic participants: big trucks or buses may occupy most of the image of a front-view monocular camera and result in wrong visual odometry estimates. Finding visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments. Semantic attributes of pixels can be considered a more reasonable factor for candidate selection in that case. This article analyzed the availability of all visual cues with the help of pixel-level semantic information and proposed a new visual odometry method that combines feature-based and alignment-based visual odometry methods in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and our own data set. Experimental results confirmed that the new approach provides effective improvements in both accuracy and robustness in complex dynamic scenes.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Jiabin, and Faqin Gao. "Improved visual inertial odometry based on deep learning." Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012016. http://dx.doi.org/10.1088/1742-6596/2078/1/012016.

Full text
Abstract:
The traditional visual inertial odometry extracts key points according to manually designed rules. However, manually designed extraction rules are easily affected by illumination and viewpoint changes and have poor robustness, resulting in reduced positioning accuracy. Deep learning methods show strong robustness in key point extraction. In order to improve the positioning accuracy of visual inertial odometry in scenes with illumination and viewpoint changes, deep learning is introduced into the visual inertial odometry system for key point detection. The encoder part of the MagicPoint network is improved with depthwise separable convolutions, and the network is then trained with a self-supervised method. A visual inertial odometry system based on deep learning is composed by using the trained network to replace the traditional key point detection algorithm on the basis of VINS. The key point detection network is tested on the HPatches dataset, and the odometry positioning performance is evaluated on the EuRoC dataset. The results show that the improved deep-learning-based visual inertial odometry reduces the positioning error by more than 5% without affecting real-time performance.
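The depthwise separable convolution mentioned above can be illustrated with a small PyTorch module of the kind that could replace a standard convolution in an encoder; the layer sizes, normalization, and activation choices here are assumptions rather than the paper's exact design.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 per-channel (depthwise) convolution followed by a 1x1 pointwise
    convolution, reducing parameters and FLOPs compared with a standard conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a drop-in replacement for a 64-to-128-channel encoder convolution.
# block = DepthwiseSeparableConv(64, 128, stride=2)
```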
APA, Harvard, Vancouver, ISO, and other styles
8

Borges, Paulo Vinicius Koerich, and Stephen Vidas. "Practical Infrared Visual Odometry." IEEE Transactions on Intelligent Transportation Systems 17, no. 8 (August 2016): 2205–13. http://dx.doi.org/10.1109/tits.2016.2515625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (October 5, 2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Full text
Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which estimates the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved with a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed: one camera points at the ground under the robot, and the other looks at the surrounding environment. Comparisons with popular localization approaches, through physical experiments in off-road conditions, have shown the satisfactory behavior of the proposed strategy.
APA, Harvard, Vancouver, ISO, and other styles
10

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (December 15, 2019): 5516. http://dx.doi.org/10.3390/app9245516.

Full text
Abstract:
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor, namely agricultural, environments this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers three core contributions to the scientific community. The first contribution is an extension of the state-of-the-art monocular visual odometry system Libviso2 to work with omnidirectional cameras and a single-axis gyro to increase the system accuracy. The second contribution is an algorithm that uses low-cost LIDAR data to estimate the motion scale and solve the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses a Raspberry Pi GPU to improve the visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in terms of time performance and motion estimation accuracy.
APA, Harvard, Vancouver, ISO, and other styles
11

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (September 24, 2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Full text
Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot's position controller. The state-estimation visual odometry results were compared with the robot's dead-reckoning kinematics, and both are combined as a recursive position controller. One observer model design is based on the analytical geometric multi-view approach. The other observer model is based on multi-view lateral optical flow, which was reformulated as nonspatial–temporal and is modeled by an exponential function. This work presents the analytical deductions of the models and formulations. Experimental validation deals with five main aspects: multi-view correction, a geometric observer for range measurement, an optical flow observer for range measurement, dead reckoning and visual odometry. Furthermore, the positioning comparison includes a four-wheel odometer, deterministic visual observers and the observer–extended Kalman filter, compared with a vision-based global reference localization system.
APA, Harvard, Vancouver, ISO, and other styles
12

Jeon, Hyun-Ho, Jin-Hyung Kim, and Yun-Ho Ko. "RAFSet (Robust Aged Feature Set)-Based Monocular Visual Odometry." Journal of Institute of Control, Robotics and Systems 23, no. 12 (December 31, 2017): 1063–69. http://dx.doi.org/10.5302/j.icros.2017.17.0160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots." Journal of Intelligent Systems 24, no. 4 (December 1, 2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.

Full text
Abstract:
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot odometry. We propose in this article a novel light and robust topometric simultaneous localization and mapping framework using appearance-based visual loop-closure detection enhanced with the odometry. The main advantage of this combination is that the odometry makes the loop-closure detection more accurate and reactive, while the loop-closure detection enables the long-term use of odometry for guidance by correcting the drift. The guidance approach is based on qualitative localization using vision and odometry, and is robust to visual sensor occlusions or changes in the scene. The resulting framework is incremental, real-time, and based on cheap sensors provided on many robots (a camera and odometry encoders). This approach is, moreover, particularly well suited for low-power robots as it is not dependent on the image processing frequency and latency, and thus it can be applied using remote processing. The algorithm has been validated on a Pioneer P3DX mobile robot in indoor environments, and its robustness is demonstrated experimentally for a large range of odometry noise levels.
APA, Harvard, Vancouver, ISO, and other styles
14

Jiang, Feng, Jianjun Gu, Shiqiang Zhu, Te Li, and Xinliang Zhong. "Visual Odometry Based 3D-Reconstruction." Journal of Physics: Conference Series 1961, no. 1 (July 1, 2021): 012074. http://dx.doi.org/10.1088/1742-6596/1961/1/012074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Comport, A. I., E. Malis, and P. Rives. "Real-time Quadrifocal Visual Odometry." International Journal of Robotics Research 29, no. 2-3 (January 5, 2010): 245–66. http://dx.doi.org/10.1177/0278364909356601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Yandong, Tao Zhang, Yuanchao Wang, Jingwei Ma, Yanhui Li, and Jingzhuang Han. "Compass aided visual-inertial odometry." Journal of Visual Communication and Image Representation 60 (April 2019): 101–15. http://dx.doi.org/10.1016/j.jvcir.2018.12.029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Lappe, M., M. Jenkin, and L. Harris. "Visual odometry by leaky integration." Journal of Vision 7, no. 9 (March 18, 2010): 147. http://dx.doi.org/10.1167/7.9.147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Prasad Singh, Indushekhar. "VISUAL ODOMETRY FOR AUTONOMOUS VEHICLES." International Journal of Advanced Research 7, no. 9 (September 30, 2019): 1136–44. http://dx.doi.org/10.21474/ijar01/9765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

ZHANG, Jieqiang, and Ryuichi UEDA. "Visual Odometry from Brick Road." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2022 (2022): 2P1-I12. http://dx.doi.org/10.1299/jsmermd.2022.2p1-i12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Alapetite, Alexandre, Zhongyu Wang, John Paulin Hansen, Marcin Zajączkowski, and Mikołaj Patalan. "Comparison of Three Off-the-Shelf Visual Odometry Systems." Robotics 9, no. 3 (July 21, 2020): 56. http://dx.doi.org/10.3390/robotics9030056.

Full text
Abstract:
Positioning is an essential aspect of robot navigation, and visual odometry is an important technique for continuously updating the robot's internal estimate of its position, especially indoors without GPS (Global Positioning System). Visual odometry uses one or more cameras to find visual cues and estimate robot movements in 3D in relative terms. Recent progress has been made, especially with fully integrated systems such as the RealSense T265 from Intel, which is the focus of this article. We compare three visual odometry systems (and one wheel odometry system, as a known baseline) on a ground robot. We do so in eight scenarios, varying the speed, the number of visual features, and the presence of humans walking in the field of view. We continuously measure the position error in translation and rotation with a ground-truth positioning system. Our results show that all odometry systems are challenged, but in different ways. The RealSense T265 and the ZED Mini have comparable performance, better than our baseline ORB-SLAM2 (mono-lens, without an inertial measurement unit (IMU)) but not excellent. In conclusion, a single odometry system might still not be sufficient, so using multiple instances and sensor fusion approaches remains necessary while waiting for additional research and further improved products.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhu, Zihan, Yi Zhang, Weijun Wang, Wei Feng, Haowen Luo, and Yaojie Zhang. "Adaptive Adjustment of Factor’s Weight for a Multi-Sensor SLAM." Journal of Physics: Conference Series 2451, no. 1 (March 1, 2023): 012004. http://dx.doi.org/10.1088/1742-6596/2451/1/012004.

Full text
Abstract:
A multi-sensor fusion simultaneous localization and mapping (SLAM) method based on factor graph optimization that can adaptively modify the weights of graph factors is proposed in this study, to enhance the localization and mapping capability of autonomous robots in tough situations. First, the algorithm fuses multi-line lidar, a monocular camera, and an inertial measurement unit (IMU) in the odometry. Second, the factor graph is constructed using lidar and visual odometry as the unary-edge and binary-edge constraints, respectively, with the motion determined by IMU odometry serving as the primary odometry in the system. Finally, the different increments of the IMU odometry, lidar odometry, and visual odometry are computed as weighting factors to realize the adaptive adjustment of each factor's weight. The suggested method has greater localization accuracy and a better mapping effect in complex situations when compared to previous algorithms.
APA, Harvard, Vancouver, ISO, and other styles
22

Yuan, Shuangjie, Jun Zhang, Yujia Lin, and Lu Yang. "Hybrid self-supervised monocular visual odometry system based on spatio-temporal features." Electronic Research Archive 32, no. 5 (2024): 3543–68. http://dx.doi.org/10.3934/era.2024163.

Full text
Abstract:
For the autonomous and intelligent operation of robots in unknown environments, simultaneous localization and mapping (SLAM) is essential. Since the proposal of visual odometry, its use in the mapping process has greatly advanced the development of purely visual SLAM techniques. However, the main challenges in current monocular odometry algorithms are the poor generalization of traditional methods and the low interpretability of deep-learning-based methods. This paper presents a hybrid self-supervised visual monocular odometry framework that combines geometric principles and multi-frame temporal information. Moreover, a post-odometry optimization module is proposed: by using image synthesis techniques to insert synthetic views between the two frames undergoing pose estimation, more accurate inter-frame pose estimation is achieved. Compared to other public monocular algorithms, the proposed approach shows reduced average errors in various scene sequences, with a translation error of 2.211% and a rotation error of 0.418°/100 m. With the help of the proposed optimizer, the precision of the odometry algorithm is further improved, with a relative decrease of approximately 10% in translation error and 15% in rotation error.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Baifan, Haowu Zhao, Ruyi Zhu, and Yemin Hu. "Marked-LIEO: Visual Marker-Aided LiDAR/IMU/Encoder Integrated Odometry." Sensors 22, no. 13 (June 23, 2022): 4749. http://dx.doi.org/10.3390/s22134749.

Full text
Abstract:
In this paper, we propose a visual marker-aided LiDAR/IMU/encoder integrated odometry, Marked-LIEO, to achieve pose estimation of mobile robots in indoor long-corridor environments. In the first stage, we design pre-integration models for the encoder and the IMU to obtain a pose estimate that provides a prediction for the LiDAR odometry in the second stage. In the second stage, we design a low-frequency visual marker odometry, which is optimized jointly with the LiDAR odometry to obtain the final pose estimate. In view of wheel slipping and LiDAR degradation problems, we design an algorithm that adaptively adjusts the optimization weights of the encoder odometry and the LiDAR odometry according to the yaw angle and the LiDAR degradation distance, respectively. Finally, we realize multi-sensor fusion localization through joint optimization of encoder, IMU, LiDAR, and camera measurements. Aiming at the problems of GNSS information loss and LiDAR degradation in indoor corridor environments, this method introduces the state prediction information of the encoder and IMU and the absolute observations of visual markers to achieve accurate poses in indoor corridors, which has been verified by experiments in the Gazebo simulation environment and in a real environment.
APA, Harvard, Vancouver, ISO, and other styles
24

Kim, Kyu-Won, Tae-Ki Jung, Seong-Hun Seo, and Gyu-In Jee. "Development of Tightly Coupled based LIDAR-Visual-Inertial Odometry." Journal of Institute of Control, Robotics and Systems 26, no. 8 (August 31, 2020): 597–603. http://dx.doi.org/10.5302/j.icros.2020.20.0076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Xu, Shuchen, Yongrong Sun, Kedong Zhao, Xiyu Fu, and Shuaishuai Wang. "Road-Network-Map-Assisted Vehicle Positioning Based on Pose Graph Optimization." Sensors 23, no. 17 (August 31, 2023): 7581. http://dx.doi.org/10.3390/s23177581.

Full text
Abstract:
Satellite signals are easily lost in urban areas, which makes it difficult to locate vehicles with high precision. Visual odometry has been increasingly applied in navigation systems to solve this problem. However, visual odometry relies on dead-reckoning, in which a slight positioning error accumulates over time and can result in a catastrophic positioning error. Thus, this paper proposes a road-network-map-assisted vehicle positioning method based on the theory of pose graph optimization. This method takes the dead-reckoning result of visual odometry as the input and introduces constraints from a point-line-form road network map to suppress the accumulated error and improve vehicle positioning accuracy. We design an optimization and prediction model, and the original trajectory of the visual odometry is optimized to obtain a corrected trajectory by introducing constraints from map correction points. The vehicle position at the next moment is predicted based on the latest output of the visual odometry and the corrected trajectory. Experiments carried out on the KITTI and campus datasets demonstrate the superiority of the proposed method, which provides stable and accurate vehicle position estimates in real time and has higher positioning accuracy than similar map-assisted methods.
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Gang, Zhaozheng Hu, Qianwen Tao, Fan Zhang, and Zhe Zhou. "Improved intelligent vehicle self-localization with integration of sparse visual map and high-speed pavement visual odometry." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 1 (September 4, 2020): 177–87. http://dx.doi.org/10.1177/0954407020943306.

Full text
Abstract:
Localization is a fundamental requirement for intelligent vehicles. Conventional localization methods usually suffer from various limitations, such as low accuracy and blocked areas for Global Positioning System, high cost for inertial navigation system or light detection and ranging, and low robustness for visual simultaneous localization and mapping or visual odometry. To overcome these problems, we propose a novel localization method integrated with a sparse visual map and a high-speed pavement visual odometry. We use a lateral-view camera to sense the sparse visual map node for accurate map-based localization. We use a down-view high-speed camera for odometry computation between two sparse visual map nodes. With a high-speed camera, it is possible to extract and track pavement features with stable resolution imaging even in high-speed movement. We also develop a data-driven motion model for the Kalman filter to fuse the localization results from the sparse map and the high-speed pavement visual odometry to enhance vehicle localization. The proposed method was tested in two different scenarios in different pavement conditions. The experimental results demonstrate that the proposed method can improve vehicle localization with low cost and high feasibility.
APA, Harvard, Vancouver, ISO, and other styles
27

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (September 22, 2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Full text
Abstract:
For autonomous navigation, tracking, and obstacle avoidance, a mobile robot must have knowledge of its position and localization over time. Among the available odometry techniques, vision-based odometry is a robust and economical one. In addition, a combination of position estimation from odometry with interpretations of the surroundings using a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and algorithms, emphasizing their efficiency, feature extraction capability, applications, and optimality.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhao, Zixu, Yucheng Zhang, Long Long, Zaiwang Lu, and Jinglin Shi. "Efficient and adaptive lidar–visual–inertial odometry for agricultural unmanned ground vehicle." International Journal of Advanced Robotic Systems 19, no. 2 (March 1, 2022): 172988062210949. http://dx.doi.org/10.1177/17298806221094925.

Full text
Abstract:
The accuracy of agricultural unmanned ground vehicles’ localization directly affects the accuracy of their navigation. However, due to the changeable environment and fewer features in the agricultural scene, it is challenging for these unmanned ground vehicles to localize precisely in global positioning system-denied areas with a single sensor. In this article, we present an efficient and adaptive sensor-fusion odometry framework based on simultaneous localization and mapping to handle the localization problems of agricultural unmanned ground vehicles without the assistance of a global positioning system. The framework leverages three kinds of sub-odometry (lidar odometry, visual odometry and inertial odometry) and automatically combines them depending on the environment to provide accurate pose estimation in real time. The combination of sub-odometry is implemented by trading off the robustness and the accuracy of pose estimation. The efficiency and adaptability are mainly reflected in the novel surfel-based iterative closest point method for lidar odometry we propose, which utilizes the changeable surfel radius range and the adaptive iterative closest point initialization to improve the accuracy of pose estimation in different environments. We test our system in various agricultural unmanned ground vehicles’ working zones and some other open data sets, and the results prove that the proposed method shows better performance mainly in accuracy, efficiency and robustness, compared with state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Haoran, Zhenglong Li, Hongwei Wang, Wenyan Cao, Fujing Zhang, and Yuheng Wang. "A Roadheader Positioning Method Based on Multi-Sensor Fusion." Electronics 12, no. 22 (November 7, 2023): 4556. http://dx.doi.org/10.3390/electronics12224556.

Full text
Abstract:
In coal mines, accurate positioning is vital for roadheader equipment. However, most roadheaders use a standalone strapdown inertial navigation system (SINS), which faces challenges such as error accumulation, drift, initial alignment requirements, temperature sensitivity, and the demand for high-quality sensors. In this paper, a roadheader Visual-Inertial Odometry (VIO) system is proposed, combining SINS and stereo visual odometry to suit coal mine environments. Given the inherently dimly lit conditions of coal mines, our system includes an image-enhancement module to preprocess images, aiding feature matching for the stereo visual odometry. Additionally, a Kalman filter merges the positional data from the SINS and the stereo visual odometry. When tested against three other methods on the KITTI and EuRoC datasets, our approach showed notable precision on the EBZ160M-2 roadheader, with attitude errors of less than 0.2751° and position discrepancies within 0.0328 m, proving its advantages over SINS alone.
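The Kalman-filter fusion of the inertial and stereo visual odometry position estimates can be pictured with the minimal constant-velocity filter below; it is a hedged sketch only, as the paper's state vector, process model, and noise parameters are not given in the abstract, so the layout and values here are placeholder assumptions.

```python
import numpy as np

class PositionFusionKF:
    """Minimal linear Kalman filter fusing two absolute 3D position sources
    (e.g. an inertial solution and stereo visual odometry) under a
    constant-velocity model."""
    def __init__(self, dt, q=1e-2, r_ins=0.05, r_vo=0.02):
        self.x = np.zeros(6)                    # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)         # position integrates velocity
        self.Q = q * np.eye(6)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.R = {"ins": r_ins * np.eye(3), "vo": r_vo * np.eye(3)}

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, source):
        """Fuse a 3D position measurement z from 'ins' or 'vo'."""
        S = self.H @ self.P @ self.H.T + self.R[source]
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
```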
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Qiang, Haidong Zhang, Yiming Xu, and Li Wang. "Unsupervised Deep Learning-Based RGB-D Visual Odometry." Applied Sciences 10, no. 16 (August 6, 2020): 5426. http://dx.doi.org/10.3390/app10165426.

Full text
Abstract:
Recently, deep learning frameworks have been deployed in visual odometry systems and achieved comparable results to traditional feature matching based systems. However, most deep learning-based frameworks inevitably need labeled data as ground truth for training. On the other hand, monocular odometry systems are incapable of restoring absolute scale. External or prior information has to be introduced for scale recovery. To solve these problems, we present a novel deep learning-based RGB-D visual odometry system. Our two main contributions are: (i) during network training and pose estimation, the depth images are fed into the network to form a dual-stream structure with the RGB images, and a dual-stream deep neural network is proposed. (ii) the system adopts an unsupervised end-to-end training method, thus the labor-intensive data labeling task is not required. We have tested our system on the KITTI dataset, and results show that the proposed RGB-D Visual Odometry (VO) system has obvious advantages over other state-of-the-art systems in terms of both translation and rotation errors.
APA, Harvard, Vancouver, ISO, and other styles
31

Wan, Yingcai, Qiankun Zhao, Cheng Guo, Chenlong Xu, and Lijing Fang. "Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation." Remote Sensing 14, no. 5 (March 2, 2022): 1228. http://dx.doi.org/10.3390/rs14051228.

Full text
Abstract:
This paper presents a new deep visual-inertial odometry and depth estimation framework for improving the accuracy of depth estimation and ego-motion from image sequences and inertial measurement unit (IMU) raw data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method to produce the sparse depth and pose with absolute scale. We then join deep visual-inertial odometry (DeepVIO) with depth estimation by using sparse depth and the pose from DeepVIO pipeline to align the scale of the depth prediction with the triangulated point cloud and reduce image reconstruction error. Specifically, we use the strengths of learning-based visual-inertial odometry (VIO) and depth estimation to build an end-to-end self-supervised learning architecture. We evaluated the new framework on the KITTI datasets and compared it to the previous techniques. We show that our approach improves results for ego-motion estimation and achieves comparable results for depth estimation, especially in the detail area.
APA, Harvard, Vancouver, ISO, and other styles
32

Ramezani, M., D. Acharya, F. Gu, and K. Khoshelham. "INDOOR POSITIONING BY VISUAL-INERTIAL ODOMETRY." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W4 (September 14, 2017): 371–76. http://dx.doi.org/10.5194/isprs-annals-iv-2-w4-371-2017.

Full text
Abstract:
Indoor positioning is a fundamental requirement of many indoor location-based services and applications. In this paper, we explore the potential of low-cost and widely available visual and inertial sensors for indoor positioning. We describe the Visual-Inertial Odometry (VIO) approach and propose a measurement model for omnidirectional visual-inertial odometry (OVIO). The results of experiments in two simulated indoor environments show that the OVIO approach outperforms VIO and achieves a positioning accuracy of 1.1% of the trajectory length.
APA, Harvard, Vancouver, ISO, and other styles
33

Qiu, Haiyang, Xu Zhang, Hui Wang, Dan Xiang, Mingming Xiao, Zhiyu Zhu, and Lei Wang. "A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method." Sensors 23, no. 20 (October 23, 2023): 8655. http://dx.doi.org/10.3390/s23208655.

Full text
Abstract:
In this paper, we propose a robust and integrated visual odometry framework exploiting the optical flow and feature point methods, which achieves faster pose estimation with considerable accuracy and robustness. Our method utilizes optical flow tracking to accelerate the feature point matching process. In the odometry, two visual odometry methods are used: a global feature point method and a local feature point method. When optical flow tracking is good and enough key points are matched successfully, the local feature point method uses prior information from the optical flow to estimate the relative pose transformation. When optical flow tracking is poor and only a small number of key points are matched successfully, the feature point method with a filtering mechanism is used for pose estimation. By coupling and correlating these two methods, the visual odometry greatly accelerates relative pose estimation: it reduces the computation time of relative pose estimation to 40% of that of the ORB_SLAM3 front-end odometry, while remaining close to the ORB_SLAM3 front-end odometry in accuracy and robustness. The effectiveness of this method was validated and analyzed on the EuRoC dataset within the ORB_SLAM3 open-source framework, and the experimental results support the efficacy of the proposed approach.
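To make the optical-flow acceleration concrete, the fragment below tracks existing key points into the next frame with OpenCV's pyramidal Lucas-Kanade tracker and keeps only tracks that pass a forward-backward consistency check; this is an illustrative sketch, and the check and thresholds are assumptions rather than the exact mechanism used in the paper.

```python
import cv2
import numpy as np

def track_keypoints_klt(prev_gray, cur_gray, prev_pts):
    """Track points (float32 array of shape (N, 1, 2), e.g. from
    cv2.goodFeaturesToTrack) from the previous frame into the current one
    with pyramidal Lucas-Kanade flow, keeping forward-backward-consistent tracks."""
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    cur_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None, **lk)
    back_pts, st_b, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, cur_pts, None, **lk)
    fb_err = np.linalg.norm(prev_pts - back_pts, axis=2).ravel()
    good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < 1.0)
    return prev_pts[good], cur_pts[good], int(good.sum())
```

A front end in this spirit would use the tracked correspondences for relative pose estimation when the surviving track count is high, and fall back to full descriptor matching with outlier filtering when it is low.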
APA, Harvard, Vancouver, ISO, and other styles
34

Liu, Fei, Yashar Balazadegan Sarvrood, and Yang Gao. "Implementation and Analysis of Tightly Integrated INS/Stereo VO for Land Vehicle Navigation." Journal of Navigation 71, no. 1 (August 23, 2017): 83–99. http://dx.doi.org/10.1017/s037346331700056x.

Full text
Abstract:
Tight integration of inertial sensors and stereo visual odometry to bridge Global Navigation Satellite System (GNSS) signal outages in challenging environments has drawn increasing attention. However, the details of how feature pixel coordinates from visual odometry can be directly used to limit the quick drift of inertial sensors in a tight integration implementation have rarely been provided in previous works. For instance, a key challenge in tight integration of inertial and stereo visual datasets is how to correct inertial sensor errors using the pixel measurements from visual odometry, however this has not been clearly demonstrated in existing literature. As a result, this would also affect the proper implementation of the integration algorithms and their performance assessment. This work develops and implements the tight integration of an Inertial Measurement Unit (IMU) and stereo cameras in a local-level frame. The results of the integrated solutions are also provided and analysed. Land vehicle testing results show that not only the position accuracy is improved, but also better azimuth and velocity estimation can be achieved, when compared to stand-alone INS or stereo visual odometry solutions.
APA, Harvard, Vancouver, ISO, and other styles
35

Guizilini, Vitor, and Fabio Ramos. "Semi-parametric learning for visual odometry." International Journal of Robotics Research 32, no. 5 (April 2013): 526–46. http://dx.doi.org/10.1177/0278364912472245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Xu, Shaoyan, Tao Wang, Congyan Lang, Songhe Feng, and Yi Jin. "Graph-based visual odometry for VSLAM." Industrial Robot: An International Journal 45, no. 5 (August 20, 2018): 679–87. http://dx.doi.org/10.1108/ir-04-2018-0061.

Full text
Abstract:
Purpose: Typical feature-matching algorithms use only unary constraints on appearances to build correspondences, where little structure information is used. Ignoring structure information makes them sensitive to various environmental perturbations. The purpose of this paper is to propose a novel graph-based method that aims to improve matching accuracy by fully exploiting the structure information. Design/methodology/approach: Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure information is integrated in edges between vertices. Subsequently, the matching process of finding keypoint correspondence is formulated in a graph matching manner. Findings: The authors compare the resulting ORB-G algorithm with several state-of-the-art visual simultaneous localization and mapping algorithms on three datasets. Experimental results reveal that the ORB-G algorithm provides more accurate and robust trajectories in general. Originality/value: Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure information is integrated in edges between vertices. Subsequently, the matching process of finding keypoint correspondence is formulated in a graph matching manner.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhu, Kaiying, Xiaoyan Jiang, Zhijun Fang, Yongbin Gao, Hamido Fujita, and Jenq-Neng Hwang. "Photometric transfer for direct visual odometry." Knowledge-Based Systems 213 (February 2021): 106671. http://dx.doi.org/10.1016/j.knosys.2020.106671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

He, Ming, Chaozheng Zhu, Qian Huang, Baosen Ren, and Jintao Liu. "A review of monocular visual odometry." Visual Computer 36, no. 5 (June 25, 2019): 1053–65. http://dx.doi.org/10.1007/s00371-019-01714-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

García-García, R., M. A. Sotelo, I. Parra, D. Fernández, J. E. Naranjo, and M. Gavilán. "3D Visual Odometry for Road Vehicles." Journal of Intelligent and Robotic Systems 51, no. 1 (October 4, 2007): 113–34. http://dx.doi.org/10.1007/s10846-007-9182-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Silva, H., A. Bernardino, and E. Silva. "Probabilistic Egomotion for Stereo Visual Odometry." Journal of Intelligent & Robotic Systems 77, no. 2 (April 8, 2014): 265–80. http://dx.doi.org/10.1007/s10846-014-0054-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yu, Qinghua, Junhao Xiao, Huimin Lu, and Zhiqiang Zheng. "Hybrid-Residual-Based RGBD Visual Odometry." IEEE Access 6 (2018): 28540–51. http://dx.doi.org/10.1109/access.2018.2836928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Jeong, Jae Heon, and Nikolaus Correll. "Towards Real-Time Trinocular Visual Odometry." Applied Mechanics and Materials 490-491 (January 2014): 1424–29. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1424.

Full text
Abstract:
Pose estimation for a multi-camera rig that does not have enough overlapping field of view for stereo is generally computationally expensive due to the offset of the camera centers and the bundle adjustment algorithm. We proposed a divide-and-conquer approach, which reduces the trinocular visual odometry problem to five monocular visual odometry problems: one for each individual camera sequence and two more using features matched temporally from consecutive images from the center to the left and right cameras, respectively. While this approach provides high accuracy over long distances in outdoor environments without requiring any additional sensors, it is computationally expensive, preventing real-time operation. In this paper, we evaluate trading off image resolution and frame rate against accuracy to speed up computation. Results show that scaling images down to a quarter of full HD resolution can speed up computation by two orders of magnitude while still providing acceptable accuracy, whereas dropping frames quickly deteriorates performance.
APA, Harvard, Vancouver, ISO, and other styles
43

Nistér, David, Oleg Naroditsky, and James Bergen. "Visual odometry for ground vehicle applications." Journal of Field Robotics 23, no. 1 (January 2006): 3–20. http://dx.doi.org/10.1002/rob.20103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Yangming, Jian Zhang, and Shuai Li. "STMVO: biologically inspired monocular visual odometry." Neural Computing and Applications 29, no. 6 (August 20, 2016): 215–25. http://dx.doi.org/10.1007/s00521-016-2536-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Tomažič, Simon, and Igor Škrjanc. "Monocular Visual Odometry on a Smartphone." IFAC-PapersOnLine 48, no. 10 (2015): 227–32. http://dx.doi.org/10.1016/j.ifacol.2015.08.136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Terekhov, Mikhail A. "Overview of Modern Approaches to Visual Odometry." Computer tools in education, no. 3 (September 30, 2019): 5–14. http://dx.doi.org/10.32603/2071-2340-2019-3-5-14.

Full text
Abstract:
In this paper we describe the tasks of Visual Odometry and Simultaneous Localization and Mapping systems along with their main applications. Next, we list some approaches used by the scientific community to create such systems in different time periods. We then proceed to explain in detail the more recent method based on bundle adjustment and show some of its variations for different applications. Finally, we overview present-day research directions in the field of visual odometry and briefly present our work.
APA, Harvard, Vancouver, ISO, and other styles
47

Gao, Wenxiang, Guizhi Yang, Yuzhang Wang, Jiaxin Ke, Xungao Zhong, and Lihua Chen. "Robust visual odometry based on image enhancement." Journal of Physics: Conference Series 2402, no. 1 (December 1, 2022): 012010. http://dx.doi.org/10.1088/1742-6596/2402/1/012010.

Full text
Abstract:
With the rise of augmented reality and autonomous driving, visual SLAM (simultaneous localization and mapping) has again become a focus of research. Visual odometry is an important part of visual SLAM. Light that is too dark or too strong reduces image quality, resulting in large deviations in the visual odometry trajectory. Therefore, this paper proposes a visual odometry with image enhancement. The lighting state of the image is identified by estimating the brightness of the input image. Gamma correction based on truncated cumulative distribution function modulation is used to enhance images that are too dark; for overexposed images, a negative-image strategy is used. The improved algorithm can accurately recover detailed image texture in poor lighting environments, thereby improving the accuracy of feature point matching and the precision of pose estimation. Tests on the public EuRoC dataset demonstrate that the presented algorithm has better localization precision and robustness.
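A brightness-adaptive preprocessing step in the spirit of this abstract might look like the OpenCV sketch below; the brightness thresholds and the simple gamma mapping are illustrative assumptions and are not the paper's truncated-CDF-modulated gamma correction.

```python
import cv2
import numpy as np

def enhance_for_vo(img_bgr, dark_thresh=70, bright_thresh=180):
    """Gamma-brighten under-exposed frames, invert (negative) over-exposed
    frames, and pass well-exposed frames through unchanged."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    mean_v = gray.mean()
    if mean_v < dark_thresh:                      # under-exposed: gamma < 1 brightens
        gamma = max(0.4, mean_v / 128.0)
        lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
        return cv2.LUT(img_bgr, lut)
    if mean_v > bright_thresh:                    # over-exposed: work on the negative image
        return 255 - img_bgr
    return img_bgr
```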
APA, Harvard, Vancouver, ISO, and other styles
48

Mostofi, N., A. Moussa, M. Elhabiby, and N. El-Sheimy. "RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 301–8. http://dx.doi.org/10.5194/isprsarchives-xl-1-301-2014.

Full text
Abstract:
3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and help remote users become familiar with any indoor environment. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. A switching technique between ICP and visual odometry in the case of no visible features suppresses inconsistency in the final map. Finally, we add loop closure to remove the deviation between the first and last frames. In order to give the 3D models a semantic meaning, planar patches are segmented from the RGB-D point cloud data using a region-growing technique, followed by a convex hull method to assign boundaries to the extracted patches. To build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained from the first step.
APA, Harvard, Vancouver, ISO, and other styles
49

Saha, Arindam, Bibhas Chandra Dhara, Saiyed Umer, Ahmad Ali AlZubi, Jazem Mutared Alanazi, and Kulakov Yurii. "CORB2I-SLAM: An Adaptive Collaborative Visual-Inertial SLAM for Multiple Robots." Electronics 11, no. 18 (September 6, 2022): 2814. http://dx.doi.org/10.3390/electronics11182814.

Full text
Abstract:
The generation of robust global maps of an unknown cluttered environment through a collaborative robotic framework is challenging. We present a collaborative SLAM framework, CORB2I-SLAM, in which each participating robot carries a camera (monocular/stereo/RGB-D) and an inertial sensor to run odometry. A centralized server stores all the maps and executes processor-intensive tasks, e.g., loop closing, map merging, and global optimization. The proposed framework uses well-established Visual-Inertial Odometry (VIO), and can be adapted to use Visual Odometry (VO) when the measurements from inertial sensors are noisy. The proposed system solves certain disadvantages of odometry-based systems such as erroneous pose estimation due to incorrect feature selection or losing track due to abrupt camera motion and provides a more accurate result. We perform feasibility tests on real robot autonomy and extensively validate the accuracy of CORB2I-SLAM on benchmark data sequences. We also evaluate its scalability and applicability in terms of the number of participating robots and network requirements, respectively.
APA, Harvard, Vancouver, ISO, and other styles
50

Das, Anweshan, Jos Elfring, and Gijs Dubbelman. "Real-Time Vehicle Positioning and Mapping Using Graph Optimization." Sensors 21, no. 8 (April 16, 2021): 2815. http://dx.doi.org/10.3390/s21082815.

Full text
Abstract:
In this work, we propose and evaluate a pose-graph optimization-based real-time multi-sensor fusion framework for vehicle positioning using low-cost automotive-grade sensors. Pose-graphs can model multiple absolute and relative vehicle positioning sensor measurements and can be optimized using nonlinear techniques. We model pose-graphs using measurements from a precise stereo camera-based visual odometry system, a robust odometry system using the in-vehicle velocity and yaw-rate sensor, and an automotive-grade GNSS receiver. Our evaluation is based on a dataset with 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by postprocessed Real-Time Kinematic GNSS as ground truth. We compare the architecture’s performance with (i) vehicle odometry and GNSS fusion and (ii) stereo visual odometry, vehicle odometry, and GNSS fusion; for offline and real-time optimization strategies. The results exhibit a 20.86% reduction in the localization error’s standard deviation and a significant reduction in outliers when compared with automotive-grade GNSS receivers.
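As a toy illustration of the pose-graph idea (relative odometry constraints plus absolute position fixes, jointly optimized), the SciPy sketch below solves a small 2D problem; the (x, y, theta) parameterization, the residual weights, and the solver are illustrative assumptions and do not reproduce the paper's framework.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose_graph(odom, gnss, x0):
    """Optimize 2D poses (x, y, theta). `odom` holds relative constraints
    (i, j, dx, dy, dtheta) expressed in the frame of pose i; `gnss` holds
    absolute position fixes (i, x, y); `x0` is an (N, 3) initial guess."""
    def residuals(flat):
        poses = flat.reshape(-1, 3)
        res = []
        for i, j, dx, dy, dth in odom:
            xi, yi, thi = poses[i]
            xj, yj, thj = poses[j]
            c, s = np.cos(thi), np.sin(thi)
            px = c * (xj - xi) + s * (yj - yi)           # predicted relative x in frame i
            py = -s * (xj - xi) + c * (yj - yi)          # predicted relative y in frame i
            pth = (thj - thi - dth + np.pi) % (2 * np.pi) - np.pi
            res += [10.0 * (px - dx), 10.0 * (py - dy), 10.0 * pth]
        for i, gx, gy in gnss:
            res += [poses[i][0] - gx, poses[i][1] - gy]  # absolute (GNSS-like) fixes
        return res
    sol = least_squares(residuals, np.asarray(x0, dtype=float).ravel())
    return sol.x.reshape(-1, 3)
```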
APA, Harvard, Vancouver, ISO, and other styles