Academic literature on the topic 'Odometry estimation'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Odometry estimation.'


Journal articles on the topic "Odometry estimation"

1

Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter." Computer Engineering and Applications Journal 7, no. 2 (July 12, 2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.

Abstract:
A non-holonomic leader-follower robot must be able to find its own position in order to navigate autonomously in the environment; this problem is known as localization. A common way to estimate the robot pose is odometry. However, odometry measurements may be inaccurate due to wheel slippage or other small noise sources. In this research, the Extended Kalman Filter (EKF) is proposed to minimize the error caused by the odometry measurement. The EKF algorithm works by fusing odometry and landmark information to produce a better estimate; an estimate is considered better when the estimated position lies close to the actual path, which represents a system without noise. A further experiment examines the influence of the number of landmarks on the estimated position. The results show that the EKF technique is effective in estimating the leader's position and orientation with small error, and that the follower is able to traverse close to the leader along the actual path.
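The odometry-landmark fusion this abstract describes can be sketched as one EKF predict/update cycle. The unicycle motion model, the range-bearing landmark measurement, and all parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R, dt=0.1):
    """One EKF predict/update cycle fusing odometry input u = (v, w)
    with a range-bearing observation z of a known landmark."""
    px, py, th = x
    v, w = u
    # --- predict: unicycle motion model driven by odometry ---
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # --- update: range and bearing to the landmark ---
    dx, dy = landmark - x_pred[:2]
    r = np.hypot(dx, dy)
    z_hat = np.array([r, np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / r,     -dy / r,     0],
                  [ dy / r**2,  -dx / r**2, -1]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

The landmark update shrinks the covariance that the odometry-only prediction inflates, which is exactly the "better estimate" criterion in the abstract.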
2

Li, Q., C. Wang, S. Chen, X. Li, C. Wen, M. Cheng, and J. Li. "DEEP LIDAR ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.

Abstract:
Most existing lidar odometry estimation strategies are formulated under a standard framework that includes feature selection and pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner and infers 6-DoF poses from the encoded sequential lidar data. Based on the newly designed mask-weighted geometric constraint loss, the network automatically learns an effective feature representation for the lidar odometry estimation problem and implicitly exploits the sequential dependencies and dynamics. Experiments on benchmark datasets demonstrate that LO-Net achieves accuracy similar to that of geometry-based approaches.
3

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (September 24, 2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot's position controller. The state-estimation visual odometry results were compared with the robot's dead-reckoning kinematics, and both are combined in a recursive position controller. One observer model design is based on the analytical geometric multi-view approach. The other observer model is based on multi-view lateral optical flow, which was reformulated as non-spatial-temporal and is modeled by an exponential function. This work presents the analytical deductions of the models and formulations. Experimental validation deals with five main aspects: multi-view correction, a geometric observer for range measurement, an optical flow observer for range measurement, dead-reckoning, and visual odometry. Furthermore, the comparison of positioning includes a four-wheel odometer, the deterministic visual observers, and the observer-extended Kalman filter, compared against a vision-based global reference localization system.
4

Wu, Qin Fan, Qing Li, and Nong Cheng. "Visual Odometry and 3D Mapping in Indoor Environments." Applied Mechanics and Materials 336-338 (July 2013): 348–54. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.348.

Abstract:
This paper presents a robust state estimation and 3D environment modeling approach that enables a Micro Aerial Vehicle (MAV) to operate in challenging GPS-denied indoor environments. A fast, accurate and robust approach to visual odometry is developed based on the Microsoft Kinect. Discriminative features are extracted from RGB images and matched across consecutive frames. A robust least-squares estimator is applied to obtain the relative motion estimate. All computation is performed in real time, providing high-frequency 6-degree-of-freedom state estimation. A detailed 3D map of an indoor environment is also constructed.
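The relative-motion step this abstract mentions, estimating a rigid transform from matched 3-D features, has a closed-form least-squares solution (the Kabsch/SVD alignment) that can be sketched as below; the robustness layer (e.g. RANSAC over the matches) that a real system would add is omitted here:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning matched
    3-D point sets P -> Q (rows are corresponding points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Applied frame-to-frame, the recovered (R, t) is the incremental camera motion that a visual odometry pipeline accumulates into a trajectory.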
5

Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots." Robotica 32, no. 1 (August 6, 2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.

Abstract:
A widely used method for pose estimation in mobile robots is odometry. Odometry allows the robot to reconstruct its position and orientation in real time from the wheels' encoder measurements. Due to its unbounded nature, odometry accumulates errors, with the error variance increasing quadratically with traversed distance. This paper develops a novel method for odometry calibration and error propagation identification for mobile robots. The proposed method uses a laser-based interferometer to measure distance precisely. Two variants of the proposed calibration method are examined: the two-parameter model and the three-parameter model. Experimental results obtained using a Khepera 3 mobile robot showed that both variants significantly increase the accuracy of pose estimation, validating the effectiveness of the proposed calibration method.
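The odometry calculation being calibrated here, reconstructing pose from incremental wheel-encoder counts, can be sketched for a differential-drive robot as follows. The tick resolution, wheel diameter, and wheelbase are illustrative (roughly Khepera-scale), not the calibrated values from the paper:

```python
import numpy as np

def dead_reckon(ticks_l, ticks_r, ticks_per_rev=1000,
                wheel_d=0.041, wheelbase=0.0889):
    """Integrate differential-drive odometry from per-step incremental
    encoder ticks of the left and right wheels."""
    x = y = th = 0.0
    per_tick = np.pi * wheel_d / ticks_per_rev   # metres per tick
    for dl, dr in zip(ticks_l, ticks_r):
        sl, sr = dl * per_tick, dr * per_tick    # wheel arc lengths
        ds = (sr + sl) / 2.0                     # path increment
        dth = (sr - sl) / wheelbase              # heading increment
        x += ds * np.cos(th + dth / 2.0)         # midpoint integration
        y += ds * np.sin(th + dth / 2.0)
        th += dth
    return x, y, th
```

Because each increment feeds the next heading, small errors in the assumed wheel diameter or wheelbase compound with distance, which is why the paper's calibration of those parameters matters.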
6

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (October 5, 2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which estimates the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed. One camera points at the ground under the robot, and the other looks at the surrounding environment. Comparisons with popular localization approaches, through physical experiments in off-road conditions, have shown the satisfactory behavior of the proposed strategy.
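The template-matching step described above, estimating the pixel displacement between two consecutive ground images, can be sketched as an exhaustive sum-of-squared-differences search. This is a minimal illustration; a practical implementation would add subpixel refinement and a normalized correlation measure:

```python
import numpy as np

def displacement(prev, curr, max_shift=4):
    """Estimate the integer-pixel shift (dx, dy) of `curr` relative to
    `prev` by exhaustive template search over a small window."""
    h, w = prev.shape
    m = max_shift
    template = prev[m:h - m, m:w - m]       # central patch of old frame
    best, best_shift = np.inf, (0, 0)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            window = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            ssd = np.sum((window - template) ** 2)   # matching cost
            if ssd < best:
                best, best_shift = ssd, (dx, dy)
    return best_shift
```

With a downward-facing camera at known height, the recovered pixel shift converts directly to metric robot displacement, which is what the paper accumulates into a trajectory.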
7

Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló, and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images." Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.

Abstract:
In the field of mobile autonomous robots, visual odometry entails the retrieval of a motion transformation between two consecutive poses of the robot solely by means of a camera sensor. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation method based on a single omnidirectional camera. We exploited the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image. The estimation of the motion transformation between two poses is incrementally computed, since only the processing of two consecutive omnidirectional images is required. In particular, we exploited the versatility of the information gathered by omnidirectional images to perform both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consists of a large set of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.
8

Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors." International Journal of Advanced Robotic Systems 8, no. 5 (January 1, 2011): 68. http://dx.doi.org/10.5772/50906.

Abstract:
Odometry using incremental wheel encoder sensors provides the relative position of mobile robots. This relative position is fundamental information for pose estimation by various sensors in EKF localization, Monte Carlo localization, etc. Odometry is also used as the sole source of localization information when absolute measurement systems are not available. However, odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases. Therefore, systematic odometry errors need to be calibrated. The principal systematic error sources are unequal wheel diameters and uncertainty in the effective wheelbase. The UMBmark method is a practical and useful calibration scheme for systematic odometry errors of two-wheel differential mobile robots. However, the approximation errors of the calibration equations and the coupled effect between the two systematic error sources affect the performance of the kinematic parameter estimation. In this paper, we propose a new calibration scheme whose calibration equations have smaller approximation errors. The new scheme uses the orientation errors of the robot's final pose in the test track, and it also considers the coupled effect between wheel diameter error and wheelbase error. Numerical simulations and experimental results verified that the proposed scheme accurately estimated the kinematic error parameters and significantly improved the accuracy of odometry calibration.
9

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (September 22, 2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Abstract:
For autonomous navigation, tracking and obstacle avoidance, a mobile robot must have knowledge of its position and localization over time. Among the available odometry techniques, vision-based odometry is a robust and economical one. In addition, a combination of position estimation from odometry with interpretations of the surroundings using a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and the algorithms associated with them, emphasizing their efficiency, feature extraction capability, applications, and optimality.
10

Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight." Sensors 20, no. 8 (April 14, 2020): 2209. http://dx.doi.org/10.3390/s20082209.

Abstract:
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles, while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states in VIO is the extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data. In fact, sensor-related delays that arise in various realistic conditions are at least partially unknown parameters. A lack of compensation for unknown parameters often has a serious impact on the accuracy of VIO systems and systems like them. To compensate for the uncertainties of the unknown time delays, this study incorporates parameter estimation into feature initialization and state estimation. Moreover, computing the cross-covariance and estimating delays in online temporal calibration corrects the residual, Jacobian, and covariance. Results from flight dataset testing validate the improved accuracy of VIO employing latency-compensated filtering frameworks. The insights and methods proposed here are ultimately useful in any estimation problem (e.g., multi-sensor fusion scenarios) where compensation for partially unknown time delays can enhance performance.

Dissertations / Theses on the topic "Odometry estimation"

1

Masson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.

Abstract:
This Master's thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring the knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
2

Holmqvist, Niclas. "HANDHELD LIDAR ODOMETRY ESTIMATION AND MAPPING SYSTEM." Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.

Abstract:
Ego-motion sensors are commonly used for pose estimation in Simultaneous Localization And Mapping (SLAM) algorithms. Inertial Measurement Units (IMUs) are popular sensors but suffer from integration drift over longer time scales. To remedy the drift, they are often used in combination with additional sensors, such as a LiDAR. Pose estimation is used when scans produced by these additional sensors are being matched. The matching of scans can be computationally heavy, as one scan can contain millions of data points. Methods exist to simplify the problem of finding the relative pose between sensor data, such as the Normal Distributions Transform (NDT) SLAM algorithm. The algorithm separates the point cloud data into a voxel grid and represents each voxel as a normal distribution, effectively decreasing the amount of data points. Registration is based on a function which converges to a minimum. Sub-optimal conditions can cause the function to converge at a local minimum. To remedy this problem, this thesis explores the benefits of combining IMU sensor data to estimate the pose to be used in the NDT SLAM algorithm.
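The voxel-grid representation described in this abstract, each cell of the point cloud summarized by the normal distribution of its points, can be sketched as follows. The voxel size and minimum point count are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

def ndt_voxels(points, voxel_size=1.0, min_pts=3):
    """Group a 3-D point cloud into voxels and summarize each voxel by
    the mean and covariance of its points, as in NDT registration."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        cells[key].append(p)
    voxels = {}
    for key, pts in cells.items():
        if len(pts) < min_pts:   # too few points for a stable covariance
            continue
        arr = np.asarray(pts)
        voxels[key] = (arr.mean(axis=0), np.cov(arr.T))
    return voxels
```

Registration then scores a candidate pose by the likelihood of the transformed scan points under these per-voxel Gaussians instead of matching raw points, which is the data reduction the abstract refers to.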
3

CHEN, HONGYI. "GPS-oscillation-robust Localization and Vision-aided Odometry Estimation." Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.

Abstract:
GPS/IMU integrated systems are commonly used for vehicle navigation. The algorithm for this coupled system is normally based on a Kalman filter. However, oscillating GPS measurements in the urban environment can easily lead to localization divergence. Moreover, heading estimation may be sensitive to magnetic interference if it relies on an IMU with an integrated magnetometer. This report tries to solve the localization problem under GPS oscillation and outage, based on an adaptive extended Kalman filter (AEKF). For the heading estimation, stereo visual odometry (VO) is fused to overcome the effect of magnetic disturbance. The vision-aided AEKF-based algorithm is tested in cases of both good GPS conditions and GPS oscillation with magnetic interference. Under the situations considered, the algorithm is verified to outperform the conventional extended Kalman filter (CEKF) and the unscented Kalman filter (UKF) in position estimation by 53.74% and 40.09% respectively, and to reduce drift in the heading estimation.
4

Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.

5

Awang, Salleh Dayang Nur Salmi Dharmiza. "Study of vehicle localization optimization with visual odometry trajectory tracking." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.

Abstract:
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and it is susceptible to positioning error due to factors such as restrictive environments that weaken the signal. This problem can be addressed by integrating the GPS data with additional information from other sensors. Meanwhile, vehicles are nowadays commonly equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a solution to localization improvement with low-cost data fusion. From the published works on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To observe the system's robustness, the method was validated with KITTI datasets tested with different common GPS noise models. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy achieved significant improvement, especially for the longitudinal error with the curve matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The research on the employability of the VO trajectory is extended to a deterministic task in lane-change detection, to assist the routing service for lane-level direction in navigation.
The lane-change detection was conducted with CUSUM and a curve fitting technique, resulting in 100% successful detection for stereo VO. Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. With the results obtained from the proposed low-cost data fusion for localization, we see a bright prospect of utilizing the VO trajectory with information from OSM to improve performance. In addition to providing the VO trajectory, the camera mounted on the vehicle can also be used for other image processing applications to complement the system. Future work is outlined in the last chapter of this thesis.
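The CUSUM detector this thesis applies, flagging a sustained shift in a signal such as the lateral offset of the VO trajectory, can be sketched as a two-sided cumulative sum. The drift and threshold parameters below are illustrative and would need tuning to the data:

```python
def cusum(signal, drift=0.05, threshold=1.0):
    """Two-sided CUSUM: accumulate evidence of a sustained positive or
    negative shift in the signal, resetting at zero. Returns the index
    of the first alarm, or None if no alarm is raised."""
    g_pos = g_neg = 0.0
    for i, s in enumerate(signal):
        g_pos = max(0.0, g_pos + s - drift)   # evidence of upward shift
        g_neg = max(0.0, g_neg - s - drift)   # evidence of downward shift
        if g_pos > threshold or g_neg > threshold:
            return i
    return None
```

Because the statistic accumulates only persistent deviations, brief noise in the lateral offset is ignored while a genuine lane change, a sustained offset in one direction, triggers an alarm.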
6

Ay, Emre. "Ego-Motion Estimation of Drones." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210772.

Abstract:
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on-board. Visual positioning systems have been studied for quite some time, and the literature in the area is extensive. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones which is capable of giving continuous estimates with a lightweight solution. To that end, state-of-the-art systems are investigated and a visual odometry system is implemented based on the resulting design decisions. The resulting system is shown to give acceptable estimates.
7

Lee, Hong Yun. "Deep Learning for Visual-Inertial Odometry: Estimation of Monocular Camera Ego-Motion and its Uncertainty." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156331321922759.

8

Ringdahl, Viktor. "Stereo Camera Pose Estimation to Enable Loop Detection." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154392.

Abstract:
Visual Simultaneous Localization And Mapping (SLAM) allows for three-dimensional reconstruction from a camera's output and simultaneous positioning of the camera within the reconstruction. With use cases ranging from autonomous vehicles to augmented reality, the SLAM field has garnered interest both commercially and academically. A SLAM system performs odometry as it estimates the camera's movement through the scene. The incremental estimation of odometry is not error free and exhibits drift over time, with map inconsistencies as a result. Detecting the return to a previously seen place, a loop, means that this new information regarding our position can be incorporated to correct the trajectory retroactively. Loop detection can also facilitate relocalization if the system loses tracking due to e.g. heavy motion blur. This thesis proposes an odometric system making use of bundle adjustment within a keyframe-based stereo SLAM application. The system is capable of detecting loops by utilizing the FAB-MAP algorithm. Two aspects of this system are evaluated, the odometry and the capability to relocate. Both are evaluated using the EuRoC MAV dataset, with an absolute trajectory RMS error ranging from 0.80 m to 1.70 m for the machine hall sequences. The capability to relocate is evaluated using a novel methodology that can be interpreted intuitively. Results are given for different levels of strictness to encompass different use cases. The method makes use of reprojection of points seen in keyframes to define whether a relocalization is possible or not. The system shows a capability to relocate in up to 85% of all cases when a keyframe exists that can project 90% of its points into the current view. Errors in estimated poses were found to be correlated with the relative distance, with errors less than 10 cm in 23% to 73% of all cases. The evaluation of the whole system is augmented with an evaluation of local image descriptors and pose estimation algorithms.
The descriptor SIFT was found to perform best overall, but is demanding to compute. BRISK was deemed the best alternative for a fast yet accurate descriptor. A conclusion that can be drawn from this thesis is that FAB-MAP works well for detecting loops as long as the addition of keyframes is handled appropriately.
9

Ready, Bryce Benson. "Filtering Techniques for Pose Estimation with Applications to Unmanned Air Vehicles." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3490.

Abstract:
This work presents two novel methods of estimating the state of a dynamic system in a Kalman Filtering framework. The first is an application specific method for use with systems performing Visual Odometry in a mostly planar scene. Because a Visual Odometry method inherently provides relative information about the pose of a platform, we use this system as part of the time update in a Kalman Filtering framework, and develop a novel way to propagate the uncertainty of the pose through this time update method. Our initial results show that this method is able to reduce localization error significantly with respect to pure INS time update, limiting drift in our test system to around 30 meters for tens of seconds. The second key contribution of this work is the Manifold EKF, a generalized version of the Extended Kalman Filter which is explicitly designed to estimate manifold-valued states. This filter works for a large number of commonly useful manifolds, and may have applications to other manifolds as well. In our tests, the Manifold EKF demonstrated significant advantages in terms of consistency when compared to other filtering methods. We feel that these promising initial results merit further study of the Manifold EKF, related filters, and their properties.
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Full text
Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution. In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras that can be solved using our proposed modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using our proposed linear method.
Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast search method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an early stage, we reduce the computation time of solving the LP. We tested our proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution under the L∞ norm for the relative motion of multi-camera systems can be achieved.
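The setup behind the first contribution, recovering translation from a system of linear equations given known rotation, can be sketched with the epipolar constraint: each unit-bearing correspondence (x1, x2) satisfies x2 · (t × (R x1)) = 0, so the row (R x1) × x2 is orthogonal to t. The thesis stacks many such rows and solves with SVD; this minimal noise-free sketch instead recovers t (up to sign and scale) as the cross product of two constraint rows. All names here are illustrative assumptions.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def translation_from_bearings(R, pairs):
    """Given known rotation R and two bearing correspondences (x1, x2),
    each constraint row (R x1) x x2 is orthogonal to the translation t,
    so t is the cross product of the two rows (up to sign and scale)."""
    rows = []
    for x1, x2 in pairs:
        y = tuple(dot(R[i], x1) for i in range(3))  # R x1
        rows.append(cross(y, x2))
    return normalize(cross(rows[0], rows[1]))
```

With noisy data or more than two correspondences this minimal solution breaks down, which is where the stacked linear system and SVD (or the L∞/branch-and-bound machinery for the global problem) take over.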
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Odometry estimation"

1

Santamaria-Navarro, A., J. Solà, and J. Andrade-Cetto. "Odometry Estimation for Aerial Manipulators." In Springer Tracts in Advanced Robotics, 219–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12945-3_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Poddar, Shashi, Rahul Kottath, and Vinod Karar. "Motion Estimation Made Easy: Evolution and Trends in Visual Odometry." In Recent Advances in Computer Vision, 305–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03000-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Clement, Lee, Valentin Peretroukhin, and Jonathan Kelly. "Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation." In Springer Proceedings in Advanced Robotics, 409–19. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50115-4_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guerrero, Pablo, and Javier Ruiz-del-Solar. "Improving Robot Self-localization Using Landmarks’ Poses Tracking and Odometry Error Estimation." In RoboCup 2007: Robot Soccer World Cup XI, 148–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68847-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Galarza, Juan, Esteban Pérez, Esteban Serrano, Andrés Tapia, and Wilbert G. Aguilar. "Pose Estimation Based on Monocular Visual Odometry and Lane Detection for Intelligent Vehicles." In Lecture Notes in Computer Science, 562–66. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95282-6_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Xue-bo, Cong-yuan Wang, Yong-chun Fang, and Ke-xin Xing. "An Extended Kalman Filter-Based Robot Pose Estimation Approach with Vision and Odometry." In Wearable Sensors and Robots, 539–52. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2404-7_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nguyen, Huu Hung, Quang Thi Nguyen, Cong Manh Tran, and Dong-Seong Kim. "Adaptive Essential Matrix Based Stereo Visual Odometry with Joint Forward-Backward Translation Estimation." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 127–37. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63083-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Musleh, Basam, David Martin, Arturo de la Escalera, Domingo Miguel Guinea, and Maria Carmen Garcia-Alegre. "Estimation and Prediction of the Vehicle’s Motion Based on Visual Odometry and Kalman Filter." In Advanced Concepts for Intelligent Vision Systems, 491–502. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33140-4_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Alonso Martínez-García, Edgar, and Luz Abril Torres-Méndez. "4WD Robot Posture Estimation by Radial Multi-View Visual Odometry." In Applications of Mobile Robots. IntechOpen, 2019. http://dx.doi.org/10.5772/intechopen.79130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Harr, M., and C. Schäfer. "Robust Odometry Estimation for Automated Driving and Map Data Collection." In AUTOREG 2017, 91–102. VDI Verlag, 2017. http://dx.doi.org/10.51202/9783181022924-91.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Odometry estimation"

1

Pereira, Fabio Irigon, Gustavo Ilha, Joel Luft, Marcelo Negreiros, and Altamiro Susin. "Monocular Visual Odometry with Cyclic Estimation." In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2017. http://dx.doi.org/10.1109/sibgrapi.2017.7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Song, Xiaojing, Lakmal D. Seneviratne, Kaspar Althoefer, Zibin Song, and Yahya H. Zweiri. "Visual Odometry for Velocity Estimation of UGVs." In 2007 International Conference on Mechatronics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/icma.2007.4303790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fanani, Nolang, Alina Sturck, Marc Barnada, and Rudolf Mester. "Multimodal scale estimation for monocular visual odometry." In 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017. http://dx.doi.org/10.1109/ivs.2017.7995955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mou, Wei, Han Wang, and Gerald Seet. "Efficient visual odometry estimation using stereo camera." In 2014 11th IEEE International Conference on Control & Automation (ICCA). IEEE, 2014. http://dx.doi.org/10.1109/icca.2014.6871128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

So, Edmond Wai Yan, Tetsuo Yoshimitsu, and Takashi Kubota. "Hopping Odometry: Motion Estimation with Selective Vision." In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009). IEEE, 2009. http://dx.doi.org/10.1109/iros.2009.5354065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kerl, Christian, Jurgen Sturm, and Daniel Cremers. "Robust odometry estimation for RGB-D cameras." In 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013. http://dx.doi.org/10.1109/icra.2013.6631104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nowruzi, Farzan, Dhanvin Kolhatkar, Prince Kapoor, and Robert Laganiere. "Point Cloud based Hierarchical Deep Odometry Estimation." In 7th International Conference on Vehicle Technology and Intelligent Transport Systems. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010442901120121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lu, Tongwei, Shihui Ai, Yudian Xiong, and Yongyuan Jiang. "Monocular visual odometry-based 3D-2D motion estimation." In Automatic Target Recognition and Navigation, edited by Jayaram K. Udupa, Hanyu Hong, and Jianguo Liu. SPIE, 2018. http://dx.doi.org/10.1117/12.2286251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yuan, Rong, Hongyi Fan, and Benjamin Kimia. "Dissecting scale from pose estimation in visual odometry." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cho, Hae Min, HyungGi Jo, Seongwon Lee, and Euntai Kim. "Odometry Estimation via CNN using Sparse LiDAR Data." In 2019 16th International Conference on Ubiquitous Robots (UR). IEEE, 2019. http://dx.doi.org/10.1109/urai.2019.8768571.

Full text
APA, Harvard, Vancouver, ISO, and other styles