To view the other types of publications on this topic, follow the link: Odometry estimation.

Journal articles on the topic "Odometry estimation"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Odometry estimation."

Next to every source in the bibliography there is an "Add to bibliography" option. Use it, and a reference to the chosen source will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, when such details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter". Computer Engineering and Applications Journal 7, no. 2 (July 12, 2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.

Annotation:
A non-holonomic leader-follower robot must be able to find its own position in order to navigate autonomously in its environment; this problem is known as localization. A common way to estimate the robot pose is odometry. However, odometry measurements may produce inaccurate results due to wheel slippage or other small noise sources. In this research, the Extended Kalman Filter (EKF) is proposed to minimize the error caused by the odometry measurement. The EKF works by fusing odometry and landmark information to produce a better estimate; a better estimate is acknowledged whenever the estimated position lies close to the actual path, which represents a system without noise. A further experiment examines the influence of the number of landmarks on the estimated position. The results show that the EKF technique estimates the leader's position and orientation with small error, and that the follower is able to traverse close to the leader along the actual path.
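As a rough illustration of the predict/update cycle this abstract describes (a minimal sketch, not the authors' implementation; the unicycle motion model, the range-bearing landmark measurement, and all names are assumptions), an EKF fusing odometry with landmark observations can be written as:

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt):
    """Propagate pose x = [px, py, theta] with odometry input u = [v, w]."""
    v, w = u
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],   # motion-model Jacobian
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, lm, R):
    """Correct the pose with a range-bearing measurement z of landmark lm."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing residual
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(3) - K @ H) @ P
```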
2

Li, Q., C. Wang, S. Chen, X. Li, C. Wen, M. Cheng and J. Li. "DEEP LIDAR ODOMETRY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.

Annotation:
Most existing lidar odometry estimation strategies are formulated under a standard framework that includes feature selection and pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner and infers 6-DoF poses from the encoded sequential lidar data. Based on a newly designed mask-weighted geometric constraint loss, the network automatically learns effective feature representations for the lidar odometry estimation problem and implicitly exploits sequential dependencies and dynamics. Experiments on benchmark datasets demonstrate that LO-Net achieves accuracy similar to geometry-based approaches.
3

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation". Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (September 24, 2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Annotation:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the update and prediction models of the extended Kalman filter as feedback for the robot's position controller. The state-estimation visual odometry results were compared with the robot's dead-reckoning kinematics, and both are combined in a recursive position controller. One observer model is based on an analytical geometric multi-view approach. The other is grounded in multi-view lateral optical flow, reformulated as nonspatial-temporal and modeled by an exponential function. This work presents the analytical derivations of the models and formulations. Experimental validation covers five main aspects: multi-view correction, a geometric observer for range measurement, an optical-flow observer for range measurement, dead reckoning, and visual odometry. Furthermore, the positioning comparison includes a four-wheel odometer, the deterministic visual observers, and the observer-extended Kalman filter, compared against a vision-based global-reference localization system.
4

Wu, Qin Fan, Qing Li and Nong Cheng. "Visual Odometry and 3D Mapping in Indoor Environments". Applied Mechanics and Materials 336-338 (July 2013): 348–54. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.348.

Annotation:
This paper presents a robust state estimation and 3D environment modeling approach that enables a Micro Aerial Vehicle (MAV) to operate in challenging GPS-denied indoor environments. A fast, accurate, and robust approach to visual odometry is developed based on the Microsoft Kinect. Discriminative features are extracted from RGB images and matched across consecutive frames. A robust least-squares estimator is applied to obtain relative motion estimates. All computation is performed in real time, providing high-frequency 6-degree-of-freedom state estimation. A detailed 3D map of an indoor environment is also constructed.
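The frame-to-frame relative-motion step mentioned above is commonly solved as a least-squares rigid alignment of matched 3D points. A minimal sketch of that core step (the closed-form Kabsch/SVD solution; the paper's estimator additionally adds robustness against outliers, which is omitted here):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q
    (both N x 3 arrays, with row i of P matched to row i of Q)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```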
5

Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots". Robotica 32, no. 1 (August 6, 2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.

Annotation:
A widely used method for pose estimation in mobile robots is odometry. Odometry allows the robot to reconstruct its position and orientation in real time from the wheels' encoder measurements. Due to its unbounded nature, odometry accumulates errors, with the error variance increasing quadratically with traversed distance. This paper develops a novel method for odometry calibration and error-propagation identification for mobile robots. The proposed method uses a laser-based interferometer to measure distance precisely. Two variants of the proposed calibration method are examined: the two-parameter model and the three-parameter model. Experimental results obtained using a Khepera 3 mobile robot showed that both methods significantly increase the accuracy of pose estimation, validating the effectiveness of the proposed calibration method.
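The two- and three-parameter models are specific to this paper, but the quantities being calibrated can be illustrated with a generic differential-drive dead-reckoning step whose wheel and wheelbase correction factors play the role of the calibration parameters (an assumed sketch, not the paper's model; all names and defaults are ours):

```python
import numpy as np

def odometry_step(x, dl, dr, k_l=1.0, k_r=1.0, k_b=1.0, b=0.30):
    """Dead-reckoning update from left/right wheel displacements dl, dr (m).
    k_l and k_r correct for unequal wheel diameters, k_b scales the nominal
    wheelbase b; a calibration procedure estimates these factors."""
    dl, dr = k_l * dl, k_r * dr
    ds = 0.5 * (dl + dr)              # displacement of the robot center
    dth = (dr - dl) / (k_b * b)       # change of heading
    px, py, th = x
    return np.array([px + ds * np.cos(th + 0.5 * dth),
                     py + ds * np.sin(th + 0.5 * dth),
                     th + dth])
```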
6

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization". Robotica 30, no. 6 (October 5, 2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Annotation:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching: the robot displacement is estimated through a matching process between two consecutive images. Standard visual odometry has been improved with a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed: one camera points at the ground under the robot, and the other looks at the surrounding environment. Comparisons with popular localization approaches, through physical experiments in off-road conditions, have shown the satisfactory behavior of the proposed strategy.
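The displacement-from-template-matching idea can be sketched with OpenCV as follows (our illustration under assumed parameters, not the authors' code): a central patch of the previous ground image is searched for in the current one, and the offset of the best match gives the pixel shift, which a known ground resolution converts to metric displacement.

```python
import cv2

def displacement(prev_gray, curr_gray, margin=40):
    """Pixel shift between consecutive ground images via template matching."""
    h, w = prev_gray.shape
    patch = prev_gray[margin:h - margin, margin:w - margin]
    score = cv2.matchTemplate(curr_gray, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)   # location of the best match
    return bx - margin, by - margin            # shift in pixels (dx, dy)
```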
7

Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images". Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.

Annotation:
In the field of mobile autonomous robots, visual odometry entails retrieving the motion transformation between two consecutive poses of the robot solely by means of a camera sensor. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation approach based on a single omnidirectional camera. We exploit the maximized horizontal field of view provided by this camera, which allows us to encode large amounts of scene information into a single image. The motion transformation between two poses is computed incrementally, since only the processing of two consecutive omnidirectional images is required. In particular, we exploit the versatility of the information gathered by omnidirectional images to perform both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consist of large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.
8

Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors". International Journal of Advanced Robotic Systems 8, no. 5 (January 1, 2011): 68. http://dx.doi.org/10.5772/50906.

Annotation:
Odometry using incremental wheel encoder sensors provides the relative position of mobile robots. This relative position is fundamental information for pose estimation by various sensors in EKF localization, Monte Carlo localization, etc. Odometry is also used as the sole source of localization information when absolute measurement systems are not available. However, odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases. Therefore, systematic odometry errors need to be calibrated. The principal systematic error sources are unequal wheel diameters and uncertainty about the effective wheelbase. The UMBmark method is a practical and useful calibration scheme for systematic odometry errors of two-wheel differential mobile robots. However, approximation errors in the calibration equations and the coupled effect between the two systematic error sources affect the performance of the kinematic parameter estimation. In this paper, we propose a new calibration scheme whose calibration equations have smaller approximation errors. The new scheme uses the orientation errors of the robot's final pose in the test track, and it considers the coupled effect between wheel-diameter error and wheelbase error. Numerical simulations and experimental results verified that the proposed scheme accurately estimates the kinematic error parameters and significantly improves the accuracy of odometry calibration.
9

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges". Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (September 22, 2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Annotation:
For autonomous navigation, tracking, and obstacle avoidance, a mobile robot must know its position over time. Among the available techniques for odometry, vision-based odometry is a robust and economical one. In addition, combining position estimates from odometry with interpretations of the surroundings from a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and associated algorithms, emphasizing their efficiency, feature extraction capability, applications, and optimality.
10

Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight". Sensors 20, no. 8 (April 14, 2020): 2209. http://dx.doi.org/10.3390/s20082209.

Annotation:
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles, while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states in VIO is the extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data. In fact, sensor-related delays that arise in various realistic conditions are at least partially unknown parameters. A lack of compensation for unknown parameters often has a serious impact on the accuracy of VIO systems and systems like them. To compensate for the uncertainties of the unknown time delays, this study incorporates parameter estimation into feature initialization and state estimation. Moreover, computing the cross-covariance and estimating the delays in online temporal calibration corrects the residual, Jacobian, and covariance. Results from flight-dataset testing validate the improved accuracy of VIO employing latency-compensated filtering frameworks. The insights and methods proposed here are ultimately useful in any estimation problem (e.g., multi-sensor fusion scenarios) where compensation for partially unknown time delays can enhance performance.
11

Salameh, Mohammed, Azizi Abdullah and Shahnorbanun Sahran. "Multiple Descriptors for Visual Odometry Trajectory Estimation". International Journal on Advanced Science, Engineering and Information Technology 8, no. 4-2 (September 26, 2018): 1423. http://dx.doi.org/10.18517/ijaseit.8.4-2.6834.

12

Ramezani, Milad, Kourosh Khoshelham and Clive Fraser. "Pose estimation by Omnidirectional Visual-Inertial Odometry". Robotics and Autonomous Systems 105 (July 2018): 26–37. http://dx.doi.org/10.1016/j.robot.2018.03.007.

13

Costante, Gabriele, and Michele Mancini. "Uncertainty Estimation for Data-Driven Visual Odometry". IEEE Transactions on Robotics 36, no. 6 (December 2020): 1738–57. http://dx.doi.org/10.1109/tro.2020.3001674.

14

Teixeira, Bernardo, Hugo Silva, Anibal Matos and Eduardo Silva. "Deep Learning for Underwater Visual Odometry Estimation". IEEE Access 8 (2020): 44687–701. http://dx.doi.org/10.1109/access.2020.2978406.

15

An, Lifeng, Xinyu Zhang, Hongbo Gao and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving". International Journal of Advanced Robotic Systems 14, no. 5 (September 1, 2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Annotation:
Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based methods take all pixels into account. These methods rest on the assumption that a quantitative majority of the candidate visual cues represent the true motion. But in real urban traffic scenes, this assumption can be broken by the many dynamic traffic participants. Big trucks or buses may occupy the main part of the image from a front-view monocular camera and lead to wrong visual odometry estimates. Finding visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments. Semantic attributes of pixels can be considered a more reasonable factor for candidate selection in that case. This article analyzes the availability of all visual cues with the help of pixel-level semantic information and proposes a new visual odometry method that combines feature-based and alignment-based visual odometry in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark datasets and our own dataset. Experimental results confirm that the new approach provides effective improvements in both accuracy and robustness in complex dynamic scenes.
16

Aguiar, André, Filipe Santos, Armando Jorge Sousa and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware". Applied Sciences 9, no. 24 (December 15, 2019): 5516. http://dx.doi.org/10.3390/app9245516.

Annotation:
The main task when developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor environments, such as agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphical processing unit (GPU). This solution, named FAST-FUSION, offers the scientific community three core contributions. The first is an extension of state-of-the-art monocular visual odometry (Libviso2) to work with omnidirectional cameras and a single-axis gyro to increase system accuracy. The second is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous-computing optimization that uses a Raspberry Pi GPU to improve visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in both time performance and motion estimation accuracy.
17

Conduraru, Ionel, Ioan Doroftei, Dorin Luca and Alina Conduraru Slatineanu. "Odometry Aspects of an Omni-Directional Mobile Robot with Modified Mecanum Wheels". Applied Mechanics and Materials 658 (October 2014): 587–92. http://dx.doi.org/10.4028/www.scientific.net/amm.658.587.

Annotation:
Mobile robots are widely used in industry, military operations, exploration, and other applications where human intervention is risky. When a mobile robot has to move in small and narrow spaces and avoid obstacles, mobility is one of its main issues. An omni-directional drive mechanism is very attractive because it guarantees very good mobility in such cases. Accurate position estimation is also a key component of the successful operation of most autonomous mobile robots. In this work, some odometry aspects of an omni-directional robot are presented and a simple odometer solution is proposed.
18

Fazekas, Máté, Péter Gáspár and Balázs Németh. "Velocity Estimation via Wheel Circumference Identification". Periodica Polytechnica Transportation Engineering 49, no. 3 (September 1, 2021): 250–60. http://dx.doi.org/10.3311/pptr.18623.

Annotation:
The article presents a velocity estimation algorithm based on wheel-encoder odometry and wheel circumference identification. The motivation of the paper is that a proper model can improve motion estimation when sensor performance is poor: for example, when GNSS signals are unavailable, when vision-based methods fail due to an insufficient number of features, or when IMU-based methods fail due to a lack of frequent accelerations. In these situations, wheel encoders can be an appropriate choice for state estimation. However, this type of estimation suffers from parameter uncertainty. In the paper, a wheel circumference identification is proposed to improve the velocity estimation. The algorithm listens to the incoming sensor measurements and estimates the wheel circumferences recursively with a nonlinear least squares method. The experimental results demonstrate that applying the identified parameters in the wheel odometry model yields accurate, high-frequency velocity estimates. Thus, the presented algorithm can improve motion estimation in the driver-assistance functions of autonomous vehicles.
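As a simplified sketch of the recursive identification idea (the paper estimates the circumferences of both wheels with nonlinear least squares; here, for illustration only, a scalar recursive least squares with a forgetting factor fits a single circumference C in the linear model v = C * n, with n the wheel rate in revolutions per second and v a reference velocity; all names and defaults are assumptions):

```python
class CircumferenceRLS:
    """Scalar recursive least squares for the wheel circumference C in v = C * n."""
    def __init__(self, c0=2.0, p0=1.0, lam=0.999):
        self.c, self.p, self.lam = c0, p0, lam  # estimate, variance, forgetting

    def update(self, n, v_ref):
        k = self.p * n / (self.lam + n * self.p * n)   # gain
        self.c += k * (v_ref - self.c * n)             # correct with innovation
        self.p = (1.0 - k * n) * self.p / self.lam
        return self.c
```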
19

Boukhers, Zeyd, Kimiaki Shirahama and Marcin Grzegorzek. "Less restrictive camera odometry estimation from monocular camera". Multimedia Tools and Applications 77, no. 13 (September 27, 2017): 16199–222. http://dx.doi.org/10.1007/s11042-017-5195-7.

20

de Saxe, Christopher, and David Cebon. "Estimation of trailer off-tracking using visual odometry". Vehicle System Dynamics 57, no. 5 (June 18, 2018): 752–76. http://dx.doi.org/10.1080/00423114.2018.1484498.

21

Parra, I., M. A. Sotelo, D. F. Llorca and M. Ocaña. "Robust visual odometry for vehicle localization in urban environments". Robotica 28, no. 3 (May 22, 2009): 441–52. http://dx.doi.org/10.1017/s026357470900575x.

Annotation:
This paper describes a new approach for estimating the vehicle motion trajectory in complex urban environments by means of visual odometry. A new strategy for robust feature extraction and data post-processing is developed and tested on-road. Scale-invariant feature transform (SIFT) features are used in order to cope with the complexity of urban environments. The results obtained are discussed and compared to previous work. In the prototype system, the ego-motion of the vehicle is computed using a stereo-vision system mounted next to the rear-view mirror of the car. Feature points are matched between pairs of frames and linked into 3D trajectories. The distance between estimations is dynamically adapted based on re-projection and estimation errors. Vehicle motion is estimated using a non-linear, photogrammetric approach based on RAndom SAmple Consensus (RANSAC). The final goal is to provide on-board driver assistance in navigation tasks, or to provide a means of autonomously navigating a vehicle. The method has been tested in real traffic conditions without using prior knowledge about the scene or the vehicle motion. An example of how to estimate a vehicle's trajectory is provided, along with suggestions for possible further improvement of the proposed odometry algorithm.
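The feature-matching-plus-RANSAC pipeline described here can be sketched with OpenCV. Note this is a monocular, up-to-scale variant for illustration only (the paper itself uses a stereo rig), and all parameter values are assumptions:

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Frame-to-frame ego-motion up to scale from SIFT matches and a
    RANSAC-estimated essential matrix."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # Lowe ratio
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t   # t has unit norm; stereo or scene constraints fix the scale
```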
22

Fazekas, Máté, Péter Gáspár and Balázs Németh. "Calibration and Improvement of an Odometry Model with Dynamic Wheel and Lateral Dynamics Integration". Sensors 21, no. 2 (January 6, 2021): 337. http://dx.doi.org/10.3390/s21020337.

Annotation:
Localization is a key part of an autonomous system such as a self-driving car. The main sensor for the task is GNSS; however, its limitations can be eliminated only by integrating other methods, for example wheel odometry, which requires a well-calibrated model. This paper proposes a novel wheel odometry model and its calibration. The parameters of the nonlinear dynamic system are estimated with Gauss–Newton regression. Because only automotive-grade sensors are applied, in order to reach a cost-effective system, measurement uncertainty strongly corrupts the estimation accuracy. The problem is handled with a unique Kalman-filter addition to the iterative loop. The experimental results illustrate that without the proposed improvements, in particular the dynamic wheel assumption and integrated filtering, the model cannot be calibrated precisely. With well-calibrated odometry, the localization accuracy improves significantly and the system can be used as a cost-effective motion estimation sensor in autonomous functions.
23

Yoon, Sung-Joo, and Taejung Kim. "Development of Stereo Visual Odometry Based on Photogrammetric Feature Optimization". Remote Sensing 11, no. 1 (January 1, 2019): 67. http://dx.doi.org/10.3390/rs11010067.

Annotation:
One of the important image processing technologies is visual odometry (VO). VO estimates platform motion through a sequence of images, and it is of interest to the virtual reality (VR) industry as well as the automobile industry because its construction cost is low. In this study, we developed stereo visual odometry (SVO) based on photogrammetric geometric interpretation. The proposed method performs feature optimization and pose estimation through photogrammetric bundle adjustment. After the corresponding-point extraction step, feature optimization is carried out with photogrammetry-based and vision-based optimization. Then, absolute orientation is performed for pose estimation through bundle adjustment. We used ten sequences provided by the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) community. Through the two-step optimization process, we confirmed that outliers which were not removed by conventional outlier filters were removed. We were also able to confirm the applicability of photogrammetric techniques to stereo visual odometry technology.
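For intuition only, here is a tiny slice of what bundle adjustment does: motion-only refinement of a single pose against fixed 3D points by minimizing reprojection error. The paper's photogrammetric bundle adjustment optimizes much more than this, and all names in the sketch are ours:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def refine_pose(rvec0, tvec0, pts3d, pts2d, K):
    """Refine one camera pose by minimizing reprojection error."""
    def residuals(theta):
        proj, _ = cv2.projectPoints(pts3d, theta[:3], theta[3:6], K, None)
        return (proj.reshape(-1, 2) - pts2d).ravel()
    theta0 = np.concatenate([np.ravel(rvec0), np.ravel(tvec0)])
    sol = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt
    return sol.x[:3], sol.x[3:6]                         # refined rvec, tvec
```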
24

Esfandiari, Hooman, Derek Lichti and Carolyn Anglin. "Single-camera visual odometry to track a surgical X-ray C-arm base". Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 231, no. 12 (October 17, 2017): 1140–51. http://dx.doi.org/10.1177/0954411917735556.

Annotation:
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
25

Hong, Euntae, and Jongwoo Lim. "Visual-Inertial Odometry with Robust Initialization and Online Scale Estimation". Sensors 18, no. 12 (December 5, 2018): 4287. http://dx.doi.org/10.3390/s18124287.

Annotation:
Visual-inertial odometry (VIO) has recently received much attention for efficient and accurate ego-motion estimation of unmanned aerial vehicles (UAVs). Recent studies have shown that optimization-based algorithms typically achieve high accuracy when given a sufficient amount of information, but occasionally suffer from divergence when solving highly non-linear problems. Further, their performance depends significantly on the accuracy of the initialization of the inertial measurement unit (IMU) parameters. In this paper, we propose a novel VIO algorithm that estimates the motion state of UAVs with high accuracy. The main technical contributions are the fusion of visual information and pre-integrated inertial measurements in a joint optimization framework and the stable initialization of scale and gravity using relative pose constraints. To account for the ambiguity and uncertainty of VIO initialization, a local scale parameter is adopted in the online optimization. Quantitative comparisons with state-of-the-art algorithms on the European Robotics Challenge (EuRoC) dataset verify the efficacy and accuracy of the proposed method.
26

Aladem, Mohamed, and Samir Rawashdeh. "Lightweight Visual Odometry for Autonomous Mobile Robots". Sensors 18, no. 9 (August 28, 2018): 2837. http://dx.doi.org/10.3390/s18092837.

Annotation:
Vision-based motion estimation is an effective means for mobile robot localization and is often used in conjunction with other sensors for navigation and path planning. This paper presents a low-overhead real-time ego-motion estimation (visual odometry) system based on either a stereo or RGB-D sensor. The algorithm’s accuracy outperforms typical frame-to-frame approaches by maintaining a limited local map, while requiring significantly less memory and computational power in contrast to using global maps common in full visual SLAM methods. The algorithm is evaluated on common publicly available datasets that span different use-cases and performance is compared to other comparable open-source systems in terms of accuracy, frame rate and memory requirements. This paper accompanies the release of the source code as a modular software package for the robotics community compatible with the Robot Operating System (ROS).
27

Javanmard-Gh., A., D. Iwaszczuk and S. Roth. "DEEPLIO: DEEP LIDAR INERTIAL SENSOR FUSION FOR ODOMETRY ESTIMATION". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2021 (June 17, 2021): 47–54. http://dx.doi.org/10.5194/isprs-annals-v-1-2021-47-2021.

Annotation:
Having a good estimate of the position and orientation of a mobile agent is essential for many application domains such as robotics, autonomous driving, and virtual and augmented reality. In particular, when using LiDAR and IMU sensors as the inputs, most existing methods still use classical filter-based fusion methods to achieve this task. In this work, we propose DeepLIO, a modular, end-to-end learning-based fusion framework for odometry estimation using LiDAR and IMU sensors. For this task, our network learns an appropriate fusion function by considering different modalities of its input latent feature vectors. We also formulate a loss function, where we combine both global and local pose information over an input sequence to improve the accuracy of the network predictions. Furthermore, we design three sub-networks with different modules and architectures derived from DeepLIO to analyze the effect of each sensory input on the task of odometry estimation. Experiments on the benchmark dataset demonstrate that DeepLIO outperforms existing learning-based and model-based methods regarding orientation estimation and shows a marginal position accuracy difference.
28

Han, Chenlei, Michael Frey and Frank Gauterin. "Modular Approach for Odometry Localization Method for Vehicles with Increased Maneuverability". Sensors 21, no. 1 (December 25, 2020): 79. http://dx.doi.org/10.3390/s21010079.

Annotation:
Localization and navigation not only provide positioning and route-guidance information for users, but are also important inputs for vehicle control. This paper investigates the possibility of using odometry to estimate the position and orientation of a vehicle with a wheel-individual steering system in omnidirectional parking maneuvers. Vehicle models and sensors have been identified for this application. Several odometry versions are designed using a modular approach, developed in this paper to help users design state estimators. The different odometry versions have been implemented and validated both in a simulation environment and in real driving tests. The results show that the versions that use more models, and that use state variables within the models, provide both more accurate and more robust estimation.
29

Xu, Bo, Yu Chen, Shoujian Zhang and Jingrong Wang. "Improved Point–Line Visual–Inertial Odometry System Using Helmert Variance Component Estimation". Remote Sensing 12, no. 18 (September 7, 2020): 2901. http://dx.doi.org/10.3390/rs12182901.

Annotation:
Visual image sequences from mobile platforms inevitably contain large areas with various types of weak texture, which hinder the acquisition of an accurate pose as the platform moves. Visual–inertial odometry (VIO) that uses both point features and line features as visual information performs well in weak-texture environments and can solve these problems to a certain extent. However, the extraction and matching of line features are time-consuming, and reasonable weights between the point and line features are hard to estimate, which makes it difficult to accurately track the pose of the platform in real time. To overcome these deficiencies, an improved, efficient point–line visual–inertial odometry system is proposed in this paper, which makes use of the geometric information of line features and combines it with pixel correlation coefficients to match the line features. Furthermore, the system uses the Helmert variance component estimation method to adjust the weights between point features and line features. Comprehensive experimental results on the EuRoC MAV and PennCOSYVIO datasets demonstrate that the point–line visual–inertial odometry system developed in this paper achieves significant improvements in both localization accuracy and efficiency compared with several state-of-the-art VIO systems.
30

Zhang, Chaofan, Yong Liu, Fan Wang, Yingwei Xia and Wen Zhang. "VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation". Sensors 18, no. 11 (November 19, 2018): 4036. http://dx.doi.org/10.3390/s18114036.

Annotation:
State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods degrade in complex conditions, due to the limited field of view (FOV) of the camera used. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF) that can provide accurate and robust state estimation for robots in indoor environments. We first modify the monocular ORBSLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to use multiple fisheye cameras alongside an inertial measurement unit (IMU) to provide large-FOV visual-inertial information. Then, a novel VO framework is proposed to ensure the efficiency of state estimation, by adopting a GPU (Graphics Processing Unit) based feature extraction method and parallelizing the feature extraction thread, which is separated from the tracking thread, with the mapping thread. Finally, a nonlinear optimization method is formulated for accurate state estimation, characterized as multi-keyframe, tightly-coupled, and visual-inertial. In addition, accurate initialization and a novel MultiCol-IMU camera model are incorporated to further improve the performance of VINS-MKF. To the best of our knowledge, this is the first tightly-coupled multi-keyframe visual-inertial odometry that combines measurements from multiple fisheye cameras and an IMU. The performance of VINS-MKF was validated by extensive experiments using home-made datasets, and it showed improved accuracy and robustness over the state-of-the-art VINS-Mono.
31

Bag, Suvam, Vishwas Venkatachalapathy and Raymond W. Ptucha. "Motion Estimation Using Visual Odometry and Deep Learning Localization". Electronic Imaging 2017, no. 19 (January 29, 2017): 62–69. http://dx.doi.org/10.2352/issn.2470-1173.2017.19.avm-022.

32

Abdu, Ahmed, Hakim A. Abdo and Al-Alimi Dalal. "Robust Monocular Visual Odometry Trajectory Estimation in Urban Environments". International Journal of Information Technology and Computer Science 11, no. 10 (October 8, 2019): 12–18. http://dx.doi.org/10.5815/ijitcs.2019.10.02.

33

Lin, Lili, Weisheng Wang, Wan Luo, Lesheng Song and Wenhui Zhou. "Unsupervised monocular visual odometry with decoupled camera pose estimation". Digital Signal Processing 114 (July 2021): 103052. http://dx.doi.org/10.1016/j.dsp.2021.103052.

34

Liu, Qiang, Haidong Zhang, Yiming Xu and Li Wang. "Unsupervised Deep Learning-Based RGB-D Visual Odometry". Applied Sciences 10, no. 16 (August 6, 2020): 5426. http://dx.doi.org/10.3390/app10165426.

Annotation:
Recently, deep learning frameworks have been deployed in visual odometry systems and achieved comparable results to traditional feature matching based systems. However, most deep learning-based frameworks inevitably need labeled data as ground truth for training. On the other hand, monocular odometry systems are incapable of restoring absolute scale. External or prior information has to be introduced for scale recovery. To solve these problems, we present a novel deep learning-based RGB-D visual odometry system. Our two main contributions are: (i) during network training and pose estimation, the depth images are fed into the network to form a dual-stream structure with the RGB images, and a dual-stream deep neural network is proposed. (ii) the system adopts an unsupervised end-to-end training method, thus the labor-intensive data labeling task is not required. We have tested our system on the KITTI dataset, and results show that the proposed RGB-D Visual Odometry (VO) system has obvious advantages over other state-of-the-art systems in terms of both translation and rotation errors.
35

Liu, Fei, Yashar Balazadegan Sarvrood and Yang Gao. "Implementation and Analysis of Tightly Integrated INS/Stereo VO for Land Vehicle Navigation". Journal of Navigation 71, no. 1 (August 23, 2017): 83–99. http://dx.doi.org/10.1017/s037346331700056x.

Annotation:
Tight integration of inertial sensors and stereo visual odometry to bridge Global Navigation Satellite System (GNSS) signal outages in challenging environments has drawn increasing attention. However, the details of how feature pixel coordinates from visual odometry can be used directly to limit the rapid drift of inertial sensors in a tight integration implementation have rarely been provided in previous works. For instance, a key challenge in the tight integration of inertial and stereo visual datasets is how to correct inertial sensor errors using the pixel measurements from visual odometry; however, this has not been clearly demonstrated in the existing literature. As a result, this also affects the proper implementation of the integration algorithms and their performance assessment. This work develops and implements the tight integration of an Inertial Measurement Unit (IMU) and stereo cameras in a local-level frame. The results of the integrated solutions are also provided and analysed. Land vehicle testing results show that not only is the position accuracy improved, but also better azimuth and velocity estimation can be achieved, compared to stand-alone INS or stereo visual odometry solutions.
36

Boulekchour, Mohammed, Nabil Aouf and Mark Richardson. "Robust L∞ convex optimisation for monocular visual odometry trajectory estimation". Robotica 34, no. 3 (July 9, 2014): 703–22. http://dx.doi.org/10.1017/s0263574714001829.

Annotation:
The most important applications of many computer vision systems are based on robust feature extraction, matching, and tracking. Due to the extraction techniques used, the location accuracy of image features is heavily dependent on the variation in intensity within their neighbourhoods, from which their uncertainties are estimated. In the present work, a robust L∞ optimisation solution for monocular motion estimation systems is presented. Uncertainty estimation techniques based on a SIFT-derivative approach, and its propagation through the eight-point algorithm, singular value decomposition (SVD), and the triangulation algorithm, have been shown to improve global motion estimation. Using monocular systems makes motion estimation challenging due to the absolute-scale ambiguity caused by projective effects. For this, we propose robust tools to estimate both the trajectory of a moving object and the unknown absolute scale ratio between consecutive image pairs. Experimental evaluations showed that robust convex optimisation with the L∞ norm under uncertain data and the Robust Least Squares clearly outperform classical methods based on Least Squares and Levenberg-Marquardt algorithms.
37

Kersten, J., and V. Rodehorst. "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 511–18. http://dx.doi.org/10.5194/isprsarchives-xli-b3-511-2016.

Annotation:
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale-ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier-detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies that further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
38

Kersten, J., and V. Rodehorst. "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 511–18. http://dx.doi.org/10.5194/isprs-archives-xli-b3-511-2016.

Annotation:
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale-ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier-detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies that further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
39

Yuan, Cheng, Jizhou Lai, Pin Lyu, Peng Shi, Wei Zhao and Kai Huang. "A Novel Fault-Tolerant Navigation and Positioning Method with Stereo-Camera/Micro Electro Mechanical Systems Inertial Measurement Unit (MEMS-IMU) in Hostile Environment". Micromachines 9, no. 12 (November 27, 2018): 626. http://dx.doi.org/10.3390/mi9120626.

Annotation:
Visual odometry (VO) is a new navigation and positioning method that estimates the ego-motion of vehicles from images. However, VO can fail severely in hostile environments because of sparse features, fast angular motions, or illumination changes. Thus, enhancing the robustness of VO in hostile environments has become a popular research topic. In this paper, a novel fault-tolerant visual-inertial odometry (VIO) navigation and positioning framework is presented. A micro-electro-mechanical-systems inertial measurement unit (MEMS-IMU) is used to aid the stereo camera, for robust pose estimation in hostile environments. In the algorithm, MEMS-IMU pre-integration is deployed to improve motion estimation accuracy and robustness in cases with similar or few feature points. In addition, a dramatic-change detector and an adaptive observation-noise factor are introduced, tolerating and decreasing the estimation error caused by large angular motion or wrong matching. Experiments in hostile environments show that the presented method achieves better position estimation than traditional VO and VIO methods.
40

Yoon, S. J., W. S. Yoon, J. W. Jung and T. Kim. "DEVELOPMENT OF A SINGLE-VIEW ODOMETER BASED ON PHOTOGRAMMETRIC BUNDLE ADJUSTMENT". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1219–23. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1219-2018.

Annotation:
Recently, vehicles have been equipped with various sensors aimed at smart and autonomous functions. A single-view odometer estimates pose using a monoscopic camera mounted on a vehicle; it has generally been studied in the field of computer vision. Photogrammetry, on the other hand, focuses on producing precise three-dimensional position information using bundle adjustment methods. Therefore, this paper proposes to apply a photogrammetric approach to the single-view odometer. First, it performs real-time corresponding-point extraction. Next, it estimates the pose using relative orientation based on coplanarity conditions. Then, scale calibration is performed to convert the estimated translation in model space to the translation in real space. Finally, absolute orientation is performed using more than three images. In this step, we also extract appropriate model points through a verification procedure. For the experiments, we used data provided by the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) community. The technique took 0.12 seconds of processing time per frame. The rotation estimation error was about 0.005 degrees per meter and the translation estimation error was about 6.8%. The results of this study show the applicability of photogrammetry to visual odometry technology.
41

Jeong, Jae Heon, and Nikolaus Correll. "Towards Real-Time Trinocular Visual Odometry". Applied Mechanics and Materials 490-491 (January 2014): 1424–29. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1424.

Annotation:
Pose estimation for a multi-camera rig that lacks sufficient overlapping fields of view for stereo is generally computationally expensive, due to the offset of the camera centers and the bundle adjustment algorithm. We propose a divide-and-conquer approach that reduces the trinocular visual odometry problem to five monocular visual odometry problems: one for each individual camera sequence, and two more using features matched temporally from consecutive images from the center camera to the left and right cameras, respectively. While this approach provides high accuracy over long distances in outdoor environments without requiring any additional sensors, it is computationally expensive, preventing real-time operation. In this paper, we evaluate trading image resolution and frame rate against accuracy to speed up computation. Results show that scaling images down to a quarter of Full HD resolution can speed up computation by two orders of magnitude while still providing acceptable accuracy, whereas dropping frames quickly deteriorates performance.
42

Ma, Fangwu, Jinzhu Shi, Yu Yang, Jinhang Li and Kai Dai. "ACK-MSCKF: Tightly-Coupled Ackermann Multi-State Constraint Kalman Filter for Autonomous Vehicle Localization". Sensors 19, no. 21 (November 5, 2019): 4816. http://dx.doi.org/10.3390/s19214816.

Annotation:
Visual-Inertial Odometry (VIO) is subject to additional unobservable directions under the special motions of ground vehicles, resulting in larger pose estimation errors. To address this problem, a tightly-coupled Ackermann visual-inertial odometry (ACK-MSCKF) is proposed, fusing Ackermann error-state measurements into the Stereo Multi-State Constraint Kalman Filter (S-MSCKF) with a tightly-coupled filter-based mechanism. In contrast with S-MSCKF, in which the inertial measurement unit (IMU) propagates the vehicle motion and the propagation is then corrected by stereo visual measurements, we successively update the propagation with Ackermann error-state measurements and visual measurements after the process model and state augmentation. This way, additional constraints from the Ackermann measurements are exploited to improve the pose estimation accuracy. Both qualitative and quantitative experimental results on real-world datasets from an Ackermann steering vehicle demonstrate that ACK-MSCKF significantly improves the pose estimation accuracy of S-MSCKF under the special motions of autonomous vehicles, and keeps accurate and robust pose estimation available under different vehicle driving cycles and environmental conditions. This paper is accompanied by source code for the robotics community.
43

Kim, Joo-Hee, and In-Cheol Kim. "Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction". KIPS Transactions on Software and Data Engineering 4, no. 4 (April 30, 2015): 187–94. http://dx.doi.org/10.3745/ktsde.2015.4.4.187.

44

Meng, Xuyang, Chunxiao Fan, Yue Ming, Yuan Shen and Hui Yu. "Un-VDNet: unsupervised network for visual odometry and depth estimation". Journal of Electronic Imaging 28, no. 06 (December 26, 2019): 1. http://dx.doi.org/10.1117/1.jei.28.6.063015.

45

Zhou, Dingfu, Yuchao Dai and Hongdong Li. "Ground-Plane-Based Absolute Scale Estimation for Monocular Visual Odometry". IEEE Transactions on Intelligent Transportation Systems 21, no. 2 (February 2020): 791–802. http://dx.doi.org/10.1109/tits.2019.2900330.

46

Yang, Xiaohan, Xiaojuan Li, Yong Guan, Jiadong Song and Rui Wang. "Overfitting reduction of pose estimation for deep learning visual odometry". China Communications 17, no. 6 (June 2020): 196–210. http://dx.doi.org/10.23919/jcc.2020.06.016.

47

Aqel, Mohammad O. A., Mohammad H. Marhaban, M. Iqbal Saripan and Napsiah Bt Ismail. "Estimation of image scale variations in monocular visual odometry systems". IEEJ Transactions on Electrical and Electronic Engineering 12, no. 2 (December 15, 2016): 228–43. http://dx.doi.org/10.1002/tee.22370.

48

Nisar, Barza, Philipp Foehn, Davide Falanga and Davide Scaramuzza. "VIMO: Simultaneous Visual Inertial Model-Based Odometry and Force Estimation". IEEE Robotics and Automation Letters 4, no. 3 (July 2019): 2785–92. http://dx.doi.org/10.1109/lra.2019.2918689.

49

Nguyen, Thien Hoang, Thien-Minh Nguyen, Muqing Cao and Lihua Xie. "Loosely-Coupled Ultra-wideband-Aided Scale Correction for Monocular Visual Odometry". Unmanned Systems 08, no. 02 (March 17, 2020): 179–90. http://dx.doi.org/10.1142/s2301385020500119.

Annotation:
In this paper, we propose a method to address the problem of scale uncertainty in monocular visual odometry (VO), which includes scale ambiguity and scale drift, using distance measurements from a single ultra-wideband (UWB) anchor. A variant of Levenberg–Marquardt (LM) nonlinear least squares regression method is proposed to rectify unscaled position data from monocular odometry with 1D point-to-point distance measurements. As a loosely-coupled approach, our method is flexible in that each input block can be replaced with one’s preferred choices for monocular odometry/SLAM algorithm and UWB sensor. Furthermore, we do not require the location of the UWB anchor as prior knowledge and will estimate both scale and anchor location simultaneously. However, it is noted that a good initial guess for anchor position can result in more accurate scale estimation. The performance of our method is compared with state-of-the-art on both public datasets and real-life experiments.
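The joint scale-and-anchor regression described above can be sketched with SciPy's Levenberg-Marquardt solver (a minimal sketch of the formulation, not the authors' LM variant; all names and initial values are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_scale_and_anchor(traj, dists, s0=1.0, a0=(0.0, 0.0, 0.0)):
    """Jointly estimate the VO scale s and the UWB anchor position a from the
    unscaled trajectory traj (N x 3) and ranges dists (N,):
    residual_i = || s * traj_i - a || - dists_i."""
    def residuals(theta):
        s, a = theta[0], theta[1:4]
        return np.linalg.norm(s * traj - a, axis=1) - dists
    theta0 = np.concatenate(([s0], np.asarray(a0, dtype=float)))
    sol = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt
    return sol.x[0], sol.x[1:4]
```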
50

Kučić, Mario, and Marko Valčić. "Stereo Visual Odometry for Indoor Localization of Ship Model". Journal of Maritime & Transportation Science 58, no. 1 (June 2020): 57–75. http://dx.doi.org/10.18048/2020.58.04.

Annotation:
Typically, ships are designed for open-sea navigation, and research on autonomous ships is thus mostly done for that particular setting. This paper explores the possibility of using low-cost sensors for localization inside a small navigation area. The localization system is based on technology used for developing autonomous cars. The main part of the system is visual odometry using stereo cameras, fused with Inertial Measurement Unit (IMU) data and coupled with Kalman and particle filters, to achieve decimetre-level accuracy inside a basin under different surface conditions. The visual odometry uses cropped frames from the stereo cameras and the good-features-to-track algorithm to extract features and obtain a depth for each feature, which is used to estimate the ship model's movement. Experimental results showed that the proposed system could localize itself with decimetre accuracy, implying that there is a real possibility for ships to use visual odometry for autonomous navigation on narrow waterways, which could have a significant impact on future transportation.