
Journal articles on the topic "Range-Only SLAM (Simultaneous Localization and Mapping)"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles.


Consult the top 50 journal articles for your research on the topic "Range-Only SLAM (Simultaneous Localization and Mapping)".

Next to every source in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference to the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read its online annotation whenever the relevant parameters are available in the metadata.

Browse the journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Tsubouchi, Takashi. "Introduction to Simultaneous Localization and Mapping". Journal of Robotics and Mechatronics 31, no. 3 (June 20, 2019): 367–74. http://dx.doi.org/10.20965/jrm.2019.p0367.

Annotation:
Simultaneous localization and mapping (SLAM) forms the core of the technology that supports mobile robots. With SLAM, when a robot is moving in an actual environment, real-world information is imported to a computer on the robot via a sensor, and the robot's physical location and a map of its surrounding environment are created. SLAM is a major topic in mobile robot research. Although the information, supported by a mathematical description, is derived from a space in reality, it is formulated based on probability theory when being handled. Therefore, this concept contributes not only to research and development concerning mobile robots, but also to the training of mathematics and computer implementation, aimed mainly at position estimation and map creation for mobile robots. This article focuses on SLAM technology, including a brief overview of its history, insights from the author, and, finally, an introduction of a specific example that the author was involved in.
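The probabilistic formulation the article refers to can be illustrated with a minimal one-dimensional Kalman filter for position estimation; this is a generic textbook sketch, not the article's own material, and all numbers are made up:

```python
def kalman_predict(x, P, u, Q):
    """Motion update: move by commanded distance u; process noise Q grows P."""
    return x + u, P + Q

def kalman_update(x, P, z, R):
    """Measurement update: fuse a noisy position observation z with variance R."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Robot starts at 0 with high certainty and drives 1.0 m per step.
x, P = 0.0, 0.01
for z in [1.05, 1.98, 3.02]:             # noisy position observations
    x, P = kalman_predict(x, P, 1.0, 0.1)
    x, P = kalman_update(x, P, z, 0.05)

print(round(x, 2))   # estimate near 3.0
print(P < 0.1)       # uncertainty shrinks after each fusion
```

Prediction inflates the uncertainty P while each measurement fusion shrinks it; this predict/update cycle is the basic building block underlying most filtering-based SLAM systems.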
2

Torres-González, A., J. R. Martinez-de Dios, A. Jiménez-Cano and A. Ollero. "An Efficient Fast-Mapping SLAM Method for UAS Applications Using Only Range Measurements". Unmanned Systems 04, no. 02 (April 2016): 155–65. http://dx.doi.org/10.1142/s2301385016500035.

Annotation:
This paper deals with 3D Simultaneous Localization and Mapping (SLAM), where the UAS uses only range measurements to build a local map of an unknown environment and to self-localize in that map. In recent years, Range-Only (RO) SLAM has attracted significant interest: it is suitable for non-line-of-sight conditions and bad lighting, and is superior to visual SLAM in some problems. However, some issues constrain its applicability in practical cases, such as delays in map building and low map and UAS estimation accuracies. This paper proposes a 3D RO-SLAM scheme for UAS that specifically focuses on improving map building delays and accuracy levels without compromising efficiency in the consumption of resources. The scheme integrates sonar measurements together with range measurements between the robot and beacons deployed in the scenario. The proposed scheme presents two main advantages: (1) it integrates direct range measurements between the robot and the beacons as well as range measurements between beacons, called inter-beacon measurements, which significantly reduce map building times and improve map and UAS localization accuracies; and (2) the SLAM scheme is endowed with a supervisory module that self-adapts the measurements that are integrated in SLAM, reducing computational, bandwidth, and energy consumption. Experimental validation in field experiments with an octorotor UAS showed that the proposed scheme improved map building times by 72%, map accuracy by 40%, and UAS localization accuracy by 12%.
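As a generic illustration of how a pose can be recovered from range-only measurements (a toy sketch, not the authors' RO-SLAM scheme; the beacon coordinates are hypothetical), a Gauss-Newton trilateration against three known beacons looks like this:

```python
import math

def trilaterate(beacons, ranges, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton trilateration: refine (x, y) so that the distances
    to the beacons match the measured ranges."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T J d = J^T r by hand.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)
            jx, jy = (x - bx) / d, (y - by) / d    # unit direction to beacon
            res = r - d                            # range residual
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * res; b2 += jy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det           # Cramer's rule for the 2x2 solve
        y += (a11 * b2 - a12 * b1) / det
    return x, y

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in beacons]
x, y = trilaterate(beacons, ranges, guess=(1.0, 1.0))
print(round(x, 2), round(y, 2))  # → 3.0 4.0
```

Inter-beacon measurements, as the paper proposes, add further residual rows of the same form between pairs of beacons, constraining the map faster than robot-beacon ranges alone.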
3

Kim, Jung-Hee, and Doik Kim. "Computationally Efficient Cooperative Dynamic Range-Only SLAM Based on Sum of Gaussian Filter". Sensors 20, no. 11 (June 10, 2020): 3306. http://dx.doi.org/10.3390/s20113306.

Annotation:
A cooperative dynamic range-only simultaneous localization and mapping (CDRO-SLAM) algorithm based on the sum of Gaussian (SoG) filter was recently introduced. The main characteristics of the CDRO-SLAM are (i) the integration of inter-node ranges as well as usual direct robot-node ranges to improve the convergence rate and localization accuracy and (ii) the tracking of any moving nodes under dynamic environments by resetting and updating the SoG variables. In this paper, an efficient implementation of the CDRO-SLAM (eCDRO-SLAM) is proposed to mitigate the high computational burden of the CDRO-SLAM due to the inter-node measurements. Furthermore, a thorough computational analysis is presented, which reveals that the computational efficiency of the eCDRO-SLAM is significantly improved over the CDRO-SLAM. The performance of the proposed eCDRO-SLAM is compared with those of several conventional RO-SLAM algorithms and the results show that the proposed efficient algorithm has a faster convergence rate and a similar map estimation error regardless of the map size. Accordingly, the proposed eCDRO-SLAM can be utilized in various RO-SLAM applications.
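The sum of Gaussian (SoG) idea can be sketched as follows: a single range measurement constrains a node to a circle, which is approximated by several Gaussian hypotheses that later measurements reweight. This is a minimal illustration with made-up coordinates, not the CDRO-SLAM implementation:

```python
import math

def init_sog(robot_xy, rng, n=8):
    """A single range only constrains the node to a circle, so place n
    equally weighted Gaussian-mean hypotheses around it."""
    rx, ry = robot_xy
    return [{"x": rx + rng * math.cos(2 * math.pi * k / n),
             "y": ry + rng * math.sin(2 * math.pi * k / n),
             "w": 1.0 / n} for k in range(n)]

def reweight(sog, robot_xy, rng, sigma=0.5):
    """Reweight every hypothesis by the likelihood of a new range reading."""
    rx, ry = robot_xy
    for g in sog:
        d = math.hypot(g["x"] - rx, g["y"] - ry)
        g["w"] *= math.exp(-0.5 * ((rng - d) / sigma) ** 2)
    total = sum(g["w"] for g in sog)
    for g in sog:
        g["w"] /= total
    return sog

# Node actually at (5, 0); the first range from the origin is ambiguous.
sog = init_sog((0.0, 0.0), 5.0)
# Ranges taken from two more robot poses prune the wrong hypotheses.
sog = reweight(sog, (5.0, 5.0), 5.0)
sog = reweight(sog, (5.0, -5.0), 5.0)
best = max(sog, key=lambda g: g["w"])
print(round(best["x"], 1), round(best["y"], 1))  # → 5.0 0.0
```

A full SoG filter also tracks per-hypothesis covariances and prunes or merges components; the dynamic variant in the paper additionally resets hypotheses for moving nodes.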
4

Xu, S., Z. Ji, D. T. Pham and F. Yu. "Simultaneous localization and mapping: swarm robot mutual localization and sonar arc bidirectional carving mapping". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 225, no. 3 (September 10, 2010): 733–44. http://dx.doi.org/10.1243/09544062jmes2239.

Annotation:
This work primarily aims to study robot swarm global mapping in a static indoor environment. Because the robots' own poses must be estimated as a prerequisite, the task becomes a simultaneous localization and mapping (SLAM) problem. Five techniques are proposed to solve the SLAM problem: extended Kalman filter (EKF)-based mutual localization, sonar arc bidirectional carving mapping, grid-oriented correlation, working robot group substitution, and a termination rule. The EKF mutual localization algorithm updates the pose estimates not only of the current robot but also of the landmark-functioned robots. The arc-carving mapping algorithm increases the azimuth resolution of sonar readings by using their free-space regions to shrink the possible regions. It is further improved in both accuracy and efficiency through bidirectional carving, grid-oriented correlated-arc carving, working robot group substitution, and the termination rule. Software simulations and hardware experiments have verified the feasibility of the proposed SLAM approach when implemented in a typical medium-cluttered office by a team of three robots. Besides the combined effect, individual algorithm components have also been investigated.
5

Rashidi, Ali Jabar, and Saeed Mohammadloo. "Simultaneous Cooperative Localization for AUVs Using Range-Only Sensors". International Journal of Information Acquisition 08, no. 02 (June 2011): 117–32. http://dx.doi.org/10.1142/s0219878911002380.

Annotation:
The absence of GPS underwater makes navigation for autonomous underwater vehicles (AUVs) a challenge. Moreover, the use of static beacons in the form of a long baseline (LBL) array limits the operation area to a few square kilometers and requires substantial deployment effort before operations. In this paper, an algorithm for cooperative localization of AUVs is proposed. We describe a form of cooperative Simultaneous Localization and Mapping (SLAM). Each of the robots in the group is equipped with an Inertial Measurement Unit (IMU), and some of them have a range-only sonar sensor that can determine the relative distance to the others. Two estimators, in the form of a Kalman filter, process the available position information from all the members of the team and produce a pose estimate for every one of them. Simulation results are presented for a typical localization example of a three-AUV formation in a large environment with an indirect trajectory. The results show that our proposed method offers good localization accuracy even though only a small number of low-cost sensors is needed for each vehicle, which makes it an economical and practical localization approach.
6

Hsu, Chen-Chien, Wei-Yen Wang, Tung-Yuan Lin, Yin-Tien Wang and Teng-Wei Huang. "Enhanced Simultaneous Localization and Mapping (ESLAM) for Mobile Robots". International Journal of Humanoid Robotics 14, no. 02 (April 16, 2017): 1750007. http://dx.doi.org/10.1142/s0219843617500074.

Annotation:
FastSLAM, such as FastSLAM 1.0 and FastSLAM 2.0, is a popular algorithm for solving the simultaneous localization and mapping (SLAM) problem for mobile robots. In real environments, however, the execution speed of FastSLAM would be too slow to achieve the objective of real-time design with satisfactory accuracy, because of excessive comparisons of the measurement with all the existing landmarks in particles, particularly when the number of landmarks is drastically increased. In this paper, an enhanced SLAM (ESLAM) is proposed, which uses not only odometer information but also sensor measurements to estimate the robot's pose in the prediction step. Landmark information that has the maximum likelihood is then used to update the robot's pose before updating the landmarks' locations. Compared to existing FastSLAM algorithms, the proposed ESLAM algorithm has better performance in terms of computational efficiency as well as localization and mapping accuracy, as demonstrated in the illustrated examples.
7

Herranz, F., A. Llamazares, E. Molinos, M. Ocaña and M. A. Sotelo. "WiFi SLAM algorithms: an experimental comparison". Robotica 34, no. 4 (July 18, 2014): 837–58. http://dx.doi.org/10.1017/s0263574714001908.

Annotation:
Localization and mapping in indoor environments, such as airports and hospitals, are key tasks for almost every robotic platform. Some researchers suggest the use of Range-Only (RO) sensors based on WiFi (Wireless Fidelity) technology with SLAM (Simultaneous Localization And Mapping) techniques to solve both problems. The current state of the art in RO SLAM is mainly focused on the filtering approach, while the study of smoothing approaches with RO sensors is quite incomplete. This paper presents a comparison between filtering algorithms, such as EKF and FastSLAM, and a smoothing algorithm, the SAM (Smoothing And Mapping). Experimental results are obtained in indoor environments using WiFi sensors. The results demonstrate the feasibility of the smoothing approach using WiFi sensors in an indoor environment.
8

Wang, Hongling, Chengjin Zhang, Yong Song and Bao Pang. "Information-Fusion Methods Based Simultaneous Localization and Mapping for Robot Adapting to Search and Rescue Postdisaster Environments". Journal of Robotics 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/4218324.

Annotation:
This paper develops the first application of unique information-fusion SLAM (IF-SLAM) methods for mobile robots performing simultaneous localization and mapping (SLAM) in search and rescue (SAR) environments. Several fusion approaches are proposed: parallel measurement filtering, exploration-trajectory fusing, and the combination of sensor measurements with mobile robot trajectories. Novel integration particle filter (IPF) and optimal improved EKF (IEKF) algorithms are derived for information-fusion systems to perform the SLAM task in SAR scenarios. The information-fusion architecture consists of multiple robots and multiple sensors (MAM); the robots mount on-board laser range finder (LRF) sensors, localization sonars, gyro odometry, a Kinect sensor, an RGB-D camera, and other proprioceptive sensors. This information-fusion SLAM (IF-SLAM) is compared with conventional methods, which indicates that the fused trajectory is more consistent with the estimated and real observed trajectories. Simulations and experiments of the SLAM process are conducted in both a cluttered indoor environment and an outdoor collapsed unstructured scenario, and the experimental results validate the effectiveness of the proposed information-fusion methods in improving SLAM performance in SAR scenarios.
9

McGarey, Patrick, Kirk MacTavish, François Pomerleau and Timothy D. Barfoot. "TSLAM: Tethered simultaneous localization and mapping for mobile robots". International Journal of Robotics Research 36, no. 12 (October 2017): 1363–86. http://dx.doi.org/10.1177/0278364917732639.

Annotation:
Tethered mobile robots are useful for exploration in steep, rugged, and dangerous terrain. A tether can provide a robot with robust communications, power, and mechanical support, but also constrains motion. In cluttered environments, the tether will wrap around a number of intermediate 'anchor points', complicating navigation. We show that by measuring the length of tether deployed and the bearing to the most recent anchor point, we can formulate a tethered simultaneous localization and mapping (TSLAM) problem that allows us to estimate the pose of the robot and the positions of the anchor points, using only low-cost, nonvisual sensors. This information is used by the robot to safely return along an outgoing trajectory while avoiding tether entanglement. We are motivated by TSLAM as a building block to aid conventional, camera, and laser-based approaches to simultaneous localization and mapping (SLAM), which tend to fail in dark and/or dusty environments. Unlike conventional range-bearing SLAM, the TSLAM problem must account for the fact that the tether-length measurements are a function of the robot's pose and all the intermediate anchor-point positions. While this fact has implications for the sparsity that can be exploited in our method, we show that a solution to the TSLAM problem can still be found and formulate two approaches: (i) an online particle filter based on FastSLAM and (ii) an efficient, offline batch solution. We demonstrate that either method outperforms odometry alone, both in simulation and in experiments using our TReX (Tethered Robotic eXplorer) mobile robot operating in flat-indoor and steep-outdoor environments. For the indoor experiment, we compare each method using the same dataset with ground truth, showing that batch TSLAM outperforms particle-filter TSLAM in localization and mapping accuracy, owing to superior anchor-point detection, data association, and outlier rejection.
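The tether-length measurement described above, a function of the robot pose and all intermediate anchor points, reduces to summing segment lengths along the tether; a minimal sketch with hypothetical geometry:

```python
import math

def tether_length(robot_xy, anchors):
    """Total deployed tether: base anchor -> intermediate anchors -> robot.
    The first anchor is the fixed tether base."""
    pts = anchors + [robot_xy]
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Base at the origin, one wrap-around anchor at (3, 4), robot at (6, 8).
anchors = [(0.0, 0.0), (3.0, 4.0)]
print(tether_length((6.0, 8.0), anchors))  # → 10.0 (5.0 + 5.0)
```

Because this single scalar depends on every anchor position at once, the resulting measurement Jacobian is dense across all anchors, which is exactly the sparsity issue the paper discusses.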
10

Wu, Ming, Lin Lin Li, Cheng Jian Li, Hong Qiao Wang and Zhen Hua Wei. "Simultaneous Localization, Mapping and Detection of Objects for Mobile Robot Based on Information Fusion in Dynamic Environment". Advanced Materials Research 1014 (July 2014): 319–22. http://dx.doi.org/10.4028/www.scientific.net/amr.1014.319.

Annotation:
This paper presents a novel approach for simultaneous localization and mapping (SLAM) and detection of moving objects based on information fusion. We use different information sources, such as a laser range scanner and a monocular camera, to improve the ability to distinguish between objects and the background environment. The approach improves the accuracy of SLAM in complex environments, reduces the interference caused by objects, and enhances the practical utility of traditional SLAM methods. Moreover, the approach expands the fields of both research and application of SLAM in combination with a target tracking method. Results from real robot experiments show the effectiveness of our method.
11

Wang, Su, Yukinori Kobayashi, Ankit A. Ravankar, Abhijeet Ravankar and Takanori Emaru. "A Novel Approach for Lidar-Based Robot Localization in a Scale-Drifted Map Constructed Using Monocular SLAM". Sensors 19, no. 10 (May 14, 2019): 2230. http://dx.doi.org/10.3390/s19102230.

Annotation:
Scale ambiguity and drift are inherent drawbacks of a purely visual monocular simultaneous localization and mapping (SLAM) system. This problem can be a crucial challenge for other robots with range sensors that must localize in a map previously built by a monocular camera. In this paper, a metrically inconsistent prior map is made by monocular SLAM and subsequently used for localization on another robot using only a laser range finder (LRF). To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm which allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the data from the 2D LRF to the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot's state, which is composed of both the robot's pose and the map's relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method is able to globally localize the robot in real time, and successful localization can be achieved even in a badly drifted map.
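A toy version of such a scale-augmented Monte Carlo localization step (not the paper's implementation; the corridor geometry, noise values, and state layout are all made up) can be sketched in one dimension:

```python
import math
import random

random.seed(1)

# A corridor mapped by monocular SLAM: walls at map coordinates -4 and 12,
# but the map's metric scale s is unknown. Each particle jointly hypothesizes
# the robot position x (metres) and the scale s, mirroring the augmented state.
FRONT_WALL, BACK_WALL = 12.0, -4.0      # wall positions in map units
TRUE_X, TRUE_S = 4.0, 1.5               # ground truth used to simulate the LRF

def expected_ranges(x, s):
    """Predicted metric LRF ranges to both walls under scale hypothesis s."""
    return s * FRONT_WALL - x, x - s * BACK_WALL

def likelihood(pred, meas, sigma=0.5):
    return math.exp(-0.5 * ((pred - meas) / sigma) ** 2)

particles = [{"x": random.uniform(0.0, 10.0), "s": random.uniform(0.5, 2.0)}
             for _ in range(5000)]

z_front, z_back = expected_ranges(TRUE_X, TRUE_S)   # simulated readings
for p in particles:
    ef, eb = expected_ranges(p["x"], p["s"])
    p["w"] = likelihood(ef, z_front) * likelihood(eb, z_back)

# Weighted-mean estimate of position and scale.
total = sum(p["w"] for p in particles)
s_est = sum(p["s"] * p["w"] for p in particles) / total
x_est = sum(p["x"] * p["w"] for p in particles) / total
print(round(s_est, 2), round(x_est, 2))  # close to 1.5 and 4.0
```

Two ranges in opposite directions are what make pose and scale jointly observable here; a single wall would leave a one-parameter family of consistent (x, s) pairs.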
12

Munguía, Rodrigo, and Antoni Grau. "Monocular SLAM for Visual Odometry: A Full Approach to the Delayed Inverse-Depth Feature Initialization Method". Mathematical Problems in Engineering 2012 (2012): 1–26. http://dx.doi.org/10.1155/2012/676385.

Annotation:
This paper describes in a detailed manner a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors to simultaneously build a map of their surroundings; this map will be needed for the robot to track its position. In this context, the 6-DOF (degree of freedom) monocular camera case (monocular SLAM) possibly represents the hardest variant of SLAM. In monocular SLAM, a single camera, which is freely moving through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. In this work, detailed formulation, extended discussions, and experiments with real data are presented in order to validate and show the performance of the proposal.
13

Wang, Jun, and Yasutake Takahashi. "Particle Smoother-Based Landmark Mapping for the SLAM Method of an Indoor Mobile Robot with a Non-Gaussian Detection Model". Journal of Sensors 2019 (October 14, 2019): 1–19. http://dx.doi.org/10.1155/2019/3717298.

Annotation:
HF-band radio-frequency identification (RFID) is a robust identification system that is rarely influenced by objects in the robot activity area or by illumination conditions. An HF-band RFID system is capable of facilitating reasonably accurate and robust self-localization of indoor mobile robots. An RFID-based self-localization system for an indoor mobile robot requires prior knowledge of the map which contains the ID information and positions of the RFID tags used in the environment. Generally, the map of RFID tags is built manually. To reduce labor costs, the simultaneous localization and mapping (SLAM) technique is designed to localize the mobile robot and build a map of the RFID tags simultaneously. In this study, multiple HF-band RFID readers are installed on the bottom of an omnidirectional mobile robot and RFID tags are spread on the floor. Because the tag detection process of the HF-band RFID system does not follow a standard Gaussian distribution, extended Kalman filter (EKF)-based landmark updates are unsuitable. This paper proposes a novel SLAM method for an indoor mobile robot with a non-Gaussian detection model, using a particle smoother for the landmark mapping and a particle filter for the self-localization of the mobile robot. The proposed SLAM method is evaluated through experiments with the HF-band RFID system, which has the non-Gaussian detection model, and also with a range and bearing sensor, which has a standard Gaussian detection model. In particular, the proposed method is compared against two other SLAM methods: FastSLAM, and a SLAM method that utilizes a particle filter for both the landmark updating and robot self-localization. The experimental results show the validity and superiority of the proposed SLAM method.
14

Hua, Gangchen, and Xu Tan. "Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping". Journal of Sensors 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/1969351.

Annotation:
In this study, we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrect visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera's intrinsic parameters and distortion coefficients are sufficient for this method; 3D measurement is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore, the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our method can effectively correct the typical false-positive localizations of previous methods, thus achieving better recall ratios and better precision.
15

Moratuwage, Diluka, Martin Adams and Felipe Inostroza. "δ-Generalized Labeled Multi-Bernoulli Simultaneous Localization and Mapping with an Optimal Kernel-Based Particle Filtering Approach". Sensors 19, no. 10 (May 17, 2019): 2290. http://dx.doi.org/10.3390/s19102290.

Annotation:
Under realistic environmental conditions, heuristic-based data association and map management routines often result in divergent map and trajectory estimates in robotic Simultaneous Localization And Mapping (SLAM). To address these issues, SLAM solutions have been proposed based on the Random Finite Set (RFS) framework, which models the map and measurements such that the usual requirements of external data association routines and map management heuristics can be circumvented and realistic sensor detection uncertainty can be taken into account. Rao–Blackwellized particle filter (RBPF)-based RFS SLAM solutions have been demonstrated using the Probability Hypothesis Density (PHD) filter and subsequently the Labeled Multi-Bernoulli (LMB) filter. In multi-target tracking, the LMB filter, which was introduced as an efficient approximation to the computationally expensive δ-Generalized LMB (δ-GLMB) filter, converts its representation of an LMB distribution to δ-GLMB form during the measurement update step. This not only results in a loss of information yielding inferior results (compared to the δ-GLMB filter) but also fails to take advantage of the parallelized implementations possible with RBPF-based SLAM algorithms. Similar to state-of-the-art random vector-valued RBPF solutions such as FastSLAM and MH-FastSLAM, the performances of all RBPF-based SLAM algorithms based on the RFS framework also diverge from ground truth over time due to random sampling approaches, which rely only on the control noise variance. Further, the methods lose particle diversity and diverge over time as a result of particle degeneracy. To alleviate this problem and further improve the quality of map estimates, a SLAM solution using an optimal kernel-based particle filter combined with an efficient variant of the δ-GLMB filter (δ-GLMB-SLAM) is presented. The performance of the proposed δ-GLMB-SLAM algorithm, referred to as δ-GLMB-SLAM2.0, was demonstrated using simulated datasets and a section of the publicly available KITTI dataset. The results suggest that even with a limited number of particles, δ-GLMB-SLAM2.0 outperforms state-of-the-art RBPF-based RFS SLAM algorithms.
16

Zhang, Yi, and Fei Huang. "Panoramic Visual SLAM Technology for Spherical Images". Sensors 21, no. 3 (January 21, 2021): 705. http://dx.doi.org/10.3390/s21030705.

Annotation:
Simultaneous Localization and Mapping (SLAM) technology is one of the best methods for fast 3D reconstruction and mapping. However, the accuracy of SLAM is not always high enough, which is currently the subject of much research interest. Panoramic vision can provide us with a wide range of angles of view, many feature points, and rich information. The panoramic multi-view cross-imaging feature can be used to realize instantaneous omnidirectional spatial information acquisition and improve the positioning accuracy of SLAM. In this study, we investigated panoramic visual SLAM positioning technology, including three core research points: (1) the spherical imaging model; (2) spherical image feature extraction and matching methods, including the Spherical Oriented FAST and Rotated BRIEF (SPHORB) and ternary scale-invariant feature transform (SIFT) algorithms; and (3) the panoramic visual SLAM algorithm. The experimental results show that the method of panoramic visual SLAM can improve the robustness and accuracy of a SLAM system.
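The spherical imaging model mentioned in point (1) essentially maps each equirectangular pixel to a ray on the unit sphere; a minimal sketch (the axis conventions and image size below are assumptions, not taken from the paper):

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray on the sphere."""
    lon = (u / width) * 2.0 * math.pi - math.pi      # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi     # latitude in [-pi/2, pi/2]
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# The centre pixel of a 4096x2048 panorama maps to the forward (+x) axis here.
x, y, z = pixel_to_ray(2048, 1024, 4096, 2048)
print(round(x, 2), round(y, 2), round(z, 2))  # → 1.0 0.0 0.0
```

Feature matching on the sphere (e.g. SPHORB) operates on such rays rather than on planar pixel coordinates, which is what removes the field-of-view limits of a pinhole model.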
17

Xu, Xiao Long, Jian Wen Zhang and Li Ping Chen. "A Fast Algorithm for Multi-Robot SLAM". Advanced Materials Research 562-564 (August 2012): 941–44. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.941.

Annotation:
Multi-robot SLAM has become a research hotspot in robot simultaneous localization and mapping (SLAM) in recent years. In this paper, we propose a multi-robot exactly sparse extended information filter algorithm (MRESEIF) to solve the multi-robot SLAM problem. The MRESEIF algorithm sets a threshold to divide the features observed by the robots into two parts: one part of the features is used for observation updates, while the other is used for robot relocalization. Simulation data show that the MRESEIF algorithm can solve the multi-robot SLAM problem effectively: the observation update time and motion update time are kept within a constant range, and the accuracy of the robot poses is excellent.
18

Chow, J. C. K. "Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 339–44. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-339-2017.

Annotation:
In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems), Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem, and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue in indoor environments); however, no assumptions are required about the general motion of the sensor (e.g. static periods).
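The Gaussian Process regression at the heart of such magnetic-field mapping can be sketched in one dimension; the field samples below are made up, and this is a generic GP posterior mean, not the paper's GP-SLAM backend:

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential kernel between two sets of 1-D positions."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# Hypothetical magnetic field magnitudes (microtesla) sampled along a corridor.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([47.1, 48.3, 46.5, 49.0, 47.8])

noise = 0.1
mu = y.mean()                                   # work on zero-mean residuals
K = rbf(X, X) + noise ** 2 * np.eye(len(X))
alpha = np.linalg.solve(K, y - mu)

def field_at(xq):
    """GP posterior mean of the field at query positions xq."""
    return mu + rbf(np.asarray(xq, dtype=float), X) @ alpha

# The continuous field map can now be matched against new magnetometer readings.
print(round(float(field_at([1.0])[0]), 1))  # close to the 48.3 sample
```

Matching a new sequence of magnetometer readings against this continuous map is what supplies the loop-closure constraints in a GP-SLAM formulation.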
19

Ceron, Jesus D., Felix Kluge, Arne Küderle, Bjoern M. Eskofier and Diego M. López. "Simultaneous Indoor Pedestrian Localization and House Mapping Based on Inertial Measurement Unit and Bluetooth Low-Energy Beacon Data". Sensors 20, no. 17 (August 22, 2020): 4742. http://dx.doi.org/10.3390/s20174742.

Annotation:
Indoor location estimation is crucial to provide context-based assistance in home environments. In this study, a method for simultaneous indoor pedestrian localization and house mapping is proposed and evaluated. The method fuses a person's movement data from an Inertial Measurement Unit (IMU) with proximity and activity-related data from Bluetooth Low-Energy (BLE) beacons deployed in the indoor environment. The localization of the person and the beacons is performed simultaneously using a combination of particle and Kalman filters. We evaluated the method using data from eight participants who performed different activities in an indoor environment. As a result, the average localization error was 1.05 ± 0.44 m for the participants and 0.82 ± 0.24 m for the beacons. The proposed method is able to construct a map of the indoor environment by localizing the BLE beacons while simultaneously locating the person. The results obtained demonstrate that the proposed method could point to a promising roadmap towards the development of a simultaneous localization and home mapping system based on only one IMU and a few BLE beacons. To the best of our knowledge, this is the first method that includes the beacons' movement data as activity-related events in a pedestrian Simultaneous Localization and Mapping (SLAM) method.
20

Han, Dongxiao, Yuwen Li, Tao Song and Zhenyang Liu. “Multi-Objective Optimization of Loop Closure Detection Parameters for Indoor 2D Simultaneous Localization and Mapping”. Sensors 20, No. 7 (30.03.2020): 1906. http://dx.doi.org/10.3390/s20071906.

Abstract:
Aiming at addressing the issues related to the tuning of loop closure detection parameters for indoor 2D graph-based simultaneous localization and mapping (SLAM), this article proposes a multi-objective optimization method for these parameters. The proposed method unifies the Karto SLAM algorithm, an efficient evaluation approach for map quality with three quantitative metrics, and a multi-objective optimization algorithm. More particularly, the evaluation metrics, i.e., the proportion of occupied grids, the number of corners and the amount of enclosed areas, can reflect the errors such as overlaps, blurring and misalignment when mapping nested loops, even in the absence of ground truth. The proposed method has been implemented and validated by testing on four datasets and two real-world environments. For all these tests, the map quality can be improved using the proposed method. Only loop closure detection parameters have been considered in this article, but the proposed evaluation metrics and optimization method have potential applications in the automatic tuning of other SLAM parameters to improve the map quality.
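Of the three map-quality metrics named above, the proportion of occupied grid cells is the simplest to illustrate: blurred or misaligned walls smear probability mass over extra cells, so for the same environment a lower proportion indicates a crisper map. The helper below is a hypothetical sketch (threshold value assumed), not the authors' code.

```python
import numpy as np

def occupied_proportion(grid, occ_threshold=0.65):
    """Fraction of cells classified as occupied in an occupancy-grid map.

    `grid` holds per-cell occupancy probabilities in [0, 1]; cells at or
    above `occ_threshold` count as occupied.
    """
    occupied = np.count_nonzero(grid >= occ_threshold)
    return occupied / grid.size
```

A multi-objective optimizer can then minimize this value jointly with the corner count and enclosed-area metrics while varying the loop closure detection parameters.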
21

Karam, S., V. Lehtola and G. Vosselman. “STRATEGIES TO INTEGRATE IMU AND LIDAR SLAM FOR INDOOR MAPPING”. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2020 (03.08.2020): 223–30. http://dx.doi.org/10.5194/isprs-annals-v-1-2020-223-2020.

Abstract:
Abstract. In recent years, the importance of indoor mapping increased in a wide range of applications, such as facility management and mapping hazardous sites. The essential technique behind indoor mapping is simultaneous localization and mapping (SLAM) because SLAM offers suitable positioning estimates in environments where satellite positioning is not available. State-of-the-art indoor mobile mapping systems employ Visual-based SLAM or LiDAR-based SLAM. However, Visual-based SLAM is sensitive to textureless environments and, similarly, LiDAR-based SLAM is sensitive to a number of pose configurations where the geometry of laser observations is not strong enough to reliably estimate the six-degree-of-freedom (6DOF) pose of the system. In this paper, we present different strategies that utilize the benefits of the inertial measurement unit (IMU) in the pose estimation and support LiDAR-based SLAM in overcoming these problems. The proposed strategies have been implemented and tested using different datasets and our experimental results demonstrate that the proposed methods do indeed overcome these problems. We conclude that IMU observations increase the robustness of SLAM, which is expected, but also that the best reconstruction accuracy is obtained not with a blind use of all observations but by filtering the measurements with a proposed reliability measure. To this end, our results show promising improvements in reconstruction accuracy.
22

Lourenço, Pedro, Pedro Batista, Paulo Oliveira, Carlos Silvestre and C. L. Philip Chen. “Sensor-based globally exponentially stable range-only simultaneous localization and mapping”. Robotics and Autonomous Systems 68 (June 2015): 72–85. http://dx.doi.org/10.1016/j.robot.2015.01.010.

23

Xiong, Hui, Youping Chen, Xiaoping Li, Bing Chen and Jun Zhang. “A scan matching simultaneous localization and mapping algorithm based on particle filter”. Industrial Robot: An International Journal 43, No. 6 (17.10.2016): 607–16. http://dx.doi.org/10.1108/ir-07-2015-0138.

Abstract:
Purpose: The purpose of this paper is to present a scan matching simultaneous localization and mapping (SLAM) algorithm based on particle filter to generate the grid map online. It mainly focuses on reducing the memory consumption and alleviating the loop closure problem.
Design/methodology/approach: The proposed method alleviates the loop closure problem by improving the accuracy of the robot’s pose. First, two improvements were applied to enhance the accuracy of the hill climbing scan matching. Second, a particle filter was used to maintain the diversity of the robot’s pose and then to supply potential seeds to the hill climbing scan matching to ensure that the best match point was the global optimum. The proposed method reduces the memory consumption by maintaining only a single grid map.
Findings: Simulation and experimental results have proved that this method can build a consistent map of a complex environment. Meanwhile, it reduces the memory consumption and alleviates the loop closure problem.
Originality/value: In this paper, a new SLAM algorithm has been proposed. It can reduce the memory consumption and alleviate the loop closure problem without lowering the accuracy of the generated grid map.
24

Liu, Chang, Jin Zhao, Nianyi Sun, Qingrong Yang and Leilei Wang. “IT-SVO: Improved Semi-Direct Monocular Visual Odometry Combined with JS Divergence in Restricted Mobile Devices”. Sensors 21, No. 6 (12.03.2021): 2025. http://dx.doi.org/10.3390/s21062025.

Abstract:
Simultaneous localization and mapping (SLAM) has a wide range of applications in mobile robotics. Lightweight and inexpensive vision sensors have been widely used for localization in GPS-denied or weak-GPS environments. Mobile robots not only estimate their pose, but also correct their position according to the environment, so a proper mathematical model is required to obtain the state of the robot in its surroundings. Usually, filter-based SLAM/VO regards the model as a Gaussian distribution in the mapping thread, which deals with the complicated relationship between mean and covariance. The covariance in SLAM or VO represents the uncertainty of map points. Therefore, methods from probability theory and information theory play a significant role in estimating this uncertainty. In this paper, we combine information theory with classical semi-direct visual odometry (SVO) and use Jensen-Shannon divergence (JS divergence) instead of Kullback-Leibler divergence (KL divergence) to estimate the uncertainty of depth, a methodology better suited to SVO that improves the accuracy and robustness of mobile devices in unknown environments. The paper also aims to make efficient use of small, portable devices for localization and to provide a priori knowledge for subsequent application scenarios. JS divergence implemented in combination with SVO not only distinguishes outliers accurately, but also converges the inliers quickly. Under the same computational simulation, the results show that SVO combined with JS divergence can locate its state in the environment more accurately than the combination with KL divergence.
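The JS-divergence comparison described here can be reproduced for two 1D Gaussian depth distributions. The sketch below integrates numerically on a grid and illustrates the symmetry and boundedness (by ln 2) that distinguish JS from KL divergence; it is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated on a grid."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def js_divergence(mu1, s1, mu2, s2, num=20001):
    """Jensen-Shannon divergence between two 1D Gaussians via grid integration.

    JS(P, Q) = 0.5*KL(P||M) + 0.5*KL(Q||M) with M = (P + Q)/2; unlike KL it is
    symmetric and bounded by ln 2.
    """
    lo = min(mu1 - 8 * s1, mu2 - 8 * s2)
    hi = max(mu1 + 8 * s1, mu2 + 8 * s2)
    x = np.linspace(lo, hi, num)
    dx = x[1] - x[0]
    p = gauss_pdf(x, mu1, s1)
    q = gauss_pdf(x, mu2, s2)
    m = 0.5 * (p + q)
    eps = 1e-300  # guards log(0) where a density underflows to zero
    integrand = (0.5 * p * np.log((p + eps) / (m + eps))
                 + 0.5 * q * np.log((q + eps) / (m + eps)))
    return float(np.sum(integrand) * dx)
```

The bound at ln 2 for nearly disjoint distributions is what makes JS divergence well behaved for far-off depth outliers, where KL divergence diverges.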
25

Zhao, Xiong, Tao Zuo and Xinyu Hu. “OFM-SLAM: A Visual Semantic SLAM for Dynamic Indoor Environments”. Mathematical Problems in Engineering 2021 (08.04.2021): 1–16. http://dx.doi.org/10.1155/2021/5538840.

Abstract:
Most current visual Simultaneous Localization and Mapping (SLAM) algorithms are designed under the assumption of a static environment, and their robustness and accuracy in dynamic environments are poor. The reason is that moving objects in the scene cause mismatches of features in the pose estimation process, which further affects positioning and mapping accuracy. Meanwhile, the three-dimensional semantic map plays a key role in mobile robot navigation, path planning, and other tasks. In this paper, we present OFM-SLAM: Optical Flow combining MASK-RCNN SLAM, a novel visual SLAM for semantic mapping in dynamic indoor environments. Firstly, we use the Mask-RCNN network to detect potential moving objects and generate masks of dynamic objects. Secondly, an optical flow method is adopted to detect dynamic feature points. Then, we combine the optical flow method and Mask-RCNN to cull all dynamic points, and the SLAM system is able to track without these dynamic points. Finally, the semantic labels obtained from Mask-RCNN are mapped to the point cloud to generate a three-dimensional semantic map that contains only the static parts of the scenes and their semantic information. We evaluate our system on public TUM datasets. The results of our experiments demonstrate that our system is more effective in dynamic scenarios, and that OFM-SLAM estimates the camera pose more accurately and acquires more precise localization in highly dynamic environments.
26

Bonin-Font, Francisco, and Antoni Burguera. “Towards Multi-Robot Visual Graph-SLAM for Autonomous Marine Vehicles”. Journal of Marine Science and Engineering 8, No. 6 (14.06.2020): 437. http://dx.doi.org/10.3390/jmse8060437.

Abstract:
State of the art approaches to Multi-robot localization and mapping still present multiple issues to be improved, offering a wide range of possibilities for researchers and technology. This paper presents a new algorithm for visual Multi-robot simultaneous localization and mapping, used to join, in a common reference system, several trajectories of different robots that participate simultaneously in a common mission. One of the main problems in centralized configurations, where the leader can receive multiple data from the rest of robots, is the limited communications bandwidth that delays the data transmission and can be overloaded quickly, restricting the reactive actions. This paper presents a new approach to Multi-robot visual graph Simultaneous Localization and Mapping (SLAM) that aims to perform a joined topological map, which evolves in different directions according to the different trajectories of the different robots. The main contributions of this new strategy are centered on: (a) reducing to hashes of small dimensions the visual data to be exchanged among all agents, diminishing, in consequence, the data delivery time, (b) running two different phases of SLAM, intra- and inter-session, with their respective loop-closing tasks, with a trajectory joining action in between, with high flexibility in their combination, (c) simplifying the complete SLAM process, in concept and implementation, and addressing it to correct the trajectory of several robots, initially and continuously estimated by means of a visual odometer, and (d) executing the process online, in order to assure a successful accomplishment of the mission, with the planned trajectories and at the planned points. Primary results included in this paper show a promising performance of the algorithm in visual datasets obtained in different points on the coast of the Balearic Islands, either by divers or by an Autonomous Underwater Vehicle (AUV) equipped with cameras.
27

Cheein, Fernando Auat, Gustavo Scaglia, Fernando di Sciasio and Ricardo Carelli. “Feature Selection Criteria for Real Time EKF-SLAM Algorithm”. International Journal of Advanced Robotic Systems 6, No. 3 (01.01.2009): 21. http://dx.doi.org/10.5772/7237.

Abstract:
This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real-time and constant-time implementations of SLAM. The selection procedure consists of choosing the features to which the SLAM system state covariance is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy-based feature selection approach, is also performed.
28

Tan, Feng, Winfried Lohmiller and Jean-Jacques Slotine. “Analytical SLAM without linearization”. International Journal of Robotics Research 36, No. 13-14 (13.06.2017): 1554–78. http://dx.doi.org/10.1177/0278364917710541.

Abstract:
This paper solves the classical problem of simultaneous localization and mapping (SLAM) in a fashion that avoids linearized approximations altogether. Based on the creation of virtual synthetic measurements, the algorithm uses a linear time-varying Kalman observer, bypassing errors and approximations brought by the linearization process in traditional extended Kalman filtering SLAM. Convergence rates of the algorithm are established using contraction analysis. Different combinations of sensor information can be exploited, such as bearing measurements, range measurements, optical flow, or time-to-contact. SLAM-DUNK, a more advanced version of the algorithm in global coordinates, exploits the conditional independence property of the SLAM problem, decoupling the covariance matrices between different landmarks and reducing computational complexity to O(n). As illustrated in simulations, the proposed algorithm can solve SLAM problems in both 2D and 3D scenarios with guaranteed convergence rates in a full nonlinear context.
29

Hua, Gangchen, and Osamu Hasegawa. “A Robust Visual-Feature-Extraction Method for Simultaneous Localization and Mapping in Public Outdoor Environment”. Journal of Advanced Computational Intelligence and Intelligent Informatics 19, No. 1 (20.01.2015): 11–22. http://dx.doi.org/10.20965/jaciii.2015.p0011.

Abstract:
We describe a new feature extraction method based on the geometric structure of matched local feature points that extracts robust features from an image sequence and performs satisfactorily in highly dynamic environments. Our proposed method is more accurate than other such methods in appearance-only simultaneous localization and mapping (SLAM). Compared to position-invariant robust features [1], it is also more suitable for low-cost, single-lens cameras with narrow fields of view. We tested our method in an outdoor environment at Shibuya Station, capturing images with a conventional hand-held single-lens video camera. The test environments are public spaces without any planned landmarks. Results show that the proposed method accurately obtains matches between two visual-feature sets and that stable, accurate SLAM is achieved in dynamic public environments.
30

Liu, Xiao, Lei Zhang, Shengran Qin, Daji Tian, Shihan Ouyang and Chu Chen. “Optimized LOAM Using Ground Plane Constraints and SegMatch-Based Loop Detection”. Sensors 19, No. 24 (09.12.2019): 5419. http://dx.doi.org/10.3390/s19245419.

Abstract:
Reducing the cumulative error in the process of simultaneous localization and mapping (SLAM) has always been a hot issue. In this paper, in order to improve the localization and mapping accuracy of ground vehicles, we propose a novel optimized lidar odometry and mapping method using ground plane constraints and SegMatch-based loop detection. We only used the lidar point cloud to estimate the pose between consecutive frames, without any other sensors, such as Global Positioning System (GPS) and Inertial Measurement Unit (IMU). Firstly, the ground plane constraints were used to reduce matching errors. Then, based on more accurate lidar odometry obtained from lidar odometry and mapping (LOAM), SegMatch completed segmentation matching and loop detection to optimize the global pose. The neighborhood search was also used to accomplish the loop detection task in case of failure. Finally, the proposed method was evaluated and compared with the existing 3D lidar SLAM methods. Experiment results showed that the proposed method could realize low-drift localization and dense 3D point cloud map construction.
31

Du, Shitong, Yifan Li, Xuyou Li and Menghao Wu. “LiDAR Odometry and Mapping Based on Semantic Information for Outdoor Environment”. Remote Sensing 13, No. 15 (21.07.2021): 2864. http://dx.doi.org/10.3390/rs13152864.

Abstract:
Simultaneous Localization and Mapping (SLAM) in an unknown environment is a crucial part for intelligent mobile robots to achieve high-level navigation and interaction tasks. As one of the typical LiDAR-based SLAM algorithms, the Lidar Odometry and Mapping in Real-time (LOAM) algorithm has shown impressive results. However, LOAM only uses low-level geometric features without considering semantic information. Moreover, the lack of a dynamic object removal strategy limits the algorithm to obtain higher accuracy. To this end, this paper extends the LOAM pipeline by integrating semantic information into the original framework. Specifically, we first propose a two-step dynamic objects filtering strategy. Point-wise semantic labels are then used to improve feature extraction and searching for corresponding points. We evaluate the performance of the proposed method in many challenging scenarios, including highway, country and urban from the KITTI dataset. The results demonstrate that the proposed SLAM system outperforms the state-of-the-art SLAM methods in terms of accuracy and robustness.
32

Kuo, Bor-Woei, Hsun-Hao Chang, Yung-Chang Chen and Shi-Yu Huang. “A Light-and-Fast SLAM Algorithm for Robots in Indoor Environments Using Line Segment Map”. Journal of Robotics 2011 (2011): 1–12. http://dx.doi.org/10.1155/2011/257852.

Abstract:
Simultaneous Localization and Mapping (SLAM) is an important technique for robotic system navigation. Due to the high complexity of the algorithm, SLAM usually needs long computational time or a large amount of memory to achieve accurate results. In this paper, we present a lightweight Rao-Blackwellized particle filter- (RBPF-) based SLAM algorithm for indoor environments, which uses line segments extracted from the laser range finder as the fundamental map structure so as to reduce the memory usage. Since most major structures of indoor environments are usually orthogonal to each other, we can also efficiently increase the accuracy and reduce the complexity of our algorithm by exploiting this orthogonal property of line segments; that is, we treat line segments that are parallel or perpendicular to each other in a special way when calculating the importance weight of each particle. Experimental results show that our work is capable of drawing maps in complex indoor environments, needing only a very low amount of memory and much less computational time compared to other grid map-based RBPF SLAM algorithms.
33

Yan, Rui-Jun, Jing Wu, Ming-Lei Shao, Kyoo-Sik Shin, Ji-Yeong Lee and Chang-Soo Han. “Mutually converted arc–line segment-based SLAM with summing parameters”. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 229, No. 11 (08.09.2014): 2094–114. http://dx.doi.org/10.1177/0954406214551036.

Abstract:
This paper presents a mutually converted arc–line segment-based simultaneous localization and mapping (SLAM) algorithm by distinguishing what we call the summing parameters from other types. These redefined parameters are a combination of the coordinate values of the measuring points. Unlike most traditional features-based simultaneous localization and mapping algorithms that only update the same type of features with a covariance matrix, our algorithm can match and update different types of features, such as the arc and line. For each separated data set from every new scan, the necessary information of the measured points is stored by the small constant number of the summing parameters. The arc and line segments are extracted according to the different limit values but based on the same parameters, from which their covariance matrix can also be computed. If one stored segment matches a new extracted segment successfully, two segments can be merged as one whether the features are the same type or not. The mergence is achieved by only summing the corresponding summing parameters of the two segments. Three simultaneous localization and mapping experiments in three different indoor environments were done to demonstrate the robustness, accuracy, and effectiveness of the proposed method. The data set of the Massachusetts Institute Of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) Building was used to validate that our method has good adaptability.
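The idea of storing a segment as "summing parameters", so that merging two matched segments reduces to adding their parameters, can be sketched for the line case. This is a hypothetical minimal form (class and method names are mine, and the paper's parameter set and covariance handling are richer), using the standard total-least-squares line fit from second-order moments.

```python
import numpy as np

class LineSegment:
    """Line feature stored as summing parameters: n, Σx, Σy, Σxx, Σyy, Σxy."""

    def __init__(self, points):
        x, y = np.asarray(points, float).T
        self.n = len(x)
        self.sx, self.sy = x.sum(), y.sum()
        self.sxx, self.syy, self.sxy = (x * x).sum(), (y * y).sum(), (x * y).sum()

    def merge(self, other):
        """Merge two matched segments by simply summing their parameters."""
        merged = LineSegment.__new__(LineSegment)
        for name in ("n", "sx", "sy", "sxx", "syy", "sxy"):
            setattr(merged, name, getattr(self, name) + getattr(other, name))
        return merged

    def fit(self):
        """Total-least-squares line: centroid and direction angle from the sums."""
        cx, cy = self.sx / self.n, self.sy / self.n
        cxx = self.sxx / self.n - cx * cx
        cyy = self.syy / self.n - cy * cy
        cxy = self.sxy / self.n - cx * cy
        theta = 0.5 * np.arctan2(2 * cxy, cxx - cyy)
        return (cx, cy), theta
```

Because the sums fully determine the fit, merging two halves of a wall yields exactly the same line as fitting all of its points at once, at constant memory per feature.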
34

Botteghi, N., B. Sirmacek, R. Schulte, M. Poel and C. Brune. “REINFORCEMENT LEARNING HELPS SLAM: LEARNING TO BUILD MAPS”. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (25.08.2020): 329–35. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-329-2020.

Abstract:
Abstract. In this research, we investigate the use of Reinforcement Learning (RL) for an effective and robust solution for exploring unknown and indoor environments and reconstructing their maps. We benefit from a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in different environments with growing complexity. The performances of the three different RL-based path planners are assessed not only on the training environments, but also on an a priori unseen environment to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
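A reward that drives the policy to maximize map coverage, as described above, can be as simple as counting newly observed cells per step. The helper below is a hypothetical sketch of that idea, not one of the paper's three reward functions.

```python
import numpy as np

def coverage_reward(seen_before, seen_now):
    """Reward = number of map cells observed for the first time this step.

    Both arguments are boolean grids of the same shape; a cell contributes
    only if it is visible now and was never visible before.
    """
    newly = np.logical_and(seen_now, np.logical_not(seen_before))
    return int(newly.sum())
```

Such a reward is zero once the map is fully explored, which naturally terminates exploration.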
35

Hastürk, Özgür, and Aydan M. Erkmen. “DUDMap: 3D RGB-D mapping for dense, unstructured, and dynamic environment”. International Journal of Advanced Robotic Systems 18, No. 3 (01.05.2021): 172988142110161. http://dx.doi.org/10.1177/17298814211016178.

Abstract:
Simultaneous localization and mapping (SLAM) problem has been extensively studied by researchers in the field of robotics, however, conventional approaches in mapping assume a static environment. The static assumption is valid only in a small region, and it limits the application of visual SLAM in dynamic environments. The recently proposed state-of-the-art SLAM solutions for dynamic environments use different semantic segmentation methods such as mask R-CNN and SegNet; however, these frameworks are based on a sparse mapping framework (ORBSLAM). In addition, segmentation process increases the computational power, which makes these SLAM algorithms unsuitable for real-time mapping. Therefore, there is no effective dense RGB-D SLAM method for real-world unstructured and dynamic environments. In this study, we propose a novel real-time dense SLAM method for dynamic environments, where 3D reconstruction error is manipulated for identification of static and dynamic classes having generalized Gaussian distribution. Our proposed approach requires neither explicit object tracking nor object classifier, which makes it robust to any type of moving object and suitable for real-time mapping. Our method eliminates the repeated views and uses consistent data that enhance the performance of volumetric fusion. For completeness, we compare our proposed method using different types of high dynamic dataset, which are publicly available, to demonstrate the versatility and robustness of our approach. Experiments show that its tracking performance is better than other dense and dynamic SLAM approaches.
36

Jiang, Guolai, Lei Yin, Guodong Liu, Weina Xi and Yongsheng Ou. “FFT-Based Scan-Matching for SLAM Applications with Low-Cost Laser Range Finders”. Applied Sciences 9, No. 1 (22.12.2018): 41. http://dx.doi.org/10.3390/app9010041.

Abstract:
Simultaneous Localization and Mapping (SLAM) is an active area of robot research. SLAM with a laser range finder (LRF) is effective for localization and navigation. However, commercial robots usually have to use low-cost LRF sensors, which result in lower resolution and higher noise. Traditional scan-matching algorithms may often fail while the robot is running too quickly in complex environments. In order to enhance the stability of matching in the case of large pose differences, this paper proposes a new method of scan-matching mainly based on Fast Fourier Transform (FFT) as well as its application with a low-cost LRF sensor. In our method, we change the scan data within a range of distances from the laser to various images. FFT is applied to the images to determine the rotation angle and translation parameters. Meanwhile, a new kind of feature based on missing data is proposed to determine the rough estimation of the rotation angle under some representative scenes, such as corridors. Finally, Iterative Closest Point (ICP) is applied to determine the best match. Experimental results show that the proposed method can improve the scan-matching and SLAM performance for low-cost LRFs in complex environments.
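Determining translation parameters from images via FFT, as described above, is classically done by phase correlation. The sketch below handles integer shifts only and is a minimal illustration, not the paper's method, which additionally recovers rotation and exploits missing-data features.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the integer (row, col) shift that rolls img_a onto img_b.

    The normalized cross-power spectrum keeps only the phase difference of
    the two images; its inverse FFT is a correlation surface whose peak sits
    at the relative translation.
    """
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative shifts (FFT wrap-around).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Because the magnitude is normalized away, the peak stays sharp even when the two scans differ in brightness or contain sensor noise, which is what makes the approach robust for low-cost range finders rendered as images.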
37

Li, Shan Shan, Ya Hong Xu, Xiao Long Xu and Ning Hui Zhu. “Simulation Research on Multi-Robot SLAM of Information Filter”. Applied Mechanics and Materials 278-280 (January 2013): 660–63. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.660.

Abstract:
Multi-robot SLAM is a technique in which multiple robots cooperatively perform simultaneous localization and mapping in an unknown environment. We use an information filter algorithm to solve the multi-robot SLAM problem. Information filters use an information matrix to describe the uncertain information of the SLAM system, and the multi-robot SLAM information matrix contains all robot poses and all feature observations made by all robots. We set up the multi-robot models, motion models and observation models, and build a feature environment for computer simulation. We set a threshold to divide the features observed by robot k at the current time: one part of the features is used for relocalization as active maps, and the other features are used to update the information matrix and information state. The simulation data show that information filters can maintain the accuracy of the robot poses and keep the motion update and the observation update within a constant range; the algorithm improves the effectiveness of multi-robot SLAM.
38

Ekvall, Staffan, Danica Kragic and Patric Jensfelt. “Object detection and mapping for service robot tasks”. Robotica 25, No. 2 (March 2007): 175–87. http://dx.doi.org/10.1017/s0263574706003237.

Abstract:
The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario, where a simultaneous localization and mapping (SLAM) and object detection and recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as points and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.
39

Jiang, Guolai, Lei Yin, Shaokun Jin, Chaoran Tian, Xinbo Ma and Yongsheng Ou. “A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion”. Applied Sciences 9, No. 10 (22.05.2019): 2105. http://dx.doi.org/10.3390/app9102105.

Abstract:
The method of simultaneous localization and mapping (SLAM) using a light detection and ranging (LiDAR) sensor is commonly adopted for robot navigation. However, consumer robots are price sensitive and often have to use low-cost sensors. Due to the poor performance of a low-cost LiDAR, error accumulates rapidly during SLAM, which may cause large errors when building a larger map. To cope with this problem, this paper proposes a new graph optimization-based SLAM framework through the combination of a low-cost LiDAR sensor and a vision sensor. In the SLAM framework, a new cost function considering both scan and image data is proposed, and the Bag of Words (BoW) model with visual features is applied for loop closure detection. A 2.5D map presenting both obstacles and vision features is also proposed, as well as a fast relocation method with the map. Experiments were carried out on a service robot equipped with a 360° low-cost LiDAR and a front-view RGB-D camera in a real indoor scene. The results show that the proposed method has better performance than using LiDAR or camera only, while the relocation speed with our 2.5D map is much faster than with a traditional grid map.
40

Liu, Yanli, Heng Zhang and Chao Huang. “A Novel RGB-D SLAM Algorithm Based on Cloud Robotics”. Sensors 19, No. 23 (01.12.2019): 5288. http://dx.doi.org/10.3390/s19235288.

Abstract:
In this paper, we present a novel red-green-blue-depth simultaneous localization and mapping (RGB-D SLAM) algorithm based on cloud robotics, which combines RGB-D SLAM with the cloud robot and offloads the back-end process of the RGB-D SLAM algorithm to the cloud. This paper analyzes the front and back parts of the original RGB-D SLAM algorithm and improves the algorithm from three aspects: feature extraction, point cloud registration, and pose optimization. Experiments show the superiority of the improved algorithm. In addition, taking advantage of the cloud robotics, the RGB-D SLAM algorithm is combined with the cloud robot and the back-end part of the computationally intensive algorithm is offloaded to the cloud. Experimental validation is provided, which compares the cloud robotic-based RGB-D SLAM algorithm with the local RGB-D SLAM algorithm. The results of the experiments demonstrate the superiority of our framework. The combination of cloud robotics and RGB-D SLAM can not only improve the efficiency of SLAM but also reduce the robot’s price and size.
41

Liu, Quanpan, Zhengjie Wang, and Huan Wang. "SD-VIS: A Fast and Accurate Semi-Direct Monocular Visual-Inertial Simultaneous Localization and Mapping (SLAM)." Sensors 20, No. 5 (March 9, 2020): 1511. http://dx.doi.org/10.3390/s20051511.

Full text of the source
Annotation:
In practical applications, achieving a good balance between high accuracy and computational efficiency is a main challenge for simultaneous localization and mapping (SLAM). To address this challenge, we propose SD-VIS, a novel, fast, and accurate semi-direct visual-inertial SLAM framework that can estimate camera motion and the structure of surrounding sparse scenes. In the initialization procedure, we align the pre-integrated IMU measurements with the visual images and calibrate the metric scale, initial velocity, gravity vector, and gyroscope bias using multiple view geometry (MVG) theory based on the feature-based method. At the front-end, keyframes are tracked by the feature-based method and used for back-end optimization and loop closure detection, while non-keyframes are used for fast tracking by the direct method. This strategy gives the system not only the better real-time performance of the direct method but also the high accuracy and loop closure detection ability of the feature-based method. At the back-end, we propose a sliding window-based, tightly-coupled optimization framework that obtains more accurate state estimates by minimizing the visual and IMU measurement errors. To limit computational complexity, we adopt a marginalization strategy to fix the number of keyframes in the sliding window. Experimental evaluation on the EuRoC dataset demonstrates the feasibility and superior real-time performance of SD-VIS. Compared with state-of-the-art SLAM systems, we achieve a better balance between accuracy and speed.
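The marginalization strategy mentioned in this abstract, fixing the number of keyframes in the sliding window, can be sketched roughly as follows; the class and its bookkeeping are hypothetical simplifications (a real system would fold the removed keyframe's information into a prior rather than discard it):

```python
from collections import deque

class SlidingWindow:
    """Fixed-size keyframe window: when a new keyframe arrives and the
    window is full, the oldest keyframe is marginalized out (here simply
    removed and recorded; a real optimizer keeps its information as a prior)."""

    def __init__(self, max_keyframes=10):
        self.max_keyframes = max_keyframes
        self.keyframes = deque()
        self.marginalized = []  # ids no longer in the active optimization window

    def add_keyframe(self, kf_id):
        self.keyframes.append(kf_id)
        if len(self.keyframes) > self.max_keyframes:
            self.marginalized.append(self.keyframes.popleft())

w = SlidingWindow(max_keyframes=3)
for i in range(5):
    w.add_keyframe(i)
# the active window now holds keyframes 2, 3, 4; 0 and 1 were marginalized
```

This keeps the optimization cost bounded regardless of trajectory length, which is the point of the strategy described above.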
42

Cheein, Fernando A. Auat, Fernando di Sciascio, Gustavo Scaglia, and Ricardo Carelli. "Towards features updating selection based on the covariance matrix of the SLAM system state." Robotica 29, No. 2 (March 31, 2010): 271–82. http://dx.doi.org/10.1017/s0263574710000111.

Full text of the source
Annotation:
Summary: This paper addresses the problem of a feature selection criterion for a simultaneous localization and mapping (SLAM) algorithm implemented on a mobile robot. The SLAM algorithm is a sequential extended Kalman filter (EKF) implementation that extracts corners and lines from the environment. The selection procedure follows the convergence theorem of EKF-based SLAM: only those features that contribute most to decreasing the volume of the uncertainty ellipsoid of the SLAM system state are chosen for the correction stage of the algorithm. The proposed feature selection procedure restricts the number of features to be updated during the SLAM process, thus allowing real-time implementations with non-reactive mobile robot navigation controllers. In addition, a Monte Carlo experiment is carried out to show the map reconstruction precision according to the Kullback–Leibler divergence curves. A consistency analysis of the proposed SLAM algorithm and experimental results in real environments are also presented in this work.
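A heavily simplified, scalar illustration of the selection criterion described above, preferring features whose EKF update shrinks the state uncertainty the most, might look like this; the candidate names, Jacobians, and noise values are invented for the example:

```python
def posterior_variance(P, H, R):
    """Scalar EKF update: state variance after fusing one measurement
    with observation Jacobian H and measurement noise R."""
    K = P * H / (H * P * H + R)  # Kalman gain
    return (1.0 - K * H) * P

def select_features(P, candidates, k):
    """Pick the k candidate features whose update would leave the
    smallest posterior variance, i.e. shrink uncertainty the most."""
    scored = sorted(candidates, key=lambda c: posterior_variance(P, c["H"], c["R"]))
    return [c["name"] for c in scored[:k]]

candidates = [
    {"name": "corner_a", "H": 1.0, "R": 0.1},  # precise observation
    {"name": "line_b",   "H": 1.0, "R": 1.0},  # noisier observation
    {"name": "corner_c", "H": 0.5, "R": 0.1},  # precise but weakly observed
]
best = select_features(P=1.0, candidates=candidates, k=1)
```

In the full algorithm the state and covariance are matrices and the criterion is the ellipsoid volume (a determinant), but the ranking idea is the same.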
43

Wang, Shuhang, Jianfeng Li, Pengshuai Yang, Tianxiao Gao, Alex R. Bowers, and Gang Luo. "Towards Wide Range Tracking of Head Scanning Movement in Driving." International Journal of Pattern Recognition and Artificial Intelligence 34, No. 13 (April 20, 2020): 2050033. http://dx.doi.org/10.1142/s0218001420500330.

Full text of the source
Annotation:
Gaining environmental awareness through lateral head scanning (yaw rotations) is important for driving safety, especially when approaching intersections. Therefore, head scanning movements could be an important behavioral metric for driving safety research and driving risk mitigation systems. Tracking head scanning movements with a single in-car camera is preferred hardware-wise, but it is very challenging to track the head over almost a [Formula: see text] range. In this paper, we investigate two state-of-the-art methods: a multi-loss deep residual learning method with 50 layers (multi-loss ResNet-50) and an ORB feature-based simultaneous localization and mapping method (ORB-SLAM). While deep learning methods have been extensively studied for head pose detection, this is the first study in which SLAM has been employed to track head scanning over a very wide range. Our laboratory experiments showed that ORB-SLAM was more accurate than multi-loss ResNet-50, which often failed when many facial features were not in view; ORB-SLAM, by contrast, was able to continue tracking because it does not rely on particular facial features. Testing with real driving videos demonstrated the feasibility of using ORB-SLAM for tracking large lateral head scans in naturalistic video data.
44

Sakai, Atsushi, Teppei Saitoh, and Yoji Kuroda. "Robust Landmark Estimation and Unscented Particle Sampling for SLAM in Dynamic Outdoor Environment." Journal of Robotics and Mechatronics 22, No. 2 (April 20, 2010): 140–49. http://dx.doi.org/10.20965/jrm.2010.p0140.

Full text of the source
Annotation:
In this paper, we propose a set of techniques for accurate and practical simultaneous localization and mapping (SLAM) in dynamic outdoor environments. The techniques fall into two categories: landmark estimation and unscented particle sampling. Landmark estimation features stable feature detection and data management for estimating landmarks accurately, robustly, and at low computational cost. The stable feature detection removes dynamic objects and sensor noise with scan subtraction, detects feature points sparsely and evenly, and sets data association parameters based on landmark density. The data management calculates landmark existence probability so that spurious landmarks are removed, utilizes landmark exclusivity for data association, and predicts importance weights using the observation range. Unscented particle sampling is based on the unscented transformation for accurate SLAM. Simulation results of SLAM using our landmark estimation and experimental results of our SLAM in dynamic outdoor environments are presented and discussed. The results show that our landmark estimation decreases SLAM calculation time and maximum position error by 80% compared to conventional landmark estimation, and that position estimation with unscented particle sampling is more accurate than FastSLAM 2.0 in dynamic outdoor environments.
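The unscented transformation underlying the paper's particle sampling can be sketched in one dimension; the weights follow the standard kappa parameterization, and the linear test function is chosen only because the transform is exact for it:

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinearity f
    using sigma points instead of linearization."""
    spread = math.sqrt((1.0 + kappa) * var)
    sigma_points = [mean, mean + spread, mean - spread]
    w0 = kappa / (1.0 + kappa)          # weight of the central point
    wi = 1.0 / (2.0 * (1.0 + kappa))    # weight of each outer point
    weights = [w0, wi, wi]
    ys = [f(x) for x in sigma_points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For a linear function the transform reproduces the exact moments:
m, v = unscented_transform_1d(1.0, 4.0, lambda x: 2.0 * x + 1.0)
```

In the n-dimensional case the same idea uses 2n+1 sigma points drawn from a matrix square root of the covariance.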
45

Rohou, Simon, Peter Franek, Clément Aubry, and Luc Jaulin. "Proving the existence of loops in robot trajectories." International Journal of Robotics Research 37, No. 12 (October 2018): 1500–1516. http://dx.doi.org/10.1177/0278364918808367.

Full text of the source
Annotation:
In this paper we present a reliable method to verify the existence of loops along the uncertain trajectory of a robot, based on proprioceptive measurements only, within a bounded-error context. Loop closure detection is one of the key points in simultaneous localization and mapping (SLAM) methods, especially in homogeneous environments with difficult scene recognition. The proposed approach is generic and could be coupled with conventional SLAM algorithms to reliably reduce their computing burden, thus improving the localization and mapping processes in the most challenging environments, such as unexplored underwater extents. To prove that a robot performed a loop regardless of the uncertainties in its evolution, we employ the notion of topological degree, which originates in the field of differential topology. We show that a verification tool based on the topological degree is an optimal method for proving robot loops. This is demonstrated both on datasets from real missions involving autonomous underwater vehicles and by a mathematical discussion.
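The topological degree argument can be illustrated with a discrete winding-number computation: a nonzero degree of a map along a closed boundary certifies a zero (here, a loop closure) somewhere inside the enclosed region. This sketch ignores the bounded-error interval machinery the paper actually uses:

```python
import math

def winding_number(points):
    """Degree of a closed polyline of 2-D values around the origin.
    A nonzero result proves the underlying map hits zero inside."""
    total = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]  # wrap to close the curve
        d = math.atan2(y2, x2) - math.atan2(y1, x1)
        # wrap each angle increment into (-pi, pi]
        while d > math.pi:
            d -= 2.0 * math.pi
        while d <= -math.pi:
            d += 2.0 * math.pi
        total += d
    return round(total / (2.0 * math.pi))

# A curve enclosing the origin once has degree 1:
circle = [(math.cos(t), math.sin(t))
          for t in (2.0 * math.pi * k / 8 for k in range(8))]
```

In the loop-detection setting the 2-D values would be position differences p(t2) - p(t1) evaluated along the boundary of a box in the (t1, t2) plane.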
46

Zhao, Shibo, and Zheng Fang. "Direct Depth SLAM: Sparse Geometric Feature Enhanced Direct Depth SLAM System for Low-Texture Environments." Sensors 18, No. 10 (October 6, 2018): 3339. http://dx.doi.org/10.3390/s18103339.

Full text of the source
Annotation:
This paper presents a real-time, robust, and low-drift depth-only SLAM (simultaneous localization and mapping) method for depth cameras that utilizes both dense range flow and sparse geometric features from sequential depth images. The proposed method is composed of three optimization layers, namely the Direct Depth layer, the ICP (iterative closest point) Refined layer, and the Graph Optimization layer. The Direct Depth layer uses a range flow constraint equation to solve the fast 6-DOF (six degrees of freedom) frame-to-frame pose estimation problem. Then, the ICP Refined layer reduces local drift by applying a local-map-based motion estimation strategy. After that, we propose a loop closure detection algorithm that extracts and matches sparse geometric features and constructs a pose graph for global pose optimization. We evaluate the performance of our method using benchmark datasets and real scene data. Experimental results show that our front-end algorithm clearly outperforms the classic methods and that our back-end algorithm is robust in finding loop closures and reducing global drift.
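The ICP refinement step named above rests on closed-form rigid alignment of corresponding points. A minimal 2-D, single-iteration sketch with known correspondences (a simplification: real ICP must also establish the correspondences, typically by nearest-neighbor search) is:

```python
import math

def icp_step(source, target):
    """One point-to-point alignment step for paired 2-D points:
    closed-form 2-D Procrustes fit of rotation theta and translation t."""
    n = len(source)
    csx = sum(p[0] for p in source) / n   # source centroid
    csy = sum(p[1] for p in source) / n
    ctx = sum(p[0] for p in target) / n   # target centroid
    cty = sum(p[1] for p in target) / n
    # accumulate the two independent terms of the 2-D cross-covariance
    sxx = sum((s[0] - csx) * (t[0] - ctx) + (s[1] - csy) * (t[1] - cty)
              for s, t in zip(source, target))
    sxy = sum((s[0] - csx) * (t[1] - cty) - (s[1] - csy) * (t[0] - ctx)
              for s, t in zip(source, target))
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = ctx - (c * csx - s * csy)
    ty = cty - (s * csx + c * csy)
    return theta, (tx, ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# target is src rotated by 90 degrees and shifted by (2, 0)
tgt = [(2.0, 0.0), (2.0, 1.0), (1.0, 0.0)]
theta, t = icp_step(src, tgt)
```

Iterating this step after re-matching correspondences gives the classic ICP loop; the paper layers it on top of a range-flow initial estimate.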
47

Takleh Omar Takleh, Talha, Nordin Abu Bakar, Shuzlina Abdul Rahman, Raseeda Hamzah, and Zalilah Abd Aziz. "A Brief Survey on SLAM Methods in Autonomous Vehicle." International Journal of Engineering & Technology 7, No. 4.27 (November 30, 2018): 38. http://dx.doi.org/10.14419/ijet.v7i4.27.22477.

Full text of the source
Annotation:
The overall purpose of this paper is to provide an introductory survey of simultaneous localization and mapping (SLAM), particularly its use in autonomous vehicles or, more specifically, in self-driving cars, especially after the release of commercial semi-autonomous cars like the Tesla vehicles and the Google Waymo vehicle. Before diving into the concept of SLAM, we need to understand its importance and the problems that have driven the various methods developed by numerous researchers to solve it. Thus, in this paper we start by giving the general concept behind SLAM, followed by details of its different categories and the various methods that implement SLAM in today's autonomous vehicles and can solve the SLAM problem. These methods are the current trends that the research community focuses on in producing solutions to the SLAM problem, not only for autonomous vehicles but in the robotics field as well. Next, we compare each of these methods in terms of pros and cons before concluding the paper with a look at future SLAM challenges.
48

Ulmschneider, Markus, Christian Gentner, Thomas Jost, and Armin Dammann. "Rao-Blackwellized Gaussian Sum Particle Filtering for Multipath Assisted Positioning." Journal of Electrical and Computer Engineering 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/4761601.

Full text of the source
Annotation:
In multipath assisted positioning, multipath components arriving at a receiver are regarded as being transmitted by a virtual transmitter in a line-of-sight condition. As the locations and clock offsets of the virtual and physical transmitters are in general unknown, simultaneous localization and mapping (SLAM) schemes can be applied to simultaneously localize a user and estimate the states of physical and virtual transmitters as landmarks. Hence, multipath assisted positioning enables localizing a user with only one physical transmitter, depending on the scenario. In this paper, we present and derive a novel filtering approach for our multipath assisted positioning algorithm called Channel-SLAM. Making use of Rao-Blackwellization, the location of a user is tracked by a particle filter, and each landmark is represented by a sum of Gaussian probability density functions whose parameters are estimated by unscented Kalman filters. Since data association, that is, finding correspondences among landmarks, is essential for robust long-term SLAM, we also derive a data association scheme. We evaluate our filtering approach for multipath assisted positioning by simulations in an urban scenario and by outdoor measurements.
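The Rao-Blackwellized structure described above, sampling the user's trajectory with particles while keeping each landmark as an analytic distribution, can be caricatured with a scalar Kalman update per landmark; this sketch omits the paper's Gaussian-sum and unscented-Kalman components entirely, and all names and noise values are invented:

```python
class RBParticle:
    """One particle in a Rao-Blackwellized filter: the user position is a
    sampled hypothesis, while each landmark is a Gaussian (mean, variance)
    updated analytically instead of being sampled."""

    def __init__(self, position):
        self.position = position
        self.landmarks = {}  # landmark id -> (mean, variance), scalar for brevity
        self.weight = 1.0

    def update_landmark(self, lm_id, measurement, noise_var=0.25):
        """Scalar Kalman update of one landmark estimate from a new measurement."""
        if lm_id not in self.landmarks:
            self.landmarks[lm_id] = (measurement, noise_var)
            return
        mean, var = self.landmarks[lm_id]
        k = var / (var + noise_var)  # Kalman gain
        self.landmarks[lm_id] = (mean + k * (measurement - mean),
                                 (1.0 - k) * var)

p = RBParticle(position=(0.0, 0.0))
p.update_landmark("tx1", 10.0)
p.update_landmark("tx1", 12.0)  # variance shrinks, mean moves toward 12
```

Conditioning the landmark filters on each particle's sampled trajectory is what makes the factorization exact, which is the core of the Rao-Blackwellization idea.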
49

Munguía, Rodrigo, Carlos López-Franco, Emmanuel Nuño, and Adriana López-Franco. "Method for SLAM Based on Omnidirectional Vision: A Delayed-EKF Approach." Journal of Sensors 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/7342931.

Full text of the source
Annotation:
This work presents a method for implementing a visual simultaneous localization and mapping (SLAM) system using omnidirectional vision data, with application to autonomous mobile robots. In SLAM, a mobile robot operates in an unknown environment, using only on-board sensors to simultaneously build a map of its surroundings and track its position within it. SLAM is perhaps one of the most fundamental problems to solve in robotics in order to build truly autonomous mobile robots. The visual sensor used in this work is an omnidirectional vision sensor, which provides a wide field of view that is advantageous for a mobile robot in an autonomous navigation task. Since the sensor is monocular, a method to recover the depth of the features is required; to estimate the unknown depth we propose a novel stochastic triangulation technique. The proposed system can be applied to indoor or cluttered environments for visual-based navigation when a GPS signal is not available. Experiments with synthetic and real data are presented to validate the proposal.
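The geometric core of recovering feature depth from a moving monocular sensor is triangulation from two bearing observations. A deterministic 2-D sketch follows (the paper's technique is stochastic and EKF-delayed, which this example does not capture):

```python
import math

def triangulate_2d(p1, theta1, p2, theta2):
    """Intersect two bearing rays (sensor positions p1, p2; absolute
    bearing angles theta1, theta2) to recover a landmark position."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # solve p1 + s*d1 = p2 + t*d2 for s via the 2-D cross-product form
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # zero when rays are parallel
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    s = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

# landmark at (1, 1) seen from (0, 0) at 45 degrees and from (2, 0) at 135 degrees
lm = triangulate_2d((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
```

A stochastic variant would sample depth hypotheses along the first ray and weight them by consistency with later observations rather than intersecting exactly.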
50

Cho, Sungjin, Chansoo Kim, Jaehyun Park, Myoungho Sunwoo, and Kichun Jo. "Semantic Point Cloud Mapping of LiDAR Based on Probabilistic Uncertainty Modeling for Autonomous Driving." Sensors 20, No. 20 (October 19, 2020): 5900. http://dx.doi.org/10.3390/s20205900.

Full text of the source
Annotation:
LiDAR-based simultaneous localization and mapping (SLAM), which provides environmental information for autonomous vehicles through map building, is a major challenge for autonomous driving. With the advent of deep neural network-based semantic segmentation algorithms, semantic information has also been used for LiDAR-based SLAM. Semantically segmented point clouds provide a much greater range of functionality for autonomous vehicles than geometry alone and can play an important role in the mapping step. However, due to the uncertainty of the semantic segmentation algorithms, semantically segmented point clouds cannot be directly used for SLAM. To overcome this limitation, this paper proposes a semantic segmentation-based LiDAR SLAM system that accounts for the uncertainty of the semantic segmentation algorithms. The uncertainty is explicitly modeled by proposed probability models derived from data-driven approaches. Based on these probability models, this paper proposes semantic registration, which calculates the transformation relationship of consecutive point clouds using semantic information. Furthermore, the proposed probability models are used to determine the semantic class of points when multiple scans indicate different classes due to the uncertainty. The proposed framework is verified and evaluated on the KITTI dataset and in outdoor environments. The experimental results show that the proposed semantic mapping framework reduces mapping pose errors and eliminates ambiguity in the semantic information of the generated semantic map.
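Resolving conflicting per-scan semantic labels with a probability model, as described above, can be sketched as maximum-likelihood fusion under a confusion matrix; the class names and matrix values below are invented for illustration, not the paper's learned models:

```python
import math

# Hypothetical per-class confusion model: P(observed label | true class).
CONFUSION = {
    "road":     {"road": 0.9,  "car": 0.05, "building": 0.05},
    "car":      {"road": 0.1,  "car": 0.8,  "building": 0.1},
    "building": {"road": 0.05, "car": 0.15, "building": 0.8},
}

def fuse_labels(observations):
    """Pick the true class maximizing the summed log-likelihood of all
    per-scan observed labels, so one misclassified scan cannot flip the result."""
    def log_lik(true_cls):
        return sum(math.log(CONFUSION[true_cls][obs]) for obs in observations)
    return max(CONFUSION, key=log_lik)

label = fuse_labels(["car", "car", "road", "car"])
```

The log-likelihood sum accumulates evidence across scans, which is the qualitative effect the paper's probability models achieve for points observed with different classes.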
