Journal articles on the topic "Slam LiDAR"

Consult the top 50 journal articles for your research on the topic "Slam LiDAR".

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Jie, Lu, Zhi Jin, Jinping Wang, Letian Zhang, and Xiaojun Tan. "A SLAM System with Direct Velocity Estimation for Mechanical and Solid-State LiDARs". Remote Sensing 14, no. 7 (April 4, 2022): 1741. http://dx.doi.org/10.3390/rs14071741.

Abstract
Simultaneous localization and mapping (SLAM) is essential for intelligent robots operating in unknown environments. However, existing algorithms are typically developed for specific types of solid-state LiDARs, leading to weak feature representation abilities for new sensors. Moreover, LiDAR-based SLAM methods are limited by distortions caused by LiDAR ego motion. To address the above issues, this paper presents a versatile and velocity-aware LiDAR-based odometry and mapping (VLOM) system. A spherical projection-based feature extraction module is utilized to process the raw point cloud generated by various LiDARs, hence avoiding the time-consuming adaptation of various irregular scan patterns. The extracted features are grouped into higher-level clusters to filter out smaller objects and reduce false matching during feature association. Furthermore, bundle adjustment is adopted to jointly estimate the poses and velocities for multiple scans, effectively improving the velocity estimation accuracy and compensating for point cloud distortions. Experiments on publicly available datasets demonstrate the superiority of VLOM over other state-of-the-art LiDAR-based SLAM systems in terms of accuracy and robustness. Additionally, the satisfactory performance of VLOM on RS-LiDAR-M1, a newly released solid-state LiDAR, shows its applicability to a wide range of LiDARs.
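A minimal sketch (not the authors' implementation) of the spherical-projection idea described above: the raw point cloud is mapped into a range image so that features can be extracted independently of the sensor's scan pattern. The image size and vertical field of view are illustrative values, not taken from the paper.

import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project an (N, 3) point cloud into an (h, w) range image.

    FOV bounds and image size are assumed values; a real system
    would take them from the sensor's datasheet.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range per point
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * (h - 1)
    v = 0.5 * (yaw / np.pi + 1.0) * (w - 1)

    u = np.clip(np.round(u).astype(int), 0, h - 1)
    v = np.clip(np.round(v).astype(int), 0, w - 1)

    image = np.full((h, w), -1.0)                      # -1 marks empty pixels
    image[u, v] = r
    return image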
2

Sier, Ha, Qingqing Li, Xianjia Yu, Jorge Peña Queralta, Zhuo Zou, and Tomi Westerlund. "A Benchmark for Multi-Modal LiDAR SLAM with Ground Truth in GNSS-Denied Environments". Remote Sensing 15, no. 13 (June 28, 2023): 3314. http://dx.doi.org/10.3390/rs15133314.

Abstract
LiDAR-based simultaneous localization and mapping (SLAM) approaches have obtained considerable success in autonomous robotic systems. This is in part owing to the high accuracy of robust SLAM algorithms and the emergence of new and lower-cost LiDAR products. This study benchmarks the current state-of-the-art LiDAR SLAM algorithms with a multi-modal LiDAR sensor setup, showcasing diverse scanning modalities (spinning and solid state) and sensing technologies, and LiDAR cameras, mounted on a mobile sensing and computing platform. We extend our previous multi-modal multi-LiDAR dataset with additional sequences and new sources of ground truth data. Specifically, we propose a new multi-modal multi-LiDAR SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps. With these maps, we then match real-time point cloud data using a normal distributions transform (NDT) method to obtain the ground truth with a full six-degrees-of-freedom (DOF) pose estimation. These novel ground truth data leverage high-resolution spinning and solid-state LiDARs. We also include new open road sequences with GNSS-RTK data and additional indoor sequences with motion capture (MOCAP) ground truth, complementing the previous forest sequences with MOCAP data. We perform an analysis of the positioning accuracy achieved, comprising ten unique configurations generated by pairing five distinct LiDAR sensors with five SLAM algorithms, to critically compare and assess their respective performance characteristics. We also report the resource utilization in four different computational platforms and a total of five settings (Intel and Jetson ARM CPUs). Our experimental results show that the current state-of-the-art LiDAR SLAM algorithms perform very differently for different types of sensors. More results, code, and the dataset can be found at GitHub.
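As a toy illustration of the NDT matching step used above for ground-truth registration: map points are summarized as one Gaussian per voxel, and a candidate pose is scored by the likelihood of the transformed scan points under those Gaussians. The voxel size, minimum point count, and score form are simplifying assumptions, not the paper's exact configuration.

import numpy as np
from collections import defaultdict

def build_ndt_voxels(map_points, voxel=1.0):
    """Summarize (N, 3) map points as one (mean, covariance) per voxel."""
    cells = defaultdict(list)
    for p in map_points:
        cells[tuple(np.floor(p / voxel).astype(int))].append(p)
    return {k: (np.mean(v, axis=0), np.cov(np.array(v).T) + 1e-3 * np.eye(3))
            for k, v in cells.items() if len(v) >= 5}

def ndt_score(scan_points, T, voxels, voxel=1.0):
    """Score a candidate 4x4 pose T: higher means a better match."""
    pts = (T[:3, :3] @ scan_points.T).T + T[:3, 3]
    score = 0.0
    for p in pts:
        cell = tuple(np.floor(p / voxel).astype(int))
        if cell in voxels:
            mu, cov = voxels[cell]
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score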
3

Zhao, Yu-Lin, Yi-Tian Hong, and Han-Pang Huang. "Comprehensive Performance Evaluation between Visual SLAM and LiDAR SLAM for Mobile Robots: Theories and Experiments". Applied Sciences 14, no. 9 (May 6, 2024): 3945. http://dx.doi.org/10.3390/app14093945.

Abstract
SLAM (Simultaneous Localization and Mapping), primarily relying on camera or LiDAR (Light Detection and Ranging) sensors, plays a crucial role in robotics for localization and environmental reconstruction. This paper assesses the performance of two leading methods, namely ORB-SLAM3 and SC-LeGO-LOAM, focusing on localization and mapping in both indoor and outdoor environments. The evaluation employs artificial and cost-effective datasets incorporating data from a 3D LiDAR and an RGB-D (color and depth) camera. A practical approach is introduced for calculating ground-truth trajectories, and during benchmarking, reconstruction maps based on the ground truth are established. To assess performance, the absolute trajectory error (ATE) and relative pose error (RPE) are utilized to evaluate localization accuracy; the standard deviation is employed to compare the stability of the localization process across methods. While both algorithms exhibit satisfactory positioning accuracy, their performance is suboptimal in scenarios with inadequate textures. Furthermore, 3D reconstruction maps established by the two approaches are also provided for direct observation of their differences and the limitations encountered during map construction. Moreover, the research includes a comprehensive comparison of computational performance metrics, encompassing Central Processing Unit (CPU) utilization and memory usage, with an in-depth analysis. This evaluation revealed that Visual SLAM requires more CPU resources than LiDAR SLAM, primarily due to additional data storage requirements, emphasizing the impact of environmental factors on resource requirements. In conclusion, LiDAR SLAM is more suitable for the outdoors due to its comprehensive nature, while Visual SLAM excels indoors, compensating for the sparse aspects of LiDAR SLAM. To facilitate further research, a technical guide is also provided for researchers in related fields.
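The ATE metric used in this evaluation can be computed as in the following sketch: the estimated trajectory is rigidly aligned to the ground truth with an SVD-based (Umeyama/Kabsch) fit, and the RMSE of the residual positions is reported. It assumes both trajectories are already time-synchronized (N, 3) position arrays; this is a generic illustration, not the paper's exact tooling.

import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) after rigid alignment.

    gt, est: (N, 3) arrays of time-synchronized positions.
    """
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                 # rotation aligning est to gt
    t = mu_g - R @ mu_e
    aligned = (R @ est.T).T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))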
4

Chen, Shoubin, Baoding Zhou, Changhui Jiang, Weixing Xue, and Qingquan Li. "A LiDAR/Visual SLAM Backend with Loop Closure Detection and Graph Optimization". Remote Sensing 13, no. 14 (July 10, 2021): 2720. http://dx.doi.org/10.3390/rs13142720.

Abstract
LiDAR (light detection and ranging), as an active sensor, is investigated in the simultaneous localization and mapping (SLAM) system. Typically, a LiDAR SLAM system consists of front-end odometry and back-end optimization modules. Loop closure detection and pose graph optimization are the key factors determining the performance of the LiDAR SLAM system. However, the LiDAR works at a single wavelength (905 nm), and few textures or visual features are extracted, which restricts the performance of point-cloud-matching-based loop closure detection and graph optimization. With the aim of improving LiDAR SLAM performance, in this paper we propose a LiDAR and visual SLAM backend, which utilizes LiDAR geometry features and visual features to accomplish loop closure detection. Firstly, the bag of words (BoW) model, describing the visual similarities, was constructed to assist in loop closure detection; secondly, point cloud re-matching was conducted to verify the loop closure detection and accomplish graph optimization. Experiments with different datasets were carried out to assess the proposed method, and the results demonstrated that the inclusion of the visual features effectively helped with loop closure detection and improved LiDAR SLAM performance. In addition, the source code is available from the corresponding author upon request.
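A minimal sketch of the BoW scoring idea behind the loop closure detection described above, assuming a visual vocabulary has already been trained (e.g., by k-means over ORB descriptors). Production systems such as DBoW2 add TF-IDF weighting and inverted indices; the similarity threshold and recency margin below are illustrative assumptions.

import numpy as np

def bow_vector(descriptors, vocabulary):
    """Quantize (N, D) local descriptors against a (K, D) vocabulary
    into a normalized K-bin histogram."""
    # nearest visual word for each descriptor
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def loop_candidates(query_vec, keyframe_vecs, threshold=0.8, margin=50):
    """Return indices of older keyframes whose BoW cosine similarity
    exceeds a threshold; recent frames are skipped to avoid trivial
    matches (threshold and margin are illustrative values)."""
    hits = []
    older = keyframe_vecs[:-margin] if margin else keyframe_vecs
    for i, v in enumerate(older):
        if float(query_vec @ v) > threshold:
            hits.append(i)
    return hits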
5

Peng, Gang, Yicheng Zhou, Lu Hu, Li Xiao, Zhigang Sun, Zhangang Wu, and Xukang Zhu. "VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR". Sensors 23, no. 10 (May 9, 2023): 4588. http://dx.doi.org/10.3390/s23104588.

Abstract
Existing visual–inertial SLAM algorithms suffer from low accuracy and poor robustness when the robot moves at a constant speed or rotates in place in scenes with insufficient visual features. To address these problems, a tightly coupled vision-IMU-2D lidar odometry (VILO) algorithm is proposed. Firstly, low-cost 2D lidar observations and visual–inertial observations are fused in a tightly coupled manner. Secondly, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variable to be estimated, and the residual constraint equation of the vision-IMU-2D lidar system is constructed. Thirdly, a nonlinear solution method is used to obtain the optimal robot pose, solving the problem of fusing 2D lidar observations with visual–inertial information in a tightly coupled manner. The results show that the algorithm retains reliable pose-estimation accuracy and robustness in many special environments, and the position error and yaw angle error are greatly reduced. Our research improves the accuracy and robustness of multi-sensor fusion SLAM algorithms.
6

Dang, Xiangwei, Zheng Rong, and Xingdong Liang. "Sensor Fusion-Based Approach to Eliminating Moving Objects for SLAM in Dynamic Environments". Sensors 21, no. 1 (January 1, 2021): 230. http://dx.doi.org/10.3390/s21010230.

Abstract
Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most mature SLAM methods generally work under the assumption that the environment is static, while in dynamic environments they yield degenerate performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM systems, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results by quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on the above investigation, a novel approach named EMO is proposed for eliminating moving objects in SLAM by fusing LiDAR and mmW-radar, with the aim of improving the accuracy and robustness of state estimation. The method fully exploits the complementary characteristics of the two sensors to fuse their information at two different resolutions. Moving objects are efficiently detected by radar based on the Doppler effect, accurately segmented and localized by LiDAR, and then filtered out of the point clouds through data association, accurately synchronized in time and space. Finally, the point clouds representing the static environment are used as the input of SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM performance in accuracy (a decrease of at least 30% in absolute position error) and robustness in dynamic environments.
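The radar/LiDAR association step can be pictured with the following sketch: radar detections with significant Doppler velocity mark moving objects, and LiDAR points within a gating radius of them are dropped before the cloud is passed to SLAM. The Doppler threshold and gate radius are assumed values, and the full EMO pipeline additionally performs segmentation and careful spatio-temporal synchronization.

import numpy as np

def filter_moving_points(lidar_xyz, radar_xy, radar_doppler,
                         v_thresh=0.5, gate=1.5):
    """Drop LiDAR points near radar detections with significant
    Doppler velocity (moving objects). Thresholds are illustrative.

    lidar_xyz: (N, 3) points in the common (synchronized) frame
    radar_xy: (M, 2) radar detections; radar_doppler: (M,) radial m/s
    """
    moving = radar_xy[np.abs(radar_doppler) > v_thresh]
    if len(moving) == 0:
        return lidar_xyz
    # ground-plane distance of every LiDAR point to every moving detection
    d = np.linalg.norm(lidar_xyz[:, None, :2] - moving[None, :, :], axis=2)
    keep = d.min(axis=1) > gate
    return lidar_xyz[keep]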
7

Debeunne, César, and Damien Vivet. "A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping". Sensors 20, no. 7 (April 7, 2020): 2068. http://dx.doi.org/10.3390/s20072068.

Abstract
Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used for many applications, including mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to have changed relatively little over the past ten or twenty years. Moreover, few research works focus on vision-LiDAR approaches, although such a fusion would have many advantages. Indeed, hybridized solutions offer improvements in the performance of SLAM, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey on visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state of the art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities.
8

Xu, Xiaobin, Lei Zhang, Jian Yang, Chenfei Cao, Wen Wang, Yingying Ran, Zhiying Tan, and Minzhou Luo. "A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR". Remote Sensing 14, no. 12 (June 13, 2022): 2835. http://dx.doi.org/10.3390/rs14122835.

Abstract
The ability of intelligent unmanned platforms to achieve autonomous navigation and positioning in large-scale environments is increasingly in demand, and LIDAR-based Simultaneous Localization and Mapping (SLAM) is the mainstream research scheme. However, LIDAR-based SLAM systems degenerate in extreme environments with high dynamics or sparse features, degrading localization and mapping. In recent years, a large number of LIDAR-based multi-sensor fusion SLAM works have emerged in pursuit of more stable and robust systems. In this work, the development process of LIDAR-based multi-sensor fusion SLAM and the latest research work are highlighted. After summarizing the basic idea of SLAM and the necessity of multi-sensor fusion, this paper introduces the basic principles and recent work of multi-sensor fusion in detail from four aspects, based on the types of fused sensors and the data coupling methods. Meanwhile, we review some SLAM datasets and compare the performance of five open-source algorithms using the UrbanNav dataset. Finally, the development trends and popular research directions of SLAM based on 3D LIDAR multi-sensor fusion are discussed and summarized.
9

Bu, Zean, Changku Sun, and Peng Wang. "Semantic Lidar-Inertial SLAM for Dynamic Scenes". Applied Sciences 12, no. 20 (October 18, 2022): 10497. http://dx.doi.org/10.3390/app122010497.

Abstract
Over the past few years, many impressive lidar-inertial SLAM systems have been developed and perform well in static scenes. However, most real-life tasks take place in dynamic environments, and improving accuracy and robustness there remains a challenge. In this paper, we propose a semantic lidar-inertial SLAM approach for dynamic scenes that combines a point cloud semantic segmentation network with the lidar-inertial SLAM system LIO-mapping. We add an attention mechanism to the PointConv network, building an attention weight function to improve the capacity to predict details. The semantic segmentation results of the lidar point clouds provide point-wise labels for each lidar frame. After filtering the dynamic objects, the refined global map of the lidar-inertial SLAM system is clearer, and the estimated trajectory achieves a higher precision. We conduct experiments on the UrbanNav dataset, whose challenging highway sequences contain a large number of moving cars and pedestrians. The results demonstrate that, compared with other SLAM systems, the trajectory accuracy improves to varying degrees.
10

Abdelhafid, El Farnane, Youssefi My Abdelkader, Mouhsen Ahmed, Dakir Rachid, and El Ihyaoui Abdelilah. "Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars". International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6284. http://dx.doi.org/10.11591/ijece.v12i6.pp6284-6292.

Abstract
In recent years, there has been strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has limitations, such as satellite signal absence (tunnels and caves), which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle’s position while at the same time constructing a representation of the environment. Visual and light detection and ranging (LIDAR)-based SLAM refers to using cameras and LIDAR as sources of external information. This paper presents an implementation of a SLAM algorithm for building a map of the environment and obtaining the car’s trajectory using LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches is also provided and discussed. Simulation results on LIDAR scans indicate that SLAM is convenient and helpful for localization and mapping.
11

Soebhakti, Hendawan, and Robbi Hermawansya Pangantar. "Simulation of Mobile Robot Navigation System using Hector SLAM on ROS". JURNAL INTEGRASI 16, no. 1 (March 27, 2024): 11–20. http://dx.doi.org/10.30871/ji.v16i1.5755.

Abstract
The ability to move autonomously from one point to a destination point is essential for AMR robots. To meet this requirement, the robot must be able to detect the surrounding environment and know its location within it; for this purpose, the Hector SLAM algorithm is used with a LIDAR sensor. To determine the capability of the LIDAR sensor with Hector SLAM, and the computer specifications needed to process the data properly, a simulation of Hector SLAM with a LIDAR sensor was created. The simulation is carried out by building an environment map in Gazebo. Environmental mapping is then explored using a Hokuyo LIDAR added to the turtlebot3 waffle_pi model in the simulated environment map. In this study, a model of the second-floor lobby and Brail environment of the Batam State Polytechnic was built as a Gazebo simulation. A robot equipped with the LIDAR is driven around the simulation environment with a keyboard while the mapping and localization processes run simultaneously; the process can be observed in Rviz in real time, with the LIDAR sending distance readings that are received by Hector SLAM. The expected outcome of this study is that Hector SLAM with a simulated LIDAR sensor can produce environmental mapping and localization in the simulation environment, and that a minimum computer specification for processing the data from the Hector SLAM process using LIDAR sensors can be determined.
12

Huang, Baichuan, Jun Zhao, Sheng Luo, and Jingbin Liu. "A Survey of Simultaneous Localization and Mapping with an Envision in 6G Wireless Networks". Journal of Global Positioning Systems 17, no. 2 (2021): 206–36. http://dx.doi.org/10.5081/jgps.17.2.206.

Abstract
Simultaneous Localization and Mapping (SLAM) achieves simultaneous positioning and map construction based on self-perception. This paper provides an overview of SLAM, including Lidar SLAM, visual SLAM, and their fusion. For Lidar and visual SLAM, the survey covers the basic types and products of sensors, open-source systems by category and history, embedded deep learning, challenges, and the future; visual-inertial odometry is also covered. For fused Lidar-visual SLAM, the paper highlights multi-sensor calibration and fusion at the hardware, data, and task layers. Open questions and an outlook on SLAM in 6G wireless networks conclude the paper. The contributions of this paper can be summarized as follows: it provides a high-quality, full-scale overview of SLAM that makes it easy for new researchers to follow the development of the field, and it can serve as a reference for experienced researchers searching for new directions of interest.
13

Chen, Zhijian, Aigong Xu, Xin Sui, Changqiang Wang, Siyu Wang, Jiaxin Gao, and Zhengxu Shi. "Improved-UWB/LiDAR-SLAM Tightly Coupled Positioning System with NLOS Identification Using a LiDAR Point Cloud in GNSS-Denied Environments". Remote Sensing 14, no. 6 (March 12, 2022): 1380. http://dx.doi.org/10.3390/rs14061380.

Abstract
Reliable absolute positioning is indispensable in long-term positioning systems. Although simultaneous localization and mapping based on light detection and ranging (LiDAR-SLAM) is effective in global navigation satellite system (GNSS)-denied environments, it can provide only local positioning results, with error divergence over distance. Ultrawideband (UWB) technology is an effective alternative; however, non-line-of-sight (NLOS) propagation in complex indoor environments severely affects the precision of UWB positioning, and LiDAR-SLAM typically provides more robust results under such conditions. For robust and high-precision positioning, we propose an improved-UWB/LiDAR-SLAM tightly coupled (TC) integrated algorithm. This method is the first to combine a LiDAR point cloud map generated via LiDAR-SLAM with position information from UWB anchors to distinguish between line-of-sight (LOS) and NLOS measurements through obstacle detection and NLOS identification (NI) in real time. Additionally, to alleviate positioning error accumulation in long-term SLAM, an improved-UWB/LiDAR-SLAM TC positioning model is constructed using UWB LOS measurements and LiDAR-SLAM positioning information. Parameter solving using a robust extended Kalman filter (REKF) to suppress the effect of UWB gross errors improves the robustness and positioning performance of the integrated system. Experimental results show that the proposed NI method using the LiDAR point cloud can efficiently and accurately identify UWB NLOS errors to improve the performance of UWB ranging and positioning in real scenarios. The TC integrated method combining NI and REKF achieves better positioning effectiveness and robustness than other comparative methods and satisfactory control of sensor errors with a root-mean-square error of 0.094 m, realizing subdecimeter indoor positioning.
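Conceptually, the NLOS identification can be sketched as a ray cast through the LiDAR-built map: if any occupied cell lies between the tag and a UWB anchor, that range is flagged NLOS and excluded from the robust-EKF update. The 2D occupancy grid, its resolution, and the sampling step below are simplifying assumptions; the paper works with the LiDAR point cloud directly.

import numpy as np

def is_nlos(grid, origin, tag_xy, anchor_xy, res=0.1, step=0.05):
    """Flag a UWB range as NLOS if the tag-anchor ray crosses an
    occupied cell of a LiDAR occupancy grid.

    grid: 2D bool array (True = occupied); origin: world xy of cell (0, 0)
    res: cell size in meters; step: ray sampling interval (assumed values)
    """
    p, q = np.asarray(tag_xy, float), np.asarray(anchor_xy, float)
    n = max(int(np.linalg.norm(q - p) / step), 1)
    for s in np.linspace(0.0, 1.0, n, endpoint=False)[1:]:
        cell = np.floor((p + s * (q - p) - origin) / res).astype(int)
        if (0 <= cell[0] < grid.shape[0] and 0 <= cell[1] < grid.shape[1]
                and grid[cell[0], cell[1]]):
            return True        # obstruction found: treat this range as NLOS
    return False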
14

Frosi, Matteo, and Matteo Matteucci. "ART-SLAM: Accurate Real-Time 6DoF LiDAR SLAM". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 2692–99. http://dx.doi.org/10.1109/lra.2022.3144795.

15

Lou, Lu, Yitian Li, Qi Zhang, and Hanbing Wei. "SLAM and 3D Semantic Reconstruction Based on the Fusion of Lidar and Monocular Vision". Sensors 23, no. 3 (January 29, 2023): 1502. http://dx.doi.org/10.3390/s23031502.

Abstract
Monocular camera and Lidar are the two most commonly used sensors in unmanned vehicles. Combining the advantages of the two is a current research focus of SLAM and semantic analysis. In this paper, we propose an improved SLAM and semantic reconstruction method based on the fusion of Lidar and monocular vision. We fuse the semantic image with the low-resolution 3D Lidar point clouds and generate dense semantic depth maps. Through visual odometry, ORB feature points with depth information are selected to improve positioning accuracy. Our method uses parallel threads to aggregate 3D semantic point clouds while positioning the unmanned vehicle. Experiments are conducted on the public CityScapes and KITTI Visual Odometry datasets, and the results show that, compared with ORB-SLAM2 and DynaSLAM, our positioning error is reduced by approximately 87%; compared with DEMO and DVL-SLAM, our positioning accuracy improves in most sequences. Our 3D reconstruction quality is better than that of DynaSLAM and contains semantic information. The proposed method has engineering application value in the unmanned-vehicle field.
16

Huang, Baichuan, Jun Zhao, Sheng Luo, and Jingbin Liu. "A Survey of Simultaneous Localization and Mapping with an Envision in 6G Wireless Networks". Journal of Global Positioning Systems 17, no. 1 (2021): 94–127. http://dx.doi.org/10.5081/jgps.17.1.94.

Abstract
Simultaneous Localization and Mapping (SLAM) achieves simultaneous positioning and map construction based on self-perception. This paper provides an overview of SLAM, including Lidar SLAM, visual SLAM, and their fusion. For Lidar and visual SLAM, the survey covers the basic types and products of sensors, open-source systems by category and history, embedded deep learning, challenges, and the future; visual-inertial odometry is also covered. For fused Lidar-visual SLAM, the paper highlights multi-sensor calibration and fusion at the hardware, data, and task layers. Open questions and forward-looking thoughts conclude the paper. The contributions of this paper can be summarized as follows: it provides a high-quality, full-scale overview of SLAM that makes it easy for new researchers to follow the development of the field, and it can serve as a reference for experienced researchers searching for new directions of interest.
17

Wei, Weichen, Bijan Shirinzadeh, Rohan Nowell, Mohammadali Ghafarian, Mohamed M. A. Ammar, and Tianyao Shen. "Enhancing Solid State LiDAR Mapping with a 2D Spinning LiDAR in Urban Scenario SLAM on Ground Vehicles". Sensors 21, no. 5 (March 4, 2021): 1773. http://dx.doi.org/10.3390/s21051773.

Abstract
Solid-State LiDAR (SSL) takes an increasing share of the LiDAR market. Compared with traditional spinning LiDARs, SSLs are more compact, energy-efficient, and cost-effective. Generally, current studies of SSL mapping are limited to adapting existing SLAM algorithms to an SSL sensor. However, compared with spinning LiDARs, SSLs differ in their irregular scan patterns and limited FOV. Directly applying existing SLAM approaches to them often increases the instability of the mapping process. This study proposes a systematic design, which consists of a dual-LiDAR mapping system and a three-DOF-interpolated six-DOF odometry. For dual-LiDAR mapping, this work uses a 2D LiDAR to enhance 3D SSL performance on a ground vehicle platform. The proposed system uses the 2D LiDAR to preprocess the scanning field into a number of feature sections according to the curvatures on the 2D fraction. Subsequently, this section information is passed to the 3D SSL for directional feature selection. Additionally, this work proposes an odometry interpolation method that uses both LiDARs to generate two separate odometries and selectively determines the appropriate odometry information for updating the system state under challenging conditions. Experiments were conducted in different scenarios. The results prove that the proposed approach is able to utilise 12 times more corner features from the environment than the compared method, resulting in a demonstrable improvement in absolute position error.
18

Vultaggio, F., F. d’Apolito, C. Sulzbachner, and P. Fanta-Jende. "SIMULATION OF LOW-COST MEMS-LIDAR AND ANALYSIS OF ITS EFFECT ON THE PERFORMANCES OF STATE-OF-THE-ART SLAMS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W1-2023 (May 25, 2023): 539–45. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w1-2023-539-2023.

Abstract
Indoor Unmanned Aerial Vehicles have often been tasked with performing SLAM, and the sensors most used in the literature and industry have been cameras. Spanning from stereo to event cameras, visual algorithms have often been the de facto choice for localization. While visual SLAM has reached a high level of localization accuracy, accurate map reconstruction still proves challenging. Meanwhile, LiDAR sensors have been used for years to obtain accurate maps, first in surveying applications and, in the past ten years, in the automotive sector. The weight, power, and size constraints of most traditional LiDARs have prevented their installation on UAVs for indoor use. MEMS-based LiDARs have already been used on UAVs but had to rely on algorithms designed to deal with their small FOV. Recently, a MEMS-based LiDAR with a wide field of view (360° × 59°) and weighing 265 g has sparked interest in its potential for indoor UAV SLAM. We performed an extensive battery of tests in simulation environments to provide a first look at its effect on state-of-the-art SLAM algorithms, highlight which ones provide the best results, and identify what improvements may be most beneficial. This paper aims to assist further research in the field by releasing the tool used for this work.
19

Chen, Weifeng, Chengjun Zhou, Guangtao Shang, Xiyang Wang, Zhenxiong Li, Chonghui Xu, and Kai Hu. "SLAM Overview: From Single Sensor to Heterogeneous Fusion". Remote Sensing 14, no. 23 (November 28, 2022): 6033. http://dx.doi.org/10.3390/rs14236033.

Abstract
After decades of development, LIDAR and visual SLAM technologies have matured considerably and are widely used in the military and civil fields. SLAM technology gives a mobile robot the abilities of autonomous positioning and mapping, allowing it to move through indoor and outdoor scenes where GPS signals are scarce. However, SLAM relying on a single sensor has its limitations. For example, LIDAR SLAM is not suitable for scenes with highly dynamic or sparse features, and visual SLAM has poor robustness in low-texture or dark scenes. Through the fusion of the two technologies, however, they have great potential to complement each other. Therefore, this paper predicts that SLAM technology combining LIDAR and visual sensors, as well as various other sensors, will be the mainstream direction in the future. This paper reviews the development history of SLAM technology, deeply analyzes the hardware of LIDAR and cameras, and presents some classical open-source algorithms and datasets. According to the algorithms adopted by fusion sensors, traditional multi-sensor fusion methods based on uncertainty, features, and novel deep learning are introduced in detail. The excellent performance of multi-sensor fusion methods in complex scenes is summarized, and the future development of multi-sensor fusion is discussed.
20

Ismail, Hasan, Rohit Roy, Long-Jye Sheu, Wei-Hua Chieng, and Li-Chuan Tang. "Exploration-Based SLAM (e-SLAM) for the Indoor Mobile Robot Using Lidar". Sensors 22, no. 4 (February 21, 2022): 1689. http://dx.doi.org/10.3390/s22041689.

Abstract
This paper explores one possible method for an IMR (indoor mobile robot) to perform indoor exploration combined with SLAM (simultaneous localization and mapping) using LiDAR. Specifically, the IMR is required to construct a map when it has landed on an unexplored floor of a building. We implemented e-SLAM (exploration-based SLAM) using coordinate transformation and navigation prediction techniques in the engineering school building, which consists of many 100-m² labs, corridors, elevator waiting space, and the lobby. We first derive the LiDAR mesh for the orthogonal walls and filter out the static furniture and dynamic humans sharing the space with the IMR. Then, we define the LiDAR pose frame, including the translation and rotation from the orthogonal walls. From the MSC (most significant corner) obtained at the intersection of the orthogonal walls, we calculate the displacement of the IMR. The orientation of the IMR is calculated from the alignment of orthogonal walls in consecutive LiDAR pose frames, assisted by the LQE (linear quadratic estimation) method. All the computation can be done on a single-processor machine in real time. The e-SLAM technique creates the potential for an in-house service robot to start operation without pre-scanned LiDAR maps, which can save installation time. In this study, we use only the LiDAR and compare our results with the IMU to verify the consistency between the two navigation sensors in the experiments. The experimental scenario consists of rooms, corridors, elevators, and the lobby, which is common to most office buildings.
21

Harish, I. "Slam Using LIDAR For UGV". International Journal for Research in Applied Science and Engineering Technology V, no. III (March 28, 2017): 1157–60. http://dx.doi.org/10.22214/ijraset.2017.3211.

22

Roy, Rohit, You-Peng Tu, Long-Jye Sheu, Wei-Hua Chieng, Li-Chuan Tang, and Hasan Ismail. "Path Planning and Motion Control of Indoor Mobile Robot under Exploration-Based SLAM (e-SLAM)". Sensors 23, no. 7 (March 30, 2023): 3606. http://dx.doi.org/10.3390/s23073606.

Abstract
Indoor mobile robot (IMR) motion control for e-SLAM techniques with limited sensors, i.e., only LiDAR, is proposed in this research. The path is initially generated from simple floor plans constructed during the IMR's exploration. Path planning starts from the vertices that can be traveled through, proceeds to velocity planning for both cornering and linear motion, and reaches the interpolated discrete points joining the vertices. The IMR recognizes its location and environment gradually from the LiDAR data. The study uses the upper rings of the LiDAR image for localization, while the lower rings are used for obstacle detection. The IMR must travel through a series of featured vertices and perform the path planning, further generating an integrated LiDAR image. A considerable challenge is that the LiDAR data are the only source to be compared with the path planned from the floor map. Certain adaptations are still needed, for example, to the distance precision relative to the floor map and to the IMR's deviation when avoiding obstacles on the path. The LiDAR settings and IMR speed regulation are critical issues. The study contributes a step-by-step procedure for implementing path planning and motion control using solely the LiDAR data, along with the integration of various pieces of software. The control strategy is improved by experimenting with various proportional control gains for the position, orientation, and velocity of the LiDAR in the IMR.
23

Chen, Guangrong, and Liang Hong. "Research on Environment Perception System of Quadruped Robots Based on LiDAR and Vision". Drones 7, no. 5 (May 20, 2023): 329. http://dx.doi.org/10.3390/drones7050329.

Abstract
Owing to their high stability and adaptability, quadruped robots are currently a highly discussed topic in the robotics field. To cope with complicated indoor or outdoor environments, quadruped robots should be equipped with an environment perception system, which typically contains LiDAR or a vision sensor and on which SLAM (Simultaneous Localization and Mapping) is deployed. In this paper, comparative experimental platforms, including a quadruped robot and a vehicle, with LiDAR and a vision sensor, are established first. Secondly, single-sensor SLAM, including LiDAR SLAM and visual SLAM, is investigated separately to highlight the advantages and disadvantages of each. Then, multi-sensor SLAM based on LiDAR and vision is addressed to improve environmental perception performance. Thirdly, YOLOv5 (You Only Look Once), improved by adding ASFF (adaptive spatial feature fusion), is employed for the image processing of gesture recognition to achieve human–machine interaction. Finally, the challenges of environment perception systems for mobile robots are discussed based on a comparison between wheeled and legged robots. This research provides insight into the environment perception of legged robots.
24

Bolkas, D., M. O’Banion, and C. J. Belleman. "COMBINATION OF TLS AND SLAM LIDAR FOR LEVEE MONITORING". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2022 (May 17, 2022): 641–47. http://dx.doi.org/10.5194/isprs-annals-v-3-2022-641-2022.

Abstract
Monitoring of engineering structures is important for ensuring safety of operation. Traditional surveying methods have proven to be reliable; however, the advent of new point cloud technologies such as terrestrial laser scanning (TLS) and small unmanned aerial systems (sUAS) has provided an unprecedented wealth of data. Furthermore, simultaneous localization and mapping (SLAM) can now facilitate the collection of registered point clouds on the fly. SLAM is most successful when applied to indoor environments, where the algorithm can identify primitives (points, planes, lines) for registration, but it can be problematic in outdoor settings where constructed features are absent. This work includes the collection of SLAM-based LiDAR data along a levee for the purpose of inspection and monitoring. Due to the outdoor setting and absence of man-made features, the resulting point cloud was considerably distorted by erroneous drift in sensor orientation. A correction algorithm is proposed that relies on reference TLS point cloud data to remove drift distortions identified in the SLAM LiDAR. Results indicate an alignment between the corrected SLAM LiDAR and TLS data of around ±10 cm, which is sufficient for general inspection and multi-epoch monitoring of levees. The algorithm is based on common points identified in the TLS and SLAM data and requires that the SLAM LiDAR be collected in individual, one-way lines to allow correction of distortions as a function of distance from the starting point. This approach increases the efficiency of LiDAR-based levee monitoring by reducing the time required to survey the levees.
25

Wen, Weisong, and Li-Ta Hsu. "AGPC-SLAM: Absolute Ground Plane Constrained 3D Lidar SLAM". NAVIGATION: Journal of the Institute of Navigation 69, no. 3 (2022): navi.527. http://dx.doi.org/10.33012/navi.527.

26

Chen, Wenqiang, Yu Wang, Haoyao Chen, and Yunhui Liu. "EIL‐SLAM: Depth‐enhanced edge‐based infrared‐LiDAR SLAM". Journal of Field Robotics 39, no. 2 (October 7, 2021): 117–30. http://dx.doi.org/10.1002/rob.22040.

27

Shin, Young-Sik, Yeong Sang Park, and Ayoung Kim. "DVL-SLAM: sparse depth enhanced direct visual-LiDAR SLAM". Autonomous Robots 44, no. 2 (August 6, 2019): 115–30. http://dx.doi.org/10.1007/s10514-019-09881-0.

28

Brindza, Ján, Pavol Kajánek, and Ján Erdélyi. "Lidar-Based Mobile Mapping System for an Indoor Environment". Slovak Journal of Civil Engineering 30, no. 2 (June 1, 2022): 47–58. http://dx.doi.org/10.2478/sjce-2022-0014.

Abstract
The article deals with the development and testing of a low-cost measuring system for simultaneous localisation and mapping (SLAM) in an indoor environment. The measuring system consists of three orthogonally placed 2D lidars, a robotic platform with two wheel-speed sensors, and an inertial measuring unit (IMU). The paper describes the data processing model used both for estimating the trajectory from SLAM and for creating a 3D model of the environment based on the estimated trajectory. The main problem in SLAM is the accumulation of errors caused by the imperfect registration of consecutive scans. The data processing pipeline includes an automatic evaluation and correction of the slope of the lidar. Furthermore, during the calculation of the trajectory, repeatedly traversed areas are identified (loop closure), which enables optimisation of the determined trajectory. The system was tested in the indoor environment of the Faculty of Civil Engineering of the Slovak University of Technology in Bratislava.
29

Chang, Le, Xiaoji Niu, and Tianyi Liu. "GNSS/IMU/ODO/LiDAR-SLAM Integrated Navigation System Using IMU/ODO Pre-Integration". Sensors 20, no. 17 (August 20, 2020): 4702. http://dx.doi.org/10.3390/s20174702.

Abstract
In this paper, we propose a multi-sensor integrated navigation system composed of GNSS (global navigation satellite system), IMU (inertial measurement unit), odometer (ODO), and LiDAR (light detection and ranging)-SLAM (simultaneous localization and mapping). The dead reckoning results were obtained using IMU/ODO in the front-end. Graph optimization was used to fuse the GNSS position, the IMU/ODO pre-integration results, and the relative position and relative attitude from LiDAR-SLAM to obtain the final navigation results in the back-end. The odometer information is introduced into the pre-integration algorithm to mitigate the large drift rate of the IMU. The sliding window method was also adopted to keep the number of parameters in the graph optimization bounded. Land vehicle tests were conducted in both open-sky areas and tunnel cases. The tests showed that the proposed navigation system can effectively improve the accuracy and robustness of navigation. During the navigation drift evaluation of the mimic two-minute GNSS outages, compared to the conventional GNSS/INS (inertial navigation system)/ODO integration, the root mean square (RMS) of the maximum position drift errors during outages in the proposed navigation system were reduced by 62.8%, 72.3%, and 52.1% along the north, east, and height directions, respectively. Moreover, the yaw error was reduced by 62.1%. Furthermore, compared to the GNSS/IMU/LiDAR-SLAM integrated navigation system, the assistance of the odometer and the non-holonomic constraint reduced the vertical error by 72.3%. The test in the real tunnel case shows that in weak environmental feature areas where LiDAR-SLAM can barely work, the assistance of the odometer in the pre-integration is critical and can effectively reduce the positioning drift along the forward direction and maintain the SLAM in the short term. Therefore, the proposed GNSS/IMU/ODO/LiDAR-SLAM integrated navigation system can effectively fuse information from multiple sources to maintain the SLAM process and significantly mitigate navigation error, especially in harsh areas where the GNSS signal is severely degraded and environmental features are insufficient for LiDAR-SLAM.
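The role of the odometer in the pre-integration can be illustrated with a planar dead-reckoning accumulator between two graph nodes: wheel speed constrains translation (avoiding the drift of double-integrating noisy accelerometer data) while the gyroscope supplies heading. This is a simplified 2D sketch, not the paper's full pre-integration, which also propagates covariances and bias Jacobians.

import numpy as np

def preintegrate_odo_gyro(odo_speeds, gyro_rates, dt):
    """Accumulate a relative 2D pose (dx, dy, dyaw) between two
    keyframes from wheel speed (m/s) and gyro yaw rate (rad/s).

    Using the odometer for translation avoids double-integrating
    accelerometer noise; a full pre-integration would also track
    the covariance of the accumulated measurement.
    """
    x = y = yaw = 0.0
    for v, w in zip(odo_speeds, gyro_rates):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
    return x, y, yaw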
30

Park, K. W., and S. Y. Park. "VISUAL LIDAR ODOMETRY USING TREE TRUNK DETECTION AND LIDAR LOCALIZATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 627–32. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-627-2023.

Abstract
This paper presents a method of visual LiDAR odometry and forest mapping, leveraging tree trunk detection and LiDAR localization techniques. In environments like dense forests, where GPS signals are unreliable, we employ camera and LiDAR sensors to accurately estimate the robot's position. However, forested or orchard settings introduce unique challenges, including a diverse mixture of trees, tall grass, and uneven terrain. To address these complexities, we propose a distance-based filtering method to extract data composed solely of tree trunk information from 2D LiDAR. By restoring arc data from the LiDAR sensor to its circular shape, we obtain position and radius measurements of reference trees in the LiDAR coordinate system. These values are then stored in a comprehensive tree trunk database. Our approach runs visual-based SLAM and LiDAR-based SLAM independently, followed by an integration step using the Extended Kalman Filter (EKF) to improve odometry estimation. Utilizing the obtained odometry information and the EKF, we generate a tree map based on observed trees. In addition, we use the tree positions in the map as landmarks to reduce the localization error in the proposed SLAM algorithm. Experimental results show that the loop-closing error ranges between 0.3 and 0.5 meters. In the future, this method is expected to also be applicable in the fields of path planning and navigation.
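The trunk-restoration step can be sketched with an algebraic (Kåsa) circle fit applied to the arc that a trunk surface leaves in a 2D scan, recovering the trunk's center and radius. Clustering of the arc points is assumed to have been done upstream by the distance-based filter; this is an illustration, not the authors' exact implementation.

import numpy as np

def fit_trunk_circle(arc_xy):
    """Algebraic (Kasa) circle fit: recover the center and radius of
    a tree trunk from the (N, 2) arc it leaves in a 2D scan.

    Solves x^2 + y^2 + a*x + b*y + c = 0 in least squares.
    """
    x, y = arc_xy[:, 0], arc_xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return (cx, cy), r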
31

Wen, Jingren, Chuang Qian, Jian Tang, Hui Liu, Wenfang Ye, and Xiaoyun Fan. "2D LiDAR SLAM Back-End Optimization with Control Network Constraint for Mobile Mapping". Sensors 18, no. 11 (October 29, 2018): 3668. http://dx.doi.org/10.3390/s18113668.

Abstract
Simultaneous localization and mapping (SLAM) has been investigated in the field of robotics for two decades, as it is considered an effective method for solving the positioning and mapping problem in a single framework. In the SLAM community, Extended Kalman Filter (EKF)-based SLAM and particle filter SLAM are the most mature technologies. After years of development, graph-based SLAM is becoming the most promising technology, and a lot of progress has been made recently with respect to accuracy and efficiency. No matter which SLAM method is used, loop closure is vital for overcoming accumulated errors. However, in 2D Light Detection and Ranging (LiDAR) SLAM, on one hand, it is relatively difficult to extract distinctive features from LiDAR scans for loop closure detection, as 2D LiDAR scans encode much less information than images; on the other hand, there are also mapping scenarios in which no loop closure exists. Therefore, in this paper, instead of loop closure detection, we propose introducing an extra control network constraint (CNC) into the back-end optimization of graph-based SLAM, by aligning the LiDAR scan center with the control vertices of a presurveyed control network to optimize all the poses of scans and submaps. Field tests were carried out in a typical urban outdoor area with weak Global Navigation Satellite System (GNSS) coverage. The results prove that the position Root Mean Square (RMS) error of the selected key points is 0.3614 m, evaluated with a reference map produced by a Terrestrial Laser Scanner (TLS). Mapping accuracy is significantly improved compared to the mapping RMS of 1.6462 m without the control network constraint. Adding distance constraints of the control network to the back-end optimization is an effective and practical method for solving the drift accumulation of LiDAR front-end scan matching.
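Conceptually, a control network constraint enters the graph back end as a tight prior on the pose of the scan aligned with a control vertex. The following sketch uses GTSAM's Python bindings; the poses, noise levels, and indices are made-up illustrative values, not the paper's configuration.

import numpy as np
import gtsam

# Pose graph with two scan poses chained by scan matching, where
# pose 1 was captured over a presurveyed control vertex.
graph = gtsam.NonlinearFactorGraph()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.05]))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# Control network constraint: a tight prior anchoring pose 1 to the
# surveyed coordinates of the control vertex (values illustrative).
cnc_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.01]))
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(2.05, 0.10, 0.0), cnc_noise))

# Loose prior on the first pose to fix the gauge freedom.
graph.add(gtsam.PriorFactorPose2(
    0, gtsam.Pose2(0.0, 0.0, 0.0),
    gtsam.noiseModel.Diagonal.Sigmas(np.array([1.0, 1.0, 0.5]))))

initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(1, gtsam.Pose2(1.8, 0.3, 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(1))   # pose pulled toward the control vertex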
32

Filip, Iulian, Juhyun Pyo, Meungsuk Lee, and Hangil Joe. "LiDAR SLAM with a Wheel Encoder in a Featureless Tunnel Environment". Electronics 12, no. 4 (February 17, 2023): 1002. http://dx.doi.org/10.3390/electronics12041002.

Abstract
Simultaneous localization and mapping (SLAM) represents a crucial algorithm in the autonomous navigation of ground vehicles. Several studies have been conducted to improve the SLAM algorithm using various sensors and robot platforms. However, only a few works have focused on applications inside low-illuminated featureless tunnel environments. In this work, we present an improved SLAM algorithm using wheel encoder data from an autonomous ground vehicle (AGV) to obtain robust performance in a featureless tunnel environment. The improved SLAM system uses FAST-LIO2 LiDAR SLAM as the baseline algorithm, and the additional wheel encoder sensor data are integrated into the baseline SLAM structure using the extended Kalman filter (EKF) algorithm. The EKF algorithm is used after the LiDAR odometry estimation and before the mapping process of FAST-LIO2. The prediction step uses the wheel encoder and inertial measurement unit (IMU) data, while the correction step uses the FAST-LIO2 LiDAR state estimation. We used an AGV to conduct experiments in flat and inclined terrain sections in a tunnel environment. The results showed that the mapping and localization processes in the SLAM algorithm were greatly improved in a featureless tunnel environment, considering both inclined and flat terrains.
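A planar sketch of the prediction/correction split described above: wheel-encoder speed (with a yaw rate) drives the EKF prediction, and the LiDAR SLAM pose estimate is applied as the correction. The direct pose observation model and the noise matrices are simplifying assumptions, not the paper's exact formulation.

import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Prediction with wheel speed v and yaw rate w; x = [px, py, yaw]."""
    px, py, yaw = x
    x = np.array([px + v * np.cos(yaw) * dt,
                  py + v * np.sin(yaw) * dt,
                  yaw + w * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                  [0.0, 1.0,  v * np.cos(yaw) * dt],
                  [0.0, 0.0,  1.0]])
    return x, F @ P @ F.T + Q

def ekf_correct(x, P, z, R):
    """Correction with a LiDAR SLAM pose measurement z = [px, py, yaw]."""
    H = np.eye(3)                                  # pose observed directly
    y = z - x
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi    # wrap yaw residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    return x, (np.eye(3) - K @ H) @ P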
33

Abdelaziz, Nader, and Ahmed El-Rabbany. "Deep Learning-Aided Inertial/Visual/LiDAR Integration for GNSS-Challenging Environments". Sensors 23, no. 13 (June 29, 2023): 6019. http://dx.doi.org/10.3390/s23136019.

Abstract
This research develops an integrated navigation system, which fuses the measurements of the inertial measurement unit (IMU), LiDAR, and monocular camera using an extended Kalman filter (EKF) to provide accurate positioning during prolonged GNSS signal outages. The system features the use of an integrated INS/monocular visual simultaneous localization and mapping (SLAM) navigation system that takes advantage of LiDAR depth measurements to correct the scale ambiguity that results from monocular visual odometry. The proposed system was tested using two datasets, namely, the KITTI and the Leddar PixSet, which cover a wide range of driving environments. The system yielded an average reduction in the root-mean-square error (RMSE) of about 80% and 92% in the horizontal and upward directions, respectively. The proposed system was compared with an INS/monocular visual SLAM/LiDAR SLAM integration and to some state-of-the-art SLAM algorithms.
34

He, Yuhang, Bo Li, Jianyuan Ruan, Aihua Yu, and Beiping Hou. "ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios". Electronics 13, no. 7 (April 2, 2024): 1341. http://dx.doi.org/10.3390/electronics13071341.

Abstract
This research proposes a lightweight and practical dataset with precise elevation ground truth and extrinsic calibration for the LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and Mapping) task in the field of autonomous driving. Our dataset focuses on cost-effective platforms with limited computational power and low-resolution three-dimensional LiDAR sensors (16-beam LiDAR), filling a gap in the existing literature. Our data cover abundant scenarios, including degenerate environments, dynamic objects, and large-slope terrain, to facilitate investigation of SLAM system performance. We provide the ground-truth pose from RTK-GPS, carefully rectify its elevation errors, and design an extra method to evaluate vertical drift. The module for calibrating the LiDAR and IMU was also enhanced to ensure the precision of the point cloud data. The reliability and applicability of the dataset are fully tested through a series of experiments using several state-of-the-art LiDAR SLAM methods.
35

Wu, H., R. Zhong, D. Xie, C. Chen, J. Tang, C. Wu, and X. Qi. "MR-MD: MULTI-ROBOT MAPPING WITH MANHATTAN DESCRIPTOR". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 687–92. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-687-2023.

Abstract
Simultaneous Localization and Mapping (SLAM) technology, utilizing Light Detection and Ranging (LiDAR) sensors, is crucial for 3D environment perception and mapping. However, the absence of absolute observations and the inefficiency of single-robot perception present challenges for LiDAR SLAM in indoor environments. In this paper, we propose a multi-robot (MR) collaborative mapping method based on the Manhattan descriptor (MD), named MR-MD, to overcome these limitations and improve the perception accuracy of LiDAR SLAM in indoor environments. The proposed method consists of two modules: MD generation and MD optimization. In the first module, each robot builds a local submap and constructs the MD by parameterizing the planes in the submap. In the second module, the global main direction is updated using the historical MD of each robot, and constraints are built for each robot's horizontal and vertical directions according to their current MD and optimized. We conducted extensive comparisons with other multi-robot and single-robot LiDAR SLAM methods using real indoor data, and the results show that our method achieved higher mapping accuracy.
36

Karam, S., V. Lehtola, and G. Vosselman. "STRATEGIES TO INTEGRATE IMU AND LIDAR SLAM FOR INDOOR MAPPING". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2020 (August 3, 2020): 223–30. http://dx.doi.org/10.5194/isprs-annals-v-1-2020-223-2020.

Abstract
In recent years, the importance of indoor mapping increased in a wide range of applications, such as facility management and mapping hazardous sites. The essential technique behind indoor mapping is simultaneous localization and mapping (SLAM) because SLAM offers suitable positioning estimates in environments where satellite positioning is not available. State-of-the-art indoor mobile mapping systems employ Visual-based SLAM or LiDAR-based SLAM. However, Visual-based SLAM is sensitive to textureless environments and, similarly, LiDAR-based SLAM is sensitive to a number of pose configurations where the geometry of laser observations is not strong enough to reliably estimate the six-degree-of-freedom (6DOF) pose of the system. In this paper, we present different strategies that utilize the benefits of the inertial measurement unit (IMU) in the pose estimation and support LiDAR-based SLAM in overcoming these problems. The proposed strategies have been implemented and tested using different datasets and our experimental results demonstrate that the proposed methods do indeed overcome these problems. We conclude that IMU observations increase the robustness of SLAM, which is expected, but also that the best reconstruction accuracy is obtained not with a blind use of all observations but by filtering the measurements with a proposed reliability measure. To this end, our results show promising improvements in reconstruction accuracy.
37

Peng, Hongrui, Ziyu Zhao, and Liguan Wang. "A Review of Dynamic Object Filtering in SLAM Based on 3D LiDAR". Sensors 24, no. 2 (January 19, 2024): 645. http://dx.doi.org/10.3390/s24020645.

Abstract
SLAM (Simultaneous Localization and Mapping) based on 3D LiDAR (Light Detection and Ranging) is an expanding field of research with numerous applications in the areas of autonomous driving, mobile robotics, and UAVs (Unmanned Aerial Vehicles). However, in most real-world scenarios, dynamic objects can negatively impact the accuracy and robustness of SLAM. In recent years, the challenge of achieving optimal SLAM performance in dynamic environments has led to the emergence of various research efforts, but there have been relatively few relevant reviews. This work delves into the development process and current state of SLAM based on 3D LiDAR in dynamic environments. After analyzing the necessity and importance of filtering dynamic objects in SLAM, this paper is organized along two dimensions. At the solution-oriented level, mainstream methods of filtering dynamic targets in 3D point clouds are introduced in detail, such as the ray-tracing-based approach, the visibility-based approach, the segmentation-based approach, and others. Then, at the problem-oriented level, this paper classifies dynamic objects and summarizes the corresponding processing strategies for different categories in the SLAM framework, such as online real-time filtering, post-processing after mapping, and long-term SLAM. Finally, the development trends and research directions of dynamic object filtering in SLAM based on 3D LiDAR are discussed and predicted.
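The visibility-based approach named above can be sketched on co-registered spherical range images: if the current scan sees through a stored map point (its range along the same ray is clearly larger), the map point must have been transient and is flagged dynamic. The margin and the range-image representation are illustrative assumptions, not a specific published implementation.

import numpy as np

def flag_dynamic(map_ranges, scan_ranges, margin=0.3):
    """Visibility check on co-registered range images.

    map_ranges, scan_ranges: (H, W) range images rendered from the
    same pose (-1 marks empty pixels). A map point seen *through*
    by the scan (scan range exceeds map range by `margin`) is
    flagged as dynamic.
    """
    valid = (map_ranges > 0) & (scan_ranges > 0)
    return valid & (scan_ranges > map_ranges + margin)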
38

Jiang, Guolai, Lei Yin, Shaokun Jin, Chaoran Tian, Xinbo Ma, and Yongsheng Ou. "A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion". Applied Sciences 9, no. 10 (22 May 2019): 2105. http://dx.doi.org/10.3390/app9102105.

The method of simultaneous localization and mapping (SLAM) using a light detection and ranging (LiDAR) sensor is commonly adopted for robot navigation. However, consumer robots are price sensitive and often have to use low-cost sensors. Due to the poor performance of a low-cost LiDAR, error accumulates rapidly during SLAM and can grow unacceptably large when building bigger maps. To cope with this problem, this paper proposes a new graph-optimization-based SLAM framework that combines a low-cost LiDAR sensor with a vision sensor. In the SLAM framework, a new cost function considering both scan and image data is proposed, and the Bag of Words (BoW) model with visual features is applied for loop-closure detection. A 2.5D map presenting both obstacles and visual features is also proposed, as well as a fast relocation method based on this map. Experiments were conducted on a service robot equipped with a 360° low-cost LiDAR and a front-view RGB-D camera in a real indoor scene. The results show that the proposed method performs better than using the LiDAR or camera alone, while relocation with our 2.5D map is much faster than with a traditional grid map.
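A minimal sketch of the two ideas this abstract combines, under our own naming and weighting assumptions (the paper's actual cost function and BoW pipeline are not reproduced): a graph-edge cost mixing scan and image errors, and a loop-closure candidate score that tempers BoW similarity with scan overlap.

```python
import numpy as np

def combined_cost(scan_error, visual_error, w_scan=0.6, w_vis=0.4):
    """Weighted graph-edge cost mixing a scan alignment error with a
    visual feature reprojection error. Weights are illustrative."""
    return w_scan * scan_error + w_vis * visual_error

def loop_candidate_score(bow_query, bow_map, scan_overlap):
    """Score a loop-closure candidate: BoW cosine similarity between image
    descriptors, down-weighted when the laser scans barely overlap."""
    cos = float(np.dot(bow_query, bow_map) /
                (np.linalg.norm(bow_query) * np.linalg.norm(bow_map) + 1e-12))
    return cos * scan_overlap
```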
39

Collings, Simon, Tara J. Martin, Emili Hernandez, Stuart Edwards, Andrew Filisetti, Gavin Catt, Andreas Marouchos, Matt Boyd, and Carl Embry. "Findings from a Combined Subsea LiDAR and Multibeam Survey at Kingston Reef, Western Australia". Remote Sensing 12, no. 15 (30 July 2020): 2443. http://dx.doi.org/10.3390/rs12152443.

Light Detection and Ranging (LiDAR), a comparatively new technology in the field of underwater surveying, has principally been used for taking precise measurements of undersea structures in the oil and gas industry. Typically, the LiDAR is deployed on a remotely operated vehicle (ROV), which "lands" on the seafloor in order to generate a 3D point cloud of its environment from a stationary position. To explore the potential of subsea LiDAR on a moving platform in an environmental context, we deployed an underwater LiDAR system simultaneously with a multibeam echosounder (MBES), surveying Kingston Reef off the coast of Rottnest Island, Western Australia. This paper compares and summarises the relative accuracy and characteristics of underwater LiDAR and multibeam sonar and investigates synergies between the two technologies for the purposes of benthic habitat mapping and underwater simultaneous localisation and mapping (SLAM) for Autonomous Underwater Vehicles (AUVs). We found that LiDAR reflectivity and multibeam backscatter are complementary for habitat mapping and can be combined to discriminate between habitats that could not be mapped with either one alone. For robot navigation, SLAM can be effectively applied with either technology; however, when a Global Navigation Satellite System (GNSS) is available, SLAM does not significantly improve the self-consistency of multibeam data, though it does for LiDAR.
40

Karimi, Mojtaba, Martin Oelsch, Oliver Stengel, Edwin Babaians, and Eckehard Steinbach. "LoLa-SLAM: Low-Latency LiDAR SLAM Using Continuous Scan Slicing". IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 2248–55. http://dx.doi.org/10.1109/lra.2021.3060721.

41

Ai, M., M. Elhabiby, I. Asl Sabbaghian Hokmabadi, and N. El-Sheimy. "LIDAR-INERTIAL NAVIGATION BASED ON MAP AIDED DISTANCE CONSTRAINT AND FACTOR GRAPH OPTIMIZATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (13 December 2023): 875–80. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-875-2023.

Abstract. Simultaneous localization and mapping (SLAM) is a well-developed positioning technology that provides high-accuracy, reliable positioning for autonomous vehicles and robotics applications. Integrating Light Detection and Ranging (LiDAR) with an Inertial Measurement Unit (IMU) has emerged as a promising technique for achieving stable navigation results in dense urban environments, outperforming vision-based or pure Inertial Navigation System (INS) solutions. However, conventional LiDAR-Inertial SLAM systems often suffer from limited perception of the surrounding geometric information, resulting in unexpected, accumulating errors. In this paper, we propose a LiDAR-Inertial SLAM scheme that utilizes a prior structural information map generated from the open-source OpenStreetMap (OSM). In contrast to conventional OSM-aided SLAM approaches, our method extracts vectorized road and building models and synthetically generates dense point maps for LiDAR registration. Specifically, a structural map processing module extracts the road and building models from OSM and generates a structure information map (SIM) with dense point clouds. Secondly, a map-aided distance (MD) constraint is calculated by registering selected keyframes against the prior SIM. Finally, a factor graph optimization (FGO) algorithm integrates the relative transformations obtained from LiDAR odometry, the IMU pre-integration, and the map-aided distance constraints. To evaluate the proposed LiDAR-based positioning accuracy, an experimental evaluation is carried out on an open-source dataset collected in urban canyon environments. The experimental results demonstrate that, with the help of the proposed MD constraint, the LiDAR-based navigation solution achieves accurate positioning, with a root mean square error (RMSE) of 4.7 m.
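A toy sketch of the factor-graph idea, assuming the GTSAM Python bindings and simplifying in two ways worth flagging: IMU pre-integration factors are omitted, and the map-aided distance constraint is approximated as a unary prior obtained by registering a keyframe against the structure map. Poses, noise values, and keys are illustrative.

```python
import gtsam
import numpy as np

graph = gtsam.NonlinearFactorGraph()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
map_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([4.7, 4.7, 0.2]))

# LiDAR odometry as relative (between) factors on consecutive keyframes.
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.1), odom_noise))

# Map-aided constraint: registering keyframe 2 against the structure map
# yields an absolute pose estimate, modeled here as a unary prior.
graph.add(gtsam.PriorFactorPose2(2, gtsam.Pose2(2.1, 0.1, 0.1), map_noise))
# Anchor the first pose so the problem is well-posed.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0),
          gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))))

initial = gtsam.Values()
for i, p in enumerate([(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (2.2, 0.1, 0.1)]):
    initial.insert(i, gtsam.Pose2(*p))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(2))
```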
42

Xu, Y., C. Chen, Z. Wang, B. Yang, W. Wu, L. Li, J. Wu, and L. Zhao. "PMLIO: PANORAMIC TIGHTLY-COUPLED MULTI-LIDAR-INERTIAL ODOMETRY AND MAPPING". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-1/W1-2023 (5 December 2023): 703–8. http://dx.doi.org/10.5194/isprs-annals-x-1-w1-2023-703-2023.

Abstract. The limited field of view (FoV) of a single LiDAR poses challenges for robots seeking comprehensive environmental perception. Incorporating multiple LiDAR sensors can effectively broaden a robot's FoV, providing abundant measurements to facilitate simultaneous localization and mapping (SLAM). In this paper, we propose a panoramic tightly-coupled multi-LiDAR-inertial odometry and mapping framework, which fully leverages the properties of solid-state and spinning LiDARs. The key to the proposed framework is effective multi-LiDAR spatial-temporal fusion. Additionally, we employ an iterated extended Kalman filter to achieve tightly-coupled inertial odometry and mapping with IMU data. PMLIO shows competitive performance on data from multiple scenarios compared with state-of-the-art single-LiDAR-inertial SLAM algorithms, improving the maximum and median absolute pose error (APE) by a noteworthy 27.1% and 12.9%, respectively.
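The spatial half of such multi-LiDAR fusion reduces, at its simplest, to mapping each sensor's cloud into a common body frame with calibrated extrinsics before a single LiDAR-inertial back end consumes the merged cloud; the temporal half (per-point timestamps, motion compensation) is omitted here. A minimal sketch under assumed extrinsics:

```python
import numpy as np

def transform(points, T):
    """Apply a 4x4 homogeneous transform to an Nx3 cloud."""
    return points @ T[:3, :3].T + T[:3, 3]

def fuse_clouds(clouds, extrinsics):
    """Merge per-sensor clouds into the body frame.

    clouds: dict sensor_name -> Nx3 array in that sensor's frame.
    extrinsics: dict sensor_name -> 4x4 body_T_sensor calibration.
    """
    return np.vstack([transform(c, extrinsics[name]) for name, c in clouds.items()])

# Example: a forward solid-state LiDAR and a roof spinning LiDAR (extrinsics assumed).
T_solid = np.eye(4); T_solid[:3, 3] = [0.8, 0.0, 0.2]
T_spin = np.eye(4);  T_spin[:3, 3] = [0.0, 0.0, 0.5]
merged = fuse_clouds({"solid": np.random.rand(100, 3), "spin": np.random.rand(100, 3)},
                     {"solid": T_solid, "spin": T_spin})
```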
43

Sun, Y., F. Huang, W. Wen, L. T. Hsu, and X. Liu. "MULTI-ROBOT COOPERATIVE LIDAR SLAM FOR EFFICIENT MAPPING IN URBAN SCENES". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W1-2023 (25 May 2023): 473–78. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w1-2023-473-2023.

Abstract. We first use the multi-robot SLAM framework DiSCo-SLAM to evaluate the performance of cooperative SLAM on a challenging urban-scene dataset. In addition, we compare single-robot and multi-robot SLAM to explore whether the cooperative framework noticeably improves robot localization performance, and how inter-robot constraints influence the local pose graph, using an identical dataset generated with the Carla simulator. Our findings indicate that, under specific conditions, inter-robot constraints can effectively mitigate drift in local pose estimation. The extent to which inter-robot constraints correct local SLAM depends on various factors, such as the confidence level of the constraints and the range of keyframes covered by each constraint.
44

Suleymanoglu, B., M. Soycan, and C. Toth. "INDOOR MAPPING: EXPERIENCES WITH LIDAR SLAM". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (30 May 2022): 279–85. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-279-2022.

Abstract. Indoor mapping is gaining interest in research as well as in emerging applications; building information modelling (BIM) and indoor navigation are probably the driving forces behind this trend. For accurate mapping, the platform trajectory reconstruction, in other words the sensor orientation, is essential to reduce or even eliminate the need for extensive ground control. Simultaneous localization and mapping (SLAM) is the computational problem of simultaneously estimating the platform/sensor trajectory while reconstructing the object space; usually, real-time operation is assumed. Here we investigate the performance of two LiDAR SLAM tools on indoor data acquired by a remotely controlled robotic sensor platform. All comparisons were performed on similar datasets using appropriate metrics. Encouraging results were obtained in these initial tests, yet further research is needed to analyse the tools and their accuracy comprehensively.
45

Seki, Hiroshi, Yuhi Yamamoto, and Sumito Nagasawa. "The Influence of Micro-Hexapod Walking-Induced Pose Changes on LiDAR-SLAM Mapping Performance". Sensors 24, no. 2 (19 January 2024): 639. http://dx.doi.org/10.3390/s24020639.

Micro-hexapods, well-suited for navigating tight or uneven spaces and for mass production, hold promise for exploration by robot groups, particularly in disaster scenarios. However, research on simultaneous localization and mapping (SLAM) for micro-hexapods has been lacking. Previous studies have not adequately addressed the development of SLAM systems that account for changes in the body axis, and comparative evaluations against other locomotion mechanisms are missing. This study assesses the influence of walking on the SLAM capabilities of hexapod robots. Experiments were conducted using the same SLAM system and LiDAR on both a hexapod robot and a crawler robot, comparing map accuracy and LiDAR point cloud data through pattern matching. The experimental results reveal significant fluctuations in the hexapod robot's LiDAR point cloud data due to changes in the body axis, leading to decreased map accuracy. In the future, developing SLAM systems that consider body-axis changes is expected to be crucial for multi-legged robots such as micro-hexapods. We therefore propose a system that incorporates body-axis changes during locomotion using inertial measurement units and similar sensors.
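The compensation the authors propose might, in its simplest form, look like the sketch below (our assumption of one plausible realization, not the paper's system): de-rotate each scan by the IMU-measured roll and pitch so the points are gravity-aligned before a 2D SLAM front end sees them.

```python
import numpy as np

def level_scan(points, roll, pitch):
    """De-rotate an Nx3 LiDAR scan by the IMU-measured roll and pitch
    (radians) so the cloud is expressed in a gravity-aligned frame before
    being handed to a 2D SLAM front end."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    R = Ry @ Rx                      # body attitude (yaw deliberately ignored)
    return points @ R.T              # rows transform as p_level = R @ p_body

# Example: level a scan taken while the body pitches 3 degrees mid-step.
scan = np.array([[5.0, 0.0, 0.0]])
print(level_scan(scan, 0.0, np.deg2rad(-3.0)))
```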
46

Messbah, Hind, Mohamed Emharraf, and Mohammed Saber. "Robot Indoor Navigation: Comparative Analysis of LiDAR 2D and Visual SLAM". IAES International Journal of Robotics and Automation (IJRA) 13, no. 1 (1 March 2024): 41. http://dx.doi.org/10.11591/ijra.v13i1.pp41-49.

Robot indoor navigation has become a significant area of research and development for applications such as autonomous robots, smart homes, and industrial automation. This article presents an in-depth comparative analysis of LiDAR 2D and visual-sensor simultaneous localization and mapping (SLAM) approaches for robot indoor navigation. The increasing demand for autonomous robots in indoor environments has led to the development of various SLAM techniques for mapping and localization. LiDAR 2D and visual-sensor-based SLAM methods are widely used due to their low cost and ease of implementation. The article provides an overview of LiDAR 2D and visual-sensor-based SLAM techniques, including their working principles, advantages, and limitations. A comprehensive comparative analysis is conducted, assessing their capabilities in terms of robustness, accuracy, and computational requirements. The article also discusses the impact of environmental factors, such as lighting conditions and obstacles, on the performance of both approaches. The analysis's findings highlight each approach's strengths and weaknesses, providing valuable insights for researchers and practitioners in selecting the appropriate SLAM method for robot indoor navigation based on specific requirements and constraints.
47

He, Jionglin, Jiaxiang Fang, Shuping Xu, and Dingzhe Yang. "Indoor Robot SLAM with Multi-Sensor Fusion". International Journal of Advanced Network, Monitoring and Controls 9, no. 1 (1 January 2024): 10–21. http://dx.doi.org/10.2478/ijanmc-2024-0002.

Abstract. To solve the problems of large positioning errors and incomplete mapping in SLAM based on two-dimensional LiDAR in indoor environments, a multi-sensor fusion SLAM algorithm for indoor robots is proposed. To address the mismatch problem of the traditional ICP algorithm in the LiDAR SLAM front end, the algorithm adopts the PL-ICP algorithm, which is better suited to indoor environments, and uses an extended Kalman filter to fuse the wheel odometry and IMU to provide an initial motion estimate. Then, during the mapping phase, pseudo-2D laser data converted from the 3D point clouds of a depth camera are fused with the 2D LiDAR data to compensate for the 2D LiDAR's lack of vertical field of view. The experimental results show that the fused odometry improves positioning accuracy by at least 33% compared to wheel odometry alone, providing a better initial value for the PL-ICP iterations. At the same time, the fused mapping compensates for the shortcomings of mapping with a single 2D LiDAR and constructs a map with more complete environmental information.
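The conversion this abstract mentions, from a depth camera's 3D cloud to pseudo-2D laser data, is a standard flattening step; a minimal sketch follows (the height band, angular resolution, and frame conventions are our illustrative assumptions, not the paper's parameters).

```python
import numpy as np

def cloud_to_pseudo_scan(points, z_min=0.05, z_max=1.2,
                         angle_min=-np.pi, angle_max=np.pi, n_beams=720):
    """Flatten an Nx3 depth-camera cloud (robot base frame) into a pseudo
    2D laser scan: keep points in a height band, bin by bearing, take the
    nearest range per bin."""
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    ranges = np.full(n_beams, np.inf)
    if band.size:
        bearing = np.arctan2(band[:, 1], band[:, 0])
        dist = np.hypot(band[:, 0], band[:, 1])
        bins = ((bearing - angle_min) / (angle_max - angle_min) * n_beams)
        bins = bins.astype(int).clip(0, n_beams - 1)
        np.minimum.at(ranges, bins, dist)   # closest obstacle per beam
    return ranges  # same convention as a 2D LiDAR scan, ready for fusion
```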
48

Song, Chengqun, Bo Zeng, Jun Cheng, Fuxiang Wu, and Fusheng Hao. "PSMD-SLAM: Panoptic Segmentation-Aided Multi-Sensor Fusion Simultaneous Localization and Mapping in Dynamic Scenes". Applied Sciences 14, no. 9 (30 April 2024): 3843. http://dx.doi.org/10.3390/app14093843.

Multi-sensor fusion is pivotal in augmenting the robustness and precision of simultaneous localization and mapping (SLAM) systems. The LiDAR-visual-inertial approach has been empirically shown to adeptly combine the benefits of these sensors for SLAM across various scenarios. Furthermore, panoptic segmentation methods have been introduced to deliver pixel-level semantic and instance segmentation data in a single pass. This paper delves deeper into these methodologies, introducing PSMD-SLAM, a novel panoptic-segmentation-assisted multi-sensor fusion SLAM approach tailored for dynamic environments. Our approach employs probability-propagation-based and PCA-based clustering techniques, supplemented by panoptic segmentation, to detect dynamic objects and remove the corresponding visual and LiDAR data, respectively. Furthermore, we introduce a module for robust real-time estimation of the 6D pose of dynamic objects. We test our approach on a publicly available dataset and show that PSMD-SLAM outperforms other SLAM algorithms in accuracy and robustness, especially in dynamic environments.
49

Wen, Weisong, Li-Ta Hsu, and Guohao Zhang. "Performance Analysis of NDT-based Graph SLAM for Autonomous Vehicle in Diverse Typical Driving Scenarios of Hong Kong". Sensors 18, no. 11 (14 November 2018): 3928. http://dx.doi.org/10.3390/s18113928.

Robust, lane-level positioning is essential for autonomous vehicles. As an irreplaceable sensor, light detection and ranging (LiDAR) can provide continuous and high-frequency pose estimation by means of mapping, on condition that enough environmental features are available. Mapping error can accumulate over time; therefore, LiDAR is usually integrated with other sensors. In diverse urban scenarios, environmental feature availability relies heavily on the traffic (moving and static objects) and the degree of urbanization. Common LiDAR-based simultaneous localization and mapping (SLAM) demonstrations tend to be studied in light-traffic, less urbanized areas. However, performance can be severely challenged in densely urbanized cities, such as Hong Kong, Tokyo, and New York, with dense traffic and tall buildings. This paper analyzes the performance of standalone NDT-based graph SLAM and its reliability estimation in diverse urban scenarios to evaluate the relationship between the performance of LiDAR-based SLAM and scenario conditions. The normal distributions transform (NDT) is employed to calculate the transformation between frames of point clouds; LiDAR odometry is then performed based on the calculated continuous transformations, and state-of-the-art graph-based optimization is used to integrate the LiDAR odometry measurements. 3D building models are generated, and a definition of the degree of urbanization based on skyplots is proposed. Experiments are implemented in scenarios with different degrees of urbanization and traffic conditions. The results show that the performance of NDT-based LiDAR SLAM is strongly related to the traffic conditions and the degree of urbanization: the best performance is achieved in a sparse area with normal traffic and the worst in a dense urban area, with 3D positioning error (the sum of horizontal and vertical) gradients of 0.024 m/s and 0.189 m/s, respectively. The analyzed results can serve as a comprehensive benchmark for evaluating the performance of standalone NDT-based graph SLAM in diverse scenarios, which is significant for multi-sensor fusion in autonomous vehicles.
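To make the NDT step concrete, here is a minimal, self-contained sketch of the idea (not the paper's implementation and not a full optimizer): the reference cloud is voxelized, a Gaussian is fitted per cell, and a candidate transform is scored by the Mahalanobis distances of the transformed points; real NDT registration minimizes such a score over the transform, typically with Newton iterations. Cell size and penalties are assumptions.

```python
import numpy as np
from collections import defaultdict

def build_ndt(cloud, cell=2.0, min_pts=5):
    """Voxelize a reference cloud and fit a Gaussian (mean, covariance)
    per cell -- the 'normal distributions' of NDT."""
    cells = defaultdict(list)
    for p in cloud:
        cells[tuple((p // cell).astype(int))].append(p)
    ndt = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= min_pts:
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-3 * np.eye(3)  # regularize near-flat cells
            ndt[k] = (mu, np.linalg.inv(cov))
    return ndt

def ndt_score(ndt, cloud, T, cell=2.0):
    """Mahalanobis-style score of a cloud under a 4x4 transform T; smaller
    is better. Minimizing this over T is the NDT registration step."""
    pts = cloud @ T[:3, :3].T + T[:3, 3]
    score = 0.0
    for p in pts:
        k = tuple((p // cell).astype(int))
        if k in ndt:
            mu, icov = ndt[k]
            d = p - mu
            score += float(d @ icov @ d)
        else:
            score += 10.0  # penalty for landing in empty space (assumption)
    return score
```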
50

Yang, Xin, Xiaohu Lin, Wanqiang Yao, Hongwei Ma, Junliang Zheng, and Bolin Ma. "A Robust LiDAR SLAM Method for Underground Coal Mine Robot with Degenerated Scene Compensation". Remote Sensing 15, no. 1 (29 December 2022): 186. http://dx.doi.org/10.3390/rs15010186.

Simultaneous localization and mapping (SLAM) is the key technology for the automation of intelligent mining equipment and the digitization of the mining environment. However, the shotcrete surfaces and symmetrical roadways of underground coal mines make light detection and ranging (LiDAR) SLAM prone to degeneration, which leads to failures in mobile robot localization and mapping. To address these issues, this paper proposes a robust LiDAR SLAM method that detects and compensates for degenerated scenes by integrating LiDAR and inertial measurement unit (IMU) data. First, a disturbance model is used to detect the direction and degree of degeneration caused by insufficient line- and plane-feature constraints, yielding a degeneration factor and degeneration vector. Second, the degenerated state is divided into rotation and translation; the pose obtained by IMU pre-integration is projected onto plane features and then used for local map matching to achieve a two-step degeneration compensation. Finally, a globally consistent LiDAR SLAM is implemented based on sliding-window factor graph optimization. Extensive experimental results show that the proposed method is more robust than LeGO-LOAM and LIO-SAM: the absolute position root mean square error (RMSE) is only 0.161 m, which provides an important reference for underground autonomous localization and navigation in intelligent mining and safety inspection.
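Degeneration detection of this kind is commonly posed as an eigen-analysis of the scan-matching Hessian: directions whose eigenvalues fall below a threshold are under-constrained (the classic corridor/tunnel failure mode). The sketch below shows that core idea under our own naming and thresholds; it is not the paper's disturbance model.

```python
import numpy as np

def degeneracy(jacobian, threshold=100.0):
    """Detect under-constrained directions in a scan-matching problem.

    jacobian: Nx6 stack of per-correspondence rows d(residual)/d(pose),
    pose ordered as (rx, ry, rz, tx, ty, tz). The eigenvalues of J^T J
    measure how strongly each direction is constrained; eigenvectors with
    small eigenvalues span the degenerate directions. The threshold is
    problem-specific and illustrative.
    """
    H = jacobian.T @ jacobian
    w, v = np.linalg.eigh(H)              # eigenvalues in ascending order
    degenerate = w < threshold
    return degenerate, v[:, degenerate]   # flags and degenerate directions

# Points on a single plane constrain only 3 of the 6 DOF: translation along
# the plane and rotation about its normal would be flagged as degenerate,
# which is when the IMU-based compensation should take over.
```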