Journal articles on the topic "SLAM mapping"

To see the other types of publications on this topic, follow the link: SLAM mapping.

Cite sources in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "SLAM mapping".

Next to every source in the list of references you will find the "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract, whenever these details are available in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Saat, Shahrizal, AN MF Airini, Muhammad Salihin Saealal, A. R. Wan Norhisyam, and M. S. Farees Ezwan. "Hector SLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM)." Journal of Engineering and Applied Sciences 14, no. 16 (November 10, 2019): 5610–15. http://dx.doi.org/10.36478/jeasci.2019.5610.5615.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Lu, Xiaoyun, Hu Wang, Shuming Tang, Huimin Huang, and Chuang Li. "DM-SLAM: Monocular SLAM in Dynamic Environments." Applied Sciences 10, no. 12 (June 21, 2020): 4252. http://dx.doi.org/10.3390/app10124252.

Abstract:
Many classic visual monocular SLAM (simultaneous localization and mapping) systems have been developed over the past decades, yet most of them fail when dynamic scenarios dominate. DM-SLAM is proposed for handling dynamic objects in environments based on ORB-SLAM2. This article mainly concentrates on two aspects. Firstly, we proposed a distribution and local-based RANSAC (Random Sample Consensus) algorithm (DLRSAC) to extract static features from the dynamic scene based on awareness of the nature difference between motion and static, which is integrated into initialization of DM-SLAM. Secondly, we designed a candidate map points selection mechanism based on neighborhood mutual exclusion to balance the accuracy of tracking camera pose and system robustness in motion scenes. Finally, we conducted experiments in the public dataset and compared DM-SLAM with ORB-SLAM2. The experiments corroborated the superiority of the DM-SLAM.
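
The DLRSAC step described above builds on the classic RANSAC loop: hypothesise a model from a minimal random sample, count inliers, and keep the best-supported hypothesis. As a rough illustration of that loop only (not the distribution- and local-based variant from the paper; the function name and defaults here are invented), a plain RANSAC line fit looks like this:

```python
import random

def ransac_line(points, iters=300, tol=0.5, seed=0):
    """Vanilla RANSAC for a 2-D line y = a*x + b.

    Hypothesise a line from two random points, count inliers within
    `tol`, and keep the hypothesis with the most support.  This is the
    classic loop that DLRSAC refines, not the paper's algorithm itself.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # degenerate vertical sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)         # candidate slope
        b = y1 - a * x1                   # candidate intercept
        inliers = [(x, y) for (x, y) in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With mostly static (inlier) correspondences, the recovered model survives a handful of gross outliers; DLRSAC additionally biases the sampling using the spatial distribution of features.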
3

Boyu, Kuang, Chen Yuheng, and Rana Zeeshan A. "OG-SLAM: A real-time and high-accurate monocular visual SLAM framework." Trends in Computer Science and Information Technology 7, no. 2 (July 26, 2022): 047–54. http://dx.doi.org/10.17352/tcsit.000050.

Abstract:
The challenge of improving the accuracy of monocular Simultaneous Localization and Mapping (SLAM) is considered, which widely appears in computer vision, autonomous robotics, and remote sensing. A new framework (ORB-GMS-SLAM (or OG-SLAM)) is proposed, which introduces the region-based motion smoothness into a typical Visual SLAM (V-SLAM) system. The region-based motion smoothness is implemented by integrating the Oriented Fast and Rotated Brief (ORB) features and the Grid-based Motion Statistics (GMS) algorithm into the feature matching process. The OG-SLAM significantly reduces the absolute trajectory error (ATE) on the key-frame trajectory estimation without compromising the real-time performance. This study compares the proposed OG-SLAM to an advanced V-SLAM system (ORB-SLAM2). The results indicate the highest accuracy improvement of almost 75% on a typical RGB-D SLAM benchmark. Compared with other ORB-SLAM2 settings (1800 key points), the OG-SLAM improves the accuracy by around 20% without losing performance in real-time. The OG-SLAM framework has a significant advantage over the ORB-SLAM2 system in that it is more robust for rotation, loop-free, and long ground-truth length scenarios. Furthermore, as far as the authors are aware, this framework is the first attempt to integrate the GMS algorithm into the V-SLAM.
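
The Grid-based Motion Statistics idea referenced above scores a feature match by how many neighbouring matches agree with it: under smooth motion, true matches from one image cell should land together in one cell of the other image. A heavily simplified, stdlib-only sketch of that counting step (not the actual GMS implementation used by OG-SLAM, which also aggregates over 3x3 cell neighbourhoods; `gms_filter` and its parameters are invented for illustration):

```python
from collections import Counter

def gms_filter(matches, cell=10, min_support=3):
    """Simplified grid-based motion-statistics filtering.

    Each match is a ((x1, y1), (x2, y2)) pair of pixel coordinates.
    Both images are divided into `cell`-sized grid cells; a match is
    kept only when at least `min_support` matches map the same source
    cell to the same target cell, i.e. when its neighbours move with it.
    """
    counts = Counter()
    keys = []
    for (x1, y1), (x2, y2) in matches:
        key = ((int(x1 // cell), int(y1 // cell)),
               (int(x2 // cell), int(y2 // cell)))
        keys.append(key)
        counts[key] += 1                  # support for this cell-to-cell motion
    return [m for m, k in zip(matches, keys) if counts[k] >= min_support]
```

Isolated matches whose motion no neighbour shares are discarded as likely mismatches, which is how the region-based motion smoothness enters the feature-matching stage.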
4

Peng, Tao, Dingnan Zhang, Don Lahiru Nirmal Hettiarachchi, and John Loomis. "An Evaluation of Embedded GPU Systems for Visual SLAM Algorithms." Electronic Imaging 2020, no. 6 (January 26, 2020): 325–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.6.iriacv-074.

Abstract:
Simultaneous Localization and Mapping (SLAM) solves the computational problem of estimating the location of a robot and the map of the environment. SLAM is widely used in the area of navigation, odometry, and mobile robot mapping. However, the performance and efficiency of the small industrial mobile robots and unmanned aerial vehicles (UAVs) are highly constrained to the battery capacity. Therefore, a mobile robot, especially a UAV, requires low power consumption while maintaining high performance. This paper demonstrates holistic and quantitative performance evaluations of embedded computing devices that run on the Nvidia Jetson platform. Evaluations are based on the execution of two state-of-the-art Visual SLAM algorithms, ORB-SLAM2 and OpenVSLAM, on Nvidia Jetson Nano, Nvidia Jetson TX2, and Nvidia Jetson Xavier.
5

Sun, Liuxin, Junyu Wei, Shaojing Su, and Peng Wu. "SOLO-SLAM: A Parallel Semantic SLAM Algorithm for Dynamic Scenes." Sensors 22, no. 18 (September 15, 2022): 6977. http://dx.doi.org/10.3390/s22186977.

Abstract:
Simultaneous localization and mapping (SLAM) is a core technology for mobile robots working in unknown environments. Most existing SLAM techniques can achieve good localization accuracy in static scenes, as they are designed based on the assumption that unknown scenes are rigid. However, real-world environments are dynamic, resulting in poor performance of SLAM algorithms. Thus, to optimize the performance of SLAM techniques, we propose a new parallel processing system, named SOLO-SLAM, based on the existing ORB-SLAM3 algorithm. By improving the semantic threads and designing a new dynamic point filtering strategy, SOLO-SLAM completes the tasks of semantic and SLAM threads in parallel, thereby effectively improving the real-time performance of SLAM systems. Additionally, we further enhance the filtering effect for dynamic points using a combination of regional dynamic degree and geometric constraints. The designed system adds a new semantic constraint based on semantic attributes of map points, which solves, to some extent, the problem of fewer optimization constraints caused by dynamic information filtering. Using the publicly available TUM dataset, SOLO-SLAM is compared with other state-of-the-art schemes. Our algorithm outperforms ORB-SLAM3 in accuracy (maximum improvement is 97.16%) and achieves better results than Dyna-SLAM with respect to time efficiency (maximum improvement is 90.07%).
6

Skrzypczyński, Piotr. "Simultaneous localization and mapping: A feature-based probabilistic approach." International Journal of Applied Mathematics and Computer Science 19, no. 4 (December 1, 2009): 575–88. http://dx.doi.org/10.2478/v10006-009-0045-z.

Abstract:
This article provides an introduction to Simultaneous Localization And Mapping (SLAM), with the focus on probabilistic SLAM utilizing a feature-based description of the environment. A probabilistic formulation of the SLAM problem is introduced, and a solution based on the Extended Kalman Filter (EKF-SLAM) is shown. Important issues of convergence, consistency, observability, data association and scaling in EKF-SLAM are discussed from both theoretical and practical points of view. Major extensions to the basic EKF-SLAM method and some recent advances in SLAM are also presented.
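
The EKF-SLAM cycle this survey covers — predict the joint robot-and-landmark state with the motion model, then correct state and covariance with each landmark observation — can be shown in a deliberately tiny form: one robot pose and one landmark on a line. This is only a 1-D caricature of the feature-based 2-D formulation, with invented names and noise values:

```python
def ekf_slam_1d(controls, measurements, q=0.01, r=0.04):
    """Tiny EKF-SLAM on a line: state s = [robot pose, landmark pos].

    Each step predicts the robot pose with odometry u (variance q),
    then corrects the joint state with a range measurement
    z = x_l - x_r (variance r), so the Jacobian is H = [-1, 1].
    """
    s = [0.0, measurements[0]]            # landmark initialised from first range
    P = [[0.0, 0.0], [0.0, 1.0]]          # pose certain, landmark uncertain
    for u, z in zip(controls, measurements[1:]):
        # --- prediction: only the robot moves ---
        s[0] += u
        P[0][0] += q
        # --- update: innovation nu, variance S = H P H^T + r ---
        nu = z - (s[1] - s[0])
        S = P[0][0] - 2.0 * P[0][1] + P[1][1] + r
        K = [(P[0][1] - P[0][0]) / S,     # Kalman gain K = P H^T / S
             (P[1][1] - P[0][1]) / S]
        s[0] += K[0] * nu
        s[1] += K[1] * nu
        # --- covariance: P = (I - K H) P ---
        A = [[1.0 + K[0], -K[0]], [K[1], 1.0 - K[1]]]
        P = [[A[0][0] * P[0][0] + A[0][1] * P[1][0],
              A[0][0] * P[0][1] + A[0][1] * P[1][1]],
             [A[1][0] * P[0][0] + A[1][1] * P[1][0],
              A[1][0] * P[0][1] + A[1][1] * P[1][1]]]
    return s, P
```

Even in this toy, the off-diagonal term P[0][1] becomes non-zero after the first update: robot and landmark estimates are correlated, which is the core structural property of EKF-SLAM.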
7

Song, Jooeun, and Joongjin Kook. "Visual SLAM Based Spatial Recognition and Visualization Method for Mobile AR Systems." Applied System Innovation 5, no. 1 (January 5, 2022): 11. http://dx.doi.org/10.3390/asi5010011.

Abstract:
The simultaneous localization and mapping (SLAM) market is growing rapidly with advances in Machine Learning, Drones, and Augmented Reality (AR) technologies. However, due to the absence of an open source-based SLAM library for developing AR content, most SLAM researchers are required to conduct their own research and development to customize SLAM. In this paper, we propose an open source-based Mobile Markerless AR System by building our own pipeline based on Visual SLAM. To implement the Mobile AR System of this paper, we use ORB-SLAM3 and Unity Engine and experiment with running our system in a real environment and confirming it in the Unity Engine’s Mobile Viewer. Through this experimentation, we can verify that the Unity Engine and the SLAM System are tightly integrated and communicate smoothly. In addition, we expect to accelerate the growth of SLAM technology through this research.
8

Zhang, Haoyang. "Deep Learning Applications in Simultaneous Localization and Mapping." Journal of Physics: Conference Series 2181, no. 1 (January 1, 2022): 012012. http://dx.doi.org/10.1088/1742-6596/2181/1/012012.

Abstract:
Simultaneous Location and Mapping (SLAM) is a research hotspot in the field of intelligent robots in recent years. Its processing object is the visual image. Deep learning has achieved great success in the field of computer vision, which makes the combination of deep learning and slam technology a feasible scheme. This paper summarizes some applications of deep learning in SLAM technology and introduces its latest research results. The advantages and disadvantages of deep-learning-based-SLAM technology are compared with those of traditional SLAM. Finally, the future development direction of SLAM plus deep learning technology is prospected.
9

Zhang, Zijie, and Jing Zeng. "A Survey on Visual Simultaneously Localization and Mapping." Frontiers in Computing and Intelligent Systems 1, no. 1 (August 2, 2022): 18–21. http://dx.doi.org/10.54097/fcis.v1i1.1089.

Abstract:
Visual simultaneous localization and mapping (VSLAM) is an important branch of intelligent robot technology, which refers to the use of cameras as the only external sensors to achieve self-localization in unfamiliar environments while creating environmental maps. The map constructed by slam is the basis for subsequent robots to achieve autonomous positioning, path planning and obstacle avoidance tasks. This paper introduces the development of visual Slam at home and abroad, the basic methods of visual slam, and the key problems in visual slam, and discusses the main development trends and research hotspots of visual slam.
10

Luo, Kaiqing, Manling Lin, Pengcheng Wang, Siwei Zhou, Dan Yin, and Haolan Zhang. "Improved ORB-SLAM2 Algorithm Based on Information Entropy and Image Sharpening Adjustment." Mathematical Problems in Engineering 2020 (September 23, 2020): 1–13. http://dx.doi.org/10.1155/2020/4724310.

Abstract:
Simultaneous Localization and Mapping (SLAM) has become a research hotspot in the field of robots in recent years. However, most visual SLAM systems are based on static assumptions which ignored motion effects. If image sequences are not rich in texture information or the camera rotates at a large angle, SLAM system will fail to locate and map. To solve these problems, this paper proposes an improved ORB-SLAM2 algorithm based on information entropy and sharpening processing. The information entropy corresponding to the segmented image block is calculated, and the entropy threshold is determined by the adaptive algorithm of image entropy threshold, and then the image block which is smaller than the information entropy threshold is sharpened. The experimental results show that compared with the ORB-SLAM2 system, the relative trajectory error decreases by 36.1% and the absolute trajectory error decreases by 45.1% compared with ORB-SLAM2. Although these indicators are greatly improved, the processing time is not greatly increased. To some extent, the algorithm solves the problem of system localization and mapping failure caused by camera large angle rotation and insufficient image texture information.
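
The entropy computation described above is standard Shannon entropy over a block's grey-level histogram; blocks whose entropy falls below an adaptive threshold are the texture-poor ones selected for sharpening. A minimal sketch, using the mean block entropy as a stand-in for the paper's adaptive image-entropy-threshold algorithm (function names are illustrative):

```python
import math

def block_entropy(block):
    """Shannon entropy (bits) of the grey-level histogram of one block."""
    n = len(block)
    hist = {}
    for v in block:
        hist[v] = hist.get(v, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def select_blocks_to_sharpen(blocks):
    """Indices of texture-poor blocks, i.e. entropy below the threshold.

    The mean block entropy stands in for the paper's adaptive
    threshold algorithm, which is not reproduced here.
    """
    ents = [block_entropy(b) for b in blocks]
    threshold = sum(ents) / len(ents)
    return [i for i, e in enumerate(ents) if e < threshold]
```

A flat block (one grey level) has entropy 0, a block of 16 distinct levels has entropy 4 bits; only the low-entropy blocks would then be passed to the sharpening filter, keeping the extra processing cost bounded.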
11

Guan, Hongliang, Chengyuan Qian, Tingsong Wu, Xiaoming Hu, Fuzhou Duan, and Xinyi Ye. "A Dynamic Scene Vision SLAM Method Incorporating Object Detection and Object Characterization." Sustainability 15, no. 4 (February 8, 2023): 3048. http://dx.doi.org/10.3390/su15043048.

Abstract:
Simultaneous localization and mapping (SLAM) based on RGB-D cameras has been widely used for robot localization and navigation in unknown environments. Most current SLAM methods are constrained by static environment assumptions and perform poorly in real-world dynamic scenarios. To improve the robustness and performance of SLAM systems in dynamic environments, this paper proposes a new RGB-D SLAM method for indoor dynamic scenes based on object detection. The method presented in this paper improves on the ORB-SLAM3 framework. First, we designed an object detection module based on YOLO v5 and relied on it to improve the tracking module of ORB-SLAM3 and the localization accuracy of ORB-SLAM3 in dynamic environments. The dense point cloud map building module was also included, which excludes dynamic objects from the environment map to create a static environment point cloud map with high readability and reusability. Full comparison experiments with the original ORB-SLAM3 and two representative semantic SLAM methods on the TUM RGB-D dataset show that: the method in this paper can run at 30+fps, the localization accuracy improved to varying degrees compared to ORB-SLAM3 in all four image sequences, and the absolute trajectory accuracy can be improved by up to 91.10%. The localization accuracy of the method in this paper is comparable to that of DS-SLAM, DynaSLAM and the two recent target detection-based SLAM algorithms, but it runs faster. The RGB-D SLAM method proposed in this paper, which combines the most advanced object detection method and visual SLAM framework, outperforms other methods in terms of localization accuracy and map construction in a dynamic indoor environment and has a certain reference value for navigation, localization, and 3D reconstruction.
12

Boal, Jaime, Álvaro Sánchez-Miralles, and Álvaro Arranz. "Topological simultaneous localization and mapping: a survey." Robotica 32, no. 5 (December 3, 2013): 803–21. http://dx.doi.org/10.1017/s0263574713001070.

Abstract:
One of the main challenges in robotics is navigating autonomously through large, unknown, and unstructured environments. Simultaneous localization and mapping (SLAM) is currently regarded as a viable solution for this problem. As the traditional metric approach to SLAM is experiencing computational difficulties when exploring large areas, increasing attention is being paid to topological SLAM, which is bound to provide sufficiently accurate location estimates, while being significantly less computationally demanding. This paper intends to provide an introductory overview of the most prominent techniques that have been applied to topological SLAM in terms of feature detection, map matching, and map fusion.
13

Karam, S., V. Lehtola, and G. Vosselman. "STRATEGIES TO INTEGRATE IMU AND LIDAR SLAM FOR INDOOR MAPPING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2020 (August 3, 2020): 223–30. http://dx.doi.org/10.5194/isprs-annals-v-1-2020-223-2020.

Abstract:
In recent years, the importance of indoor mapping increased in a wide range of applications, such as facility management and mapping hazardous sites. The essential technique behind indoor mapping is simultaneous localization and mapping (SLAM) because SLAM offers suitable positioning estimates in environments where satellite positioning is not available. State-of-the-art indoor mobile mapping systems employ Visual-based SLAM or LiDAR-based SLAM. However, Visual-based SLAM is sensitive to textureless environments and, similarly, LiDAR-based SLAM is sensitive to a number of pose configurations where the geometry of laser observations is not strong enough to reliably estimate the six-degree-of-freedom (6DOF) pose of the system. In this paper, we present different strategies that utilize the benefits of the inertial measurement unit (IMU) in the pose estimation and support LiDAR-based SLAM in overcoming these problems. The proposed strategies have been implemented and tested using different datasets and our experimental results demonstrate that the proposed methods do indeed overcome these problems. We conclude that IMU observations increase the robustness of SLAM, which is expected, but also that the best reconstruction accuracy is obtained not with a blind use of all observations but by filtering the measurements with a proposed reliability measure. To this end, our results show promising improvements in reconstruction accuracy.
14

Wang, Yin-Tien, Chen-Tung Chi, and Ying-Chieh Feng. "Robot mapping using local invariant feature detectors." Engineering Computations 31, no. 2 (February 25, 2014): 297–316. http://dx.doi.org/10.1108/ec-01-2013-0024.

Abstract:
Purpose – To build a persistent map with visual landmarks is one of the most important steps for implementing the visual simultaneous localization and mapping (SLAM). The corner detector is a common method utilized to detect visual landmarks for constructing a map of the environment. However, due to the scale-variant characteristic of corner detection, extensive computational cost is needed to recover the scale and orientation of corner features in SLAM tasks. The purpose of this paper is to build the map using a local invariant feature detector, namely speeded-up robust features (SURF), to detect scale- and orientation-invariant features as well as provide a robust representation of visual landmarks for SLAM. Design/methodology/approach – SURF are scale- and orientation-invariant features which have higher repeatability than that obtained by other detection methods. Furthermore, SURF algorithms have better processing speed than other scale-invariant detection method. The procedures of detection, description and matching of regular SURF algorithms are modified in this paper in order to provide a robust representation of visual landmarks in SLAM. The sparse representation is also used to describe the environmental map and to reduce the computational complexity in state estimation using extended Kalman filter (EKF). Furthermore, the effective procedures of data association and map management for SURF features in SLAM are also designed to improve the accuracy of robot state estimation. Findings – Experimental works were carried out on an actual system with binocular vision sensors to prove the feasibility and effectiveness of the proposed algorithms. EKF SLAM with the modified SURF algorithms was applied in the experiments including the evaluation of accurate state estimation as well as the implementation of large-area SLAM. The performance of the modified SURF algorithms was compared with those obtained by regular SURF algorithms. The results show that the SURF with less-dimensional descriptors is the most suitable representation of visual landmarks. Meanwhile, the integrated system is successfully validated to fulfill the capabilities of visual SLAM system. Originality/value – The contribution of this paper is the novel approach to overcome the problem of recovering the scale and orientation of visual landmarks in SLAM tasks. This research also extends the usability of local invariant feature detectors in SLAM tasks by utilizing its robust representation of visual landmarks. Furthermore, data association and map management designed for SURF-based mapping in this paper also give another perspective for improving the robustness of SLAM systems.
15

Song, Jooeun, and Joongjin Kook. "Mapping Server Collaboration Architecture Design with OpenVSLAM for Mobile Devices." Applied Sciences 12, no. 7 (April 5, 2022): 3653. http://dx.doi.org/10.3390/app12073653.

Abstract:
SLAM technology, which is used for spatial recognition in autonomous driving and robotics, has recently emerged as an important technology to provide high-quality AR contents on mobile devices due to the spread of XR and metaverse technologies. In this paper, we designed, implemented, and verified the SLAM system that can be used on mobile devices. Mobile SLAM is composed of a stand-alone type that directly performs SLAM operation on a mobile device and a mapping server type that additionally configures a mapping server based on FastAPI to perform SLAM operation on the server and transmits data for map visualization to a mobile device. The mobile SLAM system proposed in this paper mixes the two types in order to make SLAM operation and map generation more efficient. The stand-alone type of SLAM system was configured as an Android app by porting the OpenVSLAM library to the Unity engine, and the map generation and performance were evaluated on desktop PCs and mobile devices. The mobile SLAM system in this paper is an open-source project, so it is expected to help develop AR contents based on SLAM in a mobile environment.
16

Alsadik, Bashar, and Samer Karam. "The Simultaneous Localization and Mapping (SLAM)-An Overview." Journal of Applied Science and Technology Trends 2, no. 04 (November 18, 2021): 120–31. http://dx.doi.org/10.38094/jastt204117.

Abstract:
Positioning is a need for many applications related to mapping and navigation either in civilian or military domains. The significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, etc. positively influenced to solve the positioning problem efficiently and instantaneously. Accordingly, the mentioned development empowered the applications and advancement of autonomous navigation. One of the most interesting developed positioning techniques is what is called in robotics as the Simultaneous Localization and Mapping SLAM. The SLAM problem solution has witnessed a quick improvement in the last decades either using active sensors like the RAdio Detection And Ranging (Radar) and Light Detection and Ranging (LiDAR) or passive sensors like cameras. Definitely, positioning and mapping is one of the main tasks for Geomatics engineers, and therefore it's of high importance for them to understand the SLAM topic which is not easy because of the huge documentation and algorithms available and the various SLAM solutions in terms of the mathematical models, complexity, the sensors used, and the type of applications. In this paper, a clear and simplified explanation is introduced about SLAM from a Geomatical viewpoint avoiding going into the complicated algorithmic details behind the presented techniques. In this way, a general overview of SLAM is presented showing the relationship between its different components and stages like the core part of the front-end and back-end and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization either using visual or LiDAR SLAM and introduce a summary of the deep learning efficient contribution to the SLAM problem. Finally, we address examples of some existing practical applications of SLAM in our reality.
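
The back-end pose-graph optimisation this overview mentions minimises the disagreement between relative pose constraints (odometry edges and loop closures). A toy 1-D version with plain gradient descent makes the idea concrete; real back-ends such as g2o, GTSAM, or Ceres solve the 2-D/3-D analogue with Gauss-Newton on manifolds, so this sketch (with invented names and defaults) is illustrative only:

```python
def optimize_pose_graph(n, edges, iters=2000, lr=0.1):
    """Toy 1-D pose-graph back-end.

    Poses x_0..x_{n-1} live on a line; each edge (i, j, z) says
    "x_j - x_i should equal z" (odometry or a loop closure).  Plain
    gradient descent on the sum of squared residuals, with x_0 pinned
    to fix the gauge freedom.
    """
    x = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, z in edges:
            res = x[j] - x[i] - z         # constraint residual
            grad[j] += 2.0 * res
            grad[i] -= 2.0 * res
        for k in range(1, n):             # x[0] stays fixed
            x[k] -= lr * grad[k]
    return x
```

Three odometry edges of 1.0 plus one loop-closure edge of 2.7 over the whole chain leave 0.3 of accumulated drift, which the optimiser spreads evenly along the trajectory instead of dumping it at the last pose.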
17

Alsadik, Bashar, and Samer Karam. "The Simultaneous Localization and Mapping (SLAM)-An Overview." Surveying and Geospatial Engineering Journal 2, no. 01 (May 18, 2021): 01–12. http://dx.doi.org/10.38094/sgej1027.

Abstract:
Positioning is a need for many applications related to mapping and navigation either in civilian or military domains. The significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, etc. positively influenced to solve the positioning problem efficiently and instantaneously. Accordingly, the mentioned development empowered the applications and advancement of autonomous navigation. One of the most interesting developed positioning techniques is what is called in robotics as the Simultaneous Localization and Mapping SLAM. The SLAM problem solution has witnessed a quick improvement in the last decades either using active sensors like the RAdio Detection And Ranging (Radar) and Light Detection and Ranging (LiDAR) or passive sensors like cameras. Definitely, positioning and mapping is one of the main tasks for Geomatics engineers, and therefore it's of high importance for them to understand the SLAM topic which is not easy because of the huge documentation and algorithms available and the various SLAM solutions in terms of the mathematical models, complexity, the sensors used, and the type of applications. In this paper, a clear and simplified explanation is introduced about SLAM from a Geomatical viewpoint avoiding going into the complicated algorithmic details behind the presented techniques. In this way, a general overview of SLAM is presented showing the relationship between its different components and stages like the core part of the front-end and back-end and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization either using visual or LiDAR SLAM and introduce a summary of the deep learning efficient contribution to the SLAM problem. Finally, we address examples of some existing practical applications of SLAM in our reality.
18

Dai, Yong, Jiaxin Wu, and Duo Wang. "A Review of Common Techniques for Visual Simultaneous Localization and Mapping." Journal of Robotics 2023 (February 17, 2023): 1–21. http://dx.doi.org/10.1155/2023/8872822.

Abstract:
Mobile robots are widely used in medicine, agriculture, home furnishing, and industry. Simultaneous localization and mapping (SLAM) is the working basis of mobile robots, so it is extremely necessary and meaningful for making researches on SLAM technology. SLAM technology involves robot mechanism kinematics, logic, mathematics, perceptual detection, and other fields. However, it faces the problem of classifying the technical content, which leads to diverse technical frameworks of SLAM. Among all sorts of SLAM, visual SLAM (V-SLAM) has become the key academic research due to its advantages of low price, easy installation, and simple algorithm model. Firstly, we illustrate the superiority of V-SLAM by comparing it with other localization techniques. Secondly, we sort out some open-source V-SLAM algorithms and compare their real-time performance, robustness, and innovation. Then, we analyze the frameworks, mathematical models, and related basic theoretical knowledge of V-SLAM. Meanwhile, we review the related works from four aspects: visual odometry, back-end optimization, loop closure detection, and mapping. Finally, we prospect the future development trend and make a foundation for researchers to expand works in the future. All in all, this paper classifies each module of V-SLAM in detail and provides better readability to readers. This is undoubtedly the most comprehensive review of V-SLAM recently.
19

Wang, J., and M. Shahbazi. "MAPPING QUALITY EVALUATION OF MONOCULAR SLAM SOLUTIONS FOR MICRO AERIAL VEHICLES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 413–20. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-413-2019.

Abstract:
Monocular simultaneous localization and mapping (SLAM) attracted much attention in the mobile-robotics domain over the past decades along with the advancements of small-format, consumer-grade digital cameras. This is especially the case for micro air vehicles (MAV) due to their payload and power limitations. The quality of global 3D reconstruction by SLAM solutions is a critical factor in occupancy-grid mapping, obstacle avoidance, and map representation. Although several benchmarks have been created in the past to evaluate the quality of vision-based localization and trajectory-estimation, the quality of mapping products has been rarely studied. This paper evaluates the quality of three state-of-the-art open-source monocular SLAM solutions including LSD-SLAM, ORB-SLAM, and LDSO in terms of the geometric accuracy of the global mapping. Since there is no ground-truth information of the testing environment in existing visual SLAM benchmark datasets (e.g., EuRoC, TUM, and KITTI), an evaluation dataset using a quadcopter and a terrestrial laser scanner is created in this work. The dataset is composed of the image data extracted from the recorded videos by flying a drone in the test environment and the high-fidelity point clouds of the test area acquired by a terrestrial laser scanner as the ground truth reference. The mapping quality evaluation of the three SLAM algorithms was mainly conducted on geometric accuracy comparisons by calculating the deviation distance between each SLAM-derived point clouds and the laser-scanned reference. The mapping quality was also discussed with respect to their noise levels as well as further applications.
20

Lahemer and Rad. "An Adaptive Augmented Vision-Based Ellipsoidal SLAM for Indoor Environments." Sensors 19, no. 12 (June 21, 2019): 2795. http://dx.doi.org/10.3390/s19122795.

Abstract:
In this paper, the problem of Simultaneous Localization And Mapping (SLAM) is addressed via a novel augmented landmark vision-based ellipsoidal SLAM. The algorithm is implemented on a NAO humanoid robot and is tested in an indoor environment. The main feature of the system is the implementation of SLAM with a monocular vision system. Distinguished landmarks referred to as NAOmarks are employed to localize the robot via its monocular vision system. We henceforth introduce the notion of robotic augmented reality (RAR) and present a monocular Extended Kalman Filter (EKF)/ellipsoidal SLAM in order to improve the performance and alleviate the computational effort, to provide landmark identification, and to simplify the data association problem. The proposed SLAM algorithm is implemented in real-time to further calibrate the ellipsoidal SLAM parameters, noise bounding, and to improve its overall accuracy. The augmented EKF/ellipsoidal SLAM algorithms are compared with the regular EKF/ellipsoidal SLAM methods and the merits of each algorithm is also discussed in the paper. The real-time experimental and simulation studies suggest that the adaptive augmented ellipsoidal SLAM is more accurate than the conventional EKF/ellipsoidal SLAMs.
21

Zhang, Zijing, Fei Zhang, and Chuantang Ji. "Multi-robot cardinality-balanced multi-Bernoulli filter simultaneous localization and mapping method." Measurement Science and Technology 33, no. 3 (December 23, 2021): 035101. http://dx.doi.org/10.1088/1361-6501/ac3784.

Abstract:
In order to improve the simultaneous localization and mapping (SLAM) accuracy of mobile robots in complex indoor environments, the multi-robot cardinality-balanced multi-Bernoulli filter SLAM (MR-CBMber-SLAM) method is proposed. First of all, this method introduces a multi-Bernoulli filter based on the random finite set (RFS) theory to solve the complex data association problem. This method aims to overcome the problem that the multi-Bernoulli filter will overestimate the aspect of SLAM map feature estimation, and combines the strategy of balancing cardinality with a multi-Bernoulli filter. What is more, in order to further improve the accuracy and operating efficiency of SLAM, a multi-robot strategy and a multi-robot Gaussian information-fusion method are proposed. In the experiment, the MR-CBMber-SLAM method is compared with the multi-vehicle probability hypothesis density SLAM (MV-PHD-SLAM) method. The experimental results show that the MR-CBMber-SLAM method is better than MV-PHD-SLAM method. Therefore, it effectively verifies that the MR-CBMber-SLAM method is more adaptable to a complex indoor environment.
22

Liu, Liming, and Jonathan M. Aitken. "HFNet-SLAM: An Accurate and Real-Time Monocular SLAM System with Deep Features." Sensors 23, no. 4 (February 13, 2023): 2113. http://dx.doi.org/10.3390/s23042113.

Abstract:
Image tracking and retrieval strategies are of vital importance in visual Simultaneous Localization and Mapping (SLAM) systems. For most state-of-the-art systems, hand-crafted features and bag-of-words (BoW) algorithms are the common solutions. Recent research reports the vulnerability of these traditional algorithms in complex environments. To replace these methods, this work proposes HFNet-SLAM, an accurate and real-time monocular SLAM system built on the ORB-SLAM3 framework incorporating deep convolutional neural networks (CNNs). This work provides a pipeline of feature extraction, keypoint matching, and loop detection fully based on features from CNNs. The performance of this system has been validated on public datasets against other state-of-the-art algorithms. The results reveal that HFNet-SLAM achieves the lowest errors among systems available in the literature. Notably, HFNet-SLAM obtains an average accuracy of 2.8 cm on the EuRoC dataset in pure visual configuration. It also doubles the accuracy of ORB-SLAM3 in the medium and large environments of the TUM-VI dataset. Furthermore, with the optimisation of TensorRT technology, the entire system can run in real-time at 50 FPS.
23

You, Yingxuan, Peng Wei, Jialun Cai, Weibo Huang, Risheng Kang, and Hong Liu. "MISD-SLAM: Multimodal Semantic SLAM for Dynamic Environments." Wireless Communications and Mobile Computing 2022 (April 5, 2022): 1–13. http://dx.doi.org/10.1155/2022/7600669.

Abstract:
Simultaneous localization and mapping (SLAM) is one of the most essential technologies for mobile robots. Although great progress has been made in the field of SLAM in recent years, a number of challenges remain for SLAM in dynamic environments and high-level semantic scenes. In this paper, we propose a novel multimodal semantic SLAM system (MISD-SLAM), which removes the dynamic objects in the environment and reconstructs the static background with semantic information. MISD-SLAM builds three main processes: instance segmentation, dynamic pixel removal, and semantic 3D map construction. An instance segmentation network is used to provide instance-level semantic knowledge of the surrounding environment. The ORB features located on the predefined dynamic objects are removed directly. In this way, MISD-SLAM effectively reduces the impact of dynamic objects to provide precise pose estimation. Then, combining a multiview geometry constraint with the K-means clustering algorithm, our system removes undefined but moving pixels. Meanwhile, a 3D dense point cloud map with semantic information is reconstructed, which recovers the static background without the corruption of dynamic objects. Finally, we evaluate MISD-SLAM against ORB-SLAM3 and state-of-the-art dynamic SLAM systems on TUM RGB-D datasets and in real-world dynamic indoor environments. The results indicate that our method significantly improves localization accuracy and system robustness, especially in highly dynamic environments.
24

Ni, Jianjun, Li Wang, Xiaotian Wang, and Guangyi Tang. "An Improved Visual SLAM Based on Map Point Reliability under Dynamic Environments." Applied Sciences 13, no. 4 (February 20, 2023): 2712. http://dx.doi.org/10.3390/app13042712.

Abstract:
The visual simultaneous localization and mapping (SLAM) method under dynamic environments is a hot and challenging issue in the robotic field. The oriented FAST and Rotated BRIEF (ORB) SLAM algorithm is one of the most effective methods. However, the traditional ORB-SLAM algorithm cannot perform well in dynamic environments due to the feature points of dynamic map points at different timestamps being incorrectly matched. To deal with this problem, an improved visual SLAM method built on ORB-SLAM3 is proposed in this paper. In the proposed method, an improved new map points screening strategy and the repeated exiting map points elimination strategy are presented and combined to identify obvious dynamic map points. Then, a concept of map point reliability is introduced in the ORB-SLAM3 framework. Based on the proposed reliability calculation of the map points, a multi-period check strategy is used to identify the unobvious dynamic map points, which can further deal with the dynamic problem in visual SLAM, for those unobvious dynamic objects. Finally, various experiments are conducted on the challenging dynamic sequences of the TUM RGB-D dataset to evaluate the performance of our visual SLAM method. The experimental results demonstrate that our SLAM method can run at an average time of 17.51 ms per frame. Compared with ORB-SLAM3, the average RMSE of the absolute trajectory error (ATE) of the proposed method in nine dynamic sequences of the TUM RGB-D dataset can be reduced by 63.31%. Compared with the real-time dynamic SLAM methods, the proposed method can obtain state-of-the-art performance. The results prove that the proposed method is a real-time visual SLAM, which is effective in dynamic environments.
25

Tsubouchi, Takashi. "Introduction to Simultaneous Localization and Mapping." Journal of Robotics and Mechatronics 31, no. 3 (June 20, 2019): 367–74. http://dx.doi.org/10.20965/jrm.2019.p0367.

Abstract:
Simultaneous localization and mapping (SLAM) forms the core of the technology that supports mobile robots. With SLAM, as a robot moves through an actual environment, real-world information is imported to an onboard computer via sensors, and the robot's physical location and a map of its surrounding environment are created. SLAM is a major topic in mobile robot research. Although the information is derived from the real world, it is handled within a probabilistic formulation supported by a mathematical description. The concept therefore contributes not only to research and development on mobile robots, but also to training in the mathematics and computer implementation of position estimation and map creation. This article focuses on SLAM technology, including a brief overview of its history, insights from the author, and, finally, a specific example in which the author was involved.
26

Zheng, Shuran, Jinling Wang, Chris Rizos, Weidong Ding, and Ahmed El-Mowafy. "Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis." Remote Sensing 15, no. 4 (February 20, 2023): 1156. http://dx.doi.org/10.3390/rs15041156.

Abstract:
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet some key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map, online Lidar scan matching, and a tightly fused inertial system.
27

Daoud, Hayyan Afeef, Aznul Qalid Md. Sabri, Chu Kiong Loo, and Ali Mohammed Mansoor. "SLAMM: Visual monocular SLAM with continuous mapping using multiple maps." PLOS ONE 13, no. 4 (April 27, 2018): e0195878. http://dx.doi.org/10.1371/journal.pone.0195878.

28

Hastürk, Özgür, and Aydan M. Erkmen. "DUDMap: 3D RGB-D mapping for dense, unstructured, and dynamic environment." International Journal of Advanced Robotic Systems 18, no. 3 (May 1, 2021): 172988142110161. http://dx.doi.org/10.1177/17298814211016178.

Abstract:
The simultaneous localization and mapping (SLAM) problem has been extensively studied by researchers in the field of robotics; however, conventional approaches to mapping assume a static environment. The static assumption is valid only in a small region, and it limits the application of visual SLAM in dynamic environments. Recently proposed state-of-the-art SLAM solutions for dynamic environments use different semantic segmentation methods such as Mask R-CNN and SegNet; however, these frameworks are based on a sparse mapping framework (ORB-SLAM). In addition, the segmentation process increases the computational load, which makes these SLAM algorithms unsuitable for real-time mapping. Therefore, there is no effective dense RGB-D SLAM method for real-world unstructured and dynamic environments. In this study, we propose a novel real-time dense SLAM method for dynamic environments, where the 3D reconstruction error is used to identify static and dynamic classes having a generalized Gaussian distribution. Our proposed approach requires neither explicit object tracking nor an object classifier, which makes it robust to any type of moving object and suitable for real-time mapping. Our method eliminates repeated views and uses consistent data, which enhances the performance of volumetric fusion. For completeness, we compare our proposed method on several publicly available highly dynamic datasets to demonstrate the versatility and robustness of our approach. Experiments show that its tracking performance is better than other dense and dynamic SLAM approaches.
29

Debeunne, César, and Damien Vivet. "A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping." Sensors 20, no. 7 (April 7, 2020): 2068. http://dx.doi.org/10.3390/s20072068.

Abstract:
Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used for many applications including mobile robotics, self-driving cars, unmanned aerial vehicles, or autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to be relatively the same as ten or twenty years ago. Moreover, few research works focus on vision-LiDAR approaches, whereas such a fusion would have many advantages. Indeed, hybridized solutions offer improvements in the performance of SLAM, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey on visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state-of-the-art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities.
30

Abdul-Rahman, Shuzlina, Mohamad Soffi Abd Razak, Aliya Hasanah Binti Mohd Mushin, Raseeda Hamzah, Nordin Abu Bakar, and Zalilah Abd Aziz. "Simulation of simultaneous localization and mapping using 3D point cloud data." Indonesian Journal of Electrical Engineering and Computer Science 16, no. 2 (November 1, 2019): 941. http://dx.doi.org/10.11591/ijeecs.v16.i2.pp941-949.

Abstract:
This paper presents a simulation study of Simultaneous Localization and Mapping (SLAM) using 3D point cloud data from Light Detection and Ranging (LiDAR) technology. Methods such as simulation are useful to simplify the process of learning algorithms, particularly when collecting and annotating large volumes of real data is impractical and expensive. In this study, a map of a given environment was constructed on the Robot Operating System platform with the Gazebo simulator. The paper begins by presenting the most popular algorithms currently used in SLAM, namely the Extended Kalman Filter, Graph SLAM, and FastSLAM. The study performed simulations using standard SLAM with Turtlebot and Husky robots. The Husky robot was further compared with the ACML algorithm. The results showed that Hector SLAM could reach the goal faster than the ACML algorithm in a pre-defined map. Further studies in this field with other SLAM algorithms would certainly be beneficial to many parties, given the demands of robotic applications.
31

Nguyen Hoang Thuy, Trang, and Stanislav Shydlouski. "Situations in Construction of 3D Mapping for Slam." MATEC Web of Conferences 155 (2018): 01055. http://dx.doi.org/10.1051/matecconf/201815501055.

Abstract:
Nowadays, the simultaneous localization and mapping (SLAM) approach has become one of the most advanced engineering methods for mobile robots to build maps of unknown or inaccessible spaces, updating the map of an area while keeping track of the robot's current location and distance travelled. The motivation behind this paper is mainly to help better understand SLAM and the current state of SLAM research worldwide. Through this, we seek the optimal algorithm for robots moving in three dimensions.
32

Bauersfeld, Leonard, and Guillaume Ducard. "RTOB SLAM: Real-Time Onboard Laser-Based Localization and Mapping." Vehicles 3, no. 4 (November 16, 2021): 778–89. http://dx.doi.org/10.3390/vehicles3040046.

Abstract:
RTOB-SLAM is a new low-computation framework for real-time onboard simultaneous localization and mapping (SLAM) and obstacle avoidance for autonomous vehicles. A low-resolution 2D laser scanner is used, and a small form-factor computer performs all computations onboard. The SLAM process is based on laser scan matching with the iterative closest point technique to estimate the vehicle's current position by aligning the new scan with the map. This paper describes a new method which uses only a small subsample of the global map for scan matching, which improves performance and allows the map to adapt to a dynamic environment by partly forgetting the past. A detailed comparison between this method and current state-of-the-art SLAM frameworks is given, together with a methodology for choosing the parameters of RTOB-SLAM. RTOB-SLAM has been implemented in ROS and performs well in various simulations and real experiments.
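The scan-matching step described above, aligning a new scan with a subsample of the map by iterative closest point, can be sketched as follows. This is a generic 2D point-to-point ICP with closed-form SVD alignment, not the RTOB-SLAM implementation; point sets and iteration count are illustrative.

```python
import numpy as np

def icp_2d(scan, ref, iters=30):
    """Generic 2D point-to-point ICP: repeatedly match each (transformed)
    scan point to its nearest reference point, then solve for the best
    rigid increment in closed form (Kabsch/SVD)."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = scan @ R.T + t
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((moved[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
        nn = ref[d2.argmin(axis=1)]
        # closed-form rigid alignment of `moved` onto `nn`
        mc, nc = moved.mean(axis=0), nn.mean(axis=0)
        H = (moved - mc).T @ (nn - nc)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:      # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = nc - Ri @ mc
        R, t = Ri @ R, Ri @ t + ti     # accumulate the increment
    return R, t
```

With a good initial guess (the previous pose, as in odometry-style scan matching), a few iterations recover the rigid transform that maps the scan into the map frame.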
33

Filip, Iulian, Juhyun Pyo, Meungsuk Lee, and Hangil Joe. "LiDAR SLAM with a Wheel Encoder in a Featureless Tunnel Environment." Electronics 12, no. 4 (February 17, 2023): 1002. http://dx.doi.org/10.3390/electronics12041002.

Abstract:
Simultaneous localization and mapping (SLAM) represents a crucial algorithm in the autonomous navigation of ground vehicles. Several studies have been conducted to improve the SLAM algorithm using various sensors and robot platforms. However, only a few works have focused on applications inside low-illumination, featureless tunnel environments. In this work, we present an improved SLAM algorithm using wheel encoder data from an autonomous ground vehicle (AGV) to obtain robust performance in a featureless tunnel environment. The improved SLAM system uses FAST-LIO2 LiDAR SLAM as the baseline algorithm, and the additional wheel encoder data are integrated into the baseline SLAM structure using the extended Kalman filter (EKF) algorithm. The EKF algorithm is applied after the LiDAR odometry estimation and before the mapping process of FAST-LIO2. The prediction step uses the wheel encoder and inertial measurement unit (IMU) data, while the correction step uses the FAST-LIO2 LiDAR state estimation. We used an AGV to conduct experiments in flat and inclined terrain sections of a tunnel environment. The results showed that the mapping and localization processes in the SLAM algorithm were greatly improved in a featureless tunnel environment, considering both inclined and flat terrain.
34

Liu, Zhiying, Xiren Miao, Zhiqiang Xie, Hao Jiang, and Jing Chen. "Power Tower Inspection Simultaneous Localization and Mapping: A Monocular Semantic Positioning Approach for UAV Transmission Tower Inspection." Sensors 22, no. 19 (September 28, 2022): 7360. http://dx.doi.org/10.3390/s22197360.

Abstract:
Realizing autonomous unmanned aerial vehicle (UAV) inspection is of great significance for power line maintenance. This paper introduces a scheme that uses the structure of a tower to realize visual geographical positioning of a UAV for tower inspection and presents a monocular semantic simultaneous localization and mapping (SLAM) framework termed PTI-SLAM (power tower inspection SLAM) to cope with the challenges of the tower inspection scene. The proposed scheme utilizes prior knowledge of tower component geolocation and regards geographical positioning as the estimation of the transformation between SLAM and geographic coordinates. To accomplish robust positioning and semi-dense semantic mapping with limited computing power, PTI-SLAM combines the feature-based SLAM method with a fusion-based direct method and adopts a loosely coupled architecture of a semantic task and a SLAM task. The fusion-based direct method is specially designed to overcome the fragility of the direct method against adverse conditions in the inspection scene. Experiment results show that PTI-SLAM inherits the robustness advantage of the feature-based method and the semi-dense mapping ability of the direct method and achieves decimeter-level real-time positioning in the airborne system. The experiment concerning geographical positioning indicates more competitive accuracy compared to previous visual approaches and manual UAV operation, demonstrating the potential of PTI-SLAM.
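Treating geographical positioning as estimating the transform between the SLAM frame and geographic coordinates can be illustrated with a closed-form Umeyama similarity fit from matched landmark positions. This is a generic sketch under that interpretation, not PTI-SLAM's estimator; the point sets below are invented.

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form (Umeyama) estimate of scale s, rotation R, and
    translation t such that dst ~= s * R @ src + t, from matched points.
    Here `src` would be landmark positions in the (scale-ambiguous)
    monocular SLAM frame and `dst` their surveyed geographic coordinates."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / n                       # cross-covariance
    U, sig, Vt = np.linalg.svd(cov)
    D = np.eye(cov.shape[0])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[-1, -1] = -1.0                      # reflection guard
    R = U @ D @ Vt
    s = np.trace(np.diag(sig) @ D) / (sc ** 2).sum() * n
    t = mu_d - s * R @ mu_s
    return s, R, t
```

The recovered scale also resolves the scale ambiguity inherent in monocular SLAM, which is why at least a few geolocated landmarks are needed.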
35

Wang, Yi, Haoyu Bu, Xiaolong Zhang, and Jia Cheng. "YPD-SLAM: A Real-Time VSLAM System for Handling Dynamic Indoor Environments." Sensors 22, no. 21 (November 7, 2022): 8561. http://dx.doi.org/10.3390/s22218561.

Abstract:
To address the problem that simultaneous localization and mapping (SLAM) is greatly disturbed by dynamic elements in real environments, this paper proposes a real-time Visual SLAM (VSLAM) algorithm for dynamic indoor environments. Firstly, a lightweight YoloFastestV2 deep learning model combined with the NCNN and Mobile Neural Network (MNN) inference frameworks is used to obtain preliminary semantic information from images. Dynamic feature points are removed according to the epipolar constraint and the dynamic properties of objects between consecutive frames. Since the reduced number of feature points after rejection affects pose estimation, this paper combines Cylinder and Plane Extraction (CAPE) plane detection: planes are generated from depth maps, and planar and in-plane point constraints are then introduced into the nonlinear optimization of SLAM. Finally, the algorithm is tested on the publicly available TUM (RGB-D) dataset, and the average improvement in localization accuracy over ORB-SLAM2, DS-SLAM, and RDMO-SLAM is about 91.95%, 27.21%, and 30.30% under dynamic sequences, respectively. The single-frame tracking time of the whole system is only 42.68 ms, which is 44.1%, 14.6%, and 34.33% faster than DS-SLAM, RDMO-SLAM, and RDS-SLAM, respectively. The proposed system significantly increases processing speed, performs better in real time, and is easily deployed on various platforms.
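The epipolar-constraint test mentioned above can be illustrated in isolation: a static point's match in the next frame should lie on its epipolar line, so a large point-to-line distance flags a dynamic point. The fundamental matrix and points below are toy values, not from the paper.

```python
import numpy as np

def epipolar_residual(F, p1, p2):
    """Distance (in pixels) of homogeneous point p2 from the epipolar
    line of p1 under fundamental matrix F. Static scene points give
    near-zero residuals; independently moving points generally do not."""
    line = F @ p1                              # epipolar line (a, b, c)
    return abs(line @ p2) / np.hypot(line[0], line[1])

def is_dynamic(F, p1, p2, thresh_px=1.0):
    """Flag a correspondence as dynamic if it violates the constraint."""
    return epipolar_residual(F, p1, p2) > thresh_px
```

Note that points moving along their own epipolar line are not caught by this test alone, which is one reason such systems also use semantic masks.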
36

Zhong, Qiubo, and Xiaoyi Fang. "A BigBiGAN-Based Loop Closure Detection Algorithm for Indoor Visual SLAM." Journal of Electrical and Computer Engineering 2021 (July 21, 2021): 1–10. http://dx.doi.org/10.1155/2021/9978022.

Abstract:
Loop closure detection serves as the fulcrum for improving accuracy and precision in simultaneous localization and mapping (SLAM). The majority of loop detection methods extract hand-crafted features, which fall short of capturing comprehensive data information, whereas unsupervised learning, as a typical deep learning approach, excels at self-supervised learning and clustering to analyze similarity without hand-engineering the data. Moreover, the unsupervised learning method overcomes the restrictions on image quality and single semantics found in many traditional SLAM methods. Therefore, a loop closure detection strategy based on an unsupervised learning method is proposed in this paper. The main component adopts BigBiGAN to extract features and establish an original bag of words. Then, the complete bag of words is used to detect loop closures. Finally, a validation check based on the ORB descriptor is added to verify the result and output the outcome of loop closure detection. The proposed algorithm and the compared algorithms are each applied on an Autolabor Pro1 to execute indoor visual SLAM. The experiments show that the proposed algorithm increases the recall rate by 20% compared with ORB-SLAM2 and LSD-SLAM, improves accuracy by at least 40.0% over the other methods, and reduces the time cost of ORB-SLAM2 by 14%. Therefore, the presented SLAM based on BigBiGAN greatly benefits visual SLAM in the indoor environment.
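Once features are quantized into a bag-of-words histogram, loop-closure candidate retrieval reduces to comparing histogram vectors against past frames. A minimal cosine-similarity sketch of that retrieval step follows; the function names and threshold are illustrative, not from the paper.

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))

def detect_loop(query, database, threshold=0.8):
    """Return indices of past frames whose histogram is similar enough
    to the query frame to be loop-closure candidates."""
    return [i for i, h in enumerate(database)
            if bow_similarity(query, h) >= threshold]
```

In a full system, candidates returned this way would still pass a geometric verification (here, the ORB-descriptor validation check) before closing the loop.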
37

Jia, Guanwei, Xiaoying Li, Dongming Zhang, Weiqing Xu, Haojie Lv, Yan Shi, and Maolin Cai. "Visual-SLAM Classical Framework and Key Techniques: A Review." Sensors 22, no. 12 (June 17, 2022): 4582. http://dx.doi.org/10.3390/s22124582.

Abstract:
With the significant increase in demand for artificial intelligence, environmental map reconstruction has become a research hotspot for obstacle avoidance navigation, unmanned operations, and virtual reality. The quality of the map plays a vital role in positioning, path planning, and obstacle avoidance. This review starts with the development of SLAM (Simultaneous Localization and Mapping) and proceeds to a review of V-SLAM (Visual-SLAM) from its proposal to the present, with a summary of its historical milestones. In this context, the five parts of the classic V-SLAM framework—visual sensor, visual odometer, backend optimization, loop detection, and mapping—are explained separately. Meanwhile, the details of the latest methods are shown; VI-SLAM (Visual inertial SLAM) is reviewed and extended. The four critical techniques of V-SLAM and its technical difficulties are summarized as feature detection and matching, selection of keyframes, uncertainty technology, and expression of maps. Finally, the development direction and needs of the V-SLAM field are proposed.
38

Wen, Jingren, Chuang Qian, Jian Tang, Hui Liu, Wenfang Ye, and Xiaoyun Fan. "2D LiDAR SLAM Back-End Optimization with Control Network Constraint for Mobile Mapping." Sensors 18, no. 11 (October 29, 2018): 3668. http://dx.doi.org/10.3390/s18113668.

Abstract:
Simultaneous localization and mapping (SLAM) has been investigated in the field of robotics for two decades, as it is considered to be an effective method for solving the positioning and mapping problem in a single framework. In the SLAM community, Extended Kalman Filter (EKF) based SLAM and particle filter SLAM are the most mature technologies. After years of development, graph-based SLAM is becoming the most promising technology, and much progress has been made recently with respect to accuracy and efficiency. No matter which SLAM method is used, loop closure is vital for overcoming accumulated errors. However, in 2D Light Detection and Ranging (LiDAR) SLAM, on one hand, it is relatively difficult to extract distinctive features from LiDAR scans for loop closure detection, as 2D LiDAR scans encode much less information than images; on the other hand, some mapping scenarios contain no loop closures at all. Therefore, in this paper, instead of loop closure detection, we propose a method that introduces an extra control network constraint (CNC) into the back-end optimization of graph-based SLAM, aligning the LiDAR scan center with the control vertices of the presurveyed control network to optimize all the poses of scans and submaps. Field tests were carried out in a typical urban area with weak outdoor Global Navigation Satellite System (GNSS) coverage. The results prove that the position Root Mean Square (RMS) error of the selected key points is 0.3614 m, evaluated with a reference map produced by a Terrestrial Laser Scanner (TLS). Mapping accuracy is significantly improved compared to the mapping RMS of 1.6462 m without the control network constraint. Adding distance constraints of the control network to the back-end optimization is an effective and practical method for solving the drift accumulation of LiDAR front-end scan matching.
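The effect of adding control-point constraints to the back-end can be illustrated with a toy 1D pose chain: odometry constrains consecutive poses, surveyed control points pin selected poses to absolute coordinates, and solving the joint least-squares problem distributes the accumulated drift. This is a linear sketch of the idea, not the paper's graph optimizer; all numbers are illustrative.

```python
import numpy as np

def optimize_chain(odom, control):
    """Solve a tiny 1D pose-chain least-squares problem.

    odom[i] is a (drifty) measurement of x[i+1] - x[i]; `control` maps a
    pose index to its surveyed absolute coordinate. Both constraint types
    are stacked into one linear system A x = b and solved jointly, which
    spreads the odometry drift across the chain instead of letting it
    accumulate at the end."""
    n = len(odom) + 1
    rows, rhs = [], []
    for i, u in enumerate(odom):           # relative (odometry) constraints
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(u)
    for idx, c in control.items():         # absolute (control point) constraints
        r = np.zeros(n)
        r[idx] = 1.0
        rows.append(r)
        rhs.append(c)
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With odometry steps summing to 3.2 m but control points 3.0 m apart, the solver pulls every intermediate pose back toward the surveyed frame rather than leaving a 0.2 m error at the last pose.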
39

Joachim, L., W. Zhang, N. Haala, and U. Soergel. "EVALUATION OF THE QUALITY OF REAL-TIME MAPPING WITH CRANE CAMERAS AND VISUAL SLAM ALGORITHMS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 545–52. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-545-2022.

Abstract:
In the context of the development of an autonomous tower crane, the usage of crane cameras to map the respective workspace as a basis for autonomous path planning is investigated. The goal is to generate an up-to-date DEM as input for the crane control. As construction sites are highly dynamic scenes, it is crucial to be able to react to any changes. Thus, real-time mapping with a visual SLAM solution is aspired to. As the quality of the DEM is important for such a safety-critical application, we evaluate the mapping quality of four state-of-the-art SLAM solutions, namely ORB-SLAM3, LDSO, DSM and DROID-SLAM. The results show that all approaches can handle our specific crane camera setup and thus are generally suited for our application. The DEM accuracies of all tested methods are even competitive with the result from standard offline photogrammetric processing, at least for the major part of the test site. However, there are limitations with regard to DEM completeness. Consequently, our investigations show that the tested methods deliver a good basis for real-time accurate mapping, but further refinements are needed before they can be applied to autonomous path planning.
40

Jama, Michal, and Dale Schinstock. "Parallel Tracking and Mapping for Controlling VTOL Airframe." Journal of Control Science and Engineering 2011 (2011): 1–10. http://dx.doi.org/10.1155/2011/413074.

Abstract:
This work presents a vision-based system for navigation on a vertical takeoff and landing unmanned aerial vehicle (UAV). It is a monocular vision-based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem with an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm for lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.
41

Zhao, Xiong, Tao Zuo, and Xinyu Hu. "OFM-SLAM: A Visual Semantic SLAM for Dynamic Indoor Environments." Mathematical Problems in Engineering 2021 (April 8, 2021): 1–16. http://dx.doi.org/10.1155/2021/5538840.

Abstract:
Most current visual Simultaneous Localization and Mapping (SLAM) algorithms are designed under the assumption of a static environment, and their robustness and accuracy degrade in dynamic environments. The reason is that moving objects in the scene cause feature mismatches in the pose estimation process, which degrades positioning and mapping accuracy. Meanwhile, the three-dimensional semantic map plays a key role in mobile robot navigation, path planning, and other tasks. In this paper, we present OFM-SLAM: Optical Flow combining Mask-RCNN SLAM, a novel visual SLAM for semantic mapping in dynamic indoor environments. Firstly, we use the Mask-RCNN network to detect potential moving objects and generate masks of dynamic objects. Secondly, an optical flow method is adopted to detect dynamic feature points. Then, we combine the optical flow method and Mask-RCNN to cull all dynamic points, and the SLAM system is able to track without these points. Finally, the semantic labels obtained from Mask-RCNN are mapped to the point cloud to generate a three-dimensional semantic map that contains only the static parts of the scene and their semantic information. We evaluate our system on public TUM datasets. The results of our experiments demonstrate that our system is more effective in dynamic scenarios, and that OFM-SLAM can estimate the camera pose more accurately and acquire more precise localization in highly dynamic environments.
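The combination of a segmentation mask and a flow-consistency test for culling dynamic feature points can be sketched as below. This is a pure-NumPy stand-in for the idea (no Mask-RCNN or pyramidal optical flow involved); the median-flow camera-motion model, mask layout, and threshold are illustrative assumptions.

```python
import numpy as np

def static_point_mask(pts_prev, pts_curr, seg_mask, thresh_px=2.0):
    """Keep a tracked feature point only if it is (a) outside the
    segmentation mask of potentially moving objects AND (b) its flow
    agrees with the dominant camera-induced flow, here approximated by
    the median flow vector over all tracked points.

    pts_prev, pts_curr: (N, 2) arrays of (x, y) pixel positions.
    seg_mask: (H, W) binary mask, 1 = potentially dynamic object.
    Returns a boolean array: True = keep (static), False = cull."""
    flow = pts_curr - pts_prev
    median_flow = np.median(flow, axis=0)                  # camera-motion proxy
    residual = np.linalg.norm(flow - median_flow, axis=1)  # per-point deviation
    cols = pts_curr[:, 0].astype(int)
    rows = pts_curr[:, 1].astype(int)
    in_mask = seg_mask[rows, cols].astype(bool)
    return ~(in_mask | (residual > thresh_px))
```

Points surviving both tests would then be fed to pose estimation, mirroring the two-stage culling described in the abstract.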
42

Ullah, Inam, Xin Su, Xuewu Zhang, and Dongmin Choi. "Simultaneous Localization and Mapping Based on Kalman Filter and Extended Kalman Filter." Wireless Communications and Mobile Computing 2020 (June 8, 2020): 1–12. http://dx.doi.org/10.1155/2020/2138643.

Abstract:
For more than two decades, the issue of simultaneous localization and mapping (SLAM) has gained attention from researchers and remains an influential topic in robotics. Various algorithms for mobile robot SLAM have been investigated, and probability-based SLAM algorithms are most often used in unknown environments. In this paper, the authors propose two main localization algorithms. The first is linear Kalman Filter (KF) SLAM, which consists of five phases: (a) motionless robot with absolute measurement, (b) moving vehicle with absolute measurement, (c) motionless robot with relative measurement, (d) moving vehicle with relative measurement, and (e) moving vehicle with relative measurement while the robot location is not detected. The second localization algorithm is SLAM with the Extended Kalman Filter (EKF). Finally, the proposed SLAM algorithms are tested in simulation and shown to be efficient and viable. The simulation results show that the presented SLAM approaches can accurately locate the landmarks and the mobile robot.
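The KF/EKF predict/update cycle underlying such filters can be sketched for a minimal one-dimensional case with one robot and one landmark. The state layout, noise values, and relative-measurement model below are illustrative assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def ekf_slam_step(mu, Sigma, u, z, R, Q):
    """One predict/update cycle of a minimal 1D Kalman-filter SLAM.

    State mu = [robot position, landmark position]. The robot moves by u
    (predict, motion noise covariance R), then measures the landmark
    relative to itself: z = landmark - robot (update, noise covariance Q).
    The model is linear, so this is the KF case; the EKF variant would
    linearize nonlinear motion/measurement models into F and H."""
    # Predict: only the robot moves; motion noise inflates its variance.
    F = np.eye(2)
    mu = F @ mu + np.array([u, 0.0])
    Sigma = F @ Sigma @ F.T + R

    # Update with the relative measurement z = landmark - robot + noise.
    H = np.array([[-1.0, 1.0]])
    y = z - H @ mu                        # innovation
    S = H @ Sigma @ H.T + Q               # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)    # Kalman gain
    mu = mu + (K @ y).ravel()
    Sigma = (np.eye(2) - K @ H) @ Sigma
    return mu, Sigma
```

Each update correlates the robot and landmark estimates through the gain, which is exactly how observing a known landmark reduces the robot's own position uncertainty.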
APA, Harvard, Vancouver, ISO, and other styles
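The linear KF phases listed in the abstract above all reduce to the same predict/update cycle. A minimal sketch of that cycle follows, instantiated for something like phase (b), a moving vehicle with absolute position measurements; the model matrices and noise values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kf_step(x, P, u, z, A, B, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: prior state and covariance; u: control; z: measurement."""
    # Predict: propagate the state and covariance through the motion model.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the absolute measurement z.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With a constant-velocity state [position, velocity] and noisy position fixes, repeated calls drive the estimate toward the true trajectory while the covariance shrinks; the EKF variant replaces A and H with Jacobians of nonlinear motion and measurement models.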
43

Jia, Shifeng. "LRD-SLAM: A Lightweight Robust Dynamic SLAM Method by Semantic Segmentation Network." Wireless Communications and Mobile Computing 2022 (November 21, 2022): 1–19. http://dx.doi.org/10.1155/2022/7332390.

Full text of the source
Abstract:
With the spread of intelligent systems across various fields, research on driverless vehicles and intelligent industrial robots has increased. Vision-based simultaneous localization and mapping (SLAM) is a widely used technique. Most conventional visual SLAM algorithms assume an ideal static environment; however, such environments rarely exist in real life. It is therefore important to develop visual SLAM algorithms that can determine their own position and perceive the environment in real dynamic settings. This paper proposes a lightweight robust dynamic SLAM system based on a novel semantic segmentation network (LRD-SLAM). In the proposed system, a fast deep convolutional neural network (FNet) is integrated into ORB-SLAM2 as a semantic segmentation thread. In addition, a multiview geometry method is introduced that further improves the detection of dynamic points using differences in parallax angle and depth, and keyframe information is used to restore the static background occluded by removed dynamic objects, facilitating the subsequent reconstruction of the point cloud map. Experimental results on the TUM RGB-D dataset demonstrate that the proposed system improves the positioning accuracy and robustness of visual SLAM in dynamic indoor pedestrian environments.
APA, Harvard, Vancouver, ISO, and other styles
44

Tang, Jian, Jingren Wen, and Chuang Qian. "A Distributed Indoor Mapping Method Based on Control-Network-Aided SLAM: Scheme and Analysis." Applied Sciences 10, no. 7 (April 2, 2020): 2420. http://dx.doi.org/10.3390/app10072420.

Full text of the source
Abstract:
Indoor mobile mapping techniques are important for indoor navigation and indoor modeling. Simultaneous localization and mapping (SLAM) based on light detection and ranging (LiDAR) has been applied as an efficient method for fast indoor mobile mapping: it can quickly construct high-precision indoor maps within a small region. However, as the mapping area grows, SLAM-based mapping faces difficulties such as loop closure detection, heavy computation, high memory usage, and limited mapping precision. In this paper, we propose a distributed indoor mapping scheme based on control-network-aided SLAM to address mapping in large-scale environments, and we analyze its effectiveness in terms of the relative and absolute accuracy of the mapping results. The experimental results show that the relative accuracy reaches 0.08 m, a 49.8% improvement over the mapping result without loop closure, and the absolute accuracy reaches 0.13 m, which demonstrates the method's feasibility for distributed mapping. The accuracies under different numbers of control points are also compared to find a suitable structure for the control network.
APA, Harvard, Vancouver, ISO, and other styles
45

Saat, Shahrizal, WN Abd Rashid, MZM Tumari, and MS Saealal. "HECTORSLAM 2D MAPPING FOR SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM)." Journal of Physics: Conference Series 1529 (April 2020): 042032. http://dx.doi.org/10.1088/1742-6596/1529/4/042032.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
46

Abdelhafid, El Farnane, Youssefi My Abdelkader, Mouhsen Ahmed, Dakir Rachid, and El Ihyaoui Abdelilah. "Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6284. http://dx.doi.org/10.11591/ijece.v12i6.pp6284-6292.

Full text of the source
Abstract:
In recent years, there has been strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has limitations, such as the absence of satellite signals (in tunnels and caves), which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle's position while at the same time constructing a representation of the environment. Visual and light detection and ranging (LIDAR) based SLAM refers to using cameras and LIDAR as sources of external information. This paper presents an implementation of a SLAM algorithm for building a map of the environment and obtaining the car's trajectory from LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches is also provided and discussed. Simulation results based on LIDAR scans indicate that SLAM is convenient and helpful for localization and mapping.
APA, Harvard, Vancouver, ISO, and other styles
47

Suleymanoglu, B., M. Soycan, and C. Toth. "INDOOR MAPPING: EXPERIENCES WITH LIDAR SLAM." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 279–85. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-279-2022.

Full text of the source
Abstract:
Abstract. Indoor mapping is gaining interest both in research and in emerging applications; building information modeling (BIM) and indoor navigation are probably the driving forces behind this trend. For accurate mapping, platform trajectory reconstruction, or in other words sensor orientation, is essential to reduce or even eliminate the need for extensive ground control. Simultaneous localization and mapping (SLAM) is the computational problem of simultaneously estimating the platform/sensor trajectory while reconstructing the object space; usually, real-time operation is assumed. Here we investigate the performance of two LiDAR SLAM tools on indoor data acquired by a remotely controlled robot sensor platform. All comparisons were performed on similar datasets using appropriate metrics. Encouraging results were obtained in these initial tests, yet further research is needed to analyse the tools and their accuracy comprehensively.
APA, Harvard, Vancouver, ISO, and other styles
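LiDAR SLAM tools such as those compared above reconstruct the trajectory by scan matching. The core rigid-registration step is sketched below for 2D scans with known point correspondences, using SVD-based (Kabsch) alignment; a full ICP pipeline would iterate this together with nearest-neighbour matching. The function name and test scans are illustrative, not taken from the paper.

```python
import numpy as np

def align_scans(P, Q):
    """Best-fit rotation R and translation t mapping scan P onto scan Q.
    P, Q: Nx2 arrays of corresponding 2D points (row i of P matches row i of Q)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred scans
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (determinant -1).
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given two noise-free scans related by a rigid motion, this recovers the exact rotation and translation; chaining such increments frame to frame yields the LiDAR odometry that the compared SLAM tools refine with loop closure.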
48

Nüchter, Andreas, Kai Lingemann, Joachim Hertzberg, and Hartmut Surmann. "6D SLAM-3D mapping outdoor environments." Journal of Field Robotics 24, no. 8-9 (August 2007): 699–722. http://dx.doi.org/10.1002/rob.20209.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

Akpınar, Burak. "Performance of Different SLAM Algorithms for Indoor and Outdoor Mapping Applications." Applied System Innovation 4, no. 4 (December 17, 2021): 101. http://dx.doi.org/10.3390/asi4040101.

Full text of the source
Abstract:
Indoor and outdoor mapping studies can be completed relatively quickly, depending on developments in mobile mapping systems. Especially in indoor environments, where high-accuracy GNSS positions are unavailable, mapping can be carried out with SLAM algorithms. Although there are many different SLAM algorithms in the literature, each can produce results of different accuracy depending on the mapped environment. In this study, 3D maps were produced with the LOAM, A-LOAM, and HDL Graph SLAM algorithms in different environments, such as long corridors, staircases, and outdoor settings, and the accuracies of the maps produced by the different algorithms were compared. For this purpose, a mobile mapping platform using a Velodyne VLP-16 LiDAR sensor was developed, and the odometry drift that causes loss of accuracy in the collected data was minimized by loop closure and plane detection methods. The tests showed that the LOAM algorithm was not as accurate as the A-LOAM and HDL Graph SLAM algorithms. In both indoor and outdoor environments, the A-LOAM results were twice as accurate as the HDL Graph SLAM results.
APA, Harvard, Vancouver, ISO, and other styles
50

Nüchter, A., M. Bleier, J. Schauer, and P. Janotta. "IMPROVING GOOGLE'S CARTOGRAPHER 3D MAPPING BY CONTINUOUS-TIME SLAM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 543–49. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-543-2017.

Full text of the source
Abstract:
This paper shows how to use the result of Google's SLAM solution, called Cartographer, to bootstrap our continuous-time SLAM algorithm. The presented approach optimizes the consistency of the global point cloud and thus improves on Google's results. We use the algorithms and data from Google as input for our continuous-time SLAM software. We also successfully applied our software to a similar backpack system, which delivers consistent 3D point clouds even in the absence of an IMU.
APA, Harvard, Vancouver, ISO, and other styles