Journal articles on the topic "Semantic SLAM"

Consult the top 50 journal articles for research on the topic "Semantic SLAM".

1

Sun, Liuxin, Junyu Wei, Shaojing Su, and Peng Wu. "SOLO-SLAM: A Parallel Semantic SLAM Algorithm for Dynamic Scenes." Sensors 22, no. 18 (September 15, 2022): 6977. http://dx.doi.org/10.3390/s22186977.

Abstract:
Simultaneous localization and mapping (SLAM) is a core technology for mobile robots working in unknown environments. Most existing SLAM techniques can achieve good localization accuracy in static scenes, as they are designed based on the assumption that unknown scenes are rigid. However, real-world environments are dynamic, resulting in poor performance of SLAM algorithms. Thus, to optimize the performance of SLAM techniques, we propose a new parallel processing system, named SOLO-SLAM, based on the existing ORB-SLAM3 algorithm. By improving the semantic threads and designing a new dynamic point filtering strategy, SOLO-SLAM completes the tasks of semantic and SLAM threads in parallel, thereby effectively improving the real-time performance of SLAM systems. Additionally, we further enhance the filtering effect for dynamic points using a combination of regional dynamic degree and geometric constraints. The designed system adds a new semantic constraint based on semantic attributes of map points, which solves, to some extent, the problem of fewer optimization constraints caused by dynamic information filtering. Using the publicly available TUM dataset, SOLO-SLAM is compared with other state-of-the-art schemes. Our algorithm outperforms ORB-SLAM3 in accuracy (maximum improvement is 97.16%) and achieves better results than Dyna-SLAM with respect to time efficiency (maximum improvement is 90.07%).
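
As a rough illustration of the kind of geometric filtering described above (not the authors' code; the function names, thresholds, and grid size are assumptions), the sketch below scores each image region by the fraction of feature matches that violate an epipolar constraint, yielding a per-region dynamic degree:

    import numpy as np

    def epipolar_errors(pts_prev, pts_cur, F):
        """Distance (pixels) of current points to epipolar lines l' = F x."""
        x = np.c_[pts_prev, np.ones(len(pts_prev))]       # (N, 3) homogeneous
        lines = (F @ x.T).T                               # (N, 3) lines a, b, c
        num = np.abs(np.sum(lines[:, :2] * pts_cur, axis=1) + lines[:, 2])
        return num / np.linalg.norm(lines[:, :2], axis=1)

    def regional_dynamic_degree(pts_cur, err, img_wh=(640, 480), grid=(8, 6), tau=2.0):
        """Fraction of geometrically inconsistent matches per grid cell."""
        gw, gh = grid
        cells_bad = np.zeros((gh, gw))
        cells_all = np.zeros((gh, gw))
        ix = np.clip((pts_cur[:, 0] / img_wh[0] * gw).astype(int), 0, gw - 1)
        iy = np.clip((pts_cur[:, 1] / img_wh[1] * gh).astype(int), 0, gh - 1)
        np.add.at(cells_all, (iy, ix), 1)
        np.add.at(cells_bad, (iy, ix), (err > tau).astype(float))
        return cells_bad / np.maximum(cells_all, 1)       # dynamic degree in [0, 1]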
2

You, Yingxuan, Peng Wei, Jialun Cai, Weibo Huang, Risheng Kang, and Hong Liu. "MISD-SLAM: Multimodal Semantic SLAM for Dynamic Environments." Wireless Communications and Mobile Computing 2022 (April 5, 2022): 1–13. http://dx.doi.org/10.1155/2022/7600669.

Abstract:
Simultaneous localization and mapping (SLAM) is one of the most essential technologies for mobile robots. Although great progress has been made in the field of SLAM in recent years, there are a number of challenges for SLAM in dynamic environments and high-level semantic scenes. In this paper, we propose a novel multimodal semantic SLAM system (MISD-SLAM), which removes the dynamic objects in the environments and reconstructs the static background with semantic information. MISD-SLAM builds three main processes: instance segmentation, dynamic pixel removal, and semantic 3D map construction. An instance segmentation network is used to provide semantic knowledge of the surrounding environments at the instance level. The ORB features located on the predefined dynamic objects are removed directly. In this way, MISD-SLAM effectively reduces the impact of dynamic objects to provide precise pose estimation. Then, combining a multiview geometry constraint with the K-means clustering algorithm, our system removes the undefined but moving pixels. Meanwhile, a 3D dense point cloud map with semantic information is reconstructed, which recovers the static background without the corruptions of dynamic objects. Finally, we evaluate MISD-SLAM by comparing it to ORB-SLAM3 and state-of-the-art dynamic SLAM systems on the TUM RGB-D datasets and in real-world dynamic indoor environments. The results indicate that our method significantly improves localization accuracy and system robustness, especially in high-dynamic environments.
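
The direct removal of ORB features on predefined dynamic objects can be pictured with the following minimal sketch (illustrative only; the class list and mask format are assumptions, and the paper's instance segmentation network is replaced here by precomputed masks):

    import cv2
    import numpy as np

    DYNAMIC_CLASSES = {"person", "dog", "car"}   # illustrative class list

    def filter_orb_by_masks(gray, instance_masks, classes):
        """Detect ORB features, then drop keypoints that fall on masks of
        predefined dynamic classes (the removal step described above).
        instance_masks: list of HxW binary arrays aligned with `classes`."""
        orb = cv2.ORB_create(nfeatures=2000)
        kps, des = orb.detectAndCompute(gray, None)
        dynamic = np.zeros(gray.shape, dtype=bool)
        for mask, cls in zip(instance_masks, classes):
            if cls in DYNAMIC_CLASSES:
                dynamic |= mask.astype(bool)
        keep = [i for i, kp in enumerate(kps)
                if not dynamic[int(round(kp.pt[1])), int(round(kp.pt[0]))]]
        return [kps[i] for i in keep], (des[keep] if des is not None else None)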
3

Bowman, Sean, Kostas Daniilidis, and George Pappas. "Robust Object-Level Semantic Visual SLAM Using Semantic Keypoints." Field Robotics 2, no. 1 (March 10, 2022): 513–24. http://dx.doi.org/10.55417/fr.2022018.

Abstract:
Simultaneous Localization and Mapping (SLAM) has traditionally relied on representing the environment as low-level, geometric features, such as points, lines, and planes. Recent advances in object recognition capabilities, however, as well as demand for environment representations that facilitate higher-level autonomy, have motivated an object-based Semantic SLAM. We present a Semantic SLAM algorithm that directly incorporates a sparse representation of objects into a factor-graph SLAM optimization, resulting in a system that is efficient, robust to varying object shapes and environments, and easy to incorporate into an existing SLAM pipeline. Our keypoint-based representation facilitates robust detection in varying conditions and intraclass shape variation, as well as computational efficiency. We demonstrate the performance of our algorithm in two different SLAM systems and in varying environments.
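
To make the factor-graph idea concrete, here is a hedged toy sketch in 2D (not the authors' formulation): object detections add landmark factors alongside odometry factors, and the landmark is recovered by nonlinear least squares. All measurements below are fabricated for illustration:

    import numpy as np
    from scipy.optimize import least_squares

    # Toy 2D factor graph: three robot poses x_i and one object landmark o, all
    # in R^2. Odometry factors link consecutive poses; "semantic object" factors
    # link each pose to the landmark via a relative-position measurement.
    odom_meas = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]                        # x_{i+1} - x_i
    obj_meas = [np.array([2.0, 1.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])]   # o - x_i

    def residuals(theta):
        x = theta[:6].reshape(3, 2)            # robot poses
        o = theta[6:8]                         # object landmark
        r = [x[0]]                             # prior: anchor the first pose at the origin
        for i, m in enumerate(odom_meas):
            r.append(x[i + 1] - x[i] - m)      # odometry factors
        for i, m in enumerate(obj_meas):
            r.append(o - x[i] - m)             # object/semantic factors
        return np.concatenate(r)

    sol = least_squares(residuals, np.zeros(8))
    print(sol.x[6:8])                          # recovered object position: [2. 1.]

In the paper's actual system the object factors are keypoint reprojection terms over sparse 3D object representations; the toy above only shows how object measurements enter the same optimization as odometry.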
4

Guan, Peiyu, Zhiqiang Cao, Erkui Chen, Shuang Liang, Min Tan, and Junzhi Yu. "A real-time semantic visual SLAM approach with points and objects." International Journal of Advanced Robotic Systems 17, no. 1 (January 1, 2020): 172988142090544. http://dx.doi.org/10.1177/1729881420905443.

Abstract:
Visual simultaneous localization and mapping (SLAM) is important for self-localization and environment perception of service robots, where semantic SLAM can provide a more accurate localization result and a map with abundant semantic information. In this article, we propose a real-time PO-SLAM approach with the combination of both point and object measurements. Alongside the point–point association in ORB-SLAM2, we also consider point–object association based on object segmentation and object–object association, where object segmentation is performed by combining object detection with a depth histogram. Also, besides the constraint of feature points belonging to an object, a semantic constraint of relative position invariance among objects is introduced. Accordingly, two semantic loss functions with point and object information are designed and added to the bundle adjustment optimization. The effectiveness of the proposed approach is verified by experiments.
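
The semantic constraint of relative position invariance among objects could, for example, be expressed as a residual of the following form (an assumption-level sketch, not the paper's exact loss; names and data layout are illustrative):

    import numpy as np

    def relative_position_loss(obj_centers, ref_offsets):
        """Penalize drift in relative positions between object pairs.
        obj_centers: dict id -> current 3D center estimate, shape (3,)
        ref_offsets: dict (i, j) -> offset c_i - c_j recorded when both objects
        were first observed (the 'invariant' relative position)."""
        loss = 0.0
        for (i, j), d_ref in ref_offsets.items():
            d_now = obj_centers[i] - obj_centers[j]
            loss += float(np.sum((d_now - d_ref) ** 2))
        return loss

In a bundle-adjustment setting, a term of this kind would be added, with a weight, to the usual reprojection cost.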
5

Han, Shuangquan, and Zhihong Xi. "Dynamic Scene Semantics SLAM Based on Semantic Segmentation." IEEE Access 8 (2020): 43563–70. http://dx.doi.org/10.1109/access.2020.2977684.

6

Long, Fei, Lei Ding, and Jianfeng Li. "DGFlow-SLAM: A Novel Dynamic Environment RGB-D SLAM without Prior Semantic Knowledge Based on Grid Segmentation of Scene Flow." Biomimetics 7, no. 4 (October 13, 2022): 163. http://dx.doi.org/10.3390/biomimetics7040163.

Abstract:
Currently, using semantic segmentation networks to distinguish dynamic and static key points has become a mainstream design method for semantic SLAM systems. However, such semantic SLAM systems must have prior semantic knowledge of the relevant dynamic objects, and their processing speed is inversely proportional to the recognition accuracy. To simultaneously enhance the speed and accuracy of recognizing dynamic objects in different environments, a novel SLAM system without prior semantics, called DGFlow-SLAM, is proposed in this paper. A novel grid segmentation method is used in the system to segment the scene flow, and then an adaptive threshold method is used to roughly detect the dynamic objects. Based on this, a depth mean clustering segmentation method is applied to find potential dynamic targets. Finally, the results of grid segmentation and depth mean clustering segmentation are jointly used to find moving objects accurately, and all the feature points of the moving objects are removed while the static parts of those objects are retained. The experimental results show that, on the dynamic sequences of the TUM RGB-D dataset, compared with the DynaSLAM system, which has the highest accuracy for detecting moderate and violent motion, and DS-SLAM, which has the highest accuracy for detecting slight motion, DGFlow-SLAM obtains similar accuracy results and improves the accuracy by 7.5%. In addition, DGFlow-SLAM is 10 times and 1.27 times faster than DynaSLAM and DS-SLAM, respectively.
7

Jia, Shifeng. "LRD-SLAM: A Lightweight Robust Dynamic SLAM Method by Semantic Segmentation Network." Wireless Communications and Mobile Computing 2022 (November 21, 2022): 1–19. http://dx.doi.org/10.1155/2022/7332390.

Abstract:
With the development of intelligent concepts in various fields, research on driverless vehicles and intelligent industrial robots has increased. Vision-based simultaneous localization and mapping (SLAM) is a widely used technique. Most conventional visual SLAM algorithms are assumed to work in ideal static environments; however, such environments rarely exist in real life. Thus, it is important to develop visual SLAM algorithms that can determine their own positions and perceive the environment in real dynamic environments. This paper proposes a lightweight robust dynamic SLAM system based on a novel semantic segmentation network (LRD-SLAM). In the proposed system, a fast deep convolutional neural network (FNet) is integrated into ORB-SLAM2 as a semantic segmentation thread. In addition, a multiview geometry method is introduced, in which the accuracy of detecting dynamic points is further improved through differences in parallax angle and depth, and keyframe information is used to repair the static background regions left missing by the removal of dynamic objects, to facilitate the subsequent reconstruction of the point cloud map. Experimental results obtained using the TUM RGB-D dataset demonstrate that the proposed system improves the positioning accuracy and robustness of visual SLAM in indoor pedestrian dynamic environments.
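
A minimal sketch of the depth-difference half of such a multiview geometry check (the parallax-angle test is analogous); everything here, from the threshold to the interface, is illustrative rather than the paper's implementation:

    import numpy as np

    def is_dynamic(p_ref_cam, depth_cur, T_cur_ref, K, depth_thresh=0.1):
        """Back-project a keyframe point into the current frame and compare its
        projected depth with the measured depth map; a large difference marks
        the point as dynamic. p_ref_cam: 3D point in the reference camera
        frame; depth_cur: current depth image in meters; T_cur_ref: 4x4
        transform from reference to current camera frame; K: 3x3 intrinsics."""
        p = T_cur_ref[:3, :3] @ p_ref_cam + T_cur_ref[:3, 3]
        if p[2] <= 0:
            return False                       # behind the camera: cannot decide
        u = K @ (p / p[2])                     # homogeneous pixel coordinates
        ui, vi = int(round(u[0])), int(round(u[1]))
        if not (0 <= vi < depth_cur.shape[0] and 0 <= ui < depth_cur.shape[1]):
            return False                       # outside the image: cannot decide
        d_meas = float(depth_cur[vi, ui])
        return d_meas > 0 and abs(p[2] - d_meas) > depth_thresh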
8

Fan, Yingchun, Qichi Zhang, Yuliang Tang, Shaofen Liu, and Hong Han. "Blitz-SLAM: A semantic SLAM in dynamic environments." Pattern Recognition 121 (January 2022): 108225. http://dx.doi.org/10.1016/j.patcog.2021.108225.

9

Miao, Sheng, Xiaoxiong Liu, Dazheng Wei, and Changze Li. "A Visual SLAM Robust against Dynamic Objects Based on Hybrid Semantic-Geometry Information." ISPRS International Journal of Geo-Information 10, no. 10 (October 4, 2021): 673. http://dx.doi.org/10.3390/ijgi10100673.

Abstract:
A visual localization approach for dynamic objects based on hybrid semantic-geometry information is presented. Due to the interference of moving objects in the real environment, the traditional simultaneous localization and mapping (SLAM) system can be corrupted. To address this problem, we propose a method for static/dynamic image segmentation that leverages semantic and geometric modules, including optical flow residual clustering, epipolar constraint checks, semantic segmentation, and outlier elimination. We integrated the proposed approach into the state-of-the-art ORB-SLAM2 and evaluated its performance on both public datasets and a quadcopter platform. Experimental results demonstrated that the root-mean-square error of the absolute trajectory error improved, on average, by 93.63% in highly dynamic benchmarks when compared with ORB-SLAM2. Thus, the proposed method can improve the performance of state-of-the-art SLAM systems in challenging scenarios.
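
One plausible way to realize optical-flow residual clustering, sketched with OpenCV (the homography-based ego-motion compensation and the cluster count are assumptions, not the paper's exact design):

    import cv2
    import numpy as np

    def flow_residual_clusters(prev_gray, cur_gray, pts_prev, k=2):
        """Track points with LK optical flow, subtract the motion explained by a
        global homography (a proxy for camera ego-motion), and k-means-cluster
        the residual vectors; the high-residual cluster is a dynamic candidate."""
        pts_prev = pts_prev.astype(np.float32).reshape(-1, 1, 2)
        pts_cur, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts_prev, None)
        good = st.reshape(-1) == 1
        p0 = pts_prev[good].reshape(-1, 2)
        p1 = pts_cur[good].reshape(-1, 2)
        H, _ = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
        pred = cv2.perspectiveTransform(p0.reshape(-1, 1, 2), H).reshape(-1, 2)
        residual = (p1 - pred).astype(np.float32)        # flow not explained by H
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(residual, k, None, criteria, 3,
                                        cv2.KMEANS_PP_CENTERS)
        return p1, labels.reshape(-1), centers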
10

Wu, Yakun, Li Luo, Shujuan Yin, Mengqi Yu, Fei Qiao, Hongzhi Huang, Xuesong Shi, Qi Wei, and Xinjun Liu. "An FPGA Based Energy Efficient DS-SLAM Accelerator for Mobile Robots in Dynamic Environment." Applied Sciences 11, no. 4 (February 18, 2021): 1828. http://dx.doi.org/10.3390/app11041828.

Abstract:
The Simultaneous Localization and Mapping (SLAM) algorithm is a hotspot in robot application research, with the ability to help mobile robots solve the most fundamental problems of "localization" and "mapping". Visual semantic SLAM algorithms fused with semantic information enable robots to understand the surrounding environment better, thus dealing with the complexity and variability of real application scenarios. DS-SLAM (Semantic SLAM towards Dynamic Environment), one of the representative works in visual semantic SLAM, enhances robustness in dynamic scenes through semantic information. However, the introduction of deep learning increases the complexity of the system, which makes it a considerable challenge to achieve a real-time semantic SLAM system on a low-power embedded platform. In this paper, we realized an energy-efficient DS-SLAM implementation on a Field Programmable Gate Array (FPGA)-based heterogeneous platform through software-hardware co-design and optimization, with the help of the OpenCL (Open Computing Language) development flow. Compared with an Intel i7 CPU on the TUM dataset, our accelerator achieves up to 13× frame rate improvement and up to 18× energy efficiency improvement, without significant loss in accuracy.
11

Hu, Xinyu, Tao Zuo, Jinbo Zhang, and Yiwei Wu. "Semantic SLAM based on visual SLAM and object detection." Journal of Applied Optics 42, no. 1 (2021): 57–64. http://dx.doi.org/10.5768/jao202142.0102002.

12

Cui, Linyan, and Chaowei Ma. "SOF-SLAM: A Semantic Visual SLAM for Dynamic Environments." IEEE Access 7 (2019): 166528–39. http://dx.doi.org/10.1109/access.2019.2952161.

13

Cui, Linyan, and Chaowei Ma. "SDF-SLAM: Semantic Depth Filter SLAM for Dynamic Environments." IEEE Access 8 (2020): 95301–11. http://dx.doi.org/10.1109/access.2020.2994348.

14

Zhao, Xiong, Tao Zuo, and Xinyu Hu. "OFM-SLAM: A Visual Semantic SLAM for Dynamic Indoor Environments." Mathematical Problems in Engineering 2021 (April 8, 2021): 1–16. http://dx.doi.org/10.1155/2021/5538840.

Abstract:
Most current visual Simultaneous Localization and Mapping (SLAM) algorithms are designed based on the assumption of a static environment, and their robustness and accuracy in dynamic environments are poor. The reason is that moving objects in the scene cause mismatches of features in the pose estimation process, which further affects positioning and mapping accuracy. In the meantime, the three-dimensional semantic map plays a key role in mobile robot navigation, path planning, and other tasks. In this paper, we present OFM-SLAM: Optical Flow combining MASK-RCNN SLAM, a novel visual SLAM for semantic mapping in dynamic indoor environments. Firstly, we use the Mask-RCNN network to detect potential moving objects and generate masks of dynamic objects. Secondly, an optical flow method is adopted to detect dynamic feature points. Then, we combine the optical flow method and Mask-RCNN to cull all dynamic points, and the SLAM system is able to track without these dynamic points. Finally, the semantic labels obtained from Mask-RCNN are mapped to the point cloud to generate a three-dimensional semantic map that only contains the static parts of the scenes and their semantic information. We evaluate our system on the public TUM datasets. The results of our experiments demonstrate that our system is more effective in dynamic scenarios, and OFM-SLAM can estimate the camera pose more accurately and acquire more precise localization in highly dynamic environments.
15

Li, Jingyu, Rongfen Zhang, Yuhong Liu, Zaiteng Zhang, Runze Fan, and Wenjiang Liu. "The Method of Static Semantic Map Construction Based on Instance Segmentation and Dynamic Point Elimination." Electronics 10, no. 16 (August 5, 2021): 1883. http://dx.doi.org/10.3390/electronics10161883.

Abstract:
Semantic information usually contains a description of the environment content, which enables a mobile robot to understand the environment and improves its ability to interact with it. In high-level human–computer interaction applications, the Simultaneous Localization and Mapping (SLAM) system not only needs higher accuracy and robustness, but also the ability to construct a static semantic map of the environment. However, traditional visual SLAM lacks semantic information. Furthermore, in an actual scene, dynamic objects reduce system performance and also generate redundancy when constructing the map; these all directly affect the robot's ability to perceive and understand the surrounding environment. Based on ORB-SLAM3, this article proposes a new algorithm that uses semantic information and global dense optical flow as constraints to generate a dynamic-static mask and eliminate dynamic objects. Then, to further construct a static 3D semantic map in indoor dynamic environments, a fusion of 2D semantic information and the 3D point cloud is carried out. The experimental results on different types of dataset sequences show that, compared with the original ORB-SLAM3, both the Absolute Pose Error (APE) and Relative Pose Error (RPE) are ameliorated to varying degrees; in particular, on freiburg3-walking-xyz, the APE is reduced by 97.78% from the original average value of 0.523, and the RPE is reduced by 52.33% from the original average value of 0.0193. Compared with DS-SLAM and DynaSLAM, our system improves real-time performance while ensuring accuracy and robustness. Meanwhile, the expected map with environmental semantic information is built, and the map redundancy caused by dynamic objects is successfully reduced. The test results in real scenes further demonstrate the effect of constructing static semantic maps and prove the effectiveness of our algorithm.
16

Marchesi, Giulia, Christian Eichhorn, David A. Plecher, Yuta Itoh, and Gudrun Klinker. "EnvSLAM: Combining SLAM Systems and Neural Networks to Improve the Environment Fusion in AR Applications." ISPRS International Journal of Geo-Information 10, no. 11 (November 12, 2021): 772. http://dx.doi.org/10.3390/ijgi10110772.

Abstract:
Augmented Reality (AR) has increasingly benefited from the use of Simultaneous Localization and Mapping (SLAM) systems. This technology has enabled developers to create markerless AR applications, but such applications lack a semantic understanding of their environment. The inclusion of this information would empower AR applications to react to the surroundings more realistically. To gain semantic knowledge, in recent years, focus has shifted toward fusing SLAM systems with neural networks, giving birth to the field of Semantic SLAM. Building on existing research, this paper aimed to create a SLAM system that generates a 3D map using ORB-SLAM2 and enriches it with semantic knowledge originating from the Fast-SCNN network. The key novelty of our approach is a new method for improving the predictions of neural networks, employed to balance the loss of accuracy introduced by efficient real-time models. Exploiting sensor information provided by a smartphone, GPS coordinates are utilized to query the OpenStreetMap database. The returned information is used to determine which classes are currently absent in the environment, so that they can be removed from the network's prediction with the goal of improving its accuracy. We achieved 87.40% Pixel Accuracy with Fast-SCNN on our custom version of COCO-Stuff and showed an improvement by involving GPS data for our self-made smartphone dataset, resulting in 90.24% Pixel Accuracy. With use on smartphones in mind, the implementation aimed to find a trade-off between accuracy and efficiency, making the system achieve an unprecedented speed. To this end, the system was carefully designed, and a strong focus on lightweight neural networks was also fundamental. This enabled the creation of an above-real-time Semantic SLAM system that we call EnvSLAM (Environment SLAM). Our extensive evaluation reveals the efficiency of the system features and operability above real time (48.1 frames per second with an input image resolution of 640 × 360 pixels). Moreover, the GPS integration indicates an effective improvement of the network's prediction accuracy.
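
The prediction-filtering idea is simple enough to sketch directly (illustrative only; the class ids and the source of the absent-class list are assumptions):

    import numpy as np

    # Classes reported absent near the current GPS fix (e.g., after querying
    # OpenStreetMap); the ids below are purely illustrative.
    ABSENT_CLASS_IDS = [3, 7]

    def filtered_argmax(class_scores, absent_ids=ABSENT_CLASS_IDS):
        """Suppress classes known to be absent before taking the per-pixel
        argmax, mirroring the prediction-improvement idea described above.
        class_scores: (H, W, C) float scores from the segmentation network."""
        scores = class_scores.copy()
        scores[..., absent_ids] = -np.inf      # absent classes can never win
        return scores.argmax(axis=-1)          # (H, W) label map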
17

Bu, Zean, Changku Sun, and Peng Wang. "Semantic Lidar-Inertial SLAM for Dynamic Scenes." Applied Sciences 12, no. 20 (October 18, 2022): 10497. http://dx.doi.org/10.3390/app122010497.

Abstract:
Over the past few years, many impressive lidar-inertial SLAM systems have been developed and perform well under static scenes. However, most real-life tasks take place in dynamic environments, and determining how to improve accuracy and robustness there poses a challenge. In this paper, we propose a semantic lidar-inertial SLAM approach that combines a point cloud semantic segmentation network with the lidar-inertial SLAM system LIO-mapping for dynamic scenes. We import an attention mechanism into the PointConv network to build an attention weight function that improves the capacity to predict details. The semantic segmentation results of the point clouds from lidar enable us to obtain point-wise labels for each lidar frame. After filtering the dynamic objects, the refined global map of the lidar-inertial SLAM system is clearer, and the estimated trajectory can achieve a higher precision. We conduct experiments on the UrbanNav dataset, whose challenging highway sequences have a large number of moving cars and pedestrians. The results demonstrate that, compared with other SLAM systems, the accuracy of the trajectory can be improved to different degrees.
18

Liu, Yubao, and Jun Miura. "RDS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods." IEEE Access 9 (2021): 23772–85. http://dx.doi.org/10.1109/access.2021.3050617.

19

Bavle, Hriday, Paloma De La Puente, Jonathan P. How, and Pascual Campoy. "VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems." IEEE Access 8 (2020): 60704–18. http://dx.doi.org/10.1109/access.2020.2983121.

20

Chen, Weifeng, Guangtao Shang, Kai Hu, Chengjun Zhou, Xiyang Wang, Guisheng Fang, and Aihong Ji. "A Monocular-Visual SLAM System with Semantic and Optical-Flow Fusion for Indoor Dynamic Environments." Micromachines 13, no. 11 (November 17, 2022): 2006. http://dx.doi.org/10.3390/mi13112006.

Abstract:
A static environment is a prerequisite for the stable operation of most visual SLAM systems, which limits the practical use of most existing systems. The robustness and accuracy of visual SLAM systems in dynamic environments still face many complex challenges. Relying only on semantic information or geometric methods cannot filter out dynamic feature points well. Considering the problem of dynamic objects easily interfering with the localization accuracy of SLAM systems, this paper proposes a new monocular SLAM algorithm for use in dynamic environments. This improved algorithm combines semantic information and geometric methods to filter out dynamic feature points. Firstly, an adjusted Mask R-CNN removes prior highly dynamic objects. The remaining feature-point pairs are matched via the optical-flow method, and a fundamental matrix is calculated using those matched feature-point pairs. Then, the environment's actual dynamic feature points are filtered out using the epipolar geometric constraint. The improved system can effectively filter out the feature points of dynamic targets. Finally, our experimental results on the TUM RGB-D and Bonn RGB-D Dynamic datasets showed that the proposed method could improve the pose estimation accuracy of a SLAM system in a dynamic environment, especially in the case of high indoor dynamics. Its performance was better than that of the existing ORB-SLAM2, and it ran faster than DynaSLAM, a comparable dynamic visual SLAM algorithm.
21

Ni, Jianjun, Tao Gong, Yafei Gu, Jinxiu Zhu, and Xinnan Fan. "An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot." Computational Intelligence and Neuroscience 2020 (February 10, 2020): 1–14. http://dx.doi.org/10.1155/2020/7490840.

Abstract:
The robot simultaneous localization and mapping (SLAM) is a very important and useful technology in the robotic field. However, the environmental map constructed by the traditional visual SLAM method contains little semantic information, which cannot satisfy the needs of complex applications. The semantic map can deal with this problem efficiently, which has become a research hot spot. This paper proposed an improved deep residual network- (ResNet-) based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image matching algorithm based on feature points is presented, to enhance the anti-interference ability of the algorithm. Then, the robust feature point extraction method is adopted in the front-end module of the SLAM system, which can effectively reduce the probability of camera tracking loss. In addition, the improved key frame insertion method is introduced in the visual SLAM system to enhance the stability of the system during the turning and moving of the robot. Furthermore, an improved ResNet model is proposed to extract the semantic information of the environment to complete the construction of the semantic map of the environment. Finally, various experiments are conducted and the results show that the proposed method is effective.
22

Cho, Sungjin, Chansoo Kim, Jaehyun Park, Myoungho Sunwoo, and Kichun Jo. "Semantic Point Cloud Mapping of LiDAR Based on Probabilistic Uncertainty Modeling for Autonomous Driving." Sensors 20, no. 20 (October 19, 2020): 5900. http://dx.doi.org/10.3390/s20205900.

Abstract:
LiDAR-based Simultaneous Localization And Mapping (SLAM), which provides environmental information for autonomous vehicles by map building, is a major challenge for autonomous driving. In addition, semantic information has been used for LiDAR-based SLAM with the advent of deep neural network-based semantic segmentation algorithms. Semantically segmented point clouds provide a much greater range of functionality for autonomous vehicles than geometry alone, which can play an important role in the mapping step. However, due to the uncertainty of the semantic segmentation algorithms, semantically segmented point clouds have limitations when used directly for SLAM. To address these limitations, this paper proposes a semantic segmentation-based LiDAR SLAM system that accounts for the uncertainty of the semantic segmentation algorithms. The uncertainty is explicitly modeled by proposed probability models derived from data-driven approaches. Based on the probability models, this paper proposes semantic registration, which calculates the transformation relationship of consecutive point clouds using semantic information with the proposed probability models. Furthermore, the proposed probability models are used to determine the semantic class of points when multiple scans indicate different classes due to the uncertainty. The proposed framework is verified and evaluated on the KITTI dataset and in outdoor environments. The experimental results show that the proposed semantic mapping framework reduces the errors of the mapping poses and eliminates the ambiguity of the semantic information of the generated semantic map.
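
One common way to fuse per-scan class evidence for a single map point is naive-Bayes label fusion, sketched below as an illustration (the paper's own probability models are data-driven, so this is only an assumption-level stand-in):

    import numpy as np

    def fuse_semantic_labels(obs_probs):
        """Fuse per-scan class probabilities for one map point under an
        independence assumption: multiply likelihoods (sum of logs) and
        normalize. obs_probs: (num_scans, num_classes) array."""
        log_post = np.sum(np.log(np.clip(obs_probs, 1e-9, 1.0)), axis=0)
        log_post -= log_post.max()             # numerical stability
        post = np.exp(log_post)
        post /= post.sum()
        return int(post.argmax()), post        # fused class id and posterior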
23

Li, Peng, Lili Yin, Jiali Gao, and Yuezhongyi Sun. "Semantic Optimization of Feature-Based SLAM." Mathematical Problems in Engineering 2021 (April 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/5581788.

Abstract:
The purpose of this paper is to provide reasonable recommendation and removal of inappropriate information for feature-based SLAM (Simultaneous Localization and Mapping). The methodology is to semantically recognize environment objects in natural scenes through object detection, in the spirit of bag-of-words methods for the SLAM problem, and to establish relationships between keyframes and object-level targets. The practical significance of each object-level target is used to judge the quality of its information; combined with the relationships among keyframes in visual SLAM, this yields the object-level targets in each keyframe together with their related information, so that object-level semantic information can be used to assess and screen both the keyframes and the information associated with them. The finding of the study is that the above method can retain information of high reliability and good stability for visual SLAM, while filtering out keyframes with poor reliability or low stability and the information related to them.
24

Cheng, Jiyu, Yuxiang Sun, and Max Q. H. Meng. "Robust Semantic Mapping in Challenging Environments." Robotica 38, no. 2 (May 21, 2019): 256–70. http://dx.doi.org/10.1017/s0263574719000584.

Abstract:
Visual simultaneous localization and mapping (visual SLAM) has been well developed in recent decades. To facilitate tasks such as path planning and exploration, traditional visual SLAM systems usually provide mobile robots with the geometric map, which overlooks the semantic information. To address this problem, inspired by the recent success of the deep neural network, we combine it with the visual SLAM system to conduct semantic mapping. Both the geometric and semantic information will be projected into the 3D space for generating a 3D semantic map. We also use an optical-flow-based method to deal with the moving objects such that our method is capable of working robustly in dynamic environments. We have performed our experiments in the public TUM dataset and our recorded office dataset. Experimental results demonstrate the feasibility and impressive performance of the proposed method.
25

Wang, Weiqi, Xiong You, Xin Zhang, Lingyu Chen, Lantian Zhang, and Xu Liu. "LiDAR-Based SLAM under Semantic Constraints in Dynamic Environments." Remote Sensing 13, no. 18 (September 13, 2021): 3651. http://dx.doi.org/10.3390/rs13183651.

Abstract:
Facing the realistic demands of robot application environments, the application of simultaneous localisation and mapping (SLAM) has gradually moved from static environments to complex dynamic environments, whereas traditional SLAM methods usually suffer pose estimation deviations caused by errors in data association due to the interference of dynamic elements in the environment. This problem is effectively addressed in the present study by proposing a SLAM approach based on light detection and ranging (LiDAR) under semantic constraints in dynamic environments. Four main modules are used for the projection of point cloud data, semantic segmentation, dynamic element screening, and semantic map construction. A LiDAR point cloud semantic segmentation network, SANet, based on a spatial attention mechanism is proposed, which significantly improves the real-time performance and accuracy of point cloud semantic segmentation. A dynamic element selection algorithm is designed and used with prior knowledge to significantly reduce the pose estimation deviations caused by dynamic elements in SLAM. The results of experiments conducted on the public datasets SemanticKITTI, KITTI, and SemanticPOSS show that the accuracy and robustness of the proposed approach are significantly improved.
26

Liu, Zhiying, Xiren Miao, Zhiqiang Xie, Hao Jiang, and Jing Chen. "Power Tower Inspection Simultaneous Localization and Mapping: A Monocular Semantic Positioning Approach for UAV Transmission Tower Inspection." Sensors 22, no. 19 (September 28, 2022): 7360. http://dx.doi.org/10.3390/s22197360.

Abstract:
Realizing autonomous unmanned aerial vehicle (UAV) inspection is of great significance for power line maintenance. This paper introduces a scheme that uses the structure of a tower to realize visual geographical positioning of a UAV for tower inspection and presents a monocular semantic simultaneous localization and mapping (SLAM) framework termed PTI-SLAM (power tower inspection SLAM) to cope with the challenges of the tower inspection scene. The proposed scheme utilizes prior knowledge of tower component geolocation and regards geographical positioning as the estimation of the transformation between SLAM and geographic coordinates. To accomplish robust positioning and semi-dense semantic mapping with limited computing power, PTI-SLAM combines the feature-based SLAM method with a fusion-based direct method and adopts a loosely coupled architecture for the semantic task and the SLAM task. The fusion-based direct method is specially designed to overcome the fragility of the direct method against the adverse conditions of the inspection scene. Experiment results show that PTI-SLAM inherits the robustness advantage of the feature-based method and the semi-dense mapping ability of the direct method and achieves decimeter-level real-time positioning in the airborne system. The experiment concerning geographical positioning indicates more competitive accuracy compared to the previous visual approach and manual UAV operation, demonstrating the potential of PTI-SLAM.
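
Treating geographical positioning as the estimation of a transformation between SLAM and geographic coordinates can be illustrated with the standard Umeyama alignment (a sketch, not PTI-SLAM's implementation), given a handful of tower components with known geolocation and their SLAM-frame estimates:

    import numpy as np

    def umeyama_alignment(src, dst, with_scale=True):
        """Closed-form similarity transform aligning SLAM-frame points src (N, 3)
        to geographic-frame points dst (N, 3), so that dst ~ s * R @ src + t.
        Standard Umeyama method, used here only to illustrate geographic
        positioning as frame alignment."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        cov = (dst - mu_d).T @ (src - mu_s) / len(src)
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                     # enforce a proper rotation
        R = U @ S @ Vt
        var_src = ((src - mu_s) ** 2).sum() / len(src)
        s = float((D * S.diagonal()).sum() / var_src) if with_scale else 1.0
        t = mu_d - s * R @ mu_s
        return s, R, t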
27

Wang, Zemin, Qian Zhang, Jiansheng Li, Shuming Zhang, and Jingbin Liu. "A Computationally Efficient Semantic SLAM Solution for Dynamic Scenes." Remote Sensing 11, no. 11 (June 6, 2019): 1363. http://dx.doi.org/10.3390/rs11111363.

Abstract:
In various dynamic scenes, there are movable objects such as pedestrians, which may challenge simultaneous localization and mapping (SLAM) algorithms. Consequently, the localization accuracy may be degraded, and a moving object may negatively impact the constructed maps. Maps that contain semantic information about dynamic objects impart humans or robots with the ability to semantically understand the environment, and they are critical for various intelligent systems and location-based services. In this study, we developed a computationally efficient SLAM solution that is able to accomplish three tasks in real time: (1) complete localization without accuracy loss due to the existence of dynamic objects and generate a static map that does not contain moving objects, (2) extract semantic information about dynamic objects through a computationally efficient approach, and (3) eventually generate semantic maps, which overlay semantic objects on static maps. The proposed semantic SLAM solution was evaluated through four different experiments on two datasets, respectively verifying the tracking accuracy, computational efficiency, and the quality of the generated static maps and semantic maps. The results show that the proposed SLAM solution is computationally efficient, reducing the time consumption for building maps by two thirds; moreover, the relative localization accuracy is improved, with a translational error of only 0.028 m, and is not degraded by dynamic objects. Finally, the proposed solution generates static maps of a dynamic scene without moving objects and semantic maps with high-precision semantic information about specific objects.
28

Orlova, Svetlana, and Alexander Lopota. "Scene recognition for confined spaces in mobile robotics: current state and tendencies." Robotics and Technical Cybernetics 10, no. 1 (March 2022): 14–24. http://dx.doi.org/10.31776/rtcj.10102.

Abstract:
The article discusses the problem of scene recognition for mobile robotics and considers the subtasks that have to be solved to implement a high-level understanding of the environment. The basis here is an understanding of the geometry and semantics of the scene, which can be decomposed into the subtasks of robot localization, mapping, and semantic analysis. Simultaneous localization and mapping (SLAM) techniques have already been successfully applied and, although they still have some unresolved problems in dynamic environments, do not present a major obstacle here. The focus of the work is on the task of semantic analysis of the scene, which assumes three-dimensional segmentation. The field of 3D segmentation, like the field of image segmentation, has been split into semantic and object segmentation, contrary to the needs of many potential applications. At present, however, panoptic segmentation is beginning to develop, combining the two previous ones and describing the scene most fully. The paper reviews methods of 3D panoptic segmentation and identifies promising approaches. Open problems in scene recognition are also discussed. There is a clear trend towards the development of complex incremental methods of metric-semantic SLAM, which combine segmentation with SLAM methods, and towards the use of scene graphs, which allow describing the geometry and semantics of scene elements and the relationships between them. Scene graphs are especially promising for the field of mobile robotics, since they provide a transition from low-level representations of objects and spaces (for example, segmented point clouds) to describing a scene at a high level of abstraction, close to a human one (a list of objects in a scene, their properties, and their locations relative to each other).
29

Zhang, Jincheng, Prashant Ganesh, Kyle Volle, Andrew Willis, and Kevin Brink. "Low-Bandwidth and Compute-Bound RGB-D Planar Semantic SLAM." Sensors 21, no. 16 (August 10, 2021): 5400. http://dx.doi.org/10.3390/s21165400.

Abstract:
Visual simultaneous localization and mapping (SLAM) using RGB-D cameras has been a necessary capability for intelligent mobile robots. However, when using point-cloud map representations, as most RGB-D SLAM systems do, limitations in onboard compute resources and especially communication bandwidth can significantly limit the quantity of data processed and shared. This article proposes techniques that help address these challenges by mapping point clouds to parametric models in order to reduce the computation and bandwidth load on agents. This contribution is coupled with a convolutional neural network (CNN) that extracts semantic information. Semantics provide guidance in object modeling, which can reduce the geometric complexity of the environment. Pairing a parametric model with a semantic label allows agents to share knowledge of the world with much less complexity, opening a door for multi-agent systems to perform complex tasking and human–robot cooperation. This article takes the first step towards a generalized parametric model by limiting the geometric primitives to planar surfaces and providing semantic labels when appropriate. Two novel compression algorithms for depth data and a method to independently fit planes to RGB-D data are provided, so that plane data can be used for real-time odometry estimation and mapping. Additionally, we extend maps with semantic information predicted from sparse geometries (planes) by a CNN. In experiments, the advantages of our approach in terms of computational and bandwidth resource savings are demonstrated and compared with other state-of-the-art SLAM systems.
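
A minimal sketch of fitting a plane to a set of RGB-D points (illustrative only; the article's own plane-fitting and compression algorithms are more involved):

    import numpy as np

    def fit_plane(points):
        """Least-squares plane fit via SVD: returns a unit normal n and offset d
        with n . p + d = 0 for points on the plane. points: (N, 3) array."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        n = vt[-1]                             # direction of smallest variance
        d = -float(n @ centroid)
        return n, d

    def plane_inliers(points, n, d, thresh=0.02):
        """Boolean mask of points within `thresh` meters of the plane."""
        return np.abs(points @ n + d) < thresh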
30

Wang, Yi, Haoyu Bu, Xiaolong Zhang, and Jia Cheng. "YPD-SLAM: A Real-Time VSLAM System for Handling Dynamic Indoor Environments." Sensors 22, no. 21 (November 7, 2022): 8561. http://dx.doi.org/10.3390/s22218561.

Abstract:
Simultaneous localization and mapping (SLAM) is greatly disturbed by the many dynamic elements present in real environments, so this paper proposes a real-time Visual SLAM (VSLAM) algorithm for dynamic indoor environments. Firstly, a lightweight YoloFastestV2 deep learning model combined with the NCNN and Mobile Neural Network (MNN) inference frameworks is used to obtain preliminary semantic information from images. Dynamic feature points are removed according to the epipolar constraint and the dynamic properties of objects between consecutive frames. Since the reduced number of feature points after rejection affects the pose estimation, this paper innovatively incorporates Cylinder and Plane Extraction (CAPE) planar detection. We generate planes from depth maps and then introduce planar and in-plane point constraints into the nonlinear optimization of SLAM. Finally, the algorithm is tested on the publicly available TUM RGB-D dataset, and the average improvement in localization accuracy over ORB-SLAM2, DS-SLAM, and RDMO-SLAM is about 91.95%, 27.21%, and 30.30%, respectively, under dynamic sequences. The single-frame tracking time of the whole system is only 42.68 ms, which is 44.1%, 14.6%, and 34.33% less than that of DS-SLAM, RDMO-SLAM, and RDS-SLAM, respectively. The proposed system significantly increases processing speed, performs better in real time, and is easily deployed on various platforms.
31

Du, Shitong, Yifan Li, Xuyou Li, and Menghao Wu. "LiDAR Odometry and Mapping Based on Semantic Information for Outdoor Environment." Remote Sensing 13, no. 15 (July 21, 2021): 2864. http://dx.doi.org/10.3390/rs13152864.

Abstract:
Simultaneous Localization and Mapping (SLAM) in an unknown environment is a crucial capability for intelligent mobile robots to achieve high-level navigation and interaction tasks. As one of the typical LiDAR-based SLAM algorithms, the Lidar Odometry and Mapping in Real-time (LOAM) algorithm has shown impressive results. However, LOAM only uses low-level geometric features without considering semantic information. Moreover, the lack of a dynamic object removal strategy limits the algorithm's ability to obtain higher accuracy. To this end, this paper extends the LOAM pipeline by integrating semantic information into the original framework. Specifically, we first propose a two-step dynamic object filtering strategy. Point-wise semantic labels are then used to improve feature extraction and the search for corresponding points. We evaluate the performance of the proposed method in many challenging scenarios, including highway, country, and urban scenes from the KITTI dataset. The results demonstrate that the proposed SLAM system outperforms state-of-the-art SLAM methods in terms of accuracy and robustness.
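
Using point-wise labels when searching for corresponding points might look like the following sketch (an illustration under assumed data layouts, not the paper's code): nearest-neighbor queries are simply restricted to points of the same semantic class:

    import numpy as np
    from scipy.spatial import cKDTree

    def semantic_correspondences(src_pts, src_lbl, dst_pts, dst_lbl, max_dist=1.0):
        """Nearest-neighbor correspondences restricted to matching semantic
        labels. src_pts/dst_pts: (N, 3) arrays; src_lbl/dst_lbl: (N,) int
        labels. Returns a list of (src_index, dst_index) pairs."""
        pairs = []
        for lbl in np.unique(src_lbl):
            src_idx = np.flatnonzero(src_lbl == lbl)
            dst_idx = np.flatnonzero(dst_lbl == lbl)
            if len(dst_idx) == 0:
                continue                        # no candidates with this label
            tree = cKDTree(dst_pts[dst_idx])
            dist, nn = tree.query(src_pts[src_idx], distance_upper_bound=max_dist)
            ok = np.isfinite(dist)              # misses come back as inf
            pairs.extend(zip(src_idx[ok], dst_idx[nn[ok]]))
        return pairs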
32

Yan, Li, Xiao Hu, Leyang Zhao, Yu Chen, Pengcheng Wei, and Hong Xie. "DGS-SLAM: A Fast and Robust RGBD SLAM in Dynamic Environments Combined by Geometric and Semantic Information." Remote Sensing 14, no. 3 (February 8, 2022): 795. http://dx.doi.org/10.3390/rs14030795.

Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a prerequisite for robots to accomplish fully autonomous movement and exploration in unknown environments. At present, many impressive VSLAM systems have emerged, but most of them rely on the static world assumption, which limits their application in real dynamic scenarios. To improve the robustness and efficiency of the system in dynamic environments, this paper proposes a dynamic RGBD SLAM based on a combination of geometric and semantic information (DGS-SLAM). First, a dynamic object detection module based on the multinomial residual model is proposed, which executes the motion segmentation of the scene by combining the motion residual information of adjacent frames and the potential motion information of the semantic segmentation module. Second, a camera pose tracking strategy using feature point classification results is designed to achieve robust system tracking. Finally, according to the results of dynamic segmentation and camera tracking, a semantic segmentation module based on a semantic frame selection strategy is designed for extracting potential moving targets in the scene. Extensive evaluation in public TUM and Bonn datasets demonstrates that DGS-SLAM has higher robustness and speed than state-of-the-art dynamic RGB-D SLAM systems in dynamic scenes.
33

Liao, Ziwei, Yutong Hu, Jiadong Zhang, Xianyu Qi, Xiaoyu Zhang, and Wei Wang. "SO-SLAM: Semantic Object SLAM With Scale Proportional and Symmetrical Texture Constraints." IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 4008–15. http://dx.doi.org/10.1109/lra.2022.3148465.

34

Wu, Wenxin, Liang Guo, Hongli Gao, Zhichao You, Yuekai Liu, and Zhiqiang Chen. "YOLO-SLAM: A semantic SLAM system towards dynamic environment with geometric constraint." Neural Computing and Applications 34, no. 8 (January 8, 2022): 6011–26. http://dx.doi.org/10.1007/s00521-021-06764-3.

35

Sarkar, A., R. Reiger, S. Roy, R. Chaterjee, A. Datta, J. P. Gupta, and A. Sowmyan. "SLAM Using Relational Trees and Semantics." Advanced Materials Research 452-453 (January 2012): 648–53. http://dx.doi.org/10.4028/www.scientific.net/amr.452-453.648.

Abstract:
This work attempts to develop a method for SLAM using semantics, based on FastSLAM 2.0. Our approach to semantic mapping consists of segmenting images obtained from two sensors (optical and radar) aboard a UAV. We then identify landmarks within the segmented image, followed by the construction of relational trees with the landmarks; these trees are then used at consecutive time-steps of the robot's motion for its localization as well as for updating the landmarks. The term semantics is used for region-landmarks, which are validated against a look-up table (LUT) of predefined surface-type information, a superset of the robot's actual environment. Finally, based on particle filters, the posterior density of the state of the robot is estimated and a 2-D semantic map is constructed. The methodology has been tested in a situation wherein the robot's true environment and path have been simulated. For the simulation, we consider satellite images from optical and radar sensors of the robot's environment. At different time-steps, the robot's images are cropped from these images, incorporating errors in the robot's control information. Experiments carried out in the simulated environment have provided encouraging results.
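
The particle-filter reweighting step that FastSLAM-style methods use can be sketched as follows (a generic illustration, not this paper's implementation; the shapes and the shared innovation covariance are assumptions):

    import numpy as np

    def reweight_particles(weights, innovations, S):
        """Importance reweighting: scale each particle's weight by the Gaussian
        likelihood of its landmark-measurement innovation, then renormalize.
        weights: (P,); innovations: (P, 2); S: shared 2x2 innovation covariance."""
        Sinv = np.linalg.inv(S)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(S)))
        # per-particle quadratic form v^T S^-1 v via einsum
        q = np.einsum('pi,ij,pj->p', innovations, Sinv, innovations)
        w = weights * norm * np.exp(-0.5 * q)
        return w / w.sum()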
36

Chi, Jinxin, Hao Wu, and Guohui Tian. "Object-Oriented 3D Semantic Mapping Based on Instance Segmentation." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 4 (July 20, 2019): 695–704. http://dx.doi.org/10.20965/jaciii.2019.p0695.

Abstract:
Service robots gain both geometric and semantic information about the environment with the help of semantic mapping, allowing them to provide more intelligent services. However, the majority of studies on semantic mapping thus far require a priori knowledge such as 3D object models, or produce maps with only a few object categories that neglect separate individual objects. In view of these problems, an object-oriented 3D semantic mapping method is proposed by combining state-of-the-art deep-learning-based instance segmentation and a visual simultaneous localization and mapping (SLAM) algorithm, which helps robots not only gain navigation-oriented geometric information about the surrounding environment, but also obtain individually-oriented attribute and location information about the objects. Meanwhile, an object recognition and target association algorithm applied to continuous image frames is proposed by combining visual SLAM, which uses visual consistency between image frames to improve the result of object matching and recognition over continuous image frames and to improve object recognition accuracy. Finally, a 3D semantic mapping system is implemented based on the Mask R-CNN and ORB-SLAM2 frameworks. A simulation experiment is carried out on the ICL-NUIM dataset, and the experimental results show that the system can generally recognize all the types of objects in the scene and generate fine point cloud models of these objects, which verifies the effectiveness of our algorithm.
37

Zou, Bin, Siyang Lin, and Zhishuai Yin. "Semantic Mapping Based on YOLOv3 and Visual SLAM." Laser & Optoelectronics Progress 57, no. 20 (2020): 201012. http://dx.doi.org/10.3788/lop57.201012.

38

Chen, Weifeng, Guangtao Shang, Aihong Ji, Chengjun Zhou, Xiyang Wang, Chonghui Xu, Zhenxiong Li, and Kai Hu. "An Overview on Visual SLAM: From Tradition to Semantic." Remote Sensing 14, no. 13 (June 23, 2022): 3010. http://dx.doi.org/10.3390/rs14133010.

Abstract:
Visual SLAM (VSLAM) has been developing rapidly due to its advantages of low-cost sensors, easy fusion with other sensors, and richer environmental information. Traditional vision-based SLAM research has made many achievements, but it may fail to achieve the desired results in challenging environments. Deep learning has promoted the development of computer vision, and the combination of deep learning and SLAM has attracted more and more attention. Semantic information, as high-level environmental information, can enable robots to better understand the surrounding environment. This paper introduces the development of VSLAM technology from two aspects: traditional VSLAM, and semantic VSLAM combined with deep learning. For traditional VSLAM, we summarize the advantages and disadvantages of indirect and direct methods in detail and present some classical open-source VSLAM algorithms. In addition, we focus on the development of semantic VSLAM based on deep learning. Starting with the typical neural networks CNN and RNN, we summarize in detail how neural networks improve the VSLAM system. Later, we focus on how target detection and semantic segmentation help introduce semantic information into VSLAM. We believe that the development of the future intelligent era cannot proceed without the help of semantic technology. Introducing deep learning into the VSLAM system to provide semantic information can help robots better perceive the surrounding environment and provide people with higher-level help.
39

Wu, Haibin, Jianbo Zhao, Kaiyang Xu, Yan Zhang, Ruotong Xu, Aili Wang, and Yuji Iwahori. "Semantic SLAM Based on Deep Learning in Endocavity Environment." Symmetry 14, no. 3 (March 19, 2022): 614. http://dx.doi.org/10.3390/sym14030614.

Abstract:
Traditional endoscopic treatment methods restrict the surgeon’s field of view. New approaches to laparoscopic visualization have emerged due to the advent of robot-assisted surgical techniques. Lumen simultaneous localization and mapping (SLAM) technology can use the image sequence taken by the endoscope to estimate the pose of the endoscope and reconstruct the lumen scene in minimally invasive surgery. This technology gives the surgeon better visual perception and is the basis for the development of surgical navigation systems as well as medical augmented reality. However, the movement of surgical instruments in the internal cavity can interfere with the SLAM algorithm, and the feature points extracted from the surgical instruments may cause errors. Therefore, we propose a modified endocavity SLAM method combined with deep learning semantic segmentation that introduces a convolution neural network based on U-Net architecture with a symmetric encoder–decoder structure in the visual odometry with the goals of solving the binary segmentation problem between surgical instruments and the lumen background and distinguishing dynamic feature points. Its segmentation performance is improved by using pretrained encoders on the network model to obtain more accurate pixel-level instrument segmentation. In this setting, the semantic segmentation is used to reject the feature points on the surgical instruments and reduce the impact caused by dynamic surgical instruments. This can provide more stable and accurate mapping results compared to ordinary SLAM systems.
40

Cheng, Junhao, Zhi Wang, Hongyan Zhou, Li Li, and Jian Yao. "DM-SLAM: A Feature-Based SLAM System for Rigid Dynamic Scenes." ISPRS International Journal of Geo-Information 9, no. 4 (March 27, 2020): 202. http://dx.doi.org/10.3390/ijgi9040202.

Abstract:
Most Simultaneous Localization and Mapping (SLAM) methods assume that environments are static. Such a strong assumption limits the application of most visual SLAM systems. The dynamic objects will cause many wrong data associations during the SLAM process. To address this problem, a novel visual SLAM method that follows the pipeline of feature-based methods called DM-SLAM is proposed in this paper. DM-SLAM combines an instance segmentation network with optical flow information to improve the location accuracy in dynamic environments, which supports monocular, stereo, and RGB-D sensors. It consists of four modules: semantic segmentation, ego-motion estimation, dynamic point detection and a feature-based SLAM framework. The semantic segmentation module obtains pixel-wise segmentation results of potentially dynamic objects, and the ego-motion estimation module calculates the initial pose. In the third module, two different strategies are presented to detect dynamic feature points for RGB-D/stereo and monocular cases. In the first case, the feature points with depth information are reprojected to the current frame. The reprojection offset vectors are used to distinguish the dynamic points. In the other case, we utilize the epipolar constraint to accomplish this task. Furthermore, the static feature points left are fed into the fourth module. The experimental results on the public TUM and KITTI datasets demonstrate that DM-SLAM outperforms the standard visual SLAM baselines in terms of accuracy in highly dynamic environments.
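
The reprojection-offset test for the RGB-D/stereo case can be sketched as follows (illustrative only; thresholds and interfaces are assumptions, and positive depths are assumed):

    import numpy as np

    def reprojection_offsets(pts_ref, depths_ref, pts_cur, T_cur_ref, K):
        """Offset between where a static-world point should reproject in the
        current frame and where it was actually matched; large offsets flag
        dynamic points. pts_ref/pts_cur: (N, 2) pixel coordinates of matched
        features; depths_ref: (N,) depths; T_cur_ref: 4x4 relative pose."""
        Kinv = np.linalg.inv(K)
        homo = np.c_[pts_ref, np.ones(len(pts_ref))]          # (N, 3) pixels
        P_ref = (Kinv @ homo.T).T * depths_ref[:, None]       # back-projected 3D
        P_cur = (T_cur_ref[:3, :3] @ P_ref.T).T + T_cur_ref[:3, 3]
        proj = (K @ (P_cur / P_cur[:, 2:3]).T).T[:, :2]       # expected pixels
        return proj - pts_cur                                  # offset vectors

    # e.g.: dynamic = np.linalg.norm(offsets, axis=1) > 3.0   # pixel threshold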
41

Long, Xudong, Weiwei Zhang, and Bo Zhao. "PSPNet-SLAM: A Semantic SLAM Detect Dynamic Object by Pyramid Scene Parsing Network." IEEE Access 8 (2020): 214685–95. http://dx.doi.org/10.1109/access.2020.3041038.

42

Vallicrosa, Guillem, Khadidja Himri, Pere Ridao, and Nuno Gracias. "Semantic Mapping for Autonomous Subsea Intervention." Sensors 21, no. 20 (October 11, 2021): 6740. http://dx.doi.org/10.3390/s21206740.

Abstract:
This paper presents a method to build a semantic map to assist an underwater vehicle-manipulator system in performing intervention tasks autonomously in a submerged man-made pipe structure. The method is based on the integration of feature-based simultaneous localization and mapping (SLAM) and 3D object recognition using a database of a priori known objects. The robot uses Doppler velocity log (DVL), pressure, and attitude and heading reference system (AHRS) sensors for navigation and is equipped with a laser scanner providing non-coloured 3D point clouds of the inspected structure in real time. The object recognition module recognises the pipes and objects within the scan and passes them to the SLAM, which adds them to the map if not yet observed. Otherwise, it uses them to correct the map and the robot navigation if they were already mapped. The SLAM provides a consistent map and a drift-less navigation. Moreover, it provides a global identifier for every observed object instance and its pipe connectivity. This information is fed back to the object recognition module, where it is used to estimate the object classes using Bayesian techniques over the set of those object classes which are compatible in terms of pipe connectivity. This allows fusing of all the already available object observations to improve recognition. The outcome of the process is a semantic map made of pipes connected through valves, elbows and tees conforming to the real structure. Knowing the class and the position of objects will enable high-level manipulation commands in the near future.
43

Wang, Sheng, Guohua Gou, Haigang Sui, Yufeng Zhou, Hao Zhang, and Jiajie Li. "CDSFusion: Dense Semantic SLAM for Indoor Environment Using CPU Computing." Remote Sensing 14, no. 4 (February 17, 2022): 979. http://dx.doi.org/10.3390/rs14040979.

Abstract:
Unmanned Aerial Vehicles (UAVs) require the ability to robustly perceive surrounding scenes for autonomous navigation, and semantic reconstruction of the scene provides a truly functional understanding of the environment. However, high-performance computing is generally not available on most UAVs, so a lightweight real-time semantic reconstruction method is necessary. Existing methods rely on GPUs, and real-time semantic reconstruction is difficult to achieve on a CPU. To solve this problem, an indoor dense semantic Simultaneous Localization and Mapping (SLAM) method using CPU computing, named CDSFusion, is proposed in this paper. CDSFusion is the first system to integrate RGBD-based Visual-Inertial Odometry (VIO), semantic segmentation, and 3D reconstruction in real time on a CPU. In the VIO method, depth information is introduced to improve the accuracy of pose estimation, and FAST features are used for faster tracking. In the semantic reconstruction method, a pre-trained PSPNet (Pyramid Scene Parsing Network) model is optimized to provide semantic information in real time on the CPU, and the semantic point clouds are fused using Voxblox. The experimental results demonstrate that camera tracking is accelerated without loss of accuracy in the proposed VIO, and a 3D semantic map is reconstructed in real time that is comparable to one generated by a GPU-dependent method.
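The CPU budget motivates the choice of FAST corners in the VIO front end. A minimal sketch of one tracking step is given below; pairing FAST with pyramidal Lucas-Kanade optical flow is our illustrative assumption about how such a front end can be kept cheap, and the detector threshold is arbitrary.

    import numpy as np
    import cv2

    def track_fast_klt(prev_gray, gray):
        """Detect FAST corners in the previous frame and track them into the
        current frame; returns matched (prev, curr) pixel coordinates."""
        fast = cv2.FastFeatureDetector_create(threshold=20,
                                              nonmaxSuppression=True)
        kps = fast.detect(prev_gray, None)
        pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        # Pyramidal Lucas-Kanade optical flow, CPU-friendly like FAST itself.
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        ok = status.ravel() == 1
        return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)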
44

Zhang, Yu, Xiping Xu, Ning Zhang, and Yaowen Lv. "A Semantic SLAM System for Catadioptric Panoramic Cameras in Dynamic Environments." Sensors 21, no. 17 (September 1, 2021): 5889. http://dx.doi.org/10.3390/s21175889.

Abstract:
When a traditional visual SLAM system works in a dynamic environment, it is disturbed by dynamic objects and performs poorly. To overcome this interference, we propose a semantic SLAM system for catadioptric panoramic cameras in dynamic environments. A real-time instance segmentation network is used to detect potential moving targets in the panoramic image, and these candidates are then verified against the sphere's epipolar constraints to find the truly dynamic targets. When extracting feature points, the dynamic objects in the panoramic image are masked, so only static feature points are used to estimate the pose of the panoramic camera, improving the accuracy of pose estimation. To verify the performance of the system, experiments were conducted on public datasets. They show that in a highly dynamic environment the accuracy of our system is significantly better than that of traditional algorithms: measured by the RMSE of the absolute trajectory error, our system performs up to 96.3% better than traditional SLAM. Our catadioptric panoramic camera semantic SLAM system thus offers higher accuracy and robustness in complex dynamic environments.
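On the viewing sphere, the epipolar constraint applies to unit bearing vectors rather than pixel coordinates: a static correspondence (x1, x2) satisfies x2^T E x1 = 0 for the essential matrix E. The sketch below assumes the catadioptric back-projection from panoramic pixels to bearings has already been done, and the thresholding policy is ours, not the paper's.

    import numpy as np

    def sphere_epipolar_residuals(bearings1, bearings2, E):
        """Epipolar residuals |x2^T E x1| for unit bearing vectors on the
        sphere; bearings1, bearings2: (N, 3) arrays of correspondences."""
        return np.abs(np.einsum('ni,ij,nj->n', bearings2, E, bearings1))

    # Correspondences whose residual exceeds a noise threshold are kept as
    # dynamic candidates; the rest are treated as static background.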
45

Deng, Wenbang, Kaihong Huang, Xieyuanli Chen, Zhiqian Zhou, Chenghao Shi, Ruibin Guo, and Hui Zhang. "Semantic RGB-D SLAM for Rescue Robot Navigation." IEEE Access 8 (2020): 221320–29. http://dx.doi.org/10.1109/access.2020.3031867.

46

Wang, Jiwu, and Yafan Liu. "Visual SLAM System Design based on Semantic Segmentation." Proceedings of International Conference on Artificial Life and Robotics 24 (January 10, 2019): 316–19. http://dx.doi.org/10.5954/icarob.2019.os12-3.

47

Chen, Zhuo, Xiaoming Liu, Masaru Kojima, Qiang Huang, and Tatsuo Arai. "A Wearable Navigation Device for Visually Impaired People Based on the Real-Time Semantic Visual SLAM System." Sensors 21, no. 4 (February 23, 2021): 1536. http://dx.doi.org/10.3390/s21041536.

Abstract:
Wearable auxiliary devices for visually impaired people are a highly attractive research topic. Although many proposed wearable navigation devices can assist visually impaired people with obstacle avoidance and navigation, these devices cannot feed back detailed information about the obstacles or help the visually impaired understand the environment. In this paper, we propose a wearable navigation device for the visually impaired that integrates semantic visual SLAM (Simultaneous Localization And Mapping) with a newly launched, powerful mobile computing platform. The system uses a structured-light RGB-D (colour and depth) camera as its sensor and the mobile computing platform as its control center. We also focus on combining SLAM technology with the extraction of semantic information from the environment, which ensures that the computing platform understands the surrounding environment in real time and can feed it back to the visually impaired user as a voice broadcast. Finally, we tested the performance of the proposed semantic visual SLAM system on this device. The results indicate that the system can run in real time on a wearable navigation device with sufficient accuracy.
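A concrete way to close the loop from semantic perception to the user is to verbalize the nearest labelled obstacles. The sketch below assumes a (label, distance) detection format of our own choosing and uses the offline pyttsx3 text-to-speech package as one possible voice back end; neither is confirmed by the paper.

    import pyttsx3  # offline text-to-speech; one possible choice of back end

    def announce_obstacles(detections):
        """detections: list of (label, distance_m) pairs produced by the
        semantic SLAM front end (this format is an assumption)."""
        engine = pyttsx3.init()
        # Announce nearest obstacles first, as they matter most to the user.
        for label, dist in sorted(detections, key=lambda d: d[1]):
            engine.say(f"{label} ahead, {dist:.1f} meters")
        engine.runAndWait()

    announce_obstacles([("chair", 1.2), ("door", 3.5)])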
48

Ge, Gengyu, Zhong Qin, and Lilve Fan. "An Improved VSLAM for Mobile Robot Localization in Corridor Environment." Advances in Multimedia 2022 (May 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/3941995.

Abstract:
Localization is a fundamental capability for an autonomous mobile robot, especially during navigation. The commonly used laser-based simultaneous localization and mapping (SLAM) method can build a grid map of an indoor environment and carry out the localization task. However, when a robot enters a long corridor containing many geometrically symmetrical and similar structures, it often fails to position itself. Moreover, the environment is not represented at a semantic level, so the robot cannot interact well with users. To solve these crucial issues, this paper proposes an improved visual SLAM approach for robust and precise global localization. The system consists of two main steps. The first step constructs a topological semantic map using visual SLAM, text detection and recognition, and laser sensor data. The second step is localization, which repeats part of the first step's work but makes full use of the prebuilt semantic map. Experiments show that our approach performs well and localizes successfully almost everywhere in the corridor environment where traditional methods fail.
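The text cue is what breaks the corridor's symmetry: two geometrically identical junctions still differ in the door-plate text visible from them. As a hypothetical illustration (the map format, labels, and matching scheme below are ours, not the paper's), recognized text can be fuzzily matched against the node labels of the prebuilt topological map:

    from difflib import SequenceMatcher

    # Hypothetical topological map: node id -> text label read off a door plate.
    TOPO_MAP = {0: "Room 301", 1: "Room 302", 2: "Fire Exit"}

    def localize_by_text(recognized, min_score=0.8):
        """Match OCR output against prebuilt node labels; returns the best
        node id, or None when no label is similar enough and the geometric
        (laser) localization must take over."""
        best, score = None, 0.0
        for node, label in TOPO_MAP.items():
            s = SequenceMatcher(None, recognized.lower(), label.lower()).ratio()
            if s > score:
                best, score = node, s
        return best if score >= min_score else None

    assert localize_by_text("Rooma 302") == 1  # tolerant to OCR noise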
49

Sualeh, Muhammad, and Gon-Woo Kim. "Semantics Aware Dynamic SLAM Based on 3D MODT." Sensors 21, no. 19 (September 23, 2021): 6355. http://dx.doi.org/10.3390/s21196355.

Abstract:
The idea of SLAM (Simultaneous Localization and Mapping) being a solved problem revolves around the static-world assumption, even though autonomous systems are gaining environmental perception capabilities by exploiting advances in computer vision and data-driven approaches. Computational demands and time complexities remain the main impediments to effectively fusing the two paradigms. In this paper, a framework to solve the dynamic SLAM problem is proposed. The dynamic regions of the scene are handled by means of Visual-LiDAR-based MODT (Multiple Object Detection and Tracking), while minimal computational demands and real-time performance are ensured. The framework is tested on the KITTI datasets and evaluated with publicly available evaluation tools for a fair comparison with state-of-the-art SLAM algorithms. The results suggest that the proposed dynamic SLAM framework can perform in real time with budgeted computational resources. In addition, the fused MODT provides rich semantic information that can be readily integrated into SLAM.
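One simple way to picture the MODT-SLAM coupling is that tracked 3D object boxes mask out LiDAR returns before scan registration, so only the static scene constrains the pose. The sketch below uses axis-aligned boxes as a simplification of the oriented boxes a tracker would output, and the data layout is an assumption.

    import numpy as np

    def mask_dynamic_points(points, boxes):
        """points: (N, 3) LiDAR points in the vehicle frame; boxes: list of
        (min_xyz, max_xyz) bounds for objects tracked by the MODT module.
        Returns only the static points for the SLAM back end to consume."""
        dynamic = np.zeros(len(points), dtype=bool)
        for lo, hi in boxes:
            inside = np.all((points >= lo) & (points <= hi), axis=1)
            dynamic |= inside
        return points[~dynamic]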
50

Lai, Tin. "A Review on Visual-SLAM: Advancements from Geometric Modelling to Learning-Based Semantic Scene Understanding Using Multi-Modal Sensor Fusion." Sensors 22, no. 19 (September 25, 2022): 7265. http://dx.doi.org/10.3390/s22197265.

Abstract:
Simultaneous Localisation and Mapping (SLAM) is one of the fundamental problems in autonomous mobile robotics, in which a robot needs to reconstruct a previously unseen environment while simultaneously localising itself with respect to the map. In particular, Visual-SLAM uses various sensors on the mobile robot to collect and sense a representation of the map. Traditionally, geometric model-based techniques were used to tackle the SLAM problem; they tend to be error-prone in challenging environments. Recent advancements in computer vision, such as deep learning techniques, have provided a data-driven approach to tackling the Visual-SLAM problem. This review summarises recent advancements in the Visual-SLAM domain using various learning-based methods. We begin with a concise overview of the geometric model-based approaches, followed by technical reviews of the current paradigms in SLAM. We then present the various learning-based approaches to collecting sensory inputs from mobile robots and performing scene understanding. The current paradigms in deep-learning-based semantic understanding are discussed and placed in the context of Visual-SLAM. Finally, we discuss challenges and further opportunities in the direction of learning-based approaches to Visual-SLAM.