Journal articles on the topic 'Simultaneous localisation and mapping'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Simultaneous localisation and mapping.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Jonghyuk. "Rao-Blackwellised Inertial Simultaneous Localisation and Mapping." IFAC Proceedings Volumes 41, no. 2 (2008): 9522–27. http://dx.doi.org/10.3182/20080706-5-kr-1001.01610.

2

Wang, Xun, and JianGuo Wang. "Detecting glass in Simultaneous Localisation and Mapping." Robotics and Autonomous Systems 88 (February 2017): 97–103. http://dx.doi.org/10.1016/j.robot.2016.11.003.

3

Wang, Sufang, Tingyang Xie, and Peng Chen. "Simultaneous localisation and mapping of intelligent mobile robots." International Journal of Cybernetics and Cyber-Physical Systems 1, no. 1 (2021): 93. http://dx.doi.org/10.1504/ijccps.2021.113103.

4

Bryson, Mitch, and Salah Sukkarieh. "Architectures for Cooperative Airborne Simultaneous Localisation and Mapping." Journal of Intelligent and Robotic Systems 55, no. 4-5 (January 16, 2009): 267–97. http://dx.doi.org/10.1007/s10846-008-9303-9.

5

Wang, Sufang, Tingyang Xie, and Peng Chen. "Simultaneous localisation and mapping of intelligent mobile robots." International Journal of Cybernetics and Cyber-Physical Systems 1, no. 1 (2021): 1. http://dx.doi.org/10.1504/ijccps.2021.10033929.

6

Ma, Teng, Ye Li, Yusen Gong, Rupeng Wang, Mingwei Sheng, and Qiang Zhang. "AUV Bathymetric Simultaneous Localisation and Mapping Using Graph Method." Journal of Navigation 72, no. 6 (July 5, 2019): 1602–22. http://dx.doi.org/10.1017/s0373463319000286.

Abstract:
Although topographic mapping missions and geological surveys carried out by Autonomous Underwater Vehicles (AUVs) are becoming increasingly prevalent, the lack of precise navigation in these scenarios still limits their application. This paper deals with the problems of long-term underwater navigation for AUVs and provides new mapping techniques by developing a Bathymetric Simultaneous Localisation And Mapping (BSLAM) method based on graph SLAM technology. To considerably reduce the calculation cost, the trajectory of the AUV is divided into various submaps based on Differences of Normals (DoN). Loop closures between submaps are obtained by terrain matching; meanwhile, maximum likelihood terrain estimation is also introduced to build weak data association within the submap. Assisted by one weight voting method for loop closures, the global and local trajectory corrections work together to provide an accurate navigation solution for AUVs with weak data association and inaccurate loop closures. The viability, accuracy and real-time performance of the proposed algorithm are verified with data collected onboard, including an 8 km planned track recorded at a speed of 4 knots in Qingdao, China.
7

Mahony, Robert, Tarek Hamel, and Jochen Trumpf. "An homogeneous space geometry for simultaneous localisation and mapping." Annual Reviews in Control 51 (2021): 254–67. http://dx.doi.org/10.1016/j.arcontrol.2021.04.012.

8

van Goor, Pieter, Robert Mahony, Tarek Hamel, and Jochen Trumpf. "Constructive observer design for Visual Simultaneous Localisation and Mapping." Automatica 132 (October 2021): 109803. http://dx.doi.org/10.1016/j.automatica.2021.109803.

9

Lepej, Peter, and Jurij Rakun. "Simultaneous localisation and mapping in a complex field environment." Biosystems Engineering 150 (October 2016): 160–69. http://dx.doi.org/10.1016/j.biosystemseng.2016.08.004.

10

Bala, Jibril Abdullahi, Steve Adetunji Adeshina, and Abiodun Musa Aibinu. "Advances in Visual Simultaneous Localisation and Mapping Techniques for Autonomous Vehicles: A Review." Sensors 22, no. 22 (November 18, 2022): 8943. http://dx.doi.org/10.3390/s22228943.

Abstract:
The recent advancements in Information and Communication Technology (ICT) as well as increasing demand for vehicular safety has led to significant progressions in Autonomous Vehicle (AV) technology. Perception and Localisation are major operations that determine the success of AV development and usage. Therefore, significant research has been carried out to provide AVs with the capabilities to not only sense and understand their surroundings efficiently, but also provide detailed information of the environment in the form of 3D maps. Visual Simultaneous Localisation and Mapping (V-SLAM) has been utilised to enable a vehicle understand its surroundings, map the environment, and identify its position within the area. This paper presents a detailed review of V-SLAM techniques implemented for AV perception and localisation. An overview of SLAM techniques is presented. In addition, an in-depth review is conducted to highlight various V-SLAM schemes, their strengths, and limitations. Challenges associated with V-SLAM deployment and future research directions are also provided in this paper.
11

Pailhas, Yan, Chris Capus, Keith Brown, and Yvan Petillot. "Design of artificial landmarks for underwater simultaneous localisation and mapping." IET Radar, Sonar & Navigation 7, no. 1 (January 2013): 10–18. http://dx.doi.org/10.1049/iet-rsn.2011.0103.

12

Arturo, G. A., R. G. Óscar, J. C. Miguel, B. G. Mónica, and U. G. David. "Cooperative simultaneous localisation and mapping using independent Rao–Blackwellised filters." IET Computer Vision 6, no. 5 (September 1, 2012): 407–14. http://dx.doi.org/10.1049/iet-cvi.2011.0108.

13

Hwang, Arom, and Woojae Seong. "Simultaneous Mapping and Localisation for Small Military Unmanned Underwater Vehicle." Defence Science Journal 62, no. 4 (July 6, 2012): 223–27. http://dx.doi.org/10.14429/dsj.62.1002.

14

Doh, N. L., H. Cho, W. K. Chung, and K. Lee. "Simultaneous localisation and mapping algorithm for topological maps with dynamics." IET Control Theory & Applications 3, no. 9 (September 1, 2009): 1249–60. http://dx.doi.org/10.1049/iet-cta.2008.0254.

15

Wu, Xinzhao, Peiqing Li, Qipeng Li, and Zhuoran Li. "Two-dimensional-simultaneous Localisation and Mapping Study Based on Factor Graph Elimination Optimisation." Sustainability 15, no. 2 (January 8, 2023): 1172. http://dx.doi.org/10.3390/su15021172.

Abstract:
A robust multi-sensor fusion simultaneous localization and mapping (SLAM) algorithm for complex road surfaces is proposed to improve recognition accuracy and reduce system memory occupation, aiming to enhance the computational efficiency of light detection and ranging in complex environments. First, a weighted signed distance function (W-SDF) map-based SLAM method is proposed. It uses a W-SDF map to capture the environment with less accuracy than the raster size but with high localization accuracy. The Levenberg–Marquardt method is used to solve the scan-matching problem in laser SLAM; it effectively alleviates the limitations of the Gaussian–Newton method that may lead to insufficient local accuracy, and reduces localisation errors. Second, ground constraint factors are added to the factor graph, and a multi-sensor fusion localisation algorithm is proposed based on factor graph elimination optimisation. A sliding window is added to the chain factor graph model to retain the historical state information within the window and avoid high-dimensional matrix operations. An elimination algorithm is introduced to transform the factor graph into a Bayesian network to marginalize the historical states and reduce the matrix dimensionality, thereby improving the algorithm localisation accuracy and reducing the memory occupation. Finally, the proposed algorithm is compared and validated with two traditional algorithms based on an unmanned cart. Experiments show that the proposed algorithm reduces memory consumption and improves localisation accuracy compared to the Hector algorithm and Cartographer algorithm, has good performance in terms of accuracy, reliability and computational efficiency in complex pavement environments, and is better utilised in practical environments.
16

Huang, Shoudong. "A review of optimisation strategies used in simultaneous localisation and mapping." Journal of Control and Decision 6, no. 1 (November 29, 2018): 61–74. http://dx.doi.org/10.1080/23307706.2018.1552207.

17

Fernández, Lorenzo, Oscar Reinoso, Luis Miguel Jimenez, and Luis Payá. "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping." IET Intelligent Transport Systems 8, no. 8 (December 1, 2014): 688–99. http://dx.doi.org/10.1049/iet-its.2013.0086.

18

Bedkowski, Janusz, Timo Röhling, Frank Hoeller, Dirk Schulz, and Frank E. Schneider. "Benchmark of 6D SLAM (6D Simultaneous Localisation and Mapping) Algorithms with Robotic Mobile Mapping Systems." Foundations of Computing and Decision Sciences 42, no. 3 (September 1, 2017): 275–95. http://dx.doi.org/10.1515/fcds-2017-0014.

Abstract:
This work concerns the study of 6DSLAM algorithms with an application of robotic mobile mapping systems. The architecture of the 6DSLAM algorithm is designed for evaluation of different data registration strategies. The algorithm is composed of the iterative registration component, thus ICP (Iterative Closest Point), ICP (point to projection), ICP with semantic discrimination of points, LS3D (Least Square Surface Matching), NDT (Normal Distribution Transform) can be chosen. Loop closing is based on LUM and LS3D. The main research goal was to investigate the semantic discrimination of measured points that improve the accuracy of final map especially in demanding scenarios such as multi-level maps (e.g., climbing stairs). The parallel programming based nearest neighborhood search implementation such as point to point, point to projection, semantic discrimination of points is used. The 6DSLAM framework is based on modified 3DTK and PCL open source libraries and parallel programming techniques using NVIDIA CUDA. The paper shows experiments that are demonstrating advantages of proposed approach in relation to practical applications. The major added value of presented research is the qualitative and quantitative evaluation based on realistic scenarios including ground truth data obtained by geodetic survey. The research novelty looking from mobile robotics is the evaluation of LS3D algorithm well known in geodesy.
19

Goor, Pieter van, Robert Mahony, Tarek Hamel, and Jochen Trumpf. "An Observer Design for Visual Simultaneous Localisation and Mapping with Output Equivariance." IFAC-PapersOnLine 53, no. 2 (2020): 9560–65. http://dx.doi.org/10.1016/j.ifacol.2020.12.2438.

20

Touchette, S., W. Gueaieb, and E. Lanteigne. "Efficient Cholesky Factor Recovery for Column Reordering in Simultaneous Localisation and Mapping." Journal of Intelligent & Robotic Systems 84, no. 1-4 (April 19, 2016): 859–75. http://dx.doi.org/10.1007/s10846-016-0367-7.

21

Petrov, Nikita, Oleg Krasnov, and Alexander G. Yarovoy. "Auto-Calibration of Automotive Radars in Operational Mode Using Simultaneous Localisation and Mapping." IEEE Transactions on Vehicular Technology 70, no. 3 (March 2021): 2062–75. http://dx.doi.org/10.1109/tvt.2021.3058778.

22

Panzieri, Stefano, Federica Pascucci, and Roberto Setola. "Simultaneous localisation and mapping of a mobile robot via interlaced extended Kalman filter." International Journal of Modelling, Identification and Control 4, no. 1 (2008): 68. http://dx.doi.org/10.1504/ijmic.2008.021001.

23

Wong, Rex H., Jizhong Xiao, Samleo L. Joseph, and Shouling He. "A mixed model data association for simultaneous localisation and mapping in dynamic environments." International Journal of Mechatronics and Automation 3, no. 1 (2013): 1. http://dx.doi.org/10.1504/ijma.2013.052614.

24

Barros, Alfredo, Maria Cardoso, Peng Sheng, Paulo Costa, and Christina Pelizon. "Radioguided localisation of non-palpable breast lesions and simultaneous sentinel lymph node mapping." European Journal of Nuclear Medicine and Molecular Imaging 29, no. 12 (December 1, 2002): 1561–65. http://dx.doi.org/10.1007/s00259-002-0936-9.

25

Kim, Jonghyuk, Jose Guivant, Martin L. Sollie, Torleiv H. Bryne, and Tor Arne Johansen. "Compressed pseudo-SLAM: pseudorange-integrated compressed simultaneous localisation and mapping for unmanned aerial vehicle navigation." Journal of Navigation 74, no. 5 (March 26, 2021): 1091–103. http://dx.doi.org/10.1017/s037346332100031x.

Abstract:
This paper addresses the fusion of the pseudorange/pseudorange rate observations from the global navigation satellite system and the inertial–visual simultaneous localisation and mapping (SLAM) to achieve reliable navigation of unmanned aerial vehicles. This work extends the previous work on a simulation-based study [Kim et al. (2017). Compressed fusion of GNSS and inertial navigation with simultaneous localisation and mapping. IEEE Aerospace and Electronic Systems Magazine, 32(8), 22–36] to a real-flight dataset collected from a fixed-wing unmanned aerial vehicle platform. The dataset consists of measurements from visual landmarks, an inertial measurement unit, and pseudorange and pseudorange rates. We propose a novel all-source navigation filter, termed a compressed pseudo-SLAM, which can seamlessly integrate all available information in a computationally efficient way. In this framework, a local map is dynamically defined around the vehicle, updating the vehicle and local landmark states within the region. A global map includes the rest of the landmarks and is updated at a much lower rate by accumulating (or compressing) the local-to-global correlation information within the filter. It will show that the horizontal navigation error is effectively constrained with one satellite vehicle and one landmark observation. The computational cost will be analysed, demonstrating the efficiency of the method.
26

Adamowicz, Mateusz, Leszek Ambroziak, and Mirosław Kondratiuk. "Efficient Non-Odometry Method for Environment Mapping and Localisation of Mobile Robots." Acta Mechanica et Automatica 15, no. 1 (March 1, 2021): 24–29. http://dx.doi.org/10.2478/ama-2021-0004.

Abstract:
The paper presents the simple algorithm of simultaneous localisation and mapping (SLAM) without odometry information. The proposed algorithm is based only on scanning laser range finder. The theoretical foundations of the proposed method are presented. The most important element of the work is the experimental research. The research underlying the paper encompasses several tests, which were carried out to build the environment map to be navigated by the mobile robot in conjunction with the trajectory planning algorithm and obstacle avoidance.
27

Williams, Stefan B., Paul Newman, Julio Rosenblatt, Gamini Dissanayake, and Hugh Durrant-Whyte. "Autonomous underwater navigation and control." Robotica 19, no. 5 (August 29, 2001): 481–96. http://dx.doi.org/10.1017/s0263574701003423.

Abstract:
This paper describes the autonomous navigation and control of an undersea vehicle using a vehicle control architecture based on the Distributed Architecture for Mobile Navigation and a terrain-aided navigation technique based on simultaneous localisation and map building. Development of the low-speed platform models for vehicle control and the theoretical and practical details of mapping and position estimation using sonar are provided. Details of an implementation of these techniques on a small submersible vehicle "Oberon" are presented.
28

Cao, Yuchen, Lan Hu, and Laurent Kneip. "Representations and Benchmarking of Modern Visual SLAM Systems." Sensors 20, no. 9 (April 30, 2020): 2572. http://dx.doi.org/10.3390/s20092572.

Abstract:
Simultaneous Localisation And Mapping (SLAM) has long been recognised as a core problem to be solved within countless emerging mobile applications that require intelligent interaction or navigation in an environment. Classical solutions to the problem primarily aim at localisation and reconstruction of a geometric 3D model of the scene. More recently, the community increasingly investigates the development of Spatial Artificial Intelligence (Spatial AI), an evolutionary paradigm pursuing a simultaneous recovery of object-level composition and semantic annotations of the recovered 3D model. Several interesting approaches have already been presented, producing object-level maps with both geometric and semantic properties rather than just accurate and robust localisation performance. As such, they require much broader ground truth information for validation purposes. We discuss the structure of the representations and optimisation problems involved in Spatial AI, and propose new synthetic datasets that, for the first time, include accurate ground truth information about the scene composition as well as individual object shapes and poses. We furthermore propose evaluation metrics for all aspects of such joint geometric-semantic representations and apply them to a new semantic SLAM framework. It is our hope that the introduction of these datasets and proper evaluation metrics will be instrumental in the evaluation of current and future Spatial AI systems and as such contribute substantially to the overall research progress on this important topic.
29

Yang, P. "Efficient particle filter algorithm for ultrasonic sensor-based 2D range-only simultaneous localisation and mapping application." IET Wireless Sensor Systems 2, no. 4 (December 1, 2012): 394–401. http://dx.doi.org/10.1049/iet-wss.2011.0129.

30

Rodionov, O. A., and B. Rasheed. "Development of a semantic map for an unmanned vehicle using a simultaneous localisation and mapping method." Russian Automobile and Highway Industry Journal 19, no. 6 (January 7, 2023): 900–914. http://dx.doi.org/10.26518/2071-7296-2022-19-6-900-914.

Abstract:
Introduction: The field of unmanned technologies is rapidly developing and a lot of research is being conducted on the practical application of artificial intelligence algorithms to solve complex problems on the road. The difficulties in the perception of the surrounding world by the machine led to the appearance of special High definition maps. These maps are used to simplify and improve the quality and reliability of other subsystems from the stack of autonomous technologies, such as localization, prediction, navigation and planning modules. In modern literature, there are mainly works on the practical application of such maps, and the process of developing a map remains outside the scope of consideration. The aim of the work is to create a methodology for designing semantic maps for autonomous vehicles with a detailed description of each of the development stages. Materials and methods: The article describes the methodology for creation of HD maps, which includes the stages of data collection using SLAM (Simultaneous localization and mapping) approach, its further processing and the development of the semantics of the road network. The described algorithm is applied in practice to develop the semantic map of Innopolis city area using SLAM approach with LIDAR inertial odometry via smoothing and mapping (LIO-SAM). Results: The main stages of the methodology for creating HD maps for autonomous vehicles have been proposed and investigated. Authors implemented the proposed concept in practice and described in detail the process of creating a semantic map for the Innopolis city area. Conclusions: The proposed methodology can be used for any type of autonomous robots (ground vehicles, unmanned aerial vehicle, water transport) and can be implemented in different road conditions (city, off-road), depending on the information the map should provide for the implementation of the goals and objectives set for the autonomous vehicle.
31

Brindza, Ján, Pavol Kajánek, and Ján Erdélyi. "Lidar-Based Mobile Mapping System for an Indoor Environment." Slovak Journal of Civil Engineering 30, no. 2 (June 1, 2022): 47–58. http://dx.doi.org/10.2478/sjce-2022-0014.

Abstract:
The article deals with developing and testing a low-cost measuring system for simultaneous localisation and mapping (SLAM) in an indoor environment. The measuring system consists of three orthogonally-placed 2D lidars, a robotic platform with two wheel speed sensors, and an inertial measuring unit (IMU). The paper describes the data processing model used for both the estimation of the trajectory of SLAM and the creation of a 3D model of the environment based on the estimated trajectory of the SLAM. The main problem of SLAM usage is the accumulation of errors caused by the imperfect transformation of two scans into each other. The data processing developed includes an automatic evaluation and correction of the slope of the lidar. Furthermore, during the calculation of the trajectory, a repeatedly traversed area is identified (loop closure), which enables the optimisation of the trajectory determined. The system was tested in the indoor environment of the Faculty of Civil Engineering of the Slovak University of Technology in Bratislava.
32

Collings, Simon, Tara J. Martin, Emili Hernandez, Stuart Edwards, Andrew Filisetti, Gavin Catt, Andreas Marouchos, Matt Boyd, and Carl Embry. "Findings from a Combined Subsea LiDAR and Multibeam Survey at Kingston Reef, Western Australia." Remote Sensing 12, no. 15 (July 30, 2020): 2443. http://dx.doi.org/10.3390/rs12152443.

Abstract:
Light Detection and Ranging (LiDAR), a comparatively new technology in the field of underwater surveying, has principally been used for taking precise measurement of undersea structures in the oil and gas industry. Typically, the LiDAR is deployed on a remotely operated vehicle (ROV), which will “land” on the seafloor in order to generate a 3D point cloud of its environment from a stationary position. To explore the potential of subsea LiDAR on a moving platform in an environmental context, we deployed an underwater LiDAR system simultaneously with a multibeam echosounder (MBES), surveying Kingston Reef off the coast of Rottnest Island, Western Australia. This paper compares and summarises the relative accuracy and characteristics of underwater LiDAR and multibeam sonar and investigates synergies between sonar and LiDAR technology for the purpose of benthic habitat mapping and underwater simultaneous localisation and mapping (SLAM) for Autonomous Underwater Vehicles (AUVs). We found that LiDAR reflectivity and multibeam backscatter are complementary technologies for habitat mapping, which can combine to discriminate between habitats that could not be mapped with either one alone. For robot navigation, SLAM can be effectively applied with either technology, however, when a Global Navigation Satellite System (GNSS) is available, SLAM does not significantly improve the self-consistency of multibeam data, but it does for LiDAR.
33

Karam, S., V. Lehtola, and G. Vosselman. "INTEGRATING A LOW-COST MEMS IMU INTO A LASER-BASED SLAM FOR INDOOR MOBILE MAPPING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 149–56. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-149-2019.

Abstract:
Indoor mapping techniques are highly important in many applications, such as human navigation and indoor modelling. As satellite positioning systems do not work in indoor applications, several alternative navigational sensors and methods have been used to provide accurate indoor positioning for mapping purposes, such as inertial measurement units (IMUs) and simultaneous localisation and mapping algorithms (SLAM). In this paper, we investigate the benefits that the integration of a low-cost microelectromechanical system (MEMS) IMU can bring to a feature-based SLAM algorithm. Specifically, we utilize IMU data to predict the pose of our backpack indoor mobile mapping system to improve the SLAM algorithm. The experimental results show that using the proposed IMU integration method leads into a more robust data association between the measured points and the model planes. Notably, the number of points that are assigned to the model planes is increased, and the root mean square error (RMSE) of the residuals, i.e. distances between these measured points and the model planes, is decreased significantly from 1.8 cm to 1.3 cm.
34

Ng, Yonhon, Hongdong Li, and Jonghyuk Kim. "Uncertainty Estimation of Dense Optical Flow for Robust Visual Navigation." Sensors 21, no. 22 (November 16, 2021): 7603. http://dx.doi.org/10.3390/s21227603.

Abstract:
This paper presents a novel dense optical-flow algorithm to solve the monocular simultaneous localisation and mapping (SLAM) problem for ground or aerial robots. Dense optical flow can effectively provide the ego-motion of the vehicle while enabling collision avoidance with the potential obstacles. Existing research has not fully utilised the uncertainty of the optical flow—at most, an isotropic Gaussian density model has been used. We estimate the full uncertainty of the optical flow and propose a new eight-point algorithm based on the statistical Mahalanobis distance. Combined with the pose-graph optimisation, the proposed method demonstrates enhanced robustness and accuracy for the public autonomous car dataset (KITTI) and aerial monocular dataset.
35

Kalisperakis, I., T. Mandilaras, A. El Saer, P. Stamatopoulou, C. Stentoumis, S. Bourou, and L. Grammatikopoulos. "A MODULAR MOBILE MAPPING PLATFORM FOR COMPLEX INDOOR AND OUTDOOR ENVIRONMENTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (August 6, 2020): 243–50. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-243-2020.

Abstract:
In this work we present the development of a prototype mobile mapping platform with modular design and architecture that can be suitably modified to address effectively both outdoor and indoor environments. Our system is built on the Robotics Operation System (ROS) and utilizes multiple sensors to capture images, pointclouds and 3D motion trajectories. These include synchronized cameras with wide angle lenses, a lidar sensor, a GPS/IMU unit and a tracking optical sensor. We report on the individual components of the platform, its architecture, the integration and the calibration of its components, the fusion of all recorded data and provide initial 3D reconstruction results. The processing algorithms are based on existing implementations of SLAM (Simultaneous Localisation and Mapping) methods combined with SfM (Structure-from-Motion) for optimal estimations of orientations and 3D pointclouds. The scope of this work, which is part of an ongoing H2020 program, is to digitize the physical world, collect relevant spatial data and make digital copies available to experts and public for covering a wide range of needs; remote access and viewing, process, design, use in VR etc.
36

Castanheiro, L. F., A. M. G. Tommaselli, M. V. Machado, G. H. Santos, I. S. Norberto, and T. T. Reis. "THE USE OF A WIDE FOV LASER SCANNING SYSTEM AND A SLAM ALGORITHM FOR MOBILE APPLICATIONS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 181–87. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-181-2022.

Abstract:
This paper presents the assessment of a wide-angle laser scanner and a simultaneous localisation and mapping (SLAM) algorithm to estimate the trajectory and generate a 3D map of the environment. A backpack platform composed of an OS0-128 Ouster (FoV 90° × 360°) laser scanner was used to acquire laser data in an area with urban and forest features. Web SLAM, an online SLAM algorithm implemented by Ouster, Inc., was used to estimate the trajectory and generate a 3D map in a local reference system. Then, the 3D point clouds were transformed into the ground coordinate system with a rigid body transformation. Three datasets were used: (I) the entire trajectory consisting of forward and backward paths, (II) only the forward path, and (III) only the backward path. Visual analysis showed a double mapping error in the point cloud of dataset I. Therefore, the point cloud registration was performed only for datasets II and III, achieving a centimetric accuracy, which is compatible with the OS0-128 laser scanner accuracy (∼5 cm).
37

Liu, Haiqiao, Shibin Luo, and Jiazhen Lu. "Correlation scan matching algorithm based on multi-resolution auxiliary historical point cloud and lidar simultaneous localisation and mapping positioning application." IET Image Processing 14, no. 14 (December 1, 2020): 3596–601. http://dx.doi.org/10.1049/iet-ipr.2019.1657.

38

Lai, Tin. "A Review on Visual-SLAM: Advancements from Geometric Modelling to Learning-Based Semantic Scene Understanding Using Multi-Modal Sensor Fusion." Sensors 22, no. 19 (September 25, 2022): 7265. http://dx.doi.org/10.3390/s22197265.

Abstract:
Simultaneous Localisation and Mapping (SLAM) is one of the fundamental problems in autonomous mobile robots where a robot needs to reconstruct a previously unseen environment while simultaneously localising itself with respect to the map. In particular, Visual-SLAM uses various sensors from the mobile robot for collecting and sensing a representation of the map. Traditionally, geometric model-based techniques were used to tackle the SLAM problem, which tends to be error-prone under challenging environments. Recent advancements in computer vision, such as deep learning techniques, have provided a data-driven approach to tackle the Visual-SLAM problem. This review summarises recent advancements in the Visual-SLAM domain using various learning-based methods. We begin by providing a concise overview of the geometric model-based approaches, followed by technical reviews on the current paradigms in SLAM. Then, we present the various learning-based approaches to collecting sensory inputs from mobile robots and performing scene understanding. The current paradigms in deep-learning-based semantic understanding are discussed and placed under the context of Visual-SLAM. Finally, we discuss challenges and further opportunities in the direction of learning-based approaches in Visual-SLAM.
39

Georgiou, Christina, Sean Anderson, and Tony Dodd. "Constructing informative Bayesian map priors: A multi-objective optimisation approach applied to indoor occupancy grid mapping." International Journal of Robotics Research 36, no. 3 (January 26, 2017): 274–91. http://dx.doi.org/10.1177/0278364916687027.

Abstract:
The problem of simultaneous localisation and mapping (SLAM) has been addressed in numerous ways with different approaches aiming to produce faster, more robust solutions that yield consistent maps. This focus, however, has resulted in a number of solutions that perform poorly in challenging real life scenarios. In order to achieve improved performance and map quality this article proposes a novel method to construct informative Bayesian mapping priors through a multi-objective optimisation of prior map design variables defined using a source of prior information. This concept is explored for 2D occupancy grid SLAM, constructing such priors by extracting structural information from architectural drawings and identifying optimised prior values to assign to detected walls and empty space. Using the proposed method a contextual optimised prior can be constructed. This prior is found to yield better quantitative and qualitative performance than the commonly used non-informative prior, yielding an increase of over 20% in the [Formula: see text] metric. This is achieved without adding to the computational complexity of the SLAM algorithm, making it a good fit for time critical real life applications such as search and rescue missions.
40

Sanchez-Rodriguez, Jose-Pablo, and Alejandro Aceves-Lopez. "A survey on stereo vision-based autonomous navigation for multi-rotor MUAVs." Robotica 36, no. 8 (May 6, 2018): 1225–43. http://dx.doi.org/10.1017/s0263574718000358.

Abstract:
This paper presents an overview of the most recent vision-based multi-rotor micro unmanned aerial vehicles (MUAVs) intended for autonomous navigation using a stereoscopic camera. Drone operation is difficult because pilots need the expertise to fly the drones. Pilots have a limited field of view, and unfortunate situations, such as loss of line of sight or collision with objects such as wires and branches, can happen. Autonomous navigation is an even more difficult challenge than remote control navigation because the drones must make decisions on their own in real time and simultaneously build maps of their surroundings if none is available. Moreover, MUAVs are limited in terms of useful payload capability and energy consumption. Therefore, a drone must be equipped with small sensors, and it must carry low weight. In addition, a drone requires a sufficiently powerful onboard computer so that it can understand its surroundings and navigate accordingly to achieve its goal safely. A stereoscopic camera is considered a suitable sensor because of its three-dimensional (3D) capabilities. Hence, a drone can perform vision-based navigation through object recognition and self-localise inside a map if one is available; otherwise, its autonomous navigation creates a simultaneous localisation and mapping problem.
41

Lei, Xu, Bin Feng, Guiping Wang, Weiyu Liu, and Yalin Yang. "A Novel FastSLAM Framework Based on 2D Lidar for Autonomous Mobile Robot." Electronics 9, no. 4 (April 24, 2020): 695. http://dx.doi.org/10.3390/electronics9040695.

Abstract:
The autonomous navigation and environment exploration of mobile robots are carried out on the premise of the ability of environment sensing. Simultaneous localisation and mapping (SLAM) is the key algorithm in perceiving and mapping an environment in real time. FastSLAM has played an increasingly significant role in the SLAM problem. In order to enhance the performance of FastSLAM, a novel framework called IFastSLAM is proposed, based on particle swarm optimisation (PSO). In this framework, an adaptive resampling strategy is proposed that uses the genetic algorithm to increase the diversity of particles, and the principles of fractional differential theory and chaotic optimisation are combined into the algorithm to improve the conventional PSO approach. We observe that the fractional differential approach speeds up the iteration of the algorithm and chaotic optimisation prevents premature convergence. A new idea of a virtual particle is put forward as the global optimisation target for the improved PSO scheme. This approach is more accurate in terms of determining the optimisation target based on the geometric position of the particle, compared to an approach based on the maximum weight value of the particle. The proposed IFastSLAM method is compared with conventional FastSLAM, PSO-FastSLAM, and an adaptive generic FastSLAM algorithm (AGA-FastSLAM). The superiority of IFastSLAM is verified by simulations, experiments with a real-world dataset, and field experiments.
42

Wang, Weiqi, Xiong You, Xin Zhang, Lingyu Chen, Lantian Zhang, and Xu Liu. "LiDAR-Based SLAM under Semantic Constraints in Dynamic Environments." Remote Sensing 13, no. 18 (September 13, 2021): 3651. http://dx.doi.org/10.3390/rs13183651.

Abstract:
Facing the realistic demands of the application environment of robots, the application of simultaneous localisation and mapping (SLAM) has gradually moved from static environments to complex dynamic environments, while traditional SLAM methods usually result in pose estimation deviations caused by errors in data association due to the interference of dynamic elements in the environment. This problem is effectively solved in the present study by proposing a SLAM approach based on light detection and ranging (LiDAR) under semantic constraints in dynamic environments. Four main modules are used for the projection of point cloud data, semantic segmentation, dynamic element screening, and semantic map construction. A LiDAR point cloud semantic segmentation network SANet based on a spatial attention mechanism is proposed, which significantly improves the real-time performance and accuracy of point cloud semantic segmentation. A dynamic element selection algorithm is designed and used with prior knowledge to significantly reduce the pose estimation deviations caused by SLAM dynamic elements. The results of experiments conducted on the public datasets SemanticKITTI, KITTI, and SemanticPOSS show that the accuracy and robustness of the proposed approach are significantly improved.
43

Hoth, Julian, and Wojciech Kowalczyk. "Determination of Flow Parameters of a Water Flow Around an AUV Body." Robotics 8, no. 1 (January 28, 2019): 5. http://dx.doi.org/10.3390/robotics8010005.

Abstract:
Autonomous underwater vehicles (AUVs) have changed the way marine environment is surveyed, monitored and mapped. Autonomous underwater vehicles have a wide range of applications in research, military, and commercial settings. AUVs not only perform a given task but also adapt to changes in the environment, e.g., sudden side currents, downdrafts, and other effects which are extremely unpredictable. To navigate properly and allow simultaneous localisation and mapping (SLAM) algorithms to be used, these effects need to be detected. With current navigation systems, these disturbances in the water flow are not measured directly. Only the indirect effects are observed. It is proposed to detect the disturbances directly by placing pressure sensors on the surface of the AUV and processing the pressure data obtained. Within this study, the applicability of different learning methods for determining flow parameters of a surrounding fluid from pressure on an AUV body are tested. This is based on CFD simulations using pressure data from specified points on the surface of the AUV. It is shown that support vector machines are most suitable for the given task and yield excellent results.
44

Bogue, Robert. "Bioinspired designs impart robots with unique capabilities." Industrial Robot: the international journal of robotics research and application 46, no. 5 (August 19, 2019): 561–67. http://dx.doi.org/10.1108/ir-05-2019-0100.

Abstract:
Purpose: This paper aims to provide an insight into robot developments that use bioinspired design concepts. Design/methodology/approach: Following a short introduction to biomimetics, this paper first provides examples of bioinspired terrestrial, aerial and underwater robot navigation techniques. It then discusses bioinspired locomotion and considers a selection of robotic products and developments inspired by snakes, bats, diving birds, fish and dragonflies. Finally, brief concluding comments are drawn. Findings: The application of design concepts that mimic the capabilities and processes found in living creatures can impart robots with unique abilities. Bioinspired techniques used by insects and other organisms, notably optic flow and sunlight polarisation sensing, allow robots to navigate without the need for methods such as simultaneous localisation and mapping, GPS or inertial measurement units. Bioinspired locomotion techniques have yielded robots capable of operating in water, air and on land and in some cases, making the transition between different media. Originality/value: This shows how bioinspired design concepts can impart robots with innovative and enhanced navigation and locomotion capabilities.
45

Karam, S., M. Peter, S. Hosseinyalamdary, and G. Vosselman. "AN EVALUATION PIPELINE FOR INDOOR LASER SCANNING POINT CLOUDS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-1 (September 26, 2018): 85–92. http://dx.doi.org/10.5194/isprs-annals-iv-1-85-2018.

Abstract:
The necessity for the modelling of building interiors has encouraged researchers in recent years to focus on improving the capturing and modelling techniques for such environments. State-of-the-art indoor mobile mapping systems use a combination of laser scanners and/or cameras mounted on movable platforms and allow for capturing 3D data of buildings’ interiors. As GNSS positioning does not work inside buildings, the extensively investigated Simultaneous Localisation and Mapping (SLAM) algorithms seem to offer a suitable solution for the problem. Because of the dead-reckoning nature of SLAM approaches, their results usually suffer from registration errors. Therefore, indoor data acquisition has remained a challenge and the accuracy of the captured data has to be analysed and investigated. In this paper, we propose to use architectural constraints to partly evaluate the quality of the acquired point cloud in the absence of any ground truth model. The internal consistency of walls is utilized to check the accuracy and correctness of indoor models. In addition, we use a floor plan (if available) as an external information source to check the quality of the generated indoor model. The proposed evaluation method provides an overall impression of the reconstruction accuracy. Our results show that perpendicularity, parallelism, and thickness of walls are important cues in buildings and can be used for an internal consistency check.
46

Zheng, Bo, Zexu Zhang, Jing Wang, Feng Chen, and Xiangquan Wei. "Body-fixed SLAM with Local Submaps for Planetary Rover." Journal of Navigation 73, no. 1 (June 26, 2019): 149–71. http://dx.doi.org/10.1017/s0373463319000560.

Abstract:
In traditional Simultaneous Localisation and Mapping (SLAM) algorithms based on Extended Kalman Filtering (EKF-SLAM), the uncertainty of state estimation will increase rapidly with the development of the exploration process and the increase of map area. Likewise, the computational complexity of the EKF-SLAM is proportional to the square of the number of feature points contained in the state variables in a single filtering process. A new SLAM algorithm combining the local submaps and the body-fixed coordinates of the rover is presented in this paper. The algorithm can reduce the computational complexity and enhance computational speed in consideration of the processing capability of the onboard computer. Due to the introduction of local submaps, the algorithm represented in this paper is able to reduce the number of feature points contained in the state variables in each single filtering process. Therefore, the algorithm could reduce the computational complexity and improve the computational speed. In addition, rover body-fixed SLAM could improve the navigation accuracy of a rover and decrease the cumulative linearization error by coordinates transformation during the update process, which is shown in the simulation results.
47

Zheng Siong, Chua, Mohd Faisal Ibrahim, Aqilah Baseri Huddin, Mohd Hairi Mohd Zaman, and Fazida Hanim Hashim. "A Combinatorial RGB and Depth Images CNN-based Model for Oil Palm Fruit Bunch Detection and Heatmap Localisation for a Visual SLAM System." Jurnal Kejuruteraan 33, no. 4 (November 30, 2021): 1113–21. http://dx.doi.org/10.17576/jkukm-2021-33(4)-34.

Abstract:
The harvesting job of cutting and collecting fruit bunches in oil palm plantations remains the most labour-intensive job in the oil palm processing cycle. The introduction of an autonomous vehicle to assist workers in the harvesting job promises better productivity. Such a driverless vehicle requires a software module known as simultaneous localisation and mapping (SLAM) to guide the vehicle to navigate autonomously. This work proposes a visual SLAM system with a distinctive capability of detecting and localising oil palm loose fresh fruit bunches (FFB) on the ground using intelligent image processing. This vehicle is equipped with a depth camera capable of capturing RGB images and depth images concurrently. Two VGG16-based convolutional neural network (CNN) models are trained using the acquired RGB and depth images dataset of loose FFBs on the ground. The output from the combinatorial FFB detection model is then fed into a visual SLAM system called RTAB-Map. By combining the FFB detection model and the visual SLAM system, the vehicle can plan for autonomous navigation safely, perform bunch pick-up tasks, and avoid collision with fruit bunches on the ground. The experiment results show that the proposed CNN model can detect and localise loose FFBs with significant accuracy in various lighting conditions.
48

Wasala, Mateusz, Hubert Szolc, and Tomasz Kryjak. "An Efficient Real-Time FPGA-Based ORB Feature Extraction for an UHD Video Stream for Embedded Visual SLAM." Electronics 11, no. 14 (July 20, 2022): 2259. http://dx.doi.org/10.3390/electronics11142259.

Abstract:
The detection and description of feature points are important components of many computer vision systems. For example, in the field of autonomous unmanned aerial vehicles (UAV), these methods form the basis of so-called Visual Odometry (VO) and Simultaneous Localisation and Mapping (SLAM) algorithms. In this paper, we present a hardware feature points detection system able to process a 4K video stream in real-time. We use the ORB algorithm—Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features)—to detect and describe feature points in the images. We make numerous modifications to the original ORB algorithm (among others, we use the RS-BRIEF instead of classic R-BRIEF) to adapt it to the high video resolution, make it computationally efficient, reduce the resource utilisation and achieve lower power consumption. Our hardware implementation supports a 4 ppc (pixels per clock) format (with simple adaptation to 2 ppc, 8 ppc, and more) and real-time processing of a 4K video stream (UHD—Ultra High Definition, 3840×2160 pixels) @ 60 frames per second (150 MHz clock). We verify our system using simulations in the Vivado IDE and implement it in hardware on the ZCU 104 evaluation board with the AMD Xilinx Zynq UltraScale+ MPSoC device. The proposed design consumes only 5 watts.
49

Pérez-Martín, Enrique, Serafín López-Cuervo Medina, Tomás Herrero-Tejedor, Miguel Angel Pérez-Souza, Julian Aguirre de Mata, and Alejandra Ezquerra-Canalejo. "Assessment of Tree Diameter Estimation Methods from Mobile Laser Scanning in a Historic Garden." Forests 12, no. 8 (July 30, 2021): 1013. http://dx.doi.org/10.3390/f12081013.

Abstract:
Geo-referenced 3D models are currently in demand as an initial knowledge base for cultural heritage projects and forest inventories. The mobile laser scanning (MLS) used for geo-referenced 3D models offers ever greater efficiency in the acquisition of 3D data and their subsequent application in the fields of forestry. In this study, we have analysed the performance of an MLS with simultaneous localisation and mapping technology (SLAM) for compiling a tree inventory in a historic garden, and we assessed the accuracy of the estimates of diameter at breast height (DBH, a height of 1.30 m) calculated from three fitting algorithms: RANSAC, Monte Carlo, and Optimal Circle. The reference sample used was 378 trees from the Island Garden, a historic garden and UNESCO World Heritage site in Aranjuez, Spain. The time taken to acquire the data by MLS was 27 min 37 s, in an area of 2.38 ha. The best results were obtained with the Monte Carlo fitting algorithm, which was able to estimate the DBH of 77% of the 378 trees in the study, with a root mean squared error (RMSE) of 5.31 cm and a bias of 1.23 cm. The proposed methodology enabled a supervised detection of the trees and automatically estimated the DBH of most trees in the study, making this a useful tool for the management and conservation of a historic garden.
50

McGlade, James, Luke Wallace, Karin Reinke, and Simon Jones. "The Potential of Low-Cost 3D Imaging Technologies for Forestry Applications: Setting a Research Agenda for Low-Cost Remote Sensing Inventory Tasks." Forests 13, no. 2 (January 28, 2022): 204. http://dx.doi.org/10.3390/f13020204.

Abstract:
Limitations with benchmark light detection and ranging (LiDAR) technologies in forestry have prompted the exploration of handheld or wearable low-cost 3D sensors (<2000 USD). These sensors are now being integrated into consumer devices, such as the Apple iPad Pro 2020. This study was aimed at determining future research recommendations to promote the adoption of terrestrial low-cost technologies within forest measurement tasks. We reviewed the current literature surrounding the application of low-cost 3D remote sensing (RS) technologies. We also surveyed forestry professionals to determine what inventory metrics were considered important and/or difficult to capture using conventional methods. The current research focus regarding inventory metrics captured by low-cost sensors aligns with the metrics identified as important by survey respondents. Based on the literature review and survey, a suite of research directions are proposed to democratise the access to and development of low-cost 3D for forestry: (1) the development of methods for integrating standalone colour and depth (RGB-D) sensors into handheld or wearable devices; (2) the development of a sensor-agnostic method for determining the optimal capture procedures with low-cost RS technologies in forestry settings; (3) the development of simultaneous localisation and mapping (SLAM) algorithms designed for forestry environments; and (4) the exploration of plot-scale forestry captures that utilise low-cost devices at both terrestrial and airborne scales.