Academic literature on the topic 'Slam LiDAR'


Journal articles on the topic "Slam LiDAR"

1. Jie, Lu, Zhi Jin, Jinping Wang, Letian Zhang, and Xiaojun Tan. "A SLAM System with Direct Velocity Estimation for Mechanical and Solid-State LiDARs." Remote Sensing 14, no. 7 (April 4, 2022): 1741. http://dx.doi.org/10.3390/rs14071741.

Abstract:
Simultaneous localization and mapping (SLAM) is essential for intelligent robots operating in unknown environments. However, existing algorithms are typically developed for specific types of solid-state LiDARs, leading to weak feature representation abilities for new sensors. Moreover, LiDAR-based SLAM methods are limited by distortions caused by LiDAR ego motion. To address the above issues, this paper presents a versatile and velocity-aware LiDAR-based odometry and mapping (VLOM) system. A spherical projection-based feature extraction module is utilized to process the raw point cloud generated by various LiDARs, hence avoiding the time-consuming adaptation of various irregular scan patterns. The extracted features are grouped into higher-level clusters to filter out smaller objects and reduce false matching during feature association. Furthermore, bundle adjustment is adopted to jointly estimate the poses and velocities for multiple scans, effectively improving the velocity estimation accuracy and compensating for point cloud distortions. Experiments on publicly available datasets demonstrate the superiority of VLOM over other state-of-the-art LiDAR-based SLAM systems in terms of accuracy and robustness. Additionally, the satisfactory performance of VLOM on RS-LiDAR-M1, a newly released solid-state LiDAR, shows its applicability to a wide range of LiDARs.

2. Sier, Ha, Qingqing Li, Xianjia Yu, Jorge Peña Queralta, Zhuo Zou, and Tomi Westerlund. "A Benchmark for Multi-Modal LiDAR SLAM with Ground Truth in GNSS-Denied Environments." Remote Sensing 15, no. 13 (June 28, 2023): 3314. http://dx.doi.org/10.3390/rs15133314.

Abstract:
LiDAR-based simultaneous localization and mapping (SLAM) approaches have obtained considerable success in autonomous robotic systems, owing in part to the high accuracy of robust SLAM algorithms and the emergence of new, lower-cost LiDAR products. This study benchmarks current state-of-the-art LiDAR SLAM algorithms with a multi-modal LiDAR sensor setup mounted on a mobile sensing and computing platform, showcasing diverse scanning modalities (spinning and solid-state), sensing technologies, and LiDAR cameras. We extend our previous multi-modal multi-LiDAR dataset with additional sequences and new sources of ground truth data. Specifically, we propose a new multi-modal multi-LiDAR, SLAM-assisted, ICP-based sensor fusion method for generating ground truth maps. With these maps, we then match real-time point cloud data using a normal distributions transform (NDT) method to obtain ground truth with full six-degrees-of-freedom (DOF) pose estimation. These novel ground truth data leverage high-resolution spinning and solid-state LiDARs. We also include new open-road sequences with GNSS-RTK data and additional indoor sequences with motion capture (MOCAP) ground truth, complementing the previous forest sequences with MOCAP data. We analyze the positioning accuracy achieved across ten unique configurations, generated by pairing five distinct LiDAR sensors with five SLAM algorithms, to critically compare and assess their respective performance characteristics. We also report resource utilization on four different computational platforms and a total of five settings (Intel and Jetson ARM CPUs). Our experimental results show that current state-of-the-art LiDAR SLAM algorithms perform very differently for different types of sensors. More results, code, and the dataset can be found on GitHub.
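The NDT matching step mentioned in this abstract, scoring how well a scan fits a Gaussian-per-voxel model of a reference map, can be sketched generically. This is an illustrative outline, not the authors' implementation; the function names, voxel size, and minimum-point count are assumptions:

```python
import numpy as np

def ndt_build(ref_points, voxel_size):
    """Summarize a reference cloud as per-voxel Gaussians (mean, covariance)."""
    ref = np.asarray(ref_points, dtype=float)
    keys = np.floor(ref / voxel_size).astype(np.int64)
    cells = {}
    for key, p in zip(map(tuple, keys), ref):
        cells.setdefault(key, []).append(p)
    stats = {}
    for key, pts in cells.items():
        pts = np.array(pts)
        if len(pts) >= 5:  # need enough points for a stable covariance
            stats[key] = (pts.mean(axis=0), np.cov(pts.T) + 1e-6 * np.eye(3))
    return stats

def ndt_score(query_points, stats, voxel_size):
    """Sum of Gaussian log-likelihood terms for query points falling into
    modelled voxels; better alignment yields a higher (less negative) score."""
    score = 0.0
    for p in np.asarray(query_points, dtype=float):
        key = tuple(np.floor(p / voxel_size).astype(np.int64))
        if key in stats:
            mu, cov = stats[key]
            d = p - mu
            score += -0.5 * d @ np.linalg.solve(cov, d)
    return score
```

A full NDT registration would then search over 6-DOF poses for the transform that maximizes this score.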

3. Zhao, Yu-Lin, Yi-Tian Hong, and Han-Pang Huang. "Comprehensive Performance Evaluation between Visual SLAM and LiDAR SLAM for Mobile Robots: Theories and Experiments." Applied Sciences 14, no. 9 (May 6, 2024): 3945. http://dx.doi.org/10.3390/app14093945.

Abstract:
SLAM (Simultaneous Localization and Mapping), primarily relying on camera or LiDAR (Light Detection and Ranging) sensors, plays a crucial role in robotics for localization and environmental reconstruction. This paper assesses the performance of two leading methods, namely ORB-SLAM3 and SC-LeGO-LOAM, focusing on localization and mapping in both indoor and outdoor environments. The evaluation employs artificial and cost-effective datasets incorporating data from a 3D LiDAR and an RGB-D (color and depth) camera, and introduces a practical approach for calculating ground-truth trajectories; during benchmarking, reconstruction maps based on ground truth are established. To assess performance, absolute trajectory error (ATE) and relative pose error (RPE) are utilized to evaluate localization accuracy, and standard deviation is employed to compare the stability of the localization process across methods. While both algorithms exhibit satisfactory positioning accuracy, their performance is suboptimal in scenarios with inadequate textures. Furthermore, 3D reconstruction maps established by the two approaches are provided for direct observation of their differences and the limitations encountered during map construction. The research also includes a comprehensive comparison of computational performance metrics, encompassing Central Processing Unit (CPU) utilization and memory usage, with an in-depth analysis. This evaluation revealed that Visual SLAM requires more CPU resources than LiDAR SLAM, primarily due to additional data storage requirements, emphasizing the impact of environmental factors on resource requirements. In conclusion, LiDAR SLAM is more suitable outdoors due to its comprehensive nature, while Visual SLAM excels indoors, compensating for sparse aspects of LiDAR SLAM. To facilitate further research, a technical guide is also provided for researchers in related fields.
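The ATE metric used in evaluations like this one is commonly computed as the root-mean-square translational error after rigidly aligning the estimated trajectory to the ground truth with the closed-form Horn/Umeyama method. A minimal sketch of that computation (function name illustrative):

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of translational error after aligning 'est' to 'gt'
    with a closed-form SVD (Kabsch) rigid alignment."""
    gt = np.asarray(gt, dtype=float)    # (N, 3) ground-truth positions
    est = np.asarray(est, dtype=float)  # (N, 3) estimated positions
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    # Cross-covariance between the centred point sets
    H = (est - mu_est).T @ (gt - mu_gt)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # rotation mapping est onto gt
    t = mu_gt - R @ mu_est
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))
```

RPE is computed analogously, but over relative transforms between pose pairs a fixed distance or time apart, which isolates drift from accumulated global error.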

4. Chen, Shoubin, Baoding Zhou, Changhui Jiang, Weixing Xue, and Qingquan Li. "A LiDAR/Visual SLAM Backend with Loop Closure Detection and Graph Optimization." Remote Sensing 13, no. 14 (July 10, 2021): 2720. http://dx.doi.org/10.3390/rs13142720.

Abstract:
LiDAR (light detection and ranging), as an active sensor, is investigated in the simultaneous localization and mapping (SLAM) system. Typically, a LiDAR SLAM system consists of front-end odometry and back-end optimization modules. Loop closure detection and pose graph optimization are the key factors determining the performance of a LiDAR SLAM system. However, LiDAR works at a single wavelength (905 nm) and extracts few textures or visual features, which restricts the performance of loop closure detection and graph optimization based on point cloud matching. With the aim of improving LiDAR SLAM performance, this paper proposes a LiDAR and visual SLAM backend that utilizes both LiDAR geometry features and visual features to accomplish loop closure detection. Firstly, a bag-of-words (BoW) model describing visual similarities is constructed to assist loop closure detection; secondly, point cloud re-matching is conducted to verify the detected loop closures and carry out graph optimization. Experiments with different datasets were conducted to assess the proposed method, and the results demonstrate that the inclusion of visual features effectively helps loop closure detection and improves LiDAR SLAM performance. In addition, the source code is available from the corresponding author on request.
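The bag-of-words loop closure check described above, comparing histograms of quantized visual features between the current frame and past keyframes, can be sketched generically (integer word IDs stand in for quantized descriptors; the names and threshold are illustrative, not this paper's code):

```python
import numpy as np

def bow_similarity(words_a, words_b, vocab_size):
    """Cosine similarity between the visual-word histograms of two frames."""
    ha = np.bincount(words_a, minlength=vocab_size).astype(float)
    hb = np.bincount(words_b, minlength=vocab_size).astype(float)
    denom = np.linalg.norm(ha) * np.linalg.norm(hb)
    return float(ha @ hb / denom) if denom > 0 else 0.0

def detect_loop_candidates(query_words, keyframes, vocab_size, threshold=0.6):
    """Indices of keyframes whose BoW similarity to the query exceeds threshold."""
    return [i for i, kf in enumerate(keyframes)
            if bow_similarity(query_words, kf, vocab_size) >= threshold]
```

In this paper's pipeline, each candidate returned by such a check would then be verified geometrically by point cloud re-matching before a loop edge is added to the pose graph.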

5. Peng, Gang, Yicheng Zhou, Lu Hu, Li Xiao, Zhigang Sun, Zhangang Wu, and Xukang Zhu. "VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR." Sensors 23, no. 10 (May 9, 2023): 4588. http://dx.doi.org/10.3390/s23104588.

Abstract:
Existing visual–inertial SLAM algorithms suffer from low accuracy and poor robustness when the robot moves at a constant speed or purely rotates, or when it encounters scenes with insufficient visual features. To address these problems, a tightly coupled vision–IMU–2D lidar odometry (VILO) algorithm is proposed. Firstly, low-cost 2D lidar observations and visual–inertial observations are fused in a tightly coupled manner. Secondly, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variable to be estimated, and the residual constraint equation of the vision–IMU–2D lidar system is constructed. Thirdly, a nonlinear solver is used to obtain the optimal robot pose, solving the problem of fusing 2D lidar observations with visual–inertial information in a tightly coupled manner. The results show that the algorithm retains reliable pose-estimation accuracy and robustness in many special environments, and that the position error and yaw angle error are greatly reduced. Our research improves the accuracy and robustness of multi-sensor fusion SLAM algorithms.

6. Dang, Xiangwei, Zheng Rong, and Xingdong Liang. "Sensor Fusion-Based Approach to Eliminating Moving Objects for SLAM in Dynamic Environments." Sensors 21, no. 1 (January 1, 2021): 230. http://dx.doi.org/10.3390/s21010230.

Abstract:
Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, a novel approach named EMO is proposed for eliminating moving objects in SLAM by fusing LiDAR and mmW-radar, with the goal of improving the accuracy and robustness of state estimation. The method fully exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by the radar via the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association, accurately synchronized in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM accuracy (a decrease of at least 30% in absolute position error) and robustness in dynamic environments.
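At its core, the radar-assisted rejection described in this abstract thresholds the per-point radial velocity measured via the Doppler effect and keeps only quasi-static returns as SLAM input. A deliberately simplified sketch (the threshold and names are assumptions; the paper's actual pipeline additionally performs segmentation, data association, and time synchronization):

```python
import numpy as np

def keep_static_points(points, radial_speeds, v_thresh=0.5):
    """Drop points whose radar-measured radial speed exceeds v_thresh (m/s),
    approximating the static-environment point cloud that SLAM expects."""
    points = np.asarray(points, dtype=float)
    radial_speeds = np.asarray(radial_speeds, dtype=float)
    return points[np.abs(radial_speeds) < v_thresh]
```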

7. Debeunne, César, and Damien Vivet. "A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping." Sensors 20, no. 7 (April 7, 2020): 2068. http://dx.doi.org/10.3390/s20072068.

Abstract:
Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a well-suited solution. SLAM is used for many applications, including mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to have remained largely the same as ten or twenty years ago. Moreover, few research works focus on vision-LiDAR approaches, even though such a fusion would have many advantages. Indeed, hybridized solutions improve SLAM performance, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey on visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state of the art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities.

8. Xu, Xiaobin, Lei Zhang, Jian Yang, Chenfei Cao, Wen Wang, Yingying Ran, Zhiying Tan, and Minzhou Luo. "A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR." Remote Sensing 14, no. 12 (June 13, 2022): 2835. http://dx.doi.org/10.3390/rs14122835.

Abstract:
The ability of intelligent unmanned platforms to achieve autonomous navigation and positioning in large-scale environments is increasingly in demand, and LIDAR-based Simultaneous Localization and Mapping (SLAM) is the mainstream research approach. However, LIDAR-based SLAM systems degenerate in extreme environments with high dynamics or sparse features, degrading localization and mapping quality. In recent years, a large number of LIDAR-based multi-sensor fusion SLAM works have emerged in pursuit of more stable and robust systems. This work highlights the development process of LIDAR-based multi-sensor fusion SLAM and the latest research. After summarizing the basic idea of SLAM and the necessity of multi-sensor fusion, this paper introduces the basic principles and recent work of multi-sensor fusion in detail from four aspects, based on the types of fused sensors and the data coupling methods. Meanwhile, we review some SLAM datasets and compare the performance of five open-source algorithms on the UrbanNav dataset. Finally, the development trends and popular research directions of SLAM based on 3D LIDAR multi-sensor fusion are discussed and summarized.

9. Bu, Zean, Changku Sun, and Peng Wang. "Semantic Lidar-Inertial SLAM for Dynamic Scenes." Applied Sciences 12, no. 20 (October 18, 2022): 10497. http://dx.doi.org/10.3390/app122010497.

Abstract:
Over the past few years, many impressive lidar-inertial SLAM systems have been developed and perform well under static scenes. However, most real-life tasks take place in dynamic environments, where improving accuracy and robustness poses a challenge. In this paper, we propose a semantic lidar-inertial SLAM approach for dynamic scenes that combines a point cloud semantic segmentation network with the LIO-mapping lidar-inertial SLAM system. We add an attention mechanism to the PointConv network, building an attention weight function to improve the capacity to predict details. The semantic segmentation of the lidar point clouds provides point-wise labels for each lidar frame. After filtering out the dynamic objects, the refined global map of the lidar-inertial SLAM system is clearer, and the estimated trajectory achieves higher precision. We conduct experiments on the UrbanNav dataset, whose challenging highway sequences contain large numbers of moving cars and pedestrians. The results demonstrate that, compared with other SLAM systems, the trajectory accuracy is improved to varying degrees.

10. Abdelhafid, El Farnane, Youssefi My Abdelkader, Mouhsen Ahmed, Dakir Rachid, and El Ihyaoui Abdelilah. "Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6284. http://dx.doi.org/10.11591/ijece.v12i6.pp6284-6292.

Abstract:
In recent years, there has been strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has limitations, such as satellite signal absence (tunnels and caves), which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle’s position while at the same time constructing a representation of the environment. Visual and light detection and ranging (LIDAR) based SLAM refers to using cameras and LIDAR as sources of external information. This paper presents an implementation of a SLAM algorithm for building a map of the environment and obtaining the car’s trajectory using LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches is also provided and discussed. Simulation results on LIDAR scans indicate that SLAM is convenient and helpful for localization and mapping.

Dissertations / Theses on the topic "Slam LiDAR"

1. Nava, Chocron Yoshua. "Visual-LiDAR SLAM with loop closure." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-265532.

Abstract:
State-of-the-art LIDAR odometry techniques are exceptionally precise. However, while they solve the localization problem, they perform mapping on the run, unable to close loops or re-localize in previously visited environments. This study concerns the development of a system that combines an accurate laser odometry estimator with place recognition algorithms in order to detect trajectory loops. The project uses widely available datasets from urban driving scenarios and outdoor areas for development and evaluation of the system. The results confirm that loop closure detection can significantly improve the accuracy and robustness of laser SLAM pipelines, with detectors based on point cloud segments and visual features displaying very strong performance during the evaluation phase.

2. Contreras, Samamé Luis Federico. "SLAM collaboratif dans des environnements extérieurs." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0012/document.

Abstract:
This thesis proposes a large-scale mapping model of urban and rural environments using 3D data acquired by several robots. The work contributes to the research field of mapping in two main ways. The first contribution is the creation of a new framework, CoMapping, which makes it possible to generate 3D maps cooperatively. This framework applies to outdoor environments with a decentralized approach. CoMapping's functionality includes the following elements: first, each robot builds a map of its environment in point cloud format. To do this, the mapping system was set up on computers dedicated to each vehicle, processing distance measurements from a 3D LiDAR moving in six degrees of freedom (6-DOF). Then, the robots share their local maps and individually merge the point clouds to improve their local map estimates. The second key contribution is a group of metrics for analyzing the map-merging and map-sharing processes between robots. We present experimental results validating the CoMapping framework and its metrics. All tests were carried out in urban outdoor environments on the campus of the École Centrale de Nantes as well as in rural areas.

3. Dellenbach, Pierre. "Exploring LiDAR Odometries through Classical, Deep and Inertial perspectives." Electronic Thesis or Diss., Université Paris sciences et lettres, 2023. http://www.theses.fr/2023UPSLM069.

Abstract:
3D LiDARs have become increasingly popular in the past decade, motivated notably by the safety requirements of autonomous driving, which call for new sensor modalities. Contrary to cameras, 3D LiDARs provide direct and extremely precise 3D measurements of the environment. This has led to the development of many mapping and Simultaneous Localization And Mapping (SLAM) solutions leveraging this new modality, and these algorithms quickly performed much better than their camera-based counterparts, as evidenced by several open-source benchmarks. One critical component of these systems is LiDAR odometry: an algorithm estimating the trajectory of the sensor given only the iterative integration of the LiDAR measurements. The focus of this work is on LiDAR odometries; more precisely, we aim to push their boundaries in terms of both precision and performance. To achieve this, we first explore classical LiDAR odometries in depth and propose two novel LiDAR odometries in chapter 3, showing the strengths and limitations of such methods. To address these limitations, we then investigate deep learning for LiDAR odometries in chapter 4, focusing notably on end-to-end odometries, and again show the limitations of such approaches. Finally, in chapter 5, we investigate fusing inertial and LiDAR measurements in pursuit of further precision and robustness.

4. Bruns, Christian. "Lidar-based Vehicle Localization in an Autonomous Valet Parking Scenario." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461236677.

5. Ekström, Joakim. "3D Imaging Using Photon Counting Lidar on a Moving Platform." Thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153297.

Abstract:
The problem of constructing high-quality point clouds from the measurements of a moving and rotating single-photon counting lidar is considered in this report. The movement is along a straight rail while the lidar sensor rotates from side to side. The point clouds are constructed in three steps, all studied in this master’s thesis. First, point clouds are constructed from raw lidar measurements from single sweeps of the lidar. Second, the sensor transformations between the point clouds constructed in the first step are obtained in a registration step using iterative closest point (ICP). Third, the point clouds are combined into a coherent point cloud using the full measurement. A method using simultaneous localization and mapping (SLAM) is developed for the third step. It is then compared with two other methods: constructing the final point cloud using registration alone, and utilizing odometric information in the combination step. It is also investigated which voxel discretization should be used when extracting the point clouds. The methods developed are evaluated using experimental data from a prototype photon counting lidar system. The results show that the voxel discretization needs to be at least as large as the range quantization of the lidar. No significant difference between using registration and SLAM in the third step is observed, but both methods outperform the odometric method.
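The voxel discretization studied in this thesis corresponds to the standard voxel-grid reduction of a point cloud, where each occupied voxel is replaced by the centroid of its points. A minimal sketch of that reduction (illustrative, not the thesis code):

```python
import numpy as np

def voxel_grid(points, voxel_size):
    """Collapse a point cloud onto a voxel grid, returning one centroid
    per occupied voxel (the order of centroids is unspecified)."""
    points = np.asarray(points, dtype=float)
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((len(counts), points.shape[1]))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

Choosing `voxel_size` at least as large as the lidar's range quantization, as the thesis concludes, prevents the quantization steps from surviving as artificial structure in the reduced cloud.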

6. Zhang, Erik. "Integration of IMU and Velodyne LiDAR sensor in an ICP-SLAM framework." Thesis, KTH, Optimeringslära och systemteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193653.

Abstract:
Simultaneous localization and mapping (SLAM) in an unknown environment is a critical step for many autonomous processes. For this work, we propose a solution that does not rely on storing descriptors of the environment and performing descriptor filtering. Unlike most SLAM-based methods, this work operates on general sparse point clouds, with the underlying generalized ICP (GICP) algorithm used for point cloud registration. This thesis presents a modified GICP method and investigates how, and whether, an IMU can assist the SLAM process through different ways of integrating the IMU measurements. All the data in this thesis were sampled from a LiDAR scanner mounted on top of a UAV, a car, or a backpack. The suggested modification of GICP has been shown to improve robustness in a forest environment. For urban measurements, the results indicate that the IMU contributes by reducing the overall angular drift, which in the long run contributes most to the loop closure error.
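The ICP registration at the heart of this framework alternates nearest-neighbour association with a closed-form rigid-transform solve; GICP extends it with per-point covariance models. A minimal point-to-point sketch (illustrative names, brute-force matching), not the thesis implementation:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Estimate the rigid transform (R, t) aligning 'src' onto 'dst' by
    alternating nearest-neighbour association with a closed-form SVD solve."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbours; a real system would use a k-d tree.
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        mu_m, mu_d = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_m).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T          # incremental rotation
        dt = mu_d - dR @ mu_m        # incremental translation
        R, t = dR @ R, dR @ t + dt
    return R, t
```

An IMU, as investigated in the thesis, can seed the initial `R, t` guess, which keeps the nearest-neighbour associations valid under larger inter-scan motion.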

7. Gonzalez, Cadenillas Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.

Abstract:
Tesis para optar al grado de Magíster en Ciencias de la Ingeniería, Mención Eléctrica
Feature extraction is a critical task in feature-based Simultaneous Localization and Mapping (SLAM), which is one of the most important problems in the robotics community. One algorithm that solves SLAM using LiDAR-based features is the LiDAR Odometry and Mapping (LOAM) algorithm, currently ranked as the best SLAM algorithm on the KITTI benchmark. LOAM solves the SLAM problem through a feature-matching approach, and its feature extractor classifies the points of a point cloud as planar or sharp. This classification results from an equation that defines a smoothness level for each point. However, this equation does not consider the range noise of the sensor. Therefore, if the LiDAR's range noise is high, LOAM's feature extractor may confuse planar and sharp points, causing the feature-matching task to fail. This thesis proposes replacing the feature extraction algorithm of the original LOAM with the Curvature Scale Space (CSS) algorithm. This algorithm was chosen after studying several feature extractors in the literature. The CSS algorithm can potentially improve feature extraction in noisy environments thanks to its multiple levels of Gaussian smoothing. The replacement of LOAM's original feature extractor with the CSS algorithm was achieved by adapting CSS to the Velodyne VLP-16 3D LiDAR. The LOAM and CSS feature extractors were tested and compared on real and simulated data, including the KITTI dataset, using the Optimal Sub-Pattern Assignment (OSPA) and Absolute Trajectory Error (ATE) metrics.
For all these datasets, the feature extraction performance of CSS was better than that of the LOAM algorithm in terms of the OSPA and ATE metrics.
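The smoothness equation the abstract refers to can be illustrated with a small sketch. This is a toy reimplementation under assumed conventions, not the thesis code: each point on a scan line receives a score from the sum of difference vectors to its neighbours, and low scores mark planar points while high scores mark sharp ones.

```python
import numpy as np

def loam_smoothness(scan_line, i, half_window=5):
    """LOAM-style smoothness of point i on one scan line:
    c = || sum_{j != i} (X_i - X_j) || / (|S| * ||X_i||)
    over a window S around i. Low c -> planar, high c -> sharp."""
    n = len(scan_line)
    lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
    xi = scan_line[i]
    diff = sum(xi - scan_line[j] for j in range(lo, hi) if j != i)
    return np.linalg.norm(diff) / ((hi - lo - 1) * np.linalg.norm(xi))

# Straight wall segment: symmetric neighbours cancel -> c near zero.
wall = np.array([[x, 2.0, 0.0] for x in np.linspace(1.0, 2.0, 11)])
c_wall = loam_smoothness(wall, 5)

# Two walls meeting at the middle point (a corner) -> larger c.
corner = np.array([[x, 2.0, 0.0] for x in np.linspace(1.0, 2.0, 6)]
                  + [[2.0, y, 0.0] for y in np.linspace(2.2, 3.0, 5)])
c_corner = loam_smoothness(corner, 5)
```

Range noise perturbs each `X_j`, which directly perturbs `c` and can push planar points past the sharp threshold, which is the failure mode the thesis addresses.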
APA, Harvard, Vancouver, ISO, and other styles
8

Paiva Mendes, Ellon. "Study on the Use of Vision and Laser Range Sensors with Graphical Models for the SLAM Problem." Thesis, Toulouse, INSA, 2017. http://www.theses.fr/2017ISAT0016/document.

Full text
Abstract:
A strong requirement to deploy autonomous mobile robots is their capacity to localize themselves with a certain precision in relation to their environment. Localization exploits data gathered by sensors that either observe the inner states of the robot, like acceleration and speed, or the environment, like cameras and Light Detection And Ranging (LIDAR) sensors. The use of environment sensors has triggered the development of localization solutions that jointly estimate the robot position and the position of elements in the environment, referred to as Simultaneous Localization and Mapping (SLAM) approaches. To handle the noise inherent in the data coming from the sensors, SLAM solutions are implemented in a probabilistic framework. The first developments were based on Extended Kalman Filters, while more recent developments use probabilistic graphical models to model the estimation problem and solve it through optimization. This thesis exploits the latter approach to develop two distinct techniques for autonomous ground vehicles: one using monocular vision, the other using LIDAR. The lack of depth information in camera images has fostered the use of specific landmark parametrizations that isolate the unknown depth in one variable, concentrating its large uncertainty into a single parameter. One of these parametrizations, named Parallax Angle Parametrization, was originally introduced in the context of the Bundle Adjustment problem, which processes all the gathered data in a single global optimization step. We present how to exploit this parametrization in an incremental graph-based SLAM approach in which robot motion measures are also incorporated. LIDAR sensors can be used to build odometry-like solutions for localization by sequentially registering the point clouds acquired along a robot trajectory. We define a graphical model layer on top of a LIDAR odometry layer that uses the Iterative Closest Points (ICP) algorithm as its registration technique.
Reference frames are defined along the robot trajectory, and ICP results are used to build a pose graph, which is used to solve an optimization problem that enables the correction of the robot trajectory and the environment map upon loop closures. After an introduction to the theory of graphical models applied to the SLAM problem, the manuscript presents these two approaches. Simulated and experimental results illustrate the developments throughout the manuscript, using classic and in-house datasets.
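The pose-graph construction described in this abstract can be illustrated with a deliberately minimal sketch: 1-D keyframe poses, relative-motion edges standing in for ICP registration results, and one loop-closure edge, solved as a linear least-squares problem. This is a toy under assumed conventions, not the thesis implementation.

```python
import numpy as np

def optimize_pose_graph_1d(n_poses, edges):
    """Least-squares pose graph over 1-D poses x_0..x_{n-1}.
    edges: list of (i, j, z) where z is the measured x_j - x_i
    (e.g. an ICP registration result). Pose 0 is softly anchored at 0."""
    A = np.zeros((len(edges) + 1, n_poses))
    b = np.zeros(len(edges) + 1)
    A[0, 0] = 1.0  # anchor the first pose at the origin
    for row, (i, j, z) in enumerate(edges, start=1):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, z
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Three odometry edges of ~1 m each, plus a loop closure stating that
# pose 3 coincides with pose 0. Raw odometry would put pose 3 at 3.0;
# the optimizer spreads the conflict over the chain: x = [0, 0.35, 0.5, 0.75].
edges = [(0, 1, 1.1), (1, 2, 0.9), (2, 3, 1.0), (0, 3, 0.0)]
x = optimize_pose_graph_1d(4, edges)
```

Real systems solve the same least-squares structure over SE(2)/SE(3) poses with per-edge covariances, but the drift-correcting effect of the loop-closure edge is already visible in this scalar version.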
APA, Harvard, Vancouver, ISO, and other styles
9

Chghaf, Mohammed. "Towards a Multimodal Loop Closure System for Real-Time Embedded SLAM Applications." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST133.

Full text
Abstract:
Multimodal SLAM algorithms improve its robustness and accuracy in complex and dynamic environments. However, these improvements come at the cost of increased computational requirements. A system-level study of the SLAM problem is crucial to designing a practical, stable and versatile solution, adaptable to embedded and real-time systems. We have studied the various processing stages of the system in order to propose contributions to the multimodal loop closure level for SLAM applications, and to its computational architecture. This study began with an in-depth analysis of the impact of multimodal information representation on loop closure accuracy and its influence on trajectory drift reduction. We developed a fusion method based on a similarity-guided particle filter, which was evaluated using various datasets. The results obtained showed an improvement in localization accuracy. We proposed a heterogeneous architecture model (CPU-GPU and CPU-FPGA) for inter-modal scene descriptor computation. This architecture was able to deliver superior performance in terms of processing time.
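The similarity-guided particle filter at the heart of the fusion method can be sketched as follows. This is a minimal illustration with invented similarity scores, not the author's implementation: candidate loop-closure keyframes are re-weighted by an appearance-similarity score and resampled, so hypotheses concentrate on likely matches.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_weighted_resample(particles, similarities):
    """One update step of a similarity-guided particle filter:
    candidate loop-closure keyframes (particles) are re-weighted by
    an appearance-similarity score, then resampled in proportion to
    those weights."""
    w = np.asarray(similarities, dtype=float)
    w /= w.sum()  # normalize scores into a probability distribution
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], w

# Five candidate keyframe indices; candidate 42 scores far more
# similar to the current scene descriptor than the others.
particles = np.array([10, 25, 42, 57, 63])
sims = [0.05, 0.10, 0.70, 0.10, 0.05]
resampled, weights = similarity_weighted_resample(particles, sims)
```

In a multimodal setting the similarity score would itself be a fusion of per-modality descriptor distances (e.g. visual and LiDAR), which is where the inter-modal descriptor computation mentioned in the abstract enters.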
APA, Harvard, Vancouver, ISO, and other styles
10

Karlsson, Oskar. "Lidar-based SLAM : Investigation of environmental changes and use of road-edges for improved positioning." Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165288.

Full text
Abstract:
The ability to position yourself and map the surroundings is important for both civilian and military applications. Global navigation satellite systems are very popular and widely used for positioning, but this kind of system is quite easy to disturb and therefore lacks robustness. The introduction of autonomous vehicles has accelerated the development of local positioning systems. This thesis work was done in collaboration with FOI in Linköping, using a positioning system with LIDAR and IMU sensors in an EKF-SLAM system built on the GTSAM framework. The goal was to evaluate the system under different conditions and to investigate the possibility of using the road surface for positioning. Data available at FOI was used for evaluation. These data sets have a known sensor setup that matches the intended hardware. They were gathered on three different occasions in a residential area, on a country road, and on a forest road: twice in sunny spring weather and once in winter conditions. Several different measures were used to evaluate performance: common ones such as positioning error and RMSE, but also the number of found landmarks, the estimated distance between landmarks, and the drift of the vehicle. All results pointed towards the forest road providing the best positioning, the country road the worst, and the residential area in between. When comparing weather conditions, the data set from winter conditions performed best. The two spring data sets differed considerably from each other, which indicates that factors other than weather may be at play. A road-edge detector was implemented to improve mapping and positioning. Vectors with position and orientation, denoted road vectors, were adapted to the edge points, and the change between these road vectors was fed into the GTSAM-based system in areas with few landmarks.
The clearest improvement to the drift in the vehicle direction was in the longer country-road area, where the error was lowered by 6.4%, with increased sideways and orientation errors as side effects. The implemented method has a significant impact on the computational cost of the system and requires precise tuning of uncertainties to yield a noticeable improvement without worsening the overall results.
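Fitting the "road vectors" described above, a position and an orientation adapted to detected edge points, can be sketched with a centroid-plus-principal-direction fit. This is an assumed formulation for illustration, not the thesis code.

```python
import numpy as np

def fit_road_vector(edge_points):
    """Fit a road vector (position + heading) to 2-D road-edge points:
    the position is the centroid of the points, and the heading is the
    angle of their dominant direction (first principal component)."""
    pts = np.asarray(edge_points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centred points gives the principal direction in vt[0]
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    heading = np.arctan2(direction[1], direction[0])
    return centroid, heading

# Noise-free edge points along the line y = 0.5 x
pts = [[x, 0.5 * x] for x in np.linspace(0.0, 10.0, 21)]
centroid, heading = fit_road_vector(pts)
```

The change between consecutive road vectors (relative translation and rotation) is the kind of low-rate constraint that can be added as an extra factor in a GTSAM pose graph where landmarks are scarce.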
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Slam LiDAR"

1

Zhang, Jiabao, and Yu Zhang. "Downsampling Assessment for LiDAR SLAM." In Proceedings of 2023 Chinese Intelligent Automation Conference, 234–42. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6187-0_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Peng, Gang, Tin Lun Lam, Chunxu Hu, Yu Yao, Jintao Liu, and Fan Yang. "LiDAR SLAM for Mobile Robot." In Introduction to Intelligent Robot System Design, 191–223. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1814-0_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Chunhui, Jiaxing Li, Anqi Chen, Yang Lyu, and Lin Hua. "Intensity Augmented Solid-State-LiDAR-Inertial SLAM." In Lecture Notes in Electrical Engineering, 129–39. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1103-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Andert, Franz, and Henning Mosebach. "LiDAR SLAM Positioning Quality Evaluation in Urban Road Traffic." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 277–91. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38822-5_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cho, Kuk, SeungHo Baeg, and Sangdeok Park. "Natural Terrain Detection and SLAM Using LIDAR for UGV." In Advances in Intelligent Systems and Computing, 793–805. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-33926-4_76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Yuhang, and Liwei Zhang. "Lidar-Inertial SLAM Method for Accurate and Robust Mapping." In Communications in Computer and Information Science, 33–44. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8018-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shi, Zhanhong, Ping Wang, Wanquan Liu, and Chenqiang Gao. "Multi-Sensor SLAM Assisted by 2D LiDAR Line Features." In Learning and Analytics in Intelligent Systems, 73–80. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56521-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Yuwei, Jian Tang, Ziyi Feng, Teemu Hakala, Juha Hyyppä, Chuncheng Zhou, Lingli Tang, and Chuanrong Li. "Possibility of Applying SLAM-Aided LiDAR in Deep Space Exploration." In Springer Proceedings in Physics, 239–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-49184-4_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cho, Kuk, SeungHo Baeg, and Sangdeok Park. "Natural Terrain Detection and SLAM Using LIDAR for an UGV." In Frontiers of Intelligent Autonomous Systems, 263–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35485-4_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Chunxu, Ling Pei, Changqing Xu, Danping Zou, Yuhui Qi, Yifan Zhu, and Tao Li. "Trajectory Optimization of LiDAR SLAM Based on Local Pose Graph." In Lecture Notes in Electrical Engineering, 360–70. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7751-8_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Slam LiDAR"

1

Kim, Giseop, Seungsang Yun, Jeongyun Kim, and Ayoung Kim. "SC-LiDAR-SLAM: A Front-end Agnostic Versatile LiDAR SLAM System." In 2022 International Conference on Electronics, Information, and Communication (ICEIC). IEEE, 2022. http://dx.doi.org/10.1109/iceic54506.2022.9748644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Abati, Gabriel F., João Carlos V. Soares, and Marco Antonio Meggiolaro. "SLAM Visual Em Ambientes Dinâmicos Usando Segmentação Panóptica." In Anais Estendidos do Simpósio Brasileiro de Robótica e Simpósio Latino-Americano de Robótica. Sociedade Brasileira de Computação, 2023. http://dx.doi.org/10.5753/sbrlars_estendido.2023.235116.

Full text
Abstract:
Most visual SLAM systems are not robust in dynamic scenarios. Those that handle dynamic content in the scene usually rely on deep-learning-based methods to detect and filter dynamic objects. However, these methods cannot deal with unknown objects. This work presents Panoptic-SLAM, a visual SLAM system that is robust in dynamic environments, even in the presence of unknown objects. It uses panoptic segmentation to filter dynamic objects from the scene during the state estimation process. The proposed methodology is based on ORB-SLAM3, a state-of-the-art SLAM system for static environments. The implementation was tested on real-world datasets and compared with several systems from the literature, including DynaSLAM, DS-SLAM, SaD-SLAM, and PVO.
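The mask-based filtering step that Panoptic-SLAM performs can be sketched as follows. This is a toy with an assumed label set and mask layout, not the paper's implementation: keypoints that land on segments of dynamic classes are discarded before pose estimation.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "car"}  # assumed dynamic label set

def filter_dynamic_keypoints(keypoints, panoptic_mask, segment_labels):
    """Drop keypoints that fall on panoptic segments of dynamic classes,
    keeping only static-scene features for pose estimation.
    panoptic_mask[v, u] holds a segment id; segment_labels maps a
    segment id to its class name."""
    kept = []
    for (u, v) in keypoints:
        seg_id = panoptic_mask[v, u]
        if segment_labels.get(seg_id, "unknown") not in DYNAMIC_CLASSES:
            kept.append((u, v))
    return kept

# 4x4 toy mask: segment 1 is a person, segment 0 is static background.
mask = np.zeros((4, 4), dtype=int)
mask[0:2, 0:2] = 1
labels = {0: "background", 1: "person"}
kps = [(0, 0), (3, 3), (1, 0), (2, 2)]
static_kps = filter_dynamic_keypoints(kps, mask, labels)
```

Handling *unknown* objects, the paper's key point, additionally requires reasoning about segments whose class is unrecognized (the `"unknown"` fallback above), rather than assuming everything unlabeled is static.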
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Bo, Yiran Fu, Changsai Zhang, and Zhengjun Liu. "Research of cartographer laser SLAM algorithm." In LIDAR Imaging Detection and Target Recognition 2017, edited by Yueguang Lv, Jianzhong Su, Wei Gong, Jian Yang, Weimin Bao, Weibiao Chen, Zelin Shi, Jindong Fei, Shensheng Han, and Weiqi Jin. SPIE, 2017. http://dx.doi.org/10.1117/12.2292864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Jun, Junqiao Zhao, Yuchen Kang, Xudong He, Chen Ye, and Lu Sun. "DL-SLAM: Direct 2.5D LiDAR SLAM for Autonomous Driving." In 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019. http://dx.doi.org/10.1109/ivs.2019.8813868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chang, Yu-Cheng, Ya-Li Chen, Ya-Wen Hsu, Jau-Woei Perng, and Jun-Dong Chang. "Integrating V-SLAM and LiDAR-based SLAM for Map Updating." In 2021 IEEE 4th International Conference on Knowledge Innovation and Invention (ICKII). IEEE, 2021. http://dx.doi.org/10.1109/ickii51822.2021.9574718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Frosi, Matteo, and Matteo Matteucci. "D3VIL-SLAM: 3D Visual Inertial LiDAR SLAM for Outdoor Environments." In 2023 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2023. http://dx.doi.org/10.1109/iv55152.2023.10186534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Geneva, Patrick, Kevin Eckenhoff, Yulin Yang, and Guoquan Huang. "LIPS: LiDAR-Inertial 3D Plane SLAM." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8594463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Xieyuanli, Andres Milioto, Emanuele Palazzolo, Philippe Giguere, Jens Behley, and Cyrill Stachniss. "SuMa++: Efficient LiDAR-based Semantic SLAM." In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019. http://dx.doi.org/10.1109/iros40897.2019.8967704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kirnos, Vasilii, Vladimir Antipov, Andrey Priorov, and Vera Kokovkina. "The LIDAR Odometry in the SLAM." In 2018 23rd Conference of Open Innovations Association (FRUCT). IEEE, 2018. http://dx.doi.org/10.23919/fruct.2018.8588026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Xuanliang, and Wenguang Wang. "Moving target removal for lidar SLAM." In Seventh Symposium on Novel Photoelectronic Detection Technology and Application 2020, edited by Junhao Chu, Qifeng Yu, Huilin Jiang, and Junhong Su. SPIE, 2021. http://dx.doi.org/10.1117/12.2587440.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Slam LiDAR"

1

Ennasr, Osama, Michael Paquette, and Garry Glaspell. UGV SLAM payload for low-visibility environments. Engineer Research and Development Center (U.S.), September 2023. http://dx.doi.org/10.21079/11681/47589.

Full text
Abstract:
Herein, we explore using a low size, weight, power, and cost unmanned ground vehicle payload designed specifically for low-visibility environments. The proposed payload simultaneously localizes and maps in GPS-denied environments via waypoint navigation. This solution utilizes a diverse sensor payload that includes wheel encoders, inertial measurement unit, 3D lidar, 3D ultrasonic sensors, and thermal cameras. Furthermore, the resulting 3D point cloud was compared against a survey-grade lidar.
APA, Harvard, Vancouver, ISO, and other styles
2

Christie, Benjamin, Osama Ennasr, and Garry Glaspell. Autonomous navigation and mapping in a simulated environment. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42006.

Full text
Abstract:
Unknown Environment Exploration (UEE) with an Unmanned Ground Vehicle (UGV) is extremely challenging. This report investigates a frontier exploration approach, in simulation, that leverages Simultaneous Localization And Mapping (SLAM) to efficiently explore unknown areas by finding navigable routes. The solution utilizes a diverse sensor payload that includes wheel encoders, three-dimensional (3-D) LIDAR, and Red, Green, Blue and Depth (RGBD) cameras. The main goal of this effort is to leverage frontier-based exploration with a UGV to produce a 3-D map (up to 10 cm resolution). The solution provided leverages the Robot Operating System (ROS).
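Frontier-based exploration, as used in this report, reduces at its core to finding free cells that border unknown space in an occupancy grid. A minimal sketch with an assumed grid encoding, not the ERDC/ROS code:

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # assumed cell encoding

def find_frontiers(grid):
    """Frontier cells for exploration: free cells with at least one
    4-connected unknown neighbour. Returns a list of (row, col)."""
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neigh = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr, nc] == UNKNOWN for nr, nc in neigh):
                frontiers.append((r, c))
    return frontiers

# Left half mapped free, right half unknown: the frontier is the
# column of free cells bordering the unknown region.
grid = np.full((3, 4), UNKNOWN)
grid[:, :2] = FREE
frontiers = find_frontiers(grid)
```

An exploration loop then clusters frontier cells, picks a goal (e.g. the nearest or largest cluster), navigates to it, and re-runs the detection on the updated SLAM map until no frontiers remain.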
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Full text
Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure- and-go situation. There were no major revisions in the overall objective, however, several revisions were made on the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation. Background, major conclusions, solutions and achievements -- Yield mapping is considered as an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, it remains a difficult task for mapping yield of tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus fruit so that farmers could apply site-specific management based on the maps. There were two sub-systems, a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). 
An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees and a new machine vision algorithm was developed to enable autonomous navigations in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisitions. A novel image registration method was developed for combining color and thermal images and matching fruit in both images which achieved pixel-level accuracy. A new Color- Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection which was a great improvement compared to previous studies. Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so high yield and profit can be achieved.
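The precision and recall figures quoted above follow the standard detection-metric definitions. A small sketch with hypothetical counts chosen only to reproduce similar ratios (the actual counts are not given in the abstract):

```python
def precision_recall(tp, fp, fn):
    """Detection precision and recall from true positives (correct
    fruit detections), false positives (non-fruit flagged as fruit),
    and false negatives (missed fruit)."""
    precision = tp / (tp + fp)  # fraction of detections that are real fruit
    recall = tp / (tp + fn)     # fraction of real fruit that was detected
    return precision, recall

# Hypothetical counts yielding ratios close to the reported
# 95.5% precision and 90.4% recall.
p, r = precision_recall(tp=904, fp=43, fn=96)
```

Reporting both numbers matters here: a detector tuned for high precision alone could simply skip ambiguous green fruit, which would silently degrade the yield map's completeness (recall).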
APA, Harvard, Vancouver, ISO, and other styles
