Journal articles on the topic 'Velodyne LiDAR'

Consult the top 50 journal articles for your research on the topic 'Velodyne LiDAR.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Alsadik, Bashar. "Performance Assessment of Mobile Laser Scanning Systems Using Velodyne HDL-32E." Surveying and Geospatial Engineering Journal 1, no. 1 (January 1, 2021): 28–33. http://dx.doi.org/10.38094/sgej116.

Abstract:
Mapping systems using multi-beam LiDARs are widely used nowadays for different geospatial applications, ranging from indoor projects to outdoor city-wide projects. These mobile mapping systems can be either ground-based or aerial-based and are mostly equipped with inertial navigation systems (INS). The Velodyne HDL-32E LiDAR is a well-known 360° spinning multi-beam laser scanner that is widely used in outdoor and indoor mobile mapping systems. The performance of such LiDARs is an ongoing research topic that is quite important for quality assurance and quality control. The performance of this LiDAR type is correlated with many factors related either to the device itself or to the design of the mobile mapping system. Regarding design, most mapping systems are equipped with a single Velodyne HDL-32E at a specific orientation angle, which differs among mapping system manufacturers. The LiDAR orientation angle has a significant impact on performance in terms of the density and coverage of the produced point clouds. Furthermore, during the lifetime of this multi-beam LiDAR, one or more beams may become defective, and the scanner is then either kept in production or returned to the manufacturer to be fixed, which costs time and money. In this paper, the design impact of a mobile laser scanning (MLS) system equipped with a single Velodyne HDL-32E is analysed, and a clear relationship is given between the orientation angle of the LiDAR and the output point density. The ideal angular orientation of a single Velodyne HDL-32E in a mobile mapping system is found to be 35°. Furthermore, we investigated the degradation of point density when one of the 32 beams is defective and quantified the density loss percentage; to the best of our knowledge, this has not been presented in the literature before. It is found that a maximum of about 8% point density loss occurs on the ground and 4% on the facades when the Velodyne HDL-32E has a defective beam.
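As a rough, illustrative check of why the mounting angle matters for ground coverage (not the paper's simulation), the short Python sketch below counts how many of the 32 HDL-32E beams reach flat ground within a maximum range for a few hypothetical tilt angles; the sensor height, range limit, and forward-azimuth simplification are assumptions.

    import numpy as np

    def ground_hits(mount_angle_deg, sensor_height=2.0, max_range=70.0):
        # HDL-32E vertical field of view: 32 beams between -30.67 and +10.67 degrees
        beam_angles = np.linspace(-30.67, 10.67, 32)
        # Tilting the scanner forward lowers every beam by the mount angle
        # (only the forward-looking azimuth is considered in this toy model).
        effective = np.deg2rad(beam_angles - mount_angle_deg)
        hits = 0
        for a in effective:
            if a < 0:                                    # beam points below the horizon
                slant_range = sensor_height / np.sin(-a)
                if slant_range <= max_range:
                    hits += 1
        return hits

    for angle in (0, 15, 35, 50):
        print(angle, "deg tilt ->", ground_hits(angle), "beams reach the ground")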
2

Jozkow, G., C. Toth, and D. Grejner-Brzezinska. "UAS TOPOGRAPHIC MAPPING WITH VELODYNE LiDAR SENSOR." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-1 (June 2, 2016): 201–8. http://dx.doi.org/10.5194/isprsannals-iii-1-201-2016.

Abstract:
Unmanned Aerial System (UAS) technology is nowadays widely used in small-area topographic mapping due to low costs and the good quality of derived products. Since the cameras typically used with UAS have some limitations, e.g., they cannot penetrate vegetation, LiDAR sensors are increasingly getting attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform; nevertheless, UAS LiDAR is still an emerging technology. One issue related to using LiDAR sensors on UAS is the limited performance of the navigation sensors used on UAS platforms. Therefore, various hardware and software solutions are investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with a Velodyne laser scanner and cameras. Attention was primarily paid to the trajectory reconstruction performance that is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not be of sufficient performance, the estimated camera poses could help increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including a comparison with point clouds obtained from dense image matching. The results showed the need for more investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of the point cloud obtained from images, may still be sufficient for certain mapping applications where optical imagery is not useful.
3

Jozkow, G., C. Toth, and D. Grejner-Brzezinska. "UAS TOPOGRAPHIC MAPPING WITH VELODYNE LiDAR SENSOR." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-1 (June 2, 2016): 201–8. http://dx.doi.org/10.5194/isprs-annals-iii-1-201-2016.

Abstract:
Unmanned Aerial System (UAS) technology is nowadays widely used in small-area topographic mapping due to low costs and the good quality of derived products. Since the cameras typically used with UAS have some limitations, e.g., they cannot penetrate vegetation, LiDAR sensors are increasingly getting attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform; nevertheless, UAS LiDAR is still an emerging technology. One issue related to using LiDAR sensors on UAS is the limited performance of the navigation sensors used on UAS platforms. Therefore, various hardware and software solutions are investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with a Velodyne laser scanner and cameras. Attention was primarily paid to the trajectory reconstruction performance that is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not be of sufficient performance, the estimated camera poses could help increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including a comparison with point clouds obtained from dense image matching. The results showed the need for more investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of the point cloud obtained from images, may still be sufficient for certain mapping applications where optical imagery is not useful.
4

Tazir, M. L., and N. Seube. "COMPARISON OF UAV LIDAR ODOMETRY OF ROTATING AND FIXED VELODYNE PLATFORMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (August 6, 2020): 527–34. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-527-2020.

Abstract:
Three-dimensional LiDAR rangefinders are increasingly integrated into unmanned aerial vehicles (UAVs) due to their direct access to 3D information, their high accuracy and refresh rate, and their tendency to be lightweight and inexpensive. However, all commercial LiDARs offer only a limited vertical resolution. To cope with this problem, one solution is to rotate the LiDAR about an axis passing through its centre, adding an additional degree of freedom and allowing more overlap, which significantly enlarges the sensor's scope and provides a complete spherical field of view (FOV). In this paper, we explore this solution in detail in the context of drones, comparing the rotating and fixed configurations of a multi-layer LiDAR (MLL), the Velodyne Puck LITE. We investigate its impact on the LiDAR odometry (LO) process by comparing the trajectories obtained with the data of the two configurations, as well as through qualitative comparisons of the resulting maps.
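As a loose illustration of the extra bookkeeping a rotating mount introduces before LiDAR odometry, the Python sketch below composes the platform spin with a fixed mount transform; the spin axis, angles, and transforms are made-up values, not the authors' setup.

    import numpy as np

    def rot_about_x(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[1.0, 0.0, 0.0],
                         [0.0,   c,  -s],
                         [0.0,   s,   c]])

    def lidar_to_body(points, spin_angle, mount_R, mount_t):
        # points: (N, 3) in the LiDAR frame; spin_angle: current angle of the
        # rotating platform about its (assumed) x-axis; mount_R / mount_t: the
        # fixed platform-to-body transform.
        spun = points @ rot_about_x(spin_angle).T        # follow the platform rotation
        return spun @ mount_R.T + mount_t                # then apply the fixed mount

    pts = np.random.rand(1000, 3) * 20.0
    body = lidar_to_body(pts, np.deg2rad(30.0), np.eye(3), np.array([0.0, 0.0, -0.1]))
    print(body.shape)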
5

Okunsky, M. V., and N. V. Nesterova. "Velodyne LIDAR method for sensor data decoding." IOP Conference Series: Materials Science and Engineering 516 (April 26, 2019): 012018. http://dx.doi.org/10.1088/1757-899x/516/1/012018.

6

Atanacio-Jiménez, Gerardo, José-Joel González-Barbosa, Juan B. Hurtado-Ramos, Francisco J. Ornelas-Rodríguez, Hugo Jiménez-Hernández, Teresa García-Ramirez, and Ricardo González-Barbosa. "LIDAR Velodyne HDL-64E Calibration Using Pattern Planes." International Journal of Advanced Robotic Systems 8, no. 5 (January 1, 2011): 59. http://dx.doi.org/10.5772/50900.

Abstract:
This work describes a method for calibration of the Velodyne HDL-64E scanning LIDAR system. The principal contributions are a calibration pattern signature, the mathematical model, and the numerical algorithm for computing the calibration parameters of the LIDAR. The main objective of this calibration pattern is to minimize systematic errors due to geometric calibration factors. An algorithm for solving the intrinsic and extrinsic parameters is described. Finally, the uncertainty is calculated from the standard deviation of the calibration residual errors.
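The toy example below is written from the general idea of plane-based calibration rather than the paper's exact model: it recovers a per-beam range offset so that corrected points fall on a known calibration plane. All data, the number of beams, and the offsets are synthetic.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n, d = np.array([0.0, 0.0, 1.0]), -1.0           # calibration plane z = 1
    n_beams, n_pts = 4, 200                          # pretend the scanner has 4 beams
    beam_ids = rng.integers(0, n_beams, n_pts)
    true_offsets = np.array([0.03, -0.02, 0.01, 0.0])

    dirs = rng.normal(size=(n_pts, 3))
    dirs[:, 2] = np.abs(dirs[:, 2]) + 0.3            # keep rays pointing up at the plane
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    ranges_true = -d / dirs[:, 2]                    # range at which each ray hits z = 1
    ranges_meas = ranges_true + true_offsets[beam_ids]

    def residuals(offsets):
        pts = (ranges_meas - offsets[beam_ids])[:, None] * dirs
        return pts @ n + d                           # signed distances to the plane

    sol = least_squares(residuals, x0=np.zeros(n_beams))
    print(np.round(sol.x, 3))                        # close to the simulated offsets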
7

Zhang, Liang, Qingquan Li, Ming Li, Qingzhou Mao, and Andreas Nüchter. "Multiple Vehicle-like Target Tracking Based on the Velodyne LiDAR*." IFAC Proceedings Volumes 46, no. 10 (June 2013): 126–31. http://dx.doi.org/10.3182/20130626-3-au-2035.00058.

8

Erke, Shang, Dai Bin, Nie Yiming, Xiao Liang, and Zhu Qi. "A fast calibration approach for onboard LiDAR-camera systems." International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142090960. http://dx.doi.org/10.1177/1729881420909606.

Abstract:
Outdoor surveillance and security robots have a wide range of industrial, military, and civilian applications. In order to achieve autonomous navigation, the LiDAR-camera system is widely applied by outdoor surveillance and security robots. The calibration of the LiDAR-camera system is essential and important for robots to correctly acquire the scene information. This article proposes a fast calibration approach that is different from traditional calibration algorithms. The proposed approach combines two independent calibration processes, which are the calibration of LiDAR and the camera to robot platform, so as to address the relationship between LiDAR sensor and camera sensor. A novel approach to calibrate LiDAR to robot platform is applied to improve accuracy and robustness. A series of indoor experiments are carried out and the results show that the proposed approach is effective and efficient. At last, it is applied to our own outdoor security robot platform to detect both positive and negative obstacles in a field environment, in which two Velodyne-HDL-32 LiDARs and a color camera are employed. The real application illustrates the robustness performance of the proposed approach.
9

Lassiter, H. Andrew, Travis Whitley, Benjamin Wilkinson, and Amr Abd-Elrahman. "Scan Pattern Characterization of Velodyne VLP-16 Lidar Sensor for UAS Laser Scanning." Sensors 20, no. 24 (December 21, 2020): 7351. http://dx.doi.org/10.3390/s20247351.

Abstract:
Many lightweight lidar sensors employed for UAS lidar mapping feature a fan-style laser emitter-detector configuration which results in a non-uniform pattern of laser pulse returns. As the role of UAS lidar mapping grows in both research and industry, it is imperative to understand the behavior of the fan-style lidar sensor to ensure proper mission planning. This study introduces sensor modeling software for scanning simulation and analytical equations developed in-house to characterize the non-uniform return density (i.e., scan pattern) of the fan-style sensor, with special focus given to a popular fan-style sensor, the Velodyne VLP-16 laser scanner. The results indicate that, despite the high pulse frequency of modern scanners, areas of poor laser pulse coverage are often present along the scanning path under typical mission parameters. These areas of poor coverage appear in a variety of shapes and sizes which do not necessarily correspond to the forward speed of the scanner or the height of the scanner above the ground, highlighting the importance of scan simulation for proper mission planning when using a fan-style sensor.
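To make the fan-style geometry concrete, here is a simplified Python simulation of where a VLP-16 mounted with its spin axis along the flight direction would place returns on flat ground for a given speed and height; the firing interval, mounting, and flat-terrain assumptions are illustrative, and this is not the authors' simulator.

    import numpy as np

    def vlp16_ground_pattern(speed=5.0, height=40.0, rpm=600.0, duration=2.0):
        elevations = np.deg2rad(np.linspace(-15.0, 15.0, 16))   # 16 fixed beam elevations
        seq_dt = 55.3e-6                          # approx. one 16-laser firing sequence
        t = np.arange(0.0, duration, seq_dt)
        az = 2.0 * np.pi * (rpm / 60.0) * t       # spin angle of each firing sequence
        az_g, el_g = np.meshgrid(az, elevations, indexing="ij")
        d_sensor = np.stack([np.cos(el_g) * np.cos(az_g),
                             np.cos(el_g) * np.sin(az_g),
                             np.sin(el_g)], axis=-1)
        # spin axis along flight direction: sensor z -> world x, x -> world y, y -> world z
        d_world = np.stack([d_sensor[..., 2], d_sensor[..., 0], d_sensor[..., 1]], axis=-1)
        down = -d_world[..., 2]
        valid = down > 0.2                        # ignore grazing and upward beams
        rng_ground = height / down[valid]
        x_platform = np.repeat((speed * t)[:, None], 16, axis=1)[valid]
        x = x_platform + rng_ground * d_world[..., 0][valid]
        y = rng_ground * d_world[..., 1][valid]
        return np.column_stack([x, y])

    print(vlp16_ground_pattern().shape, "simulated ground returns")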
10

Bula, Jason, Marc-Henri Derron, and Gregoire Mariethoz. "Dense point cloud acquisition with a low-cost Velodyne VLP-16." Geoscientific Instrumentation, Methods and Data Systems 9, no. 2 (October 12, 2020): 385–96. http://dx.doi.org/10.5194/gi-9-385-2020.

Abstract:
Abstract. This study develops a low-cost terrestrial lidar system (TLS) for dense point cloud acquisition. Our system consists of a VLP-16 lidar scanner produced by Velodyne, which we have placed on a motorized rotating platform. This allows us to continuously change the direction and densify the scan. Axis correction is performed in post-processing to obtain accurate scans. The system has been compared indoors with a high-cost system, showing an average absolute difference of ±2.5 cm. Stability tests demonstrated an average distance of ±2 cm between repeated scans with our system. The system has been tested in abandoned mines with promising results. It has a very low price (approximately USD 4000) and opens the door to measuring risky sectors where instrument loss is high but information valuable.
11

Vallet, B., W. Xiao, and M. Brédif. "EXTRACTING MOBILE OBJECTS IN IMAGES USING A VELODYNE LIDAR POINT CLOUD." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W4 (March 11, 2015): 247–53. http://dx.doi.org/10.5194/isprsannals-ii-3-w4-247-2015.

Abstract:
This paper presents a full pipeline to extract mobile objects in images based on a simultaneous laser acquisition with a Velodyne scanner. The point cloud is first analysed to extract mobile objects in 3D. This is done using Dempster-Shafer theory, and it results in weights indicating, for each point, whether it corresponds to a mobile object, a fixed object, or whether no decision can be made based on the data (unknown). These weights are projected into an image acquired simultaneously and used to segment the image into the mobile and static parts of the scene.
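The projection step described above can be pictured with a generic pinhole-camera sketch like the one below (not the paper's pipeline); the intrinsic matrix, pose, and points are invented values.

    import numpy as np

    def project_points(points_world, K, R, t):
        # points_world: (N, 3). Returns pixel coordinates of points in front of the camera.
        pts_cam = points_world @ R.T + t
        in_front = pts_cam[:, 2] > 0.1
        uvw = pts_cam[in_front] @ K.T
        return uvw[:, :2] / uvw[:, 2:3], in_front

    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    pts = np.random.rand(50, 3) * np.array([10.0, 10.0, 1.0]) + np.array([-5.0, -5.0, 0.0])
    uv, mask = project_points(pts, K, np.eye(3), np.array([0.0, 0.0, 2.0]))
    print(uv.shape[0], "points projected")   # their per-point weights could now label pixels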
12

Jozkow, G., P. Wieczorek, M. Karpina, A. Walicka, and A. Borkowski. "PERFORMANCE EVALUATION OF sUAS EQUIPPED WITH VELODYNE HDL-32E LiDAR SENSOR." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 171–77. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-171-2017.

Abstract:
The Velodyne HDL-32E laser scanner is used more and more frequently as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment was conducted in four aspects: the impact of the sensors on theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error given the errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The executed experiments showed that in the tested UAS, trajectory reconstruction, especially attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus, the investigated UAS fits the mapping-grade category.
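A minimal version of the plane-fitting accuracy check mentioned above can be sketched as follows (generic SVD plane fit and RMS residual, not the authors' processing chain); the synthetic patch and its 3 cm noise level are assumptions.

    import numpy as np

    def plane_rms(points):
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]                          # smallest singular vector = plane normal
        residuals = (points - centroid) @ normal
        return np.sqrt(np.mean(residuals ** 2))

    rng = np.random.default_rng(1)
    patch = np.column_stack([rng.uniform(0, 5, 1000),
                             rng.uniform(0, 5, 1000),
                             rng.normal(0, 0.03, 1000)])   # planar patch with 3 cm noise
    print(f"RMS plane residual: {plane_rms(patch):.3f} m")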
13

Pandey, Gaurav, James R. McBride, and Ryan M. Eustice. "Ford Campus vision and lidar data set." International Journal of Robotics Research 30, no. 13 (March 11, 2011): 1543–52. http://dx.doi.org/10.1177/0278364911400640.

Abstract:
In this paper we describe a data set collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS-LV) and consumer (Xsens MTi-G) inertial measurement unit, a Velodyne three-dimensional lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research Campus and downtown Dearborn, MI, during November–December 2009. The vehicle path trajectory in these data sets contains several large- and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and simultaneous localization and mapping algorithms.
14

Hu, Tianyu, Xiliang Sun, Yanjun Su, Hongcan Guan, Qianhui Sun, Maggi Kelly, and Qinghua Guo. "Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications." Remote Sensing 13, no. 1 (December 28, 2020): 77. http://dx.doi.org/10.3390/rs13010077.

Abstract:
Accurate and repeated forest inventory data are critical to understand forest ecosystem processes and manage forest resources. In recent years, unmanned aerial vehicle (UAV)-borne light detection and ranging (lidar) systems have demonstrated effectiveness at deriving forest inventory attributes. However, their high cost has largely prevented them from being used in large-scale forest applications. Here, we developed a very low-cost UAV lidar system that integrates a recently emerged DJI Livox MID40 laser scanner (~$600 USD) and evaluated its capability in estimating both individual tree-level (i.e., tree height) and plot-level forest inventory attributes (i.e., canopy cover, gap fraction, and leaf area index (LAI)). Moreover, a comprehensive comparison was conducted between the developed DJI Livox system and four other UAV lidar systems equipped with high-end laser scanners (i.e., RIEGL VUX-1 UAV, RIEGL miniVUX-1 UAV, HESAI Pandar40, and Velodyne Puck LITE). Using these instruments, we surveyed a coniferous forest site and a broadleaved forest site, with tree densities ranging from 500 trees/ha to 3000 trees/ha, with 52 UAV flights at different flying height and speed combinations. The developed DJI Livox MID40 system effectively captured the upper canopy structure and terrain surface information at both forest sites. The estimated individual tree height was highly correlated with field measurements (coniferous site: R2 = 0.96, root mean squared error/RMSE = 0.59 m; broadleaved site: R2 = 0.70, RMSE = 1.63 m). The plot-level estimates of canopy cover, gap fraction, and LAI corresponded well with those derived from the high-end RIEGL VUX-1 UAV system but tended to have systematic biases in areas with medium to high canopy densities. Overall, the DJI Livox MID40 system performed comparably to the RIEGL miniVUX-1 UAV, HESAI Pandar40, and Velodyne Puck LITE systems in the coniferous site and to the Velodyne Puck LITE system in the broadleaved forest. Despite its apparent weaknesses of limited sensitivity to low-intensity returns and narrow field of view, we believe that the very low-cost system developed by this study can largely broaden the potential use of UAV lidar in forest inventory applications. This study also provides guidance for the selection of the appropriate UAV lidar system and flight specifications for forest research and management.
15

Velas, Martin, Michal Spanel, Tomas Sleziak, Jiri Habrovec, and Adam Herout. "Indoor and Outdoor Backpack Mapping with Calibrated Pair of Velodyne LiDARs." Sensors 19, no. 18 (September 12, 2019): 3944. http://dx.doi.org/10.3390/s19183944.

Abstract:
This paper presents a human-carried mapping backpack based on a pair of Velodyne LiDAR scanners. Our system is a universal solution for both large scale outdoor and smaller indoor environments. It benefits from a combination of two LiDAR scanners, which makes the odometry estimation more precise. The scanners are mounted under different angles, thus a larger space around the backpack is scanned. By fusion with GNSS/INS sub-system, the mapping of featureless environments and the georeferencing of resulting point cloud is possible. By deploying SoA methods for registration and the loop closure optimization, it provides sufficient precision for many applications in BIM (Building Information Modeling), inventory check, construction planning, etc. In our indoor experiments, we evaluated our proposed backpack against ZEB-1 solution, using FARO terrestrial scanner as the reference, yielding similar results in terms of precision, while our system provides higher data density, laser intensity readings, and scalability for large environments.
16

Chan, T. O., D. D. Lichti, and D. Belton. "Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5/W2 (October 16, 2013): 61–66. http://dx.doi.org/10.5194/isprsannals-ii-5-w2-61-2013.

17

Glennie, Craig. "Calibration and Kinematic Analysis of the Velodyne HDL-64E S2 Lidar Sensor." Photogrammetric Engineering & Remote Sensing 78, no. 4 (April 1, 2012): 339–47. http://dx.doi.org/10.14358/pers.78.4.339.

18

Alsadik, Bashar, and Fabio Remondino. "Flight Planning for LiDAR-Based UAS Mapping Applications." ISPRS International Journal of Geo-Information 9, no. 6 (June 8, 2020): 378. http://dx.doi.org/10.3390/ijgi9060378.

Abstract:
In the last two decades, unmanned aircraft systems (UAS) were successfully used in different environments for diverse applications like territorial mapping, heritage 3D documentation, as-built surveys, construction monitoring, solar panel placement and assessment, road inspections, etc. These applications were correlated to the onboard sensors like RGB cameras, multi-spectral cameras, thermal sensors, panoramic cameras, or LiDARs. According to the different onboard sensors, a different mission plan is required to satisfy the characteristics of the sensor and the project aims. For UAS LiDAR-based mapping missions, the requirements for flight planning differ from those of conventional UAS image-based flight plans for reasons related to the LiDAR scanning mechanism, scanning range, output scanning rate, field of view (FOV), rotation speed, etc. Although flight planning for image-based UAS missions is a well-known and solved problem, flight planning for LiDAR-based UAS mapping is still an open research topic that needs further investigation. The article presents the development of a LiDAR-based UAS flight planning tool, tested with simulations in real scenarios. The flight planning simulations considered a UAS platform equipped, alternatively, with three low-cost multi-beam LiDARs, namely the Quanergy M8, the Velodyne VLP-16, and the Ouster OS-1-16. The specific characteristics of the three sensors were used to plan flights and acquire dense point clouds. Comparisons and analyses of the results showed clear relationships between point density, flying speed, and flying height.
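The reported relationships between density, speed, and height follow from a simple back-of-the-envelope relation like the one below; the constants are illustrative and not taken from the article.

    def ground_point_density(pulse_rate_hz, ground_fraction, swath_width_m, speed_mps):
        # Approximate points per square metre on the ground for a single pass.
        usable_rate = pulse_rate_hz * ground_fraction   # pulses that actually hit the ground
        return usable_rate / (swath_width_m * speed_mps)

    # e.g. a VLP-16-class scanner: 300k pts/s, ~1/3 hitting a 60 m swath at 5 m/s
    print(ground_point_density(300_000, 1 / 3, 60.0, 5.0), "pts/m^2")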
19

Brščić, Dražen, Rhys Wyn Evans, Matthias Rehm, and Takayuki Kanda. "Using a Rotating 3D LiDAR on a Mobile Robot for Estimation of Person’s Body Angle and Gender." Sensors 20, no. 14 (July 16, 2020): 3964. http://dx.doi.org/10.3390/s20143964.

Abstract:
We studied the use of a rotating multi-layer 3D Light Detection And Ranging (LiDAR) sensor (specifically the Velodyne HDL-32E) mounted on a social robot for the estimation of features of people around the robot. While LiDARs are often used for robot self-localization and people tracking, we were interested in the possibility of using them to estimate the people’s features (states or attributes), which are important in human–robot interaction. In particular, we tested the estimation of the person’s body orientation and their gender. As collecting data in the real world and labeling them is laborious and time consuming, we also looked into other ways for obtaining data for training the estimators: using simulations, or using LiDAR data collected in the lab. We trained convolutional neural network-based estimators and tested their performance on actual LiDAR measurements of people in a public space. The results show that with a rotating 3D LiDAR a usable estimate of the body angle can indeed be achieved (mean absolute error 33.5°), and that using simulated data for training the estimators is effective. For estimating gender, the results are satisfactory (accuracy above 80%) when the person is close enough; however, simulated data do not work well and training needs to be done on actual people measurements.
20

Dogotari, Marcel, Moritz Prüm, Olee Hoi Ying Lam, Hemang Narendra Vithlani, Viet Hung Vu, Bethany Melville, and Rolf Becker. "Development of a UAV-Borne LiDAR System for Surveying Applications." Proceedings 30, no. 1 (May 23, 2020): 75. http://dx.doi.org/10.3390/proceedings2019030075.

Abstract:
A high-resolution UAV-borne LiDAR system with a Velodyne VLP16-Lite at its core was developed for surveying applications. The LiDAR unit was combined with a high-end IMU-GNSS solution for direct georeferencing (APX-15) and a single-board computer for data acquisition (2nd-gen. Intel NUC). Hardware and software solutions were developed for system integration. Moreover, a mechanical mount for isolating the sensitive components of the system from the UAV’s high-frequency vibration was built and evaluated. System architecture and preliminary results were presented. Furthermore, a sensitivity analysis revealed the system’s most important sources of error and suggested ways to overcome these.
21

Nouiraa, H., J. E. Deschaud, and F. Goulettea. "POINT CLOUD REFINEMENT WITH A TARGET-FREE INTRINSIC CALIBRATION OF A MOBILE MULTI-BEAM LIDAR SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 359–66. http://dx.doi.org/10.5194/isprs-archives-xli-b3-359-2016.

Abstract:
LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information that can be used in various applications: mapping of the environment, localization of objects, and detection of changes. With recent developments, multi-beam LIDAR sensors have appeared and are able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform needs an extrinsic calibration so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. An energy function that penalizes points far from local planar surfaces is used to optimize the proposed parameters of the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
22

Axmann, J., and C. Brenner. "MAXIMUM CONSENSUS LOCALIZATION USING LIDAR SENSORS." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2021 (June 17, 2021): 9–16. http://dx.doi.org/10.5194/isprs-annals-v-2-2021-9-2021.

Abstract:
Real-world localization tasks based on LiDAR usually face a high proportion of outliers arising from erroneous measurements and changing environments. However, applications such as autonomous driving require high integrity in all of their components, including localization. Standard localization approaches are often based on (recursive) least squares estimation, for example, using Kalman filters. Since least squares minimization shows a strong susceptibility to outliers, it is not robust. In this paper, we focus on high-integrity vehicle localization and investigate a maximum consensus localization strategy. For our work, we use 2975 epochs from a Velodyne VLP-16 scanner (representing the vehicle scan data), and map data obtained using a Riegl VMX-250 mobile mapping system. We investigate the effects of varying scene geometry on the maximum consensus result by exhaustively computing the consensus values for the entire search space. We analyze the deviations in position and heading for a circular course in a downtown area by comparing the estimation results to a reference trajectory, and show the robustness of the maximum consensus localization.
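A toy 2D version of the maximum-consensus idea (count how many scan points fall within a tolerance of the map for each candidate pose, then keep the best pose) might look like the sketch below; the map, scan, search grid, and tolerance are all invented, and this is not the authors' implementation.

    import numpy as np

    def consensus(scan_xy, map_xy, pose, tol=0.2):
        x, y, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        moved = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        # nearest-map-point distance for every transformed scan point (brute force)
        dist = np.linalg.norm(moved[:, None, :] - map_xy[None, :, :], axis=2).min(axis=1)
        return int((dist < tol).sum())

    map_xy = np.random.rand(200, 2) * 20.0
    scan_xy = map_xy[:80] - np.array([1.0, 0.5])        # the scan is a shifted map subset
    best = max(((consensus(scan_xy, map_xy, (dx, dy, 0.0)), (dx, dy))
                for dx in np.linspace(0, 2, 21) for dy in np.linspace(0, 1, 11)),
               key=lambda r: r[0])
    print(best)    # the consensus count peaks near the true offset (1.0, 0.5)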
23

Shao, Hai Yan, Zhen Hai Zhang, Ke Jie Li, Jian Wang, Tao Xu, Shuai Hou, and Liang Zhang. "Water Hazard Detection Based on 3D LIDAR." Applied Mechanics and Materials 668-669 (October 2014): 1174–77. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.1174.

Abstract:
Autonomous off-road navigation is a highly complicated task for a robot or unmanned ground vehicle (UGV) owing to the different kinds of obstacles it can encounter. In particular, water hazards such as puddles and ponds are very common in outdoor environments and are hard to detect even with ranging devices due to the specular nature of reflection at the air-water interface. In recent years, much research on detecting water bodies has been done, but there has still been very little work on detecting bodies of water that could be navigation hazards, especially at night. In this paper, we used a Velodyne HDL-64E S2 3D LIDAR to detect water hazards. The approach first analyzes the data format and transformation of the 3D LIDAR, then implements the data acquisition and visualization algorithms and integrates the data based on the ICP algorithm. Finally, the water hazard is identified according to the intensity distribution. Experiments were carried out with the experimental car on campus, and the results show promising performance.
24

Toth, C., G. Jozkow, Z. Koppanyi, S. Young, and D. Grejner-Brzezinska. "MONITORING AIRCRAFT MOTION AT AIRPORTS BY LIDAR." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-1 (June 2, 2016): 159–65. http://dx.doi.org/10.5194/isprsannals-iii-1-159-2016.

Abstract:
Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data that are both geometrically and temporally rich, so that the aircraft body can be extracted from the point cloud and, based on consecutive point clouds, motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improve aviation safety at airports. This paper reports on the initial experiences obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
25

Toth, C., G. Jozkow, Z. Koppanyi, S. Young, and D. Grejner-Brzezinska. "MONITORING AIRCRAFT MOTION AT AIRPORTS BY LIDAR." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-1 (June 2, 2016): 159–65. http://dx.doi.org/10.5194/isprs-annals-iii-1-159-2016.

Abstract:
Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data that are both geometrically and temporally rich, so that the aircraft body can be extracted from the point cloud and, based on consecutive point clouds, motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improve aviation safety at airports. This paper reports on the initial experiences obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
26

Ulyashin, Aleksander, and Aleksander Velichko. "Study of methods for measuring distances in scanning range." Transaction of Scientific Papers of the Novosibirsk State Technical University, no. 4 (December 18, 2020): 21–37. http://dx.doi.org/10.17212/2307-6879-2020-4-21-37.

Abstract:
Recently, the market for scanning rangefinders, in other words LIDARs, has begun to develop rapidly due to the new course toward unmanned vehicles and the need for high-precision positioning of objects in construction, geodesy, military affairs, navigation, etc. The leading manufacturers of such scanning devices are currently Velodyne, Ouster, and Luminar. Each company has its own unique approach to creating a LIDAR, with both pros and cons. A LIDAR itself is a scanning device designed to receive and process information about remote objects using active optical systems that exploit the phenomena of light absorption and scattering in optically transparent media. In other words, a LIDAR is a device that uses a laser emitter to detect an object; the beam reflected from the object hits a photodetector, which in turn generates a signal and transmits it to a time-interval meter. The output is a two- or three-dimensional image of the scanned object in the form of points, depending on the type of LIDAR. The more points there are, the clearer the picture; their number directly depends on the number of lasers and the processing speed of the system. This work is devoted to a comparative analysis of methods for constructing LIDAR systems. The analysis is carried out in order to identify the most accurate method of measuring the distance to an object under various conditions, and a new self-oscillating principle of distance measurement is proposed, which allows the measurement accuracy to be brought to a new level.
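The basic time-of-flight relation underlying these rangefinders is distance = (speed of light x round-trip time) / 2; a short worked example with an illustrative 400 ns echo delay:

    c = 299_792_458.0          # speed of light, m/s
    round_trip_s = 400e-9      # 400 ns echo delay (illustrative value)
    distance_m = c * round_trip_s / 2
    print(f"{distance_m:.2f} m")   # about 59.96 m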
27

Zhou, Sanzhang, Feng Kang, Wenbin Li, Jiangming Kan, Yongjun Zheng, and Guojian He. "Extracting Diameter at Breast Height with a Handheld Mobile LiDAR System in an Outdoor Environment." Sensors 19, no. 14 (July 21, 2019): 3212. http://dx.doi.org/10.3390/s19143212.

Abstract:
Mobile laser scanning (MLS) is widely used in the mapping of forest environments. It has become important for extracting the parameters of forest trees using the generated environmental map. In this study, a three-dimensional point cloud map of a forest area was generated using the Velodyne VLP-16 LiDAR system in order to extract the diameter at breast height (DBH) of individual trees. The Velodyne VLP-16 LiDAR system and inertial measurement units (IMU) were used to construct a mobile measurement platform for generating 3D point cloud maps of forest areas. The 3D point cloud map of the forest area was processed offline, and the ground point cloud was removed with the random sample consensus (RANSAC) algorithm. The trees in the experimental area were segmented with a Euclidean clustering algorithm, and the DBH component of each tree point cloud was extracted and projected onto a 2D plane, in which the DBH of the tree was fitted using the RANSAC algorithm. A three-dimensional point cloud map of 71 trees was generated in the experimental area and their DBH estimated. The mean and variance of the absolute error were 0.43 cm and 0.50, respectively. The overall relative error was 2.27%, the corresponding variance was 15.09, and the root mean square error (RMSE) was 0.70 cm. The experimental results were good and met the requirements of forestry mapping, demonstrating the application value and significance of the approach.
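The final DBH step described above amounts to fitting a circle to the 2D projection of stem points at breast height. The sketch below uses a simple algebraic (Kasa) least-squares fit instead of the paper's RANSAC variant, on synthetic stem points with an assumed 15 cm radius.

    import numpy as np

    def fit_circle(xy):
        x, y = xy[:, 0], xy[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        return cx, cy, r

    theta = np.linspace(0, np.pi, 60)                     # half the stem seen by the scanner
    stem = np.column_stack([0.15 * np.cos(theta) + 3.0,   # true radius 0.15 m
                            0.15 * np.sin(theta) + 1.0])
    stem += np.random.normal(0, 0.005, stem.shape)        # 5 mm noise
    cx, cy, r = fit_circle(stem)
    print(f"DBH ~ {2 * r * 100:.1f} cm")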
28

Nouiraa, H., J. E. Deschaud, and F. Goulettea. "POINT CLOUD REFINEMENT WITH A TARGET-FREE INTRINSIC CALIBRATION OF A MOBILE MULTI-BEAM LIDAR SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 359–66. http://dx.doi.org/10.5194/isprsarchives-xli-b3-359-2016.

Abstract:
LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information that can be used in various applications: mapping of the environment, localization of objects, and detection of changes. With recent developments, multi-beam LIDAR sensors have appeared and are able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform needs an extrinsic calibration so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. An energy function that penalizes points far from local planar surfaces is used to optimize the proposed parameters of the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
29

Blanco-Claraco, Jose Luis, Francisco Mañas-Alvarez, Jose Luis Torres-Moreno, Francisco Rodriguez, and Antonio Gimenez-Fernandez. "Benchmarking Particle Filter Algorithms for Efficient Velodyne-Based Vehicle Localization." Sensors 19, no. 14 (July 17, 2019): 3155. http://dx.doi.org/10.3390/s19143155.

Abstract:
Keeping a vehicle well-localized within a prebuilt map is at the core of any autonomous vehicle navigation system. In this work, we show that both standard SIR sampling and rejection-based optimal sampling are suitable for efficient (10 to 20 ms) real-time pose tracking without feature detection, using raw point clouds from a 3D LiDAR. Motivated by the large amount of information captured by these sensors, we perform a systematic statistical analysis of how many points are actually required to reach an optimal ratio between efficiency and positioning accuracy. Furthermore, for initialization under adverse conditions, e.g., poor GPS signal in urban canyons, we also identify the optimal particle filter settings required to ensure convergence. Our findings include that a decimation factor between 100 and 200 on incoming point clouds provides large savings in computational cost with a negligible loss in localization accuracy for a VLP-16 scanner. Furthermore, an initial density of ∼2 particles/m² is required to achieve 100% convergence success for large-scale (∼100,000 m²) outdoor global localization without any additional hint from GPS or magnetic field sensors. All implementations have been released as open-source software.
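The decimation step the authors analyse is straightforward to picture: keep only every k-th point of an incoming sweep before weighting particles. A minimal sketch with an illustrative sweep size and factor (the particle filter itself is not reproduced here):

    import numpy as np

    def decimate(cloud, factor=100):
        # cloud: (N, 3) array; returns roughly N / factor points.
        return cloud[::factor]

    cloud = np.random.rand(300_000, 3) * 100.0   # a full-rate sweep-sized cloud
    small = decimate(cloud, 150)
    print(cloud.shape[0], "->", small.shape[0], "points used for weighting")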
30

Fekry, Reda, Wei Yao, Lin Cao, and Xin Shen. "Marker-Less UAV-LiDAR Strip Alignment in Plantation Forests Based on Topological Persistence Analysis of Clustered Canopy Cover." ISPRS International Journal of Geo-Information 10, no. 5 (April 29, 2021): 284. http://dx.doi.org/10.3390/ijgi10050284.

Abstract:
A holistic strategy is established for automated UAV-LiDAR strip adjustment for plantation forests, based on hierarchical density-based clustering analysis of the canopy cover. The method involves three key stages: keypoint extraction, feature similarity and correspondence, and rigid transformation estimation. Initially, the HDBSCAN algorithm is used to cluster the scanned canopy cover, and the keypoints are marked using topological persistence analysis of the individual clusters. Afterward, the feature similarity is calculated by considering the linear and angular relationships between each point and the pointset centroid. The one-to-one feature correspondence is retrieved by solving the assignment problem on the similarity score function using the Kuhn–Munkres algorithm, generating a set of matching pairs. Finally, 3D rigid transformation parameters are determined by permutations over all conceivable pair combinations within the correspondences, whereas the best pair combination is that which yields the maximum count of matched points achieving distance residuals within the specified tolerance. Experimental data covering eighteen subtropical forest plots acquired from the GreenValley and Riegl UAV-LiDAR platforms in two scan modes are used to validate the method. The results are extremely promising for redwood and poplar tree species from both the Velodyne and Riegl UAV-LiDAR datasets. The minimal mean distance residuals of 31 cm and 36 cm are achieved for the coniferous and deciduous plots of the Velodyne data, respectively, whereas their corresponding values are 32 cm and 38 cm for the Riegl plots. Moreover, the method achieves both higher matching percentages and lower mean distance residuals by up to 28% and 14 cm, respectively, compared to the baseline method, except in the case of plots with extremely low tree height. Nevertheless, the mean planimetric distance residual achieved by the proposed method is lower by 13 cm.
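The one-to-one correspondence step can be illustrated with SciPy's implementation of the Kuhn-Munkres (Hungarian) algorithm; the similarity matrix below is synthetic and rigged so that the diagonal is the correct matching, which the assignment recovers.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(2)
    similarity = rng.random((6, 6))
    similarity[np.arange(6), np.arange(6)] += 1.0   # make the diagonal the best match

    # linear_sum_assignment minimises cost, so negate the similarity score
    rows, cols = linear_sum_assignment(-similarity)
    print(list(zip(rows, cols)))                    # expect (0, 0), (1, 1), ...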
31

Rasul, Abdullah, Jaho Seo, and Amir Khajepour. "Development of Sensing Algorithms for Object Tracking and Predictive Safety Evaluation of Autonomous Excavators." Applied Sciences 11, no. 14 (July 9, 2021): 6366. http://dx.doi.org/10.3390/app11146366.

Abstract:
This article presents the sensing and safety algorithms for autonomous excavators operating on construction sites. Safety is a key concern for autonomous construction to reduce collisions and machinery damage. Taking this point into consideration, our study deals with LiDAR data processing that allows for object detection, motion tracking/prediction, and track management, as well as safety evaluation in terms of potential collision risk. In the safety algorithm developed in this study, potential collision risks can be evaluated based on information from excavator working areas, predicted states of detected objects, and calculated safety indices. Experiments were performed using a modified mini hydraulic excavator with Velodyne VLP-16 LiDAR. Experimental validations prove that the developed algorithms are capable of tracking objects, predicting their future states, and assessing the degree of collision risks with respect to distance and time. Hence, the proposed algorithms can be applied to diverse autonomous machines for safety enhancement.
32

Sujiwo, Adi, Tomohito Ando, Eijiro Takeuchi, Yoshiki Ninomiya, and Masato Edahiro. "Monocular Vision-Based Localization Using ORB-SLAM with LIDAR-Aided Mapping in Real-World Robot Challenge." Journal of Robotics and Mechatronics 28, no. 4 (August 19, 2016): 479–90. http://dx.doi.org/10.20965/jrm.2016.p0479.

Abstract:
[Figure: Monocular visual localization in Tsukuba Challenge 2015. Left: result of localization inside the map created by ORB-SLAM. Right: position tracking at the starting point.] For the 2015 Tsukuba Challenge, we realized an implementation of vision-based localization based on ORB-SLAM. Our method combined mapping based on ORB-SLAM and Velodyne LIDAR SLAM, and utilized these maps in a localization process using only a monocular camera. We also applied a sensor fusion method using the odometer and ORB-SLAM across all maps. The combined method delivered better accuracy than the original ORB-SLAM, which suffered from scale ambiguities and map distance distortion. This paper reports on our experience of using ORB-SLAM for visual localization and describes the difficulties encountered.
33

Sobczak, Łukasz, Katarzyna Filus, Adam Domański, and Joanna Domańska. "LiDAR Point Cloud Generation for SLAM Algorithm Evaluation." Sensors 21, no. 10 (May 11, 2021): 3313. http://dx.doi.org/10.3390/s21103313.

Abstract:
With the emerging interest in autonomous driving at levels 4 and 5 comes a necessity to provide accurate and versatile frameworks to evaluate the algorithms used in autonomous vehicles. There is a clear gap in the field of autonomous driving simulators. It covers testing and parameter tuning of SLAM, a key component of autonomous driving systems; frameworks targeting off-road and safety-critical environments; and taking into consideration the non-idealistic nature of real-life sensors, the associated phenomena, and measurement errors. We created a LiDAR simulator that delivers accurate 3D point clouds in real time. The point clouds are generated based on the sensor placement and the LiDAR type, which can be set using configurable parameters. We evaluate our solution by comparing the results obtained using an actual device, a Velodyne VLP-16, on real-life tracks with the corresponding simulations. We measure the error values obtained using the Google Cartographer SLAM algorithm and the distance between the simulated and real point clouds to verify their accuracy. The results show that our simulation (which incorporates measurement errors and the rolling-shutter effect) produces data that can successfully imitate real-life point clouds. Due to dedicated mechanisms, it is compatible with the Robot Operating System (ROS) and can be used interchangeably with data from actual sensors, which enables easy testing, SLAM algorithm parameter tuning, and deployment.
34

Pereira, Luísa Gomes, Paulo Fernandez, Sandra Mourato, Jorge Matos, Cedric Mayer, and Fábio Marques. "Quality Control of Outsourced LiDAR Data Acquired with a UAV: A Case Study." Remote Sensing 13, no. 3 (January 26, 2021): 419. http://dx.doi.org/10.3390/rs13030419.

Abstract:
Over the last few decades, we witnessed a revolution in acquiring very high resolution and accurate geo-information. One of the reasons was the advances in photonics and LiDAR, which had a remarkable impact in applications requiring information with high accuracy and/or elevated completeness, such as flood modelling, forestry, construction, and mining. Also, miniaturization within electronics played an important role as it allowed smaller and lighter aerial cameras and LiDAR systems to be carried by unmanned aerial vehicles (UAV). While the use of aerial imagery acquired with UAV is becoming a standard procedure in geo-information extraction for several applications, the use of LiDAR for this purpose is still in its infancy. In several countries, companies have started to commercialize products derived from LiDAR data acquired using a UAV but not always with the necessary expertise and experience. The LIDAR-derived products’ price has become very attractive, but their quality must meet the contracted specifications. Few studies have reported on the quality of outsourced LiDAR data acquired with UAV and the problems that need to be handled during production. There can be significant differences between the planning and execution of a commercial project and a research field campaign, particularly concerning the size of the surveyed area, the volume of the acquired data, and the strip processing. This work addresses the quality control of LiDAR UAV data through outsourcing to develop a modelling-based flood forecast and alert system. The contracted company used the Phoenix Scout-16 from Phoenix LiDAR Systems, carrying a Velodyne VLP-16 and mounted on a DJI Matrice 600 PRO Hexacopter for an area of 560 ha along a flood-prone area of the Águeda River in Central Portugal.
35

Gehrung, J., M. Hebel, M. Arens, and U. Stilla. "AN APPROACH TO EXTRACT MOVING OBJECTS FROM MLS DATA USING A VOLUMETRIC BACKGROUND REPRESENTATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-1/W1 (May 30, 2017): 107–14. http://dx.doi.org/10.5194/isprs-annals-iv-1-w1-107-2017.

Abstract:
Data recorded by mobile LiDAR systems (MLS) can be used for the generation and refinement of city models or for the automatic detection of long-term changes in the public road space. Since for this task only static structures are of interest, all mobile objects need to be removed. This work presents a straightforward but powerful approach to remove the subclass of moving objects. A probabilistic volumetric representation is utilized to separate MLS measurements recorded by a Velodyne HDL-64E into mobile objects and static background. The method was subjected to a quantitative and a qualitative examination using multiple datasets recorded by a mobile mapping platform. The results show that depending on the chosen octree resolution 87-95% of the measurements are labeled correctly.
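A probabilistic volumetric background model boils down to per-voxel occupancy updates; the minimal log-odds sketch below illustrates the idea (a full octree such as the one used above is not reproduced, and the increments and threshold are assumed values).

    from collections import defaultdict
    import numpy as np

    L_HIT, L_MISS = 0.85, -0.4            # log-odds increments (assumed values)

    def update(log_odds, voxel, hit):
        log_odds[voxel] = np.clip(log_odds[voxel] + (L_HIT if hit else L_MISS), -4.0, 4.0)

    def is_static(log_odds, voxel, threshold=1.0):
        # Voxels that stay occupied across many scans drift to high log-odds.
        return log_odds.get(voxel, 0.0) > threshold

    lo = defaultdict(float)
    for _ in range(10):
        update(lo, (12, 5, 3), hit=True)  # repeatedly occupied -> static background
    update(lo, (40, 2, 1), hit=True)      # seen only once -> likely a moving object
    print(is_static(lo, (12, 5, 3)), is_static(lo, (40, 2, 1)))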
36

Behley, Jens, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Jürgen Gall, and Cyrill Stachniss. "Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset." International Journal of Robotics Research 40, no. 8-9 (April 20, 2021): 959–67. http://dx.doi.org/10.1177/02783649211006735.

Abstract:
A holistic semantic scene understanding exploiting all available sensor modalities is a core capability to master self-driving in complex everyday traffic. To this end, we present the SemanticKITTI dataset that provides point-wise semantic annotations of Velodyne HDL-64E point clouds of the KITTI Odometry Benchmark. Together with the data, we also published three benchmark tasks for semantic scene understanding covering different aspects of semantic scene understanding: (1) semantic segmentation for point-wise classification using single or multiple point clouds as input; (2) semantic scene completion for predictive reasoning on the semantics and occluded regions; and (3) panoptic segmentation combining point-wise classification and assigning individual instance identities to separate objects of the same class. In this article, we provide details on our dataset showing an unprecedented number of fully annotated point cloud sequences, more information on our labeling process to efficiently annotate such a vast amount of point clouds, and lessons learned in this process. The dataset and resources are available at http://www.semantic-kitti.org .
37

Pricope, Narcisa Gabriela, Joanne Nancie Halls, Kerry Lynn Mapes, Joseph Britton Baxley, and James JyunYueh Wu. "Quantitative Comparison of UAS-Borne LiDAR Systems for High-Resolution Forested Wetland Mapping." Sensors 20, no. 16 (August 10, 2020): 4453. http://dx.doi.org/10.3390/s20164453.

Abstract:
Wetlands provide critical ecosystem services across a range of environmental gradients and are at heightened risk of degradation from anthropogenic pressures and continued development, especially in coastal regions. There is a growing need for high-resolution (spatially and temporally) habitat identification and precise delineation of wetlands across a variety of stakeholder groups, including wetlands loss mitigation programs. Traditional wetland delineations are costly, time-intensive and can physically degrade the systems that are being surveyed, while aerial surveys are relatively fast and relatively unobtrusive. To assess the efficacy and feasibility of using two variable-cost LiDAR sensors mounted on a commercial hexacopter unmanned aerial system (UAS) in deriving high resolution topography, we conducted nearly concomitant flights over a site located in the Atlantic Coastal plain that contains a mix of palustrine forested wetlands, upland coniferous forest, upland grass and bare ground/dirt roads. We compared point clouds and derived topographic metrics acquired using the Quanergy M8 and the Velodyne HDL-32E LiDAR sensors with airborne LiDAR and results showed that the less expensive and lighter payload sensor outperforms the more expensive one in deriving high resolution, high accuracy ground elevation measurements under a range of canopy cover densities and for metrics of point cloud density and digital terrain computed both globally and locally using variable size tessellations. The mean point cloud density was not significantly different between wetland and non-wetland areas, but the two sensors were significantly different by wetland/non-wetland type. Ultra-high-resolution LiDAR-derived topography models can fill evolving wetlands mapping needs and increase accuracy and efficiency of detection and prediction of sensitive wetland ecosystems, especially for heavily forested coastal wetland systems.
APA, Harvard, Vancouver, ISO, and other styles
38

Yan, Wanqian, Haiyan Guan, Lin Cao, Yongtao Yu, Sha Gao, and JianYong Lu. "An Automated Hierarchical Approach for Three-Dimensional Segmentation of Single Trees Using UAV LiDAR Data." Remote Sensing 10, no. 12 (December 10, 2018): 1999. http://dx.doi.org/10.3390/rs10121999.

Full text
Abstract:
Forests play a key role in terrestrial ecosystems, and the variables extracted from single trees can be used in various fields and applications for evaluating forest production and assessing forest ecosystem services. In this study, we developed an automated hierarchical single-tree segmentation approach based on high-density three-dimensional (3D) Unmanned Aerial Vehicle (UAV) point clouds. First, the approach obtains normalized non-ground UAV points in data preprocessing; then, a voxel-based mean shift algorithm roughly classifies the non-ground points into well-detected and under-segmented clusters. Potential tree apices for each under-segmented cluster are obtained from profile shape curves and finally input to the normalized cut (NCut) algorithm, which iteratively segments the cluster into single trees. We evaluated the proposed method using datasets acquired by a Velodyne 16E LiDAR system mounted on a multi-rotor UAV. The results showed that the proposed method achieves average correctness, completeness, and overall accuracy of 0.90, 0.88, and 0.89, respectively, in delineating single trees. Comparative analysis demonstrated that our method provides a reliable and robust solution for segmenting single trees from high-density UAV LiDAR data.
APA, Harvard, Vancouver, ISO, and other styles
39

Yan, Wanqian, Haiyan Guan, Lin Cao, Yongtao Yu, Cheng Li, and JianYong Lu. "A Self-Adaptive Mean Shift Tree-Segmentation Method Using UAV LiDAR Data." Remote Sensing 12, no. 3 (February 5, 2020): 515. http://dx.doi.org/10.3390/rs12030515.

Full text
Abstract:
Unmanned aerial vehicles using light detection and ranging (UAV LiDAR) with high spatial resolution have shown great potential in forest applications because they can capture the vertical structure of forests. Individual tree segmentation is the foundation of many forest research works and applications. The traditional fixed-bandwidth mean shift has been applied to individual tree segmentation and proved to be robust. However, fixed-bandwidth segmentation methods cannot accommodate varying crown sizes, resulting in omission or commission errors. Therefore, to increase tree-segmentation accuracy, we propose a self-adaptive bandwidth estimation method that estimates the optimal kernel bandwidth automatically without any prior knowledge of crown size. First, from the global maximum point, we divide the three-dimensional (3D) space into a set of angular sectors; for each sector a canopy surface is simulated and the potential tree crown boundaries are identified to estimate the average crown width, which serves as the kernel bandwidth. Afterwards, we use a mean shift with the automatically estimated kernel bandwidth to extract individual tree points. The method is iteratively applied within a given area until all trees are segmented. The proposed method was tested on 7 plots acquired by a Velodyne 16E LiDAR system, including 3 simple plots and 4 complex plots; 95% and 80% of trees were correctly segmented, respectively. Comparative experiments show that our method improves both segmentation accuracy and computational efficiency.
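To make the role of the estimated bandwidth concrete, here is a minimal sketch that clusters canopy points with mean shift once an average crown width is available; scikit-learn's MeanShift stands in for the authors' own implementation, and the 3.5 m crown width in the usage line is an assumed value.

import numpy as np
from sklearn.cluster import MeanShift

def segment_trees(points_xyz, crown_width):
    # Cluster on planimetric coordinates with the crown width as kernel bandwidth.
    ms = MeanShift(bandwidth=crown_width, bin_seeding=True)
    tree_ids = ms.fit_predict(points_xyz[:, :2])
    return tree_ids, ms.cluster_centers_

# tree_ids, apices = segment_trees(canopy_points, crown_width=3.5)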
APA, Harvard, Vancouver, ISO, and other styles
40

Bronzino, G. P. C., N. Grasso, F. Matrone, A. Osello, and M. Piras. "LASER-VISUAL-INERTIAL ODOMETRY BASED SOLUTION FOR 3D HERITAGE MODELING: THE SANCTUARY OF THE BLESSED VIRGIN OF TROMPONE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W15 (August 20, 2019): 215–22. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w15-215-2019.

Full text
Abstract:
The advent of new mobile mapping systems that integrate different sensors has made it easier to acquire 3D information at high speed. Today, technological development has enabled portable systems particularly suitable for indoor surveys, which mainly integrate LiDAR devices, cameras and inertial platforms and make it possible to create a full 3D model of the environment quickly and easily. However, the performance of these instruments differs depending on the acquisition context (indoor or outdoor), the characteristics of the scene (for example lighting, the presence of objects and people, reflecting surfaces, textures) and, above all, the mapping and localization algorithms implemented in the devices. The purpose of this study is to analyse the results, and their accuracy, deriving from a survey conducted with the KAARTA Stencil 2 handheld system. This instrument, composed of a 3D LiDAR Velodyne VLP-16, a MEMS inertial platform and a feature-tracking camera, is able to build a 3D map of the environment over time. Specifically, the acquisition tests were carried out in the context of the metric documentation of an architectural heritage site, in order to extract architectural detail for the future reconstruction of virtual and augmented reality environments and for Historical Building Information Modeling purposes. The achieved results were analysed and the discrepancies from reference LiDAR data were computed for a final evaluation. The system was tested in the church and cloister of the Sanctuary of the Beata Vergine del Trompone in Moncrivello (VC), Italy.
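The discrepancy check against reference LiDAR data can be approximated with nearest-neighbour cloud-to-cloud distances; the sketch below uses scipy's cKDTree and assumes the handheld and reference clouds are already co-registered, which is a simplification of the authors' workflow.

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(test_xyz, reference_xyz):
    # Distance from each test point to its nearest reference point.
    tree = cKDTree(reference_xyz)
    d, _ = tree.query(test_xyz, k=1)
    return d

# d = cloud_to_cloud(stencil_xyz, reference_tls_xyz)
# print(d.mean(), d.std(), np.percentile(d, 95))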
APA, Harvard, Vancouver, ISO, and other styles
41

Tsai, G. J., K. W. Chiang, and N. El-Sheimy. "INS-AIDED 3D LIDAR SEAMLESS MAPPING IN CHALLENGING ENVIRONMENT FOR FUTURE HIGH DEFINITION MAP." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W16 (September 17, 2019): 251–57. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w16-251-2019.

Full text
Abstract:
With advances in computing and sensor technologies, onboard systems can deal with large amounts of data and achieve continuous, accurate real-time processing. To further enhance positioning performance, the high definition map (HD map) is one of the game changers for future autonomous driving. Instead of directly using Inertial Navigation System and Global Navigation Satellite System (INS/GNSS) navigation solutions to conduct Direct Geo-referencing (DG) and acquire 3D mapping information, Simultaneous Localization and Mapping (SLAM) relies heavily on environmental features to derive position and attitude while conducting the mapping at the same time. In this research, a new structure is proposed to integrate INS/GNSS into the LiDAR Odometry and Mapping (LOAM) algorithm and enhance the mapping performance. The first contribution is using INS/GNSS to provide short-term relative position information for the mapping process when LiDAR odometry fails. A checking process detects divergence of the LiDAR odometry based on the residuals of feature correspondences and the INS/GNSS innovation sequence. More importantly, by integrating with INS/GNSS, the whole global map is located in the standard global coordinate system (WGS84), so it can be shared and employed easily and seamlessly. The designed land vehicle platform includes a commercial INS/GNSS integrated product as a reference, a relatively low-cost, lower-grade INS, and a Velodyne LiDAR with 16 laser channels. The field test proceeds from outdoors into an indoor underground parking lot, and the final solution using the proposed method shows a significant improvement, building a more accurate and reliable map for future use.
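A schematic of the divergence check described above might look like the sketch below: LiDAR odometry is flagged when feature-correspondence residuals or the INS/GNSS innovation grow too large, and the short-term INS/GNSS relative motion is used instead. The threshold values are illustrative assumptions, not figures from the paper.

RESIDUAL_THRESH = 0.5     # m, mean residual of feature correspondences (assumed)
INNOVATION_THRESH = 1.0   # m, INS/GNSS innovation magnitude (assumed)

def lidar_odometry_diverged(mean_residual, innovation):
    return mean_residual > RESIDUAL_THRESH or innovation > INNOVATION_THRESH

def select_pose_update(lidar_delta, ins_delta, mean_residual, innovation):
    # Fall back to the INS/GNSS relative pose whenever LiDAR odometry is flagged.
    return ins_delta if lidar_odometry_diverged(mean_residual, innovation) else lidar_delta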
APA, Harvard, Vancouver, ISO, and other styles
42

Brown, Alan S. "Hiding in Plain Sight." Mechanical Engineering 133, no. 02 (February 1, 2011): 31. http://dx.doi.org/10.1115/1.2011-feb-3.

Full text
Abstract:
This article presents an overview of Google's autonomous car. There are three components that make Google's driverless cars go: sensors, software, and Google's mapping database. Most of these sensors are neatly tucked away in the car's body rather than mounted, laboratory-style, on a roof rack. The exception is the rotating sensor mounted on the roof. It is a Velodyne high-density LIDAR—light detection and ranging—that combines 64 pulsed lasers into a single unit. The system rotates 10 times per second, capturing 1.3 million points to map the car's surroundings with centimeter-scale resolution in three dimensions. This lets it detect pavement up to 165 feet ahead or cars and trees within 400 feet. Automotive radars, front and back, provide greater range at lower resolution. A high-resolution video camera inside the car detects traffic signals, as well as pedestrians, bicyclists, and other moving obstacles. The cars also track their positions with a GPS and an inertial motion sensor.
APA, Harvard, Vancouver, ISO, and other styles
43

Julge, K., A. Ellmann, T. Vajakas, and R. Kolka. "INITIAL TESTS AND ACCURACY ASSESMENT OF A COMPACT MOBILE LASER SCANNING SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 633–38. http://dx.doi.org/10.5194/isprsarchives-xli-b1-633-2016.

Full text
Abstract:
Mobile laser scanning (MLS) is a faster and cost-effective alternative to static laser scanning, even though there is a slight trade-off in accuracy. This contribution describes a compact mobile laser scanning system mounted on a vehicle. The technical parameters of the used system components, i.e. a small LIDAR sensor Velodyne VLP-16 and a dual antenna GNSS/INS system Advanced Navigation Spatial Dual, are reviewed, along with the integration of these components for spatial data acquisition. Calculation principles of 3D coordinates from the real-time data of all the involved sensors are discussed. The field tests were carried out in a controlled environment of a parking lot and at different velocities. Experiments were carried out to test the ability of the GNSS/INS system to cope with difficult conditions, e.g. sudden movements due to cornering or swerving. The accuracy of the resulting MLS point cloud is evaluated with respect to high-accuracy static terrestrial laser scanning data. Problems regarding combining LIDAR, GNSS and INS sensors are outlined, as well as the initial accuracy assessments. Initial tests revealed errors related to insufficient quality of inertial data and a need for the trajectory post-processing calculations. Although this study was carried out while the system was mounted on a car, there is potential for operating the system on an unmanned aerial vehicle, all-terrain vehicle or in a backpack mode due to its relatively compact size.
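The calculation principles mentioned here reduce to the standard direct-georeferencing relation X_map = X_gnss + R_body_to_map (R_boresight x_lidar + lever_arm); a short numpy sketch follows, with the boresight rotation and lever arm treated as known calibration inputs (variable names are illustrative).

import numpy as np

def georeference(x_lidar, X_gnss, R_body_to_map, R_boresight, lever_arm):
    # x_lidar      : 3-vector point in the scanner frame
    # X_gnss       : GNSS/INS position of the body frame in the mapping frame
    # R_body_to_map: 3x3 rotation from INS attitude (roll, pitch, heading)
    # R_boresight  : 3x3 scanner-to-body rotation from system calibration
    # lever_arm    : scanner origin offset in the body frame
    return X_gnss + R_body_to_map @ (R_boresight @ x_lidar + lever_arm)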
APA, Harvard, Vancouver, ISO, and other styles
44

Chan, T. O., and D. D. Lichti. "Geometric Modelling of Octagonal Lamp Poles." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 145–50. http://dx.doi.org/10.5194/isprsarchives-xl-5-145-2014.

Full text
Abstract:
Lamp poles are one of the most abundant highway and community components in modern cities. Their supporting parts are primarily tapered octagonal cones specifically designed for wind resistance. The geometry and the positions of the lamp poles are important information for various applications. For example, they are important to monitoring deformation of aged lamp poles, maintaining an efficient highway GIS system, and also facilitating possible feature-based calibration of mobile LiDAR systems. In this paper, we present a novel geometric model for octagonal lamp poles. The model consists of seven parameters in which a rotation about the z-axis is included, and points are constrained by the trigonometric property of 2D octagons after applying the rotations. For the geometric fitting of the lamp pole point cloud captured by a terrestrial LiDAR, accurate initial parameter values are essential. They can be estimated by first fitting the points to a circular cone model and this is followed by some basic point cloud processing techniques. The model was verified by fitting both simulated and real data. The real data includes several lamp pole point clouds captured by: (1) Faro Focus 3D and (2) Velodyne HDL-32E. The fitting results using the proposed model are promising, and up to 2.9 mm improvement in fitting accuracy was realized for the real lamp pole point clouds compared to using the conventional circular cone model. The overall result suggests that the proposed model is appropriate and rigorous.
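As a rough sketch of the initial-value step mentioned above, the pole axis can be taken from the principal direction of the points and the taper from a linear fit of radius against height; this approximates the circular cone used for initialization, not the full seven-parameter octagonal model, and the code is only an assumed reading of that step.

import numpy as np

def initial_cone(points):
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal direction of the elongated pole approximates its axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])
    h = centered @ axis                                   # height along the axis
    radial = np.linalg.norm(centered - np.outer(h, axis), axis=1)
    taper, r0 = np.polyfit(h, radial, 1)                  # radius ~ r0 + taper * h
    return centroid, axis, r0, taper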
APA, Harvard, Vancouver, ISO, and other styles
45

Julge, K., A. Ellmann, T. Vajakas, and R. Kolka. "INITIAL TESTS AND ACCURACY ASSESMENT OF A COMPACT MOBILE LASER SCANNING SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 633–38. http://dx.doi.org/10.5194/isprs-archives-xli-b1-633-2016.

Full text
Abstract:
Mobile laser scanning (MLS) is a faster and cost-effective alternative to static laser scanning, even though there is a slight trade-off in accuracy. This contribution describes a compact mobile laser scanning system mounted on a vehicle. The technical parameters of the used system components, i.e. a small LIDAR sensor Velodyne VLP-16 and a dual antenna GNSS/INS system Advanced Navigation Spatial Dual, are reviewed, along with the integration of these components for spatial data acquisition. Calculation principles of 3D coordinates from the real-time data of all the involved sensors are discussed. The field tests were carried out in a controlled environment of a parking lot and at different velocities. Experiments were carried out to test the ability of the GNSS/INS system to cope with difficult conditions, e.g. sudden movements due to cornering or swerving. The accuracy of the resulting MLS point cloud is evaluated with respect to high-accuracy static terrestrial laser scanning data. Problems regarding combining LIDAR, GNSS and INS sensors are outlined, as well as the initial accuracy assessments. Initial tests revealed errors related to insufficient quality of inertial data and a need for the trajectory post-processing calculations. Although this study was carried out while the system was mounted on a car, there is potential for operating the system on an unmanned aerial vehicle, all-terrain vehicle or in a backpack mode due to its relatively compact size.
APA, Harvard, Vancouver, ISO, and other styles
46

Bedkowski, Janusz, Hubert Nowak, Blazej Kubiak, Witold Studzinski, Maciej Janeczek, Szymon Karas, Adam Kopaczewski, et al. "A Novel Approach to Global Positioning System Accuracy Assessment, Verified on LiDAR Alignment of One Million Kilometers at a Continent Scale, as a Foundation for Autonomous DRIVING Safety Analysis." Sensors 21, no. 17 (August 24, 2021): 5691. http://dx.doi.org/10.3390/s21175691.

Full text
Abstract:
This paper concerns a new methodology for accuracy assessment of GPS (Global Positioning System) verified experimentally with LiDAR (Light Detection and Ranging) data alignment at continent scale for autonomous driving safety analysis. Accuracy of an autonomous driving vehicle positioning within a lane on the road is one of the key safety considerations and the main focus of this paper. The accuracy of GPS positioning is checked by comparing it with mobile mapping tracks in the recorded high-definition source. The aim of the comparison is to see if the GPS positioning remains accurate up to the dimensions of the lane where the vehicle is driving. The goal is to align all the available LiDAR car trajectories to confirm the accuracy of GNSS + INS (Global Navigation Satellite System + Inertial Navigation System). For this reason, the use of LiDAR metric measurements for data alignment implemented using SLAM (Simultaneous Localization and Mapping) was investigated, assuring no systematic drift by applying GNSS+INS constraints. The methodology was verified experimentally using arbitrarily chosen measurement instruments (NovAtel GNSS + INS, Velodyne HDL32 LiDAR) mounted onto mobile mapping systems. The accuracy was assessed and confirmed by the alignment of 32,785 trajectories with a total length of 1,159,956.9 km and a total of 186.4 × 10⁹ optimized parameters (six degrees of freedom of poses) that cover the United States region in the 2016–2019 period. The alignment improves the trajectories; thus the final map is consistent. The proposed methodology extends the existing methods of global positioning system accuracy assessment, focusing on realistic environmental and driving conditions. The impact of global positioning system accuracy on autonomous car safety is discussed. It is shown that 99% of the assessed data satisfy the safety requirements (driving within lanes of 3.6 m) for Mid-Size (width 1.85 m, length 4.87 m) vehicles and 95% for Six-Wheel Pickup (width 2.03–2.43 m, length 5.32–6.76 m) vehicles. The conclusion is that this methodology has great potential for global positioning accuracy assessment at the global scale for autonomous driving applications. LiDAR data alignment is introduced as a novel approach to GNSS + INS accuracy confirmation. Further research is needed to solve the identified challenges.
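The lane-containment criterion quoted above can be checked per epoch by asking whether the lateral positioning error still leaves room for the vehicle inside the lane; in the sketch below the lane and vehicle widths follow the abstract, while treating the aligned LiDAR trajectory as ground truth is a simplifying assumption.

import numpy as np

def fraction_within_lane(lateral_errors, vehicle_width=1.85, lane_width=3.6):
    # Allowable one-sided offset before the vehicle crosses the lane boundary.
    margin = (lane_width - vehicle_width) / 2.0
    return np.mean(np.abs(lateral_errors) <= margin)

# errors = gnss_ins_lateral - lidar_aligned_lateral   # per-epoch offsets in metres
# print(fraction_within_lane(errors))                 # compare with the reported 99%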
APA, Harvard, Vancouver, ISO, and other styles
47

Kim, Eung-su, and Soon-Yong Park. "Calibration of 3-D Mapping System Consisted of 16-Channel Velodyne LiDAR and 6-Channel Cameras by Matching Multiple 3-D Planes." Journal of Institute of Control, Robotics and Systems 26, no. 5 (May 31, 2020): 363–72. http://dx.doi.org/10.5302/j.icros.2020.20.0040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Ilci, Veli, and Charles Toth. "High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation." Sensors 20, no. 3 (February 7, 2020): 899. http://dx.doi.org/10.3390/s20030899.

Full text
Abstract:
Recent developments in sensor technologies such as Global Navigation Satellite Systems (GNSS), Inertial Measurement Unit (IMU), Light Detection and Ranging (LiDAR), radar, and camera have led to emerging state-of-the-art autonomous systems, such as driverless vehicles or UAS (Unmanned Airborne Systems) swarms. These technologies necessitate the use of accurate object space information about the physical environment around the platform. This information can be generally provided by the suitable selection of the sensors, including sensor types and capabilities, the number of sensors, and their spatial arrangement. Since all these sensor technologies have different error sources and characteristics, rigorous sensor modeling is needed to eliminate/mitigate errors to obtain an accurate, reliable, and robust integrated solution. Mobile mapping systems are very similar to autonomous vehicles in terms of being able to reconstruct the environment around the platforms. However, they differ a lot in operations and objectives. Mobile mapping vehicles use professional grade sensors, such as geodetic grade GNSS, tactical grade IMU, mobile LiDAR, and metric cameras, and the solution is created in post-processing. In contrast, autonomous vehicles use simple/inexpensive sensors, require real-time operations, and are primarily interested in identifying and tracking moving objects. In this study, the main objective was to assess the performance potential of autonomous vehicle sensor systems to obtain high-definition maps based on only using Velodyne sensor data for creating accurate point clouds. In other words, no other sensor data were considered in this investigation. The results have confirmed that cm-level accuracy can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
49

Hadas, E., G. Jozkow, A. Walicka, and A. Borkowski. "DETERMINING GEOMETRIC PARAMETERS OF AGRICULTURAL TREES FROM LASER SCANNING DATA OBTAINED WITH UNMANNED AERIAL VEHICLE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 407–10. http://dx.doi.org/10.5194/isprs-archives-xlii-2-407-2018.

Full text
Abstract:
The estimation of dendrometric parameters has become an important issue for agriculture planning and for the efficient management of orchards. Airborne Laser Scanning (ALS) data is widely used in forestry and many algorithms for automatic estimation of dendrometric parameters of individual forest trees were developed. Unfortunately, due to significant differences between forest and fruit trees, some contradictions exist against adopting the achievements of forestry science to agricultural studies indiscriminately. In this study we present a methodology to identify individual trees in an apple orchard and estimate the heights of individual trees, using high-density LiDAR data (3200 points/m²) obtained with an Unmanned Aerial Vehicle (UAV) equipped with a Velodyne HDL32-E sensor. The processing strategy combines the alpha-shape algorithm, principal component analysis (PCA) and detection of local minima. The alpha-shape algorithm is used to separate tree rows. In order to separate trees in a single row, we detect local minima on the canopy profile and slice polygons from the alpha-shape results. We successfully separated 92% of trees in the test area; 6% of trees in the orchard were not separated from each other and 2% were sliced into two polygons. The RMSE of tree heights determined from the point clouds compared to field measurements was equal to 0.09 m, and the correlation coefficient was equal to 0.96. The results confirm the usefulness of LiDAR data from a UAV platform in orchard inventory.
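The row-splitting step can be illustrated by searching for local minima along a canopy height profile, i.e. the gaps between neighbouring crowns; the sketch below uses scipy's find_peaks on the negated profile, with the smoothing window and prominence chosen as assumptions rather than the authors' settings.

import numpy as np
from scipy.signal import find_peaks

def crown_boundaries(profile_heights, prominence=0.3):
    # Smooth the profile, then treat peaks of the negated profile as crown gaps.
    smoothed = np.convolve(profile_heights, np.ones(5) / 5, mode="same")
    minima, _ = find_peaks(-smoothed, prominence=prominence)
    return minima   # indices along the row profile where trees are sliced apart

# cuts = crown_boundaries(canopy_profile)   # profile sampled along one tree row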
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Kyu-Won, Jun-Hyuck Im, Moon-Beom Heo, and Gyu-In Jee. "Precise Vehicle Position and Heading Estimation Using a Binary Road Marking Map." Journal of Sensors 2019 (January 20, 2019): 1–18. http://dx.doi.org/10.1155/2019/1296175.

Full text
Abstract:
Road markings are always present on roads to guide and control traffic. Therefore, they can be used at any time for vehicle localization. Moreover, they can be easily extracted by using light detection and ranging (LIDAR) intensity because they are brightly colored. We propose a vehicle localization method using a 2D road marking grid map. The grid map stores the map information directly in the grid. Thus, an additional process (such as line detection) is not required and there is no problem due to false detection. We obtained road markings using a 3D LIDAR (Velodyne HDL-32E) and binarized this information for storage in the map, which allowed us to reduce the map size significantly. In previous research, the road marking grid map was used only for position estimation. Here, we propose a position-and-heading estimation algorithm using the binary road marking grid map and accordingly derive more precise position estimates. Moreover, position reliability is an important factor for vehicle localization: autonomous vehicles may cause accidents if they fail to keep their lane even momentarily. Therefore, we propose an algorithm for evaluating map matching results, so that only reliable matching results are used and position reliability increases. The experiment was conducted in Gangnam, Seoul, where GPS errors are large. In the experimental results, the lateral root mean square (RMS) error was 0.05 m and the longitudinal RMS error was 0.08 m. Further, we obtained a position error of less than 50 cm in both lateral and longitudinal directions with a 99% confidence level.
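A matching step against a binary road-marking grid can be sketched as scoring each candidate (x, y, heading) by how many bright LiDAR returns land on occupied cells; the grid layout, resolution and search ranges below are assumptions, not the paper's parameters.

import numpy as np

def match_score(marking_xy, grid, origin, resolution, x, y, yaw):
    # Transform marking points from the vehicle frame to the map frame.
    c, s = np.cos(yaw), np.sin(yaw)
    world = marking_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    cells = np.floor((world - origin) / resolution).astype(int)
    inside = ((cells >= 0) & (cells < np.array(grid.shape))).all(axis=1)
    # Count hits on marked cells (grid indexed as grid[ix, iy]).
    return grid[cells[inside, 0], cells[inside, 1]].sum()

# best = max(((x, y, yaw) for x in xs for y in ys for yaw in yaws),
#            key=lambda p: match_score(marks, grid, origin, 0.1, *p))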
APA, Harvard, Vancouver, ISO, and other styles
