Journal articles on the topic 'SfM and LiDAR'

Consult the top 50 journal articles for your research on the topic 'SfM and LiDAR.'

1

Morgan, Carli J., Matthew Powers, and Bogdan M. Strimbu. "Estimating Tree Defects with Point Clouds Developed from Active and Passive Sensors." Remote Sensing 14, no. 8 (April 17, 2022): 1938. http://dx.doi.org/10.3390/rs14081938.

Abstract:
Traditional inventories require large investments of resources and a trained workforce to measure tree sizes and characteristics that affect wood quality and value, such as the presence of defects and damages. Handheld light detection and ranging (LiDAR) and photogrammetric point clouds developed using Structure from Motion (SfM) algorithms achieved promising results in tree detection and dimensional measurements. However, few studies have utilized handheld LiDAR or SfM to assess tree defects or damages. We used a Samsung Galaxy S7 smartphone camera to photograph trees and create digital models using SfM, and a handheld GeoSLAM Zeb Horizon to create LiDAR point cloud models of some of the main tree species from the Pacific Northwest. We compared measurements of damage count and damage length obtained from handheld LiDAR, SfM photogrammetry, and traditional field methods using linear mixed-effects models. The field method recorded nearly twice as many damages per tree as the handheld LiDAR and SfM methods, but there was no evidence that damage length measurements varied between the three survey methods. Lower damage counts derived from LiDAR and SfM were likely driven by the limited point cloud reconstructions of the upper stems, as usable tree heights were achieved, on average, at 13.6 m for LiDAR and 9.3 m for SfM, even though the mean field-measured tree height was 31.2 m. Our results suggest that handheld LiDAR and SfM approaches show potential for detection and measurement of tree damages, at least on the lower stem.
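
For readers who want to reproduce this kind of comparison, the sketch below shows one way to fit a linear mixed-effects model of damage counts across survey methods in Python. It is only an illustration: the file name, column names, and model form are assumptions, not the authors' code.

```python
# Minimal sketch: compare damage counts across survey methods with a
# random intercept per tree (repeated measurements of the same stem).
# Assumed long-format table with hypothetical columns:
#   tree_id, method ('field' | 'lidar' | 'sfm'), damage_count
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tree_damage_long.csv")  # hypothetical input file

model = smf.mixedlm("damage_count ~ C(method, Treatment(reference='field'))",
                    data=df, groups=df["tree_id"])
result = model.fit()
print(result.summary())  # method coefficients = offset relative to the field survey
```
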
2

Obanawa, Hiroyuki, Rena Yoshitoshi, Nariyasu Watanabe, and Seiichi Sakanoue. "Portable LiDAR-Based Method for Improvement of Grass Height Measurement Accuracy: Comparison with SfM Methods." Sensors 20, no. 17 (August 26, 2020): 4809. http://dx.doi.org/10.3390/s20174809.

Abstract:
Plant height is a key indicator of grass growth. However, its accurate measurement at high spatial density with a conventional ruler is time-consuming and costly. We estimated grass height with high accuracy and speed using the structure from motion (SfM) and portable light detection and ranging (LiDAR) systems. The shapes of leaf tip surface and ground in grassland were determined by unmanned aerial vehicle (UAV)-SfM, pole camera-SfM, and hand-held LiDAR, before and after grass harvesting. Grass height was most accurately estimated using the difference between the maximum value of the point cloud before harvesting, and the minimum value of the point cloud after harvesting, when converting from the point cloud to digital surface model (DSM). We confirmed that the grass height estimation accuracy was the highest in DSM, with a resolution of 50–100 mm for SfM and 20 mm for LiDAR, when the grass width was 10 mm. We also found that the error of the estimated value by LiDAR was about half of that by SfM. As a result, we evaluated the influence of the data conversion method (from point cloud to DSM), and the measurement method on the accuracy of grass height measurement, using SfM and LiDAR.
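
The height-estimation rule described above (per-cell maximum of the pre-harvest cloud minus per-cell minimum of the post-harvest cloud) can be illustrated with a short sketch. The arrays below are synthetic and the gridding is deliberately naive; this is not the authors' processing chain.

```python
# Sketch: grid both point clouds, take the per-cell maximum before harvest
# (leaf tips) and the per-cell minimum after harvest (ground), then difference.
import numpy as np

def grid_reduce(points, cell, reducer):
    """Reduce z values of an (N, 3) point array onto a regular grid."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    grid = np.full(ij.max(axis=0) + 1, np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        grid[i, j] = z if np.isnan(grid[i, j]) else reducer(grid[i, j], z)
    return grid

before = np.random.rand(5000, 3) * [10, 10, 0.3]                # leaf-tip surface
after = np.column_stack([before[:, :2], before[:, 2] * 0.1])    # near-ground returns

dsm_before = grid_reduce(before, cell=0.05, reducer=max)        # 50 mm DSM
dtm_after = grid_reduce(after, cell=0.05, reducer=min)
grass_height = dsm_before - dtm_after                           # per-cell height estimate
print(np.nanmean(grass_height))
```
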
3

Broxton, Patrick D., and Willem J. D. van Leeuwen. "Structure from Motion of Multi-Angle RPAS Imagery Complements Larger-Scale Airborne Lidar Data for Cost-Effective Snow Monitoring in Mountain Forests." Remote Sensing 12, no. 14 (July 18, 2020): 2311. http://dx.doi.org/10.3390/rs12142311.

Abstract:
Snowmelt from mountain forests is critically important for water resources and hydropower generation. More than 75% of surface water supply originates as snowmelt in mountainous regions, such as the western U.S. Remote sensing has the potential to measure snowpack in these areas accurately. In this research, we combine light detection and ranging (lidar) from crewed aircraft (currently, the most reliable way of measuring snow depth in mountain forests) and structure from motion (SfM) remotely piloted aircraft systems (RPAS) for cost-effective multi-temporal monitoring of snowpack in mountain forests. In sparsely forested areas, both technologies give similar snow depth maps, with a comparable agreement with ground-based snow depth observations (RMSE ~10 cm). In densely forested areas, airborne lidar is better able to represent snow depth than RPAS-SfM (RMSE ~10 cm vs ~10–20 cm). In addition, we find the relationship between RPAS-SfM and previous lidar snow depth data can be used to estimate snow depth conditions outside of relatively small RPAS-SfM monitoring plots, with RMSEs between these observed and estimated snow depths on the order of 10–15 cm for the larger lidar coverages. This suggests that when a single airborne lidar snow survey exists, RPAS-SfM may provide useful multi-temporal snow monitoring that can estimate basin-scale snowpack, at a much lower cost than multiple airborne lidar surveys. Doing so requires a pre-existing mid-winter or peak-snowpack airborne lidar snow survey, and subsequent well-designed paired SfM and field snow surveys that accurately capture substantial snow depth variability.
4

Zhang, Fei, Amirhossein Hassanzadeh, Julie Kikkert, Sarah Jane Pethybridge, and Jan van Aardt. "Comparison of UAS-Based Structure-from-Motion and LiDAR for Structural Characterization of Short Broadacre Crops." Remote Sensing 13, no. 19 (October 4, 2021): 3975. http://dx.doi.org/10.3390/rs13193975.

Abstract:
The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of the SfM and the LiDAR point clouds, collected concurrently during five UAS flights over experimental fields of a short crop (snap bean), in order to explore how well the SfM approach performs compared with LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh (C2M) distance maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated the crop height and the row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses provided proof that the SfM approach is comparable to LiDAR under the same UAS flight settings. However, its altimetric accuracy largely relied on the number and distribution of the ground control points.
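
A canopy height model comparison of the kind described above can be sketched in a few lines once both surfaces are gridded to a common raster. The sketch below assumes the DEM and the two DSMs have already been exported as aligned NumPy arrays (file names are hypothetical); the C2M and M3C2 computations themselves are usually run in dedicated point cloud tools.

```python
# Sketch: derive CHMs from SfM and LiDAR DSMs over a shared ground DEM
# and summarise their per-cell disagreement.
import numpy as np

dem = np.load("ground_dem.npy")       # bare-earth elevation (assumed aligned grid)
dsm_sfm = np.load("dsm_sfm.npy")      # SfM surface
dsm_lidar = np.load("dsm_lidar.npy")  # LiDAR surface

chm_sfm = np.clip(dsm_sfm - dem, 0, None)      # crop height above ground
chm_lidar = np.clip(dsm_lidar - dem, 0, None)

diff = chm_sfm - chm_lidar                     # per-cell disagreement
rmse = np.sqrt(np.nanmean(diff ** 2))
print(f"CHM RMSE between SfM and LiDAR: {rmse:.3f} m")
```
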
5

Liao, Jianghua, Jinxing Zhou, and Wentao Yang. "Comparing LiDAR and SfM digital surface models for three land cover types." Open Geosciences 13, no. 1 (January 1, 2021): 497–504. http://dx.doi.org/10.1515/geo-2020-0257.

Abstract:
Airborne light detection and ranging (LiDAR) and unmanned aerial vehicle structure from motion (UAV-SfM) are two major methods used to produce digital surface models (DSMs) for geomorphological studies. Previous studies have used both types of DSM datasets interchangeably and ignored their differences, whereas others have attempted to locally compare these differences. However, few studies have quantified these differences for different land cover types. Therefore, we simultaneously compared the two DSMs using airborne LiDAR and UAV-SfM for three land cover types (i.e. forest, wasteland, and bare land) in northeast China. Our results showed that the differences between the DSMs were the greatest for forest areas. Further, the average elevation of the UAV-SfM DSM was 0.4 m lower than that of the LiDAR DSM, with a 95th percentile difference of 3.62 m for the forest areas. Additionally, the average elevations of the SfM DSM for wasteland and bare land were 0.16 and 0.43 m lower, respectively, than those of the airborne LiDAR DSM; the 95th percentile differences were 0.67 and 0.64 m, respectively. The differences between the two DSMs were generally minor over areas with sparse vegetation and more significant for areas covered by tall dense trees. The findings of this research can guide the joint use of different types of DSMs in certain applications, such as land management and soil erosion studies. A comparison of the DSM types in complex terrains should be explored in the future.
6

Ratner, JJ, JJ Sury, MR James, TA Mather, and DM Pyle. "Crowd-sourcing structure-from-motion data for terrain modelling in a real-world disaster scenario: A proof of concept." Progress in Physical Geography: Earth and Environment 43, no. 2 (February 24, 2019): 236–59. http://dx.doi.org/10.1177/0309133318823622.

Abstract:
Structure-from-motion (SfM) photogrammetry techniques are now widely available to generate digital terrain models (DTMs) from optical imagery, providing an alternative to costlier options such as LiDAR or satellite surveys. SfM could be a useful tool in hazard studies because its minimal cost makes it accessible even in developing regions and its speed of use can provide updated data rapidly in hazard-prone regions. Our study is designed to assess whether crowd-sourced SfM data is comparable to an industry standard LiDAR dataset, demonstrating potential real-world use of SfM if employed for disaster risk reduction purposes. Three groups with variable SfM knowledge utilized 16 different camera models, including four camera phones, to collect 1001 total photos in one hour of data collection. Datasets collected by each group were processed using VisualSFM, and the point densities, accuracies and distributions of points in the resultant point clouds (DTM skeletons) were compared. Our results show that the point clouds are resilient to inconsistency in users’ SfM knowledge: crowd-sourced data collected by a moderately informed general public yields topography results comparable in data density and accuracy to those produced with data collected by highly-informed SfM users or experts using LiDAR. This means that in a real-world scenario involving participants with a diverse range of expertise, topography models could be produced from crowd-sourced data quite rapidly and to a very high standard. This could be beneficial to disaster risk reduction as a relatively quick, simple and low-cost method to attain rapidly updated knowledge of terrain attributes, useful for the prediction and mitigation of many natural hazards.
7

Gassen, Fabian, Eberhard Hasche, Patrick Ingwer, and Reiner Creutzburg. "Supplementation of Lidar Scans with Structure from Motion (SfM) Data." Electronic Imaging 2016, no. 7 (February 14, 2016): 1–6. http://dx.doi.org/10.2352/issn.2470-1173.2016.7.mobmu-297.

8

Mikita, Tomáš, Marie Balková, Aleš Bajer, Miloš Cibulka, and Zdeněk Patočka. "Comparison of Different Remote Sensing Methods for 3D Modeling of Small Rock Outcrops." Sensors 20, no. 6 (March 17, 2020): 1663. http://dx.doi.org/10.3390/s20061663.

Abstract:
This paper reviews the use of modern 3D image-based and Light Detection and Ranging (LiDAR) methods of surface reconstruction techniques for high fidelity surveys of small rock outcrops to highlight their potential within structural geology and landscape protection. LiDAR and Structure from Motion (SfM) software provide useful opportunities for rock outcrops mapping and 3D model creation. The accuracy of these surface reconstructions is crucial for quantitative structural analysis. However, these technologies require either a costly data acquisition device (Terrestrial LiDAR) or specialized image processing software (SfM). Recent developments in augmented reality and smartphone technologies, such as increased processing capacity and higher resolution of cameras, may offer a simple and inexpensive alternative for 3D surface reconstruction. Therefore, the aim of the paper is to show the possibilities of using smartphone applications for model creation and to determine their accuracy for rock outcrop mapping.
9

Harder, Phillip, John W. Pomeroy, and Warren D. Helgason. "Improving sub-canopy snow depth mapping with unmanned aerial vehicles: lidar versus structure-from-motion techniques." Cryosphere 14, no. 6 (June 15, 2020): 1919–35. http://dx.doi.org/10.5194/tc-14-1919-2020.

Abstract:
Vegetation has a tremendous influence on snow processes and snowpack dynamics, yet remote sensing techniques to resolve the spatial variability of sub-canopy snow depth are not always available and are difficult from space-based platforms. Unmanned aerial vehicles (UAVs) have had recent widespread application to capture high-resolution information on snow processes and are herein applied to the sub-canopy snow depth challenge. Previous demonstrations of snow depth mapping with UAV structure from motion (SfM) and airborne lidar have focussed on non-vegetated surfaces or reported large errors in the presence of vegetation. In contrast, UAV-lidar systems have high-density point clouds and measure returns from a wide range of scan angles, increasing the likelihood of successfully sensing the sub-canopy snow depth. The effectiveness of UAV lidar and UAV SfM in mapping snow depth in both open and forested terrain was tested in a 2019 field campaign at the Canadian Rockies Hydrological Observatory, Alberta, and at Canadian prairie sites near Saskatoon, Saskatchewan, Canada. Only UAV lidar could successfully measure the sub-canopy snow surface with reliable sub-canopy point coverage and consistent error metrics (root mean square error (RMSE) < 0.17 m and bias −0.03 to −0.13 m). Relative to UAV lidar, UAV SfM did not consistently sense the sub-canopy snow surface, the interpolation needed to account for point cloud gaps introduced interpolation artefacts, and error metrics demonstrated relatively large variability (RMSE < 0.33 m and bias 0.08 to −0.14 m). With the demonstration of sub-canopy snow depth mapping capabilities, a number of early applications are presented to showcase the ability of UAV lidar to effectively quantify the many multiscale snow processes defining snowpack dynamics in mountain and prairie environments.
10

Nagy, Balázs, and Csaba Benedek. "On-the-Fly Camera and Lidar Calibration." Remote Sensing 12, no. 7 (April 2, 2020): 1137. http://dx.doi.org/10.3390/rs12071137.

Abstract:
Sensor fusion is one of the main challenges in self-driving and robotics applications. In this paper we propose an automatic, online and target-less camera-Lidar extrinsic calibration approach. We adopt a structure from motion (SfM) method to generate 3D point clouds from the camera data which can be matched to the Lidar point clouds; thus, we address the extrinsic calibration problem as a registration task in the 3D domain. The core step of the approach is a two-stage transformation estimation: First, we introduce an object level coarse alignment algorithm operating in the Hough space to transform the SfM-based and the Lidar point clouds into a common coordinate system. Thereafter, we apply a control point based nonrigid transformation refinement step to register the point clouds more precisely. Finally, we calculate the correspondences between the 3D Lidar points and the pixels in the 2D camera domain. We evaluated the method in various real-life traffic scenarios in Budapest, Hungary. The results show that our proposed extrinsic calibration approach is able to provide accurate and robust parameter settings on-the-fly.
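
The paper's own pipeline uses a Hough-space coarse alignment followed by a nonrigid refinement; as a generic illustration of treating extrinsic calibration as a 3D registration problem, the sketch below rigidly aligns an SfM-derived cloud to a Lidar cloud with ICP in Open3D. Library choice, file names, and the correspondence distance are assumptions, not the authors' implementation.

```python
# Sketch: estimate a rigid camera-to-Lidar transform by registering an
# SfM point cloud to a Lidar point cloud with point-to-point ICP.
import numpy as np
import open3d as o3d

sfm_xyz = np.load("sfm_points.npy")     # hypothetical (N, 3) arrays
lidar_xyz = np.load("lidar_points.npy")

source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(sfm_xyz)
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(lidar_xyz)

reg = o3d.pipelines.registration.registration_icp(
    source, target,
    0.5,          # max correspondence distance (metres; depends on scene scale)
    np.eye(4),    # a coarse initial alignment would normally go here
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(reg.transformation)  # 4x4 extrinsic estimate
```
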
11

Swinfield, Tom, Jeremy A. Lindsell, Jonathan V. Williams, Rhett D. Harrison, Agustiono, Habibi, Elva Gemita, Carola B. Schönlieb, and David A. Coomes. "Accurate Measurement of Tropical Forest Canopy Heights and Aboveground Carbon Using Structure From Motion." Remote Sensing 11, no. 8 (April 17, 2019): 928. http://dx.doi.org/10.3390/rs11080928.

Abstract:
Unmanned aerial vehicles are increasingly used to monitor forests. Three-dimensional models of tropical rainforest canopies can be constructed from overlapping photos using Structure from Motion (SfM), but it is often impossible to map the ground elevation directly from such data because canopy gaps are rare in rainforests. Without knowledge of the terrain elevation, it is, thus, difficult to accurately measure the canopy height or forest properties, including the recovery stage and aboveground carbon density. Working in an Indonesian ecosystem restoration landscape, we assessed how well SfM derived the estimates of the canopy height and aboveground carbon density compared with those from an airborne laser scanning (also known as LiDAR) benchmark. SfM systematically underestimated the canopy height with a mean bias of approximately 5 m. The linear models suggested that the bias increased quadratically with the top-of-canopy height for short, even-aged, stands but linearly for tall, structurally complex canopies (>10 m). The predictions based on the simple linear model were closely correlated to the field-measured heights when the approach was applied to an independent survey in a different location (R² = 67% and RMSE = 1.85 m), but a negative bias of 0.89 m remained, suggesting the need to refine the model parameters with additional training data. Models that included the metrics of canopy complexity were less biased but with a reduced R². The inclusion of ground control points (GCPs) was found to be important in accurately registering SfM measurements in space, which is essential if the survey requirement is to produce small-scale restoration interventions or to track changes through time. However, at the scale of several hectares, the top-of-canopy height and above-ground carbon density estimates from SfM and LiDAR were very similar even without GCPs. The ability to produce accurate top-of-canopy height and carbon stock measurements from SfM is game changing for forest managers and restoration practitioners, providing the means to make rapid, low-cost surveys over hundreds of hectares without the need for LiDAR.
12

Winsen, Megan, and Grant Hamilton. "A Comparison of UAV-Derived Dense Point Clouds Using LiDAR and NIR Photogrammetry in an Australian Eucalypt Forest." Remote Sensing 15, no. 6 (March 21, 2023): 1694. http://dx.doi.org/10.3390/rs15061694.

Abstract:
Light detection and ranging (LiDAR) has been a tool of choice for 3D dense point cloud reconstructions of forest canopy over the past two decades, but advances in computer vision techniques, such as structure from motion (SfM) photogrammetry, have transformed 2D digital aerial imagery into a powerful, inexpensive and highly available alternative. Canopy modelling is complex and affected by a wide range of inputs. While studies have found dense point cloud reconstructions to be accurate, there is no standard approach to comparing outputs or assessing accuracy. Modelling is particularly challenging in native eucalypt forests, where the canopy displays abrupt vertical changes and highly varied relief. This study first investigated whether a remotely sensed LiDAR dense point cloud reconstruction of a native eucalypt forest completely reproduced canopy cover and accurately predicted tree heights. A further comparison was made with a photogrammetric reconstruction based solely on near-infrared (NIR) imagery to gain some insight into the contribution of the NIR spectral band to the 3D SfM reconstruction of native dry eucalypt open forest. The reconstructions did not produce comparable canopy height models and neither reconstruction completely reproduced canopy cover nor accurately predicted tree heights. Nonetheless, the LiDAR product was more representative of the eucalypt canopy than SfM-NIR. The SfM-NIR results were strongly affected by an absence of data in many locations, which was related to low canopy penetration by the passive optical sensor and sub-optimal feature matching in the photogrammetric pre-processing pipeline. To further investigate the contribution of NIR, future studies could combine NIR imagery captured at multiple solar elevations. A variety of photogrammetric pre-processing settings should continue to be explored in an effort to optimise image feature matching.
13

Chirico, Peter, Jessica DeWitt, and Sarah Bergstresser. "Evaluating Elevation Change Thresholds between Structure-from-Motion DEMs Derived from Historical Aerial Photos and 3DEP LiDAR Data." Remote Sensing 12, no. 10 (May 19, 2020): 1625. http://dx.doi.org/10.3390/rs12101625.

Abstract:
This study created digital terrain models (DTMs) from historical aerial images using Structure from Motion (SfM) for a variety of image dates, resolutions, and photo scales. Accuracy assessments were performed on the SfM DTMs, and they were compared to the United States Geological Survey’s three-dimensional digital elevation program (3DEP) light detection and ranging (LiDAR) DTMs to evaluate geomorphic change thresholds based on vertical accuracy assessments and elevation change methodologies. The results of this study document a relationship between historical aerial photo scales and predicted vertical accuracy of the resultant DTMs. The results may be used to assess geomorphic change thresholds over multi-decadal timescales depending on spatial scale, resolution, and accuracy requirements. This study shows that if elevation changes of approximately ±1 m are to be mapped, historical aerial photography collected at 1:20,000 scale or larger would be required for comparison to contemporary LiDAR derived DTMs.
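
The thresholding idea behind such comparisons can be written down compactly: an elevation change is treated as detectable only where it exceeds the vertical uncertainty propagated from both DTMs. The sketch below illustrates this with placeholder RMSE values and hypothetical file names; it is not the authors' workflow.

```python
# Sketch: DEM-of-difference change detection with a propagated-error threshold.
import numpy as np

dtm_sfm = np.load("sfm_historical_dtm.npy")   # hypothetical SfM DTM from archival photos
dtm_lidar = np.load("lidar_3dep_dtm.npy")     # hypothetical contemporary lidar DTM

rmse_sfm, rmse_lidar = 0.60, 0.10             # vertical RMSE of each DTM (placeholders, m)
t = 1.96                                      # ~95% confidence multiplier
min_detectable_change = t * np.hypot(rmse_sfm, rmse_lidar)

dod = dtm_lidar - dtm_sfm                     # DEM of difference
significant = np.where(np.abs(dod) >= min_detectable_change, dod, np.nan)
print(f"change threshold = ±{min_detectable_change:.2f} m")
```
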
14

Jakovljevic, Gordana, Miro Govedarica, Flor Alvarez-Taboada, and Vladimir Pajic. "Accuracy Assessment of Deep Learning Based Classification of LiDAR and UAV Points Clouds for DTM Creation and Flood Risk Mapping." Geosciences 9, no. 7 (July 23, 2019): 323. http://dx.doi.org/10.3390/geosciences9070323.

Abstract:
Digital elevation model (DEM) has been frequently used for the reduction and management of flood risk. Various classification methods have been developed to extract DEM from point clouds. However, the accuracy and computational efficiency need to be improved. The objectives of this study were as follows: (1) to determine the suitability of a new method to produce DEM from unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) data, using a raw point cloud classification and ground point filtering based on deep learning and neural networks (NN); (2) to test the convenience of rebalancing datasets for point cloud classification; (3) to evaluate the effect of the land cover class on the algorithm performance and the elevation accuracy; and (4) to assess the usability of the LiDAR and UAV structure from motion (SfM) DEM in flood risk mapping. In this paper, a new method of raw point cloud classification and ground point filtering based on deep learning using NN is proposed and tested on LiDAR and UAV data. The NN was trained on approximately 6 million points from which local and global geometric features and intensity data were extracted. Pixel-by-pixel accuracy assessment and visual inspection confirmed that filtering point clouds based on deep learning using NN is an appropriate technique for ground classification and producing DEM, as for the test and validation areas, both ground and non-ground classes achieved high recall (>0.70) and high precision values (>0.85), which showed that the two classes were well handled by the model. The type of method used for balancing the original dataset did not have a significant influence in the algorithm accuracy, and it was suggested not to use any of them unless the distribution of the generated and real data set will remain the same. Furthermore, the comparisons between true data and LiDAR and a UAV structure from motion (UAV SfM) point clouds were analyzed, as well as the derived DEM. The root mean square error (RMSE) and the mean average error (MAE) of the DEM were 0.25 m and 0.05 m, respectively, for LiDAR data, and 0.59 m and –0.28 m, respectively, for UAV data. For all land cover classes, the UAV DEM overestimated the elevation, whereas the LIDAR DEM underestimated it. The accuracy was not significantly different in the LiDAR DEM for the different vegetation classes, while for the UAV DEM, the RMSE increased with the height of the vegetation class. The comparison of the inundation areas derived from true LiDAR and UAV data for different water levels showed that in all cases, the largest differences were obtained for the lowest water level tested, while they performed best for very high water levels. Overall, the approach presented in this work produced DEM from LiDAR and UAV data with the required accuracy for flood mapping according to European Flood Directive standards. Although LiDAR is the recommended technology for point cloud acquisition, a suitable alternative is also UAV SfM in hilly areas.
15

Piermattei, Livia, Luca Carturan, Fabrizio de Blasi, Paolo Tarolli, Giancarlo Dalla Fontana, Antonio Vettore, and Norbert Pfeifer. "Suitability of ground-based SfM–MVS for monitoring glacial and periglacial processes." Earth Surface Dynamics 4, no. 2 (May 20, 2016): 425–43. http://dx.doi.org/10.5194/esurf-4-425-2016.

Abstract:
Photo-based surface reconstruction is rapidly emerging as an alternative survey technique to lidar (light detection and ranging) in many fields of geoscience fostered by the recent development of computer vision algorithms such as structure from motion (SfM) and dense image matching such as multi-view stereo (MVS). The objectives of this work are to test the suitability of the ground-based SfM–MVS approach for calculating the geodetic mass balance of a 2.1 km² glacier and for detecting the surface displacement of a neighbouring active rock glacier located in the eastern Italian Alps. The photos were acquired in 2013 and 2014 using a digital consumer-grade camera during single-day field surveys. Airborne laser scanning (ALS, otherwise known as airborne lidar) data were used as benchmarks to estimate the accuracy of the photogrammetric digital elevation models (DEMs) and the reliability of the method. The SfM–MVS approach enabled the reconstruction of high-quality DEMs, which provided estimates of glacial and periglacial processes similar to those achievable using ALS. In stable bedrock areas outside the glacier, the mean and the standard deviation of the elevation difference between the SfM–MVS DEM and the ALS DEM was −0.42 ± 1.72 and 0.03 ± 0.74 m in 2013 and 2014, respectively. The overall pattern of elevation loss and gain on the glacier were similar with both methods, ranging between −5.53 and +3.48 m. In the rock glacier area, the elevation difference between the SfM–MVS DEM and the ALS DEM was 0.02 ± 0.17 m. The SfM–MVS was able to reproduce the patterns and the magnitudes of displacement of the rock glacier observed by the ALS, ranging between 0.00 and 0.48 m per year. The use of natural targets as ground control points, the occurrence of shadowed and low-contrast areas, and in particular the suboptimal camera network geometry imposed by the morphology of the study area were the main factors affecting the accuracy of photogrammetric DEMs negatively. Technical improvements such as using an aerial platform and/or placing artificial targets could significantly improve the results but run the risk of being more demanding in terms of costs and logistics.
16

Teng, Poching, Yu Zhang, Takayoshi Yamane, Masayuki Kogoshi, Takeshi Yoshida, Tomohiko Ota, and Junichi Nakagawa. "Accuracy Evaluation and Branch Detection Method of 3D Modeling Using Backpack 3D Lidar SLAM and UAV-SfM for Peach Trees during the Pruning Period in Winter." Remote Sensing 15, no. 2 (January 9, 2023): 408. http://dx.doi.org/10.3390/rs15020408.

Abstract:
In the winter pruning operation of deciduous fruit trees, the number of pruning branches and the structure of the main branches greatly influence the future growth of the fruit trees and the final harvest volume. Terrestrial laser scanning (TLS) is considered a feasible method for the 3D modeling of trees, but it is not suitable for large-scale inspection. The simultaneous localization and mapping (SLAM) technique makes it possible to move the lidar on the ground and model quickly, but it is not useful enough for the accuracy of plant detection. Therefore, in this study, we used UAV-SfM and 3D lidar SLAM techniques to build 3D models for the winter pruning of peach trees. Then, we compared and analyzed these models and further proposed a method to distinguish branches from 3D point clouds by spatial point cloud density. The results showed that the 3D lidar SLAM technique had a shorter modeling time and higher accuracy than UAV-SfM for the winter pruning period of peach trees. The method had the smallest RMSE of 3084 g with an R² = 0.93 compared to the fresh weight of the pruned branches. In the branch detection part, branches with diameters greater than 3 cm were differentiated successfully, regardless of whether before or after pruning.
17

Bash, Eleanor A., Brian J. Moorman, Brian Menounos, and Allison Gunther. "Evaluation of SfM for surface characterization of a snow-covered glacier through comparison with aerial lidar." Journal of Unmanned Vehicle Systems 8, no. 2 (June 1, 2020): 119–39. http://dx.doi.org/10.1139/juvs-2019-0006.

Abstract:
The combined use of unmanned aerial vehicles (UAVs) and structure-from-motion (SfM) is rapidly growing as a cost-effective alternative to airborne laser scanning (lidar) for reconstructing glacier surfaces. Here we present a thorough analysis of the precision and accuracy of a photogrammetric point cloud (PPC) constructed through SfM from UAV-acquired imagery over the spring snow surface at Haig Glacier, Alberta, Canada, the first of its kind in a glaciological setting. An aerial lidar survey conducted concurrently with UAV surveys was used to examine spatial patterns in the PPC accuracy. We found a median error in the PPC of −0.046 ± 0.067 m, with a 95% quantile of 0.218 m. Mean precision of the PPC was 0.199 m, with large spatially clustered outliers. We found an association between high error, low precision, and high surface roughness in the PPC, likely due to illumination characteristics of the snow surface. Glacier surface reconstructions are important for geodetic mass balance measurements, giving key insights into changing climate where in situ measurements are difficult to obtain. The PPC errors are small enough that they would have minimal effects on total mass balance, should the technique be applied across the glacier.
18

Laksono, D., T. Aditya, and G. Riyadi. "INTERACTIVE 3D CITY VISUALIZATION FROM STRUCTURE MOTION DATA USING GAME ENGINE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W16 (October 14, 2019): 737–40. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w16-737-2019.

Abstract:
Developing a 3D city model is always a challenging task, whether in how to obtain the 3D data or how to present the model to users. Lidar is often used to produce real-world measurements, resulting in point clouds which are further processed into a 3D model. However, this method has some limitations, e.g. tedious, expensive work and high technicalities, which limit its usability in smaller areas. Currently, there exist pipelines that use point clouds from Lidar data to automate the generation of 3D city models. For example, 3dfier (http://github.com/tudelft3d/3dfier) is a software tool capable of generating a LoD 1 3D city model from lidar point cloud data. The resulting CityGML file could further be used in a 3D GIS viewer to produce an interactive 3D city model. This research proposed the use of the Structure from Motion (SfM) method to obtain point clouds from UAV data. Using SfM to generate point clouds means cheaper and shorter production time, and is more suitable for smaller areas compared to LiDAR. 3dfier could be utilized to produce a 3D model from the point cloud. Subsequently, a game engine, i.e. Unity 3D, is utilized as the visualization platform. Previous work shows that a game engine could be used as an interactive environment for exploring a virtual world based on real-world measurements and other data, such as parcel boundaries. This work shows that the process of generating a 3D city model could be achieved using the proposed pipeline.
19

Meesuk, Vorawit, Zoran Vojinovic, Arthur E. Mynett, and Ahmad F. Abdullah. "Urban flood modelling combining top-view LiDAR data with ground-view SfM observations." Advances in Water Resources 75 (January 2015): 105–17. http://dx.doi.org/10.1016/j.advwatres.2014.11.008.

20

Izumida, Atsuto, Shoichiro Uchiyama, and Toshihiko Sugai. "Application of UAV-SfM photogrammetry and aerial lidar to a disastrous flood: repeated topographic measurement of a newly formed crevasse splay of the Kinu River, central Japan." Natural Hazards and Earth System Sciences 17, no. 9 (September 13, 2017): 1505–19. http://dx.doi.org/10.5194/nhess-17-1505-2017.

Abstract:
Geomorphic impacts of a disastrous crevasse splay that formed in September 2015 and its post-formation modifications were quantitatively documented by using repeated, high-definition digital surface models (DSMs) of an inhabited and cultivated floodplain of the Kinu River, central Japan. The DSMs were based on pre-flood (resolution: 2 m) and post-flood (resolution: 1 m) aerial light detection and ranging (lidar) data from January 2007 and September 2015, respectively, and on structure-from-motion (SfM) photogrammetry data (resolution: 3.84 cm) derived from aerial photos taken by an unmanned aerial vehicle (UAV) in December 2015. After elimination of systematic errors among the DSMs and down-sampling of the SfM-derived DSM, elevation changes on the order of 10⁻¹ m – including not only topography but also growth of vegetation, vanishing of flood waters, and restoration and repair works – were detected. Comparison of the DSMs showed that the volume eroded by the flood was more than twice the deposited volume in the area within 300–500 m of the breached artificial levee, where the topography was significantly affected. The results suggest that DSMs based on a combination of UAV-SfM and lidar data can be used to quantify, rapidly and in rich detail, topographic changes on floodplains caused by floods.
21

Agrafiotis, Panagiotis, Dimitrios Skarlatos, Andreas Georgopoulos, and Konstantinos Karantzalos. "DepthLearn: Learning to Correct the Refraction on Point Clouds Derived from Aerial Imagery for Accurate Dense Shallow Water Bathymetry Based on SVMs-Fusion with LiDAR Point Clouds." Remote Sensing 11, no. 19 (September 24, 2019): 2225. http://dx.doi.org/10.3390/rs11192225.

Abstract:
The determination of accurate bathymetric information is a key element for near offshore activities; hydrological studies, such as coastal engineering applications, sedimentary processes, hydrographic surveying, archaeological mapping and biological research. Through structure from motion (SfM) and multi-view-stereo (MVS) techniques, aerial imagery can provide a low-cost alternative compared to bathymetric LiDAR (Light Detection and Ranging) surveys, as it offers additional important visual information and higher spatial resolution. Nevertheless, water refraction poses significant challenges on depth determination. Till now, this problem has been addressed through customized image-based refraction correction algorithms or by modifying the collinearity equation. In this article, in order to overcome the water refraction errors in a massive and accurate way, we employ machine learning tools, which are able to learn the systematic underestimation of the estimated depths. In particular, an SVR (support vector regression) model was developed, based on known depth observations from bathymetric LiDAR surveys, which is able to accurately recover bathymetry from point clouds derived from SfM-MVS procedures. Experimental results and validation were based on datasets derived from different test-sites, and demonstrated the high potential of our approach. Moreover, we exploited the fusion of LiDAR and image-based point clouds towards addressing challenges of both modalities in problematic areas.
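
A rough sketch of the learning setup described above follows: a support vector regression maps apparent (refraction-biased) SfM-MVS depths to reference LiDAR depths. scikit-learn's SVR is used here as a stand-in for the authors' implementation, and the single-feature design, hyperparameters, and file name are assumptions.

```python
# Sketch: learn a refraction correction from paired apparent/reference depths.
import numpy as np
from sklearn.svm import SVR

data = np.load("training_pairs.npy")   # hypothetical columns: [apparent_depth, lidar_depth]
X = data[:, [0]]                       # apparent depth from the SfM-MVS point cloud
y = data[:, 1]                         # reference depth from bathymetric LiDAR

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)

apparent = np.array([[1.2], [2.5], [4.0]])
print(model.predict(apparent))         # refraction-corrected depth estimates
```
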
22

Yoshii, Tatsuki, Naoto Matsumura, and Chinsu Lin. "Integrating UAV-SfM and Airborne Lidar Point Cloud Data to Plantation Forest Feature Extraction." Remote Sensing 14, no. 7 (April 1, 2022): 1713. http://dx.doi.org/10.3390/rs14071713.

Abstract:
A low-cost but accurate remote-sensing-based forest-monitoring tool is necessary for regularly inventorying tree-level parameters and stand-level attributes to achieve sustainable management of timber production forests. Lidar technology is precise for multi-temporal data collection but expensive. A low-cost UAV-based optical sensing method is an economical and flexible alternative for collecting high-resolution images for generating point cloud data and orthophotos for mapping but lacks height accuracy. This study proposes a protocol of integrating a UAV without an RTK instrument and airborne lidar sensors (ALS) for characterizing tree parameters and stand attributes for use in plantation forest management. The proposed method primarily relies on the ALS-based digital elevation model data (ALS-DEM), UAV-based structure-from-motion technique generated digital surface model data (UAV-SfM-DSM), and their derivative canopy height model data (UAV-SfM-CHM). Following traditional forest inventory approaches, a few middle-aged and mature stands of Hinoki cypress (Chamaecyparis obtusa) plantation forests were used to investigate the performance of characterizing forest parameters via the canopy height model. Results show that the proposed method can improve UAV-SfM point cloud referencing transformation accuracy. With the derived CHM data, this method can estimate tree height with an RMSE ranging from 0.43 m to 1.65 m, equivalent to a PRMSE of 2.40–7.84%. The tree height estimates between UAV-based and ALS-based approaches are highly correlated (R² = 0.98, p < 0.0001); similarly, the height annual growth rate (HAGR) is also significantly correlated (R² = 0.78, p < 0.0001). The percentage HAGR of Hinoki trees behaves as an exponential decay function of the tree height over an 8-year management period. The stand-level parameters stand density, stand volume stocks, stand basal area, and relative spacing are estimated with an error rate of less than 20% for both UAV-based and ALS-based approaches. Intensive management with regular thinning helps the plantation forests retain a clear crown shape feature, therefore benefitting tree segmentation for deriving tree parameters and stand attributes.
23

Zhou, T., S. M. Hasheminasab, Y. C. Lin, and A. Habib. "COMPARATIVE EVALUATION OF DERIVED IMAGE AND LIDAR POINT CLOUDS FROM UAV-BASED MOBILE MAPPING SYSTEMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 169–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-169-2020.

Abstract:
Unmanned aerial vehicles (UAVs) have been widely used for 3D reconstruction/modelling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such mapping applications, camera and LiDAR are the two most commonly used sensors. Mapping with imagery-based approaches is considered to be an economical and effective option and is often conducted using Structure from Motion (SfM) techniques where point clouds and orthophotos are generated. In addition to UAV photogrammetry, point clouds of the area of interest can also be directly derived from LiDAR sensors onboard UAVs equipped with global navigation satellite systems/inertial navigation systems (GNSS/INS). In this study, a custom-built UAV-based mobile mapping system is used to simultaneously collect imagery and LiDAR data. Derived LiDAR and image-based point clouds are investigated and compared in terms of their absolute and relative accuracy. Furthermore, stability of the system calibration parameters for the camera and LiDAR sensors is studied using temporal datasets. The results show that while LiDAR point clouds demonstrate a high absolute accuracy over time, image-based point clouds are not as accurate as LiDAR due to instability of the camera interior orientation parameters.
24

Bi, Rui, Shu Gan, Xiping Yuan, Raobo Li, Sha Gao, Min Yang, Weidong Luo, and Lin Hu. "Multi-View Analysis of High-Resolution Geomorphic Features in Complex Mountains Based on UAV–LiDAR and SfM–MVS: A Case Study of the Northern Pit Rim Structure of the Mountains of Lufeng, China." Applied Sciences 13, no. 2 (January 4, 2023): 738. http://dx.doi.org/10.3390/app13020738.

Abstract:
Unmanned aerial vehicles (UAVs) and light detection and ranging (LiDAR) can be used to analyze the geomorphic features in complex plateau mountains. Accordingly, a UAV–LiDAR system was adopted in this study to acquire images and a lidar point-cloud dataset in the annular structure of Lufeng, Yunnan. A three-dimensional (3D) model was constructed based on structure from motion and multi-view stereo (SfM–MVS) in combination with a high-resolution digital elevation model (DEM). Geomorphic identification, measurement, and analysis were conducted using integrated visual interpretation, DEM visualization, and geographic information system (GIS) topographic feature extraction. The results indicated that the 3D geomorphological visualization and mapping were based on the DEM, which was employed to identify and delineate the dividing lines and ridges of the pit rim structure. The high-resolution DEM retained more geomorphic detail information, and the topography and the variation between ridges were analyzed in depth. The catchment and ponding areas were analyzed using accurate morphological parameters through a multi-angle 3D visualization. The slope, aspect, and topographic wetness index (TWI) parameters were analyzed through mathematical statistics to qualitatively and accurately analyze the differences between different ridges. This study highlighted the significance of the UAV–LiDAR high-resolution topographic measurements and the SfM–MVS 3D scene modelling in accurately identifying geomorphological features and conducting refined analysis. An effective framework was established to acquire high-precision topographic datasets and to analyze geomorphological features in complex mountain areas, which was beneficial in deepening the research on numerical simulation analysis of geomorphological features and revealing the process evolution mechanism.
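
Of the terrain derivatives mentioned above, slope, aspect, and TWI can be sketched directly from a gridded DEM, as below. The specific catchment area grid is assumed to come from a separate flow-accumulation step, the cell size and file names are hypothetical, and the aspect convention shown is only one of several in use.

```python
# Sketch: slope, aspect, and topographic wetness index (TWI) from a DEM grid.
import numpy as np

dem = np.load("dem.npy")                      # hypothetical gridded DEM
cell = 1.0                                    # metres per cell (assumed)
dzdy, dzdx = np.gradient(dem, cell)           # gradients along rows (y) and columns (x)

slope = np.arctan(np.hypot(dzdx, dzdy))       # slope angle in radians
aspect = np.arctan2(-dzdx, dzdy)              # one common aspect convention, radians

sca = np.load("specific_catchment_area.npy")  # from flow accumulation (assumed precomputed)
twi = np.log(sca / np.tan(np.clip(slope, 1e-3, None)))   # TWI = ln(a / tan(beta))
```
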
25

Widyaningrum, E., and B. G. H. Gorte. "COMPREHENSIVE COMPARISON OF TWO IMAGE-BASED POINT CLOUDS FROM AERIAL PHOTOS WITH AIRBORNE LIDAR FOR LARGE-SCALE MAPPING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 557–65. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-557-2017.

Abstract:
The integration of computer vision and photogrammetry to generate three-dimensional (3D) information from images has contributed to a wider use of point clouds, for mapping purposes. Large-scale topographic map production requires 3D data with high precision and accuracy to represent the real conditions of the earth surface. Apart from LiDAR point clouds, the image-based matching is also believed to have the ability to generate reliable and detailed point clouds from multiple-view images. In order to examine and analyze possible fusion of LiDAR and image-based matching for large-scale detailed mapping purposes, point clouds are generated by Semi Global Matching (SGM) and by Structure from Motion (SfM). In order to conduct comprehensive and fair comparison, this study uses aerial photos and LiDAR data that were acquired at the same time. Qualitative and quantitative assessments have been applied to evaluate LiDAR and image-matching point clouds data in terms of visualization, geometric accuracy, and classification result. The comparison results conclude that LiDAR is the best data for large-scale mapping.
26

Hall, Emma C., and Mark J. Lara. "Multisensor UAS mapping of Plant Species and Plant Functional Types in Midwestern Grasslands." Remote Sensing 14, no. 14 (July 18, 2022): 3453. http://dx.doi.org/10.3390/rs14143453.

Abstract:
Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR) or structure-from-motion (SfM)-derived canopy height models (CHM). Sensor–data fusion was used to consider either a single observation period or near-monthly observation frequencies for integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest in hyperspectral and LiDAR-CHM fusions (78 and 89%, respectively), followed by multispectral and phenometric–SfM–CHM fusions (52 and 60%, respectively) and RGB and SfM–CHM fusions (45 and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracies from economical versus exorbitant sensor networks but highlight that off-the-shelf multispectral sensors may achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine learning image classifiers.
27

Mohan, Midhun, Rodrigo Vieira Leite, Eben North Broadbent, Wan Shafrina Wan Mohd Jaafar, Shruthi Srinivasan, Shaurya Bajaj, Ana Paula Dalla Corte, et al. "Individual tree detection using UAV-lidar and UAV-SfM data: A tutorial for beginners." Open Geosciences 13, no. 1 (January 1, 2021): 1028–39. http://dx.doi.org/10.1515/geo-2020-0290.

Abstract:
Applications of unmanned aerial vehicles (UAVs) have proliferated in the last decade due to the technological advancements on various fronts such as structure-from-motion (SfM), machine learning, and robotics. An important preliminary step with regard to forest inventory and management is individual tree detection (ITD), which is required to calculate forest attributes such as stem volume, forest uniformity, and biomass estimation. However, users may find adopting the UAVs and algorithms for their specific projects challenging due to the plethora of information available. Herein, we provide a step-by-step tutorial for performing ITD using (i) low-cost UAV-derived imagery and (ii) UAV-based high-density lidar (light detection and ranging). Functions from open-source R packages were implemented to develop a canopy height model (CHM) and perform ITD utilizing the local maxima (LM) algorithm. ITD accuracy assessment statistics and validation were derived through manual visual interpretation from high-resolution imagery and field-data-based accuracy assessment. As the intended audience are beginners in remote sensing, we have adopted a very simple methodology and chosen study plots that have relatively open canopies to demonstrate our proposed approach; the respective R codes and sample plot data are available as supplementary materials.
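
The tutorial itself works in R; as a rough Python analogue of the same local maxima approach, the sketch below picks treetop candidates from a canopy height model with a fixed moving window. The window size, height threshold, and file name are assumptions.

```python
# Sketch: local-maxima individual tree detection on a gridded CHM.
import numpy as np
from scipy.ndimage import maximum_filter

chm = np.load("chm.npy")                  # hypothetical canopy height model (metres)

window = 5                                # window size in cells, roughly one crown
min_height = 2.0                          # ignore ground and shrub cells

local_max = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
rows, cols = np.nonzero(local_max)
print(f"{rows.size} candidate treetops detected")
```
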
28

Gao, Qiang, and Jiangming Kan. "Automatic Forest DBH Measurement Based on Structure from Motion Photogrammetry." Remote Sensing 14, no. 9 (April 25, 2022): 2064. http://dx.doi.org/10.3390/rs14092064.

Abstract:
Measuring diameter at breast height (DBH) is an essential but laborious task in the traditional forest inventory; it motivates people to develop alternative methods based on remote sensing technologies. In recent years, structure from motion (SfM) photogrammetry has drawn researchers’ attention in forest surveying for its economy and high precision as the light detection and ranging (LiDAR) methods are always expensive. This study explores an automatic DBH measurement method based on SfM. Firstly, we proposed a new image acquisition technique that could reduce the number of images for the high accuracy of DBH measurement. Secondly, we developed an automatic DBH estimation pipeline based on random sample consensus (RANSAC) and cylinder fitting with the Least Median of Squares with impressive DBH estimation speed and high accuracy comparable to methods based on LiDAR. For the application of SfM on forest survey, a graphical interface software Auto-DBH integrated with SfM reconstruction and automatic DBH estimation pipeline was developed. We sampled four plots with different species to verify the performance of the proposed method. The result showed that the accuracy of the first two plots, where trees’ stems were of good roundness, was high with a root mean squared error (RMSE) of 1.41 cm and 1.118 cm and a mean relative error of 4.78% and 5.70%, respectively. The third plot’s damaged trunks and low roundness stems reduced the accuracy with an RMSE of 3.16 cm and a mean relative error of 10.74%. The average automatic detection rate of the trees in the four plots was 91%. Our automatic DBH estimation procedure is relatively fast and on average takes only 2 s to estimate the DBH of a tree, which is much more rapid than direct physical measurements of tree trunk diameters. The result proves that Auto-DBH could reach high accuracy, close to terrestrial laser scanning (TLS) in plot scale forest DBH measurement. Our successful application of automatic DBH measurement indicates that SfM is promising in forest inventory.
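
The circle-fitting idea behind DBH estimation can be illustrated with a small RANSAC loop over a thin stem slice at breast height, as sketched below. This is a generic illustration, not the authors' Auto-DBH pipeline; the slice thickness and inlier tolerance are assumptions.

```python
# Sketch: RANSAC circle fit on a breast-height slice of a stem point cloud.
import numpy as np

def fit_circle(p):
    """Least-squares circle through 2D points p (N, 2); returns (cx, cy, r)."""
    A = np.column_stack([2 * p[:, 0], 2 * p[:, 1], np.ones(len(p))])
    b = (p ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def ransac_dbh(slice_xy, n_iter=200, tol=0.01):
    rng = np.random.default_rng(0)
    best_inliers, best = None, None
    for _ in range(n_iter):
        sample = slice_xy[rng.choice(len(slice_xy), 3, replace=False)]
        cx, cy, r = fit_circle(sample)
        resid = np.abs(np.hypot(slice_xy[:, 0] - cx, slice_xy[:, 1] - cy) - r)
        inliers = resid < tol
        if best is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (cx, cy, r)
    cx, cy, r = fit_circle(slice_xy[best_inliers])   # refit on the inlier set
    return 2 * r                                     # DBH in the cloud's units

stem = np.load("stem_points.npy")                    # hypothetical (N, 3) stem cloud
slice_xy = stem[np.abs(stem[:, 2] - 1.3) < 0.05][:, :2]   # ±5 cm slice at 1.3 m
print(f"estimated DBH: {ransac_dbh(slice_xy) * 100:.1f} cm")
```
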
30

Morell-Monzó, Sergio, María-Teresa Sebastiá-Frasquet, and Javier Estornell. "Cartografía del abandono de cultivos de cítricos mediante el uso de datos altimétricos: LiDAR y fotogrametría SfM." Revista de Teledetección, no. 59 (January 31, 2022): 47–58. http://dx.doi.org/10.4995/raet.2022.16698.

Abstract:
The Comunitat Valenciana region (Spain) is the largest citrus producer in Europe. However, it has suffered an accelerated land abandonment in recent decades. Agricultural land abandonment is a global phenomenon with environmental and socio-economic implications. The small size of the agricultural parcels, the highly fragmented landscape and the low spectral separability between productive and abandoned parcels make it difficult to detect abandoned crops using moderate resolution images. In this work, an approach is applied to monitor citrus crops using altimetric data. The study uses two sources of altimetry data: LiDAR from the National Plan for Aerial Orthophotography (PNOA) and altimetric data obtained through an unmanned aerial system applying photogrammetric processes (Structure from Motion). The results showed an overall accuracy of 67.9% for the LiDAR data and 83.6% for the photogrammetric data. The high density of points in the photogrammetric data allowed the extraction of texture features from the Gray Level Co-Occurrence Matrix derived from the Canopy Height Model. The results indicate the potential of altimetry information for monitoring abandoned citrus fields, especially high-density point clouds. Future research should explore the fusion of spectral, textural and altimetric data for the study of abandoned citrus crops.
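
Gray-level co-occurrence texture features of the kind described above can be computed from a canopy height model as sketched below, using scikit-image as an assumed implementation; the quantisation level and file name are hypothetical.

```python
# Sketch: GLCM texture descriptors from a quantised canopy height model.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

chm = np.load("parcel_chm.npy")                       # hypothetical CHM of one parcel
levels = 32
q = np.clip(chm / chm.max() * (levels - 1), 0, levels - 1).astype(np.uint8)

glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                    levels=levels, symmetric=True, normed=True)

features = {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # texture descriptors that could feed a classifier
```
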
31

Alexiou, Simoni, Georgios Deligiannakis, Aggelos Pallikarakis, Ioannis Papanikolaou, Emmanouil Psomiadis, and Klaus Reicherter. "Comparing High Accuracy t-LiDAR and UAV-SfM Derived Point Clouds for Geomorphological Change Detection." ISPRS International Journal of Geo-Information 10, no. 6 (May 29, 2021): 367. http://dx.doi.org/10.3390/ijgi10060367.

Full text
Abstract:
Analysis of two small semi-mountainous catchments in central Evia island, Greece, highlights the advantages of Unmanned Aerial Vehicle (UAV) and Terrestrial Laser Scanning (TLS) based change detection methods. We use point clouds derived by both methods at two sites (S1 & S2) to analyse the effects of a recent wildfire on soil erosion. Results indicate that topsoil movements on the order of a few centimetres, occurring within a few months, can be estimated. Erosion at S2 is precisely delineated by both methods, yielding a mean value of 1.5 cm within four months. At S1, the comparison of UAV-derived point clouds quantifies annual soil erosion more accurately, showing a maximum annual erosion rate of 48 cm. UAV-derived point clouds appear to be more accurate for displaying and measuring channel erosion, while slope wash is more precisely estimated using TLS. Analysis of point cloud time series is a reliable and fast process for soil erosion assessment, especially in rapidly changing environments with difficult access for direct measurement methods. This study will contribute to proper georesource management by defining the best-suited methodology for soil erosion assessment after a wildfire in Mediterranean environments.
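A minimal sketch of multi-temporal point-cloud differencing in the spirit of the comparison above (not the authors' workflow): every post-event point is matched to its planimetric nearest neighbour in the pre-event cloud and the vertical difference is taken as the change. The synthetic clouds and thresholds are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# two hypothetical epochs of the same hillslope (x, y, z in metres)
pre = np.random.default_rng(0).random((50_000, 3)) * [50, 50, 0]   # flat reference surface
post = pre + [0, 0, -0.015]                                        # ~1.5 cm of uniform lowering

# nearest neighbour in the pre-event cloud for every post-event point, matched in plan (x, y)
tree = cKDTree(pre[:, :2])
_, idx = tree.query(post[:, :2], k=1)

# signed vertical difference: negative values indicate surface lowering (erosion)
dz = post[:, 2] - pre[idx, 2]
print(f"mean change {dz.mean() * 100:.1f} cm, "
      f"eroded area fraction {(dz < -0.005).mean():.2f}")
```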
APA, Harvard, Vancouver, ISO, and other styles
32

Hillman, Samuel, Bryan Hally, Luke Wallace, Darren Turner, Arko Lucieer, Karin Reinke, and Simon Jones. "High-Resolution Estimates of Fire Severity—An Evaluation of UAS Image and LiDAR Mapping Approaches on a Sedgeland Forest Boundary in Tasmania, Australia." Fire 4, no. 1 (March 18, 2021): 14. http://dx.doi.org/10.3390/fire4010014.

Full text
Abstract:
With an increase in the frequency and severity of wildfires across the globe and resultant changes to long-established fire regimes, the mapping of fire severity is a vital part of monitoring ecosystem resilience and recovery. The emergence of unoccupied aircraft systems (UAS) and compact sensors (RGB and LiDAR) provides new opportunities to map fire severity. This paper compares metrics derived from UAS Light Detection and Ranging (LiDAR) point clouds and UAS image-based products to classify fire severity. A workflow which derives novel metrics describing vegetation structure and fire severity from UAS remote sensing data is developed that fully utilises the vegetation information available in both data sources. UAS imagery and LiDAR data were captured pre- and post-fire over a 300 m by 300 m study area in Tasmania, Australia. The study area featured a vegetation gradient from sedgeland vegetation (e.g., button grass, 0.2 m) to forest (e.g., Eucalyptus obliqua and Eucalyptus globulus, 50 m). To classify the vegetation and fire severity, a comprehensive set of variables describing structural, textural and spectral characteristics was gathered using UAS image and UAS LiDAR datasets. A recursive feature elimination process was used to highlight the subsets of variables to be included in random forest classifiers. The classifier was then used to map vegetation and severity across the study area. The results indicate that UAS LiDAR provided overall accuracy similar to UAS image and combined (UAS LiDAR and UAS image predictor values) data streams in classifying vegetation (UAS image: 80.6%; UAS LiDAR: 78.9%; and Combined: 83.1%) and severity in areas of forest (UAS image: 76.6%; UAS LiDAR: 74.5%; and Combined: 78.5%) and areas of sedgeland (UAS image: 72.4%; UAS LiDAR: 75.2%; and Combined: 76.6%). These results indicate that UAS SfM and LiDAR point clouds can be used to assess fire severity at very high spatial resolution.
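A generic sketch of the feature-selection-plus-random-forest step described above (not the authors' code); the predictor matrix standing in for the structural, textural and spectral variables is simulated with scikit-learn utilities.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split

# simulated stand-in for per-pixel predictors (spectral, textural, structural)
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# recursive feature elimination with cross-validation wrapped around a random forest
rf = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1)
selector = RFECV(rf, step=2, cv=3, scoring="accuracy", n_jobs=-1)
selector.fit(X_tr, y_tr)

print(f"{selector.n_features_} predictors retained")
print(f"overall accuracy on held-out pixels: {selector.score(X_te, y_te):.3f}")
```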
APA, Harvard, Vancouver, ISO, and other styles
33

Guerin, Antoine, Antonio Abellán, Battista Matasci, Michel Jaboyedoff, Marc-Henri Derron, and Ludovic Ravanel. "Brief communication: 3-D reconstruction of a collapsed rock pillar from Web-retrieved images and terrestrial lidar data – the 2005 event of the west face of the Drus (Mont Blanc massif)." Natural Hazards and Earth System Sciences 17, no. 7 (July 18, 2017): 1207–20. http://dx.doi.org/10.5194/nhess-17-1207-2017.

Full text
Abstract:
In June 2005, a series of major rockfall events completely wiped out the Bonatti Pillar located in the legendary Drus west face (Mont Blanc massif, France). Terrestrial lidar scans of the west face were acquired after this event, but no pre-event point cloud is available. Thus, in order to reconstruct the volume and the shape of the collapsed blocks, a 3-D model has been built using photogrammetry (structure-from-motion (SfM) algorithms) based on 30 pictures collected on the Web. All these pictures were taken between September 2003 and May 2005. We then reconstructed the shape and volume of the fallen compartment by comparing the SfM model with terrestrial lidar data acquired in October 2005 and November 2011. The volume is calculated to be 292 680 m3 (±5.6 %). This result is close to the value previously assessed by Ravanel and Deline (2008) for this same rock avalanche (265 000 ± 10 000 m3). The difference between these two estimations can be explained by the rounded shape of the volume determined by photogrammetry, which may lead to a volume overestimation. However, it cannot be excluded that the volume calculated by Ravanel and Deline (2008) is slightly underestimated, since the thickness of the blocks was assessed manually from historical photographs.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhou, Tian, Seyyed Meghdad Hasheminasab, Radhika Ravi, and Ayman Habib. "LiDAR-Aided Interior Orientation Parameters Refinement Strategy for Consumer-Grade Cameras Onboard UAV Remote Sensing Systems." Remote Sensing 12, no. 14 (July 15, 2020): 2268. http://dx.doi.org/10.3390/rs12142268.

Full text
Abstract:
Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
APA, Harvard, Vancouver, ISO, and other styles
35

Naimaee, R., M. Saadatseresht, and M. Omidalizarandi. "AUTOMATIC EXTRACTION OF CONTROL POINTS FROM 3D LIDAR MOBILE MAPPING AND UAV IMAGERY FOR AERIAL TRIANGULATION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W1-2022 (January 14, 2023): 581–88. http://dx.doi.org/10.5194/isprs-annals-x-4-w1-2022-581-2023.

Full text
Abstract:
Installing targets and measuring them as ground control points (GCPs) are time-consuming and cost-inefficient tasks in a UAV photogrammetry project. This research aims to automatically extract GCPs from 3D LiDAR mobile mapping system (L-MMS) measurements and UAV imagery to perform aerial triangulation in a UAV photogrammetric network. The L-MMS allows the acquisition of 3D point clouds of an urban environment, including floors and facades of buildings, with an accuracy of a few centimetres. Integration of UAV imagery as complementary information makes it possible to reduce the acquisition time as well as to increase the level of automation in a production line, thereby yielding higher-quality measurements and more diverse products. This research hypothesises that the spatial accuracy of the L-MMS is higher than that of the UAV photogrammetric point clouds. Tie points are extracted from the UAV imagery based on the well-known SIFT method and then matched. The structure from motion (SfM) algorithm is applied to estimate the 3D object coordinates of the matched tie points. Rigid registration is carried out between the point clouds obtained from the L-MMS and the SfM. For each tie point extracted from the SfM point cloud, its neighbouring points are selected from the L-MMS point cloud, a plane is fitted to them, and the tie point is projected onto that plane; this is how the LiDAR-based control points (LCPs) are calculated. The re-projection errors of the analyses carried out on a test dataset of the Glian area in Iran show an accuracy of half a pixel, corresponding to a range accuracy of a few centimetres. Finally, a significant speed-up of survey operations is achieved, along with an improvement in the spatial accuracy of the extracted LCPs.
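A compact sketch of the plane-fit-and-project idea described above (assumed array names, not the authors' implementation): for one SfM tie point, neighbouring mobile-mapping LiDAR points are gathered, a plane is fitted by a least-squares (SVD) fit, and the tie point is projected onto that plane to obtain a LiDAR-based control point.

```python
import numpy as np
from scipy.spatial import cKDTree

def lidar_control_point(tie_xyz, lidar_xyz, radius=0.5):
    """Project one SfM tie point onto a plane fitted to nearby LiDAR points."""
    tree = cKDTree(lidar_xyz)
    idx = tree.query_ball_point(tie_xyz, r=radius)
    if len(idx) < 10:                       # not enough support for a stable plane
        return None
    nbrs = lidar_xyz[idx]
    centroid = nbrs.mean(axis=0)
    # plane normal = direction of smallest variance of the neighbourhood
    _, _, vT = np.linalg.svd(nbrs - centroid)
    normal = vT[-1]
    # orthogonal projection of the tie point onto the fitted plane
    return tie_xyz - np.dot(tie_xyz - centroid, normal) * normal

# toy example: LiDAR points on a gently sloping patch, plus one biased SfM tie point
rng = np.random.default_rng(1)
lidar = np.c_[rng.uniform(0, 2, 500), rng.uniform(0, 2, 500), np.zeros(500)]
lidar[:, 2] = 0.1 * lidar[:, 0] + rng.normal(0, 0.005, 500)   # cm-level noise
tie = np.array([1.0, 1.0, 0.3])
print("LiDAR control point:", lidar_control_point(tie, lidar))
```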
APA, Harvard, Vancouver, ISO, and other styles
36

Kalacska, Margaret, J. Pablo Arroyo-Mora, and Oliver Lucanus. "Comparing UAS LiDAR and Structure-from-Motion Photogrammetry for Peatland Mapping and Virtual Reality (VR) Visualization." Drones 5, no. 2 (May 9, 2021): 36. http://dx.doi.org/10.3390/drones5020036.

Full text
Abstract:
The mapping of peatland microtopography (e.g., hummocks and hollows) is key for understanding and modeling complex hydrological and biochemical processes. Here we compare unmanned aerial system (UAS) derived structure-from-motion (SfM) photogrammetry and LiDAR point clouds and digital surface models of an ombrotrophic bog, and we assess the utility of these technologies in terms of payload, efficiency, and end product quality (e.g., point density, microform representation, etc.). In addition, given their generally poor accessibility and fragility, peatlands provide an ideal model to test the usability of virtual reality (VR) and augmented reality (AR) visualizations. As an integrated system, the LiDAR implementation was found to be more straightforward, with fewer points of potential failure (e.g., hardware interactions). It was also more efficient for data collection (10 vs. 18 min for 1.17 ha) and produced considerably smaller file sizes (e.g., 51 MB vs. 1 GB). However, SfM provided higher spatial detail of the microforms due to its greater point density (570.4 vs. 19.4 pts/m2). Our VR/AR assessment revealed that the most immersive user experience was achieved from the Oculus Quest 2 compared to Google Cardboard VR viewers or mobile AR, showcasing the potential of VR for natural sciences in different environments. We expect VR implementations in environmental sciences to become more popular, as evaluations such as the one shown in our study are carried out for different ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Shenman, Pengjie Tao, Lei Wang, Yaolin Hou, and Zhihua Hu. "Improving Details of Building Façades in Open LiDAR Data Using Ground Images." Remote Sensing 11, no. 4 (February 18, 2019): 420. http://dx.doi.org/10.3390/rs11040420.

Full text
Abstract:
Recent open data initiatives allow free access to a vast amount of light detection and ranging (LiDAR) data in many cities. However, most open LiDAR data of cities are acquired by airborne scanning, where points on building façades are sparse or even completely missing due to occlusions in the urban environment, leading to the absence of façade details. This paper presents an approach for improving the LiDAR data coverage on building façades by using a point cloud generated from ground images. A coarse-to-fine strategy is proposed to fuse these two point clouds from different sources with very limited overlap. First, the façade point cloud generated from ground images is leveled by adjusting the façade normal to be perpendicular to the upright direction. Then, the leveled façade point cloud is geolocated by aligning the images' GPS data with their structure from motion (SfM) coordinates. Next, a modified coherent point drift algorithm with (surface) normal consistency is proposed to accurately align the façade point cloud to the LiDAR data. The significance of this work resides in the use of 2D overlapping points on the building outlines instead of the limited 3D overlap between the two point clouds. This way, we can still achieve reliable and precise registration under incomplete coverage and ambiguous correspondence. Experiments show that the proposed approach can significantly improve the façade details in open LiDAR data, and achieve 2 to 10 times higher registration accuracy when compared to classic registration methods.
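The registration in the paper is a modified coherent point drift with normal consistency; as a generic stand-in for the final fine-alignment step, the sketch below uses point-to-plane ICP from Open3D. The file names, the correspondence distance and the identity initialisation are assumptions.

```python
import numpy as np
import open3d as o3d

# hypothetical inputs: dense facade cloud from ground images and sparse airborne LiDAR
facade = o3d.io.read_point_cloud("facade_sfm.ply")
als = o3d.io.read_point_cloud("airborne_lidar.ply")

# point-to-plane ICP needs normals on the target cloud
als.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))

# a coarse alignment (e.g. from the GPS geolocation step) would go here;
# the identity matrix is only a placeholder for this sketch
init = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    facade, als, 0.5, init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("fitness:", result.fitness)          # fraction of matched facade points
print("RMSE   :", result.inlier_rmse)      # residual after alignment (m)
facade.transform(result.transformation)    # apply the estimated rigid transform
```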
APA, Harvard, Vancouver, ISO, and other styles
38

Štroner, Martin, Rudolf Urban, and Lenka Línková. "A New Method for UAV Lidar Precision Testing Used for the Evaluation of an Affordable DJI ZENMUSE L1 Scanner." Remote Sensing 13, no. 23 (November 27, 2021): 4811. http://dx.doi.org/10.3390/rs13234811.

Full text
Abstract:
Lately, affordable unmanned aerial vehicle (UAV)-lidar systems have started to appear on the market, highlighting the need for methods facilitating proper verification of their accuracy. However, the dense point cloud produced by such systems makes the identification of individual points that could be used as reference points difficult. In this paper, we propose such a method utilizing accurately georeferenced targets covered with high-reflectivity foil, which can be easily extracted from the cloud; their centers can be determined and used for the calculation of the systematic shift of the lidar point cloud. Subsequently, the lidar point cloud is cleaned of such systematic shift and compared with a dense SfM point cloud, thus yielding the residual accuracy. We successfully applied this method to the evaluation of an affordable DJI ZENMUSE L1 scanner mounted on the UAV DJI Matrice 300 and found that the accuracies of this system (3.5 cm in all directions after removal of the global georeferencing error) are better than manufacturer-declared values (10/5 cm horizontal/vertical). However, evaluation of the color information revealed a relatively high (approx. 0.2 m) systematic shift.
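The target-based shift estimation described above reduces to a few lines; the sketch below (hypothetical coordinates, not the authors' procedure) assumes the high-reflectivity returns have already been isolated by an intensity threshold, clusters them into one group per target, and averages the offsets of the cluster centres from the surveyed target centres.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# surveyed (GNSS) centres of the high-reflectivity targets, one row per target (m)
targets_ref = np.array([[20.0, 30.0, 0.5],
                        [70.0, 25.0, 0.4],
                        [55.0, 80.0, 0.6]])

# synthetic stand-in for intensity-filtered lidar returns: points scatter around
# each target centre and share a common georeferencing bias of a few centimetres
bias = np.array([0.03, -0.02, 0.05])
rng = np.random.default_rng(0)
bright = np.vstack([t + bias + rng.normal(0, 0.01, (200, 3)) for t in targets_ref])

# cluster the bright returns into one group per target and take the cluster centres
centres, _ = kmeans2(bright, k=len(targets_ref), minit="++")

# match each estimated centre to its nearest surveyed target and average the offsets
order = [int(np.argmin(np.linalg.norm(targets_ref - c, axis=1))) for c in centres]
shift = (centres - targets_ref[order]).mean(axis=0)
print("estimated systematic shift of the lidar cloud (m):", shift)   # close to `bias`
```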
APA, Harvard, Vancouver, ISO, and other styles
39

Gomez, Christopher, Muhammad Anggri Setiawan, Noviyanti Listyaningrum, Sandy Budi Wibowo, Danang Sri Hadmoko, Wiwit Suryanto, Herlan Darmawan, et al. "LiDAR and UAV SfM-MVS of Merapi Volcanic Dome and Crater Rim Change from 2012 to 2014." Remote Sensing 14, no. 20 (October 17, 2022): 5193. http://dx.doi.org/10.3390/rs14205193.

Full text
Abstract:
Spatial approaches based on the deformation measurement of volcanic domes and crater rims are key to evaluating the activity of a volcano such as Merapi Volcano, where the associated disaster risk regularly takes lives. Within this framework, this study aims to detect localized topographic change in the summit area that occurred concomitantly with the reported dome growth and explosion. The methodology was focused on two sets of data, one LiDAR-based dataset from 2012 and one UAV dataset from 2014. The results show that during the period 2012–2014, the crater walls were 100–120 m above the crater floor at their maximum (from the north to the east–southeast sector), while the west and north sectors present a topographic range of 40–80 m. During the period 2012–2014, the crater rim around the dome was generally stable (no large collapse). The opening of a new vent on the surface of the dome displaced an equivalent volume of 2.04 × 10^4 m3, corresponding to a maximum of −9 m (+/−0.9 m) vertically. The exploded material partly fell within the crater, increasing the accumulated loose material while leaving "hollows" where the vents are located, although the potential presence of debris inside these vents made it difficult to determine the exact size of these openings. Despite a measure of the error from the two DEMs, adding a previously published dataset shows further discrepancies, suggesting that there is also a technical need to develop point-cloud technologies for active volcanic craters.
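The displaced-volume figure quoted above is, in essence, a DEM-of-difference computation; the sketch below shows the arithmetic with synthetic rasters (cell size, DSM values and the vent geometry are assumptions, not the study data).

```python
import numpy as np

cell = 0.5                                             # DEM resolution in metres
rng = np.random.default_rng(0)

# synthetic pre- and post-event DSMs of a dome surface (elevations in metres)
dsm_2012 = 100 + rng.normal(0, 0.05, (400, 400))
dsm_2014 = dsm_2012.copy()
yy, xx = np.mgrid[0:400, 0:400]
vent = (xx - 200) ** 2 + (yy - 200) ** 2 < 40 ** 2     # footprint of a new vent
dsm_2014[vent] -= 4.0                                  # several metres of lowering

dz = dsm_2014 - dsm_2012
volume_removed = -dz[dz < 0].sum() * cell ** 2         # m3 of displaced material
max_lowering = dz.min()
print(f"displaced volume ~ {volume_removed:,.0f} m3, max lowering {max_lowering:.1f} m")
```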
APA, Harvard, Vancouver, ISO, and other styles
40

Corradetti, Amerigo, Thomas Seers, Marco Mercuri, Chiara Calligaris, Alice Busetti, and Luca Zini. "Benchmarking Different SfM-MVS Photogrammetric and iOS LiDAR Acquisition Methods for the Digital Preservation of a Short-Lived Excavation: A Case Study from an Area of Sinkhole Related Subsidence." Remote Sensing 14, no. 20 (October 17, 2022): 5187. http://dx.doi.org/10.3390/rs14205187.

Full text
Abstract:
We are witnessing a digital revolution in geoscientific field data collection and data sharing, driven by the availability of low-cost sensory platforms capable of generating accurate surface reconstructions as well as the proliferation of apps and repositories which can leverage their data products. Whilst the wider proliferation of 3D close-range remote sensing applications is welcome, improved accessibility is often at the expense of model accuracy. To test the accuracy of consumer-grade close-range 3D model acquisition platforms commonly employed for geo-documentation, we have mapped a 20-m-wide trench using aerial and terrestrial photogrammetry, as well as iOS LiDAR. The latter was used to map the trench using both the 3D Scanner App and PIX4Dcatch applications. Comparative analysis suggests that only in optimal scenarios can geotagged field-based photographs alone result in models with acceptable scaling errors, though even in these cases, the orientation of the transformed model is not sufficiently accurate for most geoscientific applications requiring structural metric data. The apps tested for iOS LiDAR acquisition were able to produce accurately scaled models, though surface deformations caused by simultaneous localization and mapping (SLAM) errors are present. Finally, of the tested apps, PIX4Dcatch is the iOS LiDAR acquisition tool able to produce correctly oriented models.
APA, Harvard, Vancouver, ISO, and other styles
41

Suh, Ji Won, and William Ouimet. "Generation of High-Resolution Orthomosaics from Historical Aerial Photographs Using Structure-from-Motion and Lidar Data." Photogrammetric Engineering & Remote Sensing 89, no. 1 (January 1, 2023): 37–46. http://dx.doi.org/10.14358/pers.22-00063r2.

Full text
Abstract:
This study presents a method to generate historical orthomosaics using Structure-from-Motion (SfM) photogrammetry, historical aerial photographs, and lidar data, and then analyzes the horizontal accuracy and factors that can affect the quality of historical orthoimagery products made with these approaches. Two sets of historical aerial photographs (1934 and 1951) were analyzed, focused on the town of Woodstock in Connecticut, U.S.A. Ground control points (GCPs) for georeferencing were obtained by overlaying multiple data sets, including lidar elevation data and derivative hillshades, and recent orthoimagery. Root-Mean-Square Error values of check points (CPs) for 1934 and 1951 orthomosaics without extreme outliers are 0.83 m and 1.37 m, respectively. Results indicate that orthomosaics can be used for standard mapping and geographic information systems (GIS) work according to the ASPRS 1990 accuracy standard. In addition, results emphasize that three main factors can affect the horizontal accuracy of orthomosaics: (1) types of CPs, (2) the number of tied photos, and (3) terrain.
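The accuracy figures reported above come down to a root-mean-square-error computation over check points; a short sketch with hypothetical coordinates is shown below.

```python
import numpy as np

# check-point coordinates measured on the orthomosaic vs. reference survey (metres)
ortho_xy = np.array([[352101.42, 4645220.10],
                     [352530.88, 4645711.63],
                     [353004.15, 4645098.77]])
ref_xy = np.array([[352100.95, 4645220.84],
                   [352531.70, 4645710.92],
                   [353003.40, 4645099.55]])

resid = ortho_xy - ref_xy
rmse_x, rmse_y = np.sqrt((resid ** 2).mean(axis=0))          # per-axis RMSE
rmse_r = np.sqrt((resid ** 2).sum(axis=1).mean())            # combined horizontal RMSE
print(f"RMSE_x {rmse_x:.2f} m, RMSE_y {rmse_y:.2f} m, RMSE_r {rmse_r:.2f} m")
```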
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Yi-Chun, Yi-Ting Cheng, Tian Zhou, Radhika Ravi, Seyyed Hasheminasab, John Flatt, Cary Troy, and Ayman Habib. "Evaluation of UAV LiDAR for Mapping Coastal Environments." Remote Sensing 11, no. 24 (December 4, 2019): 2893. http://dx.doi.org/10.3390/rs11242893.

Full text
Abstract:
Unmanned Aerial Vehicle (UAV)-based remote sensing techniques have demonstrated great potential for monitoring rapid shoreline changes. With image-based approaches utilizing Structure from Motion (SfM), high-resolution Digital Surface Models (DSMs) and orthophotos can be generated efficiently using UAV imagery. However, image-based mapping yields relatively poor results in low-textured areas as compared to those from LiDAR. This study demonstrates the applicability of UAV LiDAR for mapping coastal environments. A custom-built UAV-based mobile mapping system is used to simultaneously collect LiDAR and imagery data. The quality of LiDAR, as well as image-based point clouds, is investigated and compared over different geomorphic environments in terms of their point density, relative and absolute accuracy, and area coverage. The results suggest that both UAV LiDAR and image-based techniques provide high-resolution and high-quality topographic data, and the point clouds generated by both techniques are compatible within a 5 to 10 cm range. UAV LiDAR has a clear advantage in terms of large and uniform ground coverage over different geomorphic environments, higher point density, and ability to penetrate through vegetation to capture points below the canopy. Furthermore, UAV LiDAR-based data acquisitions are assessed for their applicability in monitoring shoreline changes over two actively eroding sandy beaches along southern Lake Michigan, Dune Acres and Beverly Shores, through repeated field surveys. The results indicate a considerable volume loss and ridge point retreat over an extended period of one year (May 2018 to May 2019) as well as a short storm-induced period of one month (November 2018 to December 2018). The foredune ridge recession ranges from 0 m to 9 m. The average volume loss at Dune Acres is 18.2 cubic meters per meter and 12.2 cubic meters per meter within the one-year period and storm-induced period, respectively, highlighting the importance of episodic events in coastline changes. The average volume loss at Beverly Shores is 2.8 cubic meters per meter and 2.6 cubic meters per meter within the survey period and storm-induced period, respectively.
APA, Harvard, Vancouver, ISO, and other styles
43

Luppichini, Favalli, Isola, Nannipieri, Giannecchini, and Bini. "Influence of Topographic Resolution and Accuracy on Hydraulic Channel Flow Simulations: Case Study of the Versilia River (Italy)." Remote Sensing 11, no. 13 (July 9, 2019): 1630. http://dx.doi.org/10.3390/rs11131630.

Full text
Abstract:
The Versilia plain, a well-known and populated tourist area in northwestern Tuscany, is historically subject to floods. The last hydrogeological disaster, in 1996, resulted in 13 deaths and in losses worth hundreds of millions of euros. Sound management of the hydraulic and flooding risks of this territory is therefore mandatory. A 7.5 km-long stretch of the Versilia River was simulated in one dimension using river cross-sections with the FLO-2D Basic model. Simulations of the channel flow and of its maximum flow rate under different input conditions highlight the key role of topography: uncertainties in the topography introduce much larger errors than uncertainties in roughness. The best digital elevation model (DEM) available for the area, a 1-m light detection and ranging (LiDAR) DEM dating back to 2008–2010, does not reveal all the hydraulic structures (e.g., the 40 cm thick embankment walls), lowering the maximum flow rate to only 150 m3/s, much lower than the expected value of 400 m3/s. In order to improve the existing input topography, three different possibilities were considered: (1) to add the embankment walls to the LiDAR data with a targeted Differential GPS (DGPS) survey, (2) to acquire the cross-section profiles necessary for simulation with a targeted DGPS survey, and (3) to obtain a very high resolution topography using structure from motion (SfM) techniques from images acquired with an unmanned aerial vehicle (UAV). The simulations based on all these options deliver maximum flow rates in agreement with estimated values. Resampling of the 10 cm cell size SfM-DSM allowed us to investigate the influence of topographic resolution on hydraulic channel flow, demonstrating that a change in the resolution from 30 to 50 cm alone introduced a 10% loss in the maximum flow rate. UAV-SfM-derived DEMs are low cost, relatively fast, and very accurate, and they allow for the monitoring of channel morphology variations in real time and for keeping the hydraulic models updated, thus providing an excellent tool for managing hydraulic and flooding risks.
APA, Harvard, Vancouver, ISO, and other styles
44

Kadhim, Israa, and Fanar Abed. "The Potential of LiDAR and UAV-Photogrammetric Data Analysis to Interpret Archaeological Sites: A Case Study of Chun Castle in South-West England." ISPRS International Journal of Geo-Information 10, no. 1 (January 19, 2021): 41. http://dx.doi.org/10.3390/ijgi10010041.

Full text
Abstract:
With the increasing demand to use remote sensing approaches, such as aerial photography, satellite imagery, and LiDAR, in archaeological applications, there is still a limited number of studies assessing the differences between remote sensing methods in extracting new archaeological finds. Therefore, this work aims to critically compare two types of fine-scale remotely sensed data: LiDAR and Unmanned Aerial Vehicle (UAV)-derived Structure from Motion (SfM) photogrammetry. To achieve this, aerial imagery and airborne LiDAR datasets of Chun Castle were acquired, processed, analyzed, and interpreted. Chun Castle is one of the most remarkable ancient sites in Cornwall County (Southwest England) that had not been surveyed and explored by non-destructive techniques. The work outlines the approaches that were applied to the remotely sensed data to reveal potential remains: visualization methods (e.g., hillshade and slope raster images), ISODATA clustering, and Support Vector Machine (SVM) algorithms. The results display various archaeological remains within the study site that have been successfully identified. Applying multiple methods and algorithms has successfully improved our understanding of spatial attributes within the landscape. The outcomes demonstrate how rasters derived from inexpensive approaches can be used to identify archaeological remains and hidden monuments, which has the potential to revolutionize archaeological understanding.
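As an illustration of the visualization products mentioned above (not the authors' GIS workflow), the sketch below derives slope and an analytical hillshade from a digital terrain model with NumPy; the synthetic DTM, cell size and illumination angles are assumptions, and azimuth/aspect conventions vary between GIS packages.

```python
import numpy as np

def slope_and_hillshade(dem, cell=1.0, azimuth=315.0, altitude=45.0):
    """Slope (degrees) and a classic analytical hillshade (0-255) from a DEM array."""
    dzdy, dzdx = np.gradient(dem, cell)                  # derivatives along rows, columns
    slope = np.arctan(np.hypot(dzdx, dzdy))              # slope angle in radians
    aspect = np.arctan2(dzdy, -dzdx)                     # aspect in the math convention
    zenith = np.radians(90.0 - altitude)
    az = np.radians((360.0 - azimuth + 90.0) % 360.0)    # sun azimuth, math convention
    shade = (np.cos(zenith) * np.cos(slope) +
             np.sin(zenith) * np.sin(slope) * np.cos(az - aspect))
    return np.degrees(slope), np.clip(255 * shade, 0, 255).astype(np.uint8)

# synthetic stand-in for a 1 m lidar DTM containing a low, earthwork-like mound
yy, xx = np.mgrid[0:200, 0:200]
dtm = 50.0 + 1.5 * np.exp(-((xx - 100) ** 2 + (yy - 100) ** 2) / 800.0)
slope_deg, shade = slope_and_hillshade(dtm, cell=1.0)
print(slope_deg.max().round(2), shade.min(), shade.max())
```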
APA, Harvard, Vancouver, ISO, and other styles
45

Sangjan, Worasit, and Sindhuja Sankaran. "Phenotyping Architecture Traits of Tree Species Using Remote Sensing Techniques." Transactions of the ASABE 64, no. 5 (2021): 1611–24. http://dx.doi.org/10.13031/trans.14419.

Full text
Abstract:
Highlights: Tree canopy architecture traits are associated with tree productivity and management. Understanding these traits is important for both precision agriculture and phenomics applications. Remote sensing platforms (satellite, UAV, etc.) and multiple approaches (SfM, LiDAR) have been used to assess these traits. 3D reconstruction of tree canopies allows the measurement of tree height, crown area, and canopy volume. Abstract: Tree canopy architecture is associated with light use efficiency and thus productivity. Given the modern training systems in orchard tree fruit systems, modification of tree architecture is becoming important for easier management of crops (e.g., pruning, thinning, chemical application, harvesting, etc.) while maintaining fruit quality and quantity. Similarly, in forest environments, architecture can influence the competitiveness and balance between tree species in the ecosystem. This article reviews the literature related to sensing approaches used for assessing architecture traits and the factors that influence such evaluation processes. Digital imagery integrated with structure from motion analysis and both terrestrial and aerial light detection and ranging (LiDAR) systems have been commonly used. In addition, satellite imagery and other techniques have been explored. Some of the major findings and some critical considerations for such measurement methods are summarized here. Keywords: Canopy volume, LiDAR system, Structure from motion, Tree height, UAV.
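A minimal sketch of the three canopy metrics highlighted above (tree height, crown area, canopy volume), computed from a canopy height model of a single tree; the synthetic CHM, its resolution and the crown threshold are assumptions.

```python
import numpy as np

cell = 0.1                                     # CHM resolution in metres
yy, xx = np.mgrid[0:120, 0:120]

# synthetic canopy height model of one orchard tree (paraboloid crown, metres)
r2 = (xx - 60) ** 2 + (yy - 60) ** 2
chm = np.clip(3.5 - r2 * (3.5 / 50 ** 2), 0, None)

crown = chm > 0.5                              # pixels considered part of the crown
tree_height = chm.max()                        # m
crown_area = crown.sum() * cell ** 2           # m2
canopy_volume = chm[crown].sum() * cell ** 2   # m3 (height integrated over crown area)
print(f"height {tree_height:.2f} m, crown area {crown_area:.2f} m2, "
      f"volume {canopy_volume:.2f} m3")
```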
APA, Harvard, Vancouver, ISO, and other styles
46

Martínez-Fernández, Adrián, Enrique Serrano, Alfonso Pisabarro, Manuel Sánchez-Fernández, José Juan de Sanjosé, Manuel Gómez-Lende, Gizéh Rangel-de Lázaro, and Alfonso Benito-Calvo. "The Influence of Image Properties on High-Detail SfM Photogrammetric Surveys of Complex Geometric Landforms: The Application of a Consumer-Grade UAV Camera in a Rock Glacier Survey." Remote Sensing 14, no. 15 (July 23, 2022): 3528. http://dx.doi.org/10.3390/rs14153528.

Full text
Abstract:
The detailed description of processing workflows in Structure from Motion (SfM) surveys using unmanned aerial vehicles (UAVs) is not common in geomorphological research. One of the aspects frequently overlooked in photogrammetric reconstruction is image characteristics. In this context, the present study aims to determine whether the format or properties (e.g., exposure, sharpening, lens corrections) of the images used in the SfM process can affect high-detail surveys of complex geometric landforms such as rock glaciers. For this purpose, images generated (DNG and JPEG) and derived (TIFF) from low-cost UAV systems widely used by the scientific community are applied. The case study is carried out through a comprehensive flight plan with ground control, and differences among surveys are assessed visually and geometrically. Geometric evaluation is based on 2.5D and 3D perspectives and a ground-based LiDAR benchmark. The results show that the lens profiles applied by some low-cost UAV cameras to the images can significantly alter the geometry among photo-reconstructions, to the extent that they can influence monitoring activities, with variations of around ±5 cm in areas with close control and over ±20 cm (10 times the ground sample distance) on surfaces outside the ground control surroundings. The terrestrial position of the laser scanner and the scene's changing topography result in uneven surface sampling, which makes it challenging to determine which set of images best fits the LiDAR benchmark. Other effects of the image properties appear as minor variations scattered throughout the survey or modifications to the RGB values of the point clouds or orthomosaics, with no critical impact on geomorphological studies.
APA, Harvard, Vancouver, ISO, and other styles
47

Deligiannakis, Georgios, Aggelos Pallikarakis, Ioannis Papanikolaou, Simoni Alexiou, and Klaus Reicherter. "Detecting and Monitoring Early Post-Fire Sliding Phenomena Using UAV–SfM Photogrammetry and t-LiDAR-Derived Point Clouds." Fire 4, no. 4 (November 20, 2021): 87. http://dx.doi.org/10.3390/fire4040087.

Full text
Abstract:
Soil changes, including landslides and erosion, are some of the most prominent post-fire effects in Mediterranean ecosystems. Landslide detection and monitoring play an essential role in mitigation measures. We tested two different methodologies at five burned sites with different characteristics in Central Greece. We compared Unmanned Aerial Vehicle (UAV)-derived high-resolution Digital Surface Models and point clouds with terrestrial Light Detection and Ranging (LiDAR)-derived point clouds to reveal new cracks and monitor scarps of pre-existing landslides. New cracks and scarps were revealed at two sites after the wildfire, measuring up to 27 m in length and up to 25 ± 5 cm in depth. Pre-existing scarps at both Kechries sites appeared to be active, with additional vertical displacements ranging from 5–15 ± 5 cm. In addition, the pre-existing landslide in Magoula expanded by 8%. Due to vegetation regrowth, no changes could be detected in the Agios Stefanos pre-existing landslide. This high-spatial-resolution mapping of slope deformations can be used as a landslide precursor, assisting prevention measures. Considering the lack of vegetation after wildfires, UAV photogrammetry has great potential for tracing such early landslide indicators and is more efficient at accurately recording soil changes.
APA, Harvard, Vancouver, ISO, and other styles
48

Kolzenburg, Stephan, M. Favalli, A. Fornaciai, I. Isola, A. J. L. Harris, L. Nannipieri, and D. Giordano. "Rapid Updating and Improvement of Airborne LIDAR DEMs Through Ground-Based SfM 3-D Modeling of Volcanic Features." IEEE Transactions on Geoscience and Remote Sensing 54, no. 11 (November 2016): 6687–99. http://dx.doi.org/10.1109/tgrs.2016.2587798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Petras, V., A. Petrasova, J. Jeziorska, and H. Mitasova. "PROCESSING UAV AND LIDAR POINT CLOUDS IN GRASS GIS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 22, 2016): 945–52. http://dx.doi.org/10.5194/isprs-archives-xli-b7-945-2016.

Full text
Abstract:
Today’s methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties, such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
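The decimation step discussed above can be illustrated outside GRASS GIS with a simple grid-based thinning in NumPy (a sketch under assumed names, not the implementation of any GRASS module): one point is kept per planimetric grid cell.

```python
import numpy as np

def grid_decimate(xyz, cell=0.25):
    """Keep one point per (cell x cell) planimetric bin of a dense point cloud."""
    ij = np.floor(xyz[:, :2] / cell).astype(np.int64)        # integer bin indices in x, y
    # np.unique returns the index of the first point encountered in each occupied bin
    _, keep = np.unique(ij, axis=0, return_index=True)
    return xyz[np.sort(keep)]

# dense synthetic UAV-SfM cloud: half a million points over a 100 m x 100 m block
rng = np.random.default_rng(0)
cloud = np.c_[rng.uniform(0, 100, 500_000),
              rng.uniform(0, 100, 500_000),
              rng.normal(250, 1.0, 500_000)]

thinned = grid_decimate(cloud, cell=0.25)
print(f"{len(cloud):,} -> {len(thinned):,} points after decimation")
```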
APA, Harvard, Vancouver, ISO, and other styles
