To view other types of publications on this topic, follow the link: Imagerie laser 3D.

Journal articles on the topic "Imagerie laser 3D"

Consult the top 50 journal articles for your research on the topic "Imagerie laser 3D".

Next to every entry in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chhatkuli, S., T. Satoh, and K. Tachibana. "MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 103–6. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-103-2015.

Abstract:
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more details and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model automatically generated from aerial imagery generally lacks accuracy for roads under bridges, details under the tree canopy, isolated trees, and so on. In many cases it also suffers from undulating road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, and details under bridges. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed multi-sensor data integration approach lets the two data sets compensate for each other's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture that were missing in the original 3D model derived from aerial imagery could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, such as people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
2

Hasegawa, Kyoko, Liang Li, Naoya Okamoto, Shu Yanai, Hiroshi Yamaguchi, Atsushi Okamoto, and Satoshi Tanaka. "Application of Stochastic Point-Based Rendering to Laser-Scanned Point Clouds of Various Cultural Heritage Objects." International Journal of Automation Technology 12, no. 3 (May 1, 2018): 348–55. http://dx.doi.org/10.20965/ijat.2018.p0348.

Abstract:
Recently, we proposed stochastic point-based rendering, which enables precise and interactive-speed transparent rendering of large-scale laser-scanned point clouds. This transparent visualization method does not suffer from rendering artifacts and realizes a correct depth feel in the created 3D image. In this paper, we apply the method to several kinds of large-scale laser-scanned point clouds of cultural heritage objects and prove its wide applicability. In addition, we prove that better image quality is realized by properly eliminating points to achieve better distributional uniformity of points. Here, distributional uniformity means uniformity of the inter-point distances between nearest-neighbor points. We also demonstrate that highlighting feature regions, especially edges, in the transparent visualization helps us understand the 3D internal structures of complex laser-scanned objects. The feature regions are highlighted by properly increasing the local opacity of the regions.
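
The "distributional uniformity" invoked in this abstract (uniformity of nearest-neighbor inter-point distances) is easy to quantify. The Python sketch below is our own illustration, not code from the paper: it measures uniformity as the coefficient of variation of nearest-neighbor distances and shows one crude thinning strategy; the function names and the 0.05 spacing are assumptions made for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_distance_uniformity(points):
    """Mean, std and coefficient of variation of nearest-neighbor distances;
    a lower CV indicates a more uniform point distribution."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)   # k=2: the closest hit is the point itself
    nn = dists[:, 1]
    return nn.mean(), nn.std(), nn.std() / nn.mean()

def thin_by_min_spacing(points, min_spacing):
    """Greedy thinning: keep a point only if no previously kept point lies
    within `min_spacing` (a crude way to improve uniformity)."""
    kept = []
    tree = None
    for p in points:
        if tree is None or not tree.query_ball_point(p, r=min_spacing):
            kept.append(p)
            tree = cKDTree(np.asarray(kept))   # rebuilt for clarity, not speed
    return np.asarray(kept)

if __name__ == "__main__":
    pts = np.random.rand(2000, 3)              # stand-in for a laser-scanned cloud
    print("CV before:", nn_distance_uniformity(pts)[2])
    thinned = thin_by_min_spacing(pts, 0.05)
    print("CV after: ", nn_distance_uniformity(thinned)[2])
```
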
3

Haase, I., P. Gläser, and J. Oberst. "BUNDLE ADJUSTMENT OF SPACEBORNE DOUBLE-CAMERA PUSH-BROOM IMAGERS AND ITS APPLICATION TO LROC NAC IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1397–404. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1397-2019.

Abstract:
The TU Berlin group of the Lunar Reconnaissance Orbiter Camera (LROC) team has implemented a Bundle Adjustment (BA) for spaceborne multi-lens line-scan imagers by rigorously modeling the geometric properties of the image acquisition. The BA was applied to stereo image sets of the LROC Narrow Angle Camera (NAC), and first results show that the overall geometry of the stereo models was significantly improved. Ray intersection accuracies of initially up to several meters were homogenized within the integrated stereo models and improved to 0.14 m on average. The mean point error of the adjusted 3D object points was estimated by the BA to be 0.95 m. The inclusion of available Lunar Orbiter Laser Altimeter (LOLA) shots as 3D ground control in the BA, accurately tied to image space by a preceding co-registration, made it possible to register the final adjusted NAC DTM to the currently most accurate global lunar reference frame. The BA also provides accuracy assessments of the individual LOLA tracks used for georeferencing during the adjustment, which will be useful to further assess LOLA-derived products.
4

Namouchi, Slim, and Imed Riadh Farah. "Graph-Based Classification and Urban Modeling of Laser Scanning and Imagery: Toward 3D Smart Web Services." Remote Sensing 14, no. 1 (December 28, 2021): 114. http://dx.doi.org/10.3390/rs14010114.

Abstract:
Recently, remotely sensed data obtained via laser technology has gained great importance due to its wide use in several fields, especially in 3D urban modeling. In fact, 3D city models in urban environments are efficiently employed in many fields, such as military operations, emergency management, building and height mapping, cadastral data upgrading, monitoring of changes as well as virtual reality. These applications are essentially composed of models of structures, urban elements, ground surface and vegetation. This paper presents a workflow for modeling the structure of buildings by using laser-scanned data (LiDAR) and multi-spectral images in order to develop a 3D web service for a smart city concept. Optical vertical photography is generally utilized to extract building class, while LiDAR data is used as a source of information to create the structure of the 3D building. The building reconstruction process presented in this study can be divided into four main stages: building LiDAR points extraction, piecewise horizontal roof clustering, boundaries extraction and 3D geometric modeling. Finally, an architecture for a 3D smart service based on the CityGML interchange format is proposed.
5

Gonzalez-Barbosa, Jose-Joel, Karen Lizbeth Flores-Rodríguez, Francisco Javier Ornelas-Rodríguez, Felipe Trujillo-Romero, Erick Alejandro Gonzalez-Barbosa, and Juan B. Hurtado-Ramos. "Using mobile laser scanner and imagery for urban management applications." IAES International Journal of Robotics and Automation (IJRA) 11, no. 2 (June 1, 2022): 89. http://dx.doi.org/10.11591/ijra.v11i2.pp89-110.

Abstract:
Although autonomous navigation is one of the most prolific applications of three-dimensional (3D) point clouds and imagery, both techniques can potentially have many other applications. This work explores urban digitization tools applied to 3D geometry to perform urban tasks. We focus exclusively on compiling scientific research that merges mobile laser scanning (MLS) and imagery from vision systems. The major contribution of this review is to show the evolution of MLS combined with imagery in urban applications. We review systems used by public and private organizations to handle urban tasks such as historic preservation, roadside assistance, road infrastructure inventory, and public space study. The work pinpoints the potential and accuracy of data acquisition systems that handle both 3D point clouds and imagery data. We highlight potential future work regarding the detection of urban environment elements and the solution of urban problems. This article concludes by discussing the major constraints and struggles of current systems that use MLS combined with imagery to perform and solve urban tasks.
6

Gabara, G., and P. Sawicki. "QUALITY EVALUATION OF 3D BUILDING MODELS BASED ON LOW-ALTITUDE IMAGERY AND AIRBORNE LASER SCANNING POINT CLOUDS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 345–52. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-345-2021.

Abstract:
The term "3D building models" is used in relation to CityGML models and building information modelling. Reconstruction and modelling of 3D building objects in urban areas is becoming a common trend and finds a wide spectrum of utilitarian applications. The paper presents the quality assessment of two multifaceted 3D building models, which were obtained from two open-access databases: the Polish national Geoportal (accuracy in the LOD 2 standard) and the Trimble SketchUp Warehouse (accuracy in the LOD 2 standard with information about architectural details of façades). The Geoportal 3D models were primarily created from airborne laser scanning data (density 12 pts/sq. m, elevation accuracy up to 0.10 m) collected during the Informatic System for Country Protection against extraordinary hazards project. The testing was performed using different validation low-altitude photogrammetric datasets: a RIEGL LMS-Q680i airborne laser scanning point cloud (min. density 25 pts/sq. m and height accuracy 0.03 m), and an image-based Phase One iXU-RS 1000 point cloud (average accuracy in the horizontal and vertical planes of 0.015 m and 0.030 m, respectively). Visual comparison, heat maps based on the signed-distance function, and histograms in predefined ranges were used to evaluate the quality and accuracy of the 3D building models. Error sources that occurred during the modelling process are also discussed.
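
The evaluation tools mentioned here (distance-based heat maps and histograms in predefined ranges) can be approximated with a simple cloud-to-cloud comparison. The sketch below is our own illustration, not the authors' code: it uses unsigned nearest-neighbor distances (the paper uses signed distances, which additionally require surface normals), and the bin edges are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(eval_pts, ref_pts):
    """Unsigned nearest-neighbor distance from each evaluated point to the
    reference cloud (a simplification of signed-distance heat maps)."""
    dists, _ = cKDTree(ref_pts).query(eval_pts, k=1)
    return dists

def distance_histogram(dists, edges=(0.0, 0.05, 0.10, 0.25, 0.50, 1.00)):
    """Bin distances (metres) into predefined ranges and report percentages."""
    counts, _ = np.histogram(dists, bins=np.asarray(edges))
    return 100.0 * counts / dists.size

if __name__ == "__main__":
    model_pts = np.random.rand(5000, 3)    # points sampled from a 3D building model (toy data)
    als_pts = model_pts + np.random.normal(0, 0.03, model_pts.shape)  # reference ALS cloud
    d = cloud_to_cloud_distances(model_pts, als_pts)
    print(distance_histogram(d))
```
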
7

Cavegn, S., N. Haala, S. Nebiker, M. Rothermel, and P. Tutzauer. "Benchmarking High Density Image Matching for Oblique Airborne Imagery." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 45–52. http://dx.doi.org/10.5194/isprsarchives-xl-3-45-2014.

Abstract:
Both improvements in camera technology and new pixel-wise matching approaches have triggered the further development of software tools for image-based 3D reconstruction. Meanwhile, research groups as well as commercial vendors provide photogrammetric software to generate dense, reliable and accurate 3D point clouds and Digital Surface Models (DSM) from highly overlapping aerial images. In order to evaluate the potential of these algorithms in view of the ongoing software developments, a suitable test bed is provided by the ISPRS/EuroSDR initiative "Benchmark on High Density Image Matching for DSM Computation". This paper discusses the proposed test scenario to investigate the potential of dense matching approaches for 3D data capture from oblique airborne imagery. For this purpose, an oblique aerial image block captured at a GSD of 6 cm in the west of Zürich by a Leica RCD30 Oblique Penta camera is used. Within this paper, the potential test scenario is demonstrated using matching results from two software packages, Agisoft PhotoScan and SURE from the University of Stuttgart. As oblique images are frequently used for data capture at building facades, 3D point clouds are mainly investigated at such areas. Reference data from terrestrial laser scanning is used to evaluate data quality from dense image matching for several facade patches with respect to accuracy, density and reliability.
8

Luo, Haitao, Jinming Zhang, Xiongfei Liu, Lili Zhang, and Junyi Liu. "Large-Scale 3D Reconstruction from Multi-View Imagery: A Comprehensive Review." Remote Sensing 16, no. 5 (February 22, 2024): 773. http://dx.doi.org/10.3390/rs16050773.

Abstract:
Three-dimensional reconstruction is a key technology employed to represent virtual reality in the real world, which is valuable in computer vision. Large-scale 3D models have broad application prospects in the fields of smart cities, navigation, virtual tourism, disaster warning, and search-and-rescue missions. Unfortunately, most image-based studies currently prioritize the speed and accuracy of 3D reconstruction in indoor scenes. While there are some studies that address large-scale scenes, there has been a lack of systematic comprehensive efforts to bring together the advancements made in the field of 3D reconstruction in large-scale scenes. Hence, this paper presents a comprehensive overview of a 3D reconstruction technique that utilizes multi-view imagery from large-scale scenes. In this article, a comprehensive summary and analysis of vision-based 3D reconstruction technology for large-scale scenes are presented. The 3D reconstruction algorithms are extensively categorized into traditional and learning-based methods. Furthermore, these methods can be categorized based on whether the sensor actively illuminates objects with light sources, resulting in two categories: active and passive methods. Two active methods, namely, structured light and laser scanning, are briefly introduced. The focus then shifts to structure from motion (SfM), stereo matching, and multi-view stereo (MVS), encompassing both traditional and learning-based approaches. Additionally, a novel approach of neural-radiance-field-based 3D reconstruction is introduced. The workflow and improvements in large-scale scenes are elaborated upon. Subsequently, some well-known datasets and evaluation metrics for various 3D reconstruction tasks are introduced. Lastly, a summary of the challenges encountered in the application of 3D reconstruction technology in large-scale outdoor scenes is provided, along with predictions for future trends in development.
9

Liew, S. C., X. Huang, E. S. Lin, C. Shi, A. T. K. Yee, and A. Tandon. "INTEGRATION OF TREE DATABASE DERIVED FROM SATELLITE IMAGERY AND LIDAR POINT CLOUD DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W10 (September 12, 2018): 105–11. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w10-105-2018.

Abstract:
A 3D tree database provides essential information on tree species abundance, spatial distribution and tree height for forest mapping, sustainable urban planning and 3D city modelling. Fusion of passive optical satellite imagery and active Lidar data can potentially be exploited for operational forest inventory. However, such fusion requires very high geometric accuracy for both data sets. This paper proposes an approach for integrating 3D tree information extracted from passive and active data into an existing tree database by effectively using the geometric information of the satellite camera model and the laser scanner's scanning geometry. The paper also presents the individual methods for tree crown identification and delineation from satellite images and lidar point cloud data, respectively, as well as the geometric correction of tree position from tree top to tree base. A ground-truth accuracy assessment of the extracted trees is also presented.
10

Tsai, F., and H. Chang. "Evaluations of Three-Dimensional Building Model Reconstruction from LiDAR Point Clouds and Single-View Perspective Imagery." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 597–600. http://dx.doi.org/10.5194/isprsarchives-xl-5-597-2014.

Abstract:
This paper briefly presents two approaches for effective three-dimensional (3D) building model reconstruction from terrestrial laser scanning (TLS) data and single perspective view imagery and assesses their applicability to the reconstruction of 3D models of landmark or historical buildings. The collected LiDAR point clouds are registered based on conjugate points identified using a seven-parameter transformation system. Three dimensional models are generated using plan and surface fitting algorithms. The proposed single-view reconstruction (SVR) method is based on vanishing points and single-view metrology. More detailed models can also be generated according to semantic analysis of the façade images. Experimental results presented in this paper demonstrate that both TLS and SVR approaches can successfully produce accurate and detailed 3D building models from LiDAR point clouds or different types of single-view perspective images.
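
The seven-parameter (Helmert) transformation used above to register the point clouds from conjugate points can be estimated in closed form. The following Python sketch uses the standard SVD-based (Umeyama) solution; it is a generic illustration under our own variable names, not the authors' implementation.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Closed-form least-squares estimate of the 7-parameter (Helmert)
    transform dst_i ≈ s * R @ src_i + t from matched conjugate points,
    using the SVD-based Umeyama solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = src_c.T @ dst_c / len(src)                 # 3x3 cross-covariance
    U, sv, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    S = np.diag([1.0, 1.0, sign])                    # guard against reflection
    R = Vt.T @ S @ U.T                               # rotation (3 parameters)
    s = np.trace(np.diag(sv) @ S) / src_c.var(axis=0).sum()  # scale (1 parameter)
    t = mu_d - s * R @ mu_s                          # translation (3 parameters)
    return s, R, t

if __name__ == "__main__":
    src = np.random.rand(10, 3) * 50.0               # conjugate points, scanner frame
    Q, _ = np.linalg.qr(np.random.randn(3, 3))
    R_true = Q * np.sign(np.linalg.det(Q))           # force a proper rotation
    dst = 1.002 * src @ R_true.T + np.array([5.0, -3.0, 1.5])
    s_hat, R_hat, t_hat = estimate_similarity_transform(src, dst)
    print("estimated scale:", s_hat)                 # ~1.002
```
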
11

Veitch-Michaelis, Joshua, Jan-Peter Muller, David Walton, Jonathan Storey, Michael Foster, and Benjamin Crutchley. "ENHANCEMENT OF STEREO IMAGERY BY ARTIFICIAL TEXTURE PROJECTION GENERATED USING A LIDAR." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 15, 2016): 599–606. http://dx.doi.org/10.5194/isprs-archives-xli-b5-599-2016.

Abstract:
Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogenous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal to noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.
12

Veitch-Michaelis, Joshua, Jan-Peter Muller, David Walton, Jonathan Storey, Michael Foster, and Benjamin Crutchley. "ENHANCEMENT OF STEREO IMAGERY BY ARTIFICIAL TEXTURE PROJECTION GENERATED USING A LIDAR." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 15, 2016): 599–606. http://dx.doi.org/10.5194/isprsarchives-xli-b5-599-2016.

Abstract:
Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogenous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal to noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.
13

Jarzabek-Rychard, M., and M. Karpina. "QUALITY ANALYSIS ON 3D BUILDING MODELS RECONSTRUCTED FROM UAV IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 1121–26. http://dx.doi.org/10.5194/isprsarchives-xli-b1-1121-2016.

Abstract:
Recent developments in UAV technology and structure-from-motion techniques have meant that UAVs are becoming standard platforms for 3D data collection. Because of their flexibility and ability to reach inaccessible urban parts, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with a UAV has important potential to reduce labour costs for fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation process is conducted threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of less than 18 cm for the planimetric position and about 15 cm for the height component.
14

Jarzabek-Rychard, M., and M. Karpina. "QUALITY ANALYSIS ON 3D BUILDING MODELS RECONSTRUCTED FROM UAV IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 1121–26. http://dx.doi.org/10.5194/isprs-archives-xli-b1-1121-2016.

Abstract:
Recent developments in UAV technology and structure-from-motion techniques have meant that UAVs are becoming standard platforms for 3D data collection. Because of their flexibility and ability to reach inaccessible urban parts, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with a UAV has important potential to reduce labour costs for fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation process is conducted threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of less than 18 cm for the planimetric position and about 15 cm for the height component.
15

Bouziani, M., M. Amraoui, and S. Kellouch. "COMPARISON ASSESSMENT OF DIGITAL 3D MODELS OBTAINED BY DRONE-BASED LIDAR AND DRONE IMAGERY." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W5-2021 (December 23, 2021): 113–18. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w5-2021-113-2021.

Abstract:
The purpose of this study is to assess the potential of drone airborne LiDAR technology in Morocco in comparison with drone photogrammetry. The cost and complexity of the equipment, which includes a laser scanner, an inertial measurement unit, a positioning system and a platform, are among the causes limiting its use. Furthermore, this study was motivated by the following reasons: (1) the limited number of studies in Morocco on drone-based LiDAR technology applications; (2) the lack of studies on the parameters that influence the quality of drone-based LiDAR surveys as well as on the evaluation of the accuracy of derived products. In this study, the evaluation of LiDAR technology was carried out through an analysis of the geometric accuracy of the 3D products generated: the Digital Terrain Model (DTM), Digital Surface Model (DSM) and Digital Canopy Model (DCM). We conduct a comparison with the products generated by drone photogrammetry and GNSS surveys. Several tests were carried out to analyse the parameters that influence the mission results, namely height, overlap, drone speed and laser pulse frequency. After data collection, the processing phase was carried out. It includes the cleaning, consolidation and classification of the point clouds and the generation of the various digital models. This project also made it possible to propose and validate a workflow for the processing and classification of point clouds and the generation of 3D digital products derived from LiDAR data acquired by drone.
16

Park, Hongjoo, Mahmoud Salah, and Samsung Lim. "ACCURACY OF 3D MODELS DERIVED FROM AERIAL LASER SCANNING AND AERIAL ORTHO-IMAGERY." Survey Review 43, no. 320 (April 2011): 109–22. http://dx.doi.org/10.1179/003962611x12894696204786.

17

Bai, Ting, Dominik Stütz, Chang Liu, Dimitri Bulatov, Jorg Hacker, and Linlin Ge. "Unsupervised Bushfire Burn Severity Mapping Using Aerial and Satellite Imagery." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4-2024 (October 18, 2024): 29–36. http://dx.doi.org/10.5194/isprs-annals-x-4-2024-29-2024.

Abstract:
It is critical to assess bushfire impact rapidly and accurately because bushfires play a significant role in forest degradation and present a threat to ecosystems and human lives. Over the past decades, several supervised algorithms of burn severity mapping have been proposed, facing the significant drawback of time-consuming labeling. Moreover, there is no robust framework for burn severity mapping through fusing multi-sensor, multi-resolution, and multi-temporal remote sensing imagery from satellite and aerial platforms. Therefore, this paper presents an unsupervised two-step pipeline: processing 2D data followed by 3D data for burn severity mapping, both of which are acquired from either aircraft or satellites. For the 2D data processing, our proposed unsupervised burned area detection (UsBA detection) model enhances burned area mapping accuracy by integrating Ultra-High Resolution (UHR) aerial imagery with bi-temporal medium-resolution PlanetScope imagery, using a Segment Anything Model (SAM)-assisted UNetFormer (pre-trained on the target-style public dataset – LoveDA Rural) for refinement. The model demonstrates superior burned area segmentation, evidenced by improved evaluation metrics calculated from labeled test sites. For the 3D analysis, the burned areas extracted from 2D processing are further assessed using pre- and post-event airborne laser data. We implement a voxel-based workflow, including necessary steps such as ground filtering through the Superpoints in RANSAC Planes (SiRP) method and biomass change analysis. The results indicate that the 3D branch provides a reliable lower bound of the actual damage map, because the vegetation growth between two measurements remains, in essence, undetected. The proposed framework offers a more accurate and robust solution for burn severity mapping utilizing combined 2D and 3D data, evaluated on a multi-source dataset from a real bushfire event that occurred in Bushland Park, South Australia.
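
As a rough illustration of the voxel-based "lower bound of the actual damage" idea described above, the sketch below voxelizes pre- and post-event point clouds and counts voxels that lost all returns. It is our own simplification, omitting the paper's ground filtering (SiRP) and other steps; the 0.5 m voxel size and the toy data are assumptions.

```python
import numpy as np

def occupied_voxels(points, voxel_size=0.5, origin=None):
    """Return the set of voxel indices occupied by at least one point."""
    if origin is None:
        origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    return set(map(tuple, idx)), origin

def vegetation_loss_lower_bound(pre_pts, post_pts, voxel_size=0.5):
    """Voxels occupied before but empty after the event, as a conservative
    (lower-bound) estimate of burned/removed vegetation volume."""
    pre_vox, origin = occupied_voxels(pre_pts, voxel_size)
    post_vox, _ = occupied_voxels(post_pts, voxel_size, origin)
    lost = pre_vox - post_vox
    return len(lost) * voxel_size ** 3

if __name__ == "__main__":
    pre = np.random.rand(20000, 3) * [100, 100, 15]   # pre-event airborne laser points (toy)
    post = pre[pre[:, 2] < 10]                        # pretend the top canopy layer burned
    print("lost volume (m^3):", vegetation_loss_lower_bound(pre, post))
```
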
18

Manajitprasert, Supaporn, Nitin K. Tripathi, and Sanit Arunplod. "Three-Dimensional (3D) Modeling of Cultural Heritage Site Using UAV Imagery: A Case Study of the Pagodas in Wat Maha That, Thailand." Applied Sciences 9, no. 18 (September 4, 2019): 3640. http://dx.doi.org/10.3390/app9183640.

Abstract:
As a novel technology, unmanned aerial vehicles (UAVs) are increasingly being used in archaeological studies, as they offer a cost-effective, simple photogrammetric tool that can produce high-resolution scaled models. This study focuses on the three-dimensional (3D) modeling of the pagodas at Wat Maha That, an archaeological site in the Ayutthaya province of Thailand, which was declared a UNESCO World Heritage Site of notable cultural and historical significance in 1991. This paper presents the application of UAV imagery to generate an accurate 3D model using two pagodas at Wat Maha That as case studies: Chedi and Prang. The methodology described in the paper provides an effective, economical manner of semi-automatic mapping and contributes to the high-quality modeling of cultural heritage sites. The unmanned aerial vehicle structure-from-motion (UAV-SfM) method was used to generate a 3D model of the Wat Maha That pagodas. Its accuracy was compared with a model obtained using terrestrial laser scanning and check points. The findings indicated that the 3D UAV-SfM pagoda model was sufficiently accurate to support pagoda conservation management in Thailand.
19

Clay, E. R., K. S. Lee, S. Tan, L. N. H. Truong, O. E. Mora, and W. Cheng. "ASSESSING THE ACCURACY OF GEOREFERENCED POINT CLOUDS FROM UAS IMAGERY." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-2-2022 (July 25, 2022): 59–64. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-2-2022-59-2022.

Abstract:
Unmanned Aircraft System (UAS) mapping methods determine the three-dimensional (3D) position of surface features. UAS mapping is now often used in place of traditional mapping techniques such as the total station (TS), Global Navigation Satellite System (GNSS) and terrestrial laser scanning (TLS). Traditional mapping methods have become less favourable due to efficiency and cost, especially for medium to large areas. As UAS mapping increases in popularity, the need to verify its accuracy for topographic mapping is evident. In this study, an assessment of the accuracy of UAS mapping is performed. Our results suggest that there are many factors that affect the accuracy of UAS photogrammetry products. Specifically, the distribution and density of ground control points (GCPs) are particularly significant for a study area 2.861 km² in size. The best results were obtained by strategizing the distribution and density of GCPs; the root mean square error (RMSE) for X, Y, Z and 3D was minimized to 0.012, 0.021, 0.038 and 0.045 meters, respectively, by applying a total of 15 GCPs in the aerotriangulation. Therefore, it may be concluded that UAS photogrammetric mapping can meet sub-decimeter accuracy for topographic mapping if proper planning, data collection and processing procedures are followed.
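
The accuracy figures quoted above follow the usual checkpoint RMSE definitions, which the short Python sketch below reproduces (our own helper, not the authors' code). Note that the 3D RMSE is the quadratic sum of the per-axis values: sqrt(0.012² + 0.021² + 0.038²) ≈ 0.045 m, consistent with the reported numbers.

```python
import numpy as np

def rmse_report(measured_xyz, reference_xyz):
    """Per-axis and 3D RMSE of UAS-derived checkpoint coordinates against
    higher-accuracy reference coordinates (e.g., GNSS or total station)."""
    diff = np.asarray(measured_xyz) - np.asarray(reference_xyz)
    rmse_xyz = np.sqrt((diff ** 2).mean(axis=0))        # RMSE_X, RMSE_Y, RMSE_Z
    rmse_3d = np.sqrt((diff ** 2).sum(axis=1).mean())   # 3D RMSE
    return rmse_xyz, rmse_3d

if __name__ == "__main__":
    ref = np.random.rand(20, 3) * 100.0                 # toy checkpoint coordinates
    meas = ref + np.random.normal(0, [0.012, 0.021, 0.038], ref.shape)
    print(rmse_report(meas, ref))
```
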
20

Wen, Xuedong, Hong Xie, Hua Liu, and Li Yan. "Accurate Reconstruction of the LoD3 Building Model by Integrating Multi-Source Point Clouds and Oblique Remote Sensing Imagery." ISPRS International Journal of Geo-Information 8, no. 3 (March 8, 2019): 135. http://dx.doi.org/10.3390/ijgi8030135.

Abstract:
3D urban building models, which provide 3D information services for urban planning, management and operational decision-making, are essential for constructing digital cities. Unfortunately, existing reconstruction approaches for LoD3 building models lack model detail and involve a heavy workload, and accordingly they cannot satisfy the urgent requirements of realistic applications. In this paper, we propose an accurate LoD3 building reconstruction method that integrates multi-source laser point clouds and oblique remote sensing imagery. By combining high-precision plane features extracted from point clouds with accurate boundary constraint features from oblique images, the building mainframe model, which provides an accurate reference for further editing, is quickly and automatically constructed. Experimental results show that the proposed reconstruction method outperforms existing manual and automatic reconstruction methods using both point clouds and oblique images in terms of reconstruction efficiency and spatial accuracy.
21

Briechle, S., P. Krzystek, and G. Vosselman. "CLASSIFICATION OF TREE SPECIES AND STANDING DEAD TREES BY FUSING UAV-BASED LIDAR DATA AND MULTISPECTRAL IMAGERY IN THE 3D DEEP NEURAL NETWORK POINTNET++." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 203–10. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-203-2020.

Abstract:
Knowledge of tree species mapping and of dead wood in particular is fundamental to managing our forests. Although individual tree-based approaches using lidar can successfully distinguish between deciduous and coniferous trees, the classification of multiple tree species is still limited in accuracy. Moreover, the combined mapping of standing dead trees after pest infestation is becoming increasingly important. New deep learning methods outperform baseline machine learning approaches and promise a significant accuracy gain for tree mapping. In this study, we performed a classification of multiple tree species (pine, birch, alder) and standing dead trees with crowns using the 3D deep neural network (DNN) PointNet++ along with UAV-based lidar data and multispectral (MS) imagery. Aside from 3D geometry, we also integrated laser echo pulse width values and MS features into the classification process. In a preprocessing step, we generated the 3D segments of single trees using a 3D detection method. Our approach achieved an overall accuracy (OA) of 90.2% and was clearly superior to a baseline method using a random forest classifier and handcrafted features (OA = 85.3%). All in all, we demonstrate that the performance of the 3D DNN is highly promising for the classification of multiple tree species and standing dead trees in practice.
22

Szostak, Marta, Kacper Knapik, Piotr Wężyk, Justyna Likus-Cieślik, and Marcin Pietrzykowski. "Fusing Sentinel-2 Imagery and ALS Point Clouds for Defining LULC Changes on Reclaimed Areas by Afforestation." Sustainability 11, no. 5 (February 27, 2019): 1251. http://dx.doi.org/10.3390/su11051251.

Abstract:
The study was performed on two former sulphur mines located in Southeast Poland: Jeziórko, where 216.5 ha of afforested area was reclaimed after borehole exploitation and Machów, where 871.7 ha of dump area was reclaimed after open cast strip mining. The areas were characterized by its terrain structure and vegetation cover resulting from the reclamation process. The types of reclamation applied in these areas were forestry in Jeziórko and agroforestry in the Machów post-sulphur mine. The study investigates the possibility of applying the most recent Sentinel-2 (ESA) satellite imageries for land cover mapping, with a primary focus on detecting and monitoring afforested areas. Airborne laser scanning point clouds were used to derive precise information about the spatial (3D) characteristics of vegetation: the height (95th percentile), std. dev. of relative height, and canopy cover. The results of the study show an increase in afforested areas in the former sulphur mines. For the entire analyzed area of Jeziórko, forested areas made up 82.0% in the year 2000 (Landsat 7, NASA), 88.8% in 2009 (aerial orthophoto), and 95.5% in 2016 (Sentinel-2, ESA). For Machów, the corresponding results were 46.1% in 2000, 57.3% in 2009, and 60.7% in 2016. A dynamic increase of afforested area was observed, especially in the Jeziórko test site, with the presence of different stages of vegetation growth.
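
The ALS-derived vegetation descriptors used in this study (95th height percentile, standard deviation of relative height, canopy cover) are simple per-plot statistics over normalized point heights. Below is a minimal sketch of such a computation; the 2 m canopy-cover threshold and the function name are our own assumptions, not the authors' settings.

```python
import numpy as np

def vegetation_metrics(z_above_ground, canopy_threshold=2.0):
    """Per-plot ALS metrics: 95th height percentile, std. dev. of relative
    (above-ground) heights, and canopy cover as the fraction of returns
    above a height threshold (assumed 2 m here)."""
    z = np.asarray(z_above_ground, dtype=float)
    return {
        "p95_m": np.percentile(z, 95),
        "std_m": z.std(),
        "canopy_cover": np.mean(z > canopy_threshold),
    }

if __name__ == "__main__":
    # toy normalized heights for one plot: ground returns near 0, canopy at 5-20 m
    z = np.concatenate([np.random.uniform(0, 0.3, 400),
                        np.random.uniform(5, 20, 600)])
    print(vegetation_metrics(z))
```
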
23

NAGAOKA, Yasushi, Isami NITTA, and Akihiro KANNO. "502 Development of 3D Laser Imager with a Huge Field of View and High Resolution." Proceedings of Conference of Hokuriku-Shinetsu Branch 2006.43 (2006): 135–36. http://dx.doi.org/10.1299/jsmehs.2006.43.135.

24

Jang, Eunkyung, and Woochul Kang. "Estimating Riparian Vegetation Volume in the River by 3D Point Cloud from UAV Imagery and Alpha Shape." Applied Sciences 14, no. 1 (December 19, 2023): 20. http://dx.doi.org/10.3390/app14010020.

Abstract:
This study employs technology that has many different applications, including flood management, flood level control, and identification of vegetation type by patch size. Recent climate change, characterized by severe droughts and floods, intensifies riparian vegetation growth, demanding accurate environmental data. Traditional methods for analyzing vegetation in rivers involve on-site measurements or estimating the growth phase of the vegetation; however, these methods have limitations. Unmanned aerial vehicles (UAVs) and ground laser scanning, meanwhile, offer cost-effective, versatile solutions. This study uses UAVs to generate 3D riparian vegetation point clouds, employing the alpha shape technique. Performance was evaluated by analyzing the estimated volume results, considering the influence of the alpha radius. Results are most significant with an alpha radius of 0.75. This technology benefits river management by addressing vegetation volume, scale, flood control, and identification of vegetation type.
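
For readers who want to try the volume-estimation step, the sketch below builds an alpha shape around a vegetation point cloud with Open3D and integrates its volume over the resulting triangles. This is our own illustration, not the authors' workflow: the paper's alpha radius of 0.75 may not map one-to-one onto Open3D's alpha parameter, and the code assumes the reconstructed mesh is closed and consistently oriented.

```python
import numpy as np
import open3d as o3d

def alpha_shape_volume(points, alpha=0.75):
    """Enclose a vegetation patch point cloud with an alpha shape (Open3D)
    and estimate its volume via the divergence theorem over the triangles."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha)
    v = np.asarray(mesh.vertices)
    t = np.asarray(mesh.triangles)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    # signed volume of the tetrahedra formed by each triangle and the origin
    return abs(np.einsum("ij,ij->i", a, np.cross(b, c)).sum()) / 6.0

if __name__ == "__main__":
    # toy "vegetation patch": points filling a 2 m x 2 m x 1 m box (true volume 4 m^3)
    pts = np.random.rand(5000, 3) * [2.0, 2.0, 1.0]
    print("estimated volume (m^3):", alpha_shape_volume(pts, alpha=0.75))
```
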
25

Zhou, Chang, Chunye Ying, Xinli Hu, Chu Xu, and Qiang Wang. "Thermal Infrared Imagery Integrated with Multi-Field Information for Characterization of Pile-Reinforced Landslide Deformation." Sensors 20, no. 4 (February 20, 2020): 1170. http://dx.doi.org/10.3390/s20041170.

Abstract:
Physical model testing can replicate the deformation process of landslide stabilizing piles and analyze the pile-landslide interaction with multiple field information, thoroughly demonstrating its deformation and failure mechanism. In this paper, an integrated monitoring system was introduced. The instrumentation used included soil pressure cells, thermal infrared (TIR) imagery, a 3D laser scanner, and digital photography. In order to precisely perform field information analysis, an index was proposed to analyze the thermal infrared temperature captured by infrared thermography; the qualitative relationship among stress state, deformation and thermal infrared temperature is analyzed. The results indicate that the integrated monitoring system is expected to be useful for characterizing the deformation process of a pile-reinforced landslide. The difference value of TIR temperature (TIR_m) is a useful indicator for landslide detection, and its anomalies can be selected as a precursor to landslide deformation.
26

Bachhofner, Stefan, Ana-Maria Loghin, Johannes Otepka, Norbert Pfeifer, Michael Hornacek, Andrea Siposova, Niklas Schmidinger, et al. "Generalized Sparse Convolutional Neural Networks for Semantic Segmentation of Point Clouds Derived from Tri-Stereo Satellite Imagery." Remote Sensing 12, no. 8 (April 18, 2020): 1289. http://dx.doi.org/10.3390/rs12081289.

Abstract:
We studied the applicability of point clouds derived from tri-stereo satellite imagery for semantic segmentation for generalized sparse convolutional neural networks by the example of an Austrian study area. We examined, in particular, if the distorted geometric information, in addition to color, influences the performance of segmenting clutter, roads, buildings, trees, and vehicles. In this regard, we trained a fully convolutional neural network that uses generalized sparse convolution one time solely on 3D geometric information (i.e., 3D point cloud derived by dense image matching), and twice on 3D geometric as well as color information. In the first experiment, we did not use class weights, whereas in the second we did. We compared the results with a fully convolutional neural network that was trained on a 2D orthophoto, and a decision tree that was once trained on hand-crafted 3D geometric features, and once trained on hand-crafted 3D geometric as well as color features. The decision tree using hand-crafted features has been successfully applied to aerial laser scanning data in the literature. Hence, we compared our main interest of study, a representation learning technique, with another representation learning technique, and a non-representation learning technique. Our study area is located in Waldviertel, a region in Lower Austria. The territory is a hilly region covered mainly by forests, agriculture, and grasslands. Our classes of interest are heavily unbalanced. However, we did not use any data augmentation techniques to counter overfitting. For our study area, we reported that geometric and color information only improves the performance of the Generalized Sparse Convolutional Neural Network (GSCNN) on the dominant class, which leads to a higher overall performance in our case. We also found that training the network with median class weighting partially reverts the effects of adding color. The network also started to learn the classes with lower occurrences. The fully convolutional neural network that was trained on the 2D orthophoto generally outperforms the other two with a kappa score of over 90% and an average per class accuracy of 61%. However, the decision tree trained on colors and hand-crafted geometric features has a 2% higher accuracy for roads.
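
The "median class weighting" mentioned above is commonly implemented as median-frequency balancing, where each class weight is the median class frequency divided by that class's frequency. The sketch below shows that computation; the exact formula used by the authors may differ, and the class names and proportions are invented for the example.

```python
import numpy as np

def median_frequency_class_weights(labels):
    """Median-frequency class weights: w_c = median(freq) / freq_c, so rare
    classes get weights > 1 and dominant classes get weights < 1."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = counts / counts.sum()
    weights = np.median(freq) / freq
    return dict(zip(classes.tolist(), weights.tolist()))

if __name__ == "__main__":
    # toy label vector: clutter, road, building, tree, vehicle (heavily unbalanced)
    rng = np.random.default_rng(0)
    labels = rng.choice(["clutter", "road", "building", "tree", "vehicle"],
                        p=[0.55, 0.20, 0.12, 0.10, 0.03], size=100_000)
    print(median_frequency_class_weights(labels))
```
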
27

NAGAOKA, Yasushi, Isami NITTA, Akihiro KANNO, Kimio KOMATA, and Satoshi IGUCHI. "2918 Development of Wide-View 3D Laser Imager Using the Shrink Fitter: Improvement of Image Jitter." Proceedings of the JSME annual meeting 2005.4 (2005): 287–88. http://dx.doi.org/10.1299/jsmemecjo.2005.4.0_287.

28

Rilskiy, I. I., and I. V. Kalinkin. "FEASIBILITY COMPARISON OF AIRBORNE LASER SCANNING DATA AND 3D-POINT CLOUDS FORMED FROM UNMANNED AERIAL VEHICLE (UAV)-BASED IMAGERY USED FOR 3D PROJECTING." Proceedings of the International conference “InterCarto/InterGIS” 3, no. 23 (January 1, 2017): 31–46. http://dx.doi.org/10.24057/2414-9179-2017-3-23-31-46.

29

Hussnain, Zille, Sander Oude Elberink, and George Vosselman. "Automatic extraction of accurate 3D tie points for trajectory adjustment of mobile laser scanners using aerial imagery." ISPRS Journal of Photogrammetry and Remote Sensing 154 (August 2019): 41–58. http://dx.doi.org/10.1016/j.isprsjprs.2019.05.010.

30

Karantanellis, E., V. Marinos, and E. Vassilakis. "3D HAZARD ANALYSIS AND OBJECT-BASED CHARACTERIZATION OF LANDSLIDE MOTION MECHANISM USING UAV IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 425–30. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-425-2019.

Abstract:
In recent years, innovative close-range remote sensing technologies such as Unmanned Aerial Vehicle (UAV) photogrammetry and Terrestrial Laser Scanning (TLS) have been widely applied in the field of geoscience due to their efficiency in collecting data about surface morphology. Their main advantage stands on the fact that conventional methods mainly collect point measurements, such as compass measurements of bedding and fracture orientation, solely from accessible areas. The current research aims to demonstrate the applicability of UAVs in managing landslide and rockfall hazard in mountainous environments during emergency situations using an object-based approach. Specifically, a detailed UAV survey took place at a test site, namely Proussos, one of the most visited and famous monasteries in the territory of Evritania prefecture, in central Greece. An unstable steep slope across the sole road network results in continuous failures and road cuts after heavy rainfall events. Structure from Motion (SfM) photogrammetry is used to provide detailed 3D point clouds describing the surface morphology of landslide objects. The latter resulted from an object-based classification of the photogrammetric point cloud products into homogeneous and spatially connected elements. In particular, a knowledge-based ruleset has been developed in accordance with the local morphometric parameters. The orthomosaic and DSM were segmented into meaningful objects based on a number of geometrical and contextual properties and classified as landslide objects (scarp, depletion zone, accumulation zone). The resulting models were used to detect and characterize 3D landslide features and provide a hazard assessment with respect to the road network. Moreover, a detailed assessment of the identified failure mechanism has been provided. The proposed study presents the effectiveness and efficiency of UAV platforms in acquiring accurate photogrammetric datasets from high-mountain environments and complex surface topographies and provides a holistic object-based framework to characterize the failure site based on semantic classification of the landslide objects.
31

Prandi, F., D. Magliocchetti, A. Poveda, R. De Amicis, M. Andreolli, and F. Devigili. "New Approach for forest inventory estimation and timber harvesting planning in mountain areas: the SLOPE project." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 10, 2016): 775–82. http://dx.doi.org/10.5194/isprs-archives-xli-b3-775-2016.

Abstract:
Forests represent an important economic resource for mountainous areas, being the main form of income for some regions and mountain communities. However, wood chain management in these contexts differs from the traditional schemes due to the limits imposed by terrain morphology, both for the operation planning aspects and for the hardware requirements. In fact, forest organizational and technical problems require a wider strategic and more detailed level of planning to reach the level of productivity of forest operation techniques applied on flatlands. In particular, a perfect knowledge of forest inventories improves long-term management sustainability and efficiency, allowing a better understanding of forest ecosystems. However, this knowledge is usually based on historical parcel information, with only a few cases of remote sensing information from satellite imagery. This is not enough to fully exploit the benefit of mountain-area forest stocks, where the economic and ecological value of each single parcel depends on single-tree characteristics. The work presented in this paper, based on the results of the SLOPE (Integrated proceSsing and controL systems fOr sustainable forest Production in mountain arEas) project, investigates the capability to generate, manage and visualize detailed virtual forest models using geospatial information, combining data acquired with traditional in-field laser scanning survey technologies with new aerial surveys carried out with UAV systems. These models are then combined with interactive 3D virtual globes for continuous assessment of resource characteristics, harvesting planning and real-time monitoring of the whole production.
32

Prandi, F., D. Magliocchetti, A. Poveda, R. De Amicis, M. Andreolli, and F. Devigili. "New Approach for forest inventory estimation and timber harvesting planning in mountain areas: the SLOPE project." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 10, 2016): 775–82. http://dx.doi.org/10.5194/isprsarchives-xli-b3-775-2016.

Abstract:
Forests represent an important economic resource for mountainous areas, being the main form of income for some regions and mountain communities. However, wood chain management in these contexts differs from the traditional schemes due to the limits imposed by terrain morphology, both for the operation planning aspects and for the hardware requirements. In fact, forest organizational and technical problems require a wider strategic and more detailed level of planning to reach the level of productivity of forest operation techniques applied on flatlands. In particular, a perfect knowledge of forest inventories improves long-term management sustainability and efficiency, allowing a better understanding of forest ecosystems. However, this knowledge is usually based on historical parcel information, with only a few cases of remote sensing information from satellite imagery. This is not enough to fully exploit the benefit of mountain-area forest stocks, where the economic and ecological value of each single parcel depends on single-tree characteristics. The work presented in this paper, based on the results of the SLOPE (Integrated proceSsing and controL systems fOr sustainable forest Production in mountain arEas) project, investigates the capability to generate, manage and visualize detailed virtual forest models using geospatial information, combining data acquired with traditional in-field laser scanning survey technologies with new aerial surveys carried out with UAV systems. These models are then combined with interactive 3D virtual globes for continuous assessment of resource characteristics, harvesting planning and real-time monitoring of the whole production.
33

Thiel, Christian, Marlin M. Mueller, Lea Epple, Christian Thau, Sören Hese, Michael Voltersen, and Andreas Henkel. "UAS Imagery-Based Mapping of Coarse Wood Debris in a Natural Deciduous Forest in Central Germany (Hainich National Park)." Remote Sensing 12, no. 20 (October 10, 2020): 3293. http://dx.doi.org/10.3390/rs12203293.

Abstract:
Dead wood such as coarse dead wood debris (CWD) is an important component in natural forests since it increases the diversity of plants, fungi, and animals. It serves as habitat, provides nutrients and is conducive to forest regeneration, ecosystem stabilization and soil protection. In commercially operated forests, dead wood is often unwanted as it can act as an originator of calamities. Accordingly, efficient CWD monitoring approaches are needed. However, due to the small size of CWD objects, satellite data-based approaches cannot be used to gather the needed information, and conventional ground-based methods are expensive. Unmanned aerial systems (UAS) are becoming increasingly important in the forestry sector since structural and spectral features of forest stands can be extracted from the high geometric resolution data they produce. As such, they have great potential in supporting regular forest monitoring and inventory. Consequently, the potential of UAS imagery to map CWD is investigated in this study. The study area is located in the center of the Hainich National Park (HNP) in the federal state of Thuringia, Germany. The HNP features natural and unmanaged forest comprising deciduous tree species such as Fagus sylvatica (beech), Fraxinus excelsior (ash), Acer pseudoplatanus (sycamore maple), and Carpinus betulus (hornbeam). The flight campaign was controlled from the Hainich eddy covariance flux tower located at the eastern edge of the test site. Red-green-blue (RGB) image data were captured in March 2019 during leaf-off conditions using off-the-shelf hardware. Agisoft Metashape Pro was used for the delineation of a three-dimensional (3D) point cloud, which formed the basis for creating a canopy-free RGB orthomosaic and mapping CWD. As heavily decomposed CWD hardly stands out from the ground due to its low height, it might not be detectable by means of 3D geometric information. For this reason, solely RGB data were used for the classification of CWD. The mapping task was accomplished using a line extraction approach developed within the object-based image analysis (OBIA) software eCognition. The achieved CWD detection accuracy can compete with results of studies utilizing high-density airborne light detection and ranging (LiDAR)-based point clouds. Out of 180 CWD objects, 135 objects were successfully delineated while 76 false alarms occurred. Although the developed OBIA approach only utilizes spectral information, it is important to understand that the 3D information extracted from our UAS data is a key requirement for successful CWD mapping, as it provides the foundation for the canopy-free orthomosaic created in an earlier step. We conclude that UAS imagery is an alternative to laser data for CWD mapping, especially when a rapid response and quick reaction, e.g., after a storm event, is required.
34

Peterson, S., J. Lopez, and R. Munjy. "COMPARISON OF UAV IMAGERY-DERIVED POINT CLOUD TO TERRESTRIAL LASER SCANNER POINT CLOUD." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W5 (May 29, 2019): 149–55. http://dx.doi.org/10.5194/isprs-annals-iv-2-w5-149-2019.

Abstract:
A small unmanned aerial vehicle (UAV) with survey-grade GNSS positioning is used to produce a point cloud for topographic mapping and 3D reconstruction. The objective of this study is to assess the accuracy of a UAV imagery-derived point cloud by comparing it to a point cloud generated by terrestrial laser scanning (TLS). Imagery was collected over a 320 m by 320 m area with undulating terrain, containing 80 ground control points. A SenseFly eBee Plus fixed-wing platform with PPK positioning, a 10.6 mm focal length and a 20 MP digital camera was used to fly the area. Pix4Dmapper, a computer-vision-based commercial software, was used to process a photogrammetric block, constrained by 5 GCPs, while obtaining cm-level RMSE based on the remaining 75 checkpoints. Based on the results of automatic aerial triangulation, a point cloud and digital surface model (DSM) (2.5 cm/pixel) are generated and their accuracy assessed. A bias of less than 1 pixel was observed in elevations from the UAV DSM at the checkpoints. 31 registered TLS scans made up a point cloud of the same area with an observed horizontal root mean square error (RMSE) of 0.006 m and negligible vertical RMSE. Comparisons were made between fitted planes of extracted roof features of 2 buildings and a centreline profile of a road in both the UAV and TLS point clouds. Comparisons showed an average bias of +8 cm, with the UAV point cloud computing too high in two features. No bias was observed in the roof features of the southernmost building.
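
The roof-feature comparison described above essentially amounts to fitting a plane to one cloud's roof patch and measuring the signed offset of the other cloud's points from it. The sketch below is a generic illustration (SVD plane fit, our own function names), not the authors' workflow; the 8 cm synthetic offset merely echoes the bias reported in the abstract.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a roof patch: returns centroid and unit normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of smallest variance
    return centroid, normal / np.linalg.norm(normal)

def mean_offset_between_clouds(uav_patch, tls_patch):
    """Signed mean distance of UAV roof points from the plane fitted to the
    TLS roof points; a positive value means the UAV surface sits above it."""
    c, n = fit_plane(tls_patch)
    if n[2] < 0:                         # orient the normal upward for a roof
        n = -n
    return float(((uav_patch - c) @ n).mean())

if __name__ == "__main__":
    tls = np.random.rand(2000, 3) * [10, 10, 0]                     # toy horizontal roof patch
    uav = tls + [0, 0, 0.08] + np.random.normal(0, 0.02, tls.shape) # UAV patch, 8 cm higher
    print("mean offset (m):", mean_offset_between_clouds(uav, tls))
```
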
35

Holopainen, M., M. Vastaranta, M. Karjalainen, K. Karila, S. Kaasalainen, E. Honkavaara, and J. Hyyppä. "FOREST INVENTORY ATTRIBUTE ESTIMATION USING AIRBORNE LASER SCANNING, AERIAL STEREO IMAGERY, RADARGRAMMETRY AND INTERFEROMETRY–FINNISH EXPERIENCES OF THE 3D TECHNIQUES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W4 (March 11, 2015): 63–69. http://dx.doi.org/10.5194/isprsannals-ii-3-w4-63-2015.

Abstract:
Three-dimensional (3D) remote sensing has enabled detailed mapping of terrain and vegetation heights. Consequently, forest inventory attributes are increasingly estimated using point clouds and normalized surface models. In practical applications, mainly airborne laser scanning (ALS) has been used in forest resource mapping. The current status is that ALS-based forest inventories are widespread, and the popularity of ALS has also raised interest toward alternative 3D techniques, including airborne and spaceborne techniques. Point clouds can be generated using photogrammetry, radargrammetry and interferometry. Airborne stereo imagery can be used in deriving photogrammetric point clouds, while very-high-resolution synthetic aperture radar (SAR) data are used in radargrammetry and interferometry. ALS is capable of mapping both the terrain and tree heights in mixed forest conditions, which is an advantage over aerial images or SAR data. However, in many jurisdictions, a detailed ALS-based digital terrain model is already available, and that enables linking photogrammetric or SAR-derived heights to heights above the ground. In other words, in forest conditions, the height of single trees, the height of the canopy and/or the density of the canopy can be measured and used in the estimation of forest inventory attributes. In this paper, we first review experiences of the use of digital stereo imagery and spaceborne SAR in the estimation of forest inventory attributes in Finland, and we compare the techniques to ALS. In addition, we aim to present new implications based on our experiences.
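The linking of photogrammetric or SAR surface heights to heights above ground, which the abstract identifies as the key enabler when an ALS terrain model already exists, reduces to subtracting the DTM from the surface model on a common grid. A minimal sketch with synthetic rasters (not the Finnish datasets discussed):

```python
import numpy as np

# Synthetic 100 x 100 rasters on a common grid (placeholder values in metres).
dsm = np.random.uniform(120.0, 150.0, size=(100, 100))  # photogrammetric/SAR surface heights
dtm = np.random.uniform(118.0, 122.0, size=(100, 100))  # ALS-derived terrain heights

# Canopy height model: height above ground, with negative differences clipped to zero.
chm = np.clip(dsm - dtm, 0.0, None)

print("mean canopy height [m]:", float(chm.mean()))
```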
36

Cuong, Cao Xuan, Le Van Canh, Pham Van Chung, Le Duc Tinh, Pham Trung Dung, and Ngo Sy Cuong. "Quality assessment of 3D point cloud of industrial buildings from imagery acquired by oblique and nadir UAV flights." Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu, no. 5 (2020): 131–39. http://dx.doi.org/10.33271/nvngu/2021-5/131.

Abstract:
Purpose. The main objective of this paper is to assess the quality of the 3D model of industrial buildings generated from Unmanned Aerial Vehicle (UAV) imagery datasets, including nadir (N), oblique (O), and combined nadir and oblique (N+O) UAV datasets. Methodology. The quality of a 3D model is defined by the accuracy and density of the point clouds created from UAV images. For this purpose, the UAV was deployed to acquire images in both O and N flight modes over an industrial mining area containing a mine shaft tower, factory housing and office buildings. The quality assessment was conducted for the 3D point cloud model of three main object types, namely roofs, facades, and ground surfaces, using checkpoints (CPs) and terrestrial laser scanning (TLS) point clouds as the reference datasets. Root Mean Square Errors (RMSE) were calculated from the CP coordinates, and cloud-to-cloud distances were computed against the TLS point clouds; both were used for the accuracy assessment. Findings. The results showed that the point cloud model generated by the N flight mode was the most accurate but the least dense, whereas that of the O mode was the least accurate but the most detailed in comparison with the others. Also, the combination of O and N datasets takes advantage of each individual mode, as the point cloud accuracy is higher than that of case O and its density is much higher than that of case N. It is therefore optimal for building exceptionally accurate and dense point clouds of buildings. Originality. The paper provides a comparative analysis of the quality of point clouds of roofs and facades generated from UAV photogrammetry for mining industrial buildings. Practical value. The findings of the study can be used as references for both UAV survey practice and applications of UAV point clouds. The paper provides useful information for UAV flight planning and for deciding which UAV points should be integrated with TLS points to obtain the best point cloud.
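The cloud-to-cloud comparison against the TLS reference described in the methodology can be approximated with a nearest-neighbour query. The snippet below is a generic sketch using random placeholder points, not the tooling used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(evaluated, reference):
    """Nearest-neighbour distance from each evaluated point to the reference cloud."""
    tree = cKDTree(reference)
    d, _ = tree.query(evaluated, k=1)
    return d

uav_points = np.random.rand(10_000, 3)  # placeholder UAV photogrammetric points
tls_points = np.random.rand(50_000, 3)  # placeholder TLS reference points

d = cloud_to_cloud_distances(uav_points, tls_points)
print("mean C2C distance:", d.mean(), " 95th percentile:", np.percentile(d, 95))
```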
37

Hernández-Clemente, Rocío, Rafael Navarro-Cerrillo, Francisco Ramírez, Alberto Hornero, and Pablo Zarco-Tejada. "A Novel Methodology to Estimate Single-Tree Biophysical Parameters from 3D Digital Imagery Compared to Aerial Laser Scanner Data." Remote Sensing 6, no. 11 (November 21, 2014): 11627–48. http://dx.doi.org/10.3390/rs61111627.

38

Becirevic, D., L. Klingbeil, A. Honecker, H. Schumann, U. Rascher, J. Léon, and H. Kuhlmann. "ON THE DERIVATION OF CROP HEIGHTS FROM MULTITEMPORAL UAV BASED IMAGERY." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W5 (May 29, 2019): 95–102. http://dx.doi.org/10.5194/isprs-annals-iv-2-w5-95-2019.

Abstract:
In this paper, we investigate the usage of unmanned aerial vehicles (UAV) to assess crop geometry, with special focus on crop height extraction. Crop height is classified as a reliable trait in crop phenotyping and recognized as a good indicator for biomass, expected yield, lodging or crop stress. The current industrial standard for crop height measurement is a manual procedure using a ruler, but this method is considered time consuming, labour intensive and subjective. This study investigates methods for reliable and rapid derivation of crop height from UAV data of high spatial, spectral and temporal resolution, considering the influences of the reference surface and the selected crop height generation method on the final calculation. To do this, we performed UAV missions during two winter wheat growing seasons and generated point clouds from aerial images using photogrammetric methods. For the accuracy assessment, we compare UAV-based crop height with ruler-based crop height as the current industrial standard and terrestrial laser scanner (TLS)-based crop height as a reliable validation method. The high correlation between UAV-based and ruler-based crop height, and especially the correlation with TLS data, shows that the UAV-based crop height extraction method can provide reliable winter wheat height information in a non-invasive and rapid way. Along with crop height as a single value per area of interest, 3D UAV crop data can provide additional information such as lodging area, which can also be of interest to the plant breeding community.
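One common way to derive a per-plot crop height from a UAV point cloud is to take a high percentile of the point heights above a reference surface. The percentile choice and the single ground value below are assumptions made for illustration only; the paper evaluates several reference surfaces and generation methods.

```python
import numpy as np

def plot_crop_height(points_z, ground_z, percentile=99):
    """Crop height for one plot: high percentile of point heights above the reference surface."""
    heights = np.asarray(points_z) - ground_z   # normalise by the plot's reference (bare-soil) height
    return float(np.percentile(heights, percentile))

# Placeholder: 2000 photogrammetric point elevations inside one winter-wheat plot.
z = np.random.normal(loc=101.0, scale=0.05, size=2000)
print("UAV-based crop height [m]:", round(plot_crop_height(z, ground_z=100.2), 2))
```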
39

Eslami, Mehrdad, and Mohammad Saadatseresht. "Imagery Network Fine Registration by Reference Point Cloud Data Based on the Tie Points and Planes." Sensors 21, no. 1 (January 5, 2021): 317. http://dx.doi.org/10.3390/s21010317.

Abstract:
Cameras and laser scanners are complementary tools for 2D/3D information generation. Systematic and random errors cause misalignment between multi-sensor imagery and point cloud data. In this paper, a novel feature-based approach is proposed for fine registration of imagery and point clouds. Each tie point and its two neighboring pixels are matched in the overlapping images and intersected in object space to create a differential tie plane. A preprocessing step is applied to the corresponding tie points and non-robust ones are removed. Initial coarse Exterior Orientation Parameters (EOPs), Interior Orientation Parameters (IOPs), and Additional Parameters (APs) are used to transform the tie plane points to object space. Then, the nearest points of the point cloud data to the transformed tie plane points are estimated. These estimated points are used to calculate the Directional Vectors (DV) of the differential planes. As a constraint equation alongside the collinearity equation, each object-space tie point is forced to lie on the corresponding point cloud differential plane. Two different indoor and outdoor experimental datasets are used to assess the proposed approach. The achieved results show errors of about 2.5 pixels at the checkpoints. These results demonstrate the robustness and practicality of the proposed approach.
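The coplanarity idea behind the differential tie planes can be illustrated with a simple point-to-plane residual: a plane normal is taken from nearby point-cloud points and the object-space tie point is penalized by its signed distance to that plane. This is a much-simplified sketch of the constraint, not the authors' full adjustment with collinearity equations and additional parameters.

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the differential plane spanned by a tie point and its two neighbours."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def point_to_plane_residual(tie_point_obj, plane_point, normal):
    """Signed distance used here as a stand-in for the coplanarity constraint residual."""
    return float(np.dot(tie_point_obj - plane_point, normal))

# Placeholder geometry: three nearest point-cloud points spanning the local plane,
# and an object-space tie point that the adjustment should pull onto that plane.
p0, p1, p2 = np.array([0., 0., 10.]), np.array([1., 0., 10.02]), np.array([0., 1., 9.99])
tie_obj = np.array([0.4, 0.3, 10.05])
n = plane_normal(p0, p1, p2)
print("coplanarity residual [m]:", point_to_plane_residual(tie_obj, p0, n))
```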
40

Ruiz, Rafael Melendreras, Ma Teresa Marín Torres, and Paloma Sánchez Allegue. "Comparative Analysis Between the Main 3D Scanning Techniques: Photogrammetry, Terrestrial Laser Scanner, and Structured Light Scanner in Religious Imagery: The Case of The Holy Christ of the Blood." Journal on Computing and Cultural Heritage 15, no. 1 (February 28, 2022): 1–23. http://dx.doi.org/10.1145/3469126.

Abstract:
In recent years, three-dimensional (3D) scanning has become the main tool for recording, documenting, and preserving cultural heritage in the long term. It has become the “document” most in demand today by historians, curators, and art restorers to carry out their work based on a “digital twin,” that is, a totally reliable and accurate model of the object in question. Thanks to 3D scanning, we can preserve reliable digital models of the real state of our heritage, including objects that have since been destroyed. The first step is to digitize our heritage with the highest possible quality and precision. To do this, it is necessary to identify the most appropriate technique. In this article, we show some of the main digitization techniques currently used for sculptural heritage and the workflows associated with them to obtain high-quality models. Finally, a complete comparative analysis is made to show their main advantages and disadvantages.
41

Chaidas, K., G. Tataris, and N. Soulakellis. "POST-EARTHQUAKE 3D BUILDING MODEL (LOD2) GENERATION FROM UAS IMAGERY: THE CASE OF VRISA TRADITIONAL SETTLEMENT, LESVOS, GREECE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-4/W3-2020 (November 23, 2020): 165–72. http://dx.doi.org/10.5194/isprs-archives-xliv-4-w3-2020-165-2020.

Abstract:
In recent years, 3D building modelling techniques have been commonly used in various domains such as navigation, urban planning and disaster management, mostly confined to visualization purposes. The 3D building models are produced at various Levels of Detail (LOD) in the CityGML standard, which not only visualize a complex urban environment but also allow queries and analysis. The aim of this paper is to present the methodology and the results of a comparison between two scenarios of LOD2 building models, generated from UAS data acquired in two flight campaigns at different altitudes. The study was applied in the Vrisa traditional settlement, Lesvos island, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12th June 2017. Specifically, the two scenarios were created from the results of two different flight campaigns: i) on 12th January 2020 with a flying altitude of 100 m and ii) on 4th February 2020 with a flying altitude of 40 m, both with a nadir camera position. The LOD2 buildings were generated for a part of the Vrisa settlement consisting of 80 buildings, using the building footprints, Digital Surface Models (DSMs), a Digital Elevation Model (DEM) and orthophoto maps of the area. Afterwards, a comparison of the LOD2 buildings of the two scenarios was carried out in terms of their volumes and heights. Subsequently, the heights of the LOD2 buildings were compared with the heights of the respective terrestrial laser scanner (TLS) models. Additionally, the roofs of the LOD2 buildings were evaluated through visual inspections. The results showed that 65 of the 80 LOD2 buildings were generated accurately in terms of their heights and roof types for the first scenario, and 64 for the second, respectively. Finally, the comparison of the results proved that the generation of post-earthquake LOD2 buildings can be achieved with appropriate UAS data acquired at a flying altitude of 100 m, and that the results are not significantly affected by the lower flight altitude.
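The height comparison between the two flight scenarios and the TLS reference boils down to per-building differences summarized as bias and RMSE. The numbers below are placeholders chosen only to show the calculation, not values from the study.

```python
import numpy as np

def height_stats(model_h, reference_h):
    """Per-building height comparison: mean difference (bias) and RMSE against the reference."""
    d = np.asarray(model_h) - np.asarray(reference_h)
    return float(d.mean()), float(np.sqrt(np.mean(d**2)))

# Placeholder heights (m) for a handful of buildings in the two flight scenarios and the TLS reference.
tls  = np.array([6.1, 7.4, 5.8, 8.2, 6.9])
s100 = np.array([6.0, 7.5, 5.9, 8.1, 7.0])   # 100 m flying altitude
s40  = np.array([6.2, 7.2, 5.7, 8.4, 6.8])   #  40 m flying altitude

for name, scenario in (("100 m", s100), ("40 m", s40)):
    bias, rmse = height_stats(scenario, tls)
    print(f"{name}: bias {bias:+.2f} m, RMSE {rmse:.2f} m")
```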
42

Dechesne, Clément, Clément Mallet, Arnaud Le Bris, Valérie Gouet, and Alexandre Hervieu. "FOREST STAND SEGMENTATION USING AIRBORNE LIDAR DATA AND VERY HIGH RESOLUTION MULTISPECTRAL IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 207–14. http://dx.doi.org/10.5194/isprs-archives-xli-b3-207-2016.

Abstract:
Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of human operators on very high resolution (VHR) optical images. This work is highly time consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database updating is proposed. The multispectral images give access to the tree species, whereas the 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and combined with the pixel-based feature map in an energy-based framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching rates between 94% and 99%).
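The abstract describes a segmentation energy built from the species probability map (data term) and a smoothness term minimized by graph cuts. The sketch below only sets up such terms with toy probabilities; it does not implement QPBO or α-expansion.

```python
import numpy as np

# Toy per-pixel class probabilities from a tree-species classifier (H, W, n_classes).
proba = np.random.dirichlet(alpha=[2, 2, 2], size=(64, 64))

# Data (unary) term of the segmentation energy: negative log-likelihood per pixel and label.
unary = -np.log(np.clip(proba, 1e-6, None))

# Potts-style smoothness weight applied between neighbouring pixels with different labels.
smoothness_lambda = 0.5

# A graph-cut solver (e.g. QPBO with alpha-expansion, as in the paper) would minimise
#   sum_p unary[p, label[p]] + smoothness_lambda * sum_{p,q neighbours} [label[p] != label[q]].
print("unary term shape:", unary.shape)
```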
43

Dechesne, Clément, Clément Mallet, Arnaud Le Bris, Valérie Gouet, and Alexandre Hervieu. "FOREST STAND SEGMENTATION USING AIRBORNE LIDAR DATA AND VERY HIGH RESOLUTION MULTISPECTRAL IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 207–14. http://dx.doi.org/10.5194/isprsarchives-xli-b3-207-2016.

Abstract:
Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of human operators on very high resolution (VHR) optical images. This work is highly time consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database updating is proposed. The multispectral images give access to the tree species, whereas the 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and combined with the pixel-based feature map in an energy-based framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching rates between 94% and 99%).
44

Breaban, Ana-Ioana, Valeria-Ersilia Oniga, Constantin Chirila, Ana-Maria Loghin, Norbert Pfeifer, Mihaela Macovei, and Alina-Mihaela Nicuta Precul. "Proposed Methodology for Accuracy Improvement of LOD1 3D Building Models Created Based on Stereo Pléiades Satellite Imagery." Remote Sensing 14, no. 24 (December 12, 2022): 6293. http://dx.doi.org/10.3390/rs14246293.

Abstract:
Three-dimensional city models play an important role in a large number of applications in urban environments, and thus it is of high interest to create them automatically, accurately and in a cost-effective manner. This paper presents a new methodology for point cloud accuracy improvement to generate terrain topographic models and 3D building models following the Open Geospatial Consortium (OGC) CityGML standard, level of detail 1 (LOD1), using very high-resolution (VHR) satellite images. In that context, attention is given to a number of steps that are often not considered in detail in the literature, including the local geoid and the role of the digital terrain model (DTM) in the dense image matching process. The quality of the resulting models is analyzed thoroughly. For this purpose, two stereo Pléiades 1 satellite images over Iasi city were acquired in September 2016, and 142 points were measured in situ by global navigation satellite system real-time kinematic positioning (GNSS-RTK) technology. First, the quasigeoid surface resulting from the EGG2008 regional gravimetric model was corrected based on data from GNSS and levelling measurements using a four-parameter transformation, and the ellipsoidal heights of the 142 GNSS-RTK points were corrected based on the local quasigeoid surface. The DTM of the study area was created from low-resolution airborne laser scanner (LR ALS) point clouds that had been filtered using the robust filter algorithm and a mask for buildings, and the ellipsoidal heights were also corrected with the local quasigeoid surface, resulting in a standard deviation of 37.3 cm for 50 levelling points and 28.1 cm for the 142 GNSS-RTK points. For the point cloud generation, two scenarios were considered: (1) no DTM and ground control points (GCPs) with uncorrected ellipsoidal heights, resulting in an RMS difference (Z) for the 64 GCPs and 78 ChPs of 69.8 cm, and (2) with the LR ALS-DTM and GCPs with corrected ellipsoidal height values, resulting in an RMS difference (Z) of 60.9 cm. The LOD1 models of 1550 buildings from the Iasi city center were created based on the Pléiades-DSM point clouds (corrected and uncorrected) and existing building sub-footprints, with four methods for the derivation of the building roof elevations, resulting in a standard deviation of 1.6 m against the high-resolution (HR) ALS point cloud in the best scenario. The proposed method for height extraction and reconstruction of the city structure performed best compared with other studies on multiple satellite stereo imagery.
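As an illustration of the quasigeoid correction step, a corrector surface with four parameters can be fitted by least squares to the misfits at GNSS/levelling points. The bilinear parameterization used here is an assumption made for demonstration; the paper's exact four-parameter transformation may differ.

```python
import numpy as np

# GNSS/levelling points: planar coordinates and the misfit between the gravimetric
# quasigeoid (e.g. EGG2008) and the geometrically derived height anomaly (placeholder values).
x = np.random.uniform(0, 5000, 50)
y = np.random.uniform(0, 5000, 50)
misfit = 0.35 + 1e-5 * x - 2e-5 * y + np.random.normal(0, 0.01, 50)

# Four-parameter corrector surface dZ = a0 + a1*x + a2*y + a3*x*y, fitted by least squares.
A = np.column_stack([np.ones_like(x), x, y, x * y])
params, *_ = np.linalg.lstsq(A, misfit, rcond=None)

def correct(quasigeoid_height, px, py):
    """Apply the fitted local correction to a gravimetric quasigeoid height."""
    return quasigeoid_height + params @ np.array([1.0, px, py, px * py])

print("fitted parameters:", np.round(params, 6))
```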
45

Zhang, Chao, Scott Lindner, Ivan Antolovic, Martin Wolf, and Edoardo Charbon. "A CMOS SPAD Imager with Collision Detection and 128 Dynamically Reallocating TDCs for Single-Photon Counting and 3D Time-of-Flight Imaging." Sensors 18, no. 11 (November 17, 2018): 4016. http://dx.doi.org/10.3390/s18114016.

Abstract:
Per-pixel time-to-digital converter (TDC) architectures have been exploited by single-photon avalanche diode (SPAD) sensors to achieve high photon throughput, but at the expense of fill factor, pixel pitch and readout efficiency. In contrast, TDC-sharing architectures usually feature a high fill factor at small pixel pitch and energy-efficient event-driven readout. The photon throughput is not necessarily lower than that of per-pixel TDC architectures, since throughput is determined not only by the number of TDCs but also by the readout bandwidth. In this paper, a SPAD sensor with 32 × 32 pixels fabricated in a 180 nm CMOS image sensor technology is presented, in which dynamically reallocating TDCs were implemented to achieve the same photon throughput as that of per-pixel TDCs. Each group of 4 TDCs is shared by 32 pixels via a collision detection bus, which enables a fill factor of 28% with a pixel pitch of 28.5 μm. The TDCs were characterized, obtaining peak-to-peak differential and integral non-linearity of −0.07/+0.08 LSB and −0.38/+0.75 LSB, respectively. The sensor was demonstrated in a scanning light-detection-and-ranging (LiDAR) system equipped with an ultra-low-power laser, achieving depth imaging up to 10 m at 6 frames/s with a resolution of 64 × 64 under 50 lux background light.
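The depth value produced by such a time-of-flight LiDAR follows from range = c * t / 2 applied to the TDC code. The 50 ps bin width below is an assumed, illustrative value and not a specification of the presented chip.

```python
C = 299_792_458.0   # speed of light [m/s]
TDC_BIN = 50e-12    # assumed TDC bin width of 50 ps (illustrative, not from the paper)

def tof_range(bin_index: int) -> float:
    """Convert a time-of-flight TDC code to target distance in metres."""
    t = bin_index * TDC_BIN
    return C * t / 2.0  # the light travels to the target and back, hence the factor of 2

# A return detected about 667 bins after the laser pulse corresponds to roughly 5 m.
print(f"{tof_range(667):.2f} m")
```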
46

Guerra-Hernández, Cosenza, Cardil, Silva, Botequim, Soares, Silva, González-Ferreiro, and Díaz-Varela. "Predicting Growing Stock Volume of Eucalyptus Plantations Using 3-D Point Clouds Derived from UAV Imagery and ALS Data." Forests 10, no. 10 (October 15, 2019): 905. http://dx.doi.org/10.3390/f10100905.

Abstract:
Estimating forest inventory variables is important in monitoring forest resources and mitigating climate change. In this respect, forest managers require flexible, non-destructive methods for estimating volume and biomass. High-resolution and low-cost remote sensing data are increasingly available to measure three-dimensional (3D) canopy structure and to model forest structural attributes. The main objective of this study was to evaluate and compare individual tree volume estimates derived from high-density point clouds obtained from airborne laser scanning (ALS) and digital aerial photogrammetry (DAP) in Eucalyptus spp. plantations. Object-based image analysis (OBIA) techniques were applied for individual tree crown (ITC) delineation. The ITC algorithm correctly detected and delineated 199 trees from the ALS-derived data, while 192 trees were correctly identified using DAP-based point clouds acquired from Unmanned Aerial Vehicles (UAV), representing accuracy levels of 62% and 60%, respectively. For volume modelling, a non-linear regression fit based on individual tree height and individual crown area derived from the ITC provided the following results: Model Efficiency (Mef) = 0.43 and 0.46, Root Mean Square Error (RMSE) = 0.030 m3 and 0.026 m3, rRMSE = 20.31% and 19.97%, and approximately unbiased results (0.025 m3 and 0.0004 m3) for the DAP- and ALS-based estimations, respectively. No significant difference was found between the observed values (field data) and the volume estimates from ALS and DAP (p-values from the t-test statistic of 0.99 and 0.98, respectively). The proposed approaches could also be used to estimate basal area or biomass stocks in Eucalyptus spp. plantations.
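The abstract reports a non-linear regression of tree volume on ITC-derived height and crown area, evaluated with RMSE, rRMSE and model efficiency. The sketch below fits an assumed power-law form to synthetic data purely to show how such statistics are computed; it is not the model published in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def volume_model(X, a, b, c):
    """Assumed allometric-style form V = a * height^b * crown_area^c."""
    h, crown = X
    return a * h**b * crown**c

# Placeholder tree-level data: ITC-derived height (m), crown area (m2), field-measured volume (m3).
h = np.random.uniform(10, 25, 200)
crown = np.random.uniform(3, 15, 200)
v_obs = 0.002 * h**2.1 * crown**0.4 * np.random.normal(1.0, 0.1, 200)

popt, _ = curve_fit(volume_model, (h, crown), v_obs, p0=[0.001, 2.0, 0.5])
v_hat = volume_model((h, crown), *popt)

rmse = np.sqrt(np.mean((v_obs - v_hat) ** 2))
mef = 1 - np.sum((v_obs - v_hat) ** 2) / np.sum((v_obs - v_obs.mean()) ** 2)  # model efficiency
print(f"RMSE = {rmse:.3f} m3, rRMSE = {100 * rmse / v_obs.mean():.1f} %, Mef = {mef:.2f}")
```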
47

Lari, Zahra, Naser El-Sheimy, and Ayman Habib. "A New Approach for Realistic 3D Reconstruction of Planar Surfaces from Laser Scanning Data and Imagery Collected Onboard Modern Low-Cost Aerial Mapping Systems." Remote Sensing 9, no. 3 (February 25, 2017): 212. http://dx.doi.org/10.3390/rs9030212.

48

Reilly, Sean, Matthew L. Clark, Lisa Patrick Bentley, Corbin Matley, Elise Piazza, and Imma Oliveras Menor. "The Potential of Multispectral Imagery and 3D Point Clouds from Unoccupied Aerial Systems (UAS) for Monitoring Forest Structure and the Impacts of Wildfire in Mediterranean-Climate Forests." Remote Sensing 13, no. 19 (September 23, 2021): 3810. http://dx.doi.org/10.3390/rs13193810.

Abstract:
Wildfire shapes vegetation assemblages in Mediterranean ecosystems, such as those in the state of California, United States. Successful restorative management of forests in-line with ecologically beneficial fire regimes relies on a thorough understanding of wildfire impacts on forest structure and fuel loads. As these data are often difficult to comprehensively measure on the ground, remote sensing approaches can be used to estimate forest structure and fuel load parameters over large spatial extents. Here, we analyze the capabilities of one such methodology, unoccupied aerial system structure from motion (UAS-SfM) from digital aerial photogrammetry, for mapping forest structure and wildfire impacts in the Mediterranean forests of northern California. To determine the ability of UAS-SfM to map the structure of mixed oak and conifer woodlands and to detect persistent changes caused by fire, we compared UAS-SfM derived metrics of terrain height and canopy structure to pre-fire airborne laser scanning (ALS) measurements. We found that UAS-SfM was able to accurately capture the forest’s upper-canopy structure, but was unable to resolve mid- and below-canopy structure. The addition of a normalized difference vegetation index (NDVI) ground point filter to the DTM generation process improved DTM root-mean-square error (RMSE) by ~1 m with an overall DTM RMSE of 2.12 m. Upper-canopy metrics (max height, 95th percentile height, and 75th percentile height) were highly correlated between ALS and UAS-SfM (r > +0.9), while lower-canopy metrics and metrics of density and vertical variation had little to no similarity. Two years after the 2017 Sonoma County Tubbs fire, we found significant decreases in UAS-SfM metrics of bulk canopy height and NDVI with increasing burn severity, indicating the lasting impact of the fire on vegetation health and structure. These results point to the utility of UAS-SfM as a monitoring tool in Mediterranean forests, especially for post-fire canopy changes and subsequent recovery.
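The NDVI ground-point filter mentioned above and the upper-canopy percentile metrics can be sketched in a few lines of numpy. The NDVI threshold and the single ground level used here are illustrative assumptions, not values from the study.

```python
import numpy as np

# Placeholder SfM points: elevation plus red and near-infrared reflectance per point.
n = 20_000
z = np.random.uniform(0, 30, n)
red = np.random.uniform(0.02, 0.3, n)
nir = np.random.uniform(0.05, 0.6, n)

ndvi = (nir - red) / (nir + red + 1e-9)

# Keep likely-ground points (low NDVI) before building the DTM; the 0.3 cut-off is illustrative.
ground_candidates = z[ndvi < 0.3]

# Upper-canopy metrics of the kind compared against ALS in the abstract.
height_above_ground = z - np.median(ground_candidates)
metrics = {
    "max": float(height_above_ground.max()),
    "p95": float(np.percentile(height_above_ground, 95)),
    "p75": float(np.percentile(height_above_ground, 75)),
}
print(metrics)
```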
49

BRIOTTET, Xavier, Laurent HESPEL, and Nicolas RIVIÈRE. "Imagerie laser 3D à plan focal." Mesures mécaniques et dimensionnelles, October 2020. http://dx.doi.org/10.51257/a-v1-r6734.

50

Tuominen, Sakari, Andras Balazs, and Annika Kangas. "Comparison of photogrammetric canopy models from archived and made-to-order aerial imagery in forest inventory." Silva Fennica 54, no. 5 (2020). http://dx.doi.org/10.14214/sf.10291.

Abstract:
In remote sensing-based forest inventories, 3D point cloud data, such as those acquired from airborne laser scanning, are well suited for estimating the volume of growing stock and stand height, but tree species recognition often requires additional optical imagery. A combination of 3D data and optical imagery can be acquired from aerial imaging alone by using stereo-photogrammetric 3D canopy modeling. The use of aerial imagery is well suited for large-area forest inventories due to low costs, good area coverage and a temporally rapid cycle of data acquisition. Stereo-photogrammetric canopy modeling can also be applied to previously acquired imagery, such as that acquired for aerial ortho-mosaic production, assuming that the imagery has sufficient stereo overlap. In this study we compared two stereo-photogrammetric canopy models combined with contemporary satellite imagery in forest inventory. One canopy model was based on standard archived imagery acquired primarily for ortho-mosaic production, and another was based on aerial imagery whose acquisition parameters were better suited to stereo-photogrammetric canopy modeling, including higher imaging resolution and greater stereo coverage. Aerial and satellite data were tested in the estimation of growing stock volume, volumes of the main tree species, basal area, diameter and height. Despite the better quality of the latter canopy model, the difference in accuracy between the forest estimates based on the two data sets was relatively small for most variables (differences in RMSEs were 0–20%, depending on the variable). However, the estimates based on the stereo-photogrammetrically oriented aerial data better retained the original variation of the forest variables present in the study area.
