Academic literature on the topic 'Point cloud analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Point cloud analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Point cloud analysis"

1

Pu, Xinming, Shu Gan, Xiping Yuan, and Raobo Li. "Feature Analysis of Scanning Point Cloud of Structure and Research on Hole Repair Technology Considering Space-Ground Multi-Source 3D Data Acquisition." Sensors 22, no. 24 (December 8, 2022): 9627. http://dx.doi.org/10.3390/s22249627.

Abstract:
As one of the best means of obtaining the geometry information of special shaped structures, point cloud data acquisition can be achieved by laser scanning or photogrammetry. However, there are some differences in the quantity, quality, and information type of point clouds obtained by different methods when collecting point clouds of the same structure, due to differences in sensor mechanisms and collection paths. Thus, this study aimed to combine the complementary advantages of multi-source point cloud data and provide the high-quality basic data required for structure measurement and modeling. Specifically, low-altitude photogrammetry technologies such as hand-held laser scanners (HLS), terrestrial laser scanners (TLS), and unmanned aerial systems (UAS) were adopted to collect point cloud data of the same special-shaped structure in different paths. The advantages and disadvantages of different point cloud acquisition methods of special-shaped structures were analyzed from the perspective of the point cloud acquisition mechanism of different sensors, point cloud data integrity, and single-point geometric characteristics of the point cloud. Additionally, a point cloud void repair technology based on the TLS point cloud was proposed according to the analysis results. Under the premise of unifying the spatial position relationship of the three point clouds, the M3C2 distance algorithm was performed to extract the point clouds with significant spatial position differences in the same area of the structure from the three point clouds. Meanwhile, the single-point geometric feature differences of the multi-source point cloud in the area with the same neighborhood radius was calculated. With the kernel density distribution of the feature difference, the feature points filtered from the HLS point cloud and the TLS point cloud were fused to enrich the number of feature points in the TLS point cloud. In addition, the TLS point cloud voids were located by raster projection, and the point clouds within the void range were extracted, or the closest points were retrieved from the other two heterologous point clouds, to repair the top surface and façade voids of the TLS point cloud. Finally, high-quality basic point cloud data of the special-shaped structure were generated.
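The void-repair step above hinges on locating holes by raster projection. As a point of illustration only (not the authors' implementation), the following NumPy sketch projects a cloud onto a horizontal grid and flags empty cells as candidate voids; the cell size and the synthetic test cloud are assumptions.

```python
import numpy as np

def find_voids(points, cell=0.25):
    """Project a point cloud onto the XY plane and flag empty raster cells.

    points : (N, 3) array of x, y, z coordinates
    cell   : raster cell size in the same units as the cloud
    Returns a boolean grid where True marks a cell with no returns (candidate void).
    """
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    nx, ny = (np.ceil((maxs - mins) / cell).astype(int) + 1)
    counts, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1],
        bins=[nx, ny],
        range=[[mins[0], mins[0] + nx * cell], [mins[1], mins[1] + ny * cell]],
    )
    return counts == 0  # empty cells are potential voids to repair

if __name__ == "__main__":
    pts = np.random.rand(10000, 3) * 10.0
    # carve an artificial hole to simulate a scanning void
    pts = pts[~((pts[:, 0] > 4) & (pts[:, 0] < 6) & (pts[:, 1] > 4) & (pts[:, 1] < 6))]
    print(f"{find_voids(pts).sum()} empty cells flagged")
```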
2

Cai, S., W. Zhang, J. Qi, P. Wan, J. Shao, and A. Shen. "APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 107–11. http://dx.doi.org/10.5194/isprs-archives-xlii-3-107-2018.

Abstract:
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds differ in several respects, such as point density, distribution, and complexity. Some filtering algorithms developed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, which respectively yields total errors of 0.44%, 0.77% and 1.20%. Additionally, a large-area dataset is also tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, the algorithm is efficient and reliable for mobile LiDAR point clouds.
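For orientation, the ground/non-ground split that CSF produces can be mimicked (crudely) by comparing each point with the lowest return in its grid cell; the sketch below is only that stand-in, not the cloth simulation itself, and the cell size and height threshold are assumed values.

```python
import numpy as np

def split_ground(points, cell=1.0, height_thresh=0.3):
    """Crude ground/non-ground split: a point counts as 'ground' if it lies within
    height_thresh of the lowest return in its grid cell. Illustrates the
    classification task only; this is not the cloth-simulation filter."""
    ij = np.floor((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    ground = np.zeros(len(points), dtype=bool)
    for k in np.unique(keys):
        idx = np.where(keys == k)[0]
        zmin = points[idx, 2].min()
        ground[idx] = points[idx, 2] - zmin < height_thresh
    return ground

# ground_mask = split_ground(xyz); non_ground = xyz[~ground_mask]
```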
3

Alsadik, B., M. Gerke, and G. Vosselman. "Visibility analysis of point cloud in close range photogrammetry." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5 (May 28, 2014): 9–16. http://dx.doi.org/10.5194/isprsannals-ii-5-9-2014.

Abstract:
The ongoing development of advanced techniques in photogrammetry, computer vision (CV), robotics and laser scanning to efficiently acquire three-dimensional geometric data offers new possibilities for many applications. The output of these techniques in digital form is often a sparse or dense point cloud describing the 3D shape of an object. Viewing these point clouds in a computerized digital environment poses the difficulty of displaying the points of the object visible from a given viewpoint rather than the hidden points. This visibility problem is a major computer graphics topic and has previously been solved using different mathematical techniques. However, to our knowledge, there is no study presenting the different visibility analysis methods for point clouds from a photogrammetric viewpoint. The visibility approaches, which are surface based or voxel based, and the hidden point removal (HPR) operator are presented. Three different problems in close range photogrammetry are discussed: camera network design, guidance with synthetic images, and gap detection in a point cloud. The latter also introduces a new concept of gap classification. Every problem utilizes a different visibility technique to show the valuable effect of visibility analysis on the final solution.
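The hidden point removal (HPR) operator mentioned in the abstract has a compact formulation: spherically flip the points about the viewpoint and keep those that land on the convex hull of the flipped set plus the viewpoint. A minimal sketch, with the flipping-radius factor as an assumption:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """Return indices of points visible from `viewpoint` via spherical flipping.

    points    : (N, 3) array
    viewpoint : (3,) camera position
    """
    p = points - viewpoint                        # center on the viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = norms.max() * radius_factor               # flipping sphere radius
    flipped = p + 2.0 * (R - norms) * p / norms   # spherical flipping
    # Visible points appear on the convex hull of the flipped points + viewpoint (origin).
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
    visible = hull.vertices[hull.vertices < len(points)]
    return np.sort(visible)

# vis_idx = hidden_point_removal(cloud_xyz, np.array([0.0, 0.0, 10.0]))
```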
4

Liu, Chang, Haiyun Gan, Jialin Li, and Boqing Zhu. "Rasterize the Lidar Point Cloud on The Ground Out Method Optimization Analysis." Journal of Physics: Conference Series 2405, no. 1 (December 1, 2022): 012005. http://dx.doi.org/10.1088/1742-6596/2405/1/012005.

Abstract:
This paper addresses the over-segmentation and under-segmentation of the ground point cloud that occur when an unmanned platform identifies targets in a moving scene, and proposes an optimized rasterization method to remove ground point clouds appropriately. First, the data partitioned by quadrant are processed based on the asymmetric front-to-back and left-to-right distribution of the point cloud. Then, the ground is estimated from the lowest points by drilling into the laser layer data, and the connection between the estimated points and the ground detection point cloud is segmented using a road reflection intensity correction. Next, ground point cloud detection and filtering are optimized using the raster elevation difference. Finally, an obstacle continuum hypothesis model is used to reduce the under-segmentation that occurs inside the raster. Overall, the algorithm shows a certain degree of universality and an effective ground detection and filtering result within the road, achieving the desired goal.
5

Wu, Youping, and Zhihui Zhou. "Intelligent City 3D Modeling Model Based on Multisource Data Point Cloud Algorithm." Journal of Function Spaces 2022 (July 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/6135829.

Abstract:
With the rapid development of smart cities, intelligent navigation, and autonomous driving, quickly obtaining 3D spatial information of urban buildings and building a high-precision, fine-grained 3D model has become a key problem to be solved. Two-dimensional mapping products no longer satisfy many needs of social life, and, together with the promotion of the digital-city concept, this has made three-dimensional, virtualized and realistic representations a common goal. However, the acquired point cloud is always incomplete, owing to occlusion during acquisition and to data density decreasing with distance, so the extracted boundaries are often incomplete as well. In this paper, based on a study of current mainstream 3D model data organization methods, geographic grids, map service specifications, and other related technologies, an intelligent urban 3D modeling model based on a multisource point cloud algorithm is designed to address the unified organization and expression of urban multisource 3D model data. A point cloud preprocessing pipeline is also designed: point cloud noise reduction and downsampling keep the original point cloud geometry unchanged while improving point cloud quality and reducing the number of points. By outputting to a common 3D format, the 3D model constructed in this paper can be applied to many fields such as urban planning and design, architectural landscape design, urban management, emergency disaster relief, environmental protection, and virtual tourism.
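The preprocessing pipeline sketched in the abstract (noise reduction plus downsampling that preserves the overall geometry) corresponds to routine operations in common libraries; a minimal Open3D sketch, where the file names, voxel size, and outlier parameters are placeholders rather than values from the paper:

```python
import open3d as o3d

# Load a raw multisource cloud (path is hypothetical).
pcd = o3d.io.read_point_cloud("city_block.ply")

# Statistical outlier removal: drop points far from their neighbors' mean distance.
pcd_clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Voxel-grid downsampling: one representative point per voxel keeps the overall
# geometry while cutting the point count.
pcd_down = pcd_clean.voxel_down_sample(voxel_size=0.05)

o3d.io.write_point_cloud("city_block_preprocessed.ply", pcd_down)
print(len(pcd.points), "->", len(pcd_down.points), "points")
```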
6

Yu, Ruixuan, and Jian Sun. "Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis." Sensors 21, no. 12 (June 19, 2021): 4211. http://dx.doi.org/10.3390/s21124211.

Abstract:
Shape classification and segmentation of point cloud data are two of the most demanding tasks in photogrammetry and remote sensing applications, which aim to recognize object categories or point labels. Point convolution is an essential operation when designing a network on point clouds for these tasks, which helps to explore 3D local points for feature learning. In this paper, we propose a novel point convolution (PSConv) using separable weights learned with polynomials for 3D point cloud analysis. Specifically, we generalize the traditional convolution defined on the regular data to a 3D point cloud by learning the point convolution kernels based on the polynomials of transformed local point coordinates. We further propose a separable assumption on the convolution kernels to reduce the parameter size and computational cost for our point convolution. Using this novel point convolution, a hierarchical network (PSNet) defined on the point cloud is proposed for 3D shape analysis tasks such as 3D shape classification and segmentation. Experiments are conducted on standard datasets, including synthetic and real scanned ones, and our PSNet achieves state-of-the-art accuracies for shape classification, as well as competitive results for shape segmentation compared with previous methods.
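To make the notion of a polynomial-based point convolution concrete, here is a toy NumPy version: each neighbor's contribution is weighted by a degree-one polynomial of its local coordinates. It only illustrates the idea, not the authors' PSConv (no separable factorization, no training, and the weights below are random).

```python
import numpy as np
from scipy.spatial import cKDTree

def poly_point_conv(points, feats, weights, k=16):
    """Toy point convolution whose kernel is a polynomial of local coordinates.

    points  : (N, 3) point positions
    feats   : (N, C_in) input features
    weights : (4, C_in, C_out) coefficients for the basis [1, dx, dy, dz]
    Returns (N, C_out) output features.
    """
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k)             # (N, k) neighbor indices
    rel = points[nbr] - points[:, None, :]       # (N, k, 3) local coordinates
    basis = np.concatenate([np.ones((*rel.shape[:2], 1)), rel], axis=-1)  # (N, k, 4)
    kernel = np.einsum('nkb,bio->nkio', basis, weights)   # per-neighbor kernel
    return np.einsum('nkio,nki->no', kernel, feats[nbr])  # weight and sum neighbors

rng = np.random.default_rng(0)
pts = rng.standard_normal((1024, 3))
x = rng.standard_normal((1024, 8))           # 8 input channels
w = rng.standard_normal((4, 8, 32)) * 0.1    # untrained coefficients
y = poly_point_conv(pts, x, w)               # (1024, 32)
```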
7

Zhang, Yan, Wenhan Zhao, Bo Sun, Ying Zhang, and Wen Wen. "Point Cloud Upsampling Algorithm: A Systematic Review." Algorithms 15, no. 4 (April 8, 2022): 124. http://dx.doi.org/10.3390/a15040124.

Abstract:
Point cloud upsampling algorithms can improve the resolution of point clouds and generate dense and uniform point clouds, and are an important image processing technology. Significant progress has been made in point cloud upsampling research in recent years. This paper provides a comprehensive survey of point cloud upsampling algorithms. We classify existing point cloud upsampling algorithms into optimization-based methods and deep learning-based methods, and analyze the advantages and limitations of different algorithms from a modular perspective. In addition, we cover some other important issues such as public datasets and performance evaluation metrics. Finally, we conclude this survey by highlighting several future research directions and open issues that should be further addressed.
8

Pan, Liang, Pengfei Wang, and Chee-Meng Chew. "PointAtrousNet: Point Atrous Convolution for Point Cloud Analysis." IEEE Robotics and Automation Letters 4, no. 4 (October 2019): 4035–41. http://dx.doi.org/10.1109/lra.2019.2927948.

9

Ahmad, N., S. Azri, U. Ujang, M. G. Cuétara, G. M. Retortillo, and S. Mohd Salleh. "COMPARATIVE ANALYSIS OF VARIOUS CAMERA INPUT FOR VIDEOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W16 (October 1, 2019): 63–70. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w16-63-2019.

Abstract:
Videogrammetry is a technique for generating point clouds from video frame sequences. It is a branch of photogrammetry that offers attractive capabilities, which makes it an interesting choice for 3D data acquisition. However, different camera inputs and specifications produce point clouds of different quality. The aim of this study is therefore to investigate the quality of the point clouds produced from various camera inputs and specifications. Several devices are used in this study, such as the iPhone 5s, iPhone 7+, iPhone X, the Casio Exilim EX-ZR1000 digital camera, and the Nikon D7000 DSLR. For each device, different resolutions and frame rates (fps) are used for video recording. The videos are processed using EyesCloud3D by eCapture, a platform that receives input such as videos and images to generate point clouds, and a 3D model is constructed from the generated point clouds. The total number of points produced is analyzed to determine which camera input and specification produce a good 3D model, and the factors affecting the number of generated points are examined. Finally, camera resolutions and frame rates are suggested for certain applications based on the number of points generated.
10

Xu, Y., Z. Sun, R. Boerner, T. Koch, L. Hoegner, and U. Stilla. "GENERATION OF GROUND TRUTH DATASETS FOR THE ANALYSIS OF 3D POINT CLOUDS IN URBAN SCENES ACQUIRED VIA DIFFERENT SENSORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 2009–15. http://dx.doi.org/10.5194/isprs-archives-xlii-3-2009-2018.

Abstract:
In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a labeled 3D space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in multi-resolution 3D grids. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within voxels of multiple resolutions in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene for different sensors directly by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
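The voting step, assigning each voxel a semantic label from the annotated reference points it contains and transferring that label to a new sensor's points, can be sketched at a single resolution as follows (the voxel size is an assumption, and no octree is built here):

```python
import numpy as np
from collections import Counter

def voxel_keys(points, origin, voxel):
    return [tuple(k) for k in np.floor((points - origin) / voxel).astype(int)]

def transfer_labels(ref_pts, ref_labels, new_pts, voxel=0.5, unknown=-1):
    """Label new points by majority vote of reference labels in the same voxel."""
    origin = ref_pts.min(axis=0)
    votes = {}
    for key, lab in zip(voxel_keys(ref_pts, origin, voxel), ref_labels):
        votes.setdefault(key, Counter())[lab] += 1
    out = np.full(len(new_pts), unknown, dtype=int)
    for i, key in enumerate(voxel_keys(new_pts, origin, voxel)):
        if key in votes:
            out[i] = votes[key].most_common(1)[0][0]
    return out

# labels_mls = transfer_labels(ref_xyz, ref_lab, mls_xyz, voxel=0.5)
```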

Dissertations / Theses on the topic "Point cloud analysis"

1

Forsman, Mona. "Point cloud densification." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39980.

Abstract:
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. The thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Squares Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see whether they converge to the same result. Wallis filtering is used to increase the contrast in the image. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and a comparison of the standard deviation in the x- and y-directions. If a point is rejected, there is the option to try again with a larger template size, called Adaptive Template Size (ATS).
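For context on the matching machinery the thesis builds on, here is a plain normalized cross-correlation (NCC) matcher over a window of candidate offsets. It is generic NCC only, not the least squares template matching (LSTM), Wallis filtering, or the TNCC/MSP/ATS extensions evaluated in the thesis.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(image, template, search_center, search_radius=10):
    """Slide `template` around `search_center` in `image`; return best offset and score."""
    th, tw = template.shape
    cy, cx = search_center
    best = (0, 0, -1.0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + th > image.shape[0] or x + tw > image.shape[1]:
                continue  # skip candidates that fall outside the image
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best[2]:
                best = (dy, dx, score)
    return best  # (row offset, col offset, correlation)
```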
2

Donner, Marc, Sebastian Varga, and Ralf Donner. "Point cloud generation for hyperspectral ore analysis." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231365.

Abstract:
Recent development of hyperspectral snapshot cameras offers new possibilities for ore analysis. A method for generating a 3D dataset from RGB and hyperspectral images is presented. By using Structure from Motion, a reference of each source image to the resulting point cloud is kept. This reference is used for projecting hyperspectral data onto the point cloud. Additionally, with this work flow it is possible to add meta data to the point cloud, which was generated from images alone.
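The projection step described above, carrying pixel values from each source image back onto the SfM point cloud, reduces to the pinhole camera model applied per point. A minimal sketch under simplifying assumptions (no lens distortion; K, R, and t stand for the intrinsics and pose recovered by Structure from Motion):

```python
import numpy as np

def project_attributes(points, image, K, R, t):
    """Sample a per-point attribute (e.g., one hyperspectral band) from one image.

    points : (N, 3) world coordinates from the SfM reconstruction
    image  : (H, W) single band registered to this camera
    K, R, t: 3x3 intrinsics, 3x3 rotation, (3,) translation (world -> camera)
    Returns (values, valid_mask); points projecting outside the image are masked.
    """
    cam = points @ R.T + t                 # world -> camera frame
    in_front = cam[:, 2] > 0
    uvw = cam @ K.T                        # camera -> homogeneous pixel coordinates
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    h, w = image.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    values = np.zeros(len(points), dtype=image.dtype)
    values[valid] = image[v[valid].astype(int), u[valid].astype(int)]
    return values, valid
```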
3

Donner, Marc, Sebastian Varga, and Ralf Donner. "Point cloud generation for hyperspectral ore analysis." TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23196.

Abstract:
Recent development of hyperspectral snapshot cameras offers new possibilities for ore analysis. A method for generating a 3D dataset from RGB and hyperspectral images is presented. By using Structure from Motion, a reference of each source image to the resulting point cloud is kept. This reference is used for projecting hyperspectral data onto the point cloud. Additionally, with this work flow it is possible to add meta data to the point cloud, which was generated from images alone.
4

Awadallah, Mahmoud Sobhy Tawfeek. "Image Analysis Techniques for LiDAR Point Cloud Segmentation and Surface Estimation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73055.

Abstract:
Light Detection And Ranging (LiDAR), as well as many other applications and sensors, involves segmenting sparse sets of points (point clouds) for which point density is the only discriminating feature. The segmentation of these point clouds is challenging for several reasons, including the fact that the points are not associated with a regular grid. Moreover, the presence of noise, particularly impulsive noise with varying density, can make it difficult to obtain a good segmentation using traditional techniques, including the algorithms that had been developed to process LiDAR data. This dissertation introduces novel algorithms and frameworks based on statistical techniques and image analysis in order to segment and extract surfaces from sparse noisy point clouds. We introduce an adaptive method for mapping point clouds onto an image grid, followed by a contour detection approach based on an enhanced version of region-based Active Contours Without Edges (ACWE). We also propose a noise reduction method using a Bayesian approach and incorporate it, along with other noise reduction approaches, into a joint framework that produces robust results. We combine the aforementioned techniques with a statistical surface refinement method to introduce a novel framework to detect ground and canopy surfaces in micropulse photon-counting LiDAR data. The algorithm is fully automatic and uses no prior elevation or geographic information to extract surfaces. Moreover, we propose a novel segmentation framework for noisy point clouds in the plane based on a Markov random field (MRF) optimization that we call Point Cloud Density-based Segmentation (PCDS). We also developed a large synthetic dataset of in-plane point clouds that includes either a set of randomly placed, sized and oriented primitive objects (circle, rectangle and triangle) or an arbitrary shape that forms a simple approximation of the LiDAR point clouds. Experiments performed on a large number of real LiDAR and synthetic point clouds showed that the proposed frameworks and algorithms outperform the state-of-the-art algorithms in terms of segmentation accuracy and surface RMSE.
Ph. D.
5

Burwell, Claire Leonora. "The effect of 2D vs. 3D visualisation on lidar point cloud analysis tasks." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37950.

Abstract:
The exploitation of human depth perception is not uncommon in visual analysis of data; medical imagery and geological analysis already rely on stereoscopic 3D visualisation. In contrast, 3D scans of the environment are usually represented on a flat, 2D computer screen, although there is potential to take advantage of both (a) the spatial depth that is offered by the point cloud data, and (b) our ability to see stereoscopically. This study explores whether a stereo 3D analysis environment would add value to visual lidar tasks, compared to the standard 2D display. Forty-six volunteers, all with good stereovision and varying lidar knowledge, viewed lidar data in either 2D or in 3D, on a 4m x 2.4m screen. The first task required 2D and 3D measurement of linear lengths of a planar and a volumetric feature, using an interaction device for point selection. Overall, there was no significant difference in the spread of 2D and 3D measurement distributions for both of the measured features. The second task required interpretation of ten features from individual points. These were highlighted across two areas of interest - a flat, suburban area and a valley slope with a mixture of features. No classification categories were offered to the participant and answers were expressed verbally. Two of the ten features (chimney and cliff-face) were interpreted with a better degree of accuracy using the 3D method and the remaining features had no difference in 2D and 3D accuracy. Using the experiment’s data processing and visualisation approaches, results suggest that stereo 3D perception of lidar data does not add value to manual linear measurement. The interpretation results indicate that immersive stereo 3D visualisation does improve the accuracy of manual point cloud classification for certain features. The findings contribute to wider discussions in lidar processing, geovisualisation, and applied psychology.
6

Bungula, Wako Tasisa. "Bi-filtration and stability of TDA mapper for point cloud data." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6918.

Abstract:
TDA mapper is an algorithm used to visualize and analyze big data. TDA mapper is applied to a dataset, X, equipped with a filter function f from X to R. The output of the algorithm is an abstract graph (or simplicial complex). The abstract graph captures topological and geometric information of the underlying space of X. One of the interests in TDA mapper is to study whether or not a mapper graph is stable. That is, if a dataset X is perturbed by a small value, with the perturbed dataset denoted X∂, we would like to compare the TDA mapper graph of X to the TDA mapper graph of X∂. Given a topological space X, if the cover of the image of f satisfies certain conditions, Tamal Dey, Facundo Memoli, and Yusu Wang proved that the TDA mapper is stable. That is, the mapper graph of X differs from the mapper graph of X∂ by a small value measured via homology. The goal of this thesis is three-fold. The first is to introduce a modified TDA mapper algorithm. The fundamental difference between TDA mapper and the modified version is that the modified version avoids the use of a filter function. In comparing the mapper graph outputs, the proposed modified mapper is shown to capture more geometric and topological features. We discuss the advantages and disadvantages of the modified mapper. Tamal Dey, Facundo Memoli, and Yusu Wang showed that a filtration of covers induces a filtration of simplicial complexes, which in turn induces a filtration of homology groups. While Tamal Dey, Facundo Memoli, and Yusu Wang focused on TDA mapper's application to topological spaces, the second goal of this thesis is to show that DBSCAN clustering gives a filtration of covers when TDA mapper is applied to a point cloud. Hence, DBSCAN gives a filtration of mapper graphs (simplicial complexes) and homology groups. More importantly, DBSCAN gives a filtration of covers, mapper graphs, and homology groups in three parameter directions: bin size, epsilon, and Minpts. Hence, there is a multi-dimensional filtration of covers, mapper graphs, and homology groups. We also note that single-linkage clustering is a special case of DBSCAN clustering, so the results proved for DBSCAN also hold for single-linkage. However, complete-linkage does not give a filtration of covers in the bin direction; hence, no filtration of simplicial complexes and homology groups exists when complete-linkage is applied to cluster a dataset. In general, the results hold for any clustering algorithm that gives a filtration of covers. The third (and last) goal of this thesis is to prove that two multi-dimensional persistence modules (one with respect to the original dataset, X; the other with respect to the ∂-perturbation of X) are 2∂-interleaved. In other words, the mapper graphs of X and X∂ differ by a small value as measured by homology.
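For readers new to the construction, a toy version of the classical mapper that the thesis starts from (interval cover over a filter function, DBSCAN on each preimage, edges between clusters that share points) looks as follows; this is a generic sketch, not the modified filter-free mapper proposed in the thesis, and all parameter values are assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN

def mapper_graph(X, filter_values, n_bins=10, overlap=0.3, eps=0.5, min_samples=5):
    """Classical 1D mapper: cover the filter range with overlapping intervals,
    cluster each preimage with DBSCAN, and connect clusters sharing points."""
    lo, hi = filter_values.min(), filter_values.max()
    width = (hi - lo) / n_bins
    nodes, G = [], nx.Graph()
    for b in range(n_bins):
        a = lo + b * width - overlap * width
        c = lo + (b + 1) * width + overlap * width
        idx = np.where((filter_values >= a) & (filter_values <= c))[0]
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
        for lab in set(labels) - {-1}:            # -1 is DBSCAN noise
            members = set(idx[labels == lab])
            G.add_node(len(nodes), size=len(members))
            nodes.append(members)
    for i in range(len(nodes)):                   # edge when clusters overlap
        for j in range(i + 1, len(nodes)):
            if nodes[i] & nodes[j]:
                G.add_edge(i, j)
    return G

# For a point cloud X, a common filter is height: G = mapper_graph(X, X[:, 2])
```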
7

Megahed, Fadel M. "The Use of Image and Point Cloud Data in Statistical Process Control." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26511.

Abstract:
The volume of data acquired in production systems continues to expand. Emerging imaging technologies, such as machine vision systems (MVSs) and 3D surface scanners, diversify the types of data being collected, further pushing data collection beyond discrete dimensional data. These large and diverse datasets increase the challenge of extracting useful information. Unfortunately, industry still relies heavily on traditional quality methods that are limited to fault detection, which fails to consider important diagnostic information needed for process recovery. Modern measurement technologies should spur the transformation of statistical process control (SPC) to provide practitioners with additional diagnostic information. This dissertation focuses on how MVSs and 3D laser scanners can be further utilized to meet that goal. More specifically, this work: 1) reviews image-based control charts while highlighting their advantages and disadvantages; 2) integrates spatiotemporal methods with digital image processing to detect process faults and estimate their location, size, and time of occurrence; and 3) shows how point cloud data (3D laser scans) can be used to detect and locate unknown faults in complex geometries. Overall, the research goal is to create new quality control tools that utilize high density data available in manufacturing environments to generate knowledge that supports decision-making beyond just indicating the existence of a process issue. This allows industrial practitioners to have a rapid process recovery once a process issue has been detected, and consequently reduce the associated downtime.
Ph. D.
8

Chleborad, Aaron A. "Grasping unknown novel objects from single view using octant analysis." Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4089.

9

Rasmussen, Johan, and David Nilsson. "Analys av punktmoln i tre dimensioner." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-36915.

Abstract:
Purpose: To develop a method that can help smaller sawmills extract the greatest possible amount of wood from a log. Method: A quantitative study in which three iterations were carried out using Design Science. Findings: To create an effective algorithm that performs volume calculations on a point cloud of about two million points for an industrial purpose, the focus is on the algorithm being fast and producing correct data. The primary way to make the algorithm fast is to process the point cloud a minimal number of times. The algorithm that meets the goals of this study is Algorithm C, which is both fast and has a low standard deviation in its measurement errors, and which has complexity O(n) in the analysis of sub-point clouds. Implications: Based on this study's algorithm, it would be possible to use stereo camera technology to help smaller sawmills extract the greatest possible amount of wood from a log. Limitations: The algorithm assumes that no points have been created inside the log, which could otherwise lead to misplaced points. If a log is crooked, the center of the log does not coincide with the z-axis; in extreme cases the z-value may fall outside the log, which the algorithm cannot handle.
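One way to read the volume computation being benchmarked is slice-wise integration: cut the cloud into thin slices along the log axis, take the area of each slice's 2D convex hull, and sum area times thickness. The sketch below illustrates that idea only; it is not Algorithm C from the thesis, and the slice thickness is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def log_volume(points, axis=2, slice_thickness=0.01):
    """Approximate volume of a roughly cylindrical cloud by summing the
    convex-hull area of thin slices along `axis` times the slice thickness."""
    z = points[:, axis]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_thickness):
        in_slice = (z >= z0) & (z < z0 + slice_thickness)
        xy = np.delete(points[in_slice], axis, axis=1)
        if len(xy) < 3:
            continue
        try:
            # For a 2D hull, ConvexHull.volume is the enclosed area.
            volume += ConvexHull(xy).volume * slice_thickness
        except Exception:
            continue  # skip degenerate (e.g., collinear) slices
    return volume

# v = log_volume(log_xyz, axis=2, slice_thickness=0.01)
```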
10

Rusinek, Cory A. "New Avenues in Electrochemical Systems and Analysis." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1490350904669695.


Books on the topic "Point cloud analysis"

1

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. 3D Point Cloud Analysis. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0.

2

3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2021.

3

3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2022.

4

Kleinman, Daniel Lee, Karen A. Cloud-Hansen, and Jo Handelsman, eds. Controversies in Science and Technology. Oxford University Press, 2014. http://dx.doi.org/10.1093/oso/9780199383771.001.0001.

Abstract:
When it comes to any current scientific debate, there are more than two sides to every story. Controversies in Science and Technology, Volume 4 analyzes controversial topics in science and technology (infrastructure, ecosystem management, food security, and plastics and health) from multiple points of view. The editors have compiled thought-provoking essays from a variety of experts from academia and beyond, creating a volume that addresses many of the issues surrounding these scientific debates. Part I of the volume discusses infrastructure, and the real meaning behind the term in today's society. Essays address the central issues that motivate current discussion about infrastructure, including writing on the vulnerability to disasters. Part II, titled "Food Policy," will focus on the challenges of feeding an ever-growing world and the costs of not doing so. Part III features essays on chemicals and environmental health, and works to define "safety" as it relates to today's scientific community. The book's final section examines ecosystem management. In the end, Kleinman, Cloud-Hansen, and Handelsman provide a multifaceted volume that will be appropriate for anyone hoping to understand arguments surrounding several of today's most important scientific controversies.
5

Burford, Mark. Mahalia Jackson and the Black Gospel Field. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190634902.001.0001.

Abstract:
Drawing on and piecing together a trove of previously unexamined sources, this book is the first critical study of the renowned African American gospel singer Mahalia Jackson (1911–1972). Beginning with the history of Jackson’s family on a remote cotton plantation in the Central Louisiana parish of Pointe Coupée, the book follows their relocation to New Orleans, where Jackson was born, and Jackson’s own migration to Chicago during the Great Depression. The principal focus is her career in the decade following World War II, during which Jackson, building upon the groundwork of seminal Chicago gospel pioneers and the influential National Baptist Convention, earned a reputation as a dynamic church singer. Eventually, Jackson achieved unprecedented mass-mediated celebrity, breaking through in the late 1940s as an internationally recognized recording artist for Apollo and Columbia Records who also starred in her own radio and television programs. But the book is also a study of the black gospel field of which Jackson was a part. Over the course of the 1940s and 1950s, black gospel singing, both as musical worship and as pop-cultural spectacle, grew exponentially, with expanded visibility, commercial clout, and forms of prestige. Methodologically informed by a Bourdiean field analysis approach that develops a more granular, dynamic, and encompassing picture of post-war black gospel, the book persistently considers Jackson, however exceptional she may have been, in relation to her fellow gospel artists, raising fresh questions about Jackson, gospel music, and the reception of black vernacular culture.

Book chapters on the topic "Point cloud analysis"

1

Weinmann, Martin. "Point Cloud Registration." In Reconstruction and Analysis of 3D Scenes, 55–110. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_4.

2

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Deep Learning-Based Point Cloud Analysis." In 3D Point Cloud Analysis, 53–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_3.

3

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Introduction." In 3D Point Cloud Analysis, 1–13. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_1.

4

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Conclusion and Future Work." In 3D Point Cloud Analysis, 141–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_5.

5

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Explainable Machine Learning Methods for Point Cloud Analysis." In 3D Point Cloud Analysis, 87–140. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_4.

6

Mémoli, Facundo, and Guillermo Sapiro. "Computing with Point Cloud Data." In Statistics and Analysis of Shapes, 201–29. Boston, MA: Birkhäuser Boston, 2006. http://dx.doi.org/10.1007/0-8176-4481-4_8.

7

Weinmann, Martin. "Preliminaries of 3D Point Cloud Processing." In Reconstruction and Analysis of 3D Scenes, 17–38. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_2.

8

Kyöstilä, Tomi, Daniel Herrera C., Juho Kannala, and Janne Heikkilä. "Merging Overlapping Depth Maps into a Nonredundant Point Cloud." In Image Analysis, 567–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38886-6_53.

9

Richtsfeld, Mario, and Markus Vincze. "Point Cloud Segmentation Based on Radial Reflection." In Computer Analysis of Images and Patterns, 955–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03767-2_116.

10

Salih, Yasir, Aamir Saeed Malik, Nicolas Walter, Désiré Sidibé, Naufal Saad, and Fabrice Meriaudeau. "Noise Robustness Analysis of Point Cloud Descriptors." In Advanced Concepts for Intelligent Vision Systems, 68–79. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02895-8_7.


Conference papers on the topic "Point cloud analysis"

1

Liu, Yichen. "Point Cloud registration based on iterative closest point." In 2021 International Conference on Computer Vision and Pattern Analysis, edited by Ruimin Hu, Yang Yue, and Siting Chen. SPIE, 2022. http://dx.doi.org/10.1117/12.2626850.

2

Chen, Haiwei, Shichen Liu, Weikai Chen, Hao Li, and Randall Hill. "Equivariant Point Network for 3D Point Cloud Analysis." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01428.

3

Lu, Jia, and Jing Qian. "Discrete Stress Analysis on Point-Cloud Model Derived From Medical Images." In ASME 2009 Summer Bioengineering Conference. American Society of Mechanical Engineers, 2009. http://dx.doi.org/10.1115/sbc2009-206209.

Abstract:
Pixel or voxel data from medical images provide a point-cloud depiction of complicated anatomies that are difficult to describe in CAD geometry. Traditionally, a point-cloud model needs to be converted into a finite element mesh in order to perform mechanical analysis. Although mesh generation tools have improved significantly over the last decades, generating high-quality meshes in complicated bodies remains a challenge. Recently, the authors developed a family of solid mechanics solvers that work directly on domains represented by point clouds [1,2]. Using this method, it is possible to conduct mechanical analysis on point-cloud representations of patient-specific organs without resorting to the finite element method. In this article, we describe this paradigm of analysis and demonstrate the method with numerical examples.
4

Dargahi, Mozhgan Momtaz, and David Lattanzi. "Spatial Statistical Methods for Complexity-Based Point Cloud Analysis." In ASME 2020 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/smasis2020-2294.

Abstract:
Abstract Modern remote sensing technologies now provide the basis for flexible and highly accurate three-dimensional geometric modeling of structures in the form of point clouds. To date, most efforts are focused on how to use these point clouds to form a digital twin of an asset, but these models can also be used to augment and improve condition assessment and structural health monitoring (SHM). However, point cloud analytics require unique approaches given the complexity and scale of the data. To illustrate these capabilities, we propose a new SHM method that leverages 3D point cloud data and the evolution of this data over time. Taking inspiration from recent work on the use of complexity measures for sensor driven SHM, here we adapt the concept for spatial analysis of 3D digital twins. The fundamental assumption that underpins the approach presented here is that, as a structure degrades in integrity, the randomness of the data increases when compared against the null model of the homogeneous Poisson process, otherwise described as ‘complete spatial randomness’ (CSR). In spatial point analysis, points from a baseline model are generated and placed within a normalized Cartesian reference frame. The spatial randomness of this baseline is considered the null model of the homogeneous Poisson process. In subsequent 3D models of an asset, spatial complexity metrics are recomputed on a local neighborhood level, with increased complexity corresponding to potential damage or degradation of the asset. Another question of interest is to provide a suitable mathematical model for this underlying temporal evolution. Compared to more conventional analytical approaches that can only detect data anomalies via a single computation, this complexity-based approach enables us to further integrate multi-level information, in the form of first and second order moment metrics, to evaluate data anomalies in more depth. In this method we use the variation of the first and second moments of the average intensity of the points in space. A first order metric of a point pattern represents the density change across the study region such as Quadrat density or Kernel density. The second-order metric of the point pattern considers the distance between points, effectively quantifying how points are distributed relative to one another. Examples include Ripley’s K-function, the L-function or Baddeley’s J-function. This analytical approach was tested on a variety of laboratory scale specimens with varying levels of damage and degradation. The results show that this new technique provides rapid analytical capabilities for finding damage and quantifying both damage and evolution in point clouds. Ongoing work seeks to scale up these measures to full-scale specimens, and to explore methods of using the results for damage prognosis through statistical time-series modeling of the evolution of the complexity metrics.
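The second-order statistic named in the abstract, Ripley's K-function, has a compact empirical estimator. The sketch below computes a naive (no edge correction) K for a 2D point pattern with a KD-tree; the study-region area is passed in explicitly, and under complete spatial randomness K(r) is approximately pi*r^2.

```python
import numpy as np
from scipy.spatial import cKDTree

def ripley_k(points_2d, radii, area):
    """Naive Ripley's K estimate (no edge correction) for a 2D point pattern.

    K(r) = (area / n^2) * number of ordered pairs with distance <= r
    """
    n = len(points_2d)
    tree = cKDTree(points_2d)
    k_vals = []
    for r in radii:
        pairs = tree.query_pairs(r)               # unordered pairs within distance r
        k_vals.append(area / (n * n) * 2 * len(pairs))
    return np.array(k_vals)

# radii = np.linspace(0.05, 1.0, 20)
# k_obs = ripley_k(xy, radii, area=1.0)           # compare against np.pi * radii**2
```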
5

Kadam, Pranav, Min Zhang, Shan Liu, and C. C. Jay Kuo. "Unsupervised Point Cloud Registration via Salient Points Analysis (SPA)." In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020. http://dx.doi.org/10.1109/vcip49819.2020.9301874.

6

Srivatsan, Vijay, and Reuven Katz. "In-Process Surface Normal Estimation for Raster Scanned Point Cloud Data." In ASME 2008 9th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2008. http://dx.doi.org/10.1115/esda2008-59007.

Abstract:
A method for in-process surface normal estimation from point cloud data is presented. The method enables surface normal estimation immediately after coordinates of points are measured. Such an approach allows in-process computational registration, used for collision and occlusion avoidance during dimensional inspection with high-precision point-based range sensors. The most commonly used sensor path for inspection with high-precision point-based range sensors is a raster scan path. A novel neighborhood identification approach for raster scanned point cloud data is presented. Quadratic polynomials are used to model the local geometry of the surface, from which the surface normal is estimated for the point. Implementation of the method through simulations and on a real part shows the normal estimation error to be within 0.1°.
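The normal-estimation step can be illustrated directly: fit a quadratic height function to a local neighborhood by least squares and take the surface normal at the point of interest. This is a generic sketch of that idea; the paper's raster-scan neighborhood identification is not reproduced here.

```python
import numpy as np

def quadratic_normal(neighborhood, center):
    """Estimate the surface normal at `center` from nearby points.

    Fits z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 in coordinates local to
    `center`, then returns the unit normal (-dz/dx, -dz/dy, 1), normalized.
    """
    local = neighborhood - center
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c, d, e, f = coef
    # At the local origin the partial derivatives reduce to b and c.
    n = np.array([-b, -c, 1.0])
    return n / np.linalg.norm(n)
```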
7

Li, Pei-Heng, Juo-Wei Lin, Yi-Lun Huang, and Ting-Lan Lin. "Analysis of Octree Coding for 3D Point Cloud Frame." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97328.

Abstract:
Octree coding has been used extensively for 3D point cloud compression. For encoding, recursive octree partitioning is applied to the 3D point cloud frame until only one point is contained in each cube bin, and the representative bits are recorded. For decoding, the reverse procedure is performed to reconstruct the 3D point cloud frame. During reconstruction, the point in a particular bin is usually quantized to a particular corner of the bin, which introduces coding distortion in the point location. In this paper, nine reconstruction locations are tested to compare their distortion results, and the bit cost is also analyzed.
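The distortion being analyzed can be reproduced with a simple voxel quantizer: snap each point to a chosen reconstruction location inside its leaf cell and measure the geometric error. The sketch below compares the cell corner against the cell center on a uniform grid as a stand-in for the octree leaves; the leaf size is an assumption, and only two of the nine candidate locations are shown.

```python
import numpy as np

def quantize(points, leaf=0.05, mode="corner"):
    """Reconstruct each point at its leaf-cell corner or center."""
    origin = points.min(axis=0)
    cells = np.floor((points - origin) / leaf)
    offset = 0.0 if mode == "corner" else 0.5
    return origin + (cells + offset) * leaf

def rmse(a, b):
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

pts = np.random.rand(100000, 3)
for mode in ("corner", "center"):
    rec = quantize(pts, leaf=0.05, mode=mode)
    print(mode, "RMSE:", round(rmse(pts, rec), 5))
# The center reconstruction roughly halves the per-axis error of the corner choice.
```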
8

Liu, Fayao, Guosheng Lin, Chuan-Sheng Foo, Chaitanya K. Joshi, and Jie Lin. "Point Discriminative Learning for Data-efficient 3D Point Cloud Analysis." In 2022 International Conference on 3D Vision (3DV). IEEE, 2022. http://dx.doi.org/10.1109/3dv57658.2022.00017.

9

Wei, Shuangfeng, and Hong Chen. "Building depth images from scattered point cloud." In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.838404.

10

Fujiwara, Kent, and Taiichi Hashimoto. "Neural Implicit Embedding for Point Cloud Analysis." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.01175.


Reports on the topic "Point cloud analysis"

1

Berney, Ernest, Naveen Ganesh, Andrew Ward, J. Newman, and John Rushing. Methodology for remote assessment of pavement distresses from point cloud analysis. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40401.

Abstract:
The ability to remotely assess road and airfield pavement condition is critical to dynamic basing, contingency deployment, convoy entry and sustainment, and post-attack reconnaissance. Current Army processes to evaluate surface condition are time-consuming and require Soldier presence. Recent developments in the area of photogrammetry and light detection and ranging (LiDAR) enable rapid generation of three-dimensional point cloud models of the pavement surface. Point clouds were generated from data collected on a series of asphalt, concrete, and unsurfaced pavements using ground- and aerial-based sensors. ERDC-developed algorithms automatically discretize the pavement surface into cross- and grid-based sections to identify physical surface distresses such as depressions, ruts, and cracks. Depressions can be sized from the point-to-point distances bounding each depression, and surface roughness is determined based on the point heights along a given cross section. Noted distresses are exported to a distress map file containing only the distress points and their locations for later visualization and quality control along with classification and quantification. Further research and automation into point cloud analysis is ongoing with the goal of enabling Soldiers with limited training the capability to rapidly assess pavement surface condition from a remote platform.
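The cross-section analysis described above can be illustrated with a simple depression check along one extracted profile: compare each point's height with a straight reference line fitted across the section and flag drops beyond a threshold. The threshold and the line-fit reference are assumptions for illustration, not the ERDC algorithms.

```python
import numpy as np

def find_depressions(offsets, heights, threshold=0.02):
    """Flag depressions along one pavement cross section.

    offsets   : (N,) distance across the section
    heights   : (N,) surface elevation at each offset
    threshold : drop below the fitted reference line that counts as a depression
    Returns (mask, depths) where depths are positive drop magnitudes.
    """
    slope, intercept = np.polyfit(offsets, heights, deg=1)  # reference surface line
    residual = heights - (slope * offsets + intercept)
    mask = residual < -threshold
    return mask, np.where(mask, -residual, 0.0)

# mask, depths = find_depressions(x_profile, z_profile, threshold=0.02)
# print(f"max rut depth: {depths.max():.3f} m")
```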
2

Berney, Ernest, Andrew Ward, and Naveen Ganesh. First generation automated assessment of airfield damage using LiDAR point clouds. Engineer Research and Development Center (U.S.), March 2021. http://dx.doi.org/10.21079/11681/40042.

Abstract:
This research developed an automated software technique for identifying type, size, and location of man-made airfield damage including craters, spalls, and camouflets from a digitized three-dimensional point cloud of the airfield surface. Point clouds were initially generated from Light Detection and Ranging (LiDAR) sensors mounted on elevated lifts to simulate aerial data collection and, later, an actual unmanned aerial system. LiDAR data provided a high-resolution, globally positioned, and dimensionally scaled point cloud exported in a LAS file format that was automatically retrieved and processed using volumetric detection algorithms developed in the MATLAB software environment. Developed MATLAB algorithms used a three-stage filling technique to identify the boundaries of craters first, then spalls, then camouflets, and scaled their sizes based on the greatest pointwise extents. All pavement damages and their locations were saved as shapefiles and uploaded into the GeoExPT processing environment for visualization and quality control. This technique requires no user input between data collection and GeoExPT visualization, allowing for a completely automated software analysis with all filters and data processing hidden from the user.
3

Kholoshyn, Ihor V., Olga V. Bondarenko, Olena V. Hanchuk, and Iryna M. Varfolomyeyeva. Cloud technologies as a tool of creating Earth Remote Sensing educational resources. [б. в.], July 2020. http://dx.doi.org/10.31812/123456789/3885.

Abstract:
This article is dedicated to the Earth Remote Sensing (ERS), which the authors believe is a great way to teach geography and allows forming an idea of the actual geographic features and phenomena. One of the major problems that now constrains the active introduction of remote sensing data in the educational process is the low availability of training aerospace pictures, which meet didactic requirements. The article analyzes the main sources of ERS as a basis for educational resources formation with aerospace images: paper, various individual sources (personal stations receiving satellite information, drones, balloons, kites and balls) and Internet sources (mainstream sites, sites of scientific-technical organizations and distributors, interactive Internet geoservices, cloud platforms of geospatial analysis). The authors point out that their geospatial analysis platforms (Google Earth Engine, Land Viewer, EOS Platform, etc.), due to their unique features, are the basis for the creation of information thematic databases of ERS. The article presents an example of such a database, covering more than 800 aerospace images and dynamic models, which are combined according to such didactic principles as high information load and clarity.
4

Marienko, Maiia V., Yulia H. Nosenko, and Mariya P. Shyshkina. Personalization of learning using adaptive technologies and augmented reality. [б. в.], November 2020. http://dx.doi.org/10.31812/123456789/4418.

Abstract:
The research is aimed at developing the recommendations for educators on using adaptive technologies and augmented reality in personalized learning implementation. The latest educational technologies related to learning personalization and the adaptation of its content to the individual needs of students and group work are considered. The current state of research is described, the trends of development are determined. Due to a detailed analysis of scientific works, a retrospective of the development of adaptive and, in particular, cloud-oriented systems is shown. The preconditions of their appearance and development, the main scientific ideas that contributed to this are analyzed. The analysis showed that the scientists point to four possible types of semantic interaction of augmented reality and adaptive technologies. The adaptive cloud-based educational systems design is considered as the promising trend of research. It was determined that adaptability can be manifested in one or a combination of several aspects: content, evaluation and consistency. The cloud technology is taken as a platform for integrating adaptive learning with augmented reality as the effective modern tools to personalize learning. The prospects of the adaptive cloud-based systems design in the context of teachers training are evaluated. The essence and place of assistive technologies in adaptive learning systems design are defined. It is shown that augmented reality can be successfully applied in inclusive education. The ways of combining adaptive systems and augmented reality tools to support the process of teachers training are considered. The recommendations on the use of adaptive cloud-based systems in teacher education are given.
5

Habib, Ayman, Darcy M. Bullock, Yi-Chun Lin, and Raja Manish. Road Ditch Line Mapping with Mobile LiDAR. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317354.

Abstract:
Maintenance of roadside ditches is important to avoid localized flooding and premature failure of pavements. Scheduling effective preventative maintenance requires mapping of the ditch profile to identify areas requiring excavation of long-term sediment accumulation. High-resolution, high-quality point clouds collected by mobile LiDAR mapping systems (MLMS) provide an opportunity for effective monitoring of roadside ditches and performing hydrological analyses. This study evaluated the applicability of mobile LiDAR for mapping roadside ditches for slope and drainage analyses, and compared the performance of alternative MLMS units: an unmanned ground vehicle, an unmanned aerial vehicle, a portable backpack system along with its vehicle-mounted version, a medium-grade wheel-based system, and a high-grade wheel-based system. Point clouds from all the MLMS units were in agreement in the vertical direction within ±3 cm for solid surfaces, such as paved roads, and within ±7 cm for surfaces with vegetation. The portable backpack system, which could be carried by a surveyor or mounted on a vehicle, was the most flexible MLMS. The report concludes that, due to its flexibility and cost-effectiveness, the portable backpack system is the preferred platform for mapping roadside ditches, followed by the medium-grade wheel-based system. Furthermore, a framework for ditch line characterization is proposed and tested using datasets acquired by the medium-grade wheel-based and vehicle-mounted portable systems over a state highway. An existing ground filtering approach is modified to handle variations in point density of mobile LiDAR data. Hydrological analyses, including flow direction and flow accumulation, are applied to extract the drainage network from the digital terrain model (DTM). Cross-sectional/longitudinal profiles of the ditch are automatically extracted from the LiDAR data and visualized in 3D point clouds and 2D images. The slope derived from the LiDAR data was found to be very close to highway cross slope design standards of 2% on driving lanes, 4% on shoulders, and a 6-by-1 slope for ditch lines. Potential flooded regions are identified by detecting areas with no LiDAR return, and recall scores of 54% and 92% were achieved by the medium-grade wheel-based and vehicle-mounted portable systems, respectively.
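The slope checks reported above (2% on lanes, 4% on shoulders, 6-to-1 ditch slopes) come down to gradients on the DTM raster; a minimal NumPy sketch, where the cell size and the slope limit are illustrative assumptions:

```python
import numpy as np

def slope_percent(dtm, cell_size=0.5):
    """Slope magnitude (in percent) of a DTM raster with square cells."""
    dz_dy, dz_dx = np.gradient(dtm, cell_size)    # rise over run along both axes
    return 100.0 * np.hypot(dz_dx, dz_dy)

def flag_over_limit(dtm, cell_size=0.5, max_percent=4.0):
    """Boolean raster of cells whose slope exceeds a chosen design limit (e.g., 4%)."""
    return slope_percent(dtm, cell_size) > max_percent

# slopes = slope_percent(dtm_array, cell_size=0.5)
# steep_cells = flag_over_limit(dtm_array, max_percent=4.0)
```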