Academic literature on the topic '3D Point Cloud Compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D Point Cloud Compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "3D Point Cloud Compression"

1

Huang, Tianxin, Jiangning Zhang, Jun Chen, Zhonggan Ding, Ying Tai, Zhenyu Zhang, Chengjie Wang, and Yong Liu. "3QNet." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–13. http://dx.doi.org/10.1145/3550454.3555481.

Full text
Abstract:
Since the development of 3D applications, the point cloud, as a spatial description easily acquired by sensors, has been widely used in areas such as SLAM and 3D reconstruction. Point Cloud Compression (PCC) has also attracted attention as a primary step before point clouds are transferred and stored, and geometry compression, which compresses the points' geometric structure, is an important component of PCC. However, existing non-learning-based geometry compression methods are often limited by manually pre-defined compression rules. Though learning-based compression methods can significantly improve performance by learning compression rules from data, they still have some defects. Voxel-based compression networks introduce precision errors due to voxelization, while point-based methods may have relatively weak robustness and are mainly designed for sparse point clouds. In this work, we propose a novel learning-based point cloud compression framework named the 3D Point Cloud Geometry Quantization Compression Network (3QNet), which overcomes the robustness limitation of existing point-based methods and can handle dense points. By learning a codebook of common structural features from simple and sparse shapes, 3QNet can efficiently deal with multiple kinds of point clouds. In experiments on object models, indoor scenes, and outdoor scans, 3QNet achieves better compression performance than many representative methods.
APA, Harvard, Vancouver, ISO, and other styles
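The codebook idea at the heart of 3QNet can be illustrated with a plain vector-quantization sketch (the hand-picked two-entry codebook and the tiny cloud below are illustrative assumptions; 3QNet learns its codebook end-to-end over structural features, not raw coordinates):

```python
def encode(points, codebook):
    """Assign each point to the index of its nearest codeword (L2 distance).

    The list of indices is the 'compressed' stream; in a real codec it
    would then be entropy-coded.
    """
    return [min(range(len(codebook)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, codebook[i])))
            for p in points]


def decode(indices, codebook):
    """Reconstruct an approximate point cloud by codebook lookup."""
    return [codebook[i] for i in indices]


# Toy, hand-picked codebook and cloud (illustrative only).
codebook = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
cloud = [(0.1, 0.0, 0.1), (0.9, 1.0, 0.8)]
codes = encode(cloud, codebook)
recon = decode(codes, codebook)
```

Each point costs only one codeword index after quantization; the reconstruction error is the distance to the chosen codeword.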
2

Morell, Vicente, Sergio Orts, Miguel Cazorla, and Jose Garcia-Rodriguez. "Geometric 3D point cloud compression." Pattern Recognition Letters 50 (December 2014): 55–62. http://dx.doi.org/10.1016/j.patrec.2014.05.016.

Full text
3

Yu, Siyang, Si Sun, Wei Yan, Guangshuai Liu, and Xurui Li. "A Method Based on Curvature and Hierarchical Strategy for Dynamic Point Cloud Compression in Augmented and Virtual Reality System." Sensors 22, no. 3 (February 7, 2022): 1262. http://dx.doi.org/10.3390/s22031262.

Full text
Abstract:
As an information-intensive 3D representation, the point cloud is developing rapidly in immersive applications, which has sparked new attention to point cloud compression. The most popular dynamic methods ignore the characteristics of point clouds and use an exhaustive neighborhood search, which seriously impacts the encoder's runtime. Therefore, we propose an improved compression method for dynamic point clouds based on curvature estimation and a hierarchical strategy to meet the demands of real-world scenarios. The method includes initial segmentation derived from the similarity between normals, an iterative curvature-based hierarchical refining process, and image generation and video compression based on de-redundancy without performance loss. The curvature-based hierarchical refining module divides the voxelized point cloud into high-curvature and low-curvature points and optimizes the initial clusters hierarchically. The experimental results show that our method achieves improved compression performance and faster runtime than traditional video-based dynamic point cloud compression.
4

Imdad, Ulfat, Mirza Tahir Ahmed, Muhammad Asif, and Hanan Aljuaid. "3D point cloud lossy compression using quadric surfaces." PeerJ Computer Science 7 (October 6, 2021): e675. http://dx.doi.org/10.7717/peerj-cs.675.

Full text
Abstract:
The presence of 3D sensors in hand-held or head-mounted smart devices has motivated many researchers around the globe to devise algorithms to manage 3D point cloud data efficiently and economically. This paper presents a novel lossy compression technique to compress and decompress 3D point cloud data that saves storage space on smart devices and minimizes bandwidth use when the data are transferred over the network. The idea presented in this research exploits the geometric information of the scene by using a quadric surface representation of the point cloud. A region of a point cloud can be represented by the coefficients of a quadric surface when the boundary conditions are known. Thus, a set of quadric surface coefficients and their associated boundary conditions are stored as the compressed point cloud and used for decompression. An added advantage of the proposed technique is its flexibility to decompress the cloud as either a dense or a coarse cloud. We compared our technique with state-of-the-art 3D lossless and lossy compression techniques on a number of standard, publicly available datasets of varying structural complexity.
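The core step, representing a patch of points by quadric coefficients, can be sketched as a least-squares fit of an explicit quadric z = ax² + by² + cxy + dx + ey + f (the explicit z = f(x, y) form is an illustrative assumption; the paper works with general quadric surfaces and also stores boundary conditions):

```python
def fit_quadric(points):
    """Least-squares fit of z = ax^2 + by^2 + cxy + dx + ey + f to (x, y, z)
    samples, solving the 6x6 normal equations with Gaussian elimination."""
    def basis(x, y):
        return [x * x, y * y, x * y, x, y, 1.0]

    A = [[0.0] * 6 for _ in range(6)]   # normal matrix  sum(phi * phi^T)
    b = [0.0] * 6                       # right-hand side sum(phi * z)
    for x, y, z in points:
        phi = basis(x, y)
        for i in range(6):
            b[i] += phi[i] * z
            for j in range(6):
                A[i][j] += phi[i] * phi[j]

    # Gaussian elimination with partial pivoting.
    for col in range(6):
        piv = max(range(col, 6), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 6):
            f = A[r][col] / A[col][col]
            for c in range(col, 6):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]

    coef = [0.0] * 6                    # back-substitution
    for i in range(5, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 6))) / A[i][i]
    return coef
```

Storing six coefficients (plus boundary conditions) in place of every point in the region is where the compression comes from; sampling the fitted surface more or less densely gives the dense-versus-coarse decompression the abstract mentions.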
5

Yu, Jiawen, Jin Wang, Longhua Sun, Mu-En Wu, and Qing Zhu. "Point Cloud Geometry Compression Based on Multi-Layer Residual Structure." Entropy 24, no. 11 (November 17, 2022): 1677. http://dx.doi.org/10.3390/e24111677.

Full text
Abstract:
Point cloud data are extensively used in applications such as autonomous driving and augmented reality, since they can provide both detailed and realistic depictions of 3D scenes or objects. Meanwhile, 3D point clouds generally occupy a large amount of storage space, which is a heavy burden for efficient communication. However, it is difficult to efficiently compress such sparse, disordered, non-uniform, and high-dimensional data. Therefore, this work proposes a novel deep-learning framework for point cloud geometry compression based on an autoencoder architecture. Specifically, a multi-layer residual module is designed on a sparse convolution-based autoencoder that progressively down-samples the input point clouds and reconstructs them in a hierarchical way. It effectively constrains the accuracy of the sampling process at the encoder side, which significantly preserves feature information while decreasing the data volume. Compared with the state-of-the-art geometry-based point cloud compression (G-PCC) schemes, our approach obtains more than 70–90% BD-Rate gain on an object point cloud dataset and achieves better point cloud reconstruction quality. Additionally, compared to the state-of-the-art PCGCv2, we achieve an average gain of about 10% in BD-Rate.
6

Quach, Maurice, Aladine Chetouani, Giuseppe Valenzise, and Frederic Dufaux. "A deep perceptual metric for 3D point clouds." Electronic Imaging 2021, no. 9 (January 18, 2021): 257–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-257.

Full text
Abstract:
Point clouds are essential for storage and transmission of 3D content. As they can entail significant volumes of data, point cloud compression is crucial for practical usage. Recently, point cloud geometry compression approaches based on deep neural networks have been explored. In this paper, we evaluate the ability to predict perceptual quality of typical voxel-based loss functions employed to train these networks. We find that the commonly used focal loss and weighted binary cross entropy are poorly correlated with human perception. We thus propose a perceptual loss function for 3D point clouds which outperforms existing loss functions on the ICIP2020 subjective dataset. In addition, we propose a novel truncated distance field voxel grid representation and find that it leads to sparser latent spaces and loss functions that are more correlated with perceived visual quality compared to a binary representation. The source code is available at https://github.com/mauriceqch/2021_pc_perceptual_loss.
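The focal loss the paper evaluates (and finds poorly correlated with perception) has a compact form; a minimal sketch over flattened voxel-occupancy probabilities, with illustrative α and γ defaults rather than the paper's settings:

```python
import math


def focal_loss(p_pred, y_true, alpha=0.75, gamma=2.0):
    """Mean focal loss over binary voxel-occupancy predictions.

    p_pred: predicted probability of occupancy per voxel (flattened grid).
    y_true: ground-truth occupancy (0 or 1) per voxel.
    The (1 - pt)^gamma factor down-weights easy, confident voxels.
    """
    total = 0.0
    for p, y in zip(p_pred, y_true):
        pt = p if y == 1 else 1.0 - p          # probability of the true class
        a = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
        total += -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))
    return total / len(p_pred)
```

The loss is tiny for confident correct voxels and grows steeply for confident mistakes, which is exactly the behavior the paper tests against subjective quality scores.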
7

Lee, Mun-yong, Sang-ha Lee, Kye-dong Jung, Seung-hyun Lee, and Soon-chul Kwon. "A Novel Preprocessing Method for Dynamic Point-Cloud Compression." Applied Sciences 11, no. 13 (June 26, 2021): 5941. http://dx.doi.org/10.3390/app11135941.

Full text
Abstract:
Computer-based data processing capabilities have evolved to handle vast amounts of information. As such, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially. This rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method of efficiently managing and compressing animation information stored in 3D point-cloud sequences. A compressed point-cloud is created by reconfiguring the points based on their voxels. Compared with the original point-cloud, noise caused by errors is removed, and a preprocessing procedure that achieves high performance in a redundancy-processing algorithm is proposed. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlap data are extracted and removed, further reducing the file size.
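A minimal sketch of the voxel-based redundancy removal described above: keep one representative point per occupied cell, so duplicates and near-duplicates across frames collapse (the 5 cm cell size is an illustrative assumption, not the paper's setting):

```python
import math


def voxel_dedup(points, voxel=0.05):
    """Keep the first point seen in each occupied voxel cell.

    Points falling into an already-occupied cell are treated as overlap
    data and dropped, shrinking the sequence before encoding.
    """
    seen, kept = set(), []
    for p in points:
        key = tuple(math.floor(c / voxel) for c in p)
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept
```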
8

Luo, Guoliang, Bingqin He, Yanbo Xiong, Luqi Wang, Hui Wang, Zhiliang Zhu, and Xiangren Shi. "An Optimized Convolutional Neural Network for the 3D Point-Cloud Compression." Sensors 23, no. 4 (February 16, 2023): 2250. http://dx.doi.org/10.3390/s23042250.

Full text
Abstract:
Due to the tremendous volume taken up by 3D point-cloud models, achieving a balance between a high compression ratio, a low distortion rate, and computing cost is a significant issue in point-cloud compression for virtual reality (VR). Convolutional neural networks have been used in numerous point-cloud compression research approaches during the past few years in an effort to advance the state of the art. In this work, we have evaluated the effects of different network parameters, including neural network depth, stride, and activation function, on point-cloud compression, resulting in an optimized convolutional neural network for compression. We first analyzed earlier research on point-cloud compression based on convolutional neural networks before designing our own network. Then, we modified the model parameters using the experimental data to further enhance the compression. Based on the experimental results, we found that a neural network with 4 layers and a stride of 2, using the Sigmoid activation function, outperforms the default configuration by 208% in terms of the compression-distortion rate. The experimental results show that our findings are effective and universal and make a great contribution to research on point-cloud compression using convolutional neural networks.
9

Gu, Shuai, Junhui Hou, Huanqiang Zeng, and Hui Yuan. "3D Point Cloud Attribute Compression via Graph Prediction." IEEE Signal Processing Letters 27 (2020): 176–80. http://dx.doi.org/10.1109/lsp.2019.2963793.

Full text
10

Dybedal, Joacim, Atle Aalerud, and Geir Hovland. "Embedded Processing and Compression of 3D Sensor Data for Large Scale Industrial Environments." Sensors 19, no. 3 (February 2, 2019): 636. http://dx.doi.org/10.3390/s19030636.

Full text
Abstract:
This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data which are processed on a local embedded computer and compressed using an octree-based scheme. The compressed data is transferred to a central node where the individual point clouds from several nodes are decompressed and filtered based on a novel method for generating intensity values for sensors which do not natively produce such a value. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. The main advantage of processing point cloud data locally on the nodes is scalability. The proposed solution could, with a dedicated Gigabit Ethernet local network, be scaled up to approximately 440 sensor nodes, only limited by the processing power of the central node that is receiving the compressed data from the local nodes. A compression ratio of 40.5 was obtained when compressing a point cloud stream from a single Microsoft Kinect V2 sensor using an octree resolution of 4 cm.
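The octree-based scheme mentioned above can be sketched as recursive occupancy coding: each internal node emits one byte whose bits mark which of its eight children contain points (the depth-first traversal and the single-point stopping rule are simplifications for illustration; production octree coders use breadth-first streams and entropy coding):

```python
def encode_octree(points, center, half, depth):
    """Depth-first octree occupancy coding.

    Emits one byte per internal node: bit i is set iff child octant i holds
    at least one point. Subdivision stops at `depth` 0 or when a node holds
    a single point (a simplification).
    """
    if depth == 0 or len(points) <= 1:
        return []
    buckets = [[] for _ in range(8)]
    for p in points:
        idx = (p[0] > center[0]) | ((p[1] > center[1]) << 1) | ((p[2] > center[2]) << 2)
        buckets[idx].append(p)
    stream = [sum(1 << i for i, b in enumerate(buckets) if b)]
    q = half / 2.0
    for i, b in enumerate(buckets):
        if b:
            child = (center[0] + (q if i & 1 else -q),
                     center[1] + (q if i & 2 else -q),
                     center[2] + (q if i & 4 else -q))
            stream += encode_octree(b, child, q, depth - 1)
    return stream
```

Raw float coordinates cost 12 bytes per point; the occupancy stream replaces them with a handful of bytes at a resolution fixed by the depth, which is where ratios like the paper's 40.5 come from (the exact figure depends on resolution and the entropy coder).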

Dissertations / Theses on the topic "3D Point Cloud Compression"

1

Morell, Vicente. "Contributions to 3D Data Registration and Representation." Doctoral thesis, Universidad de Alicante, 2014. http://hdl.handle.net/10045/42364.

Full text
Abstract:
Nowadays, the new generation of computers provides the high performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential capability for moving through those environments. Traditionally, mobile robots have used a combination of sensors based on different technologies. Lasers, sonars, and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information as humans to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant to mobile robotics. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained, the system can add furniture using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and the representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, and therefore many points belong to planar surfaces. Our method achieves good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the strength of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
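The plane-based compression idea can be sketched by fitting z = ax + by + c to a near-planar patch; storing the three coefficients plus only the 2D (x, y) coordinates then drops one of the three coordinates per point (the explicit-z parameterization, which assumes non-vertical planes, is an illustrative simplification of the thesis's plane detection):

```python
def fit_plane(points):
    """Least-squares fit of z = ax + by + c via the 3x3 normal equations,
    solved in closed form with Cramer's rule. Assumes a non-vertical patch."""
    n = float(len(points))
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    det = sxx * (syy * n - sy * sy) - sxy * (sxy * n - sy * sx) + sx * (sxy * sy - syy * sx)
    a = (sxz * (syy * n - sy * sy) - sxy * (syz * n - sy * sz) + sx * (syz * sy - syy * sz)) / det
    b = (sxx * (syz * n - sz * sy) - sxz * (sxy * n - sx * sy) + sx * (sxy * sz - syz * sx)) / det
    c = (sxx * (syy * sz - sy * syz) - sxy * (sxy * sz - sx * syz) + sxz * (sxy * sy - syy * sx)) / det
    return a, b, c
```

A compressed patch is then (a, b, c) plus the (x, y) pairs; z is recovered as ax + by + c, roughly a one-third coordinate saving on planar regions before any entropy coding.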
2

Roure, Garcia Ferran. "Tools for 3D point cloud registration." Doctoral thesis, Universitat de Girona, 2017. http://hdl.handle.net/10803/403345.

Full text
Abstract:
In this thesis, we did an in-depth review of the state of the art of 3D registration, evaluating the most popular methods. Given the lack of standardization in the literature, we also proposed a nomenclature and a classification to unify the evaluation systems and to be able to compare the different algorithms under the same criteria. The major contribution of the thesis is the Registration Toolbox, which consists of software and a database of 3D models. The software presented here consists of a 3D Registration Pipeline written in C++ that allows researchers to try different methods, as well as add new ones and compare them. In this Pipeline, we not only implemented the most popular methods in the literature, but also added three new methods that contribute to improving the state of the art. On the other hand, the database provides different 3D models for carrying out the tests needed to validate the performance of the methods. Finally, we presented a new hybrid data structure specially focused on the search for neighbors. We tested our proposal together with other data structures and obtained very satisfactory results, in many cases surpassing the best current alternatives. All tested structures are also available in our Pipeline. This Toolbox is intended to be a useful tool for the whole community and is available to researchers under a Creative Commons license.
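A uniform-grid spatial hash is one simple ingredient of the kind of hybrid neighbor-search structure the thesis explores; a fixed-radius query can be sketched as follows (illustrative only, not the thesis's actual data structure):

```python
import math


def build_grid(points, cell):
    """Hash each point index into a uniform 3D grid cell."""
    grid = {}
    for i, p in enumerate(points):
        key = tuple(math.floor(c / cell) for c in p)
        grid.setdefault(key, []).append(i)
    return grid


def radius_neighbors(points, grid, cell, q, r):
    """Indices of points within distance r of query q.

    Requires r <= cell so that scanning the 27 cells around the query
    is guaranteed to cover the whole search ball.
    """
    kx, ky, kz = (math.floor(c / cell) for c in q)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in grid.get((kx + dx, ky + dy, kz + dz), ()):
                    if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= r * r:
                        hits.append(i)
    return hits
```

Against a kd-tree, the grid trades memory for O(1) cell lookup, which is why hybrid schemes combining both are worth benchmarking.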
3

Tarcin, Serkan. "Fast Feature Extraction From 3d Point Cloud." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615659/index.pdf.

Full text
Abstract:
To teleoperate an unmanned vehicle, a rich set of information should be gathered from the surroundings. These systems use sensors which send high amounts of data, and processing the data in CPUs can be time consuming. Similarly, the algorithms that use the data may run slowly because of the amount of data. The solution is preprocessing the data taken from the sensors on the vehicle and transmitting only the necessary parts or the results of the preprocessing. In this thesis, a 180-degree laser scanner at the front end of an unmanned ground vehicle (UGV) is tilted up and down on a horizontal axis, and point clouds are constructed from the surroundings. Instead of transmitting this data directly to the path planning or obstacle avoidance algorithms, a preprocessing stage is run. In this preprocessing, first, the points belonging to the ground plane are detected and a simplified version of the ground is constructed; then the obstacles are detected. Finally, a simplified ground plane and simple primitive geometric shapes as obstacles are sent to the path planning algorithms instead of the whole point cloud.
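The preprocessing described above, detecting ground points, replacing them with a simplified surface, and passing obstacles on, can be caricatured with a height-threshold ground test and a bounding-box ground summary (the thesis detects the actual ground plane from the scan; the flat-ground assumption and thresholds here are illustrative):

```python
def simplify_scan(points, ground_z=0.0, tol=0.05):
    """Split a scan into ground and obstacle points by height, then replace
    the ground points with the four corners of their 2D bounding box.

    Returns (ground_corners, obstacle_points): a drastically smaller payload
    to transmit than the raw cloud.
    """
    ground = [p for p in points if abs(p[2] - ground_z) <= tol]
    obstacles = [p for p in points if abs(p[2] - ground_z) > tol]
    if not ground:
        return [], obstacles
    xs = [p[0] for p in ground]
    ys = [p[1] for p in ground]
    corners = [(min(xs), min(ys), ground_z), (min(xs), max(ys), ground_z),
               (max(xs), max(ys), ground_z), (max(xs), min(ys), ground_z)]
    return corners, obstacles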
4

Forsman, Mona. "Point cloud densification." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39980.

Full text
Abstract:
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. This thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Square Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of different scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, and then tested to see if they converge to the same result. Wallis filtering is used to increase the contrast in the image. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and comparison of the standard deviation in the x- and y-directions. If a point is rejected, the option exists to try again with a larger template size, called Adaptive Template Size (ATS).
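Template matching of this kind is scored by normalized cross-correlation; a minimal sketch on flattened patches (TNCC additionally applies a geometric transform to the patch before correlating, which is omitted here):

```python
import math


def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches
    (flattened to 1D lists). Invariant to affine brightness changes:
    gain and offset in one patch do not change the score."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

A score near +1 marks a candidate correspondence worth refining with the least-squares step; near 0 or negative scores are discarded.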
5

Gujar, Sanket. "Pointwise and Instance Segmentation for 3D Point Cloud." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1290.

Full text
Abstract:
The camera is the cheapest and, computationally, the most real-time option for detecting or segmenting the environment for an autonomous vehicle, but it does not provide depth information and is unreliable at night, in bad weather, and during tunnel flash-outs. The risk of an accident is higher for autonomous cars driven by a camera in such situations. The industry has relied on LiDAR for the past decade to solve this problem and provide depth information about the environment, but LiDAR also has its shortcomings. Industry methods commonly use projection methods to create a projection image and run detection and localization networks for inference, but LiDAR sees obscurants in bad weather and is sensitive enough to detect snow, making robustness difficult for projection-based methods. We propose a novel pointwise and instance segmentation deep learning architecture for point clouds focused on self-driving applications. The model depends only on LiDAR data, making it light-invariant and overcoming the shortcomings of the camera in the perception stack. The pipeline takes advantage of both global and local/edge features of points in point clouds to generate high-level features. We also propose Pointer-Capsnet, an extension of CapsNet for small 3D point clouds.
6

Chen, Chen. "Semantics Augmented Point Cloud Sampling for 3D Object Detection." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26956.

Full text
Abstract:
3D object detection is an emerging topic among both industries and research communities. It aims at discovering objects of interest from 3D scenes and has a strong connection with many real-world scenarios, such as autonomous driving. Currently, many models have been proposed to detect potential objects from point clouds. Some methods attempt to model point clouds in the unit of point, and then perform detection with acquired point-wise features. These methods are classified as point-based methods. However, we argue that the prevalent sampling algorithm for point-based models is sub-optimal for involving too much potentially unimportant data and may also lose some important information for detecting objects. Hence, it may lead to a significant performance drop. This thesis manages to improve the current sampling strategy for point-based models in the context of 3D detection. We propose recasting the sampling algorithm by incorporating semantic information to help identify more beneficial data for detection, thus obtaining a semantics augmented sampling strategy. In particular, we introduce a 2-phase augmentation for sampling. In the point feature learning phase, we propose a semantics-guided farthest point sampling (S-FPS) to keep more informative foreground points. In addition, in the box prediction phase, we devise a semantic balance sampling (SBS) to avoid redundant training on easily recognized instances. We evaluate our proposed strategy on the popular KITTI dataset and the large-scale nuScenes dataset. Extensive experiments show that our method lifts the point-based single-stage detector to surpass all existing point-based models and even achieve comparable performance to state-of-the-art two-stage methods.
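The sampling strategy being improved is farthest point sampling; a plain-Python sketch with an optional per-point score weight, loosely mimicking the semantics-guided variant (the multiplicative score-weighting rule here is an illustrative guess, not the thesis's exact S-FPS formulation):

```python
def fps(points, k, scores=None):
    """Farthest point sampling: greedily pick the point farthest from the
    set chosen so far. Optional `scores` (e.g. predicted foreground
    probability) weight the distance term so that semantically important
    points are favored. Returns indices of the k samples."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    chosen = [0]                                   # seed with the first point
    dist = [d2(p, points[0]) for p in points]      # distance to the chosen set
    while len(chosen) < k:
        for i, p in enumerate(points):             # fold in the newest sample
            dist[i] = min(dist[i], d2(p, points[chosen[-1]]))
        w = scores if scores else [1.0] * len(points)
        chosen.append(max(range(len(points)), key=lambda i: dist[i] * w[i]))
    return chosen
```

With uniform scores this reduces to plain FPS; down-weighting background points is what keeps more foreground in the sample.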
7

Dey, Emon Kumar. "Effective 3D Building Extraction from Aerial Point Cloud Data." Thesis, Griffith University, 2022. http://hdl.handle.net/10072/413311.

Full text
Abstract:
Building extraction is important for a wider range of applications including smart city planning, disaster management, security, and cadastral mapping. This thesis mainly aims to present an effective data-driven strategy for building extraction using aerial Light Detection And Ranging (LiDAR) point cloud data. The LiDAR data provides highly accurate three-dimensional (3D) positional information. Therefore, studies on building extraction using LiDAR data have broadened in scope over time. Outliers, inharmonious input data behaviour, innumerable building structure possibilities, and heterogeneous environments are major challenges that need to be addressed for an effective 3D building extraction using LiDAR data. Outliers can cause the extraction of erroneous roof planes, incorrect boundaries, and over-segmentation of the extracted buildings. Due to the uneven point densities and heterogeneous building structures, small roof parts often remain undetected. Moreover, finding and using a realistic performance metric to evaluate the extracted buildings is another challenge. Inaccurate identification of sharp features, coplanar points, and boundary feature points often causes inaccurate roof plane segmentation and overall 3D outline generation for a building. To address these challenges, first, this thesis proposes a robust variable point neighbourhood estimation method. Considering the specific scanline properties associated with aerial LiDAR data, the proposed method automatically estimates an optimal and realistic neighbourhood for each point to solve the shortcomings of existing fixed neighbourhood methods in uneven or abrupt point densities. Using the estimated variable neighbourhood, a robust z-score and a distance-based outlier factor are calculated for each point in the input data. 
Based on these two measurements, an effective outlier detection method is proposed which can preserve more than 98% of inliers and remove outliers with better precision than the existing state-of-the-art methods. Then, individual roof planes are extracted in a robust way from the separated outlier-free coplanar points based on the M-estimator SAmple Consensus (MSAC) plane-fitting algorithm. The proposed technique is capable of extracting small real roof planes, while avoiding spurious roof planes caused by the remaining outliers, if any. Individual buildings are then extracted precisely by grouping adjacent roof planes into clusters. Next, to assess the extracted buildings and individual roof plane boundaries, a realistic evaluation metric is proposed based on a new robust corner correspondence algorithm. The metric is defined as the average minimum distance d_avg from the extracted boundary points to their actual corresponding reference lines. It strictly follows the definition of a standard mathematical metric, and addresses the shortcomings of the existing metrics. In addition, during the evaluation, the proposed metric separately identifies the underlap and extralap areas in an extracted building. Furthermore, finding precise 3D feature points (e.g., fold and boundary) is necessary for tracing feature lines to describe a building outline. It is also important for accurate roof plane extraction and for establishing relationships between the correctly extracted planes so as to facilitate a more robust 3D building extraction. Thus, this thesis presents a robust fold feature point extraction method based on the calculated normal of the individual point. Later, a method to extract the feature points representing the boundaries is also developed based on the distance from a point to the calculated mean of its estimated neighbours. In the context of the accuracy evaluation, the proposed methods show more than 90% F1-scores on the generated ground truth data. 
Finally, machine learning techniques are applied to circumvent the problems (e.g., selecting manual thresholds for different parameters) of existing rule-based approaches for roof feature point extraction and classification. Seven effective geometric and statistical features are calculated for each point to train and test the machine learning classifiers using the appropriate ground truth data. Four primary classes of building roof point cloud are considered, and promising results for each of the classes have been achieved, confirming the competitive performance of the classification over the state-of-the-art techniques. At the end of this thesis, using the classified roof feature points, a more robust plane segmentation algorithm is demonstrated for extracting the roof planes of individual buildings.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
8

Eckart, Benjamin. "Compact Generative Models of Point Cloud Data for 3D Perception." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1089.

Abstract:
One of the most fundamental tasks for any robotics application is the ability to adequately assimilate and respond to incoming sensor data. In the case of 3D range sensing, modern-day sensors generate massive quantities of point cloud data that strain available computational resources. Dealing with large quantities of unevenly sampled 3D point data is a great challenge for many fields, including autonomous driving, 3D manipulation, augmented reality, and medical imaging. This thesis explores how carefully designed statistical models for point cloud data can facilitate, accelerate, and unify many common tasks in the area of range-based 3D perception. We first establish a novel family of compact generative models for 3D point cloud data, offering them as an efficient and robust statistical alternative to traditional point-based or voxel-based data structures. We then show how these statistical models can be utilized toward the creation of a unified data processing architecture for tasks such as segmentation, registration, visualization, and mapping. In complex robotics systems, it is common for various concurrent perceptual processes to have separate low-level data processing pipelines. Besides introducing redundancy, these processes may perform their own data processing in conflicting or ad hoc ways. To avoid this, tractable data structures and models need to be established that share common perceptual processing elements. Additionally, given that many robotics applications involving point cloud processing are size, weight, and power-constrained, these models and their associated algorithms should be deployable in low-power embedded systems while retaining acceptable performance. Given a properly flexible and robust point processor, therefore, many low-level tasks could be unified under a common architectural paradigm and greatly simplify the overall perceptual system. 
In this thesis, a family of compact generative models is introduced for point cloud data based on hierarchical Gaussian Mixture Models. Using recursive, data-parallel variants of the Expectation Maximization algorithm, we construct high-fidelity statistical and hierarchical point cloud models that compactly represent the data as a 3D generative probability distribution. In contrast to raw points or voxel-based decompositions, our proposed statistical model provides a better theoretical footing for robustly dealing with noise, constructing maximum likelihood methods, reasoning probabilistically about free space, utilizing spatial sampling techniques, and performing gradient-based optimizations. Further, the construction of the model as a spatial hierarchy allows for Octree-like logarithmic time access. One challenge compared to previous methods, however, is that our model-based approach incurs a potentially high creation cost. To mitigate this problem, we leverage data parallelism in order to design models well-suited for GPU acceleration, allowing them to run at real-time rates for many time-critical applications. We show how our models can facilitate various 3D perception tasks, demonstrating state-of-the-art performance in geometric segmentation, registration, dynamic occupancy map creation, and 3D visualization.
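The core idea of representing a point cloud as a compact generative mixture can be sketched with a flat (non-hierarchical) Gaussian mixture; scikit-learn's `GaussianMixture` stands in here for the thesis's recursive, GPU-accelerated EM variants, so this is an illustration of the concept rather than the author's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "point cloud": two planar patches with simulated sensor noise
patch1 = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                          rng.normal(0.0, 0.01, 500)])
patch2 = np.column_stack([rng.uniform(0, 1, 500), rng.normal(1.0, 0.01, 500),
                          rng.uniform(0, 1, 500)])
cloud = np.vstack([patch1, patch2])          # 1000 x 3 points

# Compress the cloud into an 8-component generative model: a handful of
# means, covariances, and weights instead of 3000 raw coordinates
gmm = GaussianMixture(n_components=8, covariance_type='full',
                      random_state=0).fit(cloud)

# The model is generative: new points can be sampled from it,
# and any point's likelihood under the model can be evaluated
samples, _ = gmm.sample(1000)
print(cloud.shape, samples.shape)  # → (1000, 3) (1000, 3)
```

The hierarchical variant in the thesis arranges such mixtures in a tree, which is what yields the Octree-like logarithmic access the abstract mentions.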
9

Oropallo, William Edward Jr. "A Point Cloud Approach to Object Slicing for 3D Printing." Thesis, University of South Florida, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10751757.

Abstract:

Various industries have embraced 3D printing for manufacturing on-demand, custom printed parts. However, 3D printing requires intelligent data processing and algorithms to go from CAD model to machine instructions. One of the most crucial steps in the process is the slicing of the object. Most 3D printers build parts by accumulating material layer by layer. 3D printing software needs to calculate these layers for manufacturing by slicing a model and computing the intersections. Finding exact intersections with the original model is mathematically complicated and computationally demanding. A preprocessing stage of tessellation has become the standard practice for slicing models. Calculating intersections with tessellations of the original model is computationally simple but can introduce inaccuracies and errors that can ruin the final print.

This dissertation shows that a point cloud approach to preprocessing and slicing models is robust and accurate. The point cloud approach to object slicing avoids the complexities of directly slicing models while evading the error-prone tessellation stage. An algorithm developed for this dissertation generates point clouds and slices models within a tolerance. The algorithm uses the original NURBS model and converts the model into a point cloud, based on layer thickness and accuracy requirements. The algorithm then uses a gridding structure to calculate where intersections happen and fit B-spline curves to those intersections.

This algorithm finds accurate intersections and can ignore certain anomalies and errors from the modeling process. The underlying point evaluation is stable and computationally inexpensive. This algorithm provides an alternative to the challenges of both the direct and tessellated slicing methods that have been the focus of the 3D printing industry.
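The layer-extraction step described above can be reduced to a minimal sketch: bin the sampled points around each slicing plane at multiples of the layer thickness. The dissertation's NURBS evaluation, gridding structure, and B-spline contour fitting are all omitted, and the function names below are our own.

```python
import numpy as np

def slice_point_cloud(points, layer_thickness, tol):
    """Group points into printing layers: points within +/- tol of the
    slicing plane z = k * layer_thickness become that layer's contour samples."""
    z = points[:, 2]
    layers = {}
    n_layers = int(np.floor(z.max() / layer_thickness)) + 1
    for k in range(n_layers):
        plane_z = k * layer_thickness
        mask = np.abs(z - plane_z) <= tol
        if mask.any():
            layers[k] = points[mask]
    return layers

# Dense samples on a unit cylinder, standing in for a cloud
# generated from a CAD model to a chosen accuracy
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
zs = np.linspace(0, 1, 101)
T, Z = np.meshgrid(theta, zs)
pts = np.column_stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()])

layers = slice_point_cloud(pts, layer_thickness=0.2, tol=0.005)
print(sorted(layers))  # → [0, 1, 2, 3, 4, 5]
```

In the dissertation the points per layer would then be ordered via the gridding structure and fitted with B-spline curves to produce the final toolpath contours.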

10

Lev, Hoang Justin. "A Study of 3D Point Cloud Features for Shape Retrieval." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM040.

Abstract:
With the improvement and proliferation of 3D sensors, falling prices, and increasing computational power, the use of 3D data has intensified in recent years. The 3D point cloud is one representation for such data, among others. It has the advantage of being simple and accurate, and it is the direct output of the sensor. As a non-regular structure consisting of an unordered list of points, the point cloud is challenging to analyse, which explains its only recent adoption. This thesis focuses on the use of the 3D point cloud representation for three-dimensional shape analysis. The geometry of objects is studied in particular through their curvatures. Descriptors representing the distribution of the principal curvatures are proposed: Principal Curvature Point Cloud and Multi-Scale Principal Curvature Point Cloud. Global Local Point Cloud (GLPC) is another curvature-based descriptor that combines curvature with other features. These three descriptors are robust to typical 3D scan errors such as noise and occlusion, and they outperform state-of-the-art algorithms in instance retrieval tasks with more than 90% accuracy. The thesis also studies the deep learning algorithms for 3D point clouds that emerged during the three years of this PhD. A first approach uses curvature-based descriptors as the input of a multi-layer perceptron (MLP) network. Its accuracy does not reach state-of-the-art performance, but the study shows that ModelNet, the standard benchmark dataset for 3D object classification, is not a good picture of reality: the dataset does not reflect the richness of curvature found in scans of real objects. Finally, a new neural network architecture is proposed. Inspired by state-of-the-art deep learning networks, Multi-Scale PointNet computes features at multiple scales and combines them to describe an object. Still under development, the model requires further tuning to obtain conclusive results. In summary, by tackling the challenging use of 3D point clouds as well as the rapid evolution of the field, the thesis contributes to the state of the art in three major aspects: (i) the design of new algorithms relying on the geometric curvatures of objects for instance retrieval; (ii) a study showing that a new, more realistic benchmark dataset is needed to properly pursue research in the field; and (iii) the proposal of a new deep neural network architecture for 3D point cloud analysis.
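The curvature-distribution idea behind these descriptors can be illustrated with a much simpler stand-in: estimate a per-point "surface variation" from local PCA (after Pauly et al.) and histogram it into a global shape signature. This is not the thesis's principal-curvature computation, and all names below are ours.

```python
import numpy as np

def surface_variation(points, k=12):
    """Per-point surface variation: smallest eigenvalue of the local
    covariance over the k nearest neighbours, normalised by the trace.
    0 on a perfect plane, larger on curved or noisy regions."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]           # brute-force k-NN
    out = np.empty(len(points))
    for i, idx in enumerate(nn):
        cov = np.cov(points[idx].T)
        ev = np.sort(np.linalg.eigvalsh(cov))
        out[i] = max(ev[0], 0.0) / ev.sum()      # clamp tiny negatives
    return out

def curvature_histogram(points, bins=8):
    """Global descriptor: normalised histogram of surface variation."""
    sv = surface_variation(points)
    hist, _ = np.histogram(sv, bins=bins, range=(0, 1 / 3))
    return hist / hist.sum()

rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 1, (300, 2)), np.zeros(300)])
desc = curvature_histogram(plane)
print(desc[0])  # → 1.0  (a flat patch puts all mass in the lowest bin)
```

Comparing such histograms between a query scan and a database is, in spirit, how a curvature-distribution descriptor supports instance retrieval.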

Books on the topic "3D Point cloud Compression"

1

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. 3D Point Cloud Analysis. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0.

2

Zhang, Guoxiang, and YangQuan Chen. Towards Optimal Point Cloud Processing for 3D Reconstruction. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96110-7.

3

Chen, YangQuan, and Guoxiang Zhang. Towards Optimal Point Cloud Processing for 3D Reconstruction. Springer International Publishing AG, 2022.

4

3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2021.

5

3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2022.


Book chapters on the topic "3D Point cloud Compression"

1

Tu, Chenxi. "Point Cloud Compression for 3D LiDAR Sensor." In Frontiers of Digital Transformation, 119–34. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-1358-9_8.

2

Cheng, Shyi-Chyi, Ting-Lan Lin, and Ping-Yuan Tseng. "K-SVD Based Point Cloud Coding for RGB-D Video Compression Using 3D Super-Point Clustering." In MultiMedia Modeling, 690–701. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_56.

3

Alexandrov, Victor V., Sergey V. Kuleshov, Alexey J. Aksenov, and Alexandra A. Zaytseva. "The Method of Lossless 3D Point Cloud Compression Based on Space Filling Curve Implementation." In Automation Control Theory Perspectives in Intelligent Systems, 415–22. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-33389-2_39.

4

Héno, Raphaële, and Laure Chandelier. "Point Cloud Processing." In 3D Modeling of Buildings, 133–81. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118648889.ch5.

5

Weinmann, Martin. "Point Cloud Registration." In Reconstruction and Analysis of 3D Scenes, 55–110. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_4.

6

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Deep Learning-Based Point Cloud Analysis." In 3D Point Cloud Analysis, 53–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_3.

7

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Introduction." In 3D Point Cloud Analysis, 1–13. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_1.

8

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Conclusion and Future Work." In 3D Point Cloud Analysis, 141–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_5.

9

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Explainable Machine Learning Methods for Point Cloud Analysis." In 3D Point Cloud Analysis, 87–140. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_4.

10

McInerney, Daniel, and Pieter Kempeneers. "3D Point Cloud Data Processing." In Open Source Geospatial Tools, 263–82. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01824-9_15.


Conference papers on the topic "3D Point cloud Compression"

1

Cao, Chao, Marius Preda, and Titus Zaharia. "3D Point Cloud Compression." In Web3D '19: The 24th International Conference on 3D Web Technology. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3329714.3338130.

2

Renault, Sylvain, Thomas Ebner, Ingo Feldmann, and Oliver Schreer. "Point cloud compression framework for the web." In 2016 International Conference on 3D Imaging (IC3D). IEEE, 2016. http://dx.doi.org/10.1109/ic3d.2016.7823455.

3

Huang, Tianxin, and Yong Liu. "3D Point Cloud Geometry Compression on Deep Learning." In MM '19: The 27th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3343031.3351061.

4

Bui, Mai, Lin-Ching Chang, Hang Liu, Qi Zhao, and Genshe Chen. "Comparative Study of 3D Point Cloud Compression Methods." In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 2021. http://dx.doi.org/10.1109/bigdata52589.2021.9671822.

5

Xu, Jiacheng, Zhijun Fang, Yongbin Gao, Siwei Ma, Yaochu Jin, Heng Zhou, and Anjie Wang. "Point AE-DCGAN: A deep learning model for 3D point cloud lossy geometry compression." In 2021 Data Compression Conference (DCC). IEEE, 2021. http://dx.doi.org/10.1109/dcc50243.2021.00085.

6

Daribo, Ismael, Ryo Furukawa, Ryusuke Sagawa, and Hiroshi Kawasaki. "Adaptive arithmetic coding for point cloud compression." In 2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2012). IEEE, 2012. http://dx.doi.org/10.1109/3dtv.2012.6365475.

7

Fan, Tingyu, Linyao Gao, Yiling Xu, Zhu Li, and Dong Wang. "D-DPCC: Deep Dynamic Point Cloud Compression via 3D Motion Prediction." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/126.

Abstract:
The non-uniformly distributed nature of the 3D Dynamic Point Cloud (DPC) brings significant challenges to its efficient inter-frame compression. This paper proposes a novel 3D sparse convolution-based Deep Dynamic Point Cloud Compression (D-DPCC) network to compensate and compress the DPC geometry with 3D motion estimation and motion compensation in the feature space. In the proposed D-DPCC network, we design a Multi-scale Motion Fusion (MMF) module to accurately estimate the 3D optical flow between the feature representations of adjacent point cloud frames. Specifically, we utilize a 3D sparse convolution-based encoder to obtain the latent representation for motion estimation in the feature space and introduce the proposed MMF module for fused 3D motion embedding. Besides, for motion compensation, we propose a 3D Adaptively Weighted Interpolation (3DAWI) algorithm with a penalty coefficient to adaptively decrease the impact of distant neighbours. We compress the motion embedding and the residual with a lossy autoencoder-based network. To our knowledge, this paper is the first work proposing an end-to-end deep dynamic point cloud compression framework. Experimental results show that the proposed D-DPCC framework achieves an average 76% BD-Rate (Bjontegaard Delta Rate) gain against state-of-the-art Video-based Point Cloud Compression (V-PCC) v13 in inter mode.
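The BD-Rate figure quoted in this abstract is computed with the classic Bjontegaard method: fit a cubic polynomial to log-rate as a function of quality for each codec and compare the integrals over the overlapping quality range. A compact sketch (with made-up rate-distortion points, not the paper's data):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard Delta Rate: average % bitrate change of the test codec
    over the anchor at equal quality (negative = bitrate savings)."""
    la, lt = np.log10(rate_anchor), np.log10(rate_test)
    pa = np.polyfit(psnr_anchor, la, 3)   # cubic fit: log-rate vs quality
    pt = np.polyfit(psnr_test, lt, 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)
    avg_diff = ((np.polyval(it, hi) - np.polyval(it, lo))
                - (np.polyval(ia, hi) - np.polyval(ia, lo))) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Hypothetical RD curves: the test codec needs half the bits at every quality
psnr = np.array([60.0, 65.0, 70.0, 75.0])
rate_anchor = np.array([1.0, 2.0, 4.0, 8.0])     # Mbps
bd = bd_rate(rate_anchor, psnr, rate_anchor / 2, psnr)
print(round(bd, 1))  # → -50.0, i.e. a 50% bitrate saving
```

A "76% BD-Rate gain" in the paper's sense means the D-DPCC curve needs, on average, 76% less bitrate than V-PCC v13 at equal reconstruction quality.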
8

Li, Li, Zhu Li, Vladyslav Zakharchenko, and Jianle Chen. "Advanced 3D Motion Prediction for Video Based Point Cloud Attributes Compression." In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00058.

9

Nguyen, Dat Thanh, Maurice Quach, Giuseppe Valenzise, and Pierre Duhamel. "Learning-Based Lossless Compression of 3D Point Cloud Geometry." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414763.

10

Li, Zhe, Lanyi He, Wenjie Zhu, Yiling Xu, Jun Sun, and Le Yang. "3D Point Cloud Attribute Compression Based on Cylindrical Projection." In 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). IEEE, 2019. http://dx.doi.org/10.1109/bmsb47279.2019.8971837.


Reports on the topic "3D Point cloud Compression"

1

Blundell, S., and Philip Devine. Creation, transformation, and orientation adjustment of a building façade model for feature segmentation : transforming 3D building point cloud models into 2D georeferenced feature overlays. Engineer Research and Development Center (U.S.), January 2020. http://dx.doi.org/10.21079/11681/35115.

2

Habib, Ayman, Darcy M. Bullock, Yi-Chun Lin, and Raja Manish. Road Ditch Line Mapping with Mobile LiDAR. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317354.

Abstract:
Maintenance of roadside ditches is important to avoid localized flooding and premature failure of pavements. Scheduling effective preventative maintenance requires mapping of the ditch profile to identify areas requiring excavation of long-term sediment accumulation. High-resolution, high-quality point clouds collected by mobile LiDAR mapping systems (MLMS) provide an opportunity for effective monitoring of roadside ditches and performing hydrological analyses. This study evaluated the applicability of mobile LiDAR for mapping roadside ditches for slope and drainage analyses, comparing the performance of alternative MLMS units: an unmanned ground vehicle, an unmanned aerial vehicle, a portable backpack system along with its vehicle-mounted version, a medium-grade wheel-based system, and a high-grade wheel-based system. Point clouds from all the MLMS units were in agreement in the vertical direction within the ±3 cm range for solid surfaces, such as paved roads, and the ±7 cm range for surfaces with vegetation. The portable backpack system, which could be carried by a surveyor or mounted on a vehicle, was the most flexible MLMS. The report concludes that due to the flexibility and cost effectiveness of the portable backpack system, it is the preferred platform for mapping roadside ditches, followed by the medium-grade wheel-based system. Furthermore, a framework for ditch line characterization is proposed and tested using datasets acquired by the medium-grade wheel-based and vehicle-mounted portable systems over a state highway. An existing ground filtering approach is modified to handle variations in point density of mobile LiDAR data. Hydrological analyses, including flow direction and flow accumulation, are applied to extract the drainage network from the digital terrain model (DTM). Cross-sectional/longitudinal profiles of the ditch are automatically extracted from the LiDAR data and visualized in 3D point clouds and 2D images.
The slope derived from the LiDAR data was found to be very close to highway cross-slope design standards of 2% on driving lanes and 4% on shoulders, as well as the 6-by-1 slope for ditch lines. Potential flooded regions are identified by detecting areas with no LiDAR return; recall scores of 54% and 92% were achieved by the medium-grade wheel-based and vehicle-mounted portable systems, respectively.
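The cross-slope comparison described above amounts to fitting a line to a lateral elevation profile and reading off its grade. A minimal sketch with made-up data (the report's profile-extraction pipeline, filtering, and real LiDAR inputs are all omitted):

```python
import numpy as np

def cross_slope_percent(offsets, elevations):
    """Least-squares cross slope (%) of a road cross-section profile:
    elevation change per unit lateral offset."""
    slope, _ = np.polyfit(offsets, elevations, 1)
    return abs(slope) * 100

# Hypothetical lane cross-section: 2% design slope plus ~5 mm sensor noise
rng = np.random.default_rng(2)
offsets = np.linspace(0.0, 3.6, 37)                   # metres across the lane
elev = -0.02 * offsets + rng.normal(0, 0.005, 37)     # metres
print(round(cross_slope_percent(offsets, elev), 1))   # close to the 2% standard
```

Repeating this per extracted cross-section is how LiDAR-derived slopes can be checked against the 2% lane, 4% shoulder, and 6-by-1 ditch design values.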