
Journal articles on the topic '3D Point Cloud Compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic '3D Point Cloud Compression'.

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Huang, Tianxin, Jiangning Zhang, Jun Chen, Zhonggan Ding, Ying Tai, Zhenyu Zhang, Chengjie Wang, and Yong Liu. "3QNet." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–13. http://dx.doi.org/10.1145/3550454.3555481.

Abstract:
Since the development of 3D applications, the point cloud, as a spatial description easily acquired by sensors, has been widely used in areas such as SLAM and 3D reconstruction. Point Cloud Compression (PCC) has also attracted increasing attention as a primary step before point clouds are transferred or stored, and geometry compression is an important component of PCC that compresses the points' geometric structure. However, existing non-learning-based geometry compression methods are often limited by manually pre-defined compression rules. Although learning-based compression methods can significantly improve performance by learning compression rules from data, they still have defects: voxel-based compression networks introduce precision errors due to voxelization, while point-based methods may have relatively weak robustness and are mainly designed for sparse point clouds. In this work, we propose a novel learning-based point cloud compression framework named the 3D Point Cloud Geometry Quantization Compression Network (3QNet), which overcomes the robustness limitation of existing point-based methods and can handle dense points. By learning a codebook of common structural features from simple and sparse shapes, 3QNet can efficiently deal with multiple kinds of point clouds. In experiments on object models, indoor scenes, and outdoor scans, 3QNet achieves better compression performance than many representative methods.
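The codebook idea in the abstract above can be sketched as vector quantization: local geometry features are replaced by the index of their nearest codeword, so only small integers (plus the shared codebook) need to be stored. The following is a minimal illustrative sketch, not the authors' implementation; all names and shapes are assumptions.

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codeword."""
    # Pairwise squared distances between features (N, D) and codebook (K, D).
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def dequantize(indices, codebook):
    """Reconstruct features by looking up the shared codebook."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))  # 16 codewords of dimension 8
# Synthetic features that lie near codewords, as a learned encoder might produce.
features = codebook[rng.integers(0, 16, 100)] + 0.01 * rng.normal(size=(100, 8))

idx = quantize(features, codebook)   # the "compressed" representation
recon = dequantize(idx, codebook)
print(idx.shape, recon.shape)
```

In a learned codec the codebook and the feature extractor are trained jointly; here both are random stand-ins to show the storage trade-off: 100 vectors of 8 floats become 100 small integers.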
2

Morell, Vicente, Sergio Orts, Miguel Cazorla, and Jose Garcia-Rodriguez. "Geometric 3D point cloud compression." Pattern Recognition Letters 50 (December 2014): 55–62. http://dx.doi.org/10.1016/j.patrec.2014.05.016.

3

Yu, Siyang, Si Sun, Wei Yan, Guangshuai Liu, and Xurui Li. "A Method Based on Curvature and Hierarchical Strategy for Dynamic Point Cloud Compression in Augmented and Virtual Reality System." Sensors 22, no. 3 (February 7, 2022): 1262. http://dx.doi.org/10.3390/s22031262.

Abstract:
As an information-intensive 3D representation, the point cloud is developing rapidly in immersive applications, which has also sparked new attention in point cloud compression. The most popular dynamic methods ignore the characteristics of point clouds and use an exhaustive neighborhood search, which seriously impacts the encoder's runtime. Therefore, we propose an improved compression method for dynamic point clouds based on curvature estimation and a hierarchical strategy to meet the demands of real-world scenarios. The method includes an initial segmentation derived from the similarity between normals, a curvature-based hierarchical refining process for iterating, and image generation and video compression based on de-redundancy without performance loss. The curvature-based hierarchical refining module divides the voxelized point cloud into high-curvature points and low-curvature points and optimizes the initial clusters hierarchically. The experimental results show that our method achieves better compression performance and faster runtime than traditional video-based dynamic point cloud compression.
4

Imdad, Ulfat, Mirza Tahir Ahmed, Muhammad Asif, and Hanan Aljuaid. "3D point cloud lossy compression using quadric surfaces." PeerJ Computer Science 7 (October 6, 2021): e675. http://dx.doi.org/10.7717/peerj-cs.675.

Abstract:
The presence of 3D sensors in hand-held or head-mounted smart devices has motivated many researchers around the globe to devise algorithms that manage 3D point cloud data efficiently and economically. This paper presents a novel lossy compression technique to compress and decompress 3D point cloud data, saving storage space on smart devices and minimizing bandwidth use when the data are transferred over the network. The idea presented in this research exploits the geometric information of the scene by using a quadric surface representation of the point cloud. A region of a point cloud can be represented by the coefficients of a quadric surface when the boundary conditions are known. Thus, a set of quadric surface coefficients and their associated boundary conditions are stored as the compressed point cloud and used for decompression. An added advantage of the proposed technique is its flexibility to decompress the cloud as either a dense or a coarse cloud. We compared our technique with state-of-the-art 3D lossless and lossy compression techniques on a number of standard, publicly available datasets of varying structural complexity.
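The quadric idea above can be illustrated with a least-squares fit: a patch z = f(x, y) is summarised by six coefficients plus its x/y bounds, and decompression re-evaluates the surface at any density (dense or coarse). This is a sketch of the general technique under assumed names, not the paper's code.

```python
import numpy as np

def fit_quadric(pts):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f."""
    x, y, z = pts.T
    A = np.c_[x**2, x * y, y**2, x, y, np.ones_like(x)]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def resample(coef, bounds, n):
    """'Decompress' by re-evaluating the quadric on an n-by-n grid."""
    (x0, x1), (y0, y1) = bounds
    gx, gy = np.meshgrid(np.linspace(x0, x1, n), np.linspace(y0, y1, n))
    x, y = gx.ravel(), gy.ravel()
    z = np.c_[x**2, x * y, y**2, x, y, np.ones_like(x)] @ coef
    return np.c_[x, y, z]

rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.5 * xy[:, 0]**2 - 0.3 * xy[:, 0] * xy[:, 1] + 0.1   # a known quadric patch
patch = np.c_[xy, z]

coef = fit_quadric(patch)            # 6 numbers now stand in for 500 points
dense = resample(coef, ((-1, 1), (-1, 1)), 50)
coarse = resample(coef, ((-1, 1), (-1, 1)), 10)
print(coef.shape, dense.shape, coarse.shape)
```

The dense/coarse choice at decompression time mirrors the flexibility the abstract claims.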
5

Yu, Jiawen, Jin Wang, Longhua Sun, Mu-En Wu, and Qing Zhu. "Point Cloud Geometry Compression Based on Multi-Layer Residual Structure." Entropy 24, no. 11 (November 17, 2022): 1677. http://dx.doi.org/10.3390/e24111677.

Abstract:
Point cloud data are extensively used in applications such as autonomous driving and augmented reality, since they can provide detailed and realistic depictions of 3D scenes or objects. Meanwhile, 3D point clouds generally occupy a large amount of storage space, which is a heavy burden for efficient communication. However, it is difficult to efficiently compress such sparse, disordered, non-uniform, and high-dimensional data. This work therefore proposes a novel deep-learning framework for point cloud geometry compression based on an autoencoder architecture. Specifically, a multi-layer residual module is designed on a sparse-convolution-based autoencoder that progressively down-samples the input point clouds and reconstructs them in a hierarchical way. It effectively constrains the accuracy of the sampling process at the encoder side, preserving the feature information while significantly decreasing the data volume. Compared with the state-of-the-art geometry-based point cloud compression (G-PCC) schemes, our approach obtains a 70–90% BD-Rate gain on an object point cloud dataset and achieves better point cloud reconstruction quality. Additionally, compared with the state-of-the-art PCGCv2, we achieve an average BD-Rate gain of about 10%.
6

Quach, Maurice, Aladine Chetouani, Giuseppe Valenzise, and Frederic Dufaux. "A deep perceptual metric for 3D point clouds." Electronic Imaging 2021, no. 9 (January 18, 2021): 257–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-257.

Abstract:
Point clouds are essential for storage and transmission of 3D content. As they can entail significant volumes of data, point cloud compression is crucial for practical usage. Recently, point cloud geometry compression approaches based on deep neural networks have been explored. In this paper, we evaluate the ability to predict perceptual quality of typical voxel-based loss functions employed to train these networks. We find that the commonly used focal loss and weighted binary cross entropy are poorly correlated with human perception. We thus propose a perceptual loss function for 3D point clouds which outperforms existing loss functions on the ICIP2020 subjective dataset. In addition, we propose a novel truncated distance field voxel grid representation and find that it leads to sparser latent spaces and loss functions that are more correlated with perceived visual quality compared to a binary representation. The source code is available at https://github.com/mauriceqch/2021_pc_perceptual_loss.
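The truncated distance field (TDF) voxel grid mentioned above can be sketched directly: each voxel stores the distance from its centre to the nearest point, clipped at a truncation radius, instead of a binary occupancy bit. A minimal brute-force illustration (resolution and truncation values are assumptions, not the paper's settings):

```python
import numpy as np

def tdf_grid(points, res=16, trunc=0.2):
    """Truncated distance field over the unit cube: distance to nearest point, clipped."""
    t = (np.arange(res) + 0.5) / res            # voxel centre coordinates
    cx, cy, cz = np.meshgrid(t, t, t, indexing="ij")
    centres = np.stack([cx, cy, cz], axis=-1).reshape(-1, 3)
    # Distance from every voxel centre to its nearest input point (brute force).
    d = np.sqrt(((centres[:, None, :] - points[None, :, :]) ** 2).sum(-1)).min(1)
    return np.minimum(d, trunc).reshape(res, res, res)

rng = np.random.default_rng(3)
pts = rng.uniform(size=(100, 3))
grid = tdf_grid(pts)
print(grid.shape)
```

Unlike a binary grid, voxels near the surface carry graded values, which is what makes the representation smoother for a network to learn from.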
7

Lee, Mun-yong, Sang-ha Lee, Kye-dong Jung, Seung-hyun Lee, and Soon-chul Kwon. "A Novel Preprocessing Method for Dynamic Point-Cloud Compression." Applied Sciences 11, no. 13 (June 26, 2021): 5941. http://dx.doi.org/10.3390/app11135941.

Abstract:
Computer-based data processing capabilities have evolved to handle large volumes of information. As a result, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially, leading to problems with recording and transmission. In this study, we propose a method for efficiently managing and compressing animation information stored in 3D point-cloud sequences. A compressed point cloud is created by reconfiguring the points based on their voxels. Compared with the original point cloud, noise caused by errors is removed, and a preprocessing procedure that achieves high performance in a redundancy-processing algorithm is proposed. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlap data are extracted and removed, further reducing the file size.
8

Luo, Guoliang, Bingqin He, Yanbo Xiong, Luqi Wang, Hui Wang, Zhiliang Zhu, and Xiangren Shi. "An Optimized Convolutional Neural Network for the 3D Point-Cloud Compression." Sensors 23, no. 4 (February 16, 2023): 2250. http://dx.doi.org/10.3390/s23042250.

Abstract:
Due to the tremendous volume occupied by 3D point-cloud models, achieving a balance between a high compression ratio, a low distortion rate, and computing cost is a significant issue in point-cloud compression for virtual reality (VR). Convolutional neural networks have been used in numerous point-cloud compression approaches during the past few years in an effort to advance the state of the art. In this work, we evaluate the effects of different network parameters, including depth, stride, and activation function, on point-cloud compression, resulting in an optimized convolutional neural network for compression. We first analyze earlier research on point-cloud compression based on convolutional neural networks before designing our own network, and we then tune the model parameters using the experimental data to further enhance the compression. Based on the experimental results, we find that a network with four layers and a stride of 2, using the Sigmoid activation function, outperforms the default configuration by 208% in terms of the compression-distortion rate. The experimental results show that our findings are effective and general and contribute to research on point-cloud compression using convolutional neural networks.
9

Gu, Shuai, Junhui Hou, Huanqiang Zeng, and Hui Yuan. "3D Point Cloud Attribute Compression via Graph Prediction." IEEE Signal Processing Letters 27 (2020): 176–80. http://dx.doi.org/10.1109/lsp.2019.2963793.

10

Dybedal, Joacim, Atle Aalerud, and Geir Hovland. "Embedded Processing and Compression of 3D Sensor Data for Large Scale Industrial Environments." Sensors 19, no. 3 (February 2, 2019): 636. http://dx.doi.org/10.3390/s19030636.

Abstract:
This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data which are processed on a local embedded computer and compressed using an octree-based scheme. The compressed data is transferred to a central node where the individual point clouds from several nodes are decompressed and filtered based on a novel method for generating intensity values for sensors which do not natively produce such a value. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. The main advantage of processing point cloud data locally on the nodes is scalability. The proposed solution could, with a dedicated Gigabit Ethernet local network, be scaled up to approximately 440 sensor nodes, only limited by the processing power of the central node that is receiving the compressed data from the local nodes. A compression ratio of 40.5 was obtained when compressing a point cloud stream from a single Microsoft Kinect V2 sensor using an octree resolution of 4 cm.
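The octree-based scheme above can be approximated in a few lines: quantise points to a fixed leaf size (4 cm, the resolution quoted in the abstract) and keep one occupied cell per group of points. This is an illustrative voxel-grid stand-in for a real octree codec, not the authors' implementation.

```python
import numpy as np

def voxel_compress(points, leaf=0.04):
    """Return the unique occupied voxel indices — the 'compressed' cloud."""
    return np.unique(np.floor(points / leaf).astype(np.int64), axis=0)

def voxel_decompress(cells, leaf=0.04):
    """Reconstruct one representative point per occupied cell (its centre)."""
    return (cells + 0.5) * leaf

rng = np.random.default_rng(4)
cloud = rng.uniform(0, 1, size=(50_000, 3))   # synthetic 1 m^3 scene
cells = voxel_compress(cloud)
recon = voxel_decompress(cells)
print(len(cloud), len(cells))                 # many points collapse into far fewer cells
```

A real octree additionally encodes the occupied cells hierarchically (one occupancy byte per node), which is where most of the quoted 40.5:1 ratio comes from; the sketch only shows the lossy quantisation step.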
11

Li, Yan, Yuyong Ma, Ye Tao, and Zhengmeng Hou. "Innovative Methodology of On-Line Point Cloud Data Compression for Free-Form Surface Scanning Measurement." Applied Sciences 8, no. 12 (December 10, 2018): 2556. http://dx.doi.org/10.3390/app8122556.

Abstract:
In order to obtain a highly accurate profile of a measured three-dimensional (3D) free-form surface, a scanning measuring device has to produce extremely dense point cloud data at a high sampling rate. Bottlenecks arise from inefficiencies in manipulating, storing, and transferring these data, and parametric modelling from them is quite time-consuming. To effectively compress the dense point cloud data obtained from a 3D free-form surface during real-time scanning measurement, this paper presents an innovative on-line point cloud data compression algorithm for 3D free-form surface scanning measurement. It identifies and eliminates data redundancy caused by geometric feature similarity between adjacent scanning layers. First, the new algorithm adopts the bi-Akima method to compress the initial point cloud data; next, the data redundancy remaining in the compressed point cloud is identified and eliminated, yielding the final compressed point cloud data. Finally, experiments demonstrate that the proposed algorithm obtains high-quality compression results with higher data compression ratios than other existing on-line point cloud data compression/reduction methods.
12

Thanou, Dorina, Philip A. Chou, and Pascal Frossard. "Graph-Based Compression of Dynamic 3D Point Cloud Sequences." IEEE Transactions on Image Processing 25, no. 4 (April 2016): 1765–78. http://dx.doi.org/10.1109/tip.2016.2529506.

13

Park, Juntaek, Jongseok Lee, Seanae Park, and Donggyu Sim. "Projection-based Occupancy Map Coding for 3D Point Cloud Compression." IEIE Transactions on Smart Processing & Computing 9, no. 4 (August 31, 2020): 293–97. http://dx.doi.org/10.5573/ieiespc.2020.9.4.293.

14

Cheng-qiu, Dai, Chen Min, and Fang Xiao-yong. "Compression Algorithm of 3D Point Cloud Data Based on Octree." Open Automation and Control Systems Journal 7, no. 1 (August 31, 2015): 879–83. http://dx.doi.org/10.2174/1874444301507010879.

15

Gu, Shuai, Junhui Hou, Huanqiang Zeng, Hui Yuan, and Kai-Kuang Ma. "3D Point Cloud Attribute Compression Using Geometry-Guided Sparse Representation." IEEE Transactions on Image Processing 29 (2020): 796–808. http://dx.doi.org/10.1109/tip.2019.2936738.

16

Huo, Xiao, Saiping Zhang, and Fuzheng Yang. "Variable Rate Point Cloud Attribute Compression with Non-Local Attention Optimization." Applied Sciences 12, no. 16 (August 16, 2022): 8179. http://dx.doi.org/10.3390/app12168179.

Abstract:
Point clouds are widely used as representations of 3D objects and scenes in a number of applications, including virtual and mixed reality, autonomous driving, and the reconstruction of antiques. To reduce the cost of transmitting and storing such data, this paper proposes an end-to-end learning-based point cloud attribute compression (PCAC) approach. The proposed network adopts a sparse-convolution-based variational autoencoder (VAE) structure to compress the color attribute of point clouds. Considering the difficulty of stacked convolution operations in capturing long-range dependencies, an attention mechanism is incorporated: a non-local attention module is developed to capture local and global correlations in both the spatial and channel dimensions. For practical application, an additional modulation network achieves variable-rate compression in a single network, avoiding the memory cost of storing multiple networks for multiple bitrates. Our proposed method achieves state-of-the-art compression performance compared with other existing learning-based methods and further reduces the gap with the latest MPEG G-PCC reference software, TMC13 version 14.
17

Li, Li, Zhu Li, Vladyslav Zakharchenko, Jianle Chen, and Houqiang Li. "Advanced 3D Motion Prediction for Video-Based Dynamic Point Cloud Compression." IEEE Transactions on Image Processing 29 (2020): 289–302. http://dx.doi.org/10.1109/tip.2019.2931621.

18

Yuan, Hui, Dexiang Zhang, Weiwei Wang, and Yujun Li. "A Sampling-based 3D Point Cloud Compression Algorithm for Immersive Communication." Mobile Networks and Applications 25, no. 5 (June 27, 2020): 1863–72. http://dx.doi.org/10.1007/s11036-020-01570-y.

19

Almac, Umut, Isıl Polat Pekmezci, and Metin Ahunbay. "Numerical Analysis of Historic Structural Elements Using 3D Point Cloud Data." Open Construction and Building Technology Journal 10, no. 1 (May 31, 2016): 233–45. http://dx.doi.org/10.2174/1874836801610010233.

Abstract:
The 3D laser scanner has become a common instrument in numerous field applications, such as structural health monitoring, assessment and documentation of structural damage, volume and dimension control of excavations, geometrical recording of the built environment, and construction progress monitoring. It enables the capture of millions of points from the surface of objects with high accuracy in a very short time. These points can be used to extrapolate the shape of the elements, and the collected data can be developed into three-dimensional digital models for structural FEM analysis. This paper presents the structural evaluation of a historic building through FE models built with the help of a 3D point cloud. The main focus of the study is the stone columns of a historic cistern. These deteriorated load-bearing elements show severe non-uniform erosion, which leads to significant stress concentrations. Here the 3D geometric data become crucial in revealing the stress distribution of severely eroded columns. According to the results of static analysis using the real geometry, the maximum compressive stress on the columns increased remarkably in comparison with the geometrically idealized models. These values approach the compressive strength of the material, which was obtained from point load test results. Moreover, the stress distribution draws attention to the section between the columns and their capitals: according to the detailed 3D documentation, there is a reduced contact surface between columns and capitals to transfer loads.
20

Lai, Hui Fen, and Xiu Min Liu. "The Reverse Modeling Analysis and Design on Compression Molding Bodies." Applied Mechanics and Materials 602-605 (August 2014): 155–58. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.155.

Abstract:
To address the difficulty of accurately measuring curved surfaces, a reverse-engineering modeling approach based on the forming mechanism is studied. Using non-contact laser-scanner measurement, point cloud data of compression-molded bodies are obtained and imported into the three-dimensional software CATIA to generate a 3D model according to the molding-process point cloud and the compression molding mechanism; complete engineering drawings are then generated. Compared with traditional design methods, this approach is mainly used for the reconstruction and design of 3D parts whose surface shapes are difficult to express accurately or are unknown.
21

Cura, R., J. Perret, and N. Paparoditis. "POINT CLOUD SERVER (PCS) : POINT CLOUDS IN-BASE MANAGEMENT AND PROCESSING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W5 (August 20, 2015): 531–39. http://dx.doi.org/10.5194/isprsannals-ii-3-w5-531-2015.

Abstract:
In addition to traditional Geographic Information System (GIS) data such as images and vectors, point cloud data have become more available. They are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a complete and efficient point cloud management system based on a database server that works on groups of points rather than individual points. The system is specifically designed to meet the needs of point cloud users: fast loading, compressed storage, powerful filtering, easy data access and export, and integrated processing. Moreover, the system fully integrates metadata (such as sensor position) and can jointly use point clouds with images, vectors, and other point clouds. It also offers in-base processing for easy prototyping and parallel processing, and it scales well. Lastly, the system is built on open-source technologies, so it can be easily extended and customised. We test the system with several billion points of point cloud data from lidar (aerial and terrestrial) and stereo vision. We demonstrate a loading speed of roughly 400 million points/h, user-transparent compression ratios of roughly 2:1 to 4:1, filtering in the 50 ms range, and output of about a million points/s, along with classical processing such as object detection.
22

Pacheco-Gutierrez, Salvador, Hanlin Niu, Ipek Caliskanelli, and Robert Skilton. "A Multiple Level-of-Detail 3D Data Transmission Approach for Low-Latency Remote Visualisation in Teleoperation Tasks." Robotics 10, no. 3 (July 14, 2021): 89. http://dx.doi.org/10.3390/robotics10030089.

Abstract:
In robotic teleoperation, knowledge of the state of the remote environment in real time is paramount. Advances in highly accurate 3D cameras able to provide high-quality point clouds appear to be a feasible solution for generating live, up-to-date virtual environments. Unfortunately, the exceptional accuracy and high density of these data are a burden for communications, requiring a large bandwidth and affecting setups where the local and remote systems are geographically distant. This paper presents a multiple level-of-detail (LoD) compression strategy for 3D data based on tree-like codification structures, capable of compressing a single data frame at multiple resolutions using dynamically configured parameters. The level of compression (resolution) of objects is prioritised based on (i) placement in the scene and (ii) the type of object. For the former, classical point cloud fitting and segmentation techniques are implemented; for the latter, user-defined prioritisation is considered. The results are compared with a single-LoD (whole-scene) compression technique previously proposed by the authors and show a considerable improvement in transmitted data size and update frame rate while maintaining low distortion after decompression.
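The per-object LoD strategy above can be sketched by giving each segmented object its own voxel resolution according to a user-defined priority. The object names, priorities, and leaf sizes below are illustrative assumptions, and a simple voxel grid stands in for the tree-like codification structure.

```python
import numpy as np

def downsample(points, leaf):
    """Keep one representative point (cell centre) per occupied voxel of size `leaf`."""
    cells = np.unique(np.floor(points / leaf).astype(np.int64), axis=0)
    return (cells + 0.5) * leaf

rng = np.random.default_rng(6)
scene = {
    "manipulated_object": rng.uniform(size=(20_000, 3)),  # high priority: fine detail
    "background_wall":    rng.uniform(size=(20_000, 3)),  # low priority: coarse detail
}
leaf_for = {"manipulated_object": 0.01, "background_wall": 0.08}  # per-object LoD

frame = {name: downsample(pts, leaf_for[name]) for name, pts in scene.items()}
sizes = {name: len(p) for name, p in frame.items()}
print(sizes)  # the high-priority object retains far more points than the wall
```

Sending such a frame transmits most of the bit budget where the operator needs detail, which is the core of the bandwidth saving the abstract reports.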
23

Imdad, Ulfat, Muhammad Asif, Mirza Ahmad, Osama Sohaib, Muhammad Hanif, and Muhammad Chaudary. "Three Dimensional Point Cloud Compression and Decompression Using Polynomials of Degree One." Symmetry 11, no. 2 (February 12, 2019): 209. http://dx.doi.org/10.3390/sym11020209.

Abstract:
The availability of cheap depth-range sensors has increased the use of enormous amounts of 3D information in hand-held and head-mounted devices. This has directed a large research community to optimize point cloud storage requirements while preserving the original structure of the data at an acceptable attenuation rate. Point cloud compression algorithms have been developed to occupy less storage space by focusing on features such as color, texture, and geometric information. In this work, we propose a novel lossy point cloud compression and decompression algorithm that optimizes storage space requirements by preserving the geometric information of the scene. Segmentation is performed using a region-growing segmentation algorithm. The points inside the boundaries of the surfaces are discarded; they can be recovered through polynomial equations of degree one in the decompression phase. We compared the proposed technique with existing techniques using publicly available datasets of indoor architectural scenes. The results show that the proposed technique outperforms the others in compression rate and RMSE within an acceptable time scale.
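The degree-one polynomials above are planes: a planar segment z = a·x + b·y + c is stored as three coefficients (plus its boundary), and the discarded interior points are re-evaluated from the plane at decompression. A minimal sketch under those assumptions, not the authors' code:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of the degree-one polynomial z = a*x + b*y + c."""
    x, y, z = pts.T
    A = np.c_[x, y, np.ones_like(x)]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef  # (a, b, c)

def recover(coef, xy):
    """Decompression: re-evaluate discarded interior points from the plane."""
    a, b, c = coef
    return np.c_[xy, a * xy[:, 0] + b * xy[:, 1] + c]

rng = np.random.default_rng(5)
xy = rng.uniform(size=(1000, 2))
wall = np.c_[xy, 0.2 * xy[:, 0] - 0.7 * xy[:, 1] + 1.0]   # a planar segment

coef = fit_plane(wall)                                    # 3 numbers replace 1000 interior points
rmse = np.sqrt(((recover(coef, xy)[:, 2] - wall[:, 2]) ** 2).mean())
print(np.round(coef, 3))
```

In the full pipeline, region growing first groups points into such near-planar segments; only segment boundaries and coefficients are kept.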
24

Wang, Zun-Ran, Chen-Guang Yang, and Shi-Lu Dai. "A Fast Compression Framework Based on 3D Point Cloud Data for Telepresence." International Journal of Automation and Computing 17, no. 6 (July 31, 2020): 855–66. http://dx.doi.org/10.1007/s11633-020-1240-5.

25

Kim, Junsik, Jiheon Im, Sungryeul Rhyu, and Kyuheon Kim. "3D Motion Estimation and Compensation Method for Video-Based Point Cloud Compression." IEEE Access 8 (2020): 83538–47. http://dx.doi.org/10.1109/access.2020.2991478.

26

Seo, Hyungjoon. "3D Roughness Measurement of Failure Surface in CFA Pile Samples Using Three-Dimensional Laser Scanning." Applied Sciences 11, no. 6 (March 18, 2021): 2713. http://dx.doi.org/10.3390/app11062713.

Abstract:
The bearing capacity of a CFA (Continuous Flight Auger) pile cannot reach the design capacity if proper construction is not performed, owing to soil collapse at the bottom of the pile. In this paper, three pile samples were prepared to simulate the bottom of a CFA pile: a grouting sample, a mixture of grouting and gravel, and a mixture of grouting and sand. The failure surfaces of each sample, obtained by uniaxial compression tests, were represented as three-dimensional point clouds by 3D laser scanning, so high-resolution point clouds were obtained for the failure surfaces of the three samples. The point cloud of each failure surface was analyzed with a plane-to-points histogram (P2PH) method and a kernel-based roughness detection method proposed in this paper. These methods can analyze both the global and the local roughness of the three pile samples in three dimensions. The roughness features of the grouting sample, the mixed sample of grouting and sand, and the mixed sample of grouting and gravel can be distinguished by the sections of the histogram in which the points of each sample are predominantly distributed.
27

Yang, Xiaoxue. "Key Technologies of Seam Fusion for Multi-view Image Texture Mapping Based on 3D Point Cloud Data." Journal of Physics: Conference Series 2066, no. 1 (November 1, 2021): 012042. http://dx.doi.org/10.1088/1742-6596/2066/1/012042.

Abstract:
With the rapid development of computer technology and measurement technology, three-dimensional point cloud data, as an important form of data in computer graphics, are widely used in reverse engineering, surveying, robotics, virtual reality, stereo 3D imaging, indoor scene reconstruction, and many other fields. This paper studies the key technologies of seam fusion for multi-view image texture mapping based on 3D point cloud data and proposes a joint coding and compression scheme for multi-view image textures to replace the previous independent coding scheme, which applies MVC-standard compression to each multi-view image texture. Experimental studies show that multi-view texture-depth joint coding improves performance to varying degrees compared with the other two current 3D MVD data coding schemes. Especially for the Ballet and Dancer sequences, which have better depth video quality, the benefit of JMVDC is very clear: compared with the KS_IBP structure, the gain can reach as high as 1.34 dB at the same bit rate.
28

Tu, Chenxi, Eijiro Takeuchi, Alexander Carballo, and Kazuya Takeda. "Real-Time Streaming Point Cloud Compression for 3D LiDAR Sensor Using U-Net." IEEE Access 7 (2019): 113616–25. http://dx.doi.org/10.1109/access.2019.2935253.

29

Navarrete, Javier, Diego Viejo, and Miguel Cazorla. "Compression and registration of 3D point clouds using GMMs." Pattern Recognition Letters 110 (July 2018): 8–15. http://dx.doi.org/10.1016/j.patrec.2018.03.017.

30

Fadzli, Fazliaty Edora, Ajune Wanis Ismail, Shafina Abd Karim Ishigaki, Muhammad Nur Affendy Nor’a, and Mohamad Yahya Fekri Aladin. "Real-Time 3D Reconstruction Method for Holographic Telepresence." Applied Sciences 12, no. 8 (April 15, 2022): 4009. http://dx.doi.org/10.3390/app12084009.

Abstract:
This paper introduces a real-time 3D reconstruction of a human captured using a depth sensor and integrates it with a holographic telepresence application. Holographic projection is widely recognized as one of the most promising 3D display technologies and is expected to become more widely available in the near future. The technology may be deployed in various ways, including holographic prisms and the Z-Hologram, which this research uses to demonstrate initial results by displaying the reconstructed 3D representation of the user. The realization of a stable and inexpensive 3D data acquisition system is a problem that has yet to be solved; when multiple sensors are involved, the data must be compressed and optimized so that they can be sent to a server for telepresence. The paper therefore presents the processes in real-time 3D reconstruction, which consist of data acquisition, background removal, point cloud extraction, and surface generation, applying a marching cubes algorithm to form an isosurface from the set of points in the point cloud, after which texture mapping is applied to the generated isosurface. The compression results are presented in this paper, and the results of the integration process after sending the data over the network are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
31

Jae-Kyun Ahn, Kyu-Yul Lee, Jae-Young Sim, and Chang-Su Kim. "Large-Scale 3D Point Cloud Compression Using Adaptive Radial Distance Prediction in Hybrid Coordinate Domains." IEEE Journal of Selected Topics in Signal Processing 9, no. 3 (April 2015): 422–34. http://dx.doi.org/10.1109/jstsp.2014.2370752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Gordon, M., B. Borgmann, J. Gehrung, M. Hebel, and M. Arens. "AD HOC MODEL GENERATION USING MULTISCALE LIDAR DATA FROM A GEOSPATIAL DATABASE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W3 (August 20, 2015): 535–41. http://dx.doi.org/10.5194/isprsarchives-xl-3-w3-535-2015.

Full text
Abstract:
Due to the spread of economically priced laser scanning technology, ever-growing amounts of data need to be handled, especially in the field of topographic surveying and mapping. Depending on the requirements of the specific application, airborne, mobile, or terrestrial laser scanners are commonly used. Since visualizing this flood of data is not feasible with classical approaches like raw point cloud rendering, real-time decision making requires sophisticated solutions. In addition, the efficient storage and recovery of 3D measurements is a challenging task. We therefore propose an approach for the intelligent storage of 3D point clouds using a spatial database. For a given region of interest, the database is queried for the available data. All resulting point clouds are fused in a model generation process, exploiting the fact that low-density airborne measurements can supplement higher-density mobile or terrestrial laser scans. The octree-based modeling approach divides and subdivides the world into cells of varying size and fits one plane per cell once a specified number of points is present. The resulting model exceeds the completeness and precision of every single data source and enables real-time visualization. This is especially supported by data compression ratios of about 90%.
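To make the octree-based modeling idea in this abstract concrete, the following minimal Python sketch subdivides a cubic cell while it holds too many points and fits one plane per leaf cell via SVD. The thresholds (`max_points`, `min_size`) and the exact plane-fitting criteria are illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through points: returns (centroid, unit normal)."""
    centroid = pts.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def build_octree(pts, origin, size, max_points=100, min_size=0.5):
    """Subdivide a cubic cell until it holds few enough points,
    then fit one plane per leaf cell (one plane per cell, as in the abstract)."""
    if len(pts) == 0:
        return None
    if len(pts) <= max_points or size <= min_size:
        plane = fit_plane(pts) if len(pts) >= 3 else None
        return {"origin": origin, "size": size, "plane": plane}
    half = size / 2.0
    children = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                o = origin + half * np.array([dx, dy, dz])
                mask = np.all((pts >= o) & (pts < o + half), axis=1)
                child = build_octree(pts[mask], o, half, max_points, min_size)
                if child is not None:
                    children.append(child)
    return {"origin": origin, "size": size, "children": children}

def leaves(node):
    """Collect all leaf cells of the octree."""
    if "children" in node:
        return [leaf for c in node["children"] for leaf in leaves(c)]
    return [node]

# Demo: a synthetic planar "ground" patch at z = 1.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 8, 500),
                       rng.uniform(0, 8, 500),
                       np.full(500, 1.0)])
tree = build_octree(pts, np.array([0.0, 0.0, 0.0]), 8.0)
```

On planar input like the demo data, every fitted leaf plane recovers a normal close to the z-axis, which is the property that lets such models compress dense scans into a few planes per cell.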
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Jian, Xinyu Guo, Hongduo Wang, Pin Jiang, Tengyun Chen, and Zemin Sun. "Pillar-Based Cooperative Perception from Point Clouds for 6G-Enabled Cooperative Autonomous Vehicles." Wireless Communications and Mobile Computing 2022 (July 25, 2022): 1–13. http://dx.doi.org/10.1155/2022/3646272.

Full text
Abstract:
3D object detection is a significant aspect of the perception module in autonomous driving; however, with current technology, data sharing between vehicles and cloud servers for cooperative 3D object detection under strict latency requirements is limited by the communication bandwidth. Sixth-generation (6G) networks have accelerated the transmission of sensor data significantly, with extremely low latency and high-speed data transmission. However, which sensor data format to transmit, and when, remains challenging. To address these issues, this study proposes a cooperative perception framework that combines a pillar-based encoder with Octomap-based compression at the edge for connected autonomous vehicles, reducing the number of missed detections in blind spots and at greater distances. The approach satisfies the constraints on perception accuracy and provides drivers or autonomous vehicles with sufficient reaction time by applying fixed encoders to learn a representation of point clouds (LiDAR sensor data). Extensive experimental results show that the proposed approach outperforms previous cooperative perception schemes while running at 30 Hz and improves the accuracy of object bounding boxes at greater distances (beyond 12 m). Furthermore, the approach achieves a lower total delay for processing the fused data and transmitting the cooperative perception message. To the best of our knowledge, this study is the first to introduce a pillar-based encoder and Octomap-based compression framework for cooperative perception between vehicles and edges in connected autonomous driving.
APA, Harvard, Vancouver, ISO, and other styles
34

Maksymova, Ievgeniia, Christian Steger, and Norbert Druml. "Review of LiDAR Sensor Data Acquisition and Compression for Automotive Applications." Proceedings 2, no. 13 (December 6, 2018): 852. http://dx.doi.org/10.3390/proceedings2130852.

Full text
Abstract:
Due to the specific dynamics of the operating environment and required safety regulations, the amount of acquired data of an automotive LiDAR sensor that has to be processed reaches several Gbit/s. Data compression is therefore much needed to enable future multi-sensor automated vehicles. Numerous techniques have been developed to compress LiDAR raw data; however, they primarily target compression of the 3D point cloud, while the way data is captured and transferred from the sensor to an electronic computing unit (ECU) is left out. The purpose of this paper is to discuss and evaluate how various low-level compression algorithms could be used in an automotive LiDAR sensor to optimize on-chip storage capacity and link bandwidth. We also discuss relevant parameters that affect the amount of data collected per second and the associated issues. After analyzing compression approaches and identifying their limitations, we identify several promising directions for future research.
APA, Harvard, Vancouver, ISO, and other styles
35

Kravets, Vladislav, Bahram Javidi, and Adrian Stern. "Compressive imaging for thwarting adversarial attacks on 3D point cloud classifiers." Optics Express 29, no. 26 (December 8, 2021): 42726. http://dx.doi.org/10.1364/oe.444840.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

de Queiroz, Ricardo L., and Philip A. Chou. "Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform." IEEE Transactions on Image Processing 25, no. 8 (August 2016): 3947–56. http://dx.doi.org/10.1109/tip.2016.2575005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zhu, Feng, Jieyu Zhao, and Zhengyi Cai. "A Contrastive Learning Method for the Visual Representation of 3D Point Clouds." Algorithms 15, no. 3 (March 8, 2022): 89. http://dx.doi.org/10.3390/a15030089.

Full text
Abstract:
At present, the unsupervised visual representation learning of the point cloud model is mainly based on generative methods, but the generative methods pay too much attention to the details of each point, thus ignoring the learning of semantic information. Therefore, this paper proposes a discriminative method for the contrastive learning of three-dimensional point cloud visual representations, which can effectively learn the visual representation of point cloud models. The self-attention point cloud capsule network is designed as the backbone network, which can effectively extract the features of point cloud data. By compressing the digital capsule layer, the class dependence of features is eliminated, and the generalization ability of the model and the ability of feature queues to store features are improved. Aiming at the equivariance of the capsule network, the Jaccard loss function is constructed, which is conducive to the network distinguishing the characteristics of positive and negative samples, thereby improving the performance of the contrastive learning. The model is pre-trained on the ShapeNetCore data set, and the pre-trained model is used for classification and segmentation tasks. The classification accuracy on the ModelNet40 data set is 0.1% higher than that of the best unsupervised method, PointCapsNet, and when only 10% of the label data is used, the classification accuracy exceeds 80%. The mIoU of part segmentation on the ShapeNet data set is 1.2% higher than the best comparison method, MulUnsupervised. The experimental results of classification and segmentation show that the proposed method has good performance in accuracy. The alignment and uniformity of features are better than the generative method of PointCapsNet, which proves that this method can learn the visual representation of the three-dimensional point cloud model more effectively.
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Renjie, Ting Yun, Lin Cao, and Yunfei Liu. "Compression and Recovery of 3D Broad-Leaved Tree Point Clouds Based on Compressed Sensing." Forests 11, no. 3 (February 26, 2020): 257. http://dx.doi.org/10.3390/f11030257.

Full text
Abstract:
The terrestrial laser scanner (TLS) has been widely used in forest inventories. However, with the increasing precision of TLS, storing and transmitting tree point clouds become more challenging. In this paper, a novel compressed sensing (CS) scheme for broad-leaved tree point clouds is proposed by analyzing and comparing different sparse bases, observation matrices, and reconstruction algorithms. Our scheme starts by eliminating outliers and simplifying the point clouds with statistical filtering and voxel filtering. It then applies a Haar sparse basis to sparsify the coordinate data, based on the characteristics of broad-leaved tree point clouds. An observation procedure down-samples the point clouds with a partial Fourier matrix. The regularized orthogonal matching pursuit (ROMP) algorithm finally reconstructs the original point clouds. The experimental results illustrate that the proposed scheme preserves the morphological attributes of a broad-leaved tree within a relative error of 0.0010%–3.3937% and extends robustly to the plot level with a mean square error (MSE) of 0.0063–0.2245.
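The measurement-and-reconstruction loop of such a CS scheme can be sketched in a 1D toy setting. The sketch below uses a random Gaussian measurement matrix and plain orthogonal matching pursuit as a simplified stand-in for the paper's partial Fourier observation and ROMP reconstruction; matrix choice, sparsity level, and sizes are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit (a stand-in for ROMP):
    greedily pick the column of A most correlated with the residual,
    then re-solve least squares over the chosen support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Demo: a k-sparse signal observed through m << n random measurements.
rng = np.random.default_rng(1)
n, m, k = 128, 48, 5                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # measurement (observation) matrix
y = A @ x_true                                # compressed measurements
x_hat = omp(A, y, k)                          # sparse reconstruction
```

In the paper's setting, `x_true` would be Haar-basis coefficients of the filtered tree coordinates and `A` a partial Fourier matrix; the greedy recovery principle is the same.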
APA, Harvard, Vancouver, ISO, and other styles
39

Lu, Chunting, Li Wang, Zheng Yang, Xingsheng Liu, Xiangwei Zhang, and Longhai Wu. "The Application of Laser-Scanning-Based BIM Technology in Large Steel Structure Engineering for Environmental Protection." Mathematical Problems in Engineering 2022 (September 20, 2022): 1–9. http://dx.doi.org/10.1155/2022/4665141.

Full text
Abstract:
The rise of heteromorphic architecture brings great challenges to engineering design, blanking, construction, completion testing, and maintenance. 3D laser scanning measurement technology can quickly achieve a "copy" measurement of the target and is especially suitable for digitizing complex structures and accurately constructing true 3D models. In this paper, we take the steel structure inspection and curtain wall blanking in the construction of the Grand Theater in the Shangqiu Cultural and Art Center as a case study. First, laser scanning technology is introduced to acquire data on the steel structure objects and capture the details of the structure. Then, a high-quality point cloud is obtained through multi-station splicing, filtering, denoising and smoothing, compression, and simplification. Field comparison shows that the point cloud accuracy reached an acceptable level and that the difference between the designed building model and the model combining laser technology and BIM technology is distributed within ±(1–5) mm with a mean square error of ±3.5 mm; the deviation of the numerical simulation is small, which meets the building requirements. This method can therefore effectively inspect steel structures, and the BIM model can be updated according to the measured point cloud data, providing an accurate data reference for curtain wall material and installation.
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Xudong, Chong Liu, Jingmin Li, Mehdi Baghdadi, and Yuanchang Liu. "A Multi-Sensor Environmental Perception System for an Automatic Electric Shovel Platform." Sensors 21, no. 13 (June 25, 2021): 4355. http://dx.doi.org/10.3390/s21134355.

Full text
Abstract:
Electric shovels have been widely used in heavy industrial applications such as mineral extraction. However, the performance of an electric shovel is often affected by the complicated working environment and the proficiency of the operator, which affects safety and efficiency. To improve extraction performance, it is particularly important to study an intelligent electric shovel with autonomous operation technology. An electric shovel experimental platform for intelligent technology research and testing is proposed in this paper. The core of the designed platform is an intelligent environmental sensing/perception system employing multiple sensors such as RTK (real-time kinematic), IMU (inertial measurement unit), and LiDAR (light detection and ranging). To accommodate the multi-directional loading characteristics of electric shovels, two 2D LiDARs are used, and their data are synchronized and fused to construct a 3D point cloud. The synchronization is achieved with the assistance of RTK and IMU, which provide pose information of the shovel. In addition, to down-sample the LiDAR point clouds and facilitate more efficient data analysis, a new point cloud data processing algorithm is proposed, including a bilateral-filtering-based noise filter and a grid-based data compression method. The designed platform, together with its sensing system, was tested in different outdoor environmental conditions. Compared with the original LiDAR point cloud, the proposed environment sensing/perception system not only preserves the characteristic points and effective edges of the measured objects, but also reduces the amount of point cloud data to process and improves system efficiency. Across a large number of experiments, the overall measurement error of the proposed system is within 50 mm, well within the requirements of the electric shovel application. The environment perception system for the automatic electric shovel platform has great research value and engineering significance for improving electric shovel operation.
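A grid-based compression step like the one mentioned in this abstract is commonly implemented as voxel-grid down-sampling, replacing all points in a cell by their centroid. The sketch below is a generic illustration of that idea, not the authors' exact method; the voxel size is an arbitrary example value.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Grid-based point cloud compression sketch: all points falling into
    the same voxel are replaced by their centroid (one point per occupied
    cell), shrinking the cloud while keeping its coarse geometry."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((len(counts), 3))
    np.add.at(centroids, inverse, points)   # sum points per voxel
    return centroids / counts[:, None]      # divide by occupancy -> centroid

# Demo: three points, two occupied 1 m voxels -> two output points.
points = np.array([[0.1, 0.1, 0.1],
                   [0.2, 0.2, 0.2],
                   [1.5, 1.5, 1.5]])
out = voxel_downsample(points, 1.0)
```

Note that a centroid-based grid filter is lossy by design; the paper additionally applies a bilateral filter beforehand to suppress noise without discarding edges.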
APA, Harvard, Vancouver, ISO, and other styles
41

Shinde, Rajat C., and Surya S. Durbha. "Deep Convolutional Compressed Sensing-Based Adaptive 3D Reconstruction of Sparse LiDAR Data: A Case Study for Forests." Remote Sensing 15, no. 5 (March 1, 2023): 1394. http://dx.doi.org/10.3390/rs15051394.

Full text
Abstract:
LiDAR point clouds are characterized by high geometric and radiometric resolution and are therefore of great use for large-scale forest analysis. Although the analysis of 3D geometries and shapes has improved at different resolutions, processing large-scale 3D LiDAR point clouds is difficult due to their enormous volume. From the perspective of using LiDAR point clouds for forests, the challenge lies in learning local and global features, as the number of points in a typical 3D LiDAR point cloud is in the range of millions. In this research, we present a novel end-to-end deep learning framework called ADCoSNet, capable of adaptively reconstructing 3D LiDAR point clouds from a few sparse measurements. ADCoSNet uses empirical mode decomposition (EMD), a data-driven signal processing approach, together with deep learning to decompose input signals into intrinsic mode functions (IMFs). These IMFs capture hierarchical implicit features in the form of decreasing spatial frequency. This research proposes using the last IMF (the least varying component), also known as the residual function, as a statistical prior for capturing local features, fused with the hierarchical convolutional features from a deep compressive sensing (CS) network. The central idea is that the residue approximately represents the overall forest structure, considering it is relatively homogeneous due to the presence of vegetation. ADCoSNet utilizes this last IMF to generate sparse representations for a set of CS measurement ratios. The research presents extensive experiments on reconstructing 3D LiDAR point clouds with high fidelity for various CS measurement ratios. Our approach achieves a maximum peak signal-to-noise ratio (PSNR) of 48.96 dB (approx. 8 dB better than reconstruction without data-dependent transforms) with a reconstruction root mean square error (RMSE) of 7.21. The proposed framework is envisaged to have high potential as an end-to-end learning framework for generating adaptive and sparse representations that capture geometrical features for the 3D reconstruction of forests.
APA, Harvard, Vancouver, ISO, and other styles
42

Alfio, Vincenzo Saverio, Domenica Costantino, and Massimiliano Pepe. "Influence of Image TIFF Format and JPEG Compression Level in the Accuracy of the 3D Model and Quality of the Orthophoto in UAV Photogrammetry." Journal of Imaging 6, no. 5 (May 11, 2020): 30. http://dx.doi.org/10.3390/jimaging6050030.

Full text
Abstract:
The aim of this study is to evaluate the degradation in accuracy and image quality associated with the TIFF format and different JPEG compression levels, compared to the raw images acquired by a UAV platform. Experiments were carried out using a DJI Mavic 2 Pro with a Hasselblad L1D-20c camera on three test sites. Post-processing of the images was performed using software based on structure-from-motion and multi-view stereo approaches. The results show a slight influence of image format and compression level on flat or nearly flat surfaces; for a complex 3D model, however, the choice of format becomes important. Across all tests, processing times also played a key role, especially in point cloud generation. The qualitative and quantitative analysis carried out on the different orthophotos highlighted a modest impact of using the TIFF format and a strong influence as the JPEG compression level increases.
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Yuan, Wei Guo, Shuanfeng Zhao, Zhengxiong Lu, and Zhizhong Xing. "Simplified Algorithm of Geometric Model Region Segmentation Using Neural Network." Mobile Information Systems 2022 (August 28, 2022): 1–11. http://dx.doi.org/10.1155/2022/4064512.

Full text
Abstract:
The simplification of three-dimensional (3D) models has always been a hot research topic. Researchers have simplified different parts of 3D point cloud data using both global and local information. To address the need to retain detailed features when simplifying 3D models, neural network (NN) technology is first analyzed and studied, and a simplified algorithm for region segmentation of geometric models based on a Graph Convolutional Neural Network (GCNN) is proposed. Second, based on the dense-connection idea of the DenseNet network structure, a symmetric segmentation model is established. The left part repeatedly performs down-sampling and local feature aggregation on the original geometric model through the Weighted Critical Points (WCPL) algorithm and edge convolution, performing compression encoding. At the same time, the right part uses interpolation to up-sample the encoded data, increasing the number of data points and feature dimensions so that the point cloud data is restored to its dimensions before processing. Finally, the data is restored to the dimension of the original input, realizing end-to-end output of the segmentation model. Comparison with other segmentation models shows that (1) as the number of iterations increases, the regional accuracy on the training set increases; (2) after 1000 training rounds, the model segments single object categories well and has application prospects; and (3) compared with other models, the segmentation intersection-over-union of the model is at a relatively mature level. The findings can provide a reference for applying segmentation of geometric models and neural networks to similar models and image segmentation.
APA, Harvard, Vancouver, ISO, and other styles
44

Pinheiro, Antonio. "JPEG Column: 93rd JPEG Meeting." ACM SIGMultimedia Records 13, no. 4 (December 2021): 1. http://dx.doi.org/10.1145/3578508.3578512.

Full text
Abstract:
The 93rd JPEG meeting was held online from 18 to 22 October 2021. The JPEG Committee continued its work on the development of new standardised solutions for the representation of visual information. Notably, the JPEG Committee has decided to release a new call for proposals on point cloud coding based on machine learning technologies that targets both compression efficiency and effective performance for 3D processing as well as machine and computer vision tasks. This activity will be conducted in parallel with JPEG AI standardization. Furthermore, it was also decided to pursue the development of a new standard in the context of the exploration on JPEG Fake News activity.
APA, Harvard, Vancouver, ISO, and other styles
45

Shinde, Rajat C., Surya S. Durbha, and Abhishek V. Potnis. "LidarCSNet: A Deep Convolutional Compressive Sensing Reconstruction Framework for 3D Airborne Lidar Point Cloud." ISPRS Journal of Photogrammetry and Remote Sensing 180 (October 2021): 313–34. http://dx.doi.org/10.1016/j.isprsjprs.2021.08.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Camuffo, Elena, Daniele Mari, and Simone Milani. "Recent Advancements in Learning Algorithms for Point Clouds: An Updated Overview." Sensors 22, no. 4 (February 10, 2022): 1357. http://dx.doi.org/10.3390/s22041357.

Full text
Abstract:
Recent advancements in self-driving cars, robotics, and remote sensing have widened the range of applications for 3D Point Cloud (PC) data. This data format poses several new issues concerning noise levels, sparsity, and required storage space; as a result, many recent works address PC problems using Deep Learning (DL) solutions, thanks to their capability to automatically extract features and achieve high performance. Such evolution has also changed the structure of processing chains and posed new problems to both academic and industrial researchers. The aim of this paper is to provide a comprehensive overview of the latest state-of-the-art DL approaches for the most crucial PC processing operations, i.e., semantic scene understanding, compression, and completion. With respect to existing reviews, the work proposes a new taxonomical classification of the approaches, taking into account the characteristics of the acquisition setup, the peculiarities of the acquired PC data, the presence of side information (depending on the adopted dataset), the data formatting, and the characteristics of the DL architectures. This organization allows one to better comprehend the final performance comparisons on common test sets and casts light on future research trends.
APA, Harvard, Vancouver, ISO, and other styles
47

Esmailzadeh, Mojtaba, Maryam Delshah, Seyedeh Mina Amirsadat, Ahmad Azari, Rouhollah Fatehi, Mohsen Rezaei, Hasan Bazai, Farhad Ghadyanlou, Amir Saidizad, and Mohsen Sharifpur. "Fracture Analysis of Compressor Impellers in Olefin Units: Numerical and Metallurgical Approach." Advances in Materials Science and Engineering 2022 (March 17, 2022): 1–16. http://dx.doi.org/10.1155/2022/5367695.

Full text
Abstract:
This paper presents a failure analysis of 7175 aluminum alloy compressor impellers used in olefin units, which operate at 34,500 rpm to compress gas in the process. Characterizations such as chemical composition, microstructure, and hardness tests were conducted to obtain a detailed evaluation of the base alloy. Furthermore, a finite element method and a 3D point cloud data technique were used to determine critical stress points on the surface of the impellers. The finite element results showed significant stress concentration at the root of the blades. Moreover, the cyclic tensile stress led to fatigue at the blade root, and near this location local strain accumulation was visible in the 3D point cloud data. The fractography results showed that the mode of crack progression and the fractured surface change with the stress mode. In addition, CFD modeling was used to investigate the effect of flow hydrodynamics on the HP and LP compressor blades. The results revealed that the maximum gas stream pressure at a rotor speed of 34,500 rpm occurred in the blade area where the failure had already taken place, and that the pressure, stress, and temperature gradients of the flow in the HP compressor were significantly higher than in the LP compressor.
APA, Harvard, Vancouver, ISO, and other styles
48

Hooda, Reetu, and W. David Pan. "Early Termination of Dyadic Region-Adaptive Hierarchical Transform for Efficient Attribute Compression of 3D Point Clouds." IEEE Signal Processing Letters 29 (2022): 214–18. http://dx.doi.org/10.1109/lsp.2021.3133204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Gehrung, Joachim, Marcus Hebel, Michael Arens, and Uwe Stilla. "A FRAMEWORK FOR VOXEL-BASED GLOBAL SCALE MODELING OF URBAN ENVIRONMENTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W1 (October 26, 2016): 45–51. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w1-45-2016.

Full text
Abstract:
The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but has disadvantages that are readily solved by volumetric representations, especially when considering selective data acquisition, change detection, and fast-changing environments. This paper therefore proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are shown on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated, and their memory consumption is compared to that of raw point clouds. The presented results show that generation, storage, and real-time rendering of even large urban models are feasible with off-the-shelf hardware.
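The abstract does not specify the two compression strategies, but a simple illustration of why volumetric models compress well is run-length encoding of a flattened binary occupancy grid; the sketch below is a generic example for intuition, not the paper's method.

```python
import numpy as np

def rle_encode(occ):
    """Run-length encode a flattened binary occupancy grid as
    (value, run_length) pairs. Large empty or fully occupied regions,
    common in urban volumetric models, collapse into single runs."""
    flat = occ.ravel().astype(np.uint8)
    changes = np.flatnonzero(np.diff(flat)) + 1   # indices where value flips
    starts = np.concatenate(([0], changes))
    ends = np.concatenate((changes, [flat.size]))
    return [(int(flat[s]), int(e - s)) for s, e in zip(starts, ends)]

def rle_decode(runs, shape):
    """Invert rle_encode, restoring the original grid shape."""
    flat = np.concatenate([np.full(n, v, dtype=np.uint8) for v, n in runs])
    return flat.reshape(shape)

# Demo: a 4x4x4 grid with one occupied slab -> only three runs.
occ = np.zeros((4, 4, 4), dtype=bool)
occ[1:3, :, :] = True
runs = rle_encode(occ)
```

This kind of encoding is lossless; the lossy strategies evaluated in the paper additionally trade precision for memory, which the authors quantify against raw point clouds.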
APA, Harvard, Vancouver, ISO, and other styles
50

Wien, Mathias. "MPEG Visual Quality Assessment Advisory Group." ACM SIGMultimedia Records 13, no. 3 (September 2021): 1. http://dx.doi.org/10.1145/3578495.3578498.

Full text
Abstract:
The perceived visual quality is of utmost importance in the context of visual media compression, such as 2D, 3D, immersive video, and point clouds. The trade-off between compression efficiency and computational/implementation complexity has a crucial impact on the success of a compression scheme. This specifically holds for the development of visual media compression standards, which typically aim at maximum compression efficiency using state-of-the-art coding technology. In MPEG, the subjective and objective assessment of visual quality has always been an integral part of the standards development process. Due to the significant effort of formal subjective evaluations, the standardization process typically relies on such formal tests in the starting phase and for verification, while objective metrics are used during development. In the new MPEG structure, established in 2020, a dedicated advisory group has been installed to provide, maintain, and develop visual quality assessment methods suitable for use in the standardization process. This column lays out the scope and tasks of this advisory group and reports on its first achievements and developments. After a brief overview of the organizational structure, current projects and initial results are presented.
APA, Harvard, Vancouver, ISO, and other styles
