Selected scientific literature on the topic "3D point cloud representation"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "3D point cloud representation".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Journal articles on the topic "3D point cloud representation":

1

Arya, Hemlata, Parul Saxena, and Jaimala Jha. "Detection of 3D Object in Point Cloud: Cloud Semantic Segmentation in Lane Marking". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10s (October 7, 2023): 376–81. http://dx.doi.org/10.17762/ijritcc.v11i10s.7645.

Abstract:
Managing a city efficiently and effectively is more important than ever, as a growing population and economic pressure strain infrastructure such as transportation and public services such as keeping urban green areas clean and maintained. For effective administration, knowledge of the urban setting is essential. Both portable and stationary laser scanners generate 3D point clouds that accurately depict the environment. These data points may be used to infer the state of the roads, buildings, trees, and other important elements involved in this decision-making process, and could support "smart" or "smarter" cities in general. Unfortunately, point clouds do not immediately supply this sort of data; it must be extracted. This extraction is done either by human specialists or by sophisticated computer programmes that can identify objects. Because the point clouds might represent very large areas (streets or even whole cities), relying on specialists to identify the objects may be an unproductive use of time. Automatic or nearly automatic discovery and recognition of essential objects is now possible with the help of object identification software. In this paper, we describe a unique approach to semantic segmentation of point clouds, based on the use of contextual point representations to take advantage of both local and global features within the point cloud. We improve the accuracy of each point's representation by performing a single, novel gated fusion of the point and its neighbours, which incorporates the knowledge from both sets of data and enhances the representation of the point. Following this, we offer a new graph point net module that further develops the improved representation by composing and updating each point's representation within the local point cloud structure using the graph attention block in real time. Finally, we take advantage of the global structure of the point cloud by using spatial- and channel-wise attention techniques to construct the resulting semantic label for each point.
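The gated fusion step described in this abstract can be pictured with a small numerical sketch. This is a generic gating formulation, not the authors' exact layer; the weights `W`, `b` and the feature shapes are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(point_feat, neighbor_feat, W, b):
    """Fuse a point's feature with an aggregated neighbor feature via a
    learned gate: g = sigmoid(W [x; n] + b), fused = g * x + (1 - g) * n."""
    joint = np.concatenate([point_feat, neighbor_feat], axis=-1)  # (N, 2D)
    g = sigmoid(joint @ W + b)                                    # (N, D), in (0, 1)
    return g * point_feat + (1.0 - g) * neighbor_feat

rng = np.random.default_rng(0)
N, D = 5, 8
x = rng.normal(size=(N, D))                   # per-point features
n = rng.normal(size=(N, D))                   # aggregated neighbor features
W = rng.normal(scale=0.1, size=(2 * D, D))    # hypothetical gate weights
b = np.zeros(D)
fused = gated_fusion(x, n, W, b)
```

Because the gate lies strictly in (0, 1), each fused feature is an elementwise convex combination of the point and neighbor features.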
2

Barnefske, E., and H. Sternberg. "PCCT: A POINT CLOUD CLASSIFICATION TOOL TO CREATE 3D TRAINING DATA TO ADJUST AND DEVELOP 3D CONVNET". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W16 (September 17, 2019): 35–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w16-35-2019.

Abstract:
Point clouds give a very detailed and sometimes very accurate representation of the geometry of captured objects. In surveying, point clouds captured with laser scanners or camera systems are an intermediate result that must be processed further. Often the point cloud has to be divided into regions of similar types (object classes) for the next process steps. These classifications are very time-consuming and cost-intensive compared to acquisition. In order to automate this process step, convolutional neural networks (ConvNets), which take over the classification task, are investigated in detail. In addition to the network architecture, the classification performance of a ConvNet depends on the training data with which the task is learned. This paper presents and evaluates the point cloud classification tool (PCCT) developed at HCU Hamburg. With the PCCT, large point cloud collections can be semi-automatically classified. Furthermore, the influence of erroneous points in three-dimensional point clouds is investigated. The network architecture PointNet is used for this investigation.
3

Orts-Escolano, Sergio, Jose Garcia-Rodriguez, Miguel Cazorla, Vicente Morell, Jorge Azorin, Marcelo Saval, Alberto Garcia-Garcia, and Victor Villena. "Bioinspired point cloud representation: 3D object tracking". Neural Computing and Applications 29, no. 9 (September 16, 2016): 663–72. http://dx.doi.org/10.1007/s00521-016-2585-0.

4

Rai, A., N. Srivastava, K. Khoshelham, and K. Jain. "SEMANTIC ENRICHMENT OF 3D POINT CLOUDS USING 2D IMAGE SEGMENTATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 14, 2023): 1659–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1659-2023.

Abstract:
3D point cloud segmentation is computationally intensive due to the lack of inherent structural information and the unstructured nature of point cloud data, which hinders the identification and connection of neighboring points. Understanding the structure of the point cloud data plays a crucial role in obtaining a meaningful and accurate representation of the underlying 3D environment. In this paper, we propose an algorithm that builds on existing state-of-the-art techniques of 2D image segmentation and point cloud registration to enrich point clouds with semantic information. DeepLab2 with ResNet50 as the backbone architecture, trained on the COCO dataset, is used for semantic segmentation of indoor scenes into several classes such as wall, floor, ceiling, doors, and windows. Semantic information from 2D images is propagated along with the other input data, i.e., RGB images, depth images, and sensor information, to generate 3D point clouds with semantic information. The Iterative Closest Point (ICP) algorithm is used for pair-wise registration of consecutive point clouds, and finally pose graph optimization is applied to the whole set of point clouds to generate the combined point cloud of the whole scene. The 3D point cloud of the whole scene contains pseudo-color information denoting the semantic class to which each point belongs. The proposed methodology uses an off-the-shelf 2D semantic segmentation deep learning model to semantically segment 3D point clouds collected with a handheld mobile LiDAR sensor. We demonstrate a comparison of the accuracy achieved against a manually segmented point cloud on an in-house dataset as well as the 2D3DS benchmark dataset.
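The pair-wise ICP registration mentioned above alternates nearest-neighbor correspondence search with a closed-form rigid alignment. A minimal sketch of that alignment step (the Kabsch/SVD solution, assuming correspondences are already known) might look like this:

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t minimizing ||R p + t - q|| over
    corresponding rows of src and dst (the core alignment step inside ICP)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known rigid motion from noise-free correspondences.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = kabsch(src, dst)
```

A full ICP loop would re-estimate correspondences after each alignment until convergence.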
5

Sun, Yichen. "3D point cloud domain generalization via adversarial training". Applied and Computational Engineering 13, no. 1 (October 23, 2023): 160–68. http://dx.doi.org/10.54254/2755-2721/13/20230725.

Abstract:
The purpose of this paper is to tackle the classification problem of 3D point cloud data in domain generalization: how to develop a generalized feature representation for an unseen target domain by utilizing numerous seen source domains. We present a novel methodology, called 3D-AA, based on adversarial training that learns generalized feature representations across subdomains. We specifically extend adversarial autoencoders by applying the Maximum Mean Discrepancy (MMD) measure to align the distributions across several subdomains, and then match the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this manner, the learned 3D feature representation is expected to be universal to the seen source domains due to the MMD regularization, and to generalize well to the target domain due to the addition of the prior distribution. We applied the algorithm to train on two different 3D point cloud source domains with our framework. The combination of multiple loss functions on the 3D point cloud domain generalization task shows that our algorithm performs better and learns more generalized features for the target domain than the source-only algorithm, which utilizes only the MMD measurement.
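The MMD measure used above has a simple empirical estimator. A sketch with an RBF kernel follows; the bandwidth `sigma` is an arbitrary choice here, not a value from the paper:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between samples X (n, d) and Y (m, d)
    under an RBF kernel (biased V-statistic estimator, always >= 0)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(2)
same_a = rng.normal(size=(200, 3))           # two samples from the same distribution
same_b = rng.normal(size=(200, 3))
shifted = rng.normal(loc=3.0, size=(200, 3)) # a clearly shifted distribution
mmd_close = rbf_mmd2(same_a, same_b)
mmd_far = rbf_mmd2(same_a, shifted)
```

Matching distributions yields a small MMD; mismatched ones yield a much larger value, which is what makes MMD usable as an alignment regularizer.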
6

Yang, Zexin, Qin Ye, Jantien Stoter, and Liangliang Nan. "Enriching Point Clouds with Implicit Representations for 3D Classification and Segmentation". Remote Sensing 15, no. 1 (December 22, 2022): 61. http://dx.doi.org/10.3390/rs15010061.

Abstract:
Continuous implicit representations can flexibly describe complex 3D geometry and offer excellent potential for 3D point cloud analysis. However, it remains challenging for existing point-based deep learning architectures to leverage the implicit representations due to the discrepancy in data structures between implicit fields and point clouds. In this work, we propose a new point cloud representation by integrating the 3D Cartesian coordinates with the intrinsic geometric information encapsulated in its implicit field. Specifically, we parameterize the continuous unsigned distance field around each point into a low-dimensional feature vector that captures the local geometry. Then we concatenate the 3D Cartesian coordinates of each point with its encoded implicit feature vector as the network input. The proposed method can be plugged into an existing network architecture as a module without trainable weights. We also introduce a novel local canonicalization approach to ensure the transformation-invariance of encoded implicit features. With its local mechanism, our implicit feature encoding module can be applied to not only point clouds of single objects but also those of complex real-world scenes. We have validated the effectiveness of our approach using five well-known point-based deep networks (i.e., PointNet, SuperPoint Graph, RandLA-Net, CurveNet, and Point Structuring Net) on object-level classification and scene-level semantic segmentation tasks. Extensive experiments on both synthetic and real-world datasets have demonstrated the effectiveness of the proposed point representation.
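One simple way to picture the idea of encoding a point's local unsigned distance field is to sample distances at fixed offsets around each point. The sketch below only illustrates the concept, not the paper's actual parameterization; the probe offsets are hypothetical:

```python
import numpy as np

def udf_features(points, probes):
    """Sample the cloud's unsigned distance field at fixed local offsets:
    feature[i, j] = min_k || (p_i + probes[j]) - p_k ||."""
    q = points[:, None, :] + probes[None, :, :]                       # (N, P, 3)
    d = np.linalg.norm(q[:, :, None, :] - points[None, None, :, :], axis=-1)
    return d.min(axis=2)                                              # (N, P)

rng = np.random.default_rng(3)
pts = rng.normal(size=(64, 3))
probes = rng.normal(scale=0.2, size=(8, 3))   # hypothetical local probe offsets
feat = udf_features(pts, probes)
```

Such per-point distance samples could then be concatenated with the raw coordinates as network input, mirroring the idea described above.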
7

Quach, Maurice, Aladine Chetouani, Giuseppe Valenzise, and Frederic Dufaux. "A deep perceptual metric for 3D point clouds". Electronic Imaging 2021, no. 9 (January 18, 2021): 257–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-257.

Abstract:
Point clouds are essential for storage and transmission of 3D content. As they can entail significant volumes of data, point cloud compression is crucial for practical usage. Recently, point cloud geometry compression approaches based on deep neural networks have been explored. In this paper, we evaluate the ability to predict perceptual quality of typical voxel-based loss functions employed to train these networks. We find that the commonly used focal loss and weighted binary cross entropy are poorly correlated with human perception. We thus propose a perceptual loss function for 3D point clouds which outperforms existing loss functions on the ICIP2020 subjective dataset. In addition, we propose a novel truncated distance field voxel grid representation and find that it leads to sparser latent spaces and loss functions that are more correlated with perceived visual quality compared to a binary representation. The source code is available at https://github.com/mauriceqch/2021_pc_perceptual_loss.
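A truncated distance field voxel grid, as opposed to a binary occupancy grid, can be sketched roughly as follows; the resolution and truncation value are arbitrary choices, and the paper's exact construction may differ:

```python
import numpy as np

def tdf_grid(points, res=16, trunc=2.0):
    """Voxelize unit-cube points into a truncated distance field: each voxel
    stores the distance (in voxel units, clipped to `trunc`) to the nearest
    point, instead of a hard 0/1 occupancy value."""
    centers = (np.arange(res) + 0.5) / res
    gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)     # (res^3, 3)
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1).min(1)
    return np.clip(d * res, 0.0, trunc).reshape(res, res, res)

rng = np.random.default_rng(4)
pts = rng.uniform(size=(100, 3))
tdf = tdf_grid(pts)
```

Voxels near the surface carry graded values rather than a binary flag, which is the property the paper links to sparser latent spaces.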
8

Decker, Kevin T., and Brett J. Borghetti. "Hyperspectral Point Cloud Projection for the Semantic Segmentation of Multimodal Hyperspectral and Lidar Data with Point Convolution-Based Deep Fusion Neural Networks". Applied Sciences 13, no. 14 (July 14, 2023): 8210. http://dx.doi.org/10.3390/app13148210.

Abstract:
The fusion of dissimilar data modalities in neural networks presents a significant challenge, particularly in the case of multimodal hyperspectral and lidar data. Hyperspectral data, typically represented as images with potentially hundreds of bands, provide a wealth of spectral information, while lidar data, commonly represented as point clouds with millions of unordered points in 3D space, offer structural information. The complementary nature of these data types presents a unique challenge due to their fundamentally different representations requiring distinct processing methods. In this work, we introduce an alternative hyperspectral data representation in the form of a hyperspectral point cloud (HSPC), which enables ingestion and exploitation with point cloud processing neural network methods. Additionally, we present a composite fusion-style, point convolution-based neural network architecture for the semantic segmentation of HSPC and lidar point cloud data. We investigate the effects of the proposed HSPC representation for both unimodal and multimodal networks ingesting a variety of hyperspectral and lidar data representations. Finally, we compare the performance of these networks against each other and previous approaches. This study paves the way for innovative approaches to multimodal remote sensing data fusion, unlocking new possibilities for enhanced data analysis and interpretation.
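One plausible way to build a hyperspectral point cloud is to lift each pixel into 3D using a per-pixel depth and pinhole intrinsics, carrying its spectrum along as point attributes. The paper's actual HSPC construction may differ; the intrinsics and shapes below are made up for illustration:

```python
import numpy as np

def hyperspectral_point_cloud(hsi, depth, fx, fy, cx, cy):
    """Lift an H x W x B hyperspectral image into an (H*W, 3 + B) point cloud:
    each pixel becomes a 3D point (pinhole back-projection) that keeps its
    B-band spectrum as attributes."""
    H, W, B = hsi.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    xyz = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return np.concatenate([xyz, hsi.reshape(-1, B)], axis=1)

rng = np.random.default_rng(8)
hsi = rng.uniform(size=(4, 6, 10))        # tiny hypothetical hyperspectral cube
depth = np.full((4, 6), 2.0)              # flat depth for simplicity
hspc = hyperspectral_point_cloud(hsi, depth, fx=5.0, fy=5.0, cx=3.0, cy=2.0)
```

The resulting array can be ingested by point-based networks, which is the enabling idea behind the HSPC representation.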
9

Li, Shidi, Miaomiao Liu, and Christian Walder. "EditVAE: Unsupervised Parts-Aware Controllable 3D Point Cloud Shape Generation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1386–94. http://dx.doi.org/10.1609/aaai.v36i2.20027.

Abstract:
This paper tackles the problem of parts-aware point cloud generation. Unlike existing works which require the point cloud to be segmented into parts a priori, our parts-aware editing and generation are performed in an unsupervised manner. We achieve this with a simple modification of the Variational Auto-Encoder which yields a joint model of the point cloud itself along with a schematic representation of it as a combination of shape primitives. In particular, we introduce a latent representation of the point cloud which can be decomposed into a disentangled representation for each part of the shape. These parts are in turn disentangled into both a shape primitive and a point cloud representation, along with a standardising transformation to a canonical coordinate system. The dependencies between our standardising transformations preserve the spatial dependencies between the parts in a manner that allows meaningful parts-aware point cloud generation and shape editing. In addition to the flexibility afforded by our disentangled representation, the inductive bias introduced by our joint modeling approach yields state-of-the-art experimental results on the ShapeNet dataset.
10

Bello, Saifullahi Aminu, Shangshu Yu, Cheng Wang, Jibril Muhmmad Adam, and Jonathan Li. "Review: Deep Learning on 3D Point Clouds". Remote Sensing 12, no. 11 (May 28, 2020): 1729. http://dx.doi.org/10.3390/rs12111729.

Abstract:
A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining increased popularity as a result of the increased availability of acquisition devices, as well as seeing increased application in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the most preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud, on the other hand, is unstructured. The unstructuredness of point clouds makes the use of deep learning for its direct processing very challenging. This paper contains a review of the recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. The initial work on deep learning directly with raw point cloud data did not model local regions; therefore, subsequent approaches model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlation between points in the local regions. From the survey, we conclude that approaches that model local regions and take into account the correlation between points in the local regions perform better. Contrary to existing reviews, this paper provides a general structure for learning with raw point clouds, and various methods were compared based on the general structure. This work also introduces the popular 3D point cloud benchmark datasets and discusses the application of deep learning in popular 3D vision tasks, including classification, segmentation, and detection.
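The "sampling and grouping" step the review describes is commonly realized as farthest point sampling followed by a ball query. A minimal sketch, with arbitrary radii and sizes:

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Pick m well-spread centroid indices by repeatedly taking the point
    farthest from those already chosen (the 'sampling' step)."""
    chosen = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

def ball_query(points, centroids, radius, k):
    """Group up to k neighbor indices within `radius` of each centroid (the
    'grouping' step); pad short groups with the centroid's own index."""
    groups = []
    for c in centroids:
        d = np.linalg.norm(points - points[c], axis=1)
        idx = np.flatnonzero(d <= radius)[:k]
        pad = np.full(k - len(idx), c)
        groups.append(np.concatenate([idx, pad]))
    return np.stack(groups)                      # (m, k)

rng = np.random.default_rng(5)
pts = rng.uniform(size=(256, 3))
cent = farthest_point_sampling(pts, 16)
grp = ball_query(pts, cent, radius=0.3, k=32)
```

Each group of indexed points forms a local region on which a shared network can model point correlations, as the surveyed approaches do.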

Theses on the topic "3D point cloud representation":

1

Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision". University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.

2

Diskin, Yakov. "Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models". University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1429293660.

3

Morell, Vicente. "Contributions to 3D Data Registration and Representation". Doctoral thesis, Universidad de Alicante, 2014. http://hdl.handle.net/10045/42364.

Abstract:
Nowadays, the new generation of computers provides the high performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task of a robot and is an essential part of allowing robots to move through these environments. Traditionally, mobile robots used a combination of several sensors from different technologies. Lasers, sonars, and contact sensors have typically been used in any mobile robotic architecture; however, color cameras are an important sensor because we want robots to use the same information that humans do to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex problems, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like Microsoft Kinect, which provide 3D colored point clouds at high frame rates, made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once you have the 3D model of a room the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on different scene distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, which takes advantage of a new paradigm coined as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometrical 3D compression method seeks to reduce the 3D information using plane detection as the basic structure to compress the data. This is because our target environments are man-made, and therefore many points belong to planar surfaces. Our proposed method is able to obtain good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the capabilities of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
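The plane detection underlying the proposed compression can be illustrated with a basic RANSAC plane fit. This is a generic sketch, not the thesis's actual detector; the iteration count and inlier threshold are arbitrary:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Detect the dominant plane: repeatedly fit a plane through 3 random
    points and keep the model with the most inliers within `tol` of it."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        nn = np.linalg.norm(normal)
        if nn < 1e-12:
            continue                      # degenerate (collinear) sample
        normal /= nn
        dist = np.abs((points - a) @ normal)
        inliers = dist <= tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, a @ normal)
    return best_model, best_inliers

# Synthetic scene: 300 points on the z = 0 plane plus 60 off-plane clutter points.
rng = np.random.default_rng(6)
plane_pts = np.column_stack([rng.uniform(size=(300, 2)), np.zeros(300)])
noise_pts = rng.uniform(size=(60, 3)) + np.array([0.0, 0.0, 0.5])
pts = np.vstack([plane_pts, noise_pts])
(normal, offset), inliers = ransac_plane(pts)
```

Points belonging to a detected plane can then be replaced by the plane parameters plus a 2D footprint, which is the essence of plane-based compression.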
4

Orts-Escolano, Sergio. "A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs". Doctoral thesis, Universidad de Alicante, 2013. http://hdl.handle.net/10045/36484.

Abstract:
The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposes the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we propose the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition, and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation is proposed that leverages the computing power of modern GPUs, taking advantage of a new paradigm coined as General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as the recognition and localization of objects, visual surveillance, and 3D reconstruction.
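A single, heavily simplified GNG adaptation step might look like the following. Real GNG also ages edges, accumulates per-node error, and inserts/removes nodes, all of which are omitted here; the learning rates are arbitrary:

```python
import numpy as np

def gng_step(nodes, edges, x, eps_w=0.2, eps_n=0.006):
    """One simplified Growing Neural Gas adaptation: find the two nodes
    nearest to sample x, link them, and pull the winner (and, more weakly,
    its topological neighbors) toward x."""
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]
    edges.add((min(s1, s2), max(s1, s2)))        # create/refresh the edge
    nodes[s1] += eps_w * (x - nodes[s1])         # move the winner
    for a, b in edges:                           # move topological neighbors
        if s1 in (a, b):
            n = b if a == s1 else a
            if n != s1:
                nodes[n] += eps_n * (x - nodes[n])
    return s1

rng = np.random.default_rng(9)
cloud = rng.uniform(size=(500, 3))     # noisy input point cloud (unit cube)
nodes = rng.uniform(size=(20, 3))      # small starting network
edges = set()
for x in cloud:
    gng_step(nodes, edges, x)
```

After feeding the stream once, the node positions form a compact representation of the input space with an emergent edge topology.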
5

Zhao, Yongheng. "3D feature representations for visual perception and geometric shape understanding". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3424787.

Abstract:
In this thesis, we first present a unified look at several well-known 3D feature representations, ranging from hand-crafted designs to learning-based ones. Then, we propose three kinds of feature representations from both RGB-D data and point clouds, addressing different problems and aiming for different functionality. With RGB-D data, we address the existing problems of 2D feature representation in visual perception by integrating 3D information. We propose an RGB-D feature representation which fuses an object's statistical color model and depth information in a probabilistic manner. The depth information is able not only to enhance the discriminative power of the model toward clutter at a different range, but can also be used as a constraint to properly update the model and reduce model drift. The proposed representation is then evaluated in our proposed object tracking algorithm (named MS3D) on a public RGB-D object tracking dataset. It runs in real time and produces the best results compared against the other state-of-the-art RGB-D trackers. Furthermore, we integrate the MS3D tracker in an RGB-D camera network in order to handle long-term and full occlusion. The accuracy and robustness of our algorithm are evaluated on our presented dataset, and the results suggest our algorithm is able to track multiple objects accurately and continuously in the long term. For 3D point clouds, current deep learning based feature representations often discard spatial arrangements in the data, hence falling short of respecting the parts-to-whole relationship, which is critical to explain and describe 3D shapes. Addressing this problem, we propose 3D point-capsule networks, an autoencoder designed for unsupervised learning of feature representations from sparse 3D point clouds while preserving spatial arrangements of the input data in different feature attentions.
3D capsule networks arise as a direct consequence of our unified formulation of the common 3D autoencoders. The dynamic routing scheme and the peculiar 2D latent feature representation deployed by our capsule networks bring in improvements for several common point cloud-related tasks, such as object classification, object reconstruction and part segmentation as substantiated by our extensive evaluations. Moreover, it enables new applications such as part interpolation and replacement. Finally, towards rotation equivariance of the 3D feature representation, we present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO(3) rotation group, translation, and permutation of the unordered input sets. The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure. The capsule layer enables us to disentangle geometry from the pose, paving the way for more informative descriptions and structured latent space. In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers. Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets.
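The Weiszfeld algorithm referenced above computes the geometric median by iteratively re-weighted least squares: each step re-averages the points with weights inversely proportional to their distance from the current estimate. A compact sketch:

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-12):
    """Geometric median via Weiszfeld's IRLS iteration: repeatedly take the
    weighted mean of the points with weights 1 / distance-to-estimate."""
    x = points.mean(axis=0)                       # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(d, eps)              # clamp to avoid divide-by-zero
        x = (w[:, None] * points).sum(0) / w.sum()
    return x

# For collinear points the geometric median is the 1D median, so the distant
# outlier at x = 100 barely influences the result.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [100.0, 0.0]])
med = weiszfeld(pts)
```

This robustness to outliers is what makes the iteration attractive as a routing/aggregation rule.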
6

Konradsson, Albin, and Gustav Bohman. "3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations". Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.

Abstract:
This thesis provides a comparison between instance segmentation methods using point clouds and depth images. Specifically, their performance on cluttered scenes of irregular objects in an industrial environment is investigated. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of datatype. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industry application. Two datasets are generated. One dataset has low complexity, containing only boxes. The other has higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks. We chose PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results support that there may be benefits of using a point cloud representation over depth images. PointGroup performs better in terms of the chosen metric on both datasets. On low complexity scenes, the inference times are similar between the two methods tested. However, on higher complexity scenes, Mask R-CNN is significantly faster.
7

Cao, Chao. "Compression d'objets 3D représentés par nuages de points". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS015.

Abstract:
With the rapid growth of multimedia content, 3D objects are becoming more and more popular. Most of the time, they are modeled as complex polygonal meshes or dense point clouds, providing immersive experiences in different industrial and consumer multimedia applications. The point cloud, which is easier to acquire than a mesh and is widely applicable, has attracted great interest in both the academic and commercial worlds. A point cloud is a set of points with different properties, such as their geometric locations and the associated attributes (e.g., color, material properties, etc.). The number of points within a point cloud can range from a thousand, to constitute simple 3D objects, up to billions, to realistically represent complex 3D scenes. Such huge amounts of data bring great technological challenges in terms of transmission, processing, and storage of point clouds. In recent years, numerous research works focused their efforts on the compression of meshes, while fewer addressed point clouds. We have identified two main approaches in the literature: a purely geometric one based on octree decomposition, and a hybrid one based on both geometry and video coding. The first approach can provide accurate 3D geometry information but has weak temporal consistency. The second one can efficiently remove temporal redundancy, yet a decrease in geometric precision can be observed after the projection. Thus, the tradeoff between compression efficiency and accurate prediction needs to be optimized. We focused on exploring the temporal correlations between dynamic dense point clouds and proposed different approaches to improve the compression performance of the MPEG (Moving Picture Experts Group) V-PCC (Video-based Point Cloud Compression) test model, which provides state-of-the-art compression of dynamic dense point clouds. First, an octree-based adaptive segmentation is proposed to cluster the points with different motion amplitudes into 3D cubes.
Then, motion estimation is applied to these cubes using an affine transformation. Gains in terms of rate-distortion (RD) performance have been observed in sequences with relatively low motion amplitudes. However, the cost of building an octree for a dense point cloud remains high, while the resulting octree structures show poor temporal consistency for sequences with higher motion amplitudes. An anatomical structure is then proposed to model the motion of point clouds representing humanoids more inherently. With the help of 2D pose estimation tools, the motion is estimated from 14 anatomical segments using an affine transformation. Moreover, we propose a novel solution for color prediction and discuss the residual coding from prediction. It is shown that instead of encoding redundant texture information, it is more valuable to code the residuals, which leads to better RD performance. Although our contributions have improved the performance of the V-PCC test models, the temporal compression of dynamic point clouds remains a highly challenging task. Due to the limitations of current acquisition technology, the acquired point clouds can be noisy in both the geometry and attribute domains, which makes it challenging to achieve accurate motion estimation. In future studies, the technologies used for 3D meshes may be exploited and adapted to provide temporally consistent connectivity information between dynamic 3D point clouds.
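The octree-based adaptive segmentation described in the abstract can be sketched as follows — a toy version that recursively splits a cube whenever the motion amplitudes of the points inside it differ too much. The thresholds, per-point motion magnitudes, and function names here are illustrative assumptions, not the thesis's actual implementation:

```python
# Sketch: adaptive octree segmentation grouping points with similar motion
# amplitudes into the same cube (hypothetical simplification; points lying
# exactly on a child boundary may fall into more than one child in this toy).

def adaptive_octree(points, motions, center, half, depth=0, max_depth=4, tol=0.5):
    """Recursively split a cube while the motion amplitudes inside it vary
    by more than `tol`; returns a list of (center, half, point_indices)."""
    idx = [i for i, p in enumerate(points)
           if all(abs(p[d] - center[d]) <= half for d in range(3))]
    if not idx:
        return []
    amps = [motions[i] for i in idx]
    if depth == max_depth or max(amps) - min(amps) <= tol:
        return [(center, half, idx)]  # leaf: motion is homogeneous enough
    cells = []
    h = half / 2
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = (center[0] + dx, center[1] + dy, center[2] + dz)
                cells.extend(adaptive_octree(points, motions, child, h,
                                             depth + 1, max_depth, tol))
    return cells
```

With two slow points in one octant and two fast points in the opposite octant, the root splits once and each occupied child becomes a motion-homogeneous leaf.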
8

Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.

Abstract:
This thesis describes methods for reconstructing 3D scenes from photographs and videos using the Structure from Motion approach. New software capable of automatically reconstructing point clouds and polygonal models from common images and videos was implemented based on these methods. The software uses a variety of existing and custom solutions and clearly links them into one easily executable application. The reconstruction consists of feature point detection, pairwise matching, bundle adjustment, stereoscopic algorithms, and polygonal model creation from the point cloud using the PCL library. The program is based on Bundler and PMVS. The Poisson surface reconstruction algorithm, as well as simple triangulation and a custom reconstruction method based on plane segmentation, were used for polygonal model creation.
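One of the building blocks mentioned above, plane segmentation of a point cloud, can be illustrated with a minimal RANSAC sketch. The iteration count, threshold, and function names are assumptions for illustration; the thesis itself builds on Bundler, PMVS, and PCL rather than this toy code:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points, or None if collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """Return the largest set of points within `thresh` of a sampled plane."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue  # degenerate (collinear) sample
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < thresh]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Given 50 points on the plane z = 0 plus a handful of scattered outliers, the dominant plane's inliers are recovered.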
9

Smith, Michael. "Non-parametric workspace modelling for mobile robots using push broom lasers". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:50224eb9-73e8-4c8a-b8c5-18360d11e21b.

Abstract:
This thesis is about the intelligent compression of large 3D point cloud datasets. The non-parametric method that we describe simultaneously generates a continuous representation of the workspace surfaces from discrete laser samples and decimates the dataset, retaining only locally salient samples. Our framework attains decimation factors in excess of two orders of magnitude without significant degradation in fidelity. The work presented here has a specific focus on gathering and processing laser measurements taken from a moving platform in outdoor workspaces. We introduce a somewhat unusual parameterisation of the problem and look to Gaussian Processes as the fundamental machinery in our processing pipeline. Our system compresses laser data in a fashion that is naturally sympathetic to the underlying structure and complexity of the workspace. In geometrically complex areas, compression is lower than in geometrically bland areas. We focus on this property in detail, and it leads us well beyond a simple application of non-parametric techniques. Indeed, towards the end of the thesis we develop a non-stationary GP framework whereby our regression model adapts to the local workspace complexity. Throughout, we construct our algorithms so that they may be efficiently implemented. In addition, we present a detailed analysis of the proposed system and investigate model parameters, metric errors, and data compression rates. Finally, we note that this work is predicated on a substantial amount of robotics engineering, which has allowed us to produce a high-quality, peer-reviewed dataset - the first of its kind.
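The core machinery referred to here, Gaussian Process regression over laser samples, can be sketched in one dimension. This is a minimal stationary GP with a squared-exponential kernel and a naive linear solver; the thesis's non-stationary framework and parameterisation are far richer:

```python
import math

def sq_exp(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return sf * math.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)

def solve(A, rhs):
    """Naive Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-6):
    """Posterior mean of a zero-mean GP at query point xq."""
    K = [[sq_exp(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)  # alpha = K^{-1} y
    return sum(alpha[i] * sq_exp(xs[i], xq) for i in range(len(xs)))
```

With near-zero observation noise the posterior mean interpolates the training samples, which is the continuous-surface behaviour the decimation scheme relies on.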
10

Roure, Garcia Ferran. "Tools for 3D point cloud registration". Doctoral thesis, Universitat de Girona, 2017. http://hdl.handle.net/10803/403345.

Abstract:
In this thesis, we did an in-depth review of the state of the art of 3D registration, evaluating the most popular methods. Given the lack of standardization in the literature, we also proposed a nomenclature and a classification to unify the evaluation systems and to be able to compare the different algorithms under the same criteria. The major contribution of the thesis is the Registration Toolbox, which consists of software and a database of 3D models. The software presented here consists of a 3D registration pipeline written in C++ that allows researchers to try different methods, as well as add new ones and compare them. In this pipeline, we not only implemented the most popular methods in the literature, but also added three new methods that contribute to improving the state of the art. The database provides different 3D models for carrying out the tests needed to validate the performance of the methods. Finally, we presented a new hybrid data structure specially focused on the search for neighbors. We tested our proposal together with other data structures and obtained very satisfactory results, in many cases overcoming the best current alternatives. All tested structures are also available in our pipeline. This Toolbox is intended to be a useful tool for the whole community and is available to researchers under a Creative Commons license.
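As a rough illustration of the kind of neighbor-search structure this thesis benchmarks, a uniform-grid spatial hash answers fixed-radius queries by inspecting only nearby cells. This is a generic sketch, not the hybrid structure the thesis proposes:

```python
from collections import defaultdict

class GridHash:
    """Uniform-grid spatial hash for fixed-radius neighbor queries
    (a simplified stand-in for the thesis's hybrid structure)."""

    def __init__(self, points, cell):
        self.cell = cell
        self.points = points
        self.grid = defaultdict(list)
        for i, p in enumerate(points):
            self.grid[self._key(p)].append(i)

    def _key(self, p):
        # floor-divide each coordinate by the cell size to get an integer cell key
        return tuple(int(c // self.cell) for c in p)

    def radius_search(self, q, r):
        kx, ky, kz = self._key(q)
        reach = int(r // self.cell) + 1  # how many cells the radius can span
        out = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for i in self.grid.get((kx + dx, ky + dy, kz + dz), []):
                        p = self.points[i]
                        if sum((p[d] - q[d]) ** 2 for d in range(3)) <= r * r:
                            out.append(i)
        return out
```

For roughly uniform point densities this gives near-constant query cost per point, which is why grid and grid/tree hybrids are popular for registration inner loops.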

Books on the topic "3D point cloud representation":

1

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. 3D Point Cloud Analysis. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0.

2

Zhang, Guoxiang, and YangQuan Chen. Towards Optimal Point Cloud Processing for 3D Reconstruction. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96110-7.

3

Chen, YangQuan, and Guoxiang Zhang. Towards Optimal Point Cloud Processing for 3D Reconstruction. Springer International Publishing AG, 2022.

Find full text
4

Zhang, Min, Shan Liu, C. C. Jay Kuo, and Pranav Kadam. 3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2021.

5

Zhang, Min, Shan Liu, C. C. Jay Kuo, and Pranav Kadam. 3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2022.


Book chapters on the topic "3D point cloud representation":

1

Zdobylak, Adrian, and Maciej Zieba. "Semi-supervised Representation Learning for 3D Point Clouds". In Intelligent Information and Database Systems, 480–91. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41964-6_41.

2

Liu, Jingya, Oguz Akin, and Yingli Tian. "Rethinking Pulmonary Nodule Detection in Multi-view 3D CT Point Cloud Representation". In Machine Learning in Medical Imaging, 80–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87589-3_9.

3

Miyachi, Hideo, and Koshiro Murakami. "A Study of 3D Shape Similarity Search in Point Representation by Using Machine Learning". In Advances on P2P, Parallel, Grid, Cloud and Internet Computing, 265–74. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33509-0_24.

4

Ćurković, Milan, and Damir Vučina. "Adaptive Representation of Large 3D Point Clouds for Shape Optimization". In Operations Research Proceedings, 547–53. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42902-1_74.

5

He, Tong, Dong Gong, Zhi Tian, and Chunhua Shen. "Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation". In Computer Vision – ECCV 2020, 564–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58523-5_33.

6

Sant, Rohit, Ninad Kulkarni, Ainesh Bakshi, Salil Kapur, and Kratarth Goel. "Autonomous Robot Navigation: Path Planning on a Detail-Preserving Reduced-Complexity Representation of 3D Point Clouds". In Lecture Notes in Computer Science, 173–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39402-7_18.

7

Héno, Raphaële, and Laure Chandelier. "Point Cloud Processing". In 3D Modeling of Buildings, 133–81. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118648889.ch5.

8

Weinmann, Martin. "Point Cloud Registration". In Reconstruction and Analysis of 3D Scenes, 55–110. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_4.

9

Li, Ge, Wei Gao, and Wen Gao. "MPEG AI-Based 3D Graphics Coding Standard". In Point Cloud Compression, 219–41. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1957-0_10.

10

Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Deep Learning-Based Point Cloud Analysis". In 3D Point Cloud Analysis, 53–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_3.


Conference papers on the topic "3D point cloud representation":

1

Eybposh, M. Hossein, Changjia Cai, Diptodip Deb, Miguel A. B. Schott, Longtian Ye, Gert-Jan Both, Srinivas C. Turaga, Jose Rodriguez-Romaguera, and Nicolas C. Pégard. "Computer-Generated Holography Using Point Cloud Processing Neural Networks". In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/3d.2023.dw5a.4.

Abstract:
We present a new deep-learning-based method for Computer Generated Holography (CGH) with point cloud representation. Our technique, DeepCGH2.0, dramatically reduces the size of the target image representations and synthesizes holograms in less than 2 milliseconds.
2

Wang, Lihui, Jing Chen, and Baozong Yuan. "Simplified representation for 3D point cloud data". In 2010 10th International Conference on Signal Processing (ICSP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icosp.2010.5656972.

3

Li, Zongmin, Yupeng Zhang, and Yun Bai. "Geometric Invariant Representation Learning for 3D Point Cloud". In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai52525.2021.00235.

4

Feng, Tuo, Wenguan Wang, Xiaohan Wang, Yi Yang, and Qinghua Zheng. "Clustering based Point Cloud Representation Learning for 3D Analysis". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00761.

5

Su, Zhuo, Max Welling, Matti Pietikainen, and Li Liu. "SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation". In 2022 International Conference on 3D Vision (3DV). IEEE, 2022. http://dx.doi.org/10.1109/3dv57658.2022.00084.

6

Ishikawa, H., and H. Saito. "Point cloud representation of 3D shape for laser-plasma scanning 3D display". In IECON 2008 - 34th Annual Conference of IEEE Industrial Electronics Society. IEEE, 2008. http://dx.doi.org/10.1109/iecon.2008.4758248.

7

Kambhamettu, Chandra. "3DSAINT Representation for 3D Point Clouds". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2023. http://dx.doi.org/10.1109/cvprw59228.2023.00277.

8

Fan, Tingyu, Linyao Gao, Yiling Xu, Zhu Li, and Dong Wang. "D-DPCC: Deep Dynamic Point Cloud Compression via 3D Motion Prediction". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/126.

Abstract:
The non-uniformly distributed nature of the 3D Dynamic Point Cloud (DPC) brings significant challenges to its efficient inter-frame compression. This paper proposes a novel 3D sparse convolution-based Deep Dynamic Point Cloud Compression (D-DPCC) network to compensate and compress the DPC geometry with 3D motion estimation and motion compensation in the feature space. In the proposed D-DPCC network, we design a Multi-scale Motion Fusion (MMF) module to accurately estimate the 3D optical flow between the feature representations of adjacent point cloud frames. Specifically, we utilize a 3D sparse convolution-based encoder to obtain the latent representation for motion estimation in the feature space and introduce the proposed MMF module for fused 3D motion embedding. Besides, for motion compensation, we propose a 3D Adaptively Weighted Interpolation (3DAWI) algorithm with a penalty coefficient to adaptively decrease the impact of distant neighbours. We compress the motion embedding and the residual with a lossy autoencoder-based network. To our knowledge, this paper is the first work proposing an end-to-end deep dynamic point cloud compression framework. Experimental results show that the proposed D-DPCC framework achieves an average 76% BD-Rate (Bjontegaard Delta Rate) gain against state-of-the-art Video-based Point Cloud Compression (V-PCC) v13 in inter mode.
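The motion-compensation idea can be illustrated with a simple adaptively weighted interpolation, where a penalty exponent suppresses distant neighbours. This is an illustrative stand-in for the paper's 3DAWI; the exact weighting and penalty coefficient in D-DPCC differ:

```python
def awi(query, neighbors, feats, penalty=2.0):
    """Inverse-distance weighted interpolation of neighbor feature vectors,
    with a penalty exponent that suppresses distant neighbors (illustrative
    stand-in for the paper's 3DAWI)."""
    weights = []
    for p in neighbors:
        d2 = sum((p[i] - query[i]) ** 2 for i in range(3))
        # larger `penalty` makes distant neighbors contribute less
        weights.append(1.0 / (d2 ** (penalty / 2) + 1e-8))
    s = sum(weights)
    return [sum(w * f[k] for w, f in zip(weights, feats)) / s
            for k in range(len(feats[0]))]
```

A query point close to one neighbor and far from another ends up dominated by the near neighbor's features, which is the behaviour the penalty coefficient is designed to control.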
9

Nguyen, Van Tung, Trung-Thien Tran, Van-Toan Cao, and Denis Laurendeau. "3D Point Cloud Registration Based on the Vector Field Representation". In 2013 2nd IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2013. http://dx.doi.org/10.1109/acpr.2013.111.

10

Wells, Lee J., Mohammed S. Shafae, and Jaime A. Camelio. "Automated Part Inspection Using 3D Point Clouds". In ASME 2013 International Manufacturing Science and Engineering Conference collocated with the 41st North American Manufacturing Research Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/msec2013-1212.

Abstract:
Ever-advancing sensor and measurement technologies continually provide new opportunities for knowledge discovery and quality control (QC) strategies for complex manufacturing systems. One such state-of-the-art measurement technology currently being implemented in industry is the 3D laser scanner, which can rapidly provide millions of data points to represent an entire manufactured part's surface. This gives 3D laser scanners a significant advantage over competing technologies that typically provide tens or hundreds of data points. Consequently, data collected from 3D laser scanners have great potential to be used for inspecting parts for surface and feature abnormalities. The current use of 3D point clouds for part inspection falls into two main categories: 1) extracting feature parameters, which does not complement the nature of 3D point clouds as it wastes valuable data, and 2) an ad hoc manual process where a visual representation of a point cloud (usually as deviations from nominal) is analyzed, which tends to suffer from slow, inefficient, and inconsistent inspection results. Therefore, our paper proposes an approach to automate the latter approach to 3D point cloud inspection. The proposed approach uses a newly developed adaptive generalized likelihood ratio (AGLR) technique to identify the most likely size, shape, and magnitude of a potential fault within the point cloud, which transforms the ad hoc visual inspection approach into a statistically viable automated inspection solution. In order to aid practitioners in designing and implementing an AGLR-based inspection process, our paper also reports the performance of the AGLR with respect to the probability of detecting specific size and magnitude faults, in addition to the probability of false alarms.
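The AGLR idea of searching over fault sizes and locations can be sketched in one dimension: for each candidate window over a profile of deviations from nominal, a generalized likelihood ratio statistic for a mean shift is computed, and the best-scoring window is reported. The window sizes and Gaussian noise model here are illustrative assumptions, not the paper's actual formulation:

```python
def glr_scan(devs, sigma, sizes=(3, 5, 9)):
    """Scan candidate window sizes over a 1D deviation profile and return
    (start, size, statistic) for the most likely mean-shift fault.
    For known noise sigma, the GLR statistic for a mean shift in a window
    of n samples with sample mean m is n*m^2 / (2*sigma^2)."""
    best = (0, sizes[0], float("-inf"))
    for n in sizes:
        for s in range(len(devs) - n + 1):
            m = sum(devs[s:s + n]) / n
            stat = n * m * m / (2 * sigma * sigma)
            if stat > best[2]:
                best = (s, n, stat)
    return best
```

A three-sample bump in an otherwise flat profile is localized with the matching window size, since a too-large window dilutes the shifted mean.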

Organization reports on the topic "3D point cloud representation":

1

Smith, Curtis L., Steven Prescott, Kellie Kvarfordt, Ram Sampath, and Katie Larson. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development. Office of Scientific and Technical Information (OSTI), September 2015. http://dx.doi.org/10.2172/1245516.

2

Blundell, S., and Philip Devine. Creation, transformation, and orientation adjustment of a building façade model for feature segmentation: transforming 3D building point cloud models into 2D georeferenced feature overlays. Engineer Research and Development Center (U.S.), January 2020. http://dx.doi.org/10.21079/11681/35115.

3

Ennasr, Osama, Charles Ellison, Anton Netchaev, Ahmet Soylemezoglu, and Garry Glaspell. Unmanned ground vehicle (UGV) path planning in 2.5D and 3D. Engineer Research and Development Center (U.S.), August 2023. http://dx.doi.org/10.21079/11681/47459.

Abstract:
Herein, we explored path planning in 2.5D and 3D for unmanned ground vehicle (UGV) applications. For real-time 2.5D navigation, we investigated generating 2.5D occupancy grids using either elevation or traversability to determine path costs. Compared to elevation, traversability, which used a layered approach generated from surface normals, was more robust for the tested environments. A layered approach was also used for 3D path planning. While it was possible to use the 3D approach in real time, the time required to generate 3D meshes meant that the only way to effectively path plan was to use a preexisting point cloud environment. As a result, we explored generating 3D meshes from a variety of sources, including handheld sensors, UGVs, UAVs, and aerial lidar.
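The surface-normal-based traversability test described above can be sketched as follows: a cell is considered traversable when its surface normal is close to vertical. The slope threshold and the three-point normal estimate are simplifying assumptions, not the report's actual layered implementation:

```python
import math

def normal_of(p1, p2, p3):
    """Unit normal of the plane through three non-collinear 3D points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

def traversable(p1, p2, p3, max_slope_deg=30.0):
    """True when the local surface tilts less than max_slope_deg from horizontal."""
    n = normal_of(p1, p2, p3)
    tilt = math.degrees(math.acos(abs(n[2])))  # angle between normal and vertical
    return tilt <= max_slope_deg
```

A flat patch passes the test while a roughly 45-degree ramp fails it, which is the per-cell cost signal a 2.5D planner can consume.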
4

Ennasr, Osama, Michael Paquette, and Garry Glaspell. UGV SLAM payload for low-visibility environments. Engineer Research and Development Center (U.S.), September 2023. http://dx.doi.org/10.21079/11681/47589.

Abstract:
Herein, we explore using a low size, weight, power, and cost unmanned ground vehicle payload designed specifically for low-visibility environments. The proposed payload simultaneously localizes and maps in GPS-denied environments via waypoint navigation. This solution utilizes a diverse sensor payload that includes wheel encoders, inertial measurement unit, 3D lidar, 3D ultrasonic sensors, and thermal cameras. Furthermore, the resulting 3D point cloud was compared against a survey-grade lidar.
5

Habib, Ayman, Darcy M. Bullock, Yi-Chun Lin, and Raja Manish. Road Ditch Line Mapping with Mobile LiDAR. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317354.

Abstract:
Maintenance of roadside ditches is important to avoid localized flooding and premature failure of pavements. Scheduling effective preventative maintenance requires mapping of the ditch profile to identify areas requiring excavation of long-term sediment accumulation. High-resolution, high-quality point clouds collected by mobile LiDAR mapping systems (MLMS) provide an opportunity for effective monitoring of roadside ditches and performing hydrological analyses. This study evaluated the applicability of mobile LiDAR for mapping roadside ditches for slope and drainage analyses, comparing the performance of alternative MLMS units. These MLMS included an unmanned ground vehicle, an unmanned aerial vehicle, a portable backpack system along with its vehicle-mounted version, a medium-grade wheel-based system, and a high-grade wheel-based system. Point clouds from all the MLMS units were in agreement in the vertical direction within the ±3 cm range for solid surfaces, such as paved roads, and the ±7 cm range for surfaces with vegetation. The portable backpack system, which could be carried by a surveyor or mounted on a vehicle, was the most flexible MLMS. The report concludes that due to the flexibility and cost-effectiveness of the portable backpack system, it is the preferred platform for mapping roadside ditches, followed by the medium-grade wheel-based system. Furthermore, a framework for ditch line characterization is proposed and tested using datasets acquired by the medium-grade wheel-based and vehicle-mounted portable systems over a state highway. An existing ground filtering approach is modified to handle variations in point density of mobile LiDAR data. Hydrological analyses, including flow direction and flow accumulation, are applied to extract the drainage network from the digital terrain model (DTM). Cross-sectional/longitudinal profiles of the ditch are automatically extracted from LiDAR data and visualized in 3D point clouds and 2D images.
The slope derived from the LiDAR data was found to be very close to highway cross slope design standards of 2% on driving lanes, 4% on shoulders, as well as a 6-by-1 slope for ditch lines. Potential flooded regions are identified by detecting areas with no LiDAR return, and a recall score of 54% and 92% was achieved by the medium-grade wheel-based and vehicle-mounted portable systems, respectively.
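The flow-direction step of the hydrological analysis above can be illustrated with a toy D8 rule, which routes each DEM cell toward its steepest-descent neighbor. This is a generic sketch; the report's processing operates on LiDAR-derived DTMs at a much larger scale:

```python
def d8_flow_direction(dem):
    """For each cell of a small DEM grid, return the (dr, dc) step toward
    the steepest-descent neighbor, or None for pits and flats
    (a toy version of the D8 rule used in drainage-network extraction)."""
    rows, cols = len(dem), len(dem[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best, step = 0.0, None
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        # drop per unit distance (diagonal steps are longer)
                        dist = (dr * dr + dc * dc) ** 0.5
                        drop = (dem[r][c] - dem[nr][nc]) / dist
                        if drop > best:
                            best, step = drop, (dr, dc)
            out[r][c] = step
    return out
```

Accumulating these per-cell directions downstream yields the flow-accumulation raster from which the drainage network is thresholded.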
