Academic literature on the topic 'Point cloud instance segmentation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Point cloud instance segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Point cloud instance segmentation"

1

Zhao, Lin, and Wenbing Tao. "JSNet: Joint Instance and Semantic Segmentation of 3D Point Clouds." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12951–58. http://dx.doi.org/10.1609/aaai.v34i07.6994.

Abstract:
In this paper, we propose a novel joint instance and semantic segmentation approach, called JSNet, to address the instance and semantic segmentation of 3D point clouds simultaneously. Firstly, we build an effective backbone network to extract robust features from the raw point clouds. Secondly, to obtain more discriminative features, a point cloud feature fusion module is proposed to fuse the different layer features of the backbone network. Furthermore, a joint instance semantic segmentation module is developed to transform semantic features into instance embedding space, and the transformed features are then further fused with instance features to facilitate instance segmentation. Meanwhile, this module also aggregates instance features into semantic feature space to promote semantic segmentation. Finally, the instance predictions are generated by applying a simple mean-shift clustering on instance embeddings. We evaluate the proposed JSNet on the large-scale 3D indoor point cloud dataset S3DIS and the part dataset ShapeNet, and compare it with existing approaches. Experimental results demonstrate that our approach outperforms the state-of-the-art method in 3D instance segmentation, with a significant improvement in 3D semantic prediction, and that our method is also beneficial for part segmentation. The source code for this work is available at https://github.com/dlinzhao/JSNet.
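The final step described in this abstract, clustering per-point instance embeddings with mean-shift to obtain instance predictions, can be sketched as follows. This is a generic illustration, not the authors' code; the embedding dimensionality, bandwidth, and iteration count are assumptions.

```python
import numpy as np

def mean_shift_instances(embeddings, bandwidth=1.0, n_iter=20):
    """Cluster per-point instance embeddings with a naive mean-shift.

    embeddings: (N, D) array of per-point embedding vectors.
    Returns an (N,) integer array of instance labels.
    """
    modes = embeddings.copy()
    for _ in range(n_iter):
        # Shift each mode to the mean of the data points within `bandwidth`.
        dists = np.linalg.norm(modes[:, None, :] - embeddings[None, :, :], axis=-1)
        modes = np.stack([embeddings[row < bandwidth].mean(axis=0) for row in dists])
    # Merge converged modes that lie within half a bandwidth of each other.
    labels = np.full(len(modes), -1, dtype=int)
    next_label = 0
    for i in range(len(modes)):
        if labels[i] != -1:
            continue
        close = np.linalg.norm(modes - modes[i], axis=1) < bandwidth / 2
        labels[close & (labels == -1)] = next_label
        next_label += 1
    return labels
```

In practice, libraries such as scikit-learn provide an optimized `MeanShift`; the quadratic pairwise distance here is only for readability.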
2

Agapaki, Eva, and Ioannis Brilakis. "Instance Segmentation of Industrial Point Cloud Data." Journal of Computing in Civil Engineering 35, no. 6 (November 2021): 04021022. http://dx.doi.org/10.1061/(asce)cp.1943-5487.0000972.

3

Liu, Hui, Ciyun Lin, Dayong Wu, and Bowen Gong. "Slice-Based Instance and Semantic Segmentation for Low-Channel Roadside LiDAR Data." Remote Sensing 12, no. 22 (November 21, 2020): 3830. http://dx.doi.org/10.3390/rs12223830.

Abstract:
More and more scholars are adopting light detection and ranging (LiDAR) as a roadside sensor to obtain traffic flow data. Filtering and clustering are common methods to extract pedestrians and vehicles from point clouds, but this kind of method ignores the impact of environmental information on traffic. The segmentation process is a crucial part of detailed scene understanding, which can be especially helpful for locating, recognizing, and classifying objects in certain scenarios. However, there are few studies on the segmentation of low-channel (16 channels in this paper) roadside 3D LiDAR. This paper presents a novel slice-based segmentation method for point clouds of roadside LiDAR. The proposed method can be divided into two parts: an instance segmentation part and a semantic segmentation part. The instance segmentation of the point cloud is based on the region growing method, and we propose a seed point generation method for low-channel LiDAR data. Furthermore, we optimized the instance segmentation effect under occlusion. The semantic segmentation of the point cloud is realized by classifying and labeling the objects obtained by instance segmentation. For labeling static objects, we represent and classify a given object through the related features derived from its slices. For labeling moving objects, we propose a recurrent neural network (RNN)-based model, whose accuracy can be up to 98.7%. The results imply that the slice-based method can obtain a good segmentation effect and that slices have good potential for point cloud segmentation.
4

Gao, Zhiyong, and Jianhong Xiang. "Real-time 3D Object Detection Using Improved Convolutional Neural Network Based on Image-driven Point Cloud." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 14, no. 8 (December 23, 2021): 826–36. http://dx.doi.org/10.2174/2352096514666211026142721.

Abstract:
Background: When detecting objects directly from the 3D point cloud, the natural 3D patterns and invariance of 3D data are often obscured. Objective: In this work, we aim to study 3D object detection from discrete, disordered and sparse 3D point clouds. Methods: The CNN comprises the frustum sequence module, the 3D instance segmentation module SNET, the 3D point cloud transformation module T-NET, and the 3D bounding box estimation module ENET. The search space of the object is determined by the frustum sequence module. The instance segmentation of the point cloud is performed by the 3D instance segmentation module. The 3D coordinates of the object are confirmed by the transformation module and the 3D bounding box estimation module. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while having real-time capability. Conclusion: We achieve real-time 3D object detection by proposing an improved Convolutional Neural Network (CNN) based on image-driven point clouds.
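The frustum search-space step described in this abstract can be illustrated with a small sketch: given a 2D detection box and pinhole camera intrinsics, keep only the LiDAR points that project inside the box. This is a generic frustum point-selection example, not the authors' implementation; the camera convention and parameter names are assumptions.

```python
import numpy as np

def frustum_points(points, box2d, fx, fy, cx, cy):
    """Select 3D points whose pinhole projection falls inside a 2D box.

    Camera frame: x right, y down, z forward (optical axis).
    box2d = (u_min, v_min, u_max, v_max) in pixels.
    """
    z = points[:, 2]
    in_front = z > 0
    z_safe = np.where(in_front, z, 1.0)  # avoid dividing by zero/negative depth
    u = fx * points[:, 0] / z_safe + cx
    v = fy * points[:, 1] / z_safe + cy
    u_min, v_min, u_max, v_max = box2d
    inside = in_front & (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points[inside]
```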
5

Karara, Ghizlane, Rafika Hajji, and Florent Poux. "3D Point Cloud Semantic Augmentation: Instance Segmentation of 360° Panoramas by Deep Learning Techniques." Remote Sensing 13, no. 18 (September 13, 2021): 3647. http://dx.doi.org/10.3390/rs13183647.

Abstract:
Semantic augmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionised image segmentation and classification, its impact on point clouds is still an active research field. In this paper, we propose an instance segmentation and augmentation method for 3D point clouds using deep learning architectures. We show the potential of an indirect approach using 2D images and a Mask R-CNN (Region-Based Convolutional Neural Network). Our method consists of four core steps. We first project the point cloud onto panoramic 2D images using three types of projections: spherical, cylindrical, and cubic. Next, we homogenise the resulting images to correct the artefacts and the empty pixels so that they are comparable to images available in common training libraries. These images are then used as input to the Mask R-CNN neural network, designed for 2D instance segmentation. Finally, the obtained predictions are reprojected to the point cloud to obtain the segmentation results. We link the results to a context-aware neural network to augment the semantics. Several tests were performed on different datasets to test the adequacy of the method and its potential for generalisation. The developed algorithm uses only the attributes X, Y, Z, and a projection centre (virtual camera) position as inputs.
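The first core step above, projecting the point cloud onto a panoramic 2D image from a virtual camera at the projection centre, can be sketched for the spherical case. The equirectangular mapping and image resolution here are assumptions for illustration, not the authors' exact code.

```python
import numpy as np

def spherical_projection(points, width=1024, height=512):
    """Map XYZ points to equirectangular pixel coordinates (u, v) around a
    virtual camera at the origin; also returns the range of each point."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                      # [-pi, pi] around the vertical axis
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return u, v, r
```

Reprojection of the 2D predictions back to the cloud then amounts to reading the label at each point's (u, v) pixel.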
6

Cao, Yu, Yancheng Wang, Yifei Xue, Huiqing Zhang, and Yizhen Lao. "FEC: Fast Euclidean Clustering for Point Cloud Segmentation." Drones 6, no. 11 (October 27, 2022): 325. http://dx.doi.org/10.3390/drones6110325.

Abstract:
Segmentation from point cloud data is essential in many applications, such as remote sensing, mobile robots, or autonomous cars. However, the point clouds captured by 3D range sensors are commonly sparse and unstructured, challenging efficient segmentation. A fast solution for point cloud instance segmentation with small computational demands is lacking. To this end, we propose a novel fast Euclidean clustering (FEC) algorithm, which applies a point-wise scheme in place of the cluster-wise scheme used in existing works. The proposed method avoids constantly traversing every point in each nested loop, which is time- and memory-consuming. Our approach is conceptually simple, easy to implement (40 lines in C++), and runs two orders of magnitude faster than classical segmentation methods while producing high-quality results.
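For context, the classical cluster-wise Euclidean clustering that FEC accelerates can be sketched as a flood fill over a kd-tree. This is the baseline scheme, not the authors' point-wise FEC, and the distance tolerance is an assumed parameter.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=0.5):
    """Classical Euclidean clustering: flood-fill over all neighbours
    within `tol`, growing one cluster at a time (the cluster-wise scheme)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            idx = queue.popleft()
            # every unvisited point within `tol` joins the current cluster
            for nb in tree.query_ball_point(points[idx], tol):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    return labels
```

The repeated radius queries inside the inner loop are exactly the cost the FEC paper targets.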
7

Li, Dawei, Jinsheng Li, Shiyu Xiang, and Anqi Pan. "PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants." Plant Phenomics 2022 (May 23, 2022): 1–20. http://dx.doi.org/10.34133/2022/9787643.

Abstract:
Phenotyping of plant growth improves the understanding of complex genetic traits and eventually expedites the development of modern breeding and intelligent agriculture. In phenotyping, segmentation of 3D point clouds of plant organs such as leaves and stems contributes to automatic growth monitoring and reflects the extent of stress received by the plant. In this work, we first proposed Voxelized Farthest Point Sampling (VFPS), a novel point cloud downsampling strategy, to prepare our plant dataset for training of deep neural networks. Then, a deep learning network, PSegNet, was specially designed for segmenting point clouds of several species of plants. The effectiveness of PSegNet originates from three new modules: the Double-Neighborhood Feature Extraction Block (DNFEB), the Double-Granularity Feature Fusion Module (DGFFM), and the Attention Module (AM). After training on the plant dataset prepared with VFPS, the network can simultaneously realize semantic segmentation and leaf instance segmentation for three plant species. Compared to several mainstream networks such as PointNet++, ASIS, SGPN, and PlantNet, PSegNet obtained the best segmentation results quantitatively and qualitatively. In semantic segmentation, PSegNet achieved 95.23%, 93.85%, 94.52%, and 89.90% for the mean Prec, Rec, F1, and IoU, respectively. In instance segmentation, PSegNet achieved 88.13%, 79.28%, 83.35%, and 89.54% for the mPrec, mRec, mCov, and mWCov, respectively.
8

Zhao, Guangyuan, Xue Wan, Yaolin Tian, Yadong Shao, and Shengyang Li. "3D Component Segmentation Network and Dataset for Non-Cooperative Spacecraft." Aerospace 9, no. 5 (May 1, 2022): 248. http://dx.doi.org/10.3390/aerospace9050248.

Abstract:
Spacecraft component segmentation is one of the key technologies that enable autonomous navigation and manipulation for non-cooperative spacecraft in OOS (On-Orbit Service). While most studies on spacecraft component segmentation are based on 2D image segmentation, this paper proposes spacecraft component segmentation methods based on 3D point clouds. Firstly, we propose a multi-source 3D spacecraft component segmentation dataset, including point clouds from lidar and VisualSFM (Visual Structure From Motion). Then, an improved PointNet++ based 3D component segmentation network named 3DSatNet is proposed, with new geometry-aware FE (Feature Extraction) layers and a new loss function to tackle the data imbalance problem, in which the numbers of points of different components differ greatly and the density distribution of the point cloud is not uniform. Moreover, when partial prior point clouds of the target spacecraft are known, we propose a 3DSatNet-Reg network by adding a Teaser-based 3D point cloud registration module to 3DSatNet to obtain higher component segmentation accuracy. Experiments carried out on our proposed dataset demonstrate that the proposed 3DSatNet achieves 1.9% higher instance mIoU than PointNet++_SSG, and the highest IoU for the antenna class in both lidar point clouds and visual point clouds compared with popular networks. Furthermore, our algorithm has been deployed on the embedded AI computing device Nvidia Jetson TX2, which has the potential to be used on orbit, with a processing speed of 0.228 s per point cloud with 20,000 points.
9

Huang, Pin-Hao, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu. "Text-Guided Graph Neural Networks for Referring 3D Instance Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1610–18. http://dx.doi.org/10.1609/aaai.v35i2.16253.

Abstract:
This paper addresses a new task called referring 3D instance segmentation, which aims to segment out the target instance in a 3D scene given a query sentence. Previous work on scene understanding has explored visual grounding with natural language guidance, yet the emphasis is mostly constrained on images and videos. We propose a Text-guided Graph Neural Network (TGNN) for referring 3D instance segmentation on point clouds. Given a query sentence and the point cloud of a 3D scene, our method learns to extract per-point features and predicts an offset to shift each point toward its object center. Based on the point features and the offsets, we cluster the points to produce fused features and coordinates for the candidate objects. The resulting clusters are modeled as nodes in a Graph Neural Network to learn the representations that encompass the relation structure for each candidate object. The GNN layers leverage each object's features and its relations with neighbors to generate an attention heatmap for the input sentence expression. Finally, the attention heatmap is used to "guide" the aggregation of information from neighborhood nodes. Our method achieves state-of-the-art performance on referring 3D instance segmentation and 3D localization on ScanRefer, Nr3D, and Sr3D benchmarks, respectively.
10

Zhang, Yongjun, Wangshan Yang, Xinyi Liu, Yi Wan, Xianzhang Zhu, and Yuhui Tan. "Unsupervised Building Instance Segmentation of Airborne LiDAR Point Clouds for Parallel Reconstruction Analysis." Remote Sensing 13, no. 6 (March 17, 2021): 1136. http://dx.doi.org/10.3390/rs13061136.

Abstract:
Efficient building instance segmentation is necessary for many applications such as parallel reconstruction, management and analysis. However, most of the existing instance segmentation methods still suffer from low completeness, low correctness and low quality for building instance segmentation, which is especially obvious for complex building scenes. This paper proposes a novel unsupervised building instance segmentation (UBIS) method of airborne Light Detection and Ranging (LiDAR) point clouds for parallel reconstruction analysis, which combines a clustering algorithm and a novel model consistency evaluation method. The proposed method first divides building point clouds into building instances by the improved kd tree 2D shared nearest neighbor clustering algorithm (Ikd-2DSNN). Then, the geometric feature of the building instance is obtained using the model consistency evaluation method, which is used to determine whether the building instance is a single building instance or a multi-building instance. Finally, for multi-building instances, the improved kd tree 3D shared nearest neighbor clustering algorithm (Ikd-3DSNN) is used to divide multi-building instances again to improve the accuracy of building instance segmentation. Our experimental results demonstrate that the proposed UBIS method obtained good performance for various buildings in different scenes such as high-rise buildings, podium buildings and a residential area with detached houses. A comparative analysis confirms that the proposed UBIS method performed better than state-of-the-art methods.
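The shared-nearest-neighbor clustering at the heart of the Ikd-2DSNN step can be illustrated in simplified form: link two points when they are mutual k-nearest neighbours that share enough common neighbours, then take connected components. This is a generic SNN sketch, not the paper's improved kd-tree algorithm; `k` and `min_shared` are assumed parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def snn_cluster_2d(points, k=4, min_shared=2):
    """Cluster 2D points by shared-nearest-neighbor linking plus
    connected components (union-find)."""
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k + 1)
    neigh = [set(row) for row in knn[:, 1:]]  # drop self from each kNN list
    parent = list(range(len(points)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for i in range(len(points)):
        for j in neigh[i]:
            # link mutual kNN pairs that share enough neighbours
            if i in neigh[j] and len(neigh[i] & neigh[j]) >= min_shared:
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(len(points))]
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```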

Dissertations / Theses on the topic "Point cloud instance segmentation"

1

Gujar, Sanket. "Pointwise and Instance Segmentation for 3D Point Cloud." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1290.

Abstract:
The camera is the cheapest and most computationally real-time option for detecting or segmenting the environment for an autonomous vehicle, but it does not provide depth information and is unreliable at night, in bad weather, and during tunnel flash-outs. The risk of an accident gets higher for autonomous cars driven by a camera in such situations. The industry has been relying on LiDAR for the past decade to solve this problem and provide depth information about the environment, but LiDAR also has its shortcomings. Industry methods commonly create a projection image and run a detection and localization network on it for inference, but LiDAR sees obscurants in bad weather and is sensitive enough to detect snow, making robustness difficult for projection-based methods. We propose a novel pointwise and instance segmentation deep learning architecture for point clouds, focused on self-driving applications. The model depends only on LiDAR data, making it light-invariant and overcoming the shortcomings of the camera in the perception stack. The pipeline takes advantage of both global and local/edge features of points in point clouds to generate high-level features. We also propose Pointer-Capsnet, which is an extension of CapsNet for small 3D point clouds.
2

Konradsson, Albin, and Gustav Bohman. "3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.

Abstract:
This thesis provides a comparison between instance segmentation methods using point clouds and depth images. Specifically, their performance on cluttered scenes of irregular objects in an industrial environment is investigated. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of datatype. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industry application. Two datasets are generated. One dataset has low complexity, containing only boxes. The other has higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks. We chose PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results support that there may be benefits of using a point cloud representation over depth images. PointGroup performs better in terms of the chosen metric on both datasets. On low-complexity scenes, the inference times are similar between the two methods tested. However, on higher-complexity scenes, Mask R-CNN is significantly faster.
3

Zhu, Charlotte. "Point cloud segmentation for mobile robot manipulation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106400.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 47-48).
In this thesis, we develop a system for estimating a belief state for a scene over multiple observations of the scene. Given as input a sequence of observed RGB-D point clouds of a scene, a list of known objects in the scene and their pose distributions as a prior, and a black-box object detector, our system outputs a belief state of what is believed to be in the scene. This belief state consists of the states of known objects, walls, the floor, and "stuff" in the scene based on the observed point clouds. The system first segments the observed point clouds and then incrementally updates the belief state with each segmented point cloud.
by Charlotte Zhu.
M. Eng.
4

Kulkarni, Amey S. "Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1370.

Abstract:
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it is easy to make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, like understanding signboards, classifying traffic lights, planning paths around dynamic obstacles, etc. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying a point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest that provide cues to the ego-vehicle for planning its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations like range images and regular grids, this work uses a modern representation of LiDAR scans using permutohedral lattices. This representation makes it easy to represent unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to perform motion segmentation. The network architecture takes in two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn what features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
5

He, Linbo. "Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data : Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157705.

Abstract:
Semantic segmentation is a key approach to comprehensive image data analysis. It can be applied to analyze 2D images, videos, and even point clouds that contain 3D data points. On the first two problems, CNNs have achieved remarkable progress, but on point cloud segmentation, the results are less satisfactory due to challenges such as limited memory resources and difficulties in 3D point annotation. One of the research studies carried out by the Computer Vision Lab at Linköping University aimed to ease the semantic segmentation of 3D point clouds. The idea is that by first projecting 3D data points to 2D space and then focusing only on the analysis of 2D images, we can reduce the overall workload for the segmentation process as well as exploit the existing well-developed 2D semantic segmentation techniques. In order to improve the performance of CNNs for 2D semantic segmentation, the study used input data derived from different modalities. However, how different modalities can be optimally fused is still an open question. Based on the above-mentioned study, this thesis aims to improve the multistream framework architecture. More concretely, we investigate how different singlestream architectures impact the multistream framework with a given fusion method, and how different fusion methods contribute to the overall performance of a given multistream framework. As a result, our proposed fusion architecture outperformed all the investigated traditional fusion methods. Along with the best singlestream candidate and a few additional training techniques, our final proposed multistream framework obtained a relative gain of 7.3% mIoU compared to the baseline on the Semantic3D point cloud test set, increasing the ranking from 12th to 5th position on the benchmark leaderboard.
6

Awadallah, Mahmoud Sobhy Tawfeek. "Image Analysis Techniques for LiDAR Point Cloud Segmentation and Surface Estimation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73055.

Abstract:
Light Detection And Ranging (LiDAR), as well as many other applications and sensors, involves segmenting sparse sets of points (point clouds) for which point density is the only discriminating feature. The segmentation of these point clouds is challenging for several reasons, including the fact that the points are not associated with a regular grid. Moreover, the presence of noise, particularly impulsive noise with varying density, can make it difficult to obtain a good segmentation using traditional techniques, including the algorithms that had been developed to process LiDAR data. This dissertation introduces novel algorithms and frameworks based on statistical techniques and image analysis in order to segment and extract surfaces from sparse noisy point clouds. We introduce an adaptive method for mapping point clouds onto an image grid followed by a contour detection approach that is based on an enhanced version of region-based Active Contours Without Edges (ACWE). We also proposed a noise reduction method using a Bayesian approach and incorporated it, along with other noise reduction approaches, into a joint framework that produces robust results. We combined the aforementioned techniques with a statistical surface refinement method to introduce a novel framework to detect ground and canopy surfaces in micropulse photon-counting LiDAR data. The algorithm is fully automatic and uses no prior elevation or geographic information to extract surfaces. Moreover, we propose a novel segmentation framework for noisy point clouds in the plane based on a Markov random field (MRF) optimization that we call Point Cloud Density-based Segmentation (PCDS). We also developed a large synthetic dataset of in-plane point clouds that includes either a set of randomly placed, sized and oriented primitive objects (circle, rectangle and triangle) or an arbitrary shape that forms a simple approximation of LiDAR point clouds.
The experiments performed on a large number of real LiDAR and synthetic point clouds showed that our proposed frameworks and algorithms outperform the state-of-the-art algorithms in terms of segmentation accuracy and surface RMSE.
Ph. D.
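The first stage of the pipeline in the abstract above, mapping a sparse planar point cloud onto an image grid so that image-analysis tools can be applied, can be sketched as a simple density rasterisation. The dissertation's adaptive grid sizing is omitted here; the fixed bin count is an assumption for illustration.

```python
import numpy as np

def points_to_density_image(points, bins=64):
    """Rasterise a planar (2D) point cloud into a density image by
    counting the points that fall into each grid cell."""
    img, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    return img
```

Contour detection (e.g. ACWE) can then run on `img` exactly as it would on an ordinary grayscale image.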
7

Šooš, Marek. "Segmentace 2D Point-cloudu pro proložení křivkami." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444985.

Abstract:
The presented diploma thesis deals with the division of points into homogeneous groups. The work provides a broad overview of the current state of this topic and a brief explanation of the principles of the main segmentation methods. Five algorithms are selected from the analyzed articles and implemented. The work defines the principles of the selected algorithms and explains their mathematical models. For each algorithm, a description of the code design is also given. The diploma thesis also contains a cross comparison of the segmentation capabilities of the individual algorithms on generated as well as measured data. The results of the curve extraction are compared graphically and numerically. The work concludes with a graph of runtime as a function of the number of points and a table comparing the algorithms in specific areas.
8

Jagbrant, Gustav. "Autonomous Crop Segmentation, Characterisation and Localisation." Thesis, Linköpings universitet, Institutionen för systemteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97374.

Abstract:
Orchards demand large areas of land, thus they are often situated far from major population centres. As a result it is often difficult to obtain the necessary personnel, limiting both growth and productivity. However, if autonomous robots could be integrated into the operation of the orchard, the manpower demand could be reduced. A key problem for any autonomous robot is localisation; how does the robot know where it is? In agriculture robots, the most common approach is to use GPS positioning. However, in an orchard environment, the dense and tall vegetation restricts the usage to large robots that reach above the surroundings. In order to enable the use of smaller robots, it is instead necessary to use a GPS independent system. However, due to the similarity of the environment and the lack of strong recognisable features, it appears unlikely that typical non-GPS solutions will prove successful. Therefore we present a GPS independent localisation system, specifically aimed for orchards, that utilises the inherent structure of the surroundings. Furthermore, we examine and individually evaluate three related sub-problems. The proposed system utilises a 3D point cloud created from a 2D LIDAR and the robot’s movement. First, we show how the data can be segmented into individual trees using a Hidden Semi-Markov Model. Second, we introduce a set of descriptors for describing the geometric characteristics of the individual trees. Third, we present a robust localisation method based on Hidden Markov Models. Finally, we propose a method for detecting segmentation errors when associating new tree measurements with previously measured trees. Evaluation shows that the proposed segmentation method is accurate and yields very few segmentation errors. Furthermore, the introduced descriptors are determined to be consistent and informative enough to allow localisation. Third, we show that the presented localisation method is robust both to noise and segmentation errors. 
Finally it is shown that a significant majority of all segmentation errors can be detected without falsely labeling correct segmentations as incorrect.
Since fruit orchards require large land areas, they are often located far from major population centers. This makes it difficult to find sufficient labor and limits opportunities for expansion. Integrating autonomous robots into orchard operations could make the work more efficient and reduce the need for labor. A key problem for any autonomous robot is localization: how does the robot know where it is? For agricultural robots, the standard solution is GPS positioning. This is problematic in orchards, however, since the tall, dense vegetation restricts its use to larger robots that reach above the surrounding canopy. Enabling the use of smaller robots instead requires a GPS-independent localization system. This is complicated by the uniform surroundings and the lack of distinctive landmarks, which makes it unlikely that existing standard solutions will work in this environment. We therefore present a GPS-independent localization system, aimed specifically at fruit orchards, that exploits the natural structure of the environment. In addition, we investigate and evaluate three related subproblems. The proposed system uses a 3D point cloud created from a 2D LiDAR and the robot's motion. First, we show how a hidden semi-Markov model can be used to segment the dataset into individual trees. We then introduce a number of descriptors to capture the geometric shape of the trees, and show how these can be combined with a hidden Markov model to create a robust localization system. Finally, we propose a method for detecting segmentation errors when new measurements of trees are associated with previously measured trees. The proposed methods are evaluated individually and show good results. The proposed segmentation method is shown to be accurate and to produce few segmentation errors.
Furthermore, the introduced descriptors are shown to be sufficiently consistent and informative to enable localization. The presented localization method is also shown to be robust to both noise and segmentation errors. Finally, it is shown that a significant majority of all segmentation errors can be detected without falsely labeling correct segmentations as incorrect.
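The localization approach described in this abstract combines geometric tree descriptors with a hidden Markov model. As a minimal illustrative sketch (not the thesis's actual implementation), the Viterbi decoder below recovers the most likely sequence of map trees passed by the robot, given per-observation descriptor-match scores; all state, transition, and emission values here are assumed for illustration.

```python
def viterbi(obs_scores, trans, init):
    """Most likely state sequence given log-domain scores.

    obs_scores[t][s]: log-likelihood of observation t under state s
                      (e.g. how well a measured tree's descriptor
                      matches map tree s -- illustrative only)
    trans[p][s]:      log transition score from state p to state s
    init[s]:          log initial score of state s
    """
    n = len(init)
    # delta[s]: best score of any path ending in state s so far
    delta = [init[s] + obs_scores[0][s] for s in range(n)]
    backptrs = []
    for t in range(1, len(obs_scores)):
        new_delta, ptr = [], []
        for s in range(n):
            best_p = max(range(n), key=lambda p: delta[p] + trans[p][s])
            new_delta.append(delta[best_p] + trans[best_p][s] + obs_scores[t][s])
            ptr.append(best_p)
        delta = new_delta
        backptrs.append(ptr)
    # follow the back-pointers from the best final state
    path = [max(range(n), key=lambda s: delta[s])]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With three map trees, forward-biased transitions, and observations that each match one tree well, the decoder returns the forward path `[0, 1, 2]` even when individual matches are ambiguous, which is the robustness property the abstract attributes to the HMM.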
APA, Harvard, Vancouver, ISO, and other styles
9

Serra, Sabina. "Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168367.

Full text
Abstract:
Light Detection and Ranging (LiDAR) sensors have many different application areas, from revealing archaeological structures to aiding the navigation of vehicles. However, it is challenging to interpret and fully use the vast amount of unstructured data that LiDARs collect. Automatic classification of LiDAR data would ease its utilization, whether for examining structures or aiding vehicles. In recent years there have been many advances in deep learning for semantic segmentation of automotive LiDAR data, but there is less research on aerial LiDAR data. This thesis investigates the current state-of-the-art deep learning architectures and how well they perform on LiDAR data acquired by an Unmanned Aerial Vehicle (UAV). It also investigates different training techniques for class-imbalanced and limited datasets, which are common challenges for semantic segmentation networks. Lastly, this thesis investigates whether pre-training can improve the performance of the models. The LiDAR scans were first projected to range images, and then a fully convolutional semantic segmentation network was used. Three different training techniques were evaluated: weighted sampling, data augmentation, and grouping of classes. No improvement was observed from the weighted sampling, nor did grouping of classes have a substantial effect on performance. Pre-training on the large public dataset SemanticKITTI resulted in a small performance improvement, but data augmentation seemed to have the largest positive impact. The mIoU of the best model, which was trained with data augmentation, was 63.7%, and it performed very well on the classes Ground, Vegetation, and Vehicle. The other classes in the UAV dataset, Person and Structure, had very little data and were challenging for most models to classify correctly. In general, the models trained on UAV data performed similarly to the state-of-the-art models trained on automotive data.
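The range-image projection mentioned in this abstract is a common preprocessing step before applying fully convolutional networks to LiDAR data. The sketch below is a simplified illustration rather than the thesis's pipeline: it spherically projects `(x, y, z)` points into an `h × w` range image, with resolution and vertical field-of-view values that are assumed defaults, not those of the UAV sensor.

```python
import math

def point_cloud_to_range_image(points, h=16, w=360,
                               fov_up_deg=15.0, fov_down_deg=-15.0):
    """Spherically project (x, y, z) points into an h x w range image.

    Each pixel keeps the range of the closest point falling into it;
    empty pixels stay 0.0. Resolution and field of view are
    illustrative defaults, not the values used in the thesis.
    """
    fov_down = math.radians(fov_down_deg)
    fov_total = math.radians(fov_up_deg) - fov_down
    image = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        yaw = math.atan2(y, x)            # azimuth in [-pi, pi]
        pitch = math.asin(z / r)          # elevation angle
        u = int(0.5 * (yaw / math.pi + 1.0) * w)             # column
        v = int((1.0 - (pitch - fov_down) / fov_total) * h)  # row
        u = min(max(u, 0), w - 1)
        v = min(max(v, 0), h - 1)
        if image[v][u] == 0.0 or r < image[v][u]:
            image[v][u] = r
    return image
```

A point at `(1, 0, 0)` (range 1, zero azimuth and elevation) lands in the middle row and middle column of the image, after which a 2D segmentation network can be applied to the image as if it were ordinary dense input.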
APA, Harvard, Vancouver, ISO, and other styles
10

Vock, Dominik. "Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-141582.

Full text
Abstract:
Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, and forest inventory are a few examples of fields that benefit from the application of 3D surveying data. Compared to the original 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of generated 3D information. The 3D data allows detailed post-processing and better visualization of all relevant spatial information. Yet, extracting the required information from the raw scan data and generating usable visual output still requires time-consuming, complex user-driven processing with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many different works. However, the developed tools and methods usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable to certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks due to the varying requirements of the different fields of research. This thesis presents a more widely applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios, specifically the task of traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, is evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To realize this aim, this work adapts and validates various existing approaches to laser scan segmentation for accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects.
The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as regarding possible combinations with other procedures. The obtained knowledge is used to develop new algorithms and procedures that allow a satisfactory segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, this thesis presents different visualization and reconstruction methods to achieve a wider range of possible applications of the developed system for data export and utilization in different third-party software tools.
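Road-surface extraction of the kind discussed in this abstract is often bootstrapped with a RANSAC plane fit that separates the dominant ground plane from vehicles and other salient objects. The following sketch is an illustrative baseline, not the thesis's algorithm; the iteration count and inlier threshold are assumed values.

```python
import random

def ransac_plane(points, n_iters=200, threshold=0.05, seed=0):
    """Fit the dominant plane (e.g. a road surface) with RANSAC.

    Returns ((a, b, c, d), inlier_indices) for a*x + b*y + c*z + d = 0,
    where (a, b, c) is a unit normal. Parameters are illustrative.
    """
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(n_iters):
        p1, p2, p3 = rng.sample(points, 3)
        # plane normal = (p2 - p1) x (p3 - p1)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm == 0.0:
            continue  # degenerate (collinear) sample
        n = [c / norm for c in n]
        d = -sum(n[i] * p1[i] for i in range(3))
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[j] * p[j] for j in range(3)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (n[0], n[1], n[2], d), inliers
    return best_plane, best_inliers
```

On a synthetic scene of ground points at z = 0 plus a few elevated "vehicle" points, the fit recovers the ground plane (normal close to the z-axis) and its inliers, leaving the elevated points as candidates for object segmentation.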
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Point cloud instance segmentation"

1

He, Tong, Yifan Liu, Chunhua Shen, Xinlong Wang, and Changming Sun. "Instance-Aware Embedding for Point Cloud Instance Segmentation." In Computer Vision – ECCV 2020, 255–70. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58577-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Brådland, Henrik, Martin Choux, and Linga Reddy Cenkeramaddi. "Point Cloud Instance Segmentation for Automatic Electric Vehicle Battery Disassembly." In Communications in Computer and Information Science, 247–58. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10525-8_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cheng, Lixue, Taihai Yang, and Lizhuang Ma. "Object Bounding Box-Aware Embedding for Point Cloud Instance Segmentation." In PRICAI 2021: Trends in Artificial Intelligence, 182–94. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89370-5_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zanjani, Farhad Ghazvinian, David Anssari Moin, Frank Claessen, Teo Cherici, Sarah Parinussa, Arash Pourtaherian, Svitlana Zinger, and Peter H. N. de With. "Mask-MCNet: Instance Segmentation in 3D Point Cloud of Intra-oral Scans." In Lecture Notes in Computer Science, 128–36. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32254-0_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

He, Tong, Dong Gong, Zhi Tian, and Chunhua Shen. "Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation." In Computer Vision – ECCV 2020, 564–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58523-5_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dong, Shichao, Guosheng Lin, and Tzu-Yi Hung. "Learning Regional Purity for Instance Segmentation on 3D Point Clouds." In Lecture Notes in Computer Science, 56–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20056-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Jinxian, Minghui Yu, Bingbing Ni, and Ye Chen. "Self-Prediction for Joint Instance and Semantic Segmentation of Point Clouds." In Computer Vision – ECCV 2020, 187–204. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58542-6_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Guangnan, Zhiyi Pan, Peng Jiang, and Changhe Tu. "Bi-Directional Attention for Joint Instance and Semantic Segmentation in Point Clouds." In Computer Vision – ACCV 2020, 209–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69525-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gélard, William, Ariane Herbulot, Michel Devy, Philippe Debaeke, Ryan F. McCormick, Sandra K. Truong, and John Mullet. "Leaves Segmentation in 3D Point Cloud." In Advanced Concepts for Intelligent Vision Systems, 664–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70353-4_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Feihu, Jin Fang, Benjamin Wah, and Philip Torr. "Deep FusionNet for Point Cloud Semantic Segmentation." In Computer Vision – ECCV 2020, 644–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58586-0_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Point cloud instance segmentation"

1

Zhang, Biao, and Peter Wonka. "Point Cloud Instance Segmentation using Probabilistic Embeddings." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sun, Fei, Yangjie Xu, and Weidong Sun. "SPSN: Seed Point Selection Network in Point Cloud Instance Segmentation." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Feihu, Chenye Guan, Jin Fang, Song Bai, Ruigang Yang, Philip H. S. Torr, and Victor Prisacariu. "Instance Segmentation of LiDAR Point Clouds." In 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020. http://dx.doi.org/10.1109/icra40945.2020.9196622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jiang, Haiyong, Feilong Yan, Jianfei Cai, Jianmin Zheng, and Jun Xiao. "End-to-End 3D Point Cloud Instance Segmentation Without Detection." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.01281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Luhan, Lihua Zheng, and Minjuan Wang. "3D Point Cloud Instance Segmentation of Lettuce Based on PartNet." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2022. http://dx.doi.org/10.1109/cvprw56347.2022.00171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Yu, Zhicheng Wang, Jingjing Fei, Ling Chen, and Gang Wei. "ATSGPN: Adaptive Threshold Instance Segmentation Network in 3D Point Cloud." In MIPPR 2019: Pattern Recognition and Computer Vision, edited by Zhenbing Liu, Jayaram K. Udupa, Nong Sang, and Yuehuan Wang. SPIE, 2020. http://dx.doi.org/10.1117/12.2541582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pan, Ru-Yi, and Cheng-Ming Huang. "Accuracy Improvement of Deep Learning 3D Point Cloud Instance Segmentation." In 2021 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW). IEEE, 2021. http://dx.doi.org/10.1109/icce-tw52618.2021.9603064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Xiaodong, Ruiping Wang, and Xilin Chen. "Implicit-Part Based Context Aggregation for Point Cloud Instance Segmentation." In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022. http://dx.doi.org/10.1109/iros47612.2022.9981772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Weiyue, Ronald Yu, Qiangui Huang, and Ulrich Neumann. "SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liao, Yongbin, Hongyuan Zhu, Tao Chen, and Jiayuan Fan. "SPCR: Semi-Supervised Point Cloud Instance Segmentation with Perturbation Consistency Regularization." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506359.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Point cloud instance segmentation"

1

Blundell, S., and Philip Devine. Creation, transformation, and orientation adjustment of a building façade model for feature segmentation : transforming 3D building point cloud models into 2D georeferenced feature overlays. Engineer Research and Development Center (U.S.), January 2020. http://dx.doi.org/10.21079/11681/35115.

Full text
APA, Harvard, Vancouver, ISO, and other styles