Dissertations on the topic "Point cloud instance segmentation"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 38 dissertations for research on the topic "Point cloud instance segmentation".
Next to every work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its online abstract, whenever these are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Gujar, Sanket. "Pointwise and Instance Segmentation for 3D Point Cloud." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1290.
Konradsson, Albin, and Gustav Bohman. "3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.
Zhu, Charlotte. "Point cloud segmentation for mobile robot manipulation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106400.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 47-48).
In this thesis, we develop a system for estimating a belief state for a scene over multiple observations of the scene. Given as input a sequence of observed RGB-D point clouds of a scene, a list of known objects in the scene and their pose distributions as a prior, and a black-box object detector, our system outputs a belief state of what is believed to be in the scene. This belief state consists of the states of known objects, walls, the floor, and "stuff" in the scene based on the observed point clouds. The system first segments the observed point clouds and then incrementally updates the belief state with each segmented point cloud.
by Charlotte Zhu.
M. Eng.
Kulkarni, Amey S. "Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1370.
He, Linbo. "Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157705.
Awadallah, Mahmoud Sobhy Tawfeek. "Image Analysis Techniques for LiDAR Point Cloud Segmentation and Surface Estimation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73055.
Ph. D.
Šooš, Marek. "Segmentace 2D Point-cloudu pro proložení křivkami." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444985.
Jagbrant, Gustav. "Autonomous Crop Segmentation, Characterisation and Localisation." Thesis, Linköpings universitet, Institutionen för systemteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97374.
Since orchards require large areas of land, they are often located far from major population centres. This makes it difficult to find sufficient labour and limits the possibilities for expansion. Integrating autonomous robots into orchard operations could make the work more efficient and reduce the need for labour. A key problem for any autonomous robot is localization: how does the robot know where it is? For agricultural robots, the standard solution is GPS positioning. In orchards, however, this is problematic, since the tall, dense vegetation restricts its use to larger robots that reach above the surrounding canopy. To enable the use of smaller robots, a GPS-independent localization system is needed instead. This is complicated by the uniform surroundings and the lack of distinct landmarks, which makes it unlikely that existing standard solutions will work in this environment. We therefore present a GPS-independent localization system, aimed specifically at orchards, that exploits the natural structure of the environment. In addition, we examine and evaluate three related subproblems. The proposed system uses a 3D point cloud created from a 2D LIDAR and the robot's motion. First, we show how a hidden semi-Markov model can be used to segment the dataset into individual trees. We then introduce a number of descriptors for characterising the geometric shape of the trees, and show how these can be combined with a hidden Markov model to create a robust localization system. Finally, we propose a method for detecting segmentation errors when new tree measurements are associated with previously measured trees. The proposed methods are evaluated individually and show good results. The proposed segmentation method is shown to be accurate and to produce few segmentation errors.
Furthermore, the introduced descriptors are shown to be sufficiently consistent and informative to enable localization, and the presented localization method is shown to be robust to both noise and segmentation errors. Finally, a significant majority of all segmentation errors can be detected without falsely labelling correct segmentations as incorrect.
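The hidden-Markov-model localization described in the abstract above can be illustrated with a toy Viterbi decoder: states are map positions, emissions score how well an observed tree descriptor matches each position, and the decoder recovers the most likely position sequence. This is a generic sketch of the technique, not the thesis's actual model.

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """Most likely state sequence of an HMM, computed in the log domain.

    log_trans : (S, S) log P(next state | current state)
    log_emit  : (T, S) per-step log likelihood of each observation under
                each state (here: how well a tree descriptor matches)
    log_init  : (S,) log prior over the initial state
    """
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans       # (prev, next) scores
        back[t] = cand.argmax(0)                # best predecessor per state
        score = cand.max(0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):               # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With transitions that favour advancing one tree per step and emissions that weakly identify each tree, the decoder recovers the traversal order even from noisy matches.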
Serra, Sabina. "Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168367.
Vock, Dominik. "Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-141582.
Digne, Julie. "Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00610432.
Bharadwaj, Akshay S. "A Perception Payload for Small-UAS Navigation in Structured Environments." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1533649419108963.
Simonovsky, Martin. "Deep learning on attributed graphs." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1133/document.
Graphs are a powerful concept for representing relations between pairs of entities. Data with an underlying graph structure can be found across many disciplines, describing chemical compounds, surfaces of three-dimensional models, social interactions, or knowledge bases, to name only a few. There is a natural desire to understand such data better. Deep learning (DL) has achieved significant breakthroughs in a variety of machine learning tasks in recent years, especially where data is structured on a grid, such as in text, speech, or image understanding. However, surprisingly little has been done to explore the applicability of DL to graph-structured data directly. The goal of this thesis is to investigate architectures for DL on graphs and to study how to transfer, adapt, or generalize concepts that work well on sequential and image data to this domain. We concentrate on two important primitives: embedding graphs or their nodes into a continuous vector space representation (encoding) and, conversely, generating graphs from such vectors (decoding). To that end, we make the following contributions. First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like operation on graphs performed in the spatial domain where filters are dynamically generated based on edge attributes. The method is used to encode graphs with arbitrary and varying structure. Second, we propose SuperPoint Graph, an intermediate point cloud representation with rich edge attributes encoding the contextual relationships between object parts. Based on this representation, ECC is employed to segment large-scale point clouds without major sacrifice of fine details. Third, we present GraphVAE, a graph generator that can decode graphs with a variable but upper-bounded number of nodes, using approximate graph matching to align the predictions of an autoencoder with its inputs. The method is applied to the task of molecule generation.
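The core ECC idea, generating a filter per edge from its attributes and aggregating the resulting messages, can be sketched in a few lines of NumPy. This is an illustration only, not the paper's implementation: the linear filter-generating weights `W_gen`/`b_gen` stand in for the small network used in the paper.

```python
import numpy as np

def ecc_layer(X, edges, E, W_gen, b_gen):
    """Toy Edge-Conditioned Convolution.

    X      : (N, F_in) node features
    edges  : list of (target, source) index pairs
    E      : (num_edges, F_e) edge attributes
    W_gen  : (F_e, F_in * F_out) weights of the filter-generating function
    b_gen  : (F_in * F_out,) its bias
    Each edge gets its own weight matrix, generated from its attributes;
    every node then averages the messages from its incoming edges.
    """
    N, F_in = X.shape
    F_out = W_gen.shape[1] // F_in
    out = np.zeros((N, F_out))
    count = np.zeros(N)
    for e_attr, (t, s) in zip(E, edges):
        theta = (e_attr @ W_gen + b_gen).reshape(F_in, F_out)  # dynamic filter
        out[t] += X[s] @ theta
        count[t] += 1
    count[count == 0] = 1.0            # isolated nodes keep zero output
    return out / count[:, None]
```

Because the filter depends on the edge attribute rather than on a fixed grid offset, the same layer applies to graphs of arbitrary and varying structure.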
Hamraz, Hamid. "AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/69.
Jelínek, Aleš. "Vektorizovaná mračna bodů pro mobilní robotiku." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-364602.
Fang, Hao. "Modélisation géométrique à différent niveau de détails d'objets fabriqués par l'homme." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4002/document.
Geometric modeling of man-made objects from 3D data is one of the biggest challenges in Computer Vision and Computer Graphics. The long-term goal is to generate CAD-style models in an as-automatic-as-possible way. To achieve this goal, difficult issues have to be addressed, including (i) the scalability of the modeling process with respect to massive input data, (ii) the robustness of the methodology to various defect-laden input measurements, and (iii) the geometric quality of the output models. Existing methods work well for recovering the surface of free-form objects. However, in the case of man-made objects, it is difficult to produce results that approach the quality of highly structured representations such as CAD models. In this thesis, we present a series of contributions to the field. First, we propose a classification method based on deep learning to distinguish objects in raw 3D point clouds. Second, we propose an algorithm to detect planar primitives in 3D data at different levels of abstraction. Finally, we propose a mechanism to assemble planar primitives into compact polygonal meshes. These contributions are complementary and can be used sequentially to reconstruct city models at various levels of detail from airborne 3D data. We illustrate the robustness, scalability, and efficiency of our methods on both laser and multi-view stereo data of man-made objects.
Ben Abdallah, Hamdi. "Inspection d'assemblages aéronautiques par vision 2D/3D en exploitant la maquette numérique et la pose estimée en temps réel Three-dimensional point cloud analysis for automatic inspection of complex aeronautical mechanical assemblies Automatic inspection of aeronautical mechanical assemblies by matching the 3D CAD model and real 2D images." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0001.
This thesis is part of a research effort aimed at innovative digital tools in the service of what is commonly referred to as the Factory of the Future. Our work was conducted in the scope of the joint research laboratory "Inspection 4.0", founded by IMT Mines Albi/ICA and the company DIOTA, which specializes in the development of numerical tools for Industry 4.0. In the thesis, we were interested in the development of systems exploiting 2D images and/or 3D point clouds for the automatic inspection of complex aeronautical mechanical assemblies (typically an aircraft engine). The CAD (Computer-Aided Design) model of the assembly is at our disposal, and our task is to verify that the assembly has been correctly assembled, i.e., that all the elements constituting the assembly are present in the right position and in the right place. The CAD model serves as a reference. We have developed two inspection scenarios that exploit the inspection systems designed and implemented by DIOTA: (1) a scenario based on a tablet equipped with a camera, carried by a human operator for real-time interactive control, and (2) a scenario based on a robot equipped with sensors (two cameras and a 3D scanner) for fully automatic control. In both scenarios, a so-called localisation camera provides in real time the pose between the CAD model and the sensors, which makes it possible to directly link the 3D digital model with the 2D images or the 3D point clouds analysed. We first developed 2D inspection methods, based solely on the analysis of 2D images. Then, for certain types of inspection that could not be performed using 2D images alone (typically those requiring the measurement of 3D distances), we developed 3D inspection methods based on the analysis of 3D point clouds. For the 3D inspection of electrical cables, we proposed an original method for segmenting a cable within a point cloud.
We also tackled the problem of automatically selecting the best viewpoint, which allows the inspection sensor to be placed in an optimal observation position. The developed methods have been validated on many industrial cases. Some of the inspection algorithms developed during this thesis have been integrated into the DIOTA Inspect© software and are used daily by DIOTA's customers to perform inspections on industrial sites.
Yogeswaran, Arjun. "3D Surface Analysis for the Automated Detection of Deformations on Automotive Panels." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19992.
Kratochvíl, Jiří Jaroslav. "Detekce a vizualizace specifických rysů v mračnu bodů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-385286.
Ghorpade, Vijaya Kumar. "3D Semantic SLAM of Indoor Environment with Single Depth Sensor." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC085/document.
Intelligent autonomous action in an ordinary environment by a mobile robot requires maps. A map holds spatial information about the environment and gives the 3D geometry of the robot's surroundings, not only for avoiding collisions with complex obstacles, but also for self-localization and task planning. In the future, however, service and personal robots will prevail, and the robot will need to interact with the environment in addition to localizing and navigating. This interaction demands that the next generation of robots understand and interpret their environment and perform tasks in a human-centric form. A simple map of the environment is far from sufficient for robots to co-exist with and assist humans. Human beings effortlessly build maps and interact with their environment; for them these are trivial tasks. For robots, however, these seemingly frivolous tasks are complex conundrums. Layering semantic information onto regular geometric maps is the leap that helps an ordinary mobile robot become a more intelligent autonomous system. A semantic map augments a general map with information about entities, i.e., objects, functionalities, or events, that are located in the space. The inclusion of semantics in the map enhances the robot's spatial knowledge representation and improves its performance in managing complex tasks and human interaction. Many approaches have been proposed to address the semantic SLAM problem with laser scanners and RGB-D time-of-flight sensors, but the field is still in its nascent phase. In this thesis, an endeavour to solve semantic SLAM using a time-of-flight sensor that gives only depth information is proposed. Time-of-flight cameras have dramatically changed the field of range imaging and surpassed traditional scanners in terms of rapid data acquisition, simplicity, and price, and it is believed that these depth sensors will be ubiquitous in future robotic applications.
Starting with a brief motivation for the semantic stance on normal maps in the first chapter, the state-of-the-art methods are discussed in the second chapter. Before using the camera for data acquisition, its noise characteristics were studied meticulously and properly calibrated. The novel noise-filtering algorithm developed in the process helps to obtain clean data for better scan matching and SLAM. The quality of the SLAM process is evaluated using a context-based similarity score metric, designed specifically for the acquisition parameters and the data that were used. Abstracting a semantic layer on the point cloud reconstructed from SLAM is done in two stages. In large-scale, higher-level semantic interpretation, the prominent surfaces in the indoor environment are extracted and recognized; they include surfaces such as walls, doors, ceilings, and clutter. In indoor single-scene, object-level semantic interpretation, by contrast, a single 2.5D scene from the camera is parsed and the objects and surfaces are recognized. Object recognition is achieved using a novel shape signature based on the probability distribution of the most stable and repeatable 3D keypoints. The classification of prominent surfaces and single-scene semantic interpretation are done using supervised machine learning and deep learning systems. To this end, the object dataset and SLAM data are also made publicly available for academic research.
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving." Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and of fusing heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to map sensor input (a camera image) directly to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward by interacting with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time is still very long, which is why we focus on perception to study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection. To this end, we compute image features with a 2D CNN over multiple views and then lift them all to a global 3D point cloud for fusion, followed by a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
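The first fusion step described above, projecting a LiDAR point cloud into image space to obtain a sparse depth map, can be sketched as follows. This is a simplified illustration, not the thesis code: it assumes a pinhole camera with intrinsics `K` and points already transformed into the camera frame (z pointing forward).

```python
import numpy as np

def project_to_depth_map(points_cam, K, h, w):
    """Project 3D points in the camera frame to a sparse (h, w) depth map.

    Pixels with no projected point stay 0; when several points land on
    the same pixel, the nearest one wins (a simple z-buffer).
    """
    depth = np.zeros((h, w))
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front
    uvw = (K @ pts.T).T                           # homogeneous pixel coords
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    z = pts[:, 2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), d in zip(uv[inside], z[inside]):
        if depth[v, u] == 0 or d < depth[v, u]:   # nearest point wins
            depth[v, u] = d
    return depth
```

The resulting map is sparse because a LiDAR sweep covers only a small fraction of the image pixels, which is exactly what the depth-completion network in the thesis takes as input alongside the dense RGB image.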
Ravaglia, Joris. "Reconstruction de formes tubulaires à partir de nuages de points : application à l’estimation de la géométrie forestière." Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11791.
The potential of remote sensing technologies has recently increased exponentially: new sensors now provide a geometric representation of their environment in the form of point clouds with unrivalled accuracy. Point cloud processing has hence become a discipline in its own right, with specific problems and many challenges to face. The core of this thesis concerns geometric modelling and introduces a fast and robust method for the extraction of tubular shapes from point clouds. We chose to test our method in the difficult applicative context of forestry in order to highlight the robustness of our algorithms and their applicability to large data sets. Our methods integrate normal vectors as supplementary geometric information in order to achieve the performance necessary for large point cloud processing. However, remote sensing techniques do not commonly provide normal vectors, so they have to be computed. Our first contribution therefore consisted in developing a fast normal estimation method on point clouds in order to reduce the computing time on large point clouds. To do so, we locally approximated the point cloud geometry using smooth "patches" of points whose size adapts to the local complexity of the point cloud geometry. We then focused our work on the robust extraction of tubular shapes from dense, occluded, noisy point clouds suffering from non-homogeneous sampling density. For this objective, we developed a variant of the Hough transform whose complexity is reduced thanks to the computed normal vectors. We then combined this work with a new definition of parametrisation-invariant active contours. This combination ensures the internal coherence of the reconstructed shapes and alleviates issues related to occlusion, noise, and variation of sampling density. We validated our method in complex forest environments with the reconstruction of tree stems to emphasize its advantages and compare it to existing methods.
Tree stem reconstruction also opens new perspectives halfway between forestry and geometry. One of them is the segmentation of individual trees from a forest plot. We therefore also propose a segmentation approach designed to overcome the defects of forest point clouds and capable of isolating objects inside a point cloud. Throughout this work we used modelling approaches to answer geometric questions, and we applied our methods to forestry problems. Our studies thus result in a processing pipeline adapted to forest point cloud analyses, but the general geometric algorithms we propose can also be applied in various other contexts.
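The normal estimation that underpins methods like the one above is commonly done by local plane fitting: take each point's nearest neighbours and use the eigenvector of the smallest eigenvalue of their covariance as the normal. The brute-force sketch below is illustrative only; the thesis's adaptive patches and a k-d tree would replace the O(N²) neighbour search on real data.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals by local PCA plane fitting.

    For each point, collect its k nearest neighbours (brute force here),
    build the covariance of the local patch, and take the eigenvector of
    the smallest eigenvalue as the surface normal (sign is arbitrary).
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    normals = np.empty_like(points)
    for i, row in enumerate(d2):
        nbrs = points[np.argsort(row)[:k]]           # includes the point itself
        cov = np.cov((nbrs - nbrs.mean(0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
        normals[i] = eigvecs[:, 0]                   # flattest direction
    return normals
```

On a perfectly planar patch the smallest eigenvalue is zero and the recovered normal is exactly the plane normal, up to sign.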
Marko, Peter. "Detekce objektů v laserových skenech pomocí konvolučních neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445509.
He, Tong. "Efficient Scene Parsing with Imagery and Point Cloud Data." Thesis, 2020. http://hdl.handle.net/2440/129534.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2020
Wang, Chi-Pei, and 王綺珮. "A Study on Multi-Scale Object-Based Point Cloud Segmentation." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/97088326437886646112.
National Taiwan University
Graduate Institute of Civil Engineering
103
Point cloud segmentation has brought significant progress to point cloud classification and ground object reconstruction, and the result of segmentation directly influences subsequent analysis and utilization. Considering that LiDAR (light detection and ranging) scanners are inherently blind systems, the object-based concept is used to turn point clouds from large amounts of discrete data into point cloud objects composed of parent-child relationships. Methods of point cloud segmentation vary according to purpose and requirements. For instance, a model-driven approach, RANSAC (random sample consensus), which is robust and efficient, is used for building extraction and reconstruction. A data-driven approach, clustering, which groups highly correlated points into objects by computing Euclidean distances between points, is applied to irregular object identification and classification. This study builds on object-based point cloud analysis (OBPCA) and proposes a segmentation method suited to point clouds. Since features, also known as attributes, are considered in object-based point cloud analysis, they are not only beneficial to object analysis but also provide heterogeneities useful to the segmentation process. This heterogeneity is exploited to simplify the procedure, to improve the efficiency of point cloud segmentation, and to adapt to different point cloud distributions of scenes. In this research, therefore, current segmentation methods are consolidated and interpreted, and a multi-scale segmentation algorithm is developed to increase operational efficiency without reducing the overall accuracy of classification.
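The model-driven RANSAC approach mentioned above can be illustrated with a minimal plane detector: repeatedly fit a plane through three random points and keep the hypothesis with the most inliers. This is a generic sketch of the technique, not the study's implementation.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Minimal RANSAC plane detector.

    Returns (normal, d, inlier_mask) for the best plane n . x + d = 0,
    where inliers lie within `tol` of the plane.
    """
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), bool))
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```

In building extraction, the detected inliers form one planar segment (e.g., a roof face); removing them and re-running RANSAC peels off the remaining planes one by one.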
ZHAO, BO-XU, and 趙伯勗. "Multiple moving object detection and tracking method using point cloud segmentation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/q65u84.
National Yunlin University of Science and Technology
Department of Computer Science and Information Engineering
106
In this thesis, a moving-object detection and tracking method using point cloud segmentation is proposed for multiple targets. LiDAR systems are widely used in autonomous systems. In an ego-motion system, identifying moving objects in scene point clouds obtained by a mobile LiDAR is an interesting research topic. The proposed method can detect moving objects within a moving scene, and the information about the moving objects, e.g., relative velocity, can be used for collision avoidance in a driverless vehicle. The proposed approach consists of five steps: (1) point cloud capture, (2) ground point removal, (3) segmentation, (4) foreground and background detection, and (5) moving object tracking. First, the 3D point cloud scene is retrieved by a LiDAR mounted on an ego-motion system. Then, in order to reduce the computational complexity, ground points are removed by a ground detection algorithm. In the third step, the remaining points are grouped and segmented by a voxel grouping method to eliminate noise points and form objects. The velocities of the objects are computed with respect to the ego-motion system to distinguish the foreground (moving objects) from the background (static objects). Finally, a Kalman filter is used to track the moving objects and to predict their positions. The predicted positions of the moving objects can be used for collision avoidance.
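Step (5) of the pipeline, Kalman tracking of object centroids, can be sketched with a constant-velocity filter over the planar position (x, y). The noise levels below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter for one object centroid.

    State is [x, y, vx, vy]; measurements are the segmented object's
    centroid (x, y). Q and R are hand-picked here (assumptions).
    """
    def __init__(self, xy, dt=0.1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt       # x += vx*dt, y += vy*dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0      # we observe position only
        self.Q = 0.01 * np.eye(4)              # process noise (assumed)
        self.R = 0.05 * np.eye(2)              # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                      # predicted object position

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Feeding it the centroids of one segmented object per frame yields a smoothed track and a one-step-ahead position prediction usable for collision avoidance.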
FANG, YU-WEI, and 房育維. "3D Environment Reconstruction Based on Semantic Segmentation and Point Cloud Registration." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/sqpcz3.
National Taipei University of Technology
Department of Electrical Engineering
107
In this thesis, simultaneous localization and mapping (SLAM) is employed to build a three-dimensional indoor map of a real scene, so that users engaged in a virtual experience can view the reconstructed model of the real scene through a head-mounted display. For scene construction, the original point cloud obtained by the depth camera contains holes. Moreover, since the sensing range is limited during SLAM, the resulting three-dimensional map may be defective and partially invisible during the virtual experience. First, semantic segmentation is applied to the colour camera image to classify the point cloud into groups, and the point clouds of different objects are then repaired according to their semantic category. Because the classification accuracy of the point clouds is low for some objects, it is improved by clustering the positions and classification labels of the points. The partition planes such as walls, the floor, and the ceiling are then reconstructed first; this plane reconstruction mainly resolves the unevenness in the planes caused by the depth error of the depth camera. For the reconstruction of furniture, the original point cloud of each piece of furniture is matched against the point clouds of complete object models, and the best-matching model point cloud replaces the original one, which mitigates the problem of the original point cloud being incomplete or invisible after SLAM. The density of the point cloud is then increased by upsampling to produce a better reconstruction and to avoid display latency in virtual reality caused by excessive point counts. Finally, triangular mesh reconstruction converts the point cloud map into a surface representation, improving the detail of the map.
Through the Unity engine, the reconstructed environment map is displayed on a virtual reality headset, allowing users to enjoy a virtual experience of the real scene from any location.
Abdullah, Salah Sohaib Saleh, and 蘇家德. "An Obstacle Detection for an Autonomous Vehicle Based on Point Cloud Segmentation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/343wyg.
National Yunlin University of Science and Technology
Department of Computer Science and Information Engineering
106
Since autonomous robots are often used to navigate unknown or dangerous environments, multiple sensing devices are required to enable them to plan a collision-free path to the destination and to record their trajectories. In this thesis, an autonomous car equipped with LiDAR, GPS, gyroscopes, and a camera is proposed for navigation, collision avoidance, and path planning. The proposed platform is a four-wheel-drive electric scooter carrier with four independently driven motor axles and a joint between the front and back of the chassis. Point clouds captured by the LiDAR are used for obstacle detection. First, the point clouds are reduced to two dimensions by a voxel algorithm, and the reduced points are then clustered by a flood-fill grouping algorithm into objects, which are treated as obstacles. The Bug algorithm, adopted for path planning, plans a local path to the destination that avoids these obstacles. The GPS and gyroscopes are used to locate the robot and identify its orientation. The experimental results show that the implemented autonomous car can reach the target position safely. Since the proposed pre-processing reduces the amount of point cloud data and significantly improves the efficiency of obstacle clustering, the proposed system is practicable in the field.
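The flood-fill grouping stage can be illustrated on a toy 2D occupancy grid standing in for the voxel-reduced point cloud. This is a generic sketch of the technique, not the thesis code.

```python
from collections import deque

def flood_fill_clusters(grid):
    """Group occupied cells of a 2D occupancy grid into obstacle clusters.

    `grid` is a list of equal-length strings: '#' occupied, '.' free.
    Returns a list of clusters, each a list of (row, col) cells, using
    4-connectivity and a breadth-first flood fill.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '#' and (r, c) not in seen:
                q = deque([(r, c)])
                seen.add((r, c))
                comp = []
                while q:
                    cr, cc = q.popleft()
                    comp.append((cr, cc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == '#' and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            q.append((nr, nc))
                clusters.append(comp)
    return clusters
```

Each returned cluster corresponds to one obstacle that the Bug planner then routes around.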
Abualshour, Abdulellah. "Applications of Graph Convolutional Networks and DeepGCNs in Point Cloud Part Segmentation and Upsampling." Thesis, 2020. http://hdl.handle.net/10754/662567.
(8804144), Junzhe Shen. "A SIMULATED POINT CLOUD IMPLEMENTATION OF A MACHINE LEARNING SEGMENTATION AND CLASSIFICATION ALGORITHM." Thesis, 2020.
Знайти повний текст джерелаAs buildings have almost come to a saturation point in most developed countries, the management and maintenance of existing buildings have become the major problem of the field. Building Information Modeling (BIM) is the underlying technology to solve this problem. It is a 3D semantic representation of building construction and facilities that contributes to not only the design phase but also the construction and maintenance phases, such as life-cycle management and building energy performance measurement. This study aims at the processes of creating as-built BIM models, which are constructed after the design phase. Point cloud, a set of points in 3D space, is an intermediate product of as-built BIM models that is often acquired by 3D laser scanning and photogrammetry. A raw point cloud typically requires further procedures, e.g. registration, segmentation, classification, etc. In terms of segmentation and classification, machine learning methodologies are trending due to the enhanced speed of computation. However, supervised machine learning methodologies require labelling the training point clouds in advance, which is time-consuming and often leads to inevitable errors. And due to the complexity and uncertainty of real-world environments, the attributes of one point vary from the attributes of others. These situations make it difficult to analyze how one single attribute contributes to the result of segmentation and classification. This study developed a method of producing point clouds from a fast-generating 3D virtual indoor environment using procedural modeling. This research focused on two attributes of simulated point clouds, point density and the level of random errors. According to Silverman (1986), point density is associated with the point features around each output raster cell. The number of points within a neighborhood divided the area of the neighborhood is the point density. 
However, this study used a slightly different definition: the point density was defined as the number of points on a surface divided by the surface area, with units of points per square meter (pts/m²). This research compared the performance of a machine learning segmentation and classification algorithm on ten different point cloud datasets. The mean loss and accuracy of segmentation and classification were analyzed and evaluated to show how the point density and the level of random errors affect the performance of the segmentation and classification models. Moreover, real-world point cloud data were used as additional data to evaluate the applicability of the produced models.
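The per-surface definition above reduces to a one-line computation; the following minimal sketch assumes a known flat surface area, and the function name and example numbers are illustrative, not taken from the thesis:

```python
# Minimal sketch of the point-density definition used in the study:
# number of points on a surface divided by the surface area, in pts/m^2.
# The flat-surface assumption and the example values are illustrative.

def point_density(num_points: int, surface_area_m2: float) -> float:
    """Return point density in points per square meter (pts/m^2)."""
    if surface_area_m2 <= 0:
        raise ValueError("surface area must be positive")
    return num_points / surface_area_m2

# e.g. 5000 points scanned on a 2 m x 1.25 m wall section:
print(point_density(5000, 2 * 1.25))  # → 2000.0
```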
Itani, Hani. "A Closer Look at Neighborhoods in Graph Based Point Cloud Scene Semantic Segmentation Networks." Thesis, 2020. http://hdl.handle.net/10754/665898.
Chiang, Hung-Yueh, and 江泓樂. "An Analysis of 3D Indoor Scene Segmentation Based on Images, Point Cloud and Voxel Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/pvbj47.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
Deep learning has brought great success in image classification, object detection, and semantic segmentation tasks. In recent years, the advent of inexpensive depth sensors has greatly motivated 3D research, and real-scene reconstruction datasets such as ScanNet [5] and Matterport3D [1] have been proposed. However, 3D scene semantic segmentation remains a new and challenging problem due to the many variants of 3D data types (e.g. images, voxels, point clouds). Other difficulties, such as high computation cost and the scarcity of data, slow the progress of research on 3D segmentation. In this paper, we study the 3D indoor scene segmentation problem with three different types of 3D data, which we categorize as image-based, voxel-based, and point-based. We experiment with different input signals (e.g. color, depth, normals) and verify their effectiveness and performance in networks for each data type. We further study fusion methods and improve performance by using off-the-shelf deep models and by leveraging multiple data modalities.
Roque, Mariana Assunção. "An empirical study on the effect compression on the performance of point clouds segmentation algorithms." Master's thesis, 2019. http://hdl.handle.net/10316/87860.
Point clouds are sets of points that represent a 3D object or scene, in which the points are given by their 3D coordinates and optional attributes such as color, reflectance, and others. Point clouds are used in several application areas such as entertainment, terrain representation, medical imaging and, more recently, autonomous vehicle guidance systems. Due to the large volume of data needed to represent point clouds, these applications would require enormous processing power and, in some cases, the tasks might not be performable in real time. Thus, compression is used to tackle the challenges of storage and real-time transmission. It is known that lossy compression introduces geometric distortions that usually depend on the degree of compression. Since some applications need to segment the component objects of the reconstructed/decompressed point cloud, it is important to understand and characterize the effect of the type and degree of compression on the performance of segmentation and classification tasks. In this dissertation, two sets of experiments are described: one with general-use point clouds and the other using a particular type of point cloud, namely LiDAR. This division was made because the results for these two classes are likely to differ, given their distinct applications and precision requirements. The experiments are designed to empirically evaluate the effect of different point cloud compression methods, applied at different compression rates, on the performance of several point cloud segmentation and classification algorithms. To this end, several performance measures are used to evaluate the behavior of each case.
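The abstract does not name the performance measures it uses; one common choice for segmentation tasks is intersection-over-union (IoU), sketched here purely as an illustrative assumption (the function and label encoding are hypothetical):

```python
# Illustrative sketch of one common segmentation performance measure,
# intersection-over-union (IoU), computed per class over point labels.
# The dissertation does not specify its measures; this is an assumption.

def iou(pred_labels, true_labels, cls):
    """IoU for class `cls` between predicted and ground-truth point labels."""
    pred = {i for i, label in enumerate(pred_labels) if label == cls}
    true = {i for i, label in enumerate(true_labels) if label == cls}
    union = pred | true
    if not union:
        return 1.0  # class absent from both labelings: perfect agreement
    return len(pred & true) / len(union)

print(iou([1, 1, 0, 0], [1, 0, 0, 0], 1))  # → 0.5
```

Comparing such a score on the original versus the decompressed cloud, across compression rates, is one way the described evaluation could be carried out.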
Dušek, Dominik. "Segmentace a klasifikace LIDAR dat." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-434961.
(5929979), Yun-Jou Lin. "Point Cloud-Based Analysis and Modelling of Urban Environments and Transportation Corridors." Thesis, 2019.
Vock, Dominik. "Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data." Doctoral thesis, 2013. https://tud.qucosa.de/id/qucosa%3A27971.
Xavier, Alexandre Dias. "Perception System for Forest Cleaning with UGV." Master's thesis, 2021. http://hdl.handle.net/10316/98083.
The constant development of autonomous robotic systems has increased interest in using robots as an alternative to humans for repetitive, arduous, and dangerous tasks. Given the high forest density in Portugal, as well as in other countries in Europe and on other continents, reducing the flammable matter in forests has become one of the major goals in the prevention of large forest fires. Developments in robotics allow robots to map forest environments to obtain useful information about the existing flammable matter. Understanding which vegetation the robot should or should not cut is therefore a very important task for the robot's performance. This dissertation focuses on the perception of the environment surrounding the robot: understanding which objects surround it, which are obstacles, and which vegetation to cut or not to cut. Solutions are proposed using either LiDAR or an RGB camera. The LiDAR-based solutions rely on the height of the objects, the reflection of the LiDAR laser depending on the object's surface, and the size of the objects. The RGB camera solution relies on vegetation indices and segmentation. The solutions were validated using datasets and photographs of real environments. In the end it was possible to classify objects as obstacles (in this case cars, walls, and trunks), as vegetation cut by a tractor equipped with a clearing machine, and as uncut vegetation.
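The abstract mentions "vegetation indices" for the RGB camera without naming one; a widely used choice for green-vegetation detection is the excess-green (ExG) index, sketched below as an illustrative assumption (the threshold and function name are hypothetical, not from the thesis):

```python
# Illustrative sketch of the excess-green (ExG) vegetation index, one common
# RGB vegetation index; the thesis does not specify which index it uses,
# so this particular choice and the 0.2 threshold are assumptions.

def excess_green(r: float, g: float, b: float) -> float:
    """Compute ExG = 2g - r - b on chromatic (sum-normalized) RGB values."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

# A pixel dominated by green scores high and can be flagged as vegetation.
print(excess_green(40, 180, 50) > 0.2)  # → True
```

Thresholding such an index gives a coarse vegetation mask that a segmentation stage can then refine.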
Ioannou, Yani Andrew. "Automatic Urban Modelling using Mobile Urban LIDAR Data." Thesis, 2010. http://hdl.handle.net/1974/5443.
Thesis (Master, Computing) -- Queen's University, 2010-03-01.