
Dissertations on the topic "3D Point cloud Compression"



Browse the top 50 dissertations for research on the topic "3D Point cloud Compression".

Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever this information is available in the metadata.

Browse dissertations across a variety of disciplines and compile your bibliography correctly.

1

Morell, Vicente. "Contributions to 3D Data Registration and Representation." Doctoral thesis, Universidad de Alicante, 2014. http://hdl.handle.net/10045/42364.

Full text
Abstract:
Nowadays, the latest generation of computers provides enough performance to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential prerequisite for moving through that environment. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars, and contact sensors are typical in any mobile robotic architecture. Color cameras are nevertheless an important sensor, because we want robots to sense and move through environments using the same information humans use. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but robots now have access to powerful architectures that can be used for mobile robotics. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant to mobile robotics. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to games, or advertising, where, given the 3D model of a room, a system can add furniture using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This type of Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, the learning process must be adapted to complete within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs under the paradigm of General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method reduces the 3D information using plane detection as the basic structure for compressing the data. This choice reflects the fact that our target environments are man-made, so many points belong to planar surfaces. The proposed method achieves good compression results in such man-made scenarios, and the detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we also demonstrate the value of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
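The plane-based compression idea in this abstract — replace the many points of a planar, man-made surface by a plane model plus 2D in-plane coordinates — can be sketched as follows. This is an illustrative reading, not the thesis's implementation; the synthetic scene and RANSAC tolerance are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic indoor-like scene: a planar "wall" plus scattered clutter.
wall = np.column_stack([rng.uniform(0, 4, 500),
                        rng.uniform(0, 3, 500),
                        np.zeros(500)])          # plane z = 0
clutter = rng.uniform(0, 4, (100, 3))
cloud = np.vstack([wall, clutter])

def ransac_plane(points, n_iters=200, tol=0.02, rng=rng):
    """Return (normal, d, inlier_mask) for the dominant plane n.x + d = 0."""
    best_mask, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

normal, d, inliers = ransac_plane(cloud)

# "Compress": keep 4 plane parameters plus each inlier's 2D coordinates in a
# basis spanning the plane, dropping one float per point.
u = np.cross(normal, [1.0, 0.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(normal, u)
coords_2d = (cloud[inliers] - (-d) * normal) @ np.column_stack([u, v])
print(f"{inliers.sum()} of {len(cloud)} points captured by one plane")
```

Each inlier now costs two floats instead of three, plus the four shared plane parameters; further gains would come from quantizing the in-plane coordinates.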
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Roure, Garcia Ferran. "Tools for 3D point cloud registration." Doctoral thesis, Universitat de Girona, 2017. http://hdl.handle.net/10803/403345.

Full text
Abstract:
In this thesis, we did an in-depth review of the state of the art of 3D registration, evaluating the most popular methods. Given the lack of standardization in the literature, we also proposed a nomenclature and a classification to unify the evaluation systems and make it possible to compare the different algorithms under the same criteria. The major contribution of the thesis is the Registration Toolbox, which consists of software and a database of 3D models. The software is a 3D registration pipeline written in C++ that allows researchers to try different methods, as well as to add new ones and compare them. In this pipeline, we not only implemented the most popular methods in the literature, but also added three new methods that improve the state of the art. The database provides a set of 3D models for carrying out the tests needed to validate the performance of the methods. Finally, we presented a new hybrid data structure focused specifically on nearest-neighbor search. We tested our proposal together with other data structures and obtained very satisfactory results, in many cases outperforming the best current alternatives. All tested structures are also available in our pipeline. This Toolbox is intended to be a useful tool for the whole community and is available to researchers under a Creative Commons license.
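A registration pipeline of this kind typically includes the classic point-to-point ICP loop: find closest-point correspondences with a k-d tree, solve for the best rigid transform (Kabsch), and iterate. A minimal sketch on synthetic data — not code from the Toolbox itself:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iters):
        _, idx = tree.query(cur)             # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# A model cloud and a slightly rotated + translated copy of it.
model = rng.normal(size=(300, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
scene = model @ Rz.T + np.array([0.1, -0.05, 0.05])

aligned = icp(scene, model)
rmse = np.sqrt(((aligned - model) ** 2).sum(axis=1).mean())
print(f"post-ICP RMSE: {rmse:.4f}")
```

Real pipelines add outlier rejection and a convergence test; this sketch only shows the correspondence/estimate/apply cycle.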
3

Tarcin, Serkan. "Fast Feature Extraction From 3d Point Cloud." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615659/index.pdf.

Full text
Abstract:
To teleoperate an unmanned vehicle, a rich set of information must be gathered from the surroundings. These systems use sensors that send high volumes of data, and processing the data on CPUs can be time-consuming. Similarly, the algorithms that consume the data may run slowly because of the amount of data. The solution is to preprocess the data taken from the sensors on the vehicle and to transmit only the necessary parts, or the results of the preprocessing. In this thesis, a 180-degree laser scanner at the front end of an unmanned ground vehicle (UGV) is tilted up and down on a horizontal axis, and point clouds are constructed from the surroundings. Instead of transmitting this data directly to the path planning or obstacle avoidance algorithms, a preprocessing stage is run. In this preprocessing, first the points belonging to the ground plane are detected and a simplified version of the ground is constructed; then the obstacles are detected. Finally, a simplified ground plane and simple primitive geometric shapes representing the obstacles are sent to the path planning algorithms instead of the whole point cloud.
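The preprocessing pipeline in this abstract (split off the ground, simplify it on a grid, reduce obstacles to primitive shapes) can be illustrated like this; the height threshold, 1 m grid, and axis-aligned bounding box are assumptions of the sketch, not the thesis's actual choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated scan: a slightly noisy ground surface plus one box-shaped obstacle.
ground = np.column_stack([rng.uniform(0, 10, 2000),
                          rng.uniform(-5, 5, 2000),
                          rng.normal(0.0, 0.01, 2000)])
box = np.column_stack([rng.uniform(4, 5, 300),
                       rng.uniform(1, 2, 300),
                       rng.uniform(0.0, 1.0, 300)])
cloud = np.vstack([ground, box])

# 1) Separate ground from obstacles by height above the assumed ground plane.
is_ground = np.abs(cloud[:, 2]) < 0.1
obstacles = cloud[~is_ground]

# 2) Simplified ground: mean height per 1 m x 1 m grid cell.
gpts = cloud[is_ground]
cells = np.floor(gpts[:, :2]).astype(int)
keys = (cells[:, 0] + 5) * 100 + (cells[:, 1] + 5)   # shift to non-negative ids
uniq, inv = np.unique(keys, return_inverse=True)
mean_h = np.zeros(len(uniq))
np.add.at(mean_h, inv, gpts[:, 2])
mean_h /= np.bincount(inv)

# 3) Each obstacle reduced to a primitive shape: an axis-aligned bounding box.
aabb_min, aabb_max = obstacles.min(axis=0), obstacles.max(axis=0)
```

The transmitted payload is then one height per grid cell plus a few box corners, instead of thousands of raw points.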
4

Forsman, Mona. "Point cloud densification." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39980.

Full text
Abstract:
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. The thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Squares Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of different scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see if they converge to the same result. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and a comparison of the standard deviation in the x- and y-directions. If a point is rejected, there is the option to try again with a larger template size, called Adaptive Template Size (ATS).
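The matching machinery above builds on cross-correlation. A brute-force zero-mean normalized cross-correlation (NCC) search — a much-simplified stand-in for the least-squares template matching and TNCC stages — can be written as:

```python
import numpy as np

rng = np.random.default_rng(3)

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Exhaustive NCC search; returns the (row, col) of the best match."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

image = rng.uniform(0, 1, (60, 60))
template = image[20:31, 35:46].copy()        # an 11x11 patch cut from the image
(row, col), score = match_template(image, template)
```

LSTM goes further by estimating an affine warp of the template via least squares, which gives sub-pixel matches; the NCC score above is what seeds that refinement.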
5

Gujar, Sanket. "Pointwise and Instance Segmentation for 3D Point Cloud." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1290.

Full text
Abstract:
The camera is the cheapest option for detecting or segmenting the environment of an autonomous vehicle in real time, but it does not provide depth information and is unreliable at night, in bad weather, and during tunnel flash-outs. In such situations the risk of an accident gets higher for autonomous cars driven by a camera. The industry has relied on LiDAR for the past decade to solve this problem and provide depth information about the environment, but LiDAR has its own shortcomings. Industry methods commonly project the LiDAR scan into an image and run detection and localization networks on it; however, LiDAR perceives obscurants in bad weather and is sensitive enough to detect snow, which makes it difficult to keep projection-based methods robust. We propose a novel pointwise and instance segmentation deep learning architecture for point clouds, focused on self-driving applications. The model depends only on LiDAR data, making it light-invariant and overcoming the camera's shortcomings in the perception stack. The pipeline takes advantage of both global and local/edge features of points in point clouds to generate high-level features. We also propose Pointer-Capsnet, an extension of CapsNet for small 3D point clouds.
6

Chen, Chen. "Semantics Augmented Point Cloud Sampling for 3D Object Detection." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26956.

Full text
Abstract:
3D object detection is an emerging topic in both industry and the research community. It aims at discovering objects of interest in 3D scenes and has a strong connection with many real-world scenarios, such as autonomous driving. Currently, many models have been proposed to detect potential objects from point clouds. Some methods attempt to model point clouds at the level of individual points and then perform detection with the acquired point-wise features; these are classified as point-based methods. However, we argue that the prevalent sampling algorithm for point-based models is sub-optimal: it involves too much potentially unimportant data and may lose information that is important for detecting objects, which can lead to a significant performance drop. This thesis improves the current sampling strategy for point-based models in the context of 3D detection. We propose recasting the sampling algorithm by incorporating semantic information to help identify data that is more beneficial for detection, obtaining a semantics-augmented sampling strategy. In particular, we introduce a two-phase augmentation of sampling. In the point feature learning phase, we propose semantics-guided farthest point sampling (S-FPS) to keep more informative foreground points. In the box prediction phase, we devise semantic balance sampling (SBS) to avoid redundant training on easily recognized instances. We evaluate the proposed strategy on the popular KITTI dataset and the large-scale nuScenes dataset. Extensive experiments show that our method lifts a point-based single-stage detector above all existing point-based models, achieving performance comparable to state-of-the-art two-stage methods.
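The intuition behind semantics-guided farthest point sampling can be sketched by scaling the usual FPS distance with a per-point foreground score, so sampling drifts toward semantically important points. This is a simplified reading of S-FPS with synthetic scores, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def s_fps(points, scores, k, gamma=1.0):
    """Farthest point sampling whose distances are weighted by
    scores**gamma, favouring points the semantic branch calls foreground.
    gamma = 0 recovers plain FPS."""
    n = len(points)
    selected = [int(np.argmax(scores))]      # start from the most "foreground" point
    dist = np.full(n, np.inf)
    for _ in range(k - 1):
        d = np.linalg.norm(points - points[selected[-1]], axis=1)
        dist = np.minimum(dist, d)           # distance to the nearest selected point
        selected.append(int(np.argmax(dist * scores ** gamma)))
    return np.array(selected)

# Toy scene: a dense "background" blob and a small "foreground" cluster.
background = rng.normal(0.0, 1.0, (500, 3))
foreground = rng.normal(5.0, 0.3, (50, 3))
points = np.vstack([background, foreground])
scores = np.concatenate([np.full(500, 0.1), np.full(50, 0.9)])

idx = s_fps(points, scores, k=32)
n_fg = (idx >= 500).sum()                    # how many samples hit the foreground
```

Plain FPS would spread the 32 samples over the whole extent; the score weighting concentrates them on the small foreground cluster.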
7

Dey, Emon Kumar. "Effective 3D Building Extraction from Aerial Point Cloud Data." Thesis, Griffith University, 2022. http://hdl.handle.net/10072/413311.

Full text
Abstract:
Building extraction is important for a wide range of applications, including smart city planning, disaster management, security, and cadastral mapping. This thesis mainly aims to present an effective data-driven strategy for building extraction using aerial Light Detection And Ranging (LiDAR) point cloud data. LiDAR data provides highly accurate three-dimensional (3D) positional information, and studies on building extraction using LiDAR data have therefore broadened in scope over time. Outliers, inharmonious input data behaviour, innumerable possible building structures, and heterogeneous environments are major challenges that need to be addressed for effective 3D building extraction using LiDAR data. Outliers can cause the extraction of erroneous roof planes, incorrect boundaries, and over-segmentation of the extracted buildings. Due to uneven point densities and heterogeneous building structures, small roof parts often remain undetected. Moreover, finding and using a realistic performance metric to evaluate the extracted buildings is another challenge. Inaccurate identification of sharp features, coplanar points, and boundary feature points often causes inaccurate roof plane segmentation and overall 3D outline generation for a building. To address these challenges, this thesis first proposes a robust variable point neighbourhood estimation method. Considering the specific scanline properties of aerial LiDAR data, the proposed method automatically estimates an optimal and realistic neighbourhood for each point, solving the shortcomings of existing fixed-neighbourhood methods in uneven or abrupt point densities. Using the estimated variable neighbourhood, a robust z-score and a distance-based outlier factor are calculated for each point in the input data.
Based on these two measurements, an effective outlier detection method is proposed which preserves more than 98% of inliers and removes outliers with better precision than existing state-of-the-art methods. Individual roof planes are then extracted robustly from the separated, outlier-free coplanar points based on the M-estimator SAmple Consensus (MSAC) plane-fitting algorithm. The proposed technique is capable of extracting small real roof planes while avoiding spurious roof planes caused by any remaining outliers. Individual buildings are then extracted precisely by grouping adjacent roof planes into clusters. Next, to assess the extracted buildings and individual roof plane boundaries, a realistic evaluation metric is proposed based on a new robust corner correspondence algorithm. The metric is defined as the average minimum distance davg from the extracted boundary points to their actual corresponding reference lines. It strictly follows the definition of a standard mathematical metric and addresses the shortcomings of existing metrics. In addition, during the evaluation, the proposed metric separately identifies the underlap and extralap areas of an extracted building. Furthermore, finding precise 3D feature points (e.g., fold and boundary points) is necessary for tracing the feature lines that describe a building outline. It is also important for accurate roof plane extraction and for establishing relationships between the correctly extracted planes, so as to facilitate a more robust 3D building extraction. Thus, this thesis presents a robust fold feature point extraction method based on the calculated normal of each individual point. A method to extract the feature points representing boundaries is then developed, based on the distance from a point to the calculated mean of its estimated neighbours. In terms of accuracy, the proposed methods show more than 90% F1-scores on the generated ground truth data.
Finally, machine learning techniques are applied to circumvent the problems of existing rule-based approaches to roof feature point extraction and classification (e.g., selecting manual thresholds for different parameters). Seven effective geometric and statistical features are calculated for each point to train and test the machine learning classifiers using appropriate ground truth data. Four primary classes of building roof point clouds are considered, and promising results are achieved for each class, confirming the competitive performance of the classification against state-of-the-art techniques. At the end of the thesis, using the classified roof feature points, a more robust plane segmentation algorithm is demonstrated for extracting the roof planes of individual buildings.
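The distance-based outlier factor combined with a robust z-score can be illustrated with a fixed k-nearest-neighbour neighbourhood (the thesis's variable, scanline-aware neighbourhood is omitted here); the 3.5 cutoff is the conventional modified z-score threshold, not necessarily the thesis's value:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

# A planar roof patch plus a few gross outliers floating above it.
roof = np.column_stack([rng.uniform(0, 10, 1000),
                        rng.uniform(0, 10, 1000),
                        rng.normal(0, 0.01, 1000)])
noise = np.column_stack([rng.uniform(0, 10, 20),
                         rng.uniform(0, 10, 20),
                         rng.uniform(2.0, 4.0, 20)])
cloud = np.vstack([roof, noise])

# Outlier factor: mean distance to the k nearest neighbours, turned into a
# robust (median/MAD) z-score; points above the usual 3.5 cutoff are dropped.
k = 8
tree = cKDTree(cloud)
dists, _ = tree.query(cloud, k=k + 1)       # column 0 is the point itself
factor = dists[:, 1:].mean(axis=1)
med = np.median(factor)
mad = np.median(np.abs(factor - med)) + 1e-12
robust_z = 0.6745 * (factor - med) / mad
keep = robust_z < 3.5
print(f"kept {keep.sum()} of {len(cloud)} points")
```

The median/MAD normalisation keeps the threshold meaningful even when the outliers themselves inflate the spread, which an ordinary mean/standard-deviation z-score would not.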
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
8

Eckart, Benjamin. "Compact Generative Models of Point Cloud Data for 3D Perception." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1089.

Full text
Abstract:
One of the most fundamental tasks for any robotics application is the ability to adequately assimilate and respond to incoming sensor data. In the case of 3D range sensing, modern-day sensors generate massive quantities of point cloud data that strain available computational resources. Dealing with large quantities of unevenly sampled 3D point data is a great challenge for many fields, including autonomous driving, 3D manipulation, augmented reality, and medical imaging. This thesis explores how carefully designed statistical models for point cloud data can facilitate, accelerate, and unify many common tasks in the area of range-based 3D perception. We first establish a novel family of compact generative models for 3D point cloud data, offering them as an efficient and robust statistical alternative to traditional point-based or voxel-based data structures. We then show how these statistical models can be utilized toward the creation of a unified data processing architecture for tasks such as segmentation, registration, visualization, and mapping. In complex robotics systems, it is common for various concurrent perceptual processes to have separate low-level data processing pipelines. Besides introducing redundancy, these processes may perform their own data processing in conflicting or ad hoc ways. To avoid this, tractable data structures and models need to be established that share common perceptual processing elements. Additionally, given that many robotics applications involving point cloud processing are size, weight, and power-constrained, these models and their associated algorithms should be deployable in low-power embedded systems while retaining acceptable performance. Given a properly flexible and robust point processor, therefore, many low-level tasks could be unified under a common architectural paradigm and greatly simplify the overall perceptual system. 
In this thesis, a family of compact generative models is introduced for point cloud data based on hierarchical Gaussian Mixture Models. Using recursive, data-parallel variants of the Expectation Maximization algorithm, we construct high-fidelity statistical and hierarchical point cloud models that compactly represent the data as a 3D generative probability distribution. In contrast to raw points or voxel-based decompositions, our proposed statistical model provides a better theoretical footing for robustly dealing with noise, constructing maximum likelihood methods, reasoning probabilistically about free space, utilizing spatial sampling techniques, and performing gradient-based optimizations. Further, the construction of the model as a spatial hierarchy allows for Octree-like logarithmic time access. One challenge compared to previous methods, however, is that our model-based approach incurs a potentially high creation cost. To mitigate this problem, we leverage data parallelism in order to design models well-suited for GPU acceleration, allowing them to run at real-time rates for many time-critical applications. We show how our models can facilitate various 3D perception tasks, demonstrating state-of-the-art performance in geometric segmentation, registration, dynamic occupancy map creation, and 3D visualization.
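The heart of such a generative model — fitting a Gaussian mixture to a point cloud with Expectation Maximization so that a few parameters summarise many points — can be sketched with a flat, isotropic mixture. The hierarchical, GPU-parallel construction from the thesis is deliberately left out:

```python
import numpy as np

rng = np.random.default_rng(6)

# Point cloud concentrated around three parts of a scene.
centers = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 5.0]])
cloud = np.vstack([c + rng.normal(0, 0.2, (400, 3)) for c in centers])

def fit_gmm(x, k, n_iters=50, rng=rng):
    """Plain EM for an isotropic Gaussian mixture (a much-simplified,
    single-level stand-in for hierarchical GMMs)."""
    n = len(x)
    # Farthest-point initialisation keeps the seeds well spread out.
    mu = [x[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = ((x[:, None] - np.array(mu)[None]) ** 2).sum(-1).min(axis=1)
        mu.append(x[np.argmax(d2)])
    mu = np.array(mu)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iters):
        # E-step: responsibilities r[i, j] = p(component j | point i).
        sq = ((x[:, None, :] - mu[None]) ** 2).sum(-1)          # (n, k)
        log_p = -0.5 * sq / var - 1.5 * np.log(var) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and isotropic variances.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r.T @ x) / nk[:, None]
        sq = ((x[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(axis=0) / (3 * nk) + 1e-9
    return mu, var, pi

mu, var, pi = fit_gmm(cloud, k=3)
# 1200 points x 3 floats are summarised by 3 x (3 + 1 + 1) parameters.
```

Sampling from the fitted mixture regenerates a cloud with the same spatial statistics, which is what makes the representation generative rather than merely compressive.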
9

Oropallo, William Edward Jr. "A Point Cloud Approach to Object Slicing for 3D Printing." Thesis, University of South Florida, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10751757.

Full text
Abstract:

Various industries have embraced 3D printing for manufacturing on-demand, custom printed parts. However, 3D printing requires intelligent data processing and algorithms to go from CAD model to machine instructions. One of the most crucial steps in the process is the slicing of the object. Most 3D printers build parts by accumulating material layer by layer. 3D printing software needs to calculate these layers for manufacturing by slicing a model and calculating the intersections. Finding exact solutions for the intersections on the original model is mathematically complicated and computationally demanding. A tessellation preprocessing stage has become the standard practice for slicing models. Calculating intersections on a tessellation of the original model is computationally simple but can introduce inaccuracies and errors that can ruin the final print.

This dissertation shows that a point cloud approach to preprocessing and slicing models is robust and accurate. The point cloud approach to object slicing avoids the complexities of directly slicing models while evading the error-prone tessellation stage. An algorithm developed for this dissertation generates point clouds and slices models within a tolerance. The algorithm uses the original NURBS model and converts the model into a point cloud, based on layer thickness and accuracy requirements. The algorithm then uses a gridding structure to calculate where intersections happen and fit B-spline curves to those intersections.

This algorithm finds accurate intersections and can ignore certain anomalies and errors from the modeling process. The primary point evaluation is stable and computationally inexpensive. The algorithm provides an alternative to the challenges of both the direct and the tessellated slicing methods that have been the focus of the 3D printing industry.
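The slice-and-fit idea can be illustrated by binning points into z-layers and fitting a per-layer primitive. Here a synthetic cylinder stands in for the NURBS-derived point cloud, and a mean-radius circle fit stands in for the B-spline fit:

```python
import numpy as np

rng = np.random.default_rng(7)

# Dense point cloud sampled on a cylinder surface (radius 1, height 10),
# with a little radial noise standing in for point-generation tolerance.
n = 20000
theta = rng.uniform(0, 2 * np.pi, n)
r = 1.0 + rng.normal(0, 0.005, n)
z = rng.uniform(0, 10, n)
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

# Slice into layers of fixed thickness, as a printer deposits material.
layer_h = 0.5
layer_ids = np.floor(cloud[:, 2] / layer_h).astype(int)

# Fit one contour primitive per slice; a circle's mean radius here,
# a B-spline curve through the slice points in the dissertation.
radii = []
for lid in range(20):
    pts = cloud[layer_ids == lid]
    radii.append(np.hypot(pts[:, 0], pts[:, 1]).mean())
radii = np.array(radii)
```

Because every slice works on points evaluated from the original model, no tessellation error is ever introduced; accuracy is controlled by the point density and the layer thickness.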

10

Lev, Hoang Justin. "A Study of 3D Point Cloud Features for Shape Retrieval." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM040.

Full text
Abstract:
With the improvement and proliferation of 3D sensors, falling prices, and growing computational power, the use of 3D data has intensified over the last few years. The 3D point cloud is one type of 3D representation among others. This particular representation is the direct output of sensors, accurate and simple. As a non-regular structure consisting of an unordered list of points, analysis of point clouds is challenging, hence their only recent adoption. This PhD thesis focuses on the use of the 3D point cloud representation for three-dimensional shape analysis. More particularly, the geometrical shape is studied through the curvature of the object. Descriptors describing the distribution of the principal curvatures are proposed: Principal Curvature Point Cloud and Multi-Scale Principal Curvature Point Cloud. Global Local Point Cloud is another descriptor using curvature in combination with other features. These three descriptors are robust to typical 3D scan errors such as noisy data and occlusion. They outperform state-of-the-art algorithms in the instance retrieval task with more than 90% accuracy. The thesis also studies deep learning on 3D point clouds, which emerged during the three years of this PhD. The first approach tested used a curvature-based descriptor as the input of a multi-layer perceptron network. Its accuracy cannot match state-of-the-art performance. However, the experiments show that ModelNet, the standard dataset for 3D shape classification, is not a good picture of reality: the dataset does not reflect the curvature richness of scans of real objects. Ultimately, a new neural network architecture is proposed. Inspired by state-of-the-art deep learning networks, Multiscale PointNet computes features at multiple scales and combines them all to describe an object.
Still under development, its performance remains to be improved. In summary, tackling the challenging use of 3D point clouds as well as the quick evolution of the field, the thesis contributes to the state of the art in three major aspects: (i) the design of new algorithms relying on the geometrical curvature of objects for the instance retrieval task; (ii) a study showing the need to build a new standard classification dataset with more realistic objects; (iii) the proposition of a new deep neural network for 3D point cloud analysis.
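A standard curvature proxy underlying descriptors of this family is the "surface variation" from local PCA: the smallest eigenvalue of a neighbourhood's covariance relative to the eigenvalue sum. This is background machinery only, not the thesis's descriptors:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(8)

def surface_variation(points, k=20):
    """Per-point lambda_min / (lambda_1 + lambda_2 + lambda_3) computed from
    the covariance of the k nearest neighbours: ~0 on planes, larger on
    curved or sharp regions."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    out = np.empty(len(points))
    for i, nb in enumerate(idx):
        lam = np.sort(np.linalg.eigvalsh(np.cov(points[nb].T)))
        out[i] = lam[0] / lam.sum()
    return out

# Flat patch vs. a small sphere: the proxy separates them clearly.
plane = np.column_stack([rng.uniform(-1, 1, (800, 2)), np.zeros((800, 1))])
v = rng.normal(size=(800, 3))
sphere = 0.2 * v / np.linalg.norm(v, axis=1, keepdims=True) + np.array([3.0, 0.0, 0.0])

sv_plane = surface_variation(plane).mean()
sv_sphere = surface_variation(sphere).mean()
```

Full principal curvatures additionally require consistent normals and a quadric or tensor fit per neighbourhood; the eigenvalue ratio above is the cheap first-order version that point cloud descriptors often build on.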
11

Kulkarni, Amey S. "Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1370.

Full text
Abstract:
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it is easier to make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, such as understanding signboards, classifying traffic lights, and planning paths around dynamic obstacles. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying a point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they provide the cues the ego-vehicle uses to plan its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations such as range images and regular grids, this work uses a modern representation of LiDAR scans based on permutohedral lattices, which makes it easy to represent unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to motion segmentation. The network takes two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn which features constitute motion. We trained and tested our algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
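A purely geometric baseline for motion segmentation on two sequential scans — flag a point as moving when the previous scan has nothing nearby — makes clear what a learned model must improve on (it under-detects motion inside self-similar regions, for instance). All magnitudes below are invented:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)

# Two consecutive "scans": static structure plus one object that moved 1.5 m.
static = rng.uniform(-10, 10, (2000, 3))
car_t0 = rng.uniform(0, 2, (200, 3)) + np.array([4.0, 0.0, 0.0])
car_t1 = car_t0 + np.array([1.5, 0.0, 0.0])

scan_t0 = np.vstack([static, car_t0])
scan_t1 = np.vstack([static, car_t1])

# Classify a point of scan_t1 as moving if nothing in scan_t0 lies nearby.
tree = cKDTree(scan_t0)
dist, _ = tree.query(scan_t1)
moving = dist > 0.3
```

The learned approach in the thesis replaces this hand-set distance threshold with features convolved over both clouds, so it can also catch motion smaller than the sensor's point spacing.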
Styles: APA, Harvard, Vancouver, ISO, etc.
12

He, Linbo. "Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data : Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157705.

Full text of the source
Abstract:
Semantic segmentation is a key approach to comprehensive image data analysis. It can be applied to analyze 2D images, videos, and even point clouds that contain 3D data points. On the first two problems, CNNs have achieved remarkable progress, but on point cloud segmentation, the results are less satisfactory due to challenges such as limited memory resources and difficulties in 3D point annotation. One of the research studies carried out by the Computer Vision Lab at Linköping University aimed to ease the semantic segmentation of 3D point clouds. The idea is that by first projecting 3D data points to 2D space and then focusing only on the analysis of 2D images, we can reduce the overall workload for the segmentation process as well as exploit the existing well-developed 2D semantic segmentation techniques. In order to improve the performance of CNNs for 2D semantic segmentation, the study used input data derived from different modalities. However, how different modalities can be optimally fused is still an open question. Based on the above-mentioned study, this thesis aims to improve the multistream framework architecture. More concretely, we investigate how different singlestream architectures impact the multistream framework with a given fusion method, and how different fusion methods contribute to the overall performance of a given multistream framework. As a result, our proposed fusion architecture outperformed all the investigated traditional fusion methods. Along with the best singlestream candidate and a few additional training techniques, our final proposed multistream framework obtained a relative gain of 7.3% mIoU compared to the baseline on the Semantic3D point cloud test set, increasing the ranking from 12th to 5th position on the benchmark leaderboard.
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Downham, Alexander David. "True 3D Digital Holographic Tomography for Virtual Reality Applications." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1513204001924421.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Trowbridge, Michael Aaron. "Autonomous 3D Model Generation of Orbital Debris using Point Cloud Sensors." Thesis, University of Colorado at Boulder, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1558774.

Full text of the source
Abstract:

A software prototype for autonomous 3D scanning of uncooperatively rotating orbital debris using a point cloud sensor is designed and tested. The software successfully generated 3D models under conditions that simulate some on-orbit challenges, including relative motion between observer and target, inconsistent target visibility and a target with more than one plane of symmetry. The model scanning software performed well against an irregular object with one plane of symmetry but was weak against objects with two planes of symmetry.

The suitability of point cloud sensors and algorithms for space is examined. Terrestrial Graph SLAM is adapted for an uncooperatively rotating orbital debris scanning scenario. A joint EKF attitude estimate and shape similarity loop closure heuristic for orbital debris is derived and experimentally tested. The binary Extended Fast Point Feature Histogram (EFPFH) is defined and analyzed as a binary quantization of the floating point EFPFH. Both the binary and floating point EFPFH are experimentally tested and compared as part of the joint loop closure heuristic.
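The binary descriptor described above is a binary quantization of a floating-point histogram. One common way to realize such a quantization (illustrative only, not necessarily the thesis' exact rule) is to set each bit to 1 when the corresponding bin exceeds the mean bin value, after which descriptors can be compared with cheap Hamming distances:

```python
import numpy as np

def binarize_descriptor(hist):
    """Quantize a floating-point histogram descriptor to bits:
    bit i = 1 iff bin i is above the mean bin value."""
    return (hist > hist.mean()).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(a != b))

h1 = np.array([0.1, 0.4, 0.0, 0.5])   # mean 0.25 -> bits 0 1 0 1
h2 = np.array([0.3, 0.1, 0.2, 0.6])   # mean 0.3  -> bits 0 0 0 1
print(binarize_descriptor(h1), hamming(binarize_descriptor(h1), binarize_descriptor(h2)))
# [0 1 0 1] 1
```

The appeal of the binary form for loop closure is exactly this: matching reduces to XOR and popcount, at the cost of discarding bin magnitudes.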

Styles: APA, Harvard, Vancouver, ISO, etc.
15

Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Hirschmüller, Korbinian. "Development and Evaluation of a 3D Point Cloud Based Attitude Determination System." Thesis, Luleå tekniska universitet, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65910.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Blahož, Vladimír. "Vizualizace 3D scény pro ovládání robota." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236501.

Full text of the source
Abstract:
This thesis presents possibilities of fusing 3D point clouds with true-color digital video for use in robot teleoperation. Advantages of a 3D environment visualization combining data from more than one sensor, tools to facilitate such data fusion, as well as two alternative practical implementations of combined data visualization are discussed. The first proposed alternative estimates the view frustum of the robot's camera and maps real color video onto a semi-transparent polygon placed in the view frustum. The second option is a direct coloring of the point cloud data, creating a colored point cloud that represents color as well as depth information about the environment.
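The "direct coloring of the point cloud" alternative boils down to projecting each 3D point into the camera image with a pinhole model and sampling the pixel color. A minimal sketch, where the intrinsics (`fx`, `fy`, `cx`, `cy`) and the tiny test image are made up for illustration:

```python
import numpy as np

def color_points(points_cam, image, fx, fy, cx, cy):
    """Assign an RGB color to each 3D point given in the camera frame
    by pinhole projection; points behind the camera or outside the
    image keep a default gray color."""
    h, w, _ = image.shape
    colors = np.full((len(points_cam), 3), 128, dtype=np.uint8)
    for i, (x, y, z) in enumerate(points_cam):
        if z <= 0:                       # behind the camera
            continue
        u = int(round(fx * x / z + cx))  # pixel column
        v = int(round(fy * y / z + cy))  # pixel row
        if 0 <= u < w and 0 <= v < h:
            colors[i] = image[v, u]
    return colors

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 2] = (255, 0, 0)                          # one red pixel
pts = np.array([[0.5, -0.5, 1.0], [0.0, 0.0, -1.0]])
print(color_points(pts, img, fx=2, fy=2, cx=1, cy=2))
```

In a real teleoperation system, occlusion checks and camera-to-lidar extrinsics would be added before this per-point lookup.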
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Burwell, Claire Leonora. "The effect of 2D vs. 3D visualisation on lidar point cloud analysis tasks." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37950.

Full text of the source
Abstract:
The exploitation of human depth perception is not uncommon in visual analysis of data; medical imagery and geological analysis already rely on stereoscopic 3D visualisation. In contrast, 3D scans of the environment are usually represented on a flat, 2D computer screen, although there is potential to take advantage of both (a) the spatial depth that is offered by the point cloud data, and (b) our ability to see stereoscopically. This study explores whether a stereo 3D analysis environment would add value to visual lidar tasks, compared to the standard 2D display. Forty-six volunteers, all with good stereovision and varying lidar knowledge, viewed lidar data in either 2D or in 3D, on a 4m x 2.4m screen. The first task required 2D and 3D measurement of linear lengths of a planar and a volumetric feature, using an interaction device for point selection. Overall, there was no significant difference in the spread of 2D and 3D measurement distributions for both of the measured features. The second task required interpretation of ten features from individual points. These were highlighted across two areas of interest - a flat, suburban area and a valley slope with a mixture of features. No classification categories were offered to the participant and answers were expressed verbally. Two of the ten features (chimney and cliff-face) were interpreted with a better degree of accuracy using the 3D method and the remaining features had no difference in 2D and 3D accuracy. Using the experiment’s data processing and visualisation approaches, results suggest that stereo 3D perception of lidar data does not add value to manual linear measurement. The interpretation results indicate that immersive stereo 3D visualisation does improve the accuracy of manual point cloud classification for certain features. The findings contribute to wider discussions in lidar processing, geovisualisation, and applied psychology.
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Kudryavtsev, Andrey. "3D Reconstruction in Scanning Electron Microscope : from image acquisition to dense point cloud." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD050/document.

Full text of the source
Abstract:
The goal of this work is to obtain a 3D model of an object from multiple views acquired with a Scanning Electron Microscope (SEM). For this, the technique of 3D reconstruction is used, which is a well-known application of computer vision. However, due to the specificities of image formation in the SEM, and at the microscale in general, the existing techniques are not applicable to SEM images. The main reasons for this are the parallel projection and the problems of calibrating the SEM as a camera. As a result, in this work we developed a new algorithm allowing 3D reconstruction in the SEM while taking these issues into account. Moreover, as the reconstruction is obtained through camera autocalibration, there is no need for a calibration object. The final output of the presented techniques is a dense point cloud, possibly containing millions of points, corresponding to the surface of the object.
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Nurunnabi, Abdul Awal Md. "Robust statistical approaches for feature extraction in laser scanning 3D point cloud data." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/543.

Full text of the source
Abstract:
Three dimensional point cloud data acquired from mobile laser scanning system commonly contain outliers and/or noise. The presence of outliers and noise means most of the frequently used methods for feature extraction produce inaccurate and non-robust results. We investigate the problems of outliers and how to accommodate them for automatic robust feature extraction. This thesis develops algorithms for outlier detection, point cloud denoising, robust feature extraction, segmentation and ground surface extraction.
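A standard robust-statistics building block for the kind of outlier accommodation discussed above is the median absolute deviation (MAD): a point whose residual from a fitted local surface exceeds a few MADs is declared an outlier. A sketch on 1D residuals (the cutoff `k` and the sample values are illustrative, not taken from the thesis):

```python
import numpy as np

def mad_outliers(residuals, k=2.5):
    """Flag residuals as outliers using the robust z-score
    0.6745 * |r - median| / MAD > k  (the 0.6745 factor makes the MAD
    consistent with the standard deviation under a Gaussian model)."""
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    rz = 0.6745 * np.abs(r - med) / mad
    return rz > k

res = [0.01, -0.02, 0.00, 0.03, -0.01, 5.0]   # last residual is a gross error
print(mad_outliers(res))
# [False False False False False  True]
```

Unlike mean/standard-deviation tests, the median and MAD are barely affected by the gross error itself, which is why robust estimators are preferred for laser scanning data.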
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Diskin, Yakov. "Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1429293660.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Grankvist, Ola. "Recognition and Registration of 3D Models in Depth Sensor Data." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131452.

Full text of the source
Abstract:
Object recognition is the art of localizing predefined objects in image sensor data. In this thesis a depth sensor was used, which has the benefit that the 3D pose of the object can be estimated. This has applications in e.g. automatic manufacturing, where a robot picks up parts or tools with a robot arm. This master thesis presents an implementation and an evaluation of a system for object recognition of 3D models in depth sensor data. The system uses several depth images rendered from a 3D model and describes their characteristics using so-called feature descriptors. These are then matched with the descriptors of a scene depth image to find the 3D pose of the model in the scene. The pose estimate is then refined iteratively using a registration method. Different descriptors and registration methods are investigated. One of the main contributions of this thesis is that it compares two different types of descriptors, local and global, which has seen little attention in research. This is done for two different scene scenarios, and for different types of objects and depth sensors. The evaluation shows that global descriptors are fast and robust for objects with a smooth visible surface, whereas the local descriptors perform better for larger objects in clutter and occlusion. This thesis also presents a novel global descriptor, the CESF, which is observed to be more robust than other global descriptors. As for the registration methods, the ICP is shown to perform most accurately and ICP point-to-plane to be the most robust.
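The registration refinement mentioned above, ICP in its point-to-point form, alternates two steps: match each model point to its nearest scene point, then solve for the rigid transform minimizing squared distances. The inner solve is the classical Kabsch/SVD solution; one such solve, assuming correspondences are already known for brevity:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t
    (Kabsch algorithm via SVD). This is the inner solve of one
    point-to-point ICP iteration, given correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# synthetic check: rotate + translate a random cloud, then recover the motion
rng = np.random.default_rng(0)
src = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))  # True True
```

The point-to-plane variant the thesis finds more robust replaces the squared point distances with distances along the destination normals, which converges faster on locally planar surfaces.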
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Digne, Julie. "Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00610432.

Full text of the source
Abstract:
Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner's internal de-noising process and the offsets between different scans. This thesis, working on results from high precision scans, adopts the somewhat extreme conservative position of not losing or altering any raw sample throughout the whole processing pipeline, and of attempting to visualize them all. Indeed, it is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high precision data can capture the slightest surface variation, any smoothing and any sub-sampling can cause the loss of textural detail. The thesis attempts to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points made of 300 different scan sweeps and more. Two major problems are addressed. The first one is the orientation of the complete raw point set, and the building of a high precision mesh. The second one is the correction of the tiny scan misalignments which can cause strong high frequency aliasing and completely hamper a direct visualization. The second development of the thesis is a general low-high frequency decomposition algorithm for any point cloud. Thus classic image analysis tools, the level set tree and the MSER representations, are extended to meshes, yielding an intrinsic mesh segmentation method. The underlying mathematical development focuses on an analysis of a half dozen discrete differential operators acting on raw point clouds which have been proposed in the literature.
By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with the mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. By this scale space all of the above mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
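The scale space built from a discrete mean-curvature operator can be imitated, very roughly, by iterated k-nearest-neighbor Laplacian smoothing: each point moves toward the barycenter of its neighbors, which on a sampled smooth surface approximates motion along the mean-curvature direction. This is only a crude analogue of the operator analyzed in the thesis; the neighborhood size, step and toy data below are invented.

```python
import numpy as np

def knn_laplacian_step(pts, k=4, step=0.5):
    """Move each point a fraction `step` toward the barycenter of its
    k nearest neighbors: a crude discrete analogue of one
    mean-curvature-motion time step on a point cloud."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # exclude the point itself
    idx = np.argsort(d, axis=1)[:, :k]       # k nearest neighbors
    bary = pts[idx].mean(axis=1)
    return pts + step * (bary - pts)

# noisy planar circle: one smoothing step should shrink the radial noise
ang = np.linspace(0, 2 * np.pi, 60, endpoint=False)
rng = np.random.default_rng(1)
r = 1 + 0.05 * rng.standard_normal(60)
pts = np.c_[r * np.cos(ang), r * np.sin(ang), np.zeros(60)]
smoothed = knn_laplacian_step(pts, k=4)
print(np.std(np.linalg.norm(smoothed, axis=1)) < np.std(np.linalg.norm(pts, axis=1)))
```

Iterating this step produces coarser and coarser versions of the surface, which is the sense in which such operators define a numerical scale space.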
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Cheng, Huaining. "Orthogonal Moment-Based Human Shape Query and Action Recognition from 3D Point Cloud Patches." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1452160221.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Al, Hakim Ezeddin. "3D YOLO: End-to-End 3D Object Detection Using Point Clouds." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234242.

Full text of the source
Abstract:
For safe and reliable driving, it is essential that an autonomous vehicle can accurately perceive the surrounding environment. Modern sensor technologies used for perception, such as LiDAR and RADAR, deliver a large set of 3D measurement points known as a point cloud. There is a huge need to interpret the point cloud data to detect other road users, such as vehicles and pedestrians. Many research studies have proposed image-based models for 2D object detection. This thesis takes it a step further and aims to develop a LiDAR-based 3D object detection model that operates in real-time, with emphasis on autonomous driving scenarios. We propose 3D YOLO, an extension of YOLO (You Only Look Once), which is one of the fastest state-of-the-art 2D object detectors for images. The proposed model takes point cloud data as input and outputs 3D bounding boxes with class scores in real-time. Most of the existing 3D object detectors use hand-crafted features, while our model follows the end-to-end learning fashion, which removes manual feature engineering. The 3D YOLO pipeline consists of two networks: (a) Feature Learning Network, an artificial neural network that transforms the input point cloud to a new feature space; (b) 3DNet, a novel convolutional neural network architecture based on YOLO that learns the shape description of the objects. Our experiments on the KITTI dataset show that 3D YOLO has high accuracy and outperforms the state-of-the-art LiDAR-based models in efficiency. This makes it a suitable candidate for deployment in autonomous vehicles.
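Feature-learning front ends like the one described typically begin by discretizing the unordered point cloud into a regular grid so that convolutions can be applied. A minimal binary occupancy-grid version of that first step (grid extents, resolution and the toy points are invented, and real pipelines keep per-voxel features rather than a single bit):

```python
import numpy as np

def voxelize(points, origin, voxel_size, dims):
    """Build a binary occupancy grid: a cell is 1 iff at least one
    point falls inside it. `dims` = (nx, ny, nz) grid size."""
    grid = np.zeros(dims, dtype=np.uint8)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    grid[tuple(idx[inside].T)] = 1       # scatter points into cells
    return grid

pts = np.array([[0.2, 0.2, 0.2], [1.7, 0.1, 0.3], [9.0, 9.0, 9.0]])
g = voxelize(pts, origin=np.zeros(3), voxel_size=1.0, dims=(4, 4, 4))
print(g.sum(), g[0, 0, 0], g[1, 0, 0])  # 2 1 1  (third point falls outside)
```

The resulting tensor can then be fed to a 3D convolutional backbone that regresses boxes and class scores.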
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Dahlin, Johan. "3D Modeling of Indoor Environments." Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93999.

Full text of the source
Abstract:
With the aid of modern sensors it is possible to create models of buildings. These sensors typically generate 3D point clouds and, in order to increase interpretability and usability, these point clouds are often translated into 3D models. In this thesis a way of translating a 3D point cloud into a 3D model is presented. The basic functionality is implemented using Matlab. The geometric model consists of floors, walls and ceilings. In addition, doors and windows are automatically identified and integrated into the model. The resulting model also has an explicit representation of the topology between entities of the model. The topology is represented as a graph, and to do this GraphML is used. The graph is opened in a graph editing program called yEd. The result is a 3D model that can be plotted in Matlab and a graph describing the connectivity between entities. The GraphML file is automatically generated in Matlab. An interface between Matlab and yEd allows the user to choose which rooms should be plotted.
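The topology graph mentioned above is serialized as GraphML, which is plain XML that yEd can open. A minimal writer for a room-connectivity graph, sketched in Python rather than the thesis' Matlab, with invented room names:

```python
import xml.etree.ElementTree as ET

def rooms_to_graphml(rooms, doors):
    """Serialize rooms (nodes) and door connections (edges) as a
    minimal GraphML document, the format read by yEd."""
    root = ET.Element("graphml", xmlns="http://graphml.graphdrawing.org/xmlns")
    graph = ET.SubElement(root, "graph", id="G", edgedefault="undirected")
    for r in rooms:
        ET.SubElement(graph, "node", id=r)
    for i, (a, b) in enumerate(doors):
        ET.SubElement(graph, "edge", id=f"e{i}", source=a, target=b)
    return ET.tostring(root, encoding="unicode")

xml_text = rooms_to_graphml(["hall", "kitchen", "office"],
                            [("hall", "kitchen"), ("hall", "office")])
print(xml_text)
```

Keeping topology in a graph file separate from the geometry is what lets a graph editor answer connectivity queries (which rooms are reachable from the hall?) without touching the 3D model.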
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Hammoudi, Karim. "Contributions to the 3D city modeling : 3D polyhedral building model reconstruction from aerial images and 3D facade modeling from terrestrial 3D point cloud and images." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00682442.

Full text of the source
Abstract:
The aim of this work is to develop research on 3D building modeling. In particular, aerial-based 3D building reconstruction has been a well-developed research topic since the 1990s. However, it is necessary to pursue this research since current approaches for massive 3D building reconstruction (although efficient) still encounter problems of generalization, coherency and accuracy. Besides, the recent development of street-level acquisition systems such as Mobile Mapping Systems opens new perspectives for improvements in building modeling, in the sense that terrestrial data (very dense and accurate) can be exploited more effectively (in comparison to aerial investigation) to enrich building models at the facade level (e.g., geometry, texturing). Hence, aerial-based and terrestrial-based building modeling approaches are proposed individually. At the aerial level, we describe a direct and featureless approach for simple polyhedral building reconstruction from a set of calibrated aerial images. At the terrestrial level, several approaches that essentially describe a 3D urban facade modeling pipeline are proposed, namely street point cloud segmentation and classification, geometric modeling of urban facades, and occlusion-free facade texturing.
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Houshiar, Hamidreza [Verfasser], Andreas [Gutachter] Nüchter, and Claus [Gutachter] Brenner. "Documentation and mapping with 3D point cloud processing / Hamidreza Houshiar ; Gutachter: Andreas Nüchter, Claus Brenner." Würzburg : Universität Würzburg, 2017. http://d-nb.info/1127528823/34.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Houshiar, Hamidreza [Verfasser], Andreas [Gutachter] Nüchter, and Claus [Gutachter] Brenner. "Documentation and mapping with 3D point cloud processing / Hamidreza Houshiar ; Gutachter: Andreas Nüchter, Claus Brenner." Würzburg : Universität Würzburg, 2017. http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-144493.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Fucili, Mattia. "3D object detection from point clouds with dense pose voters." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17616/.

Full text of the source
Abstract:
Object recognition has always been a challenging task for Computer Vision. It finds application in many fields, mainly in industry, for example to allow a robot to locate the objects to grasp. In recent decades such tasks have found new ways of being accomplished thanks to the rediscovery of Neural Networks, in particular Convolutional Neural Networks. This type of network has achieved excellent results in many applications for object recognition and classification. The trend now is to use such networks in the automotive industry as well, in the attempt to make the dream of self-driving cars come true. There are many important works on recognizing cars from images. In this thesis we present our Convolutional Neural Network architecture for recognizing cars and their position in space, using only lidar input. Storing the information about the bounding box around the car at the point level ensures a good prediction even in situations where the cars are occluded. Tests are run on the dataset most widely used for the recognition of cars and pedestrians in autonomous driving applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Schubert, Stefan. "Optimierter Einsatz eines 3D-Laserscanners zur Point-Cloud-basierten Kartierung und Lokalisierung im In- und Outdoorbereich." Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-161415.

Full text of the source
Abstract:
Mapping and localization of a mobile robot in its environment are important prerequisites for its autonomy. This thesis investigates the use of a 3D laser scanner to accomplish these tasks. Through the optimized arrangement of a rotating 2D laser scanner, high-resolution regions are defined. In addition, mapping and localization at standstill are carried out with the help of ICP. In the discussion on improving the motion estimate, a method for localization during motion with 3D scans is also presented. The presented algorithms are evaluated through experiments with real hardware.
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Megahed, Fadel M. "The Use of Image and Point Cloud Data in Statistical Process Control." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26511.

Full text of the source
Abstract:
The volume of data acquired in production systems continues to expand. Emerging imaging technologies, such as machine vision systems (MVSs) and 3D surface scanners, diversify the types of data being collected, further pushing data collection beyond discrete dimensional data. These large and diverse datasets increase the challenge of extracting useful information. Unfortunately, industry still relies heavily on traditional quality methods that are limited to fault detection, which fails to consider important diagnostic information needed for process recovery. Modern measurement technologies should spur the transformation of statistical process control (SPC) to provide practitioners with additional diagnostic information. This dissertation focuses on how MVSs and 3D laser scanners can be further utilized to meet that goal. More specifically, this work: 1) reviews image-based control charts while highlighting their advantages and disadvantages; 2) integrates spatiotemporal methods with digital image processing to detect process faults and estimate their location, size, and time of occurrence; and 3) shows how point cloud data (3D laser scans) can be used to detect and locate unknown faults in complex geometries. Overall, the research goal is to create new quality control tools that utilize high density data available in manufacturing environments to generate knowledge that supports decision-making beyond just indicating the existence of a process issue. This allows industrial practitioners to have a rapid process recovery once a process issue has been detected, and consequently reduce the associated downtime.
Ph. D.
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Stålberg, Martin. "Reconstruction of trees from 3D point clouds." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316833.

Full text of the source
Abstract:
The geometrical structure of a tree can consist of thousands, even millions, of branches, twigs and leaves in complex arrangements. The structure contains a lot of useful information and can be used for example to assess a tree's health or calculate parameters such as total wood volume or branch size distribution. Because of the complexity, capturing the structure of an entire tree used to be nearly impossible, but the increased availability and quality of particularly digital cameras and Light Detection and Ranging (LIDAR) instruments is making it increasingly possible. A set of digital images of a tree, or a point cloud of a tree from a LIDAR scan, contains a lot of data, but the information about the tree structure has to be extracted from this data through analysis. This work presents a method of reconstructing 3D models of trees from point clouds. The model is constructed from cylindrical segments which are added one by one. Bayesian inference is used to determine how to optimize the parameters of model segment candidates and whether or not to accept them as part of the model. A Hough transform for finding cylinders in point clouds is presented, and used as a heuristic to guide the proposals of model segment candidates. Previous related works have mainly focused on high density point clouds of sparse trees, whereas the objective of this work was to analyze low resolution point clouds of dense almond trees. The method is evaluated on artificial and real datasets and works rather well on high quality data, but performs poorly on low resolution data with gaps and occlusions.
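The Hough transform used above to propose cylinder candidates can be illustrated in its simplest form: for a vertical cylinder of known radius, every point votes in the horizontal plane for candidate centers lying at distance r from it, and an accumulator peak marks the axis. A stripped-down sketch (known radius, coarse accumulator, synthetic data — the full method handles unknown radius and arbitrary axis directions):

```python
import numpy as np

def hough_circle_centers(xy, radius, grid_min, grid_max, cell):
    """Accumulate votes for circle centers at fixed `radius`:
    each 2D point votes for candidate centers on a circle around it,
    and the accumulator peak is returned as the estimated center."""
    n = int((grid_max - grid_min) / cell)
    acc = np.zeros((n, n))
    thetas = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    for x, y in xy:
        cxs = x + radius * np.cos(thetas)   # candidate center coords
        cys = y + radius * np.sin(thetas)
        i = np.floor((cxs - grid_min) / cell).astype(int)
        j = np.floor((cys - grid_min) / cell).astype(int)
        ok = (i >= 0) & (i < n) & (j >= 0) & (j < n)
        np.add.at(acc, (i[ok], j[ok]), 1)
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return grid_min + (np.array(peak) + 0.5) * cell

# points sampled on a circle of radius 1 centered at (2.1, 3.1)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.c_[2.1 + np.cos(t), 3.1 + np.sin(t)]
print(hough_circle_centers(pts, radius=1.0, grid_min=0.0, grid_max=6.0, cell=0.25))
# -> [2.125 3.125]  (center of the winning 0.25 m accumulator cell)
```

Such a coarse Hough peak is exactly the kind of heuristic proposal that the Bayesian model fitting described above can then refine or reject.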
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Galante, Annamaria. "Studio di CNNs sferiche per l'apprendimento di descrittori locali su Point Cloud." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18680/.

Full text of the source
Abstract:
Within Computer Vision, 3D Computer Vision is gaining ever more importance. There are several tasks and applications of 3D CV, just as there are several possible data representations. Many of these tasks require finding correspondences between two or more 3D scenes/objects. These correspondences are identified through the feature matching paradigm, composed of three steps: detection, description, matching. The performance of the feature matching pipeline is closely tied to the techniques used in the description phase. The creation of compact, informative, rotation-invariant descriptors is a problem far from solved in the literature. Recently, architectures based on spherical convolutional networks have been proposed for computing global descriptors to be used in tasks such as shape classification. These approaches, thanks to their mathematical formulation, achieve equivariance to rotation. The aim of this thesis is to provide an overview of state-of-the-art methods and to propose an architecture based on spherical CNNs to learn a local descriptor for use on point clouds.
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Monnier, Fabrice. "Amélioration de la localisation 3D de données laser terrestre à l'aide de cartes 2D ou modèles 3D." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1114/document.

Full text of the source
Abstract:
Les avancées technologiques dans le domaine informatique (logiciel et matériel) et, en particulier, de la géolocalisation ont permis la démocratisation des modèles numériques. L'arrivée depuis quelques années de véhicules de cartographie mobile a ouvert l'accès à la numérisation 3D mobile terrestre. L'un des avantages de ces nouvelles méthodes d'imagerie de l'environnement urbain est la capacité potentielle de ces systèmes à améliorer les bases de données existantes 2D comme 3D, en particulier leur niveau de détail et la diversité des objets représentés. Les bases de données géographiques sont constituées d'un ensemble de primitives géométriques (généralement des lignes en 2D et des plans ou des triangles en 3D) d'un niveau de détail grossier mais ont l'avantage d'être disponibles sur de vastes zones géographiques. Elles sont issues de la fusion d'informations diverses (anciennes campagnes réalisées manuellement, conception automatisée ou encore hybride) et peuvent donc présenter des erreurs de fabrication. Les systèmes de numérisation mobiles, eux, peuvent acquérir, entre autres, des nuages de points laser. Ces nuages laser garantissent des données d'un niveau de détail très fin pouvant aller jusqu'à plusieurs points au centimètre carré. Acquérir des nuages de points laser présente toutefois des inconvénients :- une quantité de données importante sur de faibles étendues géographiques posant des problèmes de stockage et de traitements pouvant aller jusqu'à plusieurs Téraoctet lors de campagnes d'acquisition importantes- des difficultés d'acquisition inhérentes au fait d'imager l'environnement depuis le sol. 
Les systèmes de numérisation mobiles présentent eux aussi des limites : en milieu urbain, le signal GPS nécessaire au bon géoréférencement des données peut être perturbé par les multi-trajets voire même stoppé lors de phénomènes de masquage GPS liés à la réduction de la portion de ciel visible pour capter assez de satellites pour en déduire une position spatiale. Améliorer les bases de données existantes grâce aux données acquises par un véhicule de numérisation mobile nécessite une mise en cohérence des deux ensembles. L'objectif principal de ce manuscrit est donc de mettre en place une chaîne de traitements automatique permettant de recaler bases de données géographiques et nuages de points laser terrestre (provenant de véhicules de cartographies mobiles) de la manière la plus fiable possible. Le recalage peut se réaliser de manière différentes. Dans ce manuscrit, nous avons développé une méthode permettant de recaler des nuages laser sur des bases de données, notamment, par la définition d'un modèle de dérive particulièrement adapté aux dérives non-linéaires de ces données mobiles. Nous avons également développé une méthode capable d'utiliser de l'information sémantique pour recaler des bases de données sur des nuages laser mobiles. Les différentes optimisations effectuées sur notre approche nous permettent de recaler des données rapidement pour une approche post-traitements, ce qui permet d'ouvrir l'approche à la gestion de grands volumes de données (milliards de points laser et milliers de primitives géométriques).Le problème du recalage conjoint a été abordé. Notre chaîne de traitements a été testée sur des données simulées et des données réelles provenant de différentes missions effectuées par l'IGN
Technological advances in computer science (software and hardware), and in particular GPS localization, have made digital models accessible to everyone. In recent years, mobile mapping systems have enabled large-scale terrestrial 3D scanning. One advantage of this technology for the urban environment is its potential ability to improve existing 2D or 3D databases, especially their level of detail and the variety of represented objects. Geographic databases consist of a set of geometric primitives (generally 2D lines, and planes or triangles in 3D) with a coarse level of detail, but with the advantage of being available over wide geographical areas. They come from the fusion of various sources of information (old campaigns performed manually, automated or hybrid design), which may introduce errors. Mobile mapping systems, for their part, can acquire laser point clouds. These point clouds guarantee a fine level of detail, up to more than one point per square centimeter, but they have some disadvantages: a large amount of data over small geographic areas, which may cause storage and processing problems of up to several terabytes during major acquisition campaigns, and the inherent difficulty of imaging the environment from the ground. In urban areas, the GPS signal required for proper georeferencing of the data can be disturbed by multipath, or even lost during GPS masking phenomena, when the visible portion of the sky is too small to capture enough satellites for a good localization. Improving existing databases with datasets acquired by a mobile mapping system requires the alignment of the two sets. The main objective of this manuscript is to establish an automatic processing pipeline to register these datasets together in the most reliable manner. This co-registration can be done in different ways.
In this manuscript we have focused our work on registering mobile laser point clouds on geographic databases by using a drift model suited to the non-rigid drift of this kind of mobile data. We have also developed a method to register geographic databases containing semantics on mobile point clouds. The optimizations performed on our methods allow us to register the data fast enough for a post-processing pipeline, which enables the management of large volumes of data (billions of laser points and thousands of geometric primitives). We have also discussed the problem of joint registration. Our methods have been tested on simulated data and on real data from different missions performed by IGN.
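As a rough illustration of the registration problem in this entry (not the thesis's actual drift model), aligning a laser point cloud to planar primitives from a geographic database can be posed as a least-squares point-to-plane problem. The sketch below recovers a translation-only correction; the function name and setup are illustrative assumptions.

```python
import numpy as np

def point_to_plane_translation(points, normals, plane_points):
    """Translation t minimizing sum_i (((p_i + t) - q_i) . n_i)^2,
    where q_i lies on the matched plane with unit normal n_i."""
    # Residual ((p + t) - q) . n = 0  <=>  n . t = (q - p) . n
    A = normals
    b = np.einsum('ij,ij->i', plane_points - points, normals)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)  # needs >= 3 non-coplanar normals
    return t
```

A full pipeline would alternate this with re-matching points to primitives and extend the unknowns to rotation and time-dependent drift parameters.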
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Westling, Fredrik Anders. "Pruning of Tree Crops through 3D Reconstruction and Light Simulation using Mobile LiDAR." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27427.

Повний текст джерела
Анотація:
Consistent sunlight access is critical when growing fruit crops, and pruning is therefore a vital operation for tree management, as it can be used to control shading within and between trees. This thesis focuses on using Light Detection And Ranging (LiDAR) to understand and improve the light distribution of fruit trees. To enable commercial applications, the tools developed aim to provide insights on every individual tree at whole-orchard scale. Since acquisition and labelling of 3D data are difficult at large scale, a system is developed for simulating LiDAR scans of tree crops for development and validation of techniques using infinite, perfectly-labelled datasets. Furthermore, processing scans at a large scale requires rapid and relatively low-cost solutions, but many existing methods for point cloud analysis require a priori information or expensive high-quality LiDAR scans. New tools are presented for structural analysis of noisy mobile LiDAR scans using a novel graph-search approach which can operate on unstructured point clouds with significant overlap between trees. The light available to trees is important for predicting future growth and crop yields as well as for making pruning decisions, but many measurement techniques cannot provide branch-level analysis or are difficult to apply at large scale. Using mobile LiDAR, which can easily capture large areas, a method is developed to estimate the light available throughout the canopy. A study is then performed to demonstrate the viability of this approach to replace traditional agronomic methods, enabling large-scale adoption. The main contribution of this thesis is a novel framework for suggesting pruning decisions to improve the light availability of individual trees. A full-tree quality metric is proposed, and branch-scale light information identifies underexposed areas of the tree to suggest branches whose removal will improve the light distribution.
Simulated tree scans are then used to validate a technique for estimating the matter removed from the point cloud given specific pruning decisions, and this is used to quantify the improvement of real tree scans. The findings of this thesis demonstrate the value and application of mobile LiDAR in tree crops, and the tools developed through this work promise usefulness in scientific and commercial contexts.
37

Anadon, Leon Hector. "3D Shape Detection for Augmented Reality." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231727.

Анотація:
In previous work, 2D object recognition has shown exceptional results. However, it does not sense the spatial information of the environment: where the objects are and what they are. Having this knowledge could imply improvements in several fields, like Augmented Reality, by allowing virtual characters to interact more realistically with the environment, and autonomous cars, by being able to make better decisions knowing where the objects are in 3D space. The proposed work shows that it is possible to predict 3D bounding boxes with semantic labels for 3D object detection, and a set of primitives for 3D shape recognition, from multiple objects in an indoor scene using an algorithm that receives as input an RGB image and its 3D information. It uses Deep Neural Networks with novel architectures for point cloud feature extraction. It uses a unique feature vector capable of representing the latent space of the object, modelling its shape, position, size, and orientation for multi-task prediction, trained end-to-end with unbalanced datasets. It runs in real time (5 frames per second) on a live video feed. The method is evaluated on the NYU Depth Dataset V2 using Average Precision for object detection, and 3D Intersection over Union and surface-to-surface distance for 3D shape. The results confirm that it is possible to use a shared feature vector for more than one prediction task and that it generalizes to objects unseen during the training process, achieving state-of-the-art results for 3D object detection and 3D shape prediction on the NYU Depth Dataset V2. Qualitative results on specifically captured real data show that navigation in a real-world indoor environment, and collisions between animations and detected objects, are possible, improving character-environment interaction in Augmented Reality applications.
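The entry above evaluates 3D detections with 3D Intersection over Union. As a hedged illustration of that metric, the IoU of two axis-aligned 3D boxes can be computed as below (oriented boxes, as predicted by such networks, need extra work; the box representation is an assumption):

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes, each given as (min_xyz, max_xyz)."""
    amin, amax = np.asarray(box_a[0], float), np.asarray(box_a[1], float)
    bmin, bmax = np.asarray(box_b[0], float), np.asarray(box_b[1], float)
    # Overlap extent along each axis, clipped at zero when disjoint
    inter_dims = np.minimum(amax, bmax) - np.maximum(amin, bmin)
    inter = np.prod(np.clip(inter_dims, 0, None))
    vol_a = np.prod(amax - amin)
    vol_b = np.prod(bmax - bmin)
    return inter / (vol_a + vol_b - inter)
```

For example, a unit cube against the same cube shifted by half its width along one axis gives an IoU of 1/3.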
38

Oboňová, Veronika. "Využití laserového skenování pro 3D modelování." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-390221.

Анотація:
The aim of the diploma thesis is to create a 3D model of a given object using laser-scanning technology. Subsequent adjustments of the model and its preparation for possible 3D printing will be done in appropriate programs. A second 3D model of the same object will then be made from photographs and will be edited and prepared for possible 3D printing in the same way.
39

Bose, Saptak. "An integrated approach encompassing point cloud manipulation and 3D modeling for HBIM establishment: a case of study." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Анотація:
In the case of Cultural Heritage buildings, the need for an effective, exhaustive, and efficient method to replicate their state of being in an interactive, three-dimensional environment is today of paramount importance, both from an engineering and a historical point of view. Modern geomatics entails the use of Terrestrial Laser Scanners (TLS) and photogrammetric modelling from Structure-from-Motion (SfM) techniques to initiate this modelling operation. The model itself is then realized with the novel Historic Building Information Modelling (HBIM) technique. Built on a prototype library of parametric objects based on historic architectural data, HBIM allows the generation of an all-encompassing three-dimensional model which possesses an extensive array of information pertaining to the structure at hand. This information, be it geometric, architectural, or even structural, can then be used to determine reinforcement requirements, rehabilitation needs, the stage of deterioration, the method of initial construction, material makeup, historic alterations, etc. In this paper, the study of the San Michele in Acerboli's church, located in Santarcangelo di Romagna, Italy, is considered. An HBIM model is prepared and its accuracy analyzed. The final model serves as an information repository for the aforementioned church, able to geometrically define its finest characteristics.
40

Pereira, Nícolas Silva. "Cloud Partitioning Iterative Closest Point (CP-ICP): um estudo comparativo para registro de nuvens de pontos 3D." reponame:Repositório Institucional da UFC, 2016. http://www.repositorio.ufc.br/handle/riufc/22971.

Анотація:
PEREIRA, Nicolas Silva. Cloud Partitioning Iterative Closest Point (CP-ICP): um estudo comparativo para registro de nuvens de pontos 3D. 2016. 69 f. Dissertação (Mestrado em Engenharia de Teleinformática)–Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2016.
With the scientific and technological evolution of equipment such as cameras and image sensors, computer vision is increasingly consolidated as an engineering solution to problems in diverse fields. Together with the dissemination of 3D image sensors, the improvement and optimization of techniques that deal with 3D point cloud registration, such as the classic Iterative Closest Point (ICP) algorithm, are fundamental in solving problems such as collision avoidance and occlusion handling. In this context, this work proposes a sampling technique to be applied prior to the ICP algorithm. The proposed method is compared to five other sampling variations based on three criteria: RMSE (root mean squared error), an analysis of Euler angles, and an original criterion based on the structural similarity index (SSIM). The experiments were carried out on four distinct 3D models from two databases and show that the proposed technique achieves a more accurate point cloud registration in less time than the other techniques.
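Several entries in this list build on the classic ICP algorithm, with this dissertation adding a sampling step beforehand. Below is a minimal point-to-point ICP sketch (nearest neighbours plus an SVD/Kabsch alignment), with an optional random subsampling step standing in for the compared strategies; it is an illustrative assumption, not the dissertation's CP-ICP method.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(source, target, iters=30, sample=None, seed=0):
    """Point-to-point ICP; `sample` optionally subsamples source each iteration."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(target)
    src = source.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        pts = src if sample is None else src[rng.choice(len(src), sample, replace=False)]
        _, idx = tree.query(pts)                 # closest-point correspondences
        R, t = best_rigid_transform(pts, target[idx])
        src = src @ R.T + t                      # apply increment to full cloud
        R_tot, t_tot = R @ R_tot, R @ t_tot + t  # accumulate total transform
    return R_tot, t_tot
```

The returned pair maps the original source cloud onto the target; registration quality is then typically reported as RMSE between aligned points and their correspondences.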
41

Abayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.

42

MURGIA, FRANCESCA. "3D Point Cloud Reconstruction from Plenoptic Images - A low computational complexity method for the generation of real objects in a digital 3D space." Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/249552.

Анотація:
3D modelling and augmented reality, as fields of study, are constantly evolving and affect everyday life. Many professionals in different branches use 3D modelling to simplify their work, such as structural and civil engineers in the planning stage, or for content divulgation, as in the case of museum operators. 3D modelling is conquering new fields of application every day; this is why it is of paramount importance to find methods to reconstruct real objects as three-dimensional digital models. A clear example is the growing interest in 3D models of important historic and artistic objects: a 3D model of a valuable statue can be studied in all its parts, anywhere in the world. Another important example is the wide use of 3D models in the architectural design field, both in design simulation and for buildings of historical and artistic interest. The plenoptic camera, thanks to its intrinsic depth-perception capability, opens further new scenarios in the field of 3D modelling. From all of the above considerations follows the idea of using the depth perception of the plenoptic camera for object 3D modelling, in order to generate digital models of real objects. Therefore, the main aim of this work is to suggest an alternative method of real-object reconstruction, with a simple process and at a reasonable price, using images extracted from a first-generation plenoptic camera, which is moderately expensive, and processing them for object reconstruction. In this work I present a 3D modelling method starting from the images produced by a plenoptic camera. The method is at an early stage with respect to other methods and is not yet meant to compete with them; its aim is to show how the features of these cameras allow modelling and how promising the results are.
We believe that this technology deserves attention because its further development could solve some of the current problems, such as cost reduction, process simplification, and lower specialisation of the operators.
43

Fjärdsjö, Johnny, and Zada Nasir Muhabatt. "Kvalitetssäkrad arbetsprocess vid 3D-modellering av byggnader : Baserat på underlag från ritning och 3D-laserskanning." Thesis, KTH, Byggteknik och design, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-148822.

Анотація:
Hand-drawn construction drawings were the only basis for development, rebuilding, sales, and real-estate management before the 80's. The challenge, however, was to preserve the drawings and keep them consistent with the building's real condition. To make the work faster and easier, advanced drawing software (CAD) was introduced, replacing traditional hand-drawn designs. Today, CAD is used broadly for all new construction with great success, and in recent years engineers and construction companies have increasingly favoured 3D models over 2D drawings. The major advantage of designing in 3D is that a virtual model of the entire building is created, giving better control of the constituent building elements, and errors can be detected at earlier stages than at the construction site. By building a virtual model in three dimensions from the first stage and gradually filling it with more relevant information throughout the building's life cycle, a complete information model is obtained. One of the requirements on property owners in redevelopment and management is to provide accurate information and updated drawings; it should be simple for the contractor to read the drawings. This report describes a streamlined work process, methods, tools, and applications for the production of 3D models. The work is intended to lead to a method description and to pass on experience. The report is also a basis for describing the approach of modelling from older drawings into 3D models. The method description will simplify the understanding of modelling both for the property owner and within the company, and will raise the quality of the work of creating CAD models from the different data sources used for modelling.
44

Carlsson, Henrik. "Modeling method to visually reconstruct the historical Vasa ship with the help of a 3D scanned point cloud." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10574.

Анотація:
A point cloud derived from scanning the actual Vasa ship is used for an accurate visualisation. Both manual and automatic meshing techniques were utilized in the modelling of the Vasa ship to overcome problems of poor resolution in the point cloud and limited computing power. A combination of manual and automatic techniques resulted in a 3D model optimized for use within animation software. The method presented in this paper allows the user to keep control over the topology: the polygon count is kept to a minimum while one can remain certain that the measurements and realism of the point cloud are maintained.
45

Chen, Cong. "High-Dimensional Generative Models for 3D Perception." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Анотація:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure by visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers as well as completing missing data by utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) has been proposed using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss the detection followed by localization, where we discuss extracting features from sparse tensors for data retrieval.
Doctor of Philosophy
The development of automation systems and robotics brought the modern world unrivaled affluence and convenience. However, the current automated tasks are mainly simple repetitive motions. Tasks that require more artificial capability with advanced visual cognition are still an unsolved problem for automation. Many of the high-level cognition-based tasks require the accurate visual perception of the environment and dynamic objects from the data received from the optical sensor. The capability to represent, identify and interpret complex visual data for understanding the geometric structure of the world is 3D perception. To better tackle the existing 3D perception challenges, this dissertation proposed a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data is relevant for many applications such as environmental monitoring or geological exploration. The data collected with sonar sensors are however subjected to different types of noise, including holes, noise measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) based sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework has been introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimension tensor feature extraction for underwater vehicle localization has been proposed.
46

Тростинський, Назар Миколайович. "Метод візуалізації хмар точок «Web point cloud viewer» для прийняття контрольованих людиною критично-безпекових рішень". Магістерська робота, Хмельницький національний університет, 2021. http://elar.khnu.km.ua/jspui/handle/123456789/10868.

Анотація:
The aim of this work is to implement the «Web Point Cloud Viewer» method for making human-supervised safety-critical decisions with a sufficient level of accuracy, speed, computational capability, and accessibility.
47

Dutta, Somnath. "Moving Least Squares Correspondences for Iterative Point Set Registration." Technische Universität Dresden, 2019. https://tud.qucosa.de/id/qucosa%3A35721.

Анотація:
Registering partial shapes plays an important role in numerous applications in the fields of robotics, vision, and graphics. An essential problem of registration algorithms is the determination of correspondences between surfaces. In this paper, we provide an in-depth evaluation of an approach that computes high-quality correspondences for pair-wise closest-point-based iterative registration and compare the results with state-of-the-art registration algorithms. Instead of using a discrete point set for correspondence search, the approach is based on a locally reconstructed continuous moving least squares (MLS) surface to overcome sampling mismatches in the input shapes. Furthermore, MLS-based correspondences are highly robust to noise. We demonstrate that this strategy outperforms existing approaches in terms of registration accuracy by combining it with the SparseICP local registration algorithm. Our extensive evaluation over several thousand scans from different sources verifies that the MLS-based approach results in a significant increase in alignment accuracy, surpassing state-of-the-art feature-based and probabilistic methods. At the same time, it allows an efficient implementation that introduces only a modest computational overhead.
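As a hedged sketch of the core idea in this entry, a query point can be projected onto a weighted least-squares plane fitted to its neighbourhood, which is one simplified step of an MLS projection. The Gaussian weighting, bandwidth `h`, and single-plane simplification are assumptions, not the paper's exact operator.

```python
import numpy as np

def mls_project(q, cloud, h=0.5):
    """Project query point q onto the weighted least-squares plane of its
    neighbourhood (a simplified, single-step MLS projection)."""
    d2 = np.sum((cloud - q) ** 2, axis=1)
    w = np.exp(-d2 / h**2)                       # Gaussian weights
    centroid = (w[:, None] * cloud).sum(0) / w.sum()
    X = (cloud - centroid) * np.sqrt(w)[:, None]
    # Plane normal = direction of least weighted variance (smallest singular vector)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    n = Vt[-1]
    return q - np.dot(q - centroid, n) * n       # remove normal component
```

Correspondences against such a locally reconstructed surface, rather than the raw samples, are what makes the approach robust to sampling mismatches and noise.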
48

Serra, Sabina. "Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168367.

Анотація:
Light Detection and Ranging (LiDAR) sensors have many different application areas, from revealing archaeological structures to aiding navigation of vehicles. However, it is challenging to interpret and fully use the vast amount of unstructured data that LiDARs collect. Automatic classification of LiDAR data would ease the utilization, whether it is for examining structures or aiding vehicles. In recent years, there have been many advances in deep learning for semantic segmentation of automotive LiDAR data, but there is less research on aerial LiDAR data. This thesis investigates the current state-of-the-art deep learning architectures, and how well they perform on LiDAR data acquired by an Unmanned Aerial Vehicle (UAV). It also investigates different training techniques for class imbalanced and limited datasets, which are common challenges for semantic segmentation networks. Lastly, this thesis investigates if pre-training can improve the performance of the models. The LiDAR scans were first projected to range images and then a fully convolutional semantic segmentation network was used. Three different training techniques were evaluated: weighted sampling, data augmentation, and grouping of classes. No improvement was observed by the weighted sampling, neither did grouping of classes have a substantial effect on the performance. Pre-training on the large public dataset SemanticKITTI resulted in a small performance improvement, but the data augmentation seemed to have the largest positive impact. The mIoU of the best model, which was trained with data augmentation, was 63.7% and it performed very well on the classes Ground, Vegetation, and Vehicle. The other classes in the UAV dataset, Person and Structure, had very little data and were challenging for most models to classify correctly. In general, the models trained on UAV data performed similarly as the state-of-the-art models trained on automotive data.
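The thesis above projects LiDAR scans to range images before applying a fully convolutional network. A minimal spherical projection might look as follows; the field-of-view limits and image resolution are assumed values, not those used in the thesis.

```python
import numpy as np

def range_image(points, h=32, w=256, fov_up=15.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR cloud to an (h, w) range image (0 = no return)."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                     # elevation angle
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = (1 - (pitch - fd) / (fu - fd)) * h       # row: top of image = fov_up
    v = 0.5 * (yaw / np.pi + 1) * w              # column from azimuth
    u = np.clip(np.floor(u), 0, h - 1).astype(int)
    v = np.clip(np.floor(v), 0, w - 1).astype(int)
    img = np.zeros((h, w))
    order = np.argsort(-r)                       # write closest returns last
    img[u[order], v[order]] = r[order]
    return img
```

Per-pixel semantic labels predicted on this image can then be mapped back to the contributing 3D points.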
49

Yang, Chih-Chieh, and 楊智傑. "The Compression of 3D Point Cloud Data Using Wavelet Transform." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/03676248426950931392.

Анотація:
Master's thesis
I-Shou University
Department of Electrical Engineering, Master's Program
95
In this thesis, we use 3D model point cloud data to represent the object's surface information. The point cloud data are expressed in rectangular coordinates (X, Y, Z). We transform the point cloud data into cylindrical coordinates (r, θ, Z) and quantize them, so that the 3D point cloud data may be regarded as a 2D matrix. In image compression, the image data are generally subjected to a specific transformation, for example the DCT or DWT, which retains and compacts the important information of the image in the low-frequency area. This research uses the DWT as the transformation for compression. After the DWT, we use the SPIHT encoding method to retain the important coefficients and discard the unimportant ones. The proposed compression method is lossy, with distortion that is insignificant to the visual effect. Finally, we discuss the efficiency of compressing 3D model point cloud data using the simulation results of the experiments.
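As a sketch of the first step described in this abstract, the Cartesian points can be mapped to cylindrical coordinates and quantized so that the radii form a 2D matrix ready for a 2D transform. The bin counts and the mean-radius aggregation per cell are illustrative assumptions, not the thesis's exact quantization.

```python
import numpy as np

def cylinder_matrix(points, n_theta=64, n_z=64):
    """Quantize an (N, 3) point cloud into an (n_z, n_theta) matrix of radii."""
    x, y, z = points.T
    r = np.hypot(x, y)                                  # cylindrical radius
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)         # angle in [0, 2*pi)
    ti = np.minimum((theta / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    z0, z1 = z.min(), z.max()
    zi = np.minimum(((z - z0) / (z1 - z0) * n_z).astype(int), n_z - 1)
    M = np.zeros((n_z, n_theta))                        # sum of radii per cell
    C = np.zeros((n_z, n_theta))                        # point count per cell
    np.add.at(M, (zi, ti), r)
    np.add.at(C, (zi, ti), 1)
    return np.divide(M, C, out=M, where=C > 0)          # mean radius per cell
```

The resulting matrix can then be fed to a 2D wavelet transform and an embedded coder such as SPIHT, as the abstract describes.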
50

Weng, Jia-zheng, and 翁嘉政. "The Compression of 3D Point Data." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/62651000167671586953.

Анотація:
Master's thesis
I-Shou University
Department of Electrical Engineering, Master's Program
93
The 3D model of an object is usually represented by its visible external surfaces, which are represented by scattered points in 3D space, known as point cloud data. The locations of these points can be represented in different coordinate systems. In this work, we transform 3D point cloud data from the originally acquired Cartesian coordinates into spherical or cylindrical coordinates and perform the discrete cosine transform to achieve effective compression of the 3D point data. The compressed data are compared with the original 3D data to evaluate the compression ratio and error rate.
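A minimal sketch of the DCT-based compression idea in this entry: transform a coordinate matrix, keep only the largest coefficients, reconstruct, and report the compression ratio and error. The kept fraction is an assumed parameter, not a value from the thesis.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(M, keep=0.1):
    """Keep the largest `keep` fraction of DCT coefficients of matrix M."""
    C = dctn(M, norm='ortho')
    k = max(1, int(keep * C.size))
    thresh = np.sort(np.abs(C), axis=None)[-k]       # k-th largest magnitude
    C_kept = np.where(np.abs(C) >= thresh, C, 0.0)   # zero out small coefficients
    recon = idctn(C_kept, norm='ortho')
    ratio = C.size / np.count_nonzero(C_kept)        # crude compression ratio
    rmse = np.sqrt(np.mean((M - recon) ** 2))
    return recon, ratio, rmse
```

Smooth surfaces in spherical or cylindrical coordinates concentrate their energy in the low-frequency DCT coefficients, which is why a small kept fraction still reconstructs them with low error.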