Dissertations on the topic "Point cloud analysis"

To see the other types of publications on this topic, follow the link: Point cloud analysis.

Format your source according to APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Point cloud analysis".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Forsman, Mona. "Point cloud densification." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39980.

Full text of the source
Abstract:
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. This thesis contains a literature study over different methods for extracting matched point pairs, and an implementation of Least Square Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of different scenes of various difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see if they converge to the same result. Wallis filtering is used to increase the contrast in the image. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and a comparison of the standard deviation in the x- and y-directions. If a point is rejected, the option to try again with a larger template size exists, called Adaptive Template Size (ATS).
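As an illustration of the contrast-enhancement step mentioned in the abstract, the following minimal NumPy/SciPy sketch implements a Wallis-style filter that rescales each pixel toward a target local mean and standard deviation. The window size and target statistics are illustrative assumptions, not parameters taken from the thesis.

```python
# Minimal Wallis-style contrast enhancement: rescale each pixel so that the local
# mean and standard deviation approach target values. Window size and target
# statistics are illustrative choices, not parameters taken from the thesis.
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, win=15, target_mean=127.0, target_std=50.0, eps=1e-6):
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=win)
    local_sq_mean = uniform_filter(img * img, size=win)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    out = (img - local_mean) * (target_std / (local_std + eps)) + target_mean
    return np.clip(out, 0.0, 255.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_contrast = rng.uniform(80, 120, size=(64, 64))   # synthetic low-contrast image
    enhanced = wallis_filter(low_contrast)
    print(round(enhanced.mean(), 1), round(enhanced.std(), 1))
```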
APA, Harvard, Vancouver, ISO, and other styles
2

Donner, Marc, Sebastian Varga, and Ralf Donner. "Point cloud generation for hyperspectral ore analysis." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231365.

Full text of the source
Abstract:
Recent developments in hyperspectral snapshot cameras offer new possibilities for ore analysis. A method for generating a 3D dataset from RGB and hyperspectral images is presented. By using Structure from Motion, a reference of each source image to the resulting point cloud is kept. This reference is used for projecting hyperspectral data onto the point cloud. Additionally, with this workflow it is possible to add metadata to a point cloud that was generated from images alone.
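To illustrate how a per-image camera reference can be used to attach spectral data to a point cloud, the sketch below samples per-point spectra from a hyperspectral cube with a simple pinhole model. The intrinsic matrix K, the pose (R, t) and all names are placeholders and do not come from the paper's Structure-from-Motion pipeline.

```python
# Illustrative projection of 3D points into a hyperspectral image with a pinhole
# camera model, so that each visible point receives the spectrum of the pixel it
# projects to. K, R, t and all names are placeholders, not values from the paper.
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into pixel coordinates and return depths."""
    cam = (R @ points.T + t.reshape(3, 1)).T      # world -> camera frame
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective division
    return uv, cam[:, 2]

def sample_spectra(points, cube, K, R, t):
    """Attach the nearest hyperspectral pixel spectrum to each visible 3D point."""
    h, w, _ = cube.shape
    uv, depth = project_points(points, K, R, t)
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    valid = (depth > 0) & (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    spectra = np.full((points.shape[0], cube.shape[2]), np.nan)
    spectra[valid] = cube[rows[valid], cols[valid]]
    return spectra
```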
APA, Harvard, Vancouver, ISO, and other styles
3

Donner, Marc, Sebastian Varga, and Ralf Donner. "Point cloud generation for hyperspectral ore analysis." TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23196.

Full text of the source
Abstract:
Recent developments in hyperspectral snapshot cameras offer new possibilities for ore analysis. A method for generating a 3D dataset from RGB and hyperspectral images is presented. By using Structure from Motion, a reference of each source image to the resulting point cloud is kept. This reference is used for projecting hyperspectral data onto the point cloud. Additionally, with this workflow it is possible to add metadata to a point cloud that was generated from images alone.
APA, Harvard, Vancouver, ISO, and other styles
4

Awadallah, Mahmoud Sobhy Tawfeek. "Image Analysis Techniques for LiDAR Point Cloud Segmentation and Surface Estimation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73055.

Full text of the source
Abstract:
Light Detection And Ranging (LiDAR), as well as many other applications and sensors, involves segmenting sparse sets of points (point clouds) for which point density is the only discriminating feature. The segmentation of these point clouds is challenging for several reasons, including the fact that the points are not associated with a regular grid. Moreover, the presence of noise, particularly impulsive noise with varying density, can make it difficult to obtain a good segmentation using traditional techniques, including the algorithms that had been developed to process LiDAR data. This dissertation introduces novel algorithms and frameworks based on statistical techniques and image analysis in order to segment and extract surfaces from sparse noisy point clouds. We introduce an adaptive method for mapping point clouds onto an image grid, followed by a contour detection approach that is based on an enhanced version of region-based Active Contours Without Edges (ACWE). We also propose a noise reduction method using a Bayesian approach and incorporate it, along with other noise reduction approaches, into a joint framework that produces robust results. We combine the aforementioned techniques with a statistical surface refinement method to introduce a novel framework to detect ground and canopy surfaces in micropulse photon-counting LiDAR data. The algorithm is fully automatic and uses no prior elevation or geographic information to extract surfaces. Moreover, we propose a novel segmentation framework for noisy point clouds in the plane based on a Markov random field (MRF) optimization that we call Point Cloud Density-based Segmentation (PCDS). We also developed a large synthetic dataset of in-plane point clouds that includes either a set of randomly placed, sized and oriented primitive objects (circle, rectangle and triangle) or an arbitrary shape that forms a simple approximation of LiDAR point clouds. Experiments performed on a large number of real LiDAR and synthetic point clouds showed that our proposed frameworks and algorithms outperform the state-of-the-art algorithms in terms of segmentation accuracy and surface RMSE.
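The first step described above, mapping a sparse point cloud onto an image grid, can be illustrated with the short NumPy sketch below, which rasterizes a planar point set into a density image on which a region-based contour method could then operate. The fixed cell size is an assumption; the dissertation's mapping is adaptive.

```python
# Rasterize a sparse planar point cloud into a density image, i.e. the regular grid
# on which a region-based active-contour step could operate. The fixed cell size is
# an assumption; the dissertation uses an adaptive mapping.
import numpy as np

def density_image(xy, cell=1.0):
    """Count points per grid cell; return the image and the grid origin."""
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    img = np.zeros(shape[::-1])                    # rows correspond to y, columns to x
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1.0)
    return img, origin

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 50, size=(2000, 2))
    img, origin = density_image(pts, cell=2.0)
    print(img.shape, int(img.sum()))               # the sum equals the number of points
```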
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Burwell, Claire Leonora. "The effect of 2D vs. 3D visualisation on lidar point cloud analysis tasks." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37950.

Full text of the source
Abstract:
The exploitation of human depth perception is not uncommon in visual analysis of data; medical imagery and geological analysis already rely on stereoscopic 3D visualisation. In contrast, 3D scans of the environment are usually represented on a flat, 2D computer screen, although there is potential to take advantage of both (a) the spatial depth that is offered by the point cloud data, and (b) our ability to see stereoscopically. This study explores whether a stereo 3D analysis environment would add value to visual lidar tasks, compared to the standard 2D display. Forty-six volunteers, all with good stereovision and varying lidar knowledge, viewed lidar data in either 2D or in 3D, on a 4m x 2.4m screen. The first task required 2D and 3D measurement of linear lengths of a planar and a volumetric feature, using an interaction device for point selection. Overall, there was no significant difference in the spread of 2D and 3D measurement distributions for both of the measured features. The second task required interpretation of ten features from individual points. These were highlighted across two areas of interest - a flat, suburban area and a valley slope with a mixture of features. No classification categories were offered to the participant and answers were expressed verbally. Two of the ten features (chimney and cliff-face) were interpreted with a better degree of accuracy using the 3D method and the remaining features had no difference in 2D and 3D accuracy. Using the experiment’s data processing and visualisation approaches, results suggest that stereo 3D perception of lidar data does not add value to manual linear measurement. The interpretation results indicate that immersive stereo 3D visualisation does improve the accuracy of manual point cloud classification for certain features. The findings contribute to wider discussions in lidar processing, geovisualisation, and applied psychology.
APA, Harvard, Vancouver, ISO, and other styles
6

Bungula, Wako Tasisa. "Bi-filtration and stability of TDA mapper for point cloud data." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6918.

Full text of the source
Abstract:
TDA mapper is an algorithm used to visualize and analyze big data. TDA mapper is applied to a dataset, X, equipped with a filter function f from X to R. The output of the algorithm is an abstract graph (or simplicial complex). The abstract graph captures topological and geometric information of the underlying space of X. One of the interests in TDA mapper is to study whether or not a mapper graph is stable. That is, if a dataset X is perturbed by a small value, and we denote the perturbed dataset by X∂, we would like to compare the TDA mapper graph of X to the TDA mapper graph of X∂. Given a topological space X, if the cover of the image of f satisfies certain conditions, Tamal Dey, Facundo Memoli, and Yusu Wang proved that the TDA mapper is stable; that is, the mapper graph of X differs from the mapper graph of X∂ by a small value measured via homology. The goal of this thesis is three-fold. The first is to introduce a modified TDA mapper algorithm. The fundamental difference between TDA mapper and the modified version is that the modified version avoids the use of a filter function. In comparing the mapper graph outputs, the proposed modified mapper is shown to capture more geometric and topological features. We discuss the advantages and disadvantages of the modified mapper. Tamal Dey, Facundo Memoli, and Yusu Wang showed that a filtration of covers induces a filtration of simplicial complexes, which in turn induces a filtration of homology groups. While Tamal Dey, Facundo Memoli, and Yusu Wang focused on TDA mapper's application to topological spaces, the second goal of this thesis is to show that DBSCAN clustering gives a filtration of covers when TDA mapper is applied to a point cloud. Hence, DBSCAN gives a filtration of mapper graphs (simplicial complexes) and homology groups. More importantly, DBSCAN gives a filtration of covers, mapper graphs, and homology groups in three parameter directions: bin size, epsilon, and Minpts. Hence, there is a multi-dimensional filtration of covers, mapper graphs, and homology groups. We also note that single-linkage clustering is a special case of DBSCAN clustering, so the results proved for DBSCAN are also true when single-linkage is used. However, complete-linkage does not give a filtration of covers in the direction of bin size, hence no filtration of simplicial complexes and homology groups exists when complete-linkage is applied to cluster a dataset. In general, the results hold for any clustering algorithm that gives a filtration of covers. The third (and last) goal of this thesis is to prove that two multi-dimensional persistence modules (one with respect to the original dataset, X, the other with respect to the ∂-perturbation of X) are 2∂-interleaved. In other words, the mapper graphs of X and X∂ differ by a small value as measured by homology.
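For readers unfamiliar with the mapper construction, the sketch below shows the standard pipeline in compact form: cover the filter range with overlapping intervals, cluster each preimage (here with DBSCAN, the clustering discussed in the thesis), and connect clusters that share points. The interval count, overlap and DBSCAN parameters are illustrative choices, not values from the thesis.

```python
# Compact sketch of the standard mapper construction: cover the filter range with
# overlapping intervals, cluster each preimage with DBSCAN, and connect clusters
# that share points. Interval count, overlap and DBSCAN parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_graph(X, filt, n_intervals=10, overlap=0.3, eps=0.5, min_samples=5):
    lo, hi = filt.min(), filt.max()
    length = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        member = np.where((filt >= a) & (filt <= b))[0]
        if member.size == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(X[member]).labels_
        for lab in set(labels) - {-1}:             # -1 marks DBSCAN noise points
            nodes.append(set(member[labels == lab]))
    for i in range(len(nodes)):                    # nerve: edge when clusters intersect
        for j in range(i + 1, len(nodes)):
            if nodes[i] & nodes[j]:
                edges.add((i, j))
    return nodes, edges

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
    nodes, edges = mapper_graph(pts, filt=pts[:, 0])
    print(len(nodes), len(edges))
```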
APA, Harvard, Vancouver, ISO, and other styles
7

Megahed, Fadel M. "The Use of Image and Point Cloud Data in Statistical Process Control." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26511.

Full text of the source
Abstract:
The volume of data acquired in production systems continues to expand. Emerging imaging technologies, such as machine vision systems (MVSs) and 3D surface scanners, diversify the types of data being collected, further pushing data collection beyond discrete dimensional data. These large and diverse datasets increase the challenge of extracting useful information. Unfortunately, industry still relies heavily on traditional quality methods that are limited to fault detection, which fails to consider important diagnostic information needed for process recovery. Modern measurement technologies should spur the transformation of statistical process control (SPC) to provide practitioners with additional diagnostic information. This dissertation focuses on how MVSs and 3D laser scanners can be further utilized to meet that goal. More specifically, this work: 1) reviews image-based control charts while highlighting their advantages and disadvantages; 2) integrates spatiotemporal methods with digital image processing to detect process faults and estimate their location, size, and time of occurrence; and 3) shows how point cloud data (3D laser scans) can be used to detect and locate unknown faults in complex geometries. Overall, the research goal is to create new quality control tools that utilize high density data available in manufacturing environments to generate knowledge that supports decision-making beyond just indicating the existence of a process issue. This allows industrial practitioners to have a rapid process recovery once a process issue has been detected, and consequently reduce the associated downtime.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
8

Chleborad, Aaron A. "Grasping unknown novel objects from single view using octant analysis." Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4089.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Rasmussen, Johan, and David Nilsson. "Analys av punktmoln i tre dimensioner." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-36915.

Full text of the source
Abstract:
Purpose: To develop a method that can help smaller sawmills to better utilize the greatest possible amount of wood from a log. Method: A quantitative study in which three iterations have been carried out using Design Science. Findings: To create an effective algorithm that performs volume calculations, for an industrial purpose, in a point cloud consisting of about two million points, the focus is on the algorithm being fast and showing the correct data. The primary means of making the algorithm fast is to process the point cloud a minimal number of times. The algorithm that meets the goals of this study is Algorithm C. The algorithm is both fast and has a low standard deviation of the measurement errors. Algorithm C has complexity O(n) in the analysis of sub-point clouds. Implications: Based on this study's algorithm, it would be possible to use stereo camera technology to help smaller sawmills to better utilize the greatest possible amount of wood from a log. Limitations: The study's algorithm assumes that no points have been created inside the log, which could otherwise lead to misplaced points. If a log is crooked, the centre of the log does not coincide with the position of the z-axis. In extreme cases, this could mean that the z-value falls outside the log, which the algorithm cannot handle.
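The sketch below is not the thesis's Algorithm C; it only illustrates, under stated assumptions, the general idea of estimating a log volume from a point cloud by slicing along the (assumed) z-axis and summing convex-hull cross-section areas. The slice thickness and the synthetic cylinder used for the check are arbitrary.

```python
# Naive illustration (not the thesis's Algorithm C): estimate a log volume by slicing
# the cloud along the log axis (assumed to be z) and summing convex-hull cross-section
# areas times the slice thickness. Slice thickness and the test cylinder are arbitrary.
import numpy as np
from scipy.spatial import ConvexHull

def sliced_volume(points, slice_h=10.0):
    z = points[:, 2]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_h):
        sl = points[(z >= z0) & (z < z0 + slice_h)]
        if sl.shape[0] >= 3:
            area = ConvexHull(sl[:, :2]).volume    # for 2D input, .volume is the area
            volume += area * slice_h
    return volume

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    theta = rng.uniform(0, 2 * np.pi, 100_000)
    r, h = 150.0, 4000.0                           # cylinder radius and length in mm
    pts = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                           rng.uniform(0, h, theta.size)])
    print(round(sliced_volume(pts)), round(np.pi * r * r * h))   # estimate vs. true value
```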
APA, Harvard, Vancouver, ISO, and other styles
10

Rusinek, Cory A. "New Avenues in Electrochemical Systems and Analysis." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1490350904669695.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Dahlin, Johan. "3D Modeling of Indoor Environments." Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93999.

Full text of the source
Abstract:
With the aid of modern sensors it is possible to create models of buildings. These sensors typically generate 3D point clouds and, in order to increase interpretability and usability, these point clouds are often translated into 3D models. In this thesis a way of translating a 3D point cloud into a 3D model is presented. The basic functionality is implemented using Matlab. The geometric model consists of floors, walls and ceilings. In addition, doors and windows are automatically identified and integrated into the model. The resulting model also has an explicit representation of the topology between entities of the model. The topology is represented as a graph, and to do this GraphML is used. The graph is opened in a graph editing program called yEd. The result is a 3D model that can be plotted in Matlab and a graph describing the connectivity between entities. The GraphML file is automatically generated in Matlab. An interface between Matlab and yEd allows the user to choose which rooms should be plotted.
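As a small illustration of the topology export described above, the sketch below writes a made-up room-adjacency graph to GraphML. The thesis generates its GraphML from Matlab; networkx is used here only to demonstrate the format's role, and the room and door names are invented.

```python
# Toy export of a room-adjacency topology graph to GraphML, to be opened in yEd.
# The thesis generates its GraphML from Matlab; networkx is used here only to
# illustrate the format's role, and the room and door names are invented.
import networkx as nx

G = nx.Graph()
G.add_node("room_1", kind="room")
G.add_node("room_2", kind="room")
G.add_node("corridor", kind="room")
G.add_edge("room_1", "corridor", via="door_a")   # entities connected through a door
G.add_edge("room_2", "corridor", via="door_b")

nx.write_graphml(G, "topology.graphml")          # the file can be opened in yEd
```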
APA, Harvard, Vancouver, ISO, and other styles
12

Pérez, Gramatges Aurora. "Simultaneous preconcentration of trace metals by cloud point extraction with 1-(2-pyridylazo)-2-naphthol and determination by neutron activation analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0021/NQ49263.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Scharf, Alexander. "Terrestrial Laser Scanning for Wooden Facade-system Inspection." Thesis, Luleå tekniska universitet, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-77159.

Full text of the source
Abstract:
The objective of this study was to evaluate the feasibility of measuring movement, deformation and displacement in wooden façade systems by terrestrial laser scanning. An overview of different surveying techniques and methods has been compiled. Point cloud structure and processing are explained in detail, as they are the foundation for understanding the advantages and disadvantages of laser scanning. The limits of monitoring façades with simple and complex façade structures were tested with the phase-based laser scanner FARO Focus 3DS. In-field measurements of existing façades were made to show the capability of laser scanning to extract defect features such as cracks. The high noise in the data, caused by the limited precision of 3D laser scanners, is problematic: details on a scale of several millimetres are hidden by the data noise. Methods to reduce the noise during point cloud processing have proven to be very data-specific. The uneven point cloud structure of a façade scan therefore made it difficult to find a method that works for whole scans. Automatically dividing the point cloud data into different façade parts, a process called segmentation, could make this possible. However, no suitable segmentation algorithm was found, and developing a new algorithm would have exceeded the scope of this thesis. The goal of automatic point cloud processing was therefore not fulfilled and was set aside in the further analyses of outdoor façades and laboratory experiments. The experimental scans showed that a considerable amount of information could be extracted from the scans. The accuracy of measured board and gap dimensions was, however, highly dependent on the point cloud cleaning steps, but it provided information that could be used for tracking the development of a façade's features. Extensive calibration might improve the accuracy of the measurements. Deviations of façade structures from flat planes were clearly visible when using colorization of point clouds and might be the main benefit of measuring spatial information of façades by non-contact methods. The determination of façade displacement was done under laboratory conditions: a façade panel was displaced manually, and the displacement was calculated with different algorithms. The algorithm determining the distance to the closest point in a pair of point clouds provided the best results, while being the simplest in terms of computational complexity. Out-of-plane displacement was the most suitable to detect with this method; displacement sideways or upwards required more advanced point cloud processing and manual interpretation by the software operator. Based on the findings of the study, it can be concluded that laser scanning is not the right method for structural health monitoring of façades when tracking small deformations, especially deformations below 5 mm and defects such as cracks, is the main goal. Displacements, defects and deformations of larger scale can be detected, but this requires a large amount of point cloud processing. It is not clear whether the equipment costs, surveying time and the problems caused by the high variability of scan results, depending on façade colour, shape and texture, are outweighed by the benefits of laser scanning over manual surveying.
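A minimal sketch of the closest-point comparison that performed best in the displacement experiments is given below: for each point of a follow-up scan, the distance to its nearest neighbour in the reference scan is computed with a KD-tree. The synthetic flat panel and the 5 mm flag threshold are illustrative assumptions, not values from the thesis.

```python
# Cloud-to-cloud comparison by distance to the closest point: for every point of the
# follow-up scan, take the distance to its nearest neighbour in the reference scan.
# The synthetic flat panel and the 5 mm flag threshold are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def closest_point_distances(reference, moved):
    tree = cKDTree(reference)
    dist, _ = tree.query(moved, k=1)
    return dist

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 2000, size=(50_000, 2))              # facade extent in mm
    ref = np.column_stack([xy, np.zeros(len(xy))])           # flat reference panel
    new = ref + np.array([0.0, 0.0, 8.0])                    # 8 mm out-of-plane displacement
    d = closest_point_distances(ref, new)
    print(round(float((d > 5.0).mean()), 2))                 # share of points flagged as moved
```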
APA, Harvard, Vancouver, ISO, and other styles
14

Jovančević, Igor. "Exterior inspection of an aircraft using a Pan-Tilt-Zoom camera and a 3D scanner moved by a mobile robot : 2D image processing and 3D point cloud analysis." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2016. http://www.theses.fr/2016EMAC0023/document.

Full text of the source
Abstract:
This thesis is part of an industry-oriented multi-partner project aimed at developing a collaborative mobile robot (a cobot), autonomous in its movements on the ground and capable of performing visual inspection of an aircraft during short or long maintenance procedures in the hangar or in the pre-flight phase on the tarmac. The cobot is equipped with sensors for its navigation tasks as well as with a set of optical sensors constituting the inspection head: an orientable Pan-Tilt-Zoom visible-light camera and a three-dimensional scanner, delivering data in the form of two-dimensional images and three-dimensional point clouds, respectively. The goal of the thesis is to propose original approaches for processing 2D images and 3D point clouds, with the intention of making a decision with respect to the flight readiness of the airplane. We developed algorithms for verifying aircraft items such as vents, doors, sensors, tires or engines, as well as for detecting and characterizing three-dimensional damage on the fuselage (impacts, scratches, etc.). We integrated a priori knowledge of the airplane structure, notably the numerical three-dimensional CAD model of the Airbus A320 used in our trials. We argue that by investing effort in developing sufficiently robust algorithms, and with the help of existing optical sensors to acquire suitable data, we can build a non-invasive, accurate and time-efficient system for automatic exterior inspection of an airplane. The thesis work was placed between two main, sometimes conflicting, requirements: to develop inspection algorithms that are as general as possible, and to meet the specific requirements of an industry-oriented project. Often these two goals do not align, and a balance had to be struck. On the one hand, we aimed to design and assess approaches that can be employed on other large structures, for example buildings or ships. On the other hand, writing source code for controlling the sensors, as well as integrating our code with the other modules on the real-time robotic system, was necessary in order to demonstrate the feasibility of our robotic prototype. The prototype was tested extensively in a maintenance hangar and on the tarmac.
APA, Harvard, Vancouver, ISO, and other styles
15

Saval-Calvo, Marcelo. "Methodology based on registration techniques for representing subjects and their deformations acquired from general purpose 3D sensors." Doctoral thesis, Universidad de Alicante, 2015. http://hdl.handle.net/10045/49990.

Full text of the source
Abstract:
In this thesis a methodology for representing 3D subjects and their deformations in adverse situations is studied. The study focuses on providing methods, based on registration techniques, to improve the data in situations where the sensor is working at the limit of its sensitivity. To this end, two methods are proposed to overcome the problems that can hinder the process under these conditions. First, a rigid registration based on model registration is presented, in which a model of 3D planar markers is used. This model is estimated using a proposed method that improves its quality by taking prior knowledge of the marker into account. To study the deformations, a framework is proposed to combine multiple spaces in a non-rigid registration technique. This proposal improves the quality of the alignment with a more robust matching process that makes use of all available input data. Moreover, this framework allows the registration of multiple spaces simultaneously, providing a more general technique. Concretely, it is instantiated using colour and location in the matching process for 3D location registration.
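The generic building block behind rigid registration of corresponding 3D points is the least-squares (Kabsch/Procrustes) alignment via SVD, sketched below. This is only the textbook step such pipelines build on, not the author's marker-model registration method.

```python
# Textbook least-squares rigid alignment (Kabsch/Procrustes) of corresponding 3D
# points via SVD; the generic building block only, not the marker-model registration
# method developed in the thesis.
import numpy as np

def rigid_align(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 over correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    src = rng.normal(size=(100, 3))
    a = np.deg2rad(30)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = rigid_align(src, dst)
    print(np.allclose(R, R_true), np.round(t, 3))
```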
APA, Harvard, Vancouver, ISO, and other styles
16

Yogeswaran, Arjun. "3D Surface Analysis for the Automated Detection of Deformations on Automotive Panels." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19992.

Full text of the source
Abstract:
This thesis examines an automated method to detect surface deformations on automotive panels for the purpose of quality control along a manufacturing assembly line. Automation in the automotive manufacturing industry is becoming more prominent, but quality control is still largely performed by human workers. Quality control is important in the context of automotive body panels, as deformations can occur along the assembly line, caused for example by inadequate handling of parts or tools around a vehicle during assembly, rack storage, and shipping from subcontractors. These defects are currently identified and marked before panels are either rectified or discarded. This work attempts to develop an automated system to detect deformations in order to alleviate the dependence on human workers in quality control and improve performance by increasing speed and accuracy. Some techniques make use of an ideal CAD model behaving as a master work, and panels scanned on the assembly line are compared to this model to determine the location of deformations. This thesis presents a solution for detecting deformations of various scales without a master work. It also focuses on automated analysis requiring minimal intuitive operator-set parameters and provides the ability to classify the deformations as dings, which are deformations that protrude from the surface, or dents, which are depressions into the surface. A complete automated deformation detection system is proposed, comprising a feature extraction module, a segmentation module, and a classification module, which outputs the locations of deformations when provided with the 3D mesh of an automotive panel. Two feature extraction techniques are proposed. The first is a general feature extraction technique for 3D meshes using octrees for multi-resolution analysis, which evaluates the amount of surface variation to locate deformations. The second is specifically designed for the purpose of deformation detection and analyzes multi-resolution cross-sections of a 3D mesh to locate deformations based on their estimated size. The performance of the proposed automated deformation detection system, and of all its sub-modules, is tested on a set of meshes which represent differing characteristics of deformations in surface panels, including deformations of different scales. Noisy, low-resolution meshes are captured from a 3D acquisition, while artificial meshes are generated to simulate ideal acquisition conditions. The proposed system shows accurate results in both ideal and non-ideal situations, under conditions of noise and complex surface curvature, by extracting only the deformations of interest and accurately classifying them as dings or dents.
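To make the notion of surface variation concrete, the single-scale sketch below computes, for each point, the smallest eigenvalue of the local covariance divided by the eigenvalue sum; flat regions give values near zero while dings and dents stand out. The neighbourhood radius and the synthetic bump are illustrative, and the octree multi-resolution scheme of the thesis is not reproduced here.

```python
# Single-scale surface-variation measure: for each point, the smallest eigenvalue of
# the local covariance divided by the eigenvalue sum. Flat regions give values near
# zero while dings and dents stand out. Radius and the synthetic bump are illustrative;
# the octree multi-resolution scheme of the thesis is not reproduced here.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, radius=5.0):
    tree = cKDTree(points)
    var = np.zeros(len(points))
    for i, nbrs in enumerate(tree.query_ball_point(points, r=radius)):
        if len(nbrs) < 4:
            continue                              # not enough neighbours for a stable fit
        w = np.linalg.eigvalsh(np.cov(points[nbrs].T))   # ascending eigenvalues
        var[i] = w[0] / w.sum()
    return var

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    xy = rng.uniform(-50, 50, size=(4000, 2))
    z = 0.5 * np.exp(-(xy ** 2).sum(axis=1) / 50.0)  # small bump on an otherwise flat panel
    pts = np.column_stack([xy, z])
    v = surface_variation(pts)
    print(round(v.max(), 4), round(v[np.argmax(pts[:, 2])], 4))
```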
APA, Harvard, Vancouver, ISO, and other styles
17

Razafindramanana, Octavio. "Low-dimensional data analysis and clustering by means of Delaunay triangulation." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4033/document.

Full text of the source
Abstract:
This thesis aims at proposing and discussing several solutions to the problem of low-dimensional point cloud analysis and clustering. These solutions are based on the analysis of the Delaunay triangulation. Two types of approaches are presented and discussed. The first one follows a classical three-step approach: 1) the construction of a proximity graph that embeds topological information, 2) the construction of statistical information out of this graph, and 3) the removal of pointless elements with regard to this information. The impact of different simplicial-complex-based measures, i.e. measures not based only on a graph, is discussed. Evaluation is made with regard to point cloud clustering quality along with handwritten character recognition rates. The second type of approach consists of one-pass approaches that extract clusters while the Delaunay triangulation is being constructed.
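In the spirit of the Delaunay-based approaches discussed above, the sketch below triangulates a 2D point set, removes edges longer than a global threshold and reads clusters off as connected components. The fixed threshold is a deliberate simplification of the statistical edge criteria studied in the thesis.

```python
# Delaunay-based clustering: triangulate the 2D points, drop edges longer than a
# global threshold and take connected components as clusters. The fixed threshold is
# a simplification of the statistical edge criteria studied in the thesis.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def delaunay_clusters(points, max_edge):
    tri = Delaunay(points)
    edges = set()
    for s in tri.simplices:                      # collect unique triangle edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((int(s[a]), int(s[b])))
            edges.add((i, j))
    keep = [(i, j) for i, j in edges
            if np.linalg.norm(points[i] - points[j]) <= max_edge]
    rows = [i for i, j in keep]
    cols = [j for i, j in keep]
    adj = coo_matrix((np.ones(len(keep)), (rows, cols)),
                     shape=(len(points), len(points)))
    _, labels = connected_components(adj, directed=False)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    pts = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(10, 1, (300, 2))])
    labels = delaunay_clusters(pts, max_edge=2.5)
    print(len(set(labels)))                      # two well-separated blobs -> 2 clusters
```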
APA, Harvard, Vancouver, ISO, and other styles
18

Lejemble, Thibault. "Analyse multi-échelle de nuage de points." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30184.

Full text of the source
Abstract:
3D acquisition techniques such as photogrammetry and laser scanning are commonly used in numerous fields such as reverse engineering, archaeology, robotics and urban planning. The main objective is to obtain virtual versions of real objects in order to visualize, analyze and process them easily. Acquisition techniques are becoming more and more powerful and affordable, which creates a strong need to process the resulting large and varied 3D data efficiently. The data are usually obtained in the form of an unstructured 3D point cloud sampling the scanned surface. Traditional signal processing methods cannot be applied directly due to the lack of spatial parametrization: points are only represented by their 3D coordinates, without any particular order. This thesis focuses on the notion of scale of analysis, defined by the size of the neighborhood used to locally characterize the point-sampled surface. Analysis at different scales makes it possible to consider various shapes, which increases the pertinence of the analysis and its robustness to imperfections in the acquired data. We first present theoretical and practical results on curvature estimation adapted to a multi-scale and multi-resolution representation of point clouds. We use them to develop multi-scale algorithms for the recognition of planar and anisotropic shapes such as cylinders and feature curves. Finally, we propose to compute a global 2D parametrization of the underlying surface directly from its unstructured 3D point cloud.
APA, Harvard, Vancouver, ISO, and other styles
19

Ben, Abdallah Hamdi. "Inspection d'assemblages aéronautiques par vision 2D/3D en exploitant la maquette numérique et la pose estimée en temps réel Three-dimensional point cloud analysis for automatic inspection of complex aeronautical mechanical assemblies Automatic inspection of aeronautical mechanical assemblies by matching the 3D CAD model and real 2D images." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0001.

Full text of the source
Abstract:
This thesis is part of a research effort aimed at developing innovative digital tools for what is commonly referred to as the Factory of the Future. Our work was conducted within the joint research laboratory "Inspection 4.0" between IMT Mines Albi/ICA and the company DIOTA, which specializes in the development of numerical tools for Industry 4.0. In the thesis, we were interested in the development of systems exploiting 2D images and/or 3D point clouds for the automatic inspection of complex aeronautical mechanical assemblies (typically an aircraft engine). The CAD (Computer-Aided Design) model of the assembly is at our disposal, and our task is to verify that the assembly has been correctly assembled, i.e. that all the elements constituting the assembly are present, in the right position and at the right place. The CAD model serves as a reference. We developed two inspection scenarios that exploit the inspection systems designed and implemented by DIOTA: (1) a scenario based on a tablet equipped with a camera, carried by a human operator for real-time interactive control, and (2) a scenario based on a robot equipped with sensors (two cameras and a 3D scanner) for fully automatic control. In both scenarios, a so-called localisation camera provides in real time the pose between the CAD model and the sensors, which makes it possible to directly link the 3D digital model with the 2D images or the 3D point clouds analysed. We first developed 2D inspection methods, based solely on the analysis of 2D images. Then, for certain types of inspection that could not be performed using 2D images only (typically those requiring the measurement of 3D distances), we developed 3D inspection methods based on the analysis of 3D point clouds. For the 3D inspection of electrical cables, we proposed an original method for segmenting a cable within a point cloud. We also tackled the problem of automatic selection of the best viewpoint, which allows the inspection sensor to be placed in an optimal observation position. The developed methods have been validated on many industrial cases. Some of the inspection algorithms developed during this thesis have been integrated into the DIOTA Inspect© software and are used daily by DIOTA's customers to perform inspections on industrial sites.
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Fabrício Müller da. "Reamostragem adaptativa para simplificação de nuvens de pontos." Universidade do Vale do Rio dos Sinos, 2015. http://www.repositorio.jesuita.org.br/handle/UNISINOS/4916.

Full text of the source
Abstract:
This work presents a simple and efficient algorithm for point cloud simplification based on the local inclination of the surface sampled by the input set. The objective is to transform the original point cloud into as small a set as possible while keeping the features and topology of the original surface. The proposed algorithm performs an adaptive resampling of the input set, removing redundant points so as to maintain a level of quality defined by the user in the final dataset. The process consists of a recursive partitioning of the input set using Principal Component Analysis (PCA). PCA is applied to define the successive partitions, to obtain a linear approximation (a plane) for each partition, and to evaluate the quality of each approximation. Finally, the algorithm makes a simple choice of the points that will represent the linear approximation of each partition. These points form the final dataset of the simplification process. For evaluation of the results, a distance metric between polygon meshes, based on the Hausdorff distance, was used, comparing the surface reconstructed from the original point cloud with the surface reconstructed from the filtered one. The algorithm achieved compression rates of up to 95% of the input dataset while reducing the total execution time of the reconstruction process and keeping the features and topology of the original model. The quality of the surface reconstructed from the filtered point cloud is also attested by the distance metric.
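A compact sketch of the PCA-driven recursive partitioning idea is given below: each partition is tested for flatness via its smallest covariance eigenvalue, kept with a few representative points if flat enough, and otherwise split along its principal axis. The flatness test and the choice of representatives are simplified stand-ins for the quality criteria described in the dissertation.

```python
# PCA-driven recursive partitioning: split the cloud along its principal axis until a
# partition is nearly planar, then keep a few representative points per partition.
# The flatness test and the choice of representatives are simplified stand-ins for
# the quality criteria described in the dissertation.
import numpy as np

def simplify(points, flat_tol=1e-3, min_points=20, keep=4):
    if len(points) <= min_points:
        return [points.mean(axis=0)]
    centered = points - points.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(centered.T))        # ascending eigenvalues
    if w[0] / w.sum() < flat_tol:                    # partition is close to a plane
        picks = np.linspace(0, len(points) - 1, keep, dtype=int)
        return [points[i] for i in picks]
    proj = centered @ v[:, -1]                       # coordinates along the principal axis
    left, right = points[proj <= np.median(proj)], points[proj > np.median(proj)]
    if len(left) == 0 or len(right) == 0:
        return [points.mean(axis=0)]
    return (simplify(left, flat_tol, min_points, keep)
            + simplify(right, flat_tol, min_points, keep))

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    pts = rng.uniform(-1, 1, size=(20_000, 3))
    pts[:, 2] = 0.05 * np.sin(3 * pts[:, 0])         # gently curved surface sample
    out = np.array(simplify(pts))
    print(len(pts), "->", len(out))
```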
APA, Harvard, Vancouver, ISO, and other styles
21

Asghar, Umair. "Landslide mapping from analysis of UAV-SFM point clouds." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63604.

Full text of the source
Abstract:
In recent years, unmanned aerial vehicles (UAVs) equipped with digital cameras have emerged as an inexpensive alternative to light detection and ranging (LiDAR) for mapping landslides. However, mapping with UAVs typically requires a ground control point (GCP) network to achieve higher mapping accuracies. Complex natural environments often limit the number as well as the proper distribution of GCPs. In the first part of this study, aerial imagery acquired with a quadrotor UAV was processed using the structure-from-motion (SfM) technique to produce a three-dimensional point cloud of a large landslide involving multiple steep slopes and dense tree cover. The resulting point cloud was georeferenced with six different configurations of GCPs measured with a real-time kinematic (RTK) GNSS receiver to test the influence of the number and the distribution of GCPs on mapping accuracies. Horizontal and vertical mapping accuracies of 0.058 m and 0.044 m, respectively, were achieved for the most accurate GCP configuration. A separate point cloud comparison was performed on the georeferenced point clouds to assess the effect of varying topography and tree cover on mapping accuracy. The 3D change in the natural terrain measured over a 1-year period from July 2016 to July 2017 showed movements ranging from ±0.4 m to over ±1 m at the toe of the landslide. Other parts of the landslide either remained inactive or moved less than 0.1 m. The second part of this thesis involved an accuracy comparison of five different open-source algorithms, originally developed for LiDAR data, for classification of the UAV-SfM point clouds. The influences of terrain slope, vegetation and point densities, and difficult-to-filter features on classification accuracy were also evaluated. The CSF and MCC algorithms produced the lowest overall errors (4%), closely followed by LASground and FUSION (5%). All algorithms suffered in areas with densely vegetated steep slopes, understory vegetation, low point density, and low objects. Although any of the tested algorithms, along with careful selection of input parameters, can be used to accurately classify UAV-SfM point clouds, CSF is recommended as it is computationally efficient, does not require any preprocessing, and can process very large point clouds (>50 million points).
Applied Science, Faculty of
Engineering, School of (Okanagan)
Graduate
APA, Harvard, Vancouver, ISO, and other styles
22

Melchert, Wanessa Roberto. "Desenvolvimento de procedimentos analíticos limpos e com alta sensibilidade para a determinação de espécies de interesse ambiental." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/46/46133/tde-25062009-150929/.

Full text of the source
Abstract:
Clean analytical procedures with high sensitivity were developed for the determination of species of environmental interest (carbaryl, sulphate and free chlorine). The procedures were based on coupling flow systems with solenoid micropumps to long optical path length spectrophotometry, or on cloud point extraction, aiming at the concentration of the target species without employing toxic solvents. Carbaryl determination in natural waters was based on a double cloud point extraction: a clean-up step for the removal of interfering organic species and a pre-concentration step for the indophenol blue formed in the reaction with the oxidized form of p-aminophenol. Linear response was observed between 10 and 500 µg L-1, with an apparent molar absorptivity estimated as 4.6×10^5 L mol-1 cm-1. The detection limit was estimated as 7 µg L-1 and the coefficient of variation as 3.4% (n = 8). Recoveries between 91 and 99% were obtained for carbaryl spiked into natural waters. A simple and low-cost flow cell with a 30 cm optical path was constructed for spectrophotometric measurements. The cell shows desirable characteristics such as low attenuation of the radiation beam and an internal volume (75 µL) comparable to that of a conventional flow cell. Its performance was evaluated by phosphate determination with the molybdenum blue method, with linear response between 0.05 and 0.8 mg L-1 of phosphate (r = 0.999). The increase in sensitivity (30.4-fold) in comparison with a conventional 1 cm optical path flow cell agrees with the theoretical value estimated by the Lambert-Beer law. The formation of the indophenol compound was also exploited for the determination of carbaryl in flow systems coupled to 30 and 100 cm optical path cells. Linear responses, detection limits and coefficients of variation were 50-750 and 5-200 µg L-1, 4.0 and 1.7 µg L-1, and 2.3 and 0.7%, respectively, for the 30 and 100 cm cells. The proposed procedure was selective for the determination of carbaryl, without interference from other carbamate pesticides. The waste of the analytical procedure was treated with potassium persulphate and ultraviolet irradiation, with a 94% reduction of total organic carbon; the treated residue was not toxic to Vibrio fischeri bacteria. Sulphate determination was based on turbidimetric measurements with a 1 cm flow cell, with linear response between 20 and 200 mg L-1. Baseline drift was avoided in view of the pulsed flow generated by the solenoid micropumps. The detection limit and the coefficient of variation (n = 20) were estimated as 3 mg L-1 and 2.4%, respectively, for a sampling rate of 33 determinations per hour. Aiming at an increase in sensitivity, a 100 cm optical path flow cell was employed and baseline drift was avoided with a periodic washing step employing EDTA in alkaline medium. Linear response was observed between 7 and 16 mg L-1, with a detection limit of 150 µg L-1, a coefficient of variation of 3.0% (n = 20) and a sampling rate of 25 determinations per hour. Results obtained for natural and rain water samples agreed at the 95% confidence level with the batch turbidimetric procedure. The determination of free chlorine in natural and tap waters was based on the reaction with N,N-diethyl-p-phenylenediamine, with linear response between 5 and 100 µg L-1, and a detection limit and coefficient of variation estimated as 0.23 µg L-1 and 3.4%, respectively. The sampling rate was estimated as 58 determinations per hour.
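The roughly 30-fold sensitivity gain quoted above follows directly from the Lambert-Beer law, since absorbance scales linearly with the optical path length; a short rendering of the relation with the path lengths used in this work:

```latex
% Beer-Lambert law: absorbance A grows linearly with the optical path length b,
% so a 30 cm flow cell is expected to be 30 times more sensitive than a 1 cm cell.
\[
  A = \varepsilon\, b\, c
  \qquad\Rightarrow\qquad
  \frac{A_{30\,\mathrm{cm}}}{A_{1\,\mathrm{cm}}} = \frac{30\ \mathrm{cm}}{1\ \mathrm{cm}} = 30,
\]
% in line with the 30.4-fold gain measured for the constructed cell.
```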
APA, Harvard, Vancouver, ISO, and other styles
23

Oesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-203056.

Full text of the source
Abstract:
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans, because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts. The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift the focus away from a point-centered view of the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis only to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions; that is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear as new, merge or split, or vanish. Especially for high-dimensional data, both tracking (relating features over time) and visualizing the changing structure are difficult problems to solve.
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Polat, Songül. "Combined use of 3D and hyperspectral data for environmental applications." Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.

Повний текст джерела
Анотація:
La demande sans cesse croissante de solutions permettant de décrire notre environnement et les ressources qu'il contient nécessite des technologies qui permettent une description efficace et complète, conduisant à une meilleure compréhension du contenu. Les technologies optiques, la combinaison de ces technologies et un traitement efficace sont cruciaux dans ce contexte. Cette thèse se concentre sur les technologies 3D et les technologies hyper-spectrales (HSI). Tandis que les technologies 3D aident à comprendre les scènes de manière plus détaillée en utilisant des informations géométriques, topologiques et de profondeur, les développements rapides de l'imagerie hyper-spectrale ouvrent de nouvelles possibilités pour mieux comprendre les aspects physiques des matériaux et des scènes dans un large éventail d'applications grâce à leurs hautes résolutions spatiales et spectrales. Les travaux de recherches de cette thèse visent à l'utilisation combinée des données 3D et hyper-spectrales. Ils visent également à démontrer le potentiel et la valeur ajoutée d'une approche combinée dans le contexte de différentes applications. Une attention particulière est accordée à l'identification et à l'extraction de caractéristiques dans les deux domaines et à l'utilisation de ces caractéristiques pour détecter des objets d'intérêt.Plus spécifiquement, nous proposons différentes approches pour combiner les données 3D et hyper-spectrales en fonction des technologies 3D et d’imagerie hyper-spectrale (HSI) utilisées et montrons comment chaque capteur peut compenser les faiblesses de l'autre. De plus, une nouvelle méthode basée sur des critères de forme dédiés à la classification de signatures spectrales et des règles de décision liés à l'analyse des signatures spectrales a été développée et présentée. Les forces et les faiblesses de cette méthode par rapport aux approches existantes sont discutées. Les expérimentations réalisées, dans le domaine du patrimoine culturel et du tri de déchets plastiques et électroniques, démontrent que la performance et l’efficacité de la méthode proposée sont supérieures à celles des méthodes de machines à vecteurs de support (SVM).En outre, une nouvelle méthode d'analyse basée sur les caractéristiques 3D et hyper-spectrales est présentée. L'évaluation de cette méthode est basée sur un exemple pratique du domaine des déchet d'équipements électriques et électroniques (WEEE) et se concentre sur la séparation de matériaux comme les plastiques, les carte à circuit imprimé (PCB) et les composants électroniques sur PCB. Les résultats obtenus confirment qu'une amélioration des ré-sultats de classification a pu être obtenue par rapport aux méthodes proposées précédemment.L’avantage des méthodes et processus individuels développés dans cette thèse est qu’ils peuvent être transposé directement à tout autre domaine d'application que ceux investigué, et généralisé à d’autres cas d’étude sans adaptation préalable
Ever-increasing demands for solutions that describe our environment and the resources it contains, require technologies that support efficient and comprehensive description, leading to a better content-understanding. Optical technologies, the combination of these technologies and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications due to their high spatial and spectral resolutions, while 3D technologies help to understand scenes in a more detailed way by using geometrical, topological and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrates the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and the use of these features to detect objects of interest. More specifically, we propose different approaches to combine 3D and hyperspectral data depending on the HSI/3D technologies used and show how each sensor could compensate the weaknesses of the other. Furthermore, a new shape and rule-based method for the analysis of spectral signatures was developed and presented. The strengths and weaknesses compared to existing approach-es are discussed and the outperformance compared to SVM methods are demonstrated on the basis of practical findings from the field of cultural heritage and waste management.Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this methodology is based on a practical exam-ple from the field of WEEE and focuses on the separation of materials like plastics, PCBs and electronic components on PCBs. The results obtained confirms that an improvement of classification results could be achieved compared to previously proposed methods.The claim of the individual methods and processes developed in this thesis is general validity and simple transferability to any field of application
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Törnblom, Nils. "Underwater 3D Surface Scanning using Structured Light." Thesis, Uppsala universitet, Centrum för bildanalys, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-138205.

Повний текст джерела
Анотація:
In this thesis project, an underwater 3D scanner based on structured light has been constructed and developed. Two other scanners, based on stereoscopy and a line-swept laser, were also tested. The target application is to examine objects inside the water-filled reactor vessel of nuclear power plants. Structured light systems (SLS) use a projector to illuminate the surface of the scanned object, and a camera to capture the surface's reflection. By projecting a series of specific line patterns, the pixel columns of the digital projector can be identified off the scanned surface. 3D points can then be triangulated using ray-plane intersection. These points form the basis of the final 3D model. To construct an accurate 3D model of the scanned surface, both the projector and the camera need to be calibrated. In the implemented 3D scanner, this was done using the Camera Calibration Toolbox for Matlab. The codebase of this scanner comes from the Matlab implementation by Lanman & Taubin at Brown University. The code has been modified and extended to meet the needs of this project. An examination of the effects of the underwater environment has been performed, both theoretically and experimentally. The performance of the scanner has been analyzed, and different 3D model visualization methods have been tested. In the constructed scanner, a small pico projector was used together with a high pixel count DSLR camera. Because these are both consumer-level products, the cost of this system is just a fraction of commercial counterparts, which use professional components. Yet, thanks to the use of a high pixel count camera, the measurement resolution of the scanner is comparable to the high end of industrial structured light scanners.
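A minimal sketch of the ray-plane triangulation step mentioned above (assuming NumPy; the function name and example geometry are illustrative and are not taken from the Lanman & Taubin code):

import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the 3D point where a camera ray meets a projector light plane."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:                      # ray parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + t * ray_dir

# Example: camera at the origin looking along +Z, light plane x = 0.2 (normal along X).
p = ray_plane_intersection(np.zeros(3), np.array([0.1, 0.0, 1.0]),
                           np.array([0.2, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(p)   # approximately [0.2, 0.0, 2.0]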
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Goussard, Charl Leonard. "Semi-automatic extraction of primitive geometric entities from point clouds." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52449.

Повний текст джерела
Анотація:
Thesis (MScEng)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: This thesis describes an algorithm to extract primitive geometric entities (flat planes, spheres or cylinders, as determined by the user's inputs) from unstructured, unsegmented point clouds. The algorithm extracts whole entities or only parts thereof. The entity boundaries are computed automatically. Minimal user interaction is required to extract these entities. The algorithm is accurate and robust. The algorithm is intended for use in the reverse engineering environment. Point clouds created in this environment typically have normal error distributions. Comprehensive testing and results are shown as well as the algorithm's usefulness in the reverse engineering environment.
AFRIKAANSE OPSOMMING: Hierdie tesis beskryf 'n algoritme wat primitiewe geometriese entiteite (plat vlakke, sfere of silinders na gelang van die gebruiker se inset) pas op ongestruktureerde, ongesegmenteerde puntewolke. Die algoritme pas geslote geometriese entiteite of slegs dele daarvan. Die grense van hierdie entiteite word automaties bereken. Minimale gebruikersinteraksie word benodig om die geometriese entiteite te pas. Die algoritme is akkuraat en robuust. Die algoritme is ontwikkel vir gebruik in die truwaartse ingenieurswese omgewing. Puntewolke opgemeet in hierdie omgewing het tipies meetfoute met 'n normaal verdeling. Omvattende toetsing en resultate word getoon en daarmee ook die nut wat die algoritme vir die gebruiksomgewing inhou.
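As an illustrative building block only (not the thesis algorithm), the sketch below fits a flat plane to an unstructured point cloud by least squares, assuming NumPy; the synthetic data are illustrative:

import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

# Noisy samples of the plane z = 0.05 x + 0.1 y + 2
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.05 * xy[:, 0] + 0.1 * xy[:, 1] + 2 + rng.normal(scale=0.01, size=500)
c, n = fit_plane(np.column_stack([xy, z]))
print("centroid:", c, "normal:", n)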
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Yu, Shuda. "Modélisation 3D automatique d'environnements : une approche éparse à partir d'images prises par une caméra catadioptrique." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00844401.

Повний текст джерела
Анотація:
Automatic 3D modelling of an environment from images is a long-standing topic in computer vision. The problem is generally solved in three steps: moving a camera through the scene to capture an image sequence, reconstructing the geometry, and applying a dense stereo method to obtain a surface of the scene. The second step matches interest points across the images and then simultaneously estimates the camera poses and a sparse cloud of 3D scene points corresponding to the interest points. The third step uses the information of all pixels to reconstruct a surface of the scene, for example by estimating a dense point cloud. Here we propose to address the problem by computing a surface directly from the sparse point cloud and the visibility information provided by the geometry estimation. The advantages are low time and space complexities, which is useful, for example, to obtain compact models of large environments such as a city. To this end, we present a surface reconstruction method of the sculpting type within a 3D Delaunay triangulation of the reconstructed points. The visibility information is used to classify the tetrahedra as free space or matter. A surface is then extracted so as to best separate these tetrahedra, using a greedy method and a minority of Steiner points. The 2-manifold constraint is imposed on the surface to allow classical subsequent processing such as smoothing or refinement by photo-consistency optimization. The method was then extended to the incremental case: for each new key frame selected in a video, new 3D points and a new pose are estimated, and the surface is updated. The time complexity is studied in both cases (incremental or not). In the experiments, we use a low-cost catadioptric camera and obtain textured 3D models of complete environments including buildings, ground, vegetation and so on. A drawback of our methods is that thin scene elements, for example tree branches and electricity pylons, are not reconstructed correctly.
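As a sketch of the first step of such a pipeline only (not the thesis code; the SciPy call and the random points are illustrative), a 3D Delaunay triangulation of a sparse point cloud can be built as follows, with the free-space/matter labelling left as a placeholder:

import numpy as np
from scipy.spatial import Delaunay

# Sparse 3D point cloud (random here; in the thesis these are the reconstructed points).
rng = np.random.default_rng(2)
points = rng.uniform(0.0, 1.0, size=(200, 3))

tetra = Delaunay(points)            # 3D Delaunay triangulation: simplices are tetrahedra
print("tetrahedra:", len(tetra.simplices))

# Placeholder labelling: in the actual method each tetrahedron would be classified as
# free space or matter according to the camera-to-point visibility rays that cross it.
labels = np.zeros(len(tetra.simplices), dtype=int)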
Стилі APA, Harvard, Vancouver, ISO та ін.
28

SHAH, GHAZANFAR ALI. "Template-based reverse engineering of parametric CAD models from point clouds." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048640.

Повний текст джерела
Анотація:
Even if many Reverse Engineering techniques exist to reconstruct real objects in 3D, very few are able to deal directly and efficiently with the reconstruction of editable CAD models of assemblies of mechanical parts that can be used in the stages of Product Development Processes (PDP). In the absence of suitable segmentation tools, these approaches struggle to identify and reconstruct model the different parts that make up the assembly. The thesis aims to develop a new Reverse Engineering technique for the reconstruction of editable CAD models of mechanical parts’ assemblies. The originality lies in the use of a Simulated Annealing-based fitting technique optimization process that leverages a two-level filtering able to capture and manage the boundaries of the parts’ geometries inside the overall point cloud to allow for interface detection and local fitting of a part template to the point cloud. The proposed method uses various types of data (e.g. clouds of points, CAD models possibly stored in database together with the associated best parameter configurations for the fitting process). The approach is modular and integrates a sensitivity analysis to characterize the impact of the variations of the parameters of a CAD model on the evolution of the deviation between the CAD model itself and the point cloud to be fitted. The evaluation of the proposed approach is performed using both real scanned point clouds and as-scanned virtually generated point clouds which incorporate several artifacts that could appear with a real scanner. Results cover several Industry 4.0 related application scenarios, ranging from the global fitting of a single part to the update of a complete Digital Mock-Up embedding assembly constraints. The proposed approach presents good capacities to help maintaining the coherence between a product/system and its digital twin.
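A generic simulated annealing loop of the kind referred to above is sketched here on a toy one-parameter template (assuming NumPy; the cooling schedule, step size and example are illustrative and are not the configuration used in the thesis):

import numpy as np

def anneal(deviation, x0, step=0.1, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Generic simulated annealing: minimise deviation(x) over template parameters x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best, e, e_best, t = x.copy(), deviation(x), deviation(x), t0
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)   # perturb the parameters
        e_cand = deviation(cand)
        # Accept improvements always, and worse moves with a temperature-dependent probability.
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / max(t, 1e-12)):
            x, e = cand, e_cand
            if e < e_best:
                best, e_best = x.copy(), e
        t *= cooling
    return best, e_best

# Toy example: fit the radius of a circle template to noisy 2D points, a stand-in for
# fitting CAD template parameters to a scanned point cloud.
rng = np.random.default_rng(3)
ang = rng.uniform(0, 2 * np.pi, 300)
cloud = np.column_stack([2.5 * np.cos(ang), 2.5 * np.sin(ang)]) + rng.normal(scale=0.02, size=(300, 2))
dev = lambda p: np.mean((np.linalg.norm(cloud, axis=1) - p[0]) ** 2)
print(anneal(dev, [1.0]))   # radius estimate close to 2.5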
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Frizzarin, Rejane Mara. "Desenvolvimento de procedimentos analíticos em fluxo explorando difusão gasosa ou extração em ponto de nuvem. Aplicação a amostras de interesse agronômico e ambiental." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/64/64135/tde-30032015-160404/.

Повний текст джерела
Анотація:
Procedimentos analíticos espectrofotométricos foram desenvolvidos empregando etapas de separação e pré-concentração em sistemas de análises em fluxo com multi-impulsão ou lab-in-syringe, com aplicação a amostras de interesse agronômico (ferro em materiais vegetais e alimentos) e ambiental (cianeto dissociável em ácidos, ferro e antimônio em águas). A determinação de cianeto explorou a descoloração do complexo formado entre Cu(I) e ácido 2-2´-biquinolino-4,4´-dicarboxílico (BQA) pela presença de CN-, após a separação de HCN por difusão gasosa. Espectrofotometria com longo caminho óptico foi empregada para aumentar a sensibilidade, com resposta linear entre 5 e 200 g L-1, limite de detecção, coeficiente de variação (n = 10) e frequência de amostragem de 2,0 g L-1, 1,5% e 22 h-1, respectivamente. O procedimento consumiu apenas 48 ng de Cu(II), 5,0 g de ácido ascórbico e 0,9 g de BQA por determinação e gerou 2,6 mL de efluente. Tiocianato, nitrito e sulfito não afetaram a determinação de cianeto e peróxido de hidrogênio evitou a interferência de sulfeto até 200 g L-1. Os resultados para as amostras de águas naturais foram concordantes com o procedimento fluorimétrico em fluxo com 95% de confiança. Novas estratégias foram propostas para a extração em ponto nuvem (EPN) em fluxo: (i) a fase rica em surfactante foi retida diretamente na cela de fluxo, evitando a diluição; (ii) microbombas solenoide foram exploradas para melhorar a mistura e modular a vazão na retenção e remoção da fase rica, evitando a eluição com solvente orgânico e (iii) o calor liberado e os sais fornecidos por uma reação de neutralização em linha foram explorados para indução do ponto nuvem, sem dispositivo externo de aquecimento. Estas inovações foram demonstradas pela determinação espectrofotométrica de ferro baseada no complexo com 1-(2-tiazolilazo)-2-naftol (TAN). Resposta linear foi observada entre 10 e 200 g L-1, com limite de detecção, coeficiente de variação e frequência de amostragem de 5 g L-1, 2,3% (n = 7) e 26 h-1, respectivamente. O fator de enriquecimento foi de 8,9 com consumo apenas de 6 g de TAN e 390 g de Triton X-114 por determinação. Os resultados para amostras de águas foram concordantes com o procedimento de referência e os obtidos para digeridos de materiais de referência de alimentos concordaram com os valores certificados. A determinação espectrofotométrica de antimônio foi realizada explorando pela primeira vez a EPN em sistema lab-in-syringe. O complexo iodeto e antimônio forma um par iônico com H+, que pode ser extraído com Triton X-114. Planejamento fatorial demonstrou que as concentrações de ácido ascórbico, H2SO4 e Triton X-114, bem como as interações de segunda e de terceira ordem foram significativas (95% de confiança). Planejamento Box-Behnken foi aplicado para a identificação dos valores críticos. Robustez com 95% de confiança, resposta linear entre 5 e 50 g L-1, limite de detecção, coeficiente de variação (n = 5) e frequência de amostragem foram estimados em 1,8 g L-1, 1,6% e 16 h-1, respectivamente. Os resultados para amostras de águas naturais e medicamentos anti-leishmaniose foram concordantes com os obtidos por espectrometria de absorção atômica com geração de hidretos (HGFAAS) com 95% de confiança
Spectrophotometric analytical procedures were developed by exploiting separation and preconcentration steps in flow systems based on multi-pumping or lab-in-syringe approaches with application to agronomic (iron in plant materials and food) and environmental samples (acid dissociable cyanide, iron and antimony in waters). Cyanide determination exploited bleaching of the Cu(I)/2,2\'-biquinoline 4,4\'-dicarboxylic acid (BCA) complex by the analyte, after separation of HCN by gas diffusion. Long path length spectrophotometry was successfully exploited to increase sensitivity, thus achieving a linear response from 5 to 200 g L-1, with detection limit, coefficient of variation (n = 10) and sampling rate of 2 g L-1, 1.5% and 22 h-1, respectively. Each determination consumed 48 ng of Cu(II), 5 g of ascorbic acid and 0.9 g of BCA. As high as 100 mg L-1 thiocyanate, nitrite or sulfite did not affect cyanide determination and sample pretreatment with hydrogen peroxide avoided sulfide interference up to 200 g L-1. The procedure is environmentally friendly and presented one of the lowest detection limits associated to high sampling rate. The results for freshwater samples agreed with those obtained with the flow-based fluorimetric procedure at the 95% confidence level. Novel strategies were proposed for on-line cloud point extraction (CPE): (i) the surfactant-rich phase was retained directly into the flow cell to avoid dilution prior to detection; (ii) solenoid micro-pumps were explored to improve mixing and for flow modulation in the retention and removal of the surfactant-rich phase, thus avoiding the elution step with organic solvents and (iii) the heat released and the salts provided by an on-line neutralization reaction were exploited to induce cloud point without an external heating device. These approaches were demonstrated for the spectrophotometric determination of iron based on complex formation with 1-(2-thiazolylazo)-2-naphtol (TAN). A linear response was observed from 10 to 200 g L-1, with detection limit, coefficient of variation, and sampling rate of 5 g L-1, 2.3% (n = 7) and 26 h-1, respectively. The enrichment factor was 8.9 and the procedure consumed only 6 g of TAN and 390 g of Triton X-114 per determination. The results for freshwater samples agreed with the reference procedure and those obtained for certified reference materials of food agreed with the certified values. Spectrophotometric determination of antimony was performed for the first time exploiting CPE in the lab-in-syringe system. The antimony/iodide complex forms an ion-pair with H+, which can be extracted with Triton X-114. Factorial design showed that the concentrations of ascorbic acid, H2SO4 and Triton X-114, as well as the second and third order interactions were significant (95% confidence). The Box-Behnken design was applied to identify the critical values. The system is robust with 95% confidence and a linear response was observed from 5 to 50 g L-1, with detection limit, coefficient of variation (n = 5) and sampling rate of 1.8 g L-1, 1.6% and 16 h-1, respectively. The results for water samples and antileishmanial drugs agreed with those obtained by hydride generation atomic absorption spectrometry at the 95% confidence level
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Oesterling, Patrick [Verfasser], Gerik [Akademischer Betreuer] Scheuermann, Gerik [Gutachter] Scheuermann, and Thomas [Gutachter] Wischgoll. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction / Patrick Oesterling ; Gutachter: Gerik Scheuermann, Thomas Wischgoll ; Betreuer: Gerik Scheuermann." Leipzig : Universitätsbibliothek Leipzig, 2016. http://d-nb.info/1240481624/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Aijazi, Ahmad Kamal. "3D urban cartography incorporating recognition and temporal integration." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22528/document.

Повний текст джерела
Анотація:
Au cours des dernières années, la cartographie urbaine 3D a suscité un intérêt croissant pour répondre à la demande d’applications d’analyse des scènes urbaines tournées vers un large public. Conjointement les techniques d’acquisition de données 3D progressaient. Les travaux concernant la modélisation et la visualisation 3D des villes se sont donc intensifiés. Des applications fournissent au plus grand nombre des visualisations efficaces de modèles urbains à grande échelle sur la base des imageries aérienne et satellitaire. Naturellement, la demande s’est portée vers des représentations avec un point de vue terrestre pour offrir une visualisation 3D plus détaillée et plus réaliste. Intégrées dans plusieurs navigateurs géographiques comme Google Street View, Microsoft Visual Earth ou Géoportail, ces modélisations sont désormais accessibles et offrent une représentation réaliste du terrain, créée à partir des numérisateurs mobiles terrestres. Dans des environnements urbains, la qualité des données obtenues à partir de ces véhicules terrestres hybrides est largement entravée par la présence d’objets temporairement statiques ou dynamiques (piétons, voitures, etc.) dans la scène. La mise à jour de la cartographie urbaine via la détection des modifications et le traitement des données bruitées dans les environnements urbains complexes, l’appariement des nuages de points au cours de passages successifs, voire la gestion des grandes variations d’aspect de la scène dues aux conditions environnementales constituent d’autres problèmes délicats associés à cette thématique. Plus récemment, les tâches de perception s’efforcent également de mener une analyse sémantique de l’environnement urbain pour renforcer les applications intégrant des cartes urbaines 3D. Dans cette thèse, nous présentons un travail supportant le passage à l’échelle pour la cartographie 3D urbaine automatique incorporant la reconnaissance et l’intégration temporelle. Nous présentons en détail les pratiques actuelles du domaine ainsi que les différentes méthodes, les applications, les technologies récentes d’acquisition des données et de cartographie, ainsi que les différents problèmes et les défis qui leur sont associés. Le travail présenté se confronte à ces nombreux défis mais principalement à la classification des zones urbaines l’environnement, à la détection automatique des changements, à la mise à jour efficace de la carte et l’analyse sémantique de l’environnement urbain. Dans la méthode proposée, nous effectuons d’abord la classification de l’environnement urbain en éléments permanents et temporaires. Les objets classés comme temporaire sont ensuite retirés du nuage de points 3D laissant une zone perforée dans le nuage de points 3D. Ces zones perforées ainsi que d’autres imperfections sont ensuite analysées et progressivement éliminées par une mise à jour incrémentale exploitant le concept de multiples passages. Nous montrons que la méthode d’intégration temporelle proposée permet également d’améliorer l’analyse sémantique de l’environnement urbain, notamment les façades des bâtiments. Les résultats, évalués sur des données réelles en utilisant différentes métriques, démontrent non seulement que la cartographie 3D résultante est précise et bien mise à jour, qu’elle ne contient que les caractéristiques permanentes exactes et sans imperfections, mais aussi que la méthode est également adaptée pour opérer sur des scènes urbaines de grande taille. 
La méthode est adaptée pour des applications liées à la modélisation et la cartographie du paysage urbain nécessitant une mise à jour fréquente de la base de données
Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community due to an ever increasing demand for urban landscape analysis for different popular applications, coupled with advances in 3D data acquisition technology. As a result, in the last few years, work on the 3D modeling and visualization of cities has intensified. Lately, applications have been very successful in delivering effective visualizations of large scale models based on aerial and satellite imagery to a broad audience. This has created a demand for ground based models as the next logical step to offer 3D visualizations of cities. Integrated in several geographical navigators, like Google Street View, Microsoft Visual Earth or Geoportail, several such models are accessible to large public who enthusiastically view the real-like representation of the terrain, created by mobile terrestrial image acquisition techniques. However, in urban environments, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporary stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include efficient update of the urban cartography, effective change detection in the urban environment and issues like processing noisy data in the cluttered urban environment, matching / registration of point clouds in successive passages, and wide variations in environmental conditions, etc. Another aspect that has attracted a lot of attention recently is the semantic analysis of the urban environment to enrich semantically 3D mapping of urban cities, necessary for various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration. We present in details the current practices in the domain along with the different methods, applications, recent data acquisition and mapping technologies as well as the different problems and challenges associated with them. The work presented addresses many of these challenges mainly pertaining to classification of urban environment, automatic change detection, efficient updating of 3D urban cartography and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. The objects classified as temporary are then removed from the 3D point cloud leaving behind a perforated 3D point cloud of the urban environment. These perforations along with other imperfections are then analyzed and progressively removed by incremental updating exploiting the concept of multiple passages. We also show that the proposed method of temporal integration also helps in improved semantic analysis of the urban environment, specially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results not only demonstrate the efficiency, scalability and technical strength of the method but also that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updating
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Lhéritier, Alix. "Méthodes non-paramétriques pour l'apprentissage et la détection de dissimilarité statistique multivariée." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4072/document.

Повний текст джерела
Анотація:
Cette thèse présente trois contributions en lien avec l'apprentissage et la détection de dissimilarité statistique multivariée, problématique d'importance primordiale pour de nombreuses méthodes d'apprentissage utilisées dans un nombre croissant de domaines. La première contribution introduit la notion de taille d'effet multivariée non-paramétrique, éclairant la nature de la dissimilarité détectée entre deux jeux de données, en deux étapes. La première consiste en une décomposition d'une mesure de dissimilarité (divergence de Jensen-Shannon) visant à la localiser dans l'espace ambiant, tandis que la seconde génère un résultat facilement interprétable en termes de grappes de points de forte discrépance et en proximité spatiale. La seconde contribution présente le premier test non-paramétrique d'homogénéité séquentiel, traitant les données issues de deux jeux une à une--au lieu de considérer ceux-ci- in extenso. Le test peut ainsi être arrêté dès qu'une évidence suffisamment forte est observée, offrant une flexibilité accrue tout en garantissant un contrôle del'erreur de type I. Sous certaines conditions, nous établissons aussi que le test a asymptotiquement une probabilité d'erreur de type II tendant vers zéro. La troisième contribution consiste en un test de détection de changement séquentiel basé sur deux fenêtres glissantes sur lesquelles un test d'homogénéité est effectué, avec des garanties sur l'erreur de type I. Notre test a une empreinte mémoire contrôlée et, contrairement à des méthodes de l'état de l'art qui ont aussi un contrôle sur l'erreur de type I, a une complexité en temps constante par observation, le rendant adapté aux flux de données
In this thesis, we study problems related to learning and detecting multivariate statistical dissimilarity, which are of paramount importance for many statistical learning methods nowadays used in an increasing number of fields. This thesis makes three contributions related to these problems. The first contribution introduces a notion of multivariate nonparametric effect size shedding light on the nature of the dissimilarity detected between two datasets. Our two-step method first decomposes a dissimilarity measure (Jensen-Shannon divergence) aiming at localizing the dissimilarity in the data embedding space, and then proceeds by aggregating points of high discrepancy and in spatial proximity into clusters. The second contribution presents the first sequential nonparametric two-sample test. That is, instead of being given two sets of observations of fixed size, observations can be treated one at a time and, when sufficiently strong evidence has been found, the test can be stopped, yielding a more flexible procedure while keeping guaranteed type I error control. Additionally, under certain conditions, when the number of observations tends to infinity, the test has a vanishing probability of type II error. The third contribution consists in a sequential change detection test based on two sliding windows on which a two-sample test is performed, with type I error guarantees. Our test has a controlled memory footprint and, as opposed to state-of-the-art methods that also provide type I error control, has constant time complexity per observation, which makes our test suitable for streaming data.
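For reference, the Jensen-Shannon divergence underlying the first contribution can be computed on discrete histograms as follows (assuming NumPy; the binning and the synthetic samples are illustrative only):

import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (natural log)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))   # Kullback-Leibler term
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Histograms of two samples on a common binning, a stand-in for two datasets to compare.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 5000)
y = rng.normal(0.5, 1.0, 5000)
bins = np.linspace(-5, 5, 41)
print(jensen_shannon(np.histogram(x, bins)[0], np.histogram(y, bins)[0]))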
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Rolin, Raphaël. "Contribution à une démarche numérique intégrée pour la préservation des patrimoines bâtis." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2450/document.

Повний текст джерела
Анотація:
Au travers de l’ensemble de ces travaux, l’objectif principal consiste à valider la pertinence de la construction et de l’utilisation de modèles 3D géométriques ou paramétriques orientés BIM/hBIM pour des analyses numériques. Il s’agit notamment d’études structurales dans le cas de bâtiments historiques ainsi que la planification potentielle de travaux de restauration, rénovation énergétique et réhabilitation. Des travaux d’exploitation complémentaires des données et des nuages de points, pour la détection, la segmentation et l’extraction d’entités géométriques ont également été intégrés dans les travaux et la méthodologie proposée. Le processus de traitement des données, modélisation géométrique ou paramétrique et leur exploitation, proposé dans ces travaux, contribue à améliorer et mieux comprendre les contraintes et enjeux des différentes configurations et conditions liées aux cas d’études et aux contraintes spécifiques propres aux types de constructions. Les contributions proposées pour les différentes méthodes de modélisation géométriques et paramétriques à partir des nuages de points, sont abordées par la construction de modèles géométriques orientés BIM ou hBIM. De même, les processus de détection d’éléments surfaciques et d’extraction de données à partir de nuages de points mis en place sont présentés. La mise en application de ces méthodes de modélisation est systématiquement illustrée par différents cas d’étude, dont l’ensemble des travaux relatifs ont été effectués dans le cadre de cette thèse. Le but est dès lors de démontrer l’intérêt et la pertinence de ces méthodes numériques en fonction du contexte, des besoins et des études envisagées, par exemple avec la flèche de la cathédrale de Senlis (Oise) et le site de l’Hermitage (Oise). Des analyses numériques de type éléments finis permettent ensuite de valider la pertinence de telles démarches
Throughout this work, the main objective is to validate the relevance of construction and use of geometric or parametric 3D models BIM or hBlM-oriented for numerical analyzes. These include structural studies in the case of historic buildings, as well as planning for restoration work, energy renovation and rehabilitation. Complementary data mining and use of point clouds for the detection, segmentation and extraction of geometric features have also been integrated into the work and proposed methodology. The process of data processing, geometric or parametric modeling and their exploitation, proposed in this work, contributes to improve and understand better the constraints and stakes of the different configurations and conditions related to the case studies and the specific constraints specific to the types of constructions. The contributions proposed for the different geometric and parametric modeling methods from point clouds are addressed by the construction of geometric models BIM or hBlM-oriented. Similarly, the process of surface detection, extraction of data and elements from point clouds are presented. The application of these modeling methods is systematically illustrated by different case studies, all of whose relative work has been carried out within the framework of this thesis. The goal is therefore to demonstrate the interest and relevance of these numerical methods according to the context, needs and studies envisaged, for example with the spire of the Senlis cathedral (Oise) and the Hermitage site (Oise). Numerical analyzes with finite element method are used to validate the relevance of these approaches
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Chia-HsiuTSAI and 蔡家修. "Point Cloud Analysis for Object Recognition and Post-Earthquake Reconnaissance." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/a2e52w.

Повний текст джерела
Анотація:
Master's thesis
National Cheng Kung University
Department of Civil Engineering
104
In recent years, three-dimensional laser scanning technology with high precision and high efficiency has been constantly updated. It is used for building modeling, heritage conservation, bridge deformation monitoring, post-earthquake reconnaissance, coastal terrain retreat monitoring, large-scale terrain change monitoring and more, and has gradually replaced traditional surveying technology. However, current techniques for processing large numbers of scattered point clouds are concentrated on specific applications such as building models or point cloud data simulation, and the point cloud data contain considerable noise that requires further processing, so the application of point cloud data is still at the research and development stage and needs continued development. This study is divided into two directions: automatic object recognition and post-earthquake reconnaissance. The first theme focuses on the automatic identification of specific objects from scattered point cloud data, where the original point cloud is processed according to a set computation strategy. A region growing method is used to classify the scattered point cloud data, various algorithms such as boundary extraction are then used to extract the characteristics of each cluster, and finally the cluster features are cross-matched to recognize particular objects. The second theme investigates the damage caused by earthquakes: structural damage analysis under seismic forces, such as structural deformation and displacement, surface damage records, and the extraction of large earthquake-induced landslides, and proposes specific quantitative indicators for damage assessment. The results show the feasibility of automatic identification of specific objects, and several quantitative assessment results can be used to evaluate structural damage.
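A minimal sketch of a region-growing style clustering of a scattered point cloud (assuming NumPy and SciPy; the neighbourhood radius and synthetic data are illustrative, not the computation strategy used in the thesis):

import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, radius=0.1):
    """Group points into clusters by growing regions over a fixed neighbourhood radius."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue = [seed]
        labels[seed] = current
        while queue:                              # flood-fill from the seed point
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    return labels, np.bincount(labels)

# Two well-separated blobs as a toy scattered point cloud.
rng = np.random.default_rng(5)
pts = np.vstack([rng.normal(0, 0.02, (200, 3)), rng.normal(1, 0.02, (200, 3))])
labels, sizes = region_grow(pts)
print("clusters:", len(sizes), "sizes:", sizes)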
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Hsu, Ya-Ting, and 許雅婷. "Cloud point measurement and analysis of branched PEO/ salt / water systems." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/64w6na.

Повний текст джерела
Анотація:
Master's thesis
National Taipei University of Technology
Institute of Biotechnology
101
Cloud points (CPs) for the phase separation of aqueous solutions of branched poly(ethylene oxide) (PEO), samples BP5 and BP6, which have a lower critical solution temperature (LCST), were determined under heating using light transmission, dynamic light scattering and viscometry methods. It was found that the CP temperature depended on the concentrations of polymer and salts: NaCl, KCl, NaH2PO4 and KH2PO4. The CP temperature could be reduced below 37 °C by the addition of salt, which implies that the branched PEO/water/salt system has potential for medical applications. Furthermore, the interaction parameter between solvent and solute was estimated with a modified Flory-Huggins model.
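For context only, the textbook (unmodified) Flory-Huggins mixing free energy from which an interaction parameter can be estimated reads, in LaTeX notation (the thesis uses a modified form, so this is just the standard baseline):

\[
\frac{\Delta G_{\mathrm{mix}}}{RT} = \frac{\phi_p}{N}\ln\phi_p + (1-\phi_p)\ln(1-\phi_p) + \chi\,\phi_p\,(1-\phi_p),
\]

where \(\phi_p\) is the polymer volume fraction, \(N\) the degree of polymerization and \(\chi\) the solvent-solute interaction parameter. Setting \(\partial^2(\Delta G_{\mathrm{mix}}/RT)/\partial\phi_p^2 = 0\) gives the spinodal condition \(\chi = \tfrac{1}{2}\bigl(\tfrac{1}{N\phi_p} + \tfrac{1}{1-\phi_p}\bigr)\), from which a value of \(\chi\) at the measured cloud point can be bracketed.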
Стилі APA, Harvard, Vancouver, ISO та ін.
36

(5929979), Yun-Jou Lin. "Point Cloud-Based Analysis and Modelling of Urban Environments and Transportation Corridors." Thesis, 2019.

Знайти повний текст джерела
Анотація:
3D point cloud processing has been a critical task due to the increasing demand of a variety of applications such as urban planning and management, as-built mapping of industrial sites, infrastructure monitoring, and road safety inspection. Point clouds are mainly acquired from two sources, laser scanning and optical imaging systems. However, the original point clouds usually do not provide explicit semantic information, and the collected data needs to undergo a sequence of processing steps to derive and extract the required information. Moreover, according to application requirements, the outcomes from the point cloud processing could be different. This dissertation presents two tiers of data processing. The first tier proposes an adaptive data processing framework to deal with multi-source and multi-platform point clouds. The second tier introduces two point clouds processing strategies targeting applications mainly from urban environments and transportation corridors.

For the first tier of data processing, the internal characteristics (e.g., noise level and local point density) of data should be considered first since point clouds might come from a variety of sources/platforms. The acquired point clouds may have a large number of points. Data processing (e.g., segmentation) of such large datasets is time-consuming. Hence, to attain high computational efficiency, this dissertation presents a down-sampling approach while considering the internal characteristics of data and maintaining the nature of the local surface. Moreover, point cloud segmentation is one of the essential steps in the initial data processing chain to derive the semantic information and model point clouds. Therefore, a multi-class simultaneous segmentation procedure is proposed to partition point cloud into planar, linear/cylindrical, and rough features. Since segmentation outcomes could suffer from some artifacts, a series of quality control procedures are introduced to evaluate and improve the quality of the results.

For the second tier of data processing, this dissertation focuses on two applications for high human activity areas, urban environments and transportation corridors. For urban environments, a new framework is introduced to generate digital building models with accurate right-angle, multi-orientation, and curved boundary from building hypotheses which are derived from the proposed segmentation approach. For transportation corridors, this dissertation presents an approach to derive accurate lane width estimates using point clouds acquired from a calibrated mobile mapping system. In summary, this dissertation provides two tiers of data processing. The first tier of data processing, adaptive down-sampling and segmentation, can be utilized for all kinds of point clouds. The second tier of data processing aims at digital building model generation and lane width estimation applications.
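For contrast with the adaptive approach described above, a minimal, non-adaptive voxel-grid down-sampling is sketched below (assuming NumPy; the dissertation's method additionally accounts for noise level, local point density and the nature of the local surface):

import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one centroid per occupied voxel: a simple, non-adaptive down-sampling."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((inverse.max() + 1, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(6)
cloud = rng.uniform(0, 1, size=(100000, 3))
print(voxel_downsample(cloud, 0.05).shape)   # roughly 20**3 = 8000 representatives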
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Li, Zhong-Yan, and 李忠彥. "A Study of the Point Cloud Date Analysis for Ground-Based LIDAR." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/53190982887035469634.

Повний текст джерела
Анотація:
Master's thesis
Ching Yun University of Science and Technology
Institute of Civil Engineering and Disaster Prevention
95
Ground-based LIDAR takes advantage of a three-dimensional sweeping laser to obtain highly accurate point cloud scans of the desired analysis scene. This data capturing method provides useful three-dimensional information on the evaluated structure and its surrounding environment. Generally, a surveyor's beacon located on the ground is observed as a specific point and used as a control point to determine whether the desired target has shifted. This investigation initially employed a laboratory-based simulation of a bridge slab; experimental field work was later carried out on bridges in the Miaoli, Houlong region. Suitable positions on selected bridge piers, together with surveyor's beacons, were scanned at regular intervals. The scanned point clouds of the beacons were post-processed with the RealWorks Survey 4.1.2 software, portions of the point cloud data were deleted at random, and a program written in Microsoft Visual Basic 6.0 was used to reject errors and compute the positions of the point cloud centres. The aim was to examine how much of the data can be removed while the computed centre still conforms with that of the original data, whether the deviation of the centre coordinate shows a growing trend as the remaining amount of data decreases, and thereby to reduce the required file storage capacity based on the experimental results.
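A small sketch of computing a point-cloud centre with iterative error rejection after random thinning of the data (assuming NumPy; this is only in the spirit of the Visual Basic 6.0 program described above, not the original code):

import numpy as np

def robust_center(points, n_sigma=2.0, max_iter=10):
    """Centre of a point cloud with iterative rejection of points far from the centre."""
    pts = np.asarray(points, float)
    for _ in range(max_iter):
        center = pts.mean(axis=0)
        dist = np.linalg.norm(pts - center, axis=1)
        keep = dist < dist.mean() + n_sigma * dist.std()   # reject gross outliers
        if keep.all():
            break
        pts = pts[keep]
    return pts.mean(axis=0), len(pts)

# Simulate thinning the scan: randomly delete about 70% of the points and compare centres.
rng = np.random.default_rng(7)
full = rng.normal([10.0, 5.0, 2.0], 0.05, size=(5000, 3))
thin = full[rng.random(len(full)) > 0.7]
print(robust_center(full)[0], robust_center(thin)[0])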
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Bates, Jordan Steven. "Oblique UAS imagery and point cloud processing for 3D rock glacier monitoring." Master's thesis, 2020. http://hdl.handle.net/10362/94396.

Повний текст джерела
Анотація:
Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies
Rock glaciers play a large ecological role and are heavily relied upon by local communities for water, power, and revenue. With climate change, the rate at which they are deforming has increased over the years, making it more important to gain a better understanding of these geomorphological movements for improved predictions, correlations, and decision making. It is becoming increasingly practical to examine a rock glacier with 3D visualization to obtain more perspectives and realistic terrain profiles. Recently gaining more attention is the use of Terrestrial Laser Scanners (TLS) and Unmanned Aircraft Systems (UAS), used separately and combined, to gather high-resolution data for 3D analysis. This data is typically transformed into highly detailed Digital Elevation Models (DEMs), where the DEM of Difference (DoD) is used to track changes over time. This study compares these commonly used collection methods and analyses to a newly conceived multirotor UAS collection method and to a new point cloud Multiscale Model to Model Cloud Comparison (M3C2) change detection seen in recent studies. Data of the Innere Ölgrube Rock Glacier in Austria were collected with a TLS in 2012 and with a multirotor UAS in 2019. It was found that oblique imagery with terrain height corrections, which creates perspectives similar to those the TLS provides, increased the completeness of data collection for a better reconstruction of a rock glacier in 3D. The new method improves the completeness of data by an average of at least 8.6%. Keeping the data as point clouds provided a much better representation of the terrain. When transforming point clouds into DEMs with common interpolation methods, it was found that the average area of surface items could be exaggerated by 2.2 m^2, while point clouds were much more accurate, with 0.3 m^2 of accuracy. DoD and M3C2 results were compared, and it was found that DoD always provides a maximum increase of at least 1.1 m and a decrease of 0.85 m more than M3C2, with larger standard deviations but similar mean values, which could be attributed to horizontal inaccuracies and smoothing of the interpolated data.
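The DoD operation referred to above is a cell-wise subtraction of two co-registered DEMs; a minimal sketch with synthetic grids follows (assuming NumPy; the level-of-detection threshold and the simulated lowering patch are illustrative only):

import numpy as np

# Two co-registered DEMs of the same area at different epochs (synthetic here).
rng = np.random.default_rng(8)
dem_2012 = rng.normal(2500.0, 5.0, size=(200, 200))
dem_2019 = dem_2012 + rng.normal(0.0, 0.1, size=(200, 200))
dem_2019[80:120, 80:120] -= 1.5           # a patch of simulated surface lowering

dod = dem_2019 - dem_2012                 # DEM of Difference: positive = raising, negative = lowering
significant = np.abs(dod) > 0.3           # simple level-of-detection threshold
print("mean change:", dod.mean(), "significant cells:", significant.sum())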
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Paul, Rahul. "Topological analysis, non-linear dimensionality reduction and optimisation applied to manifolds represented by point clouds." Thesis, 2018. http://hdl.handle.net/1959.13/1393470.

Повний текст джерела
Анотація:
Research Doctorate - Doctor of Philosophy (PhD)
In recent years, there has been a growing demand for computational techniques that respect the non-linear structure of high-dimensional data, in both real-world applications and research. Various forms of manifolds can describe non-linear objects. However, manifolds are abstract mathematical concepts and in applications these are often represented by high-dimensional finite sets of sample points. This thesis investigates techniques from machine learning, optimisation and computational topology that can be applied to such point clouds. The first part of this thesis presents a topological approach for validating nonlinear dimensionality reduction. During the process of non-linear dimensionality reduction, manifolds represented by point clouds are at risk of changing their topology. The impact of manifold learning is evaluated by comparing Betti numbers based on persistent homology of test manifolds before and after dimensionality reduction. The second part of the thesis addresses the processing of large point cloud data as it can occur in real applications. The topological analysis of this data using traditional methods for persistent homology can be a computationally costly task. If the data is represented by large point clouds, many current computing systems find processing difficult or fail to process the data. This thesis proposes an alternative approach that employs deep learning to estimate Betti numbers of manifolds represented by point clouds. The third part of the thesis investigates simulated examples of optimisation on general differentiable manifolds without the requirement of a Riemannian structure. A barrier method with exact line search for the optimisation problem over manifolds is proposed. The last part of this thesis reports on collaborative field work with Xerox India using a real-world data set. A heuristic algorithm is employed to solve a practical task allocation problem.
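A small sketch of estimating Betti numbers of a manifold represented by a point cloud via persistent homology, the quantity the thesis compares before and after dimensionality reduction (assuming the third-party ripser package; the persistence threshold and the noisy-circle example are illustrative):

import numpy as np
from ripser import ripser   # third-party package for persistent homology

# Points sampled from a noisy circle: expected Betti numbers b0 = 1, b1 = 1.
rng = np.random.default_rng(9)
ang = rng.uniform(0, 2 * np.pi, 400)
cloud = np.column_stack([np.cos(ang), np.sin(ang)]) + rng.normal(scale=0.05, size=(400, 2))

dgms = ripser(cloud, maxdim=1)['dgms']
b0 = np.sum(np.isinf(dgms[0][:, 1]))            # connected components (infinite bars)
persistence = dgms[1][:, 1] - dgms[1][:, 0]
b1 = np.sum(persistence > 0.5)                  # long-lived 1-cycles
print("Betti numbers:", int(b0), int(b1))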
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Rato, Daniela Ferreira Pinto Dias. "Detection of the navigable road limits by analysis of the accumulated point cloud density." Master's thesis, 2019. http://hdl.handle.net/10773/28205.

Повний текст джерела
Анотація:
As part of the Atlas project, this dissertation aims to identify the navigable limits of the road by analyzing the density of accumulated point clouds, obtained through laser readings from a SICK LD-MRS sensor. This sensor, installed in front of the AtlasCar2, has the purpose of identifying obstacles at road level and from it the creation of occupation grids that delimit the navigable space of the vehicle is proposed. First, the point cloud density is converted into an occupancy density grid, normalized in each frame in relation to the maximum density. Edge detection algorithms and gradient filters are subsequently applied to the density grid, in order to detect patterns that match sudden changes in density, both positive and negative. To these grids are applied thresholds in order to remove irrelevant information. Finally, a methodology for quantitative evaluation of algorithms was also developed, using KML files to define road boundaries and, relying on the accuracy of the GPS data obtained, comparing the actual navigable space with the one obtained by the methodology for detection of road boundaries and thus evaluating the performance of the work developed. In this work, the results of the different algorithms are presented, as well as several tests taking into account the influence of grid resolution, car speed, among others. In general, the work developed meets the initially proposed objectives, being able to detect both positive and negative obstacles and being minimally robust to speed and road conditions.
No âmbito do projeto Atlas, esta dissertação prevê a identificação dos limites navegáveis da estrada através da análise da densidade da acumulaçao de nuvens de pontos, obtidas através de leituras laser provenientes de um sensor SICK LD-MRS. Este sensor, instalado na frente do AtlasCar2, tem como propósito a identificação de obstáculos ao nível da estrada e a partir dos seus dados prevê-se a criação de grelhas de ocupação que delimitem o espaço navegável do veículo. Em primeiro lugar, a densidade da nuvem de pontos é transformada numa grelha de densidade normalizada em cada frame em relação à densidade máxima, à qual posteriormente são aplicados algoritmos de deteção de arestas e filtros de gradiente com o objetivo de detetar padrões que correspondam a mudanças súbitas de densidade, tanto positivas como negativas. A estas grelhas são aplicados limiares de forma a eliminar informação irrelevante. Por fim, foi desenvolvida também uma metodologia de avaliação quantitativa dos algoritmos, usando ficheiros KML para deliniar limites da estrada e, contanto com a precisão dos dados de GPS obtidos, comparar o espaço navegável real com o obtido pela metodologia de deteção de limites de estrada e assim avaliar o desempenho dos algoritmos desenvolvidos. Neste trabalho são apresentados resultados dos diferentes algoritmos, bem como diversos testes tendo em conta a influência da resolução de grelha, velocidade do carro, entre outros. O trabalho desenvovido cumpre os objetivos propostos inicialmente, sendo capaz de detetar ambos obstáculos positivos e negativos e sendo minimamente robusto a velocidade e condições de estrada.
Mestrado em Engenharia Mecânica
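A minimal sketch of the density-grid and gradient-filter idea described above (assuming NumPy and SciPy; the grid resolution, threshold and synthetic points are illustrative, not the dissertation's parameters):

import numpy as np
from scipy import ndimage

def density_edges(points_xy, cell=0.2, extent=20.0, threshold=0.3):
    """Accumulate laser points into a density grid and mark sharp density changes."""
    bins = np.arange(-extent, extent + cell, cell)
    grid, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1], bins=(bins, bins))
    grid /= max(grid.max(), 1.0)                  # normalise to the frame maximum
    gx = ndimage.sobel(grid, axis=0)              # gradient filters highlight sudden
    gy = ndimage.sobel(grid, axis=1)              # positive or negative density changes
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold                  # candidate road-limit cells

rng = np.random.default_rng(10)
road = np.column_stack([rng.uniform(-15, 15, 20000), rng.uniform(-3, 3, 20000)])
print(density_edges(road).sum(), "edge cells")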
Стилі APA, Harvard, Vancouver, ISO та ін.
41

MinLi and 李旻. "Analysis of 3D Point Cloud-Based Surface Reconstruction Methods: Screened Poisson Vs. B-Spline." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2b26dz.

Повний текст джерела
Анотація:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
106
Surface reconstruction is a process that uses a scanned point cloud to reconstruct the original surface of an object. Using different scanning methods, we can obtain information about the surface of the object of interest and then reconstruct this information into much denser point clouds or even a continuous surface. In recent years, thanks to the maturity of 3D printing technology, the price of 3D printing tools has decreased. The laboratory proposed a framework that acquires object information using a non-contact 3D scanning device and produces reconstructed models using 3D printing tools, making the whole system fully operational. This research focuses on analyzing and comparing two algorithms, Screened Poisson and B-spline surface reconstruction, in cases where the point clouds have high reliability, and further proposes a framework combining the two methods to handle the object of interest. This means that we can reconstruct the surface of objects from point clouds while keeping the geometry and topology information intact, and then obtain a physical model using a 3D printer.
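A minimal sketch of screened Poisson reconstruction using the Open3D library (the calls below follow Open3D's documented interface, but versions differ, so treat the exact signatures as an assumption; the sphere samples are illustrative):

import numpy as np
import open3d as o3d   # third-party library providing screened Poisson reconstruction

# Build a point cloud from an (N, 3) array of scanned points; normals are required
# by Poisson reconstruction, so they are estimated from local neighbourhoods here.
rng = np.random.default_rng(11)
ang = rng.uniform(0, 2 * np.pi, 20000)
z = rng.uniform(-1, 1, 20000)
r = np.sqrt(1 - z ** 2)
pts = np.column_stack([r * np.cos(ang), r * np.sin(ang), z])   # unit sphere samples

pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
print(mesh)   # triangle mesh approximating the scanned surface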
Стилі APA, Harvard, Vancouver, ISO та ін.
42

陳佳緹. "Storage Industry Analysis--From the Point of Cloud Storage to Discuss Taiwan Storage Industry Opportunity." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/74031356571790799716.

Повний текст джерела
Анотація:
Master's thesis
National Chiao Tung University
Master Program of Business Administration
98
With the booming development of the Internet, the advantages of information exchange bring us fast and cheap information collection, mass information storage, and interactive, integrated information on demand. According to a Jupiter Research estimate, the worldwide Internet access population would reach 200 million by 2011. Meanwhile, enterprise information drives massive data storage demand. The 2008 financial crisis decreased IT investment but did not restrain the growth of information and data. People seek cheaper and more efficient ways to store their valuable, non-discardable data, which initiates a new business model in which data storage shifts from local devices to the Internet, called "Cloud Storage Service". The storage, or cloud storage, industry is part of the cloud computing infrastructure service. Taiwan is well known for its high-tech industry, especially OEM and ODM manufacturing. Under the pressure of worldwide competitors and the low profit of the OEM and ODM business model, Taiwan needs an advanced strategy to gain a better position in the world. By analyzing the storage industry from the point of view of cloud storage services, we can see the role of the storage industry in cloud storage, and learn how the global supply chain is established and how the main players develop their business strategies. In conclusion, the research provides some worthwhile principles concerning the bottlenecks and potential opportunities for the future development of cloud storage in Taiwan.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Chiang, Hung-Yueh, and 江泓樂. "An Analysis of 3D Indoor Scene Segmentation Based on Images, Point Cloud and Voxel Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/pvbj47.

Повний текст джерела
Анотація:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
Deep learning technology has brought great success in image classification, object detection and semantic segmentation tasks. In recent years, the advent of inexpensive depth sensors has strongly motivated 3D research, and real-scene reconstruction datasets such as ScanNet [5] and Matterport3D [1] have been proposed. However, the problem of 3D scene semantic segmentation still remains new and challenging due to the many variants of 3D data types (e.g. image, voxel, point cloud). Other difficulties, such as high computation cost and the scarcity of data, slow the progress of research on 3D segmentation. In this paper, we study the 3D indoor scene segmentation problem with three different types of 3D data, which we categorize into image-based, voxel-based and point-based. We experiment on different input signals (e.g. color, depth, normal) and verify their effectiveness and performance in networks for the different data types. We further study fusion methods and improve the performance by using off-the-shelf deep models and by leveraging multiple data modalities.
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Martins, Pedro Miguel Simões Bastos. "Interference analysis in time of flight LiDARs." Master's thesis, 2019. http://hdl.handle.net/10773/29885.

Повний текст джерела
Анотація:
Every 23 seconds, someone dies on the road. In 2018, 1.35 million people died in road accidents, 90% of which were caused by human error: reckless behavior, distractions, fatigue, and bad decisions. Autonomous vehicles are one solution to this problem, replacing or assisting the human driver. For that, vehicles need to understand the world around them with great precision in 3D, which makes LiDAR one of the most promising sensors for the task. To sense their surroundings, LiDARs emit laser beams, which can, in principle, be received by a LiDAR on another car, disturbing the accuracy of its mapping of the surroundings. In a scenario where multiple autonomous vehicles equipped with LiDAR coexist, their mutual interference can undermine their capability to accurately understand the world and thus their ability to tackle one of the problems they are meant to solve: road accidents. In this Master's thesis we study the behavior of two LiDARs in several interference scenarios, varying their relative distance, height, and positioning. We also attempt to understand the different impacts of direct and scattered interference by blocking the LiDARs' line of sight, and we verify the behavior of the interference in specific regions of interest and on specific objects. We construct an experimental setup containing two LiDARs and a camera, calibrate them intrinsically and extrinsically, and estimate the position of the objects of interest in the point cloud from regions of interest previously detected in the image. Using this setup we gathered more than 600 GB of raw data, to which we apply 4 different interference-analysis techniques. Our findings show that the relative number of interference points lies between 10⁻⁷ and 10⁻³. The results also show that direct interference predominates over scattered interference, generating relative numbers of interfered points one order of magnitude higher than when the line of sight between the LiDARs is obstructed. We identified cases in which interference behaves much like sensor noise, being almost indistinguishable from it; in contrast, in other cases it was strongly deleterious, resulting in depth measurement errors that exceed the physical dimensions of the room where the setup operates. We conclude that interference does not seem to be severe for autonomous driving, as few measurements are severely impaired by it. Nevertheless, it can still have ill effects, especially in situations of direct interference. We also conclude that its nature is highly volatile, depending on conditions not yet fully understood, including the influence of the experimental setup.
Master's degree in Electronics and Telecommunications Engineering
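One plausible way to quantify a "relative number of interference points" is sketched below: count the points of a scan that deviate from an interference-free reference scan of the same static scene by more than a noise threshold. This is an illustrative stand-in, not one of the four analysis techniques applied in the thesis; the threshold and the synthetic data are assumptions.

# A minimal sketch of one way to quantify interference: the fraction of points in a
# scan that deviate from an interference-free reference scan by more than a noise
# threshold. NOT the thesis's four analysis techniques; the 5 cm threshold and the
# synthetic data are assumptions for illustration.
import numpy as np
from scipy.spatial import cKDTree

def relative_interference(reference: np.ndarray, scan: np.ndarray,
                          noise_threshold: float = 0.05) -> float:
    """reference, scan: (N, 3) xyz arrays captured from the same static scene."""
    tree = cKDTree(reference)
    # Distance from every point of the interfered scan to the closest reference point.
    dist, _ = tree.query(scan, k=1)
    interfered = np.count_nonzero(dist > noise_threshold)
    return interfered / len(scan)

# Example with synthetic data: a flat wall plus a handful of spurious returns.
wall = np.random.rand(50000, 3) * [10, 10, 0.01]
spurious = np.random.rand(5, 3) * [10, 10, 3]          # far off the wall
ratio = relative_interference(wall, np.vstack([wall, spurious]))
print(f"relative interference: {ratio:.1e}")            # on the order of 1e-4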
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Huang, Shih-Chang, and 黃世昌. "Three-Dimensional Laser Scan Apply To Facilities Analysis --Using Point-Cloud To Analyse A Research Space--." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/31887320161521909215.

Повний текст джерела
Анотація:
Master's thesis
National Taiwan University of Science and Technology
Department of Architecture
93
The 3D data of a space is an important source of evidence for Facilities Management (FM), and gathering such data is an important issue in FM. In this study, we used a 3D laser scanner to capture 3D data of research rooms in order to learn how the facilities were used. The 3D space data were acquired with a Cyrax 2500 long-range 3D laser scanner for facilities change analysis. Because point clouds can be overlaid, this research analyzes and displays changes of facilities as 3D models, and the overlap analysis reveals how the facilities are used. Point clouds were captured from personal research rooms and public research rooms, with 2 or 3 point-cloud models obtained from each room. These point-cloud models were used as samples for the overlap analysis. Through point-cloud overlap analysis we learn the use pattern, the user behavior, and the facility changes. The results provide strong evidence that a space-use analysis can be done by digital means.
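Before two scans of the same room can be overlaid, they must be registered into a common coordinate frame. The sketch below shows one way to do this with point-to-point ICP in Open3D and then flag changed regions; the thesis does not state how its Cyrax 2500 scans were registered, so the file names, thresholds, and identity initial guess are assumptions.

# A minimal sketch of aligning two scans of the same room before overlap analysis,
# using point-to-point ICP in Open3D. File names, thresholds and the identity
# initial guess are assumptions; this is not necessarily how the thesis's Cyrax
# 2500 scans were processed.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("room_epoch1.ply")   # earlier scan
target = o3d.io.read_point_cloud("room_epoch2.ply")   # later scan

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,                  # 5 cm search radius
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)

# After alignment, points in the later scan with no nearby counterpart in the
# earlier scan indicate moved or added furniture and facilities.
dists = np.asarray(target.compute_point_cloud_distance(source))
changed = dists > 0.10                                 # 10 cm change threshold
print(f"{changed.sum()} of {len(dists)} points flagged as changed")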
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Little, Anna Victoria. "Estimating the Intrinsic Dimension of High-Dimensional Data Sets: A Multiscale, Geometric Approach." Diss., 2011. http://hdl.handle.net/10161/3863.

Повний текст джерела
Анотація:

This work deals with the problem of estimating the intrinsic dimension of noisy, high-dimensional point clouds. A general class of sets is considered which are locally well-approximated by k-dimensional planes but embedded in a D >> k dimensional Euclidean space. Assuming one has samples from such a set, possibly corrupted by high-dimensional noise, the dimension can be recovered using PCA when the data is linear. When the data is non-linear, however, PCA fails and overestimates the intrinsic dimension. A multiscale version of PCA is therefore introduced which is robust to small sample size, noise, and non-linearities in the data.
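The core idea can be illustrated with a toy example: run local PCA in balls of growing radius around a point and count how many singular values grow roughly linearly with the radius (tangent directions) rather than staying at the noise floor. The sketch below is a simplified illustration of this idea, not the estimator defined in the dissertation; the radii and the slope threshold are assumptions.

# A toy sketch of the multiscale-PCA idea: run local PCA inside balls of growing
# radius and count how many singular values grow roughly linearly with the radius
# (tangent directions), as opposed to staying near the noise floor. Simplified
# illustration only; the radii and the 0.5 slope threshold are assumptions.
import numpy as np

def local_singular_values(X, center, radius):
    nbrs = X[np.linalg.norm(X - center, axis=1) <= radius]
    nbrs = nbrs - nbrs.mean(axis=0)
    # Normalizing by sqrt(n) makes singular values comparable across radii.
    return np.linalg.svd(nbrs, compute_uv=False) / np.sqrt(len(nbrs))

# Noisy circle (intrinsic dimension k = 1) embedded in D = 20 dimensions.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 4000)
X = np.zeros((4000, 20))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.02 * rng.standard_normal(X.shape)        # isotropic high-dimensional noise

radii = np.array([0.1, 0.2, 0.4])
sv = np.array([local_singular_values(X, X[0], r) for r in radii])

# Log-log growth rate per singular value: ~1 for tangent directions, ~0 for noise.
slopes = np.log(sv[-1] / sv[0]) / np.log(radii[-1] / radii[0])
k_hat = int(np.sum(slopes > 0.5))
print("estimated intrinsic dimension:", k_hat)  # expect 1 for the circle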


Dissertation
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Pernisco, Gaetano. "Reconstruction and analysis Of 3D models for autonomous vehicles and manufacturing industry." Doctoral thesis, 2023. https://hdl.handle.net/11589/246740.

Повний текст джерела
Анотація:
This thesis falls within the general field of computer vision. In particular, it concerns the reconstruction and analysis of 3D models, with a focus on two different domains: autonomous vehicles and the manufacturing industry. Computer vision has been deeply studied in recent decades, with great interest from both researchers and industry. In contrast to image processing, computer vision aims to extract 3D structure and semantic meaning from images for a rich and complete understanding of the scene. The development of Convolutional Neural Networks (CNNs) has allowed computer vision to address new, complex problems and reach impressive results. Computer vision does not deal only with two-dimensional images but also with multidimensional data. Indeed, the diffusion of new sensors such as LiDARs and 3D scanners has required the design of new algorithms that handle their data structures: point clouds, the typical data produced by 3D sensors, have a very different nature from images. Autonomous driving is a domain in which computer vision finds a wide range of applications. To move safely in an urban environment, driverless cars need an accurate and rich perception of the environment; for this reason, data from different sensors are fused to create a 360° 3D representation of the scene. The manufacturing industry, on the other hand, can leverage computer vision approaches, in synergy with other technologies, to automate and facilitate several processes and streamline the production chain. In particular, quality control and warehouse management can exploit robotics, 3D scanning, and mixed reality to ease human work. In this thesis, several contributions, mainly based on computer vision, are proposed in both domains. The thesis investigates computer vision techniques that leverage both two-dimensional and three-dimensional data in different tasks typical of the two domains. Experimental results are shown, analyzed, and discussed to support the effectiveness of each proposed method.
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Muresan, Alexandru Camil. "Analysis and Definition of the BAT-ME (BATonomous Moon cave Explorer) Mission." Thesis, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247507.

Повний текст джерела
Анотація:
Humanity has always wanted to explore the world we live in and answer questions about our universe. After the International Space Station ends its service, one possible next step could be a Moon outpost: a convenient location for research, astronaut training, and technological development that would enable long-duration space missions. This location could be inside one of the presumed lava tubes under the lunar surface, which would first need to be inspected, possibly by a machine capable of capturing and relaying a map to a team on Earth. In this report, past and future Moon base missions are summarized, considering feasible outpost scenarios from space companies and agencies and their projected crewed budgets. Potential mission profiles, objectives, requirements, and constraints of the BATonomous Moon cave Explorer (BAT-ME) mission are discussed and defined. The vehicle and mission concept are addressed, comparing and presenting possible propulsion and locomotion approaches inside the lava tube. The Inkonova "Batonomous™" system is capable of providing Simultaneous Localization And Mapping (SLAM) and relaying the created maps, and it can easily be integrated on any kind of vehicle that would operate in a real-life scenario. Although the system is not fully developed, it is assessed from a technical perspective, and the changes needed for a viable transition of the system to the space and Moon environment are devised. The transition of the system from the Batonomous™ state to the state required by BAT-ME is presented from the requirements, hardware, software, electrical, and operational points of view. The mission is divided into operational phases with key goals in mind. Two different vehicles are presented and designed at a high engineering level. A risk analysis and management system is developed to understand the possible negative effects of failures of different parts on the mission outcome.
Стилі APA, Harvard, Vancouver, ISO та ін.
49

DOTTA, GIULIA. "Semi-automatic analysis of landslide spatio-temporal evolution." Doctoral thesis, 2017. http://hdl.handle.net/2158/1076767.

Повний текст джерела
Анотація:
Remote sensing techniques are a powerful tool for detecting and characterising earth-surface processes, especially through change detection approaches. In particular, TLS (Terrestrial Laser Scanner) and UAV (Unmanned Aerial Vehicle) photogrammetry provide high-resolution representations of the observed scenario as a three-dimensional array of points defined by x, y and z coordinates, namely a point cloud. In recent years, the use of 3D point clouds to investigate morphological changes occurring over a range of spatial and temporal scales has increased considerably. During the three-year PhD research programme, the effectiveness of point cloud exploitation for slope characterization and monitoring was tested and evaluated by developing and applying a semi-automatic MATLAB tool. The proposed tool investigates the main morphological characteristics of unstable slopes from point clouds and highlights spatio-temporal morphological changes by comparing point clouds acquired at different times. Once a change detection threshold is defined, the routine performs a cluster analysis, automatically separates zones characterized by significant distances, and computes their area. The tool was tested on two sites characterized by different geological settings and instability phenomena: the San Leo rock cliff (Rimini province, Emilia Romagna region, northern Italy) and a clayey slope near the village of Ricasoli (Arezzo province, Tuscany region, central Italy). For both case studies, the main displacement or accumulation zones and the detachment zones were mapped and described. Furthermore, the factors influencing the change detection results are discussed in detail.
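A minimal Python rendering of the described workflow (the actual tool is a MATLAB routine, not reproduced here) could look as follows: compute cloud-to-cloud distances between two epochs, threshold them, cluster the significant points, and estimate each cluster's area. File names, the 5 cm threshold, and the DBSCAN parameters are assumptions.

# A minimal Python sketch of the workflow described above (the actual tool is a
# MATLAB routine): cloud-to-cloud distances between two epochs, thresholding,
# clustering of significant points, and per-cluster area. File names, the 5 cm
# threshold and the DBSCAN parameters are assumptions.
import numpy as np
from scipy.spatial import cKDTree, ConvexHull
from sklearn.cluster import DBSCAN

epoch1 = np.loadtxt("slope_epoch1.xyz")            # (N, 3) x y z
epoch2 = np.loadtxt("slope_epoch2.xyz")

# Cloud-to-cloud distance: for each point of epoch 2, distance to nearest epoch-1 point.
dist, _ = cKDTree(epoch1).query(epoch2, k=1)

threshold = 0.05                                   # change detection threshold [m]
moving = epoch2[dist > threshold]

# Group significant points into clusters (detachment or accumulation zones).
labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(moving)

for lab in np.unique(labels[labels >= 0]):
    cluster = moving[labels == lab]
    # Planimetric footprint area via the convex hull of the x-y projection;
    # for a 2D hull, ConvexHull.volume is the enclosed area.
    area = ConvexHull(cluster[:, :2]).volume
    print(f"cluster {lab}: {len(cluster)} points, area ~ {area:.2f} m^2")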
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Oesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction." Doctoral thesis, 2015. https://ul.qucosa.de/id/qucosa%3A14718.

Повний текст джерела
Анотація:
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate to groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts: The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift away the focus from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis only to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions. That is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear as new, merge or split, or vanish. Especially for high-dimensional data, both tracking (relating features over time) and visualizing the changing structure are difficult problems to solve.
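As a rough, generic illustration of extracting topology-style clustering information directly in the data's original domain, the sketch below computes a zero-dimensional persistence summary via single-linkage merge heights, where long-lived components correspond to salient clusters. This stands in for the general idea only and is not the abstraction technique developed in the thesis; the synthetic data and the gap heuristic are assumptions.

# A rough illustration of topology-style cluster structure extracted in the original
# high-dimensional domain: 0-dimensional persistence via single-linkage merge heights
# (long-lived components correspond to salient clusters). Sketch of the general idea
# only, not the abstraction technique developed in the thesis.
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(1)
# Three Gaussian clusters in a 50-dimensional space.
centers = rng.normal(scale=10.0, size=(3, 50))
X = np.vstack([c + rng.normal(size=(200, 50)) for c in centers])

# Single linkage: each merge height is the distance at which two components join,
# i.e. the "death" of the shorter-lived component (all components are born at 0).
Z = linkage(X, method="single")
deaths = np.sort(Z[:, 2])[::-1]

# A large gap between consecutive death values separates salient clusters from
# within-cluster merges; here the two largest deaths correspond to 3 clusters.
print("top merge heights:", np.round(deaths[:5], 2))
print("estimated number of salient clusters:",
      int(np.argmax(-np.diff(deaths[:10])) + 2))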
Стилі APA, Harvard, Vancouver, ISO та ін.
