Theses on the topic "Point cloud analysis"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Point cloud analysis".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Forsman, Mona. "Point cloud densification". Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39980.
Donner, Marc, Sebastian Varga and Ralf Donner. "Point cloud generation for hyperspectral ore analysis". Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231365.
Donner, Marc, Sebastian Varga and Ralf Donner. "Point cloud generation for hyperspectral ore analysis". TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23196.
Awadallah, Mahmoud Sobhy Tawfeek. "Image Analysis Techniques for LiDAR Point Cloud Segmentation and Surface Estimation". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73055.
Burwell, Claire Leonora. "The effect of 2D vs. 3D visualisation on lidar point cloud analysis tasks". Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37950.
Bungula, Wako Tasisa. "Bi-filtration and stability of TDA mapper for point cloud data". Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6918.
Megahed, Fadel M. "The Use of Image and Point Cloud Data in Statistical Process Control". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26511.
Chleborad, Aaron A. "Grasping unknown novel objects from single view using octant analysis". Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4089.
Rasmussen, Johan and David Nilsson. "Analys av punktmoln i tre dimensioner". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-36915.
Purpose: To develop a method that can help smaller sawmills utilize as much wood as possible from each log. Method: A quantitative study in which three iterations were made using Design Science. Findings: For an industrial algorithm that performs volume calculations in a point cloud of about two million points, the focus is on speed and on returning correct data. The key to making the algorithm fast is to traverse the point cloud as few times as possible. The algorithm that meets these goals in this study is Algorithm C: it is fast, has a low standard deviation of the measurement errors, and has complexity O(n) in the analysis of sub-point clouds. Implications: Based on this study's algorithm, stereo camera technology could help smaller sawmills utilize more of the wood in each log. Limitations: The algorithm assumes that no points have been created inside the log, which could otherwise lead to misplaced points. If a log is crooked, its centre does not coincide with the z-axis; in extreme cases the z-value can then fall outside the log, which the algorithm cannot handle.
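The single-pass volume computation this abstract alludes to can be illustrated with a rough sketch (function and parameter names are hypothetical, not taken from the thesis): bin the points into z-slices in one traversal, estimate each slice's radius from the mean radial distance to the z-axis, and sum cylindrical slabs.

```python
import numpy as np

def log_volume(points, slice_height=0.01):
    """Estimate the volume of a log-shaped point cloud in one pass.

    points: (N, 3) array; the log axis is assumed aligned with z,
    which is exactly the assumption the abstract flags for crooked logs.
    """
    z = points[:, 2]
    idx = np.floor((z - z.min()) / slice_height).astype(int)
    n_slices = idx.max() + 1
    # Single traversal: per-slice sums of radial distances and counts.
    r = np.hypot(points[:, 0], points[:, 1])
    sums = np.bincount(idx, weights=r, minlength=n_slices)
    counts = np.bincount(idx, minlength=n_slices)
    mean_r = np.divide(sums, counts, out=np.zeros(n_slices), where=counts > 0)
    # Sum cylindrical slabs: V = sum(pi * r_i^2 * h).
    return float(np.sum(np.pi * mean_r**2) * slice_height)
```

On a synthetic cylinder of radius 0.5 m and length 2 m this recovers the analytic volume to within a few percent.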
Rusinek, Cory A. "New Avenues in Electrochemical Systems and Analysis". University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1490350904669695.
Dahlin, Johan. "3D Modeling of Indoor Environments". Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93999.
Pérez, Gramatges Aurora. "Simultaneous preconcentration of trace metals by cloud point extraction with 1-(2-pyridylazo)-2-naphthol and determination by neutron activation analysis". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0021/NQ49263.pdf.
Scharf, Alexander. "Terrestrial Laser Scanning for Wooden Facade-system Inspection". Thesis, Luleå tekniska universitet, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-77159.
Texto completoJovančević, Igor. "Exterior inspection of an aircraft using a Pan-Tilt-Zoom camera and a 3D scanner moved by a mobile robot : 2D image processing and 3D point cloud analysis". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2016. http://www.theses.fr/2016EMAC0023/document.
This thesis is part of an industry-oriented multi-partner project aimed at developing a mobile collaborative robot (a cobot), autonomous in its movements on the ground, capable of performing visual inspection of an aircraft during short or long maintenance procedures in the hangar or in the pre-flight phase on the tarmac. The cobot is equipped with sensors for its navigation tasks as well as with a set of optical sensors constituting the inspection head: an orientable Pan-Tilt-Zoom visible-light camera and a three-dimensional scanner, delivering data as two-dimensional images and three-dimensional point clouds, respectively. The goal of the thesis is to propose original approaches for processing 2D images and 3D clouds in order to decide on the flight readiness of the airplane. We developed algorithms for verifying aircraft items such as vents, doors, sensors, tires or the engine, as well as for detecting and characterizing three-dimensional damage on the fuselage. We integrated a priori knowledge of the airplane structure, notably the numerical three-dimensional CAD model of the Airbus A320. We argue that by investing effort in sufficiently robust algorithms, and with the help of existing optical sensors to acquire suitable data, a non-invasive, accurate and time-efficient system for automatic airplane exterior inspection is achievable. The thesis work was placed between two main requirements: develop inspection algorithms that are as general as possible while also meeting the specific requirements of an industry-oriented project. Often these two goals conflict, and a balance had to be struck. On the one hand, we aimed to design and assess approaches that can be employed on other large structures, e.g. buildings or ships.
On the other hand, writing source code for controlling the sensors, and integrating all of our developed code with the other modules of the real-time robotic system, was necessary to demonstrate the feasibility of our robotic prototype.
Saval-Calvo, Marcelo. "Methodology based on registration techniques for representing subjects and their deformations acquired from general purpose 3D sensors". Doctoral thesis, Universidad de Alicante, 2015. http://hdl.handle.net/10045/49990.
Yogeswaran, Arjun. "3D Surface Analysis for the Automated Detection of Deformations on Automotive Panels". Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19992.
Texto completoRazafindramanana, Octavio. "Low-dimensional data analysis and clustering by means of Delaunay triangulation". Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4033/document.
This thesis aims at proposing and discussing several solutions to the problem of low-dimensional point cloud analysis and clustering, all based on the analysis of the Delaunay triangulation. Two types of approaches are presented and discussed. The first follows a classical three-step scheme: 1) construction of a proximity graph that embeds topological information, 2) computation of statistical information from this graph, and 3) removal of pointless elements with regard to this information. The impact of different simplicial-complex-based measures, i.e. measures not based only on a graph, is discussed. Evaluation considers point cloud clustering quality along with handwritten character recognition rates. The second type consists of one-step approaches that derive a clustering together with the construction of the triangulation.
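The classical three-step pipeline sketched in this abstract can be illustrated for 2D points as follows (an illustrative toy, not the thesis's actual measures): build the Delaunay triangulation, gather edge-length statistics, then prune edges that are long relative to the mean and read clusters off the connected components.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points, k=2.0):
    """Cluster 2D points by pruning long Delaunay edges.

    Edges longer than mean + k * std are removed; connected
    components of the remaining graph are the clusters.
    """
    tri = Delaunay(points)
    # Step 1: proximity graph -- unique edges of the triangulation.
    edges = set()
    for a, b, c in tri.simplices:
        edges |= {tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))}
    edges = np.array(list(edges))
    # Step 2: statistical information over the graph (edge lengths).
    lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    keep = edges[lengths <= lengths.mean() + k * lengths.std()]
    # Step 3: remove "pointless" (long) edges; union-find for components.
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in keep:
        parent[find(a)] = find(b)
    return np.array([find(i) for i in range(len(points))])
```

Two well-separated Gaussian blobs come out as two components, because the few bridging edges are far above the length threshold.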
Lejemble, Thibault. "Analyse multi-échelle de nuage de points". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30184.
3D acquisition techniques like photogrammetry and laser scanning are commonly used in numerous fields such as reverse engineering, archeology, robotics and urban planning. The main objective is to obtain virtual versions of real objects in order to visualize, analyze and process them easily. Acquisition techniques are becoming more powerful and affordable, which creates an important need to efficiently process the resulting varied and massive 3D data. Data are usually obtained in the form of an unstructured 3D point cloud sampling the scanned surface. Traditional signal processing methods cannot be directly applied due to the lack of spatial parametrization: points are represented only by their 3D coordinates, without any particular order. This thesis focuses on the notion of scale of analysis, defined by the size of the neighborhood used to locally characterize the point-sampled surface. Analyzing at several scales makes it possible to handle shapes of various sizes, which increases the pertinence of the analysis and its robustness to acquisition imperfections. We first present theoretical and practical results on curvature estimation adapted to a multi-scale and multi-resolution representation of point clouds. They are used to develop multi-scale algorithms for the recognition of planar and anisotropic shapes such as cylinders and feature curves. Finally, we propose to compute a global 2D parametrization of the underlying surface directly from the 3D unstructured point cloud.
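A minimal illustration of scale-dependent analysis (not the thesis's estimators; names are placeholders): compute a PCA-based planarity score of a point's neighborhood at several radii. How the score varies across scales helps distinguish planar regions from feature curves and noise.

```python
import numpy as np

def scale_planarity(points, query, radii):
    """Planarity of the neighborhood of `query` at several scales.

    For each radius, the covariance eigenvalues l1 >= l2 >= l3 of the
    neighborhood yield planarity (l2 - l3) / l1: high on a plane,
    lower on edges, corners and noisy regions.
    """
    out = []
    for r in radii:
        nbrs = points[np.linalg.norm(points - query, axis=1) <= r]
        if len(nbrs) < 3:
            out.append(0.0)
            continue
        cov = np.cov(nbrs.T)
        l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
        out.append((l2 - l3) / l1 if l1 > 0 else 0.0)
    return out
```

On a flat patch the score stays high at every radius; near a crease it drops as the neighborhood grows past the feature size.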
Ben, Abdallah Hamdi. "Inspection d'assemblages aéronautiques par vision 2D/3D en exploitant la maquette numérique et la pose estimée en temps réel Three-dimensional point cloud analysis for automatic inspection of complex aeronautical mechanical assemblies Automatic inspection of aeronautical mechanical assemblies by matching the 3D CAD model and real 2D images". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0001.
This thesis is part of a research effort aimed at innovative digital tools in the service of what is commonly called the Factory of the Future. Our work was conducted within the joint research laboratory "Inspection 4.0", founded by IMT Mines Albi/ICA and the company DIOTA, which specializes in numerical tools for Industry 4.0. In the thesis, we were interested in developing systems that exploit 2D images and/or 3D point clouds for the automatic inspection of complex aeronautical mechanical assemblies (typically an aircraft engine). The CAD (Computer Aided Design) model of the assembly is at our disposal, and our task is to verify that the assembly has been put together correctly, i.e. that all its constituent elements are present in the right position and place. The CAD model serves as the reference. We developed two inspection scenarios that exploit the inspection systems designed and implemented by DIOTA: (1) a scenario based on a tablet equipped with a camera, carried by a human operator, for real-time interactive control, and (2) a scenario based on a robot equipped with sensors (two cameras and a 3D scanner) for fully automatic control. In both scenarios, a so-called localisation camera provides in real time the pose between the CAD model and the sensors, which makes it possible to directly link the 3D digital model with the analysed 2D images or 3D point clouds. We first developed 2D inspection methods, based solely on the analysis of 2D images. Then, for certain types of inspection that cannot be performed with 2D images alone (typically those requiring 3D distance measurements), we developed 3D inspection methods based on the analysis of 3D point clouds. For the 3D inspection of electrical cables, we proposed an original method for segmenting a cable within a point cloud.
We also tackled the problem of automatically selecting the best viewpoint, which places the inspection sensor in an optimal observation position. The developed methods have been validated on many industrial cases. Some of the inspection algorithms developed during this thesis have been integrated into the DIOTA Inspect© software and are used daily by DIOTA's customers to perform inspections on industrial sites.
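A generic stand-in for this kind of 3D inspection (not DIOTA's actual algorithm; names and tolerance are hypothetical): once the localisation pose has been applied, scan points whose distance to a reference cloud sampled from the CAD model exceeds a tolerance are flagged as potential non-conformities.

```python
import numpy as np

def inspect_cloud(scan, reference, tol=0.005):
    """Flag scan points farther than `tol` from the reference cloud.

    Both clouds are assumed registered in a common frame (in the thesis
    the pose comes from the localisation camera). Brute-force nearest
    neighbour here; a k-d tree would replace it at realistic cloud sizes.
    """
    d = np.linalg.norm(scan[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1) > tol
```

A point displaced out of the reference surface by more than the tolerance is the only one flagged on an otherwise matching cloud.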
Silva, Fabrício Müller da. "Reamostragem adaptativa para simplificação de nuvens de pontos". Universidade do Vale do Rio dos Sinos, 2015. http://www.repositorio.jesuita.org.br/handle/UNISINOS/4916.
This work presents a simple and efficient algorithm for point cloud simplification based on the local inclination of the surface sampled by the input set. The objective is to transform the original point cloud into as small a set as possible while keeping the features and topology of the original surface. The proposed algorithm adaptively resamples the input set, removing redundant points so as to maintain a user-defined level of quality in the final dataset. The process consists of a recursive partitioning of the input set using Principal Component Analysis (PCA). PCA is applied to define the successive partitions, to obtain a linear approximation (a plane) for each partition, and to evaluate the quality of each approximation. Finally, the algorithm makes a simple choice of the points that will represent the linear approximation of each partition; these points form the final dataset of the simplification process. For evaluation, a distance metric between polygon meshes, based on the Hausdorff distance, compares the surface reconstructed from the original point cloud with the one reconstructed from the filtered cloud. The algorithm achieved compaction rates of up to 95% of the input dataset, while reducing the total execution time of the reconstruction process and keeping the features and topology of the original model. The quality of the surface reconstructed from the filtered point cloud is also attested by the distance metric.
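The recursive PCA partitioning described in the abstract can be sketched roughly as follows (the threshold and the representative-selection rule are placeholders, not the thesis's choices): split along the principal axis until a partition is nearly planar, then keep only a few representatives of that flat patch.

```python
import numpy as np

def simplify(points, tol=1e-4, min_size=10):
    """Recursively partition with PCA; keep few points per flat patch."""
    keep = []
    def recurse(pts):
        if len(pts) <= min_size:
            keep.append(pts)
            return
        centered = pts - pts.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(centered.T))  # ascending order
        if evals[0] < tol:
            # Smallest eigenvalue tiny: patch is nearly planar,
            # so a coarse subsample represents it.
            keep.append(pts[:: max(1, len(pts) // 4)])
            return
        # Otherwise split along the principal direction and recurse.
        side = centered @ evecs[:, -1] >= 0
        recurse(pts[side])
        recurse(pts[~side])
    recurse(np.asarray(points, dtype=float))
    return np.vstack(keep)
```

A flat input collapses immediately to a handful of representatives, mirroring the high compaction rates the abstract reports on smooth regions.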
Asghar, Umair. "Landslide mapping from analysis of UAV-SFM point clouds". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63604.
Melchert, Wanessa Roberto. "Desenvolvimento de procedimentos analíticos limpos e com alta sensibilidade para a determinação de espécies de interesse ambiental". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/46/46133/tde-25062009-150929/.
Clean analytical procedures with high sensitivity for the determination of species of environmental interest (carbaryl, sulphate and chlorine) were developed. Flow systems with solenoid micropumps were coupled to long-optical-pathlength spectrophotometry or cloud point extraction procedures, aiming at the preconcentration of species for determination without employing toxic solvents. Carbaryl determination in natural waters was based on a double cloud point extraction: a clean-up step for removal of interfering organic species, and preconcentration of the indophenol blue formed in the reaction with oxidized p-aminophenol. Linear response was observed between 10 and 50 µg L-1, with apparent molar absorptivity estimated as 4.6x105 L mol-1 cm-1. The detection limit was estimated as 7 µg L-1 and the coefficient of variation as 3.4% (n = 8). Recoveries between 91 and 99% were obtained for carbaryl spiked to natural waters. A simple and low-cost flow cell with a 30 cm optical path was constructed for spectrophotometric measurements. The cell shows desirable characteristics such as reduced attenuation of the radiation beam and an internal volume (75 µL) comparable to conventional flow cells. Its performance was evaluated by phosphate determination with the molybdenum blue method, with linear response between 0.05 and 0.8 mg L-1 of phosphate (r = 0.999). The increase in sensitivity (30.4-fold) compared with a conventional 1 cm optical path flow cell agreed with the theoretical value estimated by the Lambert-Beer law. The determination of carbaryl was also carried out in a flow system coupled to 30 and 100 cm optical path flow cells, also exploiting the formation of the indophenol compound. Linear responses, detection limits and coefficients of variation were 50 - 750 and 5 - 200 µg L-1; 4.0 and 1.7 µg L-1; and 2.3 and 0.7%, respectively, for the 30 and 100 cm cells.
The proposed procedure was selective for carbaryl, without interference from other carbamate pesticides. The waste of the analytical procedure was treated with potassium persulphate and ultraviolet irradiation, decreasing total organic carbon by 94%. The residue after treatment was not toxic to Vibrio fischeri bacteria. Sulphate determination was based on turbidimetric measurements with a 1 cm flow cell, with linear response between 20 and 200 mg L-1. Baseline drift was avoided in view of the pulsed flow delivered by the solenoid micropumps. The detection limit and the coefficient of variation were estimated as 3 mg L-1 and 2.4%, respectively, for a sampling rate of 33 determinations per hour. To increase sensitivity, a 100 cm optical path flow cell was employed and baseline drift was avoided with a washing step employing EDTA in alkaline medium. Linear response was observed between 7 and 16 mg L-1, with a detection limit of 150 µg L-1, a coefficient of variation of 3.0% (n = 20) and a sampling rate of 25 determinations per hour. Results obtained for natural and rain water samples agreed at the 95% confidence level with the batch turbidimetric procedure. The determination of free chlorine in natural and tap waters was based on the reaction with N,N-diethyl-p-phenylenediamine, with linear response between 5 and 100 µg L-1, and detection limit and coefficient of variation estimated as 0.23 µg L-1 and 3.4%, respectively. The sampling rate was estimated as 58 determinations per hour.
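The roughly 30-fold sensitivity gain quoted above follows from the Lambert-Beer law, in which absorbance grows linearly with the optical path length b:

```latex
A = \varepsilon \, b \, c
\qquad\Rightarrow\qquad
\frac{A_{30\,\text{cm}}}{A_{1\,\text{cm}}} = \frac{30\,\text{cm}}{1\,\text{cm}} = 30
```

close to the 30.4-fold increase observed experimentally with the 30 cm cell.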
Oesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction". Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-203056.
Polat, Songül. "Combined use of 3D and hyperspectral data for environmental applications". Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.
Ever-increasing demands for solutions that describe our environment and the resources it contains require technologies that support efficient and comprehensive description, leading to a better understanding of the content. Optical technologies, their combination, and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications, thanks to high spatial and spectral resolutions, while 3D technologies help to understand scenes in more detail by using geometrical, topological and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrate the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and the use of these features to detect objects of interest. More specifically, we propose different approaches to combining 3D and hyperspectral data depending on the HSI/3D technologies used, and show how each sensor can compensate for the weaknesses of the other. Furthermore, a new shape- and rule-based method for the analysis of spectral signatures was developed and presented. Its strengths and weaknesses compared with existing approaches are discussed, and its outperformance of SVM methods is demonstrated on the basis of practical findings from the fields of cultural heritage and waste management. Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this methodology is based on a practical example from the field of WEEE and focuses on the separation of materials like plastics, PCBs and electronic components on PCBs.
The results obtained confirm that an improvement in classification results could be achieved compared with previously proposed methods. The individual methods and processes developed in this thesis aim at general validity and simple transferability to other fields of application.
Törnblom, Nils. "Underwater 3D Surface Scanning using Structured Light". Thesis, Uppsala universitet, Centrum för bildanalys, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-138205.
Goussard, Charl Leonard. "Semi-automatic extraction of primitive geometric entities from point clouds". Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52449.
ENGLISH ABSTRACT: This thesis describes an algorithm to extract primitive geometric entities (flat planes, spheres or cylinders, as determined by the user's inputs) from unstructured, unsegmented point clouds. The algorithm extracts whole entities or only parts thereof. The entity boundaries are computed automatically. Minimal user interaction is required to extract these entities. The algorithm is accurate and robust. It is intended for use in the reverse engineering environment; point clouds created in this environment typically have normally distributed errors. Comprehensive testing and results are shown, as well as the algorithm's usefulness in the reverse engineering environment.
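For the simplest of these primitives, a flat plane, the least-squares fit can be sketched as follows (a generic textbook formulation, not the thesis's algorithm; names and tolerance are placeholders):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set.

    Returns (centroid, unit normal); the normal is the eigenvector of
    the covariance matrix with the smallest eigenvalue.
    """
    centroid = points.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((points - centroid).T))
    return centroid, evecs[:, 0]

def plane_inliers(points, centroid, normal, tol=0.01):
    """Points whose orthogonal distance to the plane is within tol --
    a crude stand-in for automatic boundary computation."""
    return np.abs((points - centroid) @ normal) <= tol
```

This total-least-squares formulation suits reverse-engineering data with normally distributed errors, since it minimizes orthogonal rather than vertical residuals.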
Yu, Shuda. "Modélisation 3D automatique d'environnements : une approche éparse à partir d'images prises par une caméra catadioptrique". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00844401.
Shah, Ghazanfar Ali. "Template-based reverse engineering of parametric CAD models from point clouds". Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048640.
Texto completoFrizzarin, Rejane Mara. "Desenvolvimento de procedimentos analíticos em fluxo explorando difusão gasosa ou extração em ponto de nuvem. Aplicação a amostras de interesse agronômico e ambiental". Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/64/64135/tde-30032015-160404/.
Spectrophotometric analytical procedures were developed by exploiting separation and preconcentration steps in flow systems based on multi-pumping or lab-in-syringe approaches, with application to agronomic (iron in plant materials and food) and environmental samples (acid-dissociable cyanide, iron and antimony in waters). Cyanide determination exploited bleaching of the Cu(I)/2,2'-biquinoline 4,4'-dicarboxylic acid (BCA) complex by the analyte, after separation of HCN by gas diffusion. Long-pathlength spectrophotometry was successfully exploited to increase sensitivity, achieving a linear response from 5 to 200 µg L-1, with detection limit, coefficient of variation (n = 10) and sampling rate of 2 µg L-1, 1.5% and 22 h-1, respectively. Each determination consumed 48 ng of Cu(II), 5 µg of ascorbic acid and 0.9 µg of BCA. Thiocyanate, nitrite or sulfite as high as 100 mg L-1 did not affect cyanide determination, and sample pretreatment with hydrogen peroxide avoided sulfide interference up to 200 µg L-1. The procedure is environmentally friendly and presented one of the lowest detection limits associated with a high sampling rate. The results for freshwater samples agreed with those obtained with the flow-based fluorimetric procedure at the 95% confidence level. Novel strategies were proposed for on-line cloud point extraction (CPE): (i) the surfactant-rich phase was retained directly in the flow cell to avoid dilution prior to detection; (ii) solenoid micro-pumps were explored to improve mixing and for flow modulation in the retention and removal of the surfactant-rich phase, thus avoiding the elution step with organic solvents; and (iii) the heat released and the salts provided by an on-line neutralization reaction were exploited to induce the cloud point without an external heating device. These approaches were demonstrated for the spectrophotometric determination of iron based on complex formation with 1-(2-thiazolylazo)-2-naphthol (TAN).
A linear response was observed from 10 to 200 µg L-1, with detection limit, coefficient of variation, and sampling rate of 5 µg L-1, 2.3% (n = 7) and 26 h-1, respectively. The enrichment factor was 8.9 and the procedure consumed only 6 µg of TAN and 390 µg of Triton X-114 per determination. The results for freshwater samples agreed with the reference procedure, and those obtained for certified reference materials of food agreed with the certified values. Spectrophotometric determination of antimony was performed for the first time exploiting CPE in the lab-in-syringe system. The antimony/iodide complex forms an ion pair with H+, which can be extracted with Triton X-114. Factorial design showed that the concentrations of ascorbic acid, H2SO4 and Triton X-114, as well as the second- and third-order interactions, were significant (95% confidence). The Box-Behnken design was applied to identify the critical values. The system is robust at the 95% confidence level, and a linear response was observed from 5 to 50 µg L-1, with detection limit, coefficient of variation (n = 5) and sampling rate of 1.8 µg L-1, 1.6% and 16 h-1, respectively. The results for water samples and antileishmanial drugs agreed with those obtained by hydride generation atomic absorption spectrometry at the 95% confidence level.
Oesterling, Patrick [Verfasser], Gerik [Akademischer Betreuer] Scheuermann, Gerik [Gutachter] Scheuermann and Thomas [Gutachter] Wischgoll. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction / Patrick Oesterling ; Gutachter: Gerik Scheuermann, Thomas Wischgoll ; Betreuer: Gerik Scheuermann". Leipzig : Universitätsbibliothek Leipzig, 2016. http://d-nb.info/1240481624/34.
Aijazi, Ahmad Kamal. "3D urban cartography incorporating recognition and temporal integration". Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22528/document.
Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community, owing to an ever-increasing demand for urban landscape analysis in popular applications, coupled with advances in 3D data acquisition technology. As a result, in the last few years, work on the 3D modeling and visualization of cities has intensified. Applications have lately been very successful in delivering effective visualizations of large-scale models based on aerial and satellite imagery to a broad audience, which has created a demand for ground-based models as the next logical step in offering 3D visualizations of cities. Integrated in several geographical navigators, like Google Street View, Microsoft Visual Earth or Geoportail, several such models are accessible to a large public, who enthusiastically view the realistic representations of the terrain created by mobile terrestrial image acquisition techniques. However, in urban environments, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporarily stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include efficient updating of the urban cartography, effective change detection in the urban environment, the processing of noisy data in cluttered urban scenes, matching/registration of point clouds across successive passages, and wide variations in environmental conditions. Another aspect that has recently attracted much attention is the semantic analysis of the urban environment, needed to semantically enrich 3D maps of cities for various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration.
We present in detail the current practices in the domain, along with the different methods, applications, recent data acquisition and mapping technologies, and the problems and challenges associated with them. The work presented addresses many of these challenges, mainly pertaining to classification of the urban environment, automatic change detection, efficient updating of 3D urban cartography, and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. Objects classified as temporary are then removed from the 3D point cloud, leaving behind a perforated 3D point cloud of the urban environment. These perforations, along with other imperfections, are then analyzed and progressively removed by incremental updating that exploits the concept of multiple passages. We also show that the proposed temporal integration helps to improve the semantic analysis of the urban environment, especially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well-updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results demonstrate not only the efficiency, scalability and technical strength of the method, but also that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updates.
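The multiple-passages idea can be caricatured with a voxel-voting sketch (voxel size, threshold and names are arbitrary placeholders, not the thesis's method): occupancy seen in most passages is treated as permanent; sporadic occupancy (parked cars, pedestrians) is discarded.

```python
import numpy as np

def permanent_voxels(passages, voxel=0.5, min_ratio=0.8):
    """Keep voxels occupied in at least `min_ratio` of the passages.

    passages: list of (N_i, 3) point clouds of the same street,
    already registered in a common frame.
    """
    counts = {}
    for cloud in passages:
        # Each passage votes at most once per voxel it occupies.
        keys = {tuple(k) for k in np.floor(cloud / voxel).astype(int)}
        for key in keys:
            counts[key] = counts.get(key, 0) + 1
    need = min_ratio * len(passages)
    return {k for k, c in counts.items() if c >= need}
```

A wall present in every passage survives, while a car seen in only one passage falls below the vote threshold and is removed, leaving the "perforations" the abstract describes to be filled by later passages.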
Lhéritier, Alix. "Méthodes non-paramétriques pour l'apprentissage et la détection de dissimilarité statistique multivariée". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4072/document.
Texto completoIn this thesis, we study problems related to learning and detecting multivariate statistical dissimilarity, which are of paramount importance for many statistical learning methods nowadays used in an increasing number of fields. This thesis makes three contributions related to these problems. The first contribution introduces a notion of multivariate nonparametric effect size shedding light on the nature of the dissimilarity detected between two datasets. Our two-step method first decomposes a dissimilarity measure (Jensen-Shannon divergence), aiming at localizing the dissimilarity in the data embedding space, and then proceeds by aggregating points of high discrepancy and in spatial proximity into clusters. The second contribution presents the first sequential nonparametric two-sample test. That is, instead of being given two sets of observations of fixed size, observations can be treated one at a time and, when strong enough evidence has been found, the test can be stopped, yielding a more flexible procedure while keeping guaranteed type I error control. Additionally, under certain conditions, when the number of observations tends to infinity, the test has a vanishing probability of type II error. The third contribution consists of a sequential change detection test based on two sliding windows on which a two-sample test is performed, with type I error guarantees. Our test has a controlled memory footprint and, as opposed to state-of-the-art methods that also provide type I error control, constant time complexity per observation, which makes it suitable for streaming data.
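The two-sliding-window scheme described above can be sketched minimally. The statistic below is a plain two-sample Kolmogorov-Smirnov distance with an arbitrary threshold, not the thesis's test with guaranteed type I error control; all names and parameters are illustrative.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, evaluated at all sample points."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def detect_change(stream, window=100, threshold=0.5):
    """Slide two adjacent windows over the stream; flag the first time step
    at which the two-sample statistic between them exceeds the threshold."""
    for t in range(2 * window, len(stream) + 1):
        ref = stream[t - 2 * window : t - window]  # reference window
        cur = stream[t - window : t]               # current window
        if ks_statistic(ref, cur) > threshold:
            return t
    return None

rng = np.random.default_rng(0)
# mean shift from N(0, 1) to N(3, 1) at index 500
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
print(detect_change(stream))  # flags a change shortly after index 500
```

Because each step only touches the two windows, memory is bounded; a naive KS recomputation per step is not constant time, which is precisely the gap the thesis's constant-time-per-observation test addresses.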
Rolin, Raphaël. "Contribution à une démarche numérique intégrée pour la préservation des patrimoines bâtis". Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2450/document.
Texto completoThroughout this work, the main objective is to validate the relevance of constructing and using BIM- or hBIM-oriented geometric or parametric 3D models for numerical analyses. These include structural studies in the case of historic buildings, as well as planning for restoration work, energy renovation and rehabilitation. Complementary data mining and the use of point clouds for the detection, segmentation and extraction of geometric features have also been integrated into the work and the proposed methodology. The workflow of data processing, geometric or parametric modeling and their exploitation proposed in this work contributes to a better understanding of the constraints and stakes of the different configurations and conditions related to the case studies, and of the constraints specific to each type of construction. The contributions proposed for the different geometric and parametric modeling methods from point clouds are addressed through the construction of BIM- or hBIM-oriented geometric models. Similarly, the processes of surface detection and of extraction of data and elements from point clouds are presented. The application of these modeling methods is systematically illustrated by different case studies, all of the related work having been carried out within the framework of this thesis. The goal is therefore to demonstrate the interest and relevance of these numerical methods according to the context, needs and studies envisaged, for example with the spire of Senlis cathedral (Oise) and the Hermitage site (Oise). Numerical analyses with the finite element method are used to validate the relevance of these approaches.
Chia-HsiuTSAI y 蔡家修. "Point Cloud Analysis for Object Recognition and Post-Earthquake Reconnaissance". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/a2e52w.
Texto completoNational Cheng Kung University
Department of Civil Engineering
104
In recent years, three-dimensional laser scanning technology, with its high precision and efficiency, has been constantly updated. It is used for building modeling, heritage conservation, bridge deformation monitoring, post-earthquake reconnaissance, coastal terrain retreat monitoring, large-scale terrain change monitoring, and more, and has gradually replaced traditional surveying technology. However, current technologies for processing large scattered point clouds are concentrated on specific applications such as building models or simulated point cloud data, and point cloud data contains a lot of noise that requires further processing, so the application of point cloud data is still at the research and development stage and needs sustained development. This study is divided into two directions: automatic object recognition and post-earthquake reconnaissance. The first theme focuses on automatically identifying specific objects in scattered point cloud data, operating on the original point cloud according to a set computation strategy. A region growing method clusters the scattered point cloud, and various algorithms, such as boundary extraction, then extract the features of each cluster; finally, cluster features are cross-matched to recognize particular objects. The second theme investigates the damage caused by earthquakes, studying structural damage caused by seismic forces, such as structural deformation and displacement and surface damage, recording and extracting earthquake-induced large landslides, and proposing specific damage assessments and quantitative indicators. The results show the feasibility of automatic identification of specific objects, and several quantitative assessment results can be used to assess structural damage.
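The region-growing clustering step mentioned above can be sketched as a distance-based flood fill over a scattered point cloud; this is a generic illustration, not the thesis's actual strategy, and the radius parameter is an assumption.

```python
import numpy as np
from collections import deque

def region_grow(points, radius):
    """Cluster a point cloud: a seed region grows by absorbing any
    unlabeled point within `radius` of a point already in the region."""
    n = len(points)
    labels = np.full(n, -1)  # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < radius) & (labels == -1))[0]:
                labels[j] = current
                queue.append(j)
        current += 1
    return labels

# two well-separated blobs should yield two clusters
pts = np.vstack([np.random.rand(50, 3) * 0.4,
                 np.random.rand(50, 3) * 0.4 + 5.0])
labels = region_grow(pts, radius=1.0)
print(len(set(labels)))  # 2
```

Real implementations typically also compare local surface normals before absorbing a point, so that clusters follow smooth surfaces rather than raw proximity.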
Hsu, Ya-Ting y 許雅婷. "Cloud point measurement and analysis of branched PEO/ salt / water systems". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/64w6na.
Texto completoNational Taipei University of Technology
Graduate Institute of Biotechnology
101
Cloud points (CPs) of the phase separation of aqueous solutions of branched polyethylene oxide (PEO), samples BP5 and BP6, which have a lower critical solution temperature (LCST), were determined under a heating process by light transmission, dynamic light scattering, and viscometry methods. It was found that the CP temperature depended on the concentration of polymer and salts: NaCl, KCl, NaH2PO4, and KH2PO4. The CP temperature could be reduced below 37 °C with the addition of salt, implying that the branched PEO/water/salt system has potential for medical applications. Furthermore, the interaction parameter between solvent and solute was estimated with a modified Flory-Huggins model.
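For reference, the classical (unmodified) Flory-Huggins free energy of mixing, in terms of which the interaction parameter is defined; the thesis's modified form is not reproduced here:

```latex
\frac{\Delta G_{\mathrm{mix}}}{RT} \;=\; \frac{\phi}{N}\,\ln\phi \;+\; (1-\phi)\ln(1-\phi) \;+\; \chi\,\phi\,(1-\phi)
```

where φ is the polymer volume fraction, N the degree of polymerization, and χ the solvent-solute interaction parameter; LCST behaviour corresponds to χ increasing with temperature, so that demixing sets in on heating.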
(5929979), Yun-Jou Lin. "Point Cloud-Based Analysis and Modelling of Urban Environments and Transportation Corridors". Thesis, 2019.
Buscar texto completo
Li, Zhong-Yan y 李忠彥. "A Study of the Point Cloud Date Analysis for Ground-Based LIDAR". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/53190982887035469634.
Texto completoChing Yun University of Science and Technology
Graduate Institute of Civil Engineering and Disaster Prevention
95
Ground-based LIDAR takes advantage of a three-dimensional sweeping laser to obtain highly accurate point cloud scans of the desired scene. This innovative data capture method provides useful three-dimensional information on the internal environment of the structure being evaluated. Generally, surveyor's beacons (targets) placed on the ground are observed at specific points, and these points serve as control points to determine whether the target of interest has shifted. This investigation first employed a laboratory-based simulated bridge slab; later, field experiments were carried out on bridges in the Miaoli Houlong region. Targets were placed at suitable locations on selected bridge piers and scanned regularly. The scanned target point clouds were post-processed with RealWorks Survey 4.1.2, where portions of the point cloud data were deleted at random, and a program written in Microsoft Visual Basic 6.0 computed the target center positions with error rejection. By checking whether the computed centers still conform to the original data as the remaining amount of data varies, the study examines whether the offsets of the center coordinates show increasing trends, in the hope that the data can be stored with reduced file sizes in further experiments.
Bates, Jordan Steven. "Oblique UAS imagery and point cloud processing for 3D rock glacier monitoring". Master's thesis, 2020. http://hdl.handle.net/10362/94396.
Texto completoRock glaciers play a large ecological role and are heavily relied upon by local communities for water, power, and revenue. With climate change, the rate at which they are deforming has increased over the years, making it more important to gain a better understanding of these geomorphological movements for improved predictions, correlations, and decision making. It is becoming increasingly practical to examine a rock glacier with 3D visualization to have more perspectives and realistic terrain profiles. Recently gaining more attention is the use of Terrestrial Laser Scanners (TLS) and Unmanned Aircraft Systems (UAS), used separately and combined, to gather high-resolution data for 3D analysis. This data is typically transformed into highly detailed Digital Elevation Models (DEM), where the DEM of Difference (DoD) is used to track changes over time. This study compares these commonly used collection methods and analyses to a newly conceived multirotor UAS collection method and to the new point cloud Multiscale Model to Model Cloud Comparison (M3C2) change detection seen in recent studies. Data was collected on the Innere Ölgrube rock glacier in Austria with a TLS in 2012 and with a multirotor UAS in 2019. It was found that oblique imagery with terrain height corrections, which creates perspectives similar to what the TLS provides, increased the completeness of data collection for a better reconstruction of a rock glacier in 3D. The new method improves the completeness of data by an average of at least 8.6%. Keeping the data as point clouds provided a much better representation of the terrain. When transforming point clouds into DEMs with common interpolation methods, it was found that the average area of surface items could be exaggerated by 2.2 m^2, while point clouds were much more accurate, to within 0.3 m^2.
DoD and M3C2 results were compared, and it was found that DoD always provides a maximum increase at least 1.1 m larger and a maximum decrease at least 0.85 m larger than M3C2, with a larger standard deviation but similar mean values, which could be attributed to horizontal inaccuracies and smoothing of the interpolated data.
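The DoD analysis compared above amounts to cell-wise differencing of two co-registered DEMs, with changes below a level of detection masked out. A minimal sketch, with hypothetical values:

```python
import numpy as np

def dem_of_difference(dem_t0, dem_t1, lod=0.1):
    """Elevation change per cell between two co-registered DEMs; changes
    below the level of detection (lod, in metres) are masked as noise."""
    dod = dem_t1 - dem_t0
    dod[np.abs(dod) < lod] = np.nan  # below detection limit
    return dod

# toy 4x4 DEMs: flat terrain with two local elevation changes
dem_2012 = np.zeros((4, 4))
dem_2019 = np.zeros((4, 4))
dem_2019[1, 1] = 0.8   # local surface raise (e.g. boulder movement)
dem_2019[2, 2] = -0.5  # local surface lowering

dod = dem_of_difference(dem_2012, dem_2019)
print(np.nanmax(dod), np.nanmin(dod))  # 0.8 -0.5
```

M3C2, by contrast, measures distances directly between the two point clouds along locally estimated surface normals, which is why it avoids the interpolation smoothing the abstract attributes to DoD.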
Paul, Rahul. "Topological analysis, non-linear dimensionality reduction and optimisation applied to manifolds represented by point clouds". Thesis, 2018. http://hdl.handle.net/1959.13/1393470.
Texto completoIn recent years, there has been a growing demand for computational techniques that respect the non-linear structure of high-dimensional data, in both real-world applications and research. Various forms of manifolds can describe non-linear objects. However, manifolds are abstract mathematical concepts and in applications these are often represented by high-dimensional finite sets of sample points. This thesis investigates techniques from machine learning, optimisation and computational topology that can be applied to such point clouds. The first part of this thesis presents a topological approach for validating nonlinear dimensionality reduction. During the process of non-linear dimensionality reduction, manifolds represented by point clouds are at risk of changing their topology. The impact of manifold learning is evaluated by comparing Betti numbers based on persistent homology of test manifolds before and after dimensionality reduction. The second part of the thesis addresses the processing of large point cloud data as it can occur in real applications. The topological analysis of this data using traditional methods for persistent homology can be a computationally costly task. If the data is represented by large point clouds, many current computing systems find processing difficult or fail to process the data. This thesis proposes an alternative approach that employs deep learning to estimate Betti numbers of manifolds represented by point clouds. The third part of the thesis investigates simulated examples of optimisation on general differentiable manifolds without the requirement of a Riemannian structure. A barrier method with exact line search for the optimisation problem over manifolds is proposed. The last part of this thesis reports on collaborative field work with Xerox India using a real-world data set. A heuristic algorithm is employed to solve a practical task allocation problem.
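Persistent homology tracks Betti numbers across all scales; its simplest instance, Betti-0 (the number of connected components) at a single fixed scale, can be computed with union-find. This is a minimal illustration of the quantity being compared, not the thesis's method.

```python
import numpy as np

def betti0(points, scale):
    """Betti-0 of the graph connecting points closer than `scale`,
    i.e. the number of connected components, via union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < scale:
                parent[find(i)] = find(j)  # union the two components

    return len({find(i) for i in range(n)})

# two clusters far apart: two components at a small scale, one at a large scale
pts = np.vstack([np.random.rand(30, 2), np.random.rand(30, 2) + 10.0])
print(betti0(pts, scale=2.0))   # 2
print(betti0(pts, scale=50.0))  # 1
```

Sweeping `scale` and recording when components merge yields the 0-dimensional persistence diagram; higher Betti numbers (loops, voids) require the full simplicial machinery the thesis approximates with deep learning.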
Rato, Daniela Ferreira Pinto Dias. "Detection of the navigable road limits by analysis of the accumulated point cloud density". Master's thesis, 2019. http://hdl.handle.net/10773/28205.
Texto completoWithin the scope of the Atlas project, this dissertation addresses the identification of the navigable limits of the road through analysis of the density of accumulated point clouds obtained from laser readings of a SICK LD-MRS sensor. This sensor, installed at the front of the AtlasCar2, is intended to identify obstacles at road level, and from its data occupancy grids delimiting the vehicle's navigable space are created. First, the point cloud density is converted, in each frame, into a density grid normalized by the maximum density; edge detection algorithms and gradient filters are then applied to detect patterns corresponding to sudden density changes, both positive and negative. Thresholds are applied to these grids to eliminate irrelevant information. Finally, a methodology for the quantitative evaluation of the algorithms was also developed, using KML files to delineate road limits and, given the precision of the GPS data obtained, comparing the real navigable space with that obtained by the road-limit detection methodology, thus assessing the performance of the developed algorithms. This work presents results for the different algorithms, as well as several tests considering the influence of grid resolution, car speed, and other factors. The developed work fulfills the initially proposed objectives, being able to detect both positive and negative obstacles and being reasonably robust to speed and road conditions.
Master's in Mechanical Engineering
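The density-grid edge detection described in the abstract can be sketched roughly as follows; the grid values, threshold, and gradient operator are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def density_edges(grid, threshold=0.3):
    """Accumulated-density grid -> boolean mask of cells where the
    normalized density changes sharply (candidate road limits)."""
    norm = grid / grid.max()      # normalize to the frame maximum
    gy, gx = np.gradient(norm)    # finite-difference gradients (rows, cols)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold  # threshold away irrelevant variation

# synthetic grid: dense laser returns along a curb (column 5), sparse road elsewhere
grid = np.full((10, 10), 1.0)
grid[:, 5] = 10.0
mask = density_edges(grid)
print(mask[:, 4].all(), mask[:, 0].any())  # True False
```

Cells flagged on both sides of the dense column mark the sudden positive and negative density changes the dissertation associates with navigable-road limits.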
MinLi y 李旻. "Analysis of 3D Point Cloud-Based Surface Reconstruction Methods: Screened Poisson Vs. B-Spline". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2b26dz.
Texto completoNational Cheng Kung University
Department of Computer Science and Information Engineering
106
Surface reconstruction is the process of using a scanned point cloud to reconstruct the original surface of an object. Using different scanning methods, we can obtain information about the surface of the object of interest and then reconstruct that information into denser point clouds or even continuous surfaces. In recent years, as 3D printing technology has matured, the price of 3D printing tools has decreased. Our laboratory proposed a framework that acquires object information with a non-contact 3D scanning device and produces reconstructed models with 3D printing tools, making the whole system fully operational. This research focuses on analyzing and comparing the two algorithms for cases with highly reliable point clouds, and further proposes a framework combining the two methods to handle the object of interest. This means that we can reconstruct the surface of objects from point clouds with their geometry and topology information kept intact, and then produce the model with a 3D printer.
陳佳緹. "Storage Industry Analysis--From the Point of Cloud Storage to Discuss Taiwan Storage Industry Opportunity". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/74031356571790799716.
Texto completoNational Chiao Tung University
Master Program of Business Administration
98
With the blooming development of the Internet, the advantages of information interflow include speedy and cheap information collection, mass information storage, and interactive, integrated information provided on demand. According to Jupiter Research estimates, the worldwide Internet access population will reach 200 million by 2011. In the meantime, enterprise information drives massive data storage demand. The 2008 financial crisis decreased IT investment but did not restrain the growth of data. People seek cheaper and more efficient ways to store their valuable, undiscarded data, initiating a new business model in which data storage shifts from local devices to the Internet, called "Cloud Storage Service". The storage, or cloud storage, industry belongs to the cloud computing infrastructure service sector. Taiwan is well known for its high-tech industry, especially OEM and ODM. Under pressure from worldwide competitors and the low profits of the OEM and ODM business models, Taiwan needs a forward-looking strategy to gain a better position in the world. By analyzing the storage industry from the point of view of cloud storage services, we can see the storage industry's role in cloud storage, learn how the global supply chain is established, and see how the main players develop their business strategies. In conclusion, the research provides some worthwhile principles regarding the bottlenecks and potential opportunities of Taiwan's future cloud storage development.
Chiang, Hung-Yueh y 江泓樂. "An Analysis of 3D Indoor Scene Segmentation Based on Images, Point Cloud and Voxel Data". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/pvbj47.
Texto completoNational Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
Deep learning technology has brought great success in image classification, object detection and semantic segmentation tasks. In recent years, the advent of inexpensive depth sensors has hugely motivated the 3D research area, and real-scene reconstruction datasets such as ScanNet [5] and Matterport3D [1] have been proposed. However, the problem of 3D scene semantic segmentation still remains new and challenging due to the many variants of 3D data types (e.g. image, voxel, point cloud). Other difficulties, such as high computation cost and the scarcity of data, impede the progress of research on 3D segmentation. In this paper, we study the 3D indoor scene segmentation problem with three different types of 3D data, which we categorize as image-based, voxel-based and point-based. We experiment on different input signals (e.g. color, depth, normal) and verify their effectiveness and performance in networks for each data type. We further study fusion methods and improve performance by using off-the-shelf deep models and by leveraging data modalities.
Martins, Pedro Miguel Simões Bastos. "Interference analysis in time of flight LiDARs". Master's thesis, 2019. http://hdl.handle.net/10773/29885.
Texto completoEvery 23 seconds, a person dies on the roads. In 2018, 1.35 million people died in road accidents, 90% of which were due to human error: dangerous driving, distraction, fatigue and bad decisions. Autonomous vehicles are one of the solutions proposed to solve this problem, replacing or assisting the driver. To do so, vehicles need to perceive their surroundings with great precision, and LiDAR is one of the most promising sensors for that task. To perceive their surroundings, LiDARs emit laser beams that can, in theory, be received by another LiDAR in another car, interfering with that second LiDAR's ability to perceive its surroundings. In a scenario where multiple autonomous cars equipped with LiDAR coexist, their mutual interference can compromise their ability to perceive the environment accurately, and with it the possibility of solving one of the problems they were meant to address: road accidents and deaths. In this Master's dissertation, we propose to study the behavior of interference between two LiDARs in various interference scenarios, varying their distance, height and relative position. We also tried to understand the different impacts of direct and scattered interference, by obstructing the line of sight between the two LiDARs, and to verify how interference behaves in regions of interest and on objects. We built an experimental setup containing two LiDARs and a camera, calibrated them intrinsically and extrinsically, and estimated the position of objects of interest in the point cloud from regions of interest previously detected in the images. Using this experimental setup, we collected more than 600 GB of raw data, to which we applied four different interference analysis techniques, all developed by us. Our findings show that the relative number of interfered points varies between the orders of magnitude of 10^-7 and 10^-3.
Our results show that direct interference predominates over scattered interference, with the relative interference value changing by one order of magnitude if the line of sight between the two LiDARs is obstructed. We were also able to identify situations in which interference behaves similarly to sensor noise, being almost indistinguishable from it, and other cases in which it is strongly present, causing distance measurement errors that exceed even the physical dimensions of the space where the experimental setup was operated. We conclude that interference does not appear to be as destructive for autonomous driving as initially expected, due to its low order of magnitude. Nevertheless, it can still have serious effects, especially in direct interference situations. We can also conclude that the nature of the interference is highly volatile, depending on conditions not yet fully defined, including the way the experimental setup is built.
Master's in Electronics and Telecommunications Engineering
Huang, Shih-Chang y 黃世昌. "Three-Dimensional Laser Scan Apply To Facilities Analysis --Using Point-Cloud To Analyse A Research Space--". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/31887320161521909215.
Texto completoNational Taiwan University of Science and Technology
Department of Architecture
93
The 3D data of a space is important evidence for facilities management (FM), and gathering it is an important issue in FM. In this study, we used a 3D laser scanner to retrieve 3D data of research rooms in order to learn how the facilities were used. This research captures 3D spatial data with a Cyrax 2500 long-range 3D laser scanner for facility change analysis. Because point clouds can be overlapped, this research analyses and displays changes of facilities as 3D models, and through overlap analysis reveals how the facilities are used. Point clouds were retrieved from personal research rooms and public research rooms, with two or three point cloud models obtained from each room; these models were used as overlap analysis samples. Through point cloud overlap analysis, the use patterns, user behavior and facility changes become known. The results show strong evidence that space-use analysis can be done in a digital manner.
Little, Anna Victoria. "Estimating the Intrinsic Dimension of High-Dimensional Data Sets: A Multiscale, Geometric Approach". Diss., 2011. http://hdl.handle.net/10161/3863.
Texto completoThis work deals with the problem of estimating the intrinsic dimension of noisy, high-dimensional point clouds. A general class of sets which are locally well-approximated by
Dissertation
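One common baseline for intrinsic dimension estimation (not necessarily the multiscale estimator developed in this work) is local PCA: at each point, the dimension is read off from how many eigenvalues are needed to explain most of the variance in a neighbourhood. A minimal sketch, with illustrative parameters:

```python
import numpy as np

def local_pca_dimension(points, k=20, var_kept=0.95):
    """Crude intrinsic-dimension estimate: PCA on the k nearest neighbours
    of each point; the local dimension is the number of eigenvalues needed
    to explain `var_kept` of the variance, averaged over all points."""
    dims = []
    for p in points:
        idx = np.argsort(np.linalg.norm(points - p, axis=1))[:k]
        nbrs = points[idx] - points[idx].mean(axis=0)
        eigvals = np.linalg.eigvalsh(nbrs.T @ nbrs)[::-1]  # descending
        ratio = np.cumsum(eigvals) / eigvals.sum()
        dims.append(int(np.searchsorted(ratio, var_kept) + 1))
    return float(np.mean(dims))

# a 2D plane isometrically embedded in 5D: estimate should be close to 2
rng = np.random.default_rng(1)
coords = rng.normal(size=(200, 2))
basis, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # orthonormal 2D basis in R^5
plane = coords @ basis.T
print(local_pca_dimension(plane))  # close to 2 for a 2D manifold
```

As the thesis notes, noise and curvature make such estimates scale-dependent, which is exactly what motivates a multiscale, geometric treatment.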
Pernisco, Gaetano. "Reconstruction and analysis Of 3D models for autonomous vehicles and manufacturing industry". Doctoral thesis, 2023. https://hdl.handle.net/11589/246740.
Texto completoThis thesis falls in the general category of computer vision. In particular, it regards the study of reconstruction and analysis of 3D models, with a focus on two different domains: autonomous vehicles and the manufacturing industry. Computer vision is a research topic deeply studied in the last decades, with great interest from both researchers and industry. Contrary to image processing, computer vision aims to extract 3D structure and semantic meaning from images for a rich and complete understanding. The development of Convolutional Neural Networks (CNNs) allowed computer vision to face new complex problems, reaching impressive results. But computer vision does not concern only two-dimensional images, but also multidimensional data. Indeed, the diffusion of new sensors such as LiDARs and 3D scanners has required the design of new algorithms to deal with their data structures. In fact, point clouds - the classic data produced by 3D sensors - have a very different nature compared with images. Autonomous driving represents a domain in which computer vision finds a wide range of applications. To safely move in the urban environment, driverless cars need an accurate and rich perception of the environment. For this reason, data from different sensors are fused to create a 360° 3D representation of the scene. On the other hand, the manufacturing industry can leverage computer vision approaches, in synergy with other technologies, to automate and facilitate several processes and make the production chain lean. In particular, quality control and warehouse management processes can leverage robotics, 3D scanning, and mixed reality to facilitate human work. In this thesis, several contributions mainly based on computer vision are proposed in both domains. It investigates the power of computer vision by leveraging the peculiarities of both two-dimensional and three-dimensional data in several identified tasks typical of both domains.
The experimental results shown, analyzed and discussed in this thesis support the effectiveness of each proposed method.
Muresan, Alexandru Camil. "Analysis and Definition of the BAT-ME (BATonomous Moon cave Explorer) Mission". Thesis, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247507.
Texto completoDOTTA, GIULIA. "Semi-automatic analysis of landslide spatio-temporal evolution". Doctoral thesis, 2017. http://hdl.handle.net/2158/1076767.
Texto completoOesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction". Doctoral thesis, 2015. https://ul.qucosa.de/id/qucosa%3A14718.
Texto completo