To view other types of publications on this topic, follow the link: 3D data analysis.

Dissertations on the topic "3D data analysis"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "3D data analysis".

Next to each source in the bibliography list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if these details are provided in the metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

Deighton, M. J. "3D texture analysis in seismic data." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/842764/.

Abstract:
The use of hydrocarbons is ubiquitous in modern society, from fuel to raw materials. Seismic surveys now routinely produce large, volumetric representations of the Earth's crust. Human interpretation of these surveys plays an important part in locating oil and gas reservoirs; however, it is a lengthy and time-consuming process. Methods that provide semi-automated aid to the interpreter are highly sought after. In this research, texture is identified as a major cue to interpretation. A local gradient density method is then employed for the first time with seismic data to provide volumetric texture analysis. Extensive experiments are undertaken to determine parameter choices that provide good separation of seismic texture classes according to the Bhattacharyya distance. A framework is then proposed to highlight regions of interest in a survey with high confidence based on texture queries by an interpreter. The interpretation task of seismic facies analysis is then considered and its equivalence with segmentation is established. Since the facies units may take a range of orientations within the survey, sensitivity of the analysis to rotation is considered. As a result, new methods based on alternative gradient estimation kernels and data realignment are proposed. The feature-based method with alternative kernels is shown to provide the best performance. Achieving high texture label confidence requires large local windows, which is in direct conflict with the need for small windows to identify fine detail. It is shown that smaller windows may be employed to achieve finer detail at the expense of label confidence. A probabilistic relaxation scheme is then described that recovers the label confidence whilst constraining texture boundaries to be smooth at the smallest scale. Testing with synthetic data shows reductions in error rate by up to a factor of 2. Experiments with seismic data indicate that more detailed structure can be identified using this approach.
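For reference: when two texture classes are modelled as multivariate Gaussians N(μ₁, Σ₁) and N(μ₂, Σ₂), the Bhattacharyya distance used to score their separation takes the standard form below (a background note; the thesis' exact feature model is not reproduced here):

```latex
D_B = \frac{1}{8}(\boldsymbol{\mu}_2-\boldsymbol{\mu}_1)^{\top}
      \left(\frac{\Sigma_1+\Sigma_2}{2}\right)^{-1}
      (\boldsymbol{\mu}_2-\boldsymbol{\mu}_1)
    + \frac{1}{2}\ln\frac{\det\left(\frac{\Sigma_1+\Sigma_2}{2}\right)}{\sqrt{\det\Sigma_1\,\det\Sigma_2}}
```

A larger D_B means better-separated class-conditional feature distributions, which is the criterion the parameter search above optimises.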
2

Madrigali, Andrea. "Analysis of Local Search Methods for 3D Data." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Abstract:
This thesis analyses several search methods for 3D data. It gives a general overview of the field of Computer Vision, of the state of the art in acquisition sensors, and of some of the formats used to describe 3D data. It then examines 3D object recognition in depth: besides describing the entire matching process between local features, it focuses on the detection phase for salient points. In particular, a learned keypoint detector based on machine-learning techniques is analysed. The latter is illustrated through the implementation of two neighbour-search algorithms: an exhaustive one (k-d tree) and an approximate one (radial search). Finally, experimental evaluations of the efficiency and speed of the detector implemented with the different search methods are reported, showing an effective performance improvement, without a considerable loss of accuracy, when using the approximate search.
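To illustrate the two neighbour-search strategies contrasted above, here is a minimal sketch using SciPy's k-d tree for the exact query and a fixed-radius ("radial") lookup for the approximate one. The point cloud, query and radius are invented; this is not the thesis implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))   # hypothetical 3D keypoint cloud
query = rng.random(3)

tree = cKDTree(points)

# Exact nearest neighbour: the k-d tree is searched until optimality is proven.
dist, idx = tree.query(query, k=1)

# Radial search: only candidates within radius r are examined, trading
# completeness for speed when r is chosen tightly.
candidates = tree.query_ball_point(query, r=0.1)
if candidates:
    nearest = min(candidates, key=lambda i: np.linalg.norm(points[i] - query))
    print(idx, nearest, round(float(dist), 4))
```

With a tight radius the ball query inspects far fewer candidates than the exhaustive search, which is the speed/accuracy trade-off evaluated in the thesis.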
3

Orriols, Majoral Xavier. "Generative Models for Video Analysis and 3D Range Data Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3037.

Abstract:
The majority of problems in Computer Vision do not involve a direct relation between the stimuli provided by a general-purpose sensor and their corresponding perceptual categories. A complex learning task must be involved in order to provide such a connection. In fact, the basic forms of energy, and their possible combinations, are few in number compared to the infinite possible perceptual categories corresponding to objects, actions, relations among objects, and so on. Two main factors determine the level of difficulty of a specific problem: i) the different levels of information that are employed, and ii) the complexity of the model that is intended to explain the observations.
The choice of an appropriate representation for the data takes on significant relevance when it comes to dealing with invariances, since these usually imply that the number of intrinsic degrees of freedom in the data distribution is lower than the number of coordinates used to represent it. Therefore, the decomposition into basic units (model parameters) and the change of representation allow a complex problem to be transformed into a manageable one. This simplification of the estimation problem has to rely on a proper mechanism for combining those primitives in order to give an optimal description of the global complex model. This thesis shows how Latent Variable Models reduce dimensionality while taking into account the internal symmetries of a problem, provide a manner of dealing with missing data, and make it possible to predict new observations.
The lines of research of this thesis are directed at the management of multiple data sources. More specifically, this thesis presents a set of new algorithms applied to two different areas in Computer Vision: i) video analysis and summarization, and ii) 3D range data. Both areas have been approached through the Generative Models framework, where similar protocols for representing data have been employed.
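To make the latent-variable idea concrete, here is a minimal dimensionality-reduction sketch, with ordinary PCA from scikit-learn standing in for the richer generative models developed in the thesis; the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic observations: 500 samples near a 5-D linear subspace of R^50.
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 50))

model = PCA(n_components=5).fit(X)   # learn the low-dimensional representation
Z = model.transform(X)               # latent coordinates: far fewer degrees of freedom
X_hat = model.inverse_transform(Z)   # predict observations back from the latent code
print(Z.shape, float(np.mean((X - X_hat) ** 2)))
```

The reconstruction step is what enables prediction of new observations; handling partial (missing) data, as described above, requires the probabilistic machinery that plain PCA lacks.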
4

Qian, Zhongping. "Analysis of seismic anisotropy in 3D multi-component seismic data." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3515.

Abstract:
The importance of seismic anisotropy has been recognized by the oil industry since its first observation in hydrocarbon reservoirs in 1986, and the application of seismic anisotropy to solve geophysical problems has been keenly pursued since then. However, many problems remain, which have limited the applications of the technology. Nowadays, more and more wide-azimuth 3D multi-component seismic data are becoming available. These have provided more opportunities for the study of seismic anisotropy. My thesis has focused on the use of seismic anisotropy in 3D multi-component seismic data to characterize subsurface fractures, improve converted-wave imaging and detect fluid content in fractured reservoirs, all of which are important for fractured reservoir exploration and monitoring. For the use of seismic anisotropy to characterize subsurface fracture systems, equivalent medium theories have established the link between seismic anisotropy and fracture properties. The numerical modelling in the thesis reveals that the amplitudes and interval travel time of the radial component of PS converted waves can be used to derive fracture properties through elliptical fitting, similar to P-waves. However, sufficient offset coverage is required for either the P- or PS-wave to reveal the features of elliptical variation with azimuth. Compared with numerical modelling, seismic physical modelling provides additional insights into the azimuthal variation of P- and PS-wave attributes and their links with fracture properties. Analysis of the seismic physical model data in the thesis shows that the ratio of the offset to the depth of a target layer (the offset-depth ratio) is a key parameter controlling the choice of suitable attributes and methods for fracture analysis. Data with a small offset-depth ratio, from 0.7 to 1.0, may be more suitable for amplitude analysis, whilst the use of travel-time or velocity analysis requires a large offset-depth ratio, above 1.0, which can help in reducing the effect of the acquisition footprint and structural imprint on the results. Multi-component seismic data are often heavily contaminated with noise, which limits their application potential in seismic anisotropy analysis. A new method to reduce noise in 3D multi-component seismic data has been developed and has proved to be very helpful in improving data quality. The method can automatically recognize and eliminate strong noise in 3D converted-wave seismic data with little interference to useful reflection signals. Component rotation is normally a routine procedure in 3D multi-component seismic analysis. However, this study shows that incorrect rotations may occur for certain acquisition geometries and can lead to errors in shear-wave splitting analysis. A quality control method has been developed to ensure this procedure is correctly carried out. The presence of seismic anisotropy can affect the quality of seismic imaging, but the study has shown that the magnitude of the effects depends on the data type and target depth. The effects of VTI anisotropy (transverse isotropy with a vertical symmetry axis) on P-wave images are much weaker than those on PS-wave images, and anisotropic effects decrease with depth for both the P- and PS-waves. The real data example shows that the overall image quality of PS-waves processed by pre-stack time migration is improved when VTI anisotropy is taken into account; the improvements are mainly in the upper part of the section.
Monitoring fluid distribution is an important task in producing reservoirs. A synthetic study based on a multi-scale rock-physics model shows that it is possible to use seismic anisotropy to derive viscosity information in a HTI medium (transverse isotropy with a horizontal symmetry axis). The numerical modelling demonstrates the effects of fluid viscosity on medium elastic properties and seismic reflectivity, as well as the possibility of using them to discriminate between oil and water saturation. Analysis of real data reveals that it is hard to use the P-wave to discriminate oil-water saturation. However, characteristic shear-wave splitting behaviour due to pore pressure changes demonstrates the potential for discriminating between oil and water saturation in fractured reservoirs.
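For background on the elliptical fitting mentioned in the abstract, azimuthal anisotropy analysis commonly fits an attribute A (interval travel time, velocity or amplitude) against source-receiver azimuth φ using a cos 2φ parameterisation; this is the generic form, not necessarily the thesis' exact formulation:

```latex
A(\varphi) = A_0 + B\cos 2(\varphi - \varphi_0)
```

Here φ₀ is related to the fracture orientation and B/A₀ to the anisotropy magnitude; the requirement of sufficient offset and azimuth coverage noted above is what makes this fit well-posed.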
5

Laha, Bireswar. "Immersive Virtual Reality and 3D Interaction for Volume Data Analysis." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51817.

Abstract:
This dissertation provides empirical evidence for the effects of the fidelity of VR system components, and presents novel 3D interaction techniques for analyzing volume datasets. It provides domain-independent results based on an abstract task taxonomy for the visual analysis of scientific datasets. Scientific data generated through various modalities, e.g. computed tomography (CT), magnetic resonance imaging (MRI), etc., are in 3D spatial or volumetric format. Scientists from various domains, e.g. geophysics and medical biology, use visualizations to analyze data. This dissertation seeks to improve the effectiveness of scientific visualizations. Traditional volume data analysis is performed on desktop computers with mouse and keyboard interfaces. Previous research and anecdotal experience indicate improvements in volume data analysis in systems with very high fidelity of display and interaction (e.g., CAVE) over desktop environments. However, prior results are not generalizable beyond specific hardware platforms or specific scientific domains, and do not look into the effectiveness of 3D interaction techniques. We ran three controlled experiments to study the effects of a few components of VR system fidelity (field of regard, stereo and head tracking) on volume data analysis. We used volume data from paleontology, medical biology and biomechanics. Our results indicate that different components of system fidelity have different effects on the analysis of volume visualizations. One of our experiments provides evidence for validating the concept of Mixed Reality (MR) simulation. Our approach of controlled experimentation with MR simulation provides a methodology to generalize the effects of immersive virtual reality (VR) beyond individual systems. To generalize our (and other researchers') findings across disparate domains, we developed and evaluated a taxonomy of visual analysis tasks with volume visualizations. We report our empirical results tied to this taxonomy. We developed the Volume Cracker (VC) technique for improving the effectiveness of volume visualizations. This is a novel free-hand, gesture-based 3D interaction (3DI) technique. We describe the design decisions in the development of the Volume Cracker (with a list of usability criteria), and provide the results from an evaluation study. Based on the results, we further demonstrate the design of a bare-hand version of the VC with the Leap Motion controller device. Our evaluations of the VC show the benefits of using 3DI over standard 2D interaction (2DI) techniques. This body of work provides the building blocks for a three-way many-many-many mapping between the sets of VR system fidelity components, interaction techniques and visual analysis tasks with volume visualizations. Such a comprehensive mapping can inform the design of next-generation VR systems to improve the effectiveness of scientific data analysis.
Ph. D.
6

Patel, Ankur. "3D morphable models : data pre-processing, statistical analysis and fitting." Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/1576/.

Abstract:
This thesis presents research aimed at using a 3D linear statistical model (known as a 3D morphable model) of an object class (which could be faces, bodies, cars, etc.) for robust shape recovery. Our aim is to use this recovered information for potentially useful applications like recognition and synthesis. With the 3D morphable model as its central theme, this thesis includes: a framework for the groupwise processing of a set of meshes in dense correspondence; a new method for model construction; a new interpretation of the statistical constraints afforded by the model; and the addressing of some key limitations associated with using such models in real-world applications. In Chapter 1 we introduce 3D morphable models, touch on the current state of the art and emphasise why these models are an interesting and important research tool in the computer vision and graphics community. We then discuss the limitations of using such models and use these limitations as motivation for some of the contributions made in this thesis. Chapter 2 presents an end-to-end system for obtaining a single (possibly symmetric) low-resolution mesh topology and texture parameterisation which are optimal with respect to a set of high-resolution input meshes in dense correspondence. These methods result in data which can be used to build 3D morphable models (at any resolution). In Chapter 3 we show how the tools of thin-plate spline warping and Procrustes analysis can be used to construct a morphable model as a shape space. We observe that the distribution of parameter vector lengths follows a chi-square distribution and discuss how the parameters of this distribution can be used as a regularisation constraint on the length of parameter vectors. In Chapter 4 we take the idea introduced in Chapter 3 further by enforcing a hard constraint which restricts faces to points on a hyperspherical manifold within the parameter space of a linear statistical model. We introduce tools from differential geometry (log and exponential maps for a hyperspherical manifold) which are necessary for developing our methodology, and provide empirical validation to justify our choice of manifold. Finally, we show how to use these tools to perform model fitting, warping and averaging operations on the surface of this manifold. Chapter 5 presents a method to simplify a 3D morphable model without requiring knowledge of the training meshes used to build the model. This extends the simplification ideas in Chapter 2 into a statistical setting. The proposed method is based on iterative edge collapse, and we show that the expected value of the Quadric Error Metric can be computed in closed form for a linear deformable model. The simplified models can be used to achieve efficient multiscale fitting and super-resolution. In Chapter 6 we consider the problem of model dominance and show how shading constraints can be used to refine morphable model shape estimates, offering the possibility of exceeding the maximum possible accuracy of the model. We present an optimisation scheme based on surface normal error, as opposed to image error. This ensures the fullest possible use of the information conveyed by the shading in an image. In addition, our framework allows non-model-based estimation of per-vertex bump and albedo maps. This means the recovered model is capable of describing shape and reflectance phenomena not present in the training set. We explore the use of the recovered shape and reflectance information for face recognition and synthesis.
Finally, in Chapter 7 we provide concluding remarks and discuss directions for future research.
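A short note on the chi-square observation in Chapter 3: if the model parameters are expressed in units of standard deviation, so that each b_i is approximately independent standard normal, then the squared parameter-vector length satisfies

```latex
\|\mathbf{b}\|^{2} = \sum_{i=1}^{n} b_i^{2} \sim \chi^{2}_{n}
```

so plausible shapes concentrate near a hypersphere of radius √n in parameter space. This is the standard argument consistent with the hyperspherical-manifold constraint of Chapter 4; the thesis' own derivation may differ in detail.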
7

Polat, Songül. "Combined use of 3D and hyperspectral data for environmental applications." Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.

Abstract:
Ever-increasing demands for solutions that describe our environment and the resources it contains require technologies that support efficient and comprehensive description, leading to a better understanding of content. Optical technologies, the combination of these technologies, and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications due to their high spatial and spectral resolutions, while 3D technologies help to understand scenes in a more detailed way by using geometrical, topological and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrate the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and the use of these features to detect objects of interest. More specifically, we propose different approaches to combine 3D and hyperspectral data depending on the HSI/3D technologies used and show how each sensor can compensate for the weaknesses of the other. Furthermore, a new shape- and rule-based method for the analysis of spectral signatures was developed and presented. Its strengths and weaknesses compared to existing approaches are discussed, and its outperformance of SVM methods is demonstrated on the basis of practical findings from the fields of cultural heritage and waste management. Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this methodology is based on a practical example from the field of WEEE (waste electrical and electronic equipment) and focuses on the separation of materials like plastics, PCBs and electronic components on PCBs. The results obtained confirm that an improvement in classification results could be achieved compared to previously proposed methods. The individual methods and processes developed in this thesis are intended to be generally valid and readily transferable to other fields of application.
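As an illustration of what a shape- and rule-based spectral classifier can look like, here is a minimal sketch; the band range, thresholds, rules and class names are invented for illustration and are not those of the thesis:

```python
import numpy as np

def classify_spectrum(s, wavelengths):
    """Toy shape/rule-based decision over a 1-D spectral signature."""
    peak = wavelengths[np.argmax(s)]          # wavelength of the strongest response
    slope = np.polyfit(wavelengths, s, 1)[0]  # overall spectral slope
    if peak > 1600 and slope > 0:             # invented rule for one plastic type
        return "plastic_A"
    if s.max() - s.min() < 0.05:              # invented rule: nearly flat signature
        return "pcb_substrate"
    return "other"

wl = np.linspace(400, 2500, 200)                  # nm; hypothetical sensor range
spectrum = np.exp(-((wl - 1700.0) / 120.0) ** 2)  # synthetic signature
print(classify_spectrum(spectrum, wl))
```

Unlike an SVM, such rules are interpretable per material class, which is one plausible reason a shape-based method can be attractive in sorting applications.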
8

Landström, Anders. "Adaptive tensor-based morphological filtering and analysis of 3D profile data." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26510.

Abstract:
Image analysis methods for processing 3D profile data have been investigated and developed. These methods include: image reconstruction by prioritized incremental normalized convolution, morphology-based crack detection for steel slabs, and adaptive morphology based on the local structure tensor. The methods have been applied to a number of industrial applications. An issue with 3D profile data captured by laser triangulation is occlusion, which occurs when the line-of-sight between the projected laser light and the camera sensor is obstructed. To overcome this problem, interpolation of missing surface in rock piles has been investigated, and a novel interpolation method has been developed for filling in missing pixel values iteratively from the edges of the reliable data, using normalized convolution. 3D profile data of the steel surface has been used to detect longitudinal cracks in casted steel slabs. Segmentation of the data is done using mathematical morphology, and the resulting connected regions are assigned a crack probability estimate based on a statistical logistic regression model. More specifically, the morphological filtering locates trenches in the data, excludes scale regions from further analysis, and finally links crack segments together in order to obtain a segmented region which receives a crack probability based on its depth and length. Also suggested is a novel method for adaptive mathematical morphology intended to improve crack segment linking, i.e. for bridging gaps in the crack signature in order to increase the length of potential crack segments. Standard morphology operations rely on a predefined structuring element which is repeatedly used for each pixel in the image. The outline of a crack, however, can range from a straight line to a zig-zag pattern. A more adaptive method for linking regions with a large enough estimated crack depth would therefore be beneficial. More advanced morphological approaches, such as morphological amoebas and path openings, adapt better to curvature in the image. For our purpose, however, we investigate how the local structure tensor can be used to adaptively assign to each pixel an elliptical structuring element based on the local orientation within the image. The information from the local structure tensor directly defines the shape of the elliptical structuring element, and the resulting morphological filtering successfully enhances crack signatures in the data.
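A minimal sketch of the local-structure-tensor idea (orientation estimation only; the adaptive elliptical structuring elements of the thesis are built on top of this), assuming scikit-image is available:

```python
import numpy as np
from skimage import data
from skimage.feature import structure_tensor

image = data.camera().astype(float)   # stand-in for a 3D profile (range) image
Arr, Arc, Acc = structure_tensor(image, sigma=2.0, order="rc")

# Dominant local orientation of the 2x2 tensor [[Arr, Arc], [Arc, Acc]],
# measured from the row axis; an adaptive elliptical structuring element
# would be aligned with this angle and scaled by the tensor eigenvalues.
orientation = 0.5 * np.arctan2(2.0 * Arc, Arr - Acc)
print(orientation.shape, float(orientation.min()), float(orientation.max()))
```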
9

Cheewinsiriwat, Pannee. "Development of a 3D geospatial data representation and spatial analysis system." Thesis, University of Newcastle Upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.514467.

10

Vick, Louise Mary. "Evaluation of field data and 3D modelling for rockfall hazard analysis." Thesis, University of Canterbury. Geological Sciences, 2015. http://hdl.handle.net/10092/10845.

Abstract:
The Canterbury Earthquake Sequence (CES) of 2010-2011 produced large earthquakes with moment magnitudes up to Mw 7.1. These large, near-to-surface (<15 km) ruptures triggered >6,000 rockfall boulders on the Port Hills of Christchurch, many of which impacted houses and affected the livelihoods of people within the impacted area. These disastrous and unpredicted natural events created a need to assess areas that future rockfall events may affect, where it is known that a rockfall is possible from a specific source outcrop but the potential boulder runout and dynamics are not understood. The distribution of rockfall deposits is largely constrained by the physical properties and processes of the boulder and its motion, such as block density, shape and size, block velocity, bounce height, impact and rebound angle, as well as the properties of the substrate. Numerical rockfall models go some way towards accounting for all the complex factors in an algorithm, commonly parameterised in a user interface where site-specific effects can be calibrated. Calibration of these algorithms requires thorough field checks and often experimental practices. The purpose of this project, which began immediately following the most destructive rupture of the CES (February 22, 2011), is to collate data to characterise boulder falls, and to use this information, supplemented by a set of anthropogenic boulder fall data, to perform an in-depth calibration of the three-dimensional numerical rockfall model RAMMS::Rockfall. The thesis covers the following topics:
• Use of field data to calibrate RAMMS. Boulder impact trails in the loess-colluvium soils at Rapaki Bay have been used to estimate ranges of boulder velocities and bounce heights. RAMMS results replicate field data closely; it is concluded that the model is appropriate for analysing the earthquake-triggered boulder trails at Rapaki Bay, and that it can be usefully applied to rockfall trajectory and hazard assessment at this and similar sites elsewhere.
• Detailed analysis of dynamic rockfall processes, interpreted from recorded boulder rolling experiments and compared to RAMMS simulated results at the same site. Recorded rotational and translational velocities of a particular boulder show that the boulder behaves logically and dynamically on impact with different substrate types. Simulations show that seasonal changes in soil moisture alter rockfall dynamics and runout predictions within RAMMS, and adjustments are made to the calibration to reflect this, suggesting that in hazard analysis a rockfall model should be calibrated to dry rather than wet soil conditions to anticipate the most serious outcome.
• Verifying the model calibration for a separate site on the Port Hills. The results of the RAMMS simulations show the effectiveness of calibration against a real data set, as well as the effectiveness of vegetation as a rockfall barrier/retardant. The results of the simulations are compared using hazard maps, where the maximum runouts match well the mapped CES fallen boulder maximum runouts. The results of the simulations in terms of frequency distribution of deposit locations on the slope are also compared with those of the CES data, using the shadow angle tool to apportion slope zones. These results also replicate real field data well. Results show that a maximum runout envelope can be mapped, as well as the frequency distribution of deposited boulders, for hazard (and thus risk) analysis purposes.
The accuracy of the rockfall runout envelope and frequency distribution can be improved by comprehensive vegetation and substrate mapping. The topics above define the scope of the project, limiting the focus to rockfall processes on the Port Hills, and implications for model calibration for the wider scientific community. The results provide a useful rockfall analysis methodology with a defensible and replicable calibration process that has the potential to be applied to other lithologies and substrates. Its applications include a method of analysis for the selection and positioning of rockfall countermeasure design; site safety assessment for scaling and demolition works; and risk analysis and land planning for future construction in Christchurch.
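For context on the shadow-angle tool mentioned above: a boulder's shadow angle is commonly defined from its fall height H and horizontal travel distance L measured from the talus apex (a generic definition; the exact convention used with the tool in the thesis may differ):

```latex
\beta = \arctan\left(\frac{H}{L}\right)
```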
11

Xinyu, Chang. "Neuron Segmentation and Inner Structure Analysis of 3D Electron Microscopy Data." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1369834525.

12

Rajamanoharan, Georgia. "Towards spatial and temporal analysis of facial expressions in 3D data." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/31549.

Abstract:
Facial expressions are one of the most important means for communication of emotions and meaning. They are used to clarify and give emphasis, to express intentions, and form a crucial part of any human interaction. The ability to automatically recognise and analyse expressions could therefore prove to be vital in human behaviour understanding, which has applications in a number of areas such as psychology, medicine and security. 3D and 4D (3D+time) facial expression analysis is an expanding field, providing the ability to deal with problems inherent to 2D images, such as out-of-plane motion, head pose, and lighting and illumination issues. Analysis of data of this kind requires extending successful approaches applied to the 2D problem, as well as the development of new techniques. The introduction of recent new databases containing appropriate expression data, recorded in 3D or 4D, has allowed research into this exciting area for the first time. This thesis develops a number of techniques, both in 2D and 3D, that build towards a complete system for analysis of 4D expressions. Suitable feature types, designed by employing binary pattern methods, are developed for analysis of 3D facial geometry data. The full dynamics of 4D expressions are modelled, through a system reliant on motion-based features, to demonstrate how the different components of the expression (neutral-onset-apex-offset) can be distinguished and harnessed. Further, the spatial structure of expressions is harnessed to improve expression component intensity estimation in 2D videos. Finally, it is discussed how this latter step could be extended to 3D facial expression analysis, and also combined with temporal analysis. Thus, it is demonstrated that both spatial and temporal information, when combined with appropriate 3D features, is critical in analysis of 4D expression data.
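As a sketch of the binary-pattern feature family referred to above, applied here to a stand-in depth image (illustrative only; the thesis develops its own variants for 3D facial geometry):

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(2)
depth = rng.integers(0, 256, size=(64, 64))   # stand-in for a facial depth map

# Uniform LBP with 8 neighbours at radius 1; the normalised histogram of
# pattern codes is the per-region feature vector.
lbp = local_binary_pattern(depth, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)
print(hist.round(3))
```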
13

Coban, Sophia. "Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/practical-approaches-to-reconstruction-and-analysis-for-3d-and-dynamic-3d-computed-tomography(f34a2617-09f9-4c4e-9669-f86f6cf2bce5).html.

Abstract:
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However, there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims of this thesis is to bridge this gap with mathematical reconstruction algorithms and analysis approaches applied to practical CT problems. The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing it against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied which are relevant to both mathematicians and experimental scientists. These include the variance in quality of the reconstructed volume as the dose is reduced, and the implementation of the level-set evolution method used as part of a simultaneous reconstruction and segmentation technique. The work shows that the assessment of physical attributes results in more accurate conclusions. Furthermore, this approach allows for further analysis of interesting questions in CT. This theme is continued throughout the thesis. Recent results in compressive sensing (CS) have gained attention in the CT community, as they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. The literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity-regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting, and seeks to establish a direct connection between the success of sparsity regularization methods and the sparsity level of the image, which is similar to CS. Using this connection, one can determine the sufficient amount of measurements to collect from just the sparsity of an image. A link was found in a previous study using simulated data, and the work is repeated here with experimental data, where the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an "almost-linear" relationship between the sparsity of the image and the sufficient amount of data for a successful sparsity-regularized reconstruction. Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation of their cause or established techniques to remove them. The work presented in this paper shows that these artefacts are due to a non-linearity in the measured data, which comes from either the set-up of the system, the scattering of rays, or the dependency of linear attenuation on wavelength in the polychromatic case. However, even in monochromatic CT systems the non-linearity effect can be detected. The paper shows that in some cases the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real gamma-ray data.
When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements, in that the non-linear effect is largely eliminated. The thesis finishes with a highlight article in the special issue of Solid Earth named "Pore-scale tomography & imaging - applications, techniques and recommended practice". The paper presents a major technical advancement in dynamic 3D CT data acquisition, where the latest hardware and an optimal data acquisition plan were applied and, as a result, ultra-fast 3D volume acquisition was made possible. The experiment comprised fast, free-falling water-saline drops traveling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods, and physical quantification analysis was performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging clearly shows the fluid interaction with speed, gravity and, more importantly, the inter- and intra-grain fluid transfers.
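To make "sparsity regularized reconstruction" concrete, here is a minimal iterative soft-thresholding (ISTA) sketch for min_x ½‖Ax − b‖² + λ‖x‖₁, with a random matrix standing in for the CT projection operator; this is an illustration, not the solver used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 120, 400, 10                     # measurements, unknowns, sparsity level
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for a (normalised) projector
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - b))                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
print(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

The "almost-linear" relationship described above concerns how large m must be, relative to the sparsity level k, for such a reconstruction to succeed.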
14

Trapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.

Abstract:
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that allow the augmentation of the perceptive and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system:
• Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate navigation.
• Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models.
• Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception.
The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
15

Böniger, Urs. "Attributes and their potential to analyze and interpret 3D GPR data." Phd thesis, Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/5012/.

Abstract:
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution, near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based, near-surface images has significantly matured. At the same time, the analysis of oil- and gas-related reflection seismic data sets has experienced significant advances. Considering the sensitivity of attribute analysis with respect to data positioning in general, and multi-trace attributes in particular, trace positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). The combination of current GPR systems' capability of fusing global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate a GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To assess the feasibility of this setup, I studied the major limitations of such an approach: system cross-talk and data delays known as latencies. Experimental studies have shown that when a minimal distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the acquired GPR data using radio communication equals the one without radio communication. To address the limitations imposed by system latencies, inherent to all real-time data fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and to correct for it. This resulted in the centimeter trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys. Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological ones typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that the incorporation of additional data sets (magnetic and topographic) and attributes derived from these data sets can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source location of magnetic anomalies by concurrently analyzing GPR time/depth slices. In addition to archaeological targets, subsurface utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently.
Geometrical enhancement makes use of an improved symmetry extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis. The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features. Ground-truth information obtained by excavations confirmed this interpretation. In the future, the inclusion of cross-polarized antenna configurations in the processing scheme may further improve the quality of the depolarization attribute. In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on the subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach. Based on different synthetic examples, I showed that the modified tree-based pursuit approach clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from the investigation of tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing. Frequency modulation of the individual atoms themselves allows frequency attenuation effects to be corrected efficiently and resolution to be improved by shifting the average frequency level. GPR-based attribute analysis is still in its infancy. Considering the increasingly widespread realization of 3D GPR studies, there will certainly be a growing demand for improved subsurface interpretations in the future. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, combined parameter estimation represents a further step in emphasizing the potential of attribute-driven GPR data analyses.
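A minimal matching-pursuit sketch over a generic unit-norm dictionary, to show the greedy decomposition idea; the tree-based variant and the time-frequency (Gabor-type) dictionary of the thesis are not reproduced:

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms=10):
    """Greedily pick the dictionary column most correlated with the
    residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        correlations = D.T @ residual
        i = int(np.argmax(np.abs(correlations)))
        c = float(correlations[i])
        residual -= c * D[:, i]
        atoms.append(i)
        coeffs.append(c)
    return atoms, coeffs, residual

rng = np.random.default_rng(4)
D = rng.normal(size=(256, 1024))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
signal = 2.0 * D[:, 7] - 1.0 * D[:, 42]  # synthetic two-atom signal
print(matching_pursuit(signal, D, n_atoms=2)[0])
```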
16

Wright, Gabriel J. T. "Automated 3D echocardiography analysis : advanced methods and their evaluation on clinical data." Thesis, University of Oxford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275378.

17

Li, Tianyou. "3D Representation of EyeTracking Data : An Implementation in Automotive Perceived Quality Analysis." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291222.

Abstract:
The importance of perceived quality within the automotive industry has been rapidly increasing in recent years. Since judgment concerning perceived quality is a highly subjective process, eye-tracking technology is one of the best approaches to extract customers' subconscious visual activity during interaction with the product. This thesis aims to find an appropriate solution for representing 3D eye-tracking data for further improvements in the validity and verification efficiency of perceived quality analysis, attempting to answer the question: How can eye-tracking data be presented and integrated into a 3D automobile design workflow as a material that allows designers to understand their customers better? In the study, a prototype system was built for car-interior inspection in a virtual reality (VR) showroom through an explorative research process, including investigations into the acquisition of gaze data in VR, the classification of eye movements from the collected gaze data, and visualizations for the classified eye movements. The prototype system was then evaluated through comparisons between algorithms and feedback from the engineers who participated in the pilot study. As a result, a method combining I-VT (identification by velocity threshold) and DBSCAN (density-based spatial clustering of applications with noise) was implemented as the optimum algorithm for eye movement classification. A modified heat map, a cluster plot and a convex hull plot, together with textual information, were used to construct the complete visualization of the eye-tracking data. The prototype system has enabled car designers and engineers to examine both the customers' and their own visual behavior in the 3D virtual showroom during a car inspection, followed by the extraction and visualization of the collected gaze data. This paper presents the research process, including an introduction to the relevant theory, the implementation of the prototype system, and its results. Eventually, strengths and weaknesses, as well as future work on both the prototype solution itself and potential experimental use cases, are discussed.
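A minimal sketch of the I-VT + DBSCAN combination described above; the sampling rate, velocity threshold and clustering parameters are invented, and this is not the thesis implementation:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
rate_hz = 120.0
gaze = np.cumsum(rng.normal(scale=0.05, size=(600, 3)), axis=0)  # fake 3D gaze points

# I-VT: samples whose sample-to-sample velocity falls below a threshold
# are labelled as fixation samples.
velocity = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * rate_hz
is_fixation = np.r_[False, velocity < 10.0]   # invented threshold

# DBSCAN then groups the fixation samples into spatial clusters, which can
# feed the heat map / cluster / convex hull plots described above.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(gaze[is_fixation])
print(np.unique(labels))
```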
18

黃卓鴻 and Cheok-hung Wong. "An analysis of the use of an interactive 3D hypermedia paradigm for architecture." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31237824.

19

Achanta, Leela Venkata Naga Satish. "Data extraction for scale factor determination used in 3D-photogrammetry for plant analysis." Kansas State University, 2013. http://hdl.handle.net/2097/15975.

Abstract:
Master of Science
Department of Computing and Information Sciences
Mitchell L. Neilsen
ImageJ and its recent upgrade, Fiji, are image processing tools that provide extensibility via Java plug-ins and recordable macros [2]. The aim of this project is to develop a plug-in compatible with ImageJ/Fiji that extracts length information from images for scale factor determination, used in 3-D photogrammetry for plant analysis [5]. When plant images are processed using Agisoft software, the processed images are merged into a single 3-D model. The coordinate system of the generated 3-D image is a relative coordinate system: distances in it are proportional to, but not numerically the same as, real-world distances. To express the length of any feature represented in the 3-D model in real-world units, a scale factor is required. This scale factor, when multiplied by a distance in the relative coordinate system, yields the actual length of that feature in the real coordinate system. For determining the scale factor, we process images of unsharpened yellow pencils, which are all the same shape, color, and size. The plug-in treats each pencil as a unique region by assigning a unique value and a unique color to all its pixels. The distance between the end midpoints of each pencil is calculated. The date and time on which the image file is processed, the name of the image file, the image file's creation and modification dates and times, the total number of valid (complete) pencils processed, the midpoints of the ends of each valid pencil, and the length (distance), i.e., the number of pixels between the two end midpoints, are all written to the output file. The pencil lengths written to the output file are used by researchers to calculate the scale factor. The plug-in was tested on real images, and the results obtained matched the expected results.
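The scale-factor logic amounts to dividing a known real-world length by the corresponding distance in the relative coordinate system. A minimal sketch, assuming a 175 mm pencil length and invented coordinates:

```python
import numpy as np

# End midpoints of one detected pencil in the model's relative coordinates
# (illustrative values, as produced by the plug-in's output file).
end_a = np.array([12.4, 7.1, 3.9])
end_b = np.array([14.9, 9.8, 4.2])

relative_length = np.linalg.norm(end_b - end_a)   # distance in model units
known_length_mm = 175.0                           # assumed true pencil length

scale_factor = known_length_mm / relative_length  # mm per model unit

# Any other relative distance can now be converted to real-world units.
feature_relative = 3.7
print(feature_relative * scale_factor, "mm")
```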
20

Boguslawski, Pawel. "Modelling and analysing 3D building interiors with the dual half-edge data structure." Thesis, University of South Wales, 2011. https://pure.southwales.ac.uk/en/studentthesis/modelling-and-analysing-3d-building-interiors-with-the-dual-halfedge-data-structure(ac1af643-835a-4093-90cd-3d51c696e280).html.

Abstract:
While many systems and standards, such as CAD systems or CityGML, permit the user to represent the geometry and semantics of building interior models, their use for applications where spatial analysis and/or real-time modifications are required is limited, since they lack the ability to store topological relationships between the elements. In this thesis a new topological data structure, the dual half-edge (DHE), is presented. It permits the representation of the topology of building models, interior included. It is based on the idea of simultaneously storing a graph in 3D space and its dual graph, and linking the two. Euler-type operators for incrementally constructing 3D models (for adding individual edges, faces and volumes to the model while updating the dual structure simultaneously) and navigation operators (for example, to navigate from a given point to all the connected planes or polyhedra) are proposed. The DHE also permits the assigning of attributes to any element. This technique allows the handling of important query types and supports analysis based on the building structure, for example finding the nearest exterior exit to a given room, as in disaster management planning. As the structure is locally modifiable, the model may be adapted whenever a particular pathway is no longer available. The proposed DHE structure adds significant analytic value to the increasingly popular CityGML model, and to the CAD field, where the dual structure is of growing interest.
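As a rough illustration of pairing each half-edge with a dual pointer, here is a minimal Python sketch. It only mirrors the flavour of the DHE and omits the Euler operators and the full dual-graph bookkeeping described in the thesis; all names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HalfEdge:
    """One directed edge record of the primal (or dual) graph."""
    origin: int                              # index of the start vertex
    twin: Optional["HalfEdge"] = None        # opposite half-edge of the same edge
    next: Optional["HalfEdge"] = None        # next half-edge around the face
    dual: Optional["HalfEdge"] = None        # linked half-edge in the dual graph
    attributes: dict = field(default_factory=dict)  # e.g. semantics ("room", "wall")

def make_edge(u: int, v: int) -> HalfEdge:
    """Create a primal edge u->v as a pair of twinned half-edges."""
    he, opp = HalfEdge(u), HalfEdge(v)
    he.twin, opp.twin = opp, he
    return he

def face_loop(start: HalfEdge):
    """Navigation example: walk the boundary of a face (next pointers set)."""
    he = start
    while True:
        yield he
        he = he.next
        if he is start:
            break
```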
21

Wang, Xiyao. "Augmented reality environments for the interactive exploration of 3D data." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG052.

Abstract:
Exploratory visualization of 3D data is fundamental in many scientific domains. Traditionally, experts use a PC workstation and rely on mouse and keyboard to interactively adjust the view to observe the data. This setup provides immersion through interaction---users can precisely control the view and the parameters, but it does not provide any depth cues, which can limit the comprehension of large and complex 3D data. Virtual or augmented reality (V/AR) setups, in contrast, provide visual immersion with stereoscopic views. Although their benefits have been proven, several limitations restrict their application to existing workflows, including high setup/maintenance needs, difficulties of precise control, and, more importantly, the separation from traditional analysis tools. To benefit from both sides, we thus investigated a hybrid setting combining an AR environment with a traditional PC to provide both interactive and visual immersion for 3D data exploration. We closely collaborated with particle physicists to understand their general working process and visualization requirements to motivate our design. First, building on our observations and discussions with physicists, we built a prototype that supports fundamental tasks for exploring their datasets. This prototype treated the AR space as an extension of the PC screen and allowed users to freely interact with each using the mouse. Thus, experts could benefit from the visual immersion while using analysis tools on the PC. An observational study with 7 physicists at CERN validated the feasibility of such a hybrid setting and confirmed the benefits. We also found that the large canvas of the AR and walking around to observe the data in AR had great potential for data exploration. However, the design of mouse interaction in AR and the use of PC widgets in AR needed improvement. Second, based on the results of the first study, we decided against intensively using flat widgets in AR. But we wondered if using the mouse for navigating in AR is problematic compared to high-degrees-of-freedom (DOF) input, and then attempted to investigate whether the match or mismatch of dimensionality between input and output devices plays an important role in users' performance. Results of user studies (which compared the performance of using a mouse, a space mouse, and a tangible tablet paired with the screen or the AR space) did not show that the (mis-)match was important. We thus concluded that dimensionality was not a critical point to consider, which suggests that users are free to choose any input that is suitable for a specific task. Moreover, our results suggested that the mouse was still an efficient tool compared to high-DOF input. We can therefore validate our design of keeping the mouse as the primary input for the hybrid setting, while other modalities should only serve as an addition for specific use cases. Next, to support interaction and to keep the background information while users are walking around to observe the data in AR, we proposed to add a mobile device. We introduced a novel approach that augments tactile interaction with pressure sensing for 3D object manipulation and view navigation. Results showed that this method could efficiently improve accuracy, with limited influence on completion time. We thus believe that it is useful for visualization purposes where high accuracy is usually demanded.
Finally, we summarized all our findings and proposed an envisioned setup for a realistic data exploration scenario that makes use of a PC workstation, an AR headset, and a mobile device. The work presented in this thesis shows the potential of combining a PC workstation with AR environments to improve the process of 3D data exploration and confirms its feasibility, all of which will hopefully inspire future designs that seamlessly bring immersive visualization to existing scientific workflows.
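The idea of modulating navigation with pressure can be illustrated with a toy transfer function; the quadratic easing and all constants are invented for the example, not taken from the thesis.

```python
def pressure_to_gain(pressure, min_gain=0.1, max_gain=1.0):
    """Map normalized touch pressure in [0, 1] to a motion gain.

    Light pressure -> small gain (precise adjustments);
    firm pressure  -> large gain (coarse, fast navigation).
    The quadratic easing is an arbitrary illustrative choice.
    """
    p = min(max(pressure, 0.0), 1.0)
    return min_gain + (max_gain - min_gain) * p * p

# A 10-degree drag rotates the object by 1..10 degrees depending on pressure.
for p in (0.1, 0.5, 1.0):
    print(p, 10.0 * pressure_to_gain(p))
```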
22

Afsar, Fatima. "ANALYSIS AND INTERPRETATION OF 2D/3D SEISMIC DATA OVER DHURNAL OIL FIELD, NORTHERN PAKISTAN." Thesis, Uppsala universitet, Geofysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-202565.

Abstract:
The study area, the Dhurnal oil field, is located 74 km southwest of Islamabad in the Potwar basin of Pakistan. Discovered in March 1984, the field was developed with four producing wells and three water injection wells. Three main limestone reservoirs of Eocene and Paleocene age are present in this field. These limestone reservoirs are tectonically fractured, and all production is derived from these fractures. The overlying claystone formation of Miocene age provides a vertical and lateral seal to the Paleocene and Permian carbonates. The field started production in May 1984, reaching a maximum rate of 19,370 BOPD in November 1989. Currently, the Dhurnal-1 (D-1) and Dhurnal-6 (D-6) wells are producing 135 BOPD and 0.65 MMCF/D of gas. The field has depleted after producing over 50 million bbl of oil and 130 BCF of gas from naturally fractured, low-energy shelf carbonates of the Eocene, Paleocene and Permian reservoirs. Preliminary geological and geophysical data evaluation of the Dhurnal field revealed the presence of an up-dip anticlinal structure between the D-1 and D-6 wells, seen on the newly reprocessed 2003 data. However, this structural impression is not observed on the old 1987 processed data. The aim of this research is to compare and evaluate the old and newly reprocessed data in order to identify possible factors affecting the structural configuration. For this purpose, a detailed interpretation of the old and newly reprocessed data was carried out, and the results clearly demonstrate that structural compartmentalization exists in the Dhurnal field (based on the 2003 data). Therefore, to further analyse the available data sets, the processing sequences of both vintages were examined. After considerable effort and detailed investigation, it is concluded that the major parameter giving rise to this data discrepancy is the velocity analysis, done with different gridding intervals. The detailed and dense velocity analysis carried out on the data in 2003 was able to image the subtle anticlinal feature, which was missed on the 1987 processed seismic data due to sparse gridding. In addition, about 105 sq. km of 3D seismic data recently (2009) acquired by Ocean Pakistan Limited (OPL) is also interpreted in this project to gain greater confidence in the results. The 3D geophysical interpretation confirmed the findings and aided in accurately mapping the remaining hydrocarbon potential of the Dhurnal field.
23

Fangerau, Jens [Verfasser], and Heike [Akademischer Betreuer] Leitte. "Interactive Similarity Analysis for 3D+t Cell Trajectory Data / Jens Fangerau ; Betreuer: Heike Leitte." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180301900/34.

24

Millán, Vaquero Ricardo Manuel [Verfasser]. "Visualization methods for analysis of 3D multi-scale medical data / Ricardo Manuel Millán Vaquero." Hannover : Technische Informationsbibliothek (TIB), 2016. http://d-nb.info/111916088X/34.

25

Nellist, Clara. "Characterisation and beam test data analysis of 3D silicon pixel detectors for the ATLAS upgrade." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/characterisation-and-beam-test-data-analysis-of-3d-silicon-pixel-detectors-for-the-atlas-upgrade(22a82583-5588-4675-af5c-c3595b4ceb38).html.

Abstract:
3D silicon pixel detectors are a novel technology where the electrodes penetrate the silicon bulk perpendicularly to the wafer surface. As a consequence, the collection distance is decoupled from the wafer thickness, resulting in a pixel detector that is radiation hard by design. Between 2010 and 2012, 3D silicon pixel detectors underwent an intensive programme of beam test experiments. As a result, 3D silicon has successfully qualified for the ATLAS upgrade project, the Insertable B-Layer (IBL), which will be installed in the long shutdown in 2013-14. This thesis presents selected results from these beam test studies with 3D sensors bonded to both current ATLAS readout cards (FE-I3) and newly developed readout cards for the IBL (FE-I4). 3D devices were studied using 4 GeV positrons at DESY and 120 GeV pions at the SPS at CERN. Measurements presented include tracking efficiency (of the whole sensor, the pixel and the area around the electrodes), studies of the active edge pixels of SINTEF devices, and cluster size distributions as a function of incident angle for IBL 3D design sensors. A simulation of 3D silicon sensors in an antiproton beam test for the AEgIS experiment, with comparison to experimental results and a previous simulation, is also presented.
26

Ramírez, Jiménez Guillermo. "Electric sustainability analysis for concrete 3D printing machine." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258928.

Abstract:
Nowadays, manufacturing technologies are becoming more and more aware of efficiency and sustainability. One of them is so-called 3D printing. While 3D printing is often linked to plastic, the truth is that many other materials are being tested which could offer several improvements over plastics. One of these options is stone or concrete, which is more suitable for the architectural and artistic fields. However, due to its nature, this new technology involves the use of new techniques compared to the more commonly used 3D printers. This implies that it could be interesting to know how energy efficient these techniques are and how they can be improved in future revisions. This thesis is an attempt to disclose and analyze the different devices that make up one of these printers and, with this information, build a model that accurately describes its behavior. For this purpose, the power is measured at many points and later analyzed and fitted to a predefined function. After the fitting has been done, an error is calculated to show how accurate the model is when compared to the original data. It was found that many of these devices produce power spikes due to their nonlinear behavior. This behavior is usually related to switching and can be avoided with different devices. Finally, some advice is given focused on future research and revisions, which could be helpful for safety, efficiency and quality.
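The fit-then-measure-error workflow can be sketched with SciPy; the exponential-plus-offset device model and the synthetic samples are assumptions for illustration, not the model used in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(t, p0, a, tau):
    """Assumed device model: constant draw plus a decaying switching transient."""
    return p0 + a * np.exp(-t / tau)

# Synthetic measurements standing in for the logged power samples.
t = np.linspace(0.0, 10.0, 200)
measured = power_model(t, 45.0, 20.0, 1.5) + np.random.normal(0.0, 0.8, t.size)

params, _ = curve_fit(power_model, t, measured, p0=(40.0, 10.0, 1.0))
fitted = power_model(t, *params)

# Error of the model against the original data, as in the thesis workflow.
rmse = np.sqrt(np.mean((measured - fitted) ** 2))
print(params, rmse)
```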
27

Bagesteiro, Leia Bernardi. "Development of a ground reaction force-measuring treadmill for the analysis of prosthetic limbs during amputee running." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/676/.

28

Morlot, Jean-Baptiste. "Annotation of the human genome through the unsupervised analysis of high-dimensional genomic data." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066641/document.

Abstract:
The human body has more than 200 different cell types, each containing an identical copy of the genome but expressing a different set of genes. The control of gene expression is ensured by a set of regulatory mechanisms acting at different scales of time and space. Several diseases are caused by a disturbance of this system, notably some cancers, and many therapeutic applications, such as regenerative medicine, rely on understanding the mechanisms of gene regulation. This thesis proposes, in a first part, an annotation algorithm (GABI) to identify recurrent patterns in high-throughput sequencing data. The particularity of this algorithm is that it takes into account the variability observed in experimental replicates by optimizing the rates of false positives and false negatives, significantly increasing the annotation reliability compared to the state of the art. The annotation provides simplified and robust information from a large dataset. Applied to a database of regulator activity in hematopoiesis, we propose original results, in agreement with previous studies. The second part of this work focuses on the 3D organization of the genome, intimately linked to gene expression. This structure is now accessible thanks to 3D reconstruction algorithms applied to contact data between chromosomes. We offer improvements to the currently most efficient algorithm in the field, ShRec3D, allowing the reconstruction to be adjusted according to the user's needs.
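The ShRec3D-style pipeline (contacts to distances, shortest paths, then embedding) can be sketched as follows; this is a generic re-implementation of the published idea, not the adjustable variant developed in the thesis, and `alpha` is an arbitrary parameter.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def reconstruct_3d(contacts, alpha=1.0):
    """Embed a chromatin contact matrix into 3D (ShRec3D-style sketch).

    contacts: symmetric (n, n) matrix of contact frequencies.
    """
    # 1. Convert contact frequencies to distances (more contacts = closer).
    with np.errstate(divide="ignore"):
        dist = 1.0 / np.power(contacts, alpha)
    dist[~np.isfinite(dist)] = 0.0   # zero-contact pairs: no direct edge
    np.fill_diagonal(dist, 0.0)

    # 2. Enforce metric consistency with all-pairs shortest paths.
    dist = shortest_path(dist, method="FW", directed=False)

    # 3. Classical multidimensional scaling down to 3 dimensions.
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:3]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```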
29

Ragnucci, Beatrice. "Data analysis of collapse mechanisms of a 3D printed groin vault in shaking table testing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22365/.

Abstract:
The aim of this novel experimental study is to investigate the behaviour of a 2 m x 2 m model of a masonry groin vault, built by assembling blocks made of a 3D-printed plastic skin filled with mortar. The groin vault was chosen because this vulnerable roofing system is widespread in the historical heritage. Experimental tests on the shaking table are carried out to explore the vault response under two support boundary conditions, involving four lateral confinement modes. Processing the marker displacement data has made it possible to examine the collapse mechanisms of the vault, based on the deformed shapes of the arches. A numerical evaluation then follows, to provide the orders of magnitude of the displacements associated with the previous mechanisms. Given that these displacements are related to the shortening and elongation of the arches, the last objective is the definition of a critical elongation between two diagonal bricks and, consequently, of a diagonal portion. This study aims to continue the previous work and take another step forward in the research into ground motion effects on masonry structures.
30

Morotti, Elena. "Reconstruction of 3D X-ray tomographic images from sparse data with TV-based methods." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3423265.

Abstract:
This dissertation presents efficient implementations of iterative X-ray image reconstruction methods for the specific case of three-dimensional tomographic imaging from subsampled data. When a complete projection dataset is not available, the linear system describing so-called Sparse Tomography (SpCT) is underdetermined, hence a Total Variation (TV) regularized model is considered. The resulting optimization problem is solved by a Scaled Gradient Projection algorithm and a Fixed Point method. Both are accelerated by effective strategies, specifically tuned for an SpCT framework where fast reconstructions must be provided in a short run time while facing a very large-scale problem. Good results on digital simulations attest to the reliability of the model-based approach and of the proposed schemes. Accurate reconstructions from real medical datasets are also achieved in few iterations, confirming the feasibility of the proposed approaches to sparse tomographic imaging.
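In generic form, such a TV-regularized model minimizes a data-fidelity term plus the total variation of the volume; a standard formulation (our notation, not copied from the thesis) is:

```latex
\min_{x \ge 0} \; \frac{1}{2}\,\| A x - b \|_2^2 \;+\; \lambda\,\mathrm{TV}(x),
\qquad
\mathrm{TV}(x) = \sum_{i,j,k} \sqrt{ (D_1 x)_{i,j,k}^2 + (D_2 x)_{i,j,k}^2 + (D_3 x)_{i,j,k}^2 }
```

where A is the projection operator, b the subsampled measurements, lambda > 0 the regularization weight, and D1, D2, D3 finite-difference operators along the three axes of the volume.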
31

Tabbi, Giuseppe Teodoro Maria [Verfasser]. "Parallelization of a Data-Driven Independent Component Analysis to Analyze Large 3D-Polarized Light Imaging Data Sets / Giuseppe Teodoro Maria Tabbi." Wuppertal : Universitätsbibliothek Wuppertal, 2016. http://d-nb.info/1120027241/34.

32

Meinhardt, Llopis Enric. "Morphological and statistical techniques for the analysis of 3D images." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/22719.

Abstract:
This thesis proposes a tree data structure to encode the connected components of level sets of 3D images. This data structure is applied as the main tool in several proposed applications: 3D morphological operators, medical image visualization, analysis of color histograms, object tracking in videos, and edge detection. Motivated by the problem of edge linking, the thesis also contains a study of anisotropic total variation denoising as a tool for computing anisotropic Cheeger sets. These anisotropic Cheeger sets can be used to find global optima of a class of edge-linking functionals. They are also related to some affine-invariant descriptors which are used in object recognition, and this relationship is laid out explicitly.
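The underlying notion of connected components of level sets can be illustrated with SciPy; this shows only the elementary per-threshold labeling, not the tree structure itself, and the thresholds are arbitrary.

```python
import numpy as np
from scipy import ndimage

def level_set_components(volume, thresholds):
    """For each level, label the connected components of the upper level set.

    volume: 3D numpy array; thresholds: iterable of levels.
    Returns {level: (labels, n_components)}; the nesting of these components
    across levels is what a component tree encodes.
    """
    out = {}
    structure = np.ones((3, 3, 3), dtype=bool)  # 26-connectivity
    for level in thresholds:
        labels, n = ndimage.label(volume >= level, structure=structure)
        out[level] = (labels, n)
    return out

vol = np.random.rand(32, 32, 32)
comps = level_set_components(vol, thresholds=[0.5, 0.7, 0.9])
print({lvl: n for lvl, (lab, n) in comps.items()})
```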
33

Momcheva, Ivelina G., Gabriel B. Brammer, Dokkum Pieter G. van, Rosalind E. Skelton, Katherine E. Whitaker, Erica J. Nelson, Mattia Fumagalli, et al. "THE 3D-HST SURVEY: HUBBLE SPACE TELESCOPE WFC3/G141 GRISM SPECTRA, REDSHIFTS, AND EMISSION LINE MEASUREMENTS FOR ∼100,000 GALAXIES." IOP PUBLISHING LTD, 2016. http://hdl.handle.net/10150/621407.

Abstract:
We present reduced data and data products from the 3D-HST survey, a 248-orbit HST Treasury program. The survey obtained WFC3 G141 grism spectroscopy in four of the five CANDELS fields: AEGIS, COSMOS, GOODS-S, and UDS, along with WFC3 H_140 imaging, parallel ACS G800L spectroscopy, and parallel I_814 imaging. In a previous paper, we presented photometric catalogs in these four fields and in GOODS-N, the fifth CANDELS field. Here we describe and present the WFC3 G141 spectroscopic data, again augmented with data from GO-1600 in GOODS-N (PI: B. Weiner). We developed software to automatically and optimally extract interlaced two-dimensional (2D) and one-dimensional (1D) spectra for all objects in the Skelton et al. (2014) photometric catalogs. The 2D spectra and the multi-band photometry were fit simultaneously to determine redshifts and emission line strengths, taking the morphology of the galaxies explicitly into account. The resulting catalog has redshifts and line strengths (where available) for 22,548 unique objects down to JH_IR ≤ 24 (79,609 unique objects down to JH_IR ≤ 26). Of these, 5459 galaxies are at z > 1.5 and 9621 are at 0.7 < z < 1.5, where Hα falls in the G141 wavelength coverage. The typical redshift error for JH_IR ≤ 24 galaxies is σ_z ≈ 0.003 × (1 + z), i.e., one native WFC3 pixel. The 3σ limit for emission line fluxes of point sources is 2.1 × 10⁻¹⁷ erg s⁻¹ cm⁻². All 2D and 1D spectra, as well as redshifts, line fluxes, and other derived parameters, are publicly available.
34

Comino, Trinidad Marc. "Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670373.

Abstract:
Over the last few years, there has been remarkable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
- Time-of-flight (terrestrial and aerial LiDAR).
- Photogrammetry (street-level, satellite, and aerial imagery).
- Human-edited vector data (cadastre and other map sources).
Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented plain regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. One other advantage of this approach is capturing high-quality color data, whereas the geometric information is usually lacking. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions (the first of which is sketched below):
- Effective and feature-preserving simplification of massive point clouds.
- Normal estimation algorithms explicitly designed for LiDAR data.
- Low-stretch panoramic representation for point clouds.
- Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
- Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
- Efficient and faithful visualization of massive point clouds using image-based techniques.
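The first contribution can be hinted at with a plain voxel-grid downsampling sketch; the actual method in the thesis is more elaborate and feature-preserving, and the cell size here is an arbitrary assumption.

```python
import numpy as np

def voxel_downsample(points, cell=0.05):
    """Keep one representative point (the centroid) per occupied voxel.

    points: (n, 3) array in metres; cell: voxel edge length.
    """
    keys = np.floor(points / cell).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

cloud = np.random.rand(100000, 3)
print(voxel_downsample(cloud, cell=0.1).shape)
```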
35

Tašárová, Zuzana. "Gravity data analysis and interdisciplinary 3D modelling of a convergent plate margin (Chile, 36°-42°S)." [S.l. : s.n.], 2004. http://www.diss.fu-berlin.de/2005/19/index.html.

36

Borke, Lukas. "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA." Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18307.

Abstract:
With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved into a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers, promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic-driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
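The VSM to LSA to clustering pipeline maps onto standard scikit-learn components; a minimal sketch with invented documents and arbitrary dimensions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "quantile regression in R",          # stand-ins for the README files
    "lasso and ridge regression demo",   # of GitHub repositories
    "d3 interactive bar chart",
    "three.js 3d scatter visualization",
]

# VSM step: term-document matrix with TF-IDF weighting.
X = TfidfVectorizer().fit_transform(docs)

# LSA step: truncated SVD projects documents into a latent semantic space.
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

# Clustering step: group repositories by latent topic.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_lsa)
print(labels)
```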
37

Khokhlova, Margarita. "Évaluation clinique de la démarche à partir de données 3D." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK079.

Abstract:
Clinical gait analysis is traditionally subjective, being performed by clinicians observing patients' gait. A common alternative to such analysis is marker-based systems and ground-force-platform-based systems. However, this standard gait analysis requires specialized locomotion laboratories, expensive equipment, and lengthy setup and post-processing times. Researchers have made numerous attempts to propose a computer vision based alternative for clinical gait analysis. With the appearance of commercial 3D cameras, the problem of qualitative gait assessment was revisited. Researchers realized the potential of depth-sensing devices for motion analysis applications. However, despite much encouraging progress in 3D sensing technologies, their real use in clinical applications remains scarce. In this dissertation, we develop models and techniques for movement assessment using a Microsoft Kinect sensor. In particular, we study the possibility of using different data provided by an RGBD camera for motion and posture analysis. The main contributions of this dissertation are the following. First, we carried out a literature study to identify the important gait parameters, the feasibility of different possible technical solutions, and existing gait assessment methods. Second, we propose a 3D point cloud based posture descriptor. The designed descriptor can classify static human postures based on 3D data without the use of skeletonization algorithms. Third, we build an acquisition system to be used for gait analysis based on the Kinect v2 sensor. Fourth, we propose an abnormal gait detection approach based on the skeleton data. We demonstrate that our gait analysis tool works well on a collection of custom data and existing benchmarks. We show that our gait assessment approach advances progress in the field, is ready to be used in gait assessment scenarios, and requires minimal equipment.
38

Maglo, Adrien Enam. "Progressive and Random Accessible Mesh Compression." Phd thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00966180.

Abstract:
Previous work on progressive mesh compression focused on triangle meshes, but meshes containing other types of faces are commonly used. Therefore, we propose a new progressive mesh compression method that can efficiently compress meshes with arbitrary face degrees. Its compression performance is competitive with approaches dedicated to progressive triangle mesh compression. Progressive mesh compression is linked to mesh decimation because both applications generate levels of detail. Consequently, we propose a new simple volume metric to drive polygon mesh decimation. We apply this metric to the progressive compression and the simplification of polygon meshes. We then show that the features offered by progressive mesh compression algorithms can be exploited for 3D adaptation, by proposing a new framework for remote scientific visualization. Progressive random-accessible mesh compression schemes can better adapt 3D mesh data to the various constraints by taking regions of interest into account. We therefore propose two new progressive random-accessible algorithms. The first is based on an initial segmentation of the input model; each generated cluster is compressed independently with a progressive algorithm. The second is based on the hierarchical grouping of vertices obtained by the decimation. The advantage of this second method is that it offers a high random-accessibility granularity and generates one-piece decompressed meshes with smooth transitions between parts decompressed at low and high levels of detail. Experimental results demonstrate the compression and adaptation efficiency of both approaches.
39

Roberts, Ronald Anthony. "A new approach to Road Pavement Management Systems by exploiting Data Analytics, Image Analysis and Deep Learning." Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/492523.

40

Motakis, Efthimios. "Multi-scale approaches for the statistical analysis of microarray data (with an application to 3D vesicle tracking)." Thesis, University of Bristol, 2007. http://hdl.handle.net/1983/6a764dc8-c4b8-4034-94cc-e58457825a47.

Abstract:
The recent developments in experimental methods for gene data analysis, called microarrays, provide the possibility of interrogating changes in the expression of a vast number of genes in cell or tissue cultures, and thus of in-depth exploration of disease conditions. As part of an ongoing program of research in the Guy A. Rutter (G.A.R.) laboratory, Department of Biochemistry, University of Bristol, UK, with support from the Wellcome Trust, we study the impact of established and of potentially new methods on the statistical analysis of gene expression data.
41

Gafeira, Gonçalves Joana. "Submarine mass movement processes on the North Sea Fan as interpreted from the 3D seismic data." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4714.

Abstract:
This research has focused on the characterisation and analysis of the deposits of large-scale mass movement events that have shaped the North Sea Fan since the Mid-Pleistocene. Located at the mouth of the Norwegian Channel cross-shelf trough, the North Sea Fan is one of the largest trough-mouth fans on the glaciated European margin, with an area of approximately 142,000 km2. Submarine mass movement processes have occurred intermittently throughout the Quaternary history of the North Sea Fan, related to recurrent climate-related episodes of growth and retreat of the ice sheets. These processes can transport large amounts of sediment from the upper shelf to the abyssal basins, playing an important role in the evolution of continental margins, and can also represent major geological hazards. This thesis uses mainly 3D seismic data to investigate the external geometry and internal structure of large-scale mass movement deposits. The high spatial resolution provided by the 3D seismic data has allowed a detailed geomorphological analysis of these deposits. This study involved the interpretation of the seismic data and the detailed picking of key reflectors, followed by the extraction of both horizon- and window-based seismic attributes. Digital elevation models of the key reflectors and their seismic attribute maps were then transferred to a geographical information system (GIS), where they were interactively interpreted using spatial analysis tools and the full visualisation potential of the software. The outcomes of this study highlight the importance of detailed horizon picking and interactive interpretation, followed by spatial analysis and visualisation in a GIS environment. The identification of acoustic patterns within deposits that are normally described from 2D seismic data as chaotic or acoustically transparent emphasizes the potential of detailed analysis of 3D seismic data. It gives an example of how this type of data can provide new insights into the mechanisms and processes associated with mass movements. In particular, amplitude and RMS amplitude maps provide remarkably detailed information on internal deformation structures, whereas slope, shaded-relief and thickness maps allow detailed characterisation of the external geometry. Various types of kinematic indicators can be recognized within the mass movement deposits through combined seismic analysis and detailed morphological mapping.
42

Dudziak, William James. "PRESENTATION AND ANALYSIS OF A MULTI-DIMENSIONAL INTERPOLATION FUNCTION FOR NON-UNIFORM DATA: MICROSPHERE PROJECTION." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1183403994.

43

Hofmann, Alexandra. "An Approach to 3D Building Model Reconstruction from Airborne Laser Scanner Data Using Parameter Space Analysis and Fusion of Primitives." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1121943034550-40151.

Abstract:
Within this work an approach was developed which utilises airborne laser scanner data to generate 3D building models. These 3D building models may be used for technical and environmental planning. The approach has to meet certain requirements, such as working automatically and robustly while being flexible in use yet still practicable. The approach starts with small point clouds, each containing one building at a time, extracted from the laser scanner data set by applying a pre-segmentation scheme. The laser scanner point cloud of each building is analysed separately. A 2.5D Delaunay triangle mesh structure (TIN) is computed on the laser scanner point cloud. For each triangle, the orientation parameters in space (orientation, slope and perpendicular distance to the barycentre of the laser scanner point cloud) are determined and mapped into a parameter space. As buildings are composed of planar features (primitives), triangles representing these features should group in parameter space. A cluster analysis technique is utilised to find and outline these groups/clusters. The clusters found in parameter space represent planar objects in object space. Grouping adjacent triangles in object space - which represent points in parameter space - enables the interpolation of planes in the ALS points that form the triangles. In each cluster point group, a plane in object space is interpolated. All planes derived from the data set are intersected with their appropriate neighbours. From this, a roof topology is established, which describes the shape of the roof. This ensures that each plane has knowledge of its directly adjacent neighbours. Walls are added to the intersected roof planes, and the virtual 3D building model is presented in a file written in VRML (Virtual Reality Modeling Language). Besides developing the 3D building model reconstruction scheme, this research focuses on the geometric reconstruction and the derivation of attributes of 3D building models. The developed method was tested on different data sets obtained from different laser scanner systems. This study also shows the potential and limits of the developed method when applied to these different data sets.
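The mapping of TIN triangles into an (orientation, slope, distance) parameter space can be sketched as follows; the thesis uses a dedicated cluster-analysis technique, so the DBSCAN call shown in the usage comment is only a stand-in, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def triangle_parameters(vertices, triangles, barycentre):
    """Map each TIN triangle to (orientation, slope, distance) parameters.

    vertices: (n, 3) laser points; triangles: (m, 3) vertex indices.
    """
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    normals = np.cross(b - a, c - a)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    normals[normals[:, 2] < 0] *= -1.0            # orient all normals upwards

    orientation = np.arctan2(normals[:, 1], normals[:, 0])   # azimuth
    slope = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))     # tilt from vertical
    # Perpendicular distance of the triangle plane to the point-cloud barycentre.
    distance = np.abs(np.einsum("ij,ij->i", normals, a - barycentre))
    return np.column_stack([orientation, slope, distance])

# Clusters in this space correspond to planar roof faces in object space, e.g.:
# params = triangle_parameters(pts, tris, pts.mean(axis=0))
# labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(params)
```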
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Yalcin, Bayramoglu Neslihan. "Range Data Recognition: Segmentation, Matching, And Similarity Retrieval." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613586/index.pdf.

Full text of the source
Abstract:
The improvements in 3D scanning technologies have led to the necessity of managing range image databases, and hence the requirement to describe and index this type of data. Up to now, much work has been done on capturing, transmission and visualization; however, there is still a gap in 3D semantic analysis between the requirements of applications and the obtained results. In this thesis we study the 3D semantic analysis of range data. Under this broad title we address the segmentation of range scenes, correspondence matching of range images and similarity retrieval of range models. Inputs are considered to be single-view depth images. First, possible research topics related to 3D semantic analysis are introduced. Planar structure detection in range scenes is analysed and some modifications to available methods are proposed. Also, a novel algorithm to segment a 3D point cloud (obtained via a TOF camera) into objects using spatial information is presented. We propose a novel local range image matching method that combines 3D surface properties with the 2D scale invariant feature transform. Next, our proposal for retrieving similar models, where the query and the database both consist only of range models, is presented. Finally, an analysis of the heat diffusion process on range data is presented, together with challenges and some experimental results.
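One plausible reading of "combining 3D surface properties with the 2D scale invariant feature transform" is sketched below in Python (an assumption-laden illustration, not the author's method): SIFT keypoints are detected on the depth image, and each descriptor is extended with the local surface normal estimated from depth gradients:

    import cv2
    import numpy as np

    def depth_sift_with_normals(depth):
        # depth: (H, W) float array of range values from a single-view depth image
        img = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        kps, desc = cv2.SIFT_create().detectAndCompute(img, None)
        if desc is None:                               # no keypoints found
            return [], np.empty((0, 131), np.float32)
        gy, gx = np.gradient(depth.astype(np.float32)) # depth gradients per axis
        feats = []
        for kp, d in zip(kps, desc):
            x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
            n = np.array([-gx[y, x], -gy[y, x], 1.0])  # normal of surface z = f(x, y)
            n /= np.linalg.norm(n)
            feats.append(np.concatenate([d, n]))       # 128-D SIFT + 3-D normal
        return kps, np.asarray(feats, np.float32)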
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Aijazi, Ahmad Kamal. "3D urban cartography incorporating recognition and temporal integration." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22528/document.

Full text of the source
Abstract:
Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community due to an ever-increasing demand for urban landscape analysis for different popular applications, coupled with advances in 3D data acquisition technology. As a result, in the last few years, work on the 3D modeling and visualization of cities has intensified. Lately, applications have been very successful in delivering effective visualizations of large-scale models based on aerial and satellite imagery to a broad audience. This has created a demand for ground-based models as the next logical step to offer 3D visualizations of cities. Integrated in several geographical navigators, like Google Street View, Microsoft Visual Earth or Geoportail, several such models are accessible to the large public, who enthusiastically view the realistic representation of the terrain created by mobile terrestrial image acquisition techniques. However, in urban environments, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporarily stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include efficient updating of the urban cartography, effective change detection in the urban environment, and issues like processing noisy data in the cluttered urban environment, matching/registration of point clouds in successive passages, and wide variations in environmental conditions. Another aspect that has attracted a lot of attention recently is the semantic analysis of the urban environment, to semantically enrich the 3D mapping of urban cities, necessary for various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration. We present in detail the current practices in the domain, along with the different methods, applications, recent data acquisition and mapping technologies, as well as the different problems and challenges associated with them. The work presented addresses many of these challenges, mainly pertaining to the classification of the urban environment, automatic change detection, efficient updating of 3D urban cartography and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. The objects classified as temporary are then removed from the 3D point cloud, leaving behind a perforated 3D point cloud of the urban environment. These perforations, along with other imperfections, are then analyzed and progressively removed by incremental updating exploiting the concept of multiple passages. We also show that the proposed method of temporal integration helps in improved semantic analysis of the urban environment, especially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well-updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results not only demonstrate the efficiency, scalability and technical strength of the method, but also show that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updating.
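The remove-then-fill idea of the abstract can be caricatured in a few lines of Python; the voxel size, the two-passage stability rule and all names below are assumptions for illustration, not the thesis code:

    import numpy as np

    def fuse_passages(passages, labels, voxel=0.2):
        # passages: list of (N_i, 3) point arrays from successive acquisitions
        # labels: matching list of boolean arrays, True where a point is permanent
        occupied = {}
        for pts, perm in zip(passages, labels):
            keep = pts[perm]                   # drop temporary objects (cars, pedestrians)
            keys = np.floor(keep / voxel).astype(int)
            for key, p in zip(map(tuple, keys), keep):
                occupied.setdefault(key, []).append(p)
        # a voxel observed in at least two passages is kept as a stable permanent feature,
        # progressively filling the perforations left by removed temporary objects
        fused = [np.mean(v, axis=0) for v in occupied.values() if len(v) >= 2]
        return np.array(fused)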
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Heldreich, Georgina. "A quantitative analysis of the fluvio-deltaic Mungaroo Formation : better-defining architectural elements from 3D seismic and well data." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/a-quantitative-analysis-of-the-fluviodeltaic-mungaroo-formation-betterdefining-architectural-elements-from-3d-seismic-and-well-data(866e245b-ba19-455d-924c-6d20af3dd700).html.

Full text of the source
Abstract:
Upper to lower delta plain fluvial sand bodies, sealed by delta plain mudstones, form important hydrocarbon reservoir targets. Modelling complex geobodies in the subsurface is challenging, with a significant degree of uncertainty in dimensions, distribution and connectivity. Studies of modern and ancient paralic systems have produced a myriad of nomenclature and hierarchy schemes for classifying fluvial architectural elements, often lacking clearly defined terminology. These are largely based on outcrop data, where lateral and vertical relationships of bounding scour surfaces can be assessed in detail. Many of these key defining criteria are difficult to recognise or cannot be obtained from typical 3D seismic reflection data at reservoir depths of 2 km or more below the surface. This research provides a detailed statistical analysis of the Triassic fluvio-deltaic Mungaroo Formation on the North West Shelf of Australia, one of the most important gas plays in the world. A multidisciplinary approach addresses the challenge of characterising the reservoir by utilising an integrated dataset of 830 m of conventional core, wireline logs from 21 wells (penetrating up to 1.4 km of the upper Mungaroo Fm) and a 3D seismic volume covering approximately 10,000 km2. Using seismic attribute analysis and frequency decomposition, constrained by well and core data, the planform geobody geometries and dimensions of a variety of architectural elements at different scales of observation are extracted. The results produce a statistically significant geobody database comprising over 27,000 measurements made from more than 6,000 sample points. Three classes of geobodies are identified and interpreted to represent fluvial channel belts and channel belt complexes of varying scales. Fluvial geobody dimensions and geomorphology vary spatially and temporally, and the inferred controls on reservoir distribution and architecture are discussed. Results document periods of regression and transgression, interpreted in relation to potential allocyclic and autocyclic controls on the evolution of the depositional system. Statistical analysis of width-to-thickness dimensions and key metrics, such as sinuosity, provided a well-constrained and valuable dataset that augments, and has been compared to, existing published datasets. Uncertainty in interpretation caused by data resolution is addressed, something recognised in many other studies of paralic systems. Given the data distribution, type and resolution, the geobodies may be interpreted as either incised valleys or amalgamated channel belts, with implications for developing predictive models of the system. This study offers the first published, statistically significant dataset for the Mungaroo Formation. It builds upon previous regional work, offering a detailed analysis of this continental-scale paralic system, and provides insight into the controls and mechanisms that influenced its spatial and temporal evolution. Focusing on improved understanding of geobody distribution and origin, the statistical parameters generated provide a robust dataset that can be used for 3D static reservoir models of analogue systems, helping to constrain potential geobody dimensions and reduce the uncertainties associated with modelling.
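As a small illustration of the kind of summary statistics such a geobody database supports (the function and its percentile choices are hypothetical; the published measurements are not reproduced here):

    import numpy as np

    def geobody_stats(widths_m, thicknesses_m):
        # width-to-thickness ratios are a standard metric for channel-belt geobodies
        ratio = np.asarray(widths_m, float) / np.asarray(thicknesses_m, float)
        return {
            "n": int(ratio.size),
            "median_w_t": float(np.median(ratio)),
            "p10_w_t": float(np.percentile(ratio, 10)),
            "p90_w_t": float(np.percentile(ratio, 90)),
        }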
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Sanchez, Rojas Javier [Verfasser]. "Gravity Data Analysis and 3D Modeling of the Caribe-South America Boundary (76°– 64° W) / Javier Sanchez-Rojas." Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/102256112X/34.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Kunde, Felix. "CityGML in PostGIS : Portierung, Anwendung und Performanz-Analyse am Beispiel der 3D City Database von Berlin." Bachelor's thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6365/.

Full text of the source
Abstract:
The international standard CityGML has become a key interface for describing 3D city models in a geometric and semantic manner. With the relational database schema 3D City Database and an accompanying Importer/Exporter tool, the Institute for Geodesy and Geoinformation (IGG) of the Technische Universität Berlin plays a leading role in developing concepts and tools that help to facilitate the understanding and handling of the complex CityGML data model. The software itself is released as Open Source, yet so far the only supported database management system (DBMS) has been Oracle Spatial (since version 10g), which is proprietary. Within this Master's thesis the 3D City Database and the Importer/Exporter were ported to the free DBMS PostgreSQL/PostGIS and compared to the performance of the Oracle version. PostGIS is one of the most sophisticated spatial database systems and was recently extended by several features (like 3D support) for the release of version 2.0. The results of the comparison analysis, as well as a detailed explanation of the concepts and implementations (SQL, PL, Java), provide insights into the characteristics of the two DBMS that go beyond the project focus.
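A performance comparison of this kind typically times representative spatial queries on both systems. The Python sketch below shows the flavour on the PostGIS side; the connection string, table and column names are placeholders for illustration, not the actual 3D City Database schema:

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=citydb user=postgres")  # placeholder credentials
    with conn, conn.cursor() as cur:
        t0 = time.perf_counter()
        cur.execute("""
            SELECT count(*)
            FROM buildings AS b                           -- hypothetical table
            WHERE ST_Intersects(b.geom,
                                ST_MakeEnvelope(%s, %s, %s, %s, 25833))
        """, (389000, 5818000, 392000, 5821000))          # a window over Berlin (EPSG:25833)
        n, = cur.fetchone()
        print(f"{n} buildings in {time.perf_counter() - t0:.3f} s")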
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Hassan, Raju Chandrashekara. "ANALYSIS OF VERY LARGE SCALE IMAGE DATA USING OUT-OF-CORE TECHNIQUE AND AUTOMATED 3D RECONSTRUCTION USING CALIBRATED IMAGES." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1189785164.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Ahmed-Chaouch, Nabil. "Analyse historique et comparative des deux villes : la vieille ville d'Aix-en-Provence, la médina de Constantine à l'aide des S.I.G. : Comparaison historique et géographique de la croissance de deux villes méditerranéennes." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3025.

Full text of the source
Abstract:
Many fields of application use spatial representations. This is the case in architecture, town planning and geography. The acquisition of spatial data in town planning has experienced significant progress in recent years with the introduction of new measuring instruments. These make it possible to obtain supports for urban analysis at different levels of detail and for different purposes. This thesis proposes an approach combining two disciplines: urban typo-morphology and geomatics. We explain the central notion of the morphological process and the different operational steps specific to historical analysis, notably the treatment of cartographic sources with GIS tools. Our work primarily consisted in exploring the contribution of GIS to the treatment and analysis of historical data. We focused particularly on the typo-morphological approach to complement the interpretive and descriptive potential. Our thesis work proceeded in different stages, among which we can mention the construction of a formal classification and of concepts related to the historical and morphological evolution of the cities of Constantine and Aix-en-Provence. Starting from this urban history, comparing the two cities made it possible to establish a chronology of the evolution of urban forms and to better understand the challenges of each of them. More specifically, this work contributes to improving the mastery of the urban project. Finally, avenues are proposed to continue this work by exploiting a 3D representation exploration platform, which proved very useful for historical analysis.
Styles: APA, Harvard, Vancouver, ISO, etc.
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography