Academic literature on the topic "3D data analysis"

Browse the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "3D data analysis".

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "3D data analysis"

1

Browning, Paul. "MACSPIN: 3D DATA ANALYSIS SOFTWARE". Terra Nova 4, no. 6 (November 1992): 701–4. http://dx.doi.org/10.1111/j.1365-3121.1992.tb00620.x.

Full text
2

Wu, Youping, and Zhihui Zhou. "Intelligent City 3D Modeling Model Based on Multisource Data Point Cloud Algorithm". Journal of Function Spaces 2022 (July 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/6135829.

Full text
Abstract
With the rapid development of smart cities, intelligent navigation, and autonomous driving, how to quickly obtain 3D spatial information of urban buildings and build a high-precision 3D fine model has become a key problem to be solved. As the two-dimensional mapping results have constrained various needs in people’s social life, coupled with the concept of digital city and advocacy, making three-dimensional, virtualization and actualization become the common pursuit of people’s goals. However, the original point cloud obtained is always incomplete due to reasons such as occlusion during acquisition and data density decreasing with distance, resulting in extracted boundaries that are often incomplete as well. In this paper, based on the study of current mainstream 3D model data organization methods, geographic grids and map service specifications, and other related technologies, an intelligent urban 3D modeling model based on multisource data point cloud algorithm is designed for the two problems of unified organization and expression of urban multisource 3D model data. A point cloud preprocessing process is also designed: point cloud noise reduction and downsampling to ensure the original point cloud geometry structure remain unchanged, while improving the point cloud quality and reducing the number of point clouds. By outputting to a common 3D format, the 3D model constructed in this paper can be applied to many fields such as urban planning and design, architectural landscape design, urban management, emergency disaster relief, environmental protection, and virtual tourism.
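The preprocessing chain summarized in the abstract above (point cloud noise reduction and downsampling that preserves the original geometry) is a standard stage in urban point cloud modelling. As a minimal, hedged sketch only — the paper's own implementation is not available, and the file name, voxel size, and neighbour counts below are assumptions — the open-source Open3D library expresses the same two steps as:

```python
import open3d as o3d

# Hypothetical multisource building scan; replace with a real file.
pcd = o3d.io.read_point_cloud("building_scan.ply")

# Downsample on a 5 cm voxel grid: fewer points, same overall geometry.
pcd_down = pcd.voxel_down_sample(voxel_size=0.05)

# Noise reduction: drop points that are statistical outliers relative to their 20 nearest neighbours.
pcd_clean, _ = pcd_down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("building_scan_clean.ply", pcd_clean)
```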
3

Wang, Cuixia. "Optimization of Three-Dimensional Model of Landscape Space Based on Big Data Analysis". Journal of Function Spaces 2022 (August 17, 2022): 1–11. http://dx.doi.org/10.1155/2022/7002983.

Full text
Abstract
Based on virtual reality technology, landscape 3D modeling provides users with the possibility to construct a simulated garden landscape environment design effect online, so it has high requirements for accuracy. With the continuous improvement of precision requirements, the number of people involved in the construction of 3D models is also increasing, which puts forward higher requirements for modeling. Based on this, this paper studies the optimization strategy of landscape space 3D model based on big data analysis. Based on the analysis of the establishment of the 3D model and the related algorithm research, this paper analyzes the optimal design of the 3D model under the background of big data. In the 3D modeling of the edge folded area, it is based on the traditional quadratic error measurement grid simplification algorithm, combined with the vertex error matrix to simplify, so as to shorten the modeling time. Based on an efficient search algorithm, an adaptive nonsearch fractal image compression and decoding method is proposed in the image compression and decoding stage of 3D modeling. The search is performed by specifying the defined area block. Finally, an experiment is designed to analyze the performance of the optimization algorithm. The results show that the improved edge folding region algorithm can reduce errors on the basis of ensuring image quality, and the adaptive search algorithm can shorten the search time and improve the compression rate. This method provides a technical reference for the visualization experience and simulation system of garden landscape design and improves the presentation quality of virtual garden landscape design scenes.
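The simplification step in the abstract above builds on the quadric (quadratic) error metric. The paper's vertex-error-matrix variant is not reproduced here; purely as an assumed illustration (the file name and target triangle count are invented), standard QEM decimation is available in Open3D:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("landscape_tile.obj")  # hypothetical landscape mesh

# Quadric-error-metric decimation to roughly a quarter of the original triangle count.
target = max(len(mesh.triangles) // 4, 1)
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simplified.compute_vertex_normals()

o3d.io.write_triangle_mesh("landscape_tile_simplified.obj", simplified)
```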
4

Deighton, M., and M. Petrou. "Data mining for large scale 3D seismic data analysis". Machine Vision and Applications 20, no. 1 (November 15, 2007): 11–22. http://dx.doi.org/10.1007/s00138-007-0101-3.

Full text
5

Li, W., S. Zlatanova, and B. Gorte. "VOXEL DATA MANAGEMENT AND ANALYSIS IN POSTGRESQL/POSTGIS UNDER DIFFERENT DATA LAYOUTS". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences VI-3/W1-2020 (November 17, 2020): 35–42. http://dx.doi.org/10.5194/isprs-annals-vi-3-w1-2020-35-2020.

Full text
Abstract
Abstract. Three-dimensional (3D) raster data (also named voxel) is important sources for 3D geo-information applications, which have long been used for modelling continuous phenomena such as geological and medical objects. Our world can be represented in voxels by gridding the 3D space and specifying what each grid represents by attaching every voxel to a real-world object. Nature-triggered disasters can also be modelled in volumetric representation. Unlike point cloud, it is still a lack of wide research on how to efficiently store and manage such semantic 3D raster data. In this work, we would like to investigate four different data layouts for voxel management in open-source (spatial) DBMS - PostgreSQL/PostGIS, which is suitable for efficiently retrieving and quick querying. Besides, a benchmark has been developed to compare various voxel data management solutions concerning functionality and performance. The main test dataset is the groups of buildings of UNSW Kensington Campus, with 10cm resolution. The obtained storage and query results suggest that the presented approach can be successfully used to handle voxel management, semantic and range queries on large voxel dataset.
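The paper benchmarks several voxel layouts in PostgreSQL/PostGIS; none of them is reproduced here. As a hedged illustration of the general idea only, the sketch below stores voxels in one assumed "flat table" layout keyed by integer grid indices and runs a combined semantic and range query with psycopg2 (connection settings, table name, and labels are hypothetical):

```python
import psycopg2

conn = psycopg2.connect(dbname="voxels", user="postgres")  # assumed connection settings
cur = conn.cursor()

# One possible flat layout: one row per voxel, integer grid indices plus a semantic label.
cur.execute("""
    CREATE TABLE IF NOT EXISTS voxel (
        i INTEGER, j INTEGER, k INTEGER,
        label TEXT,
        PRIMARY KEY (i, j, k)
    )
""")

# Axis-aligned range query: all 'wall' voxels inside a box of grid indices.
cur.execute("""
    SELECT i, j, k FROM voxel
    WHERE i BETWEEN %s AND %s
      AND j BETWEEN %s AND %s
      AND k BETWEEN %s AND %s
      AND label = %s
""", (100, 200, 50, 150, 0, 30, "wall"))
rows = cur.fetchall()
conn.commit()
conn.close()
```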
6

Gautier, J., S. Christophe, and M. Brédif. "VISUALIZING 3D CLIMATE DATA IN URBAN 3D MODELS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 25, 2020): 781–89. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-781-2020.

Full text
Abstract
Abstract. In order to understand and explain urban climate, the visual analysis of urban climate data and their relationships with the urban morphology is at stake. This involves partly to co-visualize 3D field climate data, obtained from simulation, with urban 3D models. We propose two ways to visualize and navigate into simulated climate data in urban 3D models, using series of horizontal 2D planes and 3D point clouds. We then explore different parameters regarding transparency, 3D semiologic rules, filtering and animation functions in order to improve the visual analysis of climate data 3D distribution. To achieve this, we apply our propositions to the co-visualization of air temperature data with a 3D urban city model.
7

Barbu, Viorel, and Michael Röckner. "Global solutions to random 3D vorticity equations for small initial data". Journal of Differential Equations 263, no. 9 (November 2017): 5395–411. http://dx.doi.org/10.1016/j.jde.2017.06.020.

Full text
8

Papatheodorou, Theodore, John Giannatsis, and Vassilis Dedoussis. "Evaluating 3D Printers Using Data Envelopment Analysis". Applied Sciences 11, no. 9 (May 5, 2021): 4209. http://dx.doi.org/10.3390/app11094209.

Full text
Abstract
Data Envelopment Analysis (DEA) is an established powerful mathematical programming technique, which has been employed quite extensively for assessing the efficiency/performance of various physical or virtual and simple or complex production systems, as well as of consumer and industrial products and technologies. The purpose of the present study is to investigate whether DEA may be employed for evaluating the technical efficiency/performance of 3D printers, an advanced manufacturing technology of increasing importance for the manufacturing sector. For this purpose, a representative sample of 3D printers based on Fused Deposition Modeling technology is examined. The technical factors/parameters of 3D printers, which are incorporated in the DEA, are investigated and discussed in detail. DEA evaluation results compare favorably with relevant benchmarks from experts, indicating that the suggested DEA technique in conjunction with technical and expert evaluation could be employed for evaluating the performance of a highly technological system, such as the 3D printer.
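Data Envelopment Analysis itself is a standard linear-programming technique, so a small worked sketch may help; the printer inputs and outputs below are invented for illustration and are not the data used in the paper. This solves the input-oriented CCR envelopment model for each decision-making unit (printer) with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiencies. X: (m inputs, n DMUs), Y: (s outputs, n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                # minimise theta; variables are [theta, lambda_1..n]
        A_in = np.hstack([-X[:, [o]], X])          # X @ lam <= theta * x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y])  # Y @ lam >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Illustrative figures only: inputs = [price (k$), build time (h)], output = [accuracy score].
X = np.array([[2.5, 3.0, 1.8, 4.0],
              [10.0, 8.0, 12.0, 6.0]])
Y = np.array([[80.0, 85.0, 70.0, 90.0]])
print(dea_ccr_efficiency(X, Y).round(3))
```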
9

Mery, Francisco, Carolina Méndez-Orellana, Javier Torres, Francisco Aranda, Iván Caro, José Pesenti, Ricardo Rojas, Pablo Villanueva, and Isabelle Germano. "3D simulation of aneurysm clipping: Data analysis". Data in Brief 37 (August 2021): 107258. http://dx.doi.org/10.1016/j.dib.2021.107258.

Full text
10

Plyusnin, Ilya, Alistair R. Evans, Aleksis Karme, Aristides Gionis, and Jukka Jernvall. "Automated 3D Phenotype Analysis Using Data Mining". PLoS ONE 3, no. 3 (March 5, 2008): e1742. http://dx.doi.org/10.1371/journal.pone.0001742.

Full text

Theses on the topic "3D data analysis"

1

Deighton, M. J. "3D texture analysis in seismic data". Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/842764/.

Full text
Abstract
The use of hydrocarbons is ubiquitous in modern society, from fuel to raw materials. Seismic surveys now routinely produce large, volumetric representations of the Earth's crust. Human interpretation of these surveys plays an important part in locating oil and gas reservoirs, however it is a lengthy and time consuming process. Methods that provide semi-automated aid to the interpreter are highly sought after. In this research, texture is identified as a major cue to interpretation. A local gradient density method is then employed for the first time with seismic data to provide volumetric texture analysis. Extensive experiments are undertaken to determine parameter choices that provide good separation of seismic texture classes according to the Bhattacharya distance. A framework is then proposed to highlight regions of interest in a survey with high confidence based on texture queries by an interpreter. The interpretation task of seismic facies analysis is then considered and its equivalence with segmentation is established. Since the facies units may take a range of orientations within the survey, sensitivity of the analysis to rotation is considered. As a result, new methods based on alternative gradient estimation kernels and data realignment are proposed. The feature based method with alternative kernels is shown to provide the best performance. Achieving high texture label confidence requires large local windows and is in direct conflict with the need for small windows to identify fine detail. It is shown that smaller windows may be employed to achieve finer detail at the expense of label confidence. A probabilistic relaxation scheme is then described that recovers the label confidence whilst constraining texture boundaries to be smooth at the smallest scale. Testing with synthetic data shows reductions in error rate by up to a factor of 2. Experiments with seismic data indicate that more detailed structure can be identified using this approach.
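The parameter choices in this thesis are assessed by the Bhattacharyya distance between seismic texture classes. The full feature pipeline is not reproduced here, but for Gaussian class models the distance has a closed form; the sketch below, with invented toy statistics, shows the computation:

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussian feature distributions."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1).astype(float), np.atleast_2d(cov2).astype(float)
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

# Toy gradient-density feature statistics for two texture classes (illustrative numbers only).
d = bhattacharyya_distance([0.2, 1.1], np.diag([0.04, 0.09]),
                           [0.5, 0.7], np.diag([0.05, 0.08]))
print(f"Bhattacharyya distance: {d:.3f}")
```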
2

Madrigali, Andrea. "Analysis of Local Search Methods for 3D Data". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Search full text
Abstract
In this thesis, a number of search methods for 3D data were analysed. A general overview is given of the field of Computer Vision, of the state of the art of acquisition sensors, and of some of the formats used to describe 3D data. This is followed by a closer look at 3D Object Recognition in which, besides describing the whole process of matching between Local Features, the focus is placed on the detection stage of salient points. In particular, a Learned Keypoint detector based on machine learning techniques is analysed. The latter is illustrated through the implementation of two neighbour search algorithms: an exhaustive one (k-d tree) and an approximate one (Radial Search). Finally, some experimental evaluations are reported in terms of efficiency and speed of the detector implemented with the different search methods, showing an effective performance improvement without a considerable loss of accuracy when the approximate search is used.
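The thesis compares an exhaustive k-d tree search with an approximate radial search inside a learned keypoint detector. Its implementation is not available, so the sketch below only illustrates the two kinds of neighbour query it refers to, on a synthetic cloud with SciPy:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(100_000, 3))  # synthetic 3D point cloud
queries = rng.uniform(0.0, 1.0, size=(1_000, 3))  # candidate keypoint locations

tree = cKDTree(cloud)

# Exact nearest-neighbour lookup via the k-d tree.
dist, idx = tree.query(queries, k=1)

# Radial search: every neighbour inside a fixed support radius around each query.
neighbours = tree.query_ball_point(queries, r=0.02)

print(dist.mean(), np.mean([len(n) for n in neighbours]))
```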
3

Orriols, Majoral Xavier. "Generative Models for Video Analysis and 3D Range Data Applications". Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3037.

Full text
Abstract
The majority of problems in Computer Vision do not contain a direct relation between the stimuli provided by a general purpose sensor and its corresponding perceptual category. A complex learning task must be involved in order to provide such a connection. In fact, the basic forms of energy, and their possible combinations are a reduced number compared to the infinite possible perceptual categories corresponding to objects, actions, relations among objects... Two main factors determine the level of difficulty of a specific problem: i) The different levels of information that are employed and ii) The complexity of the model that is intended to explain the observations.
The choice of an appropriate representation for the data takes a significant relevance when it comes to deal with invariances, since these usually imply that the number of intrinsic degrees of
freedom in the data distribution is lower than the coordinates used to represent it. Therefore, the decomposition into basic units (model parameters) and the change of representation, make that a complex problem can be transformed into a manageable one. This simplification of the estimation problem has to rely on a proper mechanism of combination of those primitives in order to give an optimal description of the global complex model. This thesis shows how Latent Variable Models reduce dimensionality, taking into account the internal symmetries of a problem, provide a manner of dealing with missing data and make possible predicting new observations.
The lines of research of this thesis are directed to the management of multiple data sources. More specifically, this thesis presents a set of new algorithms applied to two different areas in Computer Vision: i) video analysis and summarization, and ii) 3D range data. Both areas have been approached through the Generative Models framework, where similar protocols for representing data have been employed.
4

Qian, Zhongping. "Analysis of seismic anisotropy in 3D multi-component seismic data". Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3515.

Full text
Abstract
The importance of seismic anisotropy has been recognized by the oil industry since its first observation in hydrocarbon reservoirs in 1986, and the application of seismic anisotropy to solve geophysical problems has been keenly pursued since then. However, a lot of problems remain, which have limited the applications of the technology. Nowadays, more and more 3D multi-component seismic data with wide-azimuth are becoming available. These have provided more opportunities for the study of seismic anisotropy. My thesis has focused on the study of using seismic anisotropy in 3D multi-component seismic data to characterize subsurface fractures, improve converted wave imaging and detect fluid content in fractured reservoirs, all of which are important for fractured reservoir exploration and monitoring. For the use of seismic anisotropy to characterize subsurface fracture systems, equivalent medium theories have established the link between seismic anisotropy and fracture properties. The numerical modelling in the thesis reveals that the amplitudes and interval travel-time of the radial component of PS converted waves can be used to derive fracture properties through elliptical fitting similar to P-waves. However, sufficient offset coverage is required for either the P- or PS-wave to reveal the features of elliptical variation with azimuth. Compared with numerical modelling, seismic physical modelling provides additional insights into the azimuthal variation of P and PS-wave attributes and their links with fracture properties. Analysis of the seismic physical model data in the thesis shows that the ratio of the offset to the depth of a target layer (offset-depth ratio), is a key parameter controlling the choice of suitable attributes and methods for fracture analysis. Data with a small offset-depth ratio from 0.7 to 1.0 may be more suitable for amplitude analysis; whilst the use of travel time or velocity analysis requires a large offset-depth ratio above 1.0, which can help in reducing the effect of the acquisition footprint and structural imprint on the results. Multi-component seismic data is often heavily contaminated with noise, which will limit its application potential in seismic anisotropy analysis. A new method to reduce noise in 3D multi-component seismic data has been developed and has proved to be very helpful in improving data quality. The method can automatically recognize and eliminate strong noise in 3D converted wave seismic data with little interference to useful reflection signals. Component rotation is normally a routine procedure in 3D multi-component seismic analysis. However, this study shows that incorrect rotations may occur for certain acquisition geometry and can lead to errors in shear-wave splitting analysis. A quality control method has been developed to ensure this procedure is correctly carried out. The presence of seismic anisotropy can affect the quality of seismic imaging, but the study has shown that the magnitude of the effects depends on the data type and target depth. The effects of VTI anisotropy (transverse isotropy with a vertical symmetry axis) on P-wave images are much weaker than those on PS-wave images. Anisotropic effects decrease with depth for the P- and PS-waves. The real data example shows that the overall image quality of PS-waves processed by pre-stack time migration has been improved when VTI anisotropy has been taken into account. The improvements are mainly in the upper part of the section. 
Monitoring fluid distribution is an important task in producing reservoirs. A synthetic study based on a multi-scale rock-physics model shows that it is possible to use seismic anisotropy to derive viscosity information in a HTI medium (transverse isotropy with a horizontal symmetry axis). The numerical modelling demonstrates the effects of fluid viscosity on medium elastic properties and seismic reflectivity, as well as the possibility of using them to discriminate between oil and water saturation. Analysis of real data reveals that it is hard to use the P-wave to discriminate oil-water saturation. However, characteristic shear-wave splitting behaviour due to pore pressure changes demonstrates the potential for discriminating between oil and water saturation in fractured reservoirs.
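Several results above rest on fitting an elliptical (cos 2φ) azimuthal variation to P- or PS-wave attributes. The thesis workflow is not reproduced; as a hedged sketch with synthetic numbers, the fit reduces to linear least squares:

```python
import numpy as np

def fit_azimuthal_ellipse(azimuth_deg, attribute):
    """Least-squares fit of attribute(phi) = a0 + b*cos(2*(phi - phi0)); returns (a0, b, phi0_deg)."""
    phi = np.radians(azimuth_deg)
    G = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    a0, c, s = np.linalg.lstsq(G, attribute, rcond=None)[0]
    return a0, np.hypot(c, s), np.degrees(0.5 * np.arctan2(s, c))

# Synthetic azimuthal amplitude variation with a fast direction at 30 degrees (illustrative only).
az = np.arange(0, 360, 15)
amp = 1.0 + 0.2 * np.cos(2 * np.radians(az - 30.0))
amp += 0.01 * np.random.default_rng(2).normal(size=az.size)
print(fit_azimuthal_ellipse(az, amp))
```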
5

Laha, Bireswar. "Immersive Virtual Reality and 3D Interaction for Volume Data Analysis". Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51817.

Full text
Abstract
This dissertation provides empirical evidence for the effects of the fidelity of VR system components, and novel 3D interaction techniques for analyzing volume datasets. It provides domain-independent results based on an abstract task taxonomy for visual analysis of scientific datasets. Scientific data generated through various modalities e.g. computed tomography (CT), magnetic resonance imaging (MRI), etc. are in 3D spatial or volumetric format. Scientists from various domains e.g., geophysics, medical biology, etc. use visualizations to analyze data. This dissertation seeks to improve effectiveness of scientific visualizations. Traditional volume data analysis is performed on desktop computers with mouse and keyboard interfaces. Previous research and anecdotal experiences indicate improvements in volume data analysis in systems with very high fidelity of display and interaction (e.g., CAVE) over desktop environments. However, prior results are not generalizable beyond specific hardware platforms, or specific scientific domains and do not look into the effectiveness of 3D interaction techniques. We ran three controlled experiments to study the effects of a few components of VR system fidelity (field of regard, stereo and head tracking) on volume data analysis. We used volume data from paleontology, medical biology and biomechanics. Our results indicate that different components of system fidelity have different effects on the analysis of volume visualizations. One of our experiments provides evidence for validating the concept of Mixed Reality (MR) simulation. Our approach of controlled experimentation with MR simulation provides a methodology to generalize the effects of immersive virtual reality (VR) beyond individual systems. To generalize our (and other researchers') findings across disparate domains, we developed and evaluated a taxonomy of visual analysis tasks with volume visualizations. We report our empirical results tied to this taxonomy. We developed the Volume Cracker (VC) technique for improving the effectiveness of volume visualizations. This is a free-hand gesture-based novel 3D interaction (3DI) technique. We describe the design decisions in the development of the Volume Cracker (with a list of usability criteria), and provide the results from an evaluation study. Based on the results, we further demonstrate the design of a bare-hand version of the VC with the Leap Motion controller device. Our evaluations of the VC show the benefits of using 3DI over standard 2DI techniques. This body of work provides the building blocks for a three-way many-many-many mapping between the sets of VR system fidelity components, interaction techniques and visual analysis tasks with volume visualizations. Such a comprehensive mapping can inform the design of next-generation VR systems to improve the effectiveness of scientific data analysis.
6

Patel, Ankur. "3D morphable models : data pre-processing, statistical analysis and fitting". Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/1576/.

Full text
Abstract
This thesis presents research aimed at using a 3D linear statistical model (known as a 3D morphable model) of an object class (which could be faces, bodies, cars, etc) for robust shape recovery. Our aim is to use this recovered information for the purposes of potentially useful applications like recognition and synthesis. With a 3D morphable model as its central theme, this thesis includes: a framework for the groupwise processing of a set of meshes in dense correspondence; a new method for model construction; a new interpretation of the statistical constraints afforded by the model and addressing of some key limitations associated with using such models in real world applications. In Chapter 1 we introduce 3D morphable models, touch on the current state-of-the-art and emphasise why these models are an interesting and important research tool in the computer vision and graphics community. We then talk about the limitations of using such models and use these limitations as a motivation for some of the contributions made in this thesis. Chapter 2 presents an end-to-end system for obtaining a single (possibly symmetric) low resolution mesh topology and texture parameterisation which are optimal with respect to a set of high resolution input meshes in dense correspondence. These methods result in data which can be used to build 3D morphable models (at any resolution). In Chapter 3 we show how the tools of thin-plate spline warping and Procrustes analysis can be used to construct a morphable model as a shape space. We observe that the distribution of parameter vector lengths follows a chi-square distribution and discuss how the parameters of this distribution can be used as a regularisation constraint on the length of parameter vectors. In Chapter 4 we take the idea introduced in Chapter 3 further by enforcing a hard constraint which restricts faces to points on a hyperspherical manifold within the parameter space of a linear statistical model. We introduce tools from differential geometry (log and exponential maps for a hyperspherical manifold) which are necessary for developing our methodology and provide empirical validation to justify our choice of manifold. Finally, we show how to use these tools to perform model fitting, warping and averaging operations on the surface of this manifold. Chapter 5 presents a method to simplify a 3D morphable model without requiring knowledge of the training meshes used to build the model. This extends the simplification ideas in Chapter 2 into a statistical setting. The proposed method is based on iterative edge collapse and we show that the expected value of the Quadric Error Metric can be computed in closed form for a linear deformable model. The simplified models can used to achieve efficient multiscale fitting and super-resolution. In Chapter 6 we consider the problem of model dominance and show how shading constraints can be used to refine morphable model shape estimates, offering the possibility of exceeding the maximum possible accuracy of the model. We present an optimisation scheme based on surface normal error as opposed to image error. This ensures the fullest possible use of the information conveyed by the shading in an image. In addition, our framework allows non-model based estimation of per-vertex bump and albedo maps. This means the recovered model is capable of describing shape and reflectance phenomena not present in the training set. We explore the use of the recovered shape and reflectance information for face recognition and synthesis. 
Finally, in Chapter 7 we provide concluding remarks and discuss directions for future research.
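Chapter 3 of this thesis uses the chi-square behaviour of morphable-model parameter vector lengths as a regularisation constraint. The sketch below is only an assumed toy illustration of that idea on a random linear shape model, not the thesis's code:

```python
import numpy as np
from scipy.stats import chi2

# Assumed toy morphable model: mean shape (3N,) and scaled principal components (3N, K).
rng = np.random.default_rng(1)
n_vertices, n_modes = 500, 20
mean_shape = rng.normal(size=3 * n_vertices)
components = rng.normal(size=(3 * n_vertices, n_modes)) * 0.01

def synthesize(params):
    """Linear morphable-model synthesis: shape = mean + P @ b."""
    return mean_shape + components @ params

# If each parameter is ~N(0, 1), the squared vector length is chi-square with n_modes degrees
# of freedom; clipping the length to a high quantile is one simple plausibility constraint.
b = rng.normal(size=n_modes)
max_len = np.sqrt(chi2.ppf(0.99, df=n_modes))
if np.linalg.norm(b) > max_len:
    b *= max_len / np.linalg.norm(b)
shape = synthesize(b)
```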
7

Polat, Songül. "Combined use of 3D and hyperspectral data for environmental applications". Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.

Full text
Abstract
Ever-increasing demands for solutions that describe our environment and the resources it contains, require technologies that support efficient and comprehensive description, leading to a better content-understanding. Optical technologies, the combination of these technologies and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications due to their high spatial and spectral resolutions, while 3D technologies help to understand scenes in a more detailed way by using geometrical, topological and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrates the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and the use of these features to detect objects of interest. More specifically, we propose different approaches to combine 3D and hyperspectral data depending on the HSI/3D technologies used and show how each sensor could compensate the weaknesses of the other. Furthermore, a new shape and rule-based method for the analysis of spectral signatures was developed and presented. The strengths and weaknesses compared to existing approach-es are discussed and the outperformance compared to SVM methods are demonstrated on the basis of practical findings from the field of cultural heritage and waste management.Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this methodology is based on a practical exam-ple from the field of WEEE and focuses on the separation of materials like plastics, PCBs and electronic components on PCBs. The results obtained confirms that an improvement of classification results could be achieved compared to previously proposed methods.The claim of the individual methods and processes developed in this thesis is general validity and simple transferability to any field of application
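The thesis develops its own shape- and rule-based classifier for spectral signatures, which is not reproduced here. As a generic stand-in for comparing hyperspectral signatures (the five-band values below are invented), the spectral angle between a measured spectrum and candidate reference signatures is commonly used:

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a measured spectrum and a reference signature."""
    s, r = np.asarray(spectrum, float), np.asarray(reference, float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Invented 5-band signatures for two plastics; assign the unknown to the smaller angle.
ref_pet = np.array([0.42, 0.55, 0.61, 0.35, 0.28])
ref_pvc = np.array([0.20, 0.25, 0.50, 0.55, 0.60])
unknown = np.array([0.40, 0.57, 0.60, 0.34, 0.30])
label = "PET" if spectral_angle(unknown, ref_pet) < spectral_angle(unknown, ref_pvc) else "PVC"
print(label)
```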
8

Landström, Anders. "Adaptive tensor-based morphological filtering and analysis of 3D profile data". Licentiate thesis, Luleå tekniska universitet, Signaler och system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26510.

Full text
Abstract
Image analysis methods for processing 3D profile data have been investigated and developed. These methods include; Image reconstruction by prioritized incremental normalized convolution, morphology-based crack detection for steel slabs, and adaptive morphology based on the local structure tensor. The methods have been applied to a number of industrial applications.An issue with 3D profile data captured by laser triangulation is occlusion, which occurs when the line-of-sight between the projected laser light and the camera sensor is obstructed. To overcome this problem, interpolation of missing surface in rock piles has been investigated and a novel interpolation method for filling in missing pixel values iteratively from the edges of the reliable data, using normalized convolution, has been developed.3D profile data of the steel surface has been used to detect longitudinal cracks in casted steel slabs. Segmentation of the data is done using mathematical morphology, and the resulting connected regions are assigned a crack probability estimate based on a statistic logistic regression model. More specifically, the morphological filtering locates trenches in the data, excludes scale regions for further analysis, and finally links crack segments together in order to obtain a segmented region which receives a crack probability based on its depth and length.Also suggested is a novel method for adaptive mathematical morphology intended to improve crack segment linking, i.e. for bridging gaps in the crack signature in order to increase the length of potential crack segments. Standard morphology operations rely on a predefined structuring element which is repeatedly used for each pixel in the image. The outline of a crack, however, can range from a straight line to a zig-zag pattern. A more adaptive method for linking regions with a large enough estimated crack depth would therefore be beneficial. More advanced morphological approaches, such as morphological amoebas and path openings, adapt better to curvature in the image. For our purpose, however, we investigate how the local structure tensor can be used to adaptively assign to each pixel an elliptical structuring element based on the local orientation within the image. The information from the local structure tensor directly defines the shape of the elliptical structuring element, and the resulting morphological filtering successfully enhances crack signatures in the data.
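The adaptive filtering described above assigns each pixel an elliptical structuring element oriented by the local structure tensor. The sketch below is a from-scratch, hedged illustration of those two ingredients with NumPy/SciPy (the smoothing scale and ellipse axes are assumptions, not the thesis's parameters):

```python
import numpy as np
from scipy import ndimage

def structure_tensor_orientation(image, sigma=2.0):
    """Per-pixel orientation (radians) of the dominant eigenvector of the 2D structure tensor."""
    gy, gx = np.gradient(image.astype(float))
    Jxx = ndimage.gaussian_filter(gx * gx, sigma)
    Jxy = ndimage.gaussian_filter(gx * gy, sigma)
    Jyy = ndimage.gaussian_filter(gy * gy, sigma)
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)

def elliptical_selem(theta, a=7, b=2):
    """Binary elliptical structuring element with semi-axes a and b, rotated by theta."""
    r = max(a, b)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

# Example: orientation field of a synthetic 3D-profile (depth) tile and one local element.
depth = np.add.outer(np.linspace(0.0, 1.0, 128), np.zeros(128))
theta = structure_tensor_orientation(depth)
selem = elliptical_selem(theta[64, 64])
```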
9

Cheewinsiriwat, Pannee. "Development of a 3D geospatial data representation and spatial analysis system". Thesis, University of Newcastle Upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.514467.

Full text
10

Vick, Louise Mary. "Evaluation of field data and 3D modelling for rockfall hazard analysis". Thesis, University of Canterbury. Geological Sciences, 2015. http://hdl.handle.net/10092/10845.

Full text
Abstract
The Canterbury Earthquake Sequence (CES) of 2010-2011 produced large seismic moments up to Mw 7.1. These large, near-to-surface (<15 km) ruptures triggered >6,000 rockfall boulders on the Port Hills of Christchurch, many of which impacted houses and affected the livelihoods of people within the impacted area. From these disastrous and unpredicted natural events a need arose to be able to assess the areas affected by rockfall events in the future, where it is known that a rockfall is possible from a specific source outcrop but the potential boulder runout and dynamics are not understood. The distribution of rockfall deposits is largely constrained by the physical properties and processes of the boulder and its motion such as block density, shape and size, block velocity, bounce height, impact and rebound angle, as well as the properties of the substrate. Numerical rockfall models go some way to accounting for all the complex factors in an algorithm, commonly parameterised in a user interface where site-specific effects can be calibrated. Calibration of these algorithms requires thorough field checks and often experimental practises. The purpose of this project, which began immediately following the most destructive rupture of the CES (February 22, 2011), is to collate data to characterise boulder falls, and to use this information, supplemented by a set of anthropogenic boulder fall data, to perform an in-depth calibration of the three-dimensional numerical rockfall model RAMMS::Rockfall. The thesis covers the following topics: • Use of field data to calibrate RAMMS. Boulder impact trails in the loess-colluvium soils at Rapaki Bay have been used to estimate ranges of boulder velocities and bounce heights. RAMMS results replicate field data closely; it is concluded that the model is appropriate for analysing the earthquake-triggered boulder trails at Rapaki Bay, and that it can be usefully applied to rockfall trajectory and hazard assessment at this and similar sites elsewhere. • Detailed analysis of dynamic rockfall processes, interpreted from recorded boulder rolling experiments, and compared to RAMMS simulated results at the same site. Recorded rotational and translational velocities of a particular boulder show that the boulder behaves logically and dynamically on impact with different substrate types. Simulations show that seasonal changes in soil moisture alter rockfall dynamics and runout predictions within RAMMS, and adjustments are made to the calibration to reflect this; suggesting that in hazard analysis a rockfall model should be calibrated to dry rather than wet soil conditions to anticipate the most serious outcome. • Verifying the model calibration for a separate site on the Port Hills. The results of the RAMMS simulations show the effectiveness of calibration against a real data set, as well as the effectiveness of vegetation as a rockfall barrier/retardant. The results of simulations are compared using hazard maps, where the maximum runouts match well the mapped CES fallen boulder maximum runouts. The results of the simulations in terms of frequency distribution of deposit locations on the slope are also compared with those of the CES data, using the shadow angle tool to apportion slope zones. These results also replicate real field data well. Results show that a maximum runout envelope can be mapped, as well as frequency distribution of deposited boulders for hazard (and thus risk) analysis purposes. 
The accuracy of the rockfall runout envelope and frequency distribution can be improved by comprehensive vegetation and substrate mapping. The topics above define the scope of the project, limiting the focus to rockfall processes on the Port Hills, and implications for model calibration for the wider scientific community. The results provide a useful rockfall analysis methodology with a defensible and replicable calibration process, that has the potential to be applied to other lithologies and substrates. Its applications include a method of analysis for the selection and positioning of rockfall countermeasure design; site safety assessment for scaling and demolition works; and risk analysis and land planning for future construction in Christchurch.
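One ingredient mentioned above, the shadow-angle tool used to apportion slope zones, reduces to simple trigonometry: the horizontal reach implied by a given shadow angle is the fall height divided by the tangent of that angle. The numbers below are assumptions, and this is only a worked illustration, not the RAMMS::Rockfall calibration used in the thesis:

```python
import math

def shadow_angle_reach(fall_height_m, shadow_angle_deg):
    """Horizontal runout distance implied by a fall height and a shadow angle."""
    return fall_height_m / math.tan(math.radians(shadow_angle_deg))

# Assumed values: a 150 m fall height and a 27.5 degree shadow angle.
print(f"Runout envelope: {shadow_angle_reach(150.0, 27.5):.0f} m")
```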

Books on the topic "3D data analysis"

1

Introduction to 3D data: Modeling with ArcGIS 3D Analyst and Google Earth. Hoboken, N.J.: John Wiley, 2009.

Search full text
2

Belytschko, Ted. WHAMS-3D: An explicit 3D finite element program. Willow Springs, Ill: KBS2, 1988.

Search full text
3

Data in three dimensions: A guide to ArcGIS 3D Analyst. Clifton Park, NY: Thomson/Delmar Learning, 2004.

Search full text
4

Huang, Thomas S., ed. 3D face processing: Modeling, analysis, and synthesis. Boston: Kluwer Academic Publishers, 2004.

Search full text
5

Cremers, Daniel, and SpringerLink (Online service), eds. Stereo Scene Flow for 3D Motion Analysis. London: Springer-Verlag London Limited, 2011.

Search full text
6

Pirzadeh, S., and Institute for Computer Applications in Science and Engineering, eds. Large-scale parallel unstructured mesh computations for 3D high-lift analysis. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1999.

Search full text
7

Mavriplis, Dimitri. Large-scale parallel unstructured mesh computations for 3D high-lift analysis. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1999.

Search full text

Book chapters on the topic "3D data analysis"

1

Thió-Henestrosa, Santiago, and Josep Daunis-i-Estadella. "Exploratory Analysis Using CoDaPack 3D". In Compositional Data Analysis, 327–40. Chichester, UK: John Wiley & Sons, Ltd, 2011. http://dx.doi.org/10.1002/9781119976462.ch24.

Full text
2

Cordelières, Fabrice P., and Chong Zhang. "3D Quantitative Colocalisation Analysis". In Bioimage Data Analysis Workflows, 33–66. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22386-1_3.

Full text
3

Ruizhongtai Qi, Charles. "Deep Learning on 3D Data". In 3D Imaging, Analysis and Applications, 513–66. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_11.

Full text
4

Smith, William A. P. "3D Data Representation, Storage and Processing". In 3D Imaging, Analysis and Applications, 265–316. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_6.

Full text
5

Smith, William A. P. "Representing, Storing and Visualizing 3D Data". In 3D Imaging, Analysis and Applications, 139–82. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4063-4_4.

Full text
6

Rotger, D., C. Cañero, P. Radeva, J. Mauri, E. Fernandez, A. Tovar, and V. Valle. "Advanced Visualization of 3D data of Intravascular Ultrasound images". In Medical Data Analysis, 245–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45497-7_37.

Full text
7

Schiavi, Emanuele, C. Hernández, and Juan A. Hernández. "Fully 3D Wavelets MRI Compression". In Biological and Medical Data Analysis, 9–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30547-7_2.

Full text
8

Anderl, Reiner, and Peter Binde. "Management of Analysis and Simulation Data". In Simulations with NX / Simcenter 3D, 353–72. München: Carl Hanser Verlag GmbH & Co. KG, 2018. http://dx.doi.org/10.3139/9781569907139.007.

Full text
9

Choi, Soo-Mi, Don-Su Lee, Seong-Joon Yoo, and Myoung-Hee Kim. "Interactive Visualization of Diagnostic Data from Cardiac Images Using 3D Glyphs". In Medical Data Analysis, 83–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39619-2_11.

Full text
10

Capozza, M., G. D. Iannetti, J. J. Marx, G. Cruccu, and N. Accornero. "An Artificial Neural Network for 3D Localization of Brainstem Functional Lesions". In Medical Data Analysis, 186–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36104-9_21.

Full text

Conference papers on the topic "3D data analysis"

1

Wang, Qingguo. "A 3D data model for fast visualization". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.837793.

Full text
2

Yang, Nai, Qingsheng Guo, and Dayong Shen. "Automatic modeling of cliff symbol in 3D topographic map". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.837760.

Full text
3

Zhang, Xia, and Qing Zhu. "Scheme evaluation of urban design using 3D visual analysis". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.838542.

Full text
4

Li, Linhai, Lina Qu, Shen Ying, Dongdong Liang, and Zhenlong Hu. "Use of Google SketchUp to implement 3D spatio-temporal visualization". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.837572.

Full text
5

Wang, Xianghong, Jiping Liu, Yong Wang, and Junfang Bi. "Visualization of spatial-temporal data based on 3D virtual scene". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.838626.

Full text
6

Mochocki, B., K. Lahiri, and S. Cadambi. "Power Analysis of Mobile 3D Graphics". In 2006 Design, Automation and Test in Europe. IEEE, 2006. http://dx.doi.org/10.1109/date.2006.243859.

Full text
7

Liu, Yining, Lifan Fei, and Qiuping Lan. "A new way of modeling 3D entities based on raster technique". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.838060.

Full text
8

Ding, Jing, and Wenping Jiang. "Research on scene organization of process simulation in port 3D GIS". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.839411.

Full text
9

Zhao, Weidong, Guo'an Tang, Bin Ji, and Lei Ma. "Research on optimal DEM cell size for 3D visualization of loess terraces". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.837469.

Full text
10

Xu, Hanwei, Rami Badawi, Xiaohu Fan, Jiayong Ren, and Zhiqiang Zhang. "Research for 3D visualization of Digital City based on SketchUp and ArcGIS". In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.838558.

Full text

Reports on the topic "3D data analysis"

1

Fenlon, Riley. Facial respirator shape analysis using 3D anthropometric data. Gaithersburg, MD: National Institute of Standards and Technology, 2007. http://dx.doi.org/10.6028/nist.ir.7460.

Full text
2

Bethel, E. Wes, Oliver Rubel, Gunther H. Weber, Bernd Hamann, and Hans Hagen. Visualization and Analysis of 3D Gene Expression Data. Office of Scientific and Technical Information (OSTI), October 2007. http://dx.doi.org/10.2172/928239.

Full text
3

Williams, Michelle. Data Analysis Final Project: 3D printed rock analysis using Python. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1762647.

Full text
4

Gamey, T. J. 3D Geophysical Data Collection and Analysis for UXO Discrimination. Fort Belvoir, VA: Defense Technical Information Center, June 2004. http://dx.doi.org/10.21236/ada438458.

Full text
5

Arthur, J. D., J. Cichon, A. Baker, J. Marquez, A. Rudin, and A. Wood. Hydrogeologic mapping and aquifer vulnerability modeling in Florida: 2D and 3D data analysis and visualization. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2002. http://dx.doi.org/10.4095/299490.

Full text
6

Tang, Shuaiqi, Shaocheng Xie, and Minghua Zhang. Description of the Three-Dimensional Large-Scale Forcing Data from the 3D Constrained Variational Analysis (VARANAL3D). Office of Scientific and Technical Information (OSTI), August 2020. http://dx.doi.org/10.2172/1648153.

Full text
7

Tang, S., Shaocheng Xie, and Minghua Zhang. Description of the Three-Dimensional Large-Scale Forcing Data from the 3D Constrained Variational Analysis (VARANAL3D). Office of Scientific and Technical Information (OSTI), August 2020. http://dx.doi.org/10.2172/1808707.

Full text
8

Schetselaar, E., D. White, O. Boulanger, J. Craven, G. Bellefleur, and S. Ansari. 3D regional scale modelling of the Flin Flon-Glennie Complex: preparatory data analysis and preliminary results, Manitoba and Saskatchewan. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330304.

Full text
9

Habib, Ayman, Darcy M. Bullock, Yi-Chun Lin, and Raja Manish. Road Ditch Line Mapping with Mobile LiDAR. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317354.

Full text
Abstract
Maintenance of roadside ditches is important to avoid localized flooding and premature failure of pavements. Scheduling effective preventative maintenance requires mapping of the ditch profile to identify areas requiring excavation of long-term sediment accumulation. High-resolution, high-quality point clouds collected by mobile LiDAR mapping systems (MLMS) provide an opportunity for effective monitoring of roadside ditches and performing hydrological analyses. This study evaluated the applicability of mobile LiDAR for mapping roadside ditches for slope and drainage analyses. The performance of alternative MLMS units was performed. These MLMS included an unmanned ground vehicle, an unmanned aerial vehicle, a portable backpack system along with its vehicle-mounted version, a medium-grade wheel-based system, and a high-grade wheel-based system. Point cloud from all the MLMS units were in agreement in the vertical direction within the ±3 cm range for solid surfaces, such as paved roads, and ±7 cm range for surfaces with vegetation. The portable backpack system that could be carried by a surveyor or mounted on a vehicle and was the most flexible MLMS. The report concludes that due to flexibility and cost effectiveness of the portable backpack system, it is the preferred platform for mapping roadside ditches, followed by the medium-grade wheel-based system. Furthermore, a framework for ditch line characterization is proposed and tested using datasets acquired by the medium-grade wheel-based and vehicle-mounted portable systems over a state highway. An existing ground filtering approach is modified to handle variations in point density of mobile LiDAR data. Hydrological analyses, including flow direction and flow accumulation, are applied to extract the drainage network from the digital terrain model (DTM). Cross-sectional/longitudinal profiles of the ditch are automatically extracted from LiDAR data and visualized in 3D point clouds and 2D images. The slope derived from the LiDAR data was found to be very close to highway cross slope design standards of 2% on driving lanes, 4% on shoulders, as well as 6-by-1 slope for ditch lines. Potential flooded regions are identified by detecting areas with no LiDAR return and a recall score of 54% and 92% was achieved by the medium-grade wheel-based and vehicle-mounted portable systems, respectively.
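The hydrological step described above (flow direction and flow accumulation on the LiDAR-derived DTM) usually starts with a D8 flow-direction pass. The report's own processing chain is not reproduced; the sketch below is an assumed, minimal D8 illustration on a toy grid:

```python
import numpy as np

def d8_flow_direction(dtm):
    """D8 flow direction: index (0..7) of the steepest downslope neighbour per interior cell, -1 for pits."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    dists = np.array([np.hypot(dy, dx) for dy, dx in offsets])
    rows, cols = dtm.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = [(dtm[r, c] - dtm[r + dy, c + dx]) / d
                     for (dy, dx), d in zip(offsets, dists)]
            best = int(np.argmax(drops))
            if drops[best] > 0:
                direction[r, c] = best
    return direction

# Toy 2% cross-slope surface standing in for one LiDAR-derived DTM tile.
x = np.linspace(0.0, 10.0, 50)
dtm = 0.02 * np.add.outer(np.zeros(50), x)
print(d8_flow_direction(dtm)[5, 5])
```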
10

Slattery, Kevin T. Unsettled Aspects of the Digital Thread in Additive Manufacturing. SAE International, November 2021. http://dx.doi.org/10.4271/epr2021026.

Full text
Abstract
In the past years, additive manufacturing (AM), also known as "3D printing," has transitioned from rapid prototyping to making parts with potentially long service lives. Now AM provides the ability to have an almost fully digital chain from part design through manufacture and service. Web searches will reveal many statements that AM can help an organization in its pursuit of a "digital thread." Equally, it is often stated that a digital thread may bring great benefits in improving designs, processes, materials, operations, and the ability to predict failure in a way that maximizes safety and minimizes cost and downtime. Now that the capability is emerging, a whole series of new questions begin to surface as well:
• What data should be stored, how will it be stored, and how much space will it require?
• What is the cost-to-benefit ratio of having a digital thread?
• Who owns the data and who can access and analyze it?
• How long will the data be stored and who will store it?
• How will the data remain readable and usable over the lifetime of a product?
• How much manipulation of disparate data is necessary for analysis without losing information?
• How will the data be secured, and its provenance validated?
• How does an enterprise accomplish configuration management of, and linkages between, data that may be distributed across multiple organizations?
• How do we determine what is "authoritative" in such an environment?
These, along with many other questions, mark the combination of AM with a digital thread as an unsettled issue. As the seventh title in a series of SAE EDGE™ Research Reports on AM, this report discusses what the interplay between AM and a digital thread in the mobility industry would look like. This outlook includes the potential benefits and costs, the hurdles that need to be overcome for the combination to be useful, and how an organization can answer these questions to scope and benefit from the combination. This report, like the others in the series, is directed at a product team that is implementing AM. Unlike most of the other reports, putting the infrastructure in place, addressing the issues, and taking full advantage of the benefits will often fall outside of the purview of the product team and at the higher organizational, customer, and industry levels.