
Dissertations / Theses on the topic 'Assessment; Monitoring; Point cloud'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 16 dissertations / theses for your research on the topic 'Assessment; Monitoring; Point cloud.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Quach, Maurice. "Deep learning-based Point Cloud Compression." Electronic thesis or dissertation, Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG051.

Full text
Abstract:
Point clouds are becoming essential in key applications, with advances in capture technologies leading to large volumes of data. Compression is thus essential for storage and transmission. Point cloud compression can be divided into two parts: geometry and attribute compression. In addition, point cloud quality assessment is necessary in order to evaluate point cloud compression methods. Geometry compression, attribute compression and quality assessment form the three main parts of this dissertation. The common challenge across these three problems is the sparsity and irregularity of point clouds. Indeed, while other modalities such as images lie on a regular grid, point cloud geometry can be considered as a sparse binary signal over 3D space, and attributes are defined on the geometry, which can be both sparse and irregular. First, the state of the art for geometry and attribute compression methods, with a focus on deep learning based approaches, is reviewed. The challenges faced when compressing geometry and attributes are considered, with an analysis of the current approaches to address them, their limitations and the relations between deep learning based and traditional approaches. We present our work on geometry compression: a convolutional lossy geometry compression approach with a study on the key performance factors of such methods, and a generative model for lossless geometry compression with a multiscale variant addressing its complexity issues. Then, we present a folding-based approach for attribute compression that learns a mapping from the point cloud to a 2D grid in order to reduce point cloud attribute compression to an image compression problem. Furthermore, we propose a differentiable deep perceptual quality metric that can be used to train lossy point cloud geometry compression networks while being well correlated with perceived visual quality, and a convolutional neural network for point cloud quality assessment based on a patch extraction approach. Finally, we conclude the dissertation and discuss open questions in point cloud compression, existing solutions and perspectives. We highlight the link between existing point cloud compression research and research problems in adjacent fields, such as rendering in computer graphics, mesh compression and point cloud quality assessment.
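The sparse-binary-signal view of geometry described in the abstract can be sketched in a few lines (an illustrative example, not code from the thesis): the point cloud is quantized onto a voxel grid, producing the binary occupancy volume that convolutional compression models typically take as input.

```python
import numpy as np

def voxelize(points, resolution):
    """Map 3D points onto a binary occupancy grid of shape (r, r, r)."""
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Scale each coordinate into [0, resolution) and clip boundary points.
    idx = np.floor((points - mins) / (maxs - mins) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1  # occupied voxels -> 1
    return grid

points = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]])
occupancy = voxelize(points, resolution=4)  # sparse: 3 of 64 voxels occupied
```

Even this toy grid illustrates the sparsity the thesis highlights: almost all voxels are empty, which is what makes dense image-style coding a poor fit for geometry.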
APA, Harvard, Vancouver, ISO, and other styles
2

Megahed, Fadel M. "The Use of Image and Point Cloud Data in Statistical Process Control." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26511.

Full text
Abstract:
The volume of data acquired in production systems continues to expand. Emerging imaging technologies, such as machine vision systems (MVSs) and 3D surface scanners, diversify the types of data being collected, further pushing data collection beyond discrete dimensional data. These large and diverse datasets increase the challenge of extracting useful information. Unfortunately, industry still relies heavily on traditional quality methods that are limited to fault detection, which fails to consider important diagnostic information needed for process recovery. Modern measurement technologies should spur the transformation of statistical process control (SPC) to provide practitioners with additional diagnostic information. This dissertation focuses on how MVSs and 3D laser scanners can be further utilized to meet that goal. More specifically, this work: 1) reviews image-based control charts while highlighting their advantages and disadvantages; 2) integrates spatiotemporal methods with digital image processing to detect process faults and estimate their location, size, and time of occurrence; and 3) shows how point cloud data (3D laser scans) can be used to detect and locate unknown faults in complex geometries. Overall, the research goal is to create new quality control tools that utilize high density data available in manufacturing environments to generate knowledge that supports decision-making beyond just indicating the existence of a process issue. This allows industrial practitioners to have a rapid process recovery once a process issue has been detected, and consequently reduce the associated downtime.
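The dissertation's theme of moving beyond simple fault detection builds on classical SPC. As a generic illustration (not Megahed's actual method), a Shewhart-style chart flags an image-derived summary statistic, here an assumed mean intensity of a monitored region, when it drifts outside 3-sigma control limits:

```python
import numpy as np

# Generic Shewhart-style sketch: establish control limits from an
# in-control baseline, then flag new samples that fall outside them.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=100.0, scale=2.0, size=50)  # in-control images
center, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma      # control limits

new_samples = np.array([100.5, 99.2, 112.0])           # last one is a fault
out_of_control = [i for i, x in enumerate(new_samples)
                  if x > ucl or x < lcl]               # indices of alarms
```

Fault detection alone, as the abstract notes, stops here; the diagnostic step (locating and sizing the fault within the image) requires the spatiotemporal methods the dissertation develops.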
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Dinh-Xuan, Lam [Verfasser], and Phuoc [Gutachter] Tran-Gia. "Quality of Experience Assessment of Cloud Applications and Performance Evaluation of VNF-Based QoE Monitoring / Lam Dinh-Xuan ; Gutachter: Phuoc Tran-Gia." Würzburg : Universität Würzburg, 2018. http://d-nb.info/1169573053/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gray, Michelle Anya. "Assessing non-point source pollution in agricultural regions of the upper St. John River basin using the slimy sculpin (Cottus cognatus)." Thesis, Department of Biology, University of New Brunswick, 2003. http://hdl.handle.net/1882/48.

Full text
Abstract:
The overall objective of this research project was to assess whether fish populations in areas of potato cultivation responded to changes in environmental conditions. An effects-based assessment was conducted in the ‘potato belt’ of northwestern New Brunswick, in the Little River catchment. From 1999 to 2001, the health and performance of slimy sculpin (Cottus cognatus) were monitored in agricultural and forested sections of the river. In the fall of 1999 and 2000, agricultural sites had fewer young-of-the-year (YOY) sculpin than the forested region. Adult sculpin were larger in the agricultural region, but had significantly smaller gonads, and female sculpin had smaller livers and fewer, smaller eggs than those in the forested region. By the fall of 2001, only female gonad size showed a difference from the forested region. These results were used to design a follow-up study investigating the relative importance of environmental factors influencing sculpin responses.

The second study investigated the relative influence of temperature and sediment deposition on slimy sculpin populations across 20 sites on 19 streams in forested and agricultural catchments in northwestern New Brunswick. YOY sculpin were present at all forested sites, but only at 2 of 11 agricultural sites. There were no relationships between body size or density and sediment deposition in either the agricultural or forested regions, but sculpin density decreased and median YOY size increased with increasing temperatures. The variability in density of YOY sculpin at agricultural sites suggested that additional factors beyond temperature might be contributing to responses.

A secondary overall objective was to evaluate the slimy sculpin as a sentinel and indicator of site-specific conditions. Stable isotopes of muscle tissues showed little within-site variability in isotopic signatures, but significant differences between adjacent sites. Passive integrated transponder (PIT) tags implanted in 112 adult sculpin showed that 75% of sculpin captured over 10 months moved less than 30 m. Both isotopes and PIT tags suggested high spatial and temporal residency of slimy sculpin.

This PhD project showed biological impacts on sculpin populations residing in streams influenced by non-point source agricultural stressors, and provided support for the ability of the slimy sculpin to reflect local environmental conditions.
APA, Harvard, Vancouver, ISO, and other styles
5

Lama, Salomon Abraham. "Digital State Models for Infrastructure Condition Assessment and Structural Testing." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/84502.

Full text
Abstract:
This research introduces and applies the concept of digital state models for civil infrastructure condition assessment and structural testing. Digital state models are defined herein as any transient or permanent 3D model of an object (e.g. textured meshes and point clouds) combined with any electromagnetic radiation (e.g., visible light, infrared, X-ray) or other two-dimensional image-like representation. In this study, digital state models are built using visible light and used to document the transient state of a wide variety of structures (ranging from concrete elements to cold-formed steel columns and hot-rolled steel shear walls) and civil infrastructure (bridges). The accuracy of digital state models was validated against traditional sensors (e.g., digital caliper, crack microscope, wire potentiometer). Overall, features measured from the 3D point cloud data presented a maximum error of ±0.10 in. (±2.5 mm), and surface features (i.e., crack widths) measured from the texture information in textured polygon meshes had a maximum error of ±0.010 in. (±0.25 mm). Results showed that digital state models perform similarly across all specimen surface types and between laboratory and field experiments. It is also shown that digital state models have great potential for structural assessment by significantly improving data collection, automation, change detection, visualization, and augmented reality, with significant opportunities for commercial development. Algorithms to analyze and extract information from digital state models, such as cracks, displacement, and buckling deformation, are developed and tested. Finally, the extensive data sets collected in this effort are shared for research development in computer vision-based infrastructure condition assessment, eliminating a major obstacle to advancing this field: the absence of publicly available data sets.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
6

Scharf, Alexander. "Terrestrial Laser Scanning for Wooden Facade-system Inspection." Thesis, Luleå tekniska universitet, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-77159.

Full text
Abstract:
The objective of this study was to evaluate the feasibility of measuring movement, deformation and displacement in wooden façade-systems by terrestrial laser scanning. An overview of different surveying techniques and methods was created. Point cloud structure and processing are explained in detail, as they are the foundation for understanding the advantages and disadvantages of laser scanning. The boundaries of monitoring simple and complex façade structures were tested with the phase-based laser scanner FARO Focus 3DS. In-field measurements of existing facades were done to show the capability of laser scanning to extract defect features such as cracks. The high noise in the data, caused by the limited precision of 3D laser scanners, is problematic: details on a scale of several mm are hidden by the noise. Methods to reduce the noise during point cloud processing have proven to be very data-specific. The uneven point cloud structure of a façade scan therefore made it difficult to find a method that works for whole scans. Automatically dividing the point cloud data into different façade parts, a process called segmentation, could make this possible. However, no suitable segmentation algorithm was found, and developing our own algorithm would have exceeded the scope of this thesis. The goal of automatic point cloud processing was therefore not fulfilled and was set aside in the further analyses of outdoor facades and laboratory experiments. The experimental scans showed that various kinds of information could be extracted from the scans. The accuracy of measured board and gap dimensions was, however, highly dependent on the point cloud cleaning steps, but provided information that could be used for tracking the development of a facade's features. Extensive calibration might improve the accuracy of the measurements.
Deviations of façade structures from flat planes were clearly visible when using colorization of point clouds, and this might be the main benefit of measuring spatial information of facades by non-contact methods. The determination of façade displacement was done under laboratory conditions: a façade panel was displaced manually, and displacement was calculated with different algorithms. The algorithm determining the distance to the closest point in a pair of point clouds provided the best results while being the simplest in terms of computational complexity. Out-of-plane displacement was the most suitable to detect with this method; displacement sideways or upwards required more advanced point cloud processing and manual interpretation by the software operator. Based on the findings of the study, it can be concluded that laser scanning is not the correct method for structural health monitoring of facades when tracking small deformations, especially deformations below 5 mm and defects like cracks, is the main goal. Displacements, defects and deformations of larger scale can be detected but require a large amount of point cloud processing. It is not clear whether the equipment costs, surveying time and the problems caused by the high variability of scan results with façade color, shape and texture are outweighed by the benefits of laser scanning over manual surveying.
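The closest-point algorithm the study found best can be sketched as follows (an illustrative brute-force version, not the software used in the thesis): for each point of the displaced scan, the distance to its nearest neighbour in the reference scan approximates the local displacement.

```python
import numpy as np

def cloud_to_cloud_distance(reference, displaced):
    """Nearest-neighbour distance from each displaced point to the reference cloud."""
    # Brute-force O(n*m) pairwise distances; real scans would use a k-d tree.
    diffs = displaced[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1)  # one distance per displaced point

before = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
after = before + np.array([0.0, 0.0, 0.005])  # 5 mm out-of-plane shift
displacement = cloud_to_cloud_distance(before, after)
```

This also shows why out-of-plane motion is the easiest case: a shift along the surface normal changes every nearest-neighbour distance, while sliding a panel sideways along its own plane barely does.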
APA, Harvard, Vancouver, ISO, and other styles
7

Crabtree, Gärdin David, and Alexander Jimenez. "Optical methods for 3D-reconstruction of railway bridges : Infrared scanning, Close range photogrammetry and Terrestrial laser scanning." Thesis, Luleå tekniska universitet, Byggkonstruktion och brand, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67716.

Full text
Abstract:
The forecast for the upcoming years estimates a growth in transport demand. As the railway sector in Europe has developed over many years, the infrastructure presents performance issues because, among other factors, asset maintenance activities are difficult and time consuming. There are currently 4000 railway bridges in Sweden managed by Trafikverket, which are subject to inspection at least every six years. The most common survey is done visually to determine the physical and functional condition of the bridges, as well as to find damages that may exist on them. Because visual inspection is a subjective evaluation technique, the results of these bridge inspections may vary from inspector to inspector. The data collection is time consuming, and the results are written in standard inspection reports which may not provide sufficient visualization of damages. The inspector also needs to move around the bridge at close distance, which could lead to unsafe working conditions. 3D modelling technology is becoming more and more common. Methods such as Close Range Photogrammetry (CRP) and Terrestrial Laser Scanning (TLS) are starting to be used for architecture and heritage preservation as well as engineering applications. Infrared (IR) scanning is also showing potential in creating 3D models but has not yet been used for structural analysis and inspections. A result of these methods is a point cloud, a 3D representation of a model in points, that can be used for creating as-built Building Information Modeling (BIM) models. In this study, the authors put these three methods to the test to see if IR scanning and CRP are, like TLS, suitable ways to gather data for 3D reconstruction of concrete railway bridges in fast, safe and non-disturbing ways. For this, the three technologies are applied to six bridges chosen by Trafikverket.
The further aim is to determine whether the 3D reconstructions can be used to acquire BIM information to, among other things, create as-built drawings and perform structural evaluations. As a result of the study, IR scanning and CRP, like TLS, show great potential for 3D reconstruction of concrete railway bridges in fast, safe and non-disturbing ways. Still, the technologies need further development before we can rely on them completely.
APA, Harvard, Vancouver, ISO, and other styles
8

Benneyworth, Laura Mahoney. "Distribution of Trace Elements in Cumberland River Basin Reservoir Sediments." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1113.

Full text
Abstract:
The U.S. Army Corps of Engineers, Nashville District, maintains ten reservoirs in the Cumberland River Basin in Kentucky and Tennessee, and has been monitoring sediment chemistry in the reservoirs since 1994. The purpose of this study is to evaluate the sediment data collected from the reservoirs from 1994 to 2010 to determine whether there are any spatial patterns in the trace elements arsenic, beryllium, cadmium, chromium, copper, lead, mercury, nickel, and zinc. The results indicated that trace element levels were consistent with national baseline concentrations measured by the U.S. Geological Survey. Center Hill reservoir had the greatest number of trace elements (all except cadmium) whose concentrations were significantly higher than in all other reservoirs. The degree of urbanization in the reservoir basins was based on population density from the 2000 Census and the percentage of developed land using the 2006 national land cover dataset. Aquatic toxicity values were used as a measure of sediment quality. The reservoirs with the worst aquatic toxicity rankings were not the most urban; instead, they were the reservoirs with the longest retention times. Therefore, it may be concluded that retention time has a larger effect on Cumberland River Basin sediment concentrations than the type of land use or the degree of urbanization. The results also indicate that it may be prudent to include an evaluation of quality based on aquatic toxicity when monitoring sediment quality, and that when reservoirs are the subject of sediment quality assessments, consideration of the physical properties of the reservoir, especially the retention time, is essential for a comprehensive evaluation. This may also imply that sediment quality in reservoirs may effectively be managed by water resource management techniques that affect retention time.
APA, Harvard, Vancouver, ISO, and other styles
9

Williams, Keith E. "Accuracy assessment of LiDAR point cloud geo-referencing." Thesis, 2012. http://hdl.handle.net/1957/30209.

Full text
Abstract:
Three-dimensional laser scanning has revolutionized spatial data acquisition and can be completed from a variety of platforms including airborne (ALS), mobile (MLS), and static terrestrial (TLS) laser scanning. MLS is a rapidly evolving technology that provides increases in efficiency and safety over static TLS, while still providing similar levels of accuracy and resolution. The componentry that makes up an MLS system is more akin to that of ALS than to that of TLS. However, achievable accuracies, precisions, and resolution results are not clearly defined for MLS systems. As such, industry professionals need guidelines to standardize the process of data collection, processing, and reporting. This thesis lays the foundation for MLS guidelines with a thorough review of currently available literature, completed in order to demonstrate the capabilities and limitations of a generic MLS system. A key difference between MLS and TLS is that a mobile platform is able to collect a continuous path of geo-referenced points along the navigation path, while a TLS collects points from many separate reference frames as the scanner is moved from location to location. Each individual TLS setup must be registered (linked with a common coordinate system) to adjoining scan setups. A study was completed comparing common methods of TLS registration and geo-referencing (e.g., target, cloud-to-cloud, and hybrid methods) to assist a TLS surveyor in deciding the most appropriate method for their projects. Results provide insight into the level of accuracy (mm to cm level) that can be achieved using the various methods, as well as the field collection and office processing time required to obtain a fully geo-referenced point cloud. Lastly, a quality assurance methodology has been developed for any form of LiDAR data to verify both the absolute and relative accuracy of a point cloud without the use of retro-reflective targets.
This methodology incorporates total station validation of a scanner's point cloud to compare slopes of common features. The comparison of 2D slope features across a complex geometry of cross-sections provides 3D positional error in both horizontal and vertical components. This methodology lowers the uncertainty of single-point accuracy statistics for point clouds by utilizing a larger portion of a point cloud for statistical accuracy verification. This use of physical features for accuracy validation is particularly important for MLS systems, because MLS systems cannot produce sufficient resolution on targets for accuracy validation unless the targets are placed close to the vehicle.
Graduation date: 2012
APA, Harvard, Vancouver, ISO, and other styles
10

Bates, Jordan Steven. "Oblique UAS imagery and point cloud processing for 3D rock glacier monitoring." Master's thesis, 2020. http://hdl.handle.net/10362/94396.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies
Rock glaciers play a large ecological role and are heavily relied upon by local communities for water, power, and revenue. With climate change, the rate at which they are deforming has increased over the years, making it more important to gain a better understanding of these geomorphological movements for improved predictions, correlations, and decision making. It is becoming increasingly practical to examine a rock glacier with 3D visualization to obtain more perspectives and realistic terrain profiles. Recently gaining more attention is the use of Terrestrial Laser Scanners (TLS) and Unmanned Aircraft Systems (UAS), used separately and combined, to gather high-resolution data for 3D analysis. These data are typically transformed into highly detailed Digital Elevation Models (DEM), where the DEM of Difference (DoD) is used to track changes over time. This study compares these commonly used collection and analysis methods to a newly conceived multirotor UAS collection method and to the new point cloud based Multiscale Model to Model Cloud Comparison (M3C2) change detection seen in recent studies. Data were collected on the Innere Ölgrube rock glacier in Austria with a TLS in 2012 and with a multirotor UAS in 2019. It was found that oblique imagery with terrain height corrections, which creates perspectives similar to those the TLS provides, increased the completeness of data collection for a better reconstruction of a rock glacier in 3D. The new method improves the completeness of data by an average of at least 8.6%. Keeping the data as point clouds provided a much better representation of the terrain. When transforming point clouds into DEMs with common interpolation methods, it was found that the average area of surface items could be exaggerated by 2.2 m^2, while point clouds were much more accurate, to within 0.3 m^2.
DoD and M3C2 results were compared, and it was found that DoD always reports a maximum increase at least 1.1 m larger and a maximum decrease 0.85 m larger than M3C2, with a larger standard deviation but similar mean values, which could be attributed to horizontal inaccuracies and smoothing of the interpolated data.
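The DoD analysis compared above amounts to a cell-by-cell subtraction of two DEM rasters from different survey epochs (the elevation values below are illustrative, not the thesis data):

```python
import numpy as np

# Two DEM rasters of the same area, elevations in metres (illustrative values).
dem_2012 = np.array([[10.0, 10.2],
                     [10.1, 10.3]])
dem_2019 = np.array([[10.0, 10.1],
                     [10.4, 10.3]])

# DEM of Difference: positive cells = surface gain, negative cells = loss.
dod = dem_2019 - dem_2012
max_increase = dod.max()
max_decrease = dod.min()
```

Because each raster cell is an interpolated elevation, the interpolation smoothing and any horizontal misalignment feed directly into `dod`, which is the weakness the point-cloud-based M3C2 comparison avoids.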
APA, Harvard, Vancouver, ISO, and other styles
11

Dinh-Xuan, Lam. "Quality of Experience Assessment of Cloud Applications and Performance Evaluation of VNF-Based QoE Monitoring." Doctoral thesis, 2018. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-169182.

Full text
Abstract:
In this thesis, various aspects of Quality of Experience (QoE) research are examined. The work is divided into three major blocks: QoE assessment, QoE monitoring, and VNF performance evaluation. First, prominent cloud applications such as Google Docs and a cloud-based photo album are explored. The QoE is characterized and the influence of packet loss and delay is studied. Afterwards, objective QoE monitoring for HTTP Adaptive Video Streaming (HAS) in the cloud is investigated. Additionally, by using a Virtual Network Function (VNF) for QoE monitoring in the cloud, the feasibility of an interworking of Network Function Virtualization (NFV) and the cloud paradigm is evaluated. To this end, a VNF that exploits deep packet inspection was used to parse the video traffic. An algorithm was then designed to estimate video quality and QoE based on network and application layer parameters. To assess the accuracy of the estimation, the VNF was measured in different scenarios under different network QoS conditions and in the virtual environment of the cloud architecture. The insights show that the different geographical deployments of the VNF influence the accuracy of the video quality and QoE estimation. Various Service Function Chain (SFC) placement algorithms have been proposed and compared in the context of edge cloud networks. On the one hand, this research is aimed at cloud service providers by providing methods for evaluating QoE for cloud applications. On the other hand, network operators can learn the pitfalls and disadvantages of using the NFV paradigm for such a QoE monitoring mechanism.
APA, Harvard, Vancouver, ISO, and other styles
12

Dias, Ana Catarina Duque. "Cyclist performance assessment based on WSN and cloud technologies." Master's thesis, 2018. http://hdl.handle.net/10071/18462.

Full text
Abstract:
Mobility in big cities is a growing problem, and the use of bicycles has been a solution which, together with new sharing services, helps to motivate users. There are also more and more users practicing sports involving the use of bicycles. It was in this context that the present dissertation was developed: a distributed sensor system for monitoring cyclists. With the support of a wireless sensor network connected to the internet, and using a set of smart sensors as end-nodes, it is possible to obtain data that will help the cyclist to improve his performance. The coach can monitor and evaluate performance to improve training sessions. The health status during training is also monitored using cardiac and respiratory assessment sensors. The information from the nodes of the wireless sensor network is uploaded, via the internet connection, to the Firebase platform. An Android mobile application has been developed that allows trainers to register cyclists, plan routes and observe the results collected by the network. With the inclusion of these technologies, the coach and the athlete may analyze the performance of a session and compare it with previous training results. New training sessions may be established according to the athlete's needs. The effectiveness of the proposed system was experimentally tested and several results are included in this dissertation.
APA, Harvard, Vancouver, ISO, and other styles
13

Zimmer, Camille. "Innovative techniques for the quantification of waterborne microbial risks in field studies." Thesis, 2019. http://hdl.handle.net/1828/11090.

Full text
Abstract:
In low-resource contexts, household-level point-of-use water treatment (POUWT) techniques are the final, and sometimes only, barrier against waterborne illnesses, and in these and other water-related applications, health risks can be quantified using one of two methods. Firstly, Escherichia coli (or other indicator organism) counts can be used to monitor water and determine adherence to a health-based limit (i.e. compliance monitoring). Secondly, E. coli can be used to conduct a quantitative microbial risk assessment (QMRA), indicating the level of protection conferred by a given POUWT device. Here, test water is spiked with E. coli to ascertain a reduction efficacy relative to that target organism, a process referred to as challenge testing, which is typically carried out in a laboratory context. Although both methods are well established, both have scope for improvement for effective field application in low-resource contexts. Regarding compliance monitoring, I assessed the performance of a new low-cost field kit for E. coli enumeration, which was designed by others. I also assessed the feasibility of re-using some disposable materials, in terms of sterility and mechanical wear. The use of the new low-cost field kit was successful during the fieldwork campaign; however, re-using disposable materials introduced a relatively high occurrence of false positive results during E. coli enumeration. Use of the new low-cost field kit can reduce financial barriers, thus enabling greater water quality testing coverage. Regarding challenge testing, the aim of this study was to adapt current protocols to assess the household performance (as opposed to laboratory performance) of POUWT techniques. I developed a conceptual framework to conduct Field Challenge Tests (FCTs) on POUWT techniques, using a probiotic health supplement containing E. coli as the challenge organism. I successfully carried out an FCT in Malawi with limited resources, verifying FCT viability.
Applications of such FCTs include quality control practices for manufactured devices, guiding QMRA and recommendations by public health organizations regarding POU device selection, and assessing the impact of user training programmes regarding POUWT techniques.
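Challenge testing of this kind conventionally reports device performance as a log reduction value, LRV = log10(influent count / effluent count). A minimal sketch:

```python
import math

def log_reduction(influent_cfu_per_100ml, effluent_cfu_per_100ml):
    """Log10 reduction value (LRV) of a treatment device, as used in
    challenge testing: LRV = log10(influent / effluent)."""
    return math.log10(influent_cfu_per_100ml / effluent_cfu_per_100ml)

# A device that reduces 1e6 CFU/100 mL to 1e2 CFU/100 mL
# achieves a 4-log (99.99%) reduction.
```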
APA, Harvard, Vancouver, ISO, and other styles
14

DOTTA, GIULIA. "Semi-automatic analysis of landslide spatio-temporal evolution." Doctoral thesis, 2017. http://hdl.handle.net/2158/1076767.

Full text
Abstract:
Remote sensing techniques represent a powerful instrument to detect and characterise earth-surface processes, especially through change detection approaches. In particular, TLS (Terrestrial Laser Scanning) and UAV (Unmanned Aerial Vehicle) photogrammetry make it possible to obtain high-resolution representations of the observed scenario as a three-dimensional array of points defined by x, y and z coordinates, namely a point cloud. In recent years, the use of 3D point clouds to investigate morphological changes occurring over a range of spatial and temporal scales has increased considerably. During the three-year PhD research programme, the effectiveness of point cloud exploitation for slope characterisation and monitoring was tested and evaluated by developing and applying a semi-automatic MATLAB tool. The proposed tool makes it possible to investigate the main morphological characteristics of unstable slopes using point clouds and to detect spatio-temporal morphological changes by comparing point clouds acquired at different times. Once a change detection threshold is defined, the routine performs a cluster analysis, automatically separating the zones characterised by significant distances and computing their area. The tool was tested on two sites characterised by different geological settings and instability phenomena: the San Leo rock cliff (Rimini province, Emilia Romagna region, northern Italy) and a clayey slope near the village of Ricasoli (Arezzo province, Tuscany region, central Italy). For both case studies, the main displacement or accumulation zones and detachment zones were mapped and described. Furthermore, the factors influencing the change detection results are discussed in detail.
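The core of the cloud-to-cloud comparison this abstract describes (nearest-neighbour distances, a change threshold, then clustering of the significant points) can be sketched in a few lines. This is a brute-force Python illustration under simplified assumptions, not the MATLAB tool itself:

```python
import math

def nn_distance(p, cloud):
    """Distance from point p to its nearest neighbour in `cloud` (brute force;
    real tools use spatial indexes such as k-d trees)."""
    return min(math.dist(p, q) for q in cloud)

def changed_points(cloud_t0, cloud_t1, threshold):
    """Points of the later epoch farther than `threshold` from the reference
    epoch: the core of cloud-to-cloud change detection."""
    return [p for p in cloud_t1 if nn_distance(p, cloud_t0) > threshold]

def cluster(points, eps):
    """Greedy single-linkage clustering of the changed points into zones."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= eps for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

Each resulting cluster corresponds to a candidate displacement, accumulation or detachment zone whose area can then be estimated.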
APA, Harvard, Vancouver, ISO, and other styles
15

NOBILE, ALESSIA. "I sistemi a scansione 3D per la documentazione metrica e lo studio diagnostico dei Beni Culturali. Dalla scala edilizia alla scala urbana. I casi studio della Basilica dell’Umiltà di Pistoia e delle Torri di San Gimignano." Doctoral thesis, 2013. http://hdl.handle.net/2158/797885.

Full text
Abstract:
The goal of the research has been to test laser scanning acquisition, management and three-dimensional representation methods and techniques, in order to provide valid documentation and diagnostic support aimed at the preservation of our built cultural heritage. The widespread use of scanning techniques does not yet allow us to consider concluded a research topic that today is mistakenly identified above all with the data acquisition phase. The problem is actually postponed to the later stages of processing and representation, and many issues remain only partially solved through an attempt at cultural integration between restoration, geomatics and electronics: it is both a challenge and an opportunity, which carries along an effort to overcome language barriers due to different cultural backgrounds, methodological approaches and educational paths.
The study has been conducted with a multi-scale approach: at the building scale, with focus on the Basilica of Santa Maria dell'Umiltà in Pistoia, within the research agreement signed by the Laboratory of Geomatics for Cultural Heritage of the University of Florence and the Soprintendenza per il Patrimonio Storico Artistico ed Etnoantropologico per le province di Firenze, Pistoia e Prato, in view of the restoration and reinforcement of this important Renaissance structure; and at the urban scale, within the project "Seismic Risk in Monumental Buildings (RISEM)", funded by the Region of Tuscany and coordinated by the Department of Civil and Environmental Engineering of the University of Florence, for the seismic risk evaluation of the San Gimignano towers. The above case studies raised the awareness that, on the basis of a 3D data set which can be updated and queried at any time, it is always possible to adjust the processing phase according to the agreed interdisciplinary goals. A new use of the laser scanning surveying technique is therefore proposed: attention is not given specifically to the artistic and architectural elements, and the aim is not only to represent an object in three dimensions in order to make qualitative assessments of its historical and cultural value. The idea is to read laser scanning data with a critical eye towards the structure itself and its more or less complex geometries. Focus shifts to the constituent and constructive elements, and to any evidence of cracks and deformations which may threaten the stability of the structure. The need for an "irrefutable model", to which design choices can be referred, forms the backbone of the research.
APA, Harvard, Vancouver, ISO, and other styles
16

(8803076), Jordan M. McGraw. "Implementation and Analysis of Co-Located Virtual Reality for Scientific Data Visualization." Thesis, 2020.

Find full text
Abstract:
Advancements in virtual reality (VR) technologies have led to overwhelming critique and acclaim in recent years. Academic researchers have already begun to take advantage of these immersive technologies across all manner of settings. Using immersive technologies, educators are able to more easily interpret complex information with students and colleagues. Despite the advantages these technologies bring, some drawbacks still remain. One particular drawback is the difficulty of engaging in immersive environments with others in a shared physical space (i.e., with a shared virtual environment). A common strategy for improving collaborative data exploration has been to use technological substitutions to make distant users feel they are collaborating in the same space. This research, however, is focused on how virtual reality can be used to build upon real-world interactions which take place in the same physical space (i.e., collaborative, co-located, multi-user virtual reality).

In this study we address two primary dimensions of collaborative data visualization and analysis as follows: [1] we detail the implementation of a novel co-located VR hardware and software system; [2] we conduct a formal user experience study of the novel system using the NASA Task Load Index (Hart, 1986) and introduce the Modified User Experience Inventory, a new user study inventory based upon the Unified User Experience Inventory (Tcha-Tokey, Christmann, Loup-Escande, & Richir, 2016), to empirically observe the dependent measures of Workload, Presence, Engagement, Consequence, and Immersion. A total of 77 participants volunteered to join a demonstration of this technology at Purdue University. In groups ranging from two to four, participants shared a co-located virtual environment built to visualize point cloud measurements of exploded supernovae. This study is observational rather than experimental. We found moderately high levels of user experience and moderate levels of workload demand in our results. We describe the implementation of the software platform and present user reactions to the technology that was created.
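The NASA Task Load Index mentioned above combines six subscale ratings into one workload score; in the weighted variant, each subscale is weighted by how often it was chosen across 15 pairwise comparisons. A minimal sketch of that scoring step (not the study's analysis code):

```python
def tlx_score(ratings, weights):
    """Weighted NASA-TLX workload score.

    `ratings`: 0-100 rating per subscale.
    `weights`: tallies from the 15 pairwise comparisons (sum to 15).
    The overall score is the weight-adjusted mean of the ratings.
    """
    assert sum(weights.values()) == 15, "pairwise tallies must sum to 15"
    total = sum(ratings[k] * weights[k] for k in ratings)
    return total / 15.0
```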
APA, Harvard, Vancouver, ISO, and other styles
