Dissertations / Theses on the topic 'Mesh data'


Consult the top 50 dissertations / theses for your research on the topic 'Mesh data.'


1

Karlsson, Simon. "A Data Collection Framework for Bluetooth Mesh Networks." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157660.

Abstract:
This thesis presents a framework for collecting network traffic data usable in performance evaluations of Bluetooth Mesh networks. The framework is designed to be adaptive, effective, and efficient. These design goals are intended to minimize resource usage and thereby take the constraints of Bluetooth Mesh into account. An implementation of the framework, based on the Bluetooth Mesh model concept, is also presented. The implementation is then validated and evaluated to analyse to what degree it fulfills the requirements of adaptive, effective, and efficient data collection. The evaluation demonstrates the importance of minimizing the size of the reports sent in the framework, since larger messages sent at short intervals have a noticeable effect on both the packet delivery ratio of user traffic and the reporting latency. It is also shown that the adaptive reporting feature, which aims to reduce the effect of the framework on user traffic by postponing reporting during high traffic loads, has a positive effect on neighboring nodes' overall packet delivery ratio.
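The postpone-while-busy behaviour described in this abstract can be sketched in a few lines. This is an illustrative Python sketch under assumed thresholds and interval lengths, not the framework's actual policy:

```python
# Illustrative sketch of adaptive reporting: a node postpones its status
# report while the observed channel load is high, so that reporting does
# not compete with user traffic. Thresholds and intervals are assumptions.

REPORT_INTERVAL = 10.0   # seconds between report attempts (assumed)
LOAD_THRESHOLD = 0.7     # postpone reporting above this load (assumed)
MAX_POSTPONE = 3         # never skip more than this many intervals

def should_report(channel_load: float, postponed: int) -> bool:
    """Report now unless the channel is busy, but never postpone forever."""
    if postponed >= MAX_POSTPONE:
        return True
    return channel_load < LOAD_THRESHOLD

postponed = 0
for tick, load in enumerate([0.2, 0.9, 0.8, 0.3]):
    if should_report(load, postponed):
        print(f"t={tick * REPORT_INTERVAL:.0f}s: send report (load={load})")
        postponed = 0
    else:
        postponed += 1
        print(f"t={tick * REPORT_INTERVAL:.0f}s: postpone (load={load})")
```

The cap on consecutive postponements keeps reporting latency bounded even under sustained high load, which mirrors the latency/delivery-ratio trade-off the evaluation measures.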
2

Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.

3

Lee, Kai-wah. "Mesh denoising and feature extraction from point cloud data." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42664330.

4

Al, Shbat Sherin [Verfasser]. "Decoupling Mesh and Data Representations for Geo-spatial Data Visualization / Sherin Al Shbat." Bremen : IRC-Library, Information Resource Center der Jacobs University Bremen, 2012. http://d-nb.info/1035265885/34.

5

Cheung, Steven. "Packet routing on mesh-connected computers /." [Hong Kong] : University of Hong Kong, 1992. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13209607.

6

Kruskall, Peter S. (Peter Stephen). "Collaborative internet and voice data transfer using bluetooth mesh networking." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/63013.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 61-62).
We present a new networking protocol, AirRAID, intended for wireless devices. Using the collective power of multiple devices within short-range communication sight, it extends the availability of a secondary medium over an ad-hoc mesh network that is resilient to the erratic movements of the mobile nodes of which it is composed. We suggest improvements to the Bluetooth discovery algorithm, making use of a quantized hop velocity space to lower the probability of two devices missing each other completely during discovery, and introduce the concept of redundant backup paths in the wireless mesh, allowing for improved reliability in dynamic mesh network situations.
by Peter S. Kruskall.
M.Eng.
7

Xiao, Fei. "Hexahedral Mesh Generation from Volumetric Data by Dual Interval Volume." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1532003347814656.

8

Maglo, Adrien Enam. "Progressive and Random Accessible Mesh Compression." Phd thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00966180.

Abstract:
Previous work on progressive mesh compression focused on triangle meshes, but meshes containing other types of faces are commonly used. Therefore, we propose a new progressive mesh compression method that can efficiently compress meshes with arbitrary face degrees. Its compression performance is competitive with approaches dedicated to progressive triangle mesh compression. Progressive mesh compression is linked to mesh decimation because both applications generate levels of detail. Consequently, we propose a new, simple volume metric to drive polygon mesh decimation. We apply this metric to the progressive compression and the simplification of polygon meshes. We then show that the features offered by progressive mesh compression algorithms can be exploited for 3D adaptation by proposing a new framework for remote scientific visualization. Progressive random-accessible mesh compression schemes can better adapt 3D mesh data to the various constraints by taking regions of interest into account. We therefore propose two new progressive random-accessible algorithms. The first is based on an initial segmentation of the input model; each generated cluster is compressed independently with a progressive algorithm. The second is based on the hierarchical grouping of vertices obtained by decimation. The advantage of this second method is that it offers high random-accessibility granularity and generates one-piece decompressed meshes with smooth transitions between parts decompressed at low and high levels of detail. Experimental results demonstrate the compression and adaptation efficiency of both approaches.
9

張治昌 and Steven Cheung. "Packet routing on mesh-connected computers." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1992. http://hub.hku.hk/bib/B3121020X.

10

Olsson, Rasmus, and Jens Egeland. "Reinforcement Learning Routing Algorithm for Bluetooth Mesh Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234287.

Abstract:
Today’s office and home environments are moving towards more connected digital infrastructures, meaning there are multiple heterogeneous devices that use short-range communication to stay connected. Mobile phones, tablets, laptops, sensors, and printers are examples of devices in such environments. From this, the Internet of Things (IoT) paradigm arises, and to enable it, energy-efficient machine-to-machine (M2M) communications are needed. Our study uses Bluetooth Low Energy (BLE) technology for communication between devices, and it demonstrates the impact of routing algorithms in such networks. With the goal of increasing the network lifetime, a distributed and dynamic Reinforcement Learning (RL) routing algorithm is proposed. The algorithm is based on an RL technique called Q-learning. Performance analysis is performed in different scenarios comparing the proposed algorithm against two static and centralized reference routing algorithms. The results show that our proposed RL routing algorithm performs better as the node degree of the topology increases. Compared to the reference algorithms, the proposed algorithm can handle a higher load on the network with significant performance improvement, due to the dynamic change of routes. The increase in network lifetime is 124% with 75 devices and 349% with 100 devices, because of the ability to change routes as time passes, which is emphasized when the node degree increases. For 35, 55 and 75 devices the average node degrees are 2.21, 2.39 and 2.54. With a lower number of devices, our RL routing algorithm performs nearly as well as the best reference algorithm, the Energy Aware Routing (EAR) algorithm, with a decrease in network lifetime of around 19% with 35 devices and 10% with 55 devices. The decrease in network lifetime for lower numbers of devices arises because the cost of learning new paths is higher than the gain from exploring multiple paths.
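The Q-learning scheme described in this abstract can be illustrated with a minimal sketch: Q[node][neighbor] estimates the cost of delivering a packet to the sink via that neighbor. The topology, unit per-hop cost, and parameter values below are assumptions for illustration, not the thesis' exact design:

```python
import random

# Minimal Q-learning routing sketch. Each node keeps a Q-value per
# neighbor estimating the cost-to-sink of forwarding via that neighbor;
# values are learned online from simulated packet deliveries.

topology = {                       # hypothetical mesh: node -> neighbors
    "A": ["B", "C"], "B": ["A", "C", "SINK"],
    "C": ["A", "B", "SINK"], "SINK": [],
}
Q = {n: {nb: 0.0 for nb in nbs} for n, nbs in topology.items()}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def next_hop(node: str) -> str:
    if random.random() < eps:                 # explore occasionally
        return random.choice(topology[node])
    return min(Q[node], key=Q[node].get)      # otherwise pick cheapest

def update(node: str, hop: str, cost: float) -> None:
    # one-hop cost plus the discounted best cost-to-go from the next hop
    future = min(Q[hop].values()) if Q[hop] else 0.0
    Q[node][hop] += alpha * (cost + gamma * future - Q[node][hop])

random.seed(0)
for _ in range(200):                          # learn from repeated deliveries
    node = "A"
    while node != "SINK":
        hop = next_hop(node)
        update(node, hop, cost=1.0)
        node = hop

print("learned next hop from A:", min(Q["A"], key=Q["A"].get))
```

Replacing the fixed per-hop cost with a cost that grows as a neighbor's battery drains would steer traffic away from depleted nodes, which is the mechanism by which such an algorithm can extend network lifetime.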
11

Randrianarivony, Maharavo. "Software pertaining to the preparation of CAD data from IGES interface for mesh-free and mesh-based numerical solvers." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700267.

Abstract:
We focus on the programming aspect of the treatment of digitized geometries for subsequent use in mesh-free and mesh-based numerical solvers. That perspective includes the description of our C/C++ implementations, which use OpenGL for the visualization and MFC classes for the user interface. We report on our experience implementing the IGES interface, which serves as the input format for storing geometric information. For mesh-free numerical solvers, it is helpful to decompose the boundary of a given solid into a set of four-sided surfaces. Additionally, we describe the treatment of diffeomorphisms on four-sided domains using transfinite interpolation. In particular, Coons and Gordon patches are appropriate for dealing with such mappings when the equations of the delineating curves are explicitly known. On the other hand, we show the implementation of mesh generation algorithms which invoke the Laplace-Beltrami operator. We start from coarse meshes which we refine according to generalized Delaunay techniques. Our software also supports the treatment of assemblies of solids in the B-Rep scheme.
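The Coons patch mentioned in this abstract has a simple closed form: blend the two ruled surfaces built from opposite boundary curves and subtract the bilinear interpolant of the four corners. A sketch, using a flat unit square as an assumed example boundary:

```python
import numpy as np

# A bilinearly blended Coons patch. c0/c1 are the v=0 and v=1 boundary
# curves (functions of u); d0/d1 are the u=0 and u=1 curves (functions
# of v). The curves must agree at the shared corners.

def coons(u, v, c0, c1, d0, d1):
    """Evaluate the Coons patch at (u, v) in [0, 1]^2."""
    P00, P10 = c0(0.0), c0(1.0)     # corners shared by adjacent curves
    P01, P11 = c1(0.0), c1(1.0)
    ruled_v = (1 - v) * c0(u) + v * c1(u)
    ruled_u = (1 - u) * d0(v) + u * d1(v)
    bilinear = ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
                + (1 - u) * v * P01 + u * v * P11)
    return ruled_v + ruled_u - bilinear

c0 = lambda u: np.array([u, 0.0])   # bottom edge
c1 = lambda u: np.array([u, 1.0])   # top edge
d0 = lambda v: np.array([0.0, v])   # left edge
d1 = lambda v: np.array([1.0, v])   # right edge

print(coons(0.5, 0.5, c0, c1, d0, d1))   # interior point of the square
```

By construction the patch interpolates all four boundary curves exactly, which is why it is suitable for building the four-sided parameterizations the abstract describes.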
12

Mara, Jösch Ronja. "Managing Microservices with a Service Mesh : An implementation of a service mesh with Kubernetes and Istio." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280407.

Abstract:
The adoption of microservices facilitates extending computer systems in size, complexity, and distribution. Alongside their benefits, they introduce the possibility of partial failures. Besides focusing on the business logic, developers have to tackle cross-cutting concerns of service-to-service communication, which now defines the applications' reliability and performance. Currently, developers use libraries embedded in the application code to address these concerns. However, this increases the complexity of the code and requires the maintenance and management of various libraries. The service mesh is a relatively new technology that may enable developers to stay focused on their business logic. This thesis investigates one of the available service meshes, Istio, to identify its benefits and limitations. The main benefits found are that Istio adds resilience and security, allows features that are currently difficult to implement, and enables a cleaner structure and a standard implementation of features within and across teams. The drawbacks are that it decreases performance by increasing CPU usage, memory usage, and latency. Furthermore, the main disadvantage of Istio is its limited testing tools. Based on the findings, the company's Webcore Infra team can make a more informed decision on whether or not to introduce Istio.
13

Choung, Yunjae. "Extraction of blufflines from 2.5 dimensional Delaunay triangle mesh using LiDAR data." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1251138890.

14

Agnihotri, Mohit Kumar. "Energy efficient topology formation for Bluetooth mesh networks using heterogeneous devices." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187022.

Abstract:
Internet of Things (IoT) is the latest trend in our living spaces, allowing machine-to-machine (M2M) communications at an extensive scale. To enable massive M2M communication and to let portable devices run on limited power supplies for extended durations, low-cost, energy-efficient wireless technologies are needed. Among the many competing technologies, including Wi-Fi, Bluetooth has shown the potential to be one of the strong candidates to act as the connectivity solution for the IoT, especially after the introduction of Bluetooth Low Energy (BLE). Nowadays BLE is one of the biggest players in the market of short-range wireless technologies. By 2020, nearly 30 billion BLE devices in the form of mobile phones, tablets, sports utilities, sensors, security systems and health monitors are expected to be shipped. This proliferation of low-cost devices may for the first time actualize the vision of the IoT. This thesis studies various mesh topology formation techniques that can be used to aid the development of large-scale capillary networks, focusing on BLE. In particular, the thesis focuses on how mesh networks can be established over BLE communications, especially by exploiting the heterogeneous characteristics of the devices. A novel algorithm called Topology Formation considering Role Suitability (TFRS) is proposed to maximize the network lifetime. The algorithm uses a newly introduced metric, the role suitability metric (RSM), to assign the best role among master, relay and slave to each device. The RSM bases its decision on various device characteristics including, but not limited to, energy, mobility, and computational capability. We use system-level simulation to evaluate the performance of the proposed algorithm against a reference under a homogeneous deployment scenario consisting of heterogeneous devices.
Results show that the network lifetime can be improved significantly when the topology is formed considering the device characteristics for both master role selection and relay selection. TFRS achieves moderate improvements over the reference case, ranging from 20% to 40% depending on the deployment characteristics.
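A role-suitability score of the kind this abstract describes can be sketched as a weighted sum over device characteristics, with thresholds mapping scores to roles. The weights, thresholds, and device profiles below are illustrative assumptions, not the actual RSM from the thesis:

```python
# Hypothetical role-suitability sketch: score each device from its
# characteristics, then let the best-provisioned devices take the most
# demanding roles. All constants here are assumptions for illustration.

WEIGHTS = {"energy": 0.5, "stability": 0.3, "compute": 0.2}

def rsm(device: dict) -> float:
    """Score a device; static, energy-rich, capable devices score high."""
    stability = 1.0 - device["mobility"]      # static devices are stable
    return (WEIGHTS["energy"] * device["energy"]
            + WEIGHTS["stability"] * stability
            + WEIGHTS["compute"] * device["compute"])

def assign_role(device: dict) -> str:
    score = rsm(device)
    if score > 0.8:
        return "master"      # best-provisioned devices coordinate traffic
    if score > 0.5:
        return "relay"       # decent devices forward for others
    return "slave"           # constrained devices only sleep and report

mains_hub = {"energy": 1.0, "mobility": 0.0, "compute": 0.9}   # plugged in
shelf_node = {"energy": 0.6, "mobility": 0.1, "compute": 0.3}  # static sensor
wearable = {"energy": 0.2, "mobility": 0.9, "compute": 0.1}    # on a person

for device in (mains_hub, shelf_node, wearable):
    print(assign_role(device))
```

The point of such a metric is that battery-powered, mobile devices are kept out of the relay and master roles that drain energy fastest, which is how topology formation can extend network lifetime.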
15

Nandwani, Mukta. "Real-time Remote Visualization of Scientific Data." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/33138.

Abstract:
Visualization of large amounts of simulation data is important for the understanding of most physical phenomena. The limited capabilities of desktop machines make them unsuitable for handling excessive amounts of simulation data. Present-day high-speed networks have made it possible to remotely visualize, in real time, the data being generated by a supercomputer. In order for such a system to be reliable, a robust communication protocol and an efficient compression mechanism are needed. This work presents a remote visualization system that addresses these issues, and emphasizes the design and implementation of the application-level network protocol. A control-theory-based adaptive rate control algorithm is presented for UDP streams that maximizes the effective throughput experienced by the stream while minimizing packet loss. The algorithm is shown to make the system responsive to changing network conditions. This makes the system deployable over any network, including the Internet.
Master of Science
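The adaptive rate-control idea in the abstract above can be illustrated by a simple additive-increase/multiplicative-decrease loop. The thesis describes a control-theoretic design, so the constants and structure here are assumptions for illustration:

```python
# Illustrative AIMD-style rate control for a UDP stream: probe the send
# rate upward while receiver reports show no loss, and back off
# multiplicatively when loss appears. Constants are assumptions.

def adapt_rate(rate_mbps, loss_fraction,
               increase=0.5, decrease=0.5,
               floor=1.0, ceiling=100.0):
    """Return the next send rate given the last interval's loss report."""
    if loss_fraction > 0.01:            # loss observed: back off sharply
        rate_mbps *= decrease
    else:                               # clean interval: probe upward
        rate_mbps += increase
    return min(max(rate_mbps, floor), ceiling)

rate = 10.0
for loss in [0.0, 0.0, 0.0, 0.12, 0.0, 0.0]:   # feedback per report interval
    rate = adapt_rate(rate, loss)
    print(f"loss={loss:.2f} -> rate={rate:.1f} Mbps")
```

The asymmetry (gentle increase, sharp decrease) is what lets the sender track the available bandwidth without sustaining heavy packet loss when conditions change.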
16

Gudla, Prabhakar Reddy. "Texture-based segmentation and finite element mesh generation for heterogeneous biological image data." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2395.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Biological Resources Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
17

Rokos, Georgios. "Scalable multithreaded algorithms for mutable irregular data with application to anisotropic mesh adaptivity." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/24812.

Abstract:
Anisotropic mesh adaptation is a powerful way to directly minimise the computational cost of mesh based simulation. It is particularly important for multi-scale problems where the required number of floating-point operations can be reduced by orders of magnitude relative to more traditional static mesh approaches. Increasingly, finite element/volume codes are being optimised for modern multicore architectures. Inter-node parallelism for mesh adaptivity has been successfully implemented by a number of groups using domain decomposition methods. However, thread-level parallelism using programming models such as OpenMP is significantly more challenging because the underlying data structures are extensively modified during mesh adaptation and a greater degree of parallelism must be realised while keeping the code race-free. In this thesis we describe a new thread-parallel implementation of four anisotropic mesh adaptation algorithms, namely edge coarsening, element refinement, edge swapping and vertex smoothing. For each of the mesh optimisation phases we describe how safe parallel execution is guaranteed by processing workitems in batches of independent sets and using a deferred-operations strategy to update the mesh data structures in parallel without data contention. Scalable execution is further assisted by creating worklists using atomic operations, which provides a synchronisation-free alternative to reduction-based worklist algorithms. Additionally, we compare graph colouring methods for the creation of independent sets and present an improved version which can run up to 50% faster than existing techniques. Finally, we describe some early work on an interrupt-driven work-sharing for-loop scheduler which is shown to perform better than existing work-stealing schedulers. 
Combining all aforementioned novel techniques, which are generally applicable to other unordered irregular problems, we show that despite the complex nature of mesh adaptation and inherent load imbalances, we achieve a parallel efficiency of 60% on an 8-core Intel(R) Xeon(R) Sandy Bridge and 40% using 16 cores on a dual-socket Intel(R) Xeon(R) Sandy Bridge ccNUMA system.
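The batches of independent sets mentioned in this abstract can be produced by graph colouring: vertices with the same colour share no edge, so they can be processed concurrently without races. A first-fit greedy sketch (not the thesis' improved colouring method) over a hypothetical conflict graph:

```python
# Greedy first-fit graph colouring. Each colour class is an independent
# set: no two vertices of the same colour are adjacent, so the work items
# they represent can be updated in parallel without data contention.

def greedy_colouring(adjacency: dict) -> dict:
    colour = {}
    for v in adjacency:                          # first-fit over vertices
        taken = {colour[n] for n in adjacency[v] if n in colour}
        c = 0
        while c in taken:                        # smallest unused colour
            c += 1
        colour[v] = c
    return colour

# Hypothetical patch of mesh vertices whose edges mark update conflicts.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

colours = greedy_colouring(adj)
sets = {}
for v, c in colours.items():
    sets.setdefault(c, []).append(v)
print(sets)   # each colour class is an independent set
```

In a parallel mesh-adaptation loop, the threads would then sweep the colour classes one at a time, synchronising only between classes rather than between individual vertex updates.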
18

Deo, Sonali. "Mesh Networking in Low Power Location Systems (Swarm)." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204558.

Abstract:
Today, Internet of Things (IoT) is the driving force in making operations and processes smart. Indoor localization is an application of IoT that has proven the potential of location awareness in countless scenarios, from mines to industries to even people. nanotron Technologies GmbH, based in Berlin, is one of the pioneers in low power location systems. nanotron's embedded location platform delivers location awareness for safety and productivity solutions across industrial and consumer markets. The platform consists of chips, modules and software that enable precise real-time positioning and concurrent wireless communication. The ubiquitous proliferation of interoperable platforms is creating the location-aware Internet of Things. One of their product families is swarm. A swarm is a group of independent radios, or nodes, which can communicate with their immediate neighboring nodes to obtain each other's positions. This position information is collected by one of the nodes (called the gateway) and delivered to the host controller. However, the nodes need to be in range to communicate. The company wants to improve the range of communication, and for that purpose I am implementing a routing protocol, with some additional changes for swarm, to allow out-of-range nodes to communicate via intermediate neighbors. This is called mesh networking; it results in a so-called 'mesh' of nodes and increases the range of swarm operation, which can help achieve uniform connectivity throughout large spaces without needing an excessive number of gateways. This is important because a node acting as a gateway should be 'awake' all the time so that it can collect data efficiently, while the other nodes can remain in power-saving mode. Mesh networking will allow data collection even with fewer such gateways, thereby being energy efficient while enabling a larger communication range.
This was made possible by adding a feature that allows nodes to store messages for neighbors that are asleep and deliver them when those neighbors wake up. This is done using a schedule that is built and updated in addition to the routing protocol. The purpose of this thesis is to justify the mesh routing protocol implemented for swarm among all the other routing protocols available. It also focuses on the modifications and improvements that were devised to tailor the protocol to how swarm works and to support Message Queuing Telemetry Transport (MQTT) on top of it at a later stage. MQTT is a lightweight messaging protocol that provides resource-constrained network clients with a simple way to distribute information. It uses a publish/subscribe communication pattern, is used for machine-to-machine (M2M) communication, and plays an important role in the Internet of Things. The implemented routing protocol also takes into consideration the sleeping nodes, route maintenance through advertisements, the hierarchical nature of the mesh to make data collection more efficient, message formats designed with the memory shortage in mind, etc. The document gives a thorough overview of the concepts, design, implementation, improvements and tests that prove the importance of mesh networking in the existing swarm.
19

Shan, Ju-Lin. "Research and application of adaptive finite element Mesh generation algorithm." Reims, 2007. http://theses.univ-reims.fr/exl-doc/GED00000709.pdf.

Abstract:
An improved adaptive triangle and tetrahedral mesh generator has been developed. The interfaces of the B-Rep, which is used to smooth over the differences among various CAD systems, and the mesh data structures based on topology and relation matrices are also introduced. For 3D combined surfaces, the Advancing Front Technique (AFT) is extended to overcome the mesh quality-worsening problem in closed-surface mesh generation caused by introducing virtual boundaries into 2D open parametric domains, resulting in high-quality meshes and guaranteed convergence on both open and closed surfaces. With the shifting-AFT, it is not necessary to introduce virtual boundaries, manually or automatically, when meshing a closed surface, and better-shaped triangles are generated. Compared with direct methods, the shifting-AFT avoids costly and unstable 3D geometrical computations in real space. During rollback in tetrahedral meshing, the advancing path is changed by adjusting the front's preferential factor, which significantly decreases the number of rollbacks. Moreover, node insertion based on a linear programming technique improves the convergence of the algorithm. Finally, a robust backward search method based on a walk-through algorithm is proposed to deal with searching problems in non-convex fields and to avoid infinite loops.
20

Cakmak, Ozan. "PRIVACY PRESERVATION IN A HYBRID MULTI MESH-LTE AMI NETWORK FOR SMART GRID." OpenSIUC, 2015. https://opensiuc.lib.siu.edu/theses/1720.

Abstract:
While the newly envisioned Smart(er) Grid (SG) will result in a more efficient and reliable power grid, its collection and use of fine-grained meter data has widely raised concerns about consumer privacy. While a number of approaches are available for preserving consumer privacy, these approaches are mostly impractical for two reasons: first, since the data is hidden, the utility company's ability to use the data for distribution state estimation is reduced; secondly, and more importantly, the approaches were not tested under the realistic wireless infrastructures that are currently in use. In this thesis, a meter data obfuscation approach is proposed that preserves consumer privacy while retaining the ability to perform distribution state estimation. Its performance is then assessed on LTE and on a large-scale Advanced Metering Infrastructure (AMI) network built upon the new IEEE 802.11s wireless mesh standard. An LTE/EPC (Evolved Packet Core) model is used between the gateway and the utility. The EPC's goal is to improve network performance through the separation of control and data planes and through a flattened IP architecture, which reduces the hierarchy between mobile data elements. Using the obfuscation values provided by this approach, the meter readings are obfuscated to protect consumer privacy from eavesdroppers and from the utility companies, while preserving the utility companies' ability to use the data for state estimation. The impact of this approach on data throughput, delay, and packet delivery ratio under a variety of conditions is assessed.
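One common way to realise the property this abstract describes (hiding individual readings while keeping the aggregate usable for state estimation) is to draw obfuscation offsets that sum to zero across a group of meters. The sketch below illustrates that idea under assumed values; it is not necessarily the thesis' exact construction:

```python
import random

# Zero-sum obfuscation sketch: each meter adds a random offset, and the
# offsets across the group cancel exactly, so every individual reading is
# hidden while the feeder-level aggregate is preserved for state
# estimation. Readings and spread are illustrative assumptions.

def zero_sum_offsets(n: int, spread: float, rng: random.Random) -> list:
    """n random offsets in [-spread, spread] whose sum is exactly zero."""
    offsets = [rng.uniform(-spread, spread) for _ in range(n - 1)]
    offsets.append(-sum(offsets))       # last offset cancels the rest
    return offsets

rng = random.Random(42)
readings = [1.2, 0.7, 2.4, 1.9]                    # kWh, hypothetical meters
offsets = zero_sum_offsets(len(readings), spread=1.0, rng=rng)
obfuscated = [r + o for r, o in zip(readings, offsets)]

print("aggregate (true):      ", round(sum(readings), 6))
print("aggregate (obfuscated):", round(sum(obfuscated), 6))
```

An eavesdropper who sees a single obfuscated reading learns little about the household behind it, while the utility, which only needs group aggregates for distribution state estimation, loses nothing.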
21

Xing, Baoyuan. "Improved 3D Heart Segmentation Using Surface Parameterization for Volumetric Heart Data." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/270.

Abstract:
Imaging modalities such as CT, MRI, and SPECT have had a tremendous impact on diagnosis and treatment planning. These imaging techniques have given doctors the capability to visualize the 3D anatomy of the human body and soft tissues while being non-invasive. Unfortunately, the 3D images produced by these modalities often have boundaries between the organs and soft tissues that are difficult to delineate due to low signal-to-noise ratios and other factors. Image segmentation is employed as a method for differentiating regions of interest in these images by creating artificial contours or boundaries in the images. There are many different techniques for performing segmentation, and automating these methods is an active area of research, but currently there are no generalized methods for automatic segmentation due to the complexity of the problem. Therefore hand-segmentation is still widely used in the medical community and is the "gold standard" by which all other segmentation methods are measured. However, existing manual segmentation techniques have several drawbacks: they are time-consuming, they introduce slice interpolation errors when segmenting slice-by-slice, and they are generally not reproducible. In this thesis, we present a novel semi-automated method for 3D hand-segmentation that uses mesh extraction and surface parameterization to project several 3D meshes onto a 2D plane. We hypothesize that allowing the user to better view the relationships between neighboring voxels will aid in delineating regions of interest, resulting in reduced segmentation time, alleviating slice interpolation artifacts, and being more reproducible.
22

Kpodzo, Elias, Marc DiLemmo, and Wearn-Juhn Wang. "Wireless Rotor Data Acquisition System." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595665.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
Flight test data acquisition systems have been widely deployed in helicopter certification programs for a few decades. A data acquisition system uses a series of strategically placed sensors to provide the instantaneous condition of the helicopter's components and structure. However, until recently, it has been difficult to collect flight test data from helicopter rotors in motion. Traditional rotor solutions have used slip rings to electrically connect fixed and rotating mechanical elements; but slip rings are inconvenient to use, prone to wear, and notoriously unreliable.
23

Balduzzi, Mathilde. "Plant canopy modeling from Terrestrial LiDAR System distance and intensity data." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20203.

Full text
Abstract:
The challenge of this thesis is to reconstruct the 3D geometry of vegetation from the distance and intensity data provided by a LiDAR 3D scanner. A shape-from-shading method by propagation is developed, to be combined with a Kalman-type fusion method for an optimal reconstruction of the leaves. -Introduction-The LiDAR data analysis shows that point cloud quality is variable and depends on the measurement setup: when the LiDAR laser beam reaches the edge of a surface (or a steeply inclined surface), it also integrates background measurements. Such setups produce outliers, and this kind of setup is common in foliage measurement, as foliage generally has a fragmented and complex shape. LiDAR scans are of poor quality, and the quantity of leaves in a scan makes manual correction of outliers tedious. The goal of this thesis is to develop a methodology for integrating the LiDAR intensity data with the distances so as to correct those outliers automatically. -Shape-from-shading-The shape-from-shading (SFS) principle is to reconstruct distance values from the intensities of a photographed object. The camera (LiDAR sensor) and the light source (LiDAR laser) have the same direction and are placed at infinity relative to the surface. This makes the effect of distance on intensity negligible and the hypothesis of an orthographic camera valid. In addition, the relationship between the beam's incidence angle and the intensity is known. Thanks to the LiDAR data analysis, we are able to choose the better of the two data sources (distance or intensity) for leaf reconstruction. An SFS algorithm that propagates along iso-intensity regions is developed; this type of algorithm allows us to integrate a Kalman-type fusion method. -Mathematical design of the method-The patches of surface corresponding to the iso-intensity regions are patches of so-called constant-slope surfaces, or sand-pile surfaces.
We use those surfaces to rebuild the 3D geometry corresponding to the scanned surfaces. We show that from knowledge of the 3D contour of an iso-intensity region, we can construct those sand-pile surfaces. The initialization of the first iso-intensity region contour (the propagation seed) is done with the 3D LiDAR data. The lines of greatest slope of those surfaces are generated. By propagating those lines (and thus the corresponding sand-pile surface), we build the other contour of the iso-intensity region. We then propagate the reconstruction iteratively. -Kalman filter-We can consider this propagation as the computation of a trajectory on the reconstructed surface. In our framework, the distance data is always available (3D scanner data). It is thus possible to choose which data (intensity vs. distance) is best for reconstructing the object surface. This can be done with a Kalman-type fusion filter. -Algorithm-To proceed with a reconstruction by propagation, it is necessary to order the iso-intensity regions hierarchically. Once the propagation seeds are found, they are initialized with the distances provided by the LiDAR. For each node of the hierarchy (corresponding to an iso-intensity region), the sand-pile surface reconstruction is done. -Manuscript-The thesis manuscript gathers five chapters. First, we give a short description of the LiDAR technology and an overview of traditional 3D surface reconstruction from point clouds. Then we present a state of the art of shape-from-shading methods. LiDAR intensity is studied in a third chapter, to define the strategy for correcting the distance effect and to establish the incidence angle vs. intensity relationship. A fourth chapter gives the principal results of this thesis: it gathers the theoretical approach of the SFS algorithm developed here, with its description and results when applied to synthetic images. Finally, a last chapter presents results of leaf reconstruction.
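The Kalman-type fusion mentioned above can be sketched as follows: two estimates of the same surface point, one from the raw LiDAR distance and one propagated by shape-from-shading from intensity, are combined, weighted by the inverse of their variances. This is a hedged illustration of the general technique; the function name, values, and variances are assumptions, not the thesis implementation.

```python
# Sketch of a scalar Kalman-style fusion step (illustrative only):
# the SFS-propagated estimate is corrected by the raw distance
# measurement, with a gain set by their relative variances.

def kalman_fuse(z_dist, var_dist, z_sfs, var_sfs):
    """Fuse a raw distance measurement with an SFS-propagated estimate."""
    gain = var_sfs / (var_sfs + var_dist)        # Kalman gain
    fused = z_sfs + gain * (z_dist - z_sfs)      # corrected estimate
    fused_var = (1.0 - gain) * var_sfs           # uncertainty never grows
    return fused, fused_var

# A noisy edge measurement (large variance) barely moves the SFS estimate:
value, variance = kalman_fuse(z_dist=2.9, var_dist=4.0, z_sfs=2.5, var_sfs=0.1)
```

This captures why fusion helps with outliers: measurements taken at surface edges carry large variance, so they contribute little to the fused estimate.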
APA, Harvard, Vancouver, ISO, and other styles
24

Chai, Yi. "A novel progressive mesh representation method based on the half-edge data structure and √3 subdivision." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5797.

Full text
Abstract:
Progressive mesh (PM) representation can perfectly meet the requirement of generating multiple resolutions of a detailed 3D model. This research proposes a new PM representation method to improve PM storage efficiency and reduce PM generation time. In existing PM representation methods, more than four adjacent vertices are stored for each vertex in the PM representation, and the methods use the inefficient vertex and face list representation during the generation process. In our proposed method, only three vertices are stored by using the √3 subdivision scheme, and the efficient half-edge data structure replaces the vertex and face list representation. To evaluate the proposed method, a designed experiment is conducted using three common test 3D models. The results illustrate the improvements compared to previous methods.
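As a hedged illustration of the half-edge data structure the abstract relies on (the general structure, not the thesis code), a minimal sketch:

```python
# Minimal half-edge sketch: each half-edge stores its origin vertex,
# its twin on the adjacent face, and the next half-edge around its own
# face, which makes local adjacency queries O(1).
from dataclasses import dataclass
from typing import Optional

@dataclass
class HalfEdge:
    origin: int                          # vertex the half-edge starts from
    twin: Optional["HalfEdge"] = None    # opposite half-edge, if any
    nxt: Optional["HalfEdge"] = None     # next half-edge around the face

def make_triangle(v0: int, v1: int, v2: int):
    """Build the three linked half-edges of one triangular face."""
    a, b, c = HalfEdge(v0), HalfEdge(v1), HalfEdge(v2)
    a.nxt, b.nxt, c.nxt = b, c, a
    return a, b, c

# Following `nxt` walks the face loop and returns to the start:
e0, _, _ = make_triangle(0, 1, 2)
face = [e0.origin, e0.nxt.origin, e0.nxt.nxt.origin]   # [0, 1, 2]
```

Compared to plain vertex and face lists, this linked form is what makes neighborhood traversals during subdivision cheap.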
APA, Harvard, Vancouver, ISO, and other styles
25

Quan, Yongyun. "Topology-based Device Self-identification in Wireless Mesh Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261147.

Full text
Abstract:
In the context of the Internet of Things (IoT), commissioning is the process of securely adding a new device to a network. It covers many different tasks, including the physical deployment of devices and the configuration of parameters. Network installers must manually commission each device one by one with the help of commissioning tools. In practice, the first task for a network installer is to identify each device correctly before configuring it with the proper parameters. Individually identifying each device, especially in a large network, is a very time-consuming process. This is known as the identification problem, and this project addresses it. A novel device identification approach is presented in the thesis, with no human intervention involved in the identification process. Devices identify themselves based on predefined rules and given information. The approach is therefore called device self-identification, and it is implemented in two different algorithms: one is centralized device self-identification, and the other is distributed device self-identification. In short, only one device participates in the identification process in the centralized approach, while in the distributed counterpart every device is part of the identification process. The results of the implementations show the potential of this new way to identify devices in IoT. Devices in both the centralized and the distributed approach are able to identify themselves given the necessary information about the network. A detailed discussion of the two proposed algorithms and the network information is presented in the thesis.
APA, Harvard, Vancouver, ISO, and other styles
26

Rahat, Alma As-Aad Mohammad. "Hybrid evolutionary routing optimisation for wireless sensor mesh networks." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21330.

Full text
Abstract:
Battery powered wireless sensors are widely used in industrial and regulatory monitoring applications. This is primarily due to the ease of installation and the ability to monitor areas that are difficult to access. Additionally, they can be left unattended for long periods of time. However, there are many challenges to successful deployments of wireless sensor networks (WSNs). In this thesis we draw attention to two major challenges. Firstly, with a view to extending network range, modern WSNs use mesh network topologies, where data is sent either directly or by relaying data from node-to-node en route to the central base station. The additional load of relaying other nodes’ data is expensive in terms of energy consumption, and depending on the routes taken some nodes may be heavily loaded. Hence, it is crucial to locate routes that achieve energy efficiency in the network and extend the time before the first node exhausts its battery, thus improving the network lifetime. Secondly, WSNs operate in a dynamic radio environment. With changing conditions, such as modified buildings or the passage of people, links may fail and data will be lost as a consequence. Therefore in addition to finding energy efficient routes, it is important to locate combinations of routes that are robust to the failure of radio links. Dealing with these challenges presents a routing optimisation problem with multiple objectives: find good routes to ensure energy efficiency, extend network lifetime and improve robustness. This is however an NP-hard problem, and thus polynomial time algorithms to solve this problem are unavailable. Therefore we propose hybrid evolutionary approaches to approximate the optimal trade-offs between these objectives. 
In our approach, we use novel search-space pruning methods for network graphs, based on k-shortest paths, partially and edge-disjoint paths, and graph reduction, to combat the combinatorial explosion in search-space size and consequently conduct rapid optimisation. The proposed methods can successfully approximate optimal Pareto fronts. The estimated fronts contain a wide range of robust and energy-efficient routes. The fronts typically also include solutions with a network lifetime close to the optimal lifetime achievable if the number of routes per node were unconstrained. These methods are demonstrated in a real network deployed at the Victoria & Albert Museum, London, UK.
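The multi-objective view described above can be sketched as a Pareto-front filter: candidate route sets are scored on objectives to be minimised, and only the non-dominated ones survive. The two objective names and the numbers below are illustrative assumptions, not figures from the thesis.

```python
# Minimal Pareto-front filter over two minimised objectives,
# e.g. (energy cost, robustness penalty) per candidate route set.

def pareto_front(candidates):
    """Keep candidates not dominated on both (minimised) objectives."""
    def dominated(c):
        return any(o != c and o[0] <= c[0] and o[1] <= c[1]
                   for o in candidates)
    return [c for c in candidates if not dominated(c)]

routes = [(3.0, 0.9), (2.0, 1.5), (4.0, 0.8), (2.5, 1.5)]
front = pareto_front(routes)
# (2.5, 1.5) is dominated by (2.0, 1.5); the other three trade off
# one objective against the other and survive.
```

An evolutionary algorithm repeats this filtering over generated candidates; the pruning methods in the thesis keep the candidate pool small enough for this to be fast.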
APA, Harvard, Vancouver, ISO, and other styles
27

Lavén, Andreas. "Multi-Channel Anypath Routing for Multi-Channel Wireless Mesh Networks." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5370.

Full text
Abstract:

Increasing capacity in wireless mesh networks can be achieved by using multiple channels and radios. By using different channels, two nodes can send packets at the same time without interfering with each other. To utilize the diversity of available frequencies, cards typically use channel switching, which implies significant overhead in terms of delay. The assignment of channels needs to be coupled with routing decisions, as routing influences topology and traffic demands, which in turn impact the channel assignment.

Routing algorithms for wireless mesh networks differ from routing algorithms that are used in wired networks. In wired networks, the number of hops is usually the only metric that matters. Wireless networks, on the other hand, must consider the quality of different links, as it is possible for a path with a larger amount of hops to be better than a path with fewer hops.

Typical routing protocols for wireless mesh networks such as Optimized Link State Routing (OLSR) use a single path to send packets from source to destination. This path is precomputed based on link state information received through control packets. Considering more information than hop count in the routing process has been shown to be beneficial, as, for example, link quality and physical-layer data rate determine the quality of the end-to-end path. In multi-channel mesh networks, channel-switching overhead and channel diversity also need to be considered in the routing metric. However, a major drawback of current approaches is that a path is precomputed and used as long as it is available and shows a good enough metric. As a result, short-term variations in link quality or channel switching are not considered.

In this thesis, a new routing protocol is designed that provides a set of alternative forwarding candidates for each destination. To minimize delay (from both transmission and channel switching), a forwarding mechanism is developed that selects one of the available forwarding candidates for each packet. The implementation was tested on an ARM-based multi-radio platform; the results show that in a simple evaluation scenario the average delay was reduced by 22% compared to single-path routing.
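The per-packet forwarding decision described above can be sketched as follows: among the precomputed forwarding candidates, pick the one minimising the sum of expected transmission delay and channel-switching delay. The candidate tuples and delay figures are illustrative assumptions, not measurements from the thesis.

```python
# Sketch of anypath-style next-hop selection over forwarding candidates.
SWITCH_DELAY_MS = 5.0   # assumed cost of retuning the radio to a new channel

def pick_next_hop(candidates, current_channel):
    """candidates: iterable of (node, channel, tx_delay_ms) tuples."""
    def total_delay(cand):
        _, channel, tx_delay = cand
        switch = 0.0 if channel == current_channel else SWITCH_DELAY_MS
        return tx_delay + switch
    return min(candidates, key=total_delay)

# A slower link on the current channel can beat a faster link that
# would force a channel switch:
best = pick_next_hop([("a", 1, 8.0), ("b", 6, 4.0)], current_channel=1)
# best == ("a", 1, 8.0): 8.0 ms total vs 4.0 + 5.0 ms
```

The point of the candidate set is exactly this flexibility: the choice can react to short-term variations instead of committing to one precomputed path.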

APA, Harvard, Vancouver, ISO, and other styles
28

Pieskä, Marcus. "Emulating Software-Defined Small-Cell Wireless Mesh Networks Using ns-3 and Mininet." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-68795.

Full text
Abstract:
The objective of this thesis was to create a network emulator, suitable for evaluating solutions in a small-cell wireless mesh SDN backhaul network environment, by integrating existing software. The most important efforts in this process have been a transparent integration of Mininet and ns-3 at both the data and the control plane, with ns-3 serving as the front-end. The goal has been to design the system such that solutions revolving around fast failover, resilient routing, and energy-efficient small-cell management may be evaluated. The constituent components include an augmented ns-3 WiFi module with millimeter wave communication capabilities; a socket API suitable for remote-controller management; and the network emulator Mininet, which in turn integrates Open vSwitch, virtual hosts in the form of Linux network namespaces, and OpenFlow controllers. The work has also included a brief evaluation of the system, which revealed that the design has a fundamental flaw.
APA, Harvard, Vancouver, ISO, and other styles
29

Sun, Zhen. "Latency-aware Optimization of the Existing Service Mesh in Edge Computing Environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254621.

Full text
Abstract:
Edge computing, as an approach to leveraging computation capabilities located in different places, is widely deployed in industry nowadays. With the development of edge computing, many big companies are moving from the traditional monolithic software architecture to the microservice design. To provide better performance for applications that contain numerous loosely coupled modules deployed among multiple clusters, service routing among clusters needs to be effective. However, most existing solutions are dedicated to a static service routing and load balancing strategy, and thus the performance of the application cannot be effectively optimized when network conditions change. To address this problem, we propose a dynamic weighted round robin algorithm and implement it on top of the cutting-edge service mesh Istio. The solution is implemented as a Docker image called RoutingAgent, which is simple to deploy and manage. With the RoutingAgent running in the system, the weights of the target routing clusters are dynamically changed based on the detected inter-cluster network latency. Consequently, the client-side request turnaround time is decreased. The solution is evaluated in an emulated environment. Compared to Istio without RoutingAgent, the experimental results show that client-side latency can be effectively minimized by the proposed solution in a multi-cluster environment with dynamic network conditions. In addition to minimizing response time, emulation results demonstrate that the loads of each cluster are well balanced.
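The dynamic weighted round robin idea can be sketched as recomputing routing weights from measured inter-cluster latencies, so lower-latency clusters receive proportionally more traffic. The cluster names, numbers, and weighting rule below are illustrative assumptions, not RoutingAgent or Istio internals.

```python
# Sketch: map measured latencies to inverse-proportional routing weights.

def latency_to_weights(latency_ms):
    """Map {cluster: latency in ms} to integer weights summing to ~100."""
    inverse = {c: 1.0 / ms for c, ms in latency_ms.items()}
    total = sum(inverse.values())
    return {c: round(100 * v / total) for c, v in inverse.items()}

weights = latency_to_weights({"cluster-a": 10.0, "cluster-b": 30.0})
# cluster-a, with a third of the latency, gets about three times the weight
```

Re-running this whenever a latency probe completes is what makes the round robin "dynamic": the weight table tracks current network conditions instead of a static configuration.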
APA, Harvard, Vancouver, ISO, and other styles
30

Cho, Minn, and Philipe Granhäll. "An Analysis on Bluetooth Mesh Networks and its Limits to Practical Use." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301847.

Full text
Abstract:
A mesh network is a technology that is being repopularized and becoming commonly used by the general public. As this increase in use is observed, technologies such as Bluetooth are being adapted to create mesh variants. In this thesis, a Bluetooth mesh network is created and tested using Raspberry Pi 4s and the Bluetooth interface btferret. The thesis attempts to approach the limits of this technology using accessible tools, outlining the performance the network possesses, to serve as a guideline for determining whether it is suitable for the tasks at hand. Experimentation is split into two overarching methods, testing latency and throughput. The thesis goes on to expose these tests to different stressors, categorized as either internal or external. The data collected aims to show the impacts of internal properties: the size of the packets transmitted, the size of the network, and the number of hops a packet is able to make within the network. The external factors tested consist of various environmental properties in the form of obstacles and interference: walls and a microwave oven were used as obstacles, while WiFi and other Bluetooth signals were used for interference. The results show that Bluetooth Low Energy (BLE) mesh networks are clearly affected by several internal and external factors. From the experimentation conducted, the thesis illustrates the relative effect of each property the tests are exposed to.
APA, Harvard, Vancouver, ISO, and other styles
31

Mara, Hubert [Verfasser], and Willi [Akademischer Betreuer] Jäger. "Multi-Scale Integral Invariants for Robust Character Extraction from Irregular Polygon Mesh Data / Hubert Mara ; Betreuer: Willi Jäger." Heidelberg : Universitätsbibliothek Heidelberg, 2012. http://d-nb.info/1177039567/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Limper, Max [Verfasser], Dieter W. [Akademischer Betreuer] Fellner, and Marc [Akademischer Betreuer] Alexa. "Automatic Optimization of 3D Mesh Data for Real-Time Online Presentation / Max Limper ; Dieter W. Fellner, Marc Alexa." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1162275170/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Yinan. "Integrated Mobility and Service Management for Network Cost Minimization in Wireless Mesh Networks." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27622.

Full text
Abstract:
In this dissertation research, we design and analyze integrated mobility and service management for network cost minimization in Wireless Mesh Networks (WMNs). We first investigate the problem of mobility management in WMNs, for which we propose two efficient per-user mobility management schemes based on pointer forwarding, and then a third one that integrates routing-based location update and pointer forwarding for further performance improvement. We further study integrated mobility and service management, for which we propose protocols that support efficient mobile data access services with cache consistency management, and mobile multicast services. We also investigate reliable and secure integrated mobility and service management in WMNs, and apply the idea to the design of a protocol for secure and reliable mobile multicast. The most salient feature of our protocols is that they are optimal on a per-user basis (or on a per-group basis for mobile multicast); that is, the overall network communication cost incurred is minimized for each individual user (or group). Per-user based optimization is critical because mobile users normally have vastly different mobility and service characteristics. Thus, the overall cost saving due to per-user based optimization is cumulatively significant with an increasing mobile user population. To evaluate the performance of our proposed protocols, we develop mathematical models and computational procedures used to compute the network communication cost incurred and build simulation systems for validating the results obtained from analytical modeling. We identify optimal design settings under which the network cost is minimized for our mobility and service management protocols in WMNs. Intensive comparative performance studies are carried out to compare our protocols with existing work in the literature.
The results show that our protocols significantly outperform existing protocols under identical environmental and operational settings. We extend the design notion of integrated mobility and service management for cost minimization to MANETs and propose a scalable dual-region mobility management scheme for location-based routing. The basic design concept is to use local regions to complement home regions and have mobile nodes in the home region of a mobile node serve as location servers for that node. We develop a mathematical model to derive the optimal home region size and local region size under which the overall network cost incurred is minimized. Through a comparative performance study, we show that dual-region mobility management outperforms existing mobility management schemes based on static home regions.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
34

Brage, Carl. "Synchronizing 3D data between software : Driving 3D collaboration forward using direct links." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175165.

Full text
Abstract:
In the area of 3D visualization there are often several stages in the design process. These stages can involve creating a model, applying a texture to the model, and creating a rendered image from the model. Some software can handle all stages of the process, while some focuses on a single stage to perfect and narrow down the service provided. In this case there needs to be a way to transfer 3D data between software efficiently, without the user experience suffering. This thesis explores the area of 3D data synchronization, first building a foundation through a prestudy and a literature study. The findings from these studies are used in a shared file-based implementation and in the design of a network-based system. The work presented in this thesis forms a comprehensive overview which can be used for future work.
APA, Harvard, Vancouver, ISO, and other styles
35

Le, goff Nicolas. "Construction of a conformal hexahedral mesh from volume fractions : theory and applications." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG033.

Full text
Abstract:
This thesis addresses the problem of the automatic generation of purely hexahedral meshes for simulation codes when the input is a mesh carrying volume fraction data, meaning that there can be several materials inside one cell. The proposed approach should create a hexahedral mesh where each cell corresponds to a single material, and where interfaces between materials form smooth surfaces. From a theoretical standpoint, we aim at adapting and extending state-of-the-art techniques, and we apply them to a variety of examples: some classically issued from CAD models (and imprinted onto a mesh to obtain volume fractions), some procedurally generated cases, and others in an intercode capacity, where we take the results of a first simulation code as our inputs. We first define a metric that allows the evaluation of our (or others') results and a method to improve those; we then introduce a discrete material interface reconstruction method inspired by the scientific visualization field; and finally we present an algorithmic pipeline, called ELG, that offers a guarantee on mesh quality by performing geometrical and topological mesh adaptation.
APA, Harvard, Vancouver, ISO, and other styles
36

Gurung, Topraj. "Compact connectivity representation for triangle meshes." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47709.

Full text
Abstract:
Many digital models used in entertainment, medical visualization, material science, architecture, Geographic Information Systems (GIS), and mechanical Computer Aided Design (CAD) are defined in terms of their boundaries. These boundaries are often approximated using triangle meshes. The complexity of models, which can be measured by triangle count, increases rapidly with the precision of scanning technologies and with the need for higher resolution. An increase in mesh complexity results in an increase of storage requirement, which in turn increases the frequency of disk access or cache misses during mesh processing, and hence decreases performance. For example, in a test application involving a mesh with 55 million triangles in a machine with 4GB of memory versus a machine with 1GB of memory, performance decreases by a factor of about 6000 because of memory thrashing. To help reduce memory thrashing, we focus on decreasing the average storage requirement per triangle measured in 32-bit integer references per triangle (rpt). This thesis covers compact connectivity representation for triangle meshes and discusses four data structures: 1. Sorted Opposite Table (SOT), which uses 3 rpt and has been extended to support tetrahedral meshes. 2. Sorted Quad (SQuad), which uses about 2 rpt and has been extended to support streaming. 3. Laced Ring (LR), which uses about 1 rpt and offers an excellent compromise between storage compactness and performance of mesh traversal operators. 4. Zipper, an extension of LR, which uses about 6 bits per triangle (equivalently 0.19 rpt), therefore is the most compact representation. The triangle mesh data structures proposed in this thesis support the standard set of mesh connectivity operators introduced by the previously proposed Corner Table at an amortized constant time complexity. They can be constructed in linear time and space from the Corner Table or any equivalent representation. 
If geometry is stored as 16-bit coordinates, using Zipper instead of the Corner Table increases the size of the mesh that can be stored in core memory by a factor of about 8.
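The constant-time connectivity operators referred to above (those of the Corner Table) can be sketched in a few lines; the two-triangle mesh and the `swing` helper below are illustrative toys, not code from the thesis:

```python
# Illustrative corner-table connectivity for a triangle mesh.
# Corners are numbered so that corners 3t, 3t+1, 3t+2 belong to triangle t.

def triangle(c):
    """Triangle containing corner c."""
    return c // 3

def next_corner(c):
    """Next corner within the same triangle."""
    return 3 * triangle(c) + (c + 1) % 3

def prev_corner(c):
    """Previous corner within the same triangle."""
    return 3 * triangle(c) + (c + 2) % 3

# Two triangles sharing edge (1, 2): (0, 1, 2) and (1, 3, 2).
V = [0, 1, 2, 1, 3, 2]        # V[c]: vertex index at corner c
O = [4, -1, -1, -1, 0, -1]    # O[c]: corner opposite c; -1 marks a border edge

def swing(c):
    """Step to a corner of an adjacent triangle around vertex V[c];
    returns -1 when the rotation hits a border."""
    opp = O[next_corner(c)]
    return next_corner(opp) if opp != -1 else -1
```

Each operator runs in constant time; the thesis's compact structures preserve this amortized-constant-time behaviour while shrinking the underlying tables.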
APA, Harvard, Vancouver, ISO, and other styles
37

Joshi, Shriyanka. "Reverse Engineering of 3-D Point Cloud into NURBS Geometry." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1595849563494564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Berbel, Talita dos Reis Lopes. "Recomendação semântica de documentos de texto mediante a personalização de agregações OLAP." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/632.

Full text
Abstract:
With the rapid growth of unstructured data, such as text documents, it becomes increasingly interesting and necessary to extract such information to support decision making in business intelligence systems. Recommendations can be used in the OLAP process because they allow users to have a particular experience in exploring data. The recommendation process, together with the possibility of query personalisation, allows recommendations to become increasingly relevant. The main contribution of this work is an effective solution for the semantic recommendation of documents through the personalisation of OLAP aggregation queries in a data warehousing environment. In order to aggregate and recommend documents, we propose the use of semantic similarity. A domain ontology and a statistical frequency measure are used to verify the similarity between documents. The similarity threshold between documents in the recommendation process is adjustable, and this adjustability is the personalisation that gives the user an interactive way to improve the relevance of the results. The proposed case study is based on articles from PubMed and its domain ontology in order to build a prototype using real data. The results of the experiments are presented and discussed, showing that good recommendations and aggregations are possible with the suggested approach. The results are discussed on the basis of the evaluation measures precision, recall and F1-measure.
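A rough sketch of the threshold-based similarity filtering the abstract describes, using plain term-frequency cosine similarity (the thesis combines a domain ontology with frequency statistics; everything here, including the function names, is illustrative):

```python
# Toy document recommendation: recommend documents whose cosine similarity to
# the query document meets a user-adjustable threshold (the personalisation knob).
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity of two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(query_doc, corpus, threshold=0.5):
    """Return corpus documents at least `threshold`-similar to the query."""
    q = tf_vector(query_doc)
    return [d for d in corpus if cosine(q, tf_vector(d)) >= threshold]
```

Raising the threshold narrows the recommendations; lowering it widens them, which is the interactive tuning described in the abstract.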
APA, Harvard, Vancouver, ISO, and other styles
39

Carosi, Robert. "Protractor: Leveraging distributed tracing in service meshes for application profiling at scale." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232139.

Full text
Abstract:
Large scale Internet services are increasingly implemented as distributed systems in order to achieve fault tolerance, availability, and scalability. When requests traverse multiple services, end-to-end metrics no longer tell a clear picture. Distributed tracing emerged to break down end-to-end latency on a per-service basis, but only answers where a problem occurs, not why. From user research we found that root-cause analysis of performance problems is often still done by manually correlating information from logs, stack traces, and monitoring tools. Profilers provide fine-grained information, but we found they are rarely used in production systems because of the required changes to existing applications, the substantial storage requirements they introduce, and the difficulty of correlating profiling data with information from other sources. The proliferation of modern low-overhead profilers opens up possibilities for online, always-on profiling in production environments. We propose Protractor as the missing link that exploits these possibilities to provide distributed profiling. It features a novel approach that leverages service meshes for application-level transparency, and uses anomaly detection to selectively store relevant profiling information. Profiling information is correlated with distributed traces to provide contextual information for root-cause analysis. Protractor has support for different profilers, and experimental work shows that the impact on end-to-end request latency is less than 3%. The utility of Protractor is further substantiated by a survey showing that the majority of the participants would use it frequently.
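A minimal sketch of the selective-storage idea, assuming a simple rolling mean/standard-deviation anomaly test (the abstract does not specify Protractor's detector; the class and parameter names below are assumptions):

```python
# Keep a profile only when the request latency is anomalous relative to a
# rolling baseline, so storage grows with problems rather than with traffic.
from collections import deque
from statistics import mean, stdev

class SelectiveProfiler:
    def __init__(self, window=100, k=3.0):
        self.history = deque(maxlen=window)  # recent latencies (ms)
        self.k = k                           # sigma multiplier for the threshold
        self.stored = []                     # stand-in for the profile store

    def observe(self, latency_ms, profile):
        """Record a latency; store `profile` only if the latency is anomalous."""
        anomalous = False
        if len(self.history) >= 10:          # need a baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = latency_ms > mu + self.k * sigma
        self.history.append(latency_ms)
        if anomalous:
            self.stored.append(profile)
        return anomalous
```

Normal traffic leaves the store empty; a latency spike triggers retention of the matching profile, which can then be correlated with the distributed trace.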
APA, Harvard, Vancouver, ISO, and other styles
40

Rountree, Richard John. "Novel technologies for the manipulation of meshes on the CPU and GPU : a thesis presented in partial fulfilment of the requirements for the degree of Masters of Science in Computer Science at Massey University, Palmerston North, New Zealand." Massey University, 2007. http://hdl.handle.net/10179/700.

Full text
Abstract:
This thesis relates to research and development in the field of 3D mesh data for computer graphics. A review of existing storage and manipulation techniques for mesh data is given, followed by a framework for mesh editing. The proposed framework combines complex mesh editing techniques, automatic level-of-detail generation and mesh compression for storage. These methods work coherently due to the underlying data structure. The problem of storing and manipulating data for 3D models is a highly researched field. Models are usually represented by sparse mesh data consisting of vertex position information, the connectivity information to generate faces from those vertices, surface normal data and texture coordinate information. This sparse data is sent to the graphics hardware for rendering but must be manipulated on the CPU. The proposed framework is based upon geometry images and is designed to store and manipulate the mesh data entirely on the graphics hardware. By utilizing the highly parallel nature of current graphics hardware and new hardware features, new levels of interactivity with large meshes can be gained. Automatic level-of-detail rendering allows models upwards of 2 million polygons to be manipulated in real time while viewing a lower level of detail. Through the use of pixel shaders, the high detail is preserved in the surface normals while geometric detail is reduced. A compression scheme is then introduced which utilizes the regular structure of the geometry image to compress the floating point data. A number of existing compression schemes are compared, as well as custom bit packing. This is a TIF-funded project partnered with Unlimited Realities, a Palmerston North software development company. The project was to design a system to create, manipulate and store 3D meshes in a compressed and easy-to-manipulate manner. The goal is to create the underlying technologies to allow a 3D modelling system to become integrated into the Umajin engine, not to create a user interface or stand-alone modelling program. The Umajin engine is a 3D engine created by Unlimited Realities with a strong focus on multimedia. More information on the Umajin engine can be found at www.umajin.com. In this project we propose a method which gives the user the ability to model with the high level of detail found in packages aimed at creating offline renders, while producing models designed for real-time rendering.
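The floating-point compression mentioned above can be illustrated with a generic quantize-and-pack sketch (illustrative only; the thesis compares several schemes, including custom bit packing, and the step choices here are assumptions):

```python
# Map floats in a known range to 16-bit integers, halving storage relative to
# float32 at the cost of bounded quantization error.
import struct

def quantize(values, lo, hi, bits=16):
    """Map each value in [lo, hi] to an integer code in [0, 2**bits - 1]."""
    scale = (1 << bits) - 1
    return [round((v - lo) / (hi - lo) * scale) for v in values]

def dequantize(codes, lo, hi, bits=16):
    """Inverse mapping; error is at most half a quantization step."""
    scale = (1 << bits) - 1
    return [lo + c / scale * (hi - lo) for c in codes]

def pack(codes):
    """Pack 16-bit codes: 2 bytes per sample instead of 4 for float32."""
    return struct.pack(f"<{len(codes)}H", *codes)
```

The regular grid of a geometry image makes such per-channel quantization easy to apply, since each channel's range is known in advance.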
APA, Harvard, Vancouver, ISO, and other styles
41

Cunha, Ícaro Lins Leitão da. "Estrutura de dados Mate Face e aplicações em geração e movimento de malhas." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17062009-105850/.

Full text
Abstract:
Topological data structures (DS) offer several advantages when performing a deformation on a mesh. These DSs allow movement throughout the mesh without modifying its topology, are relatively simple to implement, and can be merged into a simulation/deformation cycle in a completely automatic and efficient way. The main goal of this work is to design and implement a topological DS to represent elastic meshes. These meshes can be of either surface or volume kind, and either simple or mixed. For better performance, more reliability and lower memory consumption, the DS should represent the incident and adjacent components of a given element implicitly. The second objective of this work is to tackle the problem of mesh generation on arbitrary domains defined by implicit functions. The proposed method is an extension of the algorithm of Partition of Unity Implicits (PUI). For this, the proposed method is based on an isosurface stuffing approach. It adaptively generates tetrahedra at different levels of refinement according to the level of detail presented by the regions of the domain. Differently from previous work, this feature is achieved naturally without the aid of an auxiliary data structure. To this end, we use an algebraic structure, named the Ja1 triangulation, which is capable of dealing with such refinements. In addition, the Ja1 triangulation permits traversing the mesh simply by using algebraic rules, which is another advantage of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
42

Randrianarivony, Maharavo. "Geometric processing of CAD data and meshes as input of integral equation solvers." Doctoral thesis, [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lundgren, Therese. "Digitizing the Parthenon using 3D Scanning : Managing Huge Datasets." Thesis, Linköping University, Department of Science and Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2636.

Full text
Abstract:

Digitizing objects and environments from the real world has become an important part of creating realistic computer graphics. Through the use of structured lighting and laser time-of-flight measurements, the capturing of geometric models is now a common process. The results are visualizations where viewers gain new possibilities for both visual and intellectual experiences.

This thesis presents the reconstruction of the Parthenon temple and its environment in Athens, Greece by using a 3D laser-scanning technique.

In order to reconstruct a realistic model using 3D scanning techniques, there are various phases in which the acquired datasets have to be processed. The data has to be organized, registered and integrated, in addition to pre- and post-processing. This thesis describes the development of a suitable and efficient data processing pipeline for the given data.

The approach differs from previous scanning projects in that it digitizes this large-scale object at very high resolution. In particular, the issue of managing and processing huge datasets is described.

Finally, the processing of the datasets in the different phases and the resulting 3D model of the Parthenon are presented and evaluated.

APA, Harvard, Vancouver, ISO, and other styles
44

Li, Ting. "Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00768315.

Full text
Abstract:
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. As an example, for predicting salvageable tissue, ischemic studies that combine multiple MRI modalities (DWI, PWI) have produced more conclusive results than studies using a single modality. However, the multi-modality approach necessitates the use of more advanced algorithms to perform otherwise regular image processing tasks such as filtering, segmentation and clustering. A robust method for addressing the problems associated with processing data obtained from multi-modality imaging is Mean Shift, which is based on feature space analysis and non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the mean shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise in processing the feature space and how Mean Shift can be tuned for optimal de-noising and reduced blurring. The large success of Mean Shift is mainly due to the intuitive tuning of the bandwidth parameters, which describe the scale at which features are analyzed. Building on univariate Plug-In (PI) bandwidth selectors for kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI for Mean Shift filtering. We study the interest of using diagonal and full bandwidth matrices in experiments on synthesized and natural images. We propose a new and automatic volume-based segmentation framework which combines Mean Shift filtering, Region Growing segmentation and Probability Map optimization. The framework is developed using synthesized MRI images as test data and yielded perfect segmentation, with DICE similarity values reaching the maximum value of 1. Testing is then extended to real MRI data obtained from animals and patients, with the aim of predicting the evolution of the ischemic penumbra several days after the onset of ischemia using only information obtained from the very first scan. The results obtained are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patient MRI scans; the reference images for both cases were manually segmented by a team of expert medical staff. In addition, the most relevant combination of parameters for the MRI modalities is determined.
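The core mean-shift iteration the abstract builds on can be sketched in one dimension with a Gaussian kernel (illustrative only; the thesis works with multivariate feature spaces and plug-in bandwidth matrices):

```python
# 1-D mean shift: repeatedly move x to the Gaussian-weighted mean of the data
# around it; the fixed point is a mode of the kernel density estimate.
from math import exp

def mean_shift_mode(x, data, bandwidth, iters=100, tol=1e-6):
    """Return the density mode reached from starting point x."""
    for _ in range(iters):
        weights = [exp(-0.5 * ((p - x) / bandwidth) ** 2) for p in data]
        new_x = sum(w * p for w, p in zip(weights, data)) / sum(weights)
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x
```

The bandwidth plays exactly the role discussed above: it sets the scale at which features are analyzed, and a per-dimension (diagonal or full matrix) generalization of it is what the thesis estimates automatically.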
APA, Harvard, Vancouver, ISO, and other styles
45

Itier, Vincent. "Nouvelles méthodes de synchronisation de nuages de points 3D pour l'insertion de données cachées." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS017/document.

Full text
Abstract:
This thesis addresses issues relating to the protection of 3D object meshes. Such objects can, for instance, be created using CAD tools developed by the company STRATEGIES. In an industrial context, creators of 3D meshes need tools to verify mesh integrity or to check permissions for 3D printing, for example. In this context we study data hiding in 3D meshes. This approach allows us to insert information into a mesh in a secure and imperceptible way. The hidden payload may be an identifier, meta-information, or third-party content, for instance to secretly transmit a texture. Data hiding can address these problems by adjusting the trade-off between capacity, imperceptibility and robustness. Generally, data hiding methods consist of two stages: synchronization and embedding. The synchronization stage consists of finding and ordering the components available for insertion. One of the main challenges is to propose an effective synchronization method that defines an order on mesh components. In our work, we propose to use mesh vertices, specifically their geometric representation in space, as the basic components for synchronization and embedding. We present three new synchronization methods based on the construction of a Hamiltonian path in a vertex cloud. Two of these methods jointly perform the synchronization and embedding stages. This is possible thanks to two new high-capacity embedding methods (from 3 to 24 bits per vertex) that rely on coordinate quantization. In this work we also highlight the constraints of this kind of synchronization. We analyze the proposed approaches in several experimental studies. Our work is assessed on various criteria, including the capacity and imperceptibility of the embedding method. We also pay attention to the security aspects of the proposed methods.
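The coordinate-quantization embedding idea (3 to 24 bits per vertex in the thesis) can be illustrated with a toy that hides k bits in the low bits of a single quantized coordinate; the step size and helper names here are assumptions, not the thesis's scheme:

```python
# Hide k message bits in the k low bits of a coordinate's quantization index.
# Distortion is bounded by (2**k) * step, trading imperceptibility for capacity.

def embed(coord, bits, k=3, step=1e-4):
    """Quantize coord to multiples of `step`, then overwrite the k low bits
    of the quantization index with the payload `bits`."""
    q = int(round(coord / step))
    q = (q >> k << k) | bits
    return q * step

def extract(coord, k=3, step=1e-4):
    """Recover the k payload bits from an embedded coordinate."""
    return int(round(coord / step)) & ((1 << k) - 1)
```

Increasing k raises capacity (bits per vertex) at the cost of larger geometric distortion, which is exactly the capacity/imperceptibility trade-off the abstract describes.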
APA, Harvard, Vancouver, ISO, and other styles
46

Jääskeläinen, Perttu. "Comparing Cloud Architectures in terms of Performance and Scalability." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254615.

Full text
Abstract:
Cloud computing is becoming increasingly popular, with a large share of corporate revenue coming from the various cloud solutions offered to customers. When it comes to choosing a solution, multiple options exist for the same problem from many competitors. This report focuses on the ones offered by Microsoft on their Azure platform and compares the architectures in terms of performance and scalability. In order to determine the most suitable architecture, three services offered by Azure are considered: Cloud Services (CS), Service Fabric Mesh (SFM) and Virtual Machines (VM). By developing and deploying a REST Web API to each service and performing a load test, average response times in milliseconds are measured and compared. To determine scalability, the point at which each service starts timing out requests is identified. The services are tested both by scaling up, increasing the power of a single machine instance, and by scaling out, where possible, duplicating instances of machines running in parallel. The results show that VMs fall considerably behind both CS and SFM in both performance and scalability for a regular use case. For low numbers of requests, all services perform about the same, but as soon as the requests increase, it is clear that both SFM and CS outperform VMs. In the end, CS comes out ahead in terms of both scalability and performance. Further research may be done into other platforms which offer the same service solutions, such as Amazon Web Services (AWS) and Google Cloud, or into other architectures within Azure.
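The measurement loop behind such a load test can be sketched as follows (a toy, single-threaded version; the thesis load-tested REST Web APIs deployed on Azure, and the function names here are illustrative):

```python
# Call a request handler repeatedly and report the average response time in ms.
import time

def measure(handler, n_requests):
    """Average latency of `handler` over n_requests calls, in milliseconds."""
    times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        handler()                     # e.g. an HTTP GET against the service
        times.append((time.perf_counter() - start) * 1000.0)
    return sum(times) / len(times)
```

A real load test would issue requests concurrently and ramp up the rate until requests start timing out, which is how the scalability limit of each architecture was identified.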
APA, Harvard, Vancouver, ISO, and other styles
47

Hauge, John Hutcheson Drew Scott Paul. "Boundary layer data system (BLDS) heating system : final project report /." Click here to view, 2009. http://digitalcommons.calpoly.edu/mesp/2/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jain, Sachin. "Multiresolution strategies for the numerical solution of optimal control problems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22656.

Full text
Abstract:
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Tsiotras, Panagiotis; Committee Member: Calise, Anthony J.; Committee Member: Egerstedt, Magnus; Committee Member: Prasad, J. V. R.; Committee Member: Russell, Ryan P.; Committee Member: Zhou, Hao-Min.
APA, Harvard, Vancouver, ISO, and other styles
49

Ross, Brant Arnold. "Flexible Engineering Software: An Integrated Workstation Approach to Finite Element Analysis." BYU ScholarsArchive, 1985. https://scholarsarchive.byu.edu/etd/3460.

Full text
Abstract:
One obstacle preventing more engineers from using finite element analysis (FEA) is the difficulty of transferring data between steps in the modeling process. A Fortran computer program, Rosetta.BYU, has been developed to open data paths between finite element preprocessors (mesh generators) and finite element analysis programs, using a custom data structure. It accepts neutral data files, Version 2.0 IGES data files, and Movie.BYU files for input/output. An application of Rosetta is described. A general workstation manager program, Davinci.BYU, is reviewed that provides a support layer between the engineer and the operating system, organizes software and data files, and facilitates on-line documentation and demonstrations. Requirements of a good user interface are discussed and supporting software, Squire.BYU, is described. An application of this software in an industrial setting is described.
APA, Harvard, Vancouver, ISO, and other styles
50

Akbar, Yousef M. A. H. "Intrusion Detection of Flooding DoS Attacks on Emulated Smart Meters." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98554.

Full text
Abstract:
The power grid has changed a great deal from what has generally been viewed as a traditional power grid. The modernization of the power grid has seen an increase in the integration and incorporation of computing and communication elements, creating an interdependence between the physical and cyber assets of the power grid. The fast-increasing connectivity has transformed the grid from what used to be primarily a physical system into a Cyber-Physical System (CPS). The physical elements within a power grid are well understood by power engineers; however, the newly deployed cyber aspects are new to most researchers and operators in this field. The new computing and communications structure brings new vulnerabilities along with all the benefits it provides. Cyber security of the power grid is critical due to the potential impact on the community or society that relies on this critical infrastructure. These vulnerabilities have already been exploited in the attack on the Ukrainian power grid, a highly sophisticated, multi-layered attack which caused large power outages for numerous customers. There is an urgent need to understand the cyber aspects of the modernized power grid and take the necessary precautions so that the security of the CPS can be better achieved. The power grid depends on two main cyber infrastructures, i.e., Supervisory Control And Data Acquisition (SCADA) and the Advanced Metering Infrastructure (AMI). This thesis investigates the AMI in power grids by developing a testbed environment that can be used to better understand the AMI and to develop security strategies that remove its vulnerabilities. The testbed is used to implement security strategies, i.e., an Intrusion Detection System (IDS), in an emulated environment built to closely resemble a real AMI system. A DoS flooding attack and an IDS are implemented on the emulated testbed to demonstrate its effectiveness and validate its performance.
M.S.
The power grid is becoming more digitized and increasingly relies on information and communication technologies, hence the term "smart grid." New systems deployed in the modernized power grid rely directly on new communication networks. The power grid is becoming more efficient and more effective due to these developments; however, the security of the grid must also be considered. An important expectation of the power grid is the reliability of power delivery to its customers. The integration of new information and communication technology gives rise to new cyber vulnerabilities that can inhibit the functionality of the power grid. A coordinated cyber-attack was conducted against the Ukrainian power grid in 2015 that targeted the cyber vulnerabilities of the system. The attackers used Denial of Service attacks to prevent the grid operators from observing that their system was under attack. Smart meters are the digitized equivalent of traditional energy meters; they communicate wirelessly with the grid operators. As more of these smart meters are deployed, we become more dependent on them, creating a new vulnerability for an attack. Smart meter integration into the power grid therefore needs to be studied and carefully considered to prevent attacks. A testbed is created using devices that emulate the smart meters, and a network is established between the devices. The network was subjected to a Denial of Service attack to validate the testbed's performance, and an intrusion detection method was developed and applied to the testbed to show that it can be used to study the vulnerabilities present and develop methods to address them.
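The core idea behind detecting a flooding DoS attack, as described in this abstract, can be illustrated with a minimal sketch. The thesis does not publish its detection algorithm here, so the sliding-window rate threshold below (class name, parameters, and threshold values are all hypothetical) is only one common way such an IDS might flag a flooding source:

```python
# Hypothetical sketch: flag a source as flooding when its packet rate
# within a sliding time window exceeds a configured threshold.
from collections import defaultdict, deque

class FloodDetector:
    def __init__(self, window_s=1.0, max_packets=100):
        self.window_s = window_s        # sliding-window length in seconds
        self.max_packets = max_packets  # packets allowed per window
        self.history = defaultdict(deque)

    def observe(self, src, timestamp):
        """Record one packet; return True if `src` now looks like a flooder."""
        q = self.history[src]
        q.append(timestamp)
        # Evict packets that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_packets

detector = FloodDetector(window_s=1.0, max_packets=100)
# 150 packets from one emulated meter within 0.5 s exceed the threshold.
alerts = [detector.observe("meter-7", 0.5 * i / 150) for i in range(150)]
print(any(alerts))  # True
```

In practice, an IDS of this kind trades off the window length and threshold against false positives from legitimate traffic bursts, which is the kind of performance question the emulated testbed is built to evaluate.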
APA, Harvard, Vancouver, ISO, and other styles