Dissertations / Theses on the topic 'Computer virtualization'

To see the other types of publications on this topic, follow the link: Computer virtualization.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computer virtualization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Southern, Gabriel. "Symmetric multiprocessing virtualization." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3225.

Full text
Abstract:
Thesis (M.S.)--George Mason University, 2008.
Vita: p. 77. Thesis director: David Hwang. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Engineering. Title from PDF t.p. (viewed Aug. 28, 2008). Includes bibliographical references (p. 73-76). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
2

Pelletingeas, Christophe. "Performance evaluation of virtualization with cloud computing." Thesis, Edinburgh Napier University, 2010. http://researchrepository.napier.ac.uk/Output/4010.

Full text
Abstract:
Cloud computing has been the subject of much research, which shows that it can reduce hardware costs, lower energy consumption and allow servers to be used more efficiently. Many servers today are used inefficiently because they are underutilized. The use of cloud computing combined with virtualization has been a solution to the underutilisation of those servers. However, virtualization with cloud computing cannot offer performance equal to native performance. The aim of this project was to study the performance of virtualization with cloud computing. To meet this aim, previous research in the area was first reviewed, outlining the different types of cloud toolkits as well as the different ways available to virtualize machines. In addition, open source solutions available to implement a private cloud were examined. The findings of the literature review were used to design the different experiments and to choose the tools used to implement a private cloud. In the design and implementation, experiments were set up to evaluate the performance of public and private clouds. The results obtained through those experiments outline the performance of the public cloud and show that the virtualization of Linux gives better performance than the virtualization of Windows. This is explained by the fact that Linux uses paravirtualization while Windows uses HVM. The evaluation of performance on the private cloud permitted the comparison of native performance with paravirtualization and HVM. It was seen that paravirtualization has performance very close to native performance, contrary to HVM. Finally, the cost of the different solutions and their advantages are presented.
APA, Harvard, Vancouver, ISO, and other styles
3

Koppe, Jason. "Differential virtualization for large-scale system modeling /." Online version of thesis, 2008. http://hdl.handle.net/1850/7543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pham, Duy M. "Performance comparison between x86 virtualization technologies." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1528024.

Full text
Abstract:

In computing, virtualization provides the capability to service users with different resource requirements and operating system platform needs on a single host computer system. The potential benefits of virtualization include efficient resource utilization, flexible service offering, as well as scalable system planning and expansion, all desirable whether it is for enterprise level data centers, personal computing, or anything in between. These benefits, however, involve certain costs of performance degradation. This thesis compares the performance costs between two of the most popular and widely-used x86 CPU-based virtualization technologies today in personal computing. The results should be useful for users when determining which virtualization technology to adopt for their particular computing needs.

APA, Harvard, Vancouver, ISO, and other styles
5

Jensen, Deron Eugene. "System-wide Performance Analysis for Virtualization." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1789.

Full text
Abstract:
With the current trend in cloud computing and virtualization, more organizations are moving their systems from a physical host to a virtual server. Although this can significantly reduce hardware, power, and administration costs, it can increase the cost of analyzing performance problems. With virtualization, there is an initial performance overhead, and as more virtual machines are added to a physical host the interference increases between various guest machines. When this interference occurs, a virtualized guest application may not perform as expected. There is little or no information to the virtual OS about the interference, and the current performance tools in the guest are unable to show this interference. We examine the interference that has been shown in previous research, and relate that to existing tools and research in root cause analysis. We show that in virtualization there are additional layers which need to be analyzed, and design a framework to determine if degradation is occurring from an external virtualization layer. Additionally, we build a virtualization test suite with Xen and PostgreSQL and run multiple tests to create I/O interference. We show that our method can distinguish between a problem caused by interference from external systems and a problem from within the virtual guest.
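The kind of I/O interference experiment described above can be approximated with a small amount of scripting. The sketch below is an illustration, not taken from the thesis: it runs a PostgreSQL benchmark in one guest while a neighbouring guest generates disk I/O, so interference shows up as a drop in benchmark throughput. The guest host names, database name and load commands are assumptions made for the example.

```python
# Hypothetical sketch of an I/O interference experiment between two co-located
# guests; host names, database name and commands are illustrative assumptions.
import subprocess

def run_pgbench(guest):
    """Run pgbench inside a guest over SSH and return its raw output."""
    return subprocess.run(
        ["ssh", guest, "pgbench", "-c", "8", "-T", "60", "benchdb"],
        capture_output=True, text=True, check=True).stdout

def start_io_noise(guest):
    """Start a crude disk-I/O load in a neighbouring guest."""
    return subprocess.Popen(
        ["ssh", guest, "dd", "if=/dev/zero", "of=/tmp/noise", "bs=1M", "count=4096"])

baseline = run_pgbench("guest-a")      # measure with no interference
noise = start_io_noise("guest-b")      # neighbouring VM creates I/O pressure
contended = run_pgbench("guest-a")     # measure again under interference
noise.wait()
print(baseline, contended, sep="\n")
```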
APA, Harvard, Vancouver, ISO, and other styles
6

Narayanan, Sivaramakrishnan. "Efficient Virtualization of Scientific Data." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1221079391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Johansson, Marcus, and Lukas Olsson. "Comparative evaluation of virtualization technologies in the cloud." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49242.

Full text
Abstract:
The cloud has over the years become a staple of the IT industry, not only for storage purposes, but for services, platforms and infrastructures. A key component of the cloud is virtualization and the fluidity it makes possible, allowing resources to be utilized more efficiently and services to be relocated more easily when needed. Virtual machine technology, consisting of a hypervisor managing several guest systems, has been the method for achieving this virtualization, but container technology, a lightweight virtualization method running directly on the host without a classic hypervisor, has been making headway in recent years. This report investigates the differences between VMs (Virtual Machines) and containers, comparing the two in relevant areas. The software chosen for this comparison is KVM as VM hypervisor and Docker as container platform, both run on Linux as the underlying host system. The work conducted for this report compares efficiency in common use areas through experimental evidence, and also evaluates differences in design through study of relevant literature. The results are then discussed and weighed to provide a conclusion. The results of this work show that Docker has the capability to potentially take over the role as the main virtualization technology in the coming years, provided that some of its current shortcomings are addressed and improved upon.
APA, Harvard, Vancouver, ISO, and other styles
8

Athreya, Manoj B. "Subverting Linux on-the-fly using hardware virtualization technology." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34844.

Full text
Abstract:
In this thesis, we address the problem faced by modern operating systems due to the exploitation of Hardware-Assisted Full-Virtualization technology by attackers. Virtualization technology has been of growing importance these days. With the help of such technology, multiple operating systems can be run on a single piece of hardware, with little or no modification to the operating system. Both Intel and AMD have contributed to x86 full-virtualization through their respective instruction set architectures. Hardware virtualization extensions can be found in almost all x86 processors these days. Hardware virtualization technologies have opened a whole new frontier for a new kind of attack. A system hacker can abuse hardware virtualization technology to gain control over an operating system on-the-fly (i.e., without a system restart) by installing a thin Virtual Machine Monitor (VMM) below the native operating system. Such VMM-based malware is termed a Hardware-Assisted Virtual Machine (HVM) rootkit. We discuss the technique used by a rootkit named Blue Pill to subvert the Windows Vista operating system by exploiting the AMD-V (codenamed "Pacifica") virtualization extensions. HVM rootkits do not hook any operating system code or data regions; hence detecting the existence of such malware using conventional techniques becomes extremely difficult. This thesis discusses existing methods to detect such rootkits and their inefficiencies. In this work, we implement a proof-of-concept HVM rootkit using Intel VT hardware virtualization technology and also discuss how such an attack can be defended against by using an autonomic architecture called SHARK, which was proposed by Vikas et al. in MICRO 2008.
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Wei. "Light-Weight Virtualization Driven Runtimes for Big Data Applications." Thesis, University of Colorado Colorado Springs, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13862451.

Full text
Abstract:

Datacenters are evolving to host heterogeneous Big Data workloads on shared clusters in order to reduce operational cost and achieve higher resource utilization. However, it is challenging to schedule heterogeneous workloads with diverse resource requirements and QoS constraints. For example, when consolidating latency-critical jobs and best-effort batch jobs in the same cluster, latency-critical jobs may suffer from long queuing delays if their resource requests cannot be met immediately, while best-effort jobs suffer from killing overhead when preempted. Moreover, resource contention may harm the performance of tasks running on worker nodes. Since the resource requirements of diverse applications are heterogeneous and not known before task execution, either the cluster manager has to over-provision resources for all incoming applications, resulting in low cluster utilization, or applications may experience performance slowdown or even failure due to insufficient resources. Existing approaches focus on either application awareness or system awareness and fail to address the semantic gap between the application layer and the system layer (e.g., OS scheduling mechanisms or cloud resource allocators).

To address these issues, we propose to attack these problems from a different angle: applications and underlying systems should cooperate synergistically. In this way, the resource demands of an application can be exposed to the system, while application schedulers can be assisted with more runtime information from the system layer and perform more dedicated scheduling. However, system and application co-design is challenging. First, the real resource demand of an application is hard to predict since its requirements vary during its lifetime. Second, a large amount of information is generated by system layers (e.g., OS process schedulers or hardware counters), and it is hard to associate this information with a particular task. Fortunately, with the help of lightweight virtualization, applications can run in isolated containers such that system-level runtime information can be collected at the container level. The rich APIs of container-based virtualization also make it possible to perform more advanced scheduling.

In this thesis, we focus on efficient and scalable techniques for datacenter scheduling by leveraging lightweight virtualization. Our thesis is twofold. First, we focus on profiling and optimizing the performance of Big Data applications. In this aspect, we built a tool to trace the scheduling delay of low-latency online data analytics workloads. We further built a map execution engine to address performance heterogeneity in MapReduce. Second, we focus on leveraging OS containers to build advanced cluster scheduling mechanisms. To that end, we built a preemptive cluster scheduler, an elastic memory manager and an OOM killer for Big Data applications. We also conducted supplementary research on tracing the performance of Big Data training on TensorFlow.

We conducted extensive evaluations of the proposed projects in a real-world cluster. The experimental results demonstrate the effectiveness of proposed approaches in terms of improving performance and utilization of Big Data clusters.
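As a rough illustration of the container-level runtime information mentioned above, the sketch below reads per-container CPU and memory statistics from the cgroup filesystem, the kind of signal a cluster scheduler could consume. It is not the thesis's tooling; the cgroup paths and the container ID are assumptions and vary across distributions and cgroup versions.

```python
# Minimal sketch (not from the thesis): reading per-container CPU and memory
# statistics from the cgroup v1 filesystem. Paths and the container ID are
# illustrative assumptions.
import os

CGROUP_ROOT = "/sys/fs/cgroup"   # assumed cgroup v1 mount point
CONTAINER_ID = "abc123"          # hypothetical Docker container ID

def read_value(path):
    """Return the integer stored in a single-value cgroup file."""
    with open(path) as f:
        return int(f.read().strip())

def container_stats(container_id):
    """Collect a few runtime metrics for one container."""
    cpu_path = os.path.join(CGROUP_ROOT, "cpuacct", "docker", container_id, "cpuacct.usage")
    mem_path = os.path.join(CGROUP_ROOT, "memory", "docker", container_id, "memory.usage_in_bytes")
    return {
        "cpu_ns": read_value(cpu_path),     # cumulative CPU time in nanoseconds
        "mem_bytes": read_value(mem_path),  # current memory footprint
    }

if __name__ == "__main__":
    print(container_stats(CONTAINER_ID))
```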

APA, Harvard, Vancouver, ISO, and other styles
10

Weng, Li. "Automatic and efficient data virtualization system for scientific datasets." The Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1154717945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Le, Duy. "Understanding and Leveraging Virtualization Technology in Commodity Computing Systems." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539623603.

Full text
Abstract:
Commodity computing platforms are imperfect, requiring various enhancements for performance and security purposes. In the past decade, virtualization technology has emerged as a promising trend for commodity computing platforms, ushering in many opportunities to optimize the allocation of hardware resources. However, many abstractions offered by virtualization not only make enhancements more challenging, but also complicate the proper understanding of virtualized systems. The current understanding and analysis of these abstractions are far from satisfactory. This dissertation aims to tackle this problem from a holistic view, by systematically studying the system behaviors. The focus of our work lies in the performance implications and security vulnerabilities of a virtualized system. We start with the first abstraction---an intensive memory multiplexing for I/O of Virtual Machines (VMs)---and present a new technique, called Batmem, to effectively reduce the memory multiplexing overhead of VMs and emulated devices by optimizing the operations of the conventional emulated Memory Mapped I/O in hypervisors. Then we analyze another particular abstraction---a nested file system---and attempt to both quantify and understand the crucial aspects of performance in a variety of settings. Our investigation demonstrates that the choice of a file system at both the guest and hypervisor levels has significant impact upon I/O performance. Finally, leveraging utilities to manage VM disk images, we present a new patch management framework, called Shadow Patching, to achieve effective software updates. This framework allows system administrators to still take the offline patching approach but retain most of the benefits of live patching by using commonly available virtualization techniques. To demonstrate the effectiveness of the approach, we conduct a series of experiments applying a wide variety of software patches. Our results show that our framework incurs only a small overhead in running systems, but can significantly reduce maintenance windows.
APA, Harvard, Vancouver, ISO, and other styles
12

Semnanian, Amir Ali. "A study on virtualization technology and its impact on computer hardware." Thesis, California State University, Long Beach, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1523065.

Full text
Abstract:

Underutilization of hardware is one of the challenges that large organizations have been trying to overcome. Most of today's computer hardware is designed and architected for hosting a single operating system and application. Virtualization is the primary solution to this problem. Virtualization is the capability of a system to host multiple virtual computers while running on a single hardware platform. This has both advantages and disadvantages. This thesis concentrates on introducing virtualization technology and comparing the different techniques through which virtualization is achieved. It examines how computer hardware can be virtualized and the impact virtualization would have on different parts of the system. This study evaluates the changes necessary to hardware architectures when virtualization is used. This thesis provides an analysis of the benefits this technology conveys to the computer industry and the disadvantages which accompany this new solution. Finally, the future of virtualization technology and how it can affect the infrastructure of an organization is evaluated.

APA, Harvard, Vancouver, ISO, and other styles
13

Isenstierna, Tobias, and Stefan Popovic. "Computer systems in airborne radar : Virtualization and load balancing of nodes." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18300.

Full text
Abstract:
Introduction. For hardware used in radar systems of today, technology is evolving at an increasing rate. For existing software in radar systems that relies on specific drivers or hardware, this quickly becomes a problem. When the required hardware is no longer produced or is outdated, compatibility problems emerge between the new hardware and existing software. This research focuses on exploring whether virtualization technology can be helpful in solving this problem. Would it be possible to address the compatibility problem with the help of hypervisor solutions, while also maintaining high performance? Objectives. The aim of this research is to explore virtualization technology with a focus on hypervisors, to improve the way that hardware and software cooperate within a radar system. The research investigates whether it is possible to solve compatibility problems between new hardware and already existing software, while also analysing the performance of virtual solutions compared to non-virtualized ones. Methods. The proposed method is an experiment where the two hypervisors Xen and KVM are analysed. The hypervisors run on two different systems. A native environment with similarities to a radar system is built and then compared with the same system, but with hypervisor solutions applied. Research around the area of virtualization is conducted with focus on security, hypervisor features and compatibility. Results. The results present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system has been used to prove that the implemented virtualization works. Finally, performance results are presented for the native environment compared against a virtual environment. Conclusions. From the results gathered with benchmarks, we can see that the individual performance might vary, which is to be expected when run on different hardware. A virtual setup has been built, including the Xen and KVM hypervisors, together with NAS communication. Running an old operating system as a virtual guest, compatibility has been proven to exist between software and hardware using KVM as the virtual solution. From the results gathered, KVM seems like a good solution to investigate further.
APA, Harvard, Vancouver, ISO, and other styles
14

Wei, Junyi. "QoS-aware joint power and subchannel allocation algorithms for wireless network virtualization." Thesis, University of Essex, 2017. http://repository.essex.ac.uk/20142/.

Full text
Abstract:
Wireless network virtualization (WNV) is a promising technology which aims to overcome the network redundancy problems of the current Internet. WNV involves abstraction and sharing of resources among different parties. It has been considered a long-term solution for the future Internet due to its flexibility and feasibility. WNV separates the traditional Internet service provider's role into the infrastructure provider (InP) and the service provider (SP). The InP owns all physical resources while SPs borrow such resources to create their own virtual networks in order to provide services to end users. Because radio resources are finite, it is sensible to introduce WNV to improve resource efficiency. This thesis proposes three resource allocation algorithms for an orthogonal frequency division multiple access (OFDMA)-based WNV transmission system, aiming to improve resource utilization. The aim of the first algorithm is to maximize the InP's and the virtual network operators' (VNOs') total throughput by means of subchannel allocation. The second one is a power allocation algorithm which aims to improve the VNOs' energy efficiency. In addition, this algorithm also balances the competition across VNOs. Finally, a joint power and subchannel allocation algorithm is proposed, which aims to improve the overall transmission rate. Moreover, all the above algorithms consider the InP's quality of service (QoS) requirement in terms of data rate. The evaluation results indicate that the joint resource allocation algorithm performs better than the others. Furthermore, the results can also serve as a guideline for WNV performance guarantees.
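For readers unfamiliar with this class of problem, a generic OFDMA sum-throughput formulation of the kind the abstract describes is sketched below; the symbols and constraints are illustrative assumptions rather than the thesis's exact model. Here x_{k,n} indicates whether subchannel n is assigned to operator k, p_{k,n} is the transmit power, g_{k,n} the channel gain, B the subchannel bandwidth, and R_min stands in for the InP's QoS data-rate requirement.

```latex
\begin{align}
  \max_{x_{k,n},\,p_{k,n}} \quad & \sum_{k}\sum_{n} x_{k,n}\, B \log_2\!\left(1 + \frac{p_{k,n}\, g_{k,n}}{N_0 B}\right) \\
  \text{s.t.} \quad & \sum_{k} x_{k,n} \le 1 \;\; \forall n, \qquad x_{k,n} \in \{0,1\}, \qquad \sum_{k}\sum_{n} p_{k,n} \le P_{\max}, \\
  & \sum_{n} x_{\mathrm{InP},n}\, B \log_2\!\left(1 + \frac{p_{\mathrm{InP},n}\, g_{\mathrm{InP},n}}{N_0 B}\right) \ge R_{\min}.
\end{align}
```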
APA, Harvard, Vancouver, ISO, and other styles
15

Saleh, Mehdi. "Virtualization and self-organization for utility computing." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5026.

Full text
Abstract:
We present an alternative paradigm for utility computing when the delivery of service is subject to binding contracts; the solution we propose is based on resource virtualization and a self-management scheme. A virtual cloud aggregates a set of virtual machines to work in concert on the tasks specified by the service agreement. A first step for the establishment of a virtual cloud is to create a scale-free overlay network through a biased random walk; scale-free networks enjoy a set of remarkable properties such as robustness against random failures, favorable scaling, resilience to congestion, small diameter, and short average path length. Constraints such as limits on the cost per unit of service, the total cost, or the requirement to use only "green" computing cycles are then considered when a node of this overlay network decides whether to join the virtual cloud or not. A virtual cloud consists of a subset of the nodes assigned to the tasks specified by a Service Level Agreement, SLA, as well as a virtual interconnection network, or overlay network, for the virtual cloud. SLAs could serve as a congestion control mechanism for an organization providing utility computing; this mechanism allows the system to reject new contracts when there is a danger of overloading the system and failing to fulfill existing contractual obligations. The objective of this thesis is to show that biased random walks in power-law networks are capable of responding to dynamic changes of the workload in utility computing.
ID: 029809025; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.)--University of Central Florida, 2011.; Includes bibliographical references (p. 65-68).
M.S.
Masters
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
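The biased random walk mentioned in the abstract can be illustrated in a few lines of code. The sketch below is a generic degree-biased walk over an overlay graph, not the thesis's implementation; the adjacency-list representation and the bias exponent are assumptions made for the example.

```python
# Illustrative sketch: a degree-biased random walk on an overlay graph, the kind
# of walk used to grow scale-free overlays. The bias exponent and the graph
# representation are assumptions for the example.
import random

def biased_step(graph, current, bias=1.0):
    """Pick the next node, favouring high-degree neighbours.

    graph: dict mapping node -> list of neighbour nodes
    bias:  exponent applied to neighbour degree (bias=0 gives a uniform walk)
    """
    neighbours = graph[current]
    weights = [len(graph[n]) ** bias for n in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

def biased_walk(graph, start, steps, bias=1.0):
    """Return the sequence of nodes visited by a biased random walk."""
    path = [start]
    for _ in range(steps):
        path.append(biased_step(graph, path[-1], bias))
    return path

# Small example overlay; in practice each node would know only its neighbours.
overlay = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"]}
print(biased_walk(overlay, "a", steps=5))
```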
APA, Harvard, Vancouver, ISO, and other styles
16

Köhler, Fredrik. "Network Virtualization in Multi-hop Heterogeneous Architecture." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-38696.

Full text
Abstract:
Software Defined Networking is a technology that introduces modular and programmable networks using separated control and data planes. Traditional networks mostly use proprietary protocols and devices that need to communicate with each other and share information about routes and paths in the network. SDN, on the other hand, uses a controller that communicates with devices through an open source protocol called OpenFlow. The routing rules for flows can be inserted into the networking devices by the controller. This technology is still new and requires more research to provide evidence that it can perform better than, or at least as well as, conventional networks. By doing experiments on different topologies, this thesis aims at discovering how the delays of flows are affected by having OpenFlow in the network, and at identifying the overhead of using such technology. The results show that the overhead can be too large for time- and noise-sensitive applications, while the average delay is comparable to conventional networks.
APA, Harvard, Vancouver, ISO, and other styles
17

McAdams, Sean. "Virtualization Components of the Modern Hypervisor." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/599.

Full text
Abstract:
Virtualization is the foundation on which cloud services build their business. It supports the infrastructure for the largest companies around the globe and is a key component for scaling software for the ever-growing technology industry. If companies decide to use virtualization as part of their infrastructure, it is important for them to have a way to quickly and reliably choose a virtualization technology and tweak the performance of that technology to fit their intended usage. Unfortunately, while many papers exist discussing and testing the performance of various virtualization systems, most of these performance tests do not take into account components that can be configured to improve performance for certain scenarios. This study provides a comparison of how three hypervisors (VMware vSphere, Citrix XenServer, and KVM) perform under different sets of configurations and which system workloads would be ideal for these configurations. This study also provides a means to compare different configurations with each other, so that implementers of these technologies have a way to make informed decisions on which components should be enabled for their current or future systems.
APA, Harvard, Vancouver, ISO, and other styles
18

Ghodke, Ninad Hari. "Virtualization techniques to enable transparent access to peripheral devices across networks." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0005684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Raj, Himanshu. "Virtualization services scalable methods for virtualizing multicore systems /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22677.

Full text
Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008.
Committee Chair: Schwan, Karsten; Committee Member: Ahamad, Mustaq; Committee Member: Fujimoto, Richard; Committee Member: Gavrilovska, Ada; Committee Member: Owen, Henry; Committee Member: Xenidis, Jimi.
APA, Harvard, Vancouver, ISO, and other styles
20

Coogan, Kevin Patrick. "Deobfuscation of Packed and Virtualization-Obfuscation Protected Binaries." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202716.

Full text
Abstract:
Code obfuscation techniques are increasingly being used in software for such reasons as protecting trade secret algorithms from competitors and deterring license tampering by those wishing to use the software for free. However, these techniques have also grown in popularity in less legitimate areas, such as protecting malware from detection and reverse engineering. This work examines two such techniques - packing and virtualization-obfuscation - and presents new behavioral approaches to analysis that may be relevant to security analysts whose job it is to defend against malicious code. These approaches are robust against variations in obfuscation algorithms, such as changing encryption keys or virtual instruction byte code. Packing refers to the process of encrypting or compressing an executable file. This process "scrambles" the bytes of the executable so that byte-signature matching algorithms commonly used by anti-virus programs are ineffective. Standard static analysis techniques are similarly ineffective since the actual byte code of the program is hidden until after the program is executed. Dynamic analysis approaches exist, but are vulnerable to dynamic defenses. We detail a static analysis technique that starts by identifying the code used to "unpack" the executable, then uses this unpacker to generate the unpacked code in a form suitable for static analysis. Results show we are able to correctly unpack several encrypted and compressed malware samples, while still handling several dynamic defenses. Virtualization-obfuscation is a technique that translates the original program into virtual instructions, then builds a customized virtual machine for these instructions. As with packing, the byte-signature of the original program is destroyed. Furthermore, static analysis of the obfuscated program reveals only the structure of the virtual machine, and dynamic analysis produces a dynamic trace where original program instructions are intermixed with, and often indistinguishable from, virtual machine instructions. We present a dynamic analysis approach whereby all instructions that affect the external behavior of the program are identified, thus building an approximation of the original program that is observationally equivalent. We achieve good results at both identifying instructions from the original program and eliminating instructions known to be part of the virtual machine.
APA, Harvard, Vancouver, ISO, and other styles
21

Oljira, Dejene Boru. "Telecom Networks Virtualization : Overcoming the Latency Challenge." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-67243.

Full text
Abstract:
Telecom service providers are adopting a Network Functions Virtualization (NFV) based service delivery model in response to unprecedented traffic growth and increasing customer demand for new high-quality network services. In NFV, telecom network functions are virtualized and run on top of commodity servers. Ensuring network performance equivalent to the legacy non-virtualized system is a determining factor for the success of telecom network virtualization. In virtualized systems, however, achieving carrier-grade network performance such as low latency, high throughput, and high availability to guarantee the customers' quality of experience (QoE) is challenging. In this thesis, we focus on addressing the latency challenge. We investigate the delay overhead of virtualization through comprehensive network performance measurements and analysis in a controlled virtualized environment. With this, a break-down of the latency incurred by virtualization and the impact of co-locating virtual machines (VMs) with different workloads on the end-to-end latency is provided. We exploit this result to develop an optimization model for the placement and provisioning of virtualized telecom network functions that ensures both the latency and cost-efficiency requirements. To further alleviate the latency challenge, we propose a multipath transport protocol, MDTCP, that leverages Explicit Congestion Notification (ECN) to quickly detect and react to incipient congestion, minimizing queuing delays and achieving high network utilization in telecom datacenters.
HITS, 4707
APA, Harvard, Vancouver, ISO, and other styles
22

Oliveira, Diogo. "Multi-Objective Resource Provisioning in Network Function Virtualization Infrastructures." Scholar Commons, 2018. http://scholarcommons.usf.edu/etd/7206.

Full text
Abstract:
Network function virtualization (NFV) and software-defined networking (SDN) are two recent networking paradigms that strive to increase manageability, scalability, programmability and dynamism. The former decouples network functions and hosting devices, while the latter decouples the data and control planes. As more and more service providers adopt these new paradigms, there is a growing need to address multi-failure conditions, particularly those arising from large-scale disaster events. Overall, addressing the virtual network function (VNF) placement and routing problem is crucial to deploy NFV survivability. In particular, many studies have inspected non-survivable VNF provisioning, but no known work has proposed survivable/resilient solutions for multi-failure scenarios. In light of the above, this work proposes and deploys a survivable multi-objective provisioning solution for NFV infrastructures. Overall, this study initially proposes multi-objective solutions to efficiently solve the VNF mapping/placement and routing problem. In particular, an integer linear programming (ILP) optimization and a greedy heuristic method try to maximize the request acceptance rate while minimizing costs and implementing traffic engineering (TE) load-balancing. Next, these schemes are expanded to perform "risk-aware" virtual function mapping and traffic routing in order to improve the reliability of user services. Furthermore, in addition to the ILP optimization and greedy heuristic schemes, a metaheuristic genetic algorithm (GA) is also introduced, which is more suitable for large-scale networks. Overall, these solutions are then tested in idealistic and realistic stressor scenarios in order to evaluate their performance, accuracy and reliability.
APA, Harvard, Vancouver, ISO, and other styles
23

Paladi, Nicolae. "Trusted Computing and Secure Virtualization in Cloud Computing." Thesis, Security Lab, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-24035.

Full text
Abstract:
Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are some steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored or calculations should be made on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers that use a multi-tenant multiplexing model to offer on-demand self-service over a broad network. Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of a Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it presents a step aimed at contributing towards the creation of a secure and trusted public cloud computing environment.
TESPEV
CNS
APA, Harvard, Vancouver, ISO, and other styles
24

Zahedi, Saed. "Virtualization Security Threat Forensic and Environment Safeguarding." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-32144.

Full text
Abstract:
The advent of virtualization technologies has transformed the IT infrastructure, and organizations are migrating to virtual platforms. Virtualization is also the foundation for cloud platform services. Virtualization is known to provide more security to the infrastructure, apart from agility and flexibility. However, the security aspects of virtualization are often overlooked. Various attacks on the virtualization hypervisor and its administration component are attractive to adversaries. The threats to virtualization must be rigorously scrutinized to identify common breaches and to know what is most attractive to attackers. In this thesis, the current state of perimeter and operational threats, along with a taxonomy of virtualization security threats, is provided. The common attacks, based on a vulnerability database, are investigated. A distribution of the virtualization software vulnerabilities, mapped to the taxonomy, is visualized. Well-known industry best practices and standards are introduced, and key features of each one are presented for safeguarding virtualization environments. A discussion of other possible approaches to investigate the severity of threats based on automatic systems is presented.
APA, Harvard, Vancouver, ISO, and other styles
25

Nimgaonkar, Satyajeet. "Secure and Energy Efficient Execution Frameworks Using Virtualization and Light-weight Cryptographic Components." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699986/.

Full text
Abstract:
Security is a primary concern in this era of pervasive computing. Hardware based security mechanisms facilitate the construction of trustworthy secure systems; however, existing hardware security approaches require modifications to the micro-architecture of the processor and such changes are extremely time consuming and expensive to test and implement. Additionally, they incorporate cryptographic security mechanisms that are computationally intensive and account for excessive energy consumption, which significantly degrades the performance of the system. In this dissertation, I explore the domain of hardware based security approaches with an objective to overcome the issues that impede their usability. I have proposed viable solutions to successfully test and implement hardware security mechanisms in real world computing systems. Moreover, with an emphasis on cryptographic memory integrity verification technique and embedded systems as the target application, I have presented energy efficient architectures that considerably reduce the energy consumption of the security mechanisms, thereby improving the performance of the system. The detailed simulation results show that the average energy savings are in the range of 36% to 99% during the memory integrity verification phase, whereas the total power savings of the entire embedded processor are approximately 57%.
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Yu-hsin M. Eng Massachusetts Institute of Technology. "Dynamic binary translation from x86-32 code to x86-64 code for virtualization." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53095.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 95).
The goal of this project is to enhance performance of virtual machines and simplify the design of the virtual machine monitor by running 32-bit x86 operating systems in x86-64 mode. In order to do so, 32-bit operating system binary code is translated into x86-64 binary code via "widening binary translation"; x86-32 code is "widened" into x86-64 code. The main challenge of widening BT is emulating x86-32 legacy segmentation in x86-64 mode. Widening BT's solution is to emulate segmentation in software. Most of the overhead for software segmentation can be optimized away. The main contribution of widening BT is simplification of the VMM, which reduces the human cost of maintaining a complicated VMM. Widening BT also improves performance of 32-bit guest operating systems running in virtual machines and demonstrates the independence of virtual machines from physical hardware. With widening BT, legacy hardware mechanisms like segmentation can be dropped. Therefore widening BT reduces hardware's burden of backwards-compatibility, encouraging software/hardware co-design.
by Yu-hsin Chen.
M.Eng.
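To give a sense of what "emulating segmentation in software" involves, the sketch below models the core of an x86-32 segmented address translation: add the segment base to the offset and fault if the offset exceeds the segment limit. It is a conceptual illustration under simplified assumptions (no privilege or type checks, no expand-down segments), not the thesis's binary-translation code.

```python
# Rough conceptual sketch of a software-emulated x86-32 segmented memory access;
# simplified assumptions, not the thesis implementation.
class SegmentDescriptor:
    def __init__(self, base, limit):
        self.base = base      # linear address where the segment starts
        self.limit = limit    # highest valid offset within the segment

class SegmentationFault(Exception):
    pass

def translate(offset, seg):
    """Translate a segment-relative offset to a linear address, checking the limit."""
    if offset > seg.limit:
        raise SegmentationFault(f"offset {offset:#x} beyond limit {seg.limit:#x}")
    return seg.base + offset

ds = SegmentDescriptor(base=0x10000, limit=0xFFFF)
print(hex(translate(0x1234, ds)))   # 0x11234
```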
APA, Harvard, Vancouver, ISO, and other styles
27

Oprescu, Mihaela Iuniana. "Virtualization and distribution of the BGP control plane." PhD thesis, Institut National Polytechnique de Toulouse - INPT, 2012. http://tel.archives-ouvertes.fr/tel-00785007.

Full text
Abstract:
The Internet is organized as a multitude of networks called Autonomous Systems (ASes). The Border Gateway Protocol (BGP) is the common language that allows these administrative domains to interconnect. Thanks to BGP, two users located anywhere in the world can communicate, because this protocol is responsible for propagating routing messages between all neighboring networks. To meet new requirements, BGP has had to improve and evolve through frequent extensions and new architectures. In the original version, each router had to maintain a session with every other router in the network. This constraint raised scalability problems, since a full mesh of internal BGP (iBGP) sessions became difficult to achieve in large networks. To cover this need for connectivity, network operators resort to route reflection (RR) and confederations. But while they solve a scalability problem, these two solutions have raised new challenges, since they come with multiple drawbacks: the loss of diversity among the candidate routes in the BGP selection process, and anomalies such as routing oscillations, deflections and forwarding loops. The work carried out in this thesis focuses on oBGP, a new architecture for redistributing external routes inside an AS. In place of the classic iBGP sessions, an overlay network is responsible for (I) exchanging routing information with other ASes, (II) storing the internal and external routes in a distributed way, (III) applying the routing policy at the AS level and (IV) computing and redistributing the best routes towards Internet destinations to all routers.
APA, Harvard, Vancouver, ISO, and other styles
28

Nemati, Hamed. "Secure System Virtualization : End-to-End Verification of Memory Isolation." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-213030.

Full text
Abstract:
Over the last years, security kernels have played a promising role in reshaping the landscape of platform security on embedded devices. Security kernels, such as separation kernels, enable constructing high-assurance mixed-criticality execution platforms on a small TCB, which enforces isolation between components. The reduced TCB minimizes the system attack surface and facilitates the use of formal methods to ensure the kernel's functional correctness and security. In this thesis, we explore various aspects of building a provably secure separation kernel using virtualization technology. We show how the memory management subsystem can be virtualized to enforce isolation of system components. Virtualization is done using direct paging, which enables a guest software to manage its own memory configuration. We demonstrate the soundness of our approach by verifying that the high-level model of the system fulfills the desired security properties. Through refinement, we then propagate these properties (semi-)automatically to the machine code of the virtualization mechanism. Further, we show how a runtime monitor can be securely deployed alongside a Linux guest on a hypervisor to prevent code injection attacks targeting Linux. The monitor takes advantage of the provided separation to protect itself and to retain a complete view of the guest. Separating components using low-level software cannot by itself guarantee system security. Indeed, current processor architectures include features that can be utilized to violate the isolation of components. We present a new low-noise attack vector, constructed by measuring cache effects, which is capable of breaching the isolation of components and invalidates the verification of software that has been verified on a memory-coherent model. To restore isolation, we provide several countermeasures and propose a methodology to repair the verification by including data caches in the statement of the top-level security properties of the system.

QC 20170831


PROSPER
HASPOC
APA, Harvard, Vancouver, ISO, and other styles
29

Young, Bobby Dalton. "MPI WITHIN A GPU." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_theses/614.

Full text
Abstract:
GPUs offer high-performance floating-point computation at commodity prices, but their usage is hindered by programming models which expose the user to irregularities in the current shared-memory environments and require learning new interfaces and semantics. This thesis will demonstrate that the message-passing paradigm can be conceptually cleaner than the current data-parallel models for programming GPUs because it can hide the quirks of current GPU shared-memory environments, as well as GPU-specific features, behind a well-established and well-understood interface. This will be shown by demonstrating a proof-of-concept MPI implementation which provides cleaner, simpler code with a reasonable performance cost. This thesis will also demonstrate that, although there is a virtualization constraint imposed by MPI, this constraint is harmless as long as the virtualization was already chosen to be optimal in terms of a strong execution model and nearly-optimal execution time. This will be demonstrated by examining execution times with varying virtualization using a computationally-expensive micro-kernel.
APA, Harvard, Vancouver, ISO, and other styles
30

Anwer, Muhammad Bilal. "Enhancing capabilities of the network data plane using network virtualization and software defined networking." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54422.

Full text
Abstract:
Enhancement of network data-plane functionality is an open problem that has recently gained momentum. The addition and programmability of new functions inside the network data plane, to enable high-speed, complex network functions with minimum resource utilization, is the main focus of this thesis. In this work, we look at different levels of the network data-plane design and, using network virtualization and software defined networking, we propose data-plane enhancements to achieve these goals. This thesis is divided into two parts. In the first part we take a ground-up approach where we focus our attention on fast-path packet processing. Using hardware- and software-based network virtualization, we show how hardware and software network switches can be designed to achieve the above-mentioned goals. We then present a switch design to quickly add these custom fast-path packet processors to the network data plane using software defined networking. In the second part of this thesis we take a top-to-bottom approach where we present a programming abstraction for network operators and a network function deployment system for this programming abstraction. We use network virtualization and software defined networking to introduce new functions inside the network data plane while relieving network operators of the deployment details and minimizing network resource utilization.
APA, Harvard, Vancouver, ISO, and other styles
31

Svärd, Petter. "Live VM Migration : Principles and Performance." Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-87246.

Full text
Abstract:
Virtualization is a key technology for cloud computing as it allows several operating system instances to run on the same machine, enhances resource manageability and enables flexible definition of billing units. Virtualization works by adding a software layer, a hypervisor, on top of the hardware platform. Virtual Machines, VMs, are run on top of the hypervisor, which provisions hardware resources to the VM guests. In addition to enabling higher utilization of hardware resources, the ability to move VMs from one host to another is an important feature. Live migration is the concept of migrating a VM while it is running and responding to requests. Since VMs can be relocated while running, live migration allows for better hardware utilization, because placement of services can be performed dynamically and not only when they are started. Live migration is also a useful tool for administrative purposes. If a server needs to be taken off-line for maintenance reasons, it can be cleared of services by live migrating these to other hosts. This thesis investigates the principles behind live migration. The common live migration approaches in use today are evaluated, and common objectives are presented as well as challenges that have to be overcome in order to implement an ideal live migration algorithm. The performance of common live migration approaches is also evaluated, and it is found that even though live migration is supported by most hypervisors, it has drawbacks which make the technique hard to use in certain situations. Migrating CPU- and/or memory-intensive VMs or migrating VMs over low-bandwidth links is a problem regardless of which approach is used. To tackle this problem, two improvements to live migration are proposed and evaluated: delta compression and dynamic page transfer reordering. Both improvements demonstrate better performance than the standard algorithm when migrating CPU- and/or memory-intensive VMs and when migrating over low-bandwidth links. Finally, recommendations are made on which live migration approach to use depending on the scenario, and also on which improvements to the standard live migration algorithms should be used and when.
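The delta compression improvement mentioned above can be sketched in a few lines: instead of resending a dirtied memory page in full, the sender XORs it against the previously transferred version and compresses the result, which is mostly zeros when only a small part of the page changed. The code below is an illustrative sketch under that assumption, not the thesis's implementation.

```python
# Illustrative sketch of delta compression for live migration: send the
# compressed XOR difference of a page instead of the whole page.
import zlib

PAGE_SIZE = 4096  # typical x86 page size

def delta_compress(old_page: bytes, new_page: bytes) -> bytes:
    """Compress the XOR difference between two versions of a page."""
    delta = bytes(a ^ b for a, b in zip(old_page, new_page))
    return zlib.compress(delta)

def delta_decompress(old_page: bytes, compressed_delta: bytes) -> bytes:
    """Reconstruct the new page from the old page and the compressed delta."""
    delta = zlib.decompress(compressed_delta)
    return bytes(a ^ b for a, b in zip(old_page, delta))

old = bytes(PAGE_SIZE)
new = bytearray(old)
new[100:104] = b"abcd"                      # a small modification to the page
blob = delta_compress(old, bytes(new))
assert delta_decompress(old, blob) == bytes(new)
print(f"{len(blob)} bytes sent instead of {PAGE_SIZE}")
```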
APA, Harvard, Vancouver, ISO, and other styles
32

Yousif, Wael K. Yousif. "EXAMINING ENGINEERING & TECHNOLOGY STUDENTS ACCEPTANCE OF NETWORK VIRTUALIZATION TECHNOLOGY USING THE TECHNOLOGY ACCEPTANCE MODE." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3039.

Full text
Abstract:
This causal and correlational study was designed to extend the Technology Acceptance Model (TAM) and to test its applicability to Valencia Community College (VCC) Engineering and Technology students as the target user group when investigating the factors influencing their decision to adopt and to utilize VMware as the target technology. In addition to the three primary indigenous factors: perceived ease of use, perceived usefulness, and intention toward utilization, the model was also extended with enjoyment, external control, and computer self-efficacy as antecedents to perceived ease of use. In an attempt to further increase the explanatory power of the model, the Task-Technology Fit constructs (TTF) were included as antecedents to perceived usefulness. The model was also expanded with subjective norms and voluntariness to assess the degree to which social influences affect students' decisions regarding adoption and utilization. This study was conducted during the fall term of 2009, using 11 instruments: (1) VMware Tools Functions Instrument; (2) Computer Networking Tasks Characteristics Instrument; (3) Perceived Usefulness Instrument; (4) Voluntariness Instrument; (5) Subjective Norms Instrument; (6) Perceived Enjoyment Instrument; (7) Computer Self-Efficacy Instrument; (8) Perception of External Control Instrument; (9) Perceived Ease of Use Instrument; (10) Intention Instrument; and (11) a Utilization Instrument. The 11 instruments collectively contained 58 items. Additionally, a demographics instrument of six items was included to investigate the influence of age, prior experience with the technology, prior experience in computer networking, academic enrollment status, and employment status on student intentions and behavior with regard to VMware as a network virtualization technology. Data were analyzed using path analysis, regressions, and univariate analysis of variance in SPSS and AMOS for Windows. The results suggest that perceived ease of use was the strongest determinant of student intention. The analysis also suggested that external control, measuring the facilitating conditions (knowledge, resources, etc.) necessary for adoption, was the strongest predictor of perceived ease of use. Consistent with previous studies, perceived ease of use was found to be the strongest predictor of perceived usefulness, followed by subjective norms as students continued to use the technology. Even though the integration of the task-technology fit construct was not helpful in explaining the variance in students' perceived usefulness of the target technology, it was statistically significant in predicting students' perception of ease of use. The study concluded with recommendations to investigate other factors (such as service quality and ease of implementation) that might contribute to explaining the variance in perceived ease of use as the primary driving force in influencing students' decisions for adoption. A recommendation was also made to modify the task-technology fit construct instruments to improve the articulation and specificity of the task. The need for further examination of the influence of the instructor on students' decision to adopt a target technology was also emphasized.
Ed.D.
Department of Educational Research, Technology and Leadership
Education
Education EdD
APA, Harvard, Vancouver, ISO, and other styles
33

Färlind, Filip, and Kim Ottosson. "Servervirtualisering idag : En undersökning om servervirtualisering hos offentliga verksamheter i Sverige." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-37032.

Full text
Abstract:
At present, there is no summary of how server virtualization is implemented and how it works in different organizations in Sweden. Through a survey, this work therefore answers the question: "How is server virtualization implemented by municipalities and county councils in Sweden?" The results show that server virtualization is well implemented by municipalities and county councils in Sweden, and that the results are very similar between these organizations. The completed work provides different types of organizations with support in planning and implementing server virtualization.
APA, Harvard, Vancouver, ISO, and other styles
34

Vellanki, Mohit. "Performance Evaluation of Cassandra in a Virtualized Environment." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14032.

Full text
Abstract:
Context. Apache Cassandra is an open-source, scalable NoSQL database that distributes data over many commodity servers. It provides no single point of failure by copying and storing the data in different locations, and it uses a ring design rather than the traditional master-slave design. Virtualization is the technique by which the physical resources of a machine are divided and utilized by various virtual machines; it is the fundamental technology that allows cloud computing to provide resource sharing among users. Objectives. Through this research, the effects of virtualization on Cassandra are observed by comparing a virtual machine arrangement to a physical machine arrangement, along with the overhead caused by virtualization. Methods. An experiment is conducted in this study to identify the aforementioned effects of virtualization on Cassandra compared to physical machines. Cassandra runs on physical machines with Ubuntu 14.04 LTS arranged in a multi-node cluster. Results are obtained by executing mixed, read-only and write-only operations in the Cassandra stress tool on the data populated in this cluster. This procedure is repeated for 100% and 66% workload, and the same procedure is then repeated on a virtual machine cluster so the results can be compared. Results. Virtualization overhead is identified in terms of CPU utilisation, and the effects of virtualization on Cassandra are quantified in terms of disk utilisation, throughput and latency. Conclusions. The overhead caused by virtualization is observed, and the effect of this overhead on the performance of Cassandra is identified and related to the change in Cassandra's performance.
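As a rough illustration of the measurement style this abstract describes (timing batches of read and write operations and reporting throughput and latency), the following sketch is a generic harness with a placeholder operation; the thesis itself used the cassandra-stress tool against a real multi-node cluster.

    # Minimal sketch: time a batch of operations and report throughput and latency.
    # fake_operation is a stand-in for a Cassandra read or write, so this runs anywhere.
    import time
    import statistics

    def fake_operation():
        # Placeholder for a Cassandra read or write issued by a driver or stress tool.
        time.sleep(0.001)

    def run_workload(operation, count=1000):
        latencies = []
        start = time.perf_counter()
        for _ in range(count):
            t0 = time.perf_counter()
            operation()
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        latencies.sort()
        return {
            "throughput_ops_per_s": count / elapsed,
            "mean_latency_ms": 1000 * statistics.mean(latencies),
            "p99_latency_ms": 1000 * latencies[int(0.99 * (count - 1))],
        }

    if __name__ == "__main__":
        print(run_workload(fake_operation))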
APA, Harvard, Vancouver, ISO, and other styles
35

Wilcox, David Luke. "Packing Virtual Machines onto Servers." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2798.

Full text
Abstract:
Data centers consume a significant amount of energy. This problem is aggravated by the fact that most servers and desktops are underutilized when powered on, and still consume a majority of the energy of a fully utilized computer even when idle. This problem would be much worse were it not for the growing use of virtual machines. Virtual machines allow system administrators to more fully utilize hardware capabilities by putting more than one virtual system on the same physical server. Many times, virtual machines are placed onto physical servers inefficiently. To address this inefficiency, I developed a new family of packing algorithms. This family of algorithms is meant to solve the problem of packing virtual machines onto a cluster of physical servers. This problem differs from the conventional bin packing problem in two ways. First, each server has multiple resources that can be consumed. Second, loads on virtual machines are probabilistic and not completely known to the packing algorithm. We first compare our developed algorithm with other bin packing algorithms and show that it performs better than state-of-the-art genetic algorithms in the literature. We then show the general feasibility of our algorithm in packing real virtual machines on physical servers.
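The sketch below is a hedged illustration of the underlying multi-resource packing problem, using a plain first-fit-decreasing heuristic over two resource dimensions; it is not the thesis's probabilistic algorithm, and the server capacities and VM demands are made up.

    # Baseline sketch: first-fit-decreasing packing of VMs onto servers with two resources.
    from dataclasses import dataclass, field

    @dataclass
    class Server:
        cpu: float
        ram: float
        vms: list = field(default_factory=list)

        def fits(self, vm):
            return vm["cpu"] <= self.cpu and vm["ram"] <= self.ram

        def place(self, vm):
            self.cpu -= vm["cpu"]
            self.ram -= vm["ram"]
            self.vms.append(vm)

    def first_fit_decreasing(vms, server_cpu=16.0, server_ram=64.0):
        # Open a new server whenever a VM does not fit on any existing one.
        servers = []
        for vm in sorted(vms, key=lambda v: v["cpu"] + v["ram"], reverse=True):
            target = next((s for s in servers if s.fits(vm)), None)
            if target is None:
                target = Server(server_cpu, server_ram)
                servers.append(target)
            target.place(vm)
        return servers

    vms = [{"cpu": 4, "ram": 8}, {"cpu": 8, "ram": 32}, {"cpu": 2, "ram": 4},
           {"cpu": 6, "ram": 16}, {"cpu": 4, "ram": 24}]
    print(len(first_fit_decreasing(vms)), "servers used")

In the thesis's setting the loads are probabilistic, so a real packer would reason about demand distributions rather than the fixed numbers used here.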
APA, Harvard, Vancouver, ISO, and other styles
36

Kotikela, Srujan D. "Secure and Trusted Execution Framework for Virtualized Workloads." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1248514/.

Full text
Abstract:
In this dissertation, we have analyzed various security and trust solutions for modern computing systems and proposed a framework that provides holistic security and trust for the entire lifecycle of a virtualized workload. The framework consists of three novel techniques and a set of guidelines. The three techniques provide the necessary elements for a secure and trusted execution environment, while the guidelines ensure that the virtualized workload remains in a secure and trusted state throughout its lifecycle. We have successfully implemented and demonstrated that the framework provides security and trust guarantees at the time of launch, at any time during execution, and during an update of the virtualized workload. Given the proliferation of virtualization from cloud servers to embedded systems, the techniques presented in this dissertation can be implemented on most computing systems.
APA, Harvard, Vancouver, ISO, and other styles
37

Pham, Khoi Minh. "NEURAL NETWORK ON VIRTUALIZATION SYSTEM, AS A WAY TO MANAGE FAILURE EVENTS OCCURRENCE ON CLOUD COMPUTING." CSUSB ScholarWorks, 2018. https://scholarworks.lib.csusb.edu/etd/670.

Full text
Abstract:
Cloud computing is one important direction of current advanced technology trends, and it is dominating the industry in many aspects. These days cloud computing has become an intense battlefield of many big technology companies; whoever wins this war has a very high potential to rule the next generation of technologies. From a technical point of view, cloud computing is classified into three different categories, each of which provides different crucial services to users: Infrastructure (Hardware) as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). Normally, the standard measurements for the reliability level of cloud computing are based on two approaches: Service Level Agreements (SLAs) and Quality of Service (QoS). This thesis focuses on the error event logs of IaaS cloud systems as an aspect of QoS in IaaS cloud reliability. To give a better view, IaaS is basically a derivation of the traditional virtualization system, where multiple virtual machines (VMs) with different Operating System (OS) platforms run on one physical machine (PM) that has enough computational power. The PM plays the role of the host machine in cloud computing, and the VMs play the role of the guest machines. Due to the lack of full access to a complete real cloud system, this thesis investigates the technical reliability level of IaaS cloud through a simulated virtualization system. By collecting and analyzing the event logs generated from the virtualization system, we can get a general overview of the system's technical reliability level based on the number of error events that occur in the system. These events are then used in a neural network time series model to detect the pattern of the system's failure events, as well as to predict the next error event that is going to occur in the virtualization system.
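To illustrate the data preparation such a model needs, here is a minimal sketch, assuming a hypothetical hourly series of error-event counts, that builds sliding-window training samples and a naive baseline; the thesis's actual neural network time series model is not reproduced here.

    # Illustrative sketch only: turn an error-event count series into supervised windows.
    import numpy as np

    # Hypothetical hourly error-event counts extracted from hypervisor/VM event logs.
    error_counts = np.array([0, 1, 0, 2, 5, 1, 0, 0, 3, 4, 1, 0, 2, 6, 2, 1])

    def make_windows(series, window=4):
        # Each sample: the last `window` counts; target: the next count.
        X, y = [], []
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
        return np.array(X), np.array(y)

    X, y = make_windows(error_counts)
    baseline = X.mean(axis=1)          # predict "next hour = average of the window"
    mae = np.abs(baseline - y).mean()  # a neural model would aim to beat this error
    print(f"{len(X)} training windows, baseline MAE = {mae:.2f}")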
APA, Harvard, Vancouver, ISO, and other styles
38

Su, Yu. "Big Data Management Framework based on Virtualization and Bitmap Data Summarization." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420738636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Klemperer, Peter Friedrich. "Efficient Hypervisor Based Malware Detection." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/466.

Full text
Abstract:
Recent years have seen an uptick in master boot record (MBR) based rootkits that load before the Windows operating system and subvert the operating system’s own procedures. As such, MBR rootkits are difficult to counter with operating system-based antivirus software that runs at the same privilege level as the rootkits. Hypervisors operate at a higher privilege level than the guests they manage, creating a high-ground position in the host. This high-ground position can be exploited to perform security checks on the virtual machine guests where the checking software is isolated from guest-based viruses. The efficient introspection system described in this thesis targets existing virtualized systems to improve security with real-time, concurrent memory introspection capabilities. Efficient introspection decouples memory introspection from virtual machine guest execution and establishes coherent and consistent memory views between the host and running guest, while maintaining normal guest operation. Existing introspection systems have provided one or two of these properties but not all three at once. This thesis presents a new concurrent-computing approach – high-performance memory snapshotting – to accelerating hypervisor based introspection of virtual machine guest memory that combines all three elements to improve performance and security. Memory snapshots create a coherent and consistent memory view of the guest that can be shared with the independently running introspection application. Three memory snapshotting mechanisms are presented and evaluated for their impact on normal guest operation. Existing introspection systems and security protection techniques that were previously dismissed as too slow are now enabled by efficient introspection. This thesis explains why existing introspection systems are inadequate, describes how existing system performance can be improved, evaluates an efficient introspection prototype on both applications and microbenchmarks, and discusses two potential security applications that are enabled by efficient introspection. These applications point to efficient introspection’s utility for supporting useful security applications.
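The toy sketch below illustrates only the copy-on-write idea behind memory snapshotting, modelling guest memory as a dictionary of pages; the real mechanism described in the thesis operates inside the hypervisor on hardware pages.

    # Toy model: a snapshot stays consistent while the "guest" keeps writing.
    class GuestMemory:
        def __init__(self, pages):
            self.pages = dict(pages)      # live page -> contents
            self.snapshot_copies = None   # pre-write copies captured during a snapshot

        def begin_snapshot(self):
            self.snapshot_copies = {}

        def write(self, page, value):
            # Copy-on-write: preserve the old contents the first time a page is touched
            # after the snapshot began, so the snapshot view stays consistent.
            if self.snapshot_copies is not None and page not in self.snapshot_copies:
                self.snapshot_copies[page] = self.pages[page]
            self.pages[page] = value

        def snapshot_view(self, page):
            if self.snapshot_copies is not None and page in self.snapshot_copies:
                return self.snapshot_copies[page]
            return self.pages[page]

    mem = GuestMemory({0: "kernel", 1: "driver", 2: "data"})
    mem.begin_snapshot()
    mem.write(2, "modified by guest")
    print(mem.snapshot_view(2))  # "data": introspection still sees snapshot-time contents
    print(mem.pages[2])          # "modified by guest": the guest sees its own write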
APA, Harvard, Vancouver, ISO, and other styles
40

Al, burhan Mohammad. "Differences between DockerizedContainers and Virtual Machines : A performance analysis for hosting web-applications in a virtualized environment." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19816.

Full text
Abstract:
This is a bachelor thesis regarding the performance differences of hosting a web application in a virtualized environment. We compare virtual machines against containers, observe their resource usage in categories such as CPU, RAM and disk storage in the idle state, and perform a range of computation experiments in which response times are measured over a series of request intervals. Response times are measured with the help of a web application created in Python. The experiments are performed under both normal and stressed conditions to give a better indication as to which virtualized environment outperforms the other in different scenarios. The results show that virtual machines and containers remained close to each other in response times during the first request interval, but the containers outperformed virtual machines in terms of resource usage while in the idle state and placed less of a burden on the host computer. They were also significantly faster in terms of response times. This is most noticeable under stressed conditions, in which the virtual machine almost doubled its response times.
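As a hedged sketch of this kind of measurement, the code below sends requests at a fixed interval and records response times; the URL, interval, and request count are placeholders rather than the thesis's actual test parameters.

    # Sketch: probe a hosted web application at fixed intervals and record response times.
    import time
    import urllib.request

    def measure_response_times(url, count=50, interval_s=0.2):
        times = []
        for _ in range(count):
            t0 = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            times.append(time.perf_counter() - t0)
            time.sleep(interval_s)
        return times

    if __name__ == "__main__":
        samples = measure_response_times("http://192.0.2.10:8000/")  # placeholder address
        print(f"mean={sum(samples)/len(samples)*1000:.1f} ms, "
              f"max={max(samples)*1000:.1f} ms")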
APA, Harvard, Vancouver, ISO, and other styles
41

Kieu, Le Truong Van. "Container Based Virtualization Techniques on Lightweight Internet of Things Devices : Evaluating Docker container effectiveness on Raspberry Pi computers." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42745.

Full text
Abstract:
There currently does not exist a way of creating digital twins' information and transferring it between networks, but container-based virtualization could be a possible solution. One such technique is Docker, an engine for isolating software that can bring benefits to the workflow of software development. Making changes with Docker is very fast since it uses the copy-on-write model; applications can be containerized in minutes. This study designs a scenario with two devices sending a data packet between each other to simulate the transfer process. The results from the study are investigated and analyzed to answer the question of whether container-based virtualization can be a possible solution for creating and transferring digital twins. The result from the scenario is that Docker performs equally or worse when used on low-cost computing hardware compared to regular computing hardware. It is speculated that the resources used in the images are one factor that can affect the performance, but the hardware is another factor that can affect it.
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Min. "Memory region: a system abstraction for managing the complex memory structures of multicore platforms." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50398.

Full text
Abstract:
The performance of modern many-core systems depends on the effective use of their complex cache and memory structures, and this will likely become more pronounced with the impending arrival of on-chip 3D stacked and non-volatile off-chip byte-addressable memory. Yet to date, operating systems have not treated memory as a first-class schedulable resource or embraced memory heterogeneity. This dissertation presents a new software abstraction, called 'memory region', which denotes the current set of physical memory pages actively used by workloads. Using this abstraction, memory resources can be scheduled for applications to fully exploit a platform's underlying cache and memory system, thereby gaining improved performance and predictability in execution, particularly for the consolidated workloads seen in virtualized and cloud computing infrastructures. The abstraction's implementation in the Xen hypervisor involves the run-time detection of memory regions, the scheduled mapping of these regions to caches to match performance goals, and maintaining region-to-cache mappings using per-cache page tables. This dissertation makes the following specific contributions. First, it proposes a new scheduling method, region scheduling, in which the location of memory blocks rather than CPU utilization is the principal determinant of where workloads are run. Second, treating memory blocks as first-class resources, new methods for efficient cache management are shown to improve application performance as well as the performance of certain operating system functions. Third, explicit memory scheduling makes it possible to disaggregate operating systems, without the need to change OS sources and with only small markups of target guest OS functionality. With this method, OS functions can be mapped to specific desired platform components, such as a file system confined to running on specific cores and using only certain memory resources designated for its use. This can improve performance for applications heavily dependent on certain OS functions, by dynamically providing those functions with the resources needed for their current use, and it can prevent performance-critical application functionality from being needlessly perturbed by OS functions used for other purposes or by other jobs. Fourth, extensions of region scheduling can also help applications deal with the heterogeneous memory resources present in future systems, including on-chip stacked DRAM and NUMA or even NVRAM memory modules. More generally, region scheduling is shown to apply to memory structures with well-defined differences in memory access latencies.
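As a toy illustration of the scheduling idea (placing memory regions onto caches to meet performance goals), the sketch below greedily balances hypothetical region "heat" across caches; it is far removed from the dissertation's Xen implementation with per-cache page tables.

    # Toy sketch: balance hypothetical region access frequencies across caches.
    from heapq import heappush, heappop

    def schedule_regions(regions, num_caches):
        # regions: dict of region name -> estimated access frequency.
        # Min-heap of (assigned load, cache id, assigned regions); always place the
        # hottest remaining region on the currently least-loaded cache.
        heap = [(0, cache_id, []) for cache_id in range(num_caches)]
        for region, freq in sorted(regions.items(), key=lambda kv: kv[1], reverse=True):
            load, cache_id, assigned = heappop(heap)
            assigned.append(region)
            heappush(heap, (load + freq, cache_id, assigned))
        return {cache_id: assigned for _, cache_id, assigned in heap}

    regions = {"vm1-heap": 900, "vm1-stack": 120, "vm2-heap": 700, "vm2-buffers": 300}
    print(schedule_regions(regions, num_caches=2))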
APA, Harvard, Vancouver, ISO, and other styles
43

Indukuri, Pavan Sutha Varma. "Performance comparison of Linux containers(LXC) and OpenVZ during live migration : An experiment." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13540.

Full text
Abstract:
Context: Cloud computing is one of the most widely used technologies all over the world, providing numerous products and IT services. Virtualization is one of the innovative technologies in cloud computing, with the advantages of improved resource utilisation and management. Live migration is an innovative feature of virtualization that allows a virtual machine or container to be transferred from one physical server to another. Live migration is a complex process which can have a significant impact on cloud computing when used by cloud-based software. Objectives: In this study, live migration of LXC and OpenVZ containers has been performed. Later, the performance of LXC and OpenVZ has been evaluated in terms of total migration time and downtime. Further, CPU utilisation, disk utilisation and the average load of the servers are also evaluated during the process of live migration. The main aim of this research is to compare the performance of LXC and OpenVZ during live migration of containers. Methods: A literature study has been done to gain knowledge about the process of live migration and the metrics that are required to compare the performance of LXC and OpenVZ during live migration of containers. Further, an experiment has been conducted to compute and evaluate the performance metrics identified in the literature study. The experiment was done to investigate and evaluate the migration process for both LXC and OpenVZ, and the experiments were designed and conducted based on the objectives which were to be met. Results: The results of the experiments include the migration performance of both LXC and OpenVZ. The performance metrics identified in the literature review, total migration time and downtime, were evaluated for LXC and OpenVZ. Further, graphs were plotted for the CPU utilisation, disk utilisation, and average load during the live migration of containers. The results were analysed to compare the performance differences between OpenVZ and LXC during live migration of containers. Conclusions: From the experiment it can be concluded that LXC shows higher utilisation, and thus lower performance, compared with OpenVZ. However, LXC has less migration time and downtime than OpenVZ.
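A minimal sketch of one way downtime can be estimated during a live migration is shown below: repeatedly probe a TCP service in the container and take the longest run of failed probes. The host, port, and probe rate are placeholder values, not the thesis's tooling.

    # Sketch: estimate service downtime as the longest gap of failed TCP probes.
    import socket
    import time

    def probe(host, port, timeout=0.2):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def measure_downtime(host="192.0.2.20", port=80, duration_s=60, interval_s=0.1):
        longest_gap, gap_start = 0.0, None
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            now = time.monotonic()
            if probe(host, port):
                if gap_start is not None:
                    longest_gap = max(longest_gap, now - gap_start)
                    gap_start = None
            elif gap_start is None:
                gap_start = now
            time.sleep(interval_s)
        return longest_gap

    if __name__ == "__main__":
        print(f"observed downtime: {measure_downtime():.2f} s")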
APA, Harvard, Vancouver, ISO, and other styles
44

Hudzina, John Stephen. "An Enhanced MapReduce Workload Allocation Tool for Spot Market Resources." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/34.

Full text
Abstract:
When a cloud user allocates a cluster to execute a map-reduce workload, the user must determine the number and type of virtual machine instances to minimize the workload's financial cost. The cloud user may rent on-demand instances at a fixed price or spot instances at a variable price to execute the workload. Although the cloud user may bid on spot virtual machine instances at a reduced rate, the spot market auction may delay the workload's start or terminate the spot instances before the workload completes. The cloud user requires a forecast of the workload's financial cost and completion time to analyze the trade-offs between on-demand and spot instances. While existing estimation tools predict map-reduce workloads' completion times and costs, these tools do not provide spot instance estimates because a spot market auction determines the instance's start time and duration. The ephemeral spot instances affect execution time estimates because the spot market auction forces the map-reduce workloads to use different storage strategies to persist data after the spot instances terminate. The spot market also reduces the accuracy of existing tools' completion time and cost estimates because the tool must factor in spot instance wait times and early terminations. This dissertation updated an existing tool to forecast a map-reduce workload's monetary cost and completion time based on spot market historical traces. The enhanced estimation tool includes three new enhancements over existing tools. First, the estimation tool models the impact on execution of the new storage strategies. Second, the enhanced tool calculates the additional execution time from early spot instance termination. Finally, the enhanced tool predicts the workload's wait time and early-termination probabilities from historical traces. Based on two historical Amazon EC2 spot market traces, the enhancements reduce the average completion time prediction error by 96% and the average monetary cost prediction error by 99% over existing tools.
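The sketch below is a deliberately simplified cost model of the spot versus on-demand trade-off the dissertation forecasts, with hypothetical prices, runtimes, and a crude termination penalty; the actual tool is trace-driven and far more detailed.

    # Simplified sketch: expected cost and completion time for on-demand vs spot instances.
    def on_demand(runtime_h, price_per_h, instances):
        cost = runtime_h * price_per_h * instances
        return cost, runtime_h

    def spot(runtime_h, spot_price_per_h, instances, p_terminate, rerun_penalty_h):
        # With probability p_terminate the workload loses intermediate work and pays an
        # extra rerun_penalty_h of runtime (a crude stand-in for restart/storage cost).
        expected_runtime = runtime_h + p_terminate * rerun_penalty_h
        expected_cost = expected_runtime * spot_price_per_h * instances
        return expected_cost, expected_runtime

    if __name__ == "__main__":
        print("on-demand:", on_demand(runtime_h=4, price_per_h=0.40, instances=10))
        print("spot     :", spot(runtime_h=4, spot_price_per_h=0.12, instances=10,
                                 p_terminate=0.3, rerun_penalty_h=2))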
APA, Harvard, Vancouver, ISO, and other styles
45

Arvidsson, Jonas. "Utvärdering av containerbaserad virtualisering för telekomsignalering." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-67721.

Full text
Abstract:
New and innovative technologies that improve on the techniques already in use are constantly being developed. This project was about evaluating whether containers could be something for the IT company Tieto to use in their telecommunications products. Containers are portable, standalone, executable lightweight packages of software that also contain everything the software needs to run. Containers are a very hot topic right now and a fast-growing technology. Tieto wanted an investigation of the technology, carried out under certain requirements, where the main requirement was a working, executable protocol stack in a container environment. In the investigation, a proof of concept was developed; a proof of concept is a realization of a certain method or idea in order to demonstrate its feasibility. The proof of concept led to Tieto wanting additional experiments carried out on containers. The experiments investigated whether equal performance could be achieved with containers compared to the virtual machine method used by Tieto today. The experiments showed a small reduction in efficiency, but also benefits such as higher flexibility. Further development of the container method could provide an equally good and equitable solution. The project can therefore be seen as successful, as both the proof of concept developed and the experiments carried out point to this new technology becoming part of Tieto's product development in the future.
APA, Harvard, Vancouver, ISO, and other styles
46

Craig, Kyle. "Exploration and Integration of File Systems in LlamaOS." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1418910310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Eriksson, Magnus, and Staffan Palmroos. "Comparative Study of Containment Strategies in Solaris and Security Enhanced Linux." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9078.

Full text
Abstract:

To minimize the damage in the event of a security breach it is desirable to limit the privileges of remotely available services to the bare minimum and to isolate the individual services from the rest of the operating system. To achieve this there are a number of different containment strategies and process privilege security models that may be used. Two of these mechanisms are Solaris Containers (a.k.a. Solaris Zones) and Type Enforcement, as implemented in the Fedora distribution of Security Enhanced Linux (SELinux). This thesis compares how these technologies can be used to isolate a single service in the operating system.

As these two technologies differ significantly we have examined how the isolation effect can be achieved in two separate experiments. In the Solaris experiments we show how the footprint of the installed zone can be reduced and how to minimize the runtime overhead associated with the zone. To demonstrate SELinux we create a deliberately flawed network daemon and show how this can be isolated by writing a SELinux policy.

We demonstrate how both technologies can be used to achieve isolation for a single service. Differences between the two technologies become apparent when trying to run multiple instances of the same service where the SELinux implementation suffers from lack of namespace isolation. When using zones the administration work is the same regardless of the services running in the zone whereas SELinux requires a separate policy for each service. If a policy is not available from the operating system vendor the administrator needs to be familiar with the SELinux policy framework and create the policy from scratch. The overhead of the technologies is small and is not a critical factor for the scalability of a system using them.

APA, Harvard, Vancouver, ISO, and other styles
48

Modig, Dennis. "Assessing performance and security in virtualized home residential gateways." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9966.

Full text
Abstract:
Over the past years the use of digital devices has increased heavily, and home networks continue to grow in size and complexity. By the use of virtualized residential gateways, advanced functionality can be moved away from the home, thereby decreasing the administrative burden on the home user. Using virtualized residential gateways instead of physical devices creates new challenges. This project looks into how the choice of virtualization technology impacts performance and security by investigating operating system level virtualization in contrast to full virtualization for use in home residential gateways. Results show that operating system level virtualization uses fewer resources in terms of disk, memory, and processor in virtualized residential gateways. The results also show that different security issues arise depending on the choice of setup and virtualization technology; these have been analyzed in a lab environment. Recommendations regarding solutions to the security issues are proposed in the concluding parts of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
49

Struhar, Vaclav. "Improving Soft Real-time Performance of Fog Computing." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55679.

Full text
Abstract:
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, and thus it decreases the time unpredictability of cloud computing that stems from (i) the computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. The computation in fog computing provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded, real-time (RT) responses. The correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects targeting soft RT applications in fog computing, in which the usefulness of the produced computational results degrades with violations of real-time requirements. With regard to computation, we provide a systematic literature survey on novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
APA, Harvard, Vancouver, ISO, and other styles
50

Svantesson, Björn. "Software Defined Networking : Virtual Router Performance." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-13417.

Full text
Abstract:
Virtualization is becoming more and more popular since the hardware that is available today often has the ability to run more than just a single machine. The hardware is too powerful in relation to the requirements of the software that is supposed to run on it, making it inefficient to run too little software on too powerful machines. With virtualization, a lot of different software can run on the same hardware, thereby increasing the efficiency of hardware usage. Virtualization does not stop at virtualizing operating systems or commodity software, but can also be used to virtualize networking components. These networking components include everything from routers to switches and can be set up on any kind of virtualized system. When discussing virtualization of networking components, the expression "Software Defined Networking" is hard to miss. Software Defined Networking is a term that covers all of these virtualized networking components and is the expression to use when researching further into this subject. There is an increasing interest in these virtualized networking components now compared to just a few years ago, because company networks have become much more complex than they were a few years back. More services need to be available inside the network, and a lot of people believe that Software Defined Networking can help in this regard. The aim of this thesis is to find out what kind of differences there are between multiple software routers, for example which of the routers offers the highest network speed for the least amount of hardware cost. It also looks at different aspects of the routers' performance in relation to one another in order to establish whether any "best" router exists across multiple areas. The idea is to build a virtualized network that roughly resembles how a normal network looks in smaller companies today. This network is then used for different types of testing, with the software-based router placed in the middle and taking care of routing between different local virtual networks. All of the routers are placed on the same server, their configuration is kept very basic, and each router gets access to the same amount of hardware. After initial testing, routers that perform badly are excluded from additional testing, so that no unnecessary testing is done on routers that cannot keep up with the others. The results from these tests are compared to the results of a hardware router subjected to the same kind of tests in the same position. The results from the testing were fairly surprising, with only one single router being eliminated early on as the remaining ones continued to "battle" one another in further tests. These tests were compared to the results of a hardware router, and the results here were also quite surprising, with much better performance in many areas from the software routers' perspective.
APA, Harvard, Vancouver, ISO, and other styles
