
Dissertations on the topic "Virtual Machines (VM)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 27 dissertations for your research on the topic "Virtual Machines (VM)".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.

1

George, Sharath. "Usermode kernel : running the kernel in userspace in VM environments." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2858.

Full text of the source
Abstract:
In many virtual machine deployments today, virtual machine instances are created to support a single application. Traditional operating systems provide an extensive framework for protecting one process from another. In such deployments, this protection layer becomes an additional source of overhead, as isolation between services is provided at the operating system level and each instance of an operating system supports only one service. This makes the operating system the equivalent of a process from the traditional operating system perspective. Isolation between these operating systems, and indirectly the services they support, is ensured by the virtual machine monitor. In these scenarios the process protection provided by the operating system becomes redundant and a source of additional overhead. We propose a new model for these scenarios, with operating systems that bypass the redundant protection offered by traditional operating systems. We prototyped such an operating system by executing parts of the operating system in the same protection ring as user applications. This gives processes more power and direct access to kernel memory, removing the need to copy data between user and kernel space as required when the traditional ring protection layer is enforced. This saves the system call trap overhead and allows application programmers to call kernel functions directly, exposing the rich kernel library. It does not compromise security on the other virtual machines running on the same physical machine, as they are protected by the VMM. We illustrate the design and implementation of such a system with the Xen hypervisor and the XenoLinux kernel.
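The saving the thesis targets, avoiding the ring transition of a system call, can be felt with a rough microbenchmark. The sketch below is illustrative only and not from the thesis; it times a syscall-backed call (os.getpid) against a plain in-process call, and since interpreter overhead inflates both numbers, only the gap between them is of interest.

```python
# Rough illustration: cost of a trapping system call vs. a plain user-space
# call (the overhead a usermode kernel avoids). Numbers are indicative only.
import os
import time

def measure(fn, n=1_000_000):
    """Average cost of fn() in nanoseconds over n calls."""
    start = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - start) / n

def plain_call():
    return 42  # stays in user space: no ring transition, no trap

print(f"syscall-backed call (os.getpid): {measure(os.getpid):7.1f} ns")
print(f"in-process call:                 {measure(plain_call):7.1f} ns")
```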
APA, Harvard, Vancouver, ISO, and other styles
2

Yoginath, Srikanth B. "Virtual time-aware virtual machine systems." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52321.

Full text of the source
Abstract:
Discrete dynamic system models that track, maintain, utilize, and evolve virtual time are referred to as virtual time systems (VTS). The realization of VTS using virtual machine (VM) technology offers several benefits, including fidelity, scalability, interoperability, fault tolerance, and load balancing. The combination of VTS and VMs appears in two ways: (a) VMs within VTS, and (b) VTS over VMs. The former is prevalent in high-fidelity cyber infrastructure simulations and cyber-physical system simulations, wherein VMs form a crucial component of the VTS. The latter appears in popular Cloud computing services, where VMs are offered as computing commodities and the VTS utilizes VMs as parallel execution platforms. Prior to the work presented here, the simulation community using VMs within VTS (specifically, cyber infrastructure simulations) had little awareness of a fundamental virtual time-ordering problem. The correctness problem went largely unnoticed and unaddressed because the effects of fair-share multiplexing on the virtual time evolution of VMs within VTS were unrecognized. The dissertation research reported here demonstrated the latent incorrectness of existing methods, defined key correctness benchmarks, quantitatively measured the incorrectness, proposed and implemented novel algorithms to overcome it, and optimized the solutions to execute without a performance penalty. In fact, our novel correctness-enforcing design yields better runtime performance than the traditional (incorrect) methods. Similarly, VTS execution over VM platforms such as Cloud computing services incurs large performance degradation, which was not known until our research uncovered the fundamental mismatch between the scheduling needs of VTS execution and those of traditional parallel workloads. Consequently, we designed a novel VTS-aware hypervisor scheduler and showed significant performance gains in VTS execution over VM platforms. Prior to our work, the performance concern of VTS over VMs was largely unaddressed due to the absence of an understanding of the execution-policy mismatch between VMs and VTS applications: VTS follows virtual-time-ordered execution, whereas conventional VM execution follows a fair-share policy. Our research quantitatively uncovered the exact cause of the poor performance of VTS on VM platforms. Moreover, we proposed and implemented a novel virtual time-aware execution methodology that relieves the degradation and provides over an order of magnitude faster execution than traditional virtual time-unaware execution.
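The correctness property at stake here, that events must fire in virtual-time order no matter how the underlying VMs are multiplexed, can be stated in a few lines. The sketch below is a generic discrete-event skeleton for illustration, not the dissertation's system.

```python
# Generic discrete-event skeleton: events fire strictly in virtual-time
# order, the property that fair-share multiplexing of VMs can violate.
import heapq

class VirtualTimeScheduler:
    def __init__(self):
        self._queue = []    # min-heap of (virtual_time, seq, action)
        self._seq = 0       # tie-breaker keeps equal timestamps FIFO
        self.now = 0.0

    def schedule(self, vt, action):
        assert vt >= self.now, "event scheduled in the virtual past"
        heapq.heappush(self._queue, (vt, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            vt, _, action = heapq.heappop(self._queue)
            self.now = vt            # the virtual clock jumps, never rewinds
            action(self)

sched = VirtualTimeScheduler()
sched.schedule(2.0, lambda s: print(f"t={s.now}: VM B delivers its packet"))
sched.schedule(1.0, lambda s: print(f"t={s.now}: VM A delivers its packet"))
sched.run()    # A fires before B, regardless of submission order
```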
APA, Harvard, Vancouver, ISO, and other styles
3

Atchukatla, Mahammad suhail. "Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17221.

Full text of the source
Abstract:
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization, and demand on cloud datacenters has grown tremendously over time, resulting in significantly increased resource utilization. In this thesis, research was carried out on optimizing energy consumption by packing virtual machines in the datacenter. The CloudSim simulator was used to evaluate bin-packing algorithms, and the OpenStack cloud computing environment was chosen as the platform for the practical implementation.
Objectives: Perform simulations of the algorithms in the CloudSim simulator; estimate and compare the energy consumption of different packing algorithms; and design an OpenStack testbed to implement a bin-packing algorithm.
Methods: We use the CloudSim simulator to estimate the energy consumption of the first-fit, first-fit-decreasing, best-fit, and enhanced best-fit algorithms, and we design a heuristic model for the OpenStack environment that optimizes the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment.
Results: In most cases the enhanced best-fit algorithm gives the best results. Results were obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work; their comparison indicates that the total energy consumption of the datacenter is reduced without affecting potential service level agreements.
Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and minimize energy consumption by shutting down unused physical machines. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
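For readers unfamiliar with the packing heuristics compared above, the sketch below shows first fit and best fit over a single normalized CPU dimension; the energy models, CloudSim integration, and the thesis's enhanced best-fit variant are omitted.

```python
# First-fit and best-fit packing over one normalized CPU dimension.
# Each VM demand is placed on an open host or a newly powered-on one.
def first_fit(vms, capacity=1.0):
    hosts = []
    for demand in vms:
        for i, load in enumerate(hosts):
            if load + demand <= capacity:
                hosts[i] += demand
                break
        else:
            hosts.append(demand)        # no host fits: power on a new PM
    return hosts

def best_fit(vms, capacity=1.0):
    hosts = []
    for demand in vms:
        fits = [i for i, load in enumerate(hosts) if load + demand <= capacity]
        if fits:
            i = max(fits, key=lambda i: hosts[i])   # tightest remaining gap
            hosts[i] += demand
        else:
            hosts.append(demand)
    return hosts

vms = [0.5, 0.7, 0.2, 0.4, 0.6, 0.1]
print("first fit:          ", len(first_fit(vms)), "hosts")
print("best fit decreasing:", len(best_fit(sorted(vms, reverse=True))), "hosts")
```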
APA, Harvard, Vancouver, ISO, and other styles
4

Ducasse, Quentin. "Sécurisation matérielle de la compilation à la volée des machines virtuelles langage." Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2024. http://www.theses.fr/2024ENTA0003.

Full text of the source
Abstract:
Language virtual machines (VMs) are the runtime environment of popular high-level managed languages. They offer portability and memory management to the developer and are deployed on most computing devices. Their widespread distribution, handling of untrusted user inputs, and execution of low-level tasks make them interesting to attackers. Software-only solutions that isolate their different components often incur a performance overhead incompatible with just-in-time (JIT) compilation. Hardware-accelerated runtime protections are being added to commercial processors to reconcile strong security guarantees with performance. To allow experimentation in the design and comparison of such solutions, this thesis focuses on the RISC-V instruction set and its extension capabilities. We present Gigue, a workload generator that outputs binaries similar to JIT code, directly executable on RISC-V softcores. It provides an interface for custom instructions and guarantees their execution. We present an instruction-level domain isolation solution added to Gigue binaries and implemented in an application-class processor with minimal processor modifications. The solution adds negligible performance overhead while enforcing strong properties on domains. To motivate deployment in real use cases, we extend the Pharo JIT compiler to the RISC-V instruction set, along with its testing infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
5

Ahvar, Ehsan. "Cost-efficient resource allocation for green distributed clouds." Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0001.

Full text of the source
Abstract:
Virtual machine (VM) placement (i.e., resource allocation) has a direct effect on both cost and carbon emission. Considering the geographic distribution of data centers (DCs), there is a variety of resources, energy prices, and carbon emission rates to consider in a distributed cloud, which makes placing VMs for cost and carbon efficiency even more critical and complex than in centralized clouds. The goal of this thesis is to present new VM placement algorithms that optimize cost and carbon emission in a distributed cloud. It first focuses on cost efficiency in distributed clouds and then extends the goal to optimizing both cost and carbon emission at the same time. The thesis includes two main parts. The first part proposes, develops, and evaluates static VM placement algorithms, where the initial placement of a VM holds throughout its lifetime. The second part proposes dynamic VM placement algorithms, where the initial placement of VMs is allowed to change (e.g., through VM migration and consolidation). The first contribution is a survey of the state of the art on cost- and carbon-emission-aware resource allocation in distributed cloud environments. The second contribution targets the challenge of optimizing inter-DC communication cost for large-scale tasks and proposes a Network-Aware Cost-Efficient Resource allocation method, called NACER, for distributed clouds. The goal is to minimize the network communication cost of running a task in a distributed cloud by selecting the DCs that provision the VMs in such a way that the total network distance (hop count or any reasonable measure) among the selected DCs is minimized. The third contribution proposes a Network-Aware Cost-Efficient VM Placement method (called NACEV) for distributed clouds. NACEV is an extended version of NACER: while NACER only considers inter-DC communication cost, NACEV optimizes both communication and computing cost at the same time and also proposes a mapping algorithm to place VMs on physical machines (PMs) inside the selected DCs. NACEV also considers aspects such as the heterogeneity of VMs, PMs, and switches; the variety of energy prices; multiple paths between PMs; the effects of workload on the cost (energy consumption) of cloud devices (i.e., switches and PMs); and the heterogeneity of the energy models of cloud elements. The fourth contribution presents a Cost and Carbon Emission-Efficient VM Placement method (called CACEV) for green distributed clouds. CACEV is an extended version of NACEV: in addition to cost efficiency, it considers carbon emission efficiency and green distributed clouds. It is a VM placement algorithm for the joint optimization of computing and network resources, which also considers the price, location, and carbon emission rate of resources. Unlike the previous contributions, it also considers IaaS Service Level Agreement (SLA) violations in the system model. To achieve better performance, the fifth contribution proposes a dynamic Cost and Carbon Emission-Efficient VM Placement method (D-CACEV) for green distributed clouds. D-CACEV extends our previous work, CACEV, with live VM migration mechanisms. We show that our joint VM placement-reallocation mechanism can constantly optimize both cost and carbon emission at the same time in a distributed cloud.
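As a hypothetical, much-simplified rendering of the NACER idea (the greedy rule and all names below are illustrative stand-ins, not the thesis's algorithm), one can grow a set of mutually close DCs until the requested VMs fit:

```python
# Hypothetical greedy DC selection: grow a mutually close set of DCs until
# the requested VMs fit. hops[i][j] is the network distance between DCs.
def pick_datacenters(hops, free_slots, vms_needed):
    n = len(hops)
    chosen = [min(range(n), key=lambda i: sum(hops[i]))]  # most central DC
    placed = free_slots[chosen[0]]
    while placed < vms_needed:                # assumes capacity suffices
        rest = [i for i in range(n) if i not in chosen and free_slots[i] > 0]
        nxt = min(rest, key=lambda i: sum(hops[i][j] for j in chosen))
        chosen.append(nxt)
        placed += free_slots[nxt]
    return chosen

hops = [[0, 2, 5],
        [2, 0, 4],
        [5, 4, 0]]
print(pick_datacenters(hops, free_slots=[3, 2, 4], vms_needed=5))  # -> [1, 0]
```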
APA, Harvard, Vancouver, ISO, and other styles
6

Hu, Ji. "A virtual machine architecture for IT-security laboratories." PhD thesis, [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980935652.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Albaaj, Hassan, and Victor Berggren. "Benchmark av Containers och Unikernels." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50214.

Full text of the source
Abstract:
Purpose – The purpose of this paper is to explore the possibility of making local networks and databases more efficient using unikernels, and to compare this with containers. This could also apply to the reliability of executing programs the same way on different hardware in software development. Method – Two experiments were performed to explore whether the purpose could be realized; quantitative data were gathered and displayed in both cases. Python scripts were used to start C programs acting as client and server. Algorithms were timed running in unikernels as well as in containers, along with comparative measurements of memory across multiple simultaneous instantiations. Findings – Intermittent spikes in response times made the data hard to parse correctly. Containers had a lower average response time when running lighter algorithms, while the average response time of unikernels dives below that of containers when heavier programs are simulated. A few minor bugs were discovered in Unikraft unikernels. Implications – Unikernels have characteristics that make them more suitable for certain tasks than their counterpart, and the same is true for containers. Unikraft unikernels are unstable, which makes it seem as though containers are faster during lighter simulations. Unikernels are only faster and more secure if the tools used to build them do so in a manner that makes them stable. Limitations – The lack of standards and of a support community, together with the fact that unikernels are a small and niche field, means that unikernels have a relatively high learning curve. Keywords – Unikraft, Unikernels, Docker, Container
APA, Harvard, Vancouver, ISO, and other styles
8

Durelli, Vinicius Humberto Serapilha. "Toward harnessing a Java high-level language virtual machine for supporting software testing." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06012014-150025/.

Full text of the source
Abstract:
High-level language virtual machines (HLL VMs) have been playing a key role as a mechanism for implementing programming languages. Languages that run on these execution environments have many advantages over languages that are compiled to native code. These advantages have led HLL VMs to gain broad acceptance in both academia and industry. However, much of the research in this area has been devoted to boosting the performance of these execution environments, and few efforts have attempted to introduce features that automate or facilitate software engineering activities, including software testing. This research argues that HLL VMs provide a reasonable basis for building an integrated software testing environment. To this end, two software testing features that build on the characteristics of a Java virtual machine (JVM) were devised. The purpose of the first feature is to automate weak mutation. Augmented with mutation support, the chosen JVM achieved speedups of as much as 95% in comparison to a strong mutation tool. To support the testing of concurrent programs, the second feature is concerned with enabling the deterministic re-execution of Java programs and the exploration of new scheduling sequences.
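The weak-mutation criterion the first feature automates can be illustrated in a few lines: a mutant counts as killed as soon as the state just past the mutated expression diverges from the original run, with no need to execute the program to completion as strong mutation requires. The sketch below is conceptual and unrelated to the thesis's JVM implementation.

```python
# Weak mutation: a mutant is "killed" when the intermediate value right
# after the mutated expression differs from the original expression's value.
import operator

ORIGINAL = operator.add          # expression under test: a + b
MUTANTS = {"add->sub": operator.sub, "add->mul": operator.mul}

def weakly_killed(original, mutant, a, b):
    return original(a, b) != mutant(a, b)   # compare intermediate states

for name, mutant in MUTANTS.items():
    for a, b in [(2, 2), (3, 4), (0, 0)]:
        if weakly_killed(ORIGINAL, mutant, a, b):
            print(f"test ({a}, {b}) weakly kills {name}")
            break
    else:
        print(f"{name} survives this test set")
```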
APA, Harvard, Vancouver, ISO, and other styles
9

Mohammad, Taha, and Chandra Sekhar Eati. "A Performance Study of VM Live Migration over the WAN." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1529.

Full text of the source
Abstract:
Virtualization is the key technology that has given Cloud computing platforms a new way for small and large enterprises to host their applications by renting the available resources. Live VM migration allows a virtual machine to be transferred from one host to another while the virtual machine is active and running. The main challenge in live migration over the WAN is maintaining network connectivity during and after the migration. We carried out live VM migration over the WAN, migrating VM memory states of different sizes, and present our solutions based on the Open vSwitch/VXLAN and Cisco GRE approaches. VXLAN provides the mobility support needed to maintain network connectivity between the client and the virtual machine. We set up an experimental testbed to calculate the relevant performance metrics and analyzed the performance of live migration in VXLAN and GRE networks. Our experimental results show that network connectivity was maintained throughout the migration process with negligible signaling overhead and minimal downtime. The downtime variation experienced with changes in the applied network delay was relatively higher than the variation experienced when migrating different VM memory states. The total migration time showed a strong relationship with the size of the migrating VM memory state.
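A downtime measurement of this kind can be approximated by probing the migrating VM at a fixed interval and recording the longest gap between answered probes. The sketch below is a generic illustration, not the study's tooling; the address is hypothetical and a UDP echo service inside the VM is assumed.

```python
# Probe a migrating VM with UDP datagrams at a fixed interval; the longest
# gap between answered probes approximates the migration downtime.
import socket
import time

def measure_downtime(host, port=9999, interval=0.01, duration=30.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(interval)
    last_reply, worst_gap = None, 0.0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        try:
            sock.sendto(b"probe", (host, port))
            sock.recvfrom(64)                 # echoed back while the VM is up
            now = time.monotonic()
            if last_reply is not None:
                worst_gap = max(worst_gap, now - last_reply)
            last_reply = now
        except socket.timeout:
            pass                              # probe lost mid-migration
        time.sleep(interval)
    return worst_gap

# print(f"observed downtime ~ {measure_downtime('10.0.0.42'):.3f} s")
```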
APA, Harvard, Vancouver, ISO, and other styles
10

Al-ou'n, Ashraf M. S. "VM Allocation in Cloud Datacenters Based on the Multi-Agent System. An Investigation into the Design and Response Time Analysis of a Multi-Agent-based Virtual Machine (VM) Allocation/Placement Policy in Cloud Datacenters." Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/16067.

Full text of the source
Abstract:
Recent years have witnessed a surge in demand for infrastructure and services to cover the high demands of processing large volumes of data and applications, resulting in mega cloud datacenters. A datacenter is highly complex, and it is increasingly difficult to identify and allocate, efficiently and quickly, an appropriate host for a requested virtual machine (VM). Establishing good awareness of all the datacenter's resources enables allocation ("placement") policies to make the best decisions, reducing the time needed to allocate and create the VM(s) at the appropriate host(s). However, current placement ("allocation") algorithms and policies do not focus efficiently on awareness of datacenter resources; moreover, they are based on conventional static techniques, which adversely impact the allocation progress of the policies. This thesis proposes a new agent-based allocation/placement policy that employs features of the multi-agent system to obtain good awareness of cloud datacenter resources and to provide efficient allocation decisions for the requested VMs. Specifically, (a) the multi-agent concept is used as part of the placement policy, (b) a Contract Net Protocol is devised to establish good awareness, and (c) a verification process is developed to check full-dimensional VM specifications during allocation. The results show a reduction in the response time of VM allocation and improved usage of occupied resources. The proposed agent-based policy was implemented using the CloudSim toolkit and was then compared, based on a series of typical numerical experiments, with the toolkit's default policy. The comparative study was carried out in terms of the duration of VM allocation and other aspects such as the number of available VM types and the amount of occupied resources. Moreover, a two-stage comparative study was introduced in this thesis. First, the proposed policy was compared with four state-of-the-art algorithms, namely the Random algorithm and three one-dimensional bin-packing algorithms. Second, the three bin-packing algorithms were enhanced with a two-dimensional verification structure and compared against the proposed new algorithm of the agent-based policy. Following this rigorous comparative study, it was shown through the numerical experiments of all stages that the proposed agent-based policy had superior performance in terms of allocation times. Finally, avenues for future work arising from this thesis are included.
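A single Contract Net round of the kind such a policy builds on can be sketched as call-for-proposals, bidding, and award. The host agents, bid rule, and two-dimensional (CPU/RAM) check below are illustrative stand-ins, not the thesis's implementation.

```python
# Toy Contract Net round: the allocator calls for proposals, host agents
# bid with their leftover capacity, and the best feasible bid wins.
from dataclasses import dataclass

@dataclass
class HostAgent:
    name: str
    free_cpu: int
    free_ram: int

    def propose(self, cpu, ram):
        """Return a bid (lower is better) or None to refuse the call."""
        if cpu <= self.free_cpu and ram <= self.free_ram:
            return (self.free_cpu - cpu) + (self.free_ram - ram)  # slack left
        return None

def allocate(hosts, cpu, ram):
    bids = [(h.propose(cpu, ram), h) for h in hosts]
    bids = [(b, h) for b, h in bids if b is not None]
    if not bids:
        return None                       # no host accepted the call
    _, winner = min(bids, key=lambda bh: bh[0])
    winner.free_cpu -= cpu                # award: reserve the resources
    winner.free_ram -= ram
    return winner.name

hosts = [HostAgent("pm1", 8, 16), HostAgent("pm2", 4, 8)]
print(allocate(hosts, cpu=4, ram=8))      # -> "pm2" (tightest fit wins)
```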
APA, Harvard, Vancouver, ISO, and other styles
11

Kassahun, Solomon, and Atinkut Demissie. "A PMIPv6 Approach to Maintain Network Connectivity during VM Live Migration over the Internet." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4814.

Full text of the source
Abstract:
Live migration is a mechanism that allows a VM to be moved from one host to another while the guest operating system is running. Current live migration implementations are able to maintain network connectivity in a LAN. However, the same techniques cannot be applied to live migration over the Internet. We present a solution based on PMIPv6, a lightweight mobility protocol standardized by the IETF. PMIPv6 handles node mobility without requiring any support from the moving nodes. In addition, PMIPv6 works with IPv4, IPv6, and dual-stack nodes. We set up a testbed to measure the performance of live migration in a PMIPv6 network. Our results show that network connectivity is successfully maintained with little signaling overhead and short VM downtime. As far as we know, this is the first time PMIPv6 has been used to enable live migration beyond the scope of a LAN.
APA, Harvard, Vancouver, ISO, and other styles
12

Cherukuri, Prudhvi Nath Naidu, and Sree Kavya Ganja. "Comparison of GCP and AWS using usability heuristic and cognitive walkthrough while creating and launching Virtual Machine instances in Virtual Private Cloud." Thesis, Blekinge Tekniska Högskola, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21896.

Full text of the source
Abstract:
Cloud computing has become increasingly important over the years, as the need for computational resources, data storage, and networking capabilities in the field of information technology has grown. Several large corporations offer these services to small companies or to end users, such as GCP, AWS, Microsoft Azure, IBM Cloud, and many more. The main aim of this thesis is to compare the GCP and AWS consoles in terms of the user interface while performing tasks related to the compute engine. A cognitive walkthrough was performed on tasks such as the creation of a VPC, the creation of VM instances, and launching them; from the results, both interfaces are compared using usability heuristics. Background: As the usage of cloud computing has increased over the years, the companies offering these services have grown with it. Though many cloud services are available on the market, users will always choose the services that are more flexible and efficient to use. For this reason, our research compares cloud services in terms of user interaction and user experience. Digging deeper into user interaction and experience, evaluation techniques and principles such as the cognitive walkthrough and usability heuristics are suitable for our research. Here, the comparison is made between the GCP and AWS user interfaces while performing some tasks related to the compute engine. Objectives: The main objectives of this thesis are to create a VPC and VM instances, and to launch VM instances, in two different cloud services, GCP and AWS, and to find the best user interface among these two cloud services from the perspective of the user. Method: The process of finding the best user interface among the GCP and AWS cloud services is based on a cognitive walkthrough and a comparison against usability heuristics. The cognitive walkthrough is performed on chosen tasks in both services, which are then compared using usability heuristics to obtain the results of our research. Results: The results obtained from the cognitive walkthrough and the comparison with usability heuristics are shown in graphical formats such as bar graphs and pie charts, and the comparison results are shown in tabular form. The results cannot be universal, as they are observational results from a cognitive walkthrough and a usability heuristic evaluation. Conclusion: After performing the above-mentioned methods, it is observed that the user interface of GCP is more flexible and efficient in terms of user interaction and experience. Though the user experience may vary based on users' experience level with cloud services, in our research the novice and moderate users chose GCP as the better interactive system over AWS. Keywords: Cloud computing, VM instance, Cognitive walkthrough, Usability heuristics, User interface.
APA, Harvard, Vancouver, ISO, and other styles
13

Vemulapalli, Revanth, and Ravi Kumar Mada. "Performance of Disk I/O operations during the Live Migration of a Virtual Machine over WAN." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2443.

Full text of the source
Abstract:
Virtualization is a technique that allows several virtual machines (VMs) to run on a single physical machine (PM) by adding a virtualization layer above the physical host's hardware. Many virtualization products allow a VM to be migrated from one PM to another without interrupting the services running on the VM. This is called live migration and offers many potential advantages, such as server consolidation, reduced energy consumption, disaster recovery, reliability, and efficient workflows such as "Follow-the-Sun". At present, the advantages of VM live migration are limited to Local Area Networks (LANs), as migrations over Wide Area Networks (WANs) offer lower performance due to IP address changes in the migrating VMs and large network latency. For scenarios that require migrations, shared storage solutions like iSCSI (block storage) and NFS (file storage) are used to store the VM's disk, to avoid the high latencies associated with disk state migration when private storage is used. When using iSCSI or NFS, all the disk I/O operations generated by the VM are encapsulated and carried to the shared storage over the IP network. The underlying WAN latency will affect the performance of applications requesting disk I/O from the VM. In this thesis our objective was to determine the performance of shared and private storage when VMs are live migrated in networks with high latency, with WANs as the typical case. To achieve this objective, we used Iometer, a disk benchmarking tool, to investigate the I/O performance of iSCSI and NFS when used as shared storage for live migrating Xen VMs over emulated WANs. In addition, we configured the Distributed Replicated Block Device (DRBD) system to provide private storage for our VMs through incremental disk replication. We then studied the I/O performance of the private storage solution in the context of live disk migration and compared it to the performance of shared storage based on iSCSI and NFS. The results from our testbed indicate that the DRBD-based solution should be preferred over the considered shared storage solutions, because DRBD consumed less network bandwidth and had a lower maximum I/O response time.
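Latency measurements of this kind can be reproduced in spirit with synchronous, flushed writes against a file on the storage under test; with iSCSI-, NFS-, or DRBD-backed paths, each timed write then includes the network round trip. The sketch below is illustrative only (the study itself used Iometer), and the path is a placeholder.

```python
# Time synchronous 4 KiB writes, flushing each one through to the backing
# store, so the measurement covers the full storage path.
import os
import time

def write_latency(path, block=4096, count=1000):
    buf = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    samples = []
    try:
        for _ in range(count):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)               # push the block down to the device
            samples.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
    samples.sort()
    return samples[len(samples) // 2], samples[-1]    # median, worst case

median, worst = write_latency("/tmp/ioprobe.bin")
print(f"median {median * 1e3:.2f} ms, max {worst * 1e3:.2f} ms")
```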
APA, Harvard, Vancouver, ISO, and other styles
14

Kugel, Rudolf. "Ein Beitrag zur Problematik der Integration virtueller Maschinen." PhD thesis, [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=980016371.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Ouarnoughi, Hamza. "Placement autonomique de machines virtuelles sur un système de stockage hybride dans un cloud IaaS." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0055/document.

Full text of the source
Abstract:
IaaS cloud providers offer virtualized resources (CPU, storage, and network) as virtual machines (VMs). The growth and highly competitive nature of this economy has compelled them to optimize the use of their data centers in order to offer attractive services at a lower cost. In addition to investments related to infrastructure purchase and cost of use, energy consumption is a major point of expenditure (2% of world consumption) and is constantly increasing, so controlling it represents a vital opportunity. From a technical point of view, the control of energy consumption is mainly based on consolidation approaches. However, these approaches, which take into account only the CPU use of physical machines (PMs) for VM placement, have many drawbacks. Indeed, recent studies have shown that storage systems and disk I/O represent a significant part of data center energy consumption (between 14% and 40%). In this thesis we propose a new autonomic model for VM placement optimization based on MAPE-K (Monitor, Analyze, Plan, Execute, Knowledge), whereby, in addition to CPU, VM I/O and the related storage systems are considered. Our first contribution proposes a multilevel VM I/O tracer which overcomes the limitations of existing I/O monitoring tools. In the Analyze step, the collected I/O traces feed a cost model which takes into account the VM I/O profile, the storage system characteristics, and the constraints of the cloud environment. We also analyze the complementarity between the two main storage classes, resulting in a hybrid storage model exploiting the advantages of each. Indeed, hard disk drives (HDDs) are energy-intensive and inefficient devices compared to compute units, but their low cost per gigabyte and long lifetime are points in their favor. Unlike HDDs, flash-based solid-state disks (SSDs) are more efficient and consume less power, but their high cost per gigabyte and short lifetime (compared to HDDs) are major constraints. The Plan phase first resulted in an extension of CloudSim that takes into account VM I/O and the hybrid nature of the storage system, and implements the cost model proposed in the Analyze step. Second, we proposed several heuristics based on our cost model, which we integrated and evaluated in CloudSim. Finally, we showed that our approach improves the VM placement cost obtained by existing approaches by a factor of three.
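The hybrid-storage trade-off feeding the Analyze step can be caricatured as a two-term cost per storage class; the weights below are invented for illustration and are not the thesis's cost model.

```python
# Score the two storage classes for one VM's I/O profile: the HDD term
# charges energy and slow service per I/O, the SSD term charges capacity
# and flash wear on writes. All weights are made up for illustration.
def placement_cost(iops, read_ratio, gb):
    hdd = gb * 0.03 + iops * 0.50                 # cheap capacity, costly I/O
    ssd = (gb * 0.20 + iops * 0.05
           + iops * (1.0 - read_ratio) * 0.10)    # writes consume lifetime
    return {"hdd": hdd, "ssd": ssd}

for profile in [dict(iops=20, read_ratio=0.5, gb=500),    # cold archive
                dict(iops=900, read_ratio=0.9, gb=40)]:   # hot, read-mostly
    costs = placement_cost(**profile)
    print(profile, "->", min(costs, key=costs.get))
```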
APA, Harvard, Vancouver, ISO, and other styles
16

Johansson, Filip, and Christoffer Lindström. "Inter-Process Communication in a Virtualized Environment." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148317.

Full text of the source
Abstract:
Selecting the correct inter-process communication method is an important aspect of ensuring effective inter-VM and inter-container process communication. We conduct a study of IPC methods which might be useful and fit the Qemu/KVM virtual machine and Docker container environments, and select those that fit our criteria. After implementing our chosen methods, we benchmark them in a test suite to find the ones with the highest performance in terms of speed. Our results show that, at the most common message sizes, Unix Domain Sockets work best for containers, and Transparent Inter Process Communication has the best performance between virtual machines out of the chosen methods.
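A round-trip benchmark in the spirit of the study's test suite, here for Unix domain sockets (the winner between containers), looks roughly as follows. POSIX fork is assumed, and the socket path and 64-byte message size are arbitrary choices, not the study's parameters.

```python
# Round-trip latency over a Unix domain socket between two processes.
import os
import socket
import time

PATH = "/tmp/ipc_bench.sock"

def bench(rounds=10_000):
    if os.path.exists(PATH):
        os.unlink(PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(PATH)
    server.listen(1)
    if os.fork() == 0:                       # child: trivial echo server
        conn, _ = server.accept()
        while data := conn.recv(64):
            conn.sendall(data)
        os._exit(0)
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(PATH)
    t0 = time.perf_counter()
    for _ in range(rounds):
        client.sendall(b"x" * 64)            # small, common message size
        client.recv(64)
    rtt = (time.perf_counter() - t0) / rounds
    client.close()
    os.wait()                                # reap the echo child
    print(f"mean round trip: {rtt * 1e6:.1f} us")

bench()
```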
APA, Harvard, Vancouver, ISO, and other styles
17

Chouichi, Aabir. "Real-time detection and control of machine/chamber mismatching in the semi- conductor industry." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEM001.

Full text of the source
Abstract:
In the manufacturing industries, the machines/chambers placed in parallel on the same production operation are expected to have similar capabilities and, most importantly, to yield identical product quality. However, this is usually not the case in real practice, due to systematic variations accumulated over time. Maintaining stable performance of parallel machines/chambers in the semiconductor industry is a critical challenge, given that, in a large-scale production environment, machines/chambers process a large number of products simultaneously to maximize throughput and optimize machine utilization. Unsurprisingly, after processing very different settings, called recipes, the conditions of parallel machines/chambers will no longer be the same. This thesis proposes a methodology to detect and correct the performance differences in real time by using all the available data, namely measurements of physical parameters, data from sensors installed on machines, data from the control loops, and maintenance data. The core idea is to integrate the different sources of data, which are usually used separately, to identify the root causes of any significant differences among the machines/chambers that process identical recipes. The proposed approach starts by detecting existing gaps between parallel machines/chambers by referring to the measurements of physical parameters, since these reflect the quality of manufactured products. The sensor data are then analyzed to highlight the indicators that cause these discrepancies. These indicators are adjusted through an effective control mechanism composed of two parts: 1) virtual metrology and 2) process regulation. First, the impact of recipe changes on product quality is quantified by modeling the link between the inputs and outputs of the mismatched machines/chambers. The constructed models are then used to implement the revised control loops, to match the controllable input factors as closely as possible and compensate for the output errors.
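The regulation half of such a mechanism can be sketched as a run-to-run loop that folds part of the measured output error back into the recipe offset of the drifting chamber. The gain and values below are invented, and the thesis couples such a loop with virtual metrology models rather than the direct measurement stub used here.

```python
# EWMA-style run-to-run control: after each run, part of the measured
# error is folded into a recipe offset, pulling the drifting chamber's
# output back toward the target shared with its twin.
def run_to_run(target, chamber, runs=8, gain=0.4):
    offset = 0.0
    for run in range(runs):
        measured = chamber(offset)          # stand-in for virtual metrology
        error = measured - target
        offset -= gain * error              # compensate part of the mismatch
        print(f"run {run}: measured {measured:6.2f}, new offset {offset:+.2f}")

# A mismatched chamber that runs 3 units hot relative to its twin.
run_to_run(target=50.0, chamber=lambda off: 53.0 + off)
```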
APA, Harvard, Vancouver, ISO, and other styles
18

Subramanian, Suriya. "Dynamic software updates : a VM-centric approach." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-05-1436.

Full text of the source
Abstract:
Because software systems are imperfect, developers are forced to fix bugs and add new features. The common way of applying changes to a running system is to stop the application or machine and restart with the new version. Stopping and restarting causes a disruption in service that is at best inconvenient and at worst causes revenue loss and compromises safety. Dynamic software updating (DSU) addresses these problems by updating programs while they execute. Prior DSU systems for managed languages like Java and C# lack necessary functionality: they are inefficient and do not support updates that occur commonly in practice. This dissertation presents the design and implementation of Jvolve, a DSU system for Java. Jvolve's combination of flexibility, safety, and efficiency is a significant advance over prior approaches. Our key contribution is the extension and integration of existing virtual machine services with safe, flexible, and efficient dynamic updating functionality. Our approach is flexible enough to support a large class of updates, guarantees type safety, and imposes no space or time overheads on steady-state execution. Jvolve supports many common updates. Users can add, delete, and change existing classes. Changes may add or remove fields and methods, replace existing ones, and change type signatures, and they may occur at any level of the class hierarchy. To initialize new fields and update existing ones, Jvolve applies class and object transformer functions, the former for static fields and the latter for object instance fields. These features cover many updates seen in practice. Jvolve supports 20 of 22 updates to three open-source programs (the Jetty web server, JavaEmailServer, and CrossFTP server) based on actual releases occurring over a one- to two-year period. This support is substantially more flexible than in prior systems. Jvolve is safe. It relies on bytecode verification to statically type-check updated classes. To avoid dynamic type errors due to the timing of an update, Jvolve stops the executing threads at a DSU safe point and then applies the update. DSU safe points are a subset of VM safe points, where it is safe to perform garbage collection and thread scheduling; they further restrict the methods that may be on each thread's stack, depending on the update. Restricted methods include updated methods, for code consistency and safety, and user-specified methods, for semantic safety. Jvolve installs return barriers and uses on-stack replacement to speed up reaching a safe point when necessary. While Jvolve does not guarantee that it will reach a DSU safe point, in our multithreaded benchmarks it almost always does. Jvolve includes a tool that automatically generates default object transformers, which initialize new and changed fields to default values and retain the values of unchanged fields in heap objects. If needed, programmers may customize the default transformers. Jvolve is the first dynamic updating system to extend the garbage collector to identify and transform all object instances of updated types. This dissertation introduces the concept of object-specific state transformers to repair application heap state for certain classes of bugs that corrupt part of the heap, and a novel methodology that employs dynamic analysis to automatically generate these transformers. Jvolve's eager object transformation design and implementation supports the widest class of updates to date. Finally, Jvolve is efficient. It imposes no overhead during steady-state execution. During an update, it imposes overheads on class loading and garbage collection; after an update, the adaptive compilation system incrementally optimizes the updated code in its usual fashion. Jvolve is the first full-featured dynamic updating system that imposes no steady-state overhead. In summary, Jvolve is the most featureful, most flexible, safest, and best-performing dynamic updating system for Java, and it marks a significant step towards practical support for dynamic updates in managed language virtual machines.
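The default object transformer idea, retaining unchanged fields while giving added or changed fields default values, is easy to sketch outside the JVM. The classes below are hypothetical, and Python stands in for the VM machinery that Jvolve hooks into the garbage collector.

```python
# Default object transformer in the spirit of Jvolve: retain the values of
# unchanged fields and initialize fields added (or changed) by the update.
def default_transform(old_obj, new_cls, new_field_defaults):
    new_obj = new_cls.__new__(new_cls)             # allocate, skip __init__
    new_obj.__dict__.update(vars(old_obj))         # retain unchanged fields
    new_obj.__dict__.update(new_field_defaults)    # defaults for new fields
    return new_obj

class ConnectionV1:                 # old class version
    def __init__(self, host):
        self.host = host

class ConnectionV2:                 # updated version adds a timeout field
    def __init__(self, host, timeout):
        self.host = host
        self.timeout = timeout

old = ConnectionV1("example.org")
new = default_transform(old, ConnectionV2, {"timeout": 30})
print(type(new).__name__, new.host, new.timeout)   # ConnectionV2 example.org 30
```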
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Garg, Surya Kant. "Migrating VM Workloads to Containers: Issues and Challenges." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/5100.

Full text of the source
Abstract:
Modern-day enterprises are adopting virtualization to leverage the benefits of improved server utilization through workload consolidation. Server consolidation benefits enterprise applications because many of them do not exercise all system resources to their full capacity all the time. But co-hosting multiple applications leads to several challenges, including regulating resource sharing, enforcing isolation, and minimizing interference. To address these challenges, several solutions have emerged, namely hypervisor-based system virtual machines (VMs) and process-virtualization mechanisms like containers. Both of these system virtualization techniques have their advantages and disadvantages. Hypervisors abstract the ISA (Instruction Set Architecture) layer and allow multiple guest operating systems to run simultaneously in isolated environments called virtual machines. On the other hand, process virtual machines such as containers abstract the operating system layer and use operating system kernel features such as namespaces, cgroups, and AppArmor to control resource sharing and provide isolation. Containers offer better workload consolidation than VMs, as they have a lower memory footprint and faster provisioning times. This enables data centers to handle more workload with existing hardware for applications that can be consolidated using containers. This work explores containers as constructs for workload consolidation, the issues and challenges in this space, and the concerns that arise while moving workloads from VMs to containers. Containerisation of workloads throws up several challenges that need to be addressed when moving from VMs. Containers share the host OS kernel, and hence only workloads with the same OS dependency can be co-hosted together. Further, the sharing of OS resources such as kernel-space data structures, process IDs, file descriptors, and the network stack by co-hosted containers often results in interference and a performance hit for applications. In the first part, we use OS-level micro-benchmarks to identify the causes and symptoms of the bottlenecks and interference visible in applications co-hosted inside containers. We also identify the key metrics that can be used to measure such concerns, with a view to monitoring changes in workload requirements and dynamically placing containers to achieve the desired isolation and performance. This study is carried out using real-life retail e-commerce workloads of M/s Flipkart hosted in their private data center. A key advantage of such private-cloud workloads is that the majority of these applications are naturally developed on the same OS platform, which gives a strong motivation to use containers to consolidate them. In the second part of the work, we look at constructs for managing the elastic scaling of containerised workloads. We observe that the majority (more than 70%) of Flipkart's workloads are stateless, which allows seamless cloning of containers across data centers. We leverage this capability through in-kernel load balancing and horizontal scaling to adjust to dynamic workload variation. E-commerce workloads in the Flipkart data center exhibit seasonality, showing similar variations every day; these variations can be predicted using a Seasonal ARIMA model with minimal error. Containers are lightweight in resource footprint compared to VMs and can be subjected to frequent vertical scaling without overhead. Vertical scaling offers the benefit of increased performance without loss of service or migration overhead, and thus provides better elasticity. However, vertical scaling is feasible only when idle resources are available on the platform hosting the container. With adaptive container-placement strategies, we can identify containers that can be dynamically migrated to create the idle resources needed to enable vertical scaling of the desired containers. Placement thus complements vertical scaling, filling gaps to utilize idle resources or vacating the resources required to enable vertical scaling. By exploiting the seasonality of workloads, resources can be provisioned proactively; we observe that predictive scaling reduces SLA violations compared to reactive scaling by allocating the required resources in advance. For arbitrarily varying workloads, where future requirements cannot be predicted, proactive scaling cannot be used. We show that dynamically adjusting to workload variation, by automatically provisioning and de-provisioning the resources allocated to containers, reduces the average resource requirement compared to fixed resource allocation, enabling us to consolidate more applications on the existing capacity.
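As an illustration of the proactive-scaling idea, the sketch below fits a Seasonal ARIMA model (via statsmodels, a common choice) to a synthetic daily-periodic load series and provisions for the next day's forecast with a safety margin. The data, model orders, and headroom factor are assumptions made for the example, not values from the thesis.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                        # two weeks of hourly samples
load = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Seasonal ARIMA with a 24-hour period, mirroring the daily seasonality
model = SARIMAX(load, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fitted = model.fit(disp=False)

forecast = fitted.forecast(steps=24)              # next day's expected load
headroom = 1.2                                    # safety margin over forecast
cpu_shares = np.ceil(forecast * headroom)         # per-hour resources to provision
print(cpu_shares[:6])

Forecasting a day ahead in this fashion is what allows resources to be allocated before the seasonal peak arrives, which is the source of the reduced SLA violations reported above.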
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Md, Mahfuzur Rahman. "Improved Virtual Machine (VM) based Resource Provisioning in Cloud Computing." 2016. http://hdl.handle.net/1993/31887.

Full text of the source
Abstract:
To achieve “provisioning elasticity”, the cloud needs to manage its available resources on demand. A priori, static VM provisioning introduces no runtime overhead but fails to handle unanticipated changes in resource demands. Dynamic provisioning addresses this problem but introduces runtime overhead. To avoid sub-optimal provisioning, my PhD thesis adopts a hybrid approach that combines static and dynamic provisioning. The idea is to adapt an initial static placement of VMs in response to evolving load characteristics. My work is focused on broadening the applicability of clouds by looking at how the infrastructure can be used more effectively to support historically atypical applications (e.g., those that are interactive in nature, with tighter QoS constraints). To accomplish this, I have developed a family of related algorithms that collectively improve resource sharing on physical machines, permitting load variation to be better addressed and lessening the probability of VM interference due to resource contention. The family includes three core dynamic provisioning algorithms. The first algorithm provides for the short-term, controlled sharing of resources between co-hosted VMs. The second identifies pairs (and, by extrapolation, larger groups) of VMs that are predicted to be "compatible" in terms of the resources they need; this allows the cloud provider to co-locate them, making the first algorithm more effective. The final, third algorithm deals with under-utilized physical machines by re-packing the VMs on those machines while also considering their compatibility. This final algorithm both addresses the possibility of the second algorithm creating under-utilized machines as a result of pairing and migration, and handles under-utilization arising from “holes” left by the termination of short-duration VMs (another form of atypical VM application). I have also created a surprisingly simple static provisioning algorithm that considers compatibility to minimize VM interference, which can be used before my dynamic algorithms run. My evaluation is primarily simulation-based, though I have also implemented the core algorithms on a small test-bed system to ensure correctness. The results obtained from my simulation experiments suggest that hybrid static and dynamic provisioning approaches are both feasible and should be effective in supporting a broad range of applications in cloud environments.
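The pairing idea behind the second algorithm can be illustrated with a small Python sketch: two VMs are treated as compatible when their demand series rarely peak together, so their combined load stays within the physical machine's capacity. The threshold rule shown is an illustration, not the thesis's exact criterion.

import itertools
import numpy as np

def compatible(demand_a, demand_b, capacity):
    # Compatible if the combined demand never exceeds the machine's capacity
    combined = np.asarray(demand_a) + np.asarray(demand_b)
    return combined.max() <= capacity

demands = {
    "vm1": [30, 70, 40, 20],   # % CPU over four intervals
    "vm2": [60, 20, 50, 70],   # peaks when vm1 is quiet
    "vm3": [50, 60, 70, 80],   # peaks together with the others
}
pairs = [(a, b) for a, b in itertools.combinations(demands, 2)
         if compatible(demands[a], demands[b], capacity=100)]
print(pairs)                   # [('vm1', 'vm2')]: a co-location candidate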
February 2017
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Quintela, João Afonso Amaral. "Virtual machine instantiation performance assessment using virtualization infrastructure management." Master's thesis, 2019. http://hdl.handle.net/10773/29367.

Full text of the source
Abstract:
Two of the most talked-about technologies in the networking industry today are Software Defined Networks (SDN) and Network Function Virtualization (NFV), whose functioning is supported by virtualization infrastructure management technologies. One of the technologies studied in this dissertation was OpenStack, which is responsible for the creation and management of public and private clouds. In this context, this dissertation presents an extensive analysis of the impact of the characteristics and configurations of different types of virtual machines on the associated instantiation times. The results show that, for the different tested images, the instantiation times vary with the size and complexity of those images.
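For readers wishing to reproduce this kind of measurement, a sketch using the openstacksdk Python library is shown below: it times the interval from the create request until the server reaches the ACTIVE state. The cloud name and the image, flavor, and network IDs are placeholders.

import time
import openstack

conn = openstack.connect(cloud="mycloud")       # credentials from clouds.yaml

def instantiation_time(image_id, flavor_id, network_id):
    start = time.monotonic()
    server = conn.compute.create_server(
        name="bench-vm", image_id=image_id,
        flavor_id=flavor_id, networks=[{"uuid": network_id}])
    conn.compute.wait_for_server(server)        # blocks until ACTIVE
    elapsed = time.monotonic() - start
    conn.compute.delete_server(server)          # clean up between runs
    return elapsed

Repeating this per image and flavor, over several runs, yields the kind of comparison reported above, where larger and more complex images take longer to instantiate.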
Master's in Electronics and Telecommunications Engineering
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Lin, Yi. "An efficient implementation of a micro virtual machine." PhD thesis, 2018. http://hdl.handle.net/1885/158122.

Full text of the source
Abstract:
Implementing a managed language efficiently is hard, and it is becoming more difficult as the complexity of both language-level design and machines increases. To make things worse, current approaches to language implementation make implementations prone to inefficiency as well. A high-quality monolithic language implementation demands extensive expertise and resources, but most language implementers do not have those available, so their implementations suffer from poor performance. Alternatively, implementers may build on existing frameworks. However, the back-end frameworks often offer abstractions that are mismatched to the language, which either bounces the complexity back to the implementers or results in inefficiency. Wang et al. proposed micro virtual machines as a solution to this issue. Micro VMs are explicitly minimal and efficient. Micro VMs support high-performance implementation of managed languages by providing key abstractions, i.e., code execution, garbage collection, and concurrency. The abstractions are neutral across client languages, and general and flexible enough to support different implementation strategies. These constraints impose interesting challenges on a micro VM implementation. Prior to this work, no attempt had been made to efficiently implement a micro VM. My thesis is that the key abstractions provided by a micro virtual machine can be implemented efficiently to support client languages. By exploring the efficient implementation of micro virtual machines, we present a concrete implementation, Zebu VM, which implements the Mu micro VM specification. The thesis addresses three critical designs in Zebu, each mapping to a key abstraction that micro virtual machines provide, and establishes their efficiency: 1) demonstrating the benefits of utilizing a modern language that focuses on safety to implement a high-performance garbage collector, 2) analysing the design space of the yieldpoint mechanism for thread synchronization, and 3) building a micro compiler under the specific constraints imposed by micro virtual machines, i.e., minimalism, efficiency, and flexibility. This thesis is a proof of concept and an initial proof of performance to establish micro virtual machines as an efficient substrate for managed language implementation. It encourages the approach of building language implementations with micro virtual machines, and reinforces the hope that Mu will be a suitable back-end target. The thesis discusses the efficient implementation of micro virtual machines, but illustrates broader topics useful in general virtual machine design and construction.
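To illustrate what a yieldpoint is, here is a simplified Python sketch of one point in the design space the thesis analyses, a polling yieldpoint: mutator threads check a flag at safe points (e.g., loop back-edges) and park there until the runtime releases them. A real VM would also confirm that every mutator has actually reached its yieldpoint before doing GC or stack scanning; that handshake is elided here.

import threading

yield_requested = threading.Event()
resume = threading.Event()

def yieldpoint():
    if yield_requested.is_set():      # fast path: a single flag check
        resume.wait()                 # slow path: park at the safe point

def mutator(n):
    total = 0
    for i in range(n):
        total += i
        yieldpoint()                  # check inserted at the loop back-edge
    return total

def stop_the_world():
    resume.clear()
    yield_requested.set()             # ask mutators to park at yieldpoints
    # ... GC or stack scanning would run here, after verifying that all
    # mutator threads are parked (omitted in this sketch) ...
    yield_requested.clear()
    resume.set()                      # release the parked mutators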
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Pipada, Pankaj. "Studies In Automatic Management Of Storage Systems." Thesis, 2012. https://etd.iisc.ac.in/handle/2005/2489.

Full text of the source
Abstract:
Autonomic management is important in storage systems, and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending upon the specific problem. In this thesis, we first take an algorithmic approach towards reliability enhancement, and then we use learning along with a reactive framework to facilitate storage optimization for applications. We study how the reliability of non-repairable systems can be improved through automatic reconfiguration of their XOR-coded structure. To this end, we propose to increase the fault tolerance of non-repairable systems by reorganizing the system, after a failure is detected, to a new XOR code with better fault tolerance. As errors can manifest during reorganization due to whole reads of multiple submodules, our framework takes them into account and models such errors based on access intensity (i.e., BER, the bit error rate). We present and evaluate the reliability of an example storage system with and without reorganization. Motivated by the critical need for automating various aspects of data management in virtualized data centers, we study the specific problem of automatically implementing Virtual Machine (VM) migration in a dynamic environment according to pre-set policies. This problem requires automated identification of the various workloads and their execution environments running inside virtual machines in a non-intrusive manner. To this end, we propose AuM (for Autonomous Manager), which has the capability to learn workloads by aggregating a variety of information obtained from network traces of storage protocols. We use state-of-the-art machine learning tools, namely Multiple Kernel Learning, to aggregate this information, and show that AuM is indeed very accurate in identifying workloads and their execution environments, and is also successful in following user-set policies very closely for VM migration tasks. Storage infrastructure in large-scale cloud data center environments must support applications with diverse, time-varying data access patterns while observing quality of service. To meet service-level requirements across such heterogeneous application phases, storage management needs to be phase-aware and adaptive, i.e., identify specific storage access patterns of applications as they occur and customize their handling accordingly. We build LoadIQ, an online application phase detector for networked (file and block) storage systems. In a live deployment, LoadIQ analyzes traces and emits phase labels learnt online. Such labels could be used to generate alerts or to trigger phase-specific system tuning.
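As a toy illustration of online phase labelling in the spirit of LoadIQ, the sketch below summarises each window of trace records into a feature vector and assigns it to the nearest known phase centroid, opening a new phase when nothing is close. The features, distance threshold, and update rule are all assumptions made for the example, not LoadIQ's actual design.

import numpy as np

centroids = {}               # phase label -> feature centroid
NEW_PHASE_DIST = 0.5         # distance beyond which a new phase is opened

def label_window(features, alpha=0.1):
    features = np.asarray(features, dtype=float)
    if centroids:
        label, c = min(centroids.items(),
                       key=lambda kv: np.linalg.norm(kv[1] - features))
        if np.linalg.norm(c - features) < NEW_PHASE_DIST:
            centroids[label] = (1 - alpha) * c + alpha * features
            return label     # matched an existing phase; drift the centroid
    label = f"phase-{len(centroids)}"
    centroids[label] = features
    return label             # nothing close: emit a new phase label

# Feature vectors: (read fraction, mean request size in KiB / 100)
print(label_window([0.90, 0.04]))   # sequential-read-like -> phase-0
print(label_window([0.10, 0.64]))   # write-heavy          -> phase-1
print(label_window([0.88, 0.05]))   # back to reads        -> phase-0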
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Pipada, Pankaj. "Studies In Automatic Management Of Storage Systems." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2489.

Full text of the source
Abstract:
Autonomic management is important in storage systems, and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending upon the specific problem. In this thesis, we first take an algorithmic approach towards reliability enhancement, and then we use learning along with a reactive framework to facilitate storage optimization for applications. We study how the reliability of non-repairable systems can be improved through automatic reconfiguration of their XOR-coded structure. To this end, we propose to increase the fault tolerance of non-repairable systems by reorganizing the system, after a failure is detected, to a new XOR code with better fault tolerance. As errors can manifest during reorganization due to whole reads of multiple submodules, our framework takes them into account and models such errors based on access intensity (i.e., BER, the bit error rate). We present and evaluate the reliability of an example storage system with and without reorganization. Motivated by the critical need for automating various aspects of data management in virtualized data centers, we study the specific problem of automatically implementing Virtual Machine (VM) migration in a dynamic environment according to pre-set policies. This problem requires automated identification of the various workloads and their execution environments running inside virtual machines in a non-intrusive manner. To this end, we propose AuM (for Autonomous Manager), which has the capability to learn workloads by aggregating a variety of information obtained from network traces of storage protocols. We use state-of-the-art machine learning tools, namely Multiple Kernel Learning, to aggregate this information, and show that AuM is indeed very accurate in identifying workloads and their execution environments, and is also successful in following user-set policies very closely for VM migration tasks. Storage infrastructure in large-scale cloud data center environments must support applications with diverse, time-varying data access patterns while observing quality of service. To meet service-level requirements across such heterogeneous application phases, storage management needs to be phase-aware and adaptive, i.e., identify specific storage access patterns of applications as they occur and customize their handling accordingly. We build LoadIQ, an online application phase detector for networked (file and block) storage systems. In a live deployment, LoadIQ analyzes traces and emits phase labels learnt online. Such labels could be used to generate alerts or to trigger phase-specific system tuning.
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Sabih, Rafia. "Balancing Money and Time for OLAP Queries on Cloud Databases." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/2931.

Full text of the source
Abstract:
Enterprise Database Management Systems (DBMSs) have to contend with resource-intensive and time-varying workloads, making them well-suited candidates for migration to cloud platforms: specifically, they can dynamically leverage the resource elasticity while retaining affordability through the pay-as-you-go rental interface. The current design of database engine components lays emphasis on maximizing computing efficiency, but to fully capitalize on the cloud's benefits, the outlays of these computations also need to be factored into the planning exercise. In this thesis, we investigate this contemporary problem in the context of industrial-strength deployments of relational database systems on real-world cloud platforms. Specifically, we consider how the traditional metric used to compare query execution plans, namely response time, can be augmented to incorporate monetary costs in the decision process. The challenge here is that execution time and monetary cost are adversarial metrics, with a decrease in one entailing a rise in the other. For instance, a Virtual Machine (VM) with rich physical resources (RAM, cores, etc.) decreases the query response time, but is expensive with regard to rental rates. In a nutshell, there is a tradeoff between money and time, and our goal therefore is to identify the VM that offers the best tradeoff between these two competing considerations. In our study, we profile the behavior of money versus time for a given query, and define the best tradeoff as the "knee", that is, the location on the profile with the minimum Euclidean distance from the origin. To study the performance of industrial-strength database engines on real-world cloud infrastructure, we have deployed a commercial DBMS on Google cloud services. On this platform, we have carried out extensive experimentation with the TPC-DS decision-support benchmark, an industry-wide standard for evaluating database system performance. Our experiments demonstrate that the choice of VM for hosting the database server is a crucial decision, because: (i) the variation in time and money across VMs is significant for a given query, and (ii) no one VM offers the best money-time tradeoff across all queries. To efficiently identify the VM with the best tradeoff from a large suite of available configurations, we propose a technique to characterize the money-time profile for a given query. The core of this technique is a VM pruning mechanism that exploits the fact that the VMs form a partially ordered set (poset) on their resources. It processes the minimal and maximal VMs of this poset for estimated query response time. If the response times on these extreme VMs are similar, then all the VMs sandwiched between them are pruned from further consideration. Otherwise, the already processed VMs are set aside, and the minimal and maximal VMs of the remaining unprocessed VMs are evaluated for their response times. Finally, the knee VM is identified from the processed VMs as the one with the minimum Euclidean distance from the origin in the money-time space. We theoretically prove that this technique always identifies the knee VM; further, if it is acceptable to find a "near-optimal" knee by providing a relaxation factor on the response-time distance from the optimal knee, then it is also capable of finding a satisfactory knee more efficiently under these relaxed conditions. We propose two flavors of this approach: the first prunes the VMs using complete plan information received from the database engine API, and is named Plan-based Identification of Knee (PIK). To further increase the efficiency of identifying the knee VM, we also propose a sub-plan-based pruning algorithm called Sub-Plan-based Identification of Knee (SPIK), which requires modifications to the query optimizer. We have evaluated PIK on a commercial system and found that it often requires processing only 20% of the total VMs. The efficiency of the algorithm is further increased significantly by using a 10-20% relaxation in response time. To evaluate SPIK, we prototyped it on an open-source engine, PostgreSQL 9.3, and also implemented it as a Java wrapper program with the commercial engine. Experimentally, the processing done by SPIK is found to be only 40% of that of the PIK approach. Therefore, from an overall perspective, this thesis facilitates the desired migration of enterprise databases to cloud platforms, by identifying the VM(s) that offer competitive tradeoffs between money and time for the given query.
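A hedged Python sketch of the knee search and the poset-based pruning follows; the VM names, rental rates, and response times are invented, and the normalisation and tolerance are illustrative choices rather than the thesis's exact formulation.

import math

# (name, $/hour, estimated response time in s), ordered by resources
vms = [("n1-small", 0.05, 310), ("n1-medium", 0.10, 160),
       ("n1-large", 0.20, 150), ("n1-xlarge", 0.40, 148)]

def knee(candidates):
    # Normalise both axes, then take the point nearest the origin
    max_cost = max(c for _, c, _ in candidates)
    max_time = max(t for _, _, t in candidates)
    return min(candidates,
               key=lambda v: math.hypot(v[1] / max_cost, v[2] / max_time))

def prune_chain(chain, tolerance=0.05):
    # Evaluate only the minimal and maximal VM of a resource-ordered chain;
    # if their times are similar, the sandwiched VMs are pruned unevaluated.
    lo, hi = chain[0], chain[-1]
    if abs(lo[2] - hi[2]) / hi[2] <= tolerance:
        return [lo, hi]
    return chain

survivors = [vms[0]] + prune_chain(vms[1:])   # 160 s vs 148 s differ >5%: keep all
print(knee(survivors))                        # -> ('n1-medium', 0.1, 160)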
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Sabih, Rafia. "Balancing Money and Time for OLAP Queries on Cloud Databases." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2931.

Full text of the source
Abstract:
Enterprise Database Management Systems (DBMSs) have to contend with resource-intensive and time-varying workloads, making them well-suited candidates for migration to cloud platforms: specifically, they can dynamically leverage the resource elasticity while retaining affordability through the pay-as-you-go rental interface. The current design of database engine components lays emphasis on maximizing computing efficiency, but to fully capitalize on the cloud's benefits, the outlays of these computations also need to be factored into the planning exercise. In this thesis, we investigate this contemporary problem in the context of industrial-strength deployments of relational database systems on real-world cloud platforms. Specifically, we consider how the traditional metric used to compare query execution plans, namely response time, can be augmented to incorporate monetary costs in the decision process. The challenge here is that execution time and monetary cost are adversarial metrics, with a decrease in one entailing a rise in the other. For instance, a Virtual Machine (VM) with rich physical resources (RAM, cores, etc.) decreases the query response time, but is expensive with regard to rental rates. In a nutshell, there is a tradeoff between money and time, and our goal therefore is to identify the VM that offers the best tradeoff between these two competing considerations. In our study, we profile the behavior of money versus time for a given query, and define the best tradeoff as the "knee", that is, the location on the profile with the minimum Euclidean distance from the origin. To study the performance of industrial-strength database engines on real-world cloud infrastructure, we have deployed a commercial DBMS on Google cloud services. On this platform, we have carried out extensive experimentation with the TPC-DS decision-support benchmark, an industry-wide standard for evaluating database system performance. Our experiments demonstrate that the choice of VM for hosting the database server is a crucial decision, because: (i) the variation in time and money across VMs is significant for a given query, and (ii) no one VM offers the best money-time tradeoff across all queries. To efficiently identify the VM with the best tradeoff from a large suite of available configurations, we propose a technique to characterize the money-time profile for a given query. The core of this technique is a VM pruning mechanism that exploits the fact that the VMs form a partially ordered set (poset) on their resources. It processes the minimal and maximal VMs of this poset for estimated query response time. If the response times on these extreme VMs are similar, then all the VMs sandwiched between them are pruned from further consideration. Otherwise, the already processed VMs are set aside, and the minimal and maximal VMs of the remaining unprocessed VMs are evaluated for their response times. Finally, the knee VM is identified from the processed VMs as the one with the minimum Euclidean distance from the origin in the money-time space. We theoretically prove that this technique always identifies the knee VM; further, if it is acceptable to find a "near-optimal" knee by providing a relaxation factor on the response-time distance from the optimal knee, then it is also capable of finding a satisfactory knee more efficiently under these relaxed conditions. We propose two flavors of this approach: the first prunes the VMs using complete plan information received from the database engine API, and is named Plan-based Identification of Knee (PIK). To further increase the efficiency of identifying the knee VM, we also propose a sub-plan-based pruning algorithm called Sub-Plan-based Identification of Knee (SPIK), which requires modifications to the query optimizer. We have evaluated PIK on a commercial system and found that it often requires processing only 20% of the total VMs. The efficiency of the algorithm is further increased significantly by using a 10-20% relaxation in response time. To evaluate SPIK, we prototyped it on an open-source engine, PostgreSQL 9.3, and also implemented it as a Java wrapper program with the commercial engine. Experimentally, the processing done by SPIK is found to be only 40% of that of the PIK approach. Therefore, from an overall perspective, this thesis facilitates the desired migration of enterprise databases to cloud platforms, by identifying the VM(s) that offer competitive tradeoffs between money and time for the given query.
Styles: APA, Harvard, Vancouver, ISO, etc.