
Dissertations / Theses on the topic 'Virtualized Data Center'



Consult the top 16 dissertations / theses for your research on the topic 'Virtualized Data Center.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Tayachi, Zeineb. "Sûreté de fonctionnement et provisionnement éco-énergétique dans les centres de données virtualisés IaaS." Electronic Thesis or Diss., Paris, CNAM, 2021. http://www.theses.fr/2021CNAM1292.

Full text
Abstract:
Cloud computing allows users to exploit services such as infrastructures, platforms, and applications. This saves considerable cost and time, since users neither need to buy equipment nor manage its maintenance; moreover, they pay only for the resources they use (pay-as-you-go). With increasingly large-scale applications and the need to store huge quantities of data, data centers have been widely deployed. However, studies have shown that their resources are underutilized. Cloud providers therefore resort to virtualization technologies, which have been adopted by data center architectures, and virtualized data centers have been deployed. A virtualized data center is a data center where some or all of the hardware (e.g., servers, routers, switches, and links) is virtualized using software called a hypervisor, which divides the equipment into multiple isolated and independent virtual instances (e.g., virtual machines (VMs)). However, equipment performance can degrade due to several phenomena such as software aging. In this thesis, we focus on the performance evaluation of two components of data centers, the virtualized server and the virtual switch, using modeling formalisms. The first contribution concerns performability modeling and analysis of server virtualized systems that are subject to software aging and software rejuvenation and implement an energy management policy. A modular approach based on Stochastic Reward Nets (SRNs) is proposed to investigate dependencies between several server virtualized modules. Numerical analysis shows how a workload with a bursty nature impacts performability metrics. This can support decision making related to rejuvenation scheduling algorithms and to the selection of a suitable rejuvenation mechanism. The second contribution concerns the virtual switch (VS), a key element in data center networks since it provides the communication between virtual machines. An analytical queueing model with batch arrivals and server vacations is proposed to evaluate VS performance with several network interface cards and several CPU cores. Performance metrics are obtained as a function of two proposed batch acceptance strategies and the mean batch size. The numerical results are meaningful when sizing virtual switch resources.
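To make the queueing abstraction above more concrete, the following Python sketch simulates a single-server queue with batch (bulk) Poisson arrivals, multiple server vacations, and a whole-batch acceptance policy on a finite buffer, and reports the mean number in system and the loss probability. It is an illustration only: the parameter values, the geometric batch-size distribution, and the single-CPU/single-port setting are assumptions and do not reproduce the multi-port, multi-CPU model developed in the thesis.

    import math
    import random

    def geometric(rng, mean):
        # Batch size on {1, 2, ...} with the given mean (inverse-transform draw).
        p = 1.0 / mean
        return 1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))

    def simulate(lam=4.0, mu=10.0, vac_mean=0.5, mean_batch=3.0,
                 buffer_size=50, horizon=100000.0, seed=1):
        # Single-server queue with batch Poisson arrivals, multiple vacations,
        # and a whole-batch acceptance policy on a finite buffer.
        rng = random.Random(seed)
        t, queue, area = 0.0, 0, 0.0
        accepted = lost = 0
        next_arrival = rng.expovariate(lam)
        next_departure = math.inf
        vacation_end = rng.expovariate(1.0 / vac_mean)  # server starts on vacation

        while t < horizon:
            t_next = min(next_arrival, next_departure, vacation_end)
            area += queue * (t_next - t)
            t = t_next
            if t == next_arrival:                       # a batch arrives
                size = geometric(rng, mean_batch)
                if queue + size <= buffer_size:         # accept the whole batch or none
                    queue += size
                    accepted += size
                else:
                    lost += size
                next_arrival = t + rng.expovariate(lam)
            elif t == next_departure:                   # one service completion
                queue -= 1
                if queue > 0:
                    next_departure = t + rng.expovariate(mu)
                else:                                   # idle: go on vacation
                    next_departure = math.inf
                    vacation_end = t + rng.expovariate(1.0 / vac_mean)
            else:                                       # vacation ends
                if queue > 0:                           # resume service
                    next_departure = t + rng.expovariate(mu)
                    vacation_end = math.inf
                else:                                   # take another vacation
                    vacation_end = t + rng.expovariate(1.0 / vac_mean)

        print("mean number in system:", round(area / t, 2))
        print("loss probability     :", round(lost / max(1, lost + accepted), 4))

    simulate()

Changing mean_batch or the acceptance rule (whole batch versus partial) exposes the kind of sensitivity to batch size and acceptance policy that the abstract refers to.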
2

Goiri, Íñigo. "Multifaceted resource management on virtualized providers." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/80487.

Full text
Abstract:
In the last decade, providers started using Virtual Machines (VMs) in their datacenters to pack users and their applications. This was a good way to consolidate multiple users onto fewer physical nodes while isolating them from each other. Later on, in 2006, Amazon started offering its Infrastructure as a Service, where users rent computing resources as VMs in a pay-as-you-go manner. However, virtualized providers cannot be managed like traditional ones, as they are now confronted with a set of new challenges. First of all, providers must deal efficiently with new management operations such as the dynamic creation of VMs. These operations enable capabilities that were not there before, such as moving VMs across nodes or checkpointing VMs. We propose a decentralized virtualization management infrastructure to create VMs on demand, migrate them between nodes, and checkpoint them. With this infrastructure, virtualized providers become decentralized and are able to scale. Secondly, these providers consolidate multiple VMs in a single machine to use resources more efficiently. Nevertheless, this is not straightforward and implies the use of more complex resource management techniques. In addition, it requires that both customers and providers can be confident that signed Service Level Agreements (SLAs) support their respective business activities to the best extent. Providers typically offer very simple metrics that hinder an efficient exploitation of their resources. To solve this, we propose mechanisms to dynamically distribute resources among VMs and a resource-level metric, which together allow increasing provider utilization while maintaining Quality of Service. Thirdly, the provider must allocate the VMs evaluating multiple facets such as power consumption and customers' requirements. In addition, it must exploit the new capabilities introduced by virtualization and manage their overhead. Ultimately, this VM placement must minimize the costs associated with the execution of a VM in a provider in order to maximize the provider's profit. We propose a new scheduling policy that places VMs on provider nodes according to multiple facets and is able to understand and manage the overheads of virtualization. Fourthly, resource provisioning in these providers is a challenge because of the high load variability over time. Providers can serve most of the requests while owning only a restricted amount of resources, but this under-provisioning may cause customers to be rejected during peak hours; conversely, valley hours incur under-utilization of the resources. As this new paradigm makes access to resources easier, providers can share resources to serve their loads. We leverage a federated scenario where multiple providers share their resources to overcome this load variability, and we exploit the federation capabilities to create policies that take the most convenient decision depending on the environment conditions. All these challenges mean that providers must manage their virtualized resources in a different way than they have done traditionally. This dissertation identifies and studies the challenges faced by virtualized providers that offer IaaS, and designs and evaluates a solution to manage a provider's resources in the most cost-effective way by exploiting virtualization capabilities.
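The idea of placing VMs according to multiple facets can be illustrated with a toy scoring function. The sketch below is not the dissertation's policy; the dictionary fields, the linear power model, and the weights are invented for illustration. It simply filters out nodes where the VM does not fit and then ranks the remaining nodes by a weighted combination of post-placement utilization (favoring consolidation) and estimated power draw.

    def placement_score(node, vm, w_power=0.6, w_util=0.4):
        # Returns None if the VM does not fit, otherwise a score to maximize.
        if vm["cpu"] > node["free_cpu"] or vm["mem"] > node["free_mem"]:
            return None
        used_after = node["cpu"] - node["free_cpu"] + vm["cpu"]
        util_after = used_after / node["cpu"]
        # Linear power model: idle power plus a utilization-proportional part.
        power_after = node["idle_power"] + util_after * (node["peak_power"] - node["idle_power"])
        return w_util * util_after - w_power * power_after / node["peak_power"]

    def place(vm, nodes):
        scored = [(placement_score(n, vm), n) for n in nodes]
        scored = [(s, n) for s, n in scored if s is not None]
        return max(scored, key=lambda x: x[0])[1] if scored else None

    nodes = [
        {"cpu": 16, "free_cpu": 8, "free_mem": 32, "idle_power": 120, "peak_power": 300},
        {"cpu": 16, "free_cpu": 14, "free_mem": 60, "idle_power": 120, "peak_power": 300},
    ]
    print(place({"cpu": 4, "mem": 8}, nodes))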
3

Kundu, Sajib. "Improving Resource Management in Virtualized Data Centers using Application Performance Models." FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/874.

Full text
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are multifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
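As a rough illustration of the modeling idea, the sketch below trains a Support Vector Regression model (one of the two machine-learning tools mentioned above) to predict application response time from a VM's resource allocation, and then uses the model to pick a small VM size that still meets an SLA target. The training data, feature set, SLA value, and candidate sizes are synthetic assumptions; the thesis builds its models from measurements of real virtualized applications.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic training data: (CPU cap %, memory MB, concurrent users) -> response time (ms).
    rng = np.random.default_rng(0)
    X = rng.uniform([20, 512, 10], [100, 8192, 200], size=(500, 3))
    y = 40 + 3000 / X[:, 0] + 2e5 / X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 5, 500)

    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
    model.fit(X, y)

    # VM sizing: pick the smallest candidate whose predicted response time
    # meets the SLA at an assumed load of 150 concurrent users.
    sla_ms = 120.0
    candidates = [(cpu, mem) for cpu in range(20, 101, 10) for mem in (1024, 2048, 4096)]
    feasible = [c for c in candidates if model.predict([[c[0], c[1], 150]])[0] <= sla_ms]
    print(min(feasible) if feasible else "no feasible size")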
4

Feller, Eugen. "Autonomic and Energy-Efficient Management of Large-Scale Virtualized Data Centers." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00785090.

Full text
Abstract:
Large-scale virtualized data centers require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges, this thesis provides four main contributions. The first one proposes Snooze, a novel Infrastructure-as-a-Service (IaaS) cloud management system, which is designed to scale across many thousands of servers and virtual machines (VMs) while being easy to configure, highly available, and energy efficient. For scalability, Snooze performs distributed VM management based on a hierarchical architecture. To support ease of configuration and high availability, Snooze implements self-configuring and self-healing features. Finally, for energy efficiency, Snooze integrates a holistic energy management approach via VM resource (i.e., CPU, memory, network) utilization monitoring, underload/overload detection and mitigation, VM consolidation (by implementing a modified version of the Sercon algorithm), and power management to transition idle servers into a power-saving mode. A highly modular Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. Results show that: (i) distributed VM management does not impact submission time; (ii) fault tolerance mechanisms do not impact application performance; and (iii) the system scales well with an increasing number of resources, thus making it suitable for managing large-scale data centers. We also show that the system is able to dynamically scale the data center energy consumption with its utilization, thus allowing it to conserve substantial amounts of power with only limited impact on application performance. Snooze is open-source software under the GPLv2 license. The second contribution is a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. ACO is interesting for VM placement due to its polynomial worst-case time complexity, its close-to-optimal solutions, and its ease of parallelization. Simulation results show that, while the scalability of the current algorithm implementation is limited to a smaller number of servers and VMs, the algorithm outperforms the evaluated First-Fit Decreasing greedy approach in terms of the number of required servers and computes close-to-optimal solutions. In order to enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based consolidation algorithm; (ii) a fully decentralized consolidation system based on an unstructured peer-to-peer network. The key idea is to apply consolidation only in small, randomly formed neighbourhoods of servers. We evaluated our approach by emulation on the Grid'5000 testbed using two state-of-the-art consolidation algorithms (i.e., Sercon and V-MAN) and our ACO-based consolidation algorithm. Results show our system to be scalable as well as to achieve a data center utilization close to the one obtained by executing a centralized consolidation algorithm.
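The First-Fit Decreasing baseline that the ACO placement algorithm is compared against can be sketched in a few lines. This is only the greedy baseline under simplified assumptions (identical servers, two resource dimensions, no migration costs), not Snooze's consolidation logic.

    def first_fit_decreasing(vms, capacity):
        # vms: list of (cpu, ram) demands; capacity: (cpu, ram) of one server.
        # Returns a list of servers, each being the list of VMs packed onto it.
        servers = []  # each entry: [remaining_cpu, remaining_ram, [vms]]
        # Sort by a scalar size proxy (normalized CPU + RAM share), largest first.
        order = sorted(vms, key=lambda v: v[0] / capacity[0] + v[1] / capacity[1], reverse=True)
        for cpu, ram in order:
            for srv in servers:
                if srv[0] >= cpu and srv[1] >= ram:    # first server where it fits
                    srv[0] -= cpu
                    srv[1] -= ram
                    srv[2].append((cpu, ram))
                    break
            else:                                      # no server fits: open a new one
                servers.append([capacity[0] - cpu, capacity[1] - ram, [(cpu, ram)]])
        return [srv[2] for srv in servers]

    packing = first_fit_decreasing([(2, 4), (4, 8), (1, 2), (8, 16), (2, 2)], (8, 16))
    print(len(packing), "servers used")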
5

Feller, Eugen. "Automatic and energy-efficient management of large scale virtualized data centers." Rennes 1, 2012. http://www.theses.fr/2012REN1S136.

Full text
Abstract:
Large-scale virtualized data centers now require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges, this thesis proposes Snooze, a novel highly available, easy to configure, and energy-efficient Infrastructure-as-a-Service (IaaS) cloud management system. For scalability and high availability, Snooze integrates a self-configuring and self-healing hierarchical architecture. To achieve energy efficiency, Snooze integrates a holistic energy management approach via virtual machine (VM) resource utilization monitoring, server underload/overload mitigation, VM consolidation, and power management. A robust Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. The experiments have proven Snooze to be scalable, highly available, and energy-efficient. One way to favor server idle times in IaaS clouds is to perform energy-efficient VM placement and consolidation. This thesis proposes a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. Simulation results have shown that the proposed algorithm computes close-to-optimal solutions and outperforms the evaluated First-Fit Decreasing algorithm at the cost of decreased scalability. To enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based VM consolidation algorithm; (ii) a fully decentralized VM consolidation system based on an unstructured peer-to-peer network of servers. Emulation conducted on the Grid'5000 testbed has proven our system to be scalable as well as to achieve a data center utilization close to that of a centralized system.
6

Tesfatsion, Kostentinos Selome. "A Combined Frequency Scaling and Application Elasticity Approach for Energy-Efficient Virtualized Data Centers." Thesis, Umeå universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-85211.

Full text
Abstract:
At present, large-scale data centers are typically over-provisioned in order to handle peak load requirements. The resulting low utilization of resources contributes to huge amounts of power consumption in data centers. The effects of high power consumption manifest as high operational costs for data centers and as a carbon footprint on the environment. Therefore, management solutions for large-scale data centers must be designed to effectively take power consumption into account. In this work, we combine three management techniques that can be used to control systems in an energy-efficient manner: changing the number of virtual machines, changing the number of cores, and scaling the CPU frequencies. The proposed system consists of a controller that combines feedback and feedforward information to determine a configuration that minimizes power consumption while meeting the performance target. The controller can also be configured to accomplish power minimization in a stable manner, without causing large oscillations in the resource allocations. Our experimental evaluation, based on the Sysbench benchmark combined with workload traces from production systems, shows that our approach achieves the lowest energy consumption among the three compared approaches while meeting the performance target.
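A toy version of the combined control idea is sketched below: a single feedback step that compares measured throughput against a target and adjusts the cheapest knob first (CPU frequency), then the number of cores, then the number of VMs, with a dead band to avoid oscillation. The thresholds, frequency levels, and scaling order are illustrative assumptions and not the controller designed in the thesis, which also exploits feedforward information.

    def next_config(measured_tput, target_tput, freq, cores, vms,
                    freq_levels=(1.2, 1.6, 2.0, 2.4), max_cores=8, max_vms=4):
        # One step of a toy feedback controller (illustrative only).
        # Scale up the cheapest knob first (frequency, then cores, then VMs),
        # and scale down in the reverse order when there is ample headroom.
        error = (target_tput - measured_tput) / target_tput
        i = freq_levels.index(freq)
        if error > 0.05:                      # under-performing: add capacity
            if i < len(freq_levels) - 1:
                freq = freq_levels[i + 1]
            elif cores < max_cores:
                cores += 1
            elif vms < max_vms:
                vms += 1
        elif error < -0.20:                   # large headroom: release capacity
            if vms > 1:
                vms -= 1
            elif cores > 1:
                cores -= 1
            elif i > 0:
                freq = freq_levels[i - 1]
        return freq, cores, vms

    print(next_config(800.0, 1000.0, freq=1.6, cores=2, vms=1))

The dead band between the two thresholds is what keeps such a controller from oscillating between configurations when the workload hovers near the target.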
7

Spinner, Simon [Verfasser], Samuel [Gutachter] Kounev, and Kurt [Gutachter] Geihs. "Self-Aware Resource Management in Virtualized Data Centers / Simon Spinner ; Gutachter: Samuel Kounev, Kurt Geihs." Würzburg : Universität Würzburg, 2017. http://d-nb.info/1141576945/34.

Full text
8

Božić, Nikola. "Blockchain technologies and their application to secure virtualized infrastructure control." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS596.

Full text
Abstract:
Blockchain is a technology making the shared registry concept from distributed systems a reality for a number of application domains, from cryptocurrencies to potentially any industrial system requiring decentralized, robust, trusted, and automated decision making in a multi-stakeholder situation. Nevertheless, the actual advantages of using blockchain instead of any other traditional solution (such as centralized databases) are not completely understood to date, or at least there is a strong need for a vademecum guiding designers toward the right decision about when to adopt blockchain or not, which kind of blockchain better meets use-case requirements, and how to use it. At first, we aim at providing the community with such a vademecum, while giving a general presentation of blockchain that goes beyond its usage in Bitcoin and surveying a selection of the vast literature that emerged in the last few years. We draw the key requirements and their evolution when passing from permissionless to permissioned blockchains, presenting the differences between proposed and experimented consensus mechanisms, and describing existing blockchain platforms. Furthermore, we present the B-VMOA blockchain to secure virtual machine orchestration operations for cloud computing and network functions virtualization systems, applying the proposed vademecum logic. Using tutorial examples, we describe our design choices and draw implementation plans. We further develop the vademecum logic applied to cloud orchestration and show how it can lead to precise platform specifications. We capture the key system operations and the complex interactions between them. We focus on the latest release of the Hyperledger Fabric platform as a way to develop the B-VMOA system. Besides, Hyperledger Fabric optimizes the conceived B-VMOA network's performance, security, and scalability by way of workload separation across: (i) transaction execution and validation peers, and (ii) transaction ordering nodes. We study and use a distributed execute-order-validate architecture, which differentiates our conceived B-VMOA system from legacy distributed systems that follow a traditional state-machine replication architecture. We parameterize and validate our model with data collected from a realistic testbed, presenting an empirical study to characterize system performance and identify potential performance bottlenecks. Furthermore, we present the tools we used, the network setup, and a discussion of the empirical observations from the data collection. We examine the impact of various configurable parameters to conduct an in-depth study of core components and benchmark performance for common usage patterns. Namely, B-VMOA is meant to be run within a data center. Different data center interconnection topologies scale differently due to communication protocols, and it is challenging to design the network interconnections efficiently so that the deployment and maintenance of the infrastructure is cost-effective. We analyze the structural properties of several DCN topologies and also present a comparison among these network architectures with the aim of reducing B-VMOA overhead costs. From our analysis, we recommend the hypercube topology as a solution to address the performance bottleneck in the B-VMOA control plane caused by the gossip dissemination protocol, along with an estimate of the performance improvement.
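The structural argument for a hypercube interconnect can be illustrated by computing a few basic properties of a d-dimensional binary hypercube: node degree, number of links, and diameter (which bounds the number of hops, and hence gossip rounds, needed to reach every node). The helper below is an illustrative aid only; the thesis's own topology comparison covers more DCN architectures and metrics.

    import math

    def hypercube_properties(n_nodes):
        # Structural properties of a binary hypercube with 2**d >= n_nodes nodes.
        d = max(1, math.ceil(math.log2(n_nodes)))
        nodes = 2 ** d
        return {
            "dimension": d,
            "nodes": nodes,
            "node_degree": d,          # each node has d neighbours
            "links": d * nodes // 2,   # d * 2**d / 2 bidirectional links
            "diameter": d,             # worst-case hop count between two nodes
        }

    for n in (16, 64, 256, 1024):
        print(n, hypercube_properties(n))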
9

Le Louët, Guillaume. "Maîtrise énergétique des centres de données virtualisés : D'un scénario de charge à l'optimisation du placement des calculs." PhD thesis, Ecole des Mines de Nantes, 2014. http://tel.archives-ouvertes.fr/tel-01044650.

Full text
Abstract:
This thesis is set in the context of hosting virtualized IT services and makes two contributions. It first proposes a modular management-support system that migrates virtual machines within the data center in order to keep it in a satisfactory state. In particular, this system can take into account the power consumption of the servers as well as rules specific to that consumption. Its modularity also allows its components to be adapted to large-scale problems. The thesis furthermore proposes a tool for comparing different managers of virtualized data centers. This tool injects a reproducible load-ramp scenario into a virtualized infrastructure. Injecting such a scenario makes it possible to evaluate the performance of the data center management system through dedicated probes. The language used for this injection is extensible and supports parameterized scenarios.
10

Wolke, Andreas [Verfasser], Martin [Akademischer Betreuer] Bichler, and Georg [Akademischer Betreuer] Carle. "Energy efficient capacity management in virtualized data centers / Andreas Wolke. Gutachter: Georg Carle ; Martin Bichler. Betreuer: Martin Bichler." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1070372390/34.

Full text
11

Rabbani, Md. "Resource Management in Virtualized Data Center." Thesis, 2014. http://hdl.handle.net/10012/8280.

Full text
Abstract:
As businesses are increasingly relying on the cloud to host their services, cloud providers are striving to offer guaranteed and highly-available resources. To achieve this goal, recent proposals have advocated offering both computing and networking resources in the form of Virtual Data Centers (VDCs). However, to offer VDCs, cloud providers have to overcome several technical challenges. In this thesis, we focus on two key challenges: (1) the VDC embedding problem: how to efficiently allocate resources to VDCs such that energy costs and bandwidth consumption are minimized, and (2) the availability-aware VDC embedding and backup provisioning problem, which aims at allocating resources to VDCs with hard guarantees on their availability. The first part of this thesis is primarily concerned with the first challenge. The goal of the VDC embedding problem is to allocate resources to VDCs while minimizing the bandwidth usage in the data center and maximizing the cloud provider's revenue. Existing proposals have focused only on the placement of VMs and ignored the mapping of other types of resources such as switches. Hence, we propose a new VDC embedding solution that explicitly considers the embedding of virtual switches in addition to virtual machines and communication links. Simulations show that our solution results in a high acceptance rate of VDC requests, lower bandwidth consumption in the data center network, and increased revenue for the cloud provider. In the second part of this thesis, we study the availability-aware VDC embedding and backup provisioning problem. The goal is to provision virtual backup nodes and links in order to achieve the desired availability for each VDC. Existing solutions addressing this challenge have overlooked the heterogeneity of the data center equipment in terms of failure rates and availability. To address this limitation, we propose a High-availability Virtual Infrastructure (Hi-VI) management framework that jointly allocates resources for VDCs and their backups while minimizing total energy costs. Hi-VI uses a novel technique to compute the availability of a VDC that considers both (1) the heterogeneity of the data center networking and computing equipment, and (2) the number of redundant virtual nodes and links provisioned as backups. Simulations demonstrate the effectiveness of our framework compared to heterogeneity-oblivious solutions in terms of revenue and the number of physical servers used to embed VDCs.
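One common way to reason about the availability of a VDC with redundant backups, assuming independent failures, is that a virtual node is available as long as at least one of its hosting servers (primary or backup) is up, and the VDC is available only if every virtual node is. The sketch below illustrates this calculation with hypothetical, heterogeneous host availabilities; it is not the Hi-VI availability model itself.

    def node_availability(host_availabilities):
        # A virtual node backed by one primary and k backups is up as long as
        # at least one of its hosting servers is up (independence assumed).
        unavailable = 1.0
        for a in host_availabilities:
            unavailable *= (1.0 - a)
        return 1.0 - unavailable

    def vdc_availability(placement):
        # placement: list of per-virtual-node host availability lists.
        # The whole VDC counts as available only if every virtual node is.
        avail = 1.0
        for hosts in placement:
            avail *= node_availability(hosts)
        return avail

    # Two virtual nodes: the first on a 99.5% host with a 99.0% backup,
    # the second on a single 99.9% host (hypothetical heterogeneous values).
    print(round(vdc_availability([[0.995, 0.99], [0.999]]), 6))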
12

Spinner, Simon. "Self-Aware Resource Management in Virtualized Data Centers." Doctoral thesis, 2017. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-153754.

Full text
Abstract:
Enterprise applications in virtualized data centers are often subject to time-varying workloads, i.e., the load intensity and request mix change over time due to seasonal patterns and trends, or unpredictable bursts in user requests. Varying workloads result in frequently changing resource demands on the underlying hardware infrastructure. Virtualization technologies enable sharing and on-demand allocation of hardware resources between multiple applications. In this context, the resource allocations to virtualized applications should be continuously adapted in an elastic fashion, so that "at each point in time the available resources match the current demand as closely as possible" (Herbst et al., 2013). Autonomic approaches to resource management promise significant increases in resource efficiency while avoiding violations of performance and availability requirements during peak workloads. Traditional approaches for autonomic resource management use threshold-based rules (e.g., Amazon EC2) that execute pre-defined reconfiguration actions when a metric reaches a certain threshold (e.g., high resource utilization or load imbalance). However, many business-critical applications are subject to Service Level Objectives (SLOs) defined on an application performance metric (e.g., response time or throughput). Determining thresholds so that the end-to-end application SLO is fulfilled poses a major challenge due to the complex relationship between the resource allocation to an application and the application performance. Furthermore, threshold-based approaches are inherently prone to an oscillating behavior resulting in unnecessary reconfigurations. In order to overcome the deficiencies of threshold-based approaches and enable a fully automated approach to dynamically control the resource allocations of virtualized applications, model-based approaches are required that can predict the impact of a reconfiguration on the application performance in advance. However, existing model-based approaches are severely limited in their learning capabilities. They either require complete performance models of the application as input, or use a pre-identified model structure and only learn certain model parameters from empirical data at run-time. The former requires high manual effort and deep system knowledge to create the performance models. The latter does not provide the flexibility to capture the specifics of complex and heterogeneous system architectures. This thesis presents a self-aware approach to resource management in virtualized data centers. In this context, self-aware means that the approach automatically learns performance models of the application and the virtualized infrastructure and reasons based on these models to autonomically adapt the resource allocations in accordance with given application SLOs. Learning a performance model requires the extraction of the model structure representing the system architecture as well as the estimation of model parameters, such as resource demands. The estimation of resource demands is a key challenge as they cannot be observed directly in most systems. The major scientific contributions of this thesis are:
- A reference architecture for online model learning in virtualized systems. Our reference architecture is based on a set of model extraction agents. Each agent focuses on specific tasks to automatically create and update model skeletons capturing its local knowledge of the system and collaborates with other agents to extract the structural parts of a global performance model of the system. We define different agent roles in the reference architecture and propose a model-based collaboration mechanism for the agents. The agents may be bundled within virtual appliances and may be tailored to include knowledge about the software stack deployed in a specific virtual appliance.
- An online method for the statistical estimation of resource demands. For a given request processed by an application, the resource time consumed for a specified resource within the system (e.g., CPU or I/O device), referred to as resource demand, is the total average time the resource is busy processing the request. A request could be any unit of work (e.g., web page request, database transaction, batch job) processed by the system. We provide a systematization of existing statistical approaches to resource demand estimation and conduct an extensive experimental comparison to evaluate the accuracy of these approaches. We propose a novel method to automatically select estimation approaches and demonstrate that it increases the robustness and accuracy of the estimated resource demands significantly.
- Model-based controllers for autonomic vertical scaling of virtualized applications. We design two controllers based on online model-based reasoning techniques in order to vertically scale applications at run-time in accordance with application SLOs. The controllers exploit the knowledge from the automatically extracted performance models when determining necessary reconfigurations. The first controller adds and removes virtual CPUs to an application depending on the current demand. It uses a layered performance model to also consider physical resource contention when determining the required resources. The second controller adapts the resource allocations proactively to ensure the availability of the application during workload peaks and to avoid reconfiguration during phases of high workload.
We demonstrate the applicability of our approach in current virtualized environments and show its effectiveness, leading to significant increases in resource efficiency and improvements of the application performance and availability under time-varying workloads. The evaluation of our approach is based on two case studies representative of widely used enterprise applications in virtualized data centers. In our case studies, we were able to reduce the amount of required CPU resources by up to 23% and the number of reconfigurations by up to 95% compared to a rule-based approach, while ensuring full compliance with the application SLO. Furthermore, using workload forecasting techniques we were able to schedule expensive reconfigurations (e.g., changes to the memory size) during phases of low load and thus were able to reduce their impact on application availability by over 80% while significantly improving application performance compared to a reactive controller. The methods and techniques for resource demand estimation and vertical application scaling were developed and evaluated in close collaboration with VMware and Google.
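A classical family of the estimation approaches surveyed in this thesis starts from the utilization law U = sum_c D_c * X_c, which relates the observed utilization U of a resource to the per-class throughputs X_c and the unknown resource demands D_c, and estimates the demands by regression. The sketch below applies ordinary least squares to synthetic observations; the data, number of request classes, and noise level are assumptions, and the thesis compares many more estimation approaches and selects among them automatically.

    import numpy as np

    # Utilization law: U(t) = sum_c D_c * X_c(t), where X_c is the throughput of
    # request class c and D_c its (unknown) CPU demand per request.
    rng = np.random.default_rng(42)
    true_demands = np.array([0.005, 0.012, 0.030])    # seconds of CPU per request
    throughput = rng.uniform(5, 50, size=(200, 3))    # observed requests/s per class
    utilization = throughput @ true_demands + rng.normal(0, 0.01, 200)

    # Plain least squares shown for brevity; non-negative least squares or
    # Kalman-filter-based estimators are more robust in practice.
    estimated, *_ = np.linalg.lstsq(throughput, utilization, rcond=None)
    print("estimated demands (s):", np.round(estimated, 4))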
13

Marotta, Antonio. "Architectures and Algorithms for Resource Management in Virtualized Cloud Data Centers." Doctoral thesis, 2015. http://www.fedoa.unina.it/10416/1/marotta_antonio_27.pdf.

Full text
Abstract:
Cloud Computing has raised great interest over the last years as it represents an enabling technology for flexible and ubiquitous access over the network to a set of shared computing resources. The cloud paradigm leverages the instruments of virtualization, such as VM live migration, which can be exploited to achieve multiple objectives. For instance, it can be used for hardware maintenance purposes or to avoid overload and underload in resource utilization. Another typical use of VM migration is the consolidation of the workload onto a smaller number of physical hosts: this is one of the techniques aimed at increasing the energy efficiency of the IT infrastructure, which represents the motivation behind the birth of Green Computing. However, despite all the advantages that come from the application of the cloud paradigm, there is some reluctance in its adoption for mission-critical infrastructures because of the security pitfalls it still exhibits, as well as the lack of mechanisms intended to increase isolation and protection from internal and external threats. This thesis explores both of these aspects: on one hand, an architecture for enhancing the security of a virtualized critical infrastructure is designed and provided with a mitigation strategy based on the Software Defined Networking approach and live migration. On the other hand, migrations are used to propose two VM consolidation strategies with the objective of minimizing the overall infrastructure power consumption.
14

Lin, An-Dee, and 林安笛. "Capacity Planning and Goodput Optimization for Virtualized Data Centers with Composable Systems." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/g3nfka.

Full text
Abstract:
Doctoral dissertation, National Taiwan University, Graduate Institute of Communication Engineering, academic year 106.
Recent research trends exhibit a growing imbalance between the demands of tenants' software applications and the provisioning of hardware resources. Misalignment of demand and supply gradually hinders workloads from being efficiently mapped to fixed-size server nodes in traditional data centers. The resulting resource holes not only lower infrastructure utilization but also cripple the capability of a data center to host large-sized workloads. This deficiency motivates the development of a new rack-wide architecture referred to as the composable system. The composable system transforms traditional server racks of static capacity into a dynamic compute platform. Specifically, this novel architecture aims to link up all compute components that are traditionally distributed on server boards, such as the central processing unit (CPU), random access memory (RAM), storage devices, and other application-specific processors. By doing so, a logically giant compute platform is created, and this platform is more resistant to the variety of workload demands because it breaks the resource boundaries among traditional server boards. This research is divided into three parts. In the first part, we introduce the concepts of this reconfigurable architecture and design a framework of the composable system for cloud data centers. We then develop mathematical models to describe the resource usage patterns on this platform and enumerate some types of workloads that commonly appear in data centers. The simulations show that the composable system sustains nearly 1.6 times stronger workload intensity than traditional systems and is insensitive to the distribution of workload demands. This demonstrates that the composable system is indeed an effective solution to support cloud data center services. In the second part, we extend the framework from a single data center to a network of geographically distributed data centers in the serving area. A workload may need to migrate to the data center that is close to its tenants to lower transmission delay, and the migration may happen multiple times during its runtime, conditioned on the mobility of its tenants. We develop a two-tier model to characterize the overall resource usage patterns. The mobility patterns of tenants are transformed into effective arrival rates at each data center. Under the conditions of Poisson arrivals of incoming workloads and probabilistic mobility patterns, the resource usage patterns of each data center can be calculated in parallel. In the last part, we turn our attention to the communication links between workloads. An integrated application may consist of multiple workloads which run inside dedicated VMs. These VMs form a logical network which is physically distributed across the network of data centers. However, traditional protocols used in local area networks may not be applicable to data center networks due to differences in network topology. Recent research suggests that layer-2-in-layer-3 tunneling protocols may be the solution to address these challenges. We find via testbed experiments that directly applying these tunneling protocols to network virtualization results in poor performance due to scalability problems. Specifically, we observe that the bottlenecks actually reside inside the servers.
We then propose a CPU offloading mechanism that exploits a packet steering function to balance packet processing among available CPU threads, thus greatly improving network performance. Compared to a virtualized network created with VXLAN, our scheme improves the bandwidth by up to almost 300 percent on a 10 Gb/s link between a pair of tunnel endpoints.
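Balancing receive-side packet processing across CPU threads is what the standard Linux Receive Packet Steering (RPS) facility provides: writing a CPU bitmask to each RX queue's rps_cpus file in sysfs spreads protocol processing over the selected CPUs. The sketch below shows only that configuration step; the interface name and CPU mask are example values, and the thesis's full offloading mechanism may differ from plain RPS.

    import glob
    import os

    def enable_rps(interface, cpu_mask_hex):
        # Spread receive-side packet processing of all RX queues of `interface`
        # across the CPUs given by the hexadecimal bitmask (e.g. "ff" = CPUs 0-7).
        # Requires root; uses the standard Linux rps_cpus sysfs interface.
        pattern = f"/sys/class/net/{interface}/queues/rx-*/rps_cpus"
        for path in glob.glob(pattern):
            with open(path, "w") as f:
                f.write(cpu_mask_hex)
            print(f"{path} <- {cpu_mask_hex}")

    if __name__ == "__main__" and os.geteuid() == 0:
        # Example: steer packets of the VXLAN underlay NIC across 8 CPU threads.
        enable_rps("eth0", "ff")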
15

Hoyer, Marko [Verfasser]. "Resource management in virtualized data centers regarding performance and energy aspects / vorgelegt von Marko Hoyer." 2011. http://d-nb.info/101313575X/34.

Full text
16

Vinueza Naranjo, Paola Gabriela. "Energy Saving in QoS Fog-supported Data Centers." Doctoral thesis, 2018. http://hdl.handle.net/11573/1081356.

Full text
Abstract:
One of the most important challenges that cloud providers face amid the explosive growth of data is to reduce the energy consumption of their modern data centers. The majority of current research focuses on energy-efficient resource management in the infrastructure-as-a-service (IaaS) model through resource virtualization, i.e., the consolidation of virtual machines onto physical machines. However, current virtualized data centers do not support communication- and computing-intensive real-time applications such as big data stream computing (info-mobility applications, real-time video co-decoding). Indeed, imposing hard limits on the overall per-job computing-plus-communication delay forces the overall networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. Recently, Fog Computing centers have emerged as promising commodities of the Internet virtual computing platform, and their energy consumption is becoming a critical issue. It is therefore desirable to provide green solutions (i.e., energy-aware provisioning) that support fog-supported delay-sensitive web applications. Moreover, traffic engineering-based methods can dynamically adjust the number of active servers to match the current workload. It is thus desirable to develop a flexible, reliable technological paradigm and resource allocation algorithms that take the consumed energy into account. Furthermore, these algorithms should automatically adapt themselves to time-varying workloads and support joint reconfiguration and orchestration of the virtualized computing-plus-communication resources available at the computing nodes, while allowing IoT devices to operate under real-time constraints on the admissible computing-plus-communication delay and service latency. The purpose of this thesis is: i) to propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, detailing the main building blocks and services of the corresponding technological platform and protocol stack; ii) to propose a dynamic and adaptive energy-aware algorithm that models and manages the Fog Nodes (FNs) of virtualized networked data centers, so as to minimize the resulting networking-plus-computing average energy consumption; and iii) to propose a novel Software-as-a-Service (SaaS) Fog Computing platform to integrate user applications over the FoE. The emerging use of SaaS Fog Computing centers as an Internet virtual computing commodity is to support delay-sensitive applications. The virtualized Fog node operates at the Middleware layer of the underlying protocol stack and comprises: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated on the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP connection.
The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it is capable of providing hard QoS guarantees in terms of the minimum/maximum instantaneous rate of the traffic delivered to the client, the instantaneous goodput, and the total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency. The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-threshold consolidation costs of the underlying networked computing platform; and iv) abrupt changes in the transport quality of the available TCP/IP mobile connection, is numerically tested and compared against that of some state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.
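The energy/performance trade-off that DVFS-enabled VMs expose can be illustrated with a toy model in which dynamic power grows roughly with the cube of the clock frequency while execution time shrinks linearly with it, so that the energy-optimal frequency depends on the idle power and on the processing deadline. The constants, job size, and deadline below are purely illustrative assumptions, not the thesis's scheduler model.

    def job_energy(cycles, freq_ghz, p_idle=10.0, k=8.0):
        # Energy (J) to process `cycles` giga-cycles at `freq_ghz`.
        # Dynamic power modelled as k * f**3 (W), static/idle power as p_idle;
        # execution time is cycles / f. Constants are illustrative only.
        time_s = cycles / freq_ghz
        power_w = p_idle + k * freq_ghz ** 3
        return power_w * time_s

    # Sweep frequencies for a 2 giga-cycle job with a 2-second deadline.
    for f in (0.8, 1.0, 1.2, 1.6, 2.0, 2.4):
        t = 2.0 / f
        tag = "meets deadline" if t <= 2.0 else "misses deadline"
        print(f"{f:.1f} GHz: {job_energy(2.0, f):6.1f} J, {t:.2f} s ({tag})")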
