Dissertations / Theses on the topic 'Cloud data centers'

To see the other types of publications on this topic, follow the link: Cloud data centers.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Cloud data centers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Mahmud, A. S. M. Hasan. "Sustainable Resource Management for Cloud Data Centers." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2634.

Full text
Abstract:
In recent years, the demand for data center computing has increased significantly due to the growing popularity of cloud applications and Internet-based services. Today's large data centers host hundreds of thousands of servers, and the peak power rating of a single data center may even exceed 100 MW. The combined electricity consumption of global data centers accounts for about 3% of worldwide electricity production, raising serious concerns about their carbon footprint. Utility providers and governments are consistently pressuring data center operators to reduce their carbon footprint and energy consumption. While these operators (e.g., Apple, Facebook, and Google) have taken steps to reduce their carbon footprints (e.g., by installing on-site/off-site renewable energy facilities), they are aggressively looking for new approaches that do not require expensive hardware installation or modification. This dissertation focuses on developing algorithms and systems that improve sustainability in data centers without incurring significant additional operational or setup costs. In the first part, we propose a provably-efficient resource management solution for a self-managed data center to cap and reduce carbon emissions while maintaining satisfactory service performance. Our solution reduces the carbon emissions of a self-managed data center to a net-zero level and achieves carbon neutrality. In the second part, we consider minimizing carbon emissions in a hybrid data center infrastructure that includes geographically distributed self-managed and colocation data centers. This part identifies and addresses the challenges of resource management in a hybrid data center infrastructure and proposes an efficient distributed solution to jointly optimize the workload and resource allocation in both self-managed and colocation data centers. In the final part, we explore sustainable resource management from the cloud service user's point of view. A cloud service user purchases computing resources (e.g., virtual machines) from the service provider and does not have direct control over the carbon emissions of the service provider's data center. Our proposed solution encourages a user to take part in sustainable (both economic and environmental) computing by limiting its spending on cloud resource purchases while satisfying its application performance requirements.
APA, Harvard, Vancouver, ISO, and other styles
2

Jawad, Muhammad. "Energy Efficient Data Centers for On-Demand Cloud Services." Diss., North Dakota State University, 2015. http://hdl.handle.net/10365/25198.

Full text
Abstract:
The primary objective of Data Centers (DCs) is to provide in-time services to cloud customers. For in-time services, DCs require an uninterruptible power supply at low cost. The DCs' power supply is directly linked with the stability and steady-state performance of the power system under faults and disturbances. Smart Grids (SGs), also known as next-generation power systems, utilize communication and information technology to optimize power generation, distribution, and consumption. Therefore, it is beneficial to run DCs under an SG environment. We present a thorough study of the wide-area smart grid architecture, design, network, and control. The goal was to become familiar with smart grid operation, monitoring, and control. We analyze different control mechanisms proposed in the past to study the behavior of the wide-area smart grid under symmetric and asymmetric grid fault conditions. The study of the SG architecture was a first step toward designing power management and energy cost reduction models for DCs running under SGs. First, we present a Power Management Model (PMM) for DCs to estimate energy consumption cost. The PMM is a comprehensive model that takes many important quantities into account, such as DC power consumption, DC battery bank charging/discharging, backup generation operation during power outages, and power transactions between the main grid and the SG. Second, renewable energy, such as wind energy, is integrated with the SG to minimize DC energy consumption cost. Third, forecasting algorithms are introduced in the PMM to predict DC power consumption, wind energy generation, and main grid power availability for the SG. The forecasting algorithms are employed for day-ahead and week-ahead prediction horizons; their purpose is to manage power generation and consumption and to reduce energy prices. Fourth, we formulate a chargeback model for DC customers to calculate the cost of on-demand cloud services. The DC energy consumption cost estimated through the PMM is combined with the other operational and capital expenditures to calculate the per-server utilization cost for DC customers. Finally, the effectiveness of the proposed models is evaluated on real-world data sets.
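As a rough illustration of the kind of accounting a power management model performs, the sketch below balances one hour of data-center demand against wind generation, a battery bank, and grid purchases. It is not the PMM developed in the thesis; the serve-wind-then-battery-then-grid policy, the function names, and every number are assumptions made for illustration only.

```python
# Illustrative sketch only: a highly simplified hourly energy-cost balance in the
# spirit of a data-center power management model. All names and numbers are
# hypothetical; the dissertation's actual PMM is far more detailed.

def hourly_energy_cost(demand_kwh, wind_kwh, battery_kwh, battery_cap_kwh, grid_price):
    """Serve demand from wind, then battery, then the grid; surplus wind charges the battery.

    Returns (grid_cost, battery_kwh_after).
    """
    from_wind = min(demand_kwh, wind_kwh)
    remaining = demand_kwh - from_wind
    surplus_wind = wind_kwh - from_wind

    from_battery = min(remaining, battery_kwh)
    remaining -= from_battery
    battery_kwh = min(battery_cap_kwh, battery_kwh - from_battery + surplus_wind)

    grid_cost = remaining * grid_price          # only the residual load is bought from the grid
    return grid_cost, battery_kwh


if __name__ == "__main__":
    battery = 50.0                              # kWh currently stored (hypothetical)
    total_cost = 0.0
    # (demand, wind generation, grid price $/kWh) for three example hours
    for demand, wind, price in [(120, 40, 0.10), (100, 90, 0.12), (140, 10, 0.08)]:
        cost, battery = hourly_energy_cost(demand, wind, battery, battery_cap_kwh=200, grid_price=price)
        total_cost += cost
    print(f"total grid cost over 3 hours: ${total_cost:.2f}")
```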
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Gong. "Data and application migration in cloud based data centers --architectures and techniques." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41078.

Full text
Abstract:
Computing and communication have continued to impact the way we run business, the way we learn, and the way we live. The rapid evolution of computing technology has also expedited the growth of digital data, the workload of services, and the complexity of applications. Today, the cost of managing storage hardware ranges from two to ten times the acquisition cost of the storage hardware. We see an increasing demand for technologies that transfer the management burden from humans to software. Data migration and application migration are among the popular technologies that enable computing and data storage management to be autonomic and self-managing. In this dissertation, we examine important issues in designing and developing scalable architectures and techniques for efficient and effective data migration and application migration. The first contribution we have made is to investigate the opportunity of automated data migration across multi-tier storage systems. The significant I/O improvement of Solid State Disks (SSDs) over traditional rotational hard disks (HDDs) motivates the integration of SSDs into the existing storage hierarchy for enhanced performance. We developed an adaptive look-ahead data migration approach to effectively integrate SSDs into the multi-tiered storage architecture. When the fast and expensive SSD tier is used to store high-temperature data (hot data) while the relatively low-temperature data (cold data) is placed in the HDD tier, one important functionality is to manage the migration of data as its access pattern changes from hot to cold and vice versa. For example, workloads during the day in typical banking applications can be dramatically different from those during the night. We designed and implemented an adaptive look-ahead data migration model. A unique feature of our automated migration approach is its ability to dynamically adapt the data migration schedule to achieve optimal migration effectiveness by taking into account application-specific characteristics and I/O profiles as well as workload deadlines. Our experiments over a real system trace show that the basic look-ahead data migration model is effective in improving system resource utilization, and that the adaptive look-ahead migration model is more efficient for continuously improving and tuning the performance and scalability of multi-tier storage systems. The second main contribution of this dissertation research is to address the challenge of ensuring reliability and balancing load across a network of computing nodes managed in a decentralized service computing system. Considering the provision of location-based services for geographically distributed mobile users, the continuous and massive service request workloads pose significant technical challenges for the system to guarantee scalable and reliable service provision. We design and develop a decentralized service computing architecture, called Reliable GeoGrid, with two unique features. First, we develop a distributed workload migration scheme with controlled replication, which utilizes a shortcut-based optimization to increase the resilience of the system against various node failures and network partition failures. Second, we devise a dynamic load balancing technique to scale the system in anticipation of unexpected workload changes. Our experimental results show that the Reliable GeoGrid architecture is highly scalable under changing service workloads with moving hotspots and highly reliable in the presence of massive node failures. The third research thrust of this dissertation is focused on studying the process of migrating applications from local physical data centers to the Cloud. We design migration experiments, study the error types, and build an error model. Based on the analysis and observations from the migration experiments, we propose the CloudMig system, which provides both configuration validation and installation automation and effectively reduces configuration errors and installation complexity. In this dissertation, I provide an in-depth discussion of the principles of migration and its applications in improving data storage performance, balancing service workloads, and adapting to the cloud platform.
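To make the idea of temperature-driven tier migration concrete, here is a minimal sketch that promotes frequently accessed extents to an SSD tier and demotes cooled-off ones back to HDD. The thresholds, names, and simple counting policy are assumptions; the adaptive look-ahead scheduling with I/O profiles and deadlines described in the abstract is not modeled here.

```python
# Minimal sketch of temperature-driven tier placement: extents whose recent access
# count crosses a "hot" threshold are scheduled for migration to the SSD tier, and
# extents that have cooled off move back to HDD. This only illustrates the idea of
# access-temperature-based migration, not the adaptive look-ahead model above.

from collections import defaultdict

HOT_THRESHOLD = 100    # accesses per window (hypothetical)
COLD_THRESHOLD = 10

def plan_migrations(access_counts, current_tier):
    """access_counts: extent_id -> accesses in the last window.
    current_tier: extent_id -> 'ssd' or 'hdd'.
    Returns a list of (extent_id, source_tier, target_tier) migration tasks."""
    tasks = []
    for extent, count in access_counts.items():
        tier = current_tier.get(extent, "hdd")
        if tier == "hdd" and count >= HOT_THRESHOLD:
            tasks.append((extent, "hdd", "ssd"))     # promote hot extent
        elif tier == "ssd" and count <= COLD_THRESHOLD:
            tasks.append((extent, "ssd", "hdd"))     # demote cold extent
    return tasks

counts = defaultdict(int, {"e1": 250, "e2": 3, "e3": 40})
tiers = {"e1": "hdd", "e2": "ssd", "e3": "hdd"}
print(plan_migrations(counts, tiers))   # [('e1', 'hdd', 'ssd'), ('e2', 'ssd', 'hdd')]
```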
APA, Harvard, Vancouver, ISO, and other styles
4

Penumetsa, Swetha. "A comparison of energy efficient adaptation algorithms in cloud data centers." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17374.

Full text
Abstract:
Context: In recent years, Cloud computing has gained a wide range of attention in both industry and academia, as Cloud services offer a pay-per-use model and the need for reliability and fast computing results keeps growing, together with the immense growth of Cloud-based companies and the continuous expansion of their scale. However, the rise in Cloud computing users has a negative impact on energy consumption, as Cloud data centers consume a huge share of the overall energy. In order to minimize the energy consumption of virtual datacenters, researchers have proposed various energy-efficient resource management strategies. Dynamic Virtual Machine consolidation is one of the prominent techniques and an active research area in recent times, used to improve resource utilization and minimize the electric power consumption of a data center. This technique monitors data center utilization, identifies overloaded and underloaded hosts, migrates some or all Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement, and switches underloaded hosts to sleep mode. Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that save energy in Cloud data centers, identify the best-performing algorithm, and compare the performance of the proposed heuristics with existing ones. Methods: Initially, a literature review is conducted to obtain knowledge about the adaptive heuristic algorithms previously proposed for energy-aware VM consolidation and to find the metrics used to measure their performance. Based on this knowledge, we propose 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristics for VM placement, which together help minimize both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. Further, an experiment is conducted to measure the performance of all proposed heuristic algorithms. We use the CloudSim simulation toolkit for the modeling, simulation, and implementation of the proposed heuristics, and we evaluate them using real PlanetLab VM workload traces. Results: The results are measured using the following metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), SLA violation Time per Active Host (SLATAH), SLA Violation (SLAV = PDM × SLATAH), and combined Energy consumption and SLA Violation (ESV). For all four categories of VM consolidation, we compare the performance of the proposed heuristics with each other and present the best heuristic algorithm in each category. We also compare the proposed heuristics with existing heuristics identified in the literature and report how many of the newly proposed algorithms work more efficiently than the existing ones. This comparative analysis is done using the t-test and Cohen's d effect size. From the comparison of all proposed algorithms, we conclude that the Mean Absolute Deviation around median (MADmedian) host overload detection algorithm equipped with Maximum requested RAM VM selection (MaxR) using Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm equipped with MaxR VM selection using Modified Last Fit Decreasing VM placement (MLFD), respectively, performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to Energy consumption and SLA Violation (ESV). Furthermore, in the comparison between existing and proposed algorithms, 23 and 21 combinations of the proposed host overload detection and VM selection algorithms, using MFFD and MLFD VM placement respectively, performed more efficiently than the existing (baseline) heuristic algorithms considered for this study. Conclusions: This thesis presents novel heuristic algorithms that are useful for minimizing both energy consumption and SLA violation in virtual datacenters. It presents 23 new combinations of proposed host overload detection and VM selection algorithms using MFFD VM placement and 21 combinations using MLFD VM placement, which consume the minimum amount of energy with minimal SLA violation compared to the existing algorithms. It gives scope for future research on improving resource utilization and minimizing the electric power consumption of a data center. This study can be further extended by implementing the work on other Cloud software platforms and developing more efficient algorithms for all four categories of VM consolidation.
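The following sketch shows the general shape of two pieces named in the abstract: a MAD-based dynamic overload threshold and the SLAV/ESV compositions. It is a simplified illustration under assumed parameter values (for example the safety factor), not the exact heuristics evaluated in the thesis.

```python
# Sketch of a MAD-based host-overload threshold (the utilization threshold tightens
# when recent CPU history is volatile) and the SLAV / ESV compositions described
# above. Parameter values and sample data are assumptions for illustration.

import statistics

def mad_overload_threshold(cpu_history, safety=2.5):
    """Return a dynamic utilization threshold in [0, 1] from recent CPU samples."""
    median = statistics.median(cpu_history)
    mad = statistics.median(abs(x - median) for x in cpu_history)
    return min(1.0, max(0.0, 1.0 - safety * mad))

def is_overloaded(cpu_history, current_utilization):
    return current_utilization > mad_overload_threshold(cpu_history)

def slav(pdm, slatah):
    """SLA violation metric: SLAV = PDM * SLATAH."""
    return pdm * slatah

def esv(energy_kwh, slav_value):
    """Combined energy and SLA violation metric: ESV = energy * SLAV."""
    return energy_kwh * slav_value

history = [0.55, 0.60, 0.58, 0.90, 0.62]         # recent CPU utilization samples
print(is_overloaded(history, current_utilization=0.93))
print(esv(energy_kwh=120.0, slav_value=slav(pdm=0.02, slatah=0.05)))
```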
APA, Harvard, Vancouver, ISO, and other styles
5

Takouna, Ibrahim. "Energy-efficient and performance-aware virtual machine management for cloud data centers." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/texte_eingeschraenkt_verlag/2014/7239/.

Full text
Abstract:
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal load inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following. A proactive resource provisioning technique based on robust optimization that increases the hosts' availability for hosting new VMs while minimizing idle energy consumption; additionally, this technique mitigates undesirable changes in the power state of the hosts, enhancing the hosts' reliability by avoiding failures during power state changes. The proposed technique exploits a range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for predicting workloads with high fluctuations in the short term; the range prediction is implemented in two variants, standard deviation and median absolute deviation, and the range is changed based on an adaptive confidence window to cope with workload fluctuations. A robust VM consolidation for efficient energy and performance management that achieves an equilibrium between energy and performance trade-offs; our technique reduces the number of VM migrations compared to recently proposed techniques, which also contributes to a reduction in the energy consumed by the network infrastructure, and it additionally reduces SLA violations and the number of power state changes. A generic model of the data center network to simulate communication delay and its impact on VM performance as well as network energy consumption, together with a generic model of a server memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating memory delay and its influence on VM performance and memory energy consumption. A communication-aware and energy-efficient consolidation for parallel applications that enables the dynamic discovery of communication patterns and reschedules VMs using migration based on the determined communication patterns; a novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization, instead of using information from the hosts' virtual switches or initiation from the VMs. The results show that our approach reduces the network's average utilization, achieves energy savings due to reducing the number of active switches, and provides better VM performance compared to CPU-based placement. Finally, a memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance the memory-bus utilization of hosts; the proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization, using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
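A minimal sketch of range-based (interval) prediction in the two variants mentioned above, standard deviation and median absolute deviation, follows. The adaptive confidence window of the thesis is simplified here to a fixed multiplier k, and the sample data are invented.

```python
# Minimal sketch of range-based (interval) workload prediction in two variants:
# standard deviation and median absolute deviation around a recent window. The
# adaptive confidence window is reduced to a fixed multiplier k for illustration.

import statistics

def predict_range(samples, k=1.5, variant="std"):
    """Return a (low, high) interval for the next demand value."""
    center = statistics.median(samples) if variant == "mad" else statistics.mean(samples)
    if variant == "mad":
        spread = statistics.median(abs(x - center) for x in samples)
    else:
        spread = statistics.stdev(samples)
    return max(0.0, center - k * spread), center + k * spread

cpu_demand = [30, 35, 33, 60, 38, 36]            # % CPU over recent intervals (hypothetical)
print(predict_range(cpu_demand, variant="std"))
print(predict_range(cpu_demand, variant="mad"))
```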
APA, Harvard, Vancouver, ISO, and other styles
6

Yanggratoke, Rerngvit. "Contributions to Performance Modeling and Management of Data Centers." Licentiate thesis, KTH, Kommunikationsnät, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129296.

Full text
Abstract:
Over the last decade, Internet-based services, such as electronic-mail, music-on-demand, and social-network services, have changed the ways we communicate and access information. Usually, the key functionality of such a service is in backend components, which are located in a data center, a facility for hosting computing systems and related equipment. This thesis focuses on two fundamental problems related to the management, dimensioning, and provisioning of such backend components. The first problem centers around resource allocation for a large-scale cloud environment. Data centers have become very large; they often contain hundreds of thousands of machines and applications. In such a data center, resource allocation cannot be efficiently achieved through a traditional management system that is centralized in nature. Therefore, a more scalable solution is needed. To address this problem, we have developed and evaluated a scalable and generic protocol for resource allocation. The protocol is generic in the sense that it can be instantiated for different management objectives through objective functions. The protocol jointly allocates CPU, memory, and network resources to applications that are hosted by the cloud. We prove that the protocol converges to a solution, if an objective function satisfies a certain property. We perform a simulation study of the protocol for realistic scenarios. Simulation results suggest that the quality of the allocation is independent of the system size, up to 100,000 machines and applications, for the management objectives considered. The second problem is related to performance modeling of a distributed key-value store. The specific distributed key-value store we focus on in this thesis is the Spotify storage system. Understanding the performance of the Spotify storage system is essential for achieving a key quality of service objective, namely that the playback latency of a song is sufficiently low. To address this problem, we have developed and evaluated models for predicting the performance of a distributed key-value store for a lightly loaded system. First, we developed a model that allows us to predict the response time distribution of requests. Second, we modeled the capacity of the distributed key-value store for two different object allocation policies. We evaluate the models by comparing model predictions with measurements from two different environments: our lab testbed and a Spotify operational environment. We found that the models are accurate in the sense that the prediction error, i.e., the difference between the model predictions and the measurements from the real systems, is at most 11%.


APA, Harvard, Vancouver, ISO, and other styles
7

Peloso, Pietro. "Possibili soluzioni per garantire qos nelle comunicazioni inter-data centers in ambienti cloud computing." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6205/.

Full text
Abstract:
In this work, starting from the definition of some key points of the cloud computing concept, we focus on the issues related to the performance of cloud environments and on the various proposals currently on the market, together with their limitations. After describing them in detail, the different proposals are compared with one another in order to highlight, for each of them, both its positive aspects and its critical points.
APA, Harvard, Vancouver, ISO, and other styles
8

Atchukatla, Mahammad suhail. "Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17221.

Full text
Abstract:
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization, so demand for cloud datacenters has increased tremendously over time, resulting in significantly increased resource utilization of the datacenters. In this thesis work, research was carried out on optimizing energy consumption by packing virtual machines in the datacenter. The CloudSim simulator was used for evaluating bin-packing algorithms, and for the practical implementation the OpenStack cloud computing environment was chosen as the platform for this research. Objectives: In this research, our objectives are as follows: (1) perform simulation of the algorithms in the CloudSim simulator; (2) estimate and compare the energy consumption of different packing algorithms; and (3) design an OpenStack testbed to implement a bin-packing algorithm. Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit, and Enhanced Best Fit algorithms. We design a heuristic model for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment. Results: In most cases the Enhanced Best Fit algorithm gives the best results. Results are obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work, and their comparison indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements. Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and minimize the energy consumption of the physical machines by shutting down the unused ones. The results indicate that CPU utilization does not vary much when live migration of virtual machines is performed.
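For readers unfamiliar with the bin-packing view of VM placement, the sketch below shows a one-dimensional best-fit placement by CPU demand. It only illustrates the packing idea; the CloudSim and OpenStack implementations discussed above also handle RAM, migration, and host power states, and all names and numbers here are invented.

```python
# One-dimensional sketch of the bin-packing idea behind these placement algorithms:
# best-fit places each VM on the active host that leaves the least spare CPU, opening
# a new host only when nothing fits.

def best_fit_placement(vm_cpu_demands, host_cpu_capacity):
    """Return a list of hosts, each a dict with its remaining capacity and VM list."""
    hosts = []
    for vm, demand in vm_cpu_demands.items():
        candidates = [h for h in hosts if h["free"] >= demand]
        if candidates:
            target = min(candidates, key=lambda h: h["free"] - demand)   # tightest fit
        else:
            target = {"free": host_cpu_capacity, "vms": []}              # power on a new host
            hosts.append(target)
        target["free"] -= demand
        target["vms"].append(vm)
    return hosts

demands = {"vm1": 2.0, "vm2": 1.5, "vm3": 3.0, "vm4": 0.5}               # vCPUs (hypothetical)
for i, host in enumerate(best_fit_placement(demands, host_cpu_capacity=4.0)):
    print(f"host{i}: vms={host['vms']} free={host['free']}")
```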
APA, Harvard, Vancouver, ISO, and other styles
9

Pipkin, Everest R. "It Was Raining in the Data Center." Research Showcase @ CMU, 2018. http://repository.cmu.edu/theses/138.

Full text
Abstract:
Stemming from a 2011 incident inside a Facebook data facility in which hyper-cooled air formed a literal (if somewhat transient) rain cloud in the stacks, It was raining in the data center examines ideas of non-places and supermodernity applied to contemporary network infrastructure. It was raining in the data center argues that the problem of the rain cloud is as much a problem of psychology as it is a problem of engineering. Although humidity management is a predictable snag for any data center, the cloud was a surprise; a self-inflicted side effect of a strategy of distance. The rain cloud was a result of the same rhetoric of ephemerality that makes it easy to imagine the inside of a data center to be both everywhere and nowhere. This conceit of internet data being placeless shares roots with Marc Augé’s idea of non-places (airports, highways, malls), which are predicated on the qualities of excess and movement. Without long-term inhabitants, these places fail to tether themselves to their locations, instead existing as markers of everywhere. Such a premise allows the internet to exist as an other-space that is not conceptually beholden to the demands of energy and landscape. It also liberates the idea of ‘the network’ from a similar history of industry. However, the network is deeply rooted in place, as well as in industry and transit. Examining the prevalence of network overlap in American fiber-optic cabling, it becomes easy to trace routes of cables along major US freight train lines and the US interstate highway system. The historical origin of this network technology is in weaponization and defense, from highways as a nuclear-readiness response to ARPANET’s Pentagon-based funding. Such a linkage with the military continues today, with data centers likely to be situated near military installations, sharing similar needs: electricity, network connectivity, fair climate, space, and invisibility. We see the repetition of militarized tropes across data structures. Fiber-optic network locations are kept secret; servers are housed in cold-war bunkers; data centers nest next to military black-sites. Similarly, Augé reminds us that non-places are a particular target of terrorism, populated as they are with cars, trains, drugs and planes that turn into weapons. When the network itself is at threat of weaponization, the effect is an ambient and ephemeral fear; a paranoia made of over-connection.
APA, Harvard, Vancouver, ISO, and other styles
10

Bergström, Rasmus. "Predicting Container-Level Power Consumption in Data Centers using Machine Learning Approaches." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79416.

Full text
Abstract:
Due to the ongoing climate crisis, reducing waste and carbon emissions has become a hot topic in many fields of study. Cloud data centers contribute a large portion of the world’s energy consumption. In this work, methodologies are developed using machine learning algorithms to improve the prediction of the energy consumption of a container in a data center. The goal is to share this information with the user ahead of time, so that they can make educated decisions about their environmental footprint. This work differentiates itself by its sole focus on optimizing prediction, as opposed to other approaches in the field where energy modeling and prediction have been studied as a means to building advanced scheduling policies in data centers. In this thesis, a qualitative comparison between various machine learning approaches to energy modeling and prediction is put forward. These approaches include Linear, Polynomial Linear, and Polynomial Random Forest Regression, as well as a Genetic Algorithm, LSTM Neural Networks, and Reinforcement Learning. The best results were obtained using Polynomial Random Forest Regression, which produced a Mean Absolute Error of 26.48% when run against data center metrics gathered after the model was built. This prediction engine was then integrated into a Proof of Concept application as an educative tool to estimate which metrics of a cloud job have what impact on container power consumption.
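The sketch below shows what a "polynomial random forest" style pipeline can look like with scikit-learn, trained on synthetic container metrics. The feature names and all numbers are invented, and the model is not the one built in the thesis; it only illustrates the regression setup.

```python
# Sketch of a polynomial-features + random-forest power-prediction pipeline using
# scikit-learn on synthetic data. Features (cpu share, memory, network) and figures
# are made up; the thesis's reported 26.48% error relates to its own traces.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n = 500
cpu = rng.uniform(0, 1, n)          # container CPU share
mem = rng.uniform(0, 8, n)          # GiB
net = rng.uniform(0, 100, n)        # Mbit/s
X = np.column_stack([cpu, mem, net])
watts = 20 + 60 * cpu + 1.5 * mem + 0.05 * net + rng.normal(0, 2, n)   # synthetic target

model = make_pipeline(PolynomialFeatures(degree=2),
                      RandomForestRegressor(n_estimators=100, random_state=0))
model.fit(X[:400], watts[:400])

pred = model.predict(X[400:])
mape = np.mean(np.abs(pred - watts[400:]) / watts[400:]) * 100
print(f"mean absolute percentage error on held-out samples: {mape:.1f}%")
```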
APA, Harvard, Vancouver, ISO, and other styles
11

Alharbi, Fares Abdi H. "Profile-based virtual machine management for more energy-efficient data centers." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129871/8/Fares%20Abdi%20H%20Alharbi%20Thesis.pdf.

Full text
Abstract:
This research develops a resource management framework for improved energy efficiency in cloud data centers through energy-efficient virtual machine placement to physical machines as well as application assignment to virtual machines. The study investigates static virtual machine placement, dynamic virtual machine placement and application assignment using ant colony optimization to minimize the total energy consumption in data centers.
APA, Harvard, Vancouver, ISO, and other styles
12

Le, Trung. "Towards Sustainable Cloud Computing: Reducing Electricity Cost and Carbon Footprint for Cloud Data Centers through Geographical and Temporal Shifting of Workloads." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23082.

Full text
Abstract:
Cloud Computing presents a novel way for businesses to procure their IT needs. Its elasticity and on-demand provisioning enable a shift from capital expenditures to operating expenses, giving businesses the technological agility they need to respond to an ever-changing marketplace. The rapid adoption of Cloud Computing, however, poses a unique challenge to Cloud providers—their already very large electricity bill and carbon footprint will get larger as they expand; managing both costs is therefore essential to their growth. This thesis squarely addresses the above challenge. Recognizing the presence of Cloud data centers in multiple locations and the differences in electricity price and emission intensity among these locations and over time, we develop an optimization framework that couples workload distribution with time-varying signals on electricity price and emission intensity for financial and environmental benefits. The framework comprises an optimization model, an aggregate cost function, and six scheduling heuristics. To evaluate cost savings, we run simulations with 5 data centers located across North America over a period of 81 days. We use historical data on electricity price, emission intensity, and workload collected from market operators and research data archives. We find that our framework can produce substantial cost savings, especially when workloads are distributed both geographically and temporally—up to 53.35% on electricity cost, or 29.13% on carbon cost, or 51.44% on electricity cost and 13.14% on carbon cost simultaneously.
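To illustrate the intuition of geographical shifting, here is a greedy one-hour sketch that routes deferrable load to the data centers with the lowest combined electricity-plus-carbon price. The prices, carbon intensities, carbon price, and linear cost model are assumptions; the thesis uses a full optimization model and six scheduling heuristics rather than this one-step greedy rule.

```python
# Greedy sketch of geographical workload shifting: each hour, deferrable work is sent
# to the data center whose combined electricity-plus-carbon price is currently lowest,
# subject to its capacity. All figures are hypothetical.

def cheapest_dc(prices, carbon_intensity, carbon_price):
    """Combined $/kWh cost per data center for one hour."""
    return {dc: prices[dc] + carbon_price * carbon_intensity[dc] for dc in prices}

def assign_hour(load_kwh, capacity_kwh, prices, carbon_intensity, carbon_price=0.02):
    cost = cheapest_dc(prices, carbon_intensity, carbon_price)
    plan, remaining = {}, load_kwh
    for dc in sorted(cost, key=cost.get):            # fill cheapest locations first
        share = min(remaining, capacity_kwh[dc])
        plan[dc] = share
        remaining -= share
        if remaining <= 0:
            break
    return plan

prices = {"virginia": 0.06, "oregon": 0.04, "quebec": 0.05}          # $/kWh (hypothetical)
carbon = {"virginia": 0.45, "oregon": 0.30, "quebec": 0.02}          # kgCO2/kWh (hypothetical)
caps = {"virginia": 500, "oregon": 300, "quebec": 400}               # kWh deliverable this hour
print(assign_hour(load_kwh=800, capacity_kwh=caps, prices=prices, carbon_intensity=carbon))
```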
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Chengwei. "Monitoring and analysis system for performance troubleshooting in data centers." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50411.

Full text
Abstract:
It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. For example, Netflix, which was using hundreds of Amazon ELB services, experienced an extensive streaming outage, and many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world’s attention. As this Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time-consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By running these algorithms, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze the interactions to find out which components are relevant to the performance issue. VScope’s capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope’s ability to support fast operation and online queries against a comprehensive set of application-to-system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a ‘thin layer’ that occupies no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with levels of perturbation over 400% lower than what is seen for brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
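As a toy stand-in for the metric-level anomaly detection that such a monitoring system runs continuously, the sketch below flags samples whose z-score against a sliding window exceeds a threshold. It is not VScope's algorithm; the window size, threshold, and data are invented.

```python
# Toy sliding-window z-score anomaly detector: flag a sample that deviates strongly
# from the recent history of a metric. Illustrative only; not VScope's detection.

from collections import deque
import statistics

class SlidingZScoreDetector:
    def __init__(self, window=30, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to the recent window."""
        anomalous = False
        if len(self.samples) >= 10:
            mean = statistics.mean(self.samples)
            stdev = statistics.stdev(self.samples) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.samples.append(value)
        return anomalous

detector = SlidingZScoreDetector()
latencies_ms = [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 12, 90]     # last value is a spike
print([detector.observe(x) for x in latencies_ms])
```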
APA, Harvard, Vancouver, ISO, and other styles
14

Chkirbene, Zina. "Network topologies for cost reduction and QoS improvement in massive data centers." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK002/document.

Full text
Abstract:
Data centers (DCs) are being built around the world to provide various cloud computing services. One of the fundamental challenges of existing DCs is to design a network that interconnects a massive number of nodes (servers) while reducing the DCs' cost and energy consumption. Several solutions have been proposed (e.g., FatTree, DCell and BCube), but they either scale too fast (i.e., doubly exponentially) or too slowly. Efficient DC topologies should offer high scalability, low latency, low Average Path Length (APL), high Aggregated Bottleneck Throughput (ABT), and low cost and energy consumption. In this dissertation, different solutions are proposed to overcome these problems. First, we propose a novel DC topology called LCT (Linked Cluster Topology) as a new solution for building scalable and cost-effective DC networking infrastructures. The proposed topology reduces the number of redundant connections between clusters of nodes while increasing the number of nodes, without affecting the network bisection bandwidth. Furthermore, in order to reduce DC cost and energy consumption, we propose a new static energy-saving topology called VacoNet (Variable Connection Network), which connects the needed number of servers while reducing the unused materials (cables, switches) and defines the exact number of ports per switch required to do so. Finally, we propose a new approach that exploits the temporal correlation of inter-node communication and some topological features to maximize energy savings without significantly impacting the average path length: it periodically estimates the traffic matrix and manages the state of server ports while keeping the data center fully connected, taking network traffic into account in the port-management decision.
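The sketch below only illustrates how topology metrics such as average path length and diameter are evaluated, using networkx on an arbitrary two-cluster toy graph; it does not reproduce the LCT or VacoNet constructions.

```python
# Small illustration of evaluating topology metrics (average path length, diameter)
# on a toy two-cluster graph with networkx. The graph is arbitrary and chosen only
# to show how such metrics are computed.

import networkx as nx

g = nx.Graph()
# two 4-node clusters, each fully meshed, joined by two inter-cluster links
cluster_a = ["a1", "a2", "a3", "a4"]
cluster_b = ["b1", "b2", "b3", "b4"]
for cluster in (cluster_a, cluster_b):
    for i, u in enumerate(cluster):
        for v in cluster[i + 1:]:
            g.add_edge(u, v)
g.add_edge("a1", "b1")
g.add_edge("a3", "b3")

print("nodes:", g.number_of_nodes(), "links:", g.number_of_edges())
print("average path length:", round(nx.average_shortest_path_length(g), 2))
print("diameter:", nx.diameter(g))
```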
APA, Harvard, Vancouver, ISO, and other styles
15

Soares, Maria José. "Data center - a importância de uma arquitectura." Master's thesis, Universidade de Évora, 2011. http://hdl.handle.net/10174/11604.

Full text
Abstract:
This work presents an overview study addressing the importance of architecture in Data Centers. The main critical factors to consider in a Data Center architecture are identified, along with the best practices to implement, in order to assess the value of certification by the certifying entity, the Uptime Institute. It also discusses the possible interest in extending such certification/qualification to human resources, as a guarantee of service quality and as a marketing strategy. To support this work, a case study of seven Data Centers in Portugal, from both the public and private sectors, was carried out, allowing the verification and comparison of good practices as well as the less positive aspects to consider within this area. Finally, some reflections are offered on what the trend for the evolution of Data Centers may be from a quality perspective.
APA, Harvard, Vancouver, ISO, and other styles
16

Roozbeh, Amir. "Toward Next-generation Data Centers : Principles of Software-Defined “Hardware” Infrastructures and Resource Disaggregation." Licentiate thesis, KTH, Kommunikationssystem, CoS, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249618.

Full text
Abstract:
The cloud is evolving due to additional demands introduced by new technological advancements and the wide movement toward digitalization. Therefore, next-generation data centers (DCs) and clouds are expected (and need) to become cheaper, more efficient, and capable of offering more predictable services. Aligned with this, we examine the concept of software-defined “hardware” infrastructures (SDHI) based on hardware resource disaggregation as one possible way of realizing next-generation DCs. We start with an overview of the functional architecture of a cloud based on SDHI. Following this, we discuss a series of use-cases and deployment scenarios enabled by SDHI and explore the role of each functional block of SDHI’s architecture, i.e., cloud infrastructure, cloud platforms, cloud execution environments, and applications. Next, we propose a framework to evaluate the impact of SDHI on the techno-economic efficiency of DCs, specifically focusing on application profiling, hardware dimensioning, and total cost of ownership (TCO). Our study shows that combining resource disaggregation and software-defined capabilities makes DCs less expensive and easier to expand; hence they can rapidly follow the exponential demand growth. Additionally, we elaborate on the technologies behind SDHI, its challenges, and its potential future directions. Finally, to identify a suitable memory management scheme for SDHI and show its advantages, we focus on the management of the Last Level Cache (LLC) in currently available Intel processors. Aligned with this, we investigate how better management of the LLC can provide higher performance, more predictable response time, and improved isolation between threads. More specifically, we take advantage of the LLC’s non-uniform cache architecture (NUCA), in which the LLC is divided into “slices” and access by a core to the slice closest to it is faster than access to other slices. Based upon this, we introduce a new memory management scheme, called slice-aware memory management, which carefully maps the allocated memory to LLC slices based on their access latency rather than the de facto scheme that maps them uniformly. Many applications can benefit from our memory management scheme with relatively small changes. As an example, we show the potential benefits that Key-Value Store (KVS) applications gain by utilizing our memory management scheme. Moreover, we discuss how this scheme could be used to provide explicit CPU slicing, which is one of the expectations of SDHI and hardware resource disaggregation.


APA, Harvard, Vancouver, ISO, and other styles
17

Feller, Eugen. "Autonomic and Energy-Efficient Management of Large-Scale Virtualized Data Centers." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00785090.

Full text
Abstract:
Large-scale virtualized data centers require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges this thesis provides four main contributions. The first one proposes Snooze, a novel Infrastructure-as-a-Service (IaaS) cloud management system, which is designed to scale across many thousands of servers and virtual machines (VMs) while being easy to configure, highly available, and energy efficient. For scalability, Snooze performs distributed VM management based on a hierarchical architecture. To support ease of configuration and high availability Snooze implements self-configuring and self-healing features. Finally, for energy efficiency, Snooze integrates a holistic energy management approach via VM resource (i.e. CPU, memory, network) utilization monitoring, underload/overload detection and mitigation, VM consolidation (by implementing a modified version of the Sercon algorithm), and power management to transition idle servers into a power saving mode. A highly modular Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. Results show that: (i) distributed VM management does not impact submission time; (ii) fault tolerance mechanisms do not impact application performance and (iii) the system scales well with an increasing number of resources thus making it suitable for managing large-scale data centers. We also show that the system is able to dynamically scale the data center energy consumption with its utilization thus allowing it to conserve substantial power amounts with only limited impact on application performance. Snooze is an open-source software under the GPLv2 license. The second contribution is a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. ACO is interesting for VM placement due to its polynomial worst-case time complexity, close to optimal solutions and ease of parallelization. Simulation results show that while the scalability of the current algorithm implementation is limited to a smaller number of servers and VMs, the algorithm outperforms the evaluated First-Fit Decreasing greedy approach in terms of the number of required servers and computes close to optimal solutions. In order to enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based consolidation algorithm; (ii) a fully decentralized consolidation system based on an unstructured peer-to-peer network. The key idea is to apply consolidation only in small, randomly formed neighbourhoods of servers. We evaluated our approach by emulation on the Grid'5000 testbed using two state-of-the-art consolidation algorithms (i.e. Sercon and V-MAN) and our ACO-based consolidation algorithm. Results show our system to be scalable as well as to achieve a data center utilization close to the one obtained by executing a centralized consolidation algorithm.
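The following is a heavily simplified, one-dimensional sketch of an ACO-style placement that tries to minimize the number of active hosts. It is a toy illustration of the metaheuristic's general shape, not the algorithm developed in the thesis or anything implemented in Snooze; all parameters, demands, and the pheromone update rule are invented for illustration.

```python
# Toy ACO-style VM placement minimizing the number of active hosts (CPU only).
# Pheromone is kept per (VM, host-slot) pair; ants build feasible placements and
# good placements reinforce their assignments.

import random

def build_placement(vms, capacity, n_hosts, pheromone, alpha=1.0):
    """One ant: probabilistically assign each VM to a host slot that still fits it."""
    free = [capacity] * n_hosts
    placement = {}
    for vm, demand in vms.items():
        feasible = [h for h in range(n_hosts) if free[h] >= demand]
        weights = [pheromone[(vm, h)] ** alpha for h in feasible]
        host = random.choices(feasible, weights=weights)[0]
        placement[vm] = host
        free[host] -= demand
    return placement

def hosts_used(placement):
    return len(set(placement.values()))

def aco_place(vms, capacity, n_hosts, ants=20, iterations=30, rho=0.1):
    pheromone = {(vm, h): 1.0 for vm in vms for h in range(n_hosts)}
    best = None
    for _ in range(iterations):
        solutions = [build_placement(vms, capacity, n_hosts, pheromone) for _ in range(ants)]
        iteration_best = min(solutions, key=hosts_used)
        if best is None or hosts_used(iteration_best) < hosts_used(best):
            best = iteration_best
        for key in pheromone:                                   # evaporation
            pheromone[key] *= (1 - rho)
        deposit = 1.0 / hosts_used(iteration_best)
        for vm, host in iteration_best.items():                 # reinforce good assignments
            pheromone[(vm, host)] += deposit
    return best

random.seed(1)
vms = {f"vm{i}": d for i, d in enumerate([2.0, 1.0, 3.0, 1.5, 0.5, 2.5, 1.0, 1.5])}
best = aco_place(vms, capacity=4.0, n_hosts=len(vms))
print("hosts used:", hosts_used(best), "placement:", best)
```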
APA, Harvard, Vancouver, ISO, and other styles
18

Weerasinghe, Jagath [Verfasser], Andreas [Akademischer Betreuer] Herkersdorf, Christian [Gutachter] Plessl, and Andreas [Gutachter] Herkersdorf. "Standalone Disaggregated Reconfigurable Computing Platforms in Cloud Data Centers / Jagath Weerasinghe ; Gutachter: Christian Plessl, Andreas Herkersdorf ; Betreuer: Andreas Herkersdorf." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/1160381321/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Madi, wamba Gilles. "Combiner la programmation par contraintes et l’apprentissage machine pour construire un modèle éco-énergétique pour petits et moyens data centers." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0045/document.

Full text
Abstract:
Over the last decade, cloud computing technologies have grown considerably, which translates into a surge in data center power consumption. The magnitude of the problem has motivated numerous research studies on static or dynamic solutions for reducing the overall energy consumption of a data center. The aim of this thesis is to integrate renewable energy sources into dynamic energy optimization models for a data center. For this we use constraint programming as well as machine learning techniques. First, we propose a global constraint for task intersection that takes into account a resource with variable cost. Second, we propose two learning models, one for predicting the workload of a data center and one for generating such workload curves. Finally, we formalize the green-energy-aware scheduling problem (GEASP) and propose a global constraint-programming model as well as a search heuristic to solve it efficiently. The proposed model integrates the various aspects inherent to the dynamic planning problem in a data center: heterogeneous physical machines, various application types (i.e., interactive applications and batch applications), the operations and energy costs of turning physical machines on and off, interruption and resumption of batch applications, the CPU and RAM consumption of applications, task migration and the energy costs of migrations, prediction of green-energy availability, and the variable energy consumption of physical machines.
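As a toy illustration of the "follow the green energy" intuition, the sketch below greedily places interruptible batch jobs into the hours with the highest predicted green-energy availability. It deliberately ignores the constraint-programming formulation, migration costs, and power-state costs that the thesis actually models; all names and numbers are assumptions.

```python
# Greedy sketch of green-energy-aware scheduling: interruptible batch jobs are placed,
# longest first, into the hours with the highest predicted green-energy availability
# that still have CPU capacity. Illustrative only; not the thesis's CP model.

def schedule_batch_jobs(jobs, green_forecast_kwh, cpu_capacity):
    """jobs: dict job -> (cpu_demand, duration_hours); returns job -> list of hours."""
    hours_by_green = sorted(range(len(green_forecast_kwh)),
                            key=lambda h: green_forecast_kwh[h], reverse=True)
    free_cpu = [cpu_capacity] * len(green_forecast_kwh)
    schedule = {}
    for job, (cpu, duration) in sorted(jobs.items(), key=lambda kv: -kv[1][1]):
        chosen = [h for h in hours_by_green if free_cpu[h] >= cpu][:duration]
        if len(chosen) < duration:
            raise ValueError(f"not enough capacity for {job}")
        for h in chosen:
            free_cpu[h] -= cpu
        schedule[job] = sorted(chosen)
    return schedule

green = [5, 20, 80, 90, 60, 10]               # predicted green kWh per hour (hypothetical)
jobs = {"analytics": (8, 3), "backup": (4, 2), "reindex": (6, 1)}
print(schedule_batch_jobs(jobs, green, cpu_capacity=16))
```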
APA, Harvard, Vancouver, ISO, and other styles
20

Dagala, Wadzani Jabani. "Analysis of Total Cost of Ownership for Medium Scale Cloud Service Provider with emphasis on Technology and Security." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15003.

Full text
Abstract:
Total cost of ownership (TCO) is a major factor to consider when deciding whether to deploy cloud computing: the cost of owning and running a data centre weighs heavily on the IT manager or owner of a business organisation. The research work specifies the factors that make up the TCO for medium-scale cloud service providers, with emphasis on technology and security. An analysis was made of cloud service providers' expenses and of how to reduce the cost of ownership. In this research work, related articles from a wide range of sources were reviewed, reading through their abstracts and overviews to establish their relevance to the subject. Further interviews were conducted with two medium-scale cloud service providers and one cloud user. In this study, an average calculation of the TCO was made and a proposed cost-reduction method was applied. We also propose how users should decide which cloud services to deploy in terms of cost and security. We conclude that many articles focus their TCO calculation on the facility without placing emphasis on security. Security accumulates a large amount of hidden cost; this research work identifies that hidden cost, makes an average calculation, and proffers a method of reducing the TCO.
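As a purely illustrative sketch of the kind of calculation described (the cost categories and all figures below are placeholders, not values from the thesis), a TCO estimate can make the security line item, often buried as hidden cost, explicit:

def total_cost_of_ownership(capex, annual_opex, annual_security, years):
    # Acquisition cost plus recurring operational and security costs over the planning horizon.
    return capex + years * (annual_opex + annual_security)

# Placeholder figures for a hypothetical medium-scale provider (not data from the thesis).
with_security = total_cost_of_ownership(capex=500_000, annual_opex=120_000,
                                        annual_security=35_000, years=5)
without_security = total_cost_of_ownership(capex=500_000, annual_opex=120_000,
                                           annual_security=0, years=5)
print(f"5-year TCO: {with_security:,} vs {without_security:,} when security is ignored")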

APA, Harvard, Vancouver, ISO, and other styles
21

Furgiuele, Antonio. "Architecture of the cloud, virtualization takes command : learning from black boxes, data centers and an architecture of the conditioned environment." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81746.

Full text
Abstract:
Thesis (S.M. in History, Theory and Criticism of Art and Architecture)--Massachusetts Institute of Technology, Dept. of Architecture, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 127-128).
A single manageable architecture of the Cloud has been one of the most important social and technical changes of the 21st century. Cloud computing, our newest public utility, is an attempt to confront and control cultural risk; it has rendered the environment of our exchanges calculable, manageable, seemingly predictable, and, most importantly, a new form of capital. Cloud computing, in its most basic terms, is the virtualization of data storage and program access into an instantaneous service utility. The transformation of computing into a service industry is one of the key changes of the Information Age, and its logic is tied to the highly guarded mechanisms of a black box, an architecture machine, more commonly known as the data center. In 2008, on a day without the usual fanfare, barrage of academic manifestoes, or grand claims of paradigm shifts, virtualization quietly took command. It was a seemingly simple moment in which a cloud, the Cloud, emerged as a new form of managerial space that tied a large system of users to the hidden mechanisms of large-scale factories of information, a network of data centers. The project positions the Cloud and the data center within architectural discourse, both historically and materially, through an analysis of their relationship to an emergent digital sublime and of how they are managed, controlled and propelled through the obscure typologies of their architecture and images. By studying the Cloud and the data center through the notion of the sublime and the organizational structures of typology, we can more critically assess architecture's relationship to this new phase of the Information Age.
by Antonio Furgiuele.
S.M. in History, Theory and Criticism of Art and Architecture
APA, Harvard, Vancouver, ISO, and other styles
22

Ruty, Guillaume. "Towards more scalability and flexibility for distributed storage systems." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT006/document.

Full text
Abstract:
Les besoins en terme de stockage, en augmentation exponentielle, sont difficilement satisfaits par les systèmes de stockage distribué traditionnels. Alors que les performances des disques ont ratrappé celles des cartes réseau en terme d'ordre de grandeur, leur capacité ne croit pas à la même vitesse que l'ensemble des données requérant d'êtres stockées, notamment à cause de l'avènement des applications de big data. Par ailleurs, l'équilibre de performances entre disques, cartes réseau et processeurs a changé et les états de fait sur lesquels se basent la plupart des systèmes de stockage distribué actuels ne sont plus vrais. Cette dissertation explique de quelle manière certains aspects de tels systèmes de stockages peuvent être modifiés et repensés pour faire une utilisation plus efficace des ressources qui les composent. Elle présente une architecture de stockage nouvelle qui se base sur une couche de métadonnées distribuée afin de fournir du stockage d'objet de manière flexible tout en passant à l'échelle. Elle détaille ensuite un algorithme d'ordonnancement des requêtes permettant a un système de stockage générique de traiter les requêtes de clients en parallèle de manière plus équitable. Enfin, elle décrit comment améliorer le cache générique du système de fichier dans le contexte de systèmes de stockage distribué basés sur des codes correcteurs avant de présenter des contributions effectuées dans le cadre de courts projets de recherche
The exponentially growing demand for storage puts huge stress on traditional distributed storage systems. While storage devices' performance has caught up with that of network devices in the last decade, their capacity does not grow as fast as the rate of data growth, especially with the rise of cloud big data applications. Furthermore, the performance balance between storage, network and compute devices has shifted, and the assumptions underlying most distributed storage systems no longer hold. This dissertation explains how several aspects of such storage systems can be modified and rethought to make more efficient use of the resources at their disposal. It presents an original architecture that uses a distributed metadata layer to provide flexible and scalable object-level storage, then proposes a scheduling algorithm that improves how a generic storage system handles concurrent requests. Finally, it describes how to improve legacy filesystem-level caching for erasure-code-based distributed storage systems, before presenting a few other contributions made in the context of short research projects
APA, Harvard, Vancouver, ISO, and other styles
23

Rostirolla, Gustavo. "Ordonnancement dans un centre de calculs alimenté par des sources d'énergie renouvelables sans connexion au réseau avec une charge de travail mixte basée sur des phases." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30160.

Full text
Abstract:
Les centres de données sont reconnus pour être l'un des principaux acteurs en matière de consommation d'énergie du fait de l'augmentation de l'utilisation du cloud, des services web et des applications de calcul haute performance dans le monde entier. En 2006, les centres de données ont consommé 61,4 milliards de kWh aux états-Unis. Au niveau mondial, les centres de données consomment actuellement plus d'énergie que l'ensemble du Royaume-Uni, c'est-à-dire environ 1,3% de la consommation électrique mondiale, et ils sont de fait appelés les usines de l'ère numérique. Un des moyens d'atténuer le changement climatique est d'alimenter les centres de données en énergie renouvelable (énergie propre). La grande majorité des fournisseurs de cloud computing qui prétendent alimenter leurs centres de données en énergie verte sont en fait connectés au réseau classique et déploient des panneaux solaires et des éoliennes ailleurs puis vendent l'électricité produite aux compagnies d'électricité. Cette approche entraîne des pertes d'énergie lorsque l'électricité traverse le réseau. Même si différents efforts ont été réalisés au niveau informatique dans les centres de données partiellement alimentés par des énergies renouvelables, des améliorations sont encore possibles notamment concernant l'ordonnancement prenant en compte les sources d'énergie renouvelables sur site sans connexion au réseau et leur intermittence. C'est le but du projet ANR DataZERO, dans le cadre duquel cette thèse a été réalisée. L'efficacité énergétique dans les centres de données étant directement liée à la consommation de ressources d'un nœud de calcul, l'optimisation des performances et un ordonnancement efficace des calculs sont essentiels pour économiser l'énergie. La spécificité principale de notre approche est de placer le centre de données sous une contrainte de puissance, provenant entièrement d'énergies renouvelables : la puissance disponible peut ainsi varier au cours du temps. L'ordonnancement de tâches sous ce genre de contrainte rend le problème plus difficile, puisqu'on doit notamment s'assurer qu'une tâche qui commence aura assez d'énergie pour aller jusqu'à son terme. Dans cette thèse, nous commençons par proposer une planification de tâches de type "batch" qui se caractérisent par leur instant d'arrivée, leur date d'échéance et leurs demandes de ressources tout en respectant une contrainte de puissance. Les données utilisées pour les tâches de type batch viennent de traces de centres de données et contiennent des mesures de consommation CPU, mémoire et réseau. Quant aux enveloppes de puissance considérées, elles représentent ce que pourrait fournir un module de décision électrique, c'est-à-dire la production d'énergie prévue (énergie renouvelable seulement) basée sur les prévisions météorologiques. L'objectif est de maximiser la Qualité de Service avec une contrainte sur la puissance électrique. Par la suite, nous examinons une charge de travail composée de tâches de type "batch" et de services, où la consommation des ressources varie au cours du temps. Les tracecs utilisées pour les services proviennent d'une centre de données à "business critique". Dans ce cadre, nous envisageons le concpet de phases, dans lequel les changements significatifs de consommation de resources à l'intérieur d'une même tâche marquent le début d'une nouvelle phase. Nous considérons également un modèle de tâches pouvant recevoir moins de ressources que demandées. 
Nous étudions l'impact de ce modèle sur le profit du centre de données pour chaque type de tâche. Nous intégrons aussi le concept de "corrélation croisée" pour évaluer où placer une tâche selon une courbe de puissance afin de trouver le meilleur nœud pour placer plusieurs tâches (c.-à-d. Partager les ressources)
Due to the increase of cloud, web-services and high performance computing demands all over the world, datacenters are now known to be among the biggest actors in terms of energy consumption. In 2006 alone, datacenters were responsible for consuming 61.4 billion kWh in the United States. At the global scale, datacenters currently consume more energy than the entire United Kingdom, about 1.3% of the world's electricity consumption, and have even been called the factories of the digital age. Supplying datacenters with clean-to-use renewable energy is therefore essential to help mitigate climate change. The vast majority of cloud providers that claim to power their datacenters with green energy remain connected to the classical grid: they deploy solar panels or wind turbines somewhere else and sell the energy to electricity companies, which incurs energy losses as the electricity travels through the grid. Even though several efforts have been conducted at the computing level in datacenters partially powered by renewable energy sources, scheduling that considers on-site renewable energy sources and their variations, without connection to the grid, remains largely unexplored. Since energy efficiency in datacenters is directly related to the resource consumption of the computing nodes, performance optimization and efficient load scheduling are essential for energy saving. Today, cloud computing is the basis of datacenters, either in a public or a private fashion. The main particularity of our approach is that we consider a power envelope composed only of renewable energy as a constraint, hence with a variable amount of power available at each moment. Scheduling under this kind of constraint becomes more complex: without further checks, we cannot ensure that a running task will run to completion. We start by addressing the IT load scheduling of batch tasks, which are characterized by their release time, due date and resource demand, in a cloud datacenter while respecting the aforementioned power envelope. The data utilized for the batch tasks comes from datacenter traces containing CPU, memory and network values. The power envelopes considered represent an estimate that would be provided by a power decision module, i.e., the expected power production based on weather forecasts. The aim is to maximize the Quality of Service under a variable constraint on electrical power. Furthermore, we explore a workload composed of batch tasks and services, where resource consumption varies over time. The traces utilized for the service tasks originate from a business-critical datacenter. In this case we rely on the concept of phases, where each significant change in resource consumption constitutes a new phase of the given task. In this task model, phases may also receive fewer resources than requested. The reduction of resources can impact the QoS and consequently the datacenter profit. In this approach we also include the concept of cross-correlation to evaluate where to place a task under a power curve, and which node is best for placing tasks together (i.e., sharing resources). Finally, considering the previous workload of batch tasks and services, we present an approach towards handling unexpected events in the datacenter. More specifically, we focus on IT-related events such as tasks arriving at any given time, demanding more or fewer resources than expected, or finishing at a different time than initially expected. We adapt the proposed algorithms to take actions depending on which event occurs, e.g., task degradation to reduce the impact on the datacenter profit
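As an illustrative aside (a sketch under assumed data, not the algorithm from the thesis), one plausible reading of the cross-correlation idea is to slide a task's power profile along the forecast green-power envelope, keep only the feasible start slots, and score each placement by the correlation between the two curves:

import numpy as np

# Forecast renewable power envelope (kW per time slot) and a task's power profile (kW),
# both made up for illustration.
envelope = np.array([2.0, 3.0, 8.0, 10.0, 9.0, 6.0, 3.0, 2.0])
task = np.array([4.0, 5.0, 4.0])

best_start, best_score = None, float("-inf")
for start in range(len(envelope) - len(task) + 1):
    window = envelope[start:start + len(task)]
    if np.all(window >= task):                 # feasibility: never exceed the available green power
        score = float(np.dot(window, task))    # cross-correlation score of this placement
        if score > best_score:
            best_start, best_score = start, score

print(f"Best start slot: {best_start} (score {best_score:.0f})")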
APA, Harvard, Vancouver, ISO, and other styles
24

Benblidia, Mohammed Anis. "Pour une meilleure efficacité énergétique dans un système Smart Grid - Cloud." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0019.

Full text
Abstract:
Dans cette thèse, nous étudions l’efficacité énergétique des infrastructures informatiques dans un système smart grid – cloud. Nous nous intéressons plus particulièrement aux réseaux de communication et aux data centers du cloud. Nous nous focalisons sur ces derniers à cause de leur grande consommation d’énergie et du rôle vital qu’ils jouent dans un monde connecté en pleine expansion, les positionnant, ainsi, comme des éléments importants dans un système smart grid - cloud. De ce fait, les travaux de cette thèse s’inscrivent dans le cadre d’un seul framework intégrant le smart grid, le microgrid, le cloud, les data centers et les utilisateurs. Nous avons, en effet, étudié l’interaction entre les data centers du cloud et le fournisseur d’énergie du smart grid et nous avons proposé des solutions d’allocation d’énergie et de minimisation du coût d’énergie en utilisant deux architectures : (1) une architecture smart grid-cloud et (2) une architecture microgrid-cloud. Par ailleurs, nous avons porté une attention particulière à l’exécution des requêtes des utilisateurs tout en leur garantissant un niveau de qualité de service satisfaisant dans une architecture fog -cloud. En comparaison avec les travaux de l’état de l’art, les résultats de nos contributions ont montré qu’ils répondent aux enjeux identifiés, notamment en réduisant les émissions de gaz à effet de serre et le coût d’énergie des data centers
This thesis considers the energy efficiency of information and communication infrastructures in a smart grid - cloud system. It especially deals with communication networks and cloud data centers because of their high energy consumption, which gives them an important role in the network. The contributions of this thesis are implemented within a single framework integrating the smart grid, microgrid, cloud, data centers and users. Indeed, we have studied the interaction between cloud data centers and the smart grid provider, and we have proposed energy-efficient power allocation solutions and an energy cost minimization scheme using two architectures: a smart grid-cloud architecture and a microgrid-cloud architecture. In addition, we paid close attention to executing user requests while ensuring a good quality of service in a fog-cloud architecture. In comparison with state-of-the-art works, the results of our contributions show that they address the identified challenges, particularly in terms of reducing the carbon emissions and energy costs of cloud data centers
APA, Harvard, Vancouver, ISO, and other styles
25

Božić, Nikola. "Blockchain technologies and their application to secure virtualized infrastructure control." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS596.

Full text
Abstract:
Blockchain est une technologie qui fait du concept de registre partagé à partir de systèmes distribués une réalité pour un certain nombre de domaines d’application, du crypto-monnaie à potentiellement tout système industriel nécessitant une prise de décision décentralisée, robuste, fiable et automatisée dans une situation à plusieurs parties prenantes. Néanmoins, les avantages réels de l’utilisation de la blockchain au lieu de toute autre solution traditionnelle (telle que des bases de données centralisées) ne sont pas complètement compris à ce jour, ni quel type de blockchain répond le mieux aux exigences du cas d'utilisation et à son utilisation. Au début, notre objectif est de fournir une sorte de « vademecum » à la communauté, tout en donnant une présentation générale de la blockchain qui dépasse son cas d'utilisation en des crypto monnaies telle que Bitcoin, et en passant en revue une sélection de la vaste littérature qui est apparue au cours des dernières années. Nous décrivons les exigences clés et leur évolution lorsque nous passons des blockchains publics à priver, en présentant les différences entre les mécanismes de consensus proposés et expérimentés, et en décrivant les plateformes de blockchain existantes. De plus, nous présentons la blockchain B-VMOA pour sécuriser les opérations d’orchestration de machines virtuelles pour les systèmes de cloud computing et de virtualisation des fonctions réseau en appliquant la logique de vademecum proposée. À l'aide d'exemples de didacticiels, nous décrivons nos choix de conception et élaborons des plans de mise en œuvre. Nous développons plus avant la logique de vademecum appliquée à l'orchestration dans le cloud et comment elle peut conduire à des spécifications de plateforme précises. Nous capturons les opérations du système clés et les interactions complexes entre elles. Nous nous concentrons sur la dernière version de la plateforme Hyperledger Fabric en tant que moyen de développer le système B-VMOA. De plus, Hyperledger Fabric optimise les performances, la sécurité et l’évolutivité conçues pour le réseau B-VMOA en séparant la charge de travail entre (i) les homologues d’exécution et de validation de transaction et (ii) les nœuds qui sont charges pour l'ordre des transactions. Nous étudions et utilisons une architecture <> qui différencie notre système B-VMOA conçu des systèmes distribués hérités qui suivent une architecture de réplication d'état de machine traditionnelle. Nous paramétrons et validons notre modèle avec les données recueillies sur un banc d'essai réaliste, en présentant une étude empirique pour caractériser les performances du système et identifier les goulots d'étranglement potentiels. En outre, nous présentons les outils que nous avons utilisés, la configuration du réseau et la discussion sur les observations empiriques issues de la collecte de données. Nous examinons l'impact de divers paramètres configurables pour mener une étude approfondie des composants principaux et des performances de référence pour les modèles d'utilisation courants. À savoir, B-VMOA est destiné à être exécuté dans un centre de données. Différentes topologies d'interconnexion de centres de données évoluent différemment en raison des protocoles de communication. Il semble difficile de concevoir efficacement les interconnexions réseau de manière à rentabiliser le déploiement et la maintenance de l’infrastructure. 
Nous analysons les propriétés structurelles de plusieurs topologies DCN et présentons également une comparaison entre ces architectures de réseau dans le but de réduire les coûts indirects de la technologie B-VMOA. D'après notre analyse, nous recommandons l'hypercube topologie comme solution pour remédier au goulot d'étranglement des performances dans le plan de contrôle B-VMOA provoqué par gossip, le protocole de diffusion, ainsi qu'une estimation de l'amélioration des performances
Blockchain is a technology that makes the shared registry concept from distributed systems a reality for a number of application domains, from cryptocurrencies to potentially any industrial system requiring decentralized, robust, trusted and automated decision making in a multi-stakeholder situation. Nevertheless, the actual advantages of using blockchain instead of a traditional solution (such as centralized databases) are not completely understood to date; at the very least, there is a strong need for a vademecum guiding designers toward the right decision about when to adopt blockchain or not, which kind of blockchain best meets use-case requirements, and how to use it. First, we aim to provide the community with such a vademecum, while giving a general presentation of blockchain that goes beyond its usage in Bitcoin and surveying a selection of the vast literature that has emerged in the last few years. We draw out the key requirements and their evolution when passing from permissionless to permissioned blockchains, presenting the differences between proposed and experimented consensus mechanisms, and describing existing blockchain platforms. Furthermore, we present the B-VMOA blockchain, which secures virtual machine orchestration operations for cloud computing and network functions virtualization systems by applying the proposed vademecum logic. Using tutorial examples, we describe our design choices and draw up implementation plans. We further develop the vademecum logic applied to cloud orchestration and show how it can lead to precise platform specifications. We capture the key system operations and the complex interactions between them. We focus on the latest release of the Hyperledger Fabric platform as a way to develop the B-VMOA system. Moreover, Hyperledger Fabric optimizes the conceived B-VMOA network's performance, security, and scalability through workload separation across (i) transaction execution and validation peers and (ii) transaction ordering nodes. We study and use a distributed execute-order-validate architecture, which differentiates our conceived B-VMOA system from legacy distributed systems that follow a traditional state-machine replication architecture. We parameterize and validate our model with data collected from a realistic testbed, presenting an empirical study to characterize system performance and identify potential performance bottlenecks. Furthermore, we present the tools we used, the network setup and a discussion of empirical observations from the data collection. We examine the impact of various configurable parameters to conduct an in-depth study of core components and benchmark performance for common usage patterns. Notably, B-VMOA is meant to run within a data center. Different data center interconnection topologies scale differently due to their communication protocols, and designing the network interconnections so that deploying and maintaining the infrastructure remains cost-effective raises enormous challenges. We analyze the structural properties of several DCN topologies and present a comparison of these network architectures with the aim of reducing B-VMOA overhead costs. From our analysis, we recommend the hypercube topology as a solution to the performance bottleneck in the B-VMOA control plane caused by the gossip dissemination protocol, along with an estimate of the performance improvement
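As an illustrative aside (not from the thesis itself), the structural property behind the hypercube recommendation is easy to state: a d-dimensional hypercube has 2**d nodes, node degree d and diameter d, so gossip dissemination needs only on the order of log2(N) hops. A minimal sketch:

import math

def hypercube_properties(num_nodes):
    # Smallest hypercube dimension that can host num_nodes; degree and diameter both equal d,
    # so a gossip round propagates across the whole network in at most d hops.
    d = math.ceil(math.log2(num_nodes))
    return {"dimension": d, "degree": d, "diameter": d, "links": d * 2 ** (d - 1)}

for n in (64, 1024, 4096):
    print(n, hypercube_properties(n))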
APA, Harvard, Vancouver, ISO, and other styles
26

Bayati, Léa. "Data centers energy optimization." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC0063.

Full text
Abstract:
Pour garantir à la fois une bonne performance des services offerts par des centres de données, et une consommation énergétique raisonnable, une analyse détaillée du comportement de ces systèmes est essentielle pour la conception d'algorithmes d'optimisation efficaces permettant de réduire la consommation énergétique. Cette thèse, s'inscrit dans ce contexte, et notre travail principal consiste à concevoir des systèmes de gestion dynamique de l'énergie basés sur des modèles stochastiques de files d'attente contrôlées. Le but est de rechercher les politiques de contrôle optimales afin de les appliquer sur des centres de données, ce qui devrait répondre aux demandes croissantes de réduction de la consommation énergétique et de la pollution numérique tout en préservant la qualité de service. Nous nous sommes intéressés d’abord à la modélisation de la gestion dynamique de l’énergie par un modèle stochastique pour un centre de données homogène, principalement pour étudier certaines propriétés structurelles de la stratégie optimale, telle que la monotonie. Après, comme des centres de données présentent un niveau non négligeable d'hétérogénéité de serveurs en termes de consommation d'énergie et de taux de service, nous avons généralisé le modèle homogène à un modèle hétérogène. De plus, comme le réveil (resp. l'arrêt) d’un serveur de centre de données n’est pas instantané et nécessite un peu plus de temps pour passer du mode veille au mode prêt à fonctionner, nous avons étendu le modèle dans le but d'inclure cette latence temporelle des serveurs. Tout au long de cette optimisation exacte, les arrivées et les taux de service sont spécifiés avec des histogrammes pouvant être obtenus à partir de traces réelles, de données empiriques ou de mesures de trafic entrant. Nous avons montré que la taille du modèle MDP est très grande et conduit au problème de l’explosion d’espace d'états et à un temps de calcul important. Ainsi, nous avons montré que l'optimisation optimale nécessitant le passage par un MDP est souvent difficile, voire pratiquement impossible pour les grands centres de données. Surtout si nous prenons en compte des aspects réels tels que l'hétérogénéité ou la latence des serveurs. Alors, nous avons suggéré ce que nous appelons l’algorithme greedy-window qui permet de trouver une stratégie sous-optimale meilleure que celle produite lorsqu’on envisage un mécanisme spécial comme les approches à seuil. Et plus important encore, contrairement à l’approche MDP, cet algorithme n’exige pas la construction complète de la structure qui encode toutes les stratégies possibles. Ainsi, cette algorithme donne une stratégie très proche de la stratégie optimale avec des complexités spatio-temporelles très faibles. Cela rend cette solution pratique, évolutive, dynamique et peut être mise en ligne
To ensure both good data center service performance and reasonable power consumption, a detailed analysis of the behavior of these systems is essential for designing efficient optimization algorithms that reduce energy consumption. This thesis fits into this context: our main work is to design dynamic energy management systems based on stochastic models of controlled queues. The goal is to search for optimal control policies for data center management, which should meet the growing demands of reducing energy consumption and digital pollution while maintaining quality of service. We first focused on modeling dynamic energy management with a stochastic model of a homogeneous data center, mainly to study structural properties of the optimal strategy, such as monotonicity. Afterwards, since data centers have a significant level of server heterogeneity in terms of energy consumption and service rates, we generalized the homogeneous model to a heterogeneous one. In addition, since waking up and shutting down a data center server are not instantaneous, and a server needs some time to go from sleep mode to ready-to-work mode, we extended the model to include this server latency. Throughout this exact optimization, arrivals and service rates are specified with histograms that can be obtained from actual traces, empirical data, or traffic measurements. We have shown that the MDP model is very large and leads to state-space explosion and long computation times. Thus, exact optimization through an MDP is often difficult or practically impossible for large data centers, especially when real aspects such as server heterogeneity or latency are taken into account. We therefore propose what we call the greedy-window algorithm, which finds a sub-optimal strategy better than that produced by special mechanisms such as threshold approaches. More importantly, unlike the MDP approach, this algorithm does not require the complete construction of the structure that encodes all possible strategies. The algorithm thus yields a strategy very close to the optimal one with very low time and space complexity, which makes the solution practical, scalable, dynamic, and suitable for online use
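As an illustrative aside, below is a toy threshold controller of the kind the thesis compares its greedy-window algorithm against; this is not the greedy-window algorithm itself, and the thresholds and workloads are arbitrary placeholders.

def threshold_policy(queue_length, servers_on, up_threshold=20, down_threshold=5,
                     min_servers=1, max_servers=10):
    # Wake one extra server when the backlog per active server grows too large,
    # put one to sleep when it shrinks; otherwise keep the current configuration.
    if queue_length > up_threshold * servers_on and servers_on < max_servers:
        return servers_on + 1
    if queue_length < down_threshold * servers_on and servers_on > min_servers:
        return servers_on - 1
    return servers_on

servers = 3
for backlog in (10, 90, 150, 40, 8):
    servers = threshold_policy(backlog, servers)
    print(f"backlog={backlog:3d} -> servers on: {servers}")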
APA, Harvard, Vancouver, ISO, and other styles
27

Sriram, Ilango Leonardo. "Exploration of scaling properties in cloud data centres." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.573405.

Full text
Abstract:
In the past five years, the phrase "cloud computing" has come into common usage to mean remotely-hosted computing services, where the provider of the service relies on one or more highly automated large-scale data centres as the computing infrastructure. With increasing demand for cloud computing, economies of scale are causing a race towards ever-larger data centres. With these increases in size comes a problematic growth in system complexity: many conventional management techniques that work well when controlling a relatively small number of data-centre nodes become impracticable on larger scales. There is currently very little engineering theory, or experience-based teaching, that can be brought to bear on the design and management of large-scale data centres. One major issue facing traditional academic and industrial research facilities is that, contrary to the way data centre designers used to work, in the case of cloud data centres the systems they are able to use for detailed development and testing are always going to be much smaller than the final systems that go into production. For this reason, it is fair to characterise much of the current development work as being more art than science, and this imprecision can lead to costly errors. In almost all current engineering practice, predictive simulation studies are used for rapid exploration and evaluation of design alternatives before they go into production. This helps avoid costly mistakes. Despite this well-established tradition of computational modelling and simulation, there are currently no comparable tools for cloud-scale computing data centres. The research work described in this thesis is motivated by exactly that problem. I argue that there is a need for tools that allow owners and managers of cloud computing infrastructure to evaluate alternative designs, and to answer "what-if" questions. In many other areas of engineering, predictive computer simulation systems allow engineers to explore aspects of a design for an artefact without that artefact actually having to be constructed. I have developed SPECI (Simulation Program for Elastic Cloud Infrastructures) and released it to the open-source community as a first step towards meeting this need. The research presented in this thesis covers several disciplines, presenting the cloud middleware model of component policy subscription updates that are used to manage services in the system, introducing simulation to this model, and using recent advances from complex network theory to model subscription distributions. It is a first step towards developing adaptive data-centre management policies that "intelligently" and dynamically organise and reorganise the network of components that work together within the data-centre in light of changing demands.
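As an illustrative aside (SPECI itself is a much richer simulator; this is only a sketch of the underlying idea, with made-up events), predictive data-centre simulators are typically built around a discrete-event loop that replays subscription-update events in time order:

import heapq

# A few made-up policy-subscription events: (time in seconds, description).
events = [
    (0.0, "node 0 publishes a policy update"),
    (1.5, "node 7 receives the update"),
    (3.2, "node 7 forwards the update to its subscribers"),
]
heapq.heapify(events)

# Discrete-event loop: always process the earliest pending event next.
while events:
    clock, what = heapq.heappop(events)
    print(f"t={clock:4.1f}s  {what}")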
APA, Harvard, Vancouver, ISO, and other styles
28

Sergejev, Ivan. "Exposing the Data Center." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/51838.

Full text
Abstract:
Given the rapid growth in the importance of the Internet, data centers - the buildings that store information on the web - are quickly becoming the most critical infrastructural objects in the world. However, so far they have received very little, if any, architectural attention. This thesis proclaims data centers to be the 'churches' of the digital society and proposes a new type of publicly accessible data center. The thesis starts with a brief overview of the history of data centers and the Internet in general, leading to a manifesto for making data centers into public facilities with an architecture of their own. It then proposes a roadmap for the possible future development of the building type, with suggestions for placing future data centers in urban environments, incorporating public programs as part of the building program, and optimizing the inner workings of a typical data center. The final part of the work concentrates on a design for an exemplary new data center, buildable with currently available technologies. This thesis aims to: 1) change the public perception of the Internet as a non-physical thing, and of data centers as purely functional infrastructural objects without any deeper cultural significance, and 2) propose a new architectural language for the type.
Master of Architecture
APA, Harvard, Vancouver, ISO, and other styles
29

Zhuang, Hao. "Performance Evaluation of Virtualization in Cloud Data Center." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104206.

Full text
Abstract:
Amazon Elastic Compute Cloud (EC2) has been adopted by a large number of small and medium enterprises (SMEs), e.g. foursquare, Monster World, and Netflix, to provide various kinds of services. Existing work in the literature has investigated the variation and unpredictability of cloud services and reported interesting observations about cloud offerings, but it has failed to reveal the underlying causes of the varied behaviour of these services. In this thesis, we looked into the underlying scheduling mechanisms and hardware configurations of Amazon EC2 and investigated their impact on the performance of the virtual machine instances running atop them. Specifically, several instances from the standard and high-CPU instance families are covered to shed light on the hardware upgrades and replacements of Amazon EC2. The large instance type from the standard family is then selected for focused analysis. To better understand the various behaviors of the instances, a local cluster environment was set up, consisting of two Intel Xeon servers using different scheduling algorithms. Through a series of benchmark measurements, we observed the following findings: (1) Amazon utilizes highly diversified hardware to provision different instances. This results in significant performance variation, which can reach up to 30%. (2) Two different scheduling mechanisms were observed: one is similar to the Simple Earliest Deadline First (SEDF) scheduler, whilst the other resembles the Credit scheduler of the Xen hypervisor. These two scheduling mechanisms also give rise to variations in performance. (3) By applying a simple "trial-and-failure" instance selection strategy, the cost saving is surprisingly significant. Given a certain distribution of fast and slow instances, the achievable cost saving can reach 30%, which is attractive to SMEs that use the Amazon EC2 platform.
Amazon Elastic Compute Cloud (EC2) har antagits av ett stort antal små och medelstora företag (SMB), t.ex. foursquare, Monster World, och Netflix, för att ge olika typer av tjänster. Det finns en del tidigare arbeten i den aktuella litteraturen som undersöker variationen och oförutsägbarheten av molntjänster. Dessa arbetenhar visat intressanta iakttagelser om molnerbjudanden, men de har misslyckats med att avslöja den underliggande kärnan hos de olika utseendena för molntjänster. I denna avhandling tittade vi på de underliggande schemaläggningsmekanismerna och maskinvarukonfigurationer i Amazon EC2, och undersökte deras inverkan på resultatet för de virtuella maskiners instanser som körs ovanpå. Närmare bestämt är det flera fall med standard- och hög-CPU instanser som omfattas att belysa uppgradering av hårdvara och utbyte av Amazon EC2. Stora instanser från standardfamiljen är valda för att genomföra en fokusanalys. För att bättre förstå olika beteenden av de olika instanserna har lokala kluster miljöer inrättas, dessa klustermiljöer består av två Intel Xeonservrar och har inrättats med hjälp av olika schemaläggningsalgoritmer. Genom en serie benchmarkmätningar observerade vi följande slutsatser: (1) Amazon använder mycket diversifierad hårdvara för att tillhandahållandet olika instanser. Från de olika instans-sub-typernas perspektiv leder hårdvarumångfald till betydande prestationsvariation som kan nå upp till 30%. (2) Två olika schemaläggningsmekanismer observerades, en liknande Simple Earliest Deadline Fist(SEDF) schemaläggare, medan den andra mer liknar Credit-schemaläggaren i Xenhypervisor. Dessa två schemaläggningsmekanismer ger även upphov till variationer i prestanda. (3) Genom att tillämpa en enkel "trial-and-failure" strategi för val av instans, är kostnadsbesparande förvånansvärt stor. Med tanke på fördelning av snabba och långsamma instanser kan kostnadsbesparingen uppgå till 30%, vilket är attraktivt för små och medelstora företag som använder Amazon EC2 plattform.
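As an illustrative aside (the benchmark, the instance names and the trial limit are placeholders, not parts of the thesis), the "trial-and-failure" selection strategy boils down to a simple loop: launch an instance, benchmark it, keep it if it is fast, otherwise terminate it and try again.

import random

def benchmark(instance_id):
    # Placeholder for a real CPU benchmark run on the instance (e.g. timing a fixed workload);
    # here the outcome is simply random.
    return random.choice(["fast", "slow"])

def acquire_fast_instance(max_trials=10):
    # Keep launching instances and discard the slow ones until a fast one is found.
    for trial in range(1, max_trials + 1):
        instance = f"i-{trial:04d}"          # hypothetical instance identifier
        if benchmark(instance) == "fast":
            return instance, trial
        print(f"{instance} benchmarked slow, terminating and retrying")
    return None, max_trials

instance, trials = acquire_fast_instance()
print(f"Kept {instance} after {trials} trial(s)")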
APA, Harvard, Vancouver, ISO, and other styles
30

Gao, Xing. "Investigating Emerging Security Threats in Clouds and Data Centers." W&M ScholarWorks, 2018. https://scholarworks.wm.edu/etd/1550153840.

Full text
Abstract:
Data centers have been growing rapidly in recent years to meet the surging demand of cloud services. However, the expanding scale of a data center also brings new security threats. This dissertation studies emerging security issues in clouds and data centers from different aspects, including low-level cooling infrastructures and different virtualization techniques such as container and virtual machine (VM). We first unveil a new vulnerability called reduced cooling redundancy that might be exploited to launch thermal attacks, resulting in severely worsened thermal conditions in a data center. Such a vulnerability is caused by the wide adoption of aggressive cooling energy saving policies. We conduct thermal measurements and uncover effective thermal attack vectors at the server, rack, and data center levels. We also present damage assessments of thermal attacks. Our results demonstrate that thermal attacks can negatively impact the thermal conditions and reliability of victim servers, significantly raise the cooling cost, and even lead to cooling failures. Finally, we propose effective defenses to mitigate thermal attacks. We then perform a systematic study to understand the security implications of the information leakage in multi-tenancy container cloud services. Due to the incomplete implementation of system resource isolation mechanisms in the Linux kernel, a spectrum of system-wide host information is exposed to the containers, including host-system state information and individual process execution information. By exploiting such leaked host information, malicious adversaries can easily launch advanced attacks that can seriously affect the reliability of cloud services. Additionally, we discuss the root causes of the containers' information leakage and propose a two-stage defense approach. The experimental results show that our defense is effective and incurs trivial performance overhead. Finally, we investigate security issues in the existing VM live migration approaches, especially the post-copy approach. While the entire live migration process relies upon reliable TCP connectivity for the transfer of the VM state, we demonstrate that the loss of TCP reliability leads to VM live migration failure. By intentionally aborting the TCP connection, attackers can cause unrecoverable memory inconsistency for post-copy, significantly increase service downtime, and degrade the running VM's performance. From the offensive side, we present detailed techniques to reset the migration connection under heavy networking traffic. From the defensive side, we also propose effective protection to secure the live migration procedure.
APA, Harvard, Vancouver, ISO, and other styles
31

Raad, Patrick. "Protocol architecture and algorithms for distributed data center networks." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066571/document.

Full text
Abstract:
De nos jours les données ainsi que les applications dans le nuage (cloud) connaissent une forte croissance, ce qui pousse les fournisseurs à chercher des solutions garantissant un lien réseau stable et résilient à leurs utilisateurs. Dans cette thèse on étudie les protocoles réseaux et les stratégies de communication dans un environnement de centre de données distribués. On propose une architecture cloud distribuée, centrée sur l’utilisateur et qui a pour but de: (i) migrer des machines virtuelles entre les centres de données avec un temps d’indisponibilité faible; (ii) fournir un accès résilient aux machines virtuelles; (iii) minimiser le délai d'accès au cloud. On a identifié deux problèmes de décision: le problème d'orchestration de machines virtuelles, prenant en compte la mobilité des utilisateurs, et le problème de basculement et de configuration des localisateurs, prenant en compte les états des liens inter- et intra-centre de données. On évalue notre architecture en utilisant une plate-forme de test avec des centres de données distribués géographiquement et en simulant des scenarios basés sur des traces de mobilités réelles. On montre que, grâce à quelques modifications apportées aux protocoles d'overlay, on peut avoir des temps d'indisponibilité très faibles pendant la migration de machines virtuelles entre deux centres de données. Puis on montre qu’en reliant la mobilité des machines virtuelles aux déplacement géographiques des utilisateurs, on peut augmenter le débit de la connexion. De plus, quand l’objectif est de maximiser le débit entre l’utilisateur et sa ressource, on démontre par des simulations que la décision de l'emplacement des machines virtuelles est plus importante que la décision de basculement de point d'entrée du centre de données. Enfin, grâce à un protocole de transport multi-chemins, on montre comment optimiser les performances de notre architecture et comment à partir des solutions de routage intra-centre de données on peut piloter le basculement des localisateurs
While many business and personal applications are being pushed to the cloud, offering reliable and stable network connectivity to cloud-hosted services becomes an important challenge for future networks. In this dissertation, we design advanced network protocols, algorithms and communication strategies to cope with this evolution in distributed data center architectures. We propose a user-centric distributed cloud network architecture that is able to: (i) migrate virtual resources between data centers with an optimized service downtime; (ii) offer resilient access to virtual resources; (iii) minimize the cloud access latency. We identify two main decision-making problems: the virtual machine orchestration problem, which also takes user mobility into account, and the routing locator switching configuration problem, which takes into account both inter- and intra-data center link states. We evaluate our architecture using real testbeds of geographically distributed data centers, and we also simulate realistic scenarios based on real mobility traces. We show that migrating virtual machines between data centers with negligible downtime is possible by enhancing overlay protocols. We then demonstrate that by linking cloud virtual resource mobility to user mobility we can obtain a considerable gain in transfer rates. We prove by simulations using real traces that the virtual machine placement decision is more important than the routing locator switching decision when the goal is to increase the connection throughput: the cloud access performance is primarily affected by the former decision, while the latter can be left to intra-data center traffic engineering solutions. Finally, we propose solutions to take advantage of multipath transport protocols to accelerate cloud access performance in our architecture, and to let link-state intra-data center routing fabrics pilot the cloud access routing locator switching
APA, Harvard, Vancouver, ISO, and other styles
32

de, Carvalho Tiago Filipe Rodrigues. "Integrated Approach to Dynamic and Distributed Cloud Data Center Management." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/739.

Full text
Abstract:
Management solutions for current and future Infrastructure-as-a-Service (IaaS) Data Centers (DCs) face complex challenges. First, DCs are now very large infrastructures holding hundreds of thousands if not millions of servers and applications. Second, DCs are highly heterogeneous. DC infrastructures consist of servers and network devices with different capabilities, from various vendors and different generations. Cloud applications are owned by different tenants and have different characteristics and requirements. Third, most DC elements are highly dynamic. Applications can change over time. During their lifetime, their logical architectures evolve and change according to workload and resource requirements. Failures and bursty resource demand can lead to unstable states affecting a large number of services. Global and centralized approaches limit scalability and are not suitable for large dynamic DC environments with multiple tenants that have different application requirements. We propose a novel fully distributed and dynamic management paradigm for highly diverse and volatile DC environments. We develop LAMA, a novel framework for managing large-scale cloud infrastructures based on a multi-agent system (MAS). Provider agents collaborate to advertise and manage available resources, while app agents provide integrated and customized application management. Distributing management tasks allows LAMA to scale naturally, and the integrated approach improves its efficiency. The proximity to the application and knowledge of the DC environment allow agents to quickly react to changes in performance and to pre-plan for potential failures. We implement and deploy LAMA in a testbed server cluster. We demonstrate how LAMA improves the scalability of management tasks such as provisioning and monitoring. We evaluate LAMA against state-of-the-art open source frameworks. LAMA enables customized dynamic management strategies for multi-tier applications. These strategies can be configured to respond to failures and workload changes within the limits of the desired SLA for each application.
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Dawei. "On the Design and Analysis of Cloud Data Center Network Architectures." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/413608.

Full text
Abstract:
Computer and Information Science
Ph.D.
Cloud computing has become pervasive in the IT world, as well as in our daily lives. The underlying infrastructures for cloud computing are the cloud data centers. The Data Center Network (DCN) defines what networking devices are used and how different devices are interconnected in a cloud data center; thus, it has great impacts on the total cost, performances, and power consumption of the entire data center. Conventional DCNs use tree-based architectures, where a limited number of high-end switches and high-bandwidth links are used at the core and aggregation levels to provide required bandwidth capacity. A conventional DCN often suffers from high expenses and low fault-tolerance, because high-end switches are expensive and a failure of such a high-end switch will result in disastrous consequences in the network. To avoid the problems and drawbacks in conventional DCNs, recent works adopt an important design principle: using Commodity-Off-The-Shelf (COTS) cheap switches to scale out data centers to large sizes, instead of using high-end switches to scale up data centers. Based on this scale-out principle, a large number of novel DCN architectures have been proposed. These DCN architectures are classified into two categories: switch-centric and server-centric DCN architectures. In both switch-centric and server-centric architectures, COTS switches are used to scale out the network to a large size. In switch-centric DCNs, routing intelligence is placed on switches; each server usually uses only one port of the Network Interface Card (NIC) to connect to the switches. In server-centric DCNs, switches are only used as dummy cross-bars; servers in the network serve as both computation nodes and packet forwarding nodes that connect switches and other servers, and routing intelligence is placed on servers, where multiple NIC ports may be used. This dissertation considers two fundamental problems in designing DCN architectures using the scale-out principle. The first problem considers how to maximize the total number of dual-port servers in a server-centric DCN given a network diameter constraint. Motivated by the Moore Bound, which provides the upper bound on the number of nodes in a traditional graph given a node degree and diameter, we give an upper bound on the maximum number of dual-port servers in a DCN, given a network diameter constraint and a switch port number. Then, we propose three novel DCN architectures, SWCube, SWKautz, and SWdBruijn, whose numbers of servers are close to the upper bound, and are larger than existing DCN architectures in most cases. SWCube is based on the generalized hypercube. SWCube accommodates a comparable number of servers to that of DPillar, which is the largest existing one prior to our work. SWKautz and SWdBruijn are based on the Kautz graph and the de Bruijn graph, respectively. They always accommodate more servers than DPillar. We investigate various properties of SWCube, SWKautz, and SWdBruijn; we also compare them with various existing DCN architectures and demonstrate their advantages over existing architectures. The second problem focuses on the tradeoffs between network performances and power consumption in designing DCN architectures. We have two motivations for our work. The first one is that most existing works take extreme designs in terms of improving network performances and reducing the power consumption. Some DCNs use too many networking devices to improve the performances; their power consumption is very high. 
Other DCNs use too few networking devices, and their performance is very poor. We are interested in exploring the quantitative tradeoffs between network performance and power consumption in designing DCN architectures. The second motivation is that there do not exist unified performance and power consumption metrics for general DCNs. Thus, we propose two important unified performance and power consumption metrics. Then, we propose three novel DCN architectures that achieve important tradeoff points in the design spectrum: FCell, FSquare, and FRectangle. Moreover, we find that in all three new architectures, routing intelligence can be placed on both servers and switches; thus they enjoy the advantages of both switch-centric and server-centric architectures, and can be regarded as a new category of DCN architectures, the dual-centric DCN architectures. We also investigate various other properties of the proposed architectures and verify that they are excellent candidates for practical cloud data centers.
Temple University--Theses
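As an illustrative aside to the abstract above (this shows only the classical bound, not the dual-port refinement derived in the thesis), the Moore bound that motivates the first problem is straightforward to compute: a graph with maximum degree d and diameter k has at most 1 + d * sum over i from 0 to k-1 of (d-1)^i nodes.

def moore_bound(degree, diameter):
    # Classical Moore bound on the number of nodes of any graph with the given
    # maximum degree and diameter; the thesis refines this idea for DCNs built
    # from dual-port servers and fixed-port switches.
    return 1 + degree * sum((degree - 1) ** i for i in range(diameter))

for ports in (4, 8, 16):
    for k in (2, 3):
        print(f"degree={ports}, diameter={k}: at most {moore_bound(ports, k)} nodes")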
APA, Harvard, Vancouver, ISO, and other styles
34

Macias, Lloret Mario. "Business-driven resource allocation and management for data centres in cloud computing markets." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/144562.

Full text
Abstract:
Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services across a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate the sale of services by signing a Service Level Agreement that contains the Quality of Service terms the Cloud provider has to guarantee by properly managing its resources. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers, which allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. As a consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both performance and business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. Such policies deal with different Business-Level Objectives: revenue maximisation, client classification, trust and reputation maximisation, and risk minimisation. The thesis demonstrates the effectiveness of such policies by means of fine-grained simulations. A pricing model may be influenced by many parameters; the weight of such parameters within the final model is not always known, or it can change as the market environment evolves. This thesis models and evaluates how providers can self-adapt to changing environments by means of genetic algorithms. Providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected state immediately after their application. This thesis defines and evaluates a trust and reputation system that compels providers to consider the long-term impact of their decisions. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact of their actions on their reputation improve the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty on the expected outcomes of cloud providers. The particularities of cloud appliances as sets of interconnected resources are studied, as well as how risk propagates through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions where the risk is concentrated, and price accordingly. Applying risk management raises the fulfilment rate of Service Level Agreements and increases the profit of the provider
APA, Harvard, Vancouver, ISO, and other styles
35

Izumo, Naoki. "Clouded space: Internet physicality." Thesis, University of Iowa, 2017. https://ir.uiowa.edu/etd/5515.

Full text
Abstract:
On Friday October 21st, 2016, there was a large-scale hack of an Internet domain hosting provider that took several websites, including Netflix, Amazon, Reddit, and Twitter, offline. Dyn, a cloud-based Internet Performance Management company, announced at 9:20AM ET that it had resolved an attack that began at 7AM ET that day. However, another attack happened at 11:52AM ET. The attacks raised concern among the public and directed our attention towards Internet security. This also revealed the precariousness of Internet infrastructure. The infrastructure being used today is opaque, unregulated, and incontestable. Municipally provided public utilities are built without any transparency; thus, we do not expect failure from those systems. For instance, the Flint, Michigan water crisis raised issues of water infrastructure. Not only did the crisis spark talks about the corrosion of pipes, but also larger societal issues. Flint, a poor, largely African American community, became a victim of environmental racism, a type of discrimination where communities of color or low-income residents are forced to live in environmentally dangerous areas. In order for myself and the larger public to understand this opaque system, we need to understand the infrastructure and how it works. With regards to Internet infrastructure, I focus on data centers, where there are backup servers, batteries and generators built into the architectural landscape in case of failure. There is a commonly held thought that overshadows the possibility of imminent technological failure: it cannot happen. This sort of thinking influences other modes of our daily lives: individuals building concrete bomb shelters underground for the apocalypse, stocking food, but not preparing for data breakdown. The consciousness of loss is further perpetuated by technology and its life expectancy. Clouded Space: Internet Physicality attempts to explore the unexceptional infrastructure of the Internet and how it exists right beneath our feet. That in itself is not very cloud-like. The work questions the integrity of our infrastructure as much as environmental issues, highlighting the questionable relationship we have with data and our inclination to back up data to protect ourselves from failure. This is a relatively new topic and the challenges are not well understood. There seem to be cracks in the foundation, and though they are not yet obvious, they appear to be widening.
APA, Harvard, Vancouver, ISO, and other styles
36

Dab, Boutheina. "Optimization of routing and wireless resource allocation in hybrid data center networks." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1068/document.

Full text
Abstract:
L’arrivée de la prochaine technologie 5G va permettre la connectivité des billions de terminaux mobiles et donc une énorme augmentation du trafic de données. A cet égard, les fournisseurs des services Cloud doivent posséder les infrastructures physiques capables de supporter cette explosion de trafic. Malheureusement, les architectures filaires conventionnelles des centres de données deviennent staturées et la congestion des équipements d’interconnexion est souvent atteinte. Dans cette thèse, nous explorons une approche récente qui consiste à augmenter le réseau filaire du centre de données avec l’infrastructure sans fil. En effet, nous exploitons une nouvelle technologie émergente, la technologie 60 GHz, qui assure un débit de l’ordre de 7 Gbits/s afin d’améliorer la QoS. Nous concevons une architecture hybride (filaire/sans fil) du réseau de centre de données basée sur : i) le modèle "Cisco’s Massively Scalable Data Center" (MSDC), et ii) le standard IEEE 802.11ad. Dans une telle architecture, les serveurs sont regroupés dans des racks, et sont interconnectés à travers un switch Ethernet, appelé top-of-rack (ToR) switch. Chaque ToR switch possède plusieurs antennes utilisées en parallèle sur différents canaux sans fil. L’objectif final consiste à minimiser la congestion du réseau filaire, en acheminant le maximum du trafic sur les canaux sans fil. Pour ce faire, cette thèse se focalise sur l’optimisation du routage et de l’allocation des canaux sans fil pour les communications inter-rack, au sein d’un centre de données hybride (HDCN). Ce problème étant NP-difficile, nous allons procéder en trois étapes. En premier lieu, on considère le cas des communications à un saut, où les racks sont placés dans le même rayon de transmission. Nous proposons un nouvel algorithme d’allocation des canaux sans fil dans les HDCN, qui permet d’acheminer le maximum des communications en sans-fil, tout en améliorant les performances réseau en termes de débit et délai. En second lieu, nous nous adressons au cas des communications à plusieurs sauts, où les racks ne sont pas dans le même rayon de transmission. Nous allons proposer une nouvelle approche optimale traitant conjointement le problème du routage et de l’allocation de canaux sans fils dans le HDCN, pour chaque communication, dans un mode online. En troisième étape, nous proposons un nouvel algorithme qui calcule conjointement le routage et l’allocation des canaux pour un ensemble des communications arrivant en mode batch (i.e., par lot). En utilisant le simulateur réseau QualNet, considérant toute la pile TCP/IP, les résultats obtenus montrent que nos propositions améliorent les performances comparées aux méthodes de l’état de l’art
The high proliferation of smart devices and online services allows billions of users to connect to the network while deploying a vast range of applications. In particular, with the advent of the future 5G technology, it is expected that tremendous mobile and data traffic will cross the Internet. In this regard, Cloud service providers are urged to rethink their data center architectures in order to cope with this unprecedented traffic explosion. Unfortunately, conventional wired infrastructures struggle to withstand such traffic growth and become prone to serious congestion problems. Therefore, new innovative techniques are required. In this thesis, we investigate a recent promising approach that augments the wired Data Center Network (DCN) with wireless communications. Indeed, motivated by the feasibility of the new emerging 60 GHz technology, offering an impressive data rate (≈ 7 Gbps), we envision a Hybrid (wireless/wired) DCN (HDCN) architecture. Our HDCN is based on i) Cisco's Massively Scalable Data Center (MSDC) model and ii) the IEEE 802.11ad standard. Servers in the HDCN are grouped into racks, where each rack is equipped with: i) an Ethernet top-of-rack (ToR) switch and ii) a set of wireless antennas. Our research aims to optimize the routing and the allocation of wireless resources for inter-rack communications in the HDCN while enhancing network performance and minimizing congestion. The problem of routing and resource allocation in the HDCN is NP-hard. To deal with this difficulty, we tackle the problem in three stages. In the first stage, we consider only one-hop inter-rack communications in the HDCN, where all communicating racks are in the same transmission range. We propose a new wireless channel allocation approach in the HDCN to harness both wireless and wired interfaces for incoming flows while enhancing network throughput. In the second stage, we deal with multi-hop communications in the HDCN, where communicating racks cannot communicate over a single-hop wireless path. We propose a new approach to jointly route and allocate channels for each single communication flow, in an online way. Finally, in the third stage, we address the batched arrival of inter-rack communications to the HDCN so as to further optimize the usage of wireless and wired resources. To that end, we propose i) a heuristic-based solution and ii) an approximate solution to the joint batch routing and channel assignment problem. Based on extensive simulations conducted in the QualNet simulator with the full protocol stack, the results obtained for both real workload and uniform traces show that our proposals outperform prominent related strategies.
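The channel-allocation approach itself is not detailed in the abstract. As an illustration only (the number of channels, the interference rule and all names are assumptions), a greedy one-hop allocator for inter-rack flows that falls back to the wired fabric when no 60 GHz channel is free could be sketched as:

# Hypothetical greedy allocator for one-hop inter-rack flows in a hybrid DCN.
# A flow gets a 60 GHz channel if one is free on both racks; otherwise it is
# routed over the wired ToR/aggregation fabric.

N_CHANNELS = 4  # assumed number of non-overlapping 60 GHz channels

def allocate(flows, n_channels=N_CHANNELS):
    busy = {}                      # rack -> set of channels already in use
    decisions = {}
    for flow_id, (src_rack, dst_rack) in flows.items():
        used = busy.setdefault(src_rack, set()) | busy.setdefault(dst_rack, set())
        free = [c for c in range(n_channels) if c not in used]
        if free:
            ch = free[0]
            busy[src_rack].add(ch)
            busy[dst_rack].add(ch)
            decisions[flow_id] = ("wireless", ch)
        else:
            decisions[flow_id] = ("wired", None)   # fall back to the wired fabric
    return decisions

print(allocate({"f1": ("r1", "r2"), "f2": ("r1", "r3"), "f3": ("r2", "r3")}))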
APA, Harvard, Vancouver, ISO, and other styles
37

Sanhaji, Ali. "Nouveaux paradigmes de contrôle de congestion dans un réseau d'opérateur." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/17304/1/sanhaji.pdf.

Full text
Abstract:
Network congestion is a phenomenon that can affect the quality of service perceived by users. The continuous growth of Internet traffic makes congestion a problem that an operator must address in order to keep its customers satisfied. The historical remedies available to an operator, such as over-provisioning the links of its infrastructure, are no longer viable today. With the evolution of network architectures and the arrival of new applications on the Internet, new congestion control paradigms must be considered to meet the expectations of the users of an operator's network. In this thesis, we examine the new approaches proposed for congestion control in an operator's network. We evaluate these approaches through simulations, which allows us to estimate their effectiveness and their potential to be deployed and operated in the Internet context, and to appreciate the challenges that must be overcome to reach that goal. We also propose congestion control solutions for new environments such as Software Defined Networking architectures and clouds deployed over one or several data centers, where congestion must be monitored to maintain the quality of the cloud services offered to customers. To support our proposed congestion control architectures, we present experimental platforms that demonstrate the operation and potential of our solutions.
APA, Harvard, Vancouver, ISO, and other styles
38

Sonklin, Chanipa. "Minimising the energy consumption of data centres by genetic algorithms." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/198050/1/Chanipa_Sonklin_Thesis.pdf.

Full text
Abstract:
Due to the rapid growth of cloud computing, the energy consumption of cloud data centres is increasing dramatically, which increases the operational cost of the data centres and their carbon emissions. The energy consumption of a cloud data centre can be significantly reduced through smart resource management. In this PhD research, I have developed an intelligent approach to the resource management problem. Experimental results have shown that the proposed approach is effective and efficient.
APA, Harvard, Vancouver, ISO, and other styles
39

Francischetti, Emilio Junior. "Garanzie del servizio in ambienti di cloud computing: uno studio sperimentale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/2323/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bilal, Kashif. "Analysis and Characterization of Cloud Based Data Center Architectures for Performance, Robustness, Energy Efficiency, and Thermal Uniformity." Diss., North Dakota State University, 2014. https://hdl.handle.net/10365/27323.

Full text
Abstract:
Cloud computing is anticipated to revolutionize the Information and Communication Technology (ICT) sector and has been a mainstream of research over the last decade. Today, contemporary society relies more than ever on the Internet and cloud computing. However, the advent and enormous adoption of the cloud computing paradigm in various domains of human life also brings numerous challenges to cloud providers and the research community. Data Centers (DCs) constitute the structural and operational foundations of cloud computing platforms. The legacy DC architectures are inadequate to accommodate the enormous adoption and increasing resource demands of cloud computing. Scalability, high cross-section bandwidth, Quality of Service (QoS) guarantees, privacy, and Service Level Agreement (SLA) assurance are some of the major challenges faced by today's cloud DC architectures. Similarly, reliability and robustness are among the mandatory features of the cloud paradigm to handle workload perturbations, hardware failures, and intentional attacks. The concerns about the environmental impacts, energy demands, and electricity costs of cloud DCs are intensifying. Energy efficiency is one of the mandatory features of today's DCs. Considering the paramount importance of characterization and performance analysis of cloud-based DCs, we analyze the robustness and performance of the state-of-the-art DC architectures and highlight the advantages and drawbacks of each. Moreover, we highlight the potentials and techniques that can be used to achieve energy efficiency and propose an energy-efficient DC scheduling strategy based on a real DC workload analysis. Thermal uniformity within the DC also brings energy savings. Therefore, we propose thermal-aware scheduling policies to deliver thermal uniformity within the DC, ensuring hardware reliability, the elimination of hot spots, and a reduction in the power consumed by the cooling infrastructure. One of the salient contributions of our work is to deliver handy and adaptable experimentation tools and simulators for the research community. We develop two discrete event simulators for the DC research community: (a) one for detailed DC network analysis under various configurations, network loads, and traffic patterns, and (b) a cloud scheduler to analyze and compare various scheduling strategies and their thermal impact.
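The thermal-aware policies are not spelled out in the abstract. A minimal sketch of the general idea, assuming a crude linear thermal model and placing each job on the server with the lowest projected temperature (all names and constants are hypothetical), might look like:

# Hypothetical thermal-aware placement: assign each job to the server whose
# projected temperature after placement is lowest, to avoid hot spots.

def place_jobs(jobs, servers, temp_rise_per_load=0.5, max_temp=27.0):
    placement = {}
    for job_id, load in jobs:
        # Projected temperature if this job were added to each server.
        candidates = [(s["temp"] + temp_rise_per_load * load, s) for s in servers
                      if s["temp"] + temp_rise_per_load * load <= max_temp]
        if not candidates:
            placement[job_id] = None          # defer: no server stays below the cap
            continue
        projected, best = min(candidates, key=lambda t: t[0])
        best["temp"] = projected              # simplistic linear thermal model
        placement[job_id] = best["name"]
    return placement

servers = [{"name": "s1", "temp": 22.0}, {"name": "s2", "temp": 25.0}]
print(place_jobs([("j1", 2.0), ("j2", 1.0), ("j3", 4.0)], servers))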
APA, Harvard, Vancouver, ISO, and other styles
41

Knauth, Thomas. "Energy Efficient Cloud Computing: Techniques and Tools." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-164391.

Full text
Abstract:
Data centers hosting internet-scale services consume megawatts of power. Mainly for cost reasons, but also to appease environmental concerns, data center operators are interested in reducing their use of energy. This thesis investigates if and how hardware virtualization helps to improve the energy efficiency of modern cloud data centers. Our main motivation is to power off unused servers to save energy. The work encompasses three major parts. First, a simulation-driven analysis quantifies the benefits of known reservation times in infrastructure clouds: virtual machines with similar expiration times are co-located to increase the probability of powering down unused physical hosts. Second, we propose and prototype a system to deliver truly on-demand cloud services; idle virtual machines are suspended to free resources and as a first step towards powering off the physical server. Third, a novel block-level data synchronization tool enables fast and efficient state replication. Frequent state synchronization is necessary to prevent data unavailability: powering down a server disables access to the locally attached disks and any data stored on them. The techniques effectively reduce the overall number of required servers, either through optimized scheduling or by suspending idle virtual machines. Fewer live servers translate into proportional energy savings, as the unused servers no longer need to be powered.
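How virtual machines with similar expiration times are co-located is not described in the abstract. The sketch below, with assumed names and a fixed time window, only illustrates the underlying idea: bucketing reservations by expiry so that whole hosts empty out together and can be powered off.

from collections import defaultdict

# Hypothetical placement by reservation expiry: VMs whose reservations end in
# the same time window are packed onto the same hosts, so hosts become idle
# (and can be powered off) all at once rather than one VM at a time.

def place_by_expiry(vms, host_capacity, window=3600):
    buckets = defaultdict(list)                 # expiry window -> VMs
    for vm_id, expires_at in vms:
        buckets[expires_at // window].append(vm_id)
    placement, host = {}, 0
    for _, group in sorted(buckets.items()):
        for i, vm_id in enumerate(group):
            if i and i % host_capacity == 0:
                host += 1
            placement[vm_id] = f"host-{host}"
        host += 1                               # next window starts on a fresh host
    return placement

vms = [("vm1", 7200), ("vm2", 7300), ("vm3", 14400), ("vm4", 7500)]
print(place_by_expiry(vms, host_capacity=2))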
APA, Harvard, Vancouver, ISO, and other styles
42

López, Saavedra Alejandra Esperanza. "Evaluación e implementación del rediseño del proceso de gestión de incidentes en el Data Center & Cloud de Sonda S.A." Tesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/151226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hangwei, Qian. "Dynamic Resource Management of Cloud-Hosted Internet Applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1338317801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sutrisno, Harry. "Techno-Economic Study on The Alternative Power and Cooling Systems Design for Cost & Energy-Efficient Edge Cloud Data Center(s)." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302990.

Full text
Abstract:
5G technology has enabled performance-sensitive applications with low latency and high bandwidth requirements, which in turn places stricter low-latency requirements on computing services. To answer this need, a small-scale type of data center called the edge cloud is predicted to grow rapidly in the future. Because edge clouds are close to end-users, their growth in populated areas may cause problems for the existing power system. Besides this power system challenge, an edge cloud also incurs a higher resource cost than a hyper-scale data center because it cannot exploit the same economies of scale. In this thesis, four viable alternative power and cooling technologies are introduced to address those challenges: solar PV, the Vertical Axis Wind Turbine (VAWT), the Rear Door Heat Exchanger (RDHx), and immersion cooling. Detailed data on edge clouds are required to understand the contribution of these four technologies; however, because the edge cloud is still in its infancy, such data are unavailable and assumptions about them are made. In addition, a cost model for an edge cloud is required to show how significant the contribution of these alternative technologies is compared to the total cost of ownership. In this thesis, the cost model for the edge cloud is extended to the alternative power and cooling system scenarios. Using the assumed data for an edge cloud, a sensitivity analysis is performed to determine whether the alternative power and cooling technologies can bring down the cost of edge cloud resources. The cost modelling showed that VAWT and immersion cooling are not feasible for the particular assumed data center. On the other hand, solar PV can save 4.55% of data center electricity consumption (equal to a 0.21% reduction in total expense when calculated using the current electricity price). Furthermore, RDHx performed better, saving 22.73% of data center electricity expenses (equivalent to an 8.35% saving on total cost when calculated using the current electricity price).
5G-tekniken har möjliggjort prestandakänsliga applikationer med låg latens och höga bandbreddskrav, vilket har ställt högre krav på låg latens för datatjänster. För att möta detta behov förutspås ett småskaligt datacenter - edge cloud – växa i framtiden. På grund av dess användarnära natur kan tillväxten av edge clouds i tätområden orsaka problem med det befintliga kraftsystemet. Förutom denna kraftsystemutmaning kräver edge cloud också en högre resurskostnad än storskaliga datacenter på grund av skalfördelarna. I denna avhandling introduceras fyra alternativa energi- och kyltekniker för att hantera dessa utmaningar. Dessa fyra tekniker är solpanel, vertikalaxel vindturbin (VAWT), bakdörrvärmeväxlare (RDHx), och nedsänkningskylning. Detaljerad information om edge cloud erfordras för att förstå bidraget från dessa fyra tekniker. På grund av edge clouds tidiga stadium är all nödvändig data dock inte tillgänglig, vaför antaganden om görs. Förutom det krävs också en kostnadsmodell för edge cloud för att visa hur betydande bidraget från den alternativa tekniken är om den jämförs med den totala ägandekostnaden. I denna avhandling utökas kostnadsmodellen för edge cloud för de alternativa energi- och kylsystemscenarierna. Med antagen data för ett edge cloud genomförs en känslighetsanalys för att avgöra om alternativa energi- och kyltekniker kan sänka kostnaden för edge cloud-resurser eller inte. Kostnadsmodelleringen visar att VAWT och nedsänkningskylning inte är möjlig för det specifika antagna datacentret. Å andra sidan kan solpanel spara 4,55% av datacentrets elförbrukning (motsvarande 0,21% minskning av den totala kostnaden när den beräknas med det aktuella elpriset). Dessutom presterade RDHx bättre med 22,73% av datacenters elutgifter (motsvarande 8,35% av besparingen från totalkostnaden när den beräknas med det aktuella elpriset).
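The English abstract above quotes a 4.55% electricity saving from solar PV corresponding to a 0.21% reduction in total expense. As a back-of-envelope check only (it assumes that the total-cost saving comes solely from the electricity bill, which may not match the thesis's full cost model), the implied share of electricity in the total cost of ownership can be computed as follows.

# Assumption: only the electricity bill changes, so
# total saving = (electricity share of total cost) * (electricity saving).
electricity_saving = 0.0455   # solar PV: 4.55% of data center electricity consumption
total_cost_saving = 0.0021    # the corresponding 0.21% reduction in total expense

implied_share = total_cost_saving / electricity_saving
print(f"implied electricity share of total cost: {implied_share:.1%}")  # roughly 4.6%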
APA, Harvard, Vancouver, ISO, and other styles
45

Khaleel, Ali. "Optimisation of a Hadoop cluster based on SDN in cloud computing for big data applications." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17076.

Full text
Abstract:
Big data has received a great deal of attention from many sectors, including academia, industry and government. The Hadoop framework has emerged to support its storage and analysis using the MapReduce programming model. However, this framework is a complex system with more than 150 parameters, some of which can exert a considerable effect on the performance of a Hadoop job. Optimum tuning of the Hadoop parameters is a difficult and time-consuming task. In this thesis, an optimisation approach is presented to improve the performance of the Hadoop framework by setting the values of the Hadoop parameters automatically. Specifically, genetic programming is used to construct a fitness function that represents the interrelations among the Hadoop parameters, and a genetic algorithm is then employed to search for the optimum or near-optimum values of those parameters. A Hadoop cluster is configured on two servers at Brunel University London to evaluate the performance of the proposed optimisation approach. The experimental results show that the performance of a Hadoop MapReduce job for 20 GB on the WordCount application is improved by 69.63% and 30.31% compared to the default settings and the state of the art, respectively, whilst on the TeraSort application it is improved by 73.39% and 55.93%. For further optimisation, SDN is also employed to improve the performance of a Hadoop job. The experimental results show that the performance of a Hadoop job in an SDN network for 50 GB is improved by 32.8% compared to a traditional network, whilst on the TeraSort application the improvement for 50 GB is on average 38.7%. An effective computing platform is also presented in this thesis to support solar irradiation data analytics. It is built on RHIPE to provide fast analysis and calculation for solar irradiation datasets. The performance of RHIPE is compared with the R language in terms of accuracy, scalability and speedup. The speedup of RHIPE is evaluated using Gustafson's Law, which is revised to enhance the performance of parallel computation on intensive irradiation data sets in a cluster computing environment such as Hadoop. The performance of the proposed work is evaluated using a Hadoop cluster based on the Microsoft Azure cloud, and the experimental results show that RHIPE provides considerable improvements over the R language. Finally, an effective routing algorithm based on SDN to improve the performance of a Hadoop job in a large-scale cluster in a data centre network is presented. The proposed algorithm is used to improve the performance of a Hadoop job during the shuffle phase by allocating efficient paths for each shuffling flow, according to the network resource demands of each flow as well as their size and number. Furthermore, it is also employed to allocate alternative paths for each shuffling flow in the case of any link crash or failure. This algorithm is evaluated on two network topologies, namely fat-tree and leaf-spine, built with the EstiNet emulator. The experimental results show that the proposed approach improves the performance of a Hadoop job in a data centre network.
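The abstract cites a revised form of Gustafson's Law for the speedup evaluation; the revised form is not quoted here, but in its standard statement, with $s$ the serial fraction of the scaled workload and $N$ the number of processors, the scaled speedup is

\[
  S(N) \;=\; s + (1 - s)\,N \;=\; N - s\,(N - 1).
\]

For instance, with an illustrative serial fraction $s = 0.1$ on $N = 16$ Hadoop nodes, $S(16) = 16 - 0.1 \times 15 = 14.5$.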
APA, Harvard, Vancouver, ISO, and other styles
46

Vítek, Daniel. "Cloud computing s ohledem na technologické aspekty a změny v infrastruktuře." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-72548.

Full text
Abstract:
This thesis discusses the new way of delivering IT services over the Internet widely known as cloud computing. In its opening part, cloud computing is put into the historical context of the evolution of enterprise computing, and the dominant issues the IT department faces today are mentioned. Further, the paper deals with several components that make up the architecture of cloud computing and reviews the benefits and drawbacks an enterprise can expect when adopting this new model. One of the primary aims of this thesis is to identify the impact of technology trends on cloud computing. The thesis brings together four major computing trends, namely virtualization, multi-tenant architecture, service-oriented architecture and grid computing. Another aim is to focus on two trends related to IT infrastructure that will lead to fundamental changes in the IT industry. The first of them is the emergence of extremely large-scale data centers at low-cost locations, which can serve a tremendous number of customers and achieve considerable economies of scale. The second trend this paper points out is the shift from multi-purpose all-in-one computers to a wide range of mobile devices dedicated to specific user needs. The last aim of this thesis is to clarify the economic impact of cloud computing in terms of costs and changes in business models. The thesis concludes by evaluating the current adoption and predicting the future trend of cloud computing.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Yuan [Verfasser], Xiaoming [Akademischer Betreuer] Fu, K. K. [Akademischer Betreuer] Ramakrishnan, Dieter [Akademischer Betreuer] Hogrefe, Winfried [Akademischer Betreuer] Kurth, and Carsten [Akademischer Betreuer] Damm. "Dynamic Resource Scheduling in Cloud Data Center / Yuan Zhang. Betreuer: Xiaoming Fu. Gutachter: K. K. Ramakrishnan ; Dieter Hogrefe ; Winfried Kurth ; Carsten Damm." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2015. http://d-nb.info/1078150753/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Paolucci, Fabio. "Migrazione concorrente di macchine virtuali su piattaforme open source." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8454/.

Full text
Abstract:
This thesis aims to provide an up-to-date analysis of the recent evolution of Cloud Computing and of the new architectural models supporting the continuous growth in demand for computing, storage and network resources within data centers, and then moves to an experimental phase of single and concurrent live migrations of virtual machines, studying their performance in terms of application and network resources on the open source virtualization platform QEMU-KVM, which today underlies cloud-based systems such as OpenStack. The first chapter surveys the state of the art of Cloud Computing, its current limits and the prospects offered by a Cloud Federation model in the immediate future. The second chapter discusses in detail live migration techniques, a recent reference point for the international scientific community, and the possible optimizations in inter- and intra-data-center scenarios, in order to establish the theoretical basis for the in-depth study of the actual implementation of the migration process on the QEMU-KVM platform, which is addressed in the third chapter. In particular, that chapter describes the architectural and operating principles of the hypervisor and defines the design model and the algorithm underlying the migration process. Finally, the fourth chapter presents the work carried out and the configuration and design choices made to build a testbed environment suitable for studying concurrent live migration sessions, and discusses the results of the performance measurements and of the system behaviour observed in the experiments.
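The thesis studies live migration on QEMU-KVM without quoting commands. As a hypothetical illustration of how a single pre-copy live migration is typically triggered through the libvirt Python bindings that sit on top of QEMU-KVM (host and domain names are placeholders; concurrent migrations would simply start several such calls in parallel):

import libvirt

# Hypothetical example: live-migrate a guest from the local QEMU-KVM host to a
# destination hypervisor, keeping the VM running during the copy.
src = libvirt.open("qemu:///system")
dom = src.lookupByName("guest-vm-1")          # placeholder domain name

# Pre-copy live migration over a peer-to-peer connection to the destination.
dom.migrateToURI("qemu+ssh://dest-host/system",
                 libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
                 None, 0)
src.close()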
APA, Harvard, Vancouver, ISO, and other styles
49

Mohammed, Bashir. "A Framework for Efficient Management of Fault Tolerance in Cloud Data Centres and High-Performance Computing Systems: An Investigation and Performance analysis of a Cloud Based Virtual Machine Success and Failure Rate in a typical Cloud Computing Environment and Prediction Methods." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/17400.

Full text
Abstract:
Cloud computing is increasingly attracting huge attention in both academic research and industry initiatives and has been widely used to solve advanced computation problems. As cloud datacentres continue to grow in scale and complexity, the risk of failure of Virtual Machines (VMs) and hosts running several jobs and processing large amounts of user requests increases, and it consequently becomes even more difficult to predict potential failures within a datacentre. However, even though fault tolerance continues to be an issue of growing concern in cloud and HPC systems, mitigating the impact of failure and providing accurate predictions with enough lead time remains a difficult research problem. Traditional fault-tolerance strategies such as regular checkpoint/restart and replication are not adequate due to emerging complexities in the systems and do not scale well in the cloud due to resource sharing and distributed systems networks. In this thesis, a new reliable fault tolerance scheme using an intelligent optimal strategy is presented to ensure high system availability, reduced task completion time and an efficient VM allocation process. Specifically, (i) a generic fault tolerance algorithm for cloud data centres and HPC systems in the cloud was developed; (ii) a verification process is developed to verify a fully dimensional VM specification during allocation in the presence of faults. In comparison to existing approaches, the results obtained show an increase in the success rate of the VMs, a reduction in the response time of VM allocation and improved overall performance. (iii) A failure prediction model is further developed, and the predictive capabilities of machine learning are explored by applying several algorithms to improve the accuracy of prediction. Experimental results indicate that the average prediction accuracy of the proposed model when predicting failure is about 90%, which compares favourably with existing algorithms and implies that the approach can effectively predict potential system and application failures within the system.
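The specific machine-learning algorithms are not named in the abstract. The sketch below only illustrates the general idea of training a classifier on host/VM telemetry to predict failures; the feature set, the synthetic labels and the choice of a random forest are assumptions, not the thesis's method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical telemetry: [cpu_util, mem_util, io_wait, temperature]; the
# label is 1 if the VM/host failed within the next observation window.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = (0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.1 * rng.random(1000) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("prediction accuracy:", accuracy_score(y_test, clf.predict(X_test)))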
APA, Harvard, Vancouver, ISO, and other styles
50

Degoutin, Stéphane. "Société-nuage." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1009.

Full text
Abstract:
Ce livre se déroule, comme une peinture de paysage chinois que le regard parcourt lentement. J’utilise cette forme car je décris un panorama. Il n’est pas fait de montagnes dans la brume ou de buissons balayés par le vent, mais de centres de traitement de données, d’entrepôts de livraison, de flux de réseaux sociaux…J’explore l’hypothèse qu’Internet s’inscrit dans un mouvement général de réduction de la société à des composants de petite échelle, ce qui permet une fluidification de ses mécanismes. Une idée de chimiste – la décomposition en poudre de la matière permettant de faciliter sa recomposition – est également appliquée aux relations sociales, à la mémoire, à l’humain en général.Tout comme la réduction en poudre de la matière permet d’accélérer les réactions chimiques, la réduction en poudre de la société permet une décomposition et une recomposition accélérée de la matière dont est faite l’humain. Elle permet de multiplier les réactions au sein de la société, les productions de l’humanité, la chimie sociale : combinatoire des passions (Charles Fourier), hyperfragmentation du travail (Mechanical Turk), décomposition du savoir (Paul Otlet), Internet des neurones (Michael Chorost), société par agrégation des micro affects (Facebook). C’est ce que j’appelle la « société-nuage »
This book unfolds like a Chinese landscape painting through which the viewer's gaze wanders slowly. I describe a panorama. It is not made of mountains in the mist or bushes swept by the wind, but of data centers, automated warehouses, social network feeds... I explore the hypothesis that the Internet is part of a general process that reduces society and materials to small-scale components, which allows its mechanisms to become more fluid. A chemist's idea, the decomposition of matter into powder to facilitate its recomposition, is also applied to social relations, memory and humans in general. Just as the reduction of matter accelerates chemical reactions, the reduction of society to powder allows an accelerated decomposition and recomposition of everything from which humans are made. It makes it possible to multiply the reactions within society, to accelerate the productions of humanity and the social chemistry: the combination of human passions (Charles Fourier), the hyperfragmentation of work (Mechanical Turk), the decomposition of knowledge (Paul Otlet), the Internet of neurons (Michael Chorost), the aggregation of micro-affects (Facebook). This is what I call the « society as cloud ».
APA, Harvard, Vancouver, ISO, and other styles
