Academic literature on the topic 'Cloud data centers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cloud data centers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cloud data centers"

1

Guo, Le Jiang, Feng Zheng, Ya Hui Hu, Lei Xiao, and Liang Liu. "Analysis and Research of Cloud Computing Data Center." Applied Mechanics and Materials 427-429 (September 2013): 2184–87. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.2184.

Full text
Abstract:
Cloud computing data centers can also be called cloud computing centers. The development of cloud computing technologies has placed new and higher demands on data centers. This paper discusses what cloud computing data centers are, how they are constructed, their architecture, their management and maintenance, and the relationship between cloud computing data centers and clouds.
APA, Harvard, Vancouver, ISO, and other styles
2

Spillner, Josef, and Alan Sill. "Reengineering Cloud Data Centers." IEEE Cloud Computing 5, no. 6 (November 2018): 26–27. http://dx.doi.org/10.1109/mcc.2018.064181117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Karamat Khan, Tehmina, Mohsin Tanveer, and Asadullah Shah. "Energy Efficiency in Virtualized Data Center." International Journal of Engineering & Technology 7, no. 4.15 (October 7, 2018): 315. http://dx.doi.org/10.14419/ijet.v7i4.15.23019.

Full text
Abstract:
Industrial and academic communities have been trying to get more computational power out of their investments. Data centers have recently received huge attention due to their increased business value and the scalability achievable on public/private clouds. The infrastructure and applications of modern data centers are being virtualized to achieve energy-efficient operation on servers. Despite the performance advantages of data centers, there is a trade-off between power and performance, especially in cloud data centers. Today, these cloud-application-based organizations face many energy-related challenges. In this paper, a survey analyzes how virtualization- and networking-related challenges affect the energy efficiency of data centers, and optimization strategies are suggested.
APA, Harvard, Vancouver, ISO, and other styles
4

Rajput, Ravindra Kumar Singh, Dinesh Goyal, Anjali Pant, Gajanand Sharma, Varsha Arya, and Marjan Kuchaki Rafsanjani. "Cloud Data Centre Energy Utilization Estimation." International Journal of Cloud Applications and Computing 12, no. 1 (January 1, 2022): 1–16. http://dx.doi.org/10.4018/ijcac.311035.

Full text
Abstract:
Due to the growth of the internet and internet-based software applications, demand for cloud data centers has increased. Cloud data centers have thousands of servers working 24×7 for users, which makes them responsible for enormous energy consumption. However, server utilization does not remain the same all the time, so, from the standpoint of economic feasibility, energy management is an essential activity of cloud resource management. Well-known energy management techniques for cloud data centers include dynamic voltage and frequency scaling (DVFS), dynamic power management (DPM), and task-scheduling-based techniques. The present work takes an analytical approach to integrating resource provisioning with sophisticated task scheduling; the authors estimate the energy utilization of cloud data centers using the iDR cloud simulator. The work is intended to optimize power consumption in the cloud data center.
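As a rough illustration of the kind of estimate such simulators produce, the sketch below uses the widely cited linear server power model (idle power plus a utilization-proportional term). The parameter values and the trace layout are assumptions for illustration, not figures from the paper or the iDR simulator.

```python
# Illustrative sketch: estimating data-center energy with a linear server power model.
# P(u) = P_idle + (P_max - P_idle) * u is a common approximation; the numbers below
# are assumed values, not parameters of the iDR simulator described in the paper.

P_IDLE = 100.0   # watts drawn by an idle server (assumed)
P_MAX = 250.0    # watts at full utilization (assumed)

def server_power(utilization: float) -> float:
    """Instantaneous power draw of one server at a given CPU utilization (0..1)."""
    return P_IDLE + (P_MAX - P_IDLE) * utilization

def datacenter_energy_kwh(utilization_trace, interval_hours: float = 1.0) -> float:
    """Total energy (kWh) for a trace: one utilization sample per server per interval."""
    watt_hours = sum(
        server_power(u) * interval_hours
        for interval in utilization_trace
        for u in interval
    )
    return watt_hours / 1000.0

if __name__ == "__main__":
    # Two servers over three one-hour intervals; a DVFS or consolidation policy would
    # lower these utilization samples (or switch hosts off) to reduce the total.
    trace = [[0.8, 0.2], [0.5, 0.0], [0.9, 0.1]]
    print(f"Estimated energy: {datacenter_energy_kwh(trace):.2f} kWh")
```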
APA, Harvard, Vancouver, ISO, and other styles
5

Ding, Jie, Hai Yun Han, and Ai Hua Zhou. "A Data Placement Strategy for Data-Intensive Cloud Storage." Advanced Materials Research 354-355 (October 2011): 896–900. http://dx.doi.org/10.4028/www.scientific.net/amr.354-355.896.

Full text
Abstract:
Data-intensive applications in power systems often perform complex computations that involve large datasets. In a distributed environment, an application may need several datasets located in different data centers, which raises two challenges: the high cost of data movement between data centers and data dependencies within the same data center. In this paper, a data placement strategy among and within data centers in a cloud environment is proposed. Datasets are placed in different centers by a clustering scheme based on their data dependencies, and within a center, data is partitioned and replicated using consistent hashing. Simulations show that the algorithm can effectively reduce the cost of data movement and achieve an even data distribution.
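A minimal sketch of the within-center part of such a strategy, consistent hashing with virtual nodes, is shown below. The hash ring, virtual-node count, and replica count are generic illustrations, not the clustering scheme or parameters evaluated by the authors.

```python
# Minimal consistent-hashing sketch for placing datasets on nodes inside one data center.
# Virtual nodes smooth the distribution; replicas are taken from successive ring positions.
# This is a generic illustration, not the exact scheme used in the paper.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100, replicas=2):
        self.replicas = replicas
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def place(self, dataset: str):
        """Return the distinct nodes holding the primary copy and its replicas."""
        start = bisect.bisect(self.keys, self._hash(dataset)) % len(self.ring)
        placement = []
        i = start
        while len(placement) < self.replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in placement:
                placement.append(node)
            i += 1
        return placement

ring = ConsistentHashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.place("grid_measurements_2011_10"))  # e.g. ['node-c', 'node-a']
```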
APA, Harvard, Vancouver, ISO, and other styles
6

Khajehei, Kamyab. "Green Cloud and reduction of energy consumption." Computer Engineering and Applications Journal 4, no. 1 (February 18, 2015): 51–60. http://dx.doi.org/10.18495/comengapp.v4i1.119.

Full text
Abstract:
With the spread of global application environments, cloud-computing-based data centers are growing every day, and this exponential growth has a definite effect on our environment. Researchers committed to the environment, and others concerned about electricity bills, came up with a solution called the 'Green Cloud'. Green cloud data centers are known as highly efficient data centers in terms of how they consume energy. In a green cloud, we try to reduce the number of active devices and consume less electrical energy. Green data centers take advantage of VMs and the ability to copy, delete, and move VMs across the data center to reduce energy consumption. This paper focuses on which parts of data centers may change and how researchers have found suitable solutions for each component, and also on why, despite all these problems, cloud data centers remain the best technology for IT businesses.
APA, Harvard, Vancouver, ISO, and other styles
7

Shojafar, Mohammad, Zahra Pooranian, Mehdi Sookhak, and Rajkumar Buyya. "Recent advances in cloud data centers toward fog data centers." Concurrency and Computation: Practice and Experience 31, no. 8 (January 30, 2019): e5164. http://dx.doi.org/10.1002/cpe.5164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Yangyang, Hongbo Wang, Jiankang Dong, Junbo Li, and Shiduan Cheng. "Differentiated Bandwidth Guarantees for Cloud Data Centers." Journal of Interconnection Networks 14, no. 03 (September 2013): 1360002. http://dx.doi.org/10.1142/s0219265913600025.

Full text
Abstract:
By means of virtualization, computing and storage resources are effectively multiplexed by different applications in cloud data centers. However, useful approaches to sharing the internal network resources of cloud data centers are lacking. Ineffective network sharing not only degrades the performance of applications but also affects the efficiency of data center operation. To guarantee the network performance of applications and provide fine-grained service differentiation, in this paper we propose a differentiated bandwidth guarantee scheme for data center networks. Utility functions are constructed according to the throughput- and delay-sensitive characteristics of different applications. Aiming to maximize the utility of all applications, the problem is formulated as a multi-objective optimization problem. We solve this problem using a heuristic algorithm, the elitist Non-Dominated Sorting Genetic Algorithm II (NSGA-II), and we make a multi-attribute decision to refine the solutions. Extensive simulations show that our scheme provides minimum bandwidth guarantees and achieves more fine-grained service differentiation than existing approaches. The simulations also verify that the proposed mechanism is suitable for arbitrary data center architectures.
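To make the idea of per-application utility functions concrete, the sketch below shows one plausible pair of functions (concave log utility for throughput-sensitive tenants, a sigmoid around a required rate for delay-sensitive ones) and a simple aggregate objective. The exact functions and the NSGA-II search used by the authors are not reproduced here; the tenant names and numbers are assumptions.

```python
# Illustrative utility functions for bandwidth sharing in the spirit of the paper's
# formulation. A search heuristic (the authors use elitist NSGA-II) would maximize the
# aggregate utility subject to link capacities and minimum-bandwidth guarantees.
import math

def throughput_utility(bandwidth_mbps: float) -> float:
    """Diminishing returns: each extra Mb/s helps less (proportional-fairness style)."""
    return math.log(1.0 + bandwidth_mbps)

def delay_utility(bandwidth_mbps: float, required_mbps: float) -> float:
    """Close to 1 once the required rate is met, dropping steeply below it."""
    return 1.0 / (1.0 + math.exp(-(bandwidth_mbps - required_mbps)))

def total_utility(allocation: dict) -> float:
    """Sum utilities over tenants; allocation maps tenant -> (kind, bandwidth, requirement)."""
    score = 0.0
    for tenant, (kind, bw, req) in allocation.items():
        score += throughput_utility(bw) if kind == "throughput" else delay_utility(bw, req)
    return score

allocation = {
    "web-frontend": ("delay", 120.0, 100.0),       # delay-sensitive, needs ~100 Mb/s
    "batch-analytics": ("throughput", 400.0, None), # throughput-sensitive
}
print(f"Aggregate utility: {total_utility(allocation):.3f}")
```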
APA, Harvard, Vancouver, ISO, and other styles
9

M N, Kavyasri, and Dr Ramesh B. "Key-Cipher-Policy based ABE with Efficient Encryption of Multimedia Data at Data Centers of Cloud." International Journal of Recent Technology and Engineering (IJRTE) 11, no. 1 (May 30, 2022): 73–76. http://dx.doi.org/10.35940/ijrte.c6486.0511122.

Full text
Abstract:
Cloud computing is relatively one of the new technologies attracting more clients to adopt cloud storage for easy and convenient online data storage and sharing. Because of its efficient computations, it has attracted the attention of both industry and academia. Companies have begun outsourcing confidential data to cloud service data centers. These data storage applications raise security concerns about data confidentiality and privacy. When data is moved to the cloud, the customer loses control of the data and must rely on the cloud service provider. To protect their data, clients must first guarantee that it is encrypted. Encryption is a promising method for keeping data private in the cloud, and ABE is a potential technique for dealing with security challenges in data center cloud storage. Its key benefit is that it allows flexible one-to-many encryption. In this study, we present a model termed key-cipher-policy-based ABE. It allows optimized and more secure access to data stored at cloud data centers, with optimized encryption and less encryption time.
APA, Harvard, Vancouver, ISO, and other styles
10

Bao, Hao. "Homomorphic computing of encrypted data outsourcing in cloud data center." Frontiers in Computing and Intelligent Systems 2, no. 1 (November 23, 2022): 1–3. http://dx.doi.org/10.54097/fcis.v2i1.2482.

Full text
Abstract:
In the era of data explosion, data contains massive amounts of information, such as health data, time and place, and hydrological waves. To process and compute these data, local Internet of Things devices send them to the cloud data center for outsourced processing, because of their limited storage and computing capabilities. However, these data contain a large amount of private information, so we need to protect the privacy of outsourced data before outsourcing it. At the same time, cloud data centers have strong advantages in data storage and computing capability, so they are used more and more widely.
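For readers unfamiliar with homomorphic computation on outsourced data, the toy sketch below uses the textbook Paillier cryptosystem, which is additively homomorphic: the data center multiplies ciphertexts and the data owner decrypts the sum without the center ever seeing plaintexts. This is a generic illustration with insecure toy parameters, not the scheme studied in the paper.

```python
# Toy Paillier cryptosystem (textbook version, tiny primes, no padding) illustrating
# additively homomorphic outsourcing. Requires Python 3.9+ (math.lcm, pow(x, -1, n)).
import math
import random

p, q = 1789, 1867                      # toy primes; real deployments use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse of L(g^lambda mod n^2) mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# The data owner encrypts readings; the cloud adds them without decrypting.
c1, c2 = encrypt(1234), encrypt(4321)
c_sum = (c1 * c2) % n2                 # ciphertext multiplication = plaintext addition
assert decrypt(c_sum) == 1234 + 4321
print("Decrypted sum:", decrypt(c_sum))
```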
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Cloud data centers"

1

Mahmud, A. S. M. Hasan. "Sustainable Resource Management for Cloud Data Centers." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2634.

Full text
Abstract:
In recent years, the demand for data center computing has increased significantly due to the growing popularity of cloud applications and Internet-based services. Today's large data centers host hundreds of thousands of servers and the peak power rating of a single data center may even exceed 100MW. The combined electricity consumption of global data centers accounts for about 3% of worldwide production, raising serious concerns about their carbon footprint. The utility providers and governments are consistently pressuring data center operators to reduce their carbon footprint and energy consumption. While these operators (e.g., Apple, Facebook, and Google) have taken steps to reduce their carbon footprints (e.g., by installing on-site/off-site renewable energy facility), they are aggressively looking for new approaches that do not require expensive hardware installation or modification. This dissertation focuses on developing algorithms and systems to improve the sustainability in data centers without incurring significant additional operational or setup costs. In the first part, we propose a provably-efficient resource management solution for a self-managed data center to cap and reduce the carbon emission while maintaining satisfactory service performance. Our solution reduces the carbon emission of a self-managed data center to net-zero level and achieves carbon neutrality. In the second part, we consider minimizing the carbon emission in a hybrid data center infrastructure that includes geographically distributed self-managed and colocation data centers. This segment identifies and addresses the challenges of resource management in a hybrid data center infrastructure and proposes an efficient distributed solution to optimize the workload and resource allocation jointly in both self-managed and colocation data centers. In the final part, we explore sustainable resource management from cloud service users' point of view. A cloud service user purchases computing resources (e.g., virtual machines) from the service provider and does not have direct control over the carbon emission of the service provider's data center. Our proposed solution encourages a user to take part in sustainable (both economical and environmental) computing by limiting its spending on cloud resource purchase while satisfying its application performance requirements.
APA, Harvard, Vancouver, ISO, and other styles
2

Jawad, Muhammad. "Energy Efficient Data Centers for On-Demand Cloud Services." Diss., North Dakota State University, 2015. http://hdl.handle.net/10365/25198.

Full text
Abstract:
The primary objective of Data Centers (DCs) is to provide in-time services to cloud customers. For in-time services, DCs require an uninterruptible power supply at low cost. The DCs' power supply is directly linked with the stability and steady-state performance of the power system under faults and disturbances. Smart Grids (SGs), also known as next-generation power systems, utilize communication and information technology to optimize power generation, distribution, and consumption. Therefore, it is beneficial to run DCs in an SG environment. We present a thorough study of the wide-area smart grid architecture, design, network, and control. The goal was to become familiar with smart grid operation, monitoring, and control. We analyze different control mechanisms proposed in the past to study the behavior of the wide-area smart grid under symmetric and asymmetric grid fault conditions. The study of the SG architecture was a first step toward designing power management and energy cost reduction models for DCs running under SGs. First, we present a Power Management Model (PMM) for DCs to estimate energy consumption cost. The PMM is a comprehensive model that takes many important quantities into account, such as DC power consumption, data center battery bank charging/discharging, backup generation operation during power outages, and power transactions between the main grid and the SG. Second, renewable energy, such as wind energy, is integrated with the SG to minimize DC energy consumption cost. Third, forecasting algorithms are introduced in the PMM to predict DC power consumption, wind energy generation, and main grid power availability for the SG. The forecasting algorithms are employed for day-ahead and week-ahead prediction horizons; their purpose is to manage power generation and consumption and reduce energy prices. Fourth, we formulate a chargeback model for DC customers to calculate the cost of on-demand cloud services. The DC energy consumption cost estimated through the PMM is integrated with the other operational and capital expenditures to calculate the per-server utilization cost for DC customers. Finally, the effectiveness of the proposed models is evaluated on real-world data sets.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Gong. "Data and application migration in cloud based data centers --architectures and techniques." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41078.

Full text
Abstract:
Computing and communication have continued to change the way we run business, the way we learn, and the way we live. The rapid evolution of computing technology has also expedited the growth of digital data, the workload of services, and the complexity of applications. Today, the cost of managing storage hardware ranges from two to ten times the acquisition cost of the storage hardware. We see an increasing demand for technologies that transfer the management burden from humans to software. Data migration and application migration are popular technologies that enable computing and data storage management to become autonomic and self-managing. In this dissertation, we examine important issues in designing and developing scalable architectures and techniques for efficient and effective data migration and application migration. The first contribution is an investigation of automated data migration across multi-tier storage systems. The significant I/O improvement of Solid State Disks (SSDs) over traditional rotational hard disks (HDDs) motivates the integration of SSDs into the existing storage hierarchy for enhanced performance. We developed an adaptive look-ahead data migration approach to effectively integrate SSDs into a multi-tiered storage architecture. When the fast and expensive SSD tier stores high-temperature data (hot data) while relatively low-temperature data (cold data) is placed in the HDD tier, one important functionality is managing the migration of data as its access pattern changes from hot to cold and vice versa. For example, daytime workloads in typical banking applications can differ dramatically from nighttime workloads. We designed and implemented an adaptive look-ahead data migration model. A unique feature of our automated migration approach is its ability to dynamically adapt the data migration schedule to achieve optimal migration effectiveness by taking into account application-specific characteristics and I/O profiles as well as workload deadlines. Our experiments over real system traces show that the basic look-ahead data migration model is effective in improving system resource utilization, and the adaptive look-ahead migration model is more efficient for continuously improving and tuning the performance and scalability of multi-tier storage systems. The second main contribution of this dissertation addresses the challenge of ensuring reliability and balancing loads across a network of computing nodes managed in a decentralized service computing system. Considering location-based services for geographically distributed mobile users, continuous and massive service request workloads pose significant technical challenges for guaranteeing scalable and reliable service provision. We design and develop a decentralized service computing architecture, called Reliable GeoGrid, with two unique features. First, we develop a distributed workload migration scheme with controlled replication, which utilizes a shortcut-based optimization to increase the resilience of the system against various node failures and network partition failures. Second, we devise a dynamic load balancing technique to scale the system in anticipation of unexpected workload changes.
Our experimental results show that the Reliable GeoGrid architecture is highly scalable under changing service workloads with moving hotspots and highly reliable in the presence of massive node failures. The third research thrust of this dissertation studies the process of migrating applications from local physical data centers to the cloud. We design migration experiments, study the error types, and build an error model. Based on the analysis and observations from the migration experiments, we propose the CloudMig system, which provides both configuration validation and installation automation, effectively reducing configuration errors and installation complexity. In this dissertation, I provide an in-depth discussion of the principles of migration and its applications in improving data storage performance, balancing service workloads, and adapting to cloud platforms.
APA, Harvard, Vancouver, ISO, and other styles
4

Penumetsa, Swetha. "A comparison of energy efficient adaptation algorithms in cloud data centers." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17374.

Full text
Abstract:
Context: In recent years, cloud computing has gained a wide range of attention in both industry and academia, as cloud services offer a pay-per-use model and the need for reliability and computing power keeps rising with the immense growth of cloud-based companies and the continuous expansion of their scale. However, the rise in cloud computing users can have a negative impact on energy consumption, as cloud data centers consume a huge amount of energy overall. In order to minimize the energy consumption of virtual data centers, researchers have proposed various energy-efficient resource management strategies. Dynamic Virtual Machine consolidation is a prominent technique and an active research area in recent times, used to improve resource utilization and minimize the electric power consumption of a data center. This technique monitors data center utilization, identifies overloaded and underloaded hosts, then migrates some or all Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement, and switches underloaded hosts to sleep mode. Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms to save energy consumption in cloud data centers, identify the best-performing algorithm, and compare the performance of the proposed heuristics with existing ones. Methods: Initially, a literature review is conducted to identify and obtain knowledge about the adaptive heuristic algorithms previously proposed for energy-aware VM consolidation, and to find the metrics used to measure the performance of heuristic algorithms. Based on this knowledge, we propose 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristics for VM placement, which help minimize both energy consumption and the overall Service Level Agreement (SLA) violation of a cloud data center. Further, an experiment is conducted to measure the performance of all proposed heuristic algorithms. We used the CloudSim simulation toolkit for the modeling, simulation, and implementation of the proposed heuristics, and evaluated the proposed algorithms using real workload traces of PlanetLab VMs. Results: The results were measured using the metrics energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), SLA violation Time per Active Host (SLATAH), SLA Violation (SLAV = PDM × SLATAH), and Energy consumption and SLA Violation (ESV). For all four categories of VM consolidation, we compared the performance of the proposed heuristics with each other and present the best heuristic algorithm in each category. We also compared the performance of the proposed heuristic algorithms with existing heuristics identified in the literature and report how many of the newly proposed algorithms work more efficiently than the existing ones. This comparative analysis is done using a T-test and Cohen's d effect size.
From the comparison of all proposed algorithms, we conclude that the Mean Absolute Deviation around the median (MADmedian) host overload detection algorithm combined with Maximum requested RAM VM selection (MaxR) and Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm combined with MaxR VM selection and Modified Last Fit Decreasing VM placement (MLFD), respectively performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to Energy consumption and SLA Violation (ESV). However, in the comparison between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using MFFD and MLFD VM placement respectively, performed more efficiently than the existing (baseline) heuristic algorithms considered in this study. Conclusions: This thesis presents novel heuristic algorithms that are useful for minimizing both energy consumption and SLA violation in virtual data centers. It presents 23 new combinations of proposed host overload detection and VM selection algorithms using MFFD VM placement and 21 combinations using MLFD VM placement, which consume the minimum amount of energy with minimal SLA violation compared to the existing algorithms. It gives scope for future research on improving resource utilization and minimizing the electric power consumption of a data center. This study can be extended further by implementing the work on other cloud software platforms and developing more efficient algorithms for all four categories of VM consolidation.
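The composite metrics named in this abstract combine simply; the sketch below shows one plausible way to compute them from per-host and per-migration statistics. The input structures and sample numbers are assumptions for illustration; only the SLAV = PDM × SLATAH and ESV = energy × SLAV relationships are taken from the abstract.

```python
# Sketch of the composite VM-consolidation metrics referenced above:
#   SLATAH = average fraction of time an active host spends at 100% CPU,
#   PDM    = average performance degradation caused by VM migrations,
#   SLAV   = PDM * SLATAH,  ESV = energy * SLAV.
# Input structures and sample values are illustrative assumptions.

def slatah(hosts):
    """hosts: list of (seconds_at_full_utilization, seconds_active) per host."""
    ratios = [full / active for full, active in hosts if active > 0]
    return sum(ratios) / len(ratios)

def pdm(migrations):
    """migrations: list of (capacity_lost_to_migration, requested_capacity) per migration."""
    ratios = [lost / requested for lost, requested in migrations if requested > 0]
    return sum(ratios) / len(ratios)

def esv(energy_kwh, hosts, migrations):
    slav = pdm(migrations) * slatah(hosts)
    return energy_kwh * slav, slav

hosts = [(1200.0, 86400.0), (3600.0, 86400.0)]   # two hosts observed over one day
migrations = [(50.0, 1000.0), (80.0, 2000.0)]    # two VM migrations
energy = 150.0                                   # kWh consumed by the data center
esv_value, slav = esv(energy, hosts, migrations)
print(f"SLAV={slav:.6f}, ESV={esv_value:.4f}")
```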
APA, Harvard, Vancouver, ISO, and other styles
5

Takouna, Ibrahim. "Energy-efficient and performance-aware virtual machine management for cloud data centers." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/texte_eingeschraenkt_verlag/2014/7239/.

Full text
Abstract:
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: a proactive resource provisioning technique based on robust optimization to increase the hosts' availability for hosting new VMs while minimizing idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, which enhances host reliability by avoiding failures during power state changes. The proposed technique exploits a range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for predicting workloads with high fluctuations in the short term. The range prediction is implemented in two ways: standard deviation and median absolute deviation. The range is changed based on an adaptive confidence window to cope with workload fluctuations. A robust VM consolidation for efficient energy and performance management to achieve equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques, which also contributes to a reduction in the energy consumed by the network infrastructure; additionally, it reduces SLA violations and the number of power state changes. A generic model of a data center network to simulate communication delay and its impact on VM performance, as well as network energy consumption, together with a generic model of a server memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating memory delay and its influence on VM performance and memory energy consumption. Communication-aware and energy-efficient consolidation for parallel applications to enable the dynamic discovery of communication patterns and reschedule VMs using migration based on the discovered communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the network utilization of VMs instead of using information from the hosts' virtual switches or initiation from the VMs. The results show that our proposed approach reduces the network's average utilization, achieves energy savings due to reducing the number of active switches, and provides better VM performance compared to CPU-based placement.
Memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance the memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their utilization of the memory bus, using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
APA, Harvard, Vancouver, ISO, and other styles
6

Yanggratoke, Rerngvit. "Contributions to Performance Modeling and Management of Data Centers." Licentiate thesis, KTH, Kommunikationsnät, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129296.

Full text
Abstract:
Over the last decade, Internet-based services, such as electronic-mail, music-on-demand, and social-network services, have changed the ways we communicate and access information. Usually, the key functionality of such a service is in backend components, which are located in a data center, a facility for hosting computing systems and related equipment. This thesis focuses on two fundamental problems related to the management, dimensioning, and provisioning of such backend components. The first problem centers around resource allocation for a large-scale cloud environment. Data centers have become very large; they often contain hundreds of thousands of machines and applications. In such a data center, resource allocation cannot be efficiently achieved through a traditional management system that is centralized in nature. Therefore, a more scalable solution is needed. To address this problem, we have developed and evaluated a scalable and generic protocol for resource allocation. The protocol is generic in the sense that it can be instantiated for different management objectives through objective functions. The protocol jointly allocates CPU, memory, and network resources to applications that are hosted by the cloud. We prove that the protocol converges to a solution, if an objective function satisfies a certain property. We perform a simulation study of the protocol for realistic scenarios. Simulation results suggest that the quality of the allocation is independent of the system size, up to 100,000 machines and applications, for the management objectives considered. The second problem is related to performance modeling of a distributed key-value store. The specific distributed key-value store we focus on in this thesis is the Spotify storage system. Understanding the performance of the Spotify storage system is essential for achieving a key quality of service objective, namely that the playback latency of a song is sufficiently low. To address this problem, we have developed and evaluated models for predicting the performance of a distributed key-value store for a lightly loaded system. First, we developed a model that allows us to predict the response time distribution of requests. Second, we modeled the capacity of the distributed key-value store for two different object allocation policies. We evaluate the models by comparing model predictions with measurements from two different environments: our lab testbed and a Spotify operational environment. We found that the models are accurate in the sense that the prediction error, i.e., the difference between the model predictions and the measurements from the real systems, is at most 11%.

APA, Harvard, Vancouver, ISO, and other styles
7

Peloso, Pietro. "Possibili soluzioni per garantire qos nelle comunicazioni inter-data centers in ambienti cloud computing." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6205/.

Full text
Abstract:
In the present work, starting from the definition of some key points of the concept of cloud computing, particular attention is paid to the performance issues of cloud environments and to the various proposals currently available on the market, together with their limitations. After illustrating them in detail, the different proposals are compared with one another in order to highlight, for each of them, both the positive aspects and the critical points.
APA, Harvard, Vancouver, ISO, and other styles
8

Atchukatla, Mahammad suhail. "Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17221.

Full text
Abstract:
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization. Demand for cloud data centers has therefore increased tremendously over time, resulting in significantly higher resource utilization in the data centers. In this thesis work, research was carried out on optimizing energy consumption by packing virtual machines in the data center. The CloudSim simulator was used for evaluating bin-packing algorithms, and for the practical implementation the OpenStack cloud computing environment was chosen as the platform for this research. Objectives: In this research, our objectives are as follows:
Perform a simulation of the algorithms in the CloudSim simulator; estimate and compare the energy consumption of different packing algorithms; and design an OpenStack testbed to implement the bin-packing algorithm. Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit, and Enhanced Best Fit algorithms. We design a heuristic model for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment. Results: In most cases the Enhanced Best Fit algorithm gives the better results. Results are obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work, and their comparison indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements. Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and minimize the energy consumption of the physical machines by shutting down unused ones. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
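A compact first-fit-decreasing sketch of the packing step is given below. It uses a single CPU dimension for brevity; the host capacity and VM demands are illustrative assumptions, whereas the thesis works with CloudSim/OpenStack models and several algorithm variants (including the Enhanced Best Fit mentioned above).

```python
# First-fit-decreasing VM placement: sort VMs by demand, place each on the first host
# with enough remaining capacity. One resource dimension (CPU cores) is used here;
# the capacities and demands are illustrative assumptions.

def first_fit_decreasing(vm_demands, host_capacity):
    """Return a list of hosts, each a list of the VM demands packed onto it."""
    free = []          # remaining capacity per powered-on host
    placement = []
    for demand in sorted(vm_demands, reverse=True):
        for i, remaining in enumerate(free):
            if demand <= remaining:
                free[i] -= demand
                placement[i].append(demand)
                break
        else:          # no existing host fits: power on a new one
            free.append(host_capacity - demand)
            placement.append([demand])
    return placement

vms = [2.0, 1.5, 3.0, 0.5, 2.5, 1.0]       # CPU cores requested per VM (assumed)
print(first_fit_decreasing(vms, host_capacity=4.0))
# Fewer powered-on hosts generally means lower idle energy; consolidation policies pair
# this with live migration and switching empty hosts to sleep mode or shutting them down.
```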
APA, Harvard, Vancouver, ISO, and other styles
9

Pipkin, Everest R. "It Was Raining in the Data Center." Research Showcase @ CMU, 2018. http://repository.cmu.edu/theses/138.

Full text
Abstract:
Stemming from a 2011 incident inside a Facebook data facility in which hyper-cooled air formed a literal (if somewhat transient) rain cloud in the stacks, It Was Raining in the Data Center examines ideas of non-places and supermodernity applied to contemporary network infrastructure. It argues that the problem of the rain cloud is as much a problem of psychology as it is a problem of engineering. Although humidity management is a predictable snag for any data center, the cloud was a surprise; a self-inflicted side effect of a strategy of distance. The rain cloud was a result of the same rhetoric of ephemerality that makes it easy to imagine the inside of a data center to be both everywhere and nowhere. This conceit of internet data being placeless shares roots with Marc Augé's idea of non-places (airports, highways, malls), which are predicated on the qualities of excess and movement. Without long-term inhabitants, these places fail to tether themselves to their locations, instead existing as markers of everywhere. Such a premise allows the internet to exist as an other-space that is not conceptually beholden to the demands of energy and landscape. It also liberates the idea of 'the network' from a similar history of industry. However, the network is deeply rooted in place, as well as in industry and transit. Examining the prevalence of network overlap in American fiber-optic cabling, it becomes easy to trace routes of cables along major US freight train lines and the US interstate highway system. The historical origin of this network technology is in weaponization and defense, from highways as a nuclear-readiness response to ARPANET's Pentagon-based funding. Such a linkage with the military continues today, with data centers likely to be situated near military installations, sharing similar needs for electricity, network connectivity, fair climate, space, and invisibility. We see the repetition of militarized tropes across data structures. Fiber-optic network locations are kept secret; servers are housed in cold-war bunkers; data centers nest next to military black sites. Similarly, Augé reminds us that non-places are a particular target of terrorism, populated as they are with cars, trains, drugs and planes that turn into weapons. When the network itself is at threat of weaponization, the effect is an ambient and ephemeral fear; a paranoia made of over-connection.
APA, Harvard, Vancouver, ISO, and other styles
10

Bergström, Rasmus. "Predicting Container-Level Power Consumption in Data Centers using Machine Learning Approaches." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79416.

Full text
Abstract:
Due to the ongoing climate crisis, reducing waste and carbon emissions has become a hot topic in many fields of study. Cloud data centers contribute a large portion of the world's energy consumption. In this work, methodologies are developed using machine learning algorithms to improve prediction of the energy consumption of a container in a data center. The goal is to share this information with the user ahead of time, so that they can make educated decisions about their environmental footprint. This work differentiates itself through its sole focus on optimizing prediction, as opposed to other approaches in the field where energy modeling and prediction have been studied as a means of building advanced scheduling policies in data centers. In this thesis, a qualitative comparison between various machine learning approaches to energy modeling and prediction is put forward. These approaches include Linear, Polynomial Linear, and Polynomial Random Forest Regression, as well as a Genetic Algorithm, LSTM Neural Networks, and Reinforcement Learning. The best results were obtained using Polynomial Random Forest Regression, which produced a Mean Absolute Error of 26.48% when run against data center metrics gathered after the model was built. This prediction engine was then integrated into a proof-of-concept application as an educative tool to estimate which metrics of a cloud job have what impact on container power consumption.
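One way to read "Polynomial Random Forest Regression" is a polynomial feature expansion of container metrics followed by a random forest fit; the scikit-learn sketch below shows that pipeline on synthetic data. The features, the synthetic power formula, and the hyperparameters are assumptions for illustration, not the thesis's data-center traces or tuned model.

```python
# Sketch: expand container metrics with polynomial interaction terms, then fit a
# random forest regressor on them (scikit-learn). Data and parameters are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n = 2000
cpu = rng.uniform(0, 1, n)      # container CPU share
mem = rng.uniform(0, 8, n)      # GiB of memory in use
net = rng.uniform(0, 100, n)    # MB/s of network traffic
# Synthetic "ground truth" container power (watts) with an interaction term and noise.
power = 20 + 90 * cpu + 1.5 * mem + 0.2 * net + 30 * cpu * mem / 8 + rng.normal(0, 2, n)

X = np.column_stack([cpu, mem, net])
X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=0)

model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    RandomForestRegressor(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)
print("MAE (watts):", mean_absolute_error(y_test, model.predict(X_test)))
```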
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Cloud data centers"

1

K, Kokula Krishna Hari, ed. An Efficient Load Balancing Algorithm for virtualized Cloud Data Centers: ICCCEG 2014. Vietnam: Association of Scientists, Developers and Faculties, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beard, Haley. Cloud computing best practices: For managing and measuring processes for on-demand computing, applications and data centers in the cloud with SLAs. Brisbane, Australia: Art of Service, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Orr, Malcolm, and Greg Page, eds. Cloud computing: Automating the virtualized data center. Indianapolis, IN: Cisco Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Lei, and Le Chen. Cloud Data Center Network Architectures and Technologies. First edition. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003143185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Franklin, Curtis, ed. Cloud computing: Technologies and strategies of the ubiquitous data center. New York: CRC, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tsai, Linjiun, and Wanjiun Liao. Virtualized Cloud Data Center Networks: Issues in Resource Management. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-32632-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chee, Brian J. S. Yun ji suan: Wu chu bu zai de shu ju zhong xin = Cloud computing: technologies and strategies of the ubiquitous data center. Beijing: Guo fang gong ye chu ban she, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Green computing: Tools and techniques for saving energy, money, and resources. Boca Raton: CRC Press, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cloud Data Centers and Cost Modeling. Elsevier, 2015. http://dx.doi.org/10.1016/c2013-0-23202-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bugwadia, Jim, Zeeshan Naseh, and Habib Madani. Transforming Data Centers to Public and Private Cloud. John Wiley & Sons, Incorporated, 2026.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Cloud data centers"

1

Mu, Shuai, Maomeng Su, Pin Gao, Yongwei Wu, Keqin Li, and Albert Y. Zomaya. "Cloud Storage over Multiple Data Centers." In Handbook on Data Centers, 691–725. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2092-1_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jin, Xin, and Yu-Kwong Kwok. "Cloud Resource Pricing Under Tenant Rationality." In Handbook on Data Centers, 583–605. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2092-1_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Xiong, Huanhuan, Christos Filelis-Papadopoulos, Dapeng Dong, Gabriel G. Castañé, Stefan Meyer, and John P. Morrison. "Energy-Efficient Servers and Cloud." In Hardware Accelerators in Data Centers, 163–80. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92792-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmed, Kishwar, Shaolei Ren, Yuxiong He, and Athanasios V. Vasilakos. "Online Resource Management for Carbon-Neutral Cloud Computing." In Handbook on Data Centers, 607–30. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2092-1_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Birman, Kenneth P. "The Structure of Cloud Data Centers." In Guide to Reliable Distributed Systems, 145–83. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2416-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Bingwei, and Yu Chen. "Auditing for Data Integrity and Reliability in Cloud Storage." In Handbook on Data Centers, 535–59. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2092-1_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xie, Tao, and Haibao Chen. "AutoCSD: Automatic Cloud System Deployment in Data Centers." In Cloud Computing and Big Data, 72–85. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-28430-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

De Prisco, R., A. De Santis, and M. Mannetta. "Reducing Costs in HSM-Based Data Centers." In Green, Pervasive, and Cloud Computing, 3–14. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57186-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kliazovich, Dzmitry, Pascal Bouvry, Fabrizio Granelli, and Nelson L. S. da Fonseca. "Energy Consumption Optimization in Cloud Data Centers." In Cloud Services, Networking, and Management, 191–215. Hoboken, NJ: John Wiley & Sons, Inc, 2015. http://dx.doi.org/10.1002/9781119042655.ch8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cherkaoui, Omar, and Ramesh Menon. "Virtualization, Cloud, SDN, and SDDC in Data Centers." In Data Center Handbook, 389–400. Hoboken, NJ: John Wiley & Sons, Inc, 2014. http://dx.doi.org/10.1002/9781118937563.ch20.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Cloud data centers"

1

Xu, Jielong, Jian Tang, Kevin Kwiat, Weiyi Zhang, and Guoliang Xue. "Survivable Virtual Infrastructure Mapping in Virtualized Data Centers." In 2012 IEEE 5th International Conference on Cloud Computing (CLOUD). IEEE, 2012. http://dx.doi.org/10.1109/cloud.2012.100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salehi, Mohsen Amini, P. Radha Krishna, Krishnamurty Sai Deepak, and Rajkumar Buyya. "Preemption-Aware Energy Management in Virtualized Data Centers." In 2012 IEEE 5th International Conference on Cloud Computing (CLOUD). IEEE, 2012. http://dx.doi.org/10.1109/cloud.2012.147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Linquan, Xunrui Yin, Zongpeng Li, and Chuan Wu. "Hierarchical Virtual Machine Placement in Modular Data Centers." In 2015 IEEE 8th International Conference on Cloud Computing (CLOUD). IEEE, 2015. http://dx.doi.org/10.1109/cloud.2015.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tian, Wenhong. "Adaptive Dimensioning of Cloud Data Centers." In 2009 International Conference on Dependable, Autonomic and Secure Computing (DASC). IEEE, 2009. http://dx.doi.org/10.1109/dasc.2009.58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Seungjoon, Manish Purohit, and Barna Saha. "Firewall placement in cloud data centers." In SOCC '13: ACM Symposium on Cloud Computing. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2523616.2525960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Karthik, C., Mayank Sharma, Kirti Maurya, and K. Chandrasekaran. "Green intelligence for cloud data centers." In 2016 3rd International Conference on Recent Advances in Information Technology (RAIT). IEEE, 2016. http://dx.doi.org/10.1109/rait.2016.7507965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Maltz, David A. "Challenges in cloud scale data centers." In the ACM SIGMETRICS/international conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2465529.2465767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Maswood, Mirza Mohd Shahriar, and Deep Medhi. "Optimal connectivity to cloud data centers." In 2017 IEEE 6th International Conference on Cloud Networking (CloudNet). IEEE, 2017. http://dx.doi.org/10.1109/cloudnet.2017.8071542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sakamoto, Takumi, Hiroshi Yamada, Hikaru Horie, and Kenji Kono. "Energy-Price-Driven Request Dispatching for Cloud Data Centers." In 2012 IEEE 5th International Conference on Cloud Computing (CLOUD). IEEE, 2012. http://dx.doi.org/10.1109/cloud.2012.115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hans, Ronny, Ulrich Lampe, and Ralf Steinmetz. "QoS-Aware, Cost-Efficient Selection of Cloud Data Centers." In 2013 IEEE 6th International Conference on Cloud Computing (CLOUD). IEEE, 2013. http://dx.doi.org/10.1109/cloud.2013.113.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Cloud data centers"

1

Gurieiev, Viktor, Yulii Kutsan, Anna Iatsyshyn, Andrii Iatsyshyn, Valeriia Kovach, Evgen Lysenko, Volodymyr Artemchuk, and Oleksandr Popov. Simulating Systems for Advanced Training and Professional Development of Energy Specialists in Power Sector. [б. в.], November 2020. http://dx.doi.org/10.31812/123456789/4456.

Full text
Abstract:
The crisis of the system of professional development and personnel training in the energy sector exists not only in Ukraine but also all over the world. The article describes the concept of development and functioning of the industry system of personnel training in the energy sector of Ukraine. The importance of using modern web-oriented technologies to improve the skills of operational and dispatching personnel in the energy sector of Ukraine is substantiated. The methods of distributed power system operating modes modelling are presented. Development and software tools for the construction of distributed simulating systems and particular features of cloud technologies application for the creation of a virtual training centers network in the energy sector, as well as the ways to automate the process of simulating scenarios development, are described. The experience of introducing remote training courses for energy specialists and remote web-based training simulators based on a comprehensive model of the energy system of Ukraine is presented. An important practical aspect of the research is the application of software and data support for the development of personnel key competencies in the energy sector for rapid recognition of accidents and, if necessary, accident management. This will allow them to acquire knowledge and practical skills to solve the problems of analysis, modelling, forecasting, and monitoring data visualization of large power systems operating modes.
APA, Harvard, Vancouver, ISO, and other styles
2

Defense Business Board, Washington, DC. DoD Information Technology Modernization: A Recommended Approach to Data Center Consolidation and Cloud Computing. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada563977.

Full text
APA, Harvard, Vancouver, ISO, and other styles