
Dissertations / Theses on the topic 'Data centers management'



Consult the top 50 dissertations / theses for your research on the topic 'Data centers management.'


1

Tudoran, Radu-Marius. "High-Performance Big Data Management Across Cloud Data Centers." Electronic Thesis or Diss., Rennes, École normale supérieure, 2014. http://www.theses.fr/2014ENSR0004.

Full text
Abstract:
The easily accessible computing power offered by cloud infrastructures, coupled with the "Big Data" revolution, is increasing the scale and speed at which data analysis is performed. Cloud computing resources for compute and storage are spread across multiple data centers around the world. Enabling fast data transfers becomes especially important in scientific applications where moving the processing close to the data is expensive or even impossible. The main objectives of this thesis are to analyze how clouds can become "Big Data-friendly" and to determine the best options for providing data management services able to meet the needs of applications. In this thesis, we present our contributions to improving the performance of data management for applications running across several geographically distributed data centers. We start with aspects concerning the scale of data processing on a single site, and continue with the development of MapReduce-type solutions that distribute computation across several centers. Then, we present a transfer-service architecture that optimizes the cost-performance ratio of transfers. This service is exploited in the context of real-time data streaming between cloud data centers. Finally, we study the viability, for a cloud provider, of integrating this architecture as a service based on a flexible pricing paradigm, termed "Transfer-as-a-Service".
APA, Harvard, Vancouver, ISO, and other styles
2

Mahmud, A. S. M. Hasan. "Sustainable Resource Management for Cloud Data Centers." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2634.

Full text
Abstract:
In recent years, the demand for data center computing has increased significantly due to the growing popularity of cloud applications and Internet-based services. Today's large data centers host hundreds of thousands of servers, and the peak power rating of a single data center may exceed 100 MW. The combined electricity consumption of global data centers accounts for about 3% of worldwide electricity production, raising serious concerns about their carbon footprint. Utility providers and governments are consistently pressuring data center operators to reduce their carbon footprint and energy consumption. While these operators (e.g., Apple, Facebook, and Google) have taken steps to reduce their carbon footprints (e.g., by installing on-site/off-site renewable energy facilities), they are aggressively looking for new approaches that do not require expensive hardware installation or modification. This dissertation focuses on developing algorithms and systems that improve sustainability in data centers without incurring significant additional operational or setup costs. In the first part, we propose a provably-efficient resource management solution for a self-managed data center that caps and reduces carbon emissions while maintaining satisfactory service performance. Our solution reduces the carbon emission of a self-managed data center to the net-zero level and achieves carbon neutrality. In the second part, we consider minimizing carbon emissions in a hybrid data center infrastructure that includes geographically distributed self-managed and colocation data centers. This segment identifies and addresses the challenges of resource management in a hybrid data center infrastructure and proposes an efficient distributed solution to jointly optimize the workload and resource allocation in both self-managed and colocation data centers. In the final part, we explore sustainable resource management from the cloud service user's point of view.
A cloud service user purchases computing resources (e.g., virtual machines) from the service provider and does not have direct control over the carbon emission of the service provider's data center. Our proposed solution encourages a user to take part in sustainable (both economical and environmental) computing by limiting its spending on cloud resource purchase while satisfying its application performance requirements.
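The carbon-capping idea in this abstract can be illustrated with a toy greedy scheduler. This sketch is not the dissertation's algorithm: the policy (shift deferrable work to the lowest-carbon-intensity hours under an hourly capacity limit) and all numbers are illustrative assumptions.

```python
def carbon_aware_schedule(work_units, intensity, cap_per_hour):
    """Greedily place deferrable work units into the lowest-carbon hours.

    intensity[h] is a hypothetical grid carbon intensity (gCO2/kWh) for
    hour h; at most cap_per_hour units may run in any single hour.
    """
    alloc = [0] * len(intensity)
    remaining = work_units
    # Visit hours from cleanest to dirtiest.
    for h in sorted(range(len(intensity)), key=lambda h: intensity[h]):
        take = min(cap_per_hour, remaining)
        alloc[h] = take
        remaining -= take
        if remaining == 0:
            break
    return alloc

# Hypothetical 4-hour horizon: 6 work units, at most 4 per hour.
intensity = [500, 300, 400, 200]
plan = carbon_aware_schedule(6, intensity, cap_per_hour=4)
emissions = sum(a * i for a, i in zip(plan, intensity))
```

A real carbon-aware scheduler must additionally respect deadlines and performance SLAs, which this sketch deliberately ignores.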
3

Le, Tuan Anh. "Workload prediction for resource management in data centers." Thesis, Umeå universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124985.

Full text
Abstract:
Resource management arranges and allocates resources for computing operations and applications. In large-scale data centers that contain thousands of servers, resource management is critical for efficient operation. Knowing workload characteristics in advance helps us proactively control resources in data centers, yielding benefits such as power savings and improved service performance. Workload prediction can be used, e.g., to decide how many resources to allocate to each application in a data center in the future. The accuracy of workload prediction varies depending on the prediction method used and the characteristics of the workload. In this thesis work, we investigate three different methods: Linear Regression (LR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Nonlinear Autoregressive Network with Exogenous Inputs (NARX). These methods are used to build models of resource consumption such as memory, CPU, and disk. Based on these models, future workload resource usage is predicted, and the accuracy of the prediction is assessed. We analyze a trace from a production cluster at Google, predict resource consumption for different time intervals, and compute the error between predicted and actual values. The results show that NARX gives higher accuracy than ANFIS and LR for one-step-ahead prediction, and that ANFIS provides the best results for multi-step-ahead prediction. Finally, the times to train and re-train LR, ANFIS, and NARX are computed; the running times are short, suggesting that the methods can be used in real-time operation.
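As a toy illustration of the one-step-ahead prediction task described above, the sketch below fits plain linear regression on a single lag. This is not the thesis's LR/ANFIS/NARX implementation, and the utilization trace is made up.

```python
def fit_ar1(trace):
    """Least-squares fit of the lag-1 model y[t] = a * y[t-1] + b."""
    xs, ys = trace[:-1], trace[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = cov / var
    return a, my - a * mx

# Hypothetical CPU-utilization trace (fraction of capacity per interval).
trace = [0.30, 0.32, 0.35, 0.33, 0.36, 0.38, 0.37, 0.40]
a, b = fit_ar1(trace)
forecast = a * trace[-1] + b  # one-step-ahead prediction
```

Multi-step prediction would feed each forecast back in as the next input, which is where nonlinear models such as NARX and ANFIS typically pull ahead of plain linear regression.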
4

Lu, Lei. "Effective Resource and Workload Management in Data Centers." W&M ScholarWorks, 2014. https://scholarworks.wm.edu/etd/1539623637.

Full text
Abstract:
The increasing demand for storage, computation, and business continuity has driven the growth of data centers. Managing data centers efficiently is a difficult task because of the wide variety of data center applications, their ever-changing intensities, and the fact that application performance targets may differ widely. Server virtualization has been a game-changing technology for IT, providing the possibility to support multiple virtual machines (VMs) simultaneously. This dissertation focuses on how virtualization technologies can be utilized to develop new tools for maintaining high resource utilization, achieving high application performance, and reducing the cost of data center management.

For multi-tiered applications, bursty workload traffic can significantly deteriorate performance. This dissertation proposes an admission control algorithm, AWAIT, for handling overload conditions in multi-tier web services. AWAIT places requests of accepted sessions on hold and refuses to admit new sessions when the system is in a sudden workload surge. To meet the service-level objective, AWAIT serves the requests in the blocking queue with high priority. The size of the queue is dynamically determined according to the workload burstiness.

Many admission control policies are triggered by instantaneous measurements of system resource usage, e.g., CPU utilization. This dissertation first demonstrates that directly measuring virtual machine resource utilization with standard tools cannot always lead to accurate estimates. A directed factor graph (DFG) model is defined to model the dependencies among multiple types of resources across physical and virtual layers.

Virtualized data centers enable sharing of resources among hosted applications to achieve high resource utilization. However, it is difficult to satisfy application SLOs on a shared infrastructure, as application workload patterns change over time. AppRM, an automated management system, not only allocates the right amount of resources to applications for their performance targets but also adjusts to dynamic workloads using an adaptive model.

Server consolidation is one of the key applications of server virtualization. This dissertation proposes a VM consolidation mechanism, first by extending the fair load balancing scheme for multi-dimensional vector scheduling, and then by using a queueing network model to capture the service contentions for a particular virtual machine placement.
5

Sarker, Tusher Kumer. "Cost-efficient virtual machine management in data centers." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/94743/1/Tusher%20Kumer_Sarker_Thesis.pdf.

Full text
Abstract:
Virtual Machine (VM) management is an obvious need in today's data centers for various management activities and is accomplished in two phases: finding an optimal VM placement plan, and implementing that placement through live VM migrations. These phases give rise to two research problems: the VM placement problem (VMPP) and the VM migration scheduling problem (VMMSP). This research proposes and develops several evolutionary and heuristic algorithms to address the VMPP and VMMSP. Experimental results show the effectiveness and scalability of the proposed algorithms. Finally, a VM management framework is proposed and developed to automate VM management in a cost-efficient way.
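A common baseline for the VM placement problem (VMPP) mentioned above is first-fit decreasing bin packing. The thesis itself develops evolutionary and heuristic algorithms; this baseline and the demand figures are illustrative assumptions, not the author's method.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Assign each VM (by CPU demand) to the first host with room,
    considering VMs in decreasing demand order. Returns per-host VM lists."""
    hosts = []  # list of per-host demand lists
    free = []   # remaining capacity per host
    for d in sorted(vm_demands, reverse=True):
        for i, f in enumerate(free):
            if d <= f:
                hosts[i].append(d)
                free[i] -= d
                break
        else:  # no existing host fits: open a new one
            hosts.append([d])
            free.append(host_capacity - d)
    return hosts

# Hypothetical CPU demands (cores) packed onto 8-core hosts.
placement = first_fit_decreasing([4, 3, 3, 2, 2, 1], host_capacity=8)
```

Here the six VMs (15 cores in total) fit on two hosts, which matches the lower bound of ceil(15/8); metaheuristics such as the thesis's evolutionary algorithms aim to beat this kind of greedy plan on harder, multi-dimensional instances.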
6

Rincon, Mateus Cesar Augusto. "Dynamic resource allocation for energy management in data centers." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3182.

Full text
7

Yanggratoke, Rerngvit. "Contributions to Performance Modeling and Management of Data Centers." Licentiate thesis, KTH, Kommunikationsnät, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129296.

Full text
Abstract:
Over the last decade, Internet-based services, such as electronic-mail, music-on-demand, and social-network services, have changed the ways we communicate and access information. Usually, the key functionality of such a service is in backend components, which are located in a data center, a facility for hosting computing systems and related equipment. This thesis focuses on two fundamental problems related to the management, dimensioning, and provisioning of such backend components. The first problem centers around resource allocation for a large-scale cloud environment. Data centers have become very large; they often contain hundreds of thousands of machines and applications. In such a data center, resource allocation cannot be efficiently achieved through a traditional management system that is centralized in nature. Therefore, a more scalable solution is needed. To address this problem, we have developed and evaluated a scalable and generic protocol for resource allocation. The protocol is generic in the sense that it can be instantiated for different management objectives through objective functions. The protocol jointly allocates CPU, memory, and network resources to applications that are hosted by the cloud. We prove that the protocol converges to a solution, if an objective function satisfies a certain property. We perform a simulation study of the protocol for realistic scenarios. Simulation results suggest that the quality of the allocation is independent of the system size, up to 100,000 machines and applications, for the management objectives considered. The second problem is related to performance modeling of a distributed key-value store. The specific distributed key-value store we focus on in this thesis is the Spotify storage system. Understanding the performance of the Spotify storage system is essential for achieving a key quality of service objective, namely that the playback latency of a song is sufficiently low. 
To address this problem, we have developed and evaluated models for predicting the performance of a distributed key-value store for a lightly loaded system. First, we developed a model that allows us to predict the response time distribution of requests. Second, we modeled the capacity of the distributed key-value store for two different object allocation policies. We evaluate the models by comparing model predictions with measurements from two different environments: our lab testbed and a Spotify operational environment. We found that the models are accurate in the sense that the prediction error, i.e., the difference between the model predictions and the measurements from the real systems, is at most 11%.


8

Somani, Ankit. "Advanced thermal management strategies for energy-efficient data centers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/26527.

Full text
Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Joshi, Yogendra; Committee Member: Ghiaasiaan, Mostafa; Committee Member: Schwan, Karsten. Part of the SMARTech Electronic Thesis and Dissertation Collection.
9

Kekelishvili, Rebecca. "DHISC : Disk Health Indexing System for Centers of Data Management." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113181.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-88).
If we want reliable data centers, we must improve reliability at the lowest level of data storage: the disk level. To improve reliability, we need to convert storage systems from reactive mechanisms that handle disk failures into proactive mechanisms that predict and address failures. Because the definition of disk failure is specific to each customer rather than defined by a standard, we developed a relative disk-health metric and propose a customer-oriented disk-maintenance pipeline. We designed a program that processes data collected from data center disks into a format that is easy to analyze with machine learning. We then used a neural network to recognize disks showing signs of oncoming failure with 95.4-98.7% accuracy, and used the network's output to rank the most and least reliable disks in the data center, enabling customers to perform bulk disk maintenance and decreasing system downtime.
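The "health indexing" idea above (score each disk, then rank for bulk maintenance) can be sketched with a fixed logistic scorer standing in for the trained neural network. The weights, feature choice, and disk data below are invented for illustration only.

```python
import math

# Hypothetical feature order: [reallocated sectors, pending sectors,
# power-on hours / 100]; weights and bias are made-up stand-ins for a
# trained model.
WEIGHTS = [0.8, 1.2, 0.01]
BIAS = -3.0

def failure_score(features):
    """Logistic score in (0, 1): higher means more likely to fail soon."""
    z = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

disks = {
    "disk-a": [0, 0, 100],
    "disk-b": [6, 2, 150],
    "disk-c": [2, 0, 120],
}

# Rank least healthy first, as a bulk-maintenance worklist.
ranked = sorted(disks, key=lambda d: failure_score(disks[d]), reverse=True)
```

The relative ranking, not the absolute score, is what drives the customer-oriented maintenance pipeline: the operator services disks from the top of the list until the maintenance budget is spent.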
10

Kundu, Sajib. "Improving Resource Management in Virtualized Data Centers using Application Performance Models." FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/874.

Full text
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; administrators, in turn, will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
11

Feller, Eugen. "Autonomic and Energy-Efficient Management of Large-Scale Virtualized Data Centers." PhD thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00785090.

Full text
Abstract:
Large-scale virtualized data centers require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges this thesis provides four main contributions. The first one proposes Snooze, a novel Infrastructure-as-a-Service (IaaS) cloud management system, which is designed to scale across many thousands of servers and virtual machines (VMs) while being easy to configure, highly available, and energy efficient. For scalability, Snooze performs distributed VM management based on a hierarchical architecture. To support ease of configuration and high availability, Snooze implements self-configuring and self-healing features. Finally, for energy efficiency, Snooze integrates a holistic energy management approach via VM resource (i.e., CPU, memory, network) utilization monitoring, underload/overload detection and mitigation, VM consolidation (by implementing a modified version of the Sercon algorithm), and power management to transition idle servers into a power-saving mode. A highly modular Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. Results show that: (i) distributed VM management does not impact submission time; (ii) fault tolerance mechanisms do not impact application performance; and (iii) the system scales well with an increasing number of resources, thus making it suitable for managing large-scale data centers. We also show that the system is able to dynamically scale the data center's energy consumption with its utilization, allowing it to conserve substantial amounts of power with only limited impact on application performance. Snooze is open-source software under the GPLv2 license. The second contribution is a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. ACO is interesting for VM placement due to its polynomial worst-case time complexity, close-to-optimal solutions, and ease of parallelization.
Simulation results show that while the scalability of the current algorithm implementation is limited to a smaller number of servers and VMs, the algorithm outperforms the evaluated First-Fit Decreasing greedy approach in terms of the number of required servers and computes close to optimal solutions. In order to enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based consolidation algorithm; (ii) a fully decentralized consolidation system based on an unstructured peer-to-peer network. The key idea is to apply consolidation only in small, randomly formed neighbourhoods of servers. We evaluated our approach by emulation on the Grid'5000 testbed using two state-of-the-art consolidation algorithms (i.e. Sercon and V-MAN) and our ACO-based consolidation algorithm. Results show our system to be scalable as well as to achieve a data center utilization close to the one obtained by executing a centralized consolidation algorithm.
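A minimal, toy version of ACO-style VM placement might look like the sketch below. This is not Snooze's algorithm: the pheromone model, heuristic bias, parameters, and VM loads are all invented for illustration.

```python
import random

def aco_place(vm_loads, capacity, n_ants=20, n_iters=30, rho=0.5, seed=1):
    """Toy ant-colony VM placement: minimize the number of hosts used.

    tau[i][j] is the pheromone for putting VM i on host slot j; each ant
    builds a feasible assignment, and the best-so-far solution is
    reinforced after each iteration.
    """
    rng = random.Random(seed)
    n = len(vm_loads)
    tau = [[1.0] * n for _ in range(n)]
    best_used, best_assign = None, None
    for _ in range(n_iters):
        for _ant in range(n_ants):
            loads = [0.0] * n
            assign = []
            for i, d in enumerate(vm_loads):
                feasible = [j for j in range(n) if loads[j] + d <= capacity]
                # Heuristic bias toward already-loaded hosts (tighter packing).
                weights = [tau[i][j] * (1.0 + loads[j]) for j in feasible]
                j = rng.choices(feasible, weights=weights)[0]
                loads[j] += d
                assign.append(j)
            used = len(set(assign))
            if best_used is None or used < best_used:
                best_used, best_assign = used, assign
        # Evaporate everywhere, then deposit on the best-so-far solution.
        for row in tau:
            for j in range(n):
                row[j] *= (1.0 - rho)
        for i, j in enumerate(best_assign):
            tau[i][j] += 1.0 / best_used
    return best_used, best_assign

# Four half-host VMs on unit-capacity hosts: the optimum is two hosts.
used, assign = aco_place([0.5, 0.5, 0.5, 0.5], capacity=1.0)
```

Even this toy version shows the traits the abstract highlights: ants can be run in parallel, and the per-ant construction cost is polynomial in the number of VMs and hosts.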
12

Feller, Eugen. "Automatic and energy-efficient management of large scale virtualized data centers." Rennes 1, 2012. http://www.theses.fr/2012REN1S136.

Full text
Abstract:
Large-scale virtualized data centers require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges this thesis proposes Snooze, a novel highly available, easy-to-configure, and energy-efficient Infrastructure-as-a-Service (IaaS) cloud management system. For scalability and high availability, Snooze integrates a self-configuring and self-healing hierarchical architecture. To achieve energy efficiency, Snooze integrates a holistic energy management approach via virtual machine (VM) resource utilization monitoring, server underload/overload mitigation, VM consolidation, and power management. A robust Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. The experiments have proven Snooze to be scalable, highly available, and energy efficient. One way to favor server idle times in IaaS clouds is to perform energy-efficient VM placement and consolidation. This thesis proposes a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. Simulation results have shown that the proposed algorithm computes close-to-optimal solutions and outperforms the evaluated First-Fit Decreasing algorithm at the cost of decreased scalability. To enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based VM consolidation algorithm; and (ii) a fully decentralized VM consolidation system based on an unstructured peer-to-peer network of servers. Emulation conducted on the Grid'5000 testbed has proven our system to be scalable and to achieve a data center utilization close to that of a centralized system.
13

Alharbi, Fares Abdi H. "Profile-based virtual machine management for more energy-efficient data centers." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129871/8/Fares%20Abdi%20H%20Alharbi%20Thesis.pdf.

Full text
Abstract:
This research develops a resource management framework for improved energy efficiency in cloud data centers through energy-efficient virtual machine placement to physical machines as well as application assignment to virtual machines. The study investigates static virtual machine placement, dynamic virtual machine placement and application assignment using ant colony optimization to minimize the total energy consumption in data centers.
14

Polany, Rany. "Multidisciplinary system design optimization of fiber-optic networks within data centers." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107503.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 136-142).
The growth of the Internet and the vast amount of cloud-based data have created a need to develop data centers that can respond to market dynamics. The role of the data center designer, who is responsible for scoping, building, and managing the infrastructure design, is becoming increasingly complex. This work presents a new analytical systems approach to modeling fiber-optic network design within data centers. Multidisciplinary system design optimization (MSDO) is utilized to integrate seven disciplines into a unified software framework for modeling 10G, 40G, and 100G multi-mode fiber-optic networks: 1) market and industry analysis, 2) fiber-optic technology, 3) data center infrastructure, 4) systems analysis, 5) multi-objective optimization using genetic algorithms, 6) parallel computing, and 7) simulation research using MATLAB and OptiSystem. The framework is applied to four theoretical data center case studies to simultaneously evaluate the Pareto-optimal trade-offs of (a) minimizing life-cycle costs, (b) maximizing user capacity, and (c) maximizing optical transmission quality (Q-factor). The results demonstrate that data center life-cycle costs are most sensitive to power costs, that 10G OM4 multi-mode optical fiber is Pareto optimal for long-reach and low-user-capacity needs, and that 100G OM4 multi-mode optical fiber is Pareto optimal for short-reach and high-user-capacity needs.
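The multi-objective trade-off described here rests on the notion of Pareto optimality, which a short non-dominated filter makes concrete. The design tuples below (cost, user capacity, Q-factor) are hypothetical, not results from the thesis.

```python
def dominates(a, b):
    """True if design a is no worse than b in every objective and strictly
    better in at least one. Objectives: minimize cost, maximize capacity,
    maximize Q-factor."""
    cost_a, cap_a, q_a = a
    cost_b, cap_b, q_b = b
    no_worse = cost_a <= cost_b and cap_a >= cap_b and q_a >= q_b
    strictly = cost_a < cost_b or cap_a > cap_b or q_a > q_b
    return no_worse and strictly

def pareto_front(designs):
    """Keep only designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

designs = [
    (100, 500, 7.0),  # A
    (120, 500, 7.0),  # B: dominated by A (same capacity and Q, higher cost)
    (150, 800, 6.0),  # C
    (90, 400, 8.0),   # D
]
front = pareto_front(designs)
```

A genetic algorithm, as used in the thesis's MSDO framework, repeatedly applies this kind of dominance test to steer a population toward the front rather than enumerating all designs.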
15

Samadiani, Emad. "Energy efficient thermal management of data centers via open multi-scale design." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37218.

Full text
Abstract:
Data centers are computing infrastructure facilities that house arrays of electronic racks containing high-power data processing and storage equipment whose temperature must be maintained within allowable limits. In this research, the sustainable and reliable operation of the electronic equipment in data centers is shown to be possible through the Open Engineering Systems paradigm. A design approach is developed to bring adaptability and robustness, two main features of open systems, to multi-scale convective systems such as data centers. The presented approach is centered on the integration of three constructs: a) Proper Orthogonal Decomposition (POD) based multi-scale modeling, b) the compromise Decision Support Problem (cDSP), and c) robust design, to overcome the challenges in thermal-fluid modeling, multiple objectives, and inherent variability management, respectively. Two new POD-based reduced-order thermal modeling methods are presented to simulate the multi-parameter-dependent temperature field in multi-scale thermal/fluid systems such as data centers. The methods are verified by achieving an adaptable, robust, and energy-efficient thermal design of an air-cooled data center cell under an annual increase in power consumption over the next ten years. Also, a simpler reduced-order modeling approach centered on the POD technique with modal coefficient interpolation is validated against experimental measurements in an operational data center facility.
16

Takouna, Ibrahim. "Energy-efficient and performance-aware virtual machine management for cloud data centers." PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/texte_eingeschraenkt_verlag/2014/7239/.

Full text
Abstract:
Virtualisierte Cloud Datenzentren stellen nach Bedarf Ressourcen zur Verfügung, ermöglichen agile Ressourcenbereitstellung und beherbergen heterogene Applikationen mit verschiedenen Anforderungen an Ressourcen. Solche Datenzentren verbrauchen enorme Mengen an Energie, was die Erhöhung der Betriebskosten, der Wärme innerhalb der Zentren und des Kohlendioxidausstoßes verursacht. Der Anstieg des Energieverbrauches kann durch ein ineffektives Ressourcenmanagement, das die ineffiziente Ressourcenausnutzung verursacht, entstehen. Die vorliegende Dissertation stellt detaillierte Modelle und neue Verfahren für virtualisiertes Ressourcenmanagement in Cloud Datenzentren vor. Die vorgestellten Verfahren ziehen das Service-Level-Agreement (SLA) und die Heterogenität der Auslastung bezüglich des Bedarfs an Speicherzugriffen und Kommunikationsmustern von Web- und HPC- (High Performance Computing) Applikationen in Betracht. Um die präsentierten Techniken zu evaluieren, verwenden wir Simulationen und echte Protokollierung der Auslastungen von Web- und HPC-Applikationen. Außerdem vergleichen wir unsere Techniken und Verfahren mit anderen aktuellen Verfahren durch die Anwendung von verschiedenen Performance-Metriken. Die Hauptbeiträge dieser Dissertation sind Folgendes: Ein proaktives, auf robuster Optimierung basierendes Ressourcenbereitstellungsverfahren. Dieses Verfahren erhöht die Fähigkeit der Hosts zur Verfügungsstellung von mehr VMs. Gleichzeitig aber wird der unnötige Energieverbrauch minimiert. Zusätzlich mindert diese Technik unerwünschte Änderungen im Energiezustand des Servers. Die vorgestellte Technik nutzt einen auf Intervallen basierenden Vorhersagealgorithmus zur Implementierung einer robusten Optimierung. Dabei werden unsichere Anforderungen in Betracht gezogen. Ein adaptives und auf Intervallen basierendes Verfahren zur Vorhersage des Arbeitsaufkommens mit hohen, in kurzer Zeit auftretenden Schwankungen. 
Die intervallbasierte Vorhersage ist in der Standardabweichungs-Variante und in der Variante der mittleren absoluten Abweichung (Median) implementiert. Die Intervall-Änderungen basieren auf einem adaptiven Vertrauensfenster, um die Schwankungen des Arbeitsaufkommens zu bewältigen. Eine robuste VM-Zusammenlegung für ein effizientes Energie- und Performance-Management. Dies ermöglicht, die gegenseitige Abhängigkeit zwischen der Energie und der Performance zu minimieren. Unser Verfahren reduziert die Anzahl der VM-Migrationen im Vergleich mit kürzlich vorgestellten Verfahren. Dies trägt auch zur Reduzierung des durch das Netzwerk verursachten Energieverbrauches bei. Außerdem reduziert dieses Verfahren SLA-Verletzungen und die Anzahl von Änderungen an Energiezuständen. Ein generisches Modell für das Netzwerk eines Datenzentrums, um die verzögerte Kommunikation und ihre Auswirkung auf die VM-Performance und auf die Netzwerkenergie zu simulieren. Außerdem wird ein generisches Modell für einen Memory-Bus des Servers vorgestellt. Dieses Modell beinhaltet auch Modelle für die Latenzzeit und den Energieverbrauch für verschiedene Memory-Frequenzen. Dies erlaubt eine Simulation der Memory-Verzögerung und ihrer Auswirkung auf die VM-Performance und auf den Memory-Energieverbrauch. Eine kommunikationsbewusste und energieeffiziente Zusammenlegung für parallele Applikationen, um die dynamische Entdeckung von Kommunikationsmustern und das Umplanen von VMs zu ermöglichen. Das Umplanen von VMs benutzt eine auf den entdeckten Kommunikationsmustern basierende Migration. Eine neue Technik zur Entdeckung von dynamischen Mustern ist implementiert. Sie basiert auf der Signalverarbeitung der Netzwerkauslastung von VMs, anstatt die Informationen der virtuellen Switches der Hosts oder eine Initiierung durch die VMs zu nutzen. Das Ergebnis zeigt, dass unsere Methode die durchschnittliche Auslastung des Netzwerks reduziert und aufgrund der Reduzierung der Anzahl aktiver Switches Energie spart. 
Außerdem bietet sie eine bessere VM-Performance im Vergleich zu der CPU-basierten Platzierung. Memory-bewusste VM-Zusammenlegung für unabhängige VMs. Sie nutzt die Vielfalt der Memory-Zugriffe der VMs, um die Auslastung des Memory-Busses der Hosts zu balancieren. Die vorgestellte Technik, Memory-Bus Load Balancing (MLB), verteilt die VMs reaktiv neu in Bezug auf ihre Auslastung des Memory-Busses. Sie nutzt die VM-Migration, um die Performance des gesamten Systems zu verbessern. Außerdem sind die dynamische Spannungs- und Frequenzskalierung des Memory und die MLB-Methode kombiniert, um bessere Energieeinsparungen zu erzielen.
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high temperatures inside data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: a proactive resource provisioning technique based on robust optimization to increase the hosts' availability for hosting new VMs while minimizing the idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, which enhances host reliability by avoiding failures during power state changes. The proposed technique exploits the range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction technique for predicting workloads with high fluctuations in the short term. The range prediction is implemented in two ways: standard deviation and median absolute deviation. The range is changed based on an adaptive confidence window to cope with the workload fluctuations.
A robust VM consolidation for efficient energy and performance management to achieve equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques. This also contributes to a reduction in energy consumption by the network infrastructure. Additionally, our technique reduces SLA violations and the number of power state changes. A generic model for the network of a data center to simulate the communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model for a memory-bus of a server, including latency and energy consumption models for different memory frequencies. This allows simulating the memory delay and its influence on VM performance, as well as memory energy consumption. Communication-aware and energy-efficient consolidation for parallel applications to enable the dynamic discovery of communication patterns and reschedule VMs using migration based on the determined communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of network utilization of VMs instead of using the information from the hosts' virtual switches or initiation from VMs. The result shows that our proposed approach reduces the network's average utilization, achieves energy savings due to reducing the number of active switches, and provides better VM performance compared to CPU-based placement. Memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their utilization of a memory-bus using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
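The adaptive range-based prediction summarized above can be sketched in a few lines of Python. This is an illustrative reading of the abstract, not the dissertation's actual algorithm: the function names, the fixed widening/narrowing step, and the bounds on the confidence factor are assumptions.

```python
from statistics import mean, median, stdev

def predict_range(history, k=2.0, variant="std"):
    """Predict an interval [low, high] for the next demand value.

    variant="std" centres on the mean with k standard deviations;
    variant="mad" centres on the median with k median absolute
    deviations, which is more robust to short-lived spikes.
    """
    if variant == "std":
        center, spread = mean(history), stdev(history)
    elif variant == "mad":
        center = median(history)
        spread = median(abs(x - center) for x in history)
    else:
        raise ValueError(f"unknown variant: {variant}")
    return center - k * spread, center + k * spread

def adapt_k(k, actual, low, high, step=0.25, k_min=0.5, k_max=4.0):
    """Adaptive confidence window: narrow k after a hit, widen after a miss."""
    if low <= actual <= high:
        return max(k_min, k - step)
    return min(k_max, k + step)
```

A provisioning loop would reserve capacity for the upper bound, then feed the observed demand back through `adapt_k` so the window tracks the workload's current volatility.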
APA, Harvard, Vancouver, ISO, and other styles
17

Kumar, Anubhav. "Use of air side economizer for data center thermal management." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Lazaar, Nouhaila. "Optimisation des alimentations électriques des Data Centers." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC206.

Full text
Abstract:
Les data centers, des usines abritant des milliers de serveurs informatiques, fonctionnent en permanence pour échanger, stocker, traiter des données et les rendre accessibles via l'internet. Avec le développement du secteur numérique, leur consommation énergétique, en grande partie d’origine fossile, n’a cessé de croitre au cours de la dernière décennie, représentant une réelle menace pour l’environnement. Le recours aux énergies renouvelables constitue un levier prometteur pour limiter l’empreinte écologique des data centers. Néanmoins, le caractère intermittent de ces sources freine leur intégration dans un système nécessitant un degré de fiabilité élevée. L’hybridation de plusieurs technologies pour la production d’électricité verte, couplée à des dispositifs de stockage est actuellement une solution efficace pour pallier ce problème. De ce fait, ce travail de recherche étudie un système multi-sources, intégrant des hydroliennes, des panneaux photovoltaïques, des batteries et un système de stockage d’hydrogène pour alimenter un data center à l’échelle du MW. L'objectif principal de cette thèse est l’optimisation de l'alimentation électrique d'un data center, aussi bien pour des sites isolés que des installations raccordées au réseau. Le premier axe de ce travail est la modélisation des différents composants du système à l’aide de la représentation énergétique macroscopique (REM). Une gestion d’énergie reposant sur le principe de séparation fréquentielle est adoptée dans un premier temps pour répartir l’énergie entre des organes de stockage présentant des caractéristiques dynamiques différentes. Le deuxième axe concerne le dimensionnement optimal du système proposé afin de trouver la meilleure configuration qui satisfasse les contraintes techniques imposées à un coût minimum, en utilisant l’optimisation par essaims particulaires (PSO) et l’algorithme génétique (AG). 
Ici, une technique de gestion d’énergie basée sur des règles simples est utilisée pour des raisons de simplicité et de réduction de temps de calcul. Le dernier axe se focalise sur l’optimisation de la gestion d’énergie via l’AG, en tenant compte des problèmes de dégradation des systèmes de stockage en vue de réduire leur coût d’exploitation et de prolonger leur durée de vie. Il est bien entendu que chaque axe précédemment abordé a fait l’objet d’une analyse de sensibilité spécifique, afin d’évaluer les performances du système hybride dans différentes conditions de fonctionnement
Data centers are factories housing thousands of computer servers that work permanently to exchange, store, and process data and make it accessible via the Internet. With the development of the digital sector, their energy consumption, which is largely fossil fuel-based, has grown continuously over the last decade, posing a real threat to the environment. The use of renewable energy is a promising way to limit the ecological footprint of data centers. Nevertheless, the intermittent nature of these sources hinders their integration into a system requiring a high degree of reliability. The hybridization of several technologies for green electricity production, coupled with storage devices, is currently an effective solution to this problem. As a result, this research work studies a multi-source system integrating tidal turbines, photovoltaic panels, batteries and a hydrogen storage system to power an MW-scale data center. The main objective of this thesis is the optimization of a data center power supply, both for isolated sites and grid-connected ones. The first axis of this work is the modeling of the system components using the energetic macroscopic representation (EMR). An energy management strategy based on the frequency separation principle is first adopted to share power between storage devices with different dynamic characteristics. The second axis concerns the optimal sizing of the proposed system, in order to find the best configuration that meets the imposed technical constraints at minimum cost, using particle swarm optimization (PSO) and a genetic algorithm (GA). Here, a rule-based energy management technique is used for simplicity and to reduce computing time. The last axis focuses on optimizing the energy management through the GA, taking into account storage system degradation in order to reduce operating costs and extend lifetime. 
It should be noted that each axis previously discussed has been the subject of a specific sensitivity analysis, which aims to evaluate the performance of the hybrid system under different operating conditions
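The frequency-separation principle mentioned in this abstract can be illustrated with a first-order low-pass filter: the slow component of the net power demand goes to the hydrogen chain and the fast residual to the battery. This is a minimal sketch under assumed names and an assumed filter constant, not the model from the thesis:

```python
def split_power(net_demand, alpha=0.1):
    """Split a sampled net-power series into a slow component (hydrogen
    chain) and a fast residual (battery) with an exponential low-pass
    filter; alpha sets the cut-off (smaller = slower hydrogen dynamics)."""
    slow_part = None
    shares = []
    for p in net_demand:
        slow_part = p if slow_part is None else slow_part + alpha * (p - slow_part)
        shares.append((slow_part, p - slow_part))  # (hydrogen, battery)
    return shares
```

At every step the two shares sum exactly to the demand, so the split conserves power while steering high-frequency cycling away from the fuel cell and electrolyzer, whose dynamics and lifetime tolerate it poorly.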
APA, Harvard, Vancouver, ISO, and other styles
19

Alansari, Hayder. "Clustered Data Management in Virtual Docker Networks Spanning Geo-Redundant Data Centers : A Performance Evaluation Study of Docker Networking." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-141681.

Full text
Abstract:
Software containers in general, and Docker in particular, are becoming more popular both in software development and deployment. Software containers are intended to be a lightweight virtualization that provides the isolation of virtual machines with a performance that is close to native. Docker does not only provide virtual isolation but also virtual networking to connect the isolated containers in the desired way. Many alternatives exist when it comes to the virtual networking provided by Docker, such as Host, Macvlan, Bridge, and Overlay networks. Each of these networking solutions has its own advantages and disadvantages. One application that can be developed and deployed in software containers is a data grid system. The purpose of this thesis is to measure the impact of various Docker networks on the performance of the Oracle Coherence data grid system. Therefore, the performance metrics are measured and compared between native deployment and Docker's built-in networking solutions. A scaled-down model of a data grid system is used along with benchmarking tools to measure the performance metrics. The obtained results show that changing the Docker networking has an impact on performance. In fact, some results suggested that some Docker networks can outperform native deployment. The conclusion of the thesis suggests that if performance is the only consideration, then the Docker networks that showed high performance can be used. However, real applications require more aspects than performance, such as security, availability, and simplicity. Therefore, the Docker network should be carefully selected based on the requirements of the application.
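The kind of measurement behind such a comparison can be sketched as a small echo round-trip benchmark. This is an illustrative stand-in (the thesis used Oracle Coherence benchmarking tools, not this script); the same client could be pointed at a port published through Bridge, Host, Macvlan, or Overlay networking to compare latencies across drivers:

```python
import socket
import statistics
import time

def echo_once(server_sock):
    """Accept a single connection and echo everything it sends back."""
    conn, _ = server_sock.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

def median_rtt(host, port, rounds=200, payload=b"x" * 64):
    """Median request/response round-trip time to an echo endpoint, in seconds."""
    samples = []
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(rounds):
            t0 = time.perf_counter()
            s.sendall(payload)
            s.recv(4096)
            samples.append(time.perf_counter() - t0)
    return statistics.median(samples)
```

Running the echo server inside a container attached to each network in turn, and the client either on the host or in a peer container, yields directly comparable medians per driver.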
APA, Harvard, Vancouver, ISO, and other styles
20

Spinner, Simon [Verfasser], Samuel [Gutachter] Kounev, and Kurt [Gutachter] Geihs. "Self-Aware Resource Management in Virtualized Data Centers / Simon Spinner ; Gutachter: Samuel Kounev, Kurt Geihs." Würzburg : Universität Würzburg, 2017. http://d-nb.info/1141576945/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Goiri, Íñigo. "Multifaceted resource management on virtualized providers." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/80487.

Full text
Abstract:
In the last decade, providers started using Virtual Machines (VMs) in their datacenters to pack users and their applications. This was a good way to consolidate multiple users in fewer physical nodes while isolating them from each other. Later on, in 2006, Amazon started offering their Infrastructure as a Service, where users rent computing resources as VMs in a pay-as-you-go manner. However, virtualized providers cannot be managed like traditional ones, as they are now confronted with a set of new challenges. First of all, providers must deal efficiently with new management operations such as the dynamic creation of VMs. These operations enable new capabilities that were not there before, such as moving VMs across the nodes, or the ability to checkpoint VMs. We propose a decentralized virtualization management infrastructure to create VMs on demand, migrate them between nodes, and checkpoint them. With the introduction of this infrastructure, virtualized providers become decentralized and are able to scale. Secondly, these providers consolidate multiple VMs in a single machine to utilize resources more efficiently. Nevertheless, this is not straightforward and implies the use of more complex resource management techniques. In addition, this requires that both customers and providers can be confident that signed Service Level Agreements (SLAs) are supporting their respective business activities to their best extent. Providers typically offer very simple metrics that hinder an efficient exploitation of their resources. To solve this, we propose mechanisms to dynamically distribute resources among VMs and a resource-level metric, which together allow increasing provider utilization while maintaining Quality of Service. Thirdly, the provider must allocate the VMs evaluating multiple facets such as power consumption and customers' requirements. In addition, it must exploit the new capabilities introduced by virtualization and manage its overhead. 
Ultimately, this VM placement must minimize the costs associated with the execution of a VM in a provider to maximize the provider's profit. We propose a new scheduling policy that places VMs on provider nodes according to multiple facets and is able to understand and manage the overheads of dealing with virtualization. And fourthly, resource provisioning in these providers is a challenge because of the high load variability over time. Providers can serve most of the requests owning only a restricted amount of resources but this under-provisioning may cause customers to be rejected during peak hours. In the opposite situation, valley hours incur under-utilization of the resources. As this new paradigm makes the access to resources easier, providers can share resources to serve their loads. We leverage a federated scenario where multiple providers share their resources to overcome this load variability. We exploit the federation capabilities to create policies that take the most convenient decision depending on the environment conditions and tackle the load variability. All these challenges mean that providers must manage their virtualized resources in a different way than they have done traditionally. This dissertation identifies and studies the challenges faced by virtualized provider that offers IaaS, and designs and evaluates a solution to manage the provider's resources in the most cost-effective way by exploiting the virtualization capabilities.
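A multi-facet placement policy of the kind described can be sketched as a cost function minimized over candidate hosts. The facets and weights here (the power cost of waking an idle node, the resulting utilization) are illustrative assumptions, not the dissertation's actual scoring policy:

```python
def placement_cost(vm, host, w_power=1.0, w_util=0.1):
    """Cost of placing vm on host; infeasible placements cost infinity."""
    if vm["cpu"] > host["cpu_free"] or vm["mem"] > host["mem_free"]:
        return float("inf")
    # Facet 1: waking a powered-down node carries its idle power as a cost.
    power_cost = 0.0 if host["active"] else host["idle_power_w"]
    # Facet 2: prefer hosts that stay less loaded after the placement.
    util_after = 1.0 - (host["cpu_free"] - vm["cpu"]) / host["cpu_total"]
    return w_power * power_cost + w_util * util_after

def place(vm, hosts):
    """Greedy scheduler: pick the feasible host with minimum total cost."""
    best = min(hosts, key=lambda h: placement_cost(vm, h))
    return best if placement_cost(vm, best) != float("inf") else None
```

Additional facets (migration overhead, thermal state, SLA risk) would enter the same way, as further weighted terms in the cost.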
APA, Harvard, Vancouver, ISO, and other styles
22

Da, Silva Ralston A. "Green Computing – Power Efficient Management in Data Centers Using Resource Utilization as a Proxy for Power." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259760420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Donald, John Anthony. "Re-architecting Telkom's information technology data centres for business alignment and asset efficiency." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/85156.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2002.
ENGLISH ABSTRACT: In this case study, the writer proposes a methodology for the re-architecting of Telkom’s Information Technology data centres to achieve business alignment and improve IT asset efficiency. The methodology advocated begins with the defining of Telkom’s high-level business domains and maps against these the current deployment of IT infrastructure in the company’s data centres. Next, a Future Mode of Operation (FMO) architecture is proposed, together with the establishment of deployment principles and guidelines to ensure that ‘best practices’ are leveraged in future IT infrastructure deployment. After addressing implementation considerations, the writer discusses the benefits of implementing the FMO architecture and suggests some success measures. The work is concluded with recommendations for further development.
AFRIKAANSE OPSOMMING: In hierdie gevallestudie het die skrywer 'n metodologie voorgestel om Telkom se Inligtingstegnologie (IT) datasentrums te herskep ten einde dit met die besigheidsprosesse in lyn te bring en die batedoeltreffendheid daarvan te verbeter. Die metodologie, soos voorgestel, begin met die definiëring van Telkom se hoëvlak besigheidsdomeine en karteer hierteenoor die huidige ontplooïng van die IT infrastruktuur in die maatskappy se datasentrums. Hierna, word 'n Toekomstige Modus Operandi (TMO) argitektuur voorgestel tesame met die daarstelling van ontplooïngsbeginsels en riglyne ten einde te verseker dat die “beste praktyk” beginsels ingebou word in 'n toekomstige IT infrastruktuur. Nadat implementeringsoorwegings aangespreek is, bespreek die skrywer die voordele van die TMO argitektuur en stel seker suksesmaatstawwe voor. Die werkstuk word afgesluit by wyse van aanbevelings rakende verdere ontwikkeling.
APA, Harvard, Vancouver, ISO, and other styles
24

Albrecht, Scott E. "A systems thinking approach to IT process automation gaining efficiencies in very large multi-service data centers." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/105292.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, February 2014.
Cataloged from PDF version of thesis. "December 2013."
Includes bibliographical references (page 75).
Keeping up with the Joneses, in an Information Technology (IT) sense, is not a feel-good activity; it is a necessity to remain competitive. Building and maintaining a relevant, reliable, and scalable IT service infrastructure, without crushing the bottom line, is a necessary undertaking to avoid obsolescence in the marketplace. This is particularly true for very large scale IT service and "cloud" providers. At the very top of many CIOs' wish lists is to obtain, or create, an effective and efficient IT Process Automation (ITPA) framework. Use of ITPA or run book automation is a requirement to efficiently manage the increasingly massive pools of systems and services under any particular IT service provider's management domain. A successful process workflow, run book, automation, and orchestration framework implementation requires a high degree of flexibility and scalability. It also requires an intuitive command and control structure to manage today's massive scale deployments and their increasingly demanding customers and service level agreements. This paper explores a new application of the "publish-subscribe" messaging paradigm and how it can be leveraged to construct a core ITPA framework. This ITPA framework will scale to match the various needs of very large IT service infrastructures. The overarching intent of the paper is to discuss this ITPA framework at a level of detail sufficient to provide a well-trained IT practitioner the ability to construct it within their own organization. The paper is, however, abstract enough to give the practitioner a high degree of choice with regard to the specific technologies and implementation details that must ultimately be tailored to their organization's specific needs and requirements.
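A publish-subscribe core of the kind this framework builds on can be sketched as a topic-keyed dispatcher. The class, method, and topic names below are hypothetical, chosen only to illustrate how runbook steps chain through events:

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish-subscribe hub for chaining runbook steps."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Iterate over a copy so handlers may (un)subscribe while running.
        for handler in list(self._subs[topic]):
            handler(event)
```

The decoupling is the point: a provisioning step publishes `vm.provisioned`, a configuration step subscribed to that topic runs and in turn publishes `vm.configured`, and new automation steps can be attached without touching existing ones.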
by Scott E. Albrecht.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
25

Wolke, Andreas [Verfasser], Martin [Akademischer Betreuer] Bichler, and Georg [Akademischer Betreuer] Carle. "Energy efficient capacity management in virtualized data centers / Andreas Wolke. Gutachter: Georg Carle ; Martin Bichler. Betreuer: Martin Bichler." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1070372390/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Althomali, Khalid. "Energy Management System Modeling of DC Data Center with Hybrid Energy Sources Using Neural Network." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1701.

Full text
Abstract:
As data centers continue to grow rapidly, engineers will face ever greater challenges in finding ways to minimize the cost of powering data centers while improving their reliability. The continuing growth of renewable energy sources such as photovoltaic (PV) systems presents an opportunity to reduce the long-term energy cost of data centers and to enhance reliability when used with utility AC power and energy storage. However, the inter-temporal and intermittent nature of solar energy makes proper coordination and management of these energy sources necessary. This thesis proposes an energy management system in a DC data center using a neural network to coordinate AC power, energy storage, and a PV system, constituting a reliable electrical power distribution to the data center. Software modeling of the DC data center was first developed for the proposed system, followed by the construction of a lab-scale model to simulate the proposed system. Five scenarios were tested on the hardware model, and the results demonstrate the effectiveness and accuracy of the neural network approach. Results further prove the feasibility of utilizing renewable energy sources and energy storage in DC data centers. Analysis and performance of the proposed system are discussed in this thesis, and future improvements for energy system reliability are also presented.
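The coordination task itself can be illustrated with a simple rule-based dispatcher. This is a hand-written stand-in for the thesis's neural-network controller, with assumed thresholds and no battery power limit, purely to show the source-priority logic being learned:

```python
def dispatch(load_kw, pv_kw, soc, soc_min=0.2, soc_max=0.9):
    """Cover the load from PV first, then battery, then utility AC.

    Returns (pv_kw_used, battery_kw, grid_kw); positive battery_kw
    discharges the battery, negative battery_kw stores surplus PV.
    """
    pv_used = min(load_kw, pv_kw)
    residual = load_kw - pv_used       # load still uncovered
    surplus = pv_kw - pv_used          # PV left over
    if residual > 0 and soc > soc_min:
        return pv_used, residual, 0.0  # battery discharges
    if surplus > 0 and soc < soc_max:
        return pv_used, -surplus, 0.0  # battery charges from surplus
    return pv_used, 0.0, residual      # grid covers the rest
```

A trained network replaces these hard rules with a mapping learned from load, irradiance, and state-of-charge histories, which lets it anticipate rather than merely react.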
APA, Harvard, Vancouver, ISO, and other styles
27

Thayer, Jenny P. "Evaluation of the Inland Counties trauma patient data collection, management, and analysis." CSUSB ScholarWorks, 1986. https://scholarworks.lib.csusb.edu/etd-project/378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Ha, Wai On. "Empirical studies toward DRP constructs and a model for DRP development for information systems function." HKBU Institutional Repository, 2002. http://repository.hkbu.edu.hk/etd_ra/432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Augulis, Nauris. "Didelių duomenų kiekių saugojimas ir apdorojimas nutolusių interneto centrų stebėjimo ir administravimo sistemoje." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080716_101123-42465.

Full text
Abstract:
Lietuvoje sparčiai plečiantis informacinių technologijų naudojimui, kuriama vis daugiau informacinių technologijų projektų, kuriuos remia Europos Sąjunga ir kitos įvairios organizacijos. Taip pat stengiamasi pasiekti, kad informacinės technologijos būtų pasiekiamos kuo platesniam vartotojų ratui. Todėl steigiami interneto centrai kaimiškose vietovėse ir ne tik. Tačiau įsteigus tokius centrus ir norint juos tinkamai administruoti, reikia atitinkamos programinės įrangos. Deja lietuviškų produktų skirtų nutolusių interneto centrų stebėsenai ir administravimui nėra. Todėl sukūrus šią sistemą, palengvėjo interneto centrų, kuriuose ji įdiegta, administravimas.
The project describes the specification, design and implementation of a tracking and administration system for distant internet centers. Design and technology solutions were analyzed during the project's development. Basic goals of the system's realization and potential solutions were formulated and presented. The architecture of the software developed is based on a three-layer design. The software was installed on over a thousand computers and is successfully used. Research on system usage and user experience was carried out after installation for the purpose of software quality analysis; it showed that overall system quality was evaluated as average, but its functionality was rated high.
APA, Harvard, Vancouver, ISO, and other styles
30

Minter, Dion Len. "Development of Strategies in Finding the Optimal Cooling of Systems of Integrated Circuits." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9961.

Full text
Abstract:
The task of thermal management in electrical systems has never been simple and has only become more difficult in recent years as the power electronics industry pushes towards devices with higher power densities. At the Center for Power Electronic Systems (CPES), a new approach to power electronic design is being implemented with the Integrated Power Electronic Module (IPEM). It is believed that an IPEM-based design approach will significantly enhance the competitiveness of the U.S. electronics industry, revolutionize the power electronics industry, and overcome many of the technology limits in today's industry by driving down the cost of manufacturing and design turnaround time. But with increased component integration comes the increased risk of component failure due to overheating. This thesis addresses the issues associated with the thermal management of integrated power electronic devices. Two studies are presented in this thesis. The focus of these studies is on the thermal design of a DC-DC front-end power converter developed at CPES with an IPEM-based approach. The first study investigates how the system responds when the fan location and heat sink fin arrangement are varied, in order to optimize the effects of conduction and forced-convection heat transfer in cooling the system. The set-up of an experimental test is presented, and the results are compared to the thermal model. The second study presents an improved methodology for the thermal modeling of large-scale electrical systems and their many subsystems. A zoom-in/zoom-out approach is used to overcome the computational limitations associated with modeling large systems. The analysis performed in this work was completed using I-DEAS©, a three-dimensional finite element analysis (FEA) program which allows the thermal designer to simulate the effects of conduction and convection heat transfer in a forced-air cooling environment.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
31

Kumar, Sanjay. "New abstractions and mechanisms for virtualizing future many-core systems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24644.

Full text
Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Dr. Karsten Schwan; Committee Member: Dr. Calton Pu; Committee Member: Dr. Mustaque Ahamad; Committee Member: Dr. Parthasarathy Ranganathan; Committee Member: Dr. Sudhakar Yalamanchili
APA, Harvard, Vancouver, ISO, and other styles
32

Almoli, Ali Mubarak. "Air flow management in data centres." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/4567/.

Full text
Abstract:
A data centre can be defined as an infrastructure facility that houses file servers, processors and other computer equipment, along with a standby power supply. The servers are kept inside cabinets called racks, which are located close to each other inside the data centre to form rows. The rows are placed front to front and back to back to form aisles, which are used both to supply the chilled air and to provide room for operational purposes. Data centres are now widespread due to the high demand for infrastructure, such as the networks needed to operate Internet services. In this thesis, research is focused on air cooling, a popular method used to cool many data centres. The aim of this thesis is to understand the capabilities and limitations of Computational Fluid Dynamics (CFD) analysis of cooling air flow in data centres. The data centre components, namely the server blade and rack, have been simulated in order to study the environmental conditions (temperature, pressure and velocity fields) inside the data centre; as such, CFD analysis has been carried out at server, rack and room levels. The proposed method of a porous media model has been implemented to simulate servers and racks and has been tested and validated through corresponding experiments. It is shown from the results that the porous media model provides good agreement with experimental data of an actual case at the server level. The server racks have been simulated as a porous media with different permeability values in each direction (x, y, z). In addition, a 3-dimensional CFD model has been used to explore the performance of three different room level cooling strategies based on aisle containment (cold and hot aisle containments) and a back door cooler. 
It is shown that using either cold or hot aisle containment within a data centre provides significant improvement with respect to temperature distribution and the avoidance of hot spots. Finally, the power input to the computer room air conditioning (CRAC) unit has been analysed for different cooling configurations when assuming the Coefficient of Performance (COP) of either a direct expansion (DX) CRAC unit or a chiller system. Furthermore, a comparison between active and passive back door coolers has been made to evaluate the power consumption in the CRAC unit. It is shown that the supply temperature inside the data centre has a significant effect on the CRAC power input (compressor work) of the DX CRAC unit. With respect to the comparison between the active and passive back door coolers, it has been found that the reduction of the CRAC unit load is higher when using the active back door cooler, so the active back door cooler is better than the passive one with respect to reducing the load on the CRAC unit.
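In one dimension, a porous media model of this kind reduces to the Darcy-Forchheimer relation for the pressure drop across a slab. The sketch below, with assumed air properties and example permeabilities, shows how direction-dependent values of K change the resistance seen by the cooling flow:

```python
MU_AIR = 1.85e-5   # dynamic viscosity of air at ~25 C, Pa*s (assumed)
RHO_AIR = 1.2      # density of air, kg/m^3 (assumed)

def pressure_drop(velocity, thickness, permeability, c2=0.0,
                  mu=MU_AIR, rho=RHO_AIR):
    """Darcy-Forchheimer pressure drop over a porous slab:

    dp = (mu/K * v + C2 * rho/2 * v**2) * dx

    where K is the permeability in the flow direction and C2 the
    inertial resistance factor (c2=0 gives the pure Darcy limit).
    """
    viscous = mu / permeability * velocity
    inertial = c2 * rho / 2.0 * velocity ** 2
    return (viscous + inertial) * thickness
```

Modelling a server or rack as an anisotropic porous block then simply means supplying a different K for each of the x, y, z directions, for instance a much lower permeability side-to-side than front-to-back.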
APA, Harvard, Vancouver, ISO, and other styles
33

REIS, JUNIOR JOSE S. B. "Métodos e softwares para análise da produção científica e detecção de frentes emergentes de pesquisa." Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26929.

Full text
Abstract:
Progress on previous projects highlighted the need to address the problem of software for detecting emerging research and development trends from databases of scientific publications. A lack of efficient computational applications dedicated to this purpose became evident, even though such tools are highly useful for better planning of research and development programmes in institutions. A review of currently available software was therefore carried out in order to clearly delineate the opportunity to develop new tools. As a result, an application called Citesnake was implemented, designed specifically to support the detection and study of emerging trends through the analysis of networks of various types extracted from scientific databases. Using this robust and effective computational tool, analyses of emerging research and development fronts were conducted in the area of Generation IV Nuclear Energy Systems, in order to identify, among the reactor types selected as most promising by the GIF - Generation IV International Forum, those that have developed most over the last ten years and that currently appear most capable of fulfilling the promises made about their innovative concepts.
Dissertação (Mestrado em Tecnologia Nuclear)
IPEN/D
Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
34

RUIU, PIETRO. "Energy Management in Large Data Center Networks." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2706336.

Full text
Abstract:
In the era of digitalization, one of the most challenging research topics concerns reducing the energy consumption of ICT equipment to counter global climate change. The ICT world is very sensitive to the problem of greenhouse gas (GHG) emissions and for several years has been implementing countermeasures to reduce wasted consumption and increase infrastructure efficiency: the total embodied emissions of end-use devices have significantly decreased, networks have become more energy efficient, and trends such as virtualization and dematerialization will continue to make equipment more efficient. One of the main contributors to GHG emissions is the data center industry, which provisions end users with the computing and communication resources needed to access the vast majority of services online and on a pay-as-you-go basis. Data centers require a tremendous amount of energy to operate; since the efficiency of cooling systems keeps improving, more research effort should be put into greening the IT system, which is becoming the major contributor to energy consumption. The network being a non-negligible contributor to energy consumption in data centers, several architectures have been designed with the goal of improving data center energy efficiency. These architectures are called Data Center Networks (DCNs) and provide interconnections among the computing servers, and between the servers and the Internet, according to specific layouts. In my PhD I have extensively investigated the energy efficiency of data centers, working on different projects that tackle the problem from different angles. The research can be divided into two main parts, with energy proportionality as the connecting thread. The main focus of the work is the trade-off between size and energy efficiency of data centers, with the aim of finding a relationship between scalability and energy proportionality of data centers.
In this regard, the energy consumption of different data center architectures has been analyzed, varying the dimension in terms of number of servers and switches. Extensive simulation experiments, performed in small- and large-scale scenarios, unveil the ability of network-aware allocation policies to load the data center in an energy-proportional manner, and the robustness of classical two- and three-tier designs under network-oblivious allocation strategies. The concept of energy proportionality, applied to the whole DCN and used as an efficiency metric, is one of the main contributions of the work. Energy proportionality is a property defining the degree of proportionality between load and the energy spent to support that load: devices are energy proportional when any increase in load corresponds to a proportional increase in energy consumption. A peculiar feature of our analysis is the consideration of the whole data center, i.e., both computing and communication devices are taken into account. Our methodology consists of an asymptotic analysis of data center consumption as its size (in terms of servers) becomes very large. In our analysis, we investigate the impact of three different allocation policies on the energy proportionality of computing and networking equipment for different DCNs, including 2-Tier, 3-Tier and Jupiter topologies. For evaluation, the size of the DCNs varies to accommodate up to several thousands of computing servers. Validation of the analysis is conducted through simulations. We propose new metrics with the objective of characterizing, in a holistic manner, the energy proportionality of data centers. The experiments unveil that, when consolidation policies are in place and regardless of the type of architecture, the size of the DCN plays a key role, i.e., larger DCNs containing thousands of servers are more energy proportional than small DCNs.
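Energy proportionality, as used above, is commonly quantified by how far a device's measured power curve deviates from the ideal linear curve through zero idle power. The sketch below computes one such deviation-based index; the metric definition and the numbers are illustrative, not the metrics proposed in the thesis.

```python
def proportionality_index(loads, powers):
    """Energy-proportionality index in [0, 1].

    Compares measured power at each load level against the ideal
    proportional curve P_ideal(l) = l * P_peak (zero power at zero load),
    and returns 1 minus the mean normalised deviation. A perfectly
    proportional device scores 1.0.
    """
    p_peak = max(powers)
    deviations = [abs(p - load * p_peak) / p_peak
                  for load, p in zip(loads, powers)]
    return 1.0 - sum(deviations) / len(deviations)

# A typical server: high idle power, hence poor proportionality.
loads = [0.0, 0.25, 0.5, 0.75, 1.0]     # utilisation
powers = [120, 150, 180, 210, 240]      # watts (illustrative)
print(round(proportionality_index(loads, powers), 3))  # → 0.75
```

Applied to an entire DCN, the same idea aggregates server and switch power into one curve, which is what allows comparing two- and three-tier designs on equal footing.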
APA, Harvard, Vancouver, ISO, and other styles
35

Vasudevan, Meera. "Profile-based application management for green data centres." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/98294/1/Meera_Vasudevan_Thesis.pdf.

Full text
Abstract:
This thesis presents a profile-based application management framework for energy-efficient data centres. The framework is based on the concept of using profiles that provide prior knowledge of run-time workload characteristics to assign applications to virtual machines. The thesis explores the building of profiles for applications, virtual machines and servers from real data centre workload logs. These profiles are then used to inform static and dynamic application assignment, and the consolidation of applications.
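The profile-based assignment idea can be sketched as a best-fit placement driven by per-application resource profiles. The structure below is an illustrative reading of the approach, not the thesis' actual framework; the names and numbers are invented.

```python
def assign_by_profile(app_profiles, vm_profiles):
    """Best-fit static assignment of applications to virtual machines.

    Each profile is a (cpu, memory) expected-utilisation pair; an app is
    placed on the feasible VM leaving the least slack, mirroring the idea
    of using prior workload knowledge to drive assignment.
    """
    free = list(vm_profiles)            # remaining (cpu, mem) per VM
    placement = {}
    # Place the heaviest applications first.
    for app, (cpu, mem) in sorted(app_profiles.items(),
                                  key=lambda kv: -(kv[1][0] + kv[1][1])):
        candidates = [i for i, (fc, fm) in enumerate(free)
                      if fc >= cpu and fm >= mem]
        if not candidates:
            placement[app] = None       # would trigger scale-out in practice
            continue
        best = min(candidates,
                   key=lambda i: (free[i][0] - cpu) + (free[i][1] - mem))
        fc, fm = free[best]
        free[best] = (fc - cpu, fm - mem)
        placement[app] = best
    return placement

apps = {"web": (2, 4), "db": (4, 8), "batch": (1, 2)}
vms = [(4, 8), (4, 8)]
print(assign_by_profile(apps, vms))
```

Dynamic assignment and consolidation would re-run a rule like this as the profiles evolve, migrating applications off lightly loaded machines.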
APA, Harvard, Vancouver, ISO, and other styles
36

Ostapenco, Vladimir. "Modélisation, évaluation et orchestration des leviers hétérogènes pour la gestion des centres de données cloud à grande échelle." Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0096.

Full text
Abstract:
Le secteur des Technologies de l’Information et de la Communication (TIC) est en pleine croissance en raison de l'augmentation du nombre d’utilisateurs d’Internet et de la démocratisation des services numériques, entraînant une empreinte carbone non négligeable et toujours croissante. La part des émissions de gaz à effet de serre (GES) liées aux TIC est estimée entre 1,8% et 3,9% des émissions mondiales en 2020, avec un risque de presque doubler et d’atteindre plus de 7% d'ici à 2025. Les datacenters sont au cœur de cette croissance, estimés d'être responsables d'une part importante des émissions de GES du secteur des TIC (allant de 17% à 45% en 2020) et à consommer environ 1% de l'électricité mondiale en 2018.De nombreux leviers existent et peuvent aider les fournisseurs de cloud et les gestionnaires de datacenters à réduire certains de ces impacts. Ces leviers peuvent opérer sur de multiples facettes telles que l’extinction de ressources inutilisées, le ralentissement de ressources pour s’adapter aux besoins réels des applications et services, l’optimisation ou la consolidation des services pour réduire le nombre de ressources physiques mobilisées. Ces leviers peuvent être très hétérogènes et impliquer du matériel informatique, des couches logicielles ou des contraintes plus logistiques à l’échelle des datacenters. 
Activer, désactiver et orchestrer ces leviers à grande échelle est un réel enjeu permettant des gains potentiels en termes de réduction de la consommation énergétique et des émissions de dioxyde de carbone.Dans cette thèse, nous abordons la modélisation, évaluation et gestion de leviers hétérogènes dans le contexte d'un datacenter cloud à grande échelle en proposant pour la première fois la combinaison de leviers hétérogènes : à la fois technologiques (allumage/extinction de ressources, migration, ralentissement) et logistiques (installation de nouvelles machines, décommissionnement, changement fonctionnels ou géographiques de ressources IT).Dans un premier temps, nous proposons une modélisation des leviers hétérogènes couvrant les impacts, les coûts et les combinaisons des leviers, les concepts de Gantt Chart environnemental contenant des leviers appliqués à l'infrastructure du fournisseur de cloud et d'un environnement logiciel de gestion des leviers qui vise à améliorer les performances énergétiques et environnementales globales de l'ensemble de l'infrastructure d'un fournisseur de cloud. Ensuite, nous abordons le suivi et la collecte de métriques, incluant des données énergétiques et environnementales. Nous discutons de la mesure de la puissance et de l’énergie et effectuons une comparaison expérimentale des wattmètres logiciels. Par la suite, nous étudions un levier technologique unique en effectuant une analyse approfondie du levier Intel RAPL à des fins de plafonnement de la puissance sur un ensemble de nœuds hétérogènes pour une variété de charges de travail gourmandes en CPU et en mémoire. Finalement, nous validons la modélisation des leviers hétérogènes proposée à grande échelle en explorant trois scénarios distincts qui montrent la pertinence de l’approche proposée en termes de gestion des ressources et de réduction des impacts potentiels
The Information and Communication Technology (ICT) sector is constantly growing due to the increasing number of Internet users and the democratization of digital services, leading to a significant and ever-increasing carbon footprint. The share of greenhouse gas (GHG) emissions related to ICT is estimated to be between 1.8% and 3.9% of global GHG emissions in 2020, with a risk of almost doubling and reaching more than 7% by 2025. Data centers are at the center of this growth, estimated to be responsible for a significant portion of the ICT industry's global GHG emissions (ranging from 17% to 45% in 2020) and to consume approximately 1% of global electricity in 2018.Numerous leverages exist and can help cloud providers and data center managers to reduce some of these impacts. These leverages can operate on multiple facets such as turning off unused resources, slowing down resources to adapt to the real needs of applications and services, optimizing or consolidating services to reduce the number of physical resources mobilized. These leverages can be very heterogeneous and involve hardware, software layers or more logistical constraints at the data center scale. 
Activating, deactivating and orchestrating these heterogeneous leverages on a large scale is a challenging task, allowing for potential gains in terms of reducing energy consumption and GHG emissions. In this thesis, we address the modeling, evaluation and orchestration of heterogeneous leverages in the context of a large-scale cloud data center by proposing for the first time the combination of heterogeneous leverages: both technological (turning resources on/off, migration, slowdown) and logistical (installation of new machines, decommissioning, functional or geographical changes of IT resources). First, we propose a novel heterogeneous leverage modeling approach covering leverage impacts, costs and combinations, the concept of an environmental Gantt Chart containing the leverages applied to the cloud provider's infrastructure, and a leverage management framework that aims to improve the overall energy and environmental performance of a cloud provider's entire infrastructure. Then, we focus on metric monitoring and collection, including energy and environmental data. We discuss power and energy measurement and conduct an experimental comparison of software-based power meters. Next, we study a single technological leverage by conducting a thorough analysis of the Intel RAPL leverage for power capping purposes on a set of heterogeneous nodes, for a variety of CPU- and memory-intensive workloads. Finally, we validate the proposed heterogeneous leverage modeling approach on a large scale by exploring three distinct scenarios that show the pertinence of the proposed approach in terms of resource management and potential impact reduction
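The Intel RAPL power-capping leverage studied above is typically exercised through the Linux powercap sysfs interface. The sketch below shows the general shape of such an interaction; it is an illustrative sketch, not the thesis' tooling, and the paths (here, package domain 0) require root access and vary between platforms.

```python
from pathlib import Path

RAPL_ROOT = Path("/sys/class/powercap/intel-rapl:0")  # CPU package 0

def read_energy_uj(root=RAPL_ROOT):
    """Read the cumulative package energy counter (microjoules)."""
    return int((root / "energy_uj").read_text())

def set_power_cap(watts, root=RAPL_ROOT, constraint=0):
    """Write a power limit, in microwatts, to a RAPL constraint file."""
    microwatts = int(watts * 1_000_000)
    path = root / f"constraint_{constraint}_power_limit_uw"
    path.write_text(str(microwatts))
    return microwatts

# Example (requires root on a machine exposing RAPL):
#   before = read_energy_uj()
#   set_power_cap(95.0)   # cap package 0 at 95 W
```

Sampling `energy_uj` at two instants and dividing by the elapsed time gives average power, which is the basis of the software power meters compared in the thesis.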
APA, Harvard, Vancouver, ISO, and other styles
37

Ma, Wei (Will Wei). "Dynamic, data-driven decision-making in revenue management." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120224.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 233-241).
Motivated by applications in Revenue Management (RM), this thesis studies various problems in sequential decision-making and demand learning. In the first module, we consider a personalized RM setting, where items with limited inventories are recommended to heterogeneous customers sequentially visiting an e-commerce platform. We take the perspective of worst-case competitive ratio analysis, and aim to develop algorithms whose performance guarantees do not depend on the customer arrival process. We provide the first solution to this problem when there are both multiple items and multiple prices at which they could be sold, framing it as a general online resource allocation problem and developing a system of forecast-independent bid prices (Chapter 2). Second, we study a related assortment planning problem faced by Walmart Online Grocery, where before checkout, customers are recommended "add-on" items that are complementary to their current shopping cart (Chapter 3). Third, we derive inventory-dependent price-skimming policies for the single-leg RM problem, which extend existing competitive ratio results to non-independent demand (Chapter 4). In this module, we test our algorithms using a publicly-available data set from a major hotel chain. In the second module, we study bundling, which is the practice of selling different items together, and show how to learn and price using bundles. First, we introduce bundling as a new, alternate method for learning the price elasticities of items, which does not require any changing of prices; we validate our method on data from a large online retailer (Chapter 5). Second, we show how to sell bundles of goods profitably even when the goods have high production costs, and derive both distribution-dependent and distribution-free guarantees on the profitability (Chapter 6). In the final module, we study the Markovian multi-armed bandit problem under an undiscounted finite time horizon (Chapter 7).
We improve existing approximation algorithms using LP rounding and random sampling techniques, which result in a (1/2 − ε)-approximation for the correlated stochastic knapsack problem that is tight relative to the LP. In this work, we introduce a framework for designing self-sampling algorithms, which is also used in our chronologically-later-to-appear work on add-on recommendation and single-leg RM.
by Will (Wei) Ma.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

de, Carvalho Tiago Filipe Rodrigues. "Integrated Approach to Dynamic and Distributed Cloud Data Center Management." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/739.

Full text
Abstract:
Management solutions for current and future Infrastructure-as-a-Service (IaaS) Data Centers (DCs) face complex challenges. First, DCs are now very large infrastructures holding hundreds of thousands, if not millions, of servers and applications. Second, DCs are highly heterogeneous: DC infrastructures consist of servers and network devices with different capabilities, from various vendors and different generations, while cloud applications are owned by different tenants and have different characteristics and requirements. Third, most DC elements are highly dynamic. Applications can change over time; during their lifetime, their logical architectures evolve and change according to workload and resource requirements, and failures and bursty resource demand can lead to unstable states affecting a large number of services. Global and centralized approaches limit scalability and are not suitable for large, dynamic DC environments with multiple tenants with different application requirements. We propose a novel, fully distributed and dynamic management paradigm for highly diverse and volatile DC environments. We develop LAMA, a novel framework for managing large-scale cloud infrastructures based on a multi-agent system (MAS). Provider agents collaborate to advertise and manage available resources, while app agents provide integrated and customized application management. Distributing management tasks allows LAMA to scale naturally, and its integrated approach improves efficiency. The proximity to the application and knowledge of the DC environment allow agents to react quickly to changes in performance and to pre-plan for potential failures. We implement and deploy LAMA in a testbed server cluster. We demonstrate how LAMA improves the scalability of management tasks such as provisioning and monitoring. We evaluate LAMA against state-of-the-art open source frameworks. LAMA enables customized dynamic management strategies for multi-tier applications.
These strategies can be configured to respond to failures and workload changes within the limits of the desired SLA for each application.
APA, Harvard, Vancouver, ISO, and other styles
39

Uichanco, Joline Ann Villaranda. "Data-driven optimization and analytics for operations management applications." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85695.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 163-166).
In this thesis, we study data-driven decision making in operations management contexts, with a focus on both theoretical and practical aspects. The first part of the thesis analyzes the well-known newsvendor model under the assumption that, even though demand is stochastic, its probability distribution is not part of the input; instead, the only information available is a set of independent samples drawn from the demand distribution. We analyze the well-known sample average approximation (SAA) approach and obtain new tight analytical bounds on the accuracy of the SAA solution. Unlike previous work, these bounds match the empirical performance of SAA observed in extensive computational experiments. Our analysis reveals that a distribution's weighted mean spread (WMS) impacts SAA accuracy. Furthermore, we derive a parameter-free bound on SAA accuracy for log-concave distributions through an innovative optimization-based analysis that minimizes the WMS over the distribution family. In the second part of the thesis, we use spread information to introduce new families of demand distributions under the minimax regret framework. We propose order policies that require only a distribution's mean and spread information. These policies have several attractive properties. First, they take the form of simple closed-form expressions. Second, we can quantify an upper bound on the resulting regret. Third, in an environment of high profit margins, they are provably near-optimal under mild technical assumptions on the failure rate of the demand distribution. Finally, the information they require is easy to estimate from data. We show in extensive numerical simulations that when profit margins are high, even if the information in our policy is estimated from (sometimes few) samples, they often manage to capture at least 99% of the optimal expected profit.
The third part of the thesis describes both applied and analytical work in collaboration with a large multi-state gas utility. We address a major operational resource allocation problem in which some of the jobs are scheduled and known in advance, and some are unpredictable and have to be addressed as they appear. We employ a novel decomposition approach that solves the problem in two phases. The first is a job scheduling phase, where regular jobs are scheduled over a time horizon. The second is a crew assignment phase, which assigns jobs to maintenance crews under a stochastic number of future emergencies. We propose heuristics for both phases using linear programming relaxation and list scheduling. Using our models, we develop a decision support tool for the utility which is currently being piloted in one of the company's sites. Based on the utility's data, we project that the tool will result in 55% reduction in overtime hours.
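The SAA approach analysed in the first part of this thesis has a particularly compact form for the newsvendor: order at the empirical quantile of the demand samples corresponding to the critical ratio. The sketch below illustrates only that idea; the price and cost figures are invented, and the thesis' contribution concerns bounds on the accuracy of this estimator rather than its implementation.

```python
import math

def saa_newsvendor(samples, price, cost):
    """Sample average approximation for the newsvendor order quantity.

    The SAA solution is the empirical quantile of the demand samples at
    the critical ratio (price - cost) / price, i.e. the smallest sample q
    such that the empirical CDF at q reaches the ratio.
    """
    ratio = (price - cost) / price
    ordered = sorted(samples)
    n = len(ordered)
    # ceil(n * ratio) samples must lie at or below the chosen quantity.
    k = max(1, math.ceil(n * ratio))
    return ordered[k - 1]

demand = [80, 95, 100, 110, 120, 130, 140, 150, 160, 200]
print(saa_newsvendor(demand, price=10.0, cost=3.0))  # → 140
```

With a critical ratio of 0.7 and ten samples, the policy orders the 7th smallest observation; how close that comes to the true optimum, as a function of n and the distribution's spread, is exactly what the SAA bounds quantify.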
by Joline Ann Villaranda Uichanco.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
40

SMITH, DARREN C. "A-6E FLIGHT DATA MANAGEMENT AT CHINA LAKE NAVAL WEAPONS CENTER." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613795.

Full text
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The Naval Weapons Center (NWC) A-6E flight test program, like so many DOD efforts, is caught in the vise of declining budgets and increasing demands and requirements. The A-6E data management system has evolved over 30 years by extensive testing and reflects all the “real world” experience obtained over that period of time. This paper will address that data management system, specifically how data is recorded on the A-6E during flight test and some associated issues as well as how that data is managed for analysis use, all within the environment of tight budgets and increased requirements.
APA, Harvard, Vancouver, ISO, and other styles
41

Gog, Ionel Corneliu. "Flexible and efficient computation in large data centres." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/271804.

Full text
Abstract:
Increasingly, online computer applications rely on large-scale data analyses to offer personalised and improved products. These large-scale analyses are performed on distributed data processing execution engines that run on thousands of networked machines housed within an individual data centre. These execution engines provide, to the programmer, the illusion of running data analysis workflows on a single machine, and offer programming interfaces that shield developers from the intricacies of implementing parallel, fault-tolerant computations. Many such execution engines exist, but they embed assumptions about the computations they execute, or only target certain types of computations. Understanding these assumptions involves substantial study and experimentation. Thus, developers find it difficult to determine which execution engine is best, and even if they did, they become “locked in” because engineering effort is required to port workflows. In this dissertation, I first argue that in order to execute data analysis computations efficiently, and to flexibly choose the best engines, the way we specify data analysis computations should be decoupled from the execution engines that run the computations. I propose an architecture for decoupling data processing, together with Musketeer, my proof-of-concept implementation of this architecture. In Musketeer, developers express data analysis computations using their preferred programming interface. These are translated into a common intermediate representation from which code is generated and executed on the most appropriate execution engine. I show that Musketeer can be used to write data analysis computations directly, and these can execute on many execution engines because Musketeer automatically generates code that is competitive with optimised hand-written implementations. 
The diverse execution engines cause different workflow types to coexist within a data centre, opening up both opportunities for sharing and potential pitfalls for co-location interference. However, in practice, workflows are either placed by high-quality schedulers that avoid co-location interference, but choose placements slowly, or schedulers that choose placements quickly, but with unpredictable workflow run time due to co-location interference. In this dissertation, I show that schedulers can choose high-quality placements with low latency. I develop several techniques to improve Firmament, a high-quality min-cost flow-based scheduler, to choose placements quickly in large data centres. Finally, I demonstrate that Firmament chooses placements at least as good as other sophisticated schedulers, but at the speeds associated with simple schedulers. These contributions enable more efficient and effective use of data centres for large-scale computation than current solutions.
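Flow-based schedulers such as Firmament encode placement as a min-cost flow problem; the full formulation is beyond a short sketch, but the objective being optimised can be illustrated with a brute-force minimum-cost assignment of tasks to machines. The cost matrix below is an invented placeholder for data-locality and co-location interference penalties.

```python
from itertools import permutations

def cheapest_placement(cost):
    """Exhaustive minimum-cost assignment of n tasks to n machines.

    cost[t][m] is the placement cost of task t on machine m (e.g. a
    combination of data locality and interference penalties). Flow-based
    schedulers like Firmament solve the same objective at data-centre
    scale via min-cost max-flow rather than enumeration.
    """
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[t][perm[t]] for t in range(n))
        if total < best_cost:
            best, best_cost = perm, total
    return best, best_cost

# 3 tasks x 3 machines; placeholder costs.
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
assignment, total = cheapest_placement(cost)
print(assignment, total)  # → (0, 2, 1) 12
```

The enumeration is factorial in the number of tasks, which is precisely why the flow-network encoding, solvable in polynomial time, matters for placement latency at scale.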
APA, Harvard, Vancouver, ISO, and other styles
42

Singh, Mohan G. "Data base management system for the placement center of the Atlanta University." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 1985. http://digitalcommons.auctr.edu/dissertations/2137.

Full text
Abstract:
The Placement Center of the Atlanta University organizes interviews of the students with companies around the country. A database management system was developed for signing up and preparing interview schedules on the IBM PC. The dBASE II database manager is used for creating the database and writing the programs to access it. In the first phase, a pilot database management system was tested and suggestions were collected. This system is called Model I; in this model, the database exists in third normal form. The students and the Placement Office personnel found this system to be not very user-friendly. Model I was then modified to make the system more user-friendly and reduce user time. The modified system is called Model II, where the database is in unnormalized form. This study compares Model I and Model II, analyzes the advantages and disadvantages of both models, and concludes that in order to make a database system user-friendly and reduce user time, a database may sometimes have to be designed in unnormalized form, even though an unnormalized database suffers from insertion, deletion and update anomalies.
APA, Harvard, Vancouver, ISO, and other styles
43

Haddad, Maroua. "Sizing and management of hybrid renewable energy system for data center supply." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD036.

Full text
Abstract:
Le secteur du numérique est récemment devenu un secteur majeur de la consommation d'électricité dans le monde, notamment avec l'avènement des data centers qui concentrent un très grand nombre de machines traitant des informations et fournissant des services. L'utilisation de sources d'énergie renouvelables sur site est un moyen prometteur de réduire l'impact écologique des data centers. Cependant, certaines énergies renouvelables comme les énergies solaire et éolienne sont intermittentes, étant liées aux conditions météorologiques. Étant donné qu'un centre de données doit maintenir une certaine qualité de service, l'utilisation efficace de ces sources nécessite l'utilisation de stockages. Cette thèse explore à la fois une méthode de dimensionnement et une méthode de gestion optimale d'une infrastructure hybride d'énergie renouvelable, composée de panneaux photovoltaïques, d'éoliennes, de batteries et d'un système de stockage hydrogène. Une première contribution aborde le problème du dimensionnement de cette infrastructure électrique afin de répondre à la demande du data center. Un outil de dimensionnement est proposé, prenant en compte plusieurs métriques et fournissant trois configurations différentes. L'utilisateur choisit donc la configuration appropriée, en fonction du plan économique global de son écosystème H2. Une deuxième contribution étudie le problème de la gestion de l'énergie par programmation linéaire en nombres entiers. Un outil de gestion optimale est fourni pour trouver différents engagements optimaux des sources en fonction des objectifs de l'utilisateur. Les solutions obtenues sont ensuite discutées avec plusieurs métriques et différents horizons temporels afin de trouver la meilleure solution pour répondre à la demande du data center. Enfin, une troisième contribution vise à prévoir l'évolution temporelle de l'ensoleillement et de la vitesse du vent à gros grain, à l'aide du modèle SARIMA, afin d'obtenir un dimensionnement plus précis
Information and communication technologies have recently become a major sector in energy consumption, particularly with the advent of large platforms on the Internet. These platforms use data centers, which concentrate a very large number of machines processing information and providing services, causing high energy consumption. The use of renewable energy sources (RES) on-site is then a promising way to reduce their ecological impact. However, some renewable energies such as solar and wind energy are intermittent and uncertain, being related to weather conditions. Since a data center must maintain a certain quality of service, using these sources effectively requires storage devices. This thesis explores efficient sizing and management methods for a hybrid renewable energy infrastructure composed of wind turbines, photovoltaic panels, batteries and a hydrogen system. A first contribution addresses the problem of sizing the electrical platform in order to meet the data center demand. A sizing tool is proposed, taking several metrics into account and providing three different system configurations as solutions. The user then chooses an appropriate configuration, according to the global economic plan of his H2 ecosystem. A second contribution studies the problem of energy management using a mixed integer linear programming approach. An optimal management tool is provided to find various source schedules according to different user objectives. The obtained solutions are discussed with several metrics, considering different time horizons, in order to find the best storage management to meet the data center requests. Finally, a third contribution aims to forecast the weather data using a SARIMA model, in order to reduce forecast errors and obtain a more precise sizing of the sources
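The MILP-based management problem described above can be caricatured by a much simpler greedy battery dispatch rule, charging on renewable surplus and discharging on deficit. The sketch below is only illustrative (the capacities and time series are invented), whereas the thesis' tool co-optimises batteries and a hydrogen chain under several user objectives.

```python
def greedy_dispatch(renewable, demand, battery_capacity, soc=0.0):
    """Greedy battery dispatch over a time horizon.

    Surplus renewable power charges the battery (up to capacity);
    deficits are served from the battery first, then from the grid.
    Returns per-step grid imports. A real MILP would co-optimise this
    with hydrogen storage over the whole horizon instead of acting
    myopically step by step.
    """
    cap = float(battery_capacity)
    grid = []
    for ren, dem in zip(renewable, demand):
        balance = ren - dem
        if balance >= 0:
            soc = min(cap, soc + balance)        # charge the surplus
            grid.append(0.0)
        else:
            deficit = -balance
            from_battery = min(soc, deficit)     # discharge first
            soc -= from_battery
            grid.append(float(deficit - from_battery))
    return grid

renewable = [50, 80, 120, 90, 30, 10]   # kW, illustrative
demand = [60, 60, 60, 60, 60, 60]
print(greedy_dispatch(renewable, demand, battery_capacity=40))
```

Because the greedy rule ignores the future, it can waste surplus that an anticipative MILP would have routed to hydrogen production, which is one motivation for the optimisation-based tool.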
APA, Harvard, Vancouver, ISO, and other styles
44

Rambo, Jeffrey. "Reduced order modeling of turbulent convection application to data center thermal management." Saarbrücken VDM Verlag Dr. Müller, 2006. http://d-nb.info/989386961/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Macias, Lloret Mario. "Business-driven resource allocation and management for data centres in cloud computing markets." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/144562.

Full text
Abstract:
Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services within a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate the sale of services by means of the signature of a Service Level Agreement that contains the Quality of Service terms that the Cloud provider has to guarantee by managing its resources properly. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers that allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. In consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both the performance and the business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. Such policies deal with different Business-Level Objectives: maximisation of the revenue, classification of clients, trust and reputation maximisation, and risk minimisation. This thesis demonstrates the effectiveness of such policies by means of fine-grained simulations. A pricing model may be influenced by many parameters. The weight of such parameters within the final model is not always known, or it can change as the market environment evolves. This thesis models and evaluates how providers can self-adapt to changing environments by means of genetic algorithms. Providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected immediate effects of their application.
This thesis defines and evaluates a trust and reputation system that compels providers to consider the impact of their decisions in the long term. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact of their reputation in their actions improve on the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty over the expected outcomes of cloud providers. The particularities of cloud appliances as a set of interconnected resources are studied, as well as how risk is propagated through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions at the focus of the risk, and price accordingly. Applying risk management raises the fulfilment rate of the Service Level Agreements and increases the profit of the provider.
APA, Harvard, Vancouver, ISO, and other styles
46

LeBlanc, Robert-Lee Daniel. "Analysis of Data Center Network Convergence Technologies." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4150.

Full text
Abstract:
The networks in traditional data centers have remained unchanged for decades and have grown large, complex and costly. Many data centers have a general purpose Ethernet network and one or more additional specialized networks for storage or high performance low latency applications. Network convergence promises to lower the cost and complexity of the data center network by virtualizing the different networks onto a single wire. There is little evidence, aside from vendors' claims, that validates that network convergence actually achieves these goals. This work defines a framework for creating a series of unbiased tests to validate converged technologies and compare them to traditional configurations. A case study involving two different converged network technologies was developed to validate the defined methodology and framework. The study also shows that these two technologies do indeed perform similarly to a non-virtualized network, reduce costs, cabling and power consumption, and are easy to operate.
APA, Harvard, Vancouver, ISO, and other styles
47

Rambo, Jeffrey D. "Reduced-Order Modeling of Multiscale Turbulent Convection: Application to Data Center Thermal Management." Diss., Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-03272006-080024/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006.
Marc Smith, Committee Member ; P.K. Yeung, Committee Member ; Benjamin Shapiro, Committee Member ; Sheldon Jeter, Committee Member ; Yogendra Joshi, Committee Chair.
APA, Harvard, Vancouver, ISO, and other styles
48

Gille, Marika. "Design of Modularized Data Center with a Wooden Construction." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65297.

Full text
Abstract:
The purpose of this thesis is to investigate the possibility to build a modular data center in wood. The goal is to investigate how to build data centers using building system modules, making it easier to build more flexible data centers and expand the business later on. Investigations have been conducted to find out advantages and disadvantages of using wood in a modularized data center structure. The investigation also includes analysing moisture's effect on the material and whether there are any other advantages than environmental benefits in using wood as a building material. A literature study was conducted to examine where research has already been conducted and how those studies can be applicable to this thesis. Although the ICT sector is a rapidly growing industry, little research has been published on how to build a data center. Most published information involves electricity and cooling, not measurements of the building and how materials are affected by the special climate in a data center. As a complement to the limited research, interviews were conducted and site visits were made. Interviews were conducted with Hydro66, RISE SICS North, Sunet and Swedish Modules, whilst site visits were made at Hydro66, RISE SICS North, Sunet and Facebook. As a result of these studies, limitations were identified with regard to maximum and minimum measurements for the building system and service spaces in a data center. These limitations were used as an input when designing a construction proposal using stated building systems and a design proposal for a data center. During the study, access has been granted to measurements of temperature and humidity for the in- and outgoing air of the Hydro66 data center. These measurements have been analyzed together with facts about HVAC systems and the climate's effect on wood, for example with regard to strength and stability.
This analysis has shown that more data needs to be collected during the winter and that further analysis needs to be conducted, to be able to draw conclusions on whether the indoor climate of a data center has an effect on the wooden structure. A design proposal for a data center has been produced with regard to the information gathered by the literature and empirical studies. The proposal was designed to show how the information could be implemented. The results have increased the understanding of how to build data center buildings in wood and how this type of building could be made more flexible towards future changes through modularization.
APA, Harvard, Vancouver, ISO, and other styles
49

Cruz, Ethan E. "Coupled inviscid-viscous solution methodology for bounded domains: Application to data center thermal management." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54316.

Full text
Abstract:
Computational fluid dynamics and heat transfer (CFD/HT) models have been employed as the dominant technique for the design and optimization of both new and existing data centers. Inviscid modeling has shown great speed advantages over the full Navier-Stokes CFD/HT models (over 20 times faster), but is incapable of capturing the physics in the viscous regions of the domain. A coupled inviscid-viscous solution method (CIVSM) for bounded domains has been developed in order to increase both the solution speed and accuracy of CFD/HT models. The methodology consists of an iterative solution technique that divides the full domain into multiple regions consisting of at least one set of viscous, inviscid, and interface regions. The full steady, Reynolds-Averaged Navier-Stokes (RANS) equations with turbulence modeling are used to solve the viscous domain, while the inviscid domain is solved using the Euler equations. By combining the increased speed of the inviscid solver in the inviscid regions, along with the viscous solver’s ability to capture the turbulent flow physics in the viscous regions, a faster and potentially more accurate solution can be obtained for bounded domains that contain inviscid regions which encompass more than half of the domain, such as data centers.
APA, Harvard, Vancouver, ISO, and other styles
50

Martin, Enrico <1979&gt. "Virtualization and containerization: a new concept for data center management to optimize resources distribution." Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/20605.

Full text
Abstract:
Cloud computing has transformed the way data center services are offered, utilized and handled over the internet. Typically, applications run inside virtual machines in an isolated environment. Nevertheless, a considerable hardware virtualization overhead seems to be inevitable. Recently, Docker containers have gained noticeable attention because of their substantially lower overhead compared to virtual machines, using operating system virtualization. This prominent technology mainly provides isolation, portability, interoperability, scalability and high availability. Hence it is being widely adopted, and everybody is trying to shift their software to Docker containers with the support of tools and frameworks. Containers are revolutionizing the way software is delivered and deployed. Particularly with the use of cloud orchestration tools like Kubernetes, a new approach to high availability and fault tolerance is emerging. The thesis illustrates the evolution of virtualization technology and the switch to containerization, focusing on the migration of data center processes into a containerized environment using Kubernetes as an orchestration tool. Two case examples are discussed in order to evaluate the performance of a web server and a file server. In addition, a cost/benefit analysis is introduced to estimate the advantages such a strategy could lead to.
APA, Harvard, Vancouver, ISO, and other styles
