
Journal articles on the topic 'Virtualized server'


Consult the top 50 journal articles for your research on the topic 'Virtualized server.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Da Silva, Luana Barreto, and Leonardo Henrique Silva Bomfim. "Análise de Disponibilidade de Servidores Virtualizados com Cadeias de Markov." Interfaces Científicas - Exatas e Tecnológicas 1, no. 2 (May 28, 2015): 21–34. http://dx.doi.org/10.17564/2359-4942.2015v1n2p21-34.

Abstract:
The analysis of availability of virtualized servers is an important tool for information technology and communication managers, especially when it comes to planning and designing datacenters that provide many services for companies. While virtualization enables cost reduction, it can also make the system more susceptible to downtime. This work analyzes the availability of two environments, one with a virtualized server and the other with non-virtualized servers. The services offered are e-mail, Domain Name System (DNS), Web Server and File Server, a typical scenario in many companies. A case study is developed using analytical modeling with Fault Trees and Markov Chains: the Fault Tree is used to model the servers and Markov Chains to model the behavior of each hardware and software component. The non-virtualized environment is composed of four servers, each providing specific services, while the virtualized environment consists of a single server with four virtual machines, each providing one service. The analysis of the developed models shows that although the non-virtualized system has less downtime, because it has less dependence between the services, the difference of 0.06% annually becomes irrelevant when compared to the benefits that virtualization brings to companies.
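
As a rough, self-contained illustration of the kind of two-state Markov availability model the abstract describes (not the authors' actual model or parameters), the Python sketch below computes steady-state availability from assumed MTTF/MTTR values and compares annual downtime for a consolidated virtualized host against an independent physical server. Every rate, and the dependence of each VM on the shared host, is a hypothetical assumption for illustration only.

    # Minimal sketch of a two-state (up/down) Markov availability model.
    # Assumed, illustrative rates -- not taken from the cited paper.

    HOURS_PER_YEAR = 8760.0

    def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
        """A = MTTF / (MTTF + MTTR) for a single two-state component."""
        return mttf_hours / (mttf_hours + mttr_hours)

    # Hypothetical figures: one physical host running four VMs versus
    # independent physical servers, one per service.
    host = steady_state_availability(mttf_hours=4000.0, mttr_hours=4.0)
    vm = steady_state_availability(mttf_hours=2000.0, mttr_hours=0.5)

    # In the consolidated case every service also depends on the shared host.
    virtualized_service = host * vm
    physical_service = steady_state_availability(mttf_hours=4000.0, mttr_hours=4.0)

    for name, a in [("virtualized", virtualized_service), ("physical", physical_service)]:
        downtime = (1.0 - a) * HOURS_PER_YEAR
        print(f"{name}: availability={a:.5f}, downtime={downtime:.1f} h/year")
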
2

Da Silva, Luana Barreto, Leonardo Henrique da Silva Bomfim, George Leite Junior, and Marcelino Nascimento De Oliveira. "TI Verde: Uma Proposta de Economia Energética usando Virtualização." Interfaces Científicas - Exatas e Tecnológicas 1, no. 2 (May 28, 2015): 57–74. http://dx.doi.org/10.17564/2359-4942.2015v1n2p57-74.

Abstract:
Information Technology (IT) is one of the main contributors to environmental problems, and reducing the cost and maintenance of datacenters is a challenge to be overcome by IT managers. To promote the use of computing resources in a way that is efficient and less harmful to the environment, the Green IT approach proposes sustainable ways to run a datacenter. One of these is datacenter virtualization, in which one physical server hosts virtual servers that work as individual servers. It is important to analyze the viability of keeping a virtualized datacenter by analyzing the availability of the servers. While virtualization enables cost reduction, it can also make the system more susceptible to downtime. This work analyzes the availability of two environments, one with a virtualized server and the other with non-virtualized servers. The services offered are e-mail, DNS, Web Server and File Server, a typical scenario in many companies. A case study is developed using analytical modeling with Fault Trees and Markov Chains: the Fault Tree is used to model the servers and Markov Chains to model the behavior of each hardware and software component. The non-virtualized environment is composed of four servers, each providing specific services, while the virtualized environment consists of a single server with four virtual machines, each providing one service. The analysis of the developed models shows that although the non-virtualized system has less downtime, because it has less dependence between the services, the difference of 0.06% annually becomes irrelevant when compared to the benefits brought by virtualization.
3

Homayoun, Sajad, Ahmad Jalili, and Manijeh Keshtgari. "Performance Analysis of Multiple Virtualized Servers." Computer Engineering and Applications Journal 4, no. 3 (September 20, 2015): 183–88. http://dx.doi.org/10.18495/comengapp.v4i3.150.

Abstract:
Server virtualization is considered one of the most significant changes in IT operations in the past decade, making it possible to manage groups of servers with a greater degree of reliability at a lower cost. It is driven by the goal of reducing the total number of physical servers in an organization by consolidating multiple applications on shared servers. In this paper we construct several x86_64 servers based on VMware vSphere and then analyze their performance using the open source analysis tools Pylot and Curl-loader. The results show that despite the enormous potential benefits of virtualization techniques, efficiency decreases as the number of virtual machines increases. A trade-off is therefore needed between the number of virtual machines and the expected efficiency of the servers.
4

Matos, R. D. S., P. R. M. Maciel, F. Machida, Dong Seong Kim, and K. S. Trivedi. "Sensitivity Analysis of Server Virtualized System Availability." IEEE Transactions on Reliability 61, no. 4 (December 2012): 994–1006. http://dx.doi.org/10.1109/tr.2012.2220711.

5

Sung, Guo-Ming, Yen-Shin Shen, Jia-Hong Hsieh, and Yu-Kai Chiu. "Internet of Things–based smart home system using a virtualized cloud server and mobile phone app." International Journal of Distributed Sensor Networks 15, no. 9 (September 2019): 155014771987935. http://dx.doi.org/10.1177/1550147719879354.

Abstract:
This article proposes an Internet of Things–based smart home system composed of a virtualized cloud server and a mobile phone app. The smart Internet of Things–based system includes a sensing network, which is developed with the ZigBee wireless communication protocol, a message queuing telemetry transport, a virtualized cloud server and a mobile phone app. A Raspberry Pi development board is used to receive packet information from the terminal sensors using ZigBee wireless communication. Then, the message queuing telemetry transport broker not only completes transmission of the message but also publishes it to the virtualized cloud server. The transmission can then be viewed through the website using a mobile phone. The designed app combines the application of the virtualized cloud server, client sensors and the database. Verification experiments revealed the measured average response time and throughput of approximately 4.0 s and 6069 requests per second, respectively, for the virtualized web server and approximately 0.144 s and 8866 packets per second, respectively, for the message queuing telemetry transport broker. The designed functions of the mobile phone app are a global positioning system home monitoring, family memo, medical care and near-field communication key. Both interlinkage and handler methods are proposed to facilitate a powerful function without delay in displaying information. The proposed system integrates with software and hardware to complete the data analysis and information management quickly and correctly. It can cater to user needs with superior ease and convenience.
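
The publish path sketched in the abstract (terminal sensor, gateway, MQTT broker, virtualized server) could look roughly like the snippet below. It is a generic illustration using the paho-mqtt helper publish.single; the broker address, port, topic and sensor name are hypothetical and are not taken from the paper.

    # Illustrative MQTT publisher for a gateway node (e.g., a Raspberry Pi
    # relaying ZigBee sensor readings). Broker address and topic are assumed.
    import json
    import paho.mqtt.publish as publish

    BROKER_HOST = "192.168.1.10"            # assumed address of the virtualized broker
    TOPIC = "home/livingroom/temperature"   # hypothetical topic name

    def publish_reading(value_celsius: float) -> None:
        payload = json.dumps({"sensor": "temp-01", "value": value_celsius})
        # One-shot publish; the broker relays it to the subscribed servers/apps.
        publish.single(TOPIC, payload, qos=1, hostname=BROKER_HOST, port=1883)

    if __name__ == "__main__":
        publish_reading(23.5)
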
6

Chiang, Mei-Ling, and Tsung-Te Hou. "A Scalable Virtualized Server Cluster Providing Sensor Data Storage and Web Services." Symmetry 12, no. 12 (November 25, 2020): 1942. http://dx.doi.org/10.3390/sym12121942.

Abstract:
With the rapid development of the Internet of Things (IoT) technology, diversified applications deploy extensive sensors to monitor objects, such as PM2.5 air quality monitoring. The sensors transmit data to the server periodically and continuously. However, a single server cannot provide efficient services for the ever-growing IoT devices and the data they generate. This study bases on the concept of symmetry of architecture and quantities in system design and explores the load balancing issue to improve performance. This study uses the Linux Virtual Server (LVS) and virtualization technology to deploy a virtual machine (VM) cluster. It consists of a front-end server, also a load balancer, to dispatch requests, and several back-end servers to provide services. These receive data from sensors and provide Web services for browsing real-time sensor data. The Hadoop Distributed File System (HDFS) and HBase are used to store the massive amount of received sensor data. Because load-balancing is critical for resource utilization, this study also proposes a new load distribution algorithm for VM-based server clusters that simultaneously provide multiple services, such as sensor services and Web service. It considers the aggregate load of all back-end servers on the same physical server that provides multiple services. It also considers the difference between physical machines and VMs. Algorithms such as those for LVS, which do not consider these factors, can cause load imbalance between physical servers. The experimental results demonstrate that the proposed system is fault tolerant, highly scalable, and offers high availability and high performance.
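
A minimal sketch of the dispatch idea described above — choosing the back-end VM whose physical host carries the least aggregate load, rather than looking at each VM in isolation — is given below. The host/VM layout and load figures are invented for illustration; this is not the authors' published algorithm.

    # Sketch: pick a back-end VM by the aggregate load of its physical host,
    # then by the VM's own load. Hosts, VMs and loads are hypothetical.

    vms = [
        {"name": "vm1", "host": "pm1", "load": 0.40},
        {"name": "vm2", "host": "pm1", "load": 0.10},
        {"name": "vm3", "host": "pm2", "load": 0.35},
        {"name": "vm4", "host": "pm2", "load": 0.20},
    ]

    def host_load(host: str) -> float:
        """Aggregate load of every VM placed on the same physical machine."""
        return sum(vm["load"] for vm in vms if vm["host"] == host)

    def pick_backend() -> dict:
        # Least-loaded host first, then least-loaded VM on that host.
        return min(vms, key=lambda vm: (host_load(vm["host"]), vm["load"]))

    target = pick_backend()
    print(f"dispatch next request to {target['name']} on {target['host']}")
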
7

Rong Chang, Bao, Hsiu-Fen Tsai, Chi-Ming Chen, and Chien-Feng Huang. "Analysis of virtualized cloud server together with shared storage and estimation of consolidation ratio and TCO/ROI." Engineering Computations 31, no. 8 (October 28, 2014): 1746–60. http://dx.doi.org/10.1108/ec-11-2012-0295.

Abstract:
Purpose – The transition from physical servers to a virtualized infrastructure raises crucial problems such as server consolidation, virtual machine (VM) performance, workload density, total cost of ownership (TCO), and return on investment (ROI). In order to address these problems, the purpose of this paper is to analyze virtualized cloud servers together with shared storage and to estimate the consolidation ratio and TCO/ROI in server virtualization. Design/methodology/approach – This paper introduces five distinct virtualized cloud computing servers (VCCSs) and provides an assessment of the five well-known hypervisors built into them. The proposed methodology gives insight into the problem of transitioning from physical servers to a virtualized infrastructure. Findings – VM performance reaches almost the same level across hypervisors, but the estimated VM density and TCO/ROI differ substantially among them. As a result, the authors recommend the ESX server hypervisor for a scheme with higher ROI and lower TCO; alternatively, Proxmox VE would be the second choice for those who want to save on the initial investment and still have a good management interface at the console. Research limitations/implications – In the performance analysis, instead of ESX 5.0, the authors adopted ESXi 5.0, which is free software with limited functionality and lacks features of the full ESX server such as distributed resource scheduling, high availability, consolidated backup, fault tolerance, and disaster recovery. Moreover, this paper does not discuss the security of VCCSs, which relates to access control and cryptography in VMs and is left for future work. Practical implications – When virtualizing the network, ESX/ESXi restricts which brands of physical network card can be detected by the VM, for instance Intel and Broadcom network cards; newer versions of ESXi 5.0.0 and above also support part of the Realtek series (Realtek 8186, Realtek 8169, and Realtek 8111E). Originality/value – Precisely assessing a hypervisor for server/desktop virtualization is a hard question that needs to be dealt with before deploying new IT with a VCCS on site. The authors have utilized the VMware calculator and developed an approach to server/desktop consolidation, virtualization performance, VM density, TCO, and ROI. In this paper the authors conduct a comprehensive analysis of five well-known hypervisors and give recommendations to help IT managers choose the right solution for server virtualization.
8

Banushri, A., and Dr R. A. Karthika. "Implementation levels of virtualization and security issues in cloud computing." International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 678. http://dx.doi.org/10.14419/ijet.v7i2.33.15474.

Abstract:
Cloud is the buzzword in the industry. The introduction of virtualization technology in the infrastructure domain gives us options to obtain the benefits of cloud deployments. Virtualization is a fast-growing infrastructure technology in the IT industry, and technology providers and user communities have introduced a new set of terms to describe its technologies and features. Virtualization characterizes a logical view of data representation: computing takes place in a virtualized environment, with data stored at different geographic locations and on diverse computing resources. Virtualization technology allows the creation of virtual versions of hardware, networking resources, operating systems and storage devices. It supports multiple operating systems running on a single physical machine, called the host machine, and multiple guest applications running on a single server, called the host server. Hypervisors assist in the virtualization of hardware: the software interacts with the physical system and provides a virtualized environment in which multiple operating systems run in parallel on one physical server. This paper provides information about the implementation levels of virtualization and the benefits and security problems of virtualization in a virtualized hardware environment.
9

Wang, Zhikui, Yuan Chen, Daniel Gmach, Sharad Singhal, Brian Watson, Wilson Rivera, Xiaoyun Zhu, and Chris Hyser. "AppRAISE: application-level performance management in virtualized server environments." IEEE Transactions on Network and Service Management 6, no. 4 (December 2009): 240–54. http://dx.doi.org/10.1109/tnsm.2009.04.090404.

10

Varasteh, Amir, and Maziar Goudarzi. "Server Consolidation Techniques in Virtualized Data Centers: A Survey." IEEE Systems Journal 11, no. 2 (June 2017): 772–83. http://dx.doi.org/10.1109/jsyst.2015.2458273.

11

Kim, Sang-Kon, Seung-Young Ma, and Jongsub Moon. "A novel secure architecture of the virtualized server system." Journal of Supercomputing 72, no. 1 (March 15, 2015): 24–37. http://dx.doi.org/10.1007/s11227-015-1401-4.

12

Escheikh, Mohamed, Zayneb Tayachi, and Kamel Barkaoui. "Performability evaluation of server virtualized systems under bursty workload." IFAC-PapersOnLine 51, no. 7 (2018): 45–50. http://dx.doi.org/10.1016/j.ifacol.2018.06.277.

13

Ferreto, Tiago C., Marco A. S. Netto, Rodrigo N. Calheiros, and César A. F. De Rose. "Server consolidation with migration control for virtualized data centers." Future Generation Computer Systems 27, no. 8 (October 2011): 1027–34. http://dx.doi.org/10.1016/j.future.2011.04.016.

14

Nguyen, Tuan Anh, Dong Seong Kim, and Jong Sou Park. "A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets." Scientific World Journal 2014 (2014): 1–18. http://dx.doi.org/10.1155/2014/165316.

Abstract:
It is important to assess availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of the virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failures and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM, etc.) in a virtualized servers system. We also show numerical analysis on steady state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining both software rejuvenations at VM and VMM in a wise manner.
15

Nuryadin, Ridho Akbar, Tarisa A. Ramadhani, Jamilah Karaman, and Muhammad Reza. "ANALISIS PERBANDINGAN PERFORMA VIRTUALISASI SERVER MENGGUNAKAN VMWARE ESXI, ORACLE VIRTUAL BOX, VMWARE WORKSTATION 16 DAN PROXMOX." METHOMIKA Jurnal Manajemen Informatika dan Komputerisasi Akuntansi 7, no. 2 (October 31, 2023): 175–80. http://dx.doi.org/10.46880/jmika.vol7no2.pp175-180.

Abstract:
In the era of advancing digitization, server infrastructure plays a key role in the development of applications and web services. To effectively and efficiently manage and develop virtualized servers, server virtualization techniques can be employed. There are several virtualization platforms available, such as VMware Workstation 16, VMware vSphere (ESXi), Oracle VirtualBox, and Proxmox, each with their own strengths and weaknesses. The objective of this research is to analyze the performance of these four virtualization platforms in developing and managing virtualized servers used for web services, taking into consideration response time, throughput, CPU performance, storage performance, and RAM performance. Experimental methods were used to test these four platforms and measure CPU performance, RAM performance, disk performance, throughput, and response time using Moodle benchmark. The data was then analyzed to draw conclusions about the performance of each platform. The research results show that VMware vSphere ESXi and Proxmox have better CPU performance and response time when handling multiple virtual machines, and are more efficient in disk and memory usage compared to VMware Workstation 16 and Oracle VirtualBox. Significant differences in data transfer speed were found among the four platforms. Overall, VMware vSphere ESXi and Proxmox can be considered better choices for running web servers.
16

Wang, Xiaorui, and Yefu Wang. "Coordinating Power Control and Performance Management for Virtualized Server Clusters." IEEE Transactions on Parallel and Distributed Systems 22, no. 2 (February 2011): 245–59. http://dx.doi.org/10.1109/tpds.2010.91.

17

Machida, Fumio, Victor F. Nicola, and Kishor S. Trivedi. "Job completion time on a virtualized server with software rejuvenation." ACM Journal on Emerging Technologies in Computing Systems 10, no. 1 (January 2014): 1–26. http://dx.doi.org/10.1145/2539121.

18

Sowmia, D., and B. Muruganantham. "A survey on trade-off between storage and repair traffic in distributed storage systems." International Journal of Engineering & Technology 7, no. 2.8 (March 19, 2018): 379. http://dx.doi.org/10.14419/ijet.v7i2.8.10466.

Abstract:
Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, distributed storage systems, and storage in wireless networks. This paper surveys the cloud storage model of networked online storage, where data is stored in virtualized pools of storage that are generally hosted by third parties. Hosting companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them. The data center operators, behind the scenes, virtualize the resources according to the needs of the customer and expose them as storage pools, which the customers themselves can use to store files or data objects. The data is stored across various locations; when the user wants to retrieve it, this can be done by any of the encryption methods. Finally, in view of existing techniques, promising future research directions are suggested.
19

Ally, Said, and Noorali Jiwaji. "Common inhibiting factors for technology shifting from physical to virtual computing." Ethiopian Journal of Science and Technology 15, no. 2 (June 1, 2022): 125–39. http://dx.doi.org/10.4314/ejst.v15i2.2.

Abstract:
Due to the rapid growth of cloud computing demands and the high cost of managing traditional physical IT infrastructure, virtualization has emerged as a foremost and key success factor for technology adopters seeking its intended benefits. However, the transition from physical to virtual computing is confronted with overwhelming adoption inhibitors rarely known to adopters. This paper examines inhibiting factors that have led to a low adoption rate of virtualized computing infrastructure despite it being a fast-growing and globally accepted technology. Survey results from 24 companies indicate that lack of relevant virtualization skills, security uncertainties, low computing demands and change management issues are the foremost inhibitors. In public entities, the slowness of the adoption process is largely caused by low computing demands, lack of virtualization coverage in ICT policies, resistance to change, choice of technology and the lack of virtualization project priority in ICT master plans. On the other hand, the use of open-source hypervisors and support and maintenance are specific inhibitors affecting the private sector. This paper is useful for adopters who have virtualized their server resources or plan to virtualize in the near future.
20

de Castro Tomé, Mauricio, Pedro H. J. Nardelli, Hafiz Majid Hussain, Sohail Wahid, and Arun Narayanan. "A Cyber-Physical Residential Energy Management System via Virtualized Packets." Energies 13, no. 3 (February 6, 2020): 699. http://dx.doi.org/10.3390/en13030699.

Abstract:
This paper proposes a cyber-physical system to manage flexible residential loads based on virtualized energy packets. Before being used, flexible loads need to request packets from an energy server, and the request may be granted or not. If granted, the energy server guarantees that the request will be fulfilled. Each load has a specific consumption profile and user requirement. In the proposed case study, the residential consumers share a pool of energy resources that is allocated by the energy server, whose aim is to minimize the imports related to the group. The proposed solution shows qualitative advantages over existing approaches in terms of computational complexity, fairness of the resource allocation outcomes and effectiveness in peak reduction. We demonstrate our solution on three different representative flexible loads, namely electric vehicles, saunas and dishwashers. The numerical results show the efficacy of the proposed solution for the three representative examples, demonstrating the advantages and drawbacks of different allocation rules.
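
A toy restatement of the request-grant loop the abstract describes, in which loads ask the energy server for fixed-size virtualized energy packets out of a shared budget, might look like the sketch below. The packet size, budget, request list and first-come-first-served rule are all assumptions for illustration.

    # Toy energy-packet grant loop; budget and requests are hypothetical.

    PACKET_KW = 1.0          # assumed size of one virtualized energy packet
    available_packets = 5    # shared pool the energy server may allocate this slot

    requests = [
        {"load": "ev-charger", "packets": 3},
        {"load": "sauna", "packets": 2},
        {"load": "dishwasher", "packets": 1},
    ]

    granted = []
    for req in requests:                      # e.g. a first-come-first-served rule
        if req["packets"] <= available_packets:
            available_packets -= req["packets"]
            granted.append(req["load"])       # a grant is a guarantee for this time slot

    print("granted:", granted, "| packets left:", available_packets)
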
21

Kumar, Sarvesh, Anubha Jain, and Astha Pareek. "Designing heuristic driven hybrid optimization algorithms for efficient workflow scheduling in IaaS cloud." Journal of Autonomous Intelligence 7, no. 5 (March 21, 2024): 1024. http://dx.doi.org/10.32629/jai.v7i5.1024.

Abstract:
Physical resources and hardware such as workstations, desktops, links, switches, routers, server farms, and storage devices are fundamental to the infrastructure. In cloud computing, all of this infrastructure is virtualized and offered as Infrastructure as a Service, usually called IaaS. IaaS quickly scales up or down with demand and avoids the need to procure physical servers and other data center infrastructure; each resource is offered as a separate service component. A cloud service provider manages the infrastructure, while the client installs, configures, and manages software, including applications, middleware, and operating systems. IaaS gives clients access to computing resources such as servers, storage, and networking, and organizations use their own platforms and applications within a provider's infrastructure. For cloud service providers, cloud schedulers automate IT procedures; schedulers are used by end users to automate jobs, or tasks, that support everything from big data pipelines to machine learning processes to cloud infrastructure. IaaS is one of the three main types of cloud computing services, along with Platform as a Service (PaaS) and Software as a Service (SaaS). Resource scheduling assigns each task to the CPU, the network, and the storage, with the goal of maximum utilization of resources. However, both cloud providers and users require well-organized scheduling.
22

Kumar, Mohan Raj Velayudhan, and Shriram Raghunathan. "ENERGY EFFICIENT AND INTERFERENCE AWARE PROVISIONING IN VIRTUALIZED SERVER CLUSTER ENVIRONMENT." Journal of Computer Science 10, no. 1 (January 1, 2014): 143–56. http://dx.doi.org/10.3844/jcssp.2014.143.156.

23

Escheikh, Mohamed, Kamel Barkaoui, and Hana Jouini. "Versatile workload-aware power management performability analysis of server virtualized systems." Journal of Systems and Software 125 (March 2017): 365–79. http://dx.doi.org/10.1016/j.jss.2016.12.037.

24

Jeevitha, J. K., and Athisha G. "Energy-Efficient Virtualized Scheduling and Load Balancing Algorithm in Cloud Data Centers." International Journal of Information Retrieval Research 11, no. 3 (July 2021): 34–48. http://dx.doi.org/10.4018/ijirr.2021070103.

Abstract:
To scale back energy consumption, this paper proposes three algorithms. The first identifies the load balancing factors and redistributes the load. The second finds the most suitable server for a task, using the most efficient first fit algorithm (MEFFA). The third processes tasks within the server efficiently through an energy-efficient virtual round robin (EEVRR) scheduling algorithm with a FAT tree topology architecture. The EEVRR algorithm improves quality of service by improving task scheduling performance and cutting delay in cloud data centers, thereby increasing energy efficiency while achieving the required quality of service (QoS).
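
As a hedged illustration of the 'most suitable server' step, the sketch below shows a plain first-fit style assignment of tasks to the first server with enough spare capacity. The capacities and task demands are invented, and this is only the generic placement pattern such schemes build on, not the published MEFFA or EEVRR algorithms.

    # First-fit style task placement sketch; capacities and demands are hypothetical.

    servers = [
        {"name": "s1", "capacity": 1.0, "used": 0.7},
        {"name": "s2", "capacity": 1.0, "used": 0.2},
        {"name": "s3", "capacity": 1.0, "used": 0.5},
    ]

    def place(task_demand: float):
        """Assign the task to the first server that can still hold it."""
        for server in servers:
            if server["used"] + task_demand <= server["capacity"]:
                server["used"] += task_demand
                return server["name"]
        return None  # no server has room; a new one would have to be woken up

    for demand in (0.25, 0.4, 0.3):
        print(demand, "->", place(demand))
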
25

Archana, E., V. Dickson Irudaya Raj, M. Vidhya, and J. S. Umashankar. "Secured information exchange in cloud using cross breed property based encryption." International Journal of Engineering & Technology 7, no. 1.9 (March 1, 2018): 205. http://dx.doi.org/10.14419/ijet.v7i1.9.9823.

Abstract:
People can access the web anywhere and at any time. Cloud computing is a concept that treats the resources on the Internet as a unified entity, namely the cloud. Data center operators virtualize the resources according to the requirements of the clients and expose them as storage pools, which the clients can themselves use to store files or data objects. Physically, the resources may be stored across multiple servers; hence, data robustness is a major requirement for such storage systems. In this paper we propose one way to provide data robustness, namely replicating the message so that each storage server stores a copy of it. We enhance the secure cloud storage system by using a threshold proxy re-encryption technique. This encryption scheme supports decentralized erasure codes applied over encrypted messages and data forwarding operations over encrypted and encoded messages. Our system is highly distributed, where each storage server independently encodes and forwards messages, and key servers independently perform partial decryption.
26

Villarroel, Adrián, Danny Toapanta, Santiago Naranjo, and Jessica S. Ortiz. "Hardware in the Loop Simulation for Bottle Sealing Process Virtualized on Unity 3D." Electronics 12, no. 13 (June 24, 2023): 2799. http://dx.doi.org/10.3390/electronics12132799.

Abstract:
This paper details the design and implementation of a virtualized bottle sealing plant using the Hardware in the Loop technique, for which it is divided into two parts: (i) Software consists of a virtualized environment in Unity 3D to visualize its behavior in real time; and (ii) Hardware was implemented through a PLC S7 1200 AC/DC/RLY (Programmable Logic Controller), which is responsible for the automation of the plant, programmed through the software TIA Portal V16 (Totally Integrated Automation Portal) and a control panel with buttons and indicator lights. The two developed parts communicate through bidirectional TCP/IP Ethernet, achieving a Server–Client architecture. For real-time monitoring and visualization, a SCADA (Supervisory Control and Data Acquisition) system implemented in InTouch is considered. In addition, the data acquisition is accomplished through the OPC (Open Platform Communication) server; the functionality of the OPC server is to transmit the information generated in an industrial plant at the enterprise level. This allows the process to execute its tasks of connectivity of automated processes and their supervision, as well as having scalability so that more tags can be included in other processes over time and ensure its operability.
27

Yin, Chunxia, Jian Liu, and Shunfu Jin. "An Energy-Efficient Task Scheduling Mechanism with Switching On/Sleep Mode of Servers in Virtualized Cloud Data Centers." Mathematical Problems in Engineering 2020 (February 18, 2020): 1–11. http://dx.doi.org/10.1155/2020/4176308.

Abstract:
In recent years, the energy consumption of cloud data centers has continued to increase. A large number of servers run at a low utilization rate, which results in a great waste of power. To save more energy in a cloud data center, we propose an energy-efficient task-scheduling mechanism with switching on/sleep mode of servers in the virtualized cloud data center. The key idea is that when the number of idle VMs reaches a specified threshold, the server with the most idle VMs will be switched to sleep mode after migrating all the running tasks to other servers. From the perspective of the total number of tasks and the number of servers in sleep mode in the system, we establish a two-dimensional Markov chain to analyse the proposed energy-efficient mechanism. By using the method of the matrix-geometric solution, we mathematically estimate the energy consumption and the response performance. Both numerical and simulated experiments show that our proposed energy-efficient mechanism can effectively reduce the energy consumption and guarantee the response performance. Finally, by constructing a cost function, the number of VMs hosted on each server is optimized.
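
The switch-to-sleep rule in the abstract can be paraphrased in a few lines: once the number of idle VMs crosses a threshold, the server with the most idle VMs migrates its running tasks away and goes to sleep. The sketch below is a toy restatement of that rule with invented counts and an abstracted-away migration step; it is not the authors' two-dimensional Markov-chain analysis.

    # Toy restatement of the on/sleep policy; all counts are hypothetical.

    IDLE_VM_THRESHOLD = 6

    servers = {
        "pm1": {"idle_vms": 4, "busy_vms": 2, "asleep": False},
        "pm2": {"idle_vms": 3, "busy_vms": 1, "asleep": False},
    }

    def maybe_sleep_one_server():
        total_idle = sum(s["idle_vms"] for s in servers.values() if not s["asleep"])
        if total_idle < IDLE_VM_THRESHOLD:
            return None
        # Pick the awake server with the most idle VMs ...
        name = max((n for n, s in servers.items() if not s["asleep"]),
                   key=lambda n: servers[n]["idle_vms"])
        victim = servers[name]
        # ... migrate its running tasks elsewhere (abstracted away here) and sleep.
        victim["busy_vms"] = 0
        victim["asleep"] = True
        return name

    print("put to sleep:", maybe_sleep_one_server())
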
28

Kwon, Dongup, Wonsik Lee, Dongryeong Kim, Junehyuk Boo, and Jangwoo Kim. "SmartFVM: A Fast, Flexible, and Scalable Hardware-based Virtualization for Commodity Storage Devices." ACM Transactions on Storage 18, no. 2 (May 31, 2022): 1–27. http://dx.doi.org/10.1145/3511213.

Abstract:
A computational storage device incorporating a computation unit inside or near its storage unit is a highly promising technology to maximize a storage server's performance. However, to apply such computational storage devices and realize their full potential in virtualized environments, server architects must resolve a fundamental challenge: cost-effective virtualization. This critical challenge can be broken down into the following questions: (1) how to virtualize two different hardware units (i.e., computation and storage), (2) how to integrate them to construct virtual computational storage devices, and (3) how to provide them to users. However, the existing methods for computational storage virtualization severely suffer from low performance and high costs due to the lack of hardware-assisted virtualization support. In this work, we propose SmartFVM-Engine, an FPGA card designed to maximize the performance and cost-effectiveness of computational storage virtualization. SmartFVM-Engine introduces three key ideas to achieve these design goals. First, it achieves high virtualization performance by applying hardware-assisted virtualization to both computation and storage units. Second, it further improves the performance by applying hardware-assisted resource orchestration for the virtualized units. Third, it achieves high cost-effectiveness by dynamically constructing and scheduling virtual computational storage devices. To the best of our knowledge, this is the first work to implement a hardware-assisted virtualization mechanism for modern computational storage devices.
29

Kjær, Martin A., Maria Kihl, and Anders Robertsson. "Response–Time Control of a Processor–Sharing System using Virtualized Server Environments." IFAC Proceedings Volumes 41, no. 2 (2008): 3612–18. http://dx.doi.org/10.3182/20080706-5-kr-1001.00610.

30

Speitkamp, B., and M. Bichler. "A Mathematical Programming Approach for Server Consolidation Problems in Virtualized Data Centers." IEEE Transactions on Services Computing 3, no. 4 (October 2010): 266–78. http://dx.doi.org/10.1109/tsc.2010.25.

31

Monteiro, Andre Felipe, Marcus Vinicius Azevedo, and Alexandre Sztajnberg. "Virtualized Web server cluster self-configuration to optimize resource and power use." Journal of Systems and Software 86, no. 11 (November 2013): 2779–96. http://dx.doi.org/10.1016/j.jss.2013.06.033.

32

Wang, Yefu, and Xiaorui Wang. "Virtual Batching: Request Batching for Server Energy Conservation in Virtualized Data Centers." IEEE Transactions on Parallel and Distributed Systems 24, no. 8 (August 2013): 1695–705. http://dx.doi.org/10.1109/tpds.2012.237.

33

Wang, Yefu, and Xiaorui Wang. "Performance-controlled server consolidation for virtualized data centers with multi-tier applications." Sustainable Computing: Informatics and Systems 4, no. 1 (March 2014): 52–65. http://dx.doi.org/10.1016/j.suscom.2014.02.001.

34

Nirwansyah, Ferdy, and Suharjito Suharjito. "Hybrid Disk Drive Configuration on Database Server Virtualization." Indonesian Journal of Electrical Engineering and Computer Science 2, no. 3 (June 1, 2016): 720. http://dx.doi.org/10.11591/ijeecs.v2.i3.pp720-728.

Abstract:
SSD is a revolutionary new storage technology. Enterprise storage systems using only SSDs are still very expensive, while HDDs are still widely used. This study discusses hybrid storage configurations in a virtualized database server, benchmarking four hybrid storage configurations for four databases, ORACLE, SQL Server, MySQL and PostgreSQL, on Windows Server virtualization. The benchmarks use TPC-C and TPC-H to find the best performing of the four configurations tested. The results indicate that the configuration with an HDD as the virtual disk drive for the OS and an SSD as the virtual disk drive for the database gives better performance as an OLTP and OLAP database server than an SSD as the virtual disk drive for the OS with an HDD as the virtual disk drive for the database. Based on the TPC-C data, OLTP gets the best performance with an HDD as the virtual disk drive for the OS and an SSD as the virtual disk drive for the database and temporary files.
35

Ahmadi, Mohammad Reza. "Performance Evaluation of Virtualization Techniques for Control and Access of Storage Systems in Data Center Applications." Journal of Electrical Engineering 64, no. 5 (September 1, 2013): 272–82. http://dx.doi.org/10.2478/jee-2013-0040.

Abstract:
Virtualization is a new technology that creates virtual environments based on the existing physical resources. This article evaluates the effect of virtualization techniques on control servers and on the access method in storage systems [1, 2]. For control server virtualization, we present a tile-based evaluation using heterogeneous workloads to compare several key parameters and demonstrate the effectiveness of virtualization techniques. Moreover, we evaluate the virtualized model using VMotion techniques and maximum consolidation. For the access method, we prepare three different scenarios using direct, semi-virtual, and virtual attachment models. We evaluate the proposed models with several workloads including OLTP database, data streaming, file server, web server, etc. The results of the evaluation for different criteria confirm that the server virtualization technique has high throughput and CPU usage as well as good performance with noticeable agility. The virtual technique is also a successful alternative for accessing storage systems, especially in large-capacity systems, and can therefore be an effective solution for expanding storage area and reducing access time. The results of different evaluations and measurements demonstrate that virtualization of the control server and fully virtual access provide better performance, more agility and higher utilization of the systems, and improve the business continuity plan.
36

Koratagere, Sreelakshmi, Ravi Kumar Chandrashekarappa Koppal, and Iyyanahalli Math Umesh. "Server virtualization in higher educational institutions: a case study." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (August 1, 2023): 4477. http://dx.doi.org/10.11591/ijece.v13i4.pp4477-4487.

Abstract:
Virtualization is a concept in which multiple guest operating systems share a single piece of hardware. Server virtualization is the widely used type of virtualization in which each operating system believes that it has sole control of the underlying hardware. Server virtualization has already got its place in companies. Higher education institutes have also started to migrate to virtualized servers. The motivation for higher education institutes to adopt server virtualization is to reduce the maintenance of the complex information technology (IT) infrastructure. Data security is also one of the parameters considered by higher education institutes to move to virtualization. Virtualization enables organizations to reduce expenditure by avoiding building out more data center space. Server consolidation benefits the educational institutes by reducing energy costs, easing maintenance, optimizing the use of hardware, provisioning the resources for research. As the hybrid mode of learning is gaining momentum, the online mode of teaching and working from home options can be enabled with a strengthened infrastructure. The paper presents activities conducted during server virtualization implementation at RV College of Engineering, Bengaluru, one of the reputed engineering institutes in India. The activities carried out include study of the current scenario, evaluation of new proposals and post-implementation review.
37

Trombeta, Lucas, and Nunzio Torrisi. "DHCP Hierarchical Failover (DHCP-HF) Servers over a VPN Interconnected Campus." Big Data and Cognitive Computing 3, no. 1 (March 5, 2019): 18. http://dx.doi.org/10.3390/bdcc3010018.

Abstract:
This work presents a strategy to scale out the fault-tolerant dynamic host configuration protocol (DHCP) algorithm over multiple interconnected local networks. The proposed model is open and used as an alternative to commercial solutions for a multi-campus institution with facilities in different regions that are interconnected point-to-point using a dedicated link. When the DHCP scope has to be managed and structured over multiple geographic locations that are VPN connected, it requires physical redundancy, which can be provided by a failover server. The proposed solution overcomes the limitation placed on the number of failover servers as defined in the DHCP failover (DHCP-F) protocol, which specifies the use of one primary and one secondary server. Moreover, the presented work also contributes to improving the DHCP-F specification relative to a number of practical workarounds, such as the use of a virtualized DHCP server. Therefore, this research assumes a recovery strategy that is based on physical servers distributed among different locations and not centralized as clustered virtual machines. The proposed method was evaluated by simulations to investigate the impact of this solution in terms of network traffic generated over the VPN links in order to keep the failover service running using the proposed approach.
38

Huang, Wei, Zhen Wang, Mianxiong Dong, and Zhuzhong Qian. "A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System." Scientific Programming 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/4386362.

Abstract:
The economic costs caused by electric power take the most significant part of the total cost of a data center; thus, energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may lose some performance in energy saving and Quality of Service (QoS) under dynamic workloads. Fortunately, Dynamic Voltage and Frequency Scaling (DVFS) is an efficient technique to save energy in a dynamic environment. In this paper, combined with the DVFS technology, we propose a cooperative two-tier energy-aware management method including local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of homogeneous processors in each server at run-time based on practical energy prediction. On the other hand, the Global Scheduler assigns VMs onto the designated servers based on cooperation with the local DVFS controller. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.
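
A minimal sketch of the local-controller half of such a scheme is shown below: given the utilization predicted for the next interval, pick the lowest processor frequency that still leaves headroom. The frequency steps, headroom margin and example loads are assumptions, not values from the paper.

    # DVFS-style frequency selection sketch; steps and margins are hypothetical.

    FREQ_STEPS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # assumed available P-states
    HEADROOM = 0.85                              # keep predicted load below 85% of capacity

    def pick_frequency(predicted_utilization_at_max: float) -> float:
        """predicted_utilization_at_max: expected CPU demand as a fraction of
        capacity at the highest frequency (e.g. 0.4 means 40%)."""
        f_max = FREQ_STEPS_GHZ[-1]
        for f in FREQ_STEPS_GHZ:
            # Demand rescales by f_max / f when the clock is lowered.
            if predicted_utilization_at_max * (f_max / f) <= HEADROOM:
                return f
        return f_max

    for load in (0.2, 0.5, 0.8):
        print(f"predicted load {load:.0%} -> run at {pick_frequency(load)} GHz")
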
39

Makrani, Hosein Mohamamdi, Hossein Sayadi, Najmeh Nazari, Sai Mnoj Pudukotai Dinakarrao, Avesta Sasan, Tinoosh Mohsenin, Setareh Rafatirad, and Houman Homayoun. "Adaptive Performance Modeling of Data-intensive Workloads for Resource Provisioning in Virtualized Environment." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 5, no. 4 (March 2021): 1–24. http://dx.doi.org/10.1145/3442696.

Abstract:
The processing of data-intensive workloads is a challenging and time-consuming task that often requires massive infrastructure to ensure fast data analysis. The cloud platform is the most popular and powerful scale-out infrastructure to perform big data analytics and eliminate the need to maintain expensive and high-end computing resources at the user side. The performance and the cost of such infrastructure depend on the overall server configuration, such as processor, memory, network, and storage configurations. In addition to the cost of owning or maintaining the hardware, the heterogeneity in the server configuration further expands the selection space, leading to non-convergence. The challenge is further exacerbated by the dependency of the application's performance on the underlying hardware. Despite an increasing interest in resource provisioning, little work has been done to develop accurate and practical models that proactively predict the performance of data-intensive applications for a given server configuration and provision a cost-optimal configuration online. In this work, through a comprehensive real-system empirical analysis of performance, we address these challenges by introducing ProMLB: a proactive machine-learning-based methodology for resource provisioning. We first characterize diverse types of data-intensive workloads across different types of server architectures. The characterization aids in accurately capturing applications' behavior and training a model to predict their performance. Then, ProMLB builds a set of cross-platform performance models for each application. Based on the developed predictive model, ProMLB uses an optimization technique to identify a close-to-optimal configuration that minimizes the product of execution time and cost. Compared to the oracle scheduler, ProMLB achieves 91% accuracy in terms of application-resource matching. On average, ProMLB improves the performance and resource utilization by 42.6% and 41.1%, respectively, compared to the baseline scheduler. Moreover, ProMLB improves the performance per cost by 2.5× on average.
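
The cross-platform performance-model idea can be sketched, very loosely, as a regression from server-configuration features to execution time. The snippet below uses scikit-learn with fabricated feature names, measurements and prices purely to show the shape of such a model and of the time-times-cost selection step; it does not reproduce ProMLB.

    # Loose sketch of a configuration -> runtime performance model (not ProMLB itself).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training samples: [cores, memory_GB, network_Gbps, storage_MBps]
    X = np.array([
        [8, 32, 1, 200],
        [16, 64, 10, 500],
        [32, 128, 10, 1000],
        [8, 64, 1, 500],
    ])
    y = np.array([420.0, 210.0, 130.0, 330.0])  # measured execution times in seconds

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    candidate_configs = np.array([[16, 128, 10, 1000], [8, 32, 10, 200]])
    hourly_cost = np.array([1.20, 0.45])  # assumed $/hour for each configuration

    predicted_runtime = model.predict(candidate_configs)
    score = predicted_runtime * hourly_cost  # minimize execution time x cost
    best = int(np.argmin(score))
    print("pick configuration", best, "predicted runtime", predicted_runtime[best], "s")
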
40

Uddin, Mueen, Mohammed Hamdi, Abdullah Alghamdi, Mesfer Alrizq, Mohammad Sulleman Memon, Maha Abdelhaq, and Raed Alsaqour. "Server consolidation: A technique to enhance cloud data center power efficiency and overall cost of ownership." International Journal of Distributed Sensor Networks 17, no. 3 (March 2021): 155014772199721. http://dx.doi.org/10.1177/1550147721997218.

Abstract:
Cloud computing is a well-known technology that provides flexible, efficient, and cost-effective information technology solutions for multinationals to offer improved and enhanced quality of business services to end-users. The cloud computing paradigm grew out of grid and parallel computing models, as it uses virtualization, server consolidation, utility computing, and other computing technologies and models to provide better information technology solutions for large-scale computational data centers. The recent intensifying computational demands of multinational enterprises have motivated the growth of large, complex cloud data centers to handle the business, financial, Internet, and commercial applications of different enterprises. A cloud data center encompasses thousands of physical server machines arranged in racks along with network, storage, and other equipment, and it requires an extensive amount of power to run the processes and services required by business firms. This data center infrastructure leads to challenges such as enormous power consumption, underutilization of installed equipment (especially physical server machines), and CO2 emissions contributing to global warming. In this article, we highlight data center issues in the context of Pakistan, where the data center industry faces huge power deficits and struggles to fulfill the power demands for providing data and operational services to business enterprises. The research investigates these challenges and provides solutions to reduce the number of installed physical server machines and their related equipment. We propose a server consolidation technique to increase the utilization of existing server machines by migrating their workloads to virtual server machines, in order to implement green, energy-efficient cloud data centers. To achieve this objective, we also introduce a novel Virtualized Task Scheduling Algorithm to manage and properly distribute the physical server machine workloads onto virtual server machines. The results are generated from a case study performed in Pakistan, where the proposed server consolidation technique and virtualized task scheduling algorithm are applied to a tier-level data center. The results obtained from the case study demonstrate annual power savings of 23,600 W and overall cost savings of US$78,362. The results also highlight that the utilization ratio of the existing physical server machines increased to 30% from 10%, whereas the number of server machines was reduced by 50%, contributing enormously toward huge power savings.
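
The back-of-the-envelope consolidation arithmetic behind such case studies can be illustrated as follows. The server counts, per-server wattage, consolidation ratio and electricity price below are assumptions chosen for demonstration and are not the figures reported for the Pakistani data center.

    # Rough consolidation arithmetic with assumed numbers (not the paper's data).

    physical_servers_before = 40
    avg_power_per_server_w = 350          # assumed draw per physical machine
    consolidation_ratio = 2               # assumed: every 2 old servers -> 1 virtualized host
    electricity_price_per_kwh = 0.12      # assumed tariff in $/kWh

    hosts_after = physical_servers_before // consolidation_ratio
    power_saved_w = (physical_servers_before - hosts_after) * avg_power_per_server_w
    energy_saved_kwh_per_year = power_saved_w * 24 * 365 / 1000.0
    cost_saved_per_year = energy_saved_kwh_per_year * electricity_price_per_kwh

    print(f"servers: {physical_servers_before} -> {hosts_after}")
    print(f"power saved: {power_saved_w} W "
          f"(~{energy_saved_kwh_per_year:,.0f} kWh, ~${cost_saved_per_year:,.0f} per year)")
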
41

Lama, Palden, and Xiaobo Zhou. "Coordinated Power and Performance Guarantee with Fuzzy MIMO Control in Virtualized Server Clusters." IEEE Transactions on Computers 64, no. 1 (January 2015): 97–111. http://dx.doi.org/10.1109/tc.2013.184.

42

Kramer, Hugo H., Vinicius Petrucci, Anand Subramanian, and Eduardo Uchoa. "A column generation approach for power-aware optimization of virtualized heterogeneous server clusters." Computers & Industrial Engineering 63, no. 3 (November 2012): 652–62. http://dx.doi.org/10.1016/j.cie.2011.07.023.

43

Marotta, Antonio, Stefano Avallone, and Andreas Kassler. "A Joint Power Efficient Server and Network Consolidation approach for virtualized data centers." Computer Networks 130 (January 2018): 65–80. http://dx.doi.org/10.1016/j.comnet.2017.11.003.

44

Wang, Bo, Ying Song, Yuzhong Sun, and Jun Liu. "Analysis model for server consolidation of virtualized heterogeneous data centers providing internet services." Cluster Computing 22, no. 3 (December 3, 2018): 911–28. http://dx.doi.org/10.1007/s10586-018-2880-x.

45

Deng, Wei, Fangming Liu, Hai Jin, Xiaofei Liao, and Haikun Liu. "Reliability-aware server consolidation for balancing energy-lifetime tradeoff in virtualized cloud datacenters." International Journal of Communication Systems 27, no. 4 (October 29, 2013): 623–42. http://dx.doi.org/10.1002/dac.2687.

46

Manikandan, J., and Sri Lakshmi Uppalapati. "Critical Analysis on Detection and Mitigation of Security Vulnerabilities in Virtualization Data Centers." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3s (March 13, 2023): 238–46. http://dx.doi.org/10.17762/ijritcc.v11i3s.6187.

Abstract:
There is an increasing demand for IT resources in growing business enterprises. Data center virtualization helps to meet this increasing demand by driving higher server utilization and using idle CPU cycles without requiring many new servers. Reduction in infrastructure complexity and optimization of the costs of IT system management, power and cooling are additional benefits of virtualization. Virtualization also brings various security vulnerabilities: virtualized servers and web-facing applications are prone to attacks such as hyperjacking, intrusion, data theft and denial of service. This work identifies the security challenges in virtualization and presents a critical analysis of existing state-of-the-art work on the detection and mitigation of various vulnerabilities. The aim is to identify the open issues and briefly propose prospective solutions for them.
47

V. Samuel Blessed Nayagam, P., and A. Shajin Nargunam. "Secure Data Verification and Virtual Machine Monitoring." International Journal of Engineering & Technology 7, no. 4.36 (December 9, 2018): 574. http://dx.doi.org/10.14419/ijet.v7i4.36.24140.

Abstract:
Dynamically configurable virtualized resources make the physical location of data and processing independent of their representation, and users have no control over the physical placement of data and running processes. In a multi-cloud environment, the layer of abstraction between the physical hardware and virtualized systems provides a good way to deliver cost savings through server consolidation as well as increased operational efficiency and flexibility. This added functionality introduces a virtualization layer that becomes an opportunity for attacks on the hosted virtual services. The proposed access control model protects virtual machines by adopting access control at different layers. A data coloring scheme helps to secure the virtualized data used in the virtual machines. The data verification framework, which provides a chain of trust, eliminates untrusted privileged virtual machines and additionally uses trusted computing principles to guarantee the integrity of the monitoring environment. A privacy-preserving scheme continuously monitors the operation and exchange of data between the virtual machines. The test results demonstrate that this scheme can effectively prevent virtual machine escape without affecting the overall efficiency of the system.
48

Mohanaprakash T A, Et al. "Cloud Storage Level Service Offering in Virtualized Load Balancer using AWS." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 765–71. http://dx.doi.org/10.17762/ijritcc.v11i10.8572.

Abstract:
Cloud computing epitomizes an approach well suited to modern IT needs, aggregating information and resources through cloud service providers using interconnected hardware and software delivered over the Internet at a reasonable cost. However, resource sharing can lead to challenges in availability, potentially causing system crashes. To counter this, the technique of distributing network traffic across multiple servers, known as load balancing, plays a pivotal role. This paper ensures that no single server is overwhelmed, thereby preventing overloads and enhancing user responsiveness by equitably distributing tasks; it also significantly enhances the accessibility of tasks and websites to users. The fundamental objective is to understand load regulation, which operates in tandem with associated frameworks within communication structures such as the Web. Load balancing stands as a critical domain within cloud computing, designed to prevent overburdening and to provide equally reliable service. Various algorithms are employed to assess the system's complexity. In the proposed strategy, a process is outlined to determine optimal storage space utilization in real time, using 100 virtual machines and achieving a 92% accuracy rate in its computations. This approach promises efficient resource allocation within the cloud computing framework, thereby optimizing performance and accessibility for end users.
49

Baig, Mirza Moiz, Rohan Kokate, Shanon Paul, and Aafreen Qureshi. "The Extensive Reliable Cloud Service with a Low Throughput in The Data Transmission." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 9, no. 3 (December 17, 2018): 1262–69. http://dx.doi.org/10.61841/turcomat.v9i3.14447.

Abstract:
Modeling a cloud computing center is crucial to evaluate and predict its internal connectivity reliability and availability. Many previous studies on the availability/reliability assessment of virtualized systems consisting of servers in cloud data centers have been reported. In this paper, we propose a hierarchical modeling framework for the reliability and availability evaluation of tree-based data center networks. The hierarchical model consists of three layers: (i) reliability graphs in the top layer to model the system network topology, (ii) a fault tree to model the architecture of the subsystems, and (iii) stochastic reward nets to capture the behaviors and dependencies of the components in the subsystems in detail. Two representative data center networks based on three-tier and fat-tree topologies are modeled and analyzed in a comprehensive manner. We specifically consider several case studies to investigate the impact of networking and management on cloud computing centers. Furthermore, we perform various detailed analyses with respect to reliability and availability measures for the system models. The analysis results show that appropriate networking to improve the distribution of nodes within the data center networks can enhance reliability and availability. The conclusions of this study can be used for the practical management and construction of cloud computing centers.
50

Rivaldi, Ahmad, Ucuk Darusalam, and Deny Hidayatullah. "Perancangan Multi Node Web Server Menggunakan Docker Swarm dengan Metode Highavability." JURNAL MEDIA INFORMATIKA BUDIDARMA 4, no. 3 (July 20, 2020): 529. http://dx.doi.org/10.30865/mib.v4i3.2147.

Abstract:
Container-based virtualization is very popular in software development as a lightweight form of virtualization, because the Linux kernel can partition resources between containers so that they do not disturb each other's performance, and the containers can also be used to divide the load of heavy incoming traffic. One of the most commonly used container-based virtualization platforms is Docker. Docker itself is open source software that can be adapted as desired. Docker containers can be used for clustering web servers, which aims to reduce the single point of failure (SPOF) of a web server. However, managing many containers is very complicated, so Docker provides an engine to orchestrate them called Docker Swarm, which helps with NGINX management so that resources are not divided unevenly between hosts. This research therefore aims to distribute web server traffic across hosts with load balancing, based on time-based monitoring of resources and failover, taking advantage of the low resource usage of Docker, which virtualizes only what an application needs to run.