To view the other types of publications on this topic, follow this link: VM instance.

Journal articles on the topic "VM instance"

Familiarize yourself with the top 39 journal articles for research on the topic "VM instance".

Next to every source in the list of references there is an "Add to bibliography" button. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online abstract whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Fan, Chih-Tien, Yue-Shan Chang, and Shyan-Ming Yuan. "VM instance selection for deadline constraint job on agent-based interconnected cloud". Future Generation Computer Systems 87 (October 2018): 470–87. http://dx.doi.org/10.1016/j.future.2018.04.017.

2

Wan, Jianxiong, Limin Liu, Jie Lv, and Zhiwei Xu. "Coarse-Grain QoS-Aware Dynamic Instance Provisioning for Interactive Workload in the Cloud". Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/215016.

Abstract:
The cloud computing paradigm provides Internet service providers (ISPs) with a new approach to deliver their services at lower cost. ISPs can rent virtual machines from the Infrastructure-as-a-Service (IaaS) provided by the cloud rather than purchasing them. In addition, commercial cloud providers (CPs) offer diverse VM instance rental services in various time granularities, which provides another opportunity for ISPs to reduce cost. We investigate a Coarse-grain QoS-aware Dynamic Instance Provisioning (CDIP) problem for interactive workload in the cloud from the perspective of ISPs. We formulate the CDIP problem as an optimization problem where the objective is to minimize the VM instance rental cost and the constraint is the percentile delay bound. Since Internet traffic shows a strong self-similar property, it is hard to get an analytical form of the percentile delay constraint. To address this issue, we propose a lookup table structure together with a learning algorithm to estimate the performance of the instance provisioning policy. This approach is further extended with two function approximations to enhance the scalability of the learning algorithm. We also present an efficient dynamic instance provisioning algorithm, which takes full advantage of the rental service diversity, to determine the instance rental policy. Extensive simulations are conducted to validate the effectiveness of the proposed algorithms.
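To make the cost/granularity trade-off concrete, here is a minimal sketch, assuming a brute-force search over rental counts; it is not the authors' CDIP algorithm, and the option names, prices, and capacity model are hypothetical.

```python
# Minimal sketch (not the authors' CDIP algorithm): brute-force the cheapest mix
# of rental granularities that keeps a required capacity available over a horizon.
# Option names, prices and the capacity model are hypothetical.
from itertools import product

OPTIONS = {                 # option -> (hours per rental, price per rental, capacity units)
    "hourly": (1, 0.10, 1),
    "daily": (24, 2.00, 1),
}

def cheapest_mix(required_units, horizon_hours, max_count=10):
    """Return (cost, counts) for the cheapest combination of rentals that keeps
    `required_units` of capacity available for `horizon_hours`."""
    best = None
    for counts in product(range(max_count + 1), repeat=len(OPTIONS)):
        capacity, cost = 0, 0.0
        for (hours, price, units), n in zip(OPTIONS.values(), counts):
            renewals = -(-horizon_hours // hours)        # short rentals are renewed
            cost += n * renewals * price
            capacity += n * units
        if capacity >= required_units and (best is None or cost < best[0]):
            best = (cost, dict(zip(OPTIONS, counts)))
    return best

print(cheapest_mix(required_units=3, horizon_hours=48))  # (12.0, {'hourly': 0, 'daily': 3})
```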
3

B.Rathod, Suresh, and V. Krishna Reddy. "Decision Making Framework for Decentralized Virtual Machine Placement in Cloud Computing". International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 705. http://dx.doi.org/10.14419/ijet.v7i2.7.10926.

Abstract:
In a distributed cloud environment, hosts are configured with Local Resource Monitors (LRMs). Each LRM monitors the underlying host's resource usage, runs independently, and balances the host's load by migrating Virtual Machine (VM) instances. In a dynamic environment, each host has varying resource requirements, so host load cannot remain constant. The LRM at each host makes VM migration decisions on its own, using a static threshold on its own and the other hosts' current CPU utilization. This creates the chance that the same host is selected for VM placement by multiple hosts trying to reduce the resource usage of their underlying hosts. Decision making at each server thus causes the problem of the same host being identified by multiple hosts during VM placement and consumes extra CPU power and network bandwidth towards each server. This paper addresses the above issue by proposing a decentralized decision-making framework for the cloud based on a hybrid Peer-to-Peer (P2P) network topology. The proposed solution avoids the above issues and balances the load across servers in the DC.
4

Nkenyereye, Lionel, Lewis Nkenyereye, Bayu Adhi Tama, Alavalapati Reddy, and JaeSeung Song. "Software-Defined Vehicular Cloud Networks: Architecture, Applications and Virtual Machine Migration". Sensors 20, no. 4 (February 17, 2020): 1092. http://dx.doi.org/10.3390/s20041092.

Abstract:
Cloud computing supports many unprecedented cloud-based vehicular applications. To improve connectivity and bandwidth through programmable networking architectures, the Software-Defined (SD) Vehicular Network (SDVN) has been introduced. The SDVN architecture enables vehicles to be equipped with an SDN OpenFlow switch whose routing rules are updated from an SDN OpenFlow controller. From SDVN, new vehicular architectures have been introduced, for instance the SD Vehicular Cloud (SDVC). In SDVC, vehicles are SDN devices that host virtualization technology enabling the deployment of cloud-based vehicular applications. In addition, the migration of Virtual Machines (VM) over SDVC challenges the performance of cloud-based vehicular applications due to the high mobility of vehicles. However, the current literature that discusses VM migration in SDVC is very limited. In this paper, we first analyze the evolution of the computation and networking technologies of SDVC with a focus on its architecture within the cloud-based vehicular environment. Then, we discuss the potential cloud-based vehicular applications assisted by the SDVC along with its ability to manage several VM migration scenarios. Lastly, we provide a detailed comparison of existing frameworks in SDVC that integrate the VM migration approach and of the different network emulators or simulators used to evaluate VM frameworks' use cases.
5

Dow, Eli M. "Decomposed multi-objective bin-packing for virtual machine consolidation". PeerJ Computer Science 2 (February 24, 2016): e47. http://dx.doi.org/10.7717/peerj-cs.47.

Abstract:
In this paper, we describe a novel solution to the problem of virtual machine (VM) consolidation, otherwise known as VM-Packing, as applicable to Infrastructure-as-a-Service cloud data centers. Our solution relies on the observation that virtual machines are not infinitely variable in resource consumption. Generally, cloud compute providers offer them in fixed resource allocations. Effectively this makes all VMs of that allocation type (or instance type) generally interchangeable for the purposes of consolidation from a cloud compute provider viewpoint. The main contribution of this work is to demonstrate the advantages of deconstructing the VM consolidation problem into a two-step process of multidimensional bin packing. The first step is to determine the optimal, but abstract, solution composed of finite groups of equivalent VMs that should reside on each host. The second step selects concrete VMs from the managed compute pool to satisfy the optimal abstract solution while enforcing anti-colocation and preferential colocation of the virtual machines through VM contracts. We demonstrate our high-performance, deterministic packing solution generation, with over 7,500 VMs packed in under 2 min. We demonstrate runtimes comparable to other VM management solutions published in the literature, allowing for favorable extrapolations of the prior work in the field in order to deal with the larger VM management problem sizes our solution scales to.
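The first of the two steps described above can be illustrated with a short sketch; this is not the paper's implementation, and the instance sizes and host capacity are hypothetical.

```python
# Minimal sketch of step one only (abstract packing of fixed instance *types*);
# not the paper's implementation. Instance sizes and host capacity are hypothetical.
INSTANCE_CPU = {"small": 1, "medium": 2, "large": 4}
HOST_CPU = 8

def pack_types(type_counts):
    """First-fit decreasing over instance types; returns the abstract per-host plan."""
    hosts = []
    demand = [t for t in sorted(type_counts, key=INSTANCE_CPU.get, reverse=True)
              for _ in range(type_counts[t])]
    for itype in demand:
        for host in hosts:
            if host["free"] >= INSTANCE_CPU[itype]:
                host["free"] -= INSTANCE_CPU[itype]
                host["slots"].append(itype)
                break
        else:
            hosts.append({"free": HOST_CPU - INSTANCE_CPU[itype], "slots": [itype]})
    return hosts

# Step two (not shown) would bind concrete VMs from the managed pool to these
# abstract slots while enforcing anti-colocation / preferential colocation contracts.
print(pack_types({"large": 3, "medium": 4, "small": 6}))
```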
6

Shin, Youngjoo. "A VM-Based Detection Framework against Remote Code Execution Attacks for Closed Source Network Devices". Applied Sciences 9, no. 7 (March 28, 2019): 1294. http://dx.doi.org/10.3390/app9071294.

Abstract:
Remote code execution attacks against network devices have become a major challenge in securing networking environments. In this paper, we propose a detection framework against remote code execution attacks for closed source network devices using virtualization technologies. Without disturbing the target device in any way, our solution deploys an emulated device as a virtual machine (VM) instance running the same firmware image as the target, in such a way that ingress packets are mirrored to the emulated device. By doing so, remote code execution attacks mounted by maliciously crafted packets will be captured in the memory of the VM. In this way, our solution enables successful detection of any kind of intrusion that leaves memory footprints.
7

Haragi L, Darshan, Mahith S, and Prof Sahana B. "Infrastructure Optimization in Kubernetes Cluster". Journal of University of Shanghai for Science and Technology 23, no. 06 (June 17, 2021): 546–55. http://dx.doi.org/10.51201/jusst/21/05292.

Abstract:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes is similar to VMs, but with relaxed isolation properties that allow the Operating System (OS) to be shared among applications. In contrast with a VM, a container has its own file system, a share of Central Processing Unit (CPU), memory, process space, and much more. A Kubernetes cluster is a set of node machines for running containerized applications. Each cluster contains a control plane and at least one node. Infrastructure Optimization is the process of analyzing and arranging the allocation of cloud resources that power applications and workloads so as to maximize performance and limit waste due to over-provisioning. In the paper, a "Movie Review System" web application is designed using GoLang for the backend components and HTML, CSS, and JS for the frontend components. Using AWS, an EC2 instance is created, and the web application is deployed onto EC2 and hosted on the instance server. The web application is also deployed on Kubernetes locally using the MiniKube tool. A performance analysis is performed for both deployments, considering common performance metrics for both AWS EC2 / Virtual Machine (VM) and Kubernetes.
8

Kernodle, Michael W., Robert N. McKethan, and Erik Rabinowitz. "Observational Learning of Fly Casting Using Traditional and Virtual Modeling with and without Authority Figure". Perceptual and Motor Skills 107, no. 2 (October 2008): 535–46. http://dx.doi.org/10.2466/pms.107.2.535-546.

Abstract:
Traditional and virtual modeling were compared during learning of a multiple degree-of-freedom skill (fly casting) to assess the effect of the presence or absence of an authority figure on observational learning via virtual modeling. Participants were randomly assigned to one of four groups: Virtual Modeling with an authority figure present (VM-A) (n = 16), Virtual Modeling without an authority figure (VM-NA) (n = 16), Traditional Instruction (n = 17), and Control (n = 19). Results showed significant between-group differences on Form and Skill Acquisition scores. Except for one instance, all three learning procedures resulted in significant learning of fly casting. Virtual modeling with or without an authority figure present was as effective as traditional instruction; however, learning without an authority figure was less effective with regard to Accuracy scores.
9

Kenga, Derdus, Vincent Omwenga, and Patrick Ogao. "Virtual Machine Customization Using Resource Using Prediction for Efficient Utilization of Resources in IaaS Public Clouds". Journal of Information Technology and Computer Science 6, no. 2 (September 3, 2021): 170–82. http://dx.doi.org/10.25126/jitecs.202162196.

Abstract:
The main cause of energy wastage in cloud data centres is the low level of server utilization. Low server utilization is a consequence of allocating more resources than required for running applications. For instance, in Infrastructure as a Service (IaaS) public clouds, cloud service providers (CSPs) deliver computing resources in the form of virtual machine (VM) templates, which the cloud users have to choose from. More often than not, inexperienced cloud users tend to choose bigger VMs than their applications require. To address the problem of inefficient resource utilization, the existing approaches focus on VM allocation and migration, which only leads to physical machine (PM) level optimization. Other approaches use horizontal auto-scaling, which is not a viable solution in the case of IaaS public clouds. In this paper, we propose an approach for customizing the size of a user's VM to match the resource requirements of their application workloads, based on an analysis of real backend traces collected from a VM in a production data centre. In this approach, a VM is given fixed-size resources that match application workload demands, and any demand that exceeds the fixed resource allocation is predicted and handled through vertical VM auto-scaling. In this approach, energy consumption by PMs is reduced through efficient resource utilization. Experimental results obtained from a simulation on CloudSim Plus using GWA-T-13 Materna real backend traces show that data center energy consumption can be reduced via efficient resource utilization.
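A minimal sketch of the sizing idea, assuming the fixed allocation is taken from a percentile of a historical trace and a naive moving-average forecast stands in for the prediction step; it is not the authors' method, and the trace, quantile, and window are hypothetical.

```python
# Minimal sketch (not the authors' method): derive a fixed VM size from a usage
# trace, then grow it only when a naive forecast exceeds that size.
# Trace values, quantile and forecast window are hypothetical.
import statistics

def right_size(cpu_trace, base_quantile=0.95):
    """Fixed allocation (CPU %) covering `base_quantile` of observed demand."""
    return sorted(cpu_trace)[int(base_quantile * (len(cpu_trace) - 1))]

def next_allocation(current_alloc, cpu_trace, window=5):
    """Vertical-scaling decision: keep the fixed size unless the forecast exceeds it."""
    predicted = statistics.mean(cpu_trace[-window:])
    return max(current_alloc, predicted)

trace = [12, 15, 11, 40, 18, 22, 19, 65, 17, 21, 20, 24]   # CPU % samples
base = right_size(trace)
print("fixed size:", base, "next allocation:", next_allocation(base, trace))
```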
10

Seok, Soonhwa, Boaventura DaCosta, Mikayla McHenry-Powell, Linda Heitzman-Powell, and Katrina Ostmeyer. "A Systematic Review of Evidence-Based Video Modeling for Students with Emotional and Behavioral Disorders". Education Sciences 8, no. 4 (October 16, 2018): 170. http://dx.doi.org/10.3390/educsci8040170.

Abstract:
This systematic review examined eight studies showing that video modeling (VM) can have a positive and significant effect for students with emotional and behavioral disorders (EBD). Building upon meta-analyses that sought evidence of video-based interventions decreasing problem behaviors of students with EBD in K-12 education, the review examined the standards of the Council for Exceptional Children (CEC) for evidence-based practice as well as additional quality indicators, neglected quality indicators, strategies combined with VM, the impact of the independent variables on the dependent variables, and common recommendations offered for future research. Findings revealed that the eight studies met the CEC standards for evidence-based practices as well as other quality indicators. For instance, all studies reported content and setting, participants, intervention agents, description of practice, as well as interobserver agreement and experimental control. According to the findings, fidelity index and effect size were the two most neglected quality indicators. Furthermore, instructions, reinforcement system, and feedback or discussion were the most common strategies used. Finally, generalizability—across settings, populations, treatment agents, target behaviors in the real world, and subject matter—was the most common recommendation for future research. While further investigation is warranted, these findings suggest that VM is an effective evidence-based practice for students with EBD when the CEC standards are met.
11

Rong Chang, Bao, Hsiu-Fen Tsai, Chi-Ming Chen, and Chien-Feng Huang. "Analysis of virtualized cloud server together with shared storage and estimation of consolidation ratio and TCO/ROI". Engineering Computations 31, no. 8 (October 28, 2014): 1746–60. http://dx.doi.org/10.1108/ec-11-2012-0295.

Abstract:
Purpose – The transition from physical servers to virtualized infrastructure servers has encountered crucial problems such as server consolidation, virtual machine (VM) performance, workload density, total cost of ownership (TCO), and return on investment (ROI). In order to solve the problems mentioned above, the purpose of this paper is to perform an analysis of virtualized cloud servers together with shared storage as well as an estimation of consolidation ratio and TCO/ROI in server virtualization. Design/methodology/approach – This paper introduces five distinct virtualized cloud computing servers (VCCSs) and provides an appropriate assessment of the five well-known hypervisors built into the VCCSs. The methodology the authors propose in this paper gives people an insight into the problem of the physical server transition to a virtualized infrastructure server. Findings – As a matter of fact, VM performance reaches almost the same level across hypervisors, but the estimated VM density and TCO/ROI differ considerably among them. As a result, the authors recommend choosing the ESX server hypervisor if a scheme with higher ROI and lower TCO is needed. Alternatively, Proxmox VE would be the second choice for saving the initial investment while still having a fairly good management interface at the console. Research limitations/implications – In the performance analysis, instead of ESX 5.0, the authors adopted ESXi 5.0, which is free software; its functionality is limited and does not include the full functionality of the ESX server, such as distributed resource scheduling, high availability, consolidated backup, fault tolerance, and disaster recovery. Moreover, this paper does not discuss the security problem on VCCSs, which is related to access control and cryptography in VMs, to be explored in further work. Practical implications – In the process of virtualizing the network, ESX/ESXi has restrictions on the brand of the physical network card; only certain network cards can be detected by the VM, for instance Intel and Broadcom network cards. The newer versions, ESXi 5.0.0 and above, now support parts of the Realtek series (Realtek 8186, Realtek 8169, and Realtek 8111E). Originality/value – How to precisely assess a hypervisor for server/desktop virtualization is also a hard question that needs to be dealt with before deploying new IT with a VCCS on site. The authors have utilized the VMware calculator and developed an approach to server/desktop consolidation, virtualization performance, VM density, TCO, and ROI. As a result, in this paper the authors conducted a comprehensive analysis of five well-known hypervisors and give a recommendation to help IT managers choose the right solution for server virtualization.
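For orientation, a minimal sketch of the consolidation-ratio and ROI arithmetic such an assessment relies on; it is not the authors' VMware-calculator methodology, and every figure is a hypothetical input.

```python
# Minimal sketch of the metrics only (not the authors' VMware-calculator method);
# every figure below is a hypothetical input.
def consolidation_ratio(physical_servers, vms_hosted):
    """VMs per physical server after virtualization."""
    return vms_hosted / physical_servers

def simple_annual_roi(cost_before, cost_after, investment):
    """Fraction of the virtualization investment recovered per year."""
    return (cost_before - cost_after) / investment

print(consolidation_ratio(physical_servers=4, vms_hosted=40))                         # 10.0
print(simple_annual_roi(cost_before=120_000, cost_after=70_000, investment=100_000))  # 0.5
```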
12

Ya'u, Badamasi Imam, Azlin Nordin, and Norsaremah Salleh. "META-MODELING CONSTRUCTS FOR REQUIREMENTS REUSE (RR): SOFTWARE REQUIREMENTS PATTERNS, VARIABILITY AND TRACEABILITY". MALAYSIAN JOURNAL OF COMPUTING 3, no. 2 (December 31, 2018): 119. http://dx.doi.org/10.24191/mjoc.v3i2.4181.

Abstract:
Reuse is a fundamental activity which increases the quality and productivity of software products. Reuse of software artifacts such as requirements, architectures, and code can be employed at any developmental stage of software. However, reuse at a higher level of abstraction, for instance at the requirements level, provides greater benefits in software development than when applied at a lower level of abstraction, for example at the coding level. To achieve the full benefits of reuse, a systematic approach and an appropriate strategy need to be followed. Although several reuse approaches are reported in the literature, these approaches lack a key strategy to synergize some essential drivers of reuse, which include reusable structure, variability management (VM), and traceability of software artifacts. In line with this, we make our contribution in this paper by (1) presenting the concepts and importance of software requirements patterns (SRP) for reusable structure; (2) proposing a strategy which combines three sub-disciplines of Software Engineering (SE), namely Requirements Engineering (RE), Software Product Line Engineering (SPLE) and Model-driven Engineering (MDE); (3) proposing meta-modeling constructs, which include SRP, VM, and traceability; and (4) describing the relationship amongst the three sub-disciplines of SE. This is a novel approach and we believe it can support and guide researchers and practitioners in the SE community to reap greater benefits of reuse during software development.
13

Zhao, Jiaqi, Yousri Mhedheb, Jie Tao, Foued Jrad, Qinghuai Liu, and Achim Streit. "Using a vision cognitive algorithm to schedule virtual machines". International Journal of Applied Mathematics and Computer Science 24, no. 3 (September 1, 2014): 535–50. http://dx.doi.org/10.2478/amcs-2014-0039.

Abstract:
Scheduling virtual machines is a major research topic in cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the cloud. Our primary concern with VM scheduling is the energy consumption, because the largest part of a cloud center's operation cost goes to the kilowatts used. We designed a scheduling algorithm that allocates an incoming virtual machine instance on the host machine which results in the lowest energy consumption of the entire system. More specifically, we developed a new algorithm, called vision cognition, to solve the global optimization problem. This algorithm is inspired by the observation of how human eyes directly see the smallest/largest item without comparing the items pairwise. We theoretically proved that the algorithm works correctly and converges fast. Practically, we validated the novel algorithm, together with the scheduling concept, using a simulation approach. The adopted cloud simulator models different cloud infrastructures with various properties and detailed runtime information that usually cannot be acquired from real clouds. The experimental results demonstrate the benefit of our approach in terms of reducing the cloud center's energy consumption.
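A minimal sketch of the placement objective described above (pick the host whose total power rises least when the incoming VM is added); it is not the vision-cognition algorithm itself, and the linear power model and host utilisations are hypothetical.

```python
# Minimal sketch of the objective only (not the vision-cognition algorithm):
# place the VM on the host whose total power rises least. Power model and
# host utilisations are hypothetical.
P_IDLE, P_MAX = 100.0, 250.0                # watts for an idle / fully loaded host

def power(util):
    """Simple linear host power model; a switched-off host draws nothing."""
    return 0.0 if util == 0 else P_IDLE + (P_MAX - P_IDLE) * util

def pick_host(hosts, vm_util):
    """Return the host with the smallest incremental power after adding the VM."""
    def delta(h):
        return power(hosts[h] + vm_util) - power(hosts[h])
    candidates = [h for h, u in hosts.items() if u + vm_util <= 1.0]
    return min(candidates, key=delta) if candidates else None

hosts = {"h1": 0.0, "h2": 0.5, "h3": 0.8}   # current CPU utilisation per host
print(pick_host(hosts, vm_util=0.2))        # "h2": cheaper than powering on h1
```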
14

U, Dhanush, Prasannasai S Hulikatti, Raghavendra H Malager, Sandur Shreesha, and Prakash Biswagar. "A Secure File Transfer over Virtual Machine Instances using Hybrid Encryption Technique". Journal of University of Shanghai for Science and Technology 23, no. 06 (May 31, 2021): 77–84. http://dx.doi.org/10.51201/jusst/21/05219.

Abstract:
Cloud computing is used to share data, services, and resources via a network, but such a system is vulnerable to cyber-attacks by unauthorized persons, denying the user privacy and confidentiality. The exponential growth in information technology, especially in the field of cloud computing, has seen a rise in security attacks such as interruption, interception, modification, and fabrication, making it absolutely necessary to enhance cloud security as well as network security. The way to tackle the menace of security threats is to make use of various encryption techniques and to ensure secure transmission of the data, guaranteeing the user their rights of privacy, confidentiality, integrity, authentication, and access control of data. This can be achieved by cryptographic techniques. In the former implementation, a new virtual instance is created, embedded with all the requested resources, and allocated to the user, whereas in the latter implementation the size of the allotted VM is altered to account for the extra requested resources or to free unused resources to increase efficiency. To achieve secure file transfer between instances, the hypervisor tool VirtualBox, developed by Oracle Corporation, is used. CLI commands provided by VirtualBox are used to interface with the hypervisor from the host. Thus, a model is developed that mimics the cloud environment on a small scale using a laptop/desktop, which enacts a cloud with a limited resource pool.
15

Patra, Sudhansu Shekhar, and R. K. Barik. "Dynamic Dedicated Server Allocation for Service Oriented Multi-Agent Data Intensive Architecture in Biomedical and Geospatial Cloud". International Journal of Cloud Applications and Computing 4, no. 1 (January 2014): 50–62. http://dx.doi.org/10.4018/ijcac.2014010105.

Abstract:
Cloud computing has recently received considerable attention as a promising approach for delivering Information and Communication Technology (ICT) services as a utility. In the process of providing these services it is necessary to improve the utilization of data centre resources, which operate in highly dynamic workload environments. Datacenters are integral parts of cloud computing. In a datacenter, generally hundreds or thousands of virtual servers run at any instant of time, hosting many tasks, while the cloud system keeps receiving batches of task requests. It provides services and computing through the networks. Service Oriented Architecture (SOA) and agent frameworks provide tools for developing distributed and multi-agent systems which can be used for the administration of cloud computing environments that support the above characteristics. This paper presents a SOQM (Service Oriented QoS Assured and Multi Agent Cloud Computing) architecture which supports QoS-assured cloud service provision and request. Biomedical and geospatial data on the cloud can be analyzed through SOQM, and it has allowed the efficient management of the allocation of resources to the different system agents. It proposes a finite heterogeneous multiple-VM model in which VMs are dynamically allocated depending on requests from biomedical and geospatial stakeholders.
16

Tsagkaropoulos, Andreas, Yiannis Verginadis, Maxime Compastié, Dimitris Apostolou, and Gregoris Mentzas. "Extending TOSCA for Edge and Fog Deployment Support". Electronics 10, no. 6 (March 20, 2021): 737. http://dx.doi.org/10.3390/electronics10060737.

Abstract:
The emergence of fog and edge computing has complemented cloud computing in the design of pervasive, computing-intensive applications. The proximity of fog resources to data sources has contributed to minimizing network operating expenditure and has permitted latency-aware processing. Furthermore, novel approaches such as serverless computing change the structure of applications and challenge the monopoly of traditional Virtual Machine (VM)-based applications. However, the efforts directed to the modeling of cloud applications have not yet evolved to exploit these breakthroughs and handle the whole application lifecycle efficiently. In this work, we present a set of Topology and Orchestration Specification for Cloud Applications (TOSCA) extensions to model applications relying on any combination of the aforementioned technologies. Our approach features a design-time “type-level” flavor and a run time “instance-level” flavor. The introduction of semantic enhancements and the use of two TOSCA flavors enables the optimization of a candidate topology before its deployment. The optimization modeling is achieved using a set of constraints, requirements, and criteria independent from the underlying hosting infrastructure (i.e., clouds, multi-clouds, edge devices). Furthermore, we discuss the advantages of such an approach in comparison to other notable cloud application deployment approaches and provide directions for future research.
17

Syed, Iffath Unissa. "Remittance Flows from Healthcare Workers in Toronto, Canada". Sustainability 13, no. 17 (August 25, 2021): 9536. http://dx.doi.org/10.3390/su13179536.

Abstract:
Previous research indicates that Canadian healthcare workers, particularly long-term care (LTC) workers, are frequently composed of immigrant and racialized/visible minorities (VM) who are often precariously employed, underpaid, and face significant work-related stress, violence, injuries, illness, and health inequities. Few studies, however, have analyzed the contributions and impact of their labor in international contexts and on global communities. For instance, it is estimated that over CAD 5 billion-worth of remittances originate from Canada, yet no studies to date have examined the contributions of these remittances from Canadian workers, especially from urbanized regions consisting of VM and immigrants who live and/or work in diverse and multicultural places like Toronto. The present study is the first to investigate health and LTC workers’ roles and behaviors as related to remittances. The rationale for this study is to fill important knowledge gaps. Accordingly, this study asked: Do health/LTC workers in the site of study send remittances? If so, which workers send remittances, and who are the recipients of these remittances? What is the range of monetary value of annual remittances that each worker is able to send? What is the purpose of these remittances? What motivates the decision to send remittances? This mixed-methods study used a single-case design and relied on interviews and a survey. The results indicate that many LTC workers provided significant financial support to transnational families, up to CAD 15,000 annually, for a variety of reasons, including support for education and healthcare costs, or as gifts during cultural festivals. However, the inability to send remittances was also a source of distress for those who wanted to assist their families but were unable to do so. These findings raise important questions that could be directed for future research. For example, are there circumstances under which financial remittances are funded through loans or debt? What are the implications for the sustainability and impact of remittances, given the current COVID-19 pandemic and its economic effect of dampening incomes and wages, worsening migrants’ health, wellbeing, and quality of life, as well as adversely affecting recipient economies and the quality of life of global communities?
18

Choi, Yeongho, Yujin Lim, and Jaesung Park. "Optimal Bidding Strategy for VM Spot Instances for Cloud Computing". Journal of Korean Institute of Communications and Information Sciences 40, no. 9 (September 30, 2015): 1802–7. http://dx.doi.org/10.7840/kics.2015.40.9.1802.

19

Yexi Jiang, Chang-Shing Perng, Tao Li, and R. N. Chang. "Cloud Analytics for Capacity Planning and Instant VM Provisioning". IEEE Transactions on Network and Service Management 10, no. 3 (September 2013): 312–25. http://dx.doi.org/10.1109/tnsm.2013.051913.120278.

20

Mishra, Ashish Kumar, Brajesh Kumar Umrao, and Dharmendra K. Yadav. "A survey on optimal utilization of preemptible VM instances in cloud computing". Journal of Supercomputing 74, no. 11 (July 30, 2018): 5980–6032. http://dx.doi.org/10.1007/s11227-018-2509-0.

21

Mohanty, Suneeta, Prasant Kumar Pattnaik, and G. B. Mund. "Privacy Preserving Auction Based Virtual Machine Instances Allocation Scheme for Cloud Computing Environment". International Journal of Electrical and Computer Engineering (IJECE) 7, no. 5 (October 1, 2017): 2645. http://dx.doi.org/10.11591/ijece.v7i5.pp2645-2650.

Abstract:
The Cloud Computing Environment provides computing resources in the form of Virtual Machines (VMs) to cloud users through the Internet. Auction-based VM instance allocation allows different cloud users to participate in an auction for a bundle of Virtual Machine instances, where the user with the highest bid value is selected as the winner by the auctioneer (Cloud Service Provider) to gain more. In this auction mechanism, individual bid values are revealed to the auctioneer in order to select the winner, as a result of which the privacy of bid values is lost. In this paper, we propose an auction scheme that selects the winner without revealing the individual bid values to the auctioneer, thereby maintaining the privacy of bid values. The winner gets access to the bundle of VM instances. This scheme relies on a set of cryptographic protocols, including the Oblivious Transfer (OT) protocol and Yao's protocol, to maintain the privacy of bid values.
22

Saber, Takfarinas, Joao Marques-Silva, James Thorburn, and Anthony Ventresque. "Exact and Hybrid Solutions for the Multi-Objective VM Reassignment Problem". International Journal on Artificial Intelligence Tools 26, no. 01 (February 2017): 1760004. http://dx.doi.org/10.1142/s0218213017600041.

Abstract:
Machine Reassignment is a challenging problem for constraint programming (CP) and mixed integer linear programming (MILP) approaches, especially given the size of data centres. Hybrid solutions mixing CP and heuristic algorithms, such as large neighbourhood search (CBLNS), also struggle to address the problem given its size and number of constraints. The multi-objective version of the Machine Reassignment Problem is even more challenging and it seems unlikely for CP, MILP or hybrid solutions to obtain good results in this context. As a result, the first approaches to address this problem have been based on other optimisation methods, including metaheuristics. In this paper we study three things: (i) under which conditions a mixed integer optimisation solver, such as IBM ILOG CPLEX, can be used for the Multi-objective Machine Reassignment Problem; (ii) how much of the search space can a well-known hybrid method such as CBLNS explore; and (iii) whether we can find a better hybrid approach combining MILP or CBLNS and another recent metaheuristic proposed for the problem (GeNePi). We show that MILP can handle only small or medium scale data centres, and with some relaxations, such as an optimality tolerance gap and a limited number of directions explored in the search space. CBLNS, on the other hand, struggles with the problem in general but achieves reasonable performance for large instances of the problem. However, we show that our hybridisation improves both the quality of the set of solutions (CPLEX+GeNePi and CBLNS+GeNePi improve the solutions by +17.8% against CPLEX alone and +615% against CBLNS alone) and the number of solutions (8.9 times more solutions than CPLEX alone and 56.76 times more solutions than CBLNS alone), while the processing time of CPLEX+GeNePi and CBLNS+GeNePi increases only by 6% and 16.4% respectively. Overall, the study shows that CPLEX+GeNePi is the best algorithm for small instances (CBLNS+GeNePi only gets 45.2% of CPLEX+GeNePi's hypervolume) while CBLNS+GeNePi is better than the others on large instances (which CPLEX+GeNePi cannot address).
23

Jeyarani, R., and N. Nagaveni. "A Heuristic Meta Scheduler for Optimal Resource Utilization and Improved QoS in Cloud Computing Environment". International Journal of Cloud Applications and Computing 2, no. 1 (January 2012): 41–52. http://dx.doi.org/10.4018/ijcac.2012010103.

Abstract:
This paper presents a novel meta-scheduler algorithm using Particle Swarm Optimization (PSO) for a cloud computing environment that focuses on fulfilling the deadline requirements of resource consumers as well as the energy conservation requirements of the resource provider, contributing towards green IT. PSO is a population-based heuristic method which can be used to solve NP-hard problems. The jobs are considered to be independent, non-preemptive, parallel and time critical. In order to execute jobs in a cloud, Virtual Machine (VM) instances are first launched in appropriate physical servers available in a data-center. The number of VM instances to be created across different servers to complete the time-critical jobs successfully is identified using PSO by exploiting the idle resources in powered-on servers. The scheduler postpones the power-up/activation of new servers/hosts for launching enqueued VM requests as long as it is possible to meet the deadline requirements of the user. The meta-scheduler also incorporates a backfilling strategy, which improves makespan. The results conclude that the proposed novel meta-scheduler gives optimization in terms of the number of jobs meeting their deadlines (QoS) and the utilization of computing resources, helping both the cloud service consumer and the cloud service provider.
24

Sharma, Oshin, and Hemraj Saini. "Performance Evaluation of VM Placement Using Classical Bin Packing and Genetic Algorithm for Cloud Environment". International Journal of Business Data Communications and Networking 13, no. 1 (January 2017): 45–57. http://dx.doi.org/10.4018/ijbdcn.2017010104.

Abstract:
In the current era, the adoption of cloud computing increases with every passing day, driven by one of its dominant services, Infrastructure as a Service (IaaS), which virtualizes the hardware by creating multiple instances of VMs on a single physical machine. Virtualizing the hardware improves resource utilization, but it can also leave the system over-utilized with inefficient performance. Therefore, these VMs need to be migrated to other physical machines using the VM consolidation process in order to reduce the number of host machines and to improve the performance of the system. Thus, the idea of placing virtual machines on other hosts has led to the proposal of many new VM placement algorithms. A reduced set of physical machines, in turn, requires less power; therefore, in the current work the authors present a decision-making VM placement system based on a genetic algorithm and compare it with three predefined VM placement techniques based on classical bin packing. This analysis contributes to a better understanding of the effects of the placement strategies on the overall performance of the cloud environment and of how the use of a genetic algorithm delivers better results for VM placement than classical bin packing algorithms.
25

Mishra, Ashish Kumar, Dharmendra K. Yadav, Yogesh Kumar, and Naman Jain. "Improving reliability and reducing cost of task execution on preemptible VM instances using machine learning approach". Journal of Supercomputing 75, no. 4 (December 8, 2018): 2149–80. http://dx.doi.org/10.1007/s11227-018-2717-7.

26

Jeyarani, R., N. Nagaveni, and R. Vasanth Ram. "Self Adaptive Particle Swarm Optimization for Efficient Virtual Machine Provisioning in Cloud". International Journal of Intelligent Information Technologies 7, no. 2 (April 2011): 25–44. http://dx.doi.org/10.4018/jiit.2011040102.

Abstract:
Cloud Computing provides dynamic leasing of server capabilities as a scalable, virtualized service to end users. The discussed work focuses on the Infrastructure as a Service (IaaS) model, where custom Virtual Machines (VMs) are launched in appropriate servers available in a data-center. The context of the environment is a large scale, heterogeneous and dynamic resource pool. Nonlinear variation in the availability of processing elements, memory size, storage capacity, and bandwidth causes resource dynamics, apart from the sporadic nature of the workload. The major challenge is to map a set of VM instances onto a set of servers from a dynamic resource pool so that the total incremental power drawn upon the mapping is minimal and does not compromise the performance objectives. This paper proposes a novel Self Adaptive Particle Swarm Optimization (SAPSO) algorithm to solve the intractable nature of the above challenge. The proposed approach promptly detects and efficiently tracks the changing optimum that represents target servers for VM placement. The experimental results of SAPSO were compared with Multi-Strategy Ensemble Particle Swarm Optimization (MEPSO), and the results show that SAPSO outperforms the latter for power-aware adaptive VM provisioning in a large scale, heterogeneous and dynamic cloud environment.
27

Khaleel, Mustafa, and Mengxia Zhu. "Efficient and Fair Bandwidth Scheduling in Cloud Environments". ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 6, no. 2 (November 29, 2018): 20. http://dx.doi.org/10.14500/aro.10441.

Abstract:
Hundreds of thousands of servers in data centers are operated to provide users with pay-as-you-go infrastructure as a service, platform as a service, and software as a service. Many different types of virtual machine (VM) instances hosted on these servers often need to communicate efficiently, with data movement under the current bandwidth capacity. This motivates providers to seek a bandwidth scheduler that satisfies two objectives, namely assuring the minimum bandwidth per VM for the guaranteed deadline and eliminating network congestion as much as possible. Based on rigorous mathematical models, we formulated a cloud-based bandwidth scheduling algorithm which enables dynamic and fair bandwidth management by categorizing the total bandwidth into several categories and adjusting the allocated bandwidth limit per VM for both upstream and downstream traffic in real time. The simulation showed that the paradigm was able to utilize the total assigned bandwidth more efficiently compared to algorithms such as bandwidth efficiency persistence proportional sharing (PPS), PPS, and PS at the network level.
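A minimal sketch of per-VM bandwidth limiting by category, in the spirit of the scheduler described above but not the paper's algorithm; the categories, shares, and demands are hypothetical.

```python
# Minimal sketch (not the paper's scheduler): split the link per traffic category,
# then cap each VM at min(demand, fair share) within its category.
# Categories, shares and demands are hypothetical.
def allocate(total_mbps, demands, category_share):
    """demands maps VM -> (category, requested Mbps); returns VM -> allocated Mbps."""
    alloc = {}
    for cat, share in category_share.items():
        vms = {v: d for v, (c, d) in demands.items() if c == cat}
        fair = total_mbps * share / max(len(vms), 1)
        alloc.update({v: min(d, fair) for v, d in vms.items()})
    return alloc

demands = {"vm1": ("interactive", 40), "vm2": ("interactive", 10),
           "vm3": ("batch", 300)}
print(allocate(1000, demands, {"interactive": 0.3, "batch": 0.7}))
```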
28

Samuel, Madan, David M. Burge, and Robert J. Marchbanks. "Tympanic membrane displacement testing in regular assessment of intracranial pressure in eight children with shunted hydrocephalus". Journal of Neurosurgery 88, no. 6 (June 1998): 983–95. http://dx.doi.org/10.3171/jns.1998.88.6.0983.

Abstract:
Object. The authors assessed the accuracy and repeatability of the tympanic membrane displacement (TMD) test, an audiometric technique that is used to evaluate changes in intracranial pressure (ICP) in children with shunted hydrocephalus. Methods. A prospective comparative evaluation of 31 clinical episodes of shunt malfunction was made by using the serial TMD test and direct ICP measurement in eight children with shunted hydrocephalus between January 1995 and February 1996. The volume displacement of the tympanic membrane (Vm) on stapedial contraction was inward for raised ICP in 11 instances and ranged from −120 to −539 nl (mean −263.5 nl). This was confirmed by direct ICP monitoring, which showed values ranging from 20 to 30 mm Hg (mean 26 mm Hg). The TMD test measurement (Vm) in 18 instances of low ICP ranged from 263 to 717 nl (mean 431.3 nl); this was corroborated by direct ICP measurement, which ranged from 3 to 7 mm Hg (mean 4.2 mm Hg). The normal baseline Vm values obtained when patients were asymptomatic ranged from −98 to 197 nl (mean 110 nl). As a noninvasive diagnostic tool used in predicting changes in ICP, the TMD test had a sensitivity of 83% and specificity of 100%. The positive predictive value of the test was 100% and the negative predictive value was 29%. Conclusions. The TMD test can be used on a regular basis as a reproducible investigative tool in the assessment of ICP in children with shunted hydrocephalus, thereby reducing the need for invasive ICP monitoring. The equipment necessary to perform this testing is mobile. It will provide a useful serial guide to ICP abnormalities in children with shunted hydrocephalus.
29

Hassan, M. K., A. Babiker, M. Baker, and M. Hamad. "SLA Management For Virtual Machine Live Migration Using Machine Learning with Modified Kernel and Statistical Approach". Engineering, Technology & Applied Science Research 8, no. 1 (February 20, 2018): 2459–63. http://dx.doi.org/10.48084/etasr.1692.

Abstract:
The application of cloud computing is rising substantially due to its capability to deliver scalable computational power. A system attempts to allocate the maximum number of resources in a manner that ensures that all service level agreements (SLAs) are maintained. Virtualization is considered a core technology of cloud computing. Virtual machine (VM) instances allow cloud providers to utilize datacenter resources more efficiently. Moreover, by using dynamic VM consolidation through live migration, VMs can be placed according to their current resource requirements on the minimal number of physical nodes, consequently maintaining SLAs. Accordingly, non-optimized and inefficient VM consolidation may lead to performance degradation. Therefore, to ensure acceptable quality of service (QoS) and SLAs, a machine learning technique with a modified kernel for VM live migration based on adaptive prediction of utilization thresholds is presented. The efficiency of the proposed technique is validated with different workload patterns from PlanetLab servers.
30

Chandrakala, N., and B. Thirumala Rao. "Migration of Virtual Machine to improve the Security of Cloud Computing". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 210. http://dx.doi.org/10.11591/ijece.v8i1.pp210-219.

Abstract:
Cloud services help individuals and organizations use data that are managed by third parties at remote locations. With the growth of cloud computing environments, security has become a major concern, raised ever more consistently as data and applications move to the cloud, since individuals do not trust third-party cloud computing providers with their private and most sensitive data and information. This paper presents the migration of virtual machines to improve security in cloud computing. A virtual machine (VM) is an emulation of a particular computer system. In cloud computing, virtual machine migration is a useful tool for migrating operating system instances across multiple physical machines. It is used for load balancing, fault management, low-level system maintenance and reducing energy consumption. Virtual machine (VM) migration is a powerful management technique that gives data center operators the ability to adapt the placement of VMs in order to better satisfy performance objectives, improve resource utilization and communication locality, achieve fault tolerance, reduce energy consumption, and facilitate system maintenance activities. In the proposed migration-based security approach, the placement of VMs can make an enormous difference in terms of security levels. On the basis of a survivability analysis of VMs and a Discrete Time Markov Chain (DTMC) analysis, we design an algorithm that generates a secure placement arrangement to which the guest VMs can move before an attack succeeds.
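A minimal sketch of the kind of Discrete Time Markov Chain computation alluded to above: given a hypothetical transition matrix over placement states, estimate how likely an attacker is to reach a compromised state within k steps; the states and probabilities are illustrative, not the paper's model.

```python
# Minimal DTMC sketch: states and transition probabilities are hypothetical,
# not the paper's model. Computes the probability of reaching the absorbing
# "compromised" state within k steps; a guest VM would be migrated before then.
STATES = ["safe", "probed", "compromised"]
P = {   # row-stochastic transition probabilities
    "safe":        {"safe": 0.90, "probed": 0.10, "compromised": 0.00},
    "probed":      {"safe": 0.30, "probed": 0.50, "compromised": 0.20},
    "compromised": {"safe": 0.00, "probed": 0.00, "compromised": 1.00},
}

def risk_within(k, start="safe"):
    dist = {s: 1.0 if s == start else 0.0 for s in STATES}
    for _ in range(k):
        dist = {t: sum(dist[s] * P[s][t] for s in STATES) for t in STATES}
    return dist["compromised"]

print(round(risk_within(5), 3))
```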
31

Chen, Junjie, and Hongjun Li. "A Two-Phase Cloud Resource Provisioning Algorithm for Cost Optimization". Mathematical Problems in Engineering 2020 (October 8, 2020): 1–10. http://dx.doi.org/10.1155/2020/1310237.

Abstract:
Cloud computing is a new computing paradigm to deliver computing resources as services over the Internet. Under such a paradigm, cloud users can rent computing resources from cloud providers to provide their services. The goal of cloud users is to minimize the resource rental cost while meeting the service requirements. In reality, cloud providers often offer multiple pricing models for virtual machine (VM) instances, including on-demand and reserved pricing models. Moreover, the workload of cloud users varies with time and is not known a priori. Therefore, it is challenging for cloud users to determine the optimal cloud resource provisioning. In this paper, we propose a two-phase cloud resource provisioning algorithm. In the first phase, we formulate the resource reservation problem as a two-stage stochastic programming problem, and solve it by the sample average approximation method and the dual decomposition method. In the second phase, we propose a hybrid ARIMA-Kalman model to predict the workload, and determine the number of on-demand instances based on the predicted workload. The effectiveness of the proposed two-phase algorithm is evaluated using a real-world workload trace and Amazon EC2’s pricing models. The simulation results show that the proposed algorithm can significantly reduce the operational cost while guaranteeing the service level agreement (SLA).
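A minimal sketch of the second phase described above, with a naive moving-average forecast standing in for the ARIMA-Kalman model; the per-instance capacity, reserved pool size, and workload trace are hypothetical.

```python
# Minimal sketch of the second phase only (not the paper's ARIMA-Kalman model):
# a naive forecast decides how many on-demand instances to add on top of the
# reserved pool. Capacities, pool size and trace are hypothetical.
import math

REQS_PER_INSTANCE = 100           # requests/s a single instance can serve
RESERVED_INSTANCES = 3            # fixed by the (not shown) reservation phase

def on_demand_needed(workload_history):
    predicted = sum(workload_history[-3:]) / 3          # stand-in forecast
    shortfall = max(0.0, predicted - RESERVED_INSTANCES * REQS_PER_INSTANCE)
    return math.ceil(shortfall / REQS_PER_INSTANCE)

print(on_demand_needed([280, 310, 450, 520, 610]))      # 3 on-demand instances
```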
32

Chowdhury, Sandipan, and Baron Chanda. "Estimating the voltage-dependent free energy change of ion channels using the median voltage for activation". Journal of General Physiology 139, no. 1 (December 12, 2011): 3–17. http://dx.doi.org/10.1085/jgp.201110722.

Abstract:
Voltage-gated ion channels are crucial for electrical activity and chemical signaling in a variety of cell types. Structure-activity studies involving electrophysiological characterization of mutants are widely used and allow us to quickly realize the energetic effects of a mutation by measuring macroscopic currents and fitting the observed voltage dependence of conductance to a Boltzmann equation. However, such an approach is somewhat limiting, principally because of the inherent assumption that the channel activation is a two-state process. In this analysis, we show that the area delineated by the gating charge displacement curve and its ordinate axis is related to the free energy of activation of a voltage-gated ion channel. We derive a parameter, the median voltage of charge transfer (Vm), which is proportional to this area, and prove that the chemical component of free energy change of a system can be obtained from the knowledge of Vm and the maximum number of charges transferred. Our method is not constrained by the number or connectivity of intermediate states and is applicable to instances in which the observed responses show a multiphasic behavior. We consider various models of ion channel gating with voltage-dependent steps, latent charge movement, inactivation, etc. and discuss the applicability of this approach in each case. Notably, our method estimates a net free energy change of approximately −14 kcal/mol associated with the full-scale activation of the Shaker potassium channel, in contrast to −2 to −3 kcal/mol estimated from a single Boltzmann fit. Our estimate of the net free energy change in the system is consistent with those derived from detailed kinetic models (Zagotta et al. 1994. J. Gen. Physiol. doi:10.1085/jgp.103.2.321). The median voltage method can reliably quantify the magnitude of free energy change associated with activation of a voltage-dependent system from macroscopic equilibrium measurements. This will be particularly useful in scanning mutagenesis experiments.
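The central relation can be written out as follows; the numerical values in the second line are illustrative assumptions (only the roughly -14 kcal/mol figure appears in the abstract).

```latex
% Relation described in the abstract; Q_max is the maximum gating charge,
% F the Faraday constant, V_M the median voltage of charge transfer.
\[
  \Delta G_{\mathrm{chem}} = Q_{\max}\, F\, V_{\mathrm{M}}
\]
% Illustrative check with assumed values (not taken from the abstract):
% Q_max ~ 13 e_0, V_M ~ -47 mV
\[
  \Delta G_{\mathrm{chem}} \approx 13 \times 96485\,\mathrm{C\,mol^{-1}} \times (-0.047\,\mathrm{V})
  \approx -59\,\mathrm{kJ\,mol^{-1}} \approx -14\,\mathrm{kcal\,mol^{-1}}
\]
```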
33

Wei, Yi, Daniel Kudenko, Shijun Liu, Li Pan, Lei Wu, and Xiangxu Meng. "A Reinforcement Learning Based Auto-Scaling Approach for SaaS Providers in Dynamic Cloud Environment". Mathematical Problems in Engineering 2019 (February 3, 2019): 1–11. http://dx.doi.org/10.1155/2019/5080647.

Abstract:
Cloud computing is an emerging paradigm which provides a flexible and diversified trading market for Infrastructure-as-a-Service (IaaS) providers, Software-as-a-Service (SaaS) providers, and cloud-based application customers. Taking the perspective of SaaS providers, they offer various SaaS services using rental cloud resources supplied by IaaS providers to their end users. In order to maximize their utility, the best behavioural strategy is to reduce renting expenses as much as possible while providing sufficient processing capacity to meet customer demands. In reality, public IaaS providers such as Amazon offer different types of virtual machine (VM) instances with different pricing models. Moreover, service requests from customers always change as time goes by. In such heterogeneous and changing environments, how to realize application auto-scaling becomes increasingly significant for SaaS providers. In this paper, we first formulate this problem and then propose a Q-learning based self-adaptive renting plan generation approach to help SaaS providers make efficient IaaS facilities adjustment decisions dynamically. Through a series of experiments and simulation, we evaluate the auto-scaling approach under different market conditions and compare it with two other resource allocation strategies. Experimental results show that our approach could automatically generate optimal renting policies for the SaaS provider in the long run.
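A minimal sketch of the Q-learning machinery behind a renting-plan policy like the one described above; it is not the authors' model, and the states, actions, rewards, and hyperparameters are hypothetical.

```python
# Minimal Q-learning sketch (not the authors' model): states, actions, rewards
# and hyperparameters are hypothetical stand-ins for a renting-plan policy.
import random
from collections import defaultdict

ACTIONS = ["rent_one_more", "release_one", "keep"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2           # learning rate, discount, exploration

Q = defaultdict(float)                      # (state, action) -> estimated value

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

update(state="low_load", action="release_one", reward=1.0, next_state="low_load")
print(Q[("low_load", "release_one")])       # 0.1 after a single update
```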
34

Ochs, J., M. L. Brecher, D. Mahoney, R. Vega, B. H. Pollock, G. R. Buchanan, V. M. Whitehead, Y. Ravindranath, and A. I. Freeman. "Recombinant interferon alfa given before and in combination with standard chemotherapy in children with acute lymphoblastic leukemia in first marrow relapse: a Pediatric Oncology Group pilot study." Journal of Clinical Oncology 9, no. 5 (May 1991): 777–82. http://dx.doi.org/10.1200/jco.1991.9.5.777.

Abstract:
Recombinant interferon alfa (rIFN-alpha) was given to 31 children with acute lymphoblastic leukemia (ALL) in first on-therapy marrow relapse as the sole treatment (30 megaunits/m2/d intravenously x 10 days) before standard four-drug reinduction and during multiagent continuation therapy (30 megaunits/m2 subcutaneously x 3 consecutive days every 3 weeks). After 10 days of rIFN-alpha, there were two partial remissions (PRs); seven additional patients had either greater than or equal to 25% reduction in the percentage of marrow blast cells or hypoplastic marrow. Two patients had progressive disease with an increase in leukocyte counts. All patients experienced influenza-like symptoms, and there were isolated instances of severe abdominal pain and personality change. Dose-limiting toxicity comprised grade III/IV transaminase elevation (two patients) and syncope with personality change (one patient). Twenty-three of 31 children (74%) subsequently achieved marrow remission using standard agents. One patient was taken off study during teniposide (VM-26) and cytarabine (ara-C) consolidation due to toxicity. Continuation therapy including rIFN-alpha pulse was well tolerated in the remaining children; only one patient required rIFN-alpha dosage reduction (for CNS toxicity). rIFN-alpha toxicity did not necessitate reductions in doses of standard chemotherapy agents or significant delays in therapy. Five patients remain in remission at 26+ to 36+ months; 13 patients relapsed in marrow, one in the meninges (7 months), and one in meninges, mediastinum, and lymph nodes (2 months). Two children were removed from study for marrow transplant. In summary, high-dose rIFN-alpha alone had a modest antileukemic effect. In contrast to the clinical experience with combined rIFN-alpha and chemotherapy in adults, rIFN-alpha given in a pulse-like manner throughout continuation therapy did not compromise the intensity of the standard chemotherapy regimen.
35

Gangadhar, Pvss, Ashok Kumar Hota, Mandapati Venkateswara Rao, and Vedula Venkateswara Rao. "Performance of Memory Virtualization Using Global Memory Resource Balancing". International Journal of Cloud Applications and Computing 9, no. 1 (January 2019): 16–32. http://dx.doi.org/10.4018/ijcac.2019010102.

Abstract:
Virtualization has become a universal abstraction layer in contemporary data centers. By multiplexing hardware resources into multiple virtual machines and allowing several operating systems to run on the same physical platform at the same time, it can effectively decrease power consumption and physical footprint, and improve security by isolating virtual machines. In a virtualized system, memory resource management is a decisive task for achieving high resource utilization and performance. Insufficient memory allocation to a virtual machine degrades its performance drastically; conversely, over-allocation wastes memory resources. Meanwhile, a virtual machine's memory demand may vary drastically over time. As a consequence, effective memory resource management calls for a dynamic memory balancer which, ideally, can adjust the memory allocation of each virtual machine in a timely manner based on its current memory demand, and thereby achieve the best memory utilization and the best possible overall performance. Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: it permits a clean separation between hardware and software and eases fault management. To estimate the memory demand of each virtual machine and to arbitrate possible memory resource contention, a widely adopted approach is to build a Least Recently Used (LRU) based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. In this paper, the authors first present a low-overhead LRU-based memory demand tracking scheme, which includes orthogonal optimizations such as an AVL-tree-based LRU organization and dynamic hot set sizing. The evaluation confirms that, for the complete SPEC CPU 2006 benchmark set, after applying these optimization techniques the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, the authors then predict its trend in the near future and take different tactics for different forecast results. When there is an adequate amount of physical memory on the host, it locally balances its memory resources among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to persist for a sufficiently long time, VM live migration is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. The experimental results show that this design achieves a 49% center-wide speedup.
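The miss ratio curve mentioned in this abstract can be illustrated with a simple Mattson-style stack-distance computation. The sketch below is a naive O(N·M) reference version for a small page-access trace; it does not reproduce the paper's low-overhead AVL-based tracking or dynamic hot set sizing.

```python
# Simplified miss-ratio-curve (MRC) construction from an LRU stack-distance histogram.
from collections import Counter

def lru_miss_ratio_curve(trace, max_pages):
    stack, hist, cold_misses = [], Counter(), 0
    for page in trace:
        if page in stack:
            depth = stack.index(page) + 1   # 1-based reuse (stack) distance
            hist[depth] += 1
            stack.remove(page)
        else:
            cold_misses += 1                # first touch always misses
        stack.insert(0, page)               # move/insert page at the MRU position
    total = sum(hist.values()) + cold_misses
    # miss_ratio[c] = fraction of accesses whose stack distance exceeds a cache of c pages
    curve = {}
    for cache_size in range(1, max_pages + 1):
        hits = sum(count for d, count in hist.items() if d <= cache_size)
        curve[cache_size] = 1.0 - hits / total
    return curve

# Tiny example trace of page IDs: the curve flattens once the cache holds the working set.
trace = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 1]
print(lru_miss_ratio_curve(trace, max_pages=5))
```

The resulting curve directly exposes the working set size as the smallest allocation beyond which the miss ratio stops improving, which is the signal a dynamic memory balancer needs.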
APA, Harvard, Vancouver, ISO and other citation styles
36

Tangaro, Marco Antonio, Giacinto Donvito, Marica Antonacci, Matteo Chiara, Pietro Mandreoli, Graziano Pesole and Federico Zambelli. "Laniakea: an open solution to provide Galaxy “on-demand” instances over heterogeneous cloud infrastructures." GigaScience 9, No. 4 (April 1, 2020). http://dx.doi.org/10.1093/gigascience/giaa033.

The full text of the source
Annotation:
Abstract Background While the popular workflow manager Galaxy is currently made available through several publicly accessible servers, there are scenarios where users can be better served by full administrative control over a private Galaxy instance, including, but not limited to, concerns about data privacy, customisation needs, prioritisation of particular job types, tools development, and training activities. In such cases, a cloud-based Galaxy virtual instance represents an alternative that equips the user with complete control over the Galaxy instance itself without the burden of the hardware and software infrastructure involved in running and maintaining a Galaxy server. Results We present Laniakea, a complete software solution to set up a “Galaxy on-demand” platform as a service. Building on the INDIGO-DataCloud software stack, Laniakea can be deployed over common cloud architectures usually supported both by public and private e-infrastructures. The user interacts with a Laniakea-based service through a simple front-end that allows a general setup of a Galaxy instance, and then Laniakea takes care of the automatic deployment of the virtual hardware and the software components. At the end of the process, the user gains access with full administrative privileges to a private, production-grade, fully customisable, Galaxy virtual instance and to the underlying virtual machine (VM). Laniakea features deployment of single-server or cluster-backed Galaxy instances, sharing of reference data across multiple instances, data volume encryption, and support for VM image-based, Docker-based, and Ansible recipe-based Galaxy deployments. A Laniakea-based Galaxy on-demand service, named Laniakea@ReCaS, is currently hosted at the ELIXIR-IT ReCaS cloud facility. Conclusions Laniakea offers to scientific e-infrastructures a complete and easy-to-use software solution to provide a Galaxy on-demand service to their users. Laniakea-based cloud services will help in making Galaxy more accessible to a broader user base by removing most of the burdens involved in deploying and running a Galaxy service. In turn, this will facilitate the adoption of Galaxy in scenarios where classic public instances do not represent an optimal solution. Finally, the implementation of Laniakea can be easily adapted and expanded to support different services and platforms beyond Galaxy.
APA, Harvard, Vancouver, ISO and other citation styles
37

Derdus, Kenga Mosoti, Vincent Oteke Omwenga and Patrick Job Ogao. "Energy-Aware Virtual Machine Clustering for Consolidation in Multi-tenant IaaS Public Clouds." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, April 1, 2019, 1123–36. http://dx.doi.org/10.32628/cseit1952309.

The full text of the source
Annotation:
Cloud computing has gained a lot of interest from both small and large academic and commercial organizations because of its success in delivering services on a pay-as-you-go basis. Moreover, many users (organizations) can share server computing resources, which is made possible by virtualization. However, the amount of energy consumed by cloud data centres is a major concern. One of the major causes of energy wastage is inefficient utilization of resources. For instance, in IaaS public clouds, users select Virtual Machine (VM) sizes set beforehand by the Cloud Service Providers (CSPs) without knowing the kind of workloads that will be executed in the VM. More often than not, users overprovision resources, which then go to waste. Additionally, the CSPs have no control over the types of applications that are executed, and thus VM consolidation is performed blindly. There have been efforts to address the problem of energy consumption through efficient resource utilization, in particular via VM allocation and migration. However, these techniques lack the collection and analysis of real, active cloud traces from IaaS clouds. This paper proposes an architecture for VM consolidation based on VM profiling, analysis of VM resource usage and usage patterns, and a VM allocation policy. We have implemented our policy on the CloudSim Plus cloud simulator, and the results show that it outperforms the Worst Fit, Best Fit and First Fit VM allocation algorithms. Energy consumption is reduced through efficient consolidation that is informed by VM resource consumption.
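For context on the baseline heuristics named in this abstract, the sketch below shows minimal First Fit and Best Fit VM-to-host placement routines; the host capacities and VM demands are illustrative assumptions, and the paper's own profiling-informed allocation policy is not reproduced here.

```python
# Minimal First Fit and Best Fit placement over a single CPU dimension (assumed units).
def first_fit(vm_demand, hosts):
    # Place the VM on the first host with enough free capacity.
    for host in hosts:
        if host["free"] >= vm_demand:
            host["free"] -= vm_demand
            return host["id"]
    return None  # no host can accommodate the VM

def best_fit(vm_demand, hosts):
    # Place the VM on the feasible host that leaves the least capacity unused,
    # which tends to pack hosts tightly and leaves more hosts idle (candidates for power-off).
    candidates = [h for h in hosts if h["free"] >= vm_demand]
    if not candidates:
        return None
    host = min(candidates, key=lambda h: h["free"] - vm_demand)
    host["free"] -= vm_demand
    return host["id"]

hosts_ff = [{"id": i, "free": 16.0} for i in range(3)]   # 16 CPU units per host (assumed)
hosts_bf = [{"id": i, "free": 16.0} for i in range(3)]
for vm in [4.0, 8.0, 2.0, 6.0, 5.0]:                     # CPU demand of each arriving VM
    print(vm, "first-fit ->", first_fit(vm, hosts_ff), " best-fit ->", best_fit(vm, hosts_bf))
```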
APA, Harvard, Vancouver, ISO and other citation styles
38

Saxena, Deepika, and Ashutosh Kumar Singh. "Communication Cost Aware Resource Efficient Load Balancing (CARE-LB) Framework for Cloud Datacenter." Recent Advances in Computer Science and Communications 13 (August 18, 2020). http://dx.doi.org/10.2174/2666255813999200818173107.

The full text of the source
Annotation:
Background: Load balancing of communication-intensive applications, while allowing efficient resource utilization and minimizing power consumption, is a challenging multi-objective virtual machine (VM) placement problem. Communication among inter-dependent VMs raises network traffic, hampers the cloud client's experience and degrades overall performance by saturating the network. Introduction: Cloud computing has become an indispensable part of Information Technology (IT) and supports the backbone of digitization throughout the world. It provides a shared pool of IT resources that are always on, accessible from anywhere at any time, and delivered on demand as a service. The scalability and pay-per-use benefits of cloud computing have driven the entire world towards on-demand IT services, which facilitates increased usage of virtualized resources. The rapid growth in the demand for cloud resources has amplified the network traffic into and out of the datacenter. The Cisco Global Cloud Index predicts that by the year 2021 the network traffic among devices within the datacenter will grow at a Compound Annual Growth Rate (CAGR) of 23.4%. Methods: To address these issues, a communication cost aware and resource efficient load balancing (CARE-LB) framework is presented that minimizes communication cost and power consumption and maximizes resource utilization. To reduce the communication cost, VMs with high affinity and inter-dependency are intentionally placed closer to each other. The VM placement is carried out by the proposed integration of Particle Swarm Optimization and a non-dominated-sorting-based Genetic Algorithm, i.e. the PSOGA algorithm, encoding VM allocations as particles as well as chromosomes. Results: The performance of the proposed framework is evaluated through numerous experiments in a simulated datacenter environment, and it is compared with state-of-the-art methods such as the Genetic Algorithm and the First-Fit, Random-Fit and Best-Fit heuristic algorithms. The experimental outcome reveals that the CARE-LB framework improves resource utilization by 11%, reduces power consumption by 4.4% and communication cost by 20.3%, and reduces execution time by up to 49.7% compared with the Genetic Algorithm based load balancing framework. Conclusion: The proposed CARE-LB framework provides a promising solution for faster execution of data-intensive applications with improved resource utilization and reduced power consumption. Discussion: In the reported simulation, all three objectives are analyzed after executing the proposed multi-objective VM allocations, and the results are shown in Table 4. To choose the number of users for the analysis of communication cost, experiments are conducted with different numbers of users. For instance, for 100 VMs, 10, 20, ..., 80 users are chosen, and their VM requests (number and type of VMs) are generated randomly such that the total number of requested VMs does not exceed the number of available VMs.
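To make the three objectives concrete, the sketch below combines communication cost, the number of active hosts (as a power proxy) and average utilization into a single weighted fitness value that a placement search such as PSOGA could minimize. The cost models and weights are assumptions for illustration only, and the PSOGA encoding and search itself are omitted.

```python
# Weighted multi-objective fitness for VM placement (illustrative models and weights).
import itertools

def fitness(placement, vm_cpu, traffic, host_capacity, weights=(1.0, 1.0, 1.0)):
    """placement[v] = host index for VM v; lower fitness is better."""
    w_comm, w_power, w_util = weights
    # Communication cost: traffic between VM pairs placed on different hosts.
    comm = sum(traffic.get((a, b), 0.0)
               for a, b in itertools.combinations(range(len(placement)), 2)
               if placement[a] != placement[b])
    # Power proxy: number of hosts running at least one VM (idle hosts assumed powered off).
    active_hosts = set(placement)
    power = len(active_hosts)
    # Utilization: average fraction of capacity used on active hosts (to be maximized),
    # so its complement is added to the minimized fitness.
    load = {h: 0.0 for h in active_hosts}
    for v, h in enumerate(placement):
        load[h] += vm_cpu[v]
    util = sum(load[h] / host_capacity for h in active_hosts) / len(active_hosts)
    return w_comm * comm + w_power * power + w_util * (1.0 - util)

# Two inter-dependent VMs (0 and 1) placed together score better than placed apart.
vm_cpu = [2.0, 3.0, 1.0]
traffic = {(0, 1): 10.0}
print(fitness([0, 0, 1], vm_cpu, traffic, host_capacity=8.0))
print(fitness([0, 1, 1], vm_cpu, traffic, host_capacity=8.0))
```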
APA, Harvard, Vancouver, ISO and other citation styles
39

Shidik, Guruh Fajar, Azhari and Khabib Mustofa. "Evaluation of Selection Policy with Various Virtual Machine Instances in Dynamic VM Consolidation for Energy Efficient at Cloud Data Centers." Journal of Networks 10, No. 7 (August 17, 2015). http://dx.doi.org/10.4304/jnw.10.7.397-406.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
