Selected scientific literature on the topic "Virtual Machines (VM)"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Virtual Machines (VM)".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Virtual Machines (VM)"

1

Hasan, Waqas. "Virtual Machine Migration in Cloud Computing". Oriental journal of computer science and technology 14, n.º 010203 (28 de fevereiro de 2022): 46–51. http://dx.doi.org/10.13005/ojcst14.010203.06.

Full text source
Abstract:
Cloud computing provides multiple services to users through the internet and these services include cloud storage, applications, servers, security and large network access. Virtual Machine allows the user to emulate multiple operating systems on a single computer; with the help of virtual machine migration users can transfer operating system instances from one computer to multiple computer machines. In this paper we will be discussing VM migration in cloud and also I will explain the whole procedure of VM migration. The two methods through which we can perform VM migration are Live VM migration and NON-live VM migration.VM migration also helps in managing the loads of the multiple machines and with VM we can save power consumption. People have written about cloud computing and virtual machines in previous studies, but in this research, we'll speak about virtual machine migration in cloud computing, as well as the techniques that are used in the VM migration process. I have used table to show the differences between VM migration techniques.
Styles: ABNT, Harvard, Vancouver, APA, etc.
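As a rough illustration of the live ("pre-copy") migration that the entry above contrasts with non-live (cold) migration, the following minimal Python sketch models the iterative pre-copy loop; the page counts, dirty rate, and thresholds are hypothetical toy values, not figures from the paper.

```python
import random

def live_migrate_precopy(total_pages=10_000, dirty_rate=0.05,
                         max_rounds=5, stop_threshold=50):
    """Toy model of iterative pre-copy live migration.

    Pages are copied while the VM keeps running; pages dirtied during a
    round must be re-sent in the next round.  When the remaining dirty
    set is small enough (or rounds are exhausted), the VM is paused and
    the rest is copied (the brief "stop-and-copy" downtime phase).
    """
    sent = 0
    to_send = total_pages                       # round 0: full memory copy
    for _ in range(max_rounds):
        sent += to_send
        # Pages dirtied by the still-running guest during this round.
        to_send = int(total_pages * dirty_rate * random.uniform(0.5, 1.5))
        if to_send <= stop_threshold:
            break
    # Stop-and-copy: VM paused, remaining dirty pages transferred.
    downtime_pages = to_send
    sent += downtime_pages
    return sent, downtime_pages

if __name__ == "__main__":
    total_sent, downtime = live_migrate_precopy()
    print(f"pages sent: {total_sent}, pages copied during downtime: {downtime}")
    # Non-live (cold) migration would simply pause the VM and send all
    # pages once: downtime is proportional to total_pages instead of a
    # small residual dirty set.
```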
2

Hasan, Waqas. "A Survey on Virtual Machine Migration in Cloud Computing". International Journal of Scientific & Engineering Research 13, n.º 03 (25 de março de 2022): 648–53. http://dx.doi.org/10.14299/ijser.2022.03.03.

Full text source
Abstract:
Cloud computing provides multiple services to users through the internet and these services include cloud storage, applications, servers, security and large network access. Virtual Machine allows the user to emulate multiple operating systems on a single computer; with the help of virtual machine migration users can transfer operating system instances from one computer to multiple computer machines. In this paper we will be discussing VM migration in cloud and also I will explain the whole procedure of VM migration. The two methods through which we can perform VM migration are Live VM migration and NON-live VM migration.VM migration also helps in managing the loads of the multiple machines and with VM we can save power consumption. People have written about cloud computing and virtual machines in previous studies, but in this research, we'll speak about virtual machine migration in cloud computing, as well as the techniques that are used in the VM migration process. I have used table to show the differences between VM migration techniques.
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Ge, Jun Wei, Hai Ming Zheng e Yi Qiu Fang. "A Hybird Virtual Machine Placement Aglrithm for Virtualized Desktop Infrastructure". Advanced Materials Research 760-762 (setembro de 2013): 1906–10. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1906.

Full text source
Abstract:
As we all know, virtual machine placement is a kind of bin-packing problem. By optimizing the placement of virtual machines, we can improve VM performance, enhance resource utilization, and reduce energy consumption. After analyzing the existing virtual machine placement algorithms, we propose a hybrid virtual machine placement algorithm (HTA) based on a network latency threshold, addressing the requirements of low network latency and a low VM migration ratio in Virtualized Desktop Infrastructure. It elects a qualified node set based on the network latency threshold and places the virtual machines with a load-balancing policy, taking into account the performance of the network and the virtual machines. Analysis, comparison, and simulation results show that the algorithm can effectively lessen the network latency and reduce the VM migration ratio.
Styles: ABNT, Harvard, Vancouver, APA, etc.
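The abstract above describes a two-step heuristic: elect a qualified node set via a network-latency threshold, then place the VM under a load-balancing policy. The Python sketch below only illustrates that general idea under assumed data structures (host dictionaries, a latency map, a 5 ms threshold); it is not the authors' HTA algorithm.

```python
def place_vm(vm_demand, hosts, latency_to_client, latency_threshold_ms=5.0):
    """Sketch of a threshold-based placement step.

    1. Elect the qualified node set: hosts whose network latency to the
       desktop client stays under the threshold and that still have room.
    2. Among qualified hosts, pick the least-loaded one (load balancing).
    Returns the chosen host name, or None if no host qualifies.
    """
    qualified = [
        h for h in hosts
        if latency_to_client[h["name"]] <= latency_threshold_ms
        and h["free_cpu"] >= vm_demand["cpu"]
        and h["free_mem"] >= vm_demand["mem"]
    ]
    if not qualified:
        return None
    # Load-balance policy: choose the host with the most free capacity.
    best = max(qualified, key=lambda h: (h["free_cpu"], h["free_mem"]))
    best["free_cpu"] -= vm_demand["cpu"]
    best["free_mem"] -= vm_demand["mem"]
    return best["name"]

hosts = [
    {"name": "h1", "free_cpu": 8, "free_mem": 16},
    {"name": "h2", "free_cpu": 4, "free_mem": 32},
]
latency = {"h1": 3.2, "h2": 7.8}
print(place_vm({"cpu": 2, "mem": 4}, hosts, latency))   # -> "h1"
```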
4

Chen, Ji-Ming, Shi Chen, Xiang Wang, Lin Lin e Li Wang. "A Virtual Machine Migration Strategy Based on the Relevance of Services against Side-Channel Attacks". Security and Communication Networks 2021 (21 de dezembro de 2021): 1–17. http://dx.doi.org/10.1155/2021/2729949.

Full text source
Abstract:
With the rapid development of Internet of Things technology, a large amount of user information needs to be uploaded to the cloud server for computing and storage. Side-channel attacks steal the private information of other virtual machines by coresident virtual machines to bring huge security threats to edge computing. Virtual machine migration technology is currently the main way to defend against side-channel attacks. VM migration can effectively prevent attackers from realizing coresident virtual machines, thereby ensuring data security and privacy protection of edge computing based on the Internet of Things. This paper considers the relevance between application services and proposes a VM migration strategy based on service correlation. This strategy defines service relevance factors to quantify the degree of service relevance, build VM migration groups through service relevance factors, and effectively reduce communication overhead between servers during migration, design and implement the VM memory migration based on the post-copy method, effectively reduce the occurrence of page fault interruption, and improve the efficiency of VM migration.
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Liu, Yanbing, Bo Gong, Congcong Xing e Yi Jian. "A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model". Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/973069.

Full text source
Abstract:
Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy which is based on the cloud model time series workload prediction algorithm. By setting the upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine carrying out the task of the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines promoting an improved utilization of resources in the entire data center.
Styles: ABNT, Harvard, Vancouver, APA, etc.
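The strategy summarized above triggers migration from forecast workloads bounded by upper and lower thresholds rather than from momentary peaks. The following Python sketch illustrates that idea with a plain moving-average forecast standing in for the paper's cloud-model time-series predictor; the thresholds and window size are assumed values.

```python
from statistics import mean

def forecast_next(workload_history, window=3):
    """Naive forecast: moving average of the last `window` samples
    (a stand-in for the paper's cloud-model time-series predictor)."""
    return mean(workload_history[-window:])

def needs_migration(host_history, lower=0.2, upper=0.8, window=3):
    """Workload-aware migration check with upper and lower bounds.

    Migration is considered only when the *forecast* utilization leaves
    the [lower, upper] band, so a momentary peak in the last sample
    alone does not cause a migration.
    """
    predicted = forecast_next(host_history, window)
    if predicted > upper:
        return "overloaded"       # pick a VM to migrate away
    if predicted < lower:
        return "underloaded"      # candidate source for consolidation
    return "balanced"

print(needs_migration([0.55, 0.60, 0.95]))   # single spike -> "balanced"
print(needs_migration([0.82, 0.85, 0.90]))   # sustained load -> "overloaded"
```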
6

Liu, Zhenpeng, Jiahuan Lu, Nan Su, Bin Zhang e Xiaofei Li. "Location-Constrained Virtual Machine Placement (LCVP) Algorithm". Scientific Programming 2020 (5 de novembro de 2020): 1–8. http://dx.doi.org/10.1155/2020/8846087.

Full text source
Abstract:
Virtual machine (VM) placement is the current day research topic in cloud computing area. In order to solve the problem of imposing location constraints on VMs to meet their requirements in the process of VM placement, the location-constrained VM placement (LCVP) algorithm is proposed in this paper. In LCVP, each VM can only be placed onto one of the specified candidate physical machines (PMs) with enough computing resources and there must be sufficient bandwidth between the selected PMs to meet the communication requirement of the corresponding VMs. Simulation results show that LCVP is feasible and outperforms other benchmark algorithms in terms of computation time and blocking probability.
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

T. Y. J., Naga Malleswari, Senthil Kumar T. e JothiKumar C. "Resumption of virtual machines after adaptive deduplication of virtual machine images in live migration". International Journal of Electrical and Computer Engineering (IJECE) 11, n.º 1 (1 de fevereiro de 2021): 654. http://dx.doi.org/10.11591/ijece.v11i1.pp654-663.

Full text source
Abstract:
In cloud computing, load balancing and energy utilization are critical problems solved by virtual machine (VM) migration. Live migration is the live movement of VMs from an overloaded/underloaded physical machine to a suitable one. During this process, transferring large disk image files takes more time, and hence increases migration time and downtime. In the proposed adaptive deduplication, based on the image file size, the file undergoes both fixed-length and variable-length deduplication processes. The significance of this paper is the resumption of VMs with reunited deduplicated disk image files. The performance is measured by calculating the percentage reduction of VM image size after deduplication, the time taken to migrate the deduplicated file, and the time taken for each VM to resume after the migration. The results show an 83% reduction in overall image size and an 89.76% reduction in migration time. For a deduplication ratio of 92%, it takes an overall time of 3.52 minutes, a 7% reduction in resumption time compared with the time taken for the total QCOW2 files at their original size. For VMDK files, the resumption time is reduced by a maximum of 17% (7.63 minutes) compared with that for the original files.
Styles: ABNT, Harvard, Vancouver, APA, etc.
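The entry above deduplicates VM disk images before live migration and then reunites them so the VM can resume. The sketch below shows only the fixed-length-chunk half of such a pipeline (the paper additionally applies variable-length chunking depending on image size); the chunk size and helper names are assumptions for illustration.

```python
import hashlib

def dedup_fixed(data: bytes, chunk_size: int = 4096):
    """Fixed-length deduplication: split the image into equal chunks and
    keep one copy per unique chunk hash (a stand-in for the paper's
    pipeline)."""
    store = {}                      # hash -> chunk bytes
    recipe = []                     # ordered hashes needed to rebuild the image
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    """'Reunite' the deduplicated image before resuming the VM."""
    return b"".join(store[d] for d in recipe)

image = b"AAAA" * 2048 + b"BBBB" * 2048        # toy disk image with repetition
store, recipe = dedup_fixed(image)
assert restore(store, recipe) == image
print(f"original: {len(image)} B, unique chunks: {len(store)} x 4096 B")
```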
8

Sushmitha, G. M. Karthik e M. Sayeekumar. "Power and Performance Based Genetic Ant Colony Algorithm for Virtual Machine Placement". Journal of Computational and Theoretical Nanoscience 17, n.º 1 (1 de janeiro de 2020): 32–36. http://dx.doi.org/10.1166/jctn.2020.8625.

Full text source
Abstract:
Cloud Computing is the provisioning of computing services over the Internet. A Virtual Machine (VM) creation request has to be processed in any one data center of the physical machines. Virtual Machine Placement refers to choosing appropriate host for the VM. One of the major concerns in datacenter management is reducing the power consumption and performance filth of virtual machines. For solving the problem, GACO algorithm is proposed which uses PpW, IPR and LDR as heuristic information for ACO algorithm and for selection in Genetic algorithm. It also uses a non-linear power consumption model for quantifying power. The performance evaluation shows the efficiency of the algorithm.
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Srinivasa Rao, L., e I. Raviprakash Reddy. "A novel energy efficient virtual machine configuration and migration technique". International Journal of Engineering & Technology 7, n.º 4 (17 de setembro de 2018): 2391. http://dx.doi.org/10.14419/ijet.v7i4.13236.

Full text source
Abstract:
The recent growth in the data centre usage and the higher cost of managing virtual machines clearly demands focused research in reducing the cost of managing and migrating virtual machines. The cost of virtual machine management majorly includes the energy cost, thus the best available virtual machine management and migration techniques must have the lowest energy consumption. The management of virtual machine is solely dependent on the number of applications running on that virtual machine, where there is a very little scope for researchers to improve the energy. The second parameter is migration in order to balance the load, where a number of researches are been carried out to reduce the energy consumption. This work addresses the issue of energy consumption during virtual machine migration and proposes a novel virtual machine migration technique with improvement of energy consumption. The novel algorithm is been proposed in two enhancements as VM selection and VM migration, which demonstrates over 47% reduction in energy consumption.
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Muhammad, Shoaib, Muhammad Nabeel Mustafa Syed e Shabhi Ul Hasan Naqvi Syed. "Techniques of migration in live virtual machine and its challenges". i-manager's Journal on Computer Science 9, n.º 4 (2022): 31. http://dx.doi.org/10.26634/jcom.9.4.18540.

Full text source
Abstract:
Cloud computing is the on-demand availability of computer system resources. Most technology industries are moving to the cloud. Cloud structures can be costly for users; virtualization is used in cloud computing to help deliver the cloud at a low cost. Migrating virtual machines (VMs) helps to manage computation, and migration of virtual machines is a core feature of virtualization. The technique of migrating a running virtual machine from one physical host to another with minimal downtime is called "live virtual machine migration." This paper discusses the migration techniques, i.e., pre-copy and post-copy migration, as well as issues related to live migration. This paper presents a better approach to the VM migration method and future challenges by differentiating it from previous live VM migration methods.
Styles: ABNT, Harvard, Vancouver, APA, etc.

Theses / dissertations on the topic "Virtual Machines (VM)"

1

George, Sharath. "Usermode kernel : running the kernel in userspace in VM environments". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2858.

Full text source
Abstract:
In many instances of virtual machine deployments today, virtual machine instances are created to support a single application. Traditional operating systems provide an extensive framework for protecting one process from another. In such deployments, this protection layer becomes an additional source of overhead as isolation between services is provided at an operating system level and each instance of an operating system supports only one service. This makes the operating system the equivalent of a process from the traditional operating system perspective. Isolation between these operating systems and indirectly the services they support, is ensured by the virtual machine monitor in these deployments. In these scenarios the process protection provided by the operating system becomes redundant and a source of additional overhead. We propose a new model for these scenarios with operating systems that bypass this redundant protection offered by the traditional operating systems. We prototyped such an operating system by executing parts of the operating system in the same protection ring as user applications. This gives processes more power and access to kernel memory bypassing the need to copy data from user to kernel and vice versa as is required when the traditional ring protection layer is enforced. This allows us to save the system call trap overhead and allows application program mers to directly call kernel functions exposing the rich kernel library. This does not compromise security on the other virtual machines running on the same physical machine, as they are protected by the VMM. We illustrate the design and implementation of such a system with the Xen hypervisor and the XenoLinux kernel.
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Yoginath, Srikanth B. "Virtual time-aware virtual machine systems". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52321.

Full text source
Abstract:
Discrete dynamic system models that track, maintain, utilize, and evolve virtual time are referred to as virtual time systems (VTS). The realization of VTS using virtual machine (VM) technology offers several benefits including fidelity, scalability, interoperability, fault tolerance and load balancing. The usage of VTS with VMs appears in two ways: (a) VMs within VTS, and (b) VTS over VMs. The former is prevalent in high-fidelity cyber infrastructure simulations and cyber-physical system simulations, wherein VMs form a crucial component of VTS. The latter appears in the popular Cloud computing services, where VMs are offered as computing commodities and the VTS utilizes VMs as parallel execution platforms. Prior to our work presented here, the simulation community using VM within VTS (specifically, cyber infrastructure simulations) had little awareness of the existence of a fundamental virtual time-ordering problem. The correctness problem was largely unnoticed and unaddressed because of the unrecognized effects of fair-share multiplexing of VMs to realize virtual time evolution of VMs within VTS. The dissertation research reported here demonstrated the latent incorrectness of existing methods, defined key correctness benchmarks, quantitatively measured the incorrectness, proposed and implemented novel algorithms to overcome incorrectness, and optimized the solutions to execute without a performance penalty. In fact our novel, correctness-enforcing design yields better runtime performance than the traditional (incorrect) methods. Similarly, the VTS execution over VM platforms such as Cloud computing services incurs large performance degradation, which was not known until our research uncovered the fundamental mismatch between the scheduling needs of VTS execution and those of traditional parallel workloads. Consequently, we designed a novel VTS-aware hypervisor scheduler and showed significant performance gains in VTS execution over VM platforms. Prior to our work, the performance concern of VTS over VM was largely unaddressed due to the absence of an understanding of execution policy mismatch between VMs and VTS applications. VTS follows virtual-time order execution whereas the conventional VM execution follows fair-share policy. Our research quantitatively uncovered the exact cause of poor performance of VTS in VM platforms. Moreover, we proposed and implemented a novel virtual time-aware execution methodology that relieves the degradation and provides over an order of magnitude faster execution than the traditional virtual time-unaware execution.
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Atchukatla, Mahammad suhail. "Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17221.

Full text source
Abstract:
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization, so demand for cloud datacenters increases tremendously over time, resulting in significantly increased resource utilization of the datacenters. In this thesis work, research was carried out on optimizing energy consumption by packing virtual machines in the datacenter. The CloudSim simulator was used for evaluating bin-packing algorithms, and for the practical implementation the OpenStack cloud computing environment was chosen as the platform for this research. Objectives: In this research, our objectives are as follows: perform simulation of algorithms in the CloudSim simulator; estimate and compare the energy consumption of different packing algorithms; design an OpenStack testbed to implement the bin-packing algorithm. Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit, and Enhanced Best Fit algorithms, and design a heuristic model for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment. Results: In most cases the Enhanced Best Fit algorithm gives the better results. Results are obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work; the comparison of results indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements. Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and minimize the energy consumption of the physical machines by shutting down unused physical machines. The results indicate that CPU utilization does not vary much when live migration of the virtual machine is performed.
Styles: ABNT, Harvard, Vancouver, APA, etc.
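One of the bin-packing heuristics evaluated in the thesis above is First Fit Decreasing. The following self-contained Python sketch shows the textbook form of that heuristic applied to VM placement; the capacity model (a single scalar resource) and the example numbers are simplifications, not the thesis implementation.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """First Fit Decreasing bin packing for VM placement.

    Sort VM demands in decreasing order, then place each VM on the first
    host with enough remaining capacity, opening a new host when needed.
    Fewer active hosts generally means lower total energy consumption.
    """
    hosts = []                                  # remaining capacity per host
    placement = {}                              # vm index -> host index
    order = sorted(range(len(vm_demands)),
                   key=lambda i: vm_demands[i], reverse=True)
    for i in order:
        demand = vm_demands[i]
        for h, free in enumerate(hosts):
            if free >= demand:
                hosts[h] -= demand
                placement[i] = h
                break
        else:
            hosts.append(host_capacity - demand)   # power on a new host
            placement[i] = len(hosts) - 1
    return placement, len(hosts)

demands = [6, 2, 5, 7, 3, 1]                   # e.g. vCPU requests per VM
placement, used_hosts = first_fit_decreasing(demands, host_capacity=8)
print(placement, "hosts used:", used_hosts)
```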
4

Ducasse, Quentin. "Sécurisation matérielle de la compilation à la volée des machines virtuelles langage". Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2024. http://www.theses.fr/2024ENTA0003.

Full text source
Abstract:
Language Virtual Machines (VMs) are the run-time environment of popular high level managed languages. They offer portability and memory handling for the developer and are deployed on most computing devices. Their widespread distribution, handling of untrusted user inputs, and low-level task execution make them interesting to attackers. Software-only solutions that isolate their different components often incur a high performance overhead incompatible with just-in-time (JIT) compilation. Hardware-accelerated run time protections are pushed in vendor processors as a solution to conciliate strong security guarantees with performance. To allow experimentation in the design and comparison of such solutions, this thesis is interested in the RISC-V instruction set and its extension capabilities. We present Gigue, a workload generator that outputs binaries similar to JIT code directly executable on RISC-V softcores. It provides an interface for custom instructions and guarantees their execution. We present an instruction-level domain isolation solution added to Gigue binaries and implemented in an application-class processor with processor modifications. The solution adds negligible performance overhead while enforcing strong properties on domains. As an effort to motivate deployment in real use cases, we extend the Pharo JIT compiler to the RISC-V instruction set along with its testing infrastructure
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Ahvar, Ehsan. "Cost-efficient resource allocation for green distributed clouds". Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0001.

Full text source
Abstract:
Virtual machine (VM) placement (i.e., resource allocation) method has a direct effect on both cost and carbon emission. Considering the geographic distribution of data centers (DCs), there are a variety of resources, energy prices and carbon emission rates to consider in a distributed cloud, which makes the placement of VMs for cost and carbon efficiency even more critical and complex than in centralized clouds. The goal of this thesis is to present new VM placement algorithms to optimize cost and carbon emission in a distributed cloud. It first focuses on cost efficiency in distributed clouds and, then, extends the goal to optimization of both cost and carbon emission at the same time. Thesis includes two main parts. The first part of thesis proposes, develops and evaluates static VM placement algorithms to reach the mentioned goal where an initial placement of a VM holds throughout the lifetime of the VM. The second part proposes dynamic VM placement algorithms where the initial placement of VMs is allowed to change (e.g., through VM migration and consolidation). The first contribution is a survey of the state of the art on cost and carbon emission resource allocation in distributed cloud environments. The second contribution targets the challenge of optimizing inter-DC communication cost for large-scale tasks and proposes a Network-Aware Cost-Efficient Resource allocation method, called NACER, for distributed clouds. The goal is to minimize the network communication cost of running a task in a distributed cloud by selecting the DCs to provision the VMs in such a way that the total network distance (hop count or any reasonable measure) among the selected DCs is minimized. The third contribution proposes a Network-Aware Cost Efficient VM Placement method (called NACEV) for Distributed Clouds. NACEV is an extended version of NACER. While NACER only considers inter-DC communication cost, NACEV optimizes both communication and computing cost at the same time and also proposes a mapping algorithm to place VMs on Physical Machines (PMs) inside of the selected DCs. NACEV also considers some aspects such as heterogeneity of VMs, PMs and switches, variety of energy prices, multiple paths between PMs, effects of workload on cost (energy consumption) of cloud devices (i.e., switches and PMs) and also heterogeneity of energy model of cloud elements. The forth contribution presents a Cost and Carbon Emission-Efficient VM Placement Method (called CACEV) for green distributed clouds. CACEV is an extended version of NACEV. In addition to cost efficiency, CACEV considers carbon emission efficiency and green distributed clouds. It is a VM placement algorithm for joint optimization of computing and network resources, which also considers price, location and carbon emission rate of resources. It also, unlike previous contributions of thesis, considers IaaS Service Level Agreement (SLA) violation in the system model. To get a better performance, the fifth contribution proposes a dynamic Cost and Carbon Emission-Efficient VM Placement method (D-CACEV) for green distributed clouds. D-CACEV is an extended version of our previous work, CACEV, with additional figures, description and also live VM migration mechanisms. We show that our joint VM placement-reallocation mechanism can constantly optimize both cost and carbon emission at the same time in a distributed cloud
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Hu, Ji. "A virtual machine architecture for IT-security laboratories". Phd thesis, [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980935652.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Albaaj, Hassan, e Victor Berggren. "Benchmark av Containers och Unikernels". Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50214.

Full text source
Abstract:
Purpose – The purpose of this paper is to explore the possibility of making local networks and databases more efficient using unikernels and to compare this with containers. This could also apply to the reliability of executing programs the same way on different hardware in software development. Method – Two experiments have been performed to explore whether the purpose could be realized; quantitative data have been gathered and displayed in both cases. Python scripts have been used to start C programs acting as client and server. Algorithms have been timed running in unikernels as well as in containers, along with compared measurements of memory in multiple simultaneous instantiations. Findings – Intermittent response-time spikes made the data hard to parse correctly. Containers had a lower average response time when running lighter algorithms. The average response time of unikernels dives below that of containers when heavier programs are simulated. A few minor bugs were discovered in Unikraft unikernels. Implications – Unikernels have characteristics that make them more suitable for certain tasks compared to their counterpart; this is also true for containers. Unikraft unikernels are unstable, which makes it seem like containers are faster during lighter simulations. Unikernels are only faster and more secure if the tools used to build them do so in a manner that makes them stable. Limitations – The lack of standards, the lack of a support community, and the fact that unikernels is a small and niche field mean that unikernels have a relatively high learning curve. Keywords – Unikraft, Unikernels, Docker, Container
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Durelli, Vinicius Humberto Serapilha. "Toward harnessing a Java high-level language virtual machine for supporting software testing". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06012014-150025/.

Full text source
Abstract:
High-level language virtual machines (HLL VMs) have been playing a key role as a mechanism for implementing programming languages. Languages that run on these execution environments have many advantages over languages that are compiled to native code. These advantages have led HLL VMs to gain broad acceptance in both academy and industry. However, much of the research in this area has been devoted to boosting the performance of these execution environments. Few efforts have attempted to introduce features that automate or facilitate some software engineering activities, including software testing. This research argues that HLL VMs provide a reasonable basis for building an integrated software testing environment. To this end, two software testing features that build on the characteristics of a Java virtual machine (JVM) were devised. The purpose of the first feature is to automate weak mutation. Augmented with mutation support, the chosen JVM achieved speedups of as much as 95% in comparison to a strong mutation tool. To support the testing of concurrent programs, the second feature is concerned with enabling the deterministic re-execution of Java programs and exploration of new scheduling sequences.
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Mohammad, Taha, e Chandra Sekhar Eati. "A Performance Study of VM Live Migration over the WAN". Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1529.

Full text source
Abstract:
Virtualization is the key technology that has provided the Cloud computing platforms a new way for small and large enterprises to host their applications by renting the available resources. Live VM migration allows a Virtual Machine to be transferred form one host to another while the Virtual Machine is active and running. The main challenge in Live migration over WAN is maintaining the network connectivity during and after the migration. We have carried out live VM migration over the WAN migrating different sizes of VM memory states and presented our solutions based on Open vSwitch/VXLAN and Cisco GRE approaches. VXLAN provides the mobility support needed to maintain the network connectivity between the client and the Virtual machine. We have setup an experimental testbed to calculate the concerned performance metrics and analyzed the performance of live migration in VXLAN and GRE network. Our experimental results present that the network connectivity was maintained throughout the migration process with negligible signaling overhead and minimal downtime. The downtime variation experience with change in the applied network delay was relatively higher when compared to variation experienced when migrating different VM memory states. The total migration time experienced showed a strong relationship with size of the migrating VM memory state.
Styles: ABNT, Harvard, Vancouver, APA, etc.

Books on the topic "Virtual Machines (VM)"

1

Corporation, International Business Machines, ed. Conversion guide and notebook for VM/XA SP and VM/ESA, release 2.2: Virtual machine/enterprise systems architecture. 5a ed. Endicott, NY (1701 North St., Endicott 13760-5553): International Business Machines Corp., 1994.

Find the full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Abreu, Peter, e Paul Olenick. Windows Azure Virtual Machines: Deploy and Run Windows Server or Linux VM. Wiley & Sons, Incorporated, John, 2013.

Find the full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.

Book chapters on the topic "Virtual Machines (VM)"

1

Yuan, Shenghao, Frédéric Besson, Jean-Pierre Talpin, Samuel Hym, Koen Zandberg e Emmanuel Baccelli. "End-to-End Mechanized Proof of an eBPF Virtual Machine for Micro-controllers". In Computer Aided Verification, 293–316. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_15.

Full text source
Abstract:
RIOT is a micro-kernel dedicated to IoT applications that adopts eBPF (extended Berkeley Packet Filters) to implement so-called femto-containers. As micro-controllers rarely feature hardware memory protection, the isolation of eBPF virtual machines (VM) is critical to ensure system integrity against potentially malicious programs. This paper shows how to directly derive, within the Coq proof assistant, the verified C implementation of an eBPF virtual machine from a Gallina specification. Leveraging the formal semantics of the CompCert C compiler, we obtain an end-to-end theorem stating that the C code of our VM inherits the safety and security properties of the Gallina specification. Our refinement methodology ensures that the isolation property of the specification holds in the verified C implementation. Preliminary experiments demonstrate satisfying performance.
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Choi, Brendan. "Creating an Ubuntu Server Virtual Machine (VM)". In Introduction to Python Network Automation Volume I - Laying the Groundwork, 271–309. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0146-4_5.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Choi, Brendan. "Creating a Fedora Server Virtual Machine (VM)". In Introduction to Python Network Automation Volume I - Laying the Groundwork, 311–50. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0146-4_6.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Guo, Feng, Dong Zhang, Zhengwei Liu e Kaiyuan Qi. "VM³: Virtual Machine Multicast Migration Based on Comprehensive Load Forecasting". In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 66–75. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16050-4_6.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Kwon, Donghyun, Jiwon Seo, Sehyun Baek, Giyeol Kim, Sunwoo Ahn e Yunheung Paek. "VM-CFI: Control-Flow Integrity for Virtual Machine Kernel Using Intel PT". In Computational Science and Its Applications – ICCSA 2018, 127–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95174-4_10.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Hussain, Mohammad Rashid, Arshi Naim e Mohammed Abdul Khaleel. "Implementation of Wireless Sensor Network Using Virtual Machine (VM) for Insect Monitoring". In Lecture Notes in Networks and Systems, 73–78. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3172-9_8.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Padhi, Biswajit, Motahar Reza, Indrajeet Gupta, Poorna Sai Nagendra e Sarath S. Kumar. "Prediction of Dynamic Virtual Machine (VM) Provisioning in Cloud Computing Using Deep Learning". In Computational Intelligence in Data Mining, 607–18. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9447-9_46.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Almamun, Shawakat Akbar, E. Balamurugan, Shahidul Hasan, N. M. Saravana Kumar e K. Sangeetha. "Intelligent Stackelberg Game Theory with Threshold-Based VM Allocation Strategy for Detecting Malicious Co-Resident Virtual Nodes in Cloud Computing Networks". In Machine Learning and Deep Learning Techniques in Wireless and Mobile Networking Systems, 249–67. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003107477-14.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Sharma, Oshin, e Hemraj Saini. "Performance Evaluation of Energy-Aware Virtual Machine Placement Techniques for Cloud Environment". In Advances in Human Resources Management and Organizational Development, 45–72. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5323-6.ch003.

Full text source
Abstract:
The most dominant service of cloud computing is infrastructure as a service (IaaS). Virtualization is the most important feature of IaaS and it is very important for the improvement of resource utilization; but along with this, it also degrades the system's performance and makes them overutilized. Therefore, to solve the problem of overutilization or underutilization of machines and performance improvement of machine, the VMs present inside the physical machine needs to be migrated to another physical machine using the process of VM consolidation, and the reduced set of physical machines after placement needs a lesser amount of power or energy consumption, which is the main aim of energy-aware VM placement. This chapter presents a decision-making VM placement system and compares it with other predefined VM placement techniques. This analysis contributes to a better understanding of the effects of the placement strategies over the overall performance of cloud environment and also shows how the one algorithm delivers better results for VM placement than another.
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

"Modularity Design of VM". In Advanced Design and Implementation of Virtual Machines, 229–42. Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742: CRC Press, 2016. http://dx.doi.org/10.1201/9781315386706-17.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.

Conference papers on the topic "Virtual Machines (VM)"

1

Amamou, Ahmed, Manel Bourguiba, Kamel Haddadou e Guy Pujolle. "DBA-VM: Dynamic bandwidth allocator for virtual machines". In 2012 IEEE Symposium on Computers and Communications (ISCC). IEEE, 2012. http://dx.doi.org/10.1109/iscc.2012.6249382.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Da Silva, Rodrigo A. C., e Nelson L. S. Da Fonseca. "Energy-aware load balancing in distributed data centers". In XXIX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2016.9133.

Full text source
Abstract:
This paper summarizes the dissertation ”Energy-aware load balancing in distributed data centers”, which proposed two new algorithms for minimizing energy consumption in cloud data centers. Both algorithms consider hierarchical data center network topologies and requests for the allocation of groups of virtual machines (VMs). The Topology-aware Virtual Machine Placement (TAVMP) algorithm deals with the placement of virtual machines in a single data center. It reduces the blocking of requests and yet maintains acceptable levels of energy consumption. The Topology-aware Virtual Machine Selection (TAVMS) algorithm chooses sets of VM groups for migration between different data centers. Its employment leads to relevant overall energy savings.
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Li, Nan, Bo Li, Jianxin Li, Tianyu Wo e Jinpeng Huai. "vMON: An Efficient Out-of-VM Process Monitor for Virtual Machines". In 2013 IEEE International Conference on High Performance Computing and Communications (HPCC) & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (EUC). IEEE, 2013. http://dx.doi.org/10.1109/hpcc.and.euc.2013.194.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Adeshara, Nandan, Ajinkya Rede, Suhani Jain, Krishna Dhoot e Sunil Mhamane. "Optimizing Resource Utilization by Vm Migration Among Virtual Machines of a Cloud Server". In 2020 5th International Conference on Communication and Electronics Systems (ICCES). IEEE, 2020. http://dx.doi.org/10.1109/icces48766.2020.9138010.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Li, Yaqiong, e Yongbing Huang. "TMemCanal: A VM-oblivious Dynamic Memory Optimization Scheme for Virtual Machines in Cloud Computing". In 2010 IEEE 10th International Conference on Computer and Information Technology (CIT). IEEE, 2010. http://dx.doi.org/10.1109/cit.2010.68.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Braun, Tom, Marcel Taeumel, Eliot Miranda e Robert Hirschfeld. "Transpiling Slang Methods to C Functions: An Example of Static Polymorphism for Smalltalk VM Objects". In VMIL '23: 15th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3623507.3623548.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Jaison, Feon, e Gulista Khan. "Migration and Scheduling of Virtual Machines (VM) using Priority Based Load Balancing in Cloud Environment". In 2023 International Conference on Advances in Computation, Communication and Information Technology (ICAICCIT). IEEE, 2023. http://dx.doi.org/10.1109/icaiccit60255.2023.10465709.

Full text source
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Lange, Adriano, Marcos Sunye e Luis Carlos Bona. "Upstream: Exposing Performance Information from Cloud Providers to Tenants". In XX Simpósio em Sistemas Computacionais de Alto Desempenho. Sociedade Brasileira de Computação, 2019. http://dx.doi.org/10.5753/wscad.2019.8673.

Full text source
Abstract:
Infrastructure-as-a-Service (IaaS) is a widely adopted cloud computing paradigm due to its flexibility and competitive prices. To improve resource efficiency, most IaaS providers consolidate several tenants in the same virtualization server, which usually incurs variable performance experiences. In this paper, we evaluate the CPU time received by tenants’ virtual machines (VMs). We present a model that estimates the probability of a VM to receive, at least, a determined fraction of CPU time using limited information about the host and VMs running on it. We constructed this model using a series of experiments with different numbers of CPU cores and co-located VMs.
Styles: ABNT, Harvard, Vancouver, APA, etc.
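The paper above estimates the probability that a VM receives at least a given fraction of CPU time. As a minimal illustration of such an estimate, the sketch below computes the empirical probability from a list of measured per-interval CPU shares; the sample values are hypothetical, and the model described in the paper is more elaborate.

```python
def prob_at_least(cpu_share_samples, fraction):
    """Empirical estimate of P(VM receives >= `fraction` of CPU time),
    computed from measured per-interval CPU-share samples (a simple
    stand-in for the probabilistic model described in the paper)."""
    hits = sum(1 for s in cpu_share_samples if s >= fraction)
    return hits / len(cpu_share_samples)

# Hypothetical per-second CPU shares observed for one tenant VM.
samples = [0.48, 0.52, 0.45, 0.60, 0.30, 0.55, 0.41, 0.50]
print(f"P(share >= 0.5) ~= {prob_at_least(samples, 0.5):.2f}")
```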
9

Sheng, Junjie, Shengliang Cai, Haochuan Cui, Wenhao Li, Yun Hua, Bo Jin, Wenli Zhou et al. "VMAgent: A Practical Virtual Machine Scheduling Platform". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/860.

Full text source
Abstract:
Virtual machine (VM) scheduling is one of the critical tasks in cloud computing. Many works have attempted to incorporate machine learning, especially reinforcement learning, to empower VM scheduling procedures. Although improved results are shown in several demo simulators, the performances in real-world scenarios are still underexploited. In this paper, we design a practical VM scheduling platform, i.e., VMAgent, to assist researchers in developing their methods on the VM scheduling problem. VMAgent consists of three components: simulator, scheduler, and visualizer. The simulator abstracts three general realistic scheduling scenarios (fading, recovering, and expansion) based on Huawei Cloud’s scheduling data, which is the core of our platform. Flexible configurations are further provided to make the simulator compatible with practical cloud computing architecture (i.e., Multi Non-Uniform Memory Access) and scenarios. Researchers then need to instantiate the scheduler to interact with the simulator, which is also pre-built in various types (e.g., heuristic, machine learning, and operations research) of scheduling algorithms to speed up the algorithm design. The visualizer, as an auxiliary component of the simulator and scheduler, facilitates researchers to conduct an in-depth analysis of the scheduling procedure and comprehensively compare different scheduling algorithms. We believe that VMAgent would shed light on the AI for the VM scheduling community, and the demo video is presented in https://bit.ly/vmagent-demo-video.
ABNT, Harvard, Vancouver, APA, and other citation styles
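
As a rough illustration of the scheduler-versus-simulator loop the abstract above describes, the sketch below pairs a toy in-memory "simulator" of hosts with a first-fit heuristic scheduler. The Host class, the first_fit function, and all capacities are assumptions for illustration and do not reflect VMAgent's actual interfaces.

```python
# Toy stand-in for a scheduler/simulator loop; not the real VMAgent interfaces.
from dataclasses import dataclass, field

@dataclass
class Host:
    cpu: int
    mem: int
    vms: list = field(default_factory=list)

    def fits(self, req):
        return req["cpu"] <= self.cpu and req["mem"] <= self.mem

    def place(self, req):
        self.cpu -= req["cpu"]
        self.mem -= req["mem"]
        self.vms.append(req["id"])

def first_fit(hosts, req):
    """Heuristic scheduler: return the index of the first host that fits, else None."""
    for i, h in enumerate(hosts):
        if h.fits(req):
            return i
    return None

# Hypothetical cluster state and VM request stream.
hosts = [Host(cpu=16, mem=64), Host(cpu=8, mem=32)]
requests = [{"id": "vm-1", "cpu": 4, "mem": 16},
            {"id": "vm-2", "cpu": 12, "mem": 48},
            {"id": "vm-3", "cpu": 8, "mem": 32}]

for req in requests:
    idx = first_fit(hosts, req)
    print(req["id"], "-> host", idx if idx is not None else "rejected")
    if idx is not None:
        hosts[idx].place(req)
```

A learning-based scheduler would simply replace first_fit while keeping the same interaction loop with the simulated hosts.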
10

Luo, Chuan, Bo Qiao, Xin Chen, Pu Zhao, Randolph Yao, Hongyu Zhang, Wei Wu, Andrew Zhou and Qingwei Lin. "Intelligent Virtual Machine Provisioning in Cloud Computing". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/208.

Full text of the source
Abstract:
Virtual machine (VM) provisioning is a common and critical problem in cloud computing. In industrial cloud platforms, a huge number of VMs are provisioned per day. Due to the complexity and resource constraints, provisioning needs to be carefully optimized so that cloud platforms utilize their resources effectively. Moreover, in practice, provisioning a VM from scratch requires a fairly long time, which degrades the customer experience. Hence, it is advisable to provision VMs ahead of upcoming demands. In this work, we formulate this practical scenario as the predictive VM provisioning (PreVMP) problem, where upcoming demands are unknown and need to be predicted in advance, and the VM provisioning plan is then optimized based on the predicted demands. Further, we propose Uncertainty-Aware Heuristic Search (UAHS) for solving the PreVMP problem. UAHS first models the prediction uncertainty and then utilizes that uncertainty in optimization. Moreover, UAHS leverages Bayesian optimization to couple prediction and optimization and improve its practical performance. Extensive experiments show that UAHS performs much better than state-of-the-art competitors on two public datasets and an industrial dataset. UAHS has been successfully applied in Microsoft Azure and has brought practical benefits in real-world applications.
ABNT, Harvard, Vancouver, APA, and other citation styles
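
As a rough illustration of the predict-then-provision loop described above (not the UAHS algorithm itself), the sketch below forecasts upcoming demand with a naive mean-plus-k·sigma rule and sizes the pre-provisioned VM pool against it. The demand history, the safety factor k, and the function names are illustrative assumptions.

```python
# Sketch of predictive VM provisioning: forecast demand, then provision ahead.
# Uses a naive mean + k*std rule, not the UAHS method from the paper.
import statistics

def forecast(history):
    """Point forecast plus a crude uncertainty estimate from recent demand."""
    mean = statistics.mean(history)
    std = statistics.pstdev(history)
    return mean, std

def provisioning_plan(history, already_provisioned, k=1.5):
    """Number of extra VMs to pre-provision for the next period."""
    mean, std = forecast(history)
    target = int(round(mean + k * std))   # hedge against under-prediction
    return max(target - already_provisioned, 0)

# Hypothetical demand history (VMs requested per hour) and current pool size.
history = [40, 42, 38, 45, 50, 47]
print("pre-provision", provisioning_plan(history, already_provisioned=44), "more VMs")
```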

Reports by organizations on the topic "Virtual Machines (VM)"

1

Oleksiuk, Vasyl P., Olesia R. Oleksiuk, Oleg M. Spirin, Nadiia R. Balyk and Yaroslav P. Vasylenko. Some experience in maintenance of an academic cloud. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4436.

Full text of the source
Abstract:
The article systematizes experience in deploying, maintaining and servicing a private academic cloud. It presents a model of the authors’ cloud infrastructure, developed at Ternopil Volodymyr Hnatiuk National Pedagogical University (Ukraine) on the basis of the Apache CloudStack platform. The authors identify the main tasks in maintaining a private academic cloud: making changes to the cloud infrastructure; maintaining virtual machines (VMs), including assessing their performance and migrating VM instances; working with VMs; and backing up the entire cloud infrastructure. The analysis covers performance and the provision of computing resources to students. The main types of VM used in teaching are described, and the number and characteristics of VMs that a private academic cloud can serve are calculated. Approaches and schemes for performing backups are analysed, drawing on theoretical and practical experience with cloud backup services. Several scripts were developed for archiving the platform database and its repositories and uploading the backups to the Google Drive cloud service. The performance of these scripts was evaluated on the authors’ private cloud deployment.
ABNT, Harvard, Vancouver, APA, and other citation styles
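
The backup workflow mentioned in the abstract above (archiving the platform database and repositories, then uploading to Google Drive) might look roughly like the sketch below. It is not the authors' script: the database name, paths, and the upload_to_drive placeholder are hypothetical, CloudStack is assumed to use its default MySQL backend, and MySQL credentials are assumed to come from a local config file.

```python
# Rough sketch of a nightly backup job for a private CloudStack cloud.
# Paths, the database name, and upload_to_drive() are hypothetical placeholders.
import datetime
import subprocess
import tarfile
from pathlib import Path

BACKUP_DIR = Path("/var/backups/cloudstack")   # assumed location

def dump_database(db_name="cloud"):
    """Dump the CloudStack MySQL database to a timestamped SQL file."""
    stamp = datetime.date.today().isoformat()
    dump_path = BACKUP_DIR / f"{db_name}-{stamp}.sql"
    with open(dump_path, "wb") as out:
        # Credentials are assumed to be provided via ~/.my.cnf or similar.
        subprocess.run(["mysqldump", db_name], stdout=out, check=True)
    return dump_path

def archive(paths, name="cloud-backup"):
    """Bundle the dump (and any repository directories) into one compressed archive."""
    stamp = datetime.date.today().isoformat()
    archive_path = BACKUP_DIR / f"{name}-{stamp}.tar.gz"
    with tarfile.open(archive_path, "w:gz") as tar:
        for p in paths:
            tar.add(p, arcname=Path(p).name)
    return archive_path

def upload_to_drive(archive_path):
    """Placeholder: wire this to a Google Drive client or a sync tool of your choice."""
    print(f"would upload {archive_path} to Google Drive here")

if __name__ == "__main__":
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    dump = dump_database()
    bundle = archive([dump])
    upload_to_drive(bundle)
```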
2

McGarrigle, Malachy. Watchpoints for Consideration When Utilising a VDI Network to Teach Archicad BIM Software Within an Educational Programme. Unitec ePress, October 2023. http://dx.doi.org/10.34074/ocds.099.

Full text of the source
Abstract:
This research identifies factors to be considered in the adoption of a virtual desktop infrastructure (VDI) accommodating the software needs of a tertiary institution. The study discusses the potential advantages and disadvantages of VDI, focusing specifically on the performance of the architectural software Archicad when used virtually. The findings will be relevant to similar programs, such as Revit, and to software used in other disciplines, especially where processing power is important. Aims discussed include reducing the number of high-specification computers that are rarely used to capacity, assessing user experience, and evaluating the feasibility of VDI remote access. Primarily a case study, this project centres on the delivery of papers in the New Zealand Diploma of Architectural Technology programme at Unitec | Te Pūkenga that employ Archicad. Software efficiency and performance were monitored throughout teaching across numerous semesters. Incidents were logged and VDI operation tracked, especially during complex tasks such as image rendering. Load testing was also carried out to assess the implications of large user numbers simultaneously performing such complex tasks. Project findings indicate that Archicad performance depends on the design and specification of the virtual platform. Factors such as processing power, RAM allocation and the ratio of users to virtual machines (VMs) proved crucial. The tasks executed by the software and how the software itself uses hardware are other considerations. This research is important, as its findings could influence the information technology strategies of both academic institutions and industry in coming years. Virtual computing provides many benefits, and this project could give stakeholders the confidence to adopt new strategies using VDI instead of the traditional approach of computers with locally installed software applications.
ABNT, Harvard, Vancouver, APA, and other citation styles
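
As a back-of-envelope illustration of the sizing factors the study above highlights (RAM allocation and the ratio of users to VMs), the sketch below estimates the VM count and host RAM for a hypothetical cohort. Every figure in it is an assumed example, not data from the report.

```python
# Illustrative VDI sizing arithmetic; all numbers are assumptions, not study data.
import math

def vms_needed(concurrent_users, users_per_vm):
    """How many VM instances are required for a cohort of concurrent users."""
    return math.ceil(concurrent_users / users_per_vm)

def host_ram_needed(vm_count, ram_per_vm_gb, overhead_gb=16):
    """Total host RAM for the pool, plus a fixed hypervisor overhead."""
    return vm_count * ram_per_vm_gb + overhead_gb

users = 60        # assumed concurrent Archicad students
per_vm = 4        # assumed users sharing one VM before rendering slows down
ram_per_vm = 32   # assumed GB per VM for comfortable BIM work

count = vms_needed(users, per_vm)
print(f"{count} VMs, about {host_ram_needed(count, ram_per_vm)} GB of host RAM")
```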
3

Chandramouli, Ramaswamy. Secure Virtual Network Configuration for Virtual Machine (VM) Protection. National Institute of Standards and Technology, March 2016. http://dx.doi.org/10.6028/nist.sp.800-125b.

Full text of the source
ABNT, Harvard, Vancouver, APA, and other citation styles
4

Yu, Ken F. Android Virtual Machine (VM) Setup on Linux. Fort Belvoir, VA: Defense Technical Information Center, December 2014. http://dx.doi.org/10.21236/ada612920.

Full text of the source
ABNT, Harvard, Vancouver, APA, and other citation styles
