Academic literature on the topic 'Kubernetes (logiciel)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kubernetes (logiciel).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Kubernetes (logiciel)"

1

Zhao, Ming, Zhen Wang, Yalong Li, and Xiumei Qin. "Mitigating Cloud Computing Virtualization Performance Problems with an Upgraded Logical Convergence Strategy." 電腦學刊 34, no. 6 (December 2023): 133–43. http://dx.doi.org/10.53106/199115992023123406010.

Full text
Abstract:
In the domain of cloud computing and network resource virtualization, existing fusion techniques for containers and virtual machines suffer from high energy consumption, inflexible scheduling requirements, and suboptimal resource utilization. This study critically examined the current methods, accounted for the contemporary requirements, and developed a novel strategy aimed at maximizing resource utilization while minimizing energy consumption. Comprehensive experiments illustrate the superiority of our approach over state-of-the-art fusion strategies such as Kubernetes+KubeVirt and OpenStack+Kubernetes, demonstrating significant reductions in energy consumption, improved resource utilization, and enhanced system performance.
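The Kubernetes+KubeVirt fusion strategy named in this abstract runs virtual machines as custom resources scheduled by the same Kubernetes control plane that manages containers. The following minimal sketch, which is not taken from the paper, submits a KubeVirt VirtualMachine object through the official Python Kubernetes client; the namespace, image, and memory figure are illustrative assumptions.

```python
# Hedged sketch: a KubeVirt VirtualMachine submitted via the Kubernetes API,
# so that VMs and containers share one scheduler. Requires a cluster with
# KubeVirt installed; names and sizes below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
custom_api = client.CustomObjectsApi()

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "containerdisk",
                                           "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "128Mi"}},
                },
                "volumes": [{
                    "name": "containerdisk",
                    # Demo disk image commonly used in KubeVirt examples.
                    "containerDisk": {
                        "image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

# VirtualMachine is a custom resource, so it goes through the dynamic API.
custom_api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm_manifest)
```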
2

Cuadra, Julen, Ekaitz Hurtado, Federico Pérez, Oskar Casquero, and Aintzane Armentia. "OpenFog-Compliant Application-Aware Platform: A Kubernetes Extension." Applied Sciences 13, no. 14 (July 19, 2023): 8363. http://dx.doi.org/10.3390/app13148363.

Full text
Abstract:
Distributed computing paradigms have evolved towards low latency and highly virtualized environments. Fog Computing, as its latest iteration, enables the usage of Cloud-like services closer to the generators and consumers of data. The processing in this layer is performed by Fog Applications, which are decomposed into smaller components following the microservice paradigm and encapsulated into containers. Current state-of-the-art container orchestrators can manage hundreds of simultaneous containers. However, Kubernetes, being the de facto standard, does not consider the application itself as a top-level entity, which limits its orchestration capabilities. This raises the need to rearchitect Kubernetes to benefit from application-awareness, which refers to an orchestration method optimized for managing the applications and the set of components that comprise them. Thus, this paper proposes an application-aware and OpenFog-compliant architecture that manages applications as first-level entities during their lifecycle. Furthermore, the proposed architecture allows the definition of organizational structures to group subordinated applications based on user-defined hierarchies. This logical structuring makes it possible to outline how orchestration should be shaped to reflect the operating model of a system or an organization. The proposed architecture is implemented as a Kubernetes extension and provided as an operator.
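Operators of the kind described in this abstract usually introduce Custom Resource Definitions so that an application, rather than its individual Deployments and Services, becomes the unit of orchestration. The sketch below does not reproduce the paper's actual API: the fogapps.example.org group, the Application kind, and the spec fields are hypothetical, and only illustrate how such a first-level entity could be created with the Python Kubernetes client once a matching CRD and operator are installed.

```python
# Hedged sketch: an application (a set of microservice components) treated as
# one custom resource. The group, kind, and schema are invented for
# illustration and are not the API defined by the cited paper.
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

application = {
    "apiVersion": "fogapps.example.org/v1alpha1",   # hypothetical CRD group
    "kind": "Application",
    "metadata": {"name": "traffic-monitoring", "namespace": "fog-zone-a"},
    "spec": {
        # An operator, not the user, would expand these components into
        # Deployments/Services and reconcile them as a single application.
        "hierarchy": "city/district-3",              # user-defined grouping
        "components": [
            {"name": "camera-ingest", "image": "example/ingest:1.0", "replicas": 2},
            {"name": "analytics", "image": "example/analytics:1.0", "replicas": 1},
        ],
    },
}

custom_api.create_namespaced_custom_object(
    group="fogapps.example.org", version="v1alpha1",
    namespace="fog-zone-a", plural="applications", body=application)
```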
3

Fajardo, Edgar, Matevz Tadel, Justas Balcas, Alja Tadel, Frank Würthwein, Diego Davila, Jonathan Guiang, and Igor Sfiligoi. "Moving the California distributed CMS XCache from bare metal into containers using Kubernetes." EPJ Web of Conferences 245 (2020): 04042. http://dx.doi.org/10.1051/epjconf/202024504042.

Full text
Abstract:
The University of California system maintains excellent networking between its campuses and a number of other universities in California, including Caltech, most of them being connected at 100 Gbps. The UCSD and Caltech Tier-2 centers have joined their disk systems into a single logical caching system, with worker nodes from both sites accessing data from disks at either site. This successful setup has been in place for the last two years. However, coherently managing nodes at multiple physical locations is not trivial and requires an update to the operations model used. The Pacific Research Platform (PRP) provides a Kubernetes resource pool spanning resources in the science demilitarized zones (DMZs) of several campuses in California and worldwide. We show how we migrated the XCache services from bare-metal deployments into containers using the PRP cluster. This paper presents the reasoning behind our hardware decisions and the experience of migrating to and operating in a mixed environment.
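Migrating a service such as XCache from bare metal onto a Kubernetes pool like the PRP essentially means describing it as a workload object that mounts the cache disks and is pinned to the appropriate nodes. The sketch below is a generic approximation rather than the authors' deployment; the container image, node label, port, and disk path are assumptions.

```python
# Hedged sketch: an XCache-like caching service expressed as a Kubernetes
# Deployment pinned to nodes that hold the cache disks. Image, labels, and
# paths are placeholders, not the configuration from the cited paper.
from kubernetes import client, config

config.load_kube_config()

cache_container = client.V1Container(
    name="xcache",
    image="example/xcache:latest",                        # placeholder image
    ports=[client.V1ContainerPort(container_port=1094)],  # typical XRootD port
    volume_mounts=[client.V1VolumeMount(name="cache-disk", mount_path="/cache")],
)

pod_spec = client.V1PodSpec(
    node_selector={"site": "ucsd-tier2"},                 # assumed node label
    containers=[cache_container],
    volumes=[client.V1Volume(
        name="cache-disk",
        host_path=client.V1HostPathVolumeSource(path="/data/xcache"))],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="xcache", labels={"app": "xcache"}),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "xcache"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "xcache"}),
            spec=pod_spec),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```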
4

Chouhan, Durga, Nilima Gautam, Gaurav Purohit, and Rajesh Bhadada. "A survey on virtualization techniques in Mobile edge computing." WEENTECH Proceedings in Energy, March 13, 2021, 455–68. http://dx.doi.org/10.32438/wpe.412021.

Full text
Abstract:
In the present scenario, the field of Information Technology (IT) is moving from physical storage to cloud storage as "cloud" providers deliver on-demand resources over the Internet. The key idea of Mobile Edge Computing (MEC) is to provide an IT infrastructure and cloud computing services at the mobile network's edge, within the RAN and close to mobile users. MEC expands the idea of cloud computing by bringing the benefits of the cloud closer to consumers at the network edge, resulting in lower end-to-end latency. It is a decentralized computing infrastructure in which applications that use signal processing, storage, control, and computing are distributed between the data source and the cloud in the most effective and logical way. Virtualization is the main cloud infrastructure technology used in MEC, and it is accomplished by virtualizing the software or hardware resource layer. Virtualization in MEC can be achieved with a hypervisor, virtual machines, Docker containers, or Kubernetes. Hypervisors and VMs are the earlier technologies; Docker is the technology in common use today, and Kubernetes is the future of virtualization. In the face of large-scale and highly scalable demands, conventional cloud computing infrastructure is hard to provision in a short time, and the traditional virtual-machine-based cloud host consumes a lot of device resources on its own. This paper therefore addresses Docker as a newer container technology and shows how it has solved earlier problems in virtualization, including the creation and deployment of large applications. The purpose of this paper is to provide a detailed survey of related MEC research and technological developments, highlighting specifically relevant research and future directions.
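The survey's progression from plain Docker containers to Kubernetes orchestration can be made concrete with a small comparison. The sketch below is illustrative rather than drawn from the paper: it starts the same edge workload first with the Docker SDK for Python and then as a Kubernetes pod, with the image name and ports chosen arbitrarily.

```python
# Hedged sketch: the same edge workload launched (1) as a standalone Docker
# container and (2) as a Kubernetes pod, illustrating the step from a
# container runtime to container orchestration. Image and ports are placeholders.
import docker
from kubernetes import client, config

# (1) Plain Docker: one container on one host, managed by hand.
docker_client = docker.from_env()
docker_client.containers.run("nginx:alpine", name="edge-app",
                             detach=True, ports={"80/tcp": 8080})

# (2) Kubernetes: the same workload as a pod, so the orchestrator handles
# placement, restarts, and scaling across the MEC node pool.
config.load_kube_config()
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-app", labels={"app": "edge-app"}),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="edge-app", image="nginx:alpine",
        ports=[client.V1ContainerPort(container_port=80)])]),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```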

Dissertations / Theses on the topic "Kubernetes (logiciel)"

1

Şenel, Berat. "Container Orchestration for the Edge Cloud." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS202.

Full text
Abstract:
The pendulum again swings away from centralized IT infrastructure back towards decentralization, with the rise of edge computing. Besides resource-constrained devices that can only run tiny tasks, edge computing infrastructure consists of server-class compute nodes that are collocated with wireless base stations, complemented by servers in regional data centers. These compute nodes have cloud-like capabilities, and are thus able to run cloud-like workloads. Furthermore, many smart devices that support containerization and virtualization can also handle cloud-like workloads. The "containers as a service" (CaaS) service model, with its minimal overhead on compute nodes, is particularly well adapted to the less scalable cloud environment that is found at the edge, but cloud container orchestration systems have yet to catch up to the new edge cloud environment. This thesis shows a way forward for edge cloud container orchestration. We make our contributions in two primary ways: the reasoned conception of a set of empirically tested features to simplify and improve container orchestration at the edge, and the deployment of these features to provide EdgeNet, a sustainable container-based edge cloud testbed for the internet research community. We have built EdgeNet on Kubernetes, as it is open-source software that has become today's de facto industry standard cloud container orchestration tool. The edge cloud requires multitenancy for the sharing of limited resources. However, this is not a Kubernetes-native feature, and a specific framework must be integrated into the tool to enable this functionality. Surveying the scientific literature on cloud multitenancy and existing frameworks to extend Kubernetes to offer multitenancy, we have identified three main approaches: (1) multi-instance through multiple clusters, (2) multi-instance through multiple control planes, and (3) single-instance native. Considering the resource constraints at the edge, we argue for and provide empirical evidence in favor of a single-instance multitenancy framework. Our design includes a lightweight mechanism for the federation of edge cloud compute clusters in which each local cluster implements our multitenancy framework, and a user gains access to federated resources through the local cluster that her local cloud operator provides. We further introduce several features and methods that adapt container orchestration for the edge cloud, such as a means to allow users to deploy workloads according to node location, and an in-cluster VPN that allows nodes to operate from behind NATs. We put these features into production through the EdgeNet testbed, a globally distributed compute cluster that is inherently less costly to deploy and maintain, and easier to document and to program than previous such testbeds.
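One of the features the thesis describes, deploying workloads according to node location, can be approximated in vanilla Kubernetes with node labels and selectors. The sketch below is not EdgeNet's own API, which defines richer custom resources for location-based selection; it only shows the underlying idea using standard topology labels, and the label values are assumptions.

```python
# Hedged sketch: picking nodes by location labels before placing an edge
# workload. EdgeNet exposes its own location-aware custom resources; this
# uses only standard Kubernetes topology labels as an approximation.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# List the nodes registered in an assumed region (label value is illustrative).
region_nodes = core.list_node(
    label_selector="topology.kubernetes.io/region=europe-west")
print([node.metadata.name for node in region_nodes.items])

# Constrain a pod to that region with a nodeSelector.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="latency-probe"),
    spec=client.V1PodSpec(
        node_selector={"topology.kubernetes.io/region": "europe-west"},
        containers=[client.V1Container(name="probe", image="busybox",
                                       command=["sleep", "3600"])]),
)
core.create_namespaced_pod(namespace="default", body=pod)
```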