Journal articles on the topic "Cloud-native orchestration"

To see the other types of publications on this topic, follow the link: Cloud-native orchestration.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source type:

Consult the 37 best journal articles for your research on the topic "Cloud-native orchestration."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Chelliah, Pethuru Raj, and Chellammal Surianarayanan. "Multi-Cloud Adoption Challenges for the Cloud-Native Era." International Journal of Cloud Applications and Computing 11, no. 2 (April 2021): 67–96. http://dx.doi.org/10.4018/ijcac.2021040105.

Abstract:
With the ready availability of appropriate technologies and tools for crafting hybrid clouds, the move towards employing multiple clouds for hosting and running various business workloads is steadily gaining attention. The concept of cloud-native computing is gaining prominence with the rapid proliferation of microservices and containers. The growing stability and maturity of container orchestration platforms also contribute greatly to the cloud-native era. This paper makes the following contributions: 1) it describes the key motivations for the multi-cloud concept and its implementations; 2) it highlights the various key drivers of the multi-cloud paradigm; 3) it presents the range of challenges that are likely to occur while setting up a multi-cloud environment; 4) it elaborates proven and potential solution approaches to those challenges. The technology-inspired and tool-enabled solution approaches significantly simplify and speed up the adoption of the fast-emerging and evolving multi-cloud concept in the cloud-native era.
2

Leiter, Ákos, Edina Lami, Attila Hegyi, József Varga, and László Bokor. "Closed-loop Orchestration for Cloud-native Mobile IPv6." Infocommunications Journal 15, no. 1 (2023): 44–54. http://dx.doi.org/10.36244/icj.2023.1.5.

Abstract:
With the advent of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), every type of network service faces significant challenges induced by novel requirements. Mobile IPv6, the well-known IETF standard for network-level mobility management, is no exception. Cloud-native Mobile IPv6 has acquired several new capabilities thanks to the technological advancements of the NFV/SDN evolution. This paper presents how automatic failover and scaling can be envisioned in the context of cloud-native Mobile IPv6 with closed-loop orchestration on top of the Open Network Automation Platform. Numerical results are also presented to indicate the usefulness of the new operational features (failover, scaling) driven by the cloud-native approach and to highlight the advantages of network automation in virtualized and softwarized environments.
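The closed-loop failover/scaling idea described in this abstract reduces to a control loop that compares observed load against a target and computes a new replica count. Below is a minimal sketch of one such decision step; the function name and parameters are illustrative and are not part of ONAP or the paper.

```python
import math

def desired_replicas(current: int, load_per_replica: float,
                     target_load: float, max_replicas: int = 10) -> int:
    """One step of a closed-loop scaler: size the replica set so that
    the per-replica load moves toward the target."""
    if target_load <= 0:
        raise ValueError("target_load must be positive")
    total_load = current * load_per_replica  # observed aggregate load
    return max(1, min(max_replicas, math.ceil(total_load / target_load)))
```

In a real orchestrator this decision would be fed back through the platform's APIs; here it only illustrates the observe-decide-act shape of the loop.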
3

Vaño, Rafael, Ignacio Lacalle, Piotr Sowiński, Raúl S-Julián, and Carlos E. Palau. "Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions." Sensors 23, no. 4 (16 February 2023): 2215. http://dx.doi.org/10.3390/s23042215.

Abstract:
Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. The challenges of containerization, operational models, and the scarcity of established tools make a thorough review indispensable. The authors therefore describe the practical methods and tools found in the literature as well as in current community-led development projects, and thoroughly lay out the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major recent efforts have focused on adapting these technologies to the edge. Such initiatives have addressed either slimming down container engines and developing specifically tailored operating systems, or developing smaller K8s distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules, together with the joint orchestration of these heterogeneous workloads, appear to be the topics to watch in the short to medium term.
4

Aelken, Jörg, Joan Triay, Bruno Chatras, and Arturo Martin de Nicolas. "Toward Cloud-Native VNFs: An ETSI NFV Management and Orchestration Standards Approach." IEEE Communications Standards Magazine 8, no. 2 (June 2024): 12–19. http://dx.doi.org/10.1109/mcomstd.0002.2200079.

5

Chandrasehar, Amreth. "ML Powered Container Management Platform: Revolutionizing Digital Transformation through Containers and Observability." Journal of Artificial Intelligence & Cloud Computing 1, no. 1 (31 March 2022): 1–3. http://dx.doi.org/10.47363/jaicc/2023(1)130.

Abstract:
As companies pursue digital transformation, cloud-native applications become a critical part of their architecture and roadmap. Enterprise applications and tools developed with a cloud-native architecture are containerized and deployed on container orchestration platforms. Containers have revolutionized application deployment, simplifying the management, scaling, and operation of workloads deployed on container platforms. However, platform operators face many issues, such as the complexity of managing large-scale environments, security, networking, storage, observability, and cost. This paper discusses how to build a container management platform that uses monitoring data to drive AI and ML models, aiding an organization's digital transformation journey.
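A platform that feeds monitoring data into ML models can start from something as small as outlier detection on a metric stream. A hedged sketch follows; the z-score rule and the function name are illustrative stand-ins, not the platform described in the paper.

```python
from statistics import mean, stdev

def anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of metric samples more than `threshold` standard
    deviations from the mean - a toy stand-in for an ML anomaly model."""
    if len(samples) < 2:
        return []
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]
```

A real platform would replace this rule with a trained model, but the input (a container metric time series) and output (flagged samples) keep the same shape.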
6

Liu, Peng, Jinsong Wang, Weisen Zhao, and Xiangjun Li. "Research and Implementation of Container Based Application Orchestration Service Technology." Journal of Physics: Conference Series 2732, no. 1 (1 March 2024): 012012. http://dx.doi.org/10.1088/1742-6596/2732/1/012012.

Abstract:
With the rapid development of cloud computing technology, Kubernetes (K8s), the main orchestration tool for cloud-native applications, has become the preferred choice for enterprises and developers. This article is based on container-based application orchestration service technology. Through a set of templates containing cloud resource descriptions, it quickly completes functions such as application creation and configuration, batch application cloning, and multi-environment deployment. It simplifies and automates the lifecycle management capabilities required for cloud applications, such as resource planning, application design, deployment, status monitoring, and scaling. Users can complete infrastructure management, operation, and maintenance work more conveniently, freeing them to focus on innovation and research and development and improving work efficiency. The practical effect of the technology depends to a certain extent on the capability of the underlying service resources, and templates must be created manually on first use. In production, a certain level of expertise is required to create a good application orchestration template and to adjust and optimize resources for the production environment before the technology yields significant gains in effectiveness and efficiency.
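The template-driven cloning this abstract describes, one application definition stamped out per environment, can be sketched with a plain dictionary template. The field names below are hypothetical; the paper's templates describe cloud resources, typically as Kubernetes manifests.

```python
def render(template: dict, env: str, replicas: int) -> dict:
    """Clone an application template for a target environment."""
    app = dict(template)  # shallow copy so the base template is untouched
    app["name"] = f'{template["name"]}-{env}'
    app["replicas"] = replicas
    return app

base = {"name": "web", "image": "web:1.0", "replicas": 1}
# Multi-environment deployment: the same template, cloned per environment.
deployments = [render(base, env, n) for env, n in [("dev", 1), ("prod", 3)]]
```

The point of the pattern is that only the template is authored by hand; every concrete deployment is derived from it mechanically.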
7

Oyekunle Claudius Oyeniran, Oluwole Temidayo Modupe, Aanuoluwapo Ayodeji Otitoola, Oluwatosin Oluwatimileyin Abiona, Adebunmi Okechukwu Adewusi, and Oluwatayo Jacob Oladapo. "A comprehensive review of leveraging cloud-native technologies for scalability and resilience in software development." International Journal of Science and Research Archive 11, no. 2 (30 March 2024): 330–37. http://dx.doi.org/10.30574/ijsra.2024.11.2.0432.

Abstract:
In the landscape of modern software development, the demand for scalability and resilience has become paramount, particularly with the rapid growth of online services and applications. Cloud-native technologies have emerged as a transformative force in addressing these challenges, offering dynamic scalability and robust resilience through innovative architectural approaches. This paper presents a comprehensive review of leveraging cloud-native technologies to enhance scalability and resilience in software development. The review begins by examining the foundational concepts of cloud-native architecture, emphasizing its core principles such as containerization, microservices, and declarative APIs. These principles enable developers to build and deploy applications that can dynamically scale based on demand while maintaining high availability and fault tolerance. Furthermore, the review explores the key components of cloud-native ecosystems, including container orchestration platforms like Kubernetes, which provide automated management and scaling of containerized applications. Additionally, it discusses the role of service meshes in enhancing resilience by facilitating secure and reliable communication between microservices. Moreover, the paper delves into best practices and patterns for designing scalable and resilient cloud-native applications, covering topics such as distributed tracing, circuit breaking, and chaos engineering. These practices empower developers to proactively identify and mitigate potential failure points, thereby improving the overall robustness of their systems. This review underscores the significance of cloud-native technologies in enabling software developers to build scalable and resilient applications. By embracing cloud-native principles and adopting appropriate tools and practices, organizations can effectively meet the evolving demands of modern software development in an increasingly dynamic and competitive landscape.
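Of the resilience patterns this review lists, circuit breaking is the easiest to show in miniature: stop calling a failing dependency and fail fast until it recovers. Below is a deliberately simplified sketch; production breakers add a half-open state and recovery timers, which are omitted here.

```python
class CircuitBreaker:
    """Toy circuit breaker: opens after `max_failures` consecutive failures."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, fn, *args):
        if self.open:
            # Fail fast instead of hammering an unhealthy dependency.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

In a microservice mesh this logic typically lives in a sidecar proxy rather than in application code, which is why the review discusses it alongside service meshes.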
8

Shevchenko, A., and S. Puzyrov. "DEVELOPMENT OF THE HARDWARE AND SOFTWARE PLATFORM FOR MODERN IOT SOLUTIONS BASED ON FOG COMPUTING USING CLOUD-NATIVE TECHNOLOGIES." Computer Systems and Network 2, no. 1 (23 March 2017): 102–12. http://dx.doi.org/10.23939/csn2020.01.102.

Abstract:
The concept of digital transformation is very relevant at the moment due to the epidemiological situation and the world's transition to the digital environment. IoT is one of the main drivers of digital transformation. The Internet of Things (IoT) is an extension of the Internet which consists of sensors, controllers, and various other devices, the so-called "things", that communicate with each other over the network. In this paper, the development of hardware and software for organizing fog and edge computing was divided into three levels: hardware, orchestration, and application. The application level was further divided into two parts: software and architectural. The hardware was implemented using two versions of the Raspberry Pi, the Raspberry Pi 4 and Raspberry Pi Zero, connected in master-slave mode. The orchestration level used the K3S, Knative, and Nuclio technologies. Technologies such as the Linkerd service mesh, the NATS messaging system, the gRPC implementation of the RPC protocol, the TDengine database, Apache Ignite, and Badger were used to implement the software part of the application level. The architectural part is designed as an API development standard, so it can be applied to a variety of IoT software solutions in any programming language. The system can be used as a platform for building modern IoT solutions on the principle of fog/edge computing. Keywords: Internet of Things, IoT platform, Container technologies, Digital Twin, API.
9

Li, Feifei. "Modernization of Databases in the Cloud Era: Building Databases that Run Like Legos." Proceedings of the VLDB Endowment 16, no. 12 (August 2023): 4140–51. http://dx.doi.org/10.14778/3611540.3611639.

Abstract:
Utilizing the cloud for common and critical computing infrastructure has already become the norm across the board. The rapid evolution of the underlying cloud infrastructure and the revolutionary development of AI present both challenges and opportunities for building new database architectures and systems. It is crucial to modernize database systems in the cloud era, so that next-generation cloud-native databases may run like Legos: adaptive, flexible, reliable, and smart in the face of dynamic workloads and varying requirements. To that end, we observe four critical trends and requirements for the modernization of cloud databases: embracing cloud-native architecture, full integration with the cloud platform and orchestration, co-design for data fabric, and moving towards being AI-augmented. Modernizing database systems by adopting these critical trends, and addressing the key challenges associated with them, provides ample opportunities for the data management communities in both academia and industry to explore. We provide an in-depth case study of how we modernized PolarDB to embrace these four trends in the cloud era. Our ultimate goal is to build databases that run just like playing with Legos, so that a database system fits rich and dynamic workloads and requirements in a self-adaptive, performant, easy and intuitive to use, reliable, and intelligent manner.
10

Aijaz, Usman, Mohammed Abubakar, Aditya Reddy, Abuzar, and Alok C Pujari. "An Analysis on Security Issues and Research Challenges in Cloud Computing." Journal of Security in Computer Networks and Distributed Systems 1, no. 1 (26 April 2024): 37–44. http://dx.doi.org/10.46610/joscnds.2024.v01i01.005.

Abstract:
Cloud computing has completely transformed how businesses handle, store, and process data and applications. However, using cloud services brings several security issues that need to be resolved to guarantee the availability, confidentiality, and integrity of critical data. This abstract overviews vital aspects of cloud computing security and highlights emerging trends and research directions. Essential challenges of cloud computing security include ensuring data privacy and confidentiality, maintaining data integrity and trustworthiness, and addressing compliance with regulatory requirements. Identity and access management (IAM) remains a critical area, focusing on enhancing authentication mechanisms and access controls to mitigate the risks of unauthorized access and insider threats. Additionally, active research areas include securing cloud orchestration and management platforms, resilience and availability of cloud services, and addressing the unique security considerations of cloud native technologies. Interdisciplinary collaboration between researchers, industry practitioners, and policymakers is essential to develop innovative security solutions and best practices for cloud computing environments. By addressing these challenges and advancing the state of the art in cloud security, organizations can leverage the benefits of cloud computing while mitigating associated risks and ensuring the protection of sensitive data and resources. In summary, this review paper provides a holistic understanding of cloud computing security, offering insights into current practices, challenges, and future directions for ensuring the confidentiality, integrity, and availability of cloud based systems and data.
11

Vasireddy, Indrani, Prathima Kandi, and SreeRamya Gandu. "Efficient Resource Utilization in Kubernetes: A Review of Load Balancing Solutions." International Journal of Innovative Research in Engineering and Management 10, no. 6 (December 2023): 44–48. http://dx.doi.org/10.55524/ijirem.2023.10.6.6.

Abstract:
Modern distributed systems face the challenge of efficiently distributing workloads across nodes to ensure optimal resource utilization, high availability, and performance. In this context, Kubernetes, an open-source container orchestration engine, plays a pivotal role in automating the deployment, scaling, and management of containerized applications. This paper explores the landscape of load balancing strategies within Kubernetes, aiming to provide a comprehensive overview of existing techniques, challenges, and best practices. The paper delves into the dynamic nature of Kubernetes environments, where applications scale dynamically and demand for resources fluctuates. We review various load balancing approaches, including those based on traffic, resource-aware algorithms, and affinity policies. Special attention is given to the unique characteristics of containerized workloads and their impact on load balancing decisions. The paper also examines the implications of load balancing for the scalability and performance of applications deployed in Kubernetes clusters. It explores the trade-offs between different strategies, considering factors such as response time, throughput, and adaptability to varying workloads. As cloud-native architectures continue to evolve, understanding and addressing the intricacies of load balancing in dynamic container orchestration environments becomes increasingly crucial. We consolidate the current state of knowledge on load balancing in Kubernetes, providing researchers and practitioners with valuable insights and a foundation for further advancements in the quest for efficient, scalable, and resilient distributed systems.
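As a concrete example of the resource-aware strategies this review covers, a least-connections policy picks the backend with the fewest active connections. The sketch below is illustrative: this is a policy a service proxy might apply, not a built-in Kubernetes Service mode, and the pod names are made up.

```python
def least_connections(backends: dict[str, int]) -> str:
    """Pick the backend (name -> active connection count) with the
    fewest active connections."""
    if not backends:
        raise ValueError("no backends available")
    return min(backends, key=backends.get)
```

A round-robin policy ignores load entirely; tracking connection counts per backend is the simplest way to make the decision resource-aware.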
12

Szmeja, Paweł, Alejandro Fornés-Leal, Ignacio Lacalle, Carlos E. Palau, Maria Ganzha, Wiesław Pawłowski, Marcin Paprzycki, and Johan Schabbink. "ASSIST-IoT: A Modular Implementation of a Reference Architecture for the Next Generation Internet of Things." Electronics 12, no. 4 (8 February 2023): 854. http://dx.doi.org/10.3390/electronics12040854.

Abstract:
Next Generation Internet of Things (NGIoT) addresses the deployment of complex, novel IoT ecosystems. These ecosystems are related to different technologies and initiatives, such as 5G/6G, AI, cybersecurity, and data science. The interaction with these disciplines requires addressing complex challenges related with the implementation of flexible solutions that mix heterogeneous software and hardware, while providing high levels of customisability and manageability, creating the need for a blueprint reference architecture (RA) independent of particular existing vertical markets (e.g., energy, automotive, or smart cities). Different initiatives have partially dealt with the requirements of the architecture. However, the first complete, consolidated NGIoT RA, covering the hardware and software building blocks, and needed for the advent of NGIoT, has been designed in the ASSIST-IoT project. The ASSIST-IoT RA delivers a layered and modular design that divides the edge-cloud continuum into independent functions and cross-cutting capabilities. This contribution discusses practical aspects of implementation of the proposed architecture within the context of real-world applications. In particular, it is shown how use of cloud-native concepts (microservices and applications, containerisation, and orchestration) applied to the edge-cloud continuum IoT systems results in bringing the ASSIST-IoT concepts to reality. The description of how the design elements can be implemented in practice is presented in the context of an ecosystem, where independent software packages are deployed and run at the selected points in the hardware environment. Both implementation aspects and functionality of selected groups of virtual artefacts (micro-applications called enablers) are described, along with the hardware and software contexts in which they run.
13

Valavandan, Ramamurthy, Subramanian Jagathambal, Gothandapani Balakrishnan, Malarvizhi Balakrishnan, Valavandan Valvandan, Archana Gnanavel, Subramanian Kangalakshmi, and Ramamurthy Savitha. "Digitalization of Hindu Temples in India in Google Cloud and SerpAPI Automation in Python." International Journal of Information and Communication Sciences 7, no. 3 (17 August 2022): 66–81. http://dx.doi.org/10.11648/j.ijics.20220703.12.

Abstract:
This study demonstrates scraping the Google Search Engine via SerpAPI to automatically generate a dataset of temple names, temple locations, temple descriptions, the Indian state, longitude and latitude coordinates, and distances from the major cities in India. Workload orchestration is controlled by Google Composer on Apache Airflow. Google Kubernetes Engine (GKE) organizes the application into microservices, one per container. The Docker image is created in the GKE control plane, and the API scheduler segments the microservices at a granular level in a DevOps pipeline based on GitHub and Jenkins. YAML provides the configuration of the GKE pods carrying multiple containers and is the main input format for Kubernetes configuration. A Python application creates the YAML configuration data for GKE, defining the Kubernetes objects and the business rules. Google's native artificial intelligence, Vertex AI, picks out the keywords for the temple name and location in every state of India. The SerpAPI application digitized the Hindu temples of India across the complete project life cycle of requirements gathering, design, development, and deployment on Google Cloud. The search parameters are designed for the Google Search Engine to collect historical and architectural details of the Hindu temples in India. The application uses Google Compute Engine, BigQuery, Google Kubernetes Engine, Cloud Composer, and the big data services Data Fusion, Dataproc, and Dataflow. The solution also uses the Google Cloud services for web hosting, load balancing, and Google Storage, with compute engine performance tuning through BigQuery optimization. The Google search results arrive in JSON format for every state's temple details, and a Python application parses the facts from the Search API. The temple images are visualized in the Python application and integrated into data visualizations in Google's built-in Data Studio.
14

Oztoprak, Kasim, Yusuf Kursat Tuncel, and Ismail Butun. "Technological Transformation of Telco Operators towards Seamless IoT Edge-Cloud Continuum." Sensors 23, no. 2 (15 January 2023): 1004. http://dx.doi.org/10.3390/s23021004.

Abstract:
This article investigates and discusses challenges in the telecommunication field from multiple perspectives, catering to both the academic and industry sides, and surveys the main points of the technological transformation toward the edge-cloud continuum from the view of a telco operator to show the complete picture, including the evolution of cloud-native computing, Software-Defined Networking (SDN), and network automation platforms. The cultural shift in software development and management brought by DevOps enabled the development of significant technologies in the telecommunication world, including network equipment, application development, and system orchestration. The effect of this cultural shift on the application area, especially from the IoT point of view, is investigated. The enormous change in service diversity and in delivery capabilities to mass devices is also discussed. During the last two decades, desktop and server virtualization has played an active role in the Information Technology (IT) world. With the use of OpenFlow, SDN, and Network Functions Virtualization (NFV), the network revolution got underway. The shift from monolithic application development and deployment to microservices changed the whole picture. On the other hand, data centers have evolved over several generations, to the point where the control plane cannot cope with all the networks without an intelligent decision-making process benefiting from AI/ML techniques. AI also enables operators to forecast demand more accurately, anticipate network load, and adjust capacity and throughput automatically. Going one step further, zero-touch networking and service management (ZSM) is proposed to take high-level human intents and generate low-level configurations for network elements with validated results, minimizing the ratio of faults caused by human intervention. Harmonizing all the progress in different communication technologies has enabled the successful use of edge computing.
Low-powered (from both energy and processing perspectives) IoT networks have disrupted the customer and end-point demands within the sector, as such paved the path towards devising the edge computing concept, which finalized the whole picture of the edge-cloud continuum.
15

Carreño-Barrera, Javier, Luis Alberto Núñez-Avellaneda, Maria José Sanín, and Artur Campos D. Maia. "Orchestrated Flowering and Interspecific Facilitation: Key Factors in the Maintenance of the Main Pollinator of Coexisting Threatened Species of Andean Wax Palms (Ceroxylon spp.)." Annals of the Missouri Botanical Garden 105, no. 3 (28 September 2020): 281–99. http://dx.doi.org/10.3417/2020590.

Abstract:
Solitary, dioecious, and mostly endemic to Andean cloud forests, wax palms (Ceroxylon Bonpl. ex DC. spp.) are currently under worrisome conservation status. The establishment of management plans for their dwindling populations relies on detailed biological data, including their reproductive ecology. As with numerous other Neotropical palm taxa, small beetles were assumed to be selective pollinators of wax palms, but their identity and relevance to successful fruit yield were unknown. During three consecutive reproductive seasons we collected data on the population phenology and the reproductive and floral biology of three syntopic species of wax palms native to the Colombian Andes. We also determined the composition of the associated flower-visiting entomofauna, quantifying the role of individual species as effective pollinators through standardized value indexes that take into consideration abundance, constancy, and pollen transport efficiency. The studied populations of C. parvifrons (Engel) H. Wendl., C. ventricosum Burret, and C. vogelianum (Engel) H. Wendl. exhibit seasonal reproductive cycles with marked temporal patterns of flower and fruit production. The composition of the associated flower-visiting entomofauna, comprising ca. 50 morphotypes, was constant across flowering seasons and differed only marginally among species. Nonetheless, only a fraction of the insect species associated with pistillate inflorescences actually carried pollen, and calculated pollinator importance indexes demonstrated that one insect species alone, Mystrops rotundula Sharp, accounted for 94%–99% of the effective pollination services for all three species of wax palms. The sequential asynchronous flowering of C. parvifrons, C. ventricosum, and C. vogelianum provides an abundant and constant supply of pollen, pivotal for the maintenance of large populations of their shared pollinators, a cooperative strategy proven effective by high fruit yield rates (up to 79%). Reproductive success might be compromised for all species by the population decline of any one of them, as this would disrupt the temporal orchestration of the pollen supply.
16

Nayak, Deekshith, and H. V. Ravish Aradhya. "Orchestrating a stateful application using Operator." Journal of University of Shanghai for Science and Technology 23, no. 06 (17 June 2021): 514–20. http://dx.doi.org/10.51201/jusst/21/05278.

Abstract:
Containerization is a leading technological advancement in cloud-native development. Virtualization isolates running processes at the bare-metal level, whereas containerization isolates processes at the operating-system level. Virtualization encapsulates each new virtual instance with a new operating system, while containerization encapsulates the software only with its dependencies. Containerization avoids the problem of missing dependencies between different operating systems and their distributions. The concept of containerization is old, but the development of open-source tools like Docker, Kubernetes, and OpenShift accelerated the adoption of this technology. Docker builds container images, and OpenShift or Kubernetes is an orchestration tool. For stateful applications, the standard Kubernetes workload resources are not the best option for orchestration, as each resource instance has its own identity. In such cases, an operator can be built to manage the entire life cycle of resources in the Kubernetes cluster. An operator systematically encodes human operational knowledge into software. The paper discusses the default control mechanism in Kubernetes and then explains the procedure for building an operator to orchestrate a stateful application.
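The operator pattern this paper builds on is, at its core, a reconcile loop: compare desired state with observed state and emit the actions that close the gap. Below is a hypothetical in-memory sketch of that comparison; real operators do this against the Kubernetes API via client libraries and custom resources, not plain dictionaries.

```python
def reconcile(desired: dict, observed: dict) -> list[str]:
    """Return the actions needed to drive observed state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")   # missing resource
        elif observed[name] != spec:
            actions.append(f"update {name}")   # drifted resource
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")   # orphaned resource
    return actions
```

The operator runs this comparison repeatedly, so the cluster converges on the desired state even after failures, which is exactly what makes the pattern suitable for stateful workloads.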
17

Sochat, Vanessa, Aldo Culquicondor, Antonio Ojea, and Daniel Milroy. "The Flux Operator." F1000Research 13 (21 March 2024): 203. http://dx.doi.org/10.12688/f1000research.147989.1.

Abstract:
Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator – an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.
18

Esmaeily, Ali, and Katina Kralevska. "Orchestrating Isolated Network Slices in 5G Networks." Electronics 13, no. 8 (18 April 2024): 1548. http://dx.doi.org/10.3390/electronics13081548.

Abstract:
Sharing resources through network slicing in a physical infrastructure facilitates service delivery to various sectors and industries. Nevertheless, ensuring security of the slices remains a significant hurdle. In this paper, we investigate the utilization of State-of-the-Art (SoA) Virtual Private Network (VPN) solutions in 5G networks to enhance security and performance when isolating slices. We deploy and orchestrate cloud-native network functions to create multiple scenarios that emulate real-life cellular networks. We evaluate the performance of the WireGuard, IPSec, and OpenVPN solutions while ensuring confidentiality and data protection within 5G network slices. The proposed architecture provides secure communication tunnels and performance isolation. Evaluation results demonstrate that WireGuard provides slice isolation in the control and data planes with higher throughput for enhanced Mobile Broadband (eMBB) and lower latency for Ultra-Reliable Low-Latency Communications (URLLC) slices compared to IPSec and OpenVPN. Our developments show the potential of implementing WireGuard isolation, as a promising solution, for providing secure and efficient network slicing, which fulfills the 5G key performance indicator values.
19

Li, Hanqi, Xianhui Liu, and Weidong Zhao. "Research on Lightweight Microservice Composition Technology in Cloud-Edge Device Scenarios." Sensors 23, no. 13 (June 26, 2023): 5939. http://dx.doi.org/10.3390/s23135939.

Abstract:
In recent years, cloud-native technology has become popular among Internet companies. Microservice architecture tackles the complexity of monolithic applications by decomposing a single application so that each service can be independently developed, deployed, and scaled. At the same time, domestic industrial Internet construction is still in its infancy, and small and medium-sized enterprises face many problems in the process of digital transformation, such as difficult resource integration, complex control-equipment workflows, slow development and deployment processes, and a shortage of operations and maintenance personnel. Existing traditional workflow architectures are mainly aimed at cloud scenarios: they consume substantial resources and cannot be used in resource-limited edge scenarios. Moreover, traditional workflows transfer data inefficiently, often relying on various external storage mechanisms. This article proposes a lightweight and efficient workflow architecture that addresses these shortcomings in combined cloud-edge scenarios. By orchestrating a lightweight workflow engine with a Kubernetes Operator, the architecture can significantly reduce workflow execution time and unify data flow between cloud microservices and edge devices.
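The core mechanism the abstract describes — a lightweight engine that orders steps and hands results directly from one step to the next instead of round-tripping through external storage — can be sketched in a few lines (an illustrative toy, not the paper's engine; the `Workflow` class and the example steps are invented):

```python
from collections import deque

class Workflow:
    """Toy DAG workflow engine: steps declare dependencies, and results
    flow between steps in memory rather than via a storage service."""

    def __init__(self):
        self.steps = {}  # name -> (dependencies, callable)

    def step(self, name, deps, fn):
        self.steps[name] = (deps, fn)

    def run(self, initial):
        # Kahn's algorithm: execute steps in topological order.
        indeg = {n: len(d) for n, (d, _) in self.steps.items()}
        children = {n: [] for n in self.steps}
        for n, (deps, _) in self.steps.items():
            for d in deps:
                children[d].append(n)
        ready = deque(n for n, k in indeg.items() if k == 0)
        results = {}
        while ready:
            n = ready.popleft()
            deps, fn = self.steps[n]
            args = [results[d] for d in deps] if deps else [initial]
            results[n] = fn(*args)  # output handed straight to successors
            for c in children[n]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
        return results

wf = Workflow()
wf.step("read", [], lambda x: x + 1)          # e.g. ingest an edge reading
wf.step("square", ["read"], lambda r: r * r)  # downstream processing step
out = wf.run(3)
```

In a real deployment the callables would be containerized microservices reconciled by an Operator; the in-memory hand-off is what replaces the storage-mediated data transfer the paper criticizes.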
20

Taylor, Ryan Paul, Jeffrey Ryan Albert, and Fernando Harald Barreiro Megino. "A grid site reimagined: Building a fully cloud-native ATLAS Tier 2 on Kubernetes." EPJ Web of Conferences 295 (2024): 07001. http://dx.doi.org/10.1051/epjconf/202429507001.

Abstract:
The University of Victoria (UVic) operates an Infrastructure-as-a-Service scientific cloud for Canadian researchers, and a Tier 2 site for the ATLAS experiment at CERN as part of the Worldwide LHC Computing Grid (WLCG). At first, these were two distinctly separate systems, but over time we have taken steps to migrate the Tier 2 grid services to the cloud. This process has been significantly facilitated by basing our approach on Kubernetes, a versatile, robust, and very widely adopted automation platform for orchestrating containerized applications. Previous work exploited the batch capabilities of Kubernetes to run grid computing jobs and replace the conventional grid computing elements by interfacing with the Harvester workload management system of the ATLAS experiment. However, the required functionality of a Tier 2 site encompasses more than just batch computing. Likewise, the capabilities of Kubernetes extend far beyond running batch jobs, and include for example scheduling recurring tasks and hosting long-running externally-accessible services in a resilient way. We are now undertaking the more complex and challenging endeavour of adapting and migrating all remaining services of the Tier 2 site — such as APEL accounting and Squid caching proxies, and in particular the grid storage element — to cloud-native deployments on Kubernetes. We aim to enable fully comprehensive deployment of a complete ATLAS Tier 2 site on a Kubernetes cluster via Helm charts, which will benefit the community by providing a streamlined and replicable way to install and configure an ATLAS site. We also describe our experience running a high-performance self-managed Kubernetes ATLAS Tier 2 cluster at the scale of 8 000 CPU cores for the last two years, and compare with the conventional setup of grid services.
21

Bocchi, Enrico, Abhishek Lekshmanan, Roberto Valverde, and Zachary Goggin. "Enabling Storage Business Continuity and Disaster Recovery with Ceph distributed storage." EPJ Web of Conferences 295 (2024): 01021. http://dx.doi.org/10.1051/epjconf/202429501021.

Abstract:
The Storage Group in the CERN IT Department operates several Ceph storage clusters with an overall capacity exceeding 100 PB. Ceph is a crucial component of the infrastructure delivering IT services to all the users of the Organization as it provides: i) Block storage for OpenStack, ii) CephFS, used as persistent storage by containers (OpenShift and Kubernetes) and as shared filesystems by HPC clusters and iii) S3 object storage for cloud-native applications, monitoring and software distribution across the WLCG. The Ceph infrastructure at CERN is being rationalized and restructured to allow for the implementation of a Business Continuity/Disaster Recovery plan. In this paper, we give an overview of how we transitioned from a single cluster providing block storage to multiple ones, enabling Storage Availability zones, and how block storage backups can be achieved. We also illustrate future plans for file system backups through cback, a restic-based scalable orchestrator, and how S3 implements data immutability and provides a highly available, Multi-Data Centre object storage service.
22

Stan, Ioan-Mihail, Siarhei Padolski, and Christopher Jon Lee. "Exploring the self-service model to visualize the results of the ATLAS Machine Learning analysis jobs in BigPanDA with Openshift OKD3." EPJ Web of Conferences 251 (2021): 02009. http://dx.doi.org/10.1051/epjconf/202125102009.

Abstract:
A large scientific computing infrastructure must offer versatility to host any kind of experiment that can lead to innovative ideas. The ATLAS experiment offers wide access possibilities to perform intelligent algorithms and analyze the massive amount of data produced in the Large Hadron Collider at CERN. The BigPanDA monitoring is a component of the PanDA (Production ANd Distributed Analysis) system, and its main role is to monitor the entire lifecycle of a job/task running in the ATLAS Distributed Computing infrastructure. Because many scientific experiments now rely upon Machine Learning algorithms, the BigPanDA community aims to expand the platform's capabilities and fill the gap between Machine Learning processing and data visualization. In this regard, BigPanDA partially adopts the cloud-native paradigm and entrusts the data presentation to MLFlow services running on Openshift OKD. Thus, BigPanDA interacts with the OKD API and instructs the container orchestrator on how to locate and expose the results of the Machine Learning analysis. The proposed architecture also introduces various DevOps-specific patterns, including continuous integration for MLFlow middleware configuration and continuous deployment pipelines that implement rolling upgrades. The Machine Learning data visualization services operate on demand and run for a limited time, thus optimizing resource consumption.
23

Arora, Sagar, Adlen Ksentini, and Christian Bonnet. "Cloud native Lightweight Slice Orchestration (CLiSO) framework." Computer Communications, October 2023. http://dx.doi.org/10.1016/j.comcom.2023.10.010.

24

Satyanarayanan, Mahadev, Jan Harkes, Jim Blakley, Marc Meunier, Govindarajan Mohandoss, Kiel Friedt, Arun Thulasi, Pranav Saxena, and Brian Barritt. "Sinfonia: Cross-tier orchestration for edge-native applications." Frontiers in the Internet of Things 1 (October 19, 2022). http://dx.doi.org/10.3389/friot.2022.1025247.

Abstract:
The convergence of 5G wireless networks and edge computing enables new edge-native applications that are simultaneously bandwidth-hungry, latency-sensitive, and compute-intensive. Examples include deeply immersive augmented reality, wearable cognitive assistance, privacy-preserving video analytics, edge-triggered serendipity, and autonomous swarms of featherweight drones. Such edge-native applications require network-aware and load-aware orchestration of resources across the cloud (Tier-1), cloudlets (Tier-2), and device (Tier-3). This paper describes the architecture of Sinfonia, an open-source system for such cross-tier orchestration. Key attributes of Sinfonia include: support for multiple vendor-specific Tier-1 roots of orchestration, providing end-to-end runtime control that spans technical and non-technical criteria; use of third-party Kubernetes clusters as cloudlets, with unified treatment of telco-managed, hyperconverged, and just-in-time variants of cloudlets; and masking of orchestration complexity from applications, thus lowering the barrier to creation of new edge-native applications. We describe an initial release of Sinfonia (https://github.com/cmusatyalab/sinfonia), and share our thoughts on evolving it in the future.
25

Metsch, Thijs, Magdalena Viktorsson, Adrian Hoban, Monica Vitali, Ravi Iyer, and Erik Elmroth. "Intent-Driven Orchestration: Enforcing Service Level Objectives for Cloud Native Deployments." SN Computer Science 4, no. 3 (March 17, 2023). http://dx.doi.org/10.1007/s42979-023-01698-0.

26

Chowdhury, Rasel, Chamseddine Talhi, Hakima Ould-Slimane, and Azzam Mourad. "Proactive and Intelligent Monitoring and Orchestration of Cloud-Native IP Multimedia Subsystem." SSRN Electronic Journal, 2023. http://dx.doi.org/10.2139/ssrn.4363466.

27

Chowdhury, Rasel, Chamseddine Talhi, Hakima Ould-Slimane, and Azzam Mourad. "Proactive and Intelligent Monitoring and Orchestration of Cloud-Native IP Multimedia Subsystem." IEEE Open Journal of the Communications Society, 2023, 1. http://dx.doi.org/10.1109/ojcoms.2023.3341002.

28

Chandrasehar, Amreth. "ML Powered Container Management Platform: Revolutionizing Digital Transformation through Containers and Observability." Journal of Artificial Intelligence & Cloud Computing, March 31, 2022, 1–3. http://dx.doi.org/10.47363/jaicc/2022(1)122.

Abstract:
As companies pursue digital transformation, cloud-native applications become a critical part of their architecture and roadmap. Enterprise applications and tools developed using cloud-native architecture are containerized and deployed on container orchestration platforms. Containers have revolutionized application deployment, helping with the management, scaling, and operation of workloads deployed on container platforms. However, platform operators face many issues, such as the complexity of managing large-scale environments, security, networking, storage, observability, and cost. This paper discusses how to build a container management platform that uses monitoring data to drive AI and ML models, aiding an organization's digital transformation journey.
29

Li, Zijun, Linsong Guo, Jiagan Cheng, Quan Chen, BingSheng He, and Minyi Guo. "The Serverless Computing Survey: A Technical Primer for Design Architecture." ACM Computing Surveys, January 14, 2022. http://dx.doi.org/10.1145/3508360.

Abstract:
The development of cloud infrastructures inspires the emergence of cloud-native computing. As the most promising architecture for deploying microservices, serverless computing has recently attracted growing attention in both industry and academia. Due to its inherent scalability and flexibility, serverless computing has become attractive and increasingly pervasive for ever-growing Internet services. Despite the momentum in the cloud-native community, existing challenges and compromises still await more advanced research and solutions to further explore the potential of the serverless computing model. As a contribution to this knowledge, this article surveys and elaborates the research domains in the serverless context by decoupling the architecture into four stack layers: Virtualization, Encapsule, System Orchestration, and System Coordination. Inspired by the security model, we highlight the key implications and limitations of the works in each layer, and make suggestions for the challenges facing future serverless computing.
30

Camacho, Christiam, Grzegorz M. Boratyn, Victor Joukov, Roberto Vera Alvarez, and Thomas L. Madden. "ElasticBLAST: accelerating sequence search via cloud computing." BMC Bioinformatics 24, no. 1 (March 26, 2023). http://dx.doi.org/10.1186/s12859-023-05245-9.

Abstract:
Background: Biomedical researchers use alignments produced by BLAST (Basic Local Alignment Search Tool) to categorize their query sequences. Producing such alignments is an essential bioinformatics task that is well suited for the cloud. The cloud can perform many calculations quickly, as well as store and access large volumes of data. Bioinformaticians can also use it to collaborate with other researchers, sharing their results, datasets, and even their pipelines on a common platform. Results: We present ElasticBLAST, a cloud-native application to perform BLAST alignments in the cloud. ElasticBLAST can handle anywhere from a few to many thousands of queries and run the searches on thousands of virtual CPUs (if desired), deleting resources when it is done. It uses cloud-native tools for orchestration and can request discounted instances, lowering cloud costs for users. It is supported on Amazon Web Services and Google Cloud Platform. It can search BLAST databases that are user provided or from the National Center for Biotechnology Information. Conclusion: We show with two examples that ElasticBLAST can efficiently perform BLAST searches for the user in the cloud. At the same time, it hides much of the complexity of working in the cloud, lowering the threshold for moving work to the cloud.
31

Boutouchent, Akram, Abdellah N. Meridja, Youcef Kardjadja, Adyson M. Maia, Yacine Ghamri-Doudane, Mouloud Koudil, Roch H. Glitho, and Halima Elbiaze. "AMANOS: An Intent-Driven Management And Orchestration System For Next-Generation Cloud-Native Networks." IEEE Communications Magazine, 2023, 1–7. http://dx.doi.org/10.1109/mcom.003.2300367.

32

Soumplis, Polyzois, Georgios Kontos, Panagiotis Kokkinos, Aristotelis Kretsis, Sergio Barrachina-Muñoz, Rasoul Nikbakht, Jorge Baranda, Miquel Payaró, Josep Mangues-Bafalluy, and Emmanuel Varvarigos. "Performance Optimization Across the Edge-Cloud Continuum: A Multi-agent Rollout Approach for Cloud-Native Application Workload Placement." SN Computer Science 5, no. 3 (March 13, 2024). http://dx.doi.org/10.1007/s42979-024-02630-w.

Abstract:
The advancements in virtualization technologies and distributed computing infrastructures have sparked the development of cloud-native applications. This is grounded in the breakdown of a monolithic application into smaller, loosely connected components, often referred to as microservices, enabling enhancements in the application's performance, flexibility, and resilience, along with better resource utilization. When optimizing the performance of cloud-native applications, specific demands arise in terms of application latency and communication delays between microservices that are not taken into consideration by generic orchestration algorithms. In this work, we propose mechanisms for automating the allocation of computing resources to optimize the service delivery of cloud-native applications over the edge-cloud continuum. We initially introduce the problem's Mixed Integer Linear Programming (MILP) formulation. Given the potentially overwhelming execution time for real-sized problems, we propose a greedy algorithm, which allocates resources sequentially in a best-fit manner. To further improve the performance, we introduce a multi-agent rollout mechanism that evaluates the immediate effect of decisions but also leverages the underlying greedy heuristic to simulate the decisions anticipated from other agents, encapsulating this in a Reinforcement Learning framework. This approach allows us to effectively manage the performance–execution time trade-off and enhance performance by controlling the exploration of the Rollout mechanism. This flexibility ensures that the system remains adaptive to varied scenarios, making the most of the available computational resources while still ensuring high-quality decisions.
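The greedy baseline described above — allocating resources sequentially in a best-fit manner — might look like this minimal sketch (the capacity/latency cost model and all names are invented for illustration; the paper's MILP formulation and rollout layer are not shown):

```python
def greedy_place(services, nodes):
    """Place each microservice, in order, on the feasible node with the
    lowest latency. nodes maps name -> (capacity, latency_ms);
    services is a list of (name, demand) pairs."""
    free = {name: cap for name, (cap, _) in nodes.items()}
    placement = {}
    for svc, demand in services:
        feasible = [n for n in nodes if free[n] >= demand]
        if not feasible:
            raise RuntimeError(f"no node can host {svc}")
        best = min(feasible, key=lambda n: nodes[n][1])  # lowest latency wins
        free[best] -= demand
        placement[svc] = best
    return placement

# Edge node: scarce capacity but close; cloud node: plentiful but far.
nodes = {"edge": (4, 5), "cloud": (64, 40)}
services = [("ui", 2), ("api", 2), ("db", 4)]
plan = greedy_place(services, nodes)
```

A rollout mechanism in the paper's spirit would, before committing each placement, simulate this same heuristic forward for the remaining services and pick the decision with the best simulated outcome.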
33

Dalgitsis, Michail, Nicola Cadenelli, Maria A. Serrano, Nikolaos Bartzoudis, Luis Alonso, and Angelos Antonopoulos. "Cloud-native orchestration framework for network slice federation across administrative domains in 5G/6G mobile networks." IEEE Transactions on Vehicular Technology, 2024, 1–14. http://dx.doi.org/10.1109/tvt.2024.3362583.

34

Tang, Bing, Xiaoyuan Zhang, Qing Yang, Xin Qi, Fayez Alqahtani, and Amr Tolba. "Cost-optimized Internet of Things application deployment in edge computing environment." International Journal of Communication Systems, September 25, 2023. http://dx.doi.org/10.1002/dac.5618.

Abstract:
With the increasing popularity of cloud native and DevOps, container technology has become widely used in combination with microservices. However, deploying container-based microservices in distributed edge-cloud infrastructure requires complex selection strategies to ensure high-quality service for users. Existing container orchestration tools lack flexibility in selecting the best deployment location based on user cost budgets and are insufficient in providing personalized deployment solutions. This paper proposes a genetic algorithm-based Internet of Things (IoT) application deployment and selection strategy for personalized cost budgets. The application deployment problem is defined as an optimization problem that minimizes user service latency under cost constraints, which is NP-hard. A genetic algorithm is introduced to solve this problem effectively and improve deployment efficiency. Comparative results on real and synthetic datasets show that the proposed algorithm outperforms four baseline algorithms: time-greedy, cost-greedy, random, and PSO. The proposed algorithm provides personalized deployment solutions for edge-cloud infrastructure.
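A genetic-algorithm placement of this flavor — minimize total latency while penalizing deployments that exceed the cost budget — can be sketched as follows (a generic GA with invented operators, parameters, and toy data, not the paper's algorithm):

```python
import random

def ga_deploy(n_services, nodes, budget, pop=30, gens=50, seed=1):
    """Tiny GA sketch: a chromosome assigns each service a node index;
    fitness is total latency, with a heavy penalty when the summed
    node cost exceeds the user's budget."""
    rng = random.Random(seed)
    lat = [n["latency"] for n in nodes]
    cost = [n["cost"] for n in nodes]

    def fitness(ch):
        c = sum(cost[g] for g in ch)
        return sum(lat[g] for g in ch) + 1000 * max(0, c - budget)

    popu = [[rng.randrange(len(nodes)) for _ in range(n_services)]
            for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness)
        nxt = popu[:2]                           # elitism: keep the best two
        while len(nxt) < pop:
            a, b = rng.sample(popu[:10], 2)      # select among the fittest
            cut = rng.randrange(1, n_services)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # occasional mutation
                child[rng.randrange(n_services)] = rng.randrange(len(nodes))
            nxt.append(child)
        popu = nxt
    best = min(popu, key=fitness)
    return best, sum(cost[g] for g in best), sum(lat[g] for g in best)

# Toy data: node 0 is fast but expensive, node 1 is cheap but slow.
nodes = [{"latency": 5, "cost": 10}, {"latency": 40, "cost": 1}]
best, total_cost, total_lat = ga_deploy(4, nodes, budget=25)
```

The penalty term is what encodes the "personalized cost budget": a user with a larger budget simply shifts the feasible region toward the faster, pricier nodes.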
35

Dähling, Stefan, Lukas Razik, and Antonello Monti. "Enabling scalable and fault-tolerant multi-agent systems by utilizing cloud-native computing." Autonomous Agents and Multi-Agent Systems 35, no. 1 (January 25, 2021). http://dx.doi.org/10.1007/s10458-020-09489-0.

Abstract:
Multi-agent systems (MAS) represent a distributed computing paradigm well suited to tackle today's challenges in the field of the Internet of Things (IoT). Both share many similarities, such as the interconnection of distributed devices and their cooperation. The combination of MAS and IoT would allow the transfer of the experience gained in MAS research to the broader range of IoT applications. The key enabler for utilizing MAS in the IoT is the ability to build large-scale and fault-tolerant MASs, since IoT concepts comprise possibly thousands or even millions of devices. However, well-known multi-agent platforms (MAPs), e.g., the Java Agent DEvelopment Framework (JADE), are not able to deal with these challenges. To this aim, we present the cloud-native Multi-Agent Platform (cloneMAP), a modern MAP based on cloud-computing techniques that enables scalability and fault tolerance. A microservice architecture is used to implement it in a distributed way, utilizing the open-source container orchestration system Kubernetes. Thereby, bottlenecks and single points of failure are conceptually avoided. A comparison with JADE via relevant performance metrics indicates massively improved scalability. Furthermore, the implementation of a large-scale use case verifies cloneMAP's suitability for IoT applications. This leads to the conclusion that cloneMAP extends the range of possible MAS applications and enables integration with IoT concepts.
36

Jian, Zhaolong, Xueshuo Xie, Yaozheng Fang, Yibing Jiang, Ye Lu, Ankan Dash, Tao Li, and Guiling Wang. "DRS: A deep reinforcement learning enhanced Kubernetes scheduler for microservice-based system." Software: Practice and Experience, October 25, 2023. http://dx.doi.org/10.1002/spe.3284.

Abstract:
Kubernetes, the most popular container orchestration framework, is widely used to manage and schedule the resources of microservices in cloud-native distributed applications. However, the native Kubernetes scheduler, Kube-scheduler, preferentially schedules microservices to nodes with rich and balanced CPU and memory resources on a single node, which may cause resource fragmentation and decrease resource utilization. In this paper, we propose a deep reinforcement learning enhanced Kubernetes scheduler named DRS. We first frame the Kubernetes scheduling problem as a Markov decision process with carefully designed state, action, and reward structures, aiming to increase resource utilization and decrease load imbalance. We then design and implement the DRS monitor to perceive six parameters concerning resource utilization and create a thorough global picture of all available resources. Finally, DRS automatically learns the scheduling policy through interaction with the Kubernetes cluster, without relying on expert knowledge about workload and cluster status. We implement a prototype of DRS in a Kubernetes cluster with five nodes and evaluate its performance. Experimental results highlight that DRS overcomes the shortcomings of Kube-scheduler and achieves the expected scheduling target with three workloads. With only 3.27% CPU overhead and 0.648% communication delay, DRS outperforms Kube-scheduler by 27.29% in terms of resource utilization and reduces load imbalance by 2.90 times on average.
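The objective DRS optimizes — higher utilization with lower load imbalance — can be illustrated with a hand-written scoring function (an invented stand-in: the actual DRS learns this trade-off from six monitored parameters rather than computing it from a fixed formula):

```python
def imbalance(loads):
    """Variance of per-node utilization fractions; loads is a list of
    (used, capacity) pairs."""
    utils = [u / c for u, c in loads]
    mean = sum(utils) / len(utils)
    return sum((x - mean) ** 2 for x in utils) / len(utils)

def pick_node(nodes, demand):
    """Score each feasible node by cluster utilization after placement
    minus an imbalance penalty; return the best-scoring node name."""
    total_cap = sum(cap for _, cap in nodes.values())
    scores = {}
    for name, (used, cap) in nodes.items():
        if cap - used < demand:
            continue  # node cannot fit the pod
        after = [(u + demand, c) if n == name else (u, c)
                 for n, (u, c) in nodes.items()]
        util = sum(u for u, _ in after) / total_cap
        scores[name] = util - imbalance(after)
    return max(scores, key=scores.get)

# Node "a" is already busy; adding the pod to "b" evens out the load.
choice = pick_node({"a": (6, 8), "b": (2, 8)}, demand=2)
```

Framed as a Markov decision process, this score would become the per-step reward, and the policy picking the node would be learned from cluster interaction instead of being hard-coded.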
37

"Digital Transformation of Capital Market Infrastructure." Economic Policy 15, no. 5 (2020): 8–31. http://dx.doi.org/10.18288/1994-5124-2020-5-8-31.

Abstract:
Digital transformation is taken in the article as changes in business models inspired by "new technologies" (cloud, artificial intelligence, big data, distributed ledgers etc.), by threats of disruption from new competitors (FinTech startups and big techs), and by growing demand for individualized and integrated services distributed via digital channels. In this sense, digital transformation means platformification, i.e. when business is developed through digital information systems connecting buyers with sellers, including third-party service providers. Financial institutes, while selling their native services on platforms and/or orchestrating platforms, are able to position their proposals as technological, thus making the modern financial mantra ("Work like Google") true. The article outlines the distinctions between the platform business model and the traditional pipeline model; compares the definitions of "platform", "marketplace" and "ecosystem"; summarizes domestic and international experience of platformification in capital market infrastructure (exchanges/trading venues, CCP clearing houses, central securities depositories etc.); differentiates the models of such platformification according to where they lead away from business as usual: onto other markets (mono- and multi-product platforms), to other services ("universal platforms" and digital asset platforms) or to other interactions (platforms for non-core services, "at the top of exchange" platforms, "apps warehouses"); and describes the place of platformification in growth strategies—both real ones, including the Moscow Exchange Group case, and hypothetical ones in line with the Ansoff Matrix.