Academic literature on the topic 'Orchestration cloud native'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Orchestration cloud native.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Orchestration cloud native"

1

Chelliah, Pethuru Raj, and Chellammal Surianarayanan. "Multi-Cloud Adoption Challenges for the Cloud-Native Era." International Journal of Cloud Applications and Computing 11, no. 2 (April 2021): 67–96. http://dx.doi.org/10.4018/ijcac.2021040105.

Abstract:
With the ready availability of appropriate technologies and tools for crafting hybrid clouds, the move towards employing multiple clouds for hosting and running various business workloads is garnering attention. The concept of cloud-native computing is gaining prominence with the faster proliferation of microservices and containers. The increasing stability and maturity of container orchestration platforms also contribute greatly to the cloud-native era. This paper makes the following contributions: 1) it describes the key motivations for the multi-cloud concept and its implementations; 2) it highlights the various key drivers of the multi-cloud paradigm; 3) it presents the challenges that are likely to occur while setting up a multi-cloud environment; and 4) it elaborates on proven and potential solution approaches for those challenges. The technology-inspired and tool-enabled solution approaches significantly simplify and speed up the adoption of the fast-emerging and evolving multi-cloud concept in the cloud-native era.
2

Leiter, Ákos, Edina Lami, Attila Hegyi, József Varga, and László Bokor. "Closed-loop Orchestration for Cloud-native Mobile IPv6." Infocommunications journal 15, no. 1 (2023): 44–54. http://dx.doi.org/10.36244/icj.2023.1.5.

Abstract:
With the advent of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), every network service type faces significant challenges induced by novel requirements. Mobile IPv6, the well-known IETF standard for network-level mobility management, is no exception. Cloud-native Mobile IPv6 has acquired several new capabilities due to the technological advancements of the NFV/SDN evolution. This paper presents how automatic failover and scaling can be envisioned in the context of cloud-native Mobile IPv6 with closed-loop orchestration on top of the Open Network Automation Platform. Numerical results are also presented to indicate the usefulness of the new operational features (failover, scaling) driven by the cloud-native approach and to highlight the advantages of network automation in virtualized and softwarized environments.
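To make the closed-loop idea concrete (observe a metric, decide, act on the orchestrator), here is a deliberately simplified control-loop sketch in Python. It is not the ONAP-based implementation described in the paper; the service name, load samples, and scale() stub are invented for illustration.

```python
# A minimal closed-loop scaling sketch (monitor -> decide -> act), illustrative
# only; it is not the ONAP-based closed-loop mechanism described in the paper.
# The simulated load values and the scale() stub are assumptions for the demo.

TARGET_UTILIZATION = 0.6          # desired average load per replica (assumed)
MIN_REPLICAS, MAX_REPLICAS = 1, 10

def scale(name: str, replicas: int) -> None:
    # Hypothetical actuator: a real controller would call an orchestrator API here.
    print(f"scaling {name} to {replicas} replica(s)")

def reconcile(name: str, load_samples) -> None:
    replicas = MIN_REPLICAS
    for load in load_samples:                       # each sample = one control-loop tick
        desired = round(load / TARGET_UTILIZATION)  # proportional decision rule
        desired = max(MIN_REPLICAS, min(MAX_REPLICAS, desired))
        if desired != replicas:                     # act only when the decision changes
            scale(name, desired)
            replicas = desired

if __name__ == "__main__":
    reconcile("mobile-ipv6-home-agent", [0.5, 1.4, 2.9, 2.7, 1.0, 0.4])
```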
3

Vaño, Rafael, Ignacio Lacalle, Piotr Sowiński, Raúl S-Julián, and Carlos E. Palau. "Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions." Sensors 23, no. 4 (February 16, 2023): 2215. http://dx.doi.org/10.3390/s23042215.

Abstract:
Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. Challenges of containerization, operative models and scarce availability of established tools make a thorough review indispensable. Therefore, the authors have described the practical methods and tools found in the literature as well as in current community-led development projects, and have thoroughly exposed the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major efforts have been recently recorded focused on the adaptation of these technologies to the edge. Such initiatives have addressed either the reduction of container engines and the development of specific tailored operating systems or the development of smaller K8s distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules together with the joint orchestration of these heterogeneous workloads, seem to be the topics to pay attention to in the short to medium term.
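As a small illustration of one recurring pattern in edge-oriented Kubernetes/K3s deployments, the sketch below lists the nodes of a cluster that carry an (assumed) edge label, which is how an orchestrator can distinguish edge sites from cloud nodes. The label key is invented for the example; distributions such as KubeEdge or cluster operators typically define their own labels.

```python
# Sketch: distinguishing edge nodes from cloud nodes in a single Kubernetes/K3s
# cluster by a node-label convention, using the official Kubernetes Python client.
# The label key "node-role.example.org/edge" is an illustrative assumption.
from kubernetes import client, config

EDGE_LABEL = "node-role.example.org/edge"

def edge_nodes():
    config.load_kube_config()                  # reads the local kubeconfig
    nodes = client.CoreV1Api().list_node().items
    return [n.metadata.name for n in nodes
            if (n.metadata.labels or {}).get(EDGE_LABEL) == "true"]

if __name__ == "__main__":
    print("edge nodes:", edge_nodes())
```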
4

Aelken, Jörg, Joan Triay, Bruno Chatras, and Arturo Martin de Nicolas. "Toward Cloud-Native VNFs: An ETSI NFV Management and Orchestration Standards Approach." IEEE Communications Standards Magazine 8, no. 2 (June 2024): 12–19. http://dx.doi.org/10.1109/mcomstd.0002.2200079.

5

Chandrasehar, Amreth. "ML Powered Container Management Platform: Revolutionizing Digital Transformation through Containers and Observability." Journal of Artificial Intelligence & Cloud Computing 1, no. 1 (March 31, 2022): 1–3. http://dx.doi.org/10.47363/jaicc/2023(1)130.

Abstract:
As companies adopt digital transformation, cloud-native applications become a critical part of their architecture and roadmap. Enterprise applications and tools developed using cloud-native architecture are containerized and deployed on container orchestration platforms. Containers have revolutionized application deployment, helping with the management, scaling, and operation of workloads deployed on container platforms. However, platform operators face many issues, such as the complexity of managing large-scale environments, security, networking, storage, observability, and cost. This paper discusses how to build a container management platform that uses monitoring data to implement AI and ML models to aid an organization's digital transformation journey.
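As a rough illustration of the "monitoring data drives the platform" idea, the sketch below derives a replica recommendation from recent request-rate samples with a naive moving-average forecast. The numbers and the per-replica capacity are invented, and the paper's actual AI/ML models are not shown here.

```python
# Sketch: turning observability data into a scaling recommendation. A simple
# moving average stands in for the ML model, purely to illustrate the data flow.
import math
from collections import deque

WINDOW = 5                    # number of recent samples used for the forecast
CAPACITY_PER_REPLICA = 200    # requests/second one replica can serve (assumed)

def recommend_replicas(request_rates) -> int:
    recent = deque(request_rates, maxlen=WINDOW)    # keep only the last WINDOW samples
    forecast = sum(recent) / len(recent)            # naive stand-in for a trained model
    return max(1, math.ceil(forecast / CAPACITY_PER_REPLICA))

if __name__ == "__main__":
    samples = [150, 240, 390, 520, 610, 700]        # simulated request-rate metrics
    print("recommended replicas:", recommend_replicas(samples))
```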
6

Liu, Peng, Jinsong Wang, Weisen Zhao, and Xiangjun Li. "Research and Implementation of Container Based Application Orchestration Service Technology." Journal of Physics: Conference Series 2732, no. 1 (March 1, 2024): 012012. http://dx.doi.org/10.1088/1742-6596/2732/1/012012.

Abstract:
With the rapid development of cloud computing technology, Kubernetes (K8s), as the main orchestration tool for cloud-native applications, has become the preferred choice for enterprises and developers. This article is based on container-based application orchestration service technology. Through a set of templates containing cloud resource descriptions, it quickly completes functions such as application creation and configuration, batch application cloning, and multi-environment application deployment. It simplifies and automates the lifecycle management capabilities required for cloud applications, such as resource planning, application design, deployment, status monitoring, and scaling. Users can complete infrastructure management, operation, and maintenance work more conveniently, allowing them to focus on innovation and research and development and to improve work efficiency. The practical effect of the technology described in this article depends to a certain extent on the capability of the underlying service resources, and manual template creation is required for first use. In production, a certain level of expertise is needed to create a good application orchestration template and to adjust and optimize resources for the production environment in order to significantly improve effectiveness and efficiency.
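As a hedged illustration of template-driven orchestration, the sketch below renders one base Deployment template with per-environment overrides and applies it through the official Kubernetes Python client. The template fields, environments, image, and namespaces are assumptions, not the template format described in the article.

```python
# Sketch of template-driven application orchestration: one base template plus
# per-environment overrides, applied via the official Kubernetes Python client.
import copy
from kubernetes import client, config

BASE_TEMPLATE = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {"containers": [{"name": "demo-app",
                                     "image": "example.org/demo-app:1.0"}]},
        },
    },
}

ENV_OVERRIDES = {              # "multi-environment deployment" as simple overrides
    "dev":  {"replicas": 1},
    "prod": {"replicas": 4},
}

def render(env: str) -> dict:
    manifest = copy.deepcopy(BASE_TEMPLATE)
    manifest["spec"]["replicas"] = ENV_OVERRIDES[env]["replicas"]
    return manifest

def deploy(env: str, namespace: str) -> None:
    config.load_kube_config()                        # uses the local kubeconfig
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace=namespace, body=render(env))

# Example (assumed namespaces): deploy("dev", "demo-dev"); deploy("prod", "demo-prod")
```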
7

Oyeniran, Oyekunle Claudius, Oluwole Temidayo Modupe, Aanuoluwapo Ayodeji Otitoola, Oluwatosin Oluwatimileyin Abiona, Adebunmi Okechukwu Adewusi, and Oluwatayo Jacob Oladapo. "A comprehensive review of leveraging cloud-native technologies for scalability and resilience in software development." International Journal of Science and Research Archive 11, no. 2 (March 30, 2024): 330–37. http://dx.doi.org/10.30574/ijsra.2024.11.2.0432.

Abstract:
In the landscape of modern software development, the demand for scalability and resilience has become paramount, particularly with the rapid growth of online services and applications. Cloud-native technologies have emerged as a transformative force in addressing these challenges, offering dynamic scalability and robust resilience through innovative architectural approaches. This paper presents a comprehensive review of leveraging cloud-native technologies to enhance scalability and resilience in software development. The review begins by examining the foundational concepts of cloud-native architecture, emphasizing its core principles such as containerization, microservices, and declarative APIs. These principles enable developers to build and deploy applications that can dynamically scale based on demand while maintaining high availability and fault tolerance. Furthermore, the review explores the key components of cloud-native ecosystems, including container orchestration platforms like Kubernetes, which provide automated management and scaling of containerized applications. Additionally, it discusses the role of service meshes in enhancing resilience by facilitating secure and reliable communication between microservices. Moreover, the paper delves into best practices and patterns for designing scalable and resilient cloud-native applications, covering topics such as distributed tracing, circuit breaking, and chaos engineering. These practices empower developers to proactively identify and mitigate potential failure points, thereby improving the overall robustness of their systems. This review underscores the significance of cloud-native technologies in enabling software developers to build scalable and resilient applications. By embracing cloud-native principles and adopting appropriate tools and practices, organizations can effectively meet the evolving demands of modern software development in an increasingly dynamic and competitive landscape.
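Several of the resilience patterns named above, circuit breaking in particular, are easy to picture with a few lines of code. The sketch below is a minimal, generic circuit breaker in Python with arbitrary thresholds; it is an illustration of the pattern, not code from the reviewed work, and real deployments would typically rely on a service mesh or an established library.

```python
# A minimal circuit-breaker sketch: after repeated failures the circuit "opens"
# and calls fail fast until a reset timeout elapses. Thresholds are arbitrary.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 10.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None                 # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None             # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0                     # a success closes the circuit again
        return result
```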
8

Shevchenko, A., and S. Puzyrov. "Development of the Hardware and Software Platform for Modern IoT Solutions Based on Fog Computing Using Cloud-Native Technologies." Computer Systems and Network 2, no. 1 (March 23, 2017): 102–12. http://dx.doi.org/10.23939/csn2020.01.102.

Abstract:
The concept of digital transformation is very relevant at the moment due to the epidemiological situation and the transition of the world to the digital environment. IoT is one of the main drivers of digital transformation. The Internet of Things (IoT) is an extension of the Internet that consists of sensors, controllers, and various other devices, the so-called "things," that communicate with each other over the network. In this paper, the development of hardware and software for organizing fog and edge computing was divided into three levels: hardware, orchestration, and application. The application level was further divided into two parts: software and architectural. The hardware was implemented using two versions of the Raspberry Pi, the Raspberry Pi 4 and the Raspberry Pi Zero, connected in master-slave mode. The orchestration level used the K3s, Knative, and Nuclio technologies. Technologies such as the Linkerd service mesh, the NATS messaging system, gRPC as the RPC protocol implementation, the TDengine database, Apache Ignite, and Badger were used to implement the software part of the application level. The architectural part is designed as an API development standard, so it can be applied to a variety of IoT software solutions in any programming language. The system can be used as a platform for building modern IoT solutions on the fog/edge computing principle. Keywords: Internet of Things, IoT platform, container technologies, Digital Twin, API.
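To give a feel for the application level of such a platform, the sketch below shows a sensor publishing a telemetry reading over NATS using the nats-py client. The server address, subject naming, and payload format are assumptions rather than details taken from the paper.

```python
# Sketch: a sensor publishing telemetry over NATS (requires the nats-py package).
# The server URL, subject scheme, and JSON payload shape are illustrative assumptions.
import asyncio
import json
import nats

async def publish_reading(sensor_id: str, temperature_c: float) -> None:
    nc = await nats.connect("nats://127.0.0.1:4222")   # assumed local NATS server
    payload = json.dumps({"sensor": sensor_id, "t": temperature_c}).encode()
    await nc.publish(f"telemetry.{sensor_id}", payload)
    await nc.flush()
    await nc.close()

if __name__ == "__main__":
    asyncio.run(publish_reading("rpi-zero-01", 22.5))
```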
9

Li, Feifei. "Modernization of Databases in the Cloud Era: Building Databases that Run Like Legos." Proceedings of the VLDB Endowment 16, no. 12 (August 2023): 4140–51. http://dx.doi.org/10.14778/3611540.3611639.

Abstract:
Utilizing the cloud for common and critical computing infrastructures has already become the norm across the board. The rapid evolution of the underlying cloud infrastructure and the revolutionary development of AI present both challenges and opportunities for building new database architectures and systems. It is crucial to modernize database systems in the cloud era, so that next-generation cloud-native databases may run like legos: they are adaptive, flexible, reliable, and smart towards dynamic workloads and varying requirements. That said, we observe four critical trends and requirements for the modernization of cloud databases: embracing cloud-native architecture, full integration with the cloud platform and orchestration, co-design for data fabric, and moving towards being AI-augmented. Modernizing database systems by adopting these critical trends and addressing the key challenges associated with them provides ample opportunities for data management communities from both academia and industry to explore. We will provide an in-depth case study of how we modernize PolarDB with respect to embracing these four trends in the cloud era. Our ultimate goal is to build databases that run just like playing with legos, so that a database system fits rich and dynamic workloads and requirements in a self-adaptive, performant, easy- and intuitive-to-use, reliable, and intelligent manner.
10

Aijaz, Usman, Mohammed Abubakar, Aditya Reddy, Abuzar, and Alok C. Pujari. "An Analysis on Security Issues and Research Challenges in Cloud Computing." Journal of Security in Computer Networks and Distributed Systems 1, no. 1 (April 26, 2024): 37–44. http://dx.doi.org/10.46610/joscnds.2024.v01i01.005.

Abstract:
Cloud computing has completely transformed how businesses handle, store, and process data and applications. However, using cloud services brings several security issues that need to be resolved to guarantee the availability, confidentiality, and integrity of critical data. This abstract overviews vital aspects of cloud computing security and highlights emerging trends and research directions. Essential challenges of cloud computing security include ensuring data privacy and confidentiality, maintaining data integrity and trustworthiness, and addressing compliance with regulatory requirements. Identity and access management (IAM) remains a critical area, focusing on enhancing authentication mechanisms and access controls to mitigate the risks of unauthorized access and insider threats. Additionally, active research areas include securing cloud orchestration and management platforms, resilience and availability of cloud services, and addressing the unique security considerations of cloud native technologies. Interdisciplinary collaboration between researchers, industry practitioners, and policymakers is essential to develop innovative security solutions and best practices for cloud computing environments. By addressing these challenges and advancing the state of the art in cloud security, organizations can leverage the benefits of cloud computing while mitigating associated risks and ensuring the protection of sensitive data and resources. In summary, this review paper provides a holistic understanding of cloud computing security, offering insights into current practices, challenges, and future directions for ensuring the confidentiality, integrity, and availability of cloud based systems and data.

Dissertations / Theses on the topic "Orchestration cloud native"

1

Arora, Sagar. "Cloud Native Network Slice Orchestration in 5G and Beyond." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS278.

Abstract:
Network Function Virtualization (NFV) is the founding pillar of the 5G Service-Based Architecture and has the potential to revolutionize future mobile communication generations. NFV started back in 2012 with Virtual Machine (VM) based Virtual Network Functions (VNFs). The use of VMs raised multiple questions because of compatibility issues between VM hypervisors and their high resource consumption, which made containers an alternative network function packaging technology. The lightweight design of containers improves their instantiation time and resource footprint. Apart from network functions, containerization can be a promising enabler for Multi-access Edge Computing (MEC) applications, which host services with demanding low-latency requirements. Edge computing is one of the key technologies of the last decade, enabling several emerging services beyond 5G (e.g., autonomous driving, robotic networks, Augmented Reality (AR)) that require high availability and low-latency communications. Resource scarcity at the edge of the network requires technologies that efficiently utilize computational, storage, and networking resources. Containers' low resource footprint makes them suitable for designing MEC applications. Containerization is meant to be used within the framework of cloud-native application design fundamentals: a loosely coupled microservices-based architecture, on-demand scalability, and high resilience. The flexibility and agility of containers can certainly benefit 5G network slicing, which relies heavily on NFV and MEC. The concept of network slicing allows the creation of isolated logical networks on top of the same physical network. A network slice can have dedicated network functions, or its network functions can be shared among multiple slices. Indeed, network slice orchestration requires interaction with orchestrators from multiple technological domains: radio access, transport, core network, and edge computing. The paradigm shift towards cloud-native application design principles has created challenges for legacy orchestration systems and the ETSI NFV and MEC standards, which were designed for handling virtual-machine-based network functions and are therefore restricted in their approach to managing cloud-native network functions. The thesis examines the existing ETSI NFV and ETSI MEC standards and network service/slice orchestrators, aiming to overcome the challenges around multi-domain cloud-native network slice orchestration. To reach this goal, the thesis first proposes a MEC Radio Network Information Service (RNIS) that can provide radio information at the subscriber level in an NFV environment. Second, it provides a Dynamic Resource Allocation and Placement (DRAP) algorithm to place cloud-native network services considering their cost and availability matrix. Third, by combining NFV, MEC, and network slicing, the thesis proposes a novel Lightweight edge Slice Orchestration (LeSO) framework to overcome the challenges around edge slice orchestration. Fourth, the proposed framework offers an edge slice deployment template that allows multiple possibilities for designing MEC applications. These possibilities were further studied to understand the impact of microservice design architecture on application availability and latency. Finally, all this work is combined to propose a novel Cloud-native Lightweight Slice Orchestration (CLiSO) framework extending the previously proposed LeSO framework. In addition, the framework offers a technology-agnostic and deployment-oriented network slice template. The framework has been thoroughly evaluated by orchestrating OpenAirInterface container network functions on public and private cloud platforms. The experimental results show that the framework has a lower resource footprint than existing orchestrators and takes less time to orchestrate network slices.
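For intuition only, the sketch below shows what a technology-agnostic, deployment-oriented slice descriptor might contain. It is a hypothetical illustration and not the template defined by the LeSO/CLiSO framework; all names, images, and fields are invented.

```python
# Hypothetical sketch only: NOT the CLiSO/LeSO template from the thesis.
# It merely illustrates the idea of a deployment-oriented slice descriptor:
# a list of containerized network functions with placement hints and the
# technological domains the slice spans.
slice_descriptor = {
    "name": "embb-demo-slice",
    "domains": ["ran", "core", "edge"],          # technological domains involved
    "network_functions": [
        {"name": "amf", "image": "example.org/oai-amf:v1",
         "placement": "core-cloud", "replicas": 1},
        {"name": "upf", "image": "example.org/oai-upf:v1",
         "placement": "edge-site-1", "replicas": 2},
        {"name": "mec-app", "image": "example.org/video-analytics:v1",
         "placement": "edge-site-1", "replicas": 1},
    ],
    "shared_with": [],                           # slices allowed to reuse these NFs
}
```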
2

Dridi, Mohamed Amine. "Platform-based 5G service design and orchestration." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS002.

Abstract:
5G networks and beyond will have to support exponential growth in the number of connected devices of different types, as a pillar of a global accelerated digitization movement. In addition to hyperscale characteristics, these networks will also have to support a diverse set of connectivity services for new industries with heterogeneous requirements. 5G network designers and developers are thus compelled to provide new solutions and optimize existing ones to contain increasing bandwidth demands and higher Quality of Experience (QoE) expectations. These networks also need to be highly customizable to adapt to varying use cases and highly automated to shorten time-to-market delays. The expected characteristics of 5G networks inspired mobile network providers to radically change the way they design and develop their solutions by adopting an extensive softwarization strategy. The mobile networking domain and the rest of the IT world are therefore converging, and mobile network providers can benefit from thriving software and cloud computing ecosystems with state-of-the-art practices and tools. Software-based network functions would allow these providers to have the levels of programmability and reconfigurability they need to deal with such a fast-paced evolution of mobile connectivity. This thesis aims at providing a few optimizations of different parts of 5G networks and the way they are deployed and managed, hoping to contribute to solving some of the problems that network designers are facing. It proposes solutions to specific problems related to physical-layer processing in 5G networks for interference mitigation, as well as to generic issues related to network automation and customization. In this thesis we built an end-to-end network service fabric composed of a Radio Access Network (RAN), core, and orchestration platform using Metaplatform concepts and tools. The first part treats the issue of Intercell Interference (ICI), which is expected to be a liability with the foreseen antenna densification in 5G networks. We propose a solution to mitigate ICI in Uplink (UL) transmissions, based on the Joint Detection (JD) technique. The proposed solution satisfies the architectural, functional, and technical requirements of JD integration in practical networks. We incorporate the JD solution into a RAN platform in the second part and extend this platform with other capabilities. We adopt the same approach in the third part of this thesis to provide a solution that automates core network deployment and life-cycle management in a Network Function Virtualization (NFV) environment and to create a reusable core network platform orchestrated by the Open Network Automation Platform (ONAP).
3

Noroozi, Hamid. "A Cloud-native Vehicular Public Key Infrastructure : Towards a Highly-available and Dynamically- scalable VPKIaaS." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300658.

Abstract:
Efforts towards the standardization of Vehicular Communication Systems (VCSs) have been conclusive on the use of a Vehicular Public-Key Infrastructure (VPKI) for establishing trust among network participants. Employing a VPKI in Vehicular Communication (VC) guarantees the integrity and authenticity of Cooperative Awareness Messages (CAMs) and Decentralized Environmental Notification Messages (DENMs). It also offers a level of privacy for vehicles, as the VPKI provides them with a set of non-linkable short-lived certificates, called pseudonyms, which vehicles use to sign outgoing messages while communicating with other vehicles (Vehicle-to-Vehicle, V2V) or with Roadside Units (RSUs) (Vehicle-to-Infrastructure, V2I). Each vehicle uses a pseudonym for its lifetime, and by switching to a not-previously-used pseudonym, it continues to communicate without risking its privacy. The literature suggests two approaches for providing vehicles with pseudonyms. One is the so-called pre-loading mode, which pre-loads vehicles with all the pseudonyms they need and increases the cost of revocation in case they are compromised. The other is the on-demand mode, in which the VPKI offers pseudonyms in real time at a vehicle's request, e.g., at the start of each trip. Choosing the on-demand approach imposes a considerable burden of availability and resilience on VPKI services. In this work, we confront the problems of a large-scale deployment of an on-demand VPKI that is resilient, highly available, and dynamically scalable. To achieve that, by leveraging state-of-the-art tools and design paradigms, we have enhanced a VPKI system to ensure that it is capable of meeting enterprise-grade Service Level Agreements (SLAs) in terms of availability, and it can also be cost-efficient, as services can dynamically scale out in the presence of high load or scale in when facing less demand. That has been made possible by re-architecting and refactoring an existing VPKI into a cloud-native solution deployed as microservices. Towards having a reliable architecture based on distributed microservices, one of the key challenges to deal with is Sybil-based misbehavior. By exploiting Sybil-based attacks in a VPKI, malicious vehicles can gain an influential advantage in the system, e.g., one can affect the traffic to serve its own will. Therefore, preventing the occurrence of Sybil attacks is paramount. On the other hand, traditional approaches to stopping them often come with a performance penalty, as they verify requests against a relational database, which becomes an operational bottleneck. We propose a solution to address Sybil-based attacks, utilizing Redis, an in-memory data store, without considerably compromising the system's efficiency and performance. Running our VPKI services on the Google Cloud Platform (GCP) shows that a large-scale deployment of VPKI as a Service (VPKIaaS) can be done efficiently. Conducting various stress tests against the services indicates that the VPKIaaS is capable of serving real-world traffic. We have tested VPKIaaS under synthetically generated normal traffic flow and flash crowd scenarios. It has been shown that VPKIaaS managed to issue 100 pseudonyms per request, submitted by 1,000 vehicles, where vehicles kept asking for a new set of pseudonyms every 1 to 5 seconds. Each vehicle has been served in less than 77 milliseconds. We also demonstrate that, under a flash crowd situation with 50,000 vehicles, VPKIaaS dynamically scales out and takes ≈192 milliseconds to serve 100 pseudonyms per request submitted by vehicles.
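The Redis-based Sybil mitigation idea can be sketched in a few lines: an atomic SET with NX and EX accepts at most one pseudonym request per ticket within a time window. This is an illustration of the concept using redis-py, not the VPKIaaS implementation; the key scheme and window length are assumptions.

```python
# Sketch of using an in-memory store to curb Sybil-like behaviour: allow at most
# one outstanding pseudonym request per (anonymized) ticket within a time window,
# enforced with an atomic Redis SET NX EX. Key names and the window are assumed.
import redis

WINDOW_SECONDS = 60          # assumed minimum spacing between pseudonym requests

def allow_request(r: redis.Redis, ticket_id: str) -> bool:
    # SET key value NX EX succeeds only if the key does not already exist,
    # so a second request inside the window is rejected atomically.
    return bool(r.set(f"psnym-req:{ticket_id}", "1", nx=True, ex=WINDOW_SECONDS))

if __name__ == "__main__":
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    print(allow_request(r, "ticket-123"))   # True on the first request
    print(allow_request(r, "ticket-123"))   # False within the 60 s window
```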

Books on the topic "Orchestration cloud native"

1

Gutierrez, Felipe. Spring Cloud Data Flow: Native Cloud Orchestration Services for Microservice Applications on Modern Runtimes. Apress, 2019.

2

Ruecker, Bernd. Practical Process Automation: Orchestration and Integration in Microservices and Cloud Native Architectures. O'Reilly Media, 2021.


Book chapters on the topic "Orchestration cloud native"

1

Quenum, José Ghislain, and Gervasius Ishuuwa. "Abstracting Containerisation and Orchestration for Cloud-Native Applications." In Lecture Notes in Computer Science, 164–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59635-4_12.

2

Christian, Juan, Luis Paulino, and Alan Oliveira de Sá. "A Low-Cost and Cloud Native Solution for Security Orchestration, Automation, and Response." In Information Security Practice and Experience, 115–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21280-2_7.

3

Janjua, Hafiza Kanwal, Ignacio de Miguel, Ramón J. Durán Barroso, Maryam Masoumi, Soheil Hosseini, Juan Carlos Aguado, Noemí Merayo, and Evaristo J. Abril. "A Framework for Next Generation Cloud-Native SDN Cognitive Resource Orchestrator for IoTs (NG2CRO)." In Distributed Computing and Artificial Intelligence, Special Sessions I, 20th International Conference, 399–407. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38318-2_39.

4

Pusapati, Sai Samin Varma. "Containerization." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 98–122. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-1682-5.ch006.

Abstract:
This chapter delves into the synergy between containerization and serverless computing, pivotal for advancing cloud-native application deployment. It outlines the architectural foundations and benefits of each paradigm, emphasizing their combined impact on scalability, efficiency, and agility. The discussion progresses to technical integrations, focusing on container orchestration and serverless platforms, enhancing management and deployment. Addressing challenges like security and operational complexity, it highlights strategies for navigating these issues. Real-world examples illustrate the practical application across sectors, showcasing the integration's capacity to meet diverse computational needs. This convergence is posited as a significant driver for future cloud-native innovations, offering a glimpse into evolving trends and the potential reshaping of software development landscapes. The exploration underscores the critical role of this amalgamation in optimizing resource utilization and simplifying cloud infrastructure complexities.

Conference papers on the topic "Orchestration cloud native"

1

Luong, Duc-Hung, Huu-Trung Thieu, Abdelkader Outtagarts, and Yacine Ghamri-Doudane. "Predictive Autoscaling Orchestration for Cloud-native Telecom Microservices." In 2018 IEEE 5G World Forum (5GWF). IEEE, 2018. http://dx.doi.org/10.1109/5gwf.2018.8516950.

2

Di Stefano, Alessandro, Antonella Di Stefano, and Giovanni Morana. "Ananke: A framework for Cloud-Native Applications smart orchestration." In 2020 IEEE 29th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). IEEE, 2020. http://dx.doi.org/10.1109/wetice49692.2020.00024.

3

Chowdhury, Rasel, Chamseddine Talhi, Hakima Ould-Slimane, and Azzam Mourad. "A Framework for Automated Monitoring and Orchestration of Cloud-Native applications." In 2020 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2020. http://dx.doi.org/10.1109/isncc49221.2020.9297238.

4

Barrachina-Munoz, Sergio, Jorge Baranda, Miquel Payaro, and Josep Mangues-Bafalluy. "Intent-Based Orchestration for Application Relocation in a 5G Cloud-native Platform." In 2022 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). IEEE, 2022. http://dx.doi.org/10.1109/nfv-sdn56302.2022.9974703.

5

Di Stefano, Alessandro, Antonella Di Stefano, Giovanni Morana, and Daniele Zito. "Prometheus and AIOps for the orchestration of Cloud-native applications in Ananke." In 2021 IEEE 30th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). IEEE, 2021. http://dx.doi.org/10.1109/wetice53228.2021.00017.

6

Grings, Felipe Hauschild, Lucas Baleeiro Dominato Silveira, Kleber Vieira Cardoso, Sand Correa, Lucio Rene Prade, and Cristiano Bonato Both. "Full dynamic orchestration in 5G core network slicing over a cloud-native platform." In GLOBECOM 2022 - 2022 IEEE Global Communications Conference. IEEE, 2022. http://dx.doi.org/10.1109/globecom48099.2022.10000663.

7

Milroy, Daniel J., Claudia Misale, Giorgis Georgakoudis, Tonia Elengikal, Abhik Sarkar, Maurizio Drocco, Tapasya Patki, et al. "One Step Closer to Converged Computing: Achieving Scalability with Cloud-Native HPC." In 2022 IEEE/ACM 4th International Workshop on Containers and New Orchestration Paradigms for Isolated Environments in HPC (CANOPIE-HPC). IEEE, 2022. http://dx.doi.org/10.1109/canopie-hpc56864.2022.00011.

8

Chiu, Yi-Sung, Li-Hsing Yen, Tse-Han Wang, and Chien-Chao Tseng. "A Cloud Native Management and Orchestration Framework for 5G End-to-End Network Slicing." In 2022 IEEE International Conference on Service-Oriented System Engineering (SOSE). IEEE, 2022. http://dx.doi.org/10.1109/sose55356.2022.00014.

9

Harb, A., P. Amoudruz, S. Roy, H. Hayek, M. Hurtado, and R. Torrens. "Automated Development Concept Generation—Digital Transformation of Field Development Planning." In ADIPEC. SPE, 2023. http://dx.doi.org/10.2118/216332-ms.

Abstract:
Effective field development planning is critical to maximizing the value of opportunities. It can be a complex process due to factors like time, resource constraints, and siloed domain applications. To overcome these challenges, effective dataflow orchestration is required between the subsurface, well, facility, and economics domains to ensure coherency and auditability. This paper presents the possible digital transformation of field development planning using smart algorithms and automated dataflow orchestration to rapidly generate and evaluate numerous optimized development concepts. Extensive research has resulted in smart algorithms that work back-to-back and can automatically generate field layouts for different development concepts at an early stage of field development. These algorithms include the blackhole operator for specifying optimal reservoir targets using quality maps, an industry-standard trajectory engine for designing drillable wells, an evolutionary algorithm for placing facilities, and the A* algorithm for laying out the shortest pipeline route while avoiding surface no-go zones. These algorithms now function on a cloud-native digital technology that can automate the evaluation of field development plans by orchestrating data flow between subsurface, well, facility, and economics. In the traditional waterfall approach to field development planning, it takes several months for each discipline to prepare data, and many iterations between disciplines are needed to ensure feasibility for different development concepts. In the early phase of development, teams often do not have enough time to screen a wide range of development concepts, and the opportunities presented for sanction carry limited options and are often not sanctioned or are recycled. The results demonstrate the solution's exceptional ability to identify multiple reservoir targets while seamlessly adhering to a predefined injection scheme. Moreover, this solution connects these targets to optimally placed facilities using drillable, optimized trajectories and then links the facilities with pipelines that are positioned in the most efficient manner possible. To showcase our solution, we utilized the synthetic Olympus field, which was developed by TNO for the EAGE Olympus Challenge. The transformational digital solution presented here enables coherent data sharing across all disciplines and empowers multi-disciplinary teams to screen a larger number of development scenarios faster, leading to more efficient decision-making in field development planning. The modular and flexible solution enables refinement of the field development plan throughout the project maturation journey with different trade-offs between accuracy and efficiency. The presented innovative solution breaks down organizational silos between the reservoir, wells, and facility domains by integrating discipline-specific considerations upfront and allowing them to perform detailed analysis on coherent and consistent data. Having these smart algorithms on a cloud-native data flow orchestrator allows for the fast production of multiple technically feasible development concepts. The solution has been successfully validated by multiple field development teams across the globe.
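Since the abstract leans on the A* algorithm for routing pipelines around surface no-go zones, a compact grid-based A* sketch is shown below. The grid, unit step costs, and 4-neighbour moves are simplifications for illustration and do not reflect the paper's actual implementation.

```python
# A compact A* sketch on a grid: find the shortest route from start to goal
# while avoiding cells marked 1 (no-go zones). Manhattan distance is the heuristic.
import heapq

def a_star(grid, start, goal):
    """grid: 2D list where 1 marks a no-go cell; start/goal: (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    def h(p):                                   # admissible Manhattan heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None                                 # no feasible route

if __name__ == "__main__":
    no_go = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 0]]
    print(a_star(no_go, (0, 0), (3, 3)))
```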
10

Chakraborty, Jayjeet, Carlos Maltzahn, and Ivo Jimenez. "Enabling Seamless Execution of Computational and Data Science Workflows on HPC and Cloud with the Popper Container-native Automation Engine." In 2020 2nd International Workshop on Containers and New Orchestration Paradigms for Isolated Environments in HPC (CANOPIE-HPC). IEEE, 2020. http://dx.doi.org/10.1109/canopiehpc51917.2020.00007.
