Dissertations on the topic "Placement de Serveurs Edge"


Consult the top 22 dissertations on the topic "Placement de Serveurs Edge".


1

Khamari, Sabri. "Architectures et protocoles pour les véhicules connectés." Electronic Thesis or Diss., Bordeaux, 2023. http://www.theses.fr/2023BORD0483.

Abstract:
The advent of Intelligent Transportation Systems (ITS) marks a paradigm shift in the approach to managing and optimizing transportation infrastructures. Rooted in the integration of state-of-the-art communication technologies, ITS encompass a variety of applications aimed at enhancing road safety, traffic efficiency, and driving comfort. However, the execution of these increasingly computation-intensive applications raises inherent challenges related to latency, data processing, and service continuity. The emergence of Edge Computing stands as a transformative advancement poised to redefine the efficacy of vehicular applications in ITS. Contrasting with conventional cloud computing paradigms, which frequently encounter latency issues attributable to the remote nature of data processing, Edge Computing decentralizes computational tasks to be nearer to the point of data generation. This proximity drastically diminishes latency, optimizes data aggregation, and enhances overall resource utilization. Consequently, Edge Computing is uniquely positioned to address and potentially mitigate the limitations that have previously impeded the optimization of ITS functionalities. Nevertheless, the incorporation of Edge Computing into vehicular networks unveils a unique array of complexities, ranging from the strategic placement of edge servers and efficient data offloading techniques to the implementation of robust service migration protocols and the safeguarding of privacy and security measures. This thesis investigates the problems of edge server placement and service migration in vehicular networks. Our contributions in this thesis are threefold. First, we introduce "ESIAS," an Edge-based Safety Intersection Assistance System, specifically designed to improve intersection safety. The system aims to proactively distribute precise warning messages to drivers, mitigating the risk of common intersection-related accidents.
Second, we tackle the challenge of optimal Edge server placement in vehicular networks, employing integer linear programming to find the most effective solutions. The methodology considers latency, cost, and server capacity under real-world traffic conditions. The proposed framework aims not only to minimize the overall deployment cost but also to balance the computational workloads among Edge servers, all while maintaining latency within acceptable thresholds. Finally, we delve into the complex issue of service migration in MEC-enabled vehicular networks, addressing the quandary of maintaining quality of service (QoS) while minimizing migration costs. As vehicles move through different regions, maintaining service quality requires strategic service migration, which poses challenges in terms of timing and location. To resolve this problem, we formulate it as a Markov Decision Process (MDP) and apply deep reinforcement learning techniques, specifically Deep Q-Networks (DQN), to discover optimal migration strategies tailored to each service's requirements. The resulting framework ensures seamless service continuity even under high-mobility constraints, achieving an optimal balance between latency and migration costs.
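The placement formulation sketched in this abstract (minimize deployment cost subject to latency thresholds and server capacity) can be illustrated with a tiny brute-force search. All numbers below are invented toy values, and the greedy zone-to-server assignment stands in for the thesis's integer linear program, so this is only a sketch of the objective, not the actual method:

```python
from itertools import combinations

# Hypothetical toy instance: three candidate server sites, four demand zones.
# site_cost[s]: deployment cost of site s; latency[z][s]: latency from zone z
# to site s; CAPACITY: max zones one server may serve. All values invented.
site_cost = [4, 3, 5]
latency = [
    [2, 9, 7],   # zone 0 -> sites 0..2
    [8, 3, 4],   # zone 1
    [6, 2, 9],   # zone 2
    [3, 8, 2],   # zone 3
]
LATENCY_MAX = 5
CAPACITY = 2

def best_placement():
    """Exhaustively try every subset of sites; assign each zone greedily to
    the lowest-latency open site with spare capacity; keep the cheapest
    feasible placement (the cost-minimization objective of the ILP)."""
    best = None
    n_sites = len(site_cost)
    for r in range(1, n_sites + 1):
        for opened in combinations(range(n_sites), r):
            load = {s: 0 for s in opened}
            feasible = True
            for lats in latency:
                choices = sorted((lats[s], s) for s in opened
                                 if lats[s] <= LATENCY_MAX and load[s] < CAPACITY)
                if not choices:
                    feasible = False
                    break
                load[choices[0][1]] += 1
            if feasible:
                cost = sum(site_cost[s] for s in opened)
                if best is None or cost < best[0]:
                    best = (cost, opened)
    return best
```

On this instance the cheapest feasible choice is to open sites 0 and 1 at total cost 7; a real ILP solver would additionally optimize the zone assignment jointly rather than greedily.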
2

Santoyo-González, Alejandro. "Edge computing infrastructure for 5G networks: a placement optimization solution." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669552.

Abstract:
This thesis focuses on how to optimize the placement of the Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem and, 2) a framework called EdgeON providing a practical tool for real-life deployment optimization. In more detail, Edge Computing has grown into a key solution to 5G latency, reliability and scalability requirements. By bringing computing, storage and networking resources to the edge of the network, delay-sensitive applications, location-aware systems and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end-user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (i.e., mainly due to the number of nodes to be deployed), current backhaul network capabilities and non-technical placement limitations. Common approaches to the placement of edge nodes are based on: Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes and Facility Location Problem variations, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks. They fail to place resources at the network edge for 5G ultra-dense networking environments in a network-aware manner. 
This doctoral thesis focuses on rigorously defining the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON aimed at reducing the overall expenses of deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The developed framework implements several placement and optimization strategies, thoroughly assessing their suitability for solving the network-aware ENPP. The core of the framework is an in-house developed heuristic called Hybrid Simulated Annealing (HSA), seeking to address the high complexity of the ENPP while avoiding the non-convergent behavior of other traditional heuristics when applied to similar problems. The findings of this work validate our approach to solving the network-aware ENPP, the effectiveness of the proposed heuristic, and the overall applicability of EdgeON. Thorough performance evaluations were conducted on the core placement solutions implemented, revealing the superiority of HSA when compared to widely used heuristics and common edge placement approaches (e.g., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two main case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective and fully extensible tool that can be used by operators seeking to optimize the placement of computing, storage and networking infrastructure in the users' vicinity. Therefore, our main contributions not only set strong foundations towards a cost-effective deployment and operation of an Edge Computing network, but directly impact the feasibility of upcoming 5G services/use cases and the extensive existing research regarding the placement of services and even network service chains at the edge.
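The simulated-annealing core of a heuristic like HSA can be sketched as follows. The instance (users and candidate sites on a line), the cooling schedule, and the single-site swap move are all invented for illustration; the hybrid aspects that distinguish HSA from plain simulated annealing are not modeled here:

```python
import math
import random

random.seed(7)

# Toy instance (invented): user positions and candidate node sites on a line.
users = [1, 2, 8, 9, 15]
sites = [0, 2, 5, 9, 14, 16]
K = 2  # number of edge nodes to open

def cost(opened):
    # total distance from each user to its nearest open node
    return sum(min(abs(u - sites[s]) for s in opened) for u in users)

def anneal(steps=2000, t0=5.0, alpha=0.995):
    """Plain simulated annealing: random single-site swap moves, Metropolis
    acceptance of worse solutions, geometric cooling."""
    cur = random.sample(range(len(sites)), K)
    cur_c = cost(cur)
    best_c = cur_c
    t = t0
    for _ in range(steps):
        cand = list(cur)
        # swap one open site for a currently closed one
        cand[random.randrange(K)] = random.choice(
            [i for i in range(len(sites)) if i not in cur])
        cand_c = cost(cand)
        if cand_c <= cur_c or random.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            best_c = min(best_c, cur_c)
        t *= alpha
    return best_c
```

Early high-temperature steps accept worse swaps and escape local minima; as the temperature decays the search settles into a low-cost placement.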
3

Fernandez-Rubiera, Francisco Jose. "Clitics at the edge: clitic placement in Western Iberian Romance languages." Connect to Electronic Thesis (CONTENTdm), 2009. http://worldcat.org/oclc/450998700/viewonline.

4

Schäfer, Dominik [Verfasser], and Christian [Akademischer Betreuer] Becker. "Elastic computation placement in edge-based environments / Dominik Schäfer ; Betreuer: Christian Becker." Mannheim : Universitätsbibliothek Mannheim, 2019. http://d-nb.info/1181692911/34.

5

Schäfer, Dominik [Verfasser], and Christian [Akademischer Betreuer] Becker. "Elastic computation placement in edge-based environments / Dominik Schäfer ; Betreuer: Christian Becker." Mannheim : Universitätsbibliothek Mannheim, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:180-madoc-488322.

6

Poltronieri, Filippo. "Value-of-Information Middlewares for Fog and Edge Computing." Doctoral thesis, Università degli studi di Ferrara, 2021. http://hdl.handle.net/11392/2488252.

Abstract:
Fog and Edge Computing aim to deliver low-latency, immersive, and powerful services by processing information close to both devices and users. This is well suited to IoT applications in the Smart City, where IoT gateways, Cloudlets, Base Stations, and other computational nodes can process (part of) the data generated by the multitude of IoT sensors directly at the edge of the network. However, implementing Fog and Edge Computing is challenging because it requires dealing with a (limited number of) constrained devices, dynamic service requirements, and heterogeneous network conditions. Unlike the Cloud, where computational resources are assumed to be unlimited, Fog and Edge services must be able to adapt to scarce and constrained resources and to cope with the deluge of IoT data. To facilitate the adoption of Fog and Edge Computing, this thesis proposes innovative middlewares capable of providing comprehensive solutions to the highly dynamic characteristics of these environments. These middlewares provide functions to allocate and distribute Fog and Edge services among the available computational devices, monitor the status of the environment, and promptly modify their configuration. To deal with the IoT data deluge, this thesis investigates the criterion of Value-of-Information (VoI). Originally conceived as an extension of Shannon's Information Theory for decision-making science, VoI has been studied as an information management tool to select and prioritize information for processing and dissemination. For this purpose, this thesis proposes information management policies allowing the definition of service components: composable software modules that can be chained to create larger and more complex services.
In addition, the middlewares presented in this thesis leverage VoI to select only the most valuable pieces of information for processing and dissemination and to scale the computational workload in an automated fashion. This makes it possible to reduce the computational and network load and to propose innovative methodologies for optimizing the available resources. The research efforts presented in this thesis are the result of collaboration with international institutes and of a research period at the Florida Institute for Human and Machine Cognition (IHMC), FL, USA.
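The VoI idea of selecting and prioritizing information for processing and dissemination can be sketched roughly as follows. The message types, the importance weights, and the age-decay rule are invented for illustration and are not the thesis's actual policies:

```python
# Illustrative Value-of-Information scoring: rank IoT messages by an invented
# type importance decayed by age, then keep only what fits the budget.
TYPE_WEIGHT = {"accident": 1.0, "congestion": 0.6, "telemetry": 0.2}

def voi(msg, now):
    # value decays as the message ages; stale data is worth less
    age = now - msg["t"]
    return TYPE_WEIGHT.get(msg["kind"], 0.1) / (1.0 + age)

def select(messages, now, budget):
    """Process/disseminate only the `budget` most valuable messages."""
    ranked = sorted(messages, key=lambda m: voi(m, now), reverse=True)
    return [m["id"] for m in ranked[:budget]]

msgs = [
    {"id": 1, "kind": "telemetry", "t": 9},
    {"id": 2, "kind": "accident", "t": 5},
    {"id": 3, "kind": "congestion", "t": 9},
]
```

With a budget of two at time 10, the fresh congestion report and the older but critical accident report are kept, while routine telemetry is dropped; this is the load-shedding effect the middlewares exploit.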
7

Santi, Nina. "Prédiction des besoins pour la gestion de serveurs mobiles en périphérie." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILB050.

Abstract:
Multi-access Edge Computing is an emerging paradigm within the Internet of Things (IoT) that complements Cloud computing. This paradigm proposes the implementation of computing servers located close to users, reducing the pressure on and costs of local network infrastructure. This proximity to users gives rise to new use cases, such as the deployment of mobile servers mounted on drones or robots, offering a cheaper, more energy-efficient, and more flexible alternative to fixed infrastructures for one-off or exceptional events. However, this approach also raises new challenges for the deployment and allocation of resources in time and space, which are often battery-dependent. In this thesis, we propose predictive tools and algorithms for making decisions about the allocation of fixed and mobile resources, in terms of both time and space, within dynamic environments. We provide rich and reproducible datasets that reflect the heterogeneity inherent in IoT applications while exhibiting a high rate of contention and interference. To achieve this, we use the FIT IoT-LAB, an open testbed dedicated to the IoT, and we make all the code openly available. In addition, we have developed a tool for generating IoT traces in an automated and reproducible way. We use these datasets to train machine learning algorithms based on regression techniques and evaluate their ability to predict the throughput of IoT applications. In a similar approach, we have also trained and analysed a temporal-transformer neural network to predict several Quality of Service (QoS) metrics. To take the mobility of resources into account, we generate IoT traces integrating mobile access points embedded on TurtleBot robots. These traces, which incorporate mobility, are used to validate and test a federated learning framework based on parsimonious temporal transformers.
Finally, we propose a decentralised algorithm for predicting human population density by region, based on a particle filter. We test and validate this algorithm using the Webots simulator, in the context of servers embedded on robots, and the ns-3 simulator for the networking part.
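A minimal bootstrap particle filter conveys the flavor of the density-prediction step. The regions, the motion model, and the observation model below are all invented, and the decentralized, multi-robot aspect of the thesis's algorithm is not modeled:

```python
import random

random.seed(0)

# Toy bootstrap particle filter over discrete regions: each particle guesses
# which region the crowd is concentrated in; observations are noisy reports.
REGIONS = 4
N = 500  # number of particles

def step(particles, observed_region):
    # predict: the crowd drifts to a neighboring region with small probability
    moved = [min(REGIONS - 1, max(0, p + random.choice((-1, 0, 0, 1))))
             for p in particles]
    # update: particles agreeing with the noisy observation get higher weight
    weights = [0.8 if p == observed_region else 0.2 / (REGIONS - 1)
               for p in moved]
    total = sum(weights)
    # resample proportionally to weight (bootstrap resampling)
    return random.choices(moved, [w / total for w in weights], k=N)

particles = [random.randrange(REGIONS) for _ in range(N)]
for _ in range(10):
    particles = step(particles, observed_region=2)

# point estimate: the most populated particle region
estimate = max(set(particles), key=particles.count)
```

After a few observation steps the particle cloud concentrates on the reported region; in the decentralized setting each robot would run such a filter locally and exchange summaries over the network.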
8

Abderrahim, Mohamed. "Conception d’un système de supervision programmable et reconfigurable pour une infrastructure informatique et réseau répartie." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0119/document.

Abstract:
The Cloud offers compute, storage, and network as services. To reduce the cost of this offer, operators tend to rely on centralized and massive infrastructures. However, such a configuration hinders the satisfaction of the latency and bandwidth requirements of new-generation applications. The Edge aims to meet this challenge by relying on massively distributed resources. To satisfy the operators and users of the Edge, management services similar to the ones that made the success of the Cloud should be designed. In this thesis, we focus on the monitoring service. We design a framework to establish a holistic monitoring service. This framework determines a peer-to-peer deployment architecture for the observation, processing, and exposition of measurements. It verifies that this architecture satisfies the functional and quality-of-service constraints of the users. For this purpose, it relies on a description of user requirements and a description of the Edge infrastructure; the expression of these two elements can be unified with two languages offered by the framework. The deployment architecture is determined with the aim of minimizing the compute and network footprint of the monitoring service. To this end, the monitoring functions are mutualized as much as possible among the different users. Our tests showed the relevance of our proposal, reducing the monitoring footprint by up to 28% for compute and 24% for network.
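The footprint reduction comes from mutualizing monitoring functions across users: a probe requested by several users is instantiated once. A toy sketch with invented probe requests shows the effect (the names and the (metric, node) representation are assumptions, not the framework's API):

```python
# Hypothetical probe requests: each user asks for (metric, node) observations.
requests = {
    "user_a": {("cpu", "node1"), ("net", "node2")},
    "user_b": {("cpu", "node1"), ("ram", "node1")},
    "user_c": {("net", "node2"), ("cpu", "node3")},
}

def deployed_probes(reqs):
    """Mutualized deployment: the union of all requested probes, so each
    distinct probe is instantiated exactly once."""
    probes = set()
    for wanted in reqs.values():
        probes |= wanted
    return probes

naive = sum(len(w) for w in requests.values())   # one probe per request
shared = len(deployed_probes(requests))          # mutualized deployment
```

Here six requested probes collapse to four deployed ones, a one-third reduction; the framework applies the same principle to the whole observation/processing/exposition pipeline under the users' QoS constraints.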
9

Sténson, Carl. "Object Placement in AR without Occluding Artifacts in Reality." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211112.

Abstract:
Placement of virtual objects in Augmented Reality is often done without regard for the artifacts in the physical environment. This thesis investigates how placement can be done with those artifacts taken into account, considering only the placement of wall-mounted objects. Through the development of two prototypes that use edges detected in RGB images in combination with volumetric properties to identify the artifacts, areas are suggested for the placement of virtual objects. The first prototype analyzes each triangle in the model, which is computationally intensive and localizes the physical artifacts with low precision. The second prototype analyzes the detected RGB edges in world space, which proved to detect the features with precise localization and reduced calculation time. The second prototype manages this in a controlled setting; a more challenging environment would possibly pose other issues. In conclusion, placement based on volumetric and image-edge information from the environment is possible and could enhance the experience of being in a mixed reality, where physical and virtual objects coexist in the same world.
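A minimal gradient-based edge map conveys the first step of such a pipeline, finding image edges that mark physical artifacts. The tiny grayscale grid and the threshold are toy values, and the projection of edges into world space against a volumetric model is not modeled:

```python
# Toy grayscale image: a flat wall region (0) next to a bright artifact (9).
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def edge_map(img, thresh=4):
    """Mark pixels where the forward horizontal/vertical intensity gradient
    exceeds a threshold; such pixels outline artifacts to avoid occluding."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]  # horizontal gradient
            gy = img[min(y + 1, h - 1)][x] - img[y][x]  # vertical gradient
            if abs(gx) + abs(gy) > thresh:
                edges.add((x, y))
    return edges
```

The detected column of edge pixels separates the free wall area (where a virtual object could be placed) from the region occupied by the physical artifact.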
10

Shinde, Swapnil Sadashiv. "Radio Access Network Function Placement Algorithms in an Edge Computing Enabled C-RAN with Heterogeneous Slices Demands." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20063/.

Abstract:
Network slicing provides a scalable and flexible solution for resource allocation with performance guarantees and isolation from other services in the 5G architecture. 5G has to handle several active use cases with different requirements, and a single solution satisfying all of these extreme requirements would call for an overspecified, high-cost network architecture. Furthermore, to fulfill its diverse requirements, each service requires different resources from the radio access network (RAN), the edge, and the central offices of the 5G architecture, and hence different deployment options. Network function virtualization allows RAN functions to be allocated to different nodes. URLLC services require function placement near the RAN to fulfill their low-latency requirement, while eMBB services require cloud access; an arbitrary allocation of network functions to services is therefore not possible. We aim to develop algorithms that find service-aware placements for RAN functions in a multi-tenant environment with heterogeneous demands. We consider three generic classes of slices: eMBB, URLLC, and mMTC. Every slice is characterized by specific requirements, while the nodes and links are resource-constrained. The function placement problem consists in minimizing the overall cost of allocating the different functions to the different nodes, organized in layers, while respecting the requirements of the given slices. Specifically, we propose three algorithms based on the normalized preference associated with each slice on the different layers of the RAN architecture. The maximum preference algorithm places the functions at the most preferred position defined in the preference matrix, whereas the proposed modified preference algorithm provides solutions by keeping track of the availability of computational resources and the latency requirements of the different services. We also use an exhaustive search method to solve the function allocation problem.
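The maximum preference idea, extended with a capacity-aware fallback in the spirit of the modified algorithm, can be sketched like this. The layer names, capacities, and preference orders are invented toy values, not the thesis's preference matrix:

```python
# Invented RAN layers and per-layer function capacity.
layers = {"RU": 2, "DU": 2, "CU": 3}

# Invented per-slice-class layer preference order (most preferred first).
preference = {
    "URLLC": ["RU", "DU", "CU"],   # latency-critical: stay close to the radio
    "eMBB":  ["CU", "DU", "RU"],   # throughput-oriented: prefer the cloud side
    "mMTC":  ["DU", "CU", "RU"],
}

def place(slices):
    """Place each slice's function at its most preferred layer that still has
    capacity, falling back down the preference list when a layer is full."""
    cap = dict(layers)
    placement = {}
    for sid, kind in slices:
        for layer in preference[kind]:
            if cap[layer] > 0:
                cap[layer] -= 1
                placement[sid] = layer
                break
    return placement

demo = place([("u1", "URLLC"), ("u2", "URLLC"), ("u3", "URLLC"), ("m1", "eMBB")])
```

With two RU slots, the third URLLC slice spills over to the DU, its second preference, while the eMBB slice lands at the CU, illustrating how capacity tracking changes the pure maximum-preference answer.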
11

Singh, Navjot. "Planning of Mobile Edge Computing Resources in 5G Based on Uplink Energy Efficiency." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38444.

Abstract:
An increasing number of devices demanding low latency and high-speed data transmission requires that computation resources be closer to users. The emerging Mobile Edge Computing (MEC) technology aims to bring the advantages of cloud computing, namely computation, storage, and networking capabilities, into close proximity to the user. MEC servers are also integrated with cloud servers, which gives them the flexibility of reaching vast computational power whenever needed. In this thesis, leveraging the idea of Mobile Edge Computing, we propose algorithms for cost-efficient and energy-efficient placement of mobile edge nodes. We focus on uplink energy efficiency, which is essential for certain applications, including augmented reality and connected vehicles, and which also extends the battery life of user equipment, a benefit for all applications. The experimental results show that our proposed schemes significantly reduce the uplink energy of devices and minimize the number of edge nodes required in the network.
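A hedged sketch of the placement idea: users attach to their nearest edge node, uplink energy is modeled as squared distance (a crude path-loss proxy), and node sites are chosen greedily. The coordinates, the energy model, and the greedy strategy are illustrative assumptions, not the thesis's actual cost model or algorithm:

```python
# Toy uplink-energy-aware edge node placement (all data invented).

def total_energy(users, nodes):
    # Each user transmits to its nearest edge node; energy grows with d^2.
    return sum(min((ux - nx) ** 2 + (uy - ny) ** 2 for nx, ny in nodes)
               for ux, uy in users)

def greedy_placement(users, sites, k):
    # Open k sites one at a time, each time picking the site that most
    # reduces the total uplink energy of all users.
    chosen = []
    for _ in range(k):
        best = min((s for s in sites if s not in chosen),
                   key=lambda s: total_energy(users, chosen + [s]))
        chosen.append(best)
    return chosen

users = [(0, 0), (10, 0)]
sites = [(0, 0), (10, 0), (5, 0)]
print(greedy_placement(users, sites, 1))  # → [(5, 0)]
```

With a single node to place, the midpoint site wins because it halves the worst-case transmission distance for both users.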
12

Awais, Hussein Sani. "Bipartite edge coloring approach for designing parallel hardware interleaver architecture." Phd thesis, Université de Bretagne Sud, 2012. http://tel.archives-ouvertes.fr/tel-00790045.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Nowadays, Turbo and LDPC codes are two families of codes that are extensively used in current communication standards due to their excellent error correction capabilities. However, hardware design of coders and decoders for high-data-rate applications is not a straightforward process. For high data rates, decoders are implemented on parallel architectures in which more than one processing element decodes the received data. To achieve high memory bandwidth, the main memory is divided into smaller memory banks so that multiple data values can be fetched from or stored to memory concurrently. However, due to the scrambling caused by the interleaving law, this parallelization results in communication or memory access conflicts, which occur when multiple data values are fetched from or stored in the same memory bank at the same time. This is called the memory conflict problem. It increases the latency of memory accesses due to the presence of conflict management mechanisms in the communication network and unfortunately decreases system throughput while increasing system cost. To tackle memory conflicts, three types of approaches are used in the literature. In the first type, algorithms to construct conflict-free interleaving laws are proposed. The main motivation of these techniques is to construct "architecture friendly" codes with good error correction capabilities in order to reduce hardware cost; however, the architectural constraints applied during code design may impede the error correction performance of the codes. In the second type, design innovations are introduced to tackle the memory conflict problem: flexible and scalable interconnection networks with sufficient path diversity and additional storage elements handle the conflicts. However, flexible networks require large silicon area and cost.
In addition, the delay introduced by conflict management mechanisms degrades the maximum throughput and makes these approaches inefficient for high-data-rate and low-power applications. The third type of approaches deals with algorithms that assign data in memory in such a manner that all the processing elements can access the memory banks concurrently without any conflict. The benefit of this technique is that the decoder implementation does not need any specific network or extra storage elements to support a particular interleaving law. However, until now no algorithm existed that could solve the memory mapping problem for both turbo and LDPC codes in polynomial time. The work presented in this thesis belongs to this last type of approaches. We propose several methods based on graph theory to solve the memory mapping problem for both turbo and LDPC codes. Different formal models based on bipartite and tripartite graphs, along with different algorithms to color the edges of these graphs, are detailed; the complete path we followed before being able to solve the mapping problem in polynomial time is thus presented. For the first two approaches, the mapping problem is modeled as a bipartite graph, and each graph is divided into sub-graphs in order to facilitate the coloring of the edges. The first approach deals with Turbo codes and uses transportation problem algorithms to divide and color the bipartite graph; it can find a memory mapping that supports a particular interconnection network if the interleaving rule of the application allows it. The second approach solves the memory mapping problem for LDPC codes using two different complex algorithms to partition the graph and color each partition. In the third algorithm, each time instance and each edge is divided into two parts to model the problem as a tripartite graph, which is then partitioned into sub-graphs by an algorithm based on a divide-and-conquer strategy.
Then each subgraph is colored individually using a simple algorithm to find a conflict-free memory mapping for both Turbo and LDPC codes. Finally, in the last approach, the tripartite graph is transformed into a bipartite graph on which a coloring algorithm based on the Euler partitioning principle is applied to find a memory mapping in polynomial time. Several experiments have been performed using interleaving laws from different communication standards to show the interest of the proposed mapping methods. All the experiments were done using a software tool we developed: it first finds a conflict-free memory mapping and then generates VHDL files that can be synthesized to produce the complete architecture, i.e., the network, the memory banks, and the associated controllers. In the first experiment, the bit interleaver used in the Ultra Wide Band (UWB) standard is considered, and a barrel shifter is used as a constraint to design the interconnection network; results are compared, in terms of area and runtime, with state-of-the-art solutions. In the second experiment, a turbo interleaving law defined in the High Speed Packet Access (HSPA) standard is used as a test case. Memory mapping problems were solved and the associated architectures generated for this interleaving law, which is not conflict-free for any degree of parallelism used in turbo decoding; results are again compared with state-of-the-art techniques in terms of runtime and area. The third experiment focuses on LDPC codes. First, the last algorithm we proposed is used to find a conflict-free memory mapping for the non-binary LDPC codes defined in the DaVinci Codes FP7 ICT European project. Then, conflict-free memory mappings were also found for partially parallel architectures of the LDPC codes used in WiMAX and WiFi for different levels of parallelism. It is shown that the proposed algorithm can map data into memory banks for any structured codes used in current and future standards with partially parallel architectures.
In the last experiment, thanks to the proposed approach, we explored the design space of the Quadratic Permutation Polynomial (QPP) interleaver used in the 3GPP-LTE standard. The QPP interleaver is maximum contention-free, i.e., for every window size W that is a factor of the interleaver length N, the interleaver is contention-free. However, when trellis and recursive-unit parallelism are also included in each SISO, the QPP interleaver is no longer contention-free. Results highlight tradeoffs between area and performance for different radixes, degrees of parallelism, and schedulings (replica versus butterfly).
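The memory conflict notion that motivates all of these mapping algorithms can be stated compactly: a mapping of data to banks is conflict-free iff, at every time instant, the data accessed in parallel by the processing elements land in distinct banks. A small hedged sketch of this check (the data ids, access schedules, and mappings below are invented examples, not a standard's interleaver):

```python
# Check whether a data-to-bank mapping is conflict-free for a given
# parallel access schedule.

def is_conflict_free(access_schedule, bank_of):
    """access_schedule: list of tuples of data ids accessed concurrently;
    bank_of: dict mapping data id -> memory bank."""
    for parallel_accesses in access_schedule:
        banks = [bank_of[d] for d in parallel_accesses]
        if len(set(banks)) != len(banks):  # two PEs hit the same bank
            return False
    return True

# Natural-order accesses plus interleaved-order accesses of 4 data values.
natural = [(0, 1), (2, 3)]
interleaved = [(0, 2), (1, 3)]
bad = {0: 0, 1: 1, 2: 0, 3: 1}   # conflicts in interleaved order
good = {0: 0, 1: 1, 2: 1, 3: 0}  # conflict-free in both orders
```

Finding a `good` mapping for every schedule induced by a standard's interleaving law, in polynomial time, is exactly the problem the edge coloring constructions above solve.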
13

Tanfener, Ozan. "Design and Evaluation of a Microservice Testing Tool for Edge Computing Environments." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287171.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Edge computing can provide decentralized computation and storage resources with low latency and high bandwidth. It is a promising infrastructure for hosting services with stringent latency requirements for customers, for example autonomous driving, cloud gaming, and telesurgery. Because of the structural complexity associated with edge computing applications, research topics like service placement gain great importance. To provide a realistic and efficient general environment for evaluating service placement solutions, usable for analyzing the latency requirements of services at scale, a new testing tool for the mobile edge cloud is designed and implemented in this thesis. The proposed tool is implemented as a cloud-native application and allows deploying applications in an edge computing infrastructure consisting of Kubernetes and Istio; it can easily be scaled up to several hundred microservices, and deployment into the edge clusters is automated. With the help of the designed tool, two different microservice placement algorithms are evaluated in an emulated edge computing environment based on Federated Kubernetes. The results show how the performance of the algorithms varies as the parameters of the environment and of the applications instantiated and deployed by the tool are changed. For example, increasing the request rate by 200% can increase the delay by 100% for different algorithms. Moreover, complicating the mobile network can improve the latency performance by up to 20%, depending on the microservice placement algorithm.
14

Valicov, Petru. "Problèmes de placement, de coloration et d’identification." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14549/document.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In this thesis we study three theoretical computer science problems, namely the orthogonal packing problem (OPP for short), strong edge-colouring, and identifying codes. OPP consists in testing whether a set of rectangular items can be packed in a rectangular container without overlapping and without exceeding the borders of this container; an additional constraint is that rotation of the items is not allowed. The problem is NP-hard even when reduced to packing squares in a square. We propose an exact algorithm for solving OPP efficiently, using the characterization of the problem by interval graphs proposed by Fekete and Schepers. For this purpose we use a compact representation of interval graphs: MPQ-trees. We show experimental results of our approach by comparing them to the results of other algorithms known in the literature, and we observe promising gains. The study of strong edge-colouring and identifying codes focuses on the structural and computational aspects of these combinatorial problems. In the case of strong edge-colouring we are interested in the families of planar graphs and subcubic graphs. We show optimal upper bounds for the strong chromatic index of subcubic graphs as a function of the maximum average degree. We also show that every planar subcubic graph without induced cycles of length 4 and 5 can be strong edge-coloured with at most nine colours. Finally, we confirm the difficulty of the problem by showing that it remains NP-complete even in some restricted classes of planar subcubic graphs. For the subject of identifying codes, we propose a characterization of non-trivial graphs having maximum identifying code number ID, that is n − 1, where n is the number of vertices. We study the case of line graphs and prove lower and upper bounds for the ID parameter in this class.
Finally, we investigate the complexity of the corresponding decision problem and show the existence of a linear-time algorithm for computing the ID of the line graph L(G) when G has tree-width bounded by a constant. On the other hand, we show that the identifying code problem is NP-complete in various subclasses of planar graphs.
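To illustrate the strong edge-colouring constraint discussed above, here is a naive greedy first-fit sketch: two edges must receive different colours when they share a vertex or are joined by a third edge (distance at most 1). This is only an illustration of the definition, not the thesis's bounds or proofs:

```python
# Greedy strong edge colouring of a graph given as a list of edges (tuples).

def strong_edge_colouring(edges):
    edge_set = {frozenset(e) for e in edges}

    def conflicts(e, f):
        # Edges sharing a vertex, or linked by a third edge, may not
        # share a colour in a strong edge colouring.
        if set(e) & set(f):
            return True
        return any(frozenset((u, v)) in edge_set for u in e for v in f)

    colour = {}
    for e in edges:
        used = {colour[f] for f in colour if conflicts(e, f)}
        c = 0
        while c in used:  # smallest colour not used by a conflicting edge
            c += 1
        colour[e] = c
    return colour

# A path on 4 vertices needs 3 colours; every pair of edges of a 5-cycle
# conflicts, so the 5-cycle needs 5.
p4 = strong_edge_colouring([(0, 1), (1, 2), (2, 3)])
c5 = strong_edge_colouring([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

The 5-cycle example shows why the strong chromatic index can far exceed the ordinary chromatic index even on subcubic graphs.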
15

Khatiwada, Raju. "Speciation of phosphorus in reduced tillage systems: placement and source effect." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/9973.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Master of Science
Department of Agronomy
Ganga M. Hettiarachchi
Phosphorus (P) management in reduced tillage systems has been a great concern for farmers. Conclusive results on the benefits of deep banding of P fertilizers for plant yield in reduced tillage systems are still lacking. Knowledge of the dominant solid P species present in soil following application of P fertilizers, linked to potential P availability, would help us design better P management practices. The objectives of this research were to understand the influence of placement (broadcast vs. deep-band or deep-placed P), fertilizer source (granular vs. liquid P), and time on the reaction products of P. Greenhouse and field experiments were conducted to study P behavior in soils. Soil pH, resin-extractable P, total P, and P speciation were determined at different distances from the point of fertilizer application at 5 weeks (greenhouse and field) and 6 months (field) after P application (at a rate of 75 kg/ha) to a soil system under long-term reduced tillage. X-ray absorption near edge structure spectroscopy was used to speciate the reaction products of fertilizer P in the soil. The reaction products formed upon addition of P fertilizers to soils were found to be influenced by soil pH, P placement method, and P source. Acidic pH (below ~5.8) tended to favor the formation of Fe-P and Al-P like forms, whereas slightly acidic to near-neutral pH soils favored the formation of Ca-P like forms. Scanning electron microscopy with energy dispersive X-ray analysis of applied fertilizer granules at 5 weeks showed enrichment of Al, Fe, and Ca in the granule, indicating that these elements begin to react with applied P even before the granules dissolve completely. The availability of applied P fertilizer was found to be enhanced by deep banding as compared to surface broadcasting or deep placement.
Deep-banded liquid MAP was found in more adsorbed-P-like forms and resulted in greater resin-extractable P both at 5 weeks and at 6 months after application. Deep banding of liquid MAP would most likely be both an agronomically and environmentally efficient solution for no-till farmers.
16

Sigwele, Tshiamo. "Energy Efficient Cloud Computing Based Radio Access Networks in 5G. Design and evaluation of an energy aware 5G cloud radio access networks framework using base station sleeping, cloud computing based workload consolidation and mobile edge computing." Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/16062.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Fifth Generation (5G) cellular networks will experience a thousand-fold increase in data traffic, with over 100 billion connected devices by 2020. In order to support this skyrocketing traffic demand, smaller base stations (BSs) are deployed to increase capacity. However, more BSs increase energy consumption, which contributes to operational expenditure (OPEX) and CO2 emissions. Also, the plethora of 5G applications running on mobile devices causes a significant amount of energy consumption in the devices themselves. This thesis presents a novel framework for energy efficiency in 5G cloud radio access networks (C-RAN) that leverages cloud computing technology. Energy efficiency is achieved in three ways: (i) at the radio side of the H-C-RAN (Heterogeneous C-RAN), a dynamic BS switching-off algorithm is proposed to minimise energy consumption while maintaining Quality of Service (QoS); (ii) in the BS cloud, baseband workload consolidation schemes based on simulated annealing and genetic algorithms are proposed to minimise energy consumption in the cloud, where advanced fuzzy-based admission control with pre-emption is also implemented to improve QoS and resource utilisation; (iii) at the mobile device side, Mobile Edge Computing (MEC) is used, where compute-intensive tasks from the mobile device are executed on the MEC server in the cloud. The simulation results show that the proposed framework effectively reduces energy consumption by up to 48% within the RAN and 57% in the mobile devices, and improves network energy efficiency by a factor of 10, network throughput by a factor of 2.7, and resource utilisation by 54% while maintaining QoS.
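A rough sketch of the simulated-annealing consolidation idea in (ii): baseband workloads are packed onto as few cloud servers as possible without exceeding capacity, so that idle servers can be switched off. The cost weights, cooling schedule, and instance data are illustrative assumptions, not the thesis's scheme:

```python
import math
import random

def cost(workloads, assign, capacity, servers):
    load = [0.0] * servers
    for w, s in zip(workloads, assign):
        load[s] += w
    over = sum(max(0.0, l - capacity) for l in load)   # capacity violations
    active = sum(1 for l in load if l > 0)             # powered-on servers
    return active + 10.0 * over                        # penalize violations

def consolidate(workloads, capacity, servers, iters=3000, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(servers) for _ in workloads]
    cur = cost(workloads, assign, capacity, servers)
    best, best_cost = list(assign), cur
    temp = 1.0
    for _ in range(iters):
        i = rng.randrange(len(workloads))
        old = assign[i]
        assign[i] = rng.randrange(servers)  # move one workload at random
        new = cost(workloads, assign, capacity, servers)
        # Accept improvements always, worse moves with decreasing probability.
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best_cost:
                best, best_cost = list(assign), cur
        else:
            assign[i] = old
        temp = max(temp * 0.995, 1e-6)
    return best, best_cost

# Four half-capacity workloads fit on two servers, letting two sleep.
best, best_cost = consolidate([0.5] * 4, 1.0, 4)
```

Fewer active servers directly translates into energy saved, which is the objective the thesis's consolidation schemes optimize at much larger scale.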
17

Tena, Frezewd Lemma. "Energy-Efficient Key/Value Store." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-228586.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Energy conservation is a major concern in today's data centers, the data processing factories of the 21st century, where large and complex software systems such as distributed data management stores run and serve billions of users. The two main drivers of this concern are the environmental impact data centers have due to their waste heat, and the expensive cost they incur due to their enormous energy demand. Among the many subsystems of a data center, the storage system is one of the main sources of energy consumption, and among the many types of storage systems, key/value stores are widely used. In this work, I investigate energy saving techniques that enable a consistent-hashing-based key/value store to save energy during low-activity times and whenever there is an opportunity to reuse the waste heat of data centers.
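A toy sketch of the consistent-hashing property that makes such stores amenable to powering nodes down during low-activity periods: removing a node relocates only the keys that node owned, leaving all other key placements untouched. The node and key names are illustrative:

```python
import hashlib
from bisect import bisect_right

def _point(name):
    # Deterministic position on the hash ring.
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self._rebuild()

    def _rebuild(self):
        self.points = sorted((_point(n), n) for n in self.nodes)

    def owner(self, key):
        # The clockwise successor of the key's hash owns the key.
        hashes = [h for h, _ in self.points]
        i = bisect_right(hashes, _point(key)) % len(self.points)
        return self.points[i][1]

    def power_down(self, node):
        # Only keys owned by `node` move (to its clockwise successor).
        self.nodes.discard(node)
        self._rebuild()
```

Because the migration cost of a power-down is proportional only to the departing node's share of the keys, shrinking the cluster at night is cheap; a production design would add virtual nodes and replication on top of this.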
18

Mehamel, Sarra. "New intelligent caching and mobility strategies for MEC /ICN based architectures." Electronic Thesis or Diss., Paris, CNAM, 2020. http://www.theses.fr/2020CNAM1284.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
The mobile edge computing (MEC) concept proposes to bring computing and storage resources into close proximity to the end user by placing these resources at the network edge. The motivation is to alleviate the mobile core and to reduce latency for mobile users. MEC servers are candidates to host mobile applications and serve web content. Edge caching is one of the most emerging technologies recognized as a content retrieval solution at the edge of the network. It has also been considered an enabling technology of mobile edge computing, presenting an interesting opportunity to perform caching services. In particular, MEC servers are implemented directly at the base stations, which enables edge caching and ensures deployment in close proximity to mobile users. However, the integration of servers into the mobile edge environment (base stations) complicates the energy saving issue, because the power consumed by MEC servers is costly, especially when the load changes dynamically over time. Furthermore, the demands of users with mobile devices keep rising, introducing the challenge of handling such content requests despite the limited cache size. Thus, it is necessary and crucial for caching mechanisms to consider context-aware factors, whereas most existing studies focus on cache allocation, content popularity, and cache design. In this thesis, we present a novel energy-efficient fuzzy caching strategy for edge devices that takes into consideration four influencing features of the mobile environment, while introducing a hardware implementation using a Field-Programmable Gate Array (FPGA) to cut the overall energy requirements. Performing an adequate caching strategy on MEC servers opens the possibility of employing artificial intelligence (AI) techniques and machine learning at mobile network edges.
Exploiting users' context information intelligently makes it possible to design an intelligent context-aware mobile edge cache. Context awareness enables the cache to be aware of its environment, while intelligence enables each cache to make the right decisions, selecting the appropriate contents to cache so as to maximize caching performance. Inspired by the success of reinforcement learning (RL), which uses agents to deal with decision-making problems, we extended our fuzzy caching system into a modified reinforcement learning model. The proposed framework aims to maximize the cache hit rate and requires awareness of both web conditions and end users. The modified RL differs from other RL algorithms in its learning rate, which uses stochastic gradient descent, besides taking advantage of learning from the optimal caching decision obtained from the fuzzy rules.
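As a hedged illustration of the context-aware scoring idea, the following sketch combines a few normalized context features into a single utility and evicts the lowest-scoring content. The feature set, linear weights, and item data are invented stand-ins for the thesis's fuzzy membership functions and rules:

```python
# Toy context-aware cache eviction: higher score = more worth keeping.

def score(freq, recency, size, mobility):
    # Normalized features in [0, 1]; smaller items and less mobile users
    # make a content more attractive to keep. Weights are assumptions.
    return 0.4 * freq + 0.3 * recency + 0.2 * (1 - size) + 0.1 * (1 - mobility)

def evict(cache):
    """cache: content name -> feature dict; returns the victim to evict."""
    return min(cache, key=lambda name: score(**cache[name]))

cache = {
    "video_A": {"freq": 0.9, "recency": 0.8, "size": 0.5, "mobility": 0.2},
    "video_B": {"freq": 0.1, "recency": 0.2, "size": 0.9, "mobility": 0.7},
}
print(evict(cache))  # → video_B
```

A fuzzy controller would replace the fixed linear weights with membership functions and inference rules, and the RL extension would adapt the decision online from observed hit rates.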
19

Tahraoui, Mohammed Amin. "Coloring, packing and embedding of graphs." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995041.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In this thesis, we investigate some problems in graph theory, namely the graph coloring problem, the graph packing problem, and tree pattern matching for XML query processing. The common point between these problems is that they use labeled graphs. In the first part, we study a new coloring parameter of graphs called the gap vertex-distinguishing edge coloring. It consists in an edge coloring of a graph G which induces a vertex-distinguishing labeling of G, such that the label of each vertex is given by the difference between the highest and the lowest colors of its adjacent edges. The minimum number of colors required for a gap vertex-distinguishing edge coloring of G is called the gap chromatic number of G and is denoted by gap(G). We compute this parameter for a large set of graphs G of order n and even prove that gap(G) ∈ {n − 1, n, n + 1}. In the second part, we focus on graph packing problems, an area of graph theory that has grown significantly over the past several years. However, the majority of existing work focuses on unlabeled graphs. In this thesis, we introduce for the first time the packing problem for vertex-labeled graphs; roughly speaking, it consists of graph packing which preserves the labels of the vertices. We study the corresponding optimization parameter on several classes of graphs, as well as finding general bounds and characterizations. The last part deals with the query processing of a core subset of XML query languages: XML twig queries. An XML twig query, represented as a small query tree, is essentially a complex selection on the structure of an XML document. Matching a twig query means finding all the occurrences of the query tree embedded in the XML data tree. Many holistic twig join algorithms have been proposed to match XML twig patterns. Most of these algorithms find twig pattern matchings in two steps. In the first one, the query tree is decomposed into smaller pieces, and solutions against these pieces are found.
In the second step, all of these partial solutions are joined together to generate the final solutions. In this part, we propose a novel holistic twig join algorithm, called TwigStack++, which features two main improvements in the decomposition and matching phases. The proposed solutions are shown to be efficient and scalable, and should be helpful for future research on efficient query processing in large XML databases.
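A small sketch of the twig matching notion described above: a query tree matches at a data node if the labels agree and each query child can be embedded in a distinct data child, found here by naive backtracking. Real holistic twig joins such as TwigStack work on streams of labeled nodes with descendant axes; this recursive toy version only illustrates the embedding being computed:

```python
# Trees are (label, [children]) pairs.

def matches(query, data):
    """True if `query` is embedded at the root of `data`."""
    qlabel, qkids = query
    dlabel, dkids = data
    if qlabel != dlabel:
        return False

    def assign(i, used):
        # Match query child i to some unused data child; backtrack on failure.
        if i == len(qkids):
            return True
        for j, dchild in enumerate(dkids):
            if j not in used and matches(qkids[i], dchild):
                if assign(i + 1, used | {j}):
                    return True
        return False

    return assign(0, frozenset())

query = ("a", [("b", []), ("c", [])])
print(matches(query, ("a", [("c", []), ("b", []), ("d", [])])))  # → True
```

The two-step decompose-then-join strategy in the abstract exists precisely because this kind of direct backtracking does not scale to large XML documents.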
20

Gupta, Devyani. "Optimal Placement and Traffic Steering of VNFs and Edge Servers using Column Generation in Data Center Networks." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5974.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Telecom Service Providers (TSPs) were traditionally dependent on physical devices to provide end-to-end communication. The services provided were of high quality and stable, but low in agility and hardware-dependent. As the demand for quick deployment of diverse services increased, TSPs needed much higher flexibility and agility; this is how Network Functions Virtualization (NFV) came into being. NFV is the concept of replacing dedicated hardware with commercial-off-the-shelf (COTS) servers, decoupling the physical hardware from the function running on it. A network function can be dispatched as an instance of software, called a Virtual Network Function (VNF). Thus, a service can be decomposed into several VNFs that run on industry-standard physical servers. The optimal placement of these VNFs is a key question for TSPs seeking to reduce the overall cost. We first study a network operations problem where we optimally deploy VNFs in Service Chains (SCs) such that the maximum consumed bandwidth across network links is minimized. The network parameters (link bandwidths, compute capacities of nodes, link propagation delays, etc.) and the number of SCs are known a priori. The problem formulated is a large Mixed-Integer Linear Program (MILP). We use the Column Generation (CG) technique to solve the problem optimally, and through various examples we show the power of CG. We compare our results with recent heuristics and demonstrate that our approach performs better, as it gives exact optimal solutions quickly. Second, we extend this setup to the online case, where the number of SCs is not known a priori and SC requests are served as they arrive. A new SC is implemented on the "residual network" while the previously deployed SCs are left undisturbed. The problem formulated is again a large MILP, and we use CG as the solution technique. The results show the percentage improvement in the solutions over those obtained using heuristics.
Next, we study a network design problem in an Edge Computing Environment. A general communication network has a single Data Center (DC) in its "core," which serves as a gateway to the Internet. For delay-constrained services of the kind needed by online gaming, this model does not suffice because the propagation delay between the subscriber and the DC may be too high. This requires some servers to be located close to the network edge. Thus, the question of the optimal placement of these edge servers arises. To lower the network design cost, it is also essential to ensure good traffic routing, so that aggregate traffic on each link remains as low as possible. This enables lower capacity assignment on each link and thereby minimizes design cost. We study a novel joint optimization problem of network design cost minimization. Edge server placement cost and link capacity assignment cost constitute the total cost. The problem formulated is a large MILP, and we again use CG to solve it. We compare our results with many heuristics and show the improvement in design cost. Finally, we extend the above work by relaxing some assumptions and constraints. Unlike previously, we consider servers with different capacities. Also, a server can serve more than one request depending on its core capabilities. We also consider the split-and-merge of an SC through various paths in the network. The formulated problem can also provide the minimum number of servers to be used. Again, the formulation is a large MILP, and CG is used to solve it exactly.
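The joint design objective (site opening cost plus link-capacity cost) can be sketched on a made-up one-dimensional instance; the positions and costs below are illustrative only, and brute force over site subsets replaces the CG-based MILP solve.

```python
from itertools import combinations

# Hypothetical instance: users on a line at positions 0..5, each sending
# one unit of traffic; candidate edge-server sites at positions 1 and 4.
# Design cost = opening cost per open site + per-unit-distance capacity
# cost, mirroring the joint placement + capacity-assignment objective.
USERS = [0, 1, 2, 3, 4, 5]
SITES = {1: 3.0, 4: 3.0}  # site -> opening cost (assumed)
CAP_COST = 1.0            # capacity cost per unit traffic per hop

def design_cost(open_sites):
    """Opening cost plus routing/capacity cost when each user is
    served by its nearest open edge server."""
    if not open_sites:
        return float("inf")
    opening = sum(SITES[s] for s in open_sites)
    routing = sum(CAP_COST * min(abs(u - s) for s in open_sites)
                  for u in USERS)
    return opening + routing

# Brute force over all non-empty site subsets (stands in for the MILP/CG).
subsets = [c for r in (1, 2) for c in combinations(SITES, r)]
best = min(subsets, key=design_cost)
```

Opening both sites costs more up front (6.0 versus 3.0) but cuts the capacity cost from 11.0 to 4.0, so the joint optimum opens both for a total cost of 10.0 rather than 14.0.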
21

Lyu, Kun-Yu, and 呂昆育. "Edge Cloud Placement with Traffic Load Alleviation for Cloud Computing." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/k3kk38.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 107 (2018)
In the modern world, the Internet is ubiquitous, and people rely on it for many kinds of services. A user sends a request message to a cloud server on the Internet, and the server provides the service, so traffic is generated between the user and the server whenever the user is served. If a large volume of traffic is generated over a long distance between the user and the cloud server, not only is the data transmission time longer, but the flow is also more likely to compete with other flows for bandwidth, which further increases the traffic load. A solution is to deploy edge clouds in the network and serve user requests there. Because deploying an edge cloud costs less than setting up a cloud server, more edge clouds can be deployed under the same budget, making the deployment flexible. Since an edge cloud is close to its users, the traffic generated by its services has little impact on the rest of the network, and the overall traffic load is alleviated. The locations of the edge clouds determine both the data transmission time and the network traffic load.

This thesis investigates the problem of selecting the locations of edge clouds in a cloud network and allocating user requests to appropriate edge clouds so as to share the workload of the cloud server while minimizing the traffic that would otherwise flow between users and the cloud server. The problem is formally defined as an integer programming problem. The thesis proposes novel schemes to select edge cloud locations and allocate user requests to them; requests are preferentially processed locally at a nearby edge cloud so that the traffic load in the network is minimized. Simulation results show that the proposed scheme outperforms other schemes from the recent literature and is effective in reducing the traffic load in the network.
22

Yu, Peng. "Fast and accurate lithography simulation and optical proximity correction for nanometer design for manufacturing." 2009. http://hdl.handle.net/2152/6664.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As semiconductor manufacturing feature sizes scale into the nanometer dimension, circuit layout printability is significantly reduced due to the fundamental limits of lithography systems. This dissertation studies related research topics in lithography simulation and optical proximity correction. A recursive integration method is used to reduce the errors in the transmission cross coefficient (TCC), an important factor in the Hopkins equation for aerial image simulation. The runtime is further reduced, without increasing the errors, by exploiting the fact that the TCC is usually computed on uniform grids. A flexible software framework, ELIAS, is also provided, which can compute the TCC for various lithography settings, such as different illuminations. Optimal coherent approximations (OCAs), which are used for full-chip image simulation, can be sped up by considering the symmetry properties of lithography systems. The runtime improvement can be doubled without loss of accuracy, and it is applicable to vectorial imaging models as well. Even when the symmetry properties do not hold strictly, the new method can be generalized so that it remains faster than the old one.

Besides new numerical image simulation algorithms, variations in lithography systems are also modeled. A Variational LIthography Model (VLIM) and its calibration method are provided. The Variational Edge Placement Error (V-EPE) metric, an improvement on the original Edge Placement Error (EPE) metric, is introduced based on this model. A true process-variation-aware OPC (PV-OPC) framework is proposed using the V-EPE metric. Due to the analytical nature of VLIM, our PV-OPC is only about 2-3× slower than conventional OPC, yet it explicitly accounts for the two main sources of process variations (exposure dose and focus variations) during OPC.
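The EPE notion, and its sensitivity to dose variation, can be illustrated with a minimal one-dimensional threshold-resist sketch (the Gaussian blur width, threshold, and dose values are assumed for illustration, not calibrated to any real process or to VLIM):

```python
import math

# Toy 1D model: a mask edge at x = 0 prints as a Gaussian-blurred
# intensity step; the resist edge sits where the dose-scaled intensity
# crosses the print threshold. EPE = printed edge - target edge.
SIGMA = 20.0      # blur width in nm (assumed)
THRESHOLD = 0.5   # print threshold (assumed)

def intensity(x):
    """Aerial image of an ideal edge at x = 0, blurred by a Gaussian."""
    return 0.5 * (1.0 + math.erf(x / (SIGMA * math.sqrt(2))))

def printed_edge(dose=1.0, lo=-100.0, hi=100.0):
    """Bisection for the x where dose * intensity(x) = THRESHOLD."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if dose * intensity(mid) < THRESHOLD:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

epe_nominal = printed_edge(1.0)   # target edge is at 0, so this is the EPE
epe_overdose = printed_edge(1.1)  # one sample of a variational EPE sweep
```

At nominal dose the printed edge lands on the target, so the EPE is zero; a 10% overdose shifts the edge outward by a couple of nanometers in this toy setting, and a V-EPE-style metric summarizes such shifts across the dose/focus variation range.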
The EPE metric has been used in conventional OPC algorithms, but it requires many intensity simulations, which take the majority of the OPC runtime. By making the OPC algorithm intensity-based (IB-OPC) rather than EPE-based, we can reduce the number of intensity simulations and hence the OPC runtime. An efficient intensity-derivative computation method is also provided, which makes the new algorithm converge faster than the EPE-based algorithm. Our experimental results show a runtime speedup of more than 10× with result quality comparable to EPE-based OPC.

The OPC algorithms mentioned above are vector-based; another category is pixel-based. Vector-based algorithms generally generate less complex masks than pixel-based ones, but pixel-based algorithms produce much better results in terms of contour fidelity. Observing that vector-based algorithms preserve mask shape topologies, which leads to lower mask complexity, we combine the strengths of both categories: the topology-invariant property and the pixel-based mask representation. A topological invariant pixel-based OPC (TIP-OPC) algorithm is proposed, with lithography-friendly mask topological invariant operations and an efficient Fast Fourier Transform (FFT) based cost-function sensitivity computation. The experimental results show that TIP-OPC achieves much better post-OPC contours than vector-based OPC while maintaining the mask shape topologies.
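An EPE-driven correction loop of the kind conventional OPC performs can be sketched as a toy fixed-point iteration on a simple one-dimensional blur-plus-threshold model (a hypothetical stand-in for illustration, not the dissertation's algorithm; all parameters are assumed):

```python
import math

# Toy shift-invariant model: a mask edge at position m prints, under a
# Gaussian blur and a dose scaling, at m + shift(dose). The OPC loop
# measures the EPE and moves the mask edge against it each iteration.
SIGMA, THRESHOLD = 20.0, 0.5  # blur (nm) and print threshold (assumed)

def printed(m, dose):
    """Printed edge position: solve dose * I(x - m) = THRESHOLD, where
    I is the blurred step; erf is inverted by bisection."""
    z = 2.0 * THRESHOLD / dose - 1.0
    lo, hi = -6.0, 6.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < z:
            lo = mid
        else:
            hi = mid
    return m + SIGMA * math.sqrt(2) * 0.5 * (lo + hi)

def opc(target=0.0, dose=1.1, gain=0.7, iters=25):
    """EPE-based correction: bias the mask edge until the printed
    edge lands on the target."""
    m = target  # start from the design edge
    for _ in range(iters):
        epe = printed(m, dose) - target  # measure EPE via simulation
        m -= gain * epe                  # move the mask edge against it
    return m
```

Because this toy model is shift-invariant, the loop contracts geometrically: `opc()` returns a mask edge biased outward by a few nanometers, after which the simulated printed edge coincides with the target. Each iteration costs one intensity simulation here, which is exactly the expense the intensity-based reformulation aims to cut.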
