Academic literature on the topic 'Edge Computation Offloading'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Edge Computation Offloading.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Edge Computation Offloading":

1

Patel, Minal Parimalbhai, and Sanjay Chaudhary. "Edge Computing." International Journal of Fog Computing 3, no. 1 (January 2020): 64–74. http://dx.doi.org/10.4018/ijfc.2020010104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this article, the researchers discuss computation offloading and the importance of Docker-based containers, a form of lightweight virtualization, in improving the performance of edge computing systems. They also propose techniques and a case study for computation offloading and lightweight virtualization.
2

Man, Junfeng, Longqian Zhao, Bowen Xu, Cheng Peng, Junjie Jiang, and Yi Liu. "Computation Offloading Method for Large-Scale Factory Access in Edge-Edge Collaboration Mode." Journal of Database Management 34, no. 1 (February 24, 2023): 1–29. http://dx.doi.org/10.4018/jdm.318451.

Abstract:
Large-scale manufacturing enterprises have complex business processes in their production workshops, and traditional computation offloading methods cannot adapt to the edge-edge collaborative business model, which leads to load imbalance. To address this problem, a computation offloading algorithm based on the edge-edge collaboration mode for large-scale factory access is proposed, called the edge and edge collaborative computation offloading (EECCO) algorithm. First, the method partitions the directed acyclic graphs (DAGs) across edge servers and terminal industrial equipment; then it updates the tasks using a synchronization policy based on set theory to effectively improve accuracy; finally, it achieves load balancing through processor allocation. The experimental results show that the method shortens processing time by improving computational resource utilization and employs a heterogeneous distributed system to achieve high computing performance when processing large-scale task sets.
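The final step the abstract mentions, load balancing through processor allocation, can be illustrated with a minimal sketch. This is not the paper's EECCO algorithm: the greedy least-loaded placement, the DAG representation, and the cost values are assumptions for illustration only.

```python
# Illustrative sketch (not EECCO): place DAG tasks, visited in
# topological order, onto the processor whose current load is smallest.

def topological_order(dag):
    """dag: {task: [successor, ...]}. Returns tasks in dependency order."""
    indegree = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            indegree[s] += 1
    ready = [t for t, d in indegree.items() if d == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for s in dag[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return order

def balance_load(dag, cost, num_procs):
    """Assign each task to the least-loaded processor; return (assignment, loads)."""
    loads = [0.0] * num_procs
    assignment = {}
    for task in topological_order(dag):
        p = min(range(num_procs), key=lambda i: loads[i])
        assignment[task] = p
        loads[p] += cost[task]
    return assignment, loads
```

A real scheduler would also account for inter-processor communication along DAG edges, which this sketch ignores.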
3

Xiao, Yong, Ling Wei, Junhao Feng, and Wang En. "Two-tier end-edge collaborative computation offloading for edge computing." Journal of Computational Methods in Sciences and Engineering 22, no. 2 (March 28, 2022): 677–88. http://dx.doi.org/10.3233/jcm-215923.

Abstract:
Edge computing has emerged to meet the ever-increasing computation demands of delay-sensitive Internet of Things (IoT) applications. However, the computing capability of an edge device, whether a computing-enabled end user or an edge server, is insufficient to support the massive number of tasks generated by IoT applications. In this paper, we propose a two-tier end-edge collaborative computation offloading policy to support as many computation-intensive tasks as possible while keeping the edge computing system strongly stable. We formulate the two-tier end-edge collaborative offloading problem with the objective of minimizing the task processing and offloading cost, subject to the stability of the queue lengths of end users and edge servers. We analyze the Lyapunov drift-plus-penalty properties of the problem. Then, a cost-aware computation offloading (CACO) algorithm is proposed to find optimal two-tier offloading decisions that minimize the cost while keeping the edge computing system stable. Our simulation results show that the proposed CACO outperforms the benchmark algorithms, especially under various numbers of end users and edge servers.
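The Lyapunov drift-plus-penalty idea behind the abstract can be shown in a toy per-slot decision rule. This is an illustrative sketch, not the paper's CACO algorithm: the action set, cost values, and single-queue model are assumptions.

```python
# Drift-plus-penalty sketch: each slot, pick the action minimizing
# V * cost + Q * (arrival - service). A large backlog Q pushes the
# decision toward the faster (costlier) action, keeping queues stable;
# V weights cost savings against stability.

def drift_plus_penalty_step(queue, arrival, actions, v_param):
    """actions: list of (cost, service_rate). Returns (best_action_index, new_queue)."""
    def objective(action):
        cost, service = action
        return v_param * cost + queue * (arrival - service)
    best = min(range(len(actions)), key=lambda i: objective(actions[i]))
    _, service = actions[best]
    new_queue = max(queue + arrival - service, 0.0)
    return best, new_queue

# Example actions: index 0 = process locally (cheap, slow),
#                  index 1 = offload to the edge (costly, fast).
```

Run over successive slots, the rule stays local while the queue is short and switches to offloading once the backlog term dominates.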
4

Shan, Nanliang, Yu Li, and Xiaolong Cui. "A Multilevel Optimization Framework for Computation Offloading in Mobile Edge Computing." Mathematical Problems in Engineering 2020 (June 27, 2020): 1–17. http://dx.doi.org/10.1155/2020/4124791.

Abstract:
Mobile edge computing is a new computing paradigm that extends cloud computing capabilities to the edge network, supporting computation-intensive applications such as face recognition, natural language processing, and augmented reality. Notably, computation offloading is a key technology of mobile edge computing that improves mobile devices’ performance and users’ experience by offloading local tasks to edge servers. In this paper, the problem of computation offloading under multiuser, multiserver, and multichannel scenarios is studied, and a computation offloading framework is proposed that considers the quality of service (QoS) of users, server resources, and channel interference. This framework consists of three levels. (1) In the offloading decision stage, the offloading decision is made based on how beneficial computation offloading is, measured by comparing the total cost of local computing on the mobile device with that of the edge-side server. (2) In the edge server selection stage, candidates are comprehensively evaluated and selected for computation offloading by a multiobjective decision based on the covariance-based Analytic Hierarchy Process (Cov-AHP). (3) In the channel selection stage, a multiuser, multichannel distributed computation offloading strategy based on a potential game is proposed that considers the influence of channel interference on the users' overall overhead. The corresponding multiuser, multichannel task scheduling algorithm is designed to maximize the overall benefit by finding the Nash equilibrium of the potential game. Extensive experimental results show that the proposed framework can greatly increase the number of users who benefit from computation offloading and effectively reduce energy consumption and time delay.
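The first stage above, deciding whether offloading is beneficial by comparing the local and edge-side total costs, can be sketched minimally. All parameter names and the weighted time-energy cost model are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a beneficial-offloading test: offload when the cost
# of remote execution (transmission + edge processing) is below the
# cost of local execution. Costs mix time and device energy.

def local_cost(cycles, local_freq, energy_per_cycle, w_time=0.5, w_energy=0.5):
    time = cycles / local_freq
    energy = cycles * energy_per_cycle
    return w_time * time + w_energy * energy

def offload_cost(data_bits, uplink_rate, cycles, edge_freq, tx_power,
                 w_time=0.5, w_energy=0.5):
    tx_time = data_bits / uplink_rate
    time = tx_time + cycles / edge_freq
    energy = tx_power * tx_time  # the device only spends energy transmitting
    return w_time * time + w_energy * energy

def should_offload(task, device, edge):
    return offload_cost(task["data_bits"], edge["uplink_rate"], task["cycles"],
                        edge["freq"], device["tx_power"]) < local_cost(
                        task["cycles"], device["freq"], device["energy_per_cycle"])
```

Intuitively, compute-heavy tasks with small inputs favor offloading, while small tasks with large inputs do not, which matches the "beneficial degree" idea in the abstract.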
5

Lin, Li, Xiaofei Liao, Hai Jin, and Peng Li. "Computation Offloading Toward Edge Computing." Proceedings of the IEEE 107, no. 8 (August 2019): 1584–607. http://dx.doi.org/10.1109/jproc.2019.2922285.

6

Maftah, Sara, Mohamed El Ghmary, Hamid El Bouabidi, Mohamed Amnai, and Ali Ouacha. "Intelligent task processing using mobile edge computing: processing time optimization." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (March 1, 2024): 143. http://dx.doi.org/10.11591/ijai.v13.i1.pp143-152.

Abstract:
The fast-paced development of the Internet of Things has led to an increase in computing resource services that must provide fast response times, a requirement cloud infrastructures cannot satisfy due to network latency. Mobile edge computing has therefore become an emerging model, extending computation and storage resources to the network edge to meet the demands of delay-sensitive and compute-heavy applications. Computation offloading is the main feature that lets edge computing surpass existing cloud-based technologies and break limitations such as computing capability, battery life, and storage availability; it enhances the durability and performance of mobile devices by offloading locally intensive computation tasks to edge servers. However, offloading does not always guarantee the optimal solution, so the offloading decision is a crucial step that depends on many parameters that should be taken into consideration. In this paper, we use a simulator to compare a two-tier edge orchestrator architecture with the results obtained by implementing a system model that aims to minimize a task's processing time, constrained by the time delay and the device's limited computational resources and usage, based on a modified version.
7

Li, Feixiang, Chao Fang, Mingzhe Liu, Ning Li, and Tian Sun. "Intelligent Computation Offloading Mechanism with Content Cache in Mobile Edge Computing." Electronics 12, no. 5 (March 6, 2023): 1254. http://dx.doi.org/10.3390/electronics12051254.

Abstract:
Edge computing is a promising technology that enables user equipment to share computing resources for task offloading. Given the characteristics of computing resources, how to design an efficient computation incentive mechanism with appropriate task offloading and resource allocation strategies is an essential issue. In this manuscript, we propose an intelligent computation offloading mechanism with content caching in mobile edge computing. First, we provide the network framework for computation offloading with content caching in mobile edge computing. Then, by deriving necessary and sufficient conditions, an optimal contract is designed to obtain a joint task offloading, resource allocation, and computation strategy with an intelligent mechanism. Simulation results demonstrate the efficiency of our proposed approach.
8

Sheng, Jinfang, Jie Hu, Xiaoyu Teng, Bin Wang, and Xiaoxia Pan. "Computation Offloading Strategy in Mobile Edge Computing." Information 10, no. 6 (June 2, 2019): 191. http://dx.doi.org/10.3390/info10060191.

Abstract:
Mobile phone applications have been growing rapidly, alongside Internet of Things (IoT) applications in augmented reality, virtual reality, and ultra-clear video, thanks to the development of mobile Internet services over the last three decades. These applications demand intensive computing to support data analysis, real-time video processing, and decision-making for optimizing the user experience. Mobile smart devices play a significant role in our daily life, and this upward trend continues. Nevertheless, these devices suffer from limited resources such as CPU, memory, and energy. Computation offloading is a promising technique that can extend the lifetime and improve the performance of smart devices by offloading local computation tasks to edge servers, and it is the strategy adopted here. In this paper, we propose a computation offloading strategy for a scenario with multiple users and multiple mobile edge servers that considers the performance of intelligent devices and server resources. The strategy contains three main stages. In the offloading decision-making stage, the basis for the offloading decision is established by considering the computing task size, the computing requirement, the computing capacity of the server, and the network bandwidth. In the server selection stage, candidate servers are comprehensively evaluated through multi-objective decision-making, and appropriate servers are selected for computation offloading. In the task scheduling stage, a task scheduling model based on an improved auction algorithm is proposed that considers the time requirements of the computing tasks and the computing performance of the mobile edge computing server. Extensive simulations have demonstrated that the proposed computation offloading strategy can effectively reduce service delay and the energy consumption of intelligent devices, and improve the user experience.
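The auction-style task scheduling stage can be illustrated with a toy round-based assignment. This is not the paper's improved auction algorithm; the completion-time "bids" and the server model are assumptions for illustration.

```python
# Auction-style scheduling sketch: for each task, every server "bids"
# its estimated completion time (current load plus the task's execution
# time on that server); the lowest bid wins and the winner's load grows.

def auction_schedule(tasks, server_speeds):
    """tasks: list of (task_id, cycles); server_speeds: cycles/sec per server."""
    loads = [0.0] * len(server_speeds)   # current finish time of each server
    schedule = {}
    for task_id, cycles in tasks:
        bids = [loads[s] + cycles / server_speeds[s]
                for s in range(len(server_speeds))]
        winner = min(range(len(bids)), key=bids.__getitem__)
        loads[winner] = bids[winner]
        schedule[task_id] = winner
    return schedule, loads
```

Faster servers win the early rounds, but as their queues grow, slower servers become competitive, which is the load-spreading effect an auction mechanism is after.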
9

Huang, Yan-Yun, and Pi-Chung Wang. "Computation Offloading and User-Clustering Game in Multi-Channel Cellular Networks for Mobile Edge Computing." Sensors 23, no. 3 (January 19, 2023): 1155. http://dx.doi.org/10.3390/s23031155.

Abstract:
Mobile devices may use mobile edge computing to improve energy efficiency and responsiveness by offloading computation tasks to edge servers. However, the transmissions of mobile devices may cause interference that decreases the upload rate and prolongs the transmission delay. Clustering has been shown to be an effective approach to improving transmission efficiency for dense devices, but there is no distributed algorithm for jointly optimizing clustering and computation offloading. In this work, we study the optimization problem of computation offloading to minimize the energy consumption of mobile devices in mobile edge computing by adaptively clustering devices to improve transmission efficiency. To address the optimization problem in a distributed manner, the decision problem of clustering and computation offloading for mobile devices is formulated as a potential game. We introduce the construction of the potential game and show the existence of a Nash equilibrium via the game's finite improvement property. Then, we propose a distributed algorithm for clustering and computation offloading based on game theory. We conducted a simulation to evaluate the proposed algorithm. The numerical results show that our algorithm can improve offloading efficiency for mobile devices in mobile edge computing by improving transmission efficiency. By offloading more tasks to edge servers, both the energy efficiency of mobile devices and the responsiveness of computation-intensive applications can be improved simultaneously.
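The finite-improvement-path argument used in such potential games can be demonstrated with a toy best-response loop. This is not the paper's clustering algorithm: the congestion-style cost function and all constants are illustrative assumptions.

```python
# Best-response dynamics for a toy offloading congestion game: each
# device chooses local execution (0) or an uplink channel; offloading
# cost grows with the number of devices sharing the chosen channel.
# Because this is a potential (congestion) game, unilateral strict
# improvements form a finite path ending in a Nash equilibrium.

def best_response_dynamics(n, channels, local_cost=5.0, base=1.0, penalty=1.0):
    choices = [0] * n
    changed = True
    while changed:
        changed = False
        for d in range(n):
            others = [choices[i] for i in range(n) if i != d]
            def cost_of(c):
                if c == 0:
                    return local_cost
                return base + penalty * (others.count(c) + 1)
            best = min([0] + channels, key=cost_of)
            if cost_of(best) < cost_of(choices[d]):
                choices[d] = best
                changed = True
    return choices
```

With six devices and two channels, the loop settles at three devices per channel: no device can lower its cost by moving, which is exactly the equilibrium condition.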
10

Abbas, Aamir, Ali Raza, Farhan Aadil, and Muazzam Maqsood. "Meta-heuristic-based offloading task optimization in mobile edge computing." International Journal of Distributed Sensor Networks 17, no. 6 (June 2021): 155014772110230. http://dx.doi.org/10.1177/15501477211023021.

Abstract:
With recent advancements in communication technologies, computation-intensive applications such as virtual/augmented reality, face recognition, and real-time video processing have become feasible on mobile devices. These applications require intensive computation for real-time decision-making and a better user experience. However, mobile and Internet of Things devices have limited energy and computational power, so executing such computationally intensive tasks on them leads to either high computation latency or high energy consumption. Recently, mobile edge computing has evolved and been used for offloading these complex tasks: Internet of Things devices send their tasks to edge servers, which in turn perform fast computation. However, many Internet of Things devices and edge servers put an upper limit on concurrent task execution. Moreover, executing a very small task (1 KB) on an edge server increases energy consumption due to communication. Therefore, an optimal selection of tasks to offload is required so that response time and energy consumption are minimized. In this article, we propose an optimal selection of offloading tasks using well-known metaheuristics, the ant colony optimization, whale optimization, and grey wolf optimization algorithms, with variants of these algorithms designed for our problem through mathematical modeling. Executing multiple tasks at the server tends to increase response time, which leads to overloading and adds latency to task computation. Using values from simulation, we also graphically represent the tradeoff between energy and delay, showing how the two parameters are inversely proportional to each other. Results show that grey wolf optimization outperforms the others in optimizing energy consumption and execution latency while selecting the optimal set of offloading tasks.
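The search problem these metaheuristics tackle, choosing a binary offloading vector that minimizes a weighted energy-delay objective, can be sketched with a generic skeleton. This is not the paper's ACO/WOA/GWO variants; the random-search baseline and the per-task cost fields are illustrative assumptions.

```python
import random

# Generic metaheuristic skeleton: search over binary vectors x, where
# x[i] = 1 means task i is offloaded, minimizing a weighted sum of
# total energy and total delay (the energy-delay tradeoff the paper
# explores). Any metaheuristic (ACO, WOA, GWO) would swap in a smarter
# candidate generator; random search keeps the sketch minimal.

def fitness(x, tasks, w_energy=0.5, w_delay=0.5):
    energy = sum(t["tx_energy"] if xi else t["local_energy"]
                 for xi, t in zip(x, tasks))
    delay = sum(t["offload_delay"] if xi else t["local_delay"]
                for xi, t in zip(x, tasks))
    return w_energy * energy + w_delay * delay

def random_search(tasks, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(tasks)
    best = [0] * n
    best_fit = fitness(best, tasks)
    for _ in range(iters):
        cand = [rng.randint(0, 1) for _ in range(n)]
        f = fitness(cand, tasks)
        if f < best_fit:
            best, best_fit = cand, f
    return best, best_fit
```

Tuning `w_energy` against `w_delay` traces out the inverse energy-delay relationship the article plots.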

Dissertations / Theses on the topic "Edge Computation Offloading":

1

Yu, Shuai. "Multi-user computation offloading in mobile edge computing." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS462.

Abstract:
Mobile Edge Computing (MEC) is an emerging computing model that extends the cloud and its services to the edge of the network. Considering the execution of emerging resource-intensive applications in the MEC network, computation offloading is a proven paradigm for enabling resource-intensive applications on mobile devices. Moreover, in view of emerging mobile collaborative applications (MCA), offloaded tasks can be duplicated when multiple users are in close proximity. This motivates us to design a collaborative computation offloading scheme for a multi-user MEC network. In this context, we separately study collaborative computation offloading schemes for the scenarios of MEC offloading, device-to-device (D2D) offloading, and hybrid offloading. In the MEC offloading scenario, we assume that multiple mobile users offload duplicated computation tasks to the network edge servers and share the computation results among themselves. Our goal is to develop optimal fine-grained collaborative offloading strategies with caching enhancements to minimize the overall execution delay on the mobile terminal side. To this end, we propose an optimal offloading with caching-enhancement scheme (OOCS) for the femto-cloud and mobile edge computing scenarios, respectively. Simulation results show that, compared to six alternative solutions in the literature, our single-user OOCS can reduce execution delay by up to 42.83% and 33.28% for single-user femto-cloud and single-user mobile edge computing, respectively. Our multi-user OOCS can further reduce delay by 11.71% compared to single-user OOCS through users' cooperation. In the D2D offloading scenario, we assume that duplicated computation tasks are processed on specific mobile users and the computation results are shared through a Device-to-Device (D2D) multicast channel.
Our goal here is to find an optimal network partition for D2D multicast offloading that minimizes the overall energy consumption on the mobile terminal side. To this end, we first propose a D2D multicast-based computation offloading framework in which the problem is modelled as a combinatorial optimization problem and then solved using the concepts of maximum weighted bipartite matching and coalitional games. Note that our proposal considers the delay constraint for each mobile user as well as the battery level to guarantee fairness. To gauge the effectiveness of our proposal, we simulate three typical interactive components. Simulation results show that our algorithm can significantly reduce energy consumption while guaranteeing battery fairness among multiple users. We then extend D2D offloading to hybrid offloading by taking social relationships into consideration. In this context, we propose a hybrid multicast-based task execution framework for mobile edge computing, in which a crowd of mobile devices at the network edge leverages network-assisted D2D collaboration for wireless distributed computing and outcome sharing. The framework is social-aware in order to build effective D2D links [...]
2

Hansson, Gustav. "Computation offloading of 5G devices at the Edge using WebAssembly." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85898.

Abstract:
With an ever-increasing percentage of the human population connected to the internet, the amount of data produced and processed is at an all-time high. Edge Computing has emerged as a paradigm to handle this growth and, combined with 5G, enables complex time-sensitive applications running on resource-restricted devices. This master's thesis investigates the use of WebAssembly in the context of computational offloading at the Edge. The focus is on utilizing WebAssembly to move computationally heavy parts of a system from an end device to an Edge server. One objective is to improve program performance by reducing execution time and energy consumption on the end device. A proof-of-concept offloading system is developed to research this. The system is evaluated on three use cases: calculating Fibonacci numbers, matrix multiplication, and image recognition. Each use case is tested on a Raspberry Pi 3 and Pi 4, comparing execution of the WebAssembly module both locally and offloaded. Each test is also run natively on both the server and the end device to provide a baseline for comparison.
3

Bozorgchenani, Arash. "Energy and Delay Efficient Computation Offloading Solutions for Edge Computing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9356/1/PhD%20Thesis_Arash%20Bozorgchenani.pdf.

Abstract:
This thesis collects a selective set of outcomes of a PhD course in Electronics, Telecommunications, and Information Technologies Engineering and focuses on designing techniques to optimize computational resources in different wireless communication environments. Mobile Edge Computing (MEC) is a novel, distributed computational paradigm that has emerged to address the high user demand of 5G. In MEC, edge devices can share their resources to collaborate in terms of storage and computation. One of the computational sharing techniques is computation offloading, which brings many advantages to the network edge, from lower communication overhead to lower energy consumption for computation. However, communication among the devices should be managed so that resources are exploited efficiently. To this aim, this dissertation analyzes computation offloading in different wireless environments with different numbers of users, network traffic, resource availability, and device locations in order to optimize resource allocation at the network edge. To better organize the dissertation, the studies are classified into four main sections. In the first section, an introduction to computational sharing technologies is given; the problem of computation offloading is then defined and its challenges introduced. In the second section, two partial offloading techniques are proposed: in the first, centralized and distributed architectures are proposed, while in the second, an evolutionary algorithm for task offloading is proposed. In the third section, the offloading problem is seen from a different perspective, where end users can harvest energy either from renewable energy sources or through Wireless Power Transfer. In the fourth section, MEC in vehicular environments is studied: one work introduces a heuristic for computation offloading in the Internet of Vehicles, and another proposes a learning-based approach based on bandit theory.
4

Soto, Garcia Victor. "Mobility-Oriented Data Retrieval for Computation Offloading in Vehicular Edge Computing." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/38836.

Abstract:
Vehicular edge computing (VEC) brings the cloud paradigm to the edge of the network, allowing nodes such as Roadside Units (RSUs) and On-Board Units (OBUs) in vehicles to perform services with location awareness and low delay requirements. It also alleviates the bandwidth congestion caused by the large number of data requests in the network. One of the major components of VEC, computation offloading, has gained increasing attention with the emergence of mobile and vehicular applications with high-computing and low-latency demands, such as Intelligent Transportation Systems and IoT-based applications. However, existing challenges need to be addressed for vehicles' resources to be used efficiently: the primary challenge is vehicle mobility, followed by intermittent or absent connectivity. The MPR (Mobility Prediction Retrieval) data retrieval protocol proposed in this work therefore allows VEC to efficiently retrieve the processed output data of the offloaded application by using both vehicles and roadside units as communication nodes. The developed protocol uses geo-location information of the network infrastructure and the users to accomplish efficient data retrieval in a Vehicular Edge Computing environment. Moreover, the proposed MPR protocol relies on both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication to achieve reliable data retrieval, giving it a higher retrieval rate than methods that use V2I or V2V only. Finally, the experiments performed show that the proposed protocol achieves more reliable data retrieval with lower communication delay compared to related techniques.
5

Messaoudi, Farouk. "User equipment based-computation offloading for real-time applications in the context of Cloud and edge networks." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S104/document.

Abstract:
Computation offloading is a technique that allows resource-constrained mobile devices to fully or partially offload a computation-intensive application to a resourceful Cloud environment. Computation offloading is performed mostly to save energy, improve performance, or because mobile devices are unable to process computation-heavy tasks. There have been numerous approaches and systems for offloading tasks in classical Mobile Cloud Computing (MCC) environments, such as CloneCloud, MAUI, and Cyber Foraging. Most of these systems offer a complete solution that deals with different objectives. Although these systems generally perform well, one common issue is that they are not adapted to real-time applications such as mobile gaming, augmented reality, and virtual reality, which need particular treatment. Computation offloading has been widely promoted, especially with the advent of Mobile Edge Computing (MEC) and its evolution toward Multi-access Edge Computing, which broadens its applicability to heterogeneous networks including WiFi and fixed access technologies. Combined with 5G mobile access, a plethora of novel mobile services will appear, including Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Vehicle-to-everything (eV2X). Such services require low latency to access data and high resource capabilities to compute their behaviour. To find its position inside a 5G architecture and among the offered 5G services, computation offloading needs to overcome several challenges: high network latency, resource heterogeneity, application interoperability and portability, offloading framework overhead, power consumption, security, and mobility, to name a few. In this thesis, we study the computation offloading paradigm for real-time applications, including mobile gaming and image processing. The focus is on network latency, resource consumption, and accomplished performance. 
The contributions of the thesis are organized along the following axes: study game engine behaviour on different platforms in terms of resource consumption (CPU/GPU) per frame and per game module; study the possibility of offloading game engine modules based on resource consumption, network latency, and code dependency; propose a deployment strategy for Cloud gaming providers to better exploit their resources, based on the variable resource demand of game engines and the player's QoE; propose a static computation offloading solution for game engines by splitting the 3D world scene into different game objects, some of which are offloaded based on resource consumption, network latency, and code dependency; propose a dynamic offloading solution for game engines based on a heuristic that computes, for each game object, the offloading gain, which determines whether the object is offloaded; propose a novel approach to offloading computation to the MEC by deploying a mobile edge application responsible for driving the UE's offloading decision, together with two algorithms that make the best decision regarding which tasks to offload from the UE to a server hosted in the MEC
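The per-object partitioning idea in the abstract above — compute an offloading gain for each game object and offload only when the gain is positive — can be sketched as follows. This is an illustrative Python sketch under assumed cost models; the names (`GameObject`, `offloading_gain`, `partition`) and all parameters are hypothetical, not the thesis's actual algorithm.

```python
# Illustrative sketch of a per-object offloading-gain heuristic (assumed model,
# not the thesis's algorithm): offload an object when remote execution plus
# network transfer is expected to be faster than local execution.
from dataclasses import dataclass

@dataclass
class GameObject:
    name: str
    cycles: float          # CPU cycles needed per frame
    state_bytes: float     # state to transfer if offloaded

def offloading_gain(obj, local_hz, remote_hz, bandwidth_bps, rtt_s):
    """Gain > 0 means offloading this object is expected to save time."""
    t_local = obj.cycles / local_hz
    t_remote = obj.cycles / remote_hz + obj.state_bytes * 8 / bandwidth_bps + rtt_s
    return t_local - t_remote

def partition(objects, local_hz, remote_hz, bandwidth_bps, rtt_s):
    """Split game objects into (offloaded, kept-local) lists."""
    offload, local = [], []
    for obj in objects:
        if offloading_gain(obj, local_hz, remote_hz, bandwidth_bps, rtt_s) > 0:
            offload.append(obj)
        else:
            local.append(obj)
    return offload, local
```

For example, a compute-heavy physics object gains from a tenfold-faster edge server despite the round trip, while a tiny input-handling object is cheaper to keep local.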
6

Djemai, Ibrahim. "Joint offloading-scheduling policies for future generation wireless networks." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The challenges posed by the increasing number of connected devices, high energy consumption, and environmental impact in today's and future wireless networks are gaining more attention. New technologies like Mobile Edge Computing (MEC) have emerged to bring cloud services closer to the devices and address their computation limitations. Equipping these devices and the network nodes with Energy Harvesting (EH) capabilities is also promising, as it allows energy to be drawn from sustainable and environmentally friendly sources. In addition, Non-Orthogonal Multiple Access (NOMA) is a pivotal technique for achieving enhanced mobile broadband. Aided by advances in Artificial Intelligence, especially Reinforcement Learning (RL) models, the thesis work revolves around devising policies that jointly optimize scheduling and computation offloading for devices with EH capabilities, NOMA-enabled communications, and MEC access. Moreover, as the number of devices, and hence system complexity, increases, NOMA clustering is performed and Federated Learning is used to produce RL policies in a distributed way. The thesis results validate the performance of the proposed RL-based policies, as well as the benefit of using the NOMA technique
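The kind of joint offloading-scheduling policy described above can be illustrated with a minimal tabular Q-learning sketch. The toy environment (a battery/queue state, the reward shaping, and the harvesting model) is an assumption for illustration only, not the thesis's system model.

```python
# Minimal tabular Q-learning sketch for a joint scheduling/offloading policy
# on an energy-harvesting device (illustrative toy model, not the thesis's).
import random

ACTIONS = ("idle", "local", "offload")

def step(state, action):
    """Toy environment: battery and queue each in 0..3; returns (next_state, reward)."""
    battery, queue = state
    harvest = random.choice((0, 1))            # stochastic energy harvesting
    if action == "local" and battery >= 2 and queue > 0:
        return (min(battery - 2 + harvest, 3), queue - 1), 1.0   # costly but fast
    if action == "offload" and battery >= 1 and queue > 0:
        return (min(battery - 1 + harvest, 3), queue - 1), 0.8   # cheaper, uses radio
    return (min(battery + harvest, 3), min(queue + 1, 3)), -0.1  # task arrives, nothing served

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Learn Q-values over (battery, queue, action) with epsilon-greedy exploration."""
    q = {(b, t, a): 0.0 for b in range(4) for t in range(4) for a in ACTIONS}
    for _ in range(episodes):
        state = (2, 1)
        for _ in range(20):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(*state, x)])
            nxt, r = step(state, a)
            best_next = max(q[(*nxt, x)] for x in ACTIONS)
            q[(*state, a)] += alpha * (r + gamma * best_next - q[(*state, a)])
            state = nxt
    return q
```

After training, the learned policy serves the queue when the battery allows it instead of idling; the thesis additionally distributes such training via Federated Learning, which this sketch does not cover.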
7

Krishna, Nitesh. "Software-Defined Computational Offloading for Mobile Edge Computing." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computational offloading advances the deployment of Mobile Edge Computing (MEC) in next-generation communication networks. However, the distributed nature of the mobile users and the complexity of applications make it challenging to schedule tasks reasonably among multiple devices. Therefore, by leveraging the ideas of Software-Defined Networking (SDN) and Service Composition (SC), we propose a Software-Defined Service Composition model (SDSC). In this model, the SDSC controller is deployed at the edge of the network and composes services in a centralized manner to reduce task-execution latency and traffic on the access links while satisfying user-specific requirements. We formulate low-latency service composition as a Constraint Satisfaction Problem (CSP) to make it a user-centric approach. With the advent of SDN, a global view and control of the entire network are made available to the network controller, which our SDSC approach further leverages. Furthermore, service discovery and task offloading are designed for the MEC environment so that users can rely on a robust system. Moreover, this approach performs task execution in a distributed manner. We also define a QoS model that provides the composition rule forming the best possible service composition at the time of need. We have extended our SDSC model to account for the constant mobility of mobile devices: to address it, we propose a mobility model and a mobility-aware QoS approach enabled in the SDSC model. The experimental simulation results demonstrate that our approach can obtain better performance than the energy-saving greedy algorithm and the random offloading approach in a mobile environment.
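The constraint-satisfaction formulation mentioned in the abstract can be sketched as a small feasibility search: choose one provider per required service so that total latency stays under the user's bound. The names (`compose`, the service/provider inputs) and the brute-force search are illustrative assumptions, not the thesis's SDSC implementation.

```python
# Illustrative CSP-style service composition (assumed sketch): pick one node per
# required service so total latency satisfies the user's bound, preferring the
# lowest-latency feasible composition.
from itertools import product

def compose(required, providers, latency, max_latency_ms):
    """required: list of service names; providers: dict service -> candidate nodes;
    latency: dict (service, node) -> ms. Returns (assignment, total_ms) or None."""
    best = None
    for choice in product(*(providers[s] for s in required)):
        total = sum(latency[(s, n)] for s, n in zip(required, choice))
        if total <= max_latency_ms and (best is None or total < best[1]):
            best = (dict(zip(required, choice)), total)
    return best
```

A real SDSC controller would replace the exhaustive search with a proper CSP solver, but the constraint structure is the same.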
8

Silva, Joaquim Magalhães Esteves da. "Adaptive Computation Offloading in Mobile Edge Clouds." Doctoral thesis, 2021. https://hdl.handle.net/10216/139189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Maurício, Bruno Alexandre de Salabert. "Modelling edge computation offloading for automotive video analytics." Master's thesis, 2021. https://hdl.handle.net/10216/135579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Intelligent vehicles are becoming more common and affordable, and with each new model come complex and resource-intensive applications, ranging from simple sensors, through AI assistants, to, more recently, full vehicle automation. These applications can mostly be segmented into two categories, infotainment and driving assistance. The latter category requires strict adherence to time limits, lest the applications become useless or even dangerous to the driver, and is the focus of the present work. The spread and availability of powerful computation devices throughout city streets, driven by a variety of factors including the emergence of new technologies that demand higher node density such as the fifth generation of mobile networks, raises the question of whether vehicles can effectively take advantage of this spread-out capacity, instead of depending solely on the conventional on-board computing unit (OBU). Furthermore, given the complexity of these systems, how can one model them and perform simulations that are both valid and credible, as well as amenable to verification through real-world experiments? According to IEEE Xplore, even though the study of vehicular ad-hoc networks (VANETs) goes all the way back to 2005, computation offloading within VANETs is a much more recent focus of study (~2017). Nevertheless, dozens of different approaches with respective algorithms have been proposed. In terms of communication, both vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication have been considered, with the technologies in use ranging from mobile and WiFi to VANET-specific ones such as Dedicated Short Range Communications (DSRC / 802.11p). The most differentiating factor is the choice of parameters for the algorithms, which can be categorized into communication (available/used bandwidth), load (size, delay requirements), computation (required CPU cycles), and car movement (cell stay time). The main goals of this work are twofold. 
First, to provide a realistic and verifiable simulation environment with mathematical models for the load (based on a real video stream) and for the computation (based on a simple object detection engine). Second, to provide a simple proof-of-concept computation offloading algorithm that takes advantage of the information in the models to make sensible offloading decisions.
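A proof-of-concept offloading decision of the kind described above can be sketched using exactly the parameter classes the abstract enumerates: bandwidth, load size and deadline, required CPU cycles, and cell stay time. The function name and thresholds below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Illustrative vehicular offloading decision (assumed sketch): offload a
# video-analytics frame only if the edge result arrives before both the
# real-time deadline and the vehicle's departure from the cell.

def should_offload(frame_bits, cycles, deadline_s,
                   uplink_bps, edge_hz, obu_hz, cell_stay_s):
    t_local = cycles / obu_hz                          # OBU execution time
    t_edge = frame_bits / uplink_bps + cycles / edge_hz  # upload + edge execution
    if t_edge >= cell_stay_s:      # result would arrive after handover
        return False
    if t_edge >= deadline_s:       # misses the real-time deadline
        return False
    return t_edge < t_local        # offload only if actually faster
```

Note how a short cell stay time or a congested uplink vetoes offloading even when the edge server is much faster than the OBU, which is the vehicular-specific twist relative to static MEC offloading.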
10

Maurício, Bruno Alexandre de Salabert. "Modelling edge computation offloading for automotive video analytics." Dissertação, 2021. https://hdl.handle.net/10216/135579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Intelligent vehicles are becoming more common and affordable, and with each new model come complex and resource-intensive applications, ranging from simple sensors, through AI assistants, to, more recently, full vehicle automation. These applications can mostly be segmented into two categories, infotainment and driving assistance. The latter category requires strict adherence to time limits, lest the applications become useless or even dangerous to the driver, and is the focus of the present work. The spread and availability of powerful computation devices throughout city streets, driven by a variety of factors including the emergence of new technologies that demand higher node density such as the fifth generation of mobile networks, raises the question of whether vehicles can effectively take advantage of this spread-out capacity, instead of depending solely on the conventional on-board computing unit (OBU). Furthermore, given the complexity of these systems, how can one model them and perform simulations that are both valid and credible, as well as amenable to verification through real-world experiments? According to IEEE Xplore, even though the study of vehicular ad-hoc networks (VANETs) goes all the way back to 2005, computation offloading within VANETs is a much more recent focus of study (~2017). Nevertheless, dozens of different approaches with respective algorithms have been proposed. In terms of communication, both vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication have been considered, with the technologies in use ranging from mobile and WiFi to VANET-specific ones such as Dedicated Short Range Communications (DSRC / 802.11p). The most differentiating factor is the choice of parameters for the algorithms, which can be categorized into communication (available/used bandwidth), load (size, delay requirements), computation (required CPU cycles), and car movement (cell stay time). The main goals of this work are twofold. 
First, to provide a realistic and verifiable simulation environment with mathematical models for the load (based on a real video stream) and for the computation (based on a simple object detection engine). Second, to provide a simple proof-of-concept computation offloading algorithm that takes advantage of the information in the models to make sensible offloading decisions.

Books on the topic "Edge Computation Offloading":

1

Chen, Ying, Ning Zhang, Yuan Wu, and Sherman Shen. Energy Efficient Computation Offloading in Mobile Edge Computing. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16822-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Ning, Ying Chen, Yuan Wu, and Sherman Shen. Energy Efficient Computation Offloading in Mobile Edge Computing. Springer International Publishing AG, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Edge Computation Offloading":

1

Taheri, Javid, Schahram Dustdar, Albert Zomaya, and Shuiguang Deng. "AI/ML for Computation Offloading." In Edge Intelligence, 111–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-22155-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Yan. "Mobile Edge Computing." In Simula SpringerBriefs on Computing, 9–21. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83944-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
AbstractMobile edge computing is a promising paradigm that brings computing resources to mobile users at the network edge, allowing computing-intensive and delay-sensitive applications to be quickly processed by edge servers to satisfy the requirements of mobile users. In this chapter, we first introduce a hierarchical architecture of mobile edge computing that consists of a cloud plane, an edge plane, and a user plane. We then introduce three typical computation offloading decisions. Finally, we review state-of-the-art works on computation offloading and present the use case of joint computation offloading.
3

Ma, Xiao, Mengwei Xu, Qing Li, Yuanzhe Li, Ao Zhou, and Shangguang Wang. "Edge Computing Based Computation Offloading." In 5G Edge Computing, 63–79. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-0213-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Peng, Kai, Yiwen Zhang, Xiaofei Wang, Xiaolong Xu, Xiuhua Li, and Victor C. M. Leung. "Computation Offloading in Mobile Edge Computing." In Encyclopedia of Wireless Networks, 216–20. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-319-78262-1_331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peng, Kai, Yiwen Zhang, Xiaofei Wang, Xiaolong Xu, Xiuhua Li, and Victor C. M. Leung. "Computation Offloading in Mobile Edge Computing." In Encyclopedia of Wireless Networks, 1–5. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-32903-1_331-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cha, Narisu, Celimuge Wu, Tsutomu Yoshinaga, and Yusheng Ji. "Virtual Edge: Collaborative Computation Offloading in VANETs." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 79–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64002-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Ying, Ning Zhang, Yuan Wu, and Sherman Shen. "Dynamic Computation Offloading for Energy Efficiency in Mobile Edge Computing." In Energy Efficient Computation Offloading in Mobile Edge Computing, 27–60. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16822-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Ying, Ning Zhang, Yuan Wu, and Sherman Shen. "Energy-Efficient Multi-Task Multi-Access Computation Offloading via NOMA." In Energy Efficient Computation Offloading in Mobile Edge Computing, 123–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16822-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cheng, Xiaolan, Xin Zhou, Congfeng Jiang, and Jian Wan. "Towards Computation Offloading in Edge Computing: A Survey." In High-Performance Computing Applications in Numerical Simulation and Edge Computing, 3–15. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9987-0_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tefera, Natnael, and Ayalew Belay Habtie. "Mobility Aware Computation Offloading Model for Edge Computing." In Communications in Computer and Information Science, 54–71. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-23606-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Edge Computation Offloading":

1

Droob, Alexander, Daniel Morratz, Frederik Langkilde Jakobsen, Jacob Carstensen, Magnus Mathiesen, Rune Bohnstedt, Michele Albano, Sergio Moreschini, and Davide Taibi. "Fault Tolerant Horizontal Computation Offloading." In 2023 IEEE International Conference on Edge Computing and Communications (EDGE). IEEE, 2023. http://dx.doi.org/10.1109/edge60047.2023.00036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wei, Xiaojuan, Shangguang Wang, Ao Zhou, Jinliang Xu, Sen Su, Sathish Kumar, and Fangchun Yang. "MVR: An Architecture for Computation Offloading in Mobile Edge Computing." In 2017 IEEE International Conference on Edge Computing (EDGE). IEEE, 2017. http://dx.doi.org/10.1109/ieee.edge.2017.42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Letian, and Jie Xu. "Fooling Edge Computation Offloading via Stealthy Interference Attack." In 2020 IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2020. http://dx.doi.org/10.1109/sec50012.2020.00062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ma, Weibin, and Lena Mashayekhy. "Truthful Computation Offloading Mechanisms for Edge Computing." In 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2020. http://dx.doi.org/10.1109/cscloud-edgecom49738.2020.00043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cheng, Lei, Gang Feng, Yao Sun, Mengjie Liu, and Shuang Qin. "Dynamic Computation Offloading in Satellite Edge Computing." In ICC 2022 - IEEE International Conference on Communications. IEEE, 2022. http://dx.doi.org/10.1109/icc45855.2022.9838943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xiong, Jingyu, Hongzhi Guo, Jiajia Liu, Nei Kato, and Yanning Zhang. "Collaborative Computation Offloading at UAV-Enhanced Edge." In GLOBECOM 2019 - 2019 IEEE Global Communications Conference. IEEE, 2019. http://dx.doi.org/10.1109/globecom38437.2019.9013956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhu, Shichao, Lin Gui, Jiacheng Chen, Qi Zhang, and Ning Zhang. "Cooperative Computation Offloading for UAVs: A Joint Radio and Computing Resource Allocation Approach." In 2018 IEEE International Conference on Edge Computing (EDGE). IEEE, 2018. http://dx.doi.org/10.1109/edge.2018.00017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Meng, Xianling, Wei Wang, Yitu Wang, Vincent K. N. Lau, and Zhaoyang Zhang. "Delay-Optimal Computation Offloading for Computation-Constrained Mobile Edge Networks." In GLOBECOM 2018 - 2018 IEEE Global Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/glocom.2018.8647703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

You, Changsheng, Yong Zeng, Rui Zhang, and Kaibin Huang. "Resource Management for Asynchronous Mobile-Edge Computation Offloading." In 2018 IEEE International Conference on Communications Workshops (ICC Workshops). IEEE, 2018. http://dx.doi.org/10.1109/iccw.2018.8403495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Crutcher, Andrew, Caleb Koch, Kyle Coleman, Jon Patman, Flavio Esposito, and Prasad Calyam. "Hyperprofile-Based Computation Offloading for Mobile Edge Networks." In 2017 IEEE 14th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS). IEEE, 2017. http://dx.doi.org/10.1109/mass.2017.91.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography