Journal articles on the topic "Edge server placement"

To see other types of publications on this topic, follow the link: Edge server placement.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Edge server placement."

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, when these details are provided in the metadata.

Browse journal articles across many disciplines and organize your bibliography correctly.

1

Ma, Rong. "Edge Server Placement for Service Offloading in Internet of Things." Security and Communication Networks 2021 (September 30, 2021): 1–16. http://dx.doi.org/10.1155/2021/5109163.

Abstract:
With the rapid development of the Internet of Things, a large number of smart devices are being connected to the Internet, while the data generated by these devices have put unprecedented pressure on existing network bandwidth and service operations. Edge computing, as a new paradigm, places servers at the edge of the network, effectively relieving bandwidth pressure and reducing the delay caused by long-distance transmission. However, considering the high cost of deploying edge servers, as well as the waste of resources caused by the placement of idle servers and the degradation of service quality caused by resource conflicts, the placement strategy of edge servers has become a research hotspot. To solve this problem, an edge server placement method oriented to service offloading in IoT, called EPMOSO, is proposed. In this method, Genetic Algorithm and Particle Swarm Optimization are combined to obtain a set of edge server placement strategies, and the Simple Additive Weighting method is utilized to determine the most balanced edge server placement, measured by minimum delay and energy consumption while achieving load balance across edge servers. Multiple experiments are carried out, and the results show that EPMOSO fulfills the multiobjective optimization with an acceptable convergence speed.
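The Simple Additive Weighting step that EPMOSO uses to pick the most balanced placement can be sketched as follows. This is a minimal illustration under assumed criteria (delay, energy) and weights, not the authors' implementation; the candidate placements are hypothetical values standing in for GA/PSO output.

```python
def saw_best(candidates, weights):
    """Pick the candidate placement with the highest Simple Additive
    Weighting score. Each candidate is a dict of criterion -> value;
    both criteria here are costs, so lower raw values score higher."""
    criteria = weights.keys()
    lo = {c: min(cand[c] for cand in candidates) for c in criteria}
    hi = {c: max(cand[c] for cand in candidates) for c in criteria}

    def score(cand):
        total = 0.0
        for c, w in weights.items():
            span = hi[c] - lo[c]
            # Cost criterion: the smallest value gets normalized score 1.
            norm = 1.0 if span == 0 else (hi[c] - cand[c]) / span
            total += w * norm
        return total

    return max(candidates, key=score)

# Hypothetical placements produced by the GA/PSO stage.
placements = [
    {"delay": 12.0, "energy": 30.0},
    {"delay": 10.0, "energy": 45.0},
    {"delay": 15.0, "energy": 20.0},
]
best = saw_best(placements, weights={"delay": 0.5, "energy": 0.5})
```

With equal weights, the middle-of-the-road placement wins: it is not best on either criterion alone, but it has the highest combined normalized score, which is exactly the "most balanced" behaviour the abstract describes.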
2

Luo, Fei, Shuai Zheng, Weichao Ding, Joel Fuentes, and Yong Li. "An Edge Server Placement Method Based on Reinforcement Learning." Entropy 24, no. 3 (February 23, 2022): 317. http://dx.doi.org/10.3390/e24030317.

Abstract:
In mobile edge computing systems, the edge server placement problem is mainly tackled as a multi-objective optimization problem and solved with mixed integer programming, heuristic, or meta-heuristic algorithms, etc. These methods, however, have significant drawbacks, such as poor scalability, local optimal solutions, and parameter tuning difficulties. To overcome these defects, we propose a novel edge server placement algorithm based on deep Q-network and reinforcement learning, dubbed DQN-ESPA, which can achieve optimal placements without relying on previous placement experience. In DQN-ESPA, the edge server placement problem is modeled as a Markov decision process, which is formalized with the state space, action space, and reward function, and it is subsequently solved using a reinforcement learning algorithm. Experimental results using real datasets from Shanghai Telecom show that DQN-ESPA outperforms state-of-the-art algorithms such as the simulated annealing placement algorithm (SAPA), the Top-K placement algorithm (TKPA), the K-Means placement algorithm (KMPA), and the random placement algorithm (RPA). In particular, with a comprehensive consideration of access delay and workload balance, DQN-ESPA achieves up to 13.40% and 15.54% better placement performance for 100 and 300 edge servers, respectively.
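In such an MDP formulation, the reward signal typically trades off access delay against workload balance. A toy reward of that shape (the functional form, the weighting `alpha`, and the sample numbers are assumptions for illustration, not DQN-ESPA's actual reward) might look like:

```python
import statistics

def reward(delays, loads, alpha=0.5):
    """Hypothetical reward for a placement action: penalize mean access
    delay and workload imbalance (population std dev of server loads),
    driving the agent toward low-latency, balanced placements."""
    mean_delay = sum(delays) / len(delays)
    imbalance = statistics.pstdev(loads)
    return -(alpha * mean_delay + (1 - alpha) * imbalance)

# A balanced, low-delay placement scores higher (less negative).
r_good = reward(delays=[5, 6, 5], loads=[10, 10, 10])
r_bad = reward(delays=[20, 25, 30], loads=[5, 30, 55])
```

The Q-network then learns to prefer placement actions whose expected cumulative reward is highest, which is how both objectives end up optimized jointly rather than one at a time.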
3

Zhang, Qiyang, Shangguang Wang, Ao Zhou, and Xiao Ma. "Cost-aware edge server placement." International Journal of Web and Grid Services 18, no. 1 (2022): 83. http://dx.doi.org/10.1504/ijwgs.2022.119275.

4

Ma, Xiao, Ao Zhou, Qiyang Zhang, and Shangguang Wang. "Cost-aware edge server placement." International Journal of Web and Grid Services 18, no. 1 (2022): 83. http://dx.doi.org/10.1504/ijwgs.2022.10042204.

5

Wang, Shangguang, Yali Zhao, Jinliang Xu, Jie Yuan, and Ching-Hsien Hsu. "Edge server placement in mobile edge computing." Journal of Parallel and Distributed Computing 127 (May 2019): 160–68. http://dx.doi.org/10.1016/j.jpdc.2018.06.008.

6

Yin, Hao, Xu Zhang, Hongqiang H. Liu, Yan Luo, Chen Tian, Shuoyao Zhao, and Feng Li. "Edge Provisioning with Flexible Server Placement." IEEE Transactions on Parallel and Distributed Systems 28, no. 4 (April 1, 2017): 1031–45. http://dx.doi.org/10.1109/tpds.2016.2604803.

7

Guo, Feiyan, Bing Tang, and Jiaming Zhang. "Mobile edge server placement based on meta-heuristic algorithm." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 8883–97. http://dx.doi.org/10.3233/jifs-200933.

Abstract:
The rapid development of the Internet of Things and 5G networks has generated a large amount of data. By offloading computing tasks from mobile devices to edge servers with sufficient computing resources, network congestion and data transmission delays can be effectively reduced. The placement of edge servers is the core of task offloading and is a multi-objective optimization problem with multiple resource constraints. An efficient placement approach can effectively meet the needs of mobile users to access services with low latency and high bandwidth. To this end, an optimization model of edge server placement is established in this paper, with the minimization of both communication delay and load difference as the optimization goal. Then, an Edge Server placement method based on a meta-Heuristic alGorithM (ESH-GM) is proposed to achieve multi-objective optimization. Firstly, the K-means algorithm is combined with the ant colony algorithm: a pheromone feedback mechanism is introduced into the placement of edge servers by emulating the way an ant colony shares pheromone while foraging, and the ant colony algorithm is improved by setting a tabu table to speed up its convergence. Then, the improved heuristic algorithm is used to solve for the optimal placement of edge servers. Experimental results using Shanghai Telecom's real datasets show that the proposed ESH-GM achieves an optimal balance between low latency and load balancing while guaranteeing quality of service, and outperforms several existing representative approaches.
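The K-means stage of such a pipeline clusters base-station coordinates so that each centroid becomes a candidate edge-server site for the ant-colony refinement to work from. A minimal sketch follows; the station coordinates are hypothetical, the seeding (first k points) is a simplification, and this is not the paper's ESH-GM code.

```python
def kmeans(points, k, iters=20):
    """Plain k-means over base-station (x, y) coordinates. Each returned
    centroid is a candidate edge-server site. Deterministic seeding with
    the first k points keeps the sketch reproducible."""
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each station to its nearest centroid (squared distance).
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        # Move each centroid to the mean of its assigned stations.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Two obvious station clusters -> two candidate server sites.
stations = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
sites = kmeans(stations, k=2)
```

On this toy input the two centroids converge near the two station groups; a meta-heuristic stage would then refine which candidate sites are actually used, subject to the delay and load objectives.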
8

Kasi, Mumraiz Khan, Sarah Abu Ghazalah, Raja Naeem Akram, and Damien Sauveron. "Secure Mobile Edge Server Placement Using Multi-Agent Reinforcement Learning." Electronics 10, no. 17 (August 30, 2021): 2098. http://dx.doi.org/10.3390/electronics10172098.

Abstract:
Mobile edge computing is capable of providing high data processing capabilities while ensuring the low latency constraints of low power wireless networks, such as the industrial internet of things. However, optimally placing edge servers (which provide storage and computation services to user equipment) is still a challenge. To optimally place mobile edge servers in a wireless network, such that network latency is minimized and load balancing is performed on edge servers, we propose a multi-agent reinforcement learning (RL) solution to solve a formulated mobile edge server placement problem. The RL agents are designed to learn the dynamics of the environment and adapt a joint action policy resulting in the minimization of network latency and balancing of the load on edge servers. To ensure that the action policy adopted by the RL agents maximizes the overall network performance indicators, we propose sharing information, such as the latency experienced from each server and the load of each server, with the other RL agents in the network. Experiment results are obtained to analyze the effectiveness of the proposed solution. Although the sharing of information allows the proposed solution to obtain a network-wide maximization of overall network performance, it also makes it susceptible to different kinds of security attacks. To further investigate the security issues arising from the proposed solution, we provide a detailed analysis of the types of security attacks possible and their countermeasures.
9

Shao, Yanling, Zhen Shen, Siliang Gong, and Hanyao Huang. "Cost-Aware Placement Optimization of Edge Servers for IoT Services in Wireless Metropolitan Area Networks." Wireless Communications and Mobile Computing 2022 (July 27, 2022): 1–17. http://dx.doi.org/10.1155/2022/8936576.

Abstract:
Edge computing migrates cloud computing capacity to the edge of the network to reduce the latency caused by congestion and the long propagation distances of the core network. The Internet of Things (IoT) service requests with large data traffic submitted by users need to be processed quickly by the corresponding edge servers. The closer the edge computing resources are to the users' network access points, the better the user experience. On the other hand, the closer an edge server is to users, the fewer users will access it simultaneously, and the utilization efficiency of nodes will be reduced. Since capital investment is limited for edge resource providers, the deployment of edge servers needs to trade off user experience against capital investment cost. In our study, through research and analysis of edge resource allocation in a real edge computing environment, we summarize three critical issues in edge server deployment: edge location, user association, and capacity at edge locations. For these issues, this study considers the user distribution density (load density), determines reasonable deployment locations for edge servers, and deploys an appropriate number of edge computing nodes at those locations to improve resource utilization and minimize the deployment cost of edge servers. Based on an objective function minimizing construction cost and total access delay cost, we formulate edge server placement as a mixed-integer nonlinear programming (MINP) problem and then propose an edge server deployment optimization algorithm (named Benders_SD) to seek the optimal solution. Extensive simulations and comparisons with three existing deployment methods show that our proposed method achieves the intended performance: it not only meets the low latency requirements of users but also reduces the deployment cost.
10

Lähderanta, Tero, Teemu Leppänen, Leena Ruha, Lauri Lovén, Erkki Harjula, Mika Ylianttila, Jukka Riekki, and Mikko J. Sillanpää. "Edge computing server placement with capacitated location allocation." Journal of Parallel and Distributed Computing 153 (July 2021): 130–49. http://dx.doi.org/10.1016/j.jpdc.2021.03.007.

11

Zeng, Feng, Yongzheng Ren, Xiaoheng Deng, and Wenjia Li. "Cost-Effective Edge Server Placement in Wireless Metropolitan Area Networks." Sensors 19, no. 1 (December 21, 2018): 32. http://dx.doi.org/10.3390/s19010032.

Abstract:
Remote clouds are gradually becoming unable to achieve the ultra-low latency required by mobile users, because of the intolerably long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate these effects. Existing studies mostly assume the edge servers have been deployed properly and only pay attention to minimizing the delay between edge servers and mobile users. In this paper, considering a practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring certain QoS requirements. For greater generality, we extend the definition of the dominating set and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy-based algorithm is proposed to find the solution; the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree, and cluster size constraints. Furthermore, a simulated annealing-based approach is given for global optimization. For the second condition, a greedy-based algorithm is also proposed to satisfy the capacity constraint of edge servers while minimizing the number of edge servers. The simulation results show that the proposed algorithms are feasible.
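The greedy idea described above ("iteratively choose nodes that can connect as many other nodes as possible") is the classic greedy heuristic for dominating sets. A bare-bones sketch, on a hypothetical adjacency list and without the paper's delay, degree, and cluster-size constraints:

```python
def greedy_placement(adjacency):
    """Greedy cover in the spirit of the minimum-dominating-set approach:
    repeatedly place a server at the node covering the most still-uncovered
    nodes (itself plus its neighbours) until every node is covered.
    The real algorithm adds delay, degree, and cluster-size constraints."""
    uncovered = set(adjacency)
    servers = []
    while uncovered:
        best = max(adjacency,
                   key=lambda v: len(({v} | set(adjacency[v])) & uncovered))
        servers.append(best)
        uncovered -= {best} | set(adjacency[best])
    return servers

# A small star-plus-tail topology: "a" dominates b, c, d; "e" hangs off d.
graph = {
    "a": ["b", "c", "d"],
    "b": ["a"],
    "c": ["a"],
    "d": ["a", "e"],
    "e": ["d"],
}
servers = greedy_placement(graph)
```

On this toy graph two servers suffice: the hub "a" covers most nodes in the first round, and "d" mops up the remaining node "e", mirroring how the heuristic minimizes the number of edge servers.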
12

Zhang, Jianshan, Ming Li, Xianghan Zheng, and Ching-Hsien Hsu. "A Time-Driven Cloudlet Placement Strategy for Workflow Applications in Wireless Metropolitan Area Networks." Sensors 22, no. 9 (April 29, 2022): 3422. http://dx.doi.org/10.3390/s22093422.

Abstract:
With the rapid development of mobile technology, mobile applications have increasing requirements for computational resources, and mobile devices can no longer meet these requirements. Mobile edge computing (MEC) has emerged in this context and has brought innovation into the working mode of traditional cloud computing. By provisioning edge server placement, the computing power of the cloud center is distributed to the edge of the network. The abundant computational resources of edge servers compensate for the lack of mobile devices and shorten the communication delay between servers and users. Constituting a specific form of edge servers, cloudlets have been widely studied within academia and industry in recent years. However, existing studies have mainly focused on computation offloading for general computing tasks under fixed cloudlet placement positions. They ignored the impact on computation offloading results from cloudlet placement positions and data dependencies among mobile application components. In this paper, we study the cloudlet placement problem based on workflow applications (WAs) in wireless metropolitan area networks (WMANs). We devise a cloudlet placement strategy based on a particle swarm optimization algorithm using genetic algorithm operators with the encoding library updating mode (PGEL), which enables the cloudlet to be placed in appropriate positions. The simulation results show that the proposed strategy can obtain a near-optimal cloudlet placement scheme. Compared with other classic algorithms, this algorithm can reduce the execution time of WAs by 15.04–44.99%.
13

Liu, Yanpei, Yanru Bin, Ningning Chen, and Shuaijie Zhu. "Caching Placement Optimization Strategy Based on Comprehensive Utility in Edge Computing." Applied Sciences 13, no. 16 (August 14, 2023): 9229. http://dx.doi.org/10.3390/app13169229.

Abstract:
With the convergence of the Internet of Things, 5G, and artificial intelligence, limited network bandwidth and bursts of incoming service requests have become the most important factors affecting user experience, and caching technology has therefore been introduced. In this paper, a caching placement optimization strategy based on comprehensive utility (CPOSCU) in edge computing is proposed. Firstly, the strategy quantifies the placement factors of data blocks, which include the popularity of data blocks, the remaining validity ratio of data blocks, and the substitution rate of servers. By analyzing the characteristics of cache objects and servers, these placement factors are modeled to determine the cache value of data blocks. Then, the cache placement optimization problem is quantified comprehensively based on the cache value of data blocks, data block retrieval costs, data block placement costs, and replacement costs. Finally, to escape locally optimal cache placements, a penalty strategy is introduced, and an improved tabu search algorithm is used to find the best edge server placement for cached objects. Experimental results demonstrate that the proposed caching strategy improves the cache service rate, reduces user request latency and system overhead, and enhances the user experience.
14

He, Zhenli, Kenli Li, and Keqin Li. "Cost-Efficient Server Configuration and Placement for Mobile Edge Computing." IEEE Transactions on Parallel and Distributed Systems 33, no. 9 (September 1, 2022): 2198–212. http://dx.doi.org/10.1109/tpds.2021.3135955.

15

Shen, Bowen, Xiaolong Xu, Lianyong Qi, Xuyun Zhang, and Gautam Srivastava. "Dynamic server placement in edge computing toward Internet of Vehicles." Computer Communications 178 (October 2021): 114–23. http://dx.doi.org/10.1016/j.comcom.2021.07.021.

16

Hadzic, Ilija, Yoshihisa Abe, and Hans Christian Woithe. "Server Placement and Selection for Edge Computing in the ePC." IEEE Transactions on Services Computing 12, no. 5 (September 1, 2019): 671–84. http://dx.doi.org/10.1109/tsc.2018.2850327.

17

Nangu, Shota, Ayaka Takeda, Tomotaka Kimura, and Kouji Hirata. "Integrated Design of Edge Computing Systems with Edge Server Placement and Virtual Machine Allocation." IEEJ Transactions on Electronics, Information and Systems 141, no. 12 (December 1, 2021): 1321–30. http://dx.doi.org/10.1541/ieejeiss.141.1321.

18

Yuan, Lyuzerui, Jie Gu, Jinghuan Ma, Honglin Wen, and Zhijian Jin. "Optimal Network Partition and Edge Server Placement for Distributed State Estimation." Journal of Modern Power Systems and Clean Energy 10, no. 6 (2022): 1637–47. http://dx.doi.org/10.35833/mpce.2021.000512.

19

Parveen Shaik, Dr Sajeeda. "Strategic Placement of Servers in Mobile Cloud Computing: A Comprehensive Exploration of Edge Computing, Fog Computing, and Cloudlet Technologies." INTERNATIONAL RESEARCH JOURNAL OF ENGINEERING & APPLIED SCIENCES 8, no. 4 (2020): 24–28. http://dx.doi.org/10.55083/irjeas.2020.v08i04009.

Abstract:
Mobile Cloud Computing (MCC) has become integral to the advancement of mobile applications, necessitating strategic server placement for optimized performance. This review article explores three pivotal technologies—Edge Computing, Fog Computing, and Cloudlet Technologies—aimed at addressing the challenges posed by traditional cloud-centric models in MCC environments. The paper provides a thorough analysis of each approach, elucidating their architectural principles, benefits, and applications. Edge Computing’s proximity to end-users, Fog Computing’s intermediary role, and the localized Cloudlet Technologies are scrutinized. A comparative analysis offers insights into their strengths and limitations, aiding in determining their suitability based on diverse use cases. Real-world applications showcase the transformative impact of these technologies in enhancing mobile experiences across sectors such as healthcare, gaming, and more. The review concludes with a discussion on the challenges inherent in each strategy and proposes future research directions. This comprehensive exploration serves as a valuable resource for researchers, practitioners, and decision-makers navigating the dynamic landscape of strategic server placement in MCC, contributing to the optimization of performance and user experience in mobile applications and services.
20

Lu, Yongling, Zhen Wang, Chengbo Hu, Ziquan Liu, and Xueqiong Zhu. "Edge Computing Server Placement Strategy Based on SPEA2 in Power Internet of Things." Security and Communication Networks 2022 (August 24, 2022): 1–11. http://dx.doi.org/10.1155/2022/3810670.

Abstract:
In order to meet the edge service placement demand for multiobjective optimization in the Power Internet of Things, an edge service placement strategy based on an improved strength Pareto evolutionary algorithm (SPEA2) is proposed in this paper. Firstly, we model the delay, resource utilization, and energy consumption. Then, a multiobjective optimization problem is formulated. Finally, an enhanced genetic algorithm is used to derive the candidate decision set, and the optimal solution in the candidate set is selected through iteration of multicriteria decision-making and the superior-inferior solution distance method. Numerical results and analysis show that the proposed strategy is more effective in reducing system delay, improving resource utilization, and saving energy than the other two benchmark algorithms.
21

Li, Jiaqi, Yiqiang Sheng, and Haojiang Deng. "Two Optimization Algorithms for Name-Resolution Server Placement in Information-Centric Networking." Applied Sciences 10, no. 10 (May 22, 2020): 3588. http://dx.doi.org/10.3390/app10103588.

Abstract:
Information-centric networking (ICN) is an emerging network architecture that has the potential to address demands related to transmission latency and reliability in fifth-generation (5G) communication technology and the Internet of Things (IoT). As an essential component of ICN, name resolution provides the capability to translate identifiers into locators. Applications have different demands on name-resolution latency. To meet the demands, deploying name-resolution servers at the edge of the network by dividing it into multilayer overlay networks is effective. Moreover, optimization of the deployment of distributed name-resolution servers in such networks to minimize deployment costs is significant. In this paper, we first study the placement problem of the name-resolution server in ICN. Then, two algorithms called IIT-DOWN and IIT-UP are developed based on the heuristic ideas of inter-layer information transfer (IIT) and server reuse. They transfer server placement information and latency information between adjacent layers from different directions. Finally, experiments are conducted on both simulation networks and a real-world dataset. The experimental results reveal that the proposed algorithms outperform state-of-the-art algorithms such as the latency-aware hierarchical elastic area partitioning (LHP) algorithm in finding more cost-efficient solutions with a shorter execution time.
22

Huang, Ping-Chun, Tai-Lin Chin, and Tzu-Yi Chuang. "Server Placement and Task Allocation for Load Balancing in Edge-Computing Networks." IEEE Access 9 (2021): 138200–138208. http://dx.doi.org/10.1109/access.2021.3117870.

24

Lu, Jiawei, Jielin Jiang, Venki Balasubramanian, Mohammad R. Khosravi, and Xiaolong Xu. "Deep reinforcement learning-based multi-objective edge server placement in Internet of Vehicles." Computer Communications 187 (April 2022): 172–80. http://dx.doi.org/10.1016/j.comcom.2022.02.011.

25

Cai, Chao, Bin Chen, Jiahui Qiu, Yanan Xu, Mengfei Li, and Yujia Yang. "Migratory Perception in Edge-Assisted Internet of Vehicles." Electronics 12, no. 17 (August 30, 2023): 3662. http://dx.doi.org/10.3390/electronics12173662.

Abstract:
Autonomous driving technology heavily relies on the accurate perception of traffic environments, mainly through roadside cameras and LiDARs. Although several popular and robust 2D and 3D object detection methods exist, including R-CNN, YOLO, SSD, PointPillar, and VoxelNet, the perception range and accuracy of an individual vehicle can be limited by occlusion from other vehicles or buildings. A solution is to harness roadside perception infrastructure for vehicle–infrastructure cooperative perception, using edge computing for real-time intermediate feature extraction and V2X networks for transmitting these features to vehicles. This emerging migratory perception paradigm requires deploying exclusive cooperative perception services on edge servers and involves the migration of perception services to reduce response time. In such a setup, competition among multiple cooperative perception services exists due to limited edge resources. This study proposes a multi-agent deep reinforcement learning (MADRL)-based service scheduling method for migratory perception in vehicle–infrastructure cooperative perception, utilizing a discrete time-varying graph to model the relationship between service nodes and edge server nodes. This MADRL-based approach can efficiently address the challenges of service placement and migration in resource-limited environments, minimize latency, and maximize resource utilization for migratory perception services on edge servers.
26

Li, Xingcun, Feng Zeng, Guanyun Fang, Yinan Huang, and Xunlin Tao. "Load balancing edge server placement method with QoS requirements in wireless metropolitan area networks." IET Communications 14, no. 21 (December 2020): 3907–16. http://dx.doi.org/10.1049/iet-com.2020.0651.

27

Liu, Chunyu, Heli Zhang, Xi Li, and Hong Ji. "Dynamic Rendering-Aware VR Service Module Placement Strategy in MEC Networks." Wireless Communications and Mobile Computing 2022 (August 18, 2022): 1–17. http://dx.doi.org/10.1155/2022/1237619.

Abstract:
Combining multiaccess edge computing (MEC) technology and wireless virtual reality (VR) games is a promising computing paradigm. Offloading rendering tasks to an edge node can make up for the lack of computing resources on mobile devices. However, current offloading work has ignored the fact that when rendering is enabled at an MEC server, the rendering operation depends heavily on the environment deployed on that MEC server. In this paper, we propose a dynamic rendering-aware service module placement scheme for wireless VR games over MEC networks. In this scheme, the rendering tasks of VR games are offloaded to the MEC server and closely coupled with service module placement. At the same time, to further optimize the end-to-end latency of VR video delivery, the routing delay of the rendered VR video stream and the costs of service module migration are jointly considered in the proposed placement scheme. The goal of this scheme is to minimize the sum of the network costs over a long time while satisfying the delay constraint of each player. We model our strategy as a high-order, nonconvex, time-varying function. To solve this problem, we transform the placement problem into the min-cut problem by constructing a series of auxiliary graphs. Then, we propose a two-stage iterative algorithm based on convex optimization and graph theory to solve our objective function. Finally, extensive simulation results show that our proposed algorithm ensures lower end-to-end latency for players and lower network costs than the other baseline algorithms.
28

Son, Min-Sik, Sang-Hwa Chung, and Won-Suk Kim. "Fog-Server Placement Technique Based on Network Edge Area Traffic for a Fog-Computing Environment." Journal of KIISE 45, no. 6 (June 30, 2018): 598–610. http://dx.doi.org/10.5626/jok.2018.45.6.598.

29

Rui, Lanlan, Shuyun Wang, Zhili Wang, Ao Xiong, and Huiyong Liu. "A dynamic service migration strategy based on mobility prediction in edge computing." International Journal of Distributed Sensor Networks 17, no. 2 (February 2021): 155014772199340. http://dx.doi.org/10.1177/1550147721993403.

Abstract:
Mobile edge computing is a new computing paradigm which pushes cloud computing capabilities away from the centralized cloud to the network edge to satisfy the increasing number of low-latency tasks. However, challenges such as service interruption caused by user mobility occur. In order to address this problem, in this article, we first propose a multiple service placement algorithm, which initializes the placement of each service according to the user's initial location and their service requests. Furthermore, we build a network model and propose a Lyapunov optimization-based method with long-term cost constraints. Considering the importance of user mobility, we use the Kalman filter to correct the user's location to improve the success rate of communication between the device and the server. Compared with the traditional scheme, extensive simulation results show that the dynamic service migration strategy can effectively improve the service efficiency of mobile edge computing in mobile user scenarios, reduce the delay of requesting terminal nodes, and reduce the service interruptions caused by frequent user movement.
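The Kalman-filter correction step mentioned above can be illustrated with a minimal 1-D constant-position filter; the noise parameters and position reports are hypothetical, and a real mobility tracker would use a 2-D state with velocity.

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter: smooth noisy position reports so that
    the migration decision uses a corrected location estimate.
    q = process noise, r = measurement noise (assumed values)."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p += q                    # predict: position assumed roughly constant
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Noisy reports around a true position of 5.0 are pulled toward 5.0.
track = kalman_1d([5.2, 4.7, 5.4, 4.9, 5.1])
```

The filtered estimate jitters far less than the raw reports, which is what makes it a better input for deciding when a service should follow the user to a different edge server.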
30

Asghari, Ali, and Mohammad Karim Sohrabi. "Server placement in mobile cloud computing: A comprehensive survey for edge computing, fog computing and cloudlet." Computer Science Review 51 (February 2024): 100616. http://dx.doi.org/10.1016/j.cosrev.2023.100616.

31

Peng, Kai, Victor C. M. Leung, Xiaolong Xu, Lixin Zheng, Jiabin Wang, and Qingjia Huang. "A Survey on Mobile Edge Computing: Focusing on Service Adoption and Provision." Wireless Communications and Mobile Computing 2018 (October 10, 2018): 1–16. http://dx.doi.org/10.1155/2018/8267838.

Abstract:
Mobile cloud computing (MCC) integrates cloud computing (CC) into mobile networks, prolonging the battery life of mobile users (MUs). However, this mode may cause significant execution delay. To address the delay issue, a new mode known as mobile edge computing (MEC) has been proposed. MEC provides computing and storage services at the edge of the network, which enables MUs to execute applications efficiently and meet the delay requirements. In this paper, we present a comprehensive survey of MEC research from the perspective of service adoption and provision. We first give an overview of MEC, including the definition, architecture, and services of MEC. After that, we review the existing MU-oriented service adoption of MEC, i.e., offloading. More specifically, the study of offloading is divided into two key taxonomies: computation offloading and data offloading. In addition, each of them is further divided into single-MU and multi-MU offloading schemes. Then we survey edge server- (ES-) oriented service provision, including technical indicators, ES placement, and resource allocation. In addition, other issues such as applications of MEC and open issues are investigated. Finally, we conclude the paper.
32

Khan, Akif Quddus, Nikolay Nikolov, Mihhail Matskin, Radu Prodan, Dumitru Roman, Bekir Sahin, Christoph Bussler, and Ahmet Soylu. "Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines." Sensors 23, no. 2 (January 4, 2023): 564. http://dx.doi.org/10.3390/s23020564.

Abstract:
Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines could be executed at scale; however, the implementation of data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc. into account. The task becomes even more challenging when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines (i.e., computation on an on-premise server or a specific cloud, but storage integrated with StaaS) and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, utility of the individual parameters, and feasibility of dynamic selection of a storage option based on four primary user scenarios.
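A ranking over the five parameters named in the abstract can be sketched as a weighted sum over normalized scores. The parameter names come from the abstract; the normalization, the sample option values, and the bucket names are assumptions for illustration, not the paper's actual method.

```python
# Weighted-sum ranking of storage options over the five parameters from the
# abstract. Sample values, weights, and min-max normalization are assumptions.

def rank_storage(options, weights):
    """Return option names ordered best-first; cost-like metrics are inverted."""
    def normalize(values, benefit=True):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        return [(v - lo) / (hi - lo) if benefit else (hi - v) / (hi - lo)
                for v in values]

    names = list(options)
    cost = normalize([options[n]["cost"] for n in names], benefit=False)
    prox = normalize([options[n]["proximity_km"] for n in names], benefit=False)
    perf = normalize([options[n]["throughput_mbps"] for n in names])
    enc = [1.0 if options[n]["sse"] else 0.0 for n in names]
    scores = {
        n: weights["cost"] * cost[i] + weights["proximity"] * prox[i]
           + weights["performance"] * perf[i] + weights["encryption"] * enc[i]
        for i, n in enumerate(names)
    }
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical storage options and user preference weights.
options = {
    "bucket-a": {"cost": 0.023, "proximity_km": 40, "throughput_mbps": 900, "sse": True},
    "bucket-b": {"cost": 0.010, "proximity_km": 800, "throughput_mbps": 400, "sse": False},
}
weights = {"cost": 0.3, "proximity": 0.2, "performance": 0.3, "encryption": 0.2}
ranking = rank_storage(options, weights)
```

The user weights/preferences parameter appears here as the `weights` dictionary; dynamic selection would simply re-run the ranking as measurements change.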
33

Satish Kumar Mahariya, Awaneesh Kumar, Rajesh Singh, Anita Gehlot, Shaik Vaseem Akram, Bhekisipho Twala, Mohammed Ismail Iqbal, and Neeraj Priyadarshi. "Smart Campus 4.0: Digitalization of University Campus with Assimilation of Industry 4.0 for Innovation and Sustainability." Journal of Advanced Research in Applied Sciences and Engineering Technology 32, no. 1 (August 19, 2023): 120–38. http://dx.doi.org/10.37934/araset.32.1.120138.

Abstract:
According to the United Nations, global sustainability in terms of social, economic, and environmental issues must be achieved by 2030. SDGs 4 and 9 are related to education and strengthen the attainment of quality education and infrastructure innovation. Resilient infrastructure plays a significant role in strengthening a campus in terms of education, management, placement, and environment. All these aspects come under the smart campus. Smart Campus 4.0 is the amalgamation of a multitude of Industry 4.0 enabling technologies for delivering smart and innovative facilities with an aspect of sustainability. Previous studies have shown that the sustainable development goals (SDGs) can be achieved with the amalgamation of Industry 4.0 enabling technologies on campus, such as cloud computing, artificial intelligence (AI), Internet of Things (IoT), edge/fog computing, blockchain, robotic process automation (RPA), drones, augmented reality (AR), virtual reality (VR), big data, digital twins, and the metaverse. The main objective of this study is to provide a detailed discussion of all Industry 4.0 enabling technologies related to the smart campus in a single study. Among the findings: an IoT-based drone system is intended for ground patrolling, and a cloud server is used to develop a smart campus energy monitoring system; AI supports a campus placement prediction model; and a cloud and edge computing architecture is used to build an intelligent air-quality monitoring system. The novelty of the study is that it discusses all Industry 4.0 enabling technologies for a smart campus along with challenges, recommendations, and future directions.
34

Sulieman, Nour Alhuda, Lorenzo Ricciardi Celsi, Wei Li, Albert Zomaya, and Massimo Villari. "Edge-Oriented Computing: A Survey on Research and Use Cases." Energies 15, no. 2 (January 10, 2022): 452. http://dx.doi.org/10.3390/en15020452.

Abstract:
Edge computing is a distributed computing paradigm such that client data are processed at the periphery of the network, as close as possible to the originating source. Since the 21st century has come to be known as the century of data due to the rapid increase in the quantity of exchanged data worldwide (especially in smart city applications such as autonomous vehicles), collecting and processing such data from sensors and Internet of Things devices operating in real time from remote locations and inhospitable operating environments almost anywhere in the world is a relevant emerging need. Indeed, edge computing is reshaping information technology and business computing. In this respect, the paper is aimed at providing a comprehensive overview of what edge computing is as well as the most relevant edge use cases, tradeoffs, and implementation considerations. In particular, this review article is focused on highlighting (i) the most recent trends relative to edge computing emerging in the research field and (ii) the main businesses that are taking operations at the edge as well as the most used edge computing platforms (both proprietary and open source). First, the paper summarizes the concept of edge computing and compares it with cloud computing. After that, we discuss the challenges of optimal server placement, data security in edge networks, hybrid edge-cloud computing, simulation platforms for edge computing, and state-of-the-art improved edge networks. Finally, we explain the edge computing applications to 5G/6G networks and the industrial Internet of Things. Several studies review a set of attractive edge features, system architectures, and edge application platforms that impact different industry sectors. The experimental results achieved in the cited works are reported in order to prove how edge computing improves the efficiency of Internet of Things networks. On the other hand, the work highlights possible vulnerabilities and open issues emerging in the context of edge computing architectures, thus proposing future directions to be investigated.
35

Huang, Yunlong, and Yanqiu Wang. "The Application of Graph Neural Network Based on Edge Computing in English Teaching Mode Reform." Wireless Communications and Mobile Computing 2022 (March 12, 2022): 1–12. http://dx.doi.org/10.1155/2022/2611923.

Abstract:
The latest developments in edge computing have paved the way for more efficient data processing, especially for simple tasks and lightweight models at the edge of the network, sinking network functions from the cloud to the network edge, closer to users. For the reform of the English teaching mode, this is also an opportunity to integrate information technology, providing new ideas and new methods for the optimization of English teaching. It improves the efficiency of English reading teaching, stimulates interest in English learning, enhances students’ autonomous learning ability, and creates favorable conditions for students’ learning and development. This paper designs a MEC-based GNN (GCN-GAN) user preference prediction and recommendation model, which can recommend high-quality video or picture-text content to the local MEC server based on user browsing history and user preferences. In the experiments, the LFU-LRU joint cache placement strategy used in this article achieves a cache hit rate of up to 99%. Comparing the GCN-GAN model with other traditional graph neural network models, caching experiments are performed on the Douban English book and Douban video data sets. The GCN-GAN model scores higher on the cache task, and its highest F1 value reaches 86.7.
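The "LFU-LRU joint" idea in this abstract can be sketched as a cache that evicts the least-frequently-used entry and breaks frequency ties by least-recent use. This is a generic illustration of such a joint policy, not the paper's exact placement strategy; the class name and capacity are assumptions.

```python
# Sketch of a joint LFU-LRU cache: evict the least-frequently-used item,
# breaking frequency ties by the least recent use.

class LfuLruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}      # key -> value
        self.freq = {}       # key -> access count (LFU signal)
        self.last_used = {}  # key -> logical timestamp (LRU tie-break)
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.freq[key] = self.freq.get(key, 0) + 1
        self.last_used[key] = self.clock

    def get(self, key):
        if key not in self.store:
            return None
        self._touch(key)
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Victim: lowest frequency first, then oldest last-use.
            victim = min(self.store,
                         key=lambda k: (self.freq[k], self.last_used[k]))
            for d in (self.store, self.freq, self.last_used):
                del d[victim]
        self.store[key] = value
        self._touch(key)

cache = LfuLruCache(capacity=2)
cache.put("video1", "v1")
cache.put("video2", "v2")
cache.get("video1")        # video1 is now the more frequent item
cache.put("video3", "v3")  # evicts video2, the least-frequent entry
```

In the paper's setting, the recommendation model decides which contents are worth placing; a policy like this decides which cached item to evict when the MEC server fills up.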
36

Ning, Han. "Design of the Physical Education Teaching System by Using Edge Calculation and the Fuzzy Clustering Algorithm." Mobile Information Systems 2022 (September 27, 2022): 1–10. http://dx.doi.org/10.1155/2022/7473614.

Abstract:
With the increasing number and variety of terminals accessing the network, real-time processing of increasingly complex Internet of Things applications has become more and more difficult. On the one hand, application fields such as virtual reality, ultra-high-definition live video, and intelligent manufacturing place complex, diverse, and real-time business requirements on the cloud computing environment. On the other hand, modern IoT terminals have shortcomings such as insufficient computing power and limited battery capacity, which make it difficult to provide real-time processing for Internet applications. The emergence of edge computing services provides effective solutions for these applications, which can improve local data processing capacity, shorten data transmission delay, and reduce hardware cost to a certain extent. Since computation offloading, resource allocation, cache content placement, and edge server deployment are the basis of localized data processing and resource allocation, their performance is closely related to the efficiency and accuracy of data processing in the whole system. In current applications, online learning resources and cutting-edge software and hardware teaching environments continue to emerge, shaping a unique smart classroom. The system proposed in this paper covers not only traditional physical education knowledge but also advanced visualization principles, and combining them can promote the practice of the new concept of physical education. Through research on edge computing and resource allocation, this paper applies them to the development of the cloud computing environment and the sports teaching visualization system so that both can flourish.
37

Caro-Via, Selene, Ester Vidaña-Vila, Gerardo José Ginovart-Panisello, Carme Martínez-Suquía, Marc Freixes, and Rosa Ma Alsina-Pagès. "Edge-Computing Meshed Wireless Acoustic Sensor Network for Indoor Sound Monitoring." Sensors 22, no. 18 (September 17, 2022): 7032. http://dx.doi.org/10.3390/s22187032.

Abstract:
This work presents the design of a wireless acoustic sensor network (WASN) that monitors indoor spaces. The proposed network would enable the acquisition of valuable information on the behavior of the inhabitants of the space. This WASN has been conceived to work in any type of indoor environment, including houses, hospitals, universities or even libraries, where the tracking of people can give relevant insight, with a focus on ambient assisted living environments. The proposed WASN has several priorities and differences compared to the literature: (i) presenting a low-cost flexible sensor able to monitor wide indoor areas; (ii) balance between acoustic quality and microphone cost; and (iii) good communication between nodes to increase the connectivity coverage. A potential application of the proposed network could be the generation of a sound map of a certain location (house, university, offices, etc.) or, in the future, the acoustic detection of events, giving information about the behavior of the inhabitants of the place under study. Each node of the network comprises an omnidirectional microphone and a computation unit, which processes acoustic information locally following the edge-computing paradigm to avoid sending raw data to a cloud server, mainly for privacy and connectivity purposes. Moreover, this work explores the placement of acoustic sensors in a real scenario, following acoustic coverage criteria. The proposed network aims to encourage the use of real-time non-invasive devices to obtain behavioral and environmental information, in order to take decisions in real-time with the minimum intrusiveness in the location under study.
38

Xiao, Tuo, Taiping Cui, S. M. Riazul Islam, and Qianbin Chen. "Joint Content Placement and Storage Allocation Based on Federated Learning in F-RANs." Sensors 21, no. 1 (December 31, 2020): 215. http://dx.doi.org/10.3390/s21010215.

Abstract:
With the rapid development of mobile communication and the sharp increase of smart mobile devices, wireless data traffic has experienced explosive growth in recent years, thus injecting tremendous traffic into the network. Fog Radio Access Network (F-RAN) is a promising wireless network architecture to accommodate the fast growing data traffic and improve the performance of network service. By deploying content caching in F-RAN, fast and repeatable data access can be achieved, which reduces network traffic and transmission latency. Due to the capacity limit of caches, it is essential to predict the popularity of the content and pre-cache them in edge nodes. In general, the classic prediction approaches require the gathering of users’ personal information at a central unit, giving rise to users’ privacy issues. In this paper, we propose an intelligent F-RANs framework based on federated learning (FL), which does not require gathering user data centrally on the server for training, so it can effectively ensure the privacy of users. In the work, federated learning is applied to user demand prediction, which can accurately predict the content popularity distribution in the network. In addition, to minimize the total traffic cost of the network in consideration of user content requests, we address the allocation of storage resources and content placement in the network as an integrated model and formulate it as an Integer Linear Programming (ILP) problem. Due to the high computational complexity of the ILP problem, two heuristic algorithms are designed to solve it. Simulation results show that the performance of our proposed algorithm is close to the optimal solution.
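The privacy mechanism described here, training popularity predictors without centralizing user data, can be illustrated with a miniature federated-averaging loop: each client fits a local model on its private request counts, and the server averages only the model parameters. This is a sketch of the general FL idea, not the paper's F-RAN model; the learning rate, update rule, and content items are assumptions.

```python
# Miniature federated averaging for content-popularity prediction: raw
# request logs stay on the clients; only model parameters reach the server.

def local_popularity(model, requests, lr=0.5, epochs=10):
    """One client's update: move its popularity estimates toward the
    empirical request frequencies, without sharing the raw requests."""
    total = sum(requests.values())
    for _ in range(epochs):
        for item, count in requests.items():
            target = count / total
            model[item] += lr * (target - model[item])
    return model

def fed_avg(global_model, client_requests):
    """Server round: fan out the global model, average the local updates."""
    updates = [local_popularity(dict(global_model), reqs)
               for reqs in client_requests]
    # The server sees only parameters, never per-user request logs.
    return {item: sum(u[item] for u in updates) / len(updates)
            for item in global_model}

items = ["a", "b"]
global_model = {i: 0.0 for i in items}
clients = [{"a": 8, "b": 2}, {"a": 6, "b": 4}]
global_model = fed_avg(global_model, clients)
```

The averaged model approximates the network-wide popularity distribution (here, item "a" is requested more often overall), which is what the placement step in the paper consumes.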
39

Velayudhan, Nibi Kulangara, Aiswarya S, Aryadevi Remanidevi Devidas, and Maneesha Vinodini Ramesh. "Delay and Energy Efficient Offloading Strategies for an IoT Integrated Water Distribution System in Smart Cities." Smart Cities 7, no. 1 (January 16, 2024): 179–207. http://dx.doi.org/10.3390/smartcities7010008.

Abstract:
In the fast-moving world of information and communications technologies, one significant issue in metropolitan cities is water scarcity and the need for an intelligent water distribution system for sustainable water management. An IoT-based monitoring system can improve water distribution system management and mitigate challenges in the distribution network such as leakage, breakage, theft, overflow, and dry running of pumps. However, the increase in the number of communication and sensing devices within smart cities has posed challenges to existing communication networks due to increased delay and energy consumption. This work presents different strategies for delay- and energy-efficient offloading in IoT-integrated water distribution systems in smart cities. Different IoT-enabled communication network topologies are proposed, considering the water network design parameters, land cover patterns, and wireless channels for communication. From these topologies, and by considering all the relevant communication parameters, the optimum communication network architecture to continuously monitor a water distribution network in a metropolitan city in India is identified. As a case study, an IoT design and analysis model is studied for a secondary metropolitan city in India; the selected study area is in Kochi, India. Based on the site-specific model and the land use and land cover pattern, delay and energy modeling of the IoT-based water distribution system is discussed. Algorithms for node categorisation and edge-to-fog allocation are discussed, and numerical analyses of the delay and energy models are included. An approximation of the delay and energy of the network is calculated using these models. On the basis of these results and the state transition diagrams, the optimum placement of fog nodes linked with edge nodes and a cloud server can be carried out. Also, by considering different scenarios, up to a 40% improvement in energy efficiency can be achieved by incorporating a greater number of states in the state transition diagram. These strategies could be utilized in implementing delay- and energy-efficient IoT-enabled communication networks for site-specific applications.
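The delay/energy trade-off behind such offloading strategies can be reduced to a toy decision rule: process a reading on the edge node, or transmit it to a faster fog node, whichever minimizes a weighted delay-energy cost. The linear cost model, the weights, and all numeric rates below are illustrative assumptions, not the paper's models.

```python
# Toy edge-vs-fog offloading decision for one sensor reading.
# All rates, energies, and the weighted-sum cost model are assumed values.

def cost(delay_s, energy_j, w_delay=0.7, w_energy=0.3):
    """Scalarized delay-energy cost; lower is better."""
    return w_delay * delay_s + w_energy * energy_j

def decide(bits, edge_rate, fog_rate, link_rate, e_cpu, e_tx):
    """Pick the cheaper of local (edge) processing vs. offloading to fog."""
    # Local processing: compute delay and per-bit CPU energy only.
    local = cost(bits / edge_rate, bits * e_cpu)
    # Offload: transmission delay plus faster remote processing,
    # paying per-bit radio transmission energy instead.
    offload = cost(bits / link_rate + bits / fog_rate, bits * e_tx)
    return "fog" if offload < local else "edge"

choice = decide(bits=2e6,        # 2 Mb reading
                edge_rate=1e6,   # slow edge CPU (bits/s)
                fog_rate=2e7,    # faster fog CPU
                link_rate=1e7,   # edge-to-fog link
                e_cpu=2e-7,      # J per bit processed locally
                e_tx=5e-8)       # J per bit transmitted
```

With these numbers the fog option wins; shrinking `link_rate` (e.g., dense land cover degrading the wireless channel) eventually flips the decision back to the edge.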
40

Qazi, Faiza, Osman Khalid, Rao Naveed Bin Rais, Imran Ali Khan, and Atta ur Rehman Khan. "Optimal Content Caching in Content-Centric Networks." Wireless Communications and Mobile Computing 2019 (January 23, 2019): 1–15. http://dx.doi.org/10.1155/2019/6373960.

Abstract:
Content-Centric Networking (CCN) is a novel architecture that is shifting host-centric communication to a content-centric infrastructure. In recent years, in-network caching in CCNs has received significant attention from the research community. To improve the cache hit ratio, most of the existing schemes store the content at the maximum number of routers along the content's download path from the source. While this helps increase cache hits and reduce delay and server load, the unnecessary caching significantly increases network cost, bandwidth utilization, and storage consumption. To address the limitations of existing schemes, we propose an optimization-based in-network caching policy, named opt-Cache, which makes more efficient use of available cache resources in order to reduce overall network utilization with reduced latency. Unlike existing schemes that mostly focus on a single factor to improve cache performance, we optimize the caching process by simultaneously considering various factors, e.g., content popularity, bandwidth, and latency, under a given set of constraints, e.g., available cache space, content availability, and careful eviction of existing cached contents. Our scheme determines an optimized set of contents to be cached at each node towards the edge, based on content popularity and content distance from the source. Contents that are requested less frequently have their popularity decreased over time. The optimal placement of contents across the CCN routers allows an overall reduction in bandwidth and latency. The proposed scheme is compared with existing schemes and shows better performance in terms of bandwidth consumption and latency while using fewer network resources.
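Two ingredients of this abstract, time-decayed popularity and distance from the content source, can be combined into a simple caching-utility score: a content is worth caching when it is still popular and caching it saves many hops. The exponential decay, the half-life, and the greedy selection below are illustrative assumptions in the spirit of opt-Cache, not its actual formulation.

```python
# Caching-utility sketch: value = time-decayed popularity x hop savings.
# Decay model, half-life, and sample data are assumptions for illustration.

import math

def content_utility(request_times, now, hops_from_source, half_life=60.0):
    """Sum of exponentially decayed request weights, scaled by hop distance."""
    decay = math.log(2) / half_life
    popularity = sum(math.exp(-decay * (now - t)) for t in request_times)
    return popularity * hops_from_source

def choose_cache_set(candidates, now, capacity):
    """Greedily keep the highest-utility contents that fit in the cache."""
    scored = sorted(candidates,
                    key=lambda c: content_utility(c["requests"], now, c["hops"]),
                    reverse=True)
    return [c["name"] for c in scored[:capacity]]

# Hypothetical contents: request timestamps and hop count from their source.
candidates = [
    {"name": "hot-near", "requests": [495, 496, 499], "hops": 1},
    {"name": "hot-far", "requests": [495, 496, 499], "hops": 4},
    {"name": "stale", "requests": [1, 2, 3], "hops": 4},
]
kept = choose_cache_set(candidates, now=500, capacity=2)
```

Equally popular content that is far from its source ranks above nearby content (more hops saved per hit), and old requests fade away, which is exactly the eviction pressure the abstract describes.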
41

Reyana, A., Sandeep Kautish, Khalid Abdulaziz Alnowibet, Hossam M. Zawbaa, and Ali Wagdy Mohamed. "Opportunities of IoT in Fog Computing for High Fault Tolerance and Sustainable Energy Optimization." Sustainability 15, no. 11 (May 27, 2023): 8702. http://dx.doi.org/10.3390/su15118702.

Abstract:
Today, the importance of enhanced quality of service and energy optimization has promoted research into sensor applications such as pervasive health monitoring and distributed computing. In general, the resulting sensor data are stored on a cloud server for future processing. For this purpose, the use of fog computing from a real-world perspective has recently emerged, utilizing end-user nodes and neighboring edge devices to perform computation and communication. This paper aims to develop a quality-of-service-based energy optimization (QoS-EO) scheme for wireless sensor environments deployed in fog computing. The fog nodes deployed in specific geographical areas cover the sensor activity performed in those areas and report the logical status of the entire system. The implemented techniques enable services in a fog-collaborated WSN environment. Thus, the proposed scheme performs quality-of-service placement and optimizes the network energy. The results show a maximum turnaround time of 8 ms, a minimum of 1 ms, and an average of 3 ms. The calculated costs indicate that as the number of iterations increases, the path cost decreases, demonstrating the efficacy of the proposed technique. The CPU execution delay was reduced to a minimum of 0.06 s. In comparison, the proposed QoS-EO scheme has a lower network usage of 611,643.3 and a lower execution cost of 83,142.2. Thus, the results show the best cost estimation, reliability, and performance of data transfer in a short time, with a high level of network availability, throughput, and performance guarantee.
42

Kosta, Sokol, Francisco Airton Silva, Patricia Takako Endo, Daniel Carvalho, and Laécio Rodrigues. "Edge servers placement in mobile edge computing using stochastic Petri nets." International Journal of Computational Science and Engineering 23, no. 4 (2020): 352. http://dx.doi.org/10.1504/ijcse.2020.10035558.

43

Carvalho, Daniel, Laécio Rodrigues, Patricia Takako Endo, Sokol Kosta, and Francisco Airton Silva. "Edge servers placement in mobile edge computing using stochastic Petri nets." International Journal of Computational Science and Engineering 23, no. 4 (2020): 352. http://dx.doi.org/10.1504/ijcse.2020.113181.

44

Fang, Juan, Kai Li, Juntao Hu, Xiaobin Xu, Ziyi Teng, and Wei Xiang. "SAP: An IoT Application Module Placement Strategy Based on Simulated Annealing Algorithm in Edge-Cloud Computing." Journal of Sensors 2021 (October 7, 2021): 1–12. http://dx.doi.org/10.1155/2021/4758677.

Abstract:
The Internet of Things (IoT) is growing rapidly and provides the foundation for the development of smart cities, smart homes, and health care. With more and more devices connecting to the Internet, huge amounts of data are produced, creating a great challenge for data processing. Traditional cloud computing suffers from long delays; edge computing is an extension of cloud computing, and processing data at the edge of the network can reduce the long processing delay of cloud computing. Due to the limited computing resources of edge servers, resource management of edge servers has become a critical research problem. However, most existing research on the task scheduling problem does not consider the structural characteristics of the subtask chain between each pair of sensors and actuators. To reduce the processing latency and energy consumption of the edge-cloud system, we propose a multilayer edge computing system. The application deployed in the system is modeled as a directed graph. To make full use of the edge servers, we propose an application module placement strategy using the Simulated Annealing module Placement (SAP) algorithm. The modules in an application are bound to each sensor. The SAP algorithm is designed to find a module placement scheme for each sensor and to generate a module chain, including the mapping of modules to servers, for each sensor. Thus, the edge servers can transmit the tuples in the network along the module chain. To evaluate the efficacy of our algorithm, we simulate the strategy in iFogSim. Results show the scheme is able to achieve significant reductions in latency and energy consumption.
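A simulated-annealing module placement like the one this abstract describes can be sketched generically: assign modules to servers, propose random single-module moves, and accept worse moves with a temperature-dependent probability. The cost model (per-module latency plus a capacity penalty), the cooling schedule, and all numbers are illustrative assumptions, not the paper's SAP algorithm.

```python
# Generic simulated-annealing placement: assign application modules to edge
# servers, minimizing latency subject to server capacity. All parameters
# (cost model, cooling schedule, penalty weight) are assumed for illustration.

import math
import random

def placement_cost(assignment, latency, capacity):
    """Summed module latency plus a heavy penalty for overloaded servers."""
    load = [0] * len(capacity)
    total = 0.0
    for module, server in enumerate(assignment):
        total += latency[module][server]
        load[server] += 1
    overload = sum(max(0, l - c) for l, c in zip(load, capacity))
    return total + 100.0 * overload

def anneal(latency, capacity, steps=2000, t0=5.0, cooling=0.995, seed=1):
    rng = random.Random(seed)
    n_modules, n_servers = len(latency), len(capacity)
    current = [rng.randrange(n_servers) for _ in range(n_modules)]
    best = list(current)
    t = t0
    for _ in range(steps):
        # Neighbor: move one random module to a random server.
        cand = list(current)
        cand[rng.randrange(n_modules)] = rng.randrange(n_servers)
        delta = (placement_cost(cand, latency, capacity)
                 - placement_cost(current, latency, capacity))
        # Accept improvements always; accept worse moves with prob e^(-d/t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if (placement_cost(current, latency, capacity)
                    < placement_cost(best, latency, capacity)):
                best = list(current)
        t *= cooling  # geometric cooling
    return best

# 3 modules, 2 servers; latency[m][s] = delay of module m on server s.
latency = [[1.0, 9.0], [1.0, 9.0], [8.0, 2.0]]
capacity = [2, 2]
best = anneal(latency, capacity)
```

With this toy instance the optimum places the first two modules on server 0 and the third on server 1; the annealing loop finds it because the high initial temperature lets it escape poor random starts.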
45

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies." Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): iii—vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Abstract:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies Cloud computing has been established as the most popular as well as suitable computing infrastructure providing on-demand, scalable and pay-as-you-go computing resources and services for the state-of-the-art ICT applications which generate a massive amount of data. Though Cloud is certainly the most fitting solution for most of the applications with respect to processing capability and storage, it may not be so for the real-time applications. The main problem with Cloud is the latency as the Cloud data centres typically are very far from the data sources as well as the data consumers. This latency is ok with the application domains such as enterprise or web applications, but not for the modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicle, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart building, smart city, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of application is that the latency between the data generation and consumption should be minimal. For that, the generated data need to be processed locally, instead of sending to the Cloud. This approach is known as Edge computing where the data processing is done at the network edge in the edge devices such as set-top boxes, access points, routers, switches, base stations etc. which are typically located at the edge of the network. These devices are increasingly being incorporated with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to the Emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures are proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.). 
All of these enable the IoT and sensor data to be processed closer to the data sources. But, among them, Fog computing, a Cisco initiative, has attracted the most attention of people from both academia and corporate and has been emerged as a new computing-infrastructural paradigm in recent years. Though Fog computing has been proposed as a different computing architecture than Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends the Cloud services to network edges for providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) are supposed to pre-process the data, serve the need of the associated applications preliminarily, and forward the data to the Cloud if the data are needed to be stored and analysed further. Fog computing enhances the benefits from smart devices operational not only in network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and with these services provisioning and management, huge potential can be visualized to enhance intelligence within computing networks to realize context-awareness, high response time, and network traffic offloading. Several possibilities of Fog computing are already established. For example, sustainable smart cities, smart grid, smart logistics, environment monitoring, video surveillance, etc. To design and implementation of Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. are needed to be addressed. 
Also, to make Fog compatible with Cloud several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, workload balance between Fog and Cloud, and so on need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of the Scalable Computing: Practice and Experience. We had received 20 Research Papers and out of which 14 Papers are selected for Publication. The aim of this special issue is to highlight Recent Trends and Future of Fog and Edge Computing, Services and Enabling technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to Fog Computing, Cloud Computing and Edge Computing. Sujata Dash et al. contributed a paper titled “Edge and Fog Computing in Healthcare- A Review” in which an in-depth review of fog and mist computing in the area of health care informatics is analysed, classified and discussed. The review presented in this paper is primarily focussed on three main aspects: The requirements of IoT based healthcare model and the description of services provided by fog computing to address then. The architecture of an IoT based health care system embedding fog computing layer and implementation of fog computing layer services along with performance and advantages. In addition to this, the researchers have highlighted the trade-off when allocating computational task to the level of network and also elaborated various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al. in the paper titled “Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware” proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in hourly billing cycle while giving the quality of service to end-users. 
The proposed technique use time series workload forecasting, CPU utilization and response time in the analysis phase. The proposed technique is tested using CloudSim simulator and R language is used to implement prediction model on ClarkNet weblog. The proposed approach is compared with two baseline approaches i.e. Cost-aware (LRM) and (ARMA). The response time, CPU utilization and predicted request are applied in the analysis and planning phase for scaling decisions. The profit-aware surplus VM selection policy used in the execution phase for select the appropriate VM for scale-down. The result shows that the proposed model for web applications provides fair utilization of resources with minimum cost, thus provides maximum profit to application provider and QoE to the end users. Akshi kumar and Abhilasha Sharma in the paper titled “Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance” utilized a semantic knowledge model for investigating public opinion towards adaption of fog enabled services for governance and comprehending the significance of two s-components (sentic and social) in aforesaid structure that specifically visualize fog enabled Sentic-Social Governance. The results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology driven TF-IDF feature extraction to find the best opinion mining model with optimal accuracy. The results concluded that implementation of ontology driven opinion mining for feature extraction in polarity classification outperforms the traditional TF-IDF method validated over baseline supervised learning algorithms with an average of 7.3% improvement in accuracy and approximately 38% reduction in features has been reported. 
Avinash Kaur and Pooja Gupta in the paper titled “Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing” proposed novel hybrid balanced task clustering algorithm using the parameter of impact factor of workflows along with the structure of workflow and using this technique, tasks can be considered for clustering either vertically or horizontally based on value of impact factor. The testing of the algorithm proposed is done on Workflowsim- an extension of CloudSim and DAG model of workflow was executed. The Algorithm was tested on variables- Execution time of workflow and Performance Gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB) and Horizontal Impact Factor Balancing (HIFB) and results stated that proposed algorithm is almost 5-10% better in makespan time of workflow depending on the workflow used. Pijush Kanti Dutta Pramanik et al. in the paper titled “Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges” presented a comprehensive statistical survey of the various commercial CPUs, GPUs, SoCs for smartphones confirming the capability of the SCC as an alternative to HPC. An exhaustive survey is presented on the present and optimistic future of the continuous improvement and research on different aspects of smartphone battery and other alternative power sources which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam in the paper titled “The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud” proposed a novel method to detect slow HTTP DDoS attacks in cloud to overcome the issue of consuming all available server resources and making it unavailable to the real users. The proposed method is implemented using OpenStack cloud platform with slowHTTPTest tool. 
The results state that the proposed technique detects the attack efficiently. Mandeep Kaur and Rajni Mohana in the paper titled “Static Load Balancing Technique for Geographically partitioned Public Cloud” proposed a novel approach to load balancing in a partitioned public cloud that combines centralized and decentralized approaches, assuming the presence of a fog layer. A load balancer entity is used for decentralized load balancing at the partitions, and a controller entity is used at the centralized level to balance the overall load across partitions. Results are compared with the First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms; in this work, the researchers compared the waiting time, finish time, and actual run time of tasks under these algorithms. To reduce the number of unhandled jobs, a new load state is introduced which checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and to reduce the wastage of resources that may occur due to predefined threshold values. Mukta and Neeraj Gupta in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space” proposed an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate the available bandwidth (ABW) on a link. The major contributions of the proposed work are: i) it uses mathematical models based on renewal theory to calculate the collision probability of data packets, which makes the process simple and accurate; ii) it considers mobility in 3-D space to predict link failures and provide accurate admission control. To test the proposed technique, the researchers used the NS-2 simulator to compare AABWM with AODV, ABE, IAB, and IBEM on throughput, packet loss ratio, and data delivery. The results state that AABWM performs better than the other approaches.
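As a rough intuition for the kind of link ABW estimation AABWM performs, available bandwidth can be approximated from the idle-time fractions of sender and receiver and a collision probability. This is a toy sketch, not the paper's renewal-theory model: `p_collision` is simply taken as a given input, and the pessimistic idle-overlap bound is an assumption of the example.

```python
# Much-simplified sketch of link available-bandwidth (ABW) estimation:
# capacity scaled by the fraction of time both endpoints are idle,
# discounted by a collision probability. The renewal-theory derivation
# of p_collision used by AABWM is NOT reproduced here.

def available_bandwidth(capacity_mbps, idle_sender, idle_receiver,
                        p_collision):
    """idle_* are fractions of time (0..1) each node senses the medium free."""
    # Pessimistic overlap assumption: the time both nodes are simultaneously
    # idle is at least max(0, idle_s + idle_r - 1).
    overlap = max(0.0, idle_sender + idle_receiver - 1.0)
    return capacity_mbps * overlap * (1.0 - p_collision)

# 54 Mbps channel, 70%/80% idle time, 10% collision probability -> ~24.3 Mbps
print(available_bandwidth(54.0, 0.7, 0.8, 0.1))
```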
R. Sridharan and S. Domnic in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment” proposed a novel heuristic algorithm, IcAPER (Inter-communication Aware Placement for Elastic Requests). The proposed algorithm uses a machine from the network neighborhood for placement once the current resource is fully utilized by the application. The performance of the IcAPER algorithm is compared with the First Come First Serve (FCFS), Random, and First Fit Decreasing (FFD) algorithms on the parameters of (a) resource utilization, (b) resource fragmentation, and (c) the number of requests whose intercommunicating tasks are placed on the same PM, using the CloudSim simulator. Simulation results show that IcAPER maps 34% more tasks onto the same PM and also increases resource utilization by 13% while decreasing resource fragmentation by 37.8% when compared to the other algorithms. Velliangiri S. et al. in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment” proposed a novel security protocol comprising two stages. In the first stage, groups are created using the trust factor and a key distribution security protocol is developed; it performs the communication process among the virtual machine communication nodes, creating several groups based on the cluster and trust factor methods. In the second stage, an ECC (Elliptic Curve Cryptography) based distribution security protocol is developed. The performance of the Trust Factor Based Key Distribution protocol is compared with the existing ECC and Diffie-Hellman key exchange techniques. The results state that the proposed security protocol provides more secure communication and better resource utilization than the ECC and Diffie-Hellman key exchange techniques in the hybrid cloud. Vivek Kumar Prasad et al.
in the paper titled “Influence of Monitoring: Fog and Edge Computing” discussed the various techniques involved in monitoring for edge and fog computing and their advantages, in addition to a case study based on a healthcare monitoring system. Avinash Kaur et al. elaborated a comprehensive view of existing data placement schemes proposed in the literature for cloud computing. Further, data placement schemes are classified based on their access capabilities and objectives, and a comparison of the schemes is provided. Parminder Singh et al. presented a comprehensive review of auto-scaling techniques for web applications in cloud computing. A complete taxonomy of the reviewed articles is built on varied parameters such as auto-scaling approach, resources, monitoring tool, experiment, workload, and metric. Simar Preet Singh et al. in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform” proposed a novel scheme to improve user contentment by improving the cost-to-operation-length ratio, reducing customer churn, and boosting operational revenue. The proposed scheme reduces the queue size by effectively allocating the resources, resulting in quicker completion of user workflows. The proposed method is evaluated against a state-of-the-art scheme with a non-power-aware task scheduling mechanism. The results were analyzed using the parameters of energy, SLA infringement, and workflow execution delay. The performance of the proposed scheme was analyzed in various experiments particularly designed to examine various aspects of workflow processing on the given fog resources. The LRR model (35.85 kWh) was found to be the most efficient on the basis of average energy consumption in comparison to the LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh) and IQR (47.87 kWh) models. The LRR model was also observed as the leader when compared on the basis of the number of VM migrations.
The LRR model (2520 VMs) was observed as the best contender on the basis of the mean number of VM migrations, in comparison with the LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs) and IQR (5352 VMs) models.
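For reference, First Fit Decreasing (FFD), one of the baseline placement algorithms IcAPER is compared against above, can be sketched as follows; the PM capacity and per-task resource demands below are invented for the example.

```python
# Sketch of First Fit Decreasing (FFD) placement: tasks are sorted by
# resource demand (largest first) and each is placed on the first
# physical machine (PM) with enough remaining capacity. Demands and
# capacity are illustrative, not taken from the paper.

def ffd_place(demands, pm_capacity):
    pms = []          # remaining capacity of each opened PM
    placement = {}    # task -> PM index
    for task, demand in sorted(demands.items(),
                               key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(pms):
            if demand <= free:
                pms[i] -= demand
                placement[task] = i
                break
        else:  # no existing PM fits: open a new one
            pms.append(pm_capacity - demand)
            placement[task] = len(pms) - 1
    return placement, pms

demands = {"t1": 6, "t2": 5, "t3": 4, "t4": 3, "t5": 2}
placement, leftover = ffd_place(demands, pm_capacity=10)
print(placement, leftover)  # five tasks packed onto two fully used PMs
```

FFD is demand-agnostic about communication; IcAPER's contribution above is precisely to also account for intercommunicating tasks, which this baseline ignores.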
46

Hou, Peng, Bo Li, Zongshan Wang, and Hongwei Ding. "Joint hierarchical placement and configuration of edge servers in C-V2X." Ad Hoc Networks 131 (June 2022): 102842. http://dx.doi.org/10.1016/j.adhoc.2022.102842.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Cao, Kun, Liying Li, Yangguang Cui, Tongquan Wei, and Shiyan Hu. "Exploring Placement of Heterogeneous Edge Servers for Response Time Minimization in Mobile Edge-Cloud Computing." IEEE Transactions on Industrial Informatics 17, no. 1 (January 2021): 494–503. http://dx.doi.org/10.1109/tii.2020.2975897.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

Khoshkholghi, Mohammad Ali, Michel Gokan Khan, Kyoomars Alizadeh Noghani, Javid Taheri, Deval Bhamare, Andreas Kassler, Zhengzhe Xiang, Shuiguang Deng, and Xiaoxian Yang. "Service Function Chain Placement for Joint Cost and Latency Optimization." Mobile Networks and Applications 25, no. 6 (November 21, 2020): 2191–205. http://dx.doi.org/10.1007/s11036-020-01661-w.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Abstract:
Network Function Virtualization (NFV) is an emerging technology to consolidate network functions onto high volume storage, servers and switches located anywhere in the network. Virtual Network Functions (VNFs) are chained together to provide a specific network service; such chains are called Service Function Chains (SFCs). With regard to Quality of Service (QoS) requirements and network features and states, SFCs are served by performing two tasks: VNF placement and link embedding on the substrate networks. Reducing deployment cost is a desired objective for all service providers in cloud/edge environments, as it increases their profit from the demanded services. However, increasing resource utilization in order to decrease deployment cost may increase service latency, and consequently increase SLA violations and decrease user satisfaction. To this end, we formulate a multi-objective optimization model for joint VNF placement and link embedding in order to reduce deployment cost and service latency with respect to a variety of constraints. We then solve the optimization problem using two heuristic-based algorithms that perform close to the optimum for large-scale cloud/edge environments. Since the optimization model involves conflicting objectives, we also investigate Pareto-optimal solutions so that multiple objectives are optimized as much as possible. The efficiency of the proposed algorithms is evaluated using both simulation and emulation. The evaluation results show that the proposed optimization approach succeeds in minimizing both cost and latency, while the results stay within 5% of the optimal solution obtained by Gurobi.
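The cost/latency conflict described in this abstract can be illustrated with a toy Pareto-front filter over candidate placements. The candidate names and values below are invented, and the paper's ILP model and heuristics are not reproduced here.

```python
# Toy illustration of conflicting objectives in SFC placement: each
# candidate placement has a deployment cost and an end-to-end latency
# (lower is better for both), and only non-dominated (Pareto-optimal)
# candidates are kept. All numbers are made up for the example.

def pareto_front(candidates):
    """candidates: dict name -> (cost, latency)."""
    front = {}
    for name, (cost, lat) in candidates.items():
        dominated = any(c2 <= cost and l2 <= lat and (c2, l2) != (cost, lat)
                        for n2, (c2, l2) in candidates.items() if n2 != name)
        if not dominated:
            front[name] = (cost, lat)
    return front

candidates = {
    "all-cloud": (10.0, 9.0),   # cheap placement, long network paths
    "all-edge":  (25.0, 2.0),   # expensive, but lowest latency
    "hybrid":    (15.0, 4.0),
    "bad-mix":   (20.0, 8.0),   # dominated by "hybrid" in both objectives
}
print(sorted(pareto_front(candidates)))  # 'bad-mix' is filtered out
```

A weighted-sum or epsilon-constraint method would then pick one point from this front, which is the kind of trade-off the paper's joint optimization resolves at scale.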
49

Li, Yuanzhe, Ao Zhou, Xiao Ma, and Shangguang Wang. "Profit-aware Edge Server Placement." IEEE Internet of Things Journal, 2021, 1. http://dx.doi.org/10.1109/jiot.2021.3082898.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Zhang, Xinglin, Zhenjiang Li, Chang Lai, and Junna Zhang. "Joint Edge Server Placement and Service Placement in Mobile Edge Computing." IEEE Internet of Things Journal, 2021, 1. http://dx.doi.org/10.1109/jiot.2021.3125957.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles

To the bibliography