Journal articles on the topic 'Link caching'

Consult the top 50 journal articles for your research on the topic 'Link caching.'

1

Yang, Hunster. "A Review of the Association Between Environmental Harshness, Neurogenesis and Caching Behaviour." STEM Fellowship Journal 5, no. 1 (December 1, 2019): 13–18. http://dx.doi.org/10.17975/sfj-2019-005.

Abstract:
Memory is one of the most crucial cognitive functions in many organisms. It is highly implicated in everyday functioning and is an essential component for survival. Past research has revealed that spatial memory facilitates bird caching behaviours such as remembering the exact locations of their hidden food. However, there are many factors that alter the demands on memory and consequently impact the function of caching. Specifically, neurogenesis, the process of forming new neurons, has been shown to affect this behaviour. Likewise, environmental variables and selective pressures (i.e., severity of the environment) can also influence caching in birds. In this review, we present evidence for a link between environmental harshness, hippocampal neurogenesis, and caching behaviour in chickadees, with specific focus on work by Chancellor et al. [6]. Neurogenesis in chickadees may be a mechanism subject to selective pressures, in which chickadees from harsher environments have increased neurogenesis rates and consequently enhanced caching ability. However, there remain gaps in the understanding of how exactly hippocampal neurogenesis, environmental harshness, and caching behaviour interact, and future studies are needed to further explore this interaction and its implications.
2

Vettigli, Giuseppe, Mingyue Ji, Karthikeyan Shanmugam, Jaime Llorca, Antonia Tulino, and Giuseppe Caire. "Efficient Algorithms for Coded Multicasting in Heterogeneous Caching Networks." Entropy 21, no. 3 (March 25, 2019): 324. http://dx.doi.org/10.3390/e21030324.

Abstract:
Coded multicasting has been shown to be a promising approach to significantly improve the performance of content delivery networks with multiple caches downstream of a common multicast link. However, the schemes that have been shown to achieve order-optimal performance require content items to be partitioned into a number of packets that grows exponentially with the number of caches, leading to codes of exponential complexity that jeopardize their promising performance benefits. In this paper, we address this crucial performance-complexity tradeoff in a heterogeneous caching network setting, where edge caches with possibly different storage capacities collect multiple content requests that may follow distinct demand distributions. We extend the asymptotic (in the number of packets per file) analysis of shared link caching networks to heterogeneous network settings, and present novel coded multicast schemes, based on local graph coloring, that exhibit polynomial-time complexity in all the system parameters, while preserving the asymptotically proven multiplicative caching gain even for finite file packetization. We further demonstrate that the packetization order (the number of packets each file is split into) can be traded off against the number of requests collected by each cache, while preserving the same multiplicative caching gain. Simulation results confirm the superiority of the proposed schemes and illustrate the interesting request aggregation vs. packetization order tradeoff within several practical settings. Our results provide a compelling step towards the practical achievability of the promising multiplicative caching gain in next generation access networks.
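
The shared-link coded caching idea underlying this line of work can be illustrated with a minimal two-user, two-file example for a common multicast link (a baseline sketch only, not the local-graph-coloring algorithm proposed in the paper; file contents and packet sizes are made up):

```python
# Minimal illustration of shared-link coded caching (2 users, 2 files).
# Each file is split into 2 packets; user 1 caches the first packet of
# every file, user 2 caches the second. A single XOR-coded multicast
# then serves both users' distinct requests at once.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

files = {"A": (b"A1A1", b"A2A2"), "B": (b"B1B1", b"B2B2")}

# Placement phase: caches filled during off-peak hours.
cache = {
    1: {name: parts[0] for name, parts in files.items()},  # user 1 stores packet 0
    2: {name: parts[1] for name, parts in files.items()},  # user 2 stores packet 1
}

# Delivery phase: user 1 requests file A (missing A's packet 1),
# user 2 requests file B (missing B's packet 0).
coded = xor(files["A"][1], files["B"][0])   # one multicast transmission

# Each user cancels the interfering packet using its own cache.
user1_recovers = xor(coded, cache[1]["B"])  # = A's packet 1
user2_recovers = xor(coded, cache[2]["A"])  # = B's packet 0

assert user1_recovers == files["A"][1] and user2_recovers == files["B"][0]
print("one coded transmission served two distinct requests")
```

One coded packet replaces two unicast packets on the shared link; the paper's contribution is preserving this kind of multiplicative gain while keeping the number of packets per file, and hence the coding complexity, polynomial.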
3

Sheraz, Muhammad, Shahryar Shafique, Sohail Imran, Muhammad Asif, Rizwan Ullah, Muhammad Ibrar, Andrzej Bartoszewicz, and Saleh Mobayen. "Mobility-Aware Data Caching to Improve D2D Communications in Heterogeneous Networks." Electronics 11, no. 21 (October 24, 2022): 3434. http://dx.doi.org/10.3390/electronics11213434.

Abstract:
User Equipment (UE) is equipped with limited cache resources that can be utilized to offload data traffic through device-to-device (D2D) communications. Data caching at the UE level has the potential to significantly alleviate the data traffic burden on the backhaul link. Moreover, in wireless networks, users exhibit mobility that poses serious challenges to successful data transmission via D2D communications due to intermittent connectivity among users. Users’ mobility can be exploited to efficiently cache contents by observing connectivity patterns among users. Therefore, it is crucial to develop an efficient data caching mechanism for UE while taking into account users’ mobility patterns. In this work, we propose a mobility-aware data caching approach to enhance data offloading via D2D communication. First, we model users’ connectivity patterns. Then, contents are cached in UEs’ cache resources based on users’ data preferences. In addition, we also take into account the signal-to-interference and noise ratio (SINR) requirements of the users. Hence, our proposed caching mechanism exploits the connectivity patterns of users to perform data placement based on the demands of users and their neighbors, to enhance data offloading via cache resources. We performed extensive simulations to investigate the performance of our proposed mobility-aware data caching mechanism. The performance of our proposed caching mechanism is compared to widely deployed data caching mechanisms, while taking into account the dynamic nature of the wireless channel and the interference experienced by the users. From the obtained results, it is evident that our proposed approach achieves 14%, 16%, and 11% higher data offloading gain than the least frequently used, the Zipf-based probabilistic, and the random caching schemes in the case of an increasing number of users, cache capacity, and number of contents, respectively. Moreover, we also analyzed cache hit rates, and our proposed scheme achieves 8% and 5% higher cache hit rates than the least frequently used, the Zipf-based probabilistic, and the random caching schemes in the case of an increasing number of contents and cache capacity, respectively. Hence, our proposed caching mechanism brings significant improvement in data sharing via D2D communications.
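
As a point of reference, the Zipf-based probabilistic caching baseline mentioned above can be sketched in a few lines; the popularity exponent, cache sizes and offloading check below are illustrative assumptions, not the authors' model:

```python
# Minimal sketch of the Zipf-based probabilistic caching baseline: each UE
# fills its limited cache by sampling contents with probability proportional
# to their Zipf popularity, and a request can be offloaded over D2D when a
# neighbour caches the content. Parameter names and values are illustrative.
import random

def zipf_popularity(num_contents, alpha=0.8):
    weights = [1.0 / (rank ** alpha) for rank in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def probabilistic_placement(num_contents, cache_size, alpha=0.8, seed=None):
    """Pick cache_size distinct contents, biased towards popular ones."""
    rng = random.Random(seed)
    popularity = zipf_popularity(num_contents, alpha)
    cached = set()
    while len(cached) < cache_size:
        cached.add(rng.choices(range(num_contents), weights=popularity, k=1)[0])
    return cached

def offloadable(content, neighbour_caches):
    """True if the request can be served over a D2D link from a neighbour."""
    return any(content in cache for cache in neighbour_caches)

neighbour_caches = [probabilistic_placement(100, 10, seed=i) for i in range(5)]
reachable = sum(offloadable(c, neighbour_caches) for c in range(100))
print(reachable, "of 100 contents could be offloaded via D2D")
```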
4

Soleimani, Somayeh, and Xiaofeng Tao. "Cooperative Crossing Cache Placement in Cache-Enabled Device to Device-Aided Cellular Networks." Applied Sciences 8, no. 9 (September 7, 2018): 1578. http://dx.doi.org/10.3390/app8091578.

Abstract:
In cache-enabled device-to-device (D2D)-aided cellular networks, the technique of caching contents in the cooperative crossing between base stations (BSs) and devices can significantly reduce core traffic and enhance network capacity. In this paper, we propose a scheme that establishes device availability, which indicates whether a cache-enabled device can handle the transmission of the desired content within the required sending time, called the delay, while achieving optimal probabilistic caching. We also investigate the impact of transmission device availability on the effectiveness of a scenario of cooperative crossing cache placement, where content delivery traffic can be offloaded from the local cache, from a D2D transmitter’s cache via a D2D link, or else directly from a BS via a cellular link, in order to maximize the offloading probability. Further, we derive the cooperative content offloading strategy while considering successful content transmission by D2D transmitters or BSs to guarantee the delay, even though reducing the delay is not the focus of this study. Finally, the proposed problem is formulated. Owing to the non-convexity of the optimization problem, it can be rewritten as a minimization of the difference between convex functions; thus, it can be solved by difference of convex (DC) programming using a low-complexity algorithm. Simulation results show that the proposed cache placement scheme improves the offloading probability by 13.5% and 23% compared to the Most Popular Content (MPC) scheme, in which both BSs and devices cache the most popular content, and the Coop. BS/D2D caching scheme, in which each BS tier and user tier applies cooperative content caching separately.
5

Tang, Aimin, Sumit Roy, and Xudong Wang. "Coded Caching for Wireless Backhaul Networks With Unequal Link Rates." IEEE Transactions on Communications 66, no. 1 (January 2018): 1–13. http://dx.doi.org/10.1109/tcomm.2017.2746106.

6

Zolfaghari, Behrouz, Vikrant Singh, Brijesh Kumar Rai, Khodakhast Bibak, and Takeshi Koshiba. "Cryptography in Hierarchical Coded Caching: System Model and Cost Analysis." Entropy 23, no. 11 (November 3, 2021): 1459. http://dx.doi.org/10.3390/e23111459.

Abstract:
The idea behind network caching is to reduce network traffic during peak hours via transmitting frequently-requested content items to end users during off-peak hours. However, due to limited cache sizes and unpredictable access patterns, this might not totally eliminate the need for data transmission during peak hours. Coded caching was introduced to further reduce the peak hour traffic. The idea of coded caching is based on sending coded content which can be decoded in different ways by different users. This allows the server to service multiple requests by transmitting a single content item. Research works regarding coded caching traditionally adopt a simple network topology consisting of a single server, a single hub, a shared link connecting the server to the hub, and private links which connect the users to the hub. Building on the results of Sengupta et al. (IEEE Trans. Inf. Forensics Secur., 2015), we propose and evaluate a yet more complex system model that takes into consideration both throughput and security via combining the mentioned ideas. It is demonstrated that the achievable rates in the proposed model are within a constant multiplicative and additive gap with the minimum secure rates.
7

Sageer Karat, Nujoom, Anoop Thomas, and Balaji Sundar Rajan. "Optimal Linear Error Correcting Delivery Schemes for Two Optimal Coded Caching Schemes." Entropy 22, no. 7 (July 13, 2020): 766. http://dx.doi.org/10.3390/e22070766.

Abstract:
For coded caching problems with small buffer sizes and a number of users no less than the number of files in the server, an optimal delivery scheme was proposed by Chen, Fan, and Letaief in 2016. This scheme is referred to as the CFL scheme. In this paper, an extension to the coded caching problem, in which the link between the server and the users is error prone, is considered. Closed-form expressions for the average rate and peak rate of the error correcting delivery scheme are found for the CFL prefetching scheme using techniques from index coding. Using results from error correcting index coding, an optimal linear error correcting delivery scheme for caching problems employing CFL prefetching is proposed. Another scheme, which has a lower sub-packetization requirement than the CFL scheme for the same cache memory size, was considered by J. Gomez-Vilardebo in 2018. An optimal linear error correcting delivery scheme is also proposed for this scheme.
8

Chao, Yichao, Hong Ni, and Rui Han. "A Path Load-Aware Based Caching Strategy for Information-Centric Networking." Electronics 11, no. 19 (September 27, 2022): 3088. http://dx.doi.org/10.3390/electronics11193088.

Abstract:
Ubiquitous in-network caching plays an important role in improving the efficiency of content access and distribution in Information-Centric Networks (ICN). Content placement strategies, which determine the location distribution of content replicas in the network, have a decisive impact on the performance of the cache system. Existing strategies primarily focus on pushing popular content to the network edge, aiming to improve the overall cache hit ratio while neglecting to effectively balance the traffic load between network links; this leads to insufficient utilization of network bandwidth resources, excessive content delivery times, and degraded user QoE. In this paper, a Path Load-Aware Based Caching strategy (PLABC) is proposed, in which content-related information and dynamic network-related information are comprehensively considered to make cache decisions. Specifically, the utility of caching the content at each on-path node is calculated according to the bandwidth consumption savings and the load level of the transmission path, and the node with the greatest utility value is selected as the caching node. Extensive simulations are conducted to compare the performance of PLABC with other state-of-the-art schemes by quantitative analysis. Simulation results validate the PLABC strategy’s effectiveness, especially in balancing link load and reducing content delivery time.
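
A minimal sketch of the kind of on-path cache decision PLABC describes: each node on the delivery path is scored by a utility that rewards bandwidth savings and penalizes heavily loaded downstream links, and the highest-utility node caches the content. The utility form and field names are illustrative stand-ins, not the paper's exact formulas:

```python
# Illustrative on-path caching decision in the spirit of PLABC: combine the
# bandwidth saved by caching at a node with the load of the path that future
# requests would traverse, and cache at the best-scoring node.
from dataclasses import dataclass

@dataclass
class PathNode:
    name: str
    hops_from_server: int     # hops saved for future downstream requests
    downstream_load: float    # utilization (0..1) of the links below this node

def cache_utility(node, content_size_mb):
    bandwidth_saving = content_size_mb * node.hops_from_server
    load_penalty = 1.0 - node.downstream_load     # prefer lightly loaded paths
    return bandwidth_saving * load_penalty

def choose_caching_node(path, content_size_mb):
    return max(path, key=lambda n: cache_utility(n, content_size_mb))

path = [
    PathNode("core",  hops_from_server=1, downstream_load=0.2),
    PathNode("metro", hops_from_server=3, downstream_load=0.7),
    PathNode("edge",  hops_from_server=5, downstream_load=0.9),
]
best = choose_caching_node(path, content_size_mb=50)
print("cache at:", best.name)   # "metro": saves hops without sitting behind congested edge links
```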
9

Alghamdi, Fatimah, Saoucene Mahfoudh, and Ahmed Barnawi. "A Novel Fog Computing Based Architecture to Improve the Performance in Content Delivery Networks." Wireless Communications and Mobile Computing 2019 (January 23, 2019): 1–13. http://dx.doi.org/10.1155/2019/7864094.

Abstract:
Along with the continuing evolution of the Internet and its applications, Content Delivery Networks (CDNs) have become a hot topic with both opportunities and challenges. CDNs were mainly proposed to solve content availability and download time issues by delivering content through edge cache servers deployed around the world. In our previous work, we presented a novel CDN architecture based on a Fog computing environment as a promising solution for real-time applications. In such architecture, we proposed to use a name-based routing protocol following the Information Centric Networking (ICN) approach, with a popularity-based caching strategy to guarantee overall delivery performance. To validate our design principle, we have implemented the proposed Fog-based CDN architecture with its major protocol components and evaluated its performance, as shown through this article. On the one hand, we have extended the Optimized Link-State Routing (OLSR) protocol to be content aware (CA-OLSR), i.e., so that it uses content names as routing labels. Then, we have integrated CA-OLSR with the popularity-based caching strategy, which caches only the most popular content (MPC). On the other hand, we have considered two similar architectures for conducting performance comparative studies. The first is pure Fog-based CDN implemented by the original OLSR (IP-based routing) protocol along with the default caching strategy. The second is a classical cloud-based CDN implemented by the original OLSR. Through extensive simulation experiments, we have shown that our Fog-based CDN architecture outperforms the other compared architectures. CA-OLSR achieves the highest packet delivery ratio (PDR) and the lowest delay for all simulated numbers of connected users. Furthermore, the MPC caching strategy shows higher cache hit rates with fewer numbers of caching operations compared to the existing default caching strategy, which caches all the pass-by content.
10

Damaraju, Padmaleela, and M. Sesha Shayee. "LINK STABLE INTELLIGENT CACHING MULTIPATH AND MULTICAST ROUTING PROTOCOL FOR WSN." International Journal of Computer Sciences and Engineering 6, no. 10 (October 31, 2018): 365–72. http://dx.doi.org/10.26438/ijcse/v6i10.365372.

11

Kim, Junbeom, Daesung Yu, Seung-Eun Hong, and Seok-Hwan Park. "Energy-Efficient Joint Design of Fronthaul and Edge Links for Cache-Aided C-RAN Systems with Wireless Fronthaul." Entropy 21, no. 9 (September 3, 2019): 860. http://dx.doi.org/10.3390/e21090860.

Abstract:
This work addresses the joint design of fronthaul and edge links for a cache-aided cloud radio access network (C-RAN) system with a wireless fronthaul link. Motivated by the fact that existing techniques, such as C-RAN and edge caching, come at the cost of increased energy consumption, an energy efficiency (EE) metric is defined and adopted as the performance metric for optimization. As the fronthaul links can be used to transfer quantized and precoded baseband signals or hard information of uncached files, both soft- and hard-transfer fronthauling strategies are considered. Extensive numerical results validate the impact of edge caching, as well as the advantages of the energy-efficient design over the spectrally-efficient scheme. Additionally, the two fronthauling strategies—the soft- and hard-transfer schemes—are compared in terms of EE.
12

Yang, Li, Cheng Chi, Chengsheng Pan, and Yaowen Qi. "An Intelligent Caching and Replacement Strategy Based on Cache Profit Model for Space-Ground Integrated Network." Mobile Information Systems 2021 (October 19, 2021): 1–13. http://dx.doi.org/10.1155/2021/7844929.

Abstract:
Compared with the stable states of ground networks, space-ground integrated networks (SGIN) have limited resources, high transmission delay, and vulnerable topology, which make traditional caching strategies unable to adapt to the complex space network environment. An intelligent and efficient caching strategy is required to improve the edge service capabilities of satellites. Therefore, we investigate these problems in this paper and make the following contributions. First, a content value evaluation model based on classification and regression trees is proposed to solve the problem of “what to cache” by describing the cache value of content, which considers multidimensional content characteristics. Second, we propose a cache decision strategy based on the node caching cost model to answer “where to cache.” This strategy modifies the genetic algorithm to fit the 0-1 knapsack problem under an SDN architecture, which greatly improves the cache hit rate and the network service quality. Finally, we propose a cache replacement strategy by establishing an effective service time model for the satellite-ground transmission link, which solves the problem of “when to replace.” Numerical results demonstrate that the proposed strategy in SGIN can improve the nodes’ cache hit rate and reduce the network transmission delay and transmission hops.
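
The "where to cache" step above is cast as a 0-1 knapsack problem. The sketch below shows that underlying formulation with a plain dynamic-programming solver for small instances; the paper instead solves it with a modified genetic algorithm, and the sizes and values here are made up:

```python
# 0-1 knapsack view of cache placement on one node: each content has a size
# and an estimated cache value; pick the subset that maximizes total value
# without exceeding the node's cache capacity. Solved here by DP for small
# instances; the paper applies a modified genetic algorithm instead.

def knapsack_cache_decision(sizes, values, capacity):
    n = len(sizes)
    # best[c] = (max value, chosen content set) achievable with capacity c
    best = [(0.0, frozenset())] * (capacity + 1)
    for i in range(n):
        new_best = list(best)
        for c in range(sizes[i], capacity + 1):
            cand_value = best[c - sizes[i]][0] + values[i]
            if cand_value > new_best[c][0]:
                new_best[c] = (cand_value, best[c - sizes[i]][1] | {i})
        best = new_best
    return best[capacity]

sizes = [4, 3, 2, 5, 1]               # content sizes (illustrative units)
values = [10.0, 7.0, 6.0, 9.0, 2.5]   # e.g. value predicted by the content value model
value, chosen = knapsack_cache_decision(sizes, values, capacity=8)
print("cache contents", sorted(chosen), "total value", value)
```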
13

Jiang, Chunxiao, and Zhen Li. "Decreasing Big Data Application Latency in Satellite Link by Caching and Peer Selection." IEEE Transactions on Network Science and Engineering 7, no. 4 (October 1, 2020): 2555–65. http://dx.doi.org/10.1109/tnse.2020.2994638.

14

Cao, Daming, Deyao Zhang, Pengyao Chen, Nan Liu, Wei Kang, and Deniz Gunduz. "Coded Caching With Asymmetric Cache Sizes and Link Qualities: The Two-User Case." IEEE Transactions on Communications 67, no. 9 (September 2019): 6112–26. http://dx.doi.org/10.1109/tcomm.2019.2921711.

15

Lida, Zou, Hassan A. Alterazi, and Mohammed Helmi Qeshta. "Multi-level cache management of quantitative trading platform." Applied Mathematics and Nonlinear Sciences 6, no. 2 (July 1, 2021): 249–60. http://dx.doi.org/10.2478/amns.2021.2.00045.

Abstract:
With the rapid development of quantitative trading business in the field of investment, quantitative trading platforms are becoming an important tool for numerous investing users to participate in quantitative trading. When using such a platform, the return time of backtesting historical data is a key factor that influences user experience. In optimising data access time, cache management is a critical link. Research on cache management has achieved many results of reference value. However, a quantitative trading platform has its own special demands. (1) Users’ data access has overlapping characteristics for time-series data. (2) The platform uses a wide variety of caching devices with heterogeneous performance. To address the above problems, a cache management approach adapted to quantitative trading platforms is proposed. It not only merges the overlapping data in the cache to save space but also places data into multi-level caching devices driven by user experience. Our extensive experiments demonstrate that the proposed approach can improve user experience by more than 50% compared with the benchmark algorithms.
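
A minimal sketch of the overlap-merging idea described above, assuming cached backtest segments are represented as simple (start, end) time intervals; touching or overlapping segments collapse so each time range is stored only once:

```python
# Merge overlapping cached time ranges so duplicate time-series data is
# stored only once. Each cached segment is a (start, end) interval in, say,
# epoch days; touching or overlapping segments collapse into one.

def merge_cached_ranges(ranges):
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:          # overlaps / touches the previous range
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Three users backtested overlapping windows of the same instrument.
cached = [(100, 200), (150, 260), (400, 450), (255, 300)]
print(merge_cached_ranges(cached))   # [(100, 300), (400, 450)]
```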
16

Choi, Minseok, Andreas F. Molisch, and Joongheon Kim. "Joint Distributed Link Scheduling and Power Allocation for Content Delivery in Wireless Caching Networks." IEEE Transactions on Wireless Communications 19, no. 12 (December 2020): 7810–24. http://dx.doi.org/10.1109/twc.2020.3016562.

17

Wang, Weiguang, Hui Li, Wenjie Zhang, and Shanlin Wei. "Energy Efficiency for Data Offloading in D2D Cooperative Caching Networks." Wireless Communications and Mobile Computing 2020 (June 27, 2020): 1–11. http://dx.doi.org/10.1155/2020/2730478.

Abstract:
D2D communication improves cellular network performance by using proximity-based services between adjacent devices, which is considered an effective way to solve the problem of spectrum scarcity caused by tremendous mobile data traffic. If cache-enabled users are willing to send the cached file to requesters, the content delivery traffic can be offloaded through the D2D link. In this paper, we strive to find the maximum energy efficiency of the D2D caching network through the joint optimization of cache policy and content transmit power. Specifically, based on stochastic geometry-aided modeling of the network, we derive the data offloading rate in closed form, which jointly considers the effects of the successful sensing probability and the successful transmission probability. According to the data offloading rate, we formulate a joint optimization problem integrating cache policy and transmit power to maximize the system energy efficiency. To solve this problem, we propose two optimization algorithms: a cache policy optimization algorithm based on gradient updates and a joint optimization algorithm. The simulation results demonstrate that the joint optimization roughly doubles the energy efficiency of the D2D caching network compared with other schemes.
18

Ahmad, Fawad, Ayaz Ahmad, Irshad Hussain, Ghulam Muhammad, Zahoor Uddin, and Salman A. AlQahtani. "Proactive Caching in D2D Assisted Multitier Cellular Network." Sensors 22, no. 14 (July 6, 2022): 5078. http://dx.doi.org/10.3390/s22145078.

Abstract:
Cache-enabled networks suffer hugely from the challenge of content caching and content delivery. In this regard, cache-enabled device-to-device (D2D) assisted multitier cellular networks are expected to relieve the network data pressure and effectively solve the problem of content placement and content delivery. Consequently, the user can have a better opportunity to get their favored contents from nearby cache-enabled transmitters (CETs) through reliable and good-quality links; however, as expected, designing an effective caching policy is a challenging task due to the limited cache memory of CETs and uncertainty in user preferences. In this article, we introduce a joint content placement and content delivery technique for D2D assisted multitier cellular networks (D2DMCN). A support vector machine (SVM) is employed to predict the content popularity to determine which content is to be cached and where it is to be cached, thereby increasing the overall cache hit ratio (CHR). The content request is satisfied either by the neighboring node through the D2D link or by the cache-enabled base stations (BSs) of the multitier cellular networks (MCNs). Similarly, to solve the problem of optimal content delivery, the Hungarian algorithm is employed aiming to improve the quality of satisfaction. The simulation results indicate that the proposed content placement strategy effectively optimizes the overall cache hit ratio of the system. Similarly, an effective content delivery approach reduces the request content delivery delay and power consumption.
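
For the delivery step, the abstract mentions the Hungarian algorithm; the sketch below shows that kind of one-to-one request-to-transmitter assignment using SciPy's linear_sum_assignment, with made-up delay costs rather than the paper's model:

```python
# Assign each requesting user to at most one cache-enabled transmitter
# (D2D neighbour or BS) by solving the assignment problem with the
# Hungarian method. cost[i][j] = estimated delivery delay of serving
# request i from transmitter j (illustrative numbers).
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [4.0, 9.0, 3.5],   # request 0
    [2.0, 6.5, 8.0],   # request 1
    [7.0, 1.5, 5.0],   # request 2
])

rows, cols = linear_sum_assignment(cost)        # minimizes total delay
for r, c in zip(rows, cols):
    print(f"request {r} -> transmitter {c} (delay {cost[r, c]})")
print("total delay:", cost[rows, cols].sum())
```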
19

Jing, Wenpeng, Xiangming Wen, Zhaoming Lu, and Haijun Zhang. "Multi-Location-Aware Joint Optimization of Content Caching and Delivery for Backhaul-Constrained UDN." Sensors 19, no. 11 (May 29, 2019): 2449. http://dx.doi.org/10.3390/s19112449.

Abstract:
Mobile edge caching is regarded as a promising way to reduce the backhaul load of base stations (BSs). However, the capacity of BSs’ caches tends to be small, while mobile users’ content preferences are diverse. Furthermore, both the locations of users and the user-BS association are uncertain in wireless networks. All of these pose great challenges to content caching and content delivery. This paper studies the joint optimization of the content placement and content delivery schemes in the cache-enabled ultra-dense small-cell network (UDN) with a constrained backhaul link. Considering the differences in decision time-scales, the content placement and content delivery are investigated separately, but their interplay is taken into consideration. Firstly, a content placement problem is formulated, where the uncertainty of the user-BS association is considered. Specifically, different from existing works, a multi-location request pattern is considered, in which users tend to send content requests from more than one, but a limited number of, locations during one day. Secondly, a user-BS association and wireless resources allocation problem is formulated, with the objective of maximizing users’ data rates under the backhaul bandwidth constraint. Due to the non-convex nature of these two problems, problem transformation and variable relaxation are adopted, which convert the original problems into more tractable forms. Then, based on convex optimization methods, a content placement algorithm and a cache-aware user association and resources allocation algorithm are proposed, respectively. Finally, simulation results are given, which validate that the proposed algorithms have obvious performance advantages in terms of network utility, cache hit ratio, and quality of service guarantees, and are suitable for cache-enabled UDNs with a constrained backhaul link.
20

Ji, Baofeng, Bingbing Xing, Kang Song, Chunguo Li, Hong Wen, and Luxi Yang. "Performance Analysis of Multihop Relaying Caching for Internet of Things under Nakagami Channels." Wireless Communications and Mobile Computing 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/2437361.

Abstract:
Performance analysis is presented in this paper for wireless transmissions in an Internet of Things (IoT) system, where both the direct link and the multihop relaying caching wireless transmission from the source node to the destination node are taken into consideration. The key feature is the Nakagami fading of the wireless channel from the source node to the destination node, which makes the theoretical analysis of the system performance difficult. To tackle this difficulty, the probability density function (PDF) of the received signal-to-noise ratio (SNR) at the destination node is derived by exploiting function and integral properties. Then, the outage probability and bit error rate (BER) of the whole wireless IoT system are derived in analytical form without any approximation. Numerical simulations demonstrate the accuracy of the derived theoretical analysis for this system.
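
For reference, the standard single-hop Nakagami-m baseline behind this kind of analysis is as follows; the paper's multihop-with-caching derivation is more involved, but these are the textbook single-link expressions:

```latex
% Received SNR over a Nakagami-m fading link with average SNR \bar{\gamma}
% is Gamma-distributed:
f_{\gamma}(x) = \frac{m^{m} x^{m-1}}{\bar{\gamma}^{m}\,\Gamma(m)}
                \exp\!\left(-\frac{m x}{\bar{\gamma}}\right), \qquad x \ge 0 .

% Outage probability for an SNR threshold \gamma_{\mathrm{th}}, where
% \gamma(\cdot,\cdot) is the lower incomplete gamma function:
P_{\mathrm{out}} = \Pr\{\gamma < \gamma_{\mathrm{th}}\}
                 = \frac{\gamma\!\left(m,\, m\gamma_{\mathrm{th}}/\bar{\gamma}\right)}{\Gamma(m)} .
```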
21

Rashid, Salman, Shukor Abd Razak, and Fuad A. Ghaleb. "IMU: A Content Replacement Policy for CCN, Based on Immature Content Selection." Applied Sciences 12, no. 1 (December 30, 2021): 344. http://dx.doi.org/10.3390/app12010344.

Abstract:
In-network caching is an essential part of Content-Centric Networking (CCN). The main aim of a CCN caching module is data distribution within the network. Each CCN node can cache content according to its placement policy. Therefore, it is well equipped to meet the demands of future networks. The placement strategy decides to cache content at optimized locations and to minimize content redundancy within the network. When cache capacity is full, the content eviction policy decides which content should stay in the cache and which content should be evicted. Hence, network performance and cache hit ratio almost equally depend on the content placement and replacement policies. Content eviction policies have diverse requirements due to limited cache capacity, higher request rates, and the rapid change of cache states. Many replacement policies follow the concept of low or high popularity and data freshness for content eviction. However, content that loses its popularity after having been very popular for a certain period still remains in the cache space, while other content is evicted from the cache space before it becomes popular. To handle the above-mentioned issue, we introduce the concept of maturity/immaturity of the content. The proposed policy, named Immature Used (IMU), finds the content maturity index by using the content arrival time and its frequency within a specific time frame. Also, it determines the maturity level through a maturity classifier. In the case of a full cache, the least immature content is evicted from the cache space. We performed extensive simulations in the Icarus simulator to evaluate the performance (cache hit ratio, path stretch, latency, and link load) of the proposed policy against different well-known cache replacement policies in CCN. The obtained results, with varying popularity and cache sizes, indicate that our proposed policy can achieve up to 14.31% more cache hits, 5.91% reduced latency, 3.82% improved path stretch, and 9.53% decreased link load, compared to the recently proposed technique. Moreover, the proposed policy performed significantly better compared to other baseline approaches.
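
A minimal sketch of a maturity-driven cache in the spirit described above: each entry records its arrival time and request history, a maturity score is computed from them, and the most mature entry is evicted when the cache is full. The score below is a hypothetical stand-in; the paper's actual IMU index and maturity classifier differ:

```python
# Illustrative maturity-based eviction: a hypothetical maturity index grows
# for content whose popularity has already peaked (old, many past hits, few
# recent ones), and the most mature entry is evicted when the cache is full.
import time
from collections import deque

WINDOW = 60.0    # seconds regarded as the "recent" time frame

class MaturityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}            # name -> [arrival_time, deque of request times]

    def _maturity(self, name, now):
        arrival, hits = self.entries[name]
        recent = sum(1 for t in hits if now - t <= WINDOW)
        age = max(now - arrival, 1e-9)
        # past-heavy, recently quiet content scores high (i.e. is "mature")
        return age * len(hits) / (recent + 1)

    def request(self, name):
        now = time.monotonic()
        if name in self.entries:
            self.entries[name][1].append(now)
            return True                                   # cache hit
        if len(self.entries) >= self.capacity:
            victim = max(self.entries, key=lambda n: self._maturity(n, now))
            del self.entries[victim]                      # evict the most mature entry
        self.entries[name] = [now, deque([now], maxlen=256)]
        return False                                      # miss; content is now cached

cache = MaturityCache(capacity=2)
for name in ["a", "a", "b", "c", "b"]:
    print(name, "hit" if cache.request(name) else "miss")
```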
22

Park, Gi Seok, and Hwangjun Song. "Cooperative Base Station Caching and X2 Link Traffic Offloading System for Video Streaming Over SDN-Enabled 5G Networks." IEEE Transactions on Mobile Computing 18, no. 9 (September 1, 2019): 2005–19. http://dx.doi.org/10.1109/tmc.2018.2869756.

23

Yi, Changyan, Shiwei Huang, and Jun Cai. "An Incentive Mechanism Integrating Joint Power, Channel and Link Management for Social-Aware D2D Content Sharing and Proactive Caching." IEEE Transactions on Mobile Computing 17, no. 4 (April 1, 2018): 789–802. http://dx.doi.org/10.1109/tmc.2017.2741481.

24

Sapio, Amedeo, Mario Baldi, Fulvio Risso, Narendra Anand, and Antonio Nucci. "Packet Capture and Analysis on MEDINA, A Massively Distributed Network Data Caching Platform." Parallel Processing Letters 27, no. 03n04 (December 2017): 1750010. http://dx.doi.org/10.1142/s0129626417500104.

Abstract:
Traffic capture and analysis is key to many domains including network management, security and network forensics. Traditionally, it is performed by a dedicated device accessing traffic at a specific point within the network through a link tap or a port of a node mirroring packets. This approach is problematic because the dedicated device must be equipped with a large amount of computation and storage resources to store and analyze packets. Alternatively, in order to achieve scalability, analysis can be performed by a cluster of hosts. However, this cluster is normally located at a remote location with respect to the observation point, hence requiring a large volume of captured traffic to be moved across the network. To address this problem, this paper presents an algorithm to distribute the task of capturing, processing and storing packets traversing a network across multiple packet forwarding nodes (e.g., IP routers). Essentially, our solution allows individual nodes on the path of a flow to operate on subsets of packets of that flow in a completely distributed and decentralized manner. The algorithm ensures that each packet is processed by n nodes, where n can be set to 1 to minimize overhead or to a higher value to achieve redundancy. Nodes create a distributed index that enables efficient retrieval of packets they store (e.g., for forensics applications). Finally, the basic principles of the presented solution can also be applied, with minimal changes, to the distributed execution of generic tasks on data flowing through a network of nodes with processing and storage capabilities. This has applications in various fields ranging from Fog Computing, to microservice architectures and the Internet of Things.
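
One simple way to realize the "each packet is processed by n on-path nodes" property is to let every node apply the same hash to the packet's flow identifier and sequence number and keep the packet only if its own position falls in the resulting responsibility window; the hash choice and field names below are illustrative, not MEDINA's actual algorithm:

```python
# Each on-path node decides locally, with no coordination, whether it is
# responsible for capturing/storing a given packet. The same deterministic
# hash on every node spreads packets evenly and gives n-fold redundancy.
import hashlib

def responsible(node_index, path_length, flow_id, pkt_seq, n=1):
    """True if this node (0-based position on the path) should keep the packet."""
    digest = hashlib.sha256(f"{flow_id}:{pkt_seq}".encode()).digest()
    primary = int.from_bytes(digest[:4], "big") % path_length
    # the packet is kept by nodes primary, primary+1, ..., primary+n-1 (mod path length)
    return (node_index - primary) % path_length < n

path_length, flow = 4, "10.0.0.1:443->10.0.0.9:55012"
for seq in range(6):
    keepers = [i for i in range(path_length) if responsible(i, path_length, flow, seq, n=2)]
    print(f"packet {seq} stored by nodes {keepers}")
```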
25

dell’Agnello, Luca, Tommaso Boccali, Daniele Cesini, Lorenzo Chiarelli, Andrea Chierici, Stefano Dal Pra, Donato De Girolamo, et al. "INFN Tier–1: a distributed site." EPJ Web of Conferences 214 (2019): 08002. http://dx.doi.org/10.1051/epjconf/201921408002.

Abstract:
The INFN Tier-1 center at CNAF has been extended in 2016 and 2017 in order to include a small amount of resources (∼ 22 kHS06, corresponding to ∼ 10% of the CNAF pledges for LHC in 2017) physically located at the Bari-ReCas site (∼ 600 km distant from CNAF). In 2018, a significant fraction of the CPU power (∼ 170 kHS06, equivalent to ∼ 50% of the total CNAF pledges) is going to be provided via a collaboration with the PRACE Tier-0 CINECA center (a few km from CNAF), thus building a truly geographically distributed (WAN) center. The two sites are going to be interconnected via a high-bandwidth link (400-1200 Gb/s), in order to ensure transparent access to data residing on CNAF storage; the latency between the centers is small enough not to require particular caching strategies. In this contribution we describe the issues and the results of the production configuration, focusing both on the management aspects and on the performance provided to end-users.
26

LaDage, Lara D., Timothy C. Roth, Rebecca A. Fox, and Vladimir V. Pravosudov. "Ecologically relevant spatial memory use modulates hippocampal neurogenesis." Proceedings of the Royal Society B: Biological Sciences 277, no. 1684 (November 25, 2009): 1071–79. http://dx.doi.org/10.1098/rspb.2009.1769.

Abstract:
The adult hippocampus in birds and mammals undergoes neurogenesis and the resulting new neurons appear to integrate structurally and functionally into the existing neural architecture. However, the factors underlying the regulation of new neuron production are still under scrutiny. In recent years, the concept that spatial memory affects adult hippocampal neurogenesis has gained acceptance, although results attempting to causally link memory use to neurogenesis remain inconclusive, possibly owing to confounds of motor activity, task difficulty or training for the task. Here, we show that ecologically relevant, spatial memory-based experiences of food caching and retrieving directly affect hippocampal neurogenesis in mountain chickadees (Poecile gambeli). We found that restricting memory experiences in captivity caused significantly lower rates of neurogenesis, as determined by doublecortin expression, compared with captive individuals provided with such experiences. However, neurogenesis rates in both groups of captive birds were still much lower than those in free-ranging conspecifics. These findings show that ecologically relevant spatial memory experiences can directly modulate neurogenesis, separate from other confounds that may also independently affect neurogenesis.
27

Zeng, Ruibin, Jiali You, Yang Li, and Rui Han. "An ICN-Based IPFS High-Availability Architecture." Future Internet 14, no. 5 (April 19, 2022): 122. http://dx.doi.org/10.3390/fi14050122.

Abstract:
The Interplanetary File System (IPFS), a new type of P2P file system, enables people to obtain data from other peer nodes in a distributed system without the need to establish a connection with a distant server. However, IPFS suffers from low resolution efficiency and duplicate data delivery, resulting in poor system availability. The new Information-Centric Networking (ICN), on the other hand, applies the features of name resolution service and caching to achieve fast location and delivery of content. Therefore, there is a potential to optimize the availability of IPFS systems from the network layer. In this paper, we propose an ICN-based IPFS high-availability architecture, called IBIHA, which introduces enhanced nodes and information tables to manage data delivery based on the original IPFS network, and uses the algorithm of selecting high-impact nodes from the entitled network (PwRank) as the basis for deploying enhanced nodes in the network, thus achieving the effect of optimizing IPFS availability. The experimental results show that this architecture outperforms the IPFS network in terms of improving node resolution efficiency, reducing network redundant packets, and improving the rational utilization of network link resources.
28

Song, Yaqin, Hong Ni, and Xiaoyong Zhu. "Two-Level Congestion Control Mechanism (2LCCM) for Information-Centric Networking." Future Internet 13, no. 6 (June 7, 2021): 149. http://dx.doi.org/10.3390/fi13060149.

Abstract:
As an emerging network architecture, Information-Centric Networking (ICN) is considered to have the potential to meet the new requirements of the Fifth Generation (5G) networks. ICN uses a name decoupled from location to identify content, supports the in-network caching technology, and adopts a receiver-driven model for data transmission. Existing ICN congestion control mechanisms usually first select a nearby replica by opportunistic cache-hits and then insist on adjusting the transmission rate regardless of the congestion state, which cannot fully utilize the characteristics of ICN to improve the performance of data transmission. To solve this problem, this paper proposes a two-level congestion control mechanism, called 2LCCM. It switches the replica location based on a node state table to avoid congestion paths when heavy congestion happens. This 2LCCM mechanism also uses a receiver-driven congestion control algorithm to adjust the request sending rate, in order to avoid link congestion under light congestion. In this paper, the design and implementation of the proposed mechanism are described in detail, and the experimental results show that 2LCCM can effectively reduce the transmission delay when heavy congestion occurs, and the bandwidth-delay product-based congestion control algorithm has better transmission performance compared with a loss-based algorithm.
29

Suzuki, Toshitaka N., and Nobuyuki Kutsukake. "Foraging intention affects whether willow tits call to attract members of mixed-species flocks." Royal Society Open Science 4, no. 6 (June 2017): 170222. http://dx.doi.org/10.1098/rsos.170222.

Abstract:
Understanding how individual behaviour influences the spatial and temporal distribution of other species is necessary to resolve the complex structure of species assemblages. Mixed-species bird flocks provide an ideal opportunity to investigate this issue, because members of the flocks are involved in a variety of behavioural interactions between species. Willow tits ( Poecile montanus ) often produce loud calls when visiting a new foraging patch to recruit other members of mixed-species flocks. The costs and benefits of flocking would differ with individual foraging behaviours (i.e. immediate consumption or caching); thus, willow tits may adjust the production of loud calls according to their foraging intention. In this study, we investigated the link between foraging decisions and calling behaviour in willow tits and tested its influence on the temporal cohesion with members of mixed-species flocks. Observations at experimental foraging patches showed that willow tits produced more calls when they consumed food items compared with when they cached them. Playback experiments revealed that these calls attracted flock members and helped to maintain their presence at foraging patches. Thus, willow tits adjusted calling behaviour according to their foraging intention, thereby coordinating the associations with members of mixed-species flocks. Our findings demonstrate the influence of individual decision-making on temporal cohesion with other species and highlight the importance of interspecific communication in mixed-species flocking dynamics.
30

Fang, Chao, Tianyi Zhang, Jingjing Huang, Hang Xu, Zhaoming Hu, Yihui Yang, Zhuwei Wang, Zequan Zhou, and Xiling Luo. "A DRL-Driven Intelligent Optimization Strategy for Resource Allocation in Cloud-Edge-End Cooperation Environments." Symmetry 14, no. 10 (October 12, 2022): 2120. http://dx.doi.org/10.3390/sym14102120.

Abstract:
Complex dynamic services and heterogeneous network environments make asymmetrical control a crucial issue to handle on the Internet. With the advent of the Internet of Things (IoT) and the fifth generation (5G), emerging network applications lead to the explosive growth of mobile traffic while bringing forward more challenging service requirements for future radio access networks. Therefore, how to effectively allocate limited heterogeneous network resources to improve content delivery for massive application services and ensure network quality of service (QoS) becomes particularly urgent in heterogeneous network environments. To cope with the explosive mobile traffic caused by emerging Internet services, this paper designs an intelligent optimization strategy based on deep reinforcement learning (DRL) for resource allocation in heterogeneous cloud-edge-end collaboration environments. Meanwhile, the asymmetrical control problem caused by complex dynamic services and heterogeneous network environments is discussed and overcome by distributed cooperation among cloud-edge-end nodes in the system. Specifically, the multi-layer heterogeneous resource allocation problem is formulated as a maximal traffic offloading model, where content caching and request aggregation mechanisms are utilized. A novel DRL policy is proposed to improve content distribution by making cache replacement and task scheduling decisions for arriving content requests in accordance with information about users’ historical requests, in-network cache capacity, available link bandwidth and topology structure. The performance of our proposed solution and its similar counterparts is analyzed under different network conditions.
31

Nam, Youngju, Jaejeong Bang, Hyunseok Choi, Yongje Shin, and Euisin Lee. "Cooperative Content Precaching Scheme Based on the Mobility Information of Vehicles in Intermittently Connected Vehicular Networks." Electronics 11, no. 22 (November 9, 2022): 3663. http://dx.doi.org/10.3390/electronics11223663.

Abstract:
Intermittently connected vehicular networks (ICVNs) consist of vehicles moving on roads and stationary roadside units (RSUs) deployed along roads. In ICVNs, the long distances between RSUs and the large volume of vehicular content lead to long download delays for vehicles and high traffic overhead on backhaul links. Fortunately, the improved content storage size and the enhanced vehicular mobility prediction afford opportunities to ameliorate these problems by proactively caching (i.e., precaching) content. However, existing precaching schemes exploit RSUs and vehicles individually for content precaching, even though cooperative precaching between them can reduce download delays and backhaul link traffic. Thus, this paper proposes a cooperative content precaching scheme that exploits the precaching ability of both vehicles and RSUs to enhance the performance of content downloads in ICVNs. Based on the trajectory and velocity information of vehicles, we first select the optimal relaying vehicle and the next RSUs to cache the requested content proactively and provide it to the requester vehicle optimally. Next, we calculate the optimal content precaching amount for the relaying vehicle and each of the downloading RSUs by using a mathematical model that exploits both the dwell time in an RSU and the contact time between vehicles. To compensate for the error of the mobility prediction in determining both the dwell time and the contact time, our scheme adds a guard band to the optimal content precaching amount by considering the expected reduced delay. Finally, we evaluate the proposed scheme in various simulation environments to prove that it achieves efficient content download performance compared with existing schemes.
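
The precaching-amount calculation described above reduces to simple link-budget arithmetic; the sketch below uses illustrative rates, mobility estimates and a guard band, and is far simpler than the paper's model:

```python
# Rough split of a content item between a relaying vehicle and the next RSU:
# each can precache at most what it can deliver during the predicted
# contact/dwell time, shrunk by a guard band that absorbs prediction error.

def precache_split(content_mb, v2v_rate_mbps, contact_s,
                   rsu_rate_mbps, dwell_s, guard=0.15):
    from_relay = min(content_mb, v2v_rate_mbps / 8 * contact_s * (1 - guard))
    remaining = content_mb - from_relay
    from_rsu = min(remaining, rsu_rate_mbps / 8 * dwell_s * (1 - guard))
    backhaul_leftover = remaining - from_rsu        # still fetched on demand
    return from_relay, from_rsu, backhaul_leftover

# 300 MB video, 6 s predicted V2V contact at 80 Mbps, 20 s RSU dwell at 100 Mbps.
print(precache_split(300, v2v_rate_mbps=80, contact_s=6, rsu_rate_mbps=100, dwell_s=20))
```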
32

Zhang, Heng, Xiaofei Wang, Jiawen Chen, Chenyang Wang, and Jianxin Li. "D2D-LSTM: LSTM-Based Path Prediction of Content Diffusion Tree in Device-to-Device Social Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 295–302. http://dx.doi.org/10.1609/aaai.v34i01.5363.

Abstract:
With the proliferation of mobile device users, Device-to-Device (D2D) communication has ascended to the spotlight in social networks as a way for users to share and exchange enormous amounts of data. Different from classic online social networks (OSNs) like Twitter and Facebook, each single data file to be shared in a D2D social network, e.g., a video, image or document, is often very large. Sometimes, a small number of interesting data files may dominate the network traffic and lead to heavy network congestion. To reduce traffic congestion and design effective caching strategies, it is highly desirable to investigate how data files are propagated in offline D2D social networks and derive a diffusion model that fits this new form of social network. However, existing works mainly concern link prediction, which cannot predict the overall diffusion path when the network topology is unknown. In this article, we propose D2D-LSTM, based on Long Short-Term Memory (LSTM), which aims to predict complete content propagation paths in D2D social networks. Taking the current user’s time, geography and category preference into account, historical features of the previous path can be captured as well. It utilizes prototype users for prediction so as to achieve a better generalization ability. To the best of our knowledge, it is the first attempt to use a real-world, large-scale mobile social network (MSN) dataset to predict propagation path trees in a top-down order. Experimental results corroborate that the proposed algorithm can achieve superior prediction performance compared with state-of-the-art approaches. Furthermore, D2D-LSTM can achieve 95% average precision for the terminal class and 17% accuracy for tree path hit.
33

Kalafatidis, Sarantis, Sotiris Skaperas, Vassilis Demiroglou, Lefteris Mamatas, and Vassilis Tsaoussidis. "Logically-Centralized SDN-Based NDN Strategies for Wireless Mesh Smart-City Networks." Future Internet 15, no. 1 (December 29, 2022): 19. http://dx.doi.org/10.3390/fi15010019.

Abstract:
The Internet of Things (IoT) is a key technology for smart community networks, such as smart-city environments, and its evolution calls for stringent performance requirements (e.g., low delay) to support efficient communication among a wide range of objects, including people, sensors, vehicles, etc. At the same time, these ecosystems usually adopt wireless mesh technology to extend their communication range in large-scale IoT deployments. However, due to the wide coverage range, smart-city WMNs may face different network challenges according to the network characteristics, for example, (i) areas that include a significant number of wireless nodes or (ii) areas with frequent dynamic changes such as link failures due to unstable topologies. Named-Data Networking (NDN) can enhance WMNs to meet such IoT requirements, thanks to its content naming scheme and in-network caching, but it necessitates adaptability to the challenging conditions of WMNs. In this work, we aim at efficient end-to-end NDN communication in terms of performance (i.e., delay), performing extended experimentation over a real WMN and evaluating and discussing the benefits provided by two SDN-based NDN strategies: (1) a dynamic SDN-based solution that integrates the NDN operation with the routing decisions of a WMN routing protocol; and (2) a static one based on SDN-based clustering and real WMN performance measurements. Our key contributions include (i) the implementation of two types of NDN path selection strategies; (ii) experimentation and data collection over the w-iLab.t Fed4FIRE+ testbed under real WMN conditions; and (iii) real measurements, released as open data, related to the performance of the wireless links in terms of RSSI, delay, and packet loss among the wireless nodes of the corresponding testbed.
34

Young, N. E. "On-Line File Caching." Algorithmica 33, no. 3 (January 1, 2002): 371–83. http://dx.doi.org/10.1007/s00453-001-0124-5.

35

Wang, James Z., Zhidian Du, and Pradip K. Srimani. "Cooperative Proxy Caching for Wireless Base Stations." Mobile Information Systems 3, no. 1 (2007): 1–18. http://dx.doi.org/10.1155/2007/371572.

Abstract:
This paper proposes a mobile cache model to facilitate cooperative proxy caching in wireless base stations. This mobile cache model uses a network cache line to record the caching state information about a web document for effective data search and cache space management. Based on the proposed mobile cache model, a P2P cooperative proxy caching scheme is proposed that uses a self-configured and self-managed virtual proxy graph (VPG), independent of the underlying wireless network structure and adaptive to network and geographic environment changes, to achieve efficient data search, data caching and data replication. Based on demand, the aggregate effect of the data caching, searching and replicating actions of individual proxy servers automatically migrates the cached web documents closer to the interested clients. In addition, a cache line migration (CLM) strategy is proposed to move and replicate the heads of the network cache lines of web documents associated with a moving mobile host to the new base station during the mobile host handoff. These replicated cache line heads provide direct links to the cached web documents accessed by the moving mobile hosts in the previous base station, thus improving mobile web caching performance. Performance studies have shown that the proposed P2P cooperative proxy caching schemes significantly outperform existing caching schemes.
36

Baranovskiy, Nikolay Viktorovich, Aleksey Podorovskiy, and Aleksey Malinin. "Parallel Implementation of the Algorithm to Compute Forest Fire Impact on Infrastructure Facilities of JSC Russian Railways." Algorithms 14, no. 11 (November 15, 2021): 333. http://dx.doi.org/10.3390/a14110333.

Abstract:
Forest fires have a negative impact on the economy in a number of regions, especially in Wildland Urban Interface (WUI) areas. An important link in the fight against fires in WUI areas is the development of information and computer systems for predicting the fire safety of infrastructure facilities of Russian Railways. In this work, a numerical study of heat transfer processes in the enclosing structure of a wooden building near a forest fire front was carried out using parallel computing technology. The novelty of the development lies in the creation of an in-house program code, which is planned to be put into operation either in the Information System for Remote Monitoring of Forest Fires (ISDM-Rosleskhoz) or in the information and computing system of JSC Russian Railways. In the Russian Federation, it is forbidden to use foreign systems in the security services of industrial facilities. The implementation of the deterministic model of heat transfer in the enclosing structure, with algorithmic complexity O(2N² + 2K), is presented. The program is implemented in Python 3.x using the NumPy and Concurrent libraries. Calculations were carried out on a multiprocessor cluster at the Sirius University of Science and Technology. The results of the calculations and the acceleration coefficient for operating modes with 1, 2, 4, 8, 16, 32, 48 and 64 processes are presented. The developed algorithm can be applied to assess the fire safety of infrastructure facilities of Russian Railways. The main merit of the new development is its ability to use large computational domains with a large number of computational grid nodes in space and time. The use of caching of intermediate data in files made it possible to distribute a large number of computational nodes among the processors of a multiprocessor computing system. However, a drawback should also be noted; namely, a decrease in the acceleration of computational operations with a large number of involved nodes of the multiprocessor computing system, which is explained by the write and read cycles of the cache files.
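
The "caching intermediate data in files" pattern mentioned above can be sketched with NumPy and the standard concurrent.futures module; the file layout, chunking and toy update kernel below are illustrative, not the authors' code:

```python
# Sketch: split the computational grid into chunks, let worker processes
# compute each chunk, and cache every chunk's result in an .npy file so a
# rerun (or a later time step) can reload it instead of recomputing.
import os
import numpy as np
from concurrent.futures import ProcessPoolExecutor

CACHE_DIR = "chunk_cache"

def process_chunk(args):
    chunk_id, temperatures = args
    path = os.path.join(CACHE_DIR, f"chunk_{chunk_id}.npy")
    if os.path.exists(path):                 # reuse cached intermediate result
        return np.load(path)
    # toy stand-in for the heat-transfer update on this chunk
    result = temperatures + 0.1 * np.gradient(temperatures)
    np.save(path, result)                    # cache to file for later runs
    return result

def run(grid, n_workers=4, n_chunks=8):
    os.makedirs(CACHE_DIR, exist_ok=True)
    chunks = list(enumerate(np.array_split(grid, n_chunks)))
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(process_chunk, chunks))
    return np.concatenate(results)

if __name__ == "__main__":                   # required for process-based pools
    grid = np.linspace(300.0, 900.0, 10_000) # initial temperature field, K
    print(run(grid)[:5])
```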
37

CAI, Zhao-quan. "On-line algorithm of loose competitive caching." Journal of Computer Applications 28, no. 10 (September 30, 2009): 2604–7. http://dx.doi.org/10.3724/sp.j.1087.2008.02604.

38

Fang, Tao, Hua Tian, Xiaobo Zhang, Xueqiang Chen, Xinhong Shao, and Yuli Zhang. "Context-Aware Caching Distribution and UAV Deployment: A Game-Theoretic Approach." Applied Sciences 8, no. 10 (October 17, 2018): 1959. http://dx.doi.org/10.3390/app8101959.

Abstract:
This paper investigates the problem of the optimal arrangement of both unmanned aerial vehicles’ (UAVs’) caching contents and service locations in UAV-assisted networks based on context awareness, which considers the interaction between users and the environment. In existing work, users within the coverage of UAVs are considered to be served perfectly, which ignores the communication probability determined by line-of-sight (LOS) and non-line-of-sight (NLOS) links. However, these links are related to the deployment of the UAVs. Moreover, the transmission overhead should be taken into account. To balance the tradeoff between these two factors, we design the ratio of the users’ communication probability to the transmission overhead as the performance measure mechanism to evaluate the performance of UAV-assisted networks. Then, we formulate the objective of maximizing the performance of UAV-assisted networks as a UAV-assisted caching game. It is proved that the game is an exact potential game with the performance of UAV-assisted networks serving as the potential function. Next, we propose a log-linear caching algorithm (LCA) to achieve the Nash equilibrium (NE). Finally, simulation results demonstrate the strong performance of the proposed algorithm.
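
Log-linear learning, the update rule behind algorithms of the LCA type, can be sketched generically: one player (UAV) at a time re-draws its action from a Gibbs (softmax) distribution over the utilities it would obtain, holding the others fixed. The utility function below is a deliberately simple stand-in, not the paper's performance measure:

```python
# Generic log-linear learning over a finite action set: at each step one
# randomly chosen UAV re-samples its caching/placement action with
# probability proportional to exp(utility / temperature). With a potential-
# game utility this converges (in probability) to a Nash equilibrium.
import math
import random

ACTIONS = [0, 1, 2, 3]          # e.g. candidate hover spots / cached content bundles

def utility(player, action, profile):
    # toy utility: players prefer actions nobody else has picked (coverage-like)
    others = [a for p, a in enumerate(profile) if p != player]
    return 1.0 if action not in others else 0.2

def log_linear_step(profile, tau=0.1, rng=random):
    player = rng.randrange(len(profile))
    weights = [math.exp(utility(player, a, profile) / tau) for a in ACTIONS]
    total = sum(weights)
    profile[player] = rng.choices(ACTIONS, weights=[w / total for w in weights])[0]
    return profile

profile = [0, 0, 0]             # three UAVs start with the same action
for _ in range(500):
    log_linear_step(profile)
print("final action profile:", profile)   # typically three distinct actions
```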
39

Chrobak, Marek, Elias Koutsoupias, and John Noga. "More on randomized on-line algorithms for caching." Theoretical Computer Science 290, no. 3 (January 2003): 1997–2008. http://dx.doi.org/10.1016/s0304-3975(02)00045-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Chattopadhyay, Arpan, Bartlomiej Blaszczyszyn, and H. Paul Keeler. "Gibbsian On-Line Distributed Content Caching Strategy for Cellular Networks." IEEE Transactions on Wireless Communications 17, no. 2 (February 2018): 969–81. http://dx.doi.org/10.1109/twc.2017.2772911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Cao, Xiaoyong, and Pu Tian. "“Dividing and Conquering” and “Caching” in Molecular Modeling." International Journal of Molecular Sciences 22, no. 9 (May 10, 2021): 5053. http://dx.doi.org/10.3390/ijms22095053.

Full text
Abstract:
Molecular modeling is widely utilized in subjects including, but not limited to, physics, chemistry, biology, materials science and engineering. Impressive progress has been made in the development of theories, algorithms and software packages. Dividing and conquering, and caching intermediate results, have been long-standing principles in the development of algorithms. Not surprisingly, the most important methodological advancements in more than half a century of molecular modeling are various implementations of these two fundamental principles. In mainstream classical computational molecular science, tremendous effort has been invested in two lines of algorithm development. The first is coarse graining, which represents multiple basic particles of a higher-resolution model as a single larger and softer particle in the lower-resolution counterpart, yielding force fields with partial transferability at the expense of some information loss. The second is enhanced sampling, which realizes “dividing and conquering” and/or “caching” in configurational space, with focus either on reaction coordinates and collective variables, as in metadynamics and related algorithms, or on the transition matrix and state discretization, as in Markov state models. For this line of algorithms, spatial resolution is maintained but the results are not transferable. Deep learning has been utilized to realize more efficient and accurate ways of “dividing and conquering” and “caching” along these two lines of algorithmic research. We proposed and demonstrated the local free energy landscape approach, a new framework for classical computational molecular science. This framework is based on a third class of algorithm that facilitates molecular modeling through partially transferable, in-resolution “caching” of distributions for local clusters of molecular degrees of freedom. Differences, connections and potential interactions among these three algorithmic directions are discussed, with the hope of stimulating the development of more elegant, efficient and reliable formulations and algorithms for “dividing and conquering” and “caching” in complex molecular systems.
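A minimal Python illustration of the “caching” principle in this setting (an assumption-level toy, not the authors' local free energy landscape framework): an expensive per-cluster energy term is memoized so that repeated local configurations are evaluated only once.

```python
# Toy illustration of "caching" in molecular modeling (hypothetical names):
# memoize a local-cluster energy so recurring configurations are reused.
from functools import lru_cache

@lru_cache(maxsize=100_000)
def local_cluster_energy(distances):
    """Toy pairwise (Lennard-Jones-like) energy for a local cluster;
    `distances` must be a hashable tuple of rounded inter-particle distances."""
    return sum(4.0 * ((1.0 / d) ** 12 - (1.0 / d) ** 6) for d in distances)

def total_energy(clusters):
    # Rounding discretizes the configuration space so the cache can hit.
    return sum(local_cluster_energy(tuple(round(d, 2) for d in c))
               for c in clusters)

# The second, identical cluster is served from the cache.
print(total_energy([[1.0, 1.12, 1.5], [1.0, 1.12, 1.5]]))
print(local_cluster_energy.cache_info())
```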
APA, Harvard, Vancouver, ISO, and other styles
42

Deleli, Mesay, Deleli Mesay Adinew, and Ayall Tewodros Alemu. "Spark Performance Optimization Analysis With Multi-Layer Parameter Using Shuffling and Scheduling With Data Serialization in Different Data Caching Options." Journal of Technological Advancements 1, no. 1 (January 2021): 1–17. http://dx.doi.org/10.4018/jta.290326.

Full text
Abstract:
As social networking services and e-commerce grow rapidly, the number of online users is also growing dynamically, contributing huge volumes of content to the digital world. In such a dynamic environment, meeting the demand for computing is very challenging, especially with the existing computing model. Although Spark was recently introduced to alleviate these problems through in-memory computing for big data analytics, and exposes many configuration parameters for tuning its performance, it still has performance bottlenecks. This motivates investigating performance improvement mechanisms that focus on combinations of the scheduler and shuffle manager with data serialization under different intermediate data caching options. A standalone cluster computing model was selected as the experimental methodology, with jobs submitted via the submit command line. Three Spark applications, WordCount, TeraSort and PageRank, were selected and developed for the experiments. As a result, performance improvements of 2.45% and 8.01% were achieved with the OFF_HEAP and MEMORY_ONLY_SER data caching options, respectively.
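The snippet below is an assumed PySpark setup, not the paper's benchmark code; it shows where the serialization and caching options discussed in the abstract are configured (Kryo serialization, off-heap memory, and an explicit storage level when persisting an intermediate result). The input path is a placeholder, and the JVM-side MEMORY_ONLY_SER level would be set analogously in a Scala/Java job.

```python
# Illustrative PySpark sketch (assumed setup, not the paper's benchmark code):
# enable Kryo serialization and off-heap memory, then persist an intermediate
# word-count result with an explicit storage level before reusing it.
from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = (SparkSession.builder
         .appName("caching-options-demo")
         .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
         .config("spark.memory.offHeap.enabled", "true")
         .config("spark.memory.offHeap.size", "1g")
         .getOrCreate())

lines = spark.read.text("input.txt")          # hypothetical input path
words = (lines.rdd
         .flatMap(lambda row: row.value.split())
         .map(lambda w: (w, 1))
         .reduceByKey(lambda a, b: a + b))

# Cache the shuffled result off-heap; MEMORY_AND_DISK (or MEMORY_ONLY_SER on
# the JVM side) would be the usual alternatives to benchmark against.
words.persist(StorageLevel.OFF_HEAP)
print(words.take(5))                          # first action materializes the cache
print(words.count())                          # second action reuses it
spark.stop()
```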
APA, Harvard, Vancouver, ISO, and other styles
43

Qian, Yuwen, Liuqiang Shi, Long Shi, Kui Cai, Jun Li, and Feng Shu. "Cache-Enabled Power Line Communication Networks: Caching Node Selection and Backhaul Energy Optimization." IEEE Transactions on Green Communications and Networking 4, no. 2 (June 2020): 606–15. http://dx.doi.org/10.1109/tgcn.2020.2985378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Din, Ikram, Byung-Seo Kim, Suhaidi Hassan, Mohsen Guizani, Mohammed Atiquzzaman, and Joel Rodrigues. "Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities." Sensors 18, no. 11 (November 15, 2018): 3957. http://dx.doi.org/10.3390/s18113957.

Full text
Abstract:
Information Centric Network (ICN) is expected to be a favorable, deployable future Internet paradigm. ICN intends to replace the current IP-based model with a name-based, content-centric model, as it aims to provide better security, scalability, and content distribution. However, it is a challenging task to conceive how ICN can be linked with another fast-emerging paradigm, the Vehicular Ad hoc Network (VANET). In this article, we present an overview of the ICN-based VANET approach, along with its contributions and research challenges. In addition, the connectivity issues of the vehicular ICN model are presented together with some other emerging paradigms, such as Software Defined Network (SDN), Cloud, and Edge computing. Moreover, some ICN-based VANET research opportunities, in terms of security, mobility, routing, naming, caching, and fifth generation (5G) communications, are covered at the end of the paper.
APA, Harvard, Vancouver, ISO, and other styles
45

Luna, Augustin, Fathi Elloumi, Sudhir Varma, Yanghsin Wang, Vinodh N. Rajapakse, Mirit I. Aladjem, Jacques Robert, Chris Sander, Yves Pommier, and William C. Reinhold. "CellMiner Cross-Database (CellMinerCDB) version 1.2: Exploration of patient-derived cancer cell line pharmacogenomics." Nucleic Acids Research 49, no. D1 (November 16, 2020): D1083—D1093. http://dx.doi.org/10.1093/nar/gkaa968.

Full text
Abstract:
Abstract CellMiner Cross-Database (CellMinerCDB, discover.nci.nih.gov/cellminercdb) allows integration and analysis of molecular and pharmacological data within and across cancer cell line datasets from the National Cancer Institute (NCI), Broad Institute, Sanger/MGH and MD Anderson Cancer Center (MDACC). We present CellMinerCDB 1.2 with updates to datasets from NCI-60, Broad Cancer Cell Line Encyclopedia and Sanger/MGH, and the addition of new datasets, including NCI-ALMANAC drug combination, MDACC Cell Line Project proteomic, NCI-SCLC DNA copy number and methylation data, and Broad methylation, genetic dependency and metabolomic datasets. CellMinerCDB (v1.2) includes several improvements over the previously published version: (i) new and updated datasets; (ii) support for pattern comparisons and multivariate analyses across data sources; (iii) updated annotations with drug mechanism of action information and biologically relevant multigene signatures; (iv) analysis speedups via caching; (v) a new dataset download feature; (vi) improved visualization of subsets of multiple tissue types; (vii) breakdown of univariate associations by tissue type; and (viii) enhanced help information. The curation and common annotations (e.g. tissues of origin and identifiers) provided here across pharmacogenomic datasets increase the utility of the individual datasets to address multiple researcher question types, including data reproducibility, biomarker discovery and multivariate analysis of drug activity.
APA, Harvard, Vancouver, ISO, and other styles
46

Aranda, Luis, Pedro Reviriego, and Juan Maestro. "Protecting Image Processing Pipelines against Configuration Memory Errors in SRAM-Based FPGAs." Electronics 7, no. 11 (November 15, 2018): 322. http://dx.doi.org/10.3390/electronics7110322.

Full text
Abstract:
Image processing systems are widely used in space applications, where different radiation-induced malfunctions may occur depending on the device implementing the algorithm. SRAM-based FPGAs are commonly used to speed up image processing algorithms, but the system then becomes vulnerable to configuration memory errors caused by single event upsets (SEUs). In such systems, the captured image is streamed pixel by pixel from the camera to the FPGA. Certain local operations, such as median or rank filters, need to process the image locally rather than pixel by pixel, so particular pixel caching structures such as line-buffer-based pipelines can be used to accelerate the filtering process. However, an SRAM-based FPGA implementation of these pipelines may malfunction due to the aforementioned configuration memory errors, so an error mitigation technique is required. In this paper, a novel method to protect line-buffer-based pipelines against SRAM-based FPGA configuration memory errors is presented. Experimental results show that, using our protection technique, considerable savings in FPGA resources can be achieved while maintaining the SEU protection coverage provided by other classic pipeline protection schemes.
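As a software analogue only (the paper targets FPGA hardware, not Python), the sketch below shows the line-buffer idea: two cached rows plus the incoming row supply the 3x3 window of a streaming median filter. Names and the toy image are illustrative.

```python
# Minimal software analogue of a line-buffer pipeline (an assumption, not the
# paper's FPGA design): the deque caches the last rows seen so a 3x3 median
# window can be formed while the image is streamed row by row.
from collections import deque
from statistics import median

def stream_median_filter(rows):
    """rows: iterator of equal-length lists of pixel values, streamed row by
    row; yields filtered rows (border pixels passed through unchanged)."""
    line_buffer = deque(maxlen=3)        # caches the last three rows
    for row in rows:
        line_buffer.append(list(row))
        if len(line_buffer) < 3:
            continue                     # pipeline not yet filled
        top, mid, bot = line_buffer
        out = list(mid)                  # keep border pixels as-is
        for x in range(1, len(mid) - 1):
            window = top[x - 1:x + 2] + mid[x - 1:x + 2] + bot[x - 1:x + 2]
            out[x] = median(window)
        yield out

image = [[10, 10, 10, 10],
         [10, 99, 10, 10],               # 99 is a salt-noise pixel
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
for filtered_row in stream_median_filter(iter(image)):
    print(filtered_row)
```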
APA, Harvard, Vancouver, ISO, and other styles
47

Sarkar, D., U. K. Sarkar, and G. Peng. "Bandwidth Requirement of Links in a Hierarchical Caching Network: A Graph-Based Formulation, An Algorithm and Its Performance Evaluation." International Journal of Computers and Applications 29, no. 1 (January 2007): 70–78. http://dx.doi.org/10.1080/1206212x.2007.11441834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Seongjae, and Taehyoun Kim. "Parallel Dislocation Model Implementation for Earthquake Source Parameter Estimation on Multi-Threaded GPU." Applied Sciences 11, no. 20 (October 11, 2021): 9434. http://dx.doi.org/10.3390/app11209434.

Full text
Abstract:
Graphics processing units (GPUs) have been in the spotlight in various fields because they can process a massive amount of computation at a relatively low price. This research proposes a performance acceleration framework for Monte Carlo method-based earthquake source parameter estimation using a multi-threaded compute unified device architecture (CUDA) GPU. The Monte Carlo method imposes an exhaustive computational burden because the iterative nonlinear optimization is performed more than 1000 times. To alleviate this problem, we parallelize the rectangular dislocation model, i.e., the Okada model, since the model consists of independent point-wise computations and takes up most of the time in the nonlinear optimization. By adjusting the degree of common subexpression elimination, the thread block size, and constant caching, we obtained the best CUDA optimization configuration, which achieves 134.94×, 14.00×, and 2.99× speedups over the sequential CPU, 16-thread CPU, and baseline CUDA GPU implementations, respectively, on the 1000×1000 mesh size. We then evaluated the performance and correctness of four different line search algorithms for the limited-memory Broyden–Fletcher–Goldfarb–Shanno with boundaries (L-BFGS-B) optimization on a real earthquake dataset. The results demonstrated the Armijo line search to be the most efficient among these algorithms. The visualization results with the best-fit parameters finally derived by the proposed framework confirm that our framework approximates the earthquake source parameters in excellent agreement with the geodetic data, i.e., with at most 0.5 cm root-mean-square error (RMSE) of residual displacement.
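The Armijo rule itself is standard; the NumPy sketch below shows a textbook backtracking implementation with conventional constants (not values from the paper), applied to a toy quadratic misfit standing in for the dislocation-model residual.

```python
# Minimal NumPy sketch of an Armijo (backtracking) line search, the rule the
# paper found most efficient in its L-BFGS-B runs; c and the shrink factor
# are the usual textbook choices, not values taken from the paper.
import numpy as np

def armijo_line_search(f, grad, x, direction, alpha0=1.0, c=1e-4,
                       shrink=0.5, max_iter=50):
    """Shrink the step until the sufficient-decrease condition
    f(x + a*d) <= f(x) + c*a*grad(x).d holds."""
    fx = f(x)
    slope = grad(x) @ direction          # negative for a descent direction
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * direction) <= fx + c * alpha * slope:
            return alpha
        alpha *= shrink
    return alpha

# Usage on a toy quadratic misfit (a stand-in for the geodetic residual).
f = lambda x: 0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2)
grad = lambda x: x - np.array([1.0, -2.0])
x0 = np.zeros(2)
d = -grad(x0)                            # steepest-descent direction
step = armijo_line_search(f, grad, x0, d)
print(step, f(x0 + step * d))
```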
APA, Harvard, Vancouver, ISO, and other styles
49

ROTH, YUVAL, and RAMESH JAIN. "SIMULATION AND EXPECTATION IN SENSOR-BASED SYSTEMS." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 01 (February 1993): 145–73. http://dx.doi.org/10.1142/s0218001493000091.

Full text
Abstract:
Simulations have traditionally been used as off-line tools for examining process models and for experimenting with system models when experiments with physical systems would have been impossible, or too dangerous, expensive, or time-consuming. We propose a novel way of regarding simulations as part of both the development and the working phases of systems. In our approach, simulation is used within the processing and control loop of the system to provide sensor and state expectations. This minimizes the inverse sensory data analysis and model maintenance problems. We refer to this mode of operation as the verification mode, in contrast to the traditional discovery mode. In order to provide simulations and planning that are intertwined with the control of a physical system, temporal issues have to be considered. By limiting the focus of the system to the small portions of complex models that are temporarily relevant to the system’s operation, the system is able to maintain its models and respond faster. For this we employ the Context-based Caching (CbC) mechanism within our Mobile Platform Control and Simulation Program (MOSIM). CbC is a knowledge management technique that maintains large knowledge bases by making the necessary information available at the right time.
APA, Harvard, Vancouver, ISO, and other styles
50

Darwiche, A., and G. Provan. "Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference." Journal of Artificial Intelligence Research 6 (May 1, 1997): 147–76. http://dx.doi.org/10.1613/jair.330.

Full text
Abstract:
We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm. Interestingly enough, Q-DAGs were found to serve other purposes: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network-pruning and computation-caching.
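To make the Q-DAG idea concrete, the following Python toy (an illustration only, not the authors' compiler) builds a tiny arithmetic DAG of numbers, evidence indicators, and operations, and evaluates it with memoization so that shared subexpressions are computed once, keeping evaluation linear in the DAG size. The hand-compiled example network and its probabilities are hypothetical.

```python
# Minimal Q-DAG-style sketch: nodes are numbers, evidence indicators, or
# arithmetic operations; a memo table keeps evaluation linear in DAG size.
class Node:
    def eval(self, evidence, memo):
        if id(self) not in memo:
            memo[id(self)] = self._eval(evidence, memo)
        return memo[id(self)]

class Num(Node):                        # constant (e.g. a CPT entry)
    def __init__(self, value): self.value = value
    def _eval(self, evidence, memo): return self.value

class Indicator(Node):                  # evidence symbol: 1 if observation matches
    def __init__(self, var, val): self.var, self.val = var, val
    def _eval(self, evidence, memo):
        # Unobserved variables leave all their indicators switched on.
        return 1.0 if evidence.get(self.var, self.val) == self.val else 0.0

class Add(Node):
    def __init__(self, *kids): self.kids = kids
    def _eval(self, evidence, memo):
        return sum(k.eval(evidence, memo) for k in self.kids)

class Mul(Node):
    def __init__(self, *kids): self.kids = kids
    def _eval(self, evidence, memo):
        out = 1.0
        for k in self.kids:
            out *= k.eval(evidence, memo)
        return out

# Hand-compiled query P(B=b) for a toy one-edge network A -> B:
# sum_a P(a) * P(b | a) * indicator(A = a).
query = Add(Mul(Num(0.3), Num(0.9), Indicator("A", "a0")),
            Mul(Num(0.7), Num(0.2), Indicator("A", "a1")))
print(query.eval({}, {}))               # no evidence: 0.3*0.9 + 0.7*0.2 = 0.41
print(query.eval({"A": "a0"}, {}))      # evidence A=a0: 0.27
```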
APA, Harvard, Vancouver, ISO, and other styles
