Journal articles on the topic 'Caching'


Consult the top 50 journal articles for your research on the topic 'Caching.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Prasad, M., P. R. Sudha Rani, Raja Rao PBV, Pokkuluri Kiran Sree, P. T. Satyanarayana Murty, A. Satya Mallesh, M. Ramesh Babu, and Chintha Venkata Ramana. "Blockchain-Enabled On-Path Caching for Efficient and Reliable Content Delivery in Information-Centric Networks." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 358–63. http://dx.doi.org/10.17762/ijritcc.v11i9.8397.

Abstract:
As the demand for online content continues to grow, traditional Content Distribution Networks (CDNs) are facing significant challenges in terms of scalability and performance. Information-Centric Networking (ICN) is a promising new approach to content delivery that aims to address these issues by placing content at the center of the network architecture. One of the key features of ICNs is on-path caching, which allows content to be cached at intermediate routers along the path from the source to the destination. On-path caching in ICNs still faces some challenges, such as the scalability of the cache and the management of cache consistency. To address these challenges, this paper proposes several alternative caching schemes that can be integrated into ICNs using blockchain technology. These schemes include Bloom filters, content-based routing, and hybrid caching, which combines the advantages of off-path and on-path caching. The proposed blockchain-enabled on-path caching mechanism ensures the integrity and authenticity of cached content, and smart contracts automate the caching process and incentivize caching nodes. To evaluate the performance of these caching alternatives, the authors conduct experiments using real-world datasets. The results show that on-path caching can significantly reduce network congestion and improve content delivery efficiency. The Bloom filter caching scheme achieved a cache hit rate of over 90% while reducing the cache size by up to 80% compared to traditional caching. The content-based routing scheme also achieved high cache hit rates while maintaining low latency.
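
For illustration, the sketch below shows the kind of compact cache summary a Bloom filter provides: a router can advertise set membership for its cached content names in a fixed-size bit array, at the cost of occasional false positives. The hash construction, sizes, and content names are assumptions for the example, not details taken from the paper.

```python
import hashlib

class BloomFilter:
    """Compact, probabilistic set membership for cached content names."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from a single SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i: 4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# A router advertises its cache summary instead of a full content index.
summary = BloomFilter()
summary.add("/videos/clip42/chunk1")        # hypothetical content name
print(summary.might_contain("/videos/clip42/chunk1"))  # True
print(summary.might_contain("/videos/clip99/chunk1"))  # almost surely False
```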

2. Zhou, Mo, Bo Ji, Kun Peng Han, and Hong Sheng Xi. "A Cooperative Hybrid Caching Strategy for P2P Mobile Network." Applied Mechanics and Materials 347-350 (August 2013): 1992–96. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.1992.

Abstract:
Mobile network technologies have been developing rapidly in recent years. To meet the increasing demand of wireless users, many multimedia proxies have been deployed over wireless networks. The caching nodes constitute a wireless caching system with a P2P architecture and provide better service to mobile users. In this paper, we formulate the caching system to optimize the consumption of network bandwidth and guarantee the response time of mobile users. Two strategies are proposed to achieve this goal: single greedy caching and cooperative hybrid caching. Single greedy caching aims to reduce bandwidth consumption from the standpoint of each caching node, while cooperative hybrid caching allows sharing and coordination among multiple nodes, taking both bandwidth consumption and popularity into account. Simulation results show that cooperative hybrid caching outperforms single greedy caching in both bandwidth consumption and delay.

3. Dinh, Ngocthanh, and Younghan Kim. "An Energy Reward-Based Caching Mechanism for Information-Centric Internet of Things." Sensors 22, no. 3 (January 19, 2022): 743. http://dx.doi.org/10.3390/s22030743.

Abstract:
Existing information-centric networking (ICN) designs for the Internet of Things (IoT) mostly make caching decisions based on probability or content popularity. From an energy-efficiency perspective, those strategies may not always be efficient in resource-constrained IoT: without considering the energy reward of caching decisions, inappropriate routers and content objects may be selected for caching, which may lead to negative energy rewards. In this paper, we analyze the energy consumption of content caching and content retrieval in resource-constrained IoT and calculate the caching energy reward as a key metric to measure the energy efficiency of a caching decision. We then propose an efficient cache placement and cache replacement mechanism based on the caching energy reward to improve the energy efficiency of caching decisions. Through analysis and experimental results, we show that the proposed mechanism achieves a significant improvement in terms of energy efficiency, stretch ratio, and cache hit ratio compared to state-of-the-art caching schemes.
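
As a rough illustration of reward-driven cache placement, here is a minimal sketch in which a router caches an object only when its estimated energy reward is positive. The linear saved-minus-cost model and every parameter name below are assumptions made for the example; the paper derives its own energy model.

```python
def caching_energy_reward(req_rate, horizon_s,
                          e_tx_per_hop_j, hops_saved,
                          e_cache_write_j, e_store_j_per_s):
    """Net energy saved by caching an object at a given router.

    Hypothetical formulation: energy saved on shortened retrieval
    paths minus the energy spent writing and holding the object.
    """
    saved = req_rate * horizon_s * e_tx_per_hop_j * hops_saved
    cost = e_cache_write_j + e_store_j_per_s * horizon_s
    return saved - cost

# Cache only when the estimated reward is positive (toy parameters).
reward = caching_energy_reward(req_rate=0.2, horizon_s=600,
                               e_tx_per_hop_j=0.05, hops_saved=3,
                               e_cache_write_j=0.5, e_store_j_per_s=0.001)
print(f"reward = {reward:.2f} J -> cache" if reward > 0 else "skip")
```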

4. Wang, Yali, and Jiachao Chen. "Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning." Wireless Communications and Mobile Computing 2022 (December 22, 2022): 1–15. http://dx.doi.org/10.1155/2022/7212984.

Abstract:
By deploying resources in the vicinity of users, edge caching can substantially reduce the latency for users to retrieve content and relieve the pressure on the backbone network. Due to the capacity limitation of caches and the dynamic nature of user requests, how to allocate caching resources reasonably must be considered. Some edge caching studies improve network performance by predicting content popularity and proactively caching the most popular content, while ignoring the privacy and security issues caused by the need to collect user information at a central unit. To this end, a collaborative caching strategy based on federated learning is proposed. First, federated learning is used to make distributed predictions of the preferences of users at each node in order to develop an effective content caching policy. Then, the problem of allocating caching resources to optimize the cost of video providers is formulated as a Markov decision process, and a reinforcement learning method is used to optimize the caching decisions. Simulation results comparing several baseline caching strategies in terms of cache hit rate, transmission delay, and cost show that the proposed content caching strategy reduces the cost of video providers and achieves a higher cache hit rate and a lower average transmission delay.
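
To make the federated-learning step concrete, below is a minimal FedAvg-style aggregation sketch in which nodes share only locally trained preference-model parameters rather than raw user data. The toy weight vectors and sample counts are hypothetical; the paper's actual models and training loop are more involved.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate locally trained preference models
    without collecting raw user request logs at a central unit."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three edge nodes train local popularity predictors on private data,
# then share only model parameters (toy one-layer weights).
local_models = [np.array([0.9, 0.1]), np.array([0.7, 0.3]), np.array([0.8, 0.2])]
samples = [100, 300, 200]
global_model = fed_avg(local_models, samples)
print(global_model)  # sample-weighted average, here [0.7667 0.2333]
```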

5. Li, Feng, Kwok-Yan Lam, Li Wang, Zhenyu Na, Xin Liu, and Qing Pan. "Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience." Wireless Communications and Mobile Computing 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1680641.

Abstract:
Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edge nodes establish a one-hop caching information table for cache replacement in cases where there is not enough cache resource available within their own space. After receiving a caching request, every caching node determines the weight of the required contents and provides a response according to the availability of its own caching space. Furthermore, to increase caching efficiency from a practical perspective, we introduce the concept of quality of user experience (QoE) and properly allocate the cache resource of the whole network to better satisfy user demands. Different cache allocation strategies are devised to enhance user QoE in various circumstances. Numerical results are further provided to demonstrate the performance improvement of our proposal from various aspects.

6. Santhanakrishnan, Ganesh, Ahmed Amer, and Panos K. Chrysanthis. "Self-tuning caching: the Universal Caching algorithm." Software: Practice and Experience 36, no. 11-12 (2006): 1179–88. http://dx.doi.org/10.1002/spe.755.


7. Han, Luchao, Zhichuan Guo, and Xuewen Zeng. "Research on Multicore Key-Value Storage System for Domain Name Storage." Applied Sciences 11, no. 16 (August 12, 2021): 7425. http://dx.doi.org/10.3390/app11167425.

Abstract:
This article proposes a domain name caching method for a multicore network-traffic capture system, which significantly improves insert latency, throughput, and hit rate. The caching method is composed of a cache replacement algorithm and a cache set method. The method is easy to implement, low in deployment cost, and suitable for various multicore caching systems. Moreover, it can reduce the use of locks by changing data structures and algorithms. Experimental results show that, compared with other caching systems, our proposed method reaches the highest throughput under multiple cores, which indicates that the proposed cache method is best suited for domain name caching.

8. Soleimani, Somayeh, and Xiaofeng Tao. "Caching and Placement for In-Network Caching in Device-to-Device Communications." Wireless Communications and Mobile Computing 2018 (September 26, 2018): 1–9. http://dx.doi.org/10.1155/2018/9539502.

Abstract:
Caching content at user devices constitutes a promising solution to decrease the costly transmissions that go through the base stations (BSs). To improve the performance of in-network caching in device-to-device (D2D) communications, caching placement and content delivery should be jointly optimized. To this end, we jointly optimize the caching decision and content discovery strategies by considering successful content delivery over D2D links, maximizing the in-network caching gain through D2D communications. Moreover, an in-network caching placement problem is formulated as an integer nonlinear optimization problem. To obtain the optimal solution for the proposed problem, Lagrange dual decomposition is applied in order to reduce the complexity. Simulation results show that the proposed algorithm has a near-optimal performance, approaching that of the exhaustive search method. Furthermore, the proposed scheme has a notable in-network caching gain and an improvement in traffic offloading compared to other caching placement schemes.

9. Naeem, Nor, Hassan, and Kim. "Compound Popular Content Caching Strategy in Named Data Networking." Electronics 8, no. 7 (July 10, 2019): 771. http://dx.doi.org/10.3390/electronics8070771.

Abstract:
The aim of named data networking (NDN) is to develop an efficient data dissemination approach by implementing a cache module within the network. Caching is one of the most prominent modules of NDN that significantly enhances the Internet architecture. NDN caching can reduce the expected flood of global data traffic by providing cache storage at intermediate nodes for transmitted contents, making data dissemination more efficient. It also reduces the content delivery time by caching popular content close to consumers. In this study, a new content caching mechanism named the compound popular content caching strategy (CPCCS) is proposed for efficient content dissemination, and its performance is measured in terms of cache hit ratio, content diversity, and stretch. The CPCCS is extensively and comparatively studied against other NDN-based caching strategies, such as max-gain in-network caching (MAGIC), the WAVE popularity-based caching strategy, hop-based probabilistic caching (HPC), LeafPopDown, most popular cache (MPC), cache capacity aware caching (CCAC), and ProbCache, through simulations. The results show that the CPCCS performs better in terms of cache hit ratio, content diversity ratio, and stretch ratio than all the other strategies.

10. Wang, Yantong, and Vasilis Friderikos. "A Survey of Deep Learning for Data Caching in Edge Network." Informatics 7, no. 4 (October 13, 2020): 43. http://dx.doi.org/10.3390/informatics7040043.

Abstract:
The concept of edge caching provision in emerging 5G and beyond mobile networks is a promising method to deal with the traffic congestion problem in the core network, as well as to reduce the latency of accessing popular content. In that respect, end-user demand for popular content can be satisfied by proactively caching it at the network edge, i.e., in close proximity to the users. In addition to model-based caching schemes, learning-based edge caching optimizations have recently attracted significant attention, and the aim hereafter is to capture these recent advances for both model-based and data-driven techniques in the area of proactive caching. This paper summarizes the utilization of deep learning for data caching in edge networks. We first outline the typical research topics in content caching and formulate a taxonomy based on the network hierarchical structure. Then, the key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning, as well as reinforcement learning. Furthermore, a comparison of state-of-the-art literature is provided from the aspects of caching topics and deep learning methods. Finally, we discuss research challenges and future directions of applying deep learning for caching.

11. Chae, Seong Ho, and Wan Choi. "Caching Placement in Stochastic Wireless Caching Helper Networks: Channel Selection Diversity via Caching." IEEE Transactions on Wireless Communications 15, no. 10 (October 2016): 6626–37. http://dx.doi.org/10.1109/twc.2016.2586841.


12. Hao, Yixue, Min Chen, Donggang Cao, Wenlai Zhao, Ivan Petrov, Vitaly Antonenko, and Ruslan Smeliansky. "Cognitive-Caching: Cognitive Wireless Mobile Caching by Learning Fine-Grained Caching-Aware Indicators." IEEE Wireless Communications 27, no. 1 (February 2020): 100–106. http://dx.doi.org/10.1109/mwc.001.1900273.


13. Bai, Jingpan, Silei Zhu, and Houling Ji. "Blockchain Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment." Sensors 24, no. 7 (April 3, 2024): 2279. http://dx.doi.org/10.3390/s24072279.

Abstract:
In the mobile edge computing (MEC) environment, edge caching can provide timely data response services for intelligent scenarios. However, due to the limited storage capacity of edge nodes and malicious node behavior, the question of how to select the cached contents and realize decentralized and secure data caching faces challenges. In this paper, a blockchain-based decentralized and proactive caching strategy is proposed in an MEC environment to address this problem. The novelty is that the blockchain was adopted in an MEC environment with a proactive caching strategy based on node utility, and the corresponding optimization problem was built. The blockchain was adopted to build a secure and reliable service environment. The employed methodology is that the optimal caching strategy was achieved based on linear relaxation technology and the interior point method. Additionally, in a content caching system, there is a trade-off between cache space and node utility, and the caching strategy was proposed to solve this problem. There was also a trade-off between the consensus process delay of the blockchain and the caching latency of content. An offline consensus authentication method was adopted to reduce the influence of the consensus process delay on content caching. The key finding is that the proposed algorithm can reduce latency and ensure secure data caching in an IoT environment. Finally, the simulation experiment showed that the proposed algorithm can achieve up to 49.32%, 43.11%, and 34.85% improvements on the cache hit rate, the average content response latency, and the average system utility, respectively, compared to the random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% improvements, respectively, compared to the greedy content caching algorithm.

14. Nguyen, Quang Ngoc, Jiang Liu, Zhenni Pan, Ilias Benkacem, Toshitaka Tsuda, Tarik Taleb, Shigeru Shimamoto, and Takuro Sato. "PPCS: A Progressive Popularity-Aware Caching Scheme for Edge-Based Cache Redundancy Avoidance in Information-Centric Networks." Sensors 19, no. 3 (February 8, 2019): 694. http://dx.doi.org/10.3390/s19030694.

Abstract:
This article proposes a novel chunk-based caching scheme known as the Progressive Popularity-Aware Caching Scheme (PPCS) to improve content availability and eliminate the cache redundancy issue of Information-Centric Networking (ICN). Particularly, the proposal considers both entire-object caching and partial-progressive caching for popular and non-popular content objects, respectively. In the case that the content is not popular enough, PPCS first caches initial chunks of the content at the edge node and then progressively continues caching subsequent chunks at upstream Content Nodes (CNs) along the delivery path over time, according to the content popularity and each CN position. Therefore, PPCS efficiently avoids wasting cache space for storing on-path content duplicates and improves cache diversity by allowing no more than one replica of a specified content to be cached. To enable a complete ICN caching solution for communication networks, we also propose an autonomous replacement policy to optimize the cache utilization by maximizing the utility of each CN from caching content items. By simulation, we show that PPCS, utilizing edge-computing for the joint optimization of caching decision and replacement policies, considerably outperforms relevant existing ICN caching strategies in terms of latency (number of hops), cache redundancy, and content availability (hit rate), especially when the CN’s cache size is small.

15. Yan, Li, and Yan Sheng Qu. "Research on Caching Mechanism Based on User Community." Applied Mechanics and Materials 672-674 (October 2014): 2013–16. http://dx.doi.org/10.4028/www.scientific.net/amm.672-674.2013.

Abstract:
This paper presents a data caching system framework based on a two-layer Chord. Caches are shared by users within a domain, and frequently accessed information is shared by inter-domain users, which effectively reduces the caching system’s overhead. We also introduce a cache replacement algorithm based on the user community, in particular the user’s influence in the community and the information flow dynamics. The results of simulation and experiments in a test-bed environment show that the caching scheme based on user community outperforms most existing distributed caching schemes.

16. Li, Qi, Xiaoxiang Wang, Dongyu Wang, Yibo Zhang, Yanwen Lan, Qiang Liu, and Lei Song. "Analysis of an SDN-Based Cooperative Caching Network with Heterogeneous Contents." Electronics 8, no. 12 (December 6, 2019): 1491. http://dx.doi.org/10.3390/electronics8121491.

Abstract:
The ubiquity of data-enabled mobile devices and wireless-enabled data applications has fostered the rapid development of wireless content caching, which is an efficient approach to mitigating cellular traffic pressure. Considering the content characteristics and real caching circumstances, a software-defined network (SDN)-based cooperative caching system is presented. First, we define a new file block library with heterogeneous content attributes [file popularity, mobile user (MU) preference, file size]. An SDN-based three-tier caching network is presented in which the base station supplies control coverage for the entire macrocell, while cache helpers (CHs) and MUs with cache capacities offer data coverage. Using the ‘most popular content’ and ‘largest diversity content’ principles, a distributed cooperative caching strategy is proposed in which the caches of the MUs store the most popular contents of the file block library to mitigate the effect of MU mobility, and those of the CHs store the remaining contents in a probabilistic caching manner to enrich the content diversity and reduce the MU caching pressure. The request meet probability (RMPro) is subsequently proposed, and the optimal caching distribution of the contents in the probabilistic caching strategy is obtained via optimization. Finally, using the result of RMPro optimization, we also analyze the content retrieval delays that occur when a typical MU requests a file block or a whole file. Simulation results demonstrate that the proposed caching system can achieve quasi-optimal revenue performance compared with other contrasting schemes.
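
A minimal sketch of the probabilistic caching idea used at the CHs: a helper stores a passing content only with some probability, so different helpers tend to hold different items and overall diversity grows. The insertion probability and the random-eviction policy here are illustrative assumptions, not the optimized distribution derived in the paper.

```python
import random

def maybe_cache(cache, capacity, content_id, cache_prob):
    """Probabilistic insertion: store a passing content with probability
    cache_prob, so helpers diversify instead of all caching the same
    top items. Eviction here is random for simplicity."""
    if content_id in cache:
        return
    if random.random() < cache_prob:
        if len(cache) >= capacity:
            cache.remove(random.choice(sorted(cache)))
        cache.add(content_id)

cache = set()
for request in ["a", "b", "a", "c", "d", "a"]:
    maybe_cache(cache, capacity=2, content_id=request, cache_prob=0.5)
print(cache)  # at most two items, varying from run to run
```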

17. Al-Sayeh, Hani, Muhammad Attahir Jibril, Muhammad Waleed Bin Saeed, and Kai-Uwe Sattler. "SparkCAD." Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3694–97. http://dx.doi.org/10.14778/3554821.3554877.

Abstract:
Developers of Apache Spark applications can accelerate their workloads by caching suitable intermediate results in memory and reusing them rather than recomputing them every time they are needed. However, as scientific workflows become more complex, application developers are more prone to making wrong caching decisions, which we refer to as caching anomalies, that lead to poor performance. We present and demonstrate Spark Caching Anomalies Detector (SparkCAD), a developer decision support tool that visualizes the logical plan of Spark applications and detects caching anomalies.
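
For readers unfamiliar with the decisions SparkCAD reasons about, a small PySpark sketch of the underlying pattern: an intermediate result reused by more than one action is worth caching, while caching a once-used result would be the kind of anomaly the tool flags. The input path and column names are made up.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("caching-demo").getOrCreate()

logs = spark.read.json("events.json")          # hypothetical input path
cleaned = logs.filter("status = 200").select("user", "bytes")

# Reused by two actions below, so caching avoids recomputing the
# read + filter lineage twice. Forgetting this, or caching a result
# used only once, is the kind of anomaly SparkCAD detects.
cleaned.cache()

total_users = cleaned.groupBy("user").sum("bytes").count()
heavy_rows = cleaned.filter("bytes > 1e6").count()

cleaned.unpersist()  # release memory once the reuse window ends
```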

18. Naeem, Muhammad, Rashid Ali, Byung-Seo Kim, Shahrudin Nor, and Suhaidi Hassan. "A Periodic Caching Strategy Solution for the Smart City in Information-Centric Internet of Things." Sustainability 10, no. 7 (July 23, 2018): 2576. http://dx.doi.org/10.3390/su10072576.

Abstract:
Named Data Networking is an evolving network model of the information-centric networking (ICN) paradigm which provides name-based data contents. In-network caching is responsible for disseminating these contents in a scalable and cost-efficient way. Due to the rapid expansion of Internet of Things (IoT) traffic, ICN is envisioned to be an appropriate architecture to maintain IoT networks. In fact, ICN offers unique naming, multicast communications and, most beneficially, in-network caching that minimizes response latency and server load. The IoT environment calls for a study of ICN caching policies in terms of content placement strategies. This paper addresses caching strategies with the aim of recognizing which caching strategy is most suitable for IoT networks. Simulation results show the impact of different IoT ICN-based caching strategies; of these, periodic caching is the most appropriate strategy for IoT environments in terms of stretch, decreasing the retrieval latency and improving the cache-hit ratio.

19. Krause, Douglas J., and Tracey L. Rogers. "Food caching by a marine apex predator, the leopard seal (Hydrurga leptonyx)." Canadian Journal of Zoology 97, no. 6 (June 2019): 573–78. http://dx.doi.org/10.1139/cjz-2018-0203.

Abstract:
The foraging behaviors of apex predators can fundamentally alter ecosystems through cascading predator–prey interactions. Food caching is a widely studied, taxonomically diverse behavior that can modify competitive relationships and affect population viability. We address predictions that food caching would not be observed in the marine environment by summarizing recent caching reports from two marine mammal and one marine reptile species. We also provide multiple caching observations from disparate locations for a fourth marine predator, the leopard seal (Hydrurga leptonyx (de Blainville, 1820)). Drawing from consistent patterns in the terrestrial literature, we suggest the unusual diversity of caching strategies observed in leopard seals is due to high variability in their polar marine habitat. We hypothesize that caching is present across the spectrum of leopard seal social dominance; however, prevalence is likely to increase in smaller, less-dominant animals that hoard to gain competitive advantage. Given the importance of this behavior, we draw attention to the high probability of observing food caching behavior in other marine species.

20. Ma, Zhenjie, Haoran Wang, Ke Shi, and Xinda Wang. "Learning Automata Based Caching for Efficient Data Access in Delay Tolerant Networks." Wireless Communications and Mobile Computing 2018 (2018): 1–19. http://dx.doi.org/10.1155/2018/3806907.

Abstract:
Effective data access is one of the major challenges in Delay Tolerant Networks (DTNs), which are characterized by intermittent network connectivity and unpredictable node mobility. Various data caching schemes have been proposed to improve the performance of data access in DTNs. However, most existing data caching schemes perform poorly due to the lack of global network state information and the changing network topology in DTNs. In this paper, we propose a novel data caching scheme based on cooperative caching in DTNs, aiming at improving the success rate of data access and reducing the data access delay. In the proposed scheme, learning automata are utilized to select a set of caching nodes as the Caching Node Set (CNS) in DTNs. Unlike the existing caching schemes, which fail to address the challenging characteristics of DTNs, our scheme is designed to automatically self-adjust to the changing network topology through well-designed voting and updating processes. The proposed scheme improves the overall performance of data access in DTNs compared with former caching schemes. Simulations verify the feasibility of our scheme and the improvements in performance.

21. Keller, Robert M., and M. R. Sleep. "Applicative caching." ACM Transactions on Programming Languages and Systems 8, no. 1 (January 2, 1986): 88–108. http://dx.doi.org/10.1145/5001.5004.


22. DeBrabant, Justin, Andrew Pavlo, Stephen Tu, Michael Stonebraker, and Stan Zdonik. "Anti-caching." Proceedings of the VLDB Endowment 6, no. 14 (September 2013): 1942–53. http://dx.doi.org/10.14778/2556549.2556575.


23. Gray, Howard Richard. "Geo-Caching." Journal of Museum Education 32, no. 3 (September 2007): 285–91. http://dx.doi.org/10.1080/10598650.2007.11510578.


24. Englert, Matthias, Heiko Röglin, Jacob Spönemann, and Berthold Vöcking. "Economical Caching." ACM Transactions on Computation Theory 5, no. 2 (July 2013): 1–21. http://dx.doi.org/10.1145/2493246.2493247.


25. Afek, Yehuda, Geoffrey Brown, and Michael Merritt. "Lazy caching." ACM Transactions on Programming Languages and Systems 15, no. 1 (January 1993): 182–205. http://dx.doi.org/10.1145/151646.151651.


26. Barr, Thomas W., Alan L. Cox, and Scott Rixner. "Translation caching." ACM SIGARCH Computer Architecture News 38, no. 3 (June 19, 2010): 48–59. http://dx.doi.org/10.1145/1816038.1815970.


27. Srinath, Harsha, and Shiva Shankar Ramanna. "Web caching." Resonance 7, no. 7 (July 2002): 54–62. http://dx.doi.org/10.1007/bf02836754.


28. Nanda, Pranay, Shamsher Singh, and G. L. Saini. "A Review of Web Caching Techniques and Caching Algorithms for Effective and Improved Caching." International Journal of Computer Applications 128, no. 10 (October 15, 2015): 41–45. http://dx.doi.org/10.5120/ijca2015906656.


29. Kim, Yunkon, and Eui-Nam Huh. "EDCrammer: An Efficient Caching Rate-Control Algorithm for Streaming Data on Resource-Limited Edge Nodes." Applied Sciences 9, no. 12 (June 23, 2019): 2560. http://dx.doi.org/10.3390/app9122560.

Abstract:
This paper explores data caching as a key factor of edge computing. State-of-the-art research on data caching at edge nodes mainly considers reactive caching, proactive caching, and machine-learning-based caching, which can be heavy tasks for edge nodes. Edge nodes usually have relatively lower computing resources than cloud datacenters, as they are geographically distributed away from the administrator. Therefore, a caching algorithm should be lightweight to save computing resources on edge nodes. In addition, data caching should be agile because it has to support high-quality services on edge nodes. Accordingly, this paper proposes a lightweight, agile caching algorithm, EDCrammer (Efficient Data Crammer), which performs agile operations to control the caching rate for streaming data by using an enhanced PID (Proportional-Integral-Differential) controller. Experimental results using this lightweight, agile caching algorithm show its significant value in each scenario. In four common scenarios, the desired cache utilization was reached in 1.1 s on average and then maintained within a 4–7% deviation. The cache hit ratio is about 96%, and the optimal cache capacity is around 1.5 MB. Thus, EDCrammer can help distribute streaming data traffic to the edge nodes, mitigate the uplink load on the central cloud, and ultimately provide users with high-quality video services. We also hope that EDCrammer can improve overall service quality in 5G environments, Augmented Reality/Virtual Reality (AR/VR), Intelligent Transportation Systems (ITS), the Internet of Things (IoT), etc.
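
The enhanced PID controller is the paper's own design; the sketch below is only a textbook discrete PID loop, included to show the control idea of steering cache utilization toward a setpoint. The gains, setpoint, and toy plant update are assumptions.

```python
class PIDController:
    """Discrete PID loop driving cache utilization toward a setpoint."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Steer the caching rate so utilization settles near 80% (toy plant).
pid = PIDController(kp=0.8, ki=0.2, kd=0.05, setpoint=0.8)
utilization = 0.2
for step in range(20):
    rate_adjust = pid.update(utilization, dt=0.1)
    utilization = min(1.0, max(0.0, utilization + 0.1 * rate_adjust))
print(f"utilization ~ {utilization:.2f}")
```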

30. Man, Dapeng, Yao Wang, Hanbo Wang, Jiafei Guo, Jiguang Lv, Shichang Xuan, and Wu Yang. "Information-Centric Networking Cache Placement Method Based on Cache Node Status and Location." Wireless Communications and Mobile Computing 2021 (September 14, 2021): 1–13. http://dx.doi.org/10.1155/2021/5648765.

Abstract:
Information-Centric Networking with caching is a very promising future network architecture. Research on its cache deployment strategies falls into three categories: noncooperative caching, explicit cooperative caching, and implicit cooperative caching. Noncooperative caching can cause problems such as a high content repetition rate in the web cache space. Explicit cooperative caching generally achieves the best caching effect but requires a lot of communication to exchange cache node information and depends on a controller to perform the calculation. On this basis, implicit cooperative caching can reduce the information exchange and calculation between cache nodes while maintaining a good caching effect. Therefore, this paper proposes an on-path implicit cooperative cache deployment method based on a dynamic LRU-K cache replacement strategy. This method evaluates cache nodes based on their network location and state and selects the node with the best state value on the transmission path for caching. Each request selects only one or two nodes on the request path for caching, to reduce data redundancy. Simulation experiments show that the cache deployment method based on the state and location of cache nodes can improve the hit rate and reduce the average request length.
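
For reference, a compact sketch of classic LRU-K replacement, the basis of the paper's dynamic variant: each entry keeps its last K reference times, and the victim is the entry whose K-th most recent reference is oldest, with entries referenced fewer than K times evicted first. The API is illustrative.

```python
import time
from collections import defaultdict, deque

class LRUKCache:
    """LRU-K replacement: evict the entry whose K-th most recent
    reference is oldest (entries with fewer than K references first)."""
    def __init__(self, capacity, k=2):
        self.capacity, self.k = capacity, k
        self.store = {}
        self.history = defaultdict(lambda: deque(maxlen=k))

    def access(self, key, value=None):
        self.history[key].append(time.monotonic())
        if key in self.store:
            return self.store[key]
        if value is not None:
            if len(self.store) >= self.capacity:
                self._evict()
            self.store[key] = value
        return value

    def _evict(self):
        def kth_ref(key):
            refs = self.history[key]
            # Fewer than k references -> treat as the oldest possible.
            return refs[0] if len(refs) == self.k else float("-inf")
        victim = min(self.store, key=kth_ref)
        del self.store[victim]

c = LRUKCache(capacity=2, k=2)
c.access("a", 1); c.access("a"); c.access("b", 2)
c.access("c", 3)  # evicts "b": only one reference, "a" has two
```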

31. Zhou, Tianchi, Peng Sun, and Rui Han. "An Active Path-Associated Cache Scheme for Mobile Scenes." Future Internet 14, no. 2 (January 19, 2022): 33. http://dx.doi.org/10.3390/fi14020033.

Abstract:
With the widespread growth of mass content, information-centric networks (ICN) have become one of the research hotspots of future network architecture. One of the important features of ICN is ubiquitous in-network caching. In recent years, the explosive growth of mobile devices has brought content dynamics, which poses a new challenge to the original ICN caching mechanism. This paper focuses on the WiFi mobile scenario of ICN. We design a new path-associated active caching scheme to shorten the time delay of users obtaining content to enhance the user experience. In this article, based on the WiFi scenario, we first propose a solution for neighbor access point selection from a theoretical perspective, considering the caching cost and transition probability. The cache content is then forwarded based on the selected neighbor set. For cached content, we propose content freshness according to mobile characteristics and consider content popularity at the same time. For cache nodes, we focus on the size of the remaining cache space and the number of hops from the cache to the user. We have implemented this strategy based on the value of caching on the forwarding path. The simulation results show that our caching strategy has a significant improvement in performance compared with active caching and other caching strategies.

32. Zheng, Yi-fan, Ning Wei, and Yi Liu. "Collaborative Computation for Offloading and Caching Strategy Using Intelligent Edge Computing." Mobile Information Systems 2022 (July 30, 2022): 1–12. http://dx.doi.org/10.1155/2022/4840801.

Abstract:
Computation offloading and caching are well-established techniques for supporting resource-intensive mobile applications. Furthermore, offloaded tasks can be replicated when several customers are within easy reach, owing to the rise of mobile cooperation applications. However, the problematic characteristics of offloading and caching strategies delay bandwidth transfer from mobile computing devices to cloud computing. A new technical approach to limit these issues and unwanted behaviors in offloading and caching, called the intellectual power computing framework (IPCF), is proposed. IPCF relies on two conventional offloading and caching strategies, called the systematic offloading technique and managerial migrant caching. The systematic offloading technique migrates data transfers from destination to location basis in order to limit network delays. Managerial migrant caching duplicates the data required by the mobile terminals (MTs) from remote cloud storage to the mobile application to reduce access time. The prohibited actions of current techniques are avoided, and solutions are enhanced for a better communication strategy. The simulation analysis shows that IPCF performs better, reaching efficient outcomes for offloading and caching processes.

33. Zhang, Xinyu, Zhigang Hu, Meiguang Zheng, Yang Liang, Hui Xiao, Hao Zheng, and Aikun Xu. "LFDC: Low-Energy Federated Deep Reinforcement Learning for Caching Mechanism in Cloud–Edge Collaborative." Applied Sciences 13, no. 10 (May 16, 2023): 6115. http://dx.doi.org/10.3390/app13106115.

Abstract:
The optimization of caching mechanisms has long been a crucial research focus in cloud–edge collaborative environments. Effective caching strategies can substantially enhance user experience quality in these settings. Deep reinforcement learning (DRL), with its ability to perceive the environment and develop intelligent policies online, has been widely employed for designing caching strategies. Recently, federated learning combined with DRL has been gaining popularity for optimizing caching strategies and protecting training data privacy from eavesdropping attacks. However, online federated deep reinforcement learning algorithms face high environmental dynamics, and real-time training can result in increased training energy consumption despite improving caching efficiency. To address this issue, we propose a low-energy federated deep reinforcement learning strategy for caching mechanisms (LFDC) that balances caching efficiency and training energy consumption. The LFDC strategy encompasses a novel energy efficiency model, a deep reinforcement learning mechanism, and a dynamic energy-saving federated policy. Our experimental results demonstrate that the proposed LFDC strategy significantly outperforms existing benchmarks in terms of energy efficiency.

34. Zulfa, Mulki Indana, Rudy Hartanto, and Adhistya Erna Permanasari. "Caching strategy for Web application – a systematic literature review." International Journal of Web Information Systems 16, no. 5 (October 5, 2020): 545–69. http://dx.doi.org/10.1108/ijwis-06-2020-0032.

Abstract:
Purpose Internet users and Web-based applications continue to grow every day. The response time of a Web application largely determines the convenience of its users. Caching Web content is one strategy that can be used to speed up response time. This strategy is divided into three main techniques, namely, Web caching, Web prefetching and application-level caching. The purpose of this paper is to put forward a literature review of caching strategy research that can be used in Web-based applications. Design/methodology/approach The methods used in this paper were as follows: determine the review method, conduct the review process, analyze pros and cons, and explain conclusions. The review was carried out by searching literature from leading journals and conferences, starting by determining keywords related to caching strategies. To limit the literature to the latest work in accordance with current developments in website technology, search results were restricted to the past 10 years, to English only and to computer science only. Findings Note in advance that Web caching and Web prefetching are slightly overlapping techniques, because they share the same goal of reducing latency on the user’s side, but the two techniques are motivated by different basic mechanisms. Web caching uses the basic mechanism of cache replacement, i.e., the algorithm that changes cache objects in memory when the cache capacity is full, whereas Web prefetching uses the basic mechanism of predicting cache objects that may be accessed in the future. This paper also contributes practical guidelines for choosing the appropriate caching strategy for Web-based applications. Originality/value This paper conducts a state-of-the-art review of caching strategies that can be used in Web applications. Exclusively, this paper presents the taxonomy, pros and cons of selected research and discusses the data sets that are often used in caching strategy research. This paper also provides another contribution, namely, practical instructions for Web developers to decide on a caching strategy.

35. Hurly, T. Andrew, and Raleigh J. Robertson. "Scatterhoarding by territorial red squirrels: a test of the optimal density model." Canadian Journal of Zoology 65, no. 5 (May 1, 1987): 1247–52. http://dx.doi.org/10.1139/z87-194.

Abstract:
We observed a high degree of scatterhoarding in a population of red squirrels and tested two predictions of the Optimal Density Model (ODM): (1) large food items will be cached at a greater distance from their source than small items; and (2) caches will be uniformly distributed about their source. Caching experiments supported prediction 1. Red squirrels carried large food items farther than small items before caching them. Prediction 2 was not supported; caches were distributed nonuniformly about their source both within and among caching bouts. We present a simple null model for scatterhoarding, which demonstrates that prediction 1 is not exclusive to the Optimal Density Model. Analyses of our cone-caching data and published data suggested that "optimal densities" were not the primary goal of the caching animal, but rather the result of a positive relationship between food value and investment in caching (carrying distance).

36. Park, Seongsoo, Minseop Jeong, and Hwansoo Han. "CCA: Cost-Capacity-Aware Caching for In-Memory Data Analytics Frameworks." Sensors 21, no. 7 (March 26, 2021): 2321. http://dx.doi.org/10.3390/s21072321.

Abstract:
To process data from IoT and wearable devices, analysis tasks are often offloaded to the cloud. As the amount of sensing data ever increases, optimizing the data analytics frameworks is critical to the performance of processing sensed data. A key approach to speeding up data analytics frameworks in the cloud is caching intermediate data that is used repeatedly in iterative computations. Existing analytics engines implement caching with various approaches. Some use run-time mechanisms with dynamic profiling, and others rely on programmers to decide which data to cache. Even though the caching discipline has long been investigated in computer systems research, recent data analytics frameworks still leave room for optimization. As sophisticated caching should consider complex execution contexts such as cache capacity, the size of data to cache, victims to evict, etc., a general solution often does not exist for data analytics frameworks. In this paper, we propose an application-specific, cost-capacity-aware caching scheme for in-memory data analytics frameworks. We use a cost model, built from multiple representative inputs, and an execution flow analysis, extracted from the DAG schedule, to select primary candidates to cache among the intermediate data. After the caching candidates are determined, the optimal caching is automatically selected during execution, even though programmers no longer manually determine the caching of intermediate data. We implemented our scheme in Apache Spark and experimentally evaluated it on HiBench benchmarks. Compared to the caching decisions in the original benchmarks, our scheme increases performance by 27% with sufficient cache memory and by 11% with insufficient cache memory, respectively.
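
A greedy sketch of the cost-capacity trade-off described above: rank intermediates by profiled recomputation cost per byte and cache down the list until capacity runs out. The candidate tuples and the value-density heuristic are illustrative; the paper's selection relies on its own cost model and DAG analysis.

```python
def select_cache_set(candidates, capacity_bytes):
    """Greedy knapsack heuristic: prefer intermediates whose recompute
    cost per byte is highest, until cache capacity is exhausted.

    candidates: list of (name, recompute_cost_s, size_bytes) tuples;
    in practice the cost would come from profiling, here it is given.
    """
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, used = [], 0
    for name, cost, size in ranked:
        if used + size <= capacity_bytes:
            chosen.append(name)
            used += size
    return chosen

candidates = [("joined", 120.0, 4e9), ("filtered", 30.0, 5e8),
              ("features", 80.0, 2e9)]
print(select_cache_set(candidates, capacity_bytes=3e9))
# ['filtered', 'features'] -- "joined" is costly but too large to fit
```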

37. Sheraz, Muhammad, Shahryar Shafique, Sohail Imran, Muhammad Asif, Rizwan Ullah, Muhammad Ibrar, Jahanzeb Khan, and Lunchakorn Wuttisittikulkij. "A Reinforcement Learning Based Data Caching in Wireless Networks." Applied Sciences 12, no. 11 (June 3, 2022): 5692. http://dx.doi.org/10.3390/app12115692.

Abstract:
Data caching has emerged as a promising technique to handle the growing data traffic and backhaul congestion of wireless networks. However, there is a concern regarding how and where to place contents to optimize data access by users. Data caching can be exploited close to users by deploying cache entities at Small Base Stations (SBSs). In this approach, SBSs cache contents through the core network during off-peak traffic hours. Then, SBSs provide cached contents to content-demanding users during peak traffic hours with low latency. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligence-based data caching mechanism inspired by an artificial intelligence approach known as Reinforcement Learning (RL). Our proposed RL-based data caching mechanism is adaptive to dynamic learning and tracks network states to capture users’ diverse and varying data demands. Our proposed approach optimizes data caching at the SBS level by observing users’ data demands and locations to efficiently utilize the limited cache resources of the SBS. Extensive simulations are performed to evaluate the performance of the proposed caching mechanism based on various factors such as caching capacity, data library size, etc. The obtained results demonstrate that our proposed caching mechanism achieves a 4% performance gain in terms of delay vs. contents, a 3.5% performance gain in terms of delay vs. users, a 2.6% performance gain in terms of delay vs. cache capacity, an 18% performance gain in terms of percentage traffic offloading vs. popularity skewness (γ), and a 6% performance gain in terms of backhaul saving vs. cache capacity.
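
A bare tabular Q-learning skeleton for a cache-admission decision, included only to make the RL loop concrete; the state encoding, action set, and reward signal are illustrative choices, not the paper's formulation.

```python
import random
from collections import defaultdict

# State = (content_id, cache_full); actions = {0: skip, 1: cache}.
# The reward could be the negative access delay observed afterwards.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < epsilon:           # explore
        return random.choice([0, 1])
    return max((0, 1), key=lambda a: Q[(state, a)])  # exploit

def learn(state, action, reward, next_state):
    best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

s = ("content_7", False)
a = choose_action(s)
learn(s, a, reward=-0.12, next_state=("content_3", True))
```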

38. Yin, Jiliang, Congfeng Jiang, Hidetoshi Mino, and Christophe Cérin. "Popularity-Aware In-Network Caching for Edge Named Data Network." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–13. http://dx.doi.org/10.1155/2021/3791859.

Abstract:
The traditional centralized network architecture can lead to a bandwidth bottleneck in the core network. In contrast, in the information-centric network, decentralized in-network caching can alleviate the traffic flow pressure from the network center to the edge. In this paper, a popularity-aware in-network caching policy, namely, Pop, is proposed to achieve an optimal caching of network contents in the resource-constrained edge networks. Specifically, Pop senses content popularity and distributes content caching without adding additional hardware and traffic overhead. We conduct extensive performance evaluation experiments by using ndnSIM. The experiments showed that the Pop policy achieves 54.39% cloud service hit reduction ratio and 22.76% user request average hop reduction ratio and outperforms other policies including Leave Copy Everywhere, Leave Copy Down, Probabilistic Caching, and Random choice caching. In addition, we proposed an ideal caching policy (Ideal) as a baseline whose popularity is known in advance; the gap of Pop and Ideal in cloud service hit reduction ratio is 4.36%, and the gap in user request average hop reduction ratio is only 1.47%. More simulation results further show the accuracy of Pop in perceiving popularity of contents, and Pop has good robustness in different request scenarios.

39. Jia, Qingmin, RenChao Xie, Tao Huang, Jiang Liu, and Yunjie Liu. "Caching Resource Sharing for Network Slicing in 5G Core Network." Journal of Organizational and End User Computing 31, no. 4 (October 2019): 1–18. http://dx.doi.org/10.4018/joeuc.2019100101.

Abstract:
Network slicing has been considered a promising technology in next generation mobile networks (5G), which can create virtual networks and provide customized service on demand. Most existing works on network slicing mainly focus on virtualization technology, and have not considered in-network caching well. However, in-network caching, as the one of the key technologies for information-centric networking (ICN), has been considered as a significant approach in 5G network to cope with the traffic explosion and network challenges. In this article, the authors jointly consider in-network caching combining with network slicing. They propose an efficient caching resource sharing scheme for network slicing in 5G core network, aiming at solving the problem of how to efficiently share the limited physical caching resource of Infrastructure Provider (InP) among multiple network slices. In addition, from the perspective of network slicing, the authors formulate caching resource sharing problem as a non-cooperative game, and propose an iteration algorithm based on caching resource updating to obtain the Nash Equilibrium solution. Simulation results show that the proposed algorithm has good convergence performance, and illustrate the effectiveness of the proposed scheme.

40. Dinh, Ngoc-Thanh, and Young-Han Kim. "An Efficient Correlation-Based Cache Retrieval Scheme at the Edge for Internet of Things." Sensors 20, no. 23 (November 30, 2020): 6846. http://dx.doi.org/10.3390/s20236846.

Abstract:
Existing caching mechanisms consider content objects individually, without considering the semantic correlation among content objects. We argue that this approach can be inefficient in the Internet of Things due to the highly redundant nature of IoT device deployments and the data accuracy tolerance of IoT applications. In many IoT applications, an approximate answer is acceptable; therefore, a cache of an information object having a high semantic correlation with the requested information object can be used instead of a cache of the exact requested information object. In this case, caching both of the information objects can be inefficient and redundant. This paper proposes a cache retrieval scheme which considers the semantic correlation of nodes' information objects for cache retrieval. We illustrate the benefits of considering semantic information correlation in caching by studying IoT data caching at the edge. Our experiments and analysis show that semantically correlated caching can significantly improve efficiency and cache hits and reduce the resource consumption of IoT devices.

41. Gul-E-Laraib, Sardar Khaliq uz Zaman, Tahir Maqsood, Faisal Rehman, Saad Mustafa, Muhammad Amir Khan, Neelam Gohar, Abeer D. Algarni, and Hela Elmannai. "Content Caching in Mobile Edge Computing Based on User Location and Preferences Using Cosine Similarity and Collaborative Filtering." Electronics 12, no. 2 (January 5, 2023): 284. http://dx.doi.org/10.3390/electronics12020284.

Abstract:
High-speed internet has boosted clients’ traffic needs. Content caching on mobile edge computing (MEC) servers reduces traffic and latency. Caching with MEC faces difficulties such as user mobility, limited storage, varying user preferences, and rising video streaming needs. Current content caching techniques consider user mobility and content popularity to improve the experience. However, no existing solution addresses user preferences and mobility together, although both affect caching decisions. We propose mobility- and user-preference-aware caching for MEC. Using time series, the proposed system finds mobility patterns and groups nearby trajectories. Using cosine similarity and collaborative filtering (CF), we predict and cache user-requested content. CF predicts the popularity of group-based content to improve the cache hit ratio and reduce delay compared to baseline techniques.
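
A small sketch of the cosine-similarity collaborative-filtering step described above, operating on per-user request counts; the data layout and toy numbers are assumptions.

```python
import numpy as np

def predict_scores(ratings, user):
    """User-based CF with cosine similarity: estimate a user's interest
    in unseen content from similar users' requests. Rows of `ratings`
    are per-user request counts per content (illustrative layout)."""
    sims = np.array([
        np.dot(ratings[user], ratings[v]) /
        (np.linalg.norm(ratings[user]) * np.linalg.norm(ratings[v]) + 1e-9)
        for v in range(len(ratings))])
    sims[user] = 0.0  # exclude self-similarity
    weighted = sims @ ratings
    return weighted / (sims.sum() + 1e-9)

requests = np.array([[5, 0, 2, 0],   # user 0
                     [4, 1, 2, 0],   # user 1 (similar to user 0)
                     [0, 3, 0, 4]])  # user 2 (dissimilar)
scores = predict_scores(requests, user=0)
print(scores)  # high where similar users requested; cache the top items
```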

42. Luan, Yuchen, Fukun Sun, and Jiaen Zhou. "A Service-Caching Strategy Assisted by Double DQN in LEO Satellite Networks." Sensors 24, no. 11 (May 24, 2024): 3370. http://dx.doi.org/10.3390/s24113370.

Abstract:
Satellite fog computing (SFC) achieves computation, caching, and other functionalities through collaboration among fog nodes. Satellites can provide real-time and reliable satellite-to-ground fusion services by pre-caching content that users may request in advance. However, due to the high-speed mobility of satellites, the complexity of user-access conditions poses a new challenge in selecting optimal caching locations and improving caching efficiency. Motivated by this, in this paper, we propose a real-time caching scheme based on a Double Deep Q-Network (Double DQN). The overarching objective is to enhance the cache hit rate. The simulation results demonstrate that the algorithm proposed in this paper improves the data hit rate by approximately 13% compared to methods without reinforcement learning assistance.
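
To make the Double DQN component concrete, here is the standard Double DQN target computation in PyTorch, where the online network selects the next action and the target network evaluates it, curbing Q-value overestimation. The network interfaces and batch layout are illustrative; the satellite-specific state and reward design belong to the paper.

```python
import torch

def double_dqn_targets(online_net, target_net, batch, gamma=0.99):
    """Double DQN: the online network picks the next action, the target
    network evaluates it, reducing the overestimation that plain DQN
    suffers when learning which contents to pre-cache."""
    states, actions, rewards, next_states, dones = batch
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

# Training step sketch: fit online Q-values toward these targets, e.g.
# q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
# loss = torch.nn.functional.smooth_l1_loss(q, targets)
```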

43. Chuang, Shu-Min, Chia-Sheng Chen, and Eric Hsiao-Kuang Wu. "The Implementation of Interactive VR Application and Caching Strategy Design on Mobile Edge Computing (MEC)." Electronics 12, no. 12 (June 16, 2023): 2700. http://dx.doi.org/10.3390/electronics12122700.

Abstract:
Virtual reality (VR) and augmented reality (AR) have been proposed as revolutionary applications for the next generation, especially in education. Many VR applications have been designed to promote learning via virtual environments and 360° video. However, due to the strict requirements of end-to-end latency and network bandwidth, numerous VR applications using 360° video streaming may not achieve a high-quality experience. To address this issue, we propose relying on tile-based 360° video streaming and the caching capacity in Mobile Edge Computing (MEC) to predict the field of view (FoV) in the head-mounted device, then deliver the required tiles. Prefetching tiles in MEC can save the bandwidth of the backend link and support multiple users. Smart caching decisions may reduce the memory at the edge and compensate for the FoV prediction error. For instance, caching whole tiles at each small cell has a higher storage cost compared to caching one small cell that covers multiple users. In this paper, we define a tile selection, caching, and FoV coverage model as the Tile Selection and Caching Problem and propose a heuristic algorithm to solve it. Using a dataset of real users’ head movements, we compare our algorithm to the Least Recently Used (LRU) and Least Frequently Used (LFU) caching policies. The results show that our proposed approach improves FoV coverage by 30% and reduces caching costs by 25% compared to LFU and LRU.

44. Sheraz, Muhammad, Shahryar Shafique, Sohail Imran, Muhammad Asif, Rizwan Ullah, Muhammad Ibrar, Andrzej Bartoszewicz, and Saleh Mobayen. "Mobility-Aware Data Caching to Improve D2D Communications in Heterogeneous Networks." Electronics 11, no. 21 (October 24, 2022): 3434. http://dx.doi.org/10.3390/electronics11213434.

Abstract:
User Equipment (UE) is equipped with limited cache resources that can be utilized to offload data traffic through device-to-device (D2D) communications. Data caching at a UE level has the potential to significantly alleviate data traffic burden from the backhaul link. Moreover, in wireless networks, users exhibit mobility that poses serious challenges to successful data transmission via D2D communications due to intermittent connectivity among users. Users’ mobility can be exploited to efficiently cache contents by observing connectivity patterns among users. Therefore, it is crucial to develop an efficient data caching mechanism for UE while taking into account users’ mobility patterns. In this work, we propose a mobility-aware data caching approach to enhance data offloading via D2D communication. First, we model users’ connectivity patterns. Then, contents are cached in UE’ cache resources based on users’ data preferences. In addition, we also take into account signal-to-interference and noise ratio (SINR) requirements of the users. Hence, our proposed caching mechanism exploits connectivity patterns of users to perform data placement based on users’ own demands and neighboring users to enhance data offloading via cache resources. We performed extensive simulations to investigate the performance of our proposed mobility-aware data caching mechanism. The performance of our proposed caching mechanism is compared to most deployed data caching mechanisms, while taking into account the dynamic nature of the wireless channel and the interference experienced by the users. From the obtained results, it is evident that our proposed approach achieves 14%, 16%, and 11% higher data offloading gain than the least frequently used, the Zipf-based probabilistic, and the random caching schemes in case of an increasing number of users, cache capacity, and number of contents, respectively. Moreover, we also analyzed cache hit rates, and our proposed scheme achieves 8% and 5% higher cache hit rate than the least frequently used, the Zipf-based probabilistic, and the random caching schemes in case of an increasing number of contents and cache capacity, respectively. Hence, our proposed caching mechanism brings significant improvement in data sharing via D2D communications.
APA, Harvard, Vancouver, ISO, and other styles
45

Sathiyamoorthi and Murali Bhaskaran. "Novel Approaches for Integrating MART1 Clustering Based Pre-Fetching Technique with Web Caching." International Journal of Information Technology and Web Engineering 8, no. 2 (April 2013): 18–32. http://dx.doi.org/10.4018/jitwe.2013040102.

Full text
Abstract:
Web caching and Web pre-fetching are two important techniques for improving the performance of Web-based information retrieval systems. The two techniques complement each other, since Web caching exploits the temporal locality of Web objects whereas Web pre-fetching exploits their spatial locality. However, if Web caching and pre-fetching are integrated inefficiently, network traffic and Web server load may both increase. Conventional replacement policies suit memory caching, which involves fixed page sizes, but Web caching involves pages of varying sizes and therefore needs an algorithm designed for the Web cache environment. Moreover, conventional replacement policies are unsuitable in a clustering-based pre-fetching environment, where multiple objects are pre-fetched at once. Care must therefore be taken when integrating Web caching with Web pre-fetching to overcome these limitations. In this paper, novel algorithms are proposed for integrating Web caching with a clustering-based pre-fetching technique that uses Modified ART1 clustering. The proposed algorithm outperforms traditional algorithms in terms of hit rate and the number of objects to be pre-fetched, and hence saves bandwidth.
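A concrete example of the size problem the abstract raises: the classic GreedyDual-Size policy ranks objects by an inflated value H = L + cost / size, so large objects must justify the space they occupy. The sketch below illustrates that policy only; it is not the paper's Modified ART1-based algorithm:

```python
class GreedyDualSizeCache:
    """GreedyDual-Size: each object gets value H = L + cost / size, where L is a
    running inflation clock; the object with the smallest H is evicted first."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = {}                   # url -> (size_bytes, H value)
        self.L = 0.0                        # inflation clock

    def get(self, url, fetch_cost=1.0):
        if url not in self.objects:
            return False                    # miss
        size, _ = self.objects[url]
        self.objects[url] = (size, self.L + fetch_cost / size)  # refresh H on a hit
        return True

    def put(self, url, size, fetch_cost=1.0):
        while self.objects and self.used + size > self.capacity:
            victim = min(self.objects, key=lambda u: self.objects[u][1])
            self.L = self.objects[victim][1]    # advance the clock to the evicted H
            self.used -= self.objects[victim][0]
            del self.objects[victim]
        if size <= self.capacity:
            self.objects[url] = (size, self.L + fetch_cost / size)
            self.used += size
```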
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, James Z., Zhidian Du, and Pradip K. Srimani. "Cooperative Proxy Caching for Wireless Base Stations." Mobile Information Systems 3, no. 1 (2007): 1–18. http://dx.doi.org/10.1155/2007/371572.

Full text
Abstract:
This paper proposes a mobile cache model to facilitate cooperative proxy caching in wireless base stations. The model uses a network cache line to record the caching state information of a web document for effective data search and cache space management. Based on this model, a P2P cooperative proxy caching scheme is proposed that uses a self-configured and self-managed virtual proxy graph (VPG), independent of the underlying wireless network structure and adaptive to network and geographic environment changes, to achieve efficient data searching, data caching, and data replication. Driven by demand, the aggregate effect of the data caching, searching, and replicating actions of individual proxy servers automatically migrates cached web documents closer to the interested clients. In addition, a cache line migration (CLM) strategy is proposed to move and replicate the heads of the network cache lines of web documents associated with a moving mobile host to the new base station during handoff. These replicated cache line heads provide direct links to the cached web documents the moving mobile host accessed in the previous base station, thus improving mobile web caching performance. Performance studies have shown that the proposed P2P cooperative proxy caching schemes significantly outperform existing caching schemes.
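As a rough illustration of the network cache line idea, the record below tracks which base stations hold copies of a document and replicates the line head on handoff. All field and method names are hypothetical; the paper defines the actual per-line state:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkCacheLine:
    """One web document's caching state in the cooperative proxy system (sketch)."""
    url: str
    holders: set = field(default_factory=set)   # base stations holding a cached copy
    head: str = ""                              # base station owning the line's head

    def migrate_head(self, new_base_station):
        # On handoff, replicate the line head to the mobile host's new base
        # station so it keeps a direct link to documents cached at the old one.
        self.holders.add(new_base_station)
        self.head = new_base_station
```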
APA, Harvard, Vancouver, ISO, and other styles
47

Shaw, Rachael C., and Nicola S. Clayton. "Careful cachers and prying pilferers: Eurasian jays (Garrulus glandarius) limit auditory information available to competitors." Proceedings of the Royal Society B: Biological Sciences 280, no. 1752 (February 7, 2013): 20122238. http://dx.doi.org/10.1098/rspb.2012.2238.

Full text
Abstract:
Food-storing corvids use many cache-protection and pilfering strategies. We tested whether Eurasian jays (Garrulus glandarius) reduce the transfer of auditory information to a competitor when caching and pilfering. We gave jays a noisy and a quiet substrate to cache in. Compared with when alone, birds cached less in the noisy substrate when with a conspecific that could hear but could not see them caching. By contrast, jays did not change the amount cached in the noisy substrate when they were with a competitor that could see and hear them caching compared with when they were alone. Together, these results suggest that jays reduce auditory information during caching as a cache-protection strategy. By contrast, as pilferers, jays did not attempt to conceal their presence from a cacher and did not prefer a silent viewing perch over a noisy one when observing caching. However, birds vocalized less when watching caching compared with when they were alone, when they were watching a non-caching conspecific or when they were watching their own caches being pilfered. Pilfering jays may therefore attempt to suppress some types of auditory information. Our results raise the possibility that jays both understand and can attribute auditory perception to another individual.
APA, Harvard, Vancouver, ISO, and other styles
48

Zaman, Sardar Khaliq uz, Saad Mustafa, Hajira Abbasi, Tahir Maqsood, Faisal Rehman, Muhammad Amir Khan, Mushtaq Ahmed, Abeer D. Algarni, and Hela Elmannai. "Cooperative Content Caching Framework Using Cuckoo Search Optimization in Vehicular Edge Networks." Applied Sciences 13, no. 2 (January 5, 2023): 780. http://dx.doi.org/10.3390/app13020780.

Full text
Abstract:
Vehicular edge networks (VENs) connect vehicles so they can share data and infotainment content collaboratively, improving network performance. Data volumes are growing rapidly with technological advancement, making it difficult to keep mobile devices connected at all times and in all locations. For vehicle-to-vehicle (V2V) communication, vehicles are equipped with onboard units (OBUs) and supported by roadside units (RSUs). All user-uploaded data is cached in the cloud server's main database via the backhaul. Caching stores database data and delivers it on demand. Pre-caching data on the predicted next server closest to the user, before the request arrives, improves system performance. OBUs, RSUs, and base stations (BSs) cache data in VENs to fulfill user requests rapidly, reducing data retrieval costs and times. Owing to storage and computation expenses, the complete data cannot be stored on a single device for vehicle caching. We reduce content delivery delays by applying the cuckoo search optimization algorithm to cooperative content caching; cooperation among end users, in the form of data sharing with neighbors, further reduces delivery delays. The proposed model performs cooperative content caching based on content popularity and accurate vehicle position prediction using K-means clustering. Performance is measured by caching cost, delivery cost, response time, and cache hit ratio, and on these parameters the proposed algorithm outperforms the alternatives.
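For illustration, cuckoo search over cache placements can be sketched as follows: candidate placements are nests, new solutions are generated by perturbing a nest, and a fraction pa of the worst nests is abandoned each round. This is a simplified sketch in which Levy flights are approximated by single-content swaps; it is not the paper's algorithm, and the cost model is an assumption:

```python
import random

def delivery_cost(placement, popularity, local_cost=1.0, backhaul_cost=10.0):
    """Expected cost: cached contents are served locally, the rest via backhaul."""
    return sum(popularity[c] * (local_cost if c in placement else backhaul_cost)
               for c in range(len(popularity)))

def cuckoo_cache_placement(num_contents, cache_size, popularity,
                           nests=15, iterations=200, pa=0.25, rng=random):
    def random_nest():
        return set(rng.sample(range(num_contents), cache_size))

    def perturb(nest):                      # stand-in for a Levy flight
        child = set(nest)
        child.remove(rng.choice(sorted(child)))
        child.add(rng.choice([c for c in range(num_contents) if c not in child]))
        return child

    population = [random_nest() for _ in range(nests)]
    for _ in range(iterations):
        i = rng.randrange(nests)            # a cuckoo lays an egg in a random nest
        candidate = perturb(population[i])
        if delivery_cost(candidate, popularity) < delivery_cost(population[i], popularity):
            population[i] = candidate       # keep the better placement
        population.sort(key=lambda n: delivery_cost(n, popularity))
        for j in range(int(nests * (1 - pa)), nests):
            population[j] = random_nest()   # abandon the worst fraction pa of nests
    return population[0]                    # best placement found
```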
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Xiaohui, Kehe Wu, and Fei Chen. "Smart Caching Based on Mobile Agent of Power WebGIS Platform." Scientific World Journal 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/757182.

Full text
Abstract:
Power information construction is developing in an intensive, platform-based, and distributed direction with the expansion of the power grid and the improvement of information technology. To meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then study its caching technology in detail, covering the dynamic display cache model, a caching structure based on mobile agents, and the cache data model. We designed experiments with different data capacities to compare the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, on the same hardware, the response times of WebGIS both with and without the caching model increased as data capacity grew, and the larger the data, the greater the performance improvement of the WebGIS with the proposed caching model.
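As a generic illustration of display caching in a WebGIS (not the paper's mobile-agent-based design), rendered map tiles are typically cached under a key of layer, zoom level, and tile coordinates; the names below are assumptions:

```python
class DisplayTileCache:
    """Caches rendered map tiles under a (layer, zoom, row, col) key (sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = {}                     # (layer, zoom, row, col) -> rendered image

    def get(self, layer, zoom, row, col, render):
        key = (layer, zoom, row, col)
        if key not in self.tiles:           # miss: render from the spatial database
            if len(self.tiles) >= self.capacity:
                self.tiles.pop(next(iter(self.tiles)))  # evict oldest entry (FIFO)
            self.tiles[key] = render(layer, zoom, row, col)
        return self.tiles[key]
```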
APA, Harvard, Vancouver, ISO, and other styles
50

SANTOS, EUNICE E., and EUGENE SANTOS. "EFFECTIVE AND EFFICIENT CACHING IN GENETIC ALGORITHMS." International Journal on Artificial Intelligence Tools 10, no. 01n02 (March 2001): 273–301. http://dx.doi.org/10.1142/s0218213001000520.

Full text
Abstract:
Solving hard discrete optimization problems with randomized methods such as genetic algorithms (GAs) requires large numbers of samples from the solution space. Each candidate sample/solution must be evaluated using the target fitness/energy function being optimized, and these fitness computations are a bottleneck in sampling methods such as GAs. We observe that caching partial results from these fitness computations can reduce this bottleneck. We provide a rigorous analysis of the run-times of GAs with and without caching. By representing fitness functions as classic divide-and-conquer algorithms, we provide a formal model to predict the efficiency of caching GAs vs. non-caching GAs. Finally, we explore the domain of protein folding with GAs and demonstrate, through both theoretical and extensive empirical analyses, that caching can significantly reduce expected run-times.
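The core idea can be illustrated with a memoized divide-and-conquer fitness function: when fitness decomposes recursively, partial results for substrings that recur across the population can be served from a cache. The toy fitness below is an assumption for illustration, not the paper's protein-folding energy function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)                    # the cache of partial fitness results
def fitness(genome: str) -> float:
    """Toy additively decomposable fitness: score the two halves recursively and
    add a coupling term, so repeated substrings across the population hit the cache."""
    if len(genome) <= 2:                    # base case: score tiny building blocks
        return float(genome.count("1"))
    mid = len(genome) // 2
    left, right = genome[:mid], genome[mid:]
    coupling = 1.0 if left[-1] == right[0] else 0.0
    return fitness(left) + fitness(right) + coupling   # recursive calls are memoized
```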
APA, Harvard, Vancouver, ISO, and other styles