Academic literature on the topic 'Caching'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Caching.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Caching"

1

Prasad, M., P. R. Sudha Rani, Raja Rao PBV, Pokkuluri Kiran Sree, P. T. Satyanarayana Murty, A. Satya Mallesh, M. Ramesh Babu, and Chintha Venkata Ramana. "Blockchain-Enabled On-Path Caching for Efficient and Reliable Content Delivery in Information-Centric Networks." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 358–63. http://dx.doi.org/10.17762/ijritcc.v11i9.8397.

Abstract:
As the demand for online content continues to grow, traditional Content Distribution Networks (CDNs) are facing significant challenges in terms of scalability and performance. Information-Centric Networking (ICN) is a promising new approach to content delivery that aims to address these issues by placing content at the center of the network architecture. One of the key features of ICNs is on-path caching, which allows content to be cached at intermediate routers along the path from the source to the destination. On-path caching in ICNs still faces some challenges, such as the scalability of the cache and the management of cache consistency. To address these challenges, this paper proposes several alternative caching schemes that can be integrated into ICNs using blockchain technology. These schemes include Bloom filters, content-based routing, and hybrid caching, which combine the advantages of off-path and on-path caching. The proposed blockchain-enabled on-path caching mechanism ensures the integrity and authenticity of cached content, and smart contracts automate the caching process and incentivize caching nodes. To evaluate the performance of these caching alternatives, the authors conduct experiments using real-world datasets. The results show that on-path caching can significantly reduce network congestion and improve content delivery efficiency. The Bloom filter caching scheme achieved a cache hit rate of over 90% while reducing the cache size by up to 80% compared to traditional caching. The content-based routing scheme also achieved high cache hit rates while maintaining low latency.
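As a concrete illustration of the Bloom-filter idea summarized above, the sketch below shows how a router-level cache might use a compact Bloom filter to reject lookups for content it definitely does not hold. This is a minimal sketch under assumed names (`BloomFilter`, `OnPathCache`), not the authors' implementation:

```python
import hashlib

class BloomFilter:
    """Compact membership summary: false positives possible, no false negatives."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, name):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, name):
        for p in self._positions(name):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, name):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(name))

class OnPathCache:
    """On-path router cache fronted by a Bloom filter to skip useless lookups."""
    def __init__(self):
        self.store = {}            # content name -> payload
        self.summary = BloomFilter()

    def insert(self, name, payload):
        self.store[name] = payload
        self.summary.add(name)

    def lookup(self, name):
        if not self.summary.might_contain(name):
            return None            # definite miss: the store is never touched
        return self.store.get(name)  # false positives possible, so verify
```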
2

Zhou, Mo, Bo Ji, Kun Peng Han, and Hong Sheng Xi. "A Cooperative Hybrid Caching Strategy for P2P Mobile Network." Applied Mechanics and Materials 347-350 (August 2013): 1992–96. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.1992.

Abstract:
Mobile network technologies have been developing quickly in recent years. To meet the increasing demand of wireless users, many multimedia proxies have been deployed over wireless networks. The caching nodes constitute a wireless caching system with a P2P architecture and provide better service to mobile users. In this paper, we formulate the caching system to optimize the consumption of network bandwidth and guarantee the response time of mobile users. Two strategies, single greedy caching and cooperative hybrid caching, are proposed to achieve this goal. Single greedy caching aims to reduce bandwidth consumption from the standpoint of each caching node, while cooperative hybrid caching allows sharing and coordination of multiple nodes, taking both bandwidth consumption and popularity into account. Simulation results show that cooperative hybrid caching outperforms single greedy caching in both bandwidth consumption and delay time.
3

Dinh, Ngocthanh, and Younghan Kim. "An Energy Reward-Based Caching Mechanism for Information-Centric Internet of Things." Sensors 22, no. 3 (January 19, 2022): 743. http://dx.doi.org/10.3390/s22030743.

Abstract:
Existing information-centric networking (ICN) designs for the Internet of Things (IoT) mostly make caching decisions based on probability or content popularity. From an energy-efficiency perspective, those strategies may not always be energy efficient in resource-constrained IoT because, without considering the energy reward of caching decisions, inappropriate routers and content objects may be selected for caching, which may lead to negative energy rewards. In this paper, we analyze the energy consumption of content caching and content retrieval in resource-constrained IoT and calculate the caching energy reward as a key metric to measure the energy efficiency of a caching decision. We then propose an efficient cache placement and cache replacement mechanism based on the caching energy reward to improve the energy efficiency of caching decisions. Through analysis and experimental results, we show that the proposed mechanism achieves a significant improvement in terms of energy efficiency, stretch ratio, and cache hit ratio compared to state-of-the-art caching schemes.
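The abstract's central metric can be illustrated with a small sketch. The formula and parameter names below are simplifying assumptions rather than the paper's exact model: a caching decision pays off only when the retrieval energy saved by expected future hits exceeds the energy spent storing and serving the local copy.

```python
def caching_energy_reward(expected_hits, e_retrieval, e_cache_write, e_cache_serve):
    """Illustrative energy reward (in joules) of caching one content object.

    expected_hits : predicted number of future requests served from this cache
    e_retrieval   : energy to fetch the object from the original producer
    e_cache_write : one-off energy to store the object at this router
    e_cache_serve : energy to serve one request from the local cache
    """
    saved = expected_hits * (e_retrieval - e_cache_serve)
    return saved - e_cache_write

def should_cache(expected_hits, e_retrieval, e_cache_write, e_cache_serve):
    # Cache only when the reward is positive; otherwise caching costs more
    # energy than it saves (the "negative reward" case in the abstract).
    return caching_energy_reward(expected_hits, e_retrieval,
                                 e_cache_write, e_cache_serve) > 0
```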
4

Wang, Yali, and Jiachao Chen. "Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning." Wireless Communications and Mobile Computing 2022 (December 22, 2022): 1–15. http://dx.doi.org/10.1155/2022/7212984.

Abstract:
By deploying resources in the vicinity of users, edge caching can substantially reduce the latency for users to retrieve content and relieve the pressure on the backbone network. Due to the capacity limitation of caching and the dynamic nature of user requests, how to allocate caching resources reasonably must be considered. Some edge caching studies improve network performance by predicting content popularity and actively caching the most popular content, thereby ignoring the privacy and security issues caused by the need to collect user information at a central unit. To this end, a collaborative caching strategy based on federated learning is proposed. First, federated learning is used to make distributed predictions of the preferences of users in the nodes to develop an effective content caching policy. Then, the problem of allocating caching resources to optimize the cost of video providers is formulated as a Markov decision process, and a reinforcement learning method is used to optimize the caching decisions. Compared with several basic caching strategies in terms of cache hit rate, transmission delay, and cost, the simulation results show that the proposed content caching strategy reduces the cost of video providers, achieves a higher cache hit rate, and lowers the average transmission delay.
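As a rough sketch of the reinforcement-learning half of this design, the following applies tabular Q-learning to a binary cache-admission decision. The state encoding, reward, and hyperparameters are illustrative assumptions; the paper's actual MDP formulation and federated popularity predictor are more elaborate.

```python
import random
from collections import defaultdict

class QCacheAdmission:
    """Tabular Q-learning over a binary admit/skip decision per request."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0, 0.0])  # state -> [Q(skip), Q(admit)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randint(0, 1)                     # explore
        return max((0, 1), key=lambda a: self.q[state][a])  # exploit

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Hypothetical usage: state = coarse popularity bucket; reward = +1 for a
# later hit on the admitted item, negative for displacing useful content.
agent = QCacheAdmission()
action = agent.act(state=("pop_bucket", 3))
agent.learn(("pop_bucket", 3), action, reward=1.0, next_state=("pop_bucket", 2))
```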
5

Li, Feng, Kwok-Yan Lam, Li Wang, Zhenyu Na, Xin Liu, and Qing Pan. "Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience." Wireless Communications and Mobile Computing 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1680641.

Abstract:
Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edges are designed to establish a one-hop scope of caching information table, used for cache replacement when not enough cache resource is available within a node's own space. Upon receiving a caching request, every caching node determines the weight of the required contents and responds according to the availability of its own caching space. Furthermore, to increase caching efficiency from a practical perspective, we introduce the concept of quality of experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different cache allocation strategies are devised to enhance user QoE in various circumstances. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.
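A toy version of the weight-based replacement step described above might look like this; the weight values are left abstract (the paper derives its own), but the flow (weigh the requested content, then evict only a lower-weight entry when space runs out) follows the abstract's description.

```python
class WeightedCache:
    """Keeps the highest-weight contents; evicts the lowest-weight entry
    only when the newcomer's weight beats it."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # content name -> (weight, payload)

    def offer(self, name, payload, weight):
        """Returns True if the content ends up cached at this node."""
        if self.capacity <= 0:
            return False
        if name in self.entries:
            return True
        if len(self.entries) < self.capacity:
            self.entries[name] = (weight, payload)
            return True
        victim = min(self.entries, key=lambda n: self.entries[n][0])
        if self.entries[victim][0] >= weight:
            return False    # negative response: nothing worth evicting
        del self.entries[victim]
        self.entries[name] = (weight, payload)
        return True
```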
6

Santhanakrishnan, Ganesh, Ahmed Amer, and Panos K. Chrysanthis. "Self-tuning caching: the Universal Caching algorithm." Software: Practice and Experience 36, no. 11-12 (2006): 1179–88. http://dx.doi.org/10.1002/spe.755.

7

Han, Luchao, Zhichuan Guo, and Xuewen Zeng. "Research on Multicore Key-Value Storage System for Domain Name Storage." Applied Sciences 11, no. 16 (August 12, 2021): 7425. http://dx.doi.org/10.3390/app11167425.

Abstract:
This article proposes a domain name caching method for multicore network-traffic capture systems, which significantly improves insert latency, throughput, and hit rate. The caching method is composed of a cache replacement algorithm and a cache set method. The method is easy to implement, low in deployment cost, and suitable for various multicore caching systems. Moreover, it can reduce the use of locks by changing data structures and algorithms. Experimental results show that, compared with other caching systems, our proposed method reaches the highest throughput under multiple cores, which indicates that the cache method we propose is best suited for domain name caching.
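The abstract's point about reducing lock usage through data-structure choices is commonly realized by sharding a cache so that each partition has its own lock. The sketch below is a generic illustration of that technique, not the paper's system:

```python
import threading
from collections import OrderedDict

class ShardedLRUCache:
    """Hash-partitioned LRU cache: each shard has its own lock, so threads
    touching different shards never contend with one another."""
    def __init__(self, capacity, num_shards=16):
        self.num_shards = num_shards
        self.shards = [OrderedDict() for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]
        self.per_shard_cap = max(1, capacity // num_shards)

    def _shard(self, key):
        return hash(key) % self.num_shards

    def get(self, key):
        i = self._shard(key)
        with self.locks[i]:
            shard = self.shards[i]
            if key not in shard:
                return None
            shard.move_to_end(key)         # refresh recency on hit
            return shard[key]

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:
            shard = self.shards[i]
            shard[key] = value
            shard.move_to_end(key)
            if len(shard) > self.per_shard_cap:
                shard.popitem(last=False)  # evict least recently used
```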
8

Soleimani, Somayeh, and Xiaofeng Tao. "Caching and Placement for In-Network Caching in Device-to-Device Communications." Wireless Communications and Mobile Computing 2018 (September 26, 2018): 1–9. http://dx.doi.org/10.1155/2018/9539502.

Abstract:
Caching content by users constitutes a promising solution to decrease costly transmissions through the base stations (BSs). To improve the performance of in-network caching in device-to-device (D2D) communications, caching placement and content delivery should be jointly optimized. To this end, we jointly optimize caching decision and content discovery strategies by considering successful content delivery in D2D links, so as to maximize the in-network caching gain through D2D communications. Moreover, an in-network caching placement problem is formulated as an integer nonlinear optimization problem. To obtain the optimal solution for the proposed problem, Lagrange dual decomposition is applied in order to reduce the complexity. Simulation results show that the proposed algorithm has near-optimal performance, approaching that of the exhaustive search method. Furthermore, the proposed scheme has a notable in-network caching gain and an improvement in traffic offloading compared to other caching placement schemes.
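For readers unfamiliar with this class of problems, a generic integer caching-placement formulation (not the paper's exact one) has the following shape, with binary variables choosing which files each device caches subject to per-device storage:

```latex
\begin{align*}
\max_{x}\quad & \sum_{u \in \mathcal{U}} \sum_{f \in \mathcal{F}} p_{f}\, q_{uf}\, x_{uf}
  && \text{(expected local/D2D cache hits)}\\
\text{s.t.}\quad & \sum_{f \in \mathcal{F}} s_{f}\, x_{uf} \le C_{u}
  && \forall u \in \mathcal{U} \quad \text{(per-device cache capacity)}\\
& x_{uf} \in \{0,1\} && \forall u \in \mathcal{U},\ f \in \mathcal{F}
\end{align*}
```

Here x_{uf} = 1 means device u stores file f, p_f is the file's request popularity, q_{uf} is the probability of successful D2D delivery from u, s_f is the file size, and C_u is the device's cache capacity. The paper's actual objective and constraints differ; this only sketches the general shape of such placement problems.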
9

Naeem, Nor, Hassan, and Kim. "Compound Popular Content Caching Strategy in Named Data Networking." Electronics 8, no. 7 (July 10, 2019): 771. http://dx.doi.org/10.3390/electronics8070771.

Abstract:
The aim of named data networking (NDN) is to develop an efficient data dissemination approach by implementing a cache module within the network. Caching is one of the most prominent modules of NDN that significantly enhances the Internet architecture. NDN caching can reduce the expected flood of global data traffic by providing cache storage at intermediate nodes for transmitted contents, making data broadcasting more efficient. It also reduces content delivery time by caching popular content close to consumers. In this study, a new content caching mechanism named the compound popular content caching strategy (CPCCS) is proposed for efficient content dissemination, and its performance is measured in terms of cache hit ratio, content diversity, and stretch. The CPCCS is extensively and comparatively studied against other NDN-based caching strategies, such as max-gain in-network caching (MAGIC), the WAVE popularity-based caching strategy, hop-based probabilistic caching (HPC), LeafPopDown, most popular cache (MPC), cache capacity aware caching (CCAC), and ProbCache, through simulations. The results show that the CPCCS performs better in terms of cache hit ratio, content diversity ratio, and stretch ratio than all other strategies.
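Several of the strategies compared here (e.g., ProbCache and HPC) are probabilistic: a node caches a passing content object only with some probability, often scaled by its distance from the consumer. A bare-bones sketch of that general idea, not of CPCCS itself:

```python
import random

def maybe_cache(cache, name, payload, hops_from_consumer, total_hops, base_p=0.5):
    """Hop-aware probabilistic caching (illustrative placeholder names):
    the closer this node is to the consumer, the likelier it caches."""
    if total_hops == 0:
        return False
    closeness = 1.0 - hops_from_consumer / total_hops  # 1.0 at the consumer side
    if random.random() < base_p * closeness:
        cache[name] = payload
        return True
    return False
```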
10

Wang, Yantong, and Vasilis Friderikos. "A Survey of Deep Learning for Data Caching in Edge Network." Informatics 7, no. 4 (October 13, 2020): 43. http://dx.doi.org/10.3390/informatics7040043.

Abstract:
The concept of edge caching provision in emerging 5G and beyond mobile networks is a promising method to deal with both the traffic congestion problem in the core network and the latency of accessing popular content. In that respect, end-user demand for popular content can be satisfied by proactively caching it at the network edge, i.e., in close proximity to the users. In addition to model-based caching schemes, learning-based edge caching optimizations have recently attracted significant attention, and the aim hereafter is to capture these recent advances for both model-based and data-driven techniques in the area of proactive caching. This paper summarizes the utilization of deep learning for data caching in edge networks. We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure. Then, the key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning, as well as reinforcement learning. Furthermore, a comparison of state-of-the-art literature is provided from the aspects of caching topics and deep learning methods. Finally, we discuss research challenges and future directions for applying deep learning to caching.

Dissertations / Theses on the topic "Caching"

1

Miller, Jason Eric 1976. "Software instruction caching." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40317.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 185-193).
As microprocessor complexities and costs skyrocket, designers are looking for ways to simplify their designs to reduce costs, improve energy efficiency, or squeeze more computational elements on each chip. This is particularly true for the embedded domain where cost and energy consumption are paramount. Software instruction caches have the potential to provide the required performance while using simpler, more efficient hardware. A software cache consists of a simple array memory (such as a scratchpad) and a software system that is capable of automatically managing that memory as a cache. Software caches have several advantages over traditional hardware caches. Without complex cache-management logic, the processor hardware is cheaper and easier to design, verify and manufacture. The reduced access energy of simple memories can result in a net energy savings if management overhead is kept low. Software caches can also be customized to each individual program's needs, improving performance or eliminating unpredictable timing for real-time embedded applications. The greatest challenge for a software cache is providing good performance using general-purpose instructions for cache management rather than specially-designed hardware. This thesis designs and implements a working system (Flexicache) on an actual embedded processor and uses it to investigate the strengths and weaknesses of software instruction caches. Although both data and instruction caches can be implemented in software, very different techniques are used to optimize performance; this work focuses exclusively on software instruction caches. The Flexicache system consists of two software components: a static off-line preprocessor to add caching to an application and a dynamic runtime system to manage memory during execution. Key interfaces and optimizations are identified and characterized. The system is evaluated in detail from the standpoints of both performance and energy consumption. The results indicate that software instruction caches can perform comparably to hardware caches in embedded processors. On most benchmarks, the overhead relative to a hardware cache is less than 12% and can be as low as 2.4%. At the same time, the software cache uses up to 6% less energy. This is achieved using a simple, directly-addressed memory and without requiring any complex, specialized hardware structures.
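To make the idea concrete, here is a highly simplified model of a directly-addressed, software-managed cache in the spirit of Flexicache. The real system manages machine-code blocks in scratchpad memory; this sketch only mimics the tag-check and refill logic in ordinary code, under assumed names:

```python
class SoftwareDirectMappedCache:
    """Direct-mapped software-managed cache: the tag check and refill are
    ordinary code, standing in for a software runtime system rather than
    dedicated cache-management hardware."""
    def __init__(self, num_lines, fetch_fn):
        self.num_lines = num_lines
        self.tags = [None] * num_lines   # which block occupies each line
        self.lines = [None] * num_lines  # cached block contents
        self.fetch_fn = fetch_fn         # miss handler: load from backing store
        self.hits = self.misses = 0

    def access(self, block_addr):
        idx = block_addr % self.num_lines  # directly addressed: no search
        if self.tags[idx] == block_addr:
            self.hits += 1
            return self.lines[idx]
        self.misses += 1                   # software miss handler runs here
        self.tags[idx] = block_addr
        self.lines[idx] = self.fetch_fn(block_addr)
        return self.lines[idx]

# Usage: cache instruction blocks fetched from a slower backing memory.
icache = SoftwareDirectMappedCache(64, fetch_fn=lambda a: f"block@{a}")
icache.access(130)
icache.access(130)  # second access hits in the software cache
```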
2

Gu, Wenzheng. "Ubiquitous Web caching." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002406.

3

Logren, Dély Tobias. "Caching HTTP : A comparative study of caching reverse proxies Varnish and Nginx." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9679.

Abstract:
With the number of users on the web steadily increasing, websites must at times endure heavy loads and risk grinding to a halt beneath the flood of visitors. One solution to this problem is to use HTTP reverse proxy caching, which acts as an intermediary between web application and user. Content from the application is stored and passed on, avoiding the need for the application to produce it anew for every request. One popular application designed solely for this task is Varnish; another interesting application for the task is Nginx, which is primarily designed as a web server. This thesis compares the performance of the two applications in terms of number of requests served in relation to response time, as well as system load and free memory. With both applications using their default configuration, the experiments find that Nginx performs better in the majority of tests performed. The difference is, however, very slight in tests with low request rates.
4

Caheny, Paul. "Runtime-assisted coherent caching." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670564.

Abstract:
In the middle of the 2000s a fundamental change of course occurred in computer architecture because techniques such as frequency scaling and instruction level parallelism were providing rapidly diminishing returns. Since then, scaling up thread-level parallelism through increasingly parallel multicore processors has become the primary driver of performance gains, exacerbating the pre-existing problem of the Memory Wall. In response to this, cache and memory architecture have become more complex, while still maintaining a shared view of memory to software. As trends such as increasing parallelism and heterogeneity continue apace, the contribution of the memory hierarchy as a proportion of the overall system performance profile will continue to grow. Since the middle of the 2000s, thread-level parallelism has increased across almost all computer processor designs, bringing the problem of programmability into sharper focus. One of the most promising developments in programming models in the past fifteen years has been task-based programming models. Such programming models provide ease of programmability for the user, at a level which is abstract enough to allow the runtime system layer to expertly optimise execution for the underlying hardware. The main goal of this thesis is to exploit information available in task-based programming models to drive optimisations in the memory hierarchy, through a hardware/software co-design approach. Data movement becomes the primary factor affecting power and performance as shared memory system architectures scale up in core count and therefore network diameter. The first contribution of this thesis studies the ability of a task-based programming model to constrain data movement in a real, very large shared memory system. It characterises directly and in detail the effectiveness of the programming model's runtime system at minimising data traffic in the hardware. The analysis demonstrates that the runtime system can maximise locality between tasks and the data they use, thus minimising the traffic in the cache coherent interconnect. The second and third contributions of this thesis investigate hardware/software co-design proposals to increase efficiency within the on-chip memory hierarchy. These two contributions exploit information already captured in existing task-based programming models. They communicate this information from the runtime system to the hardware and use it there to drive power, performance and area improvements in the memory hierarchy. A simulator based approach is used to model and analyse both the second and third contributions. Scaling cache coherence among growing numbers of private caches is a crucial issue in computer architecture as core counts continue to increase. Improving the scalability of cache coherence is the topic of the second contribution. It demonstrates the ability of a runtime system and hardware co-design approach to dramatically reduce capacity demand on the coherence directory, which is a central issue in scaling cache coherence among private caches. Non-uniform cache access (NUCA) shared caches are also increasing in size and share of on-chip resources, as they are the last line of defence against costly off-chip memory accesses. The third proposal focuses on optimising such NUCA caches to increase their effectiveness at dealing with the bottleneck between computation and memory. It shows a runtime system and hardware co-design approach can successfully reduce the network distance costs in a shared NUCA cache.
Together the three contributions of this thesis demonstrate the potential for task-based programming models to address key elements of scalability in the memory hierarchy of future systems at the level of private caches, shared caches and main memory.
5

Irwin, James Patrick John. "Systems with predictable caching." Thesis, University of Bristol, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288213.

6

Kimbrel, Tracy. "Parallel prefetching and caching." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6943.

7

Sarkar, Prasenjit 1970. "Hint-based cooperative caching." Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/288892.

Abstract:
This dissertation focuses on caching in distributed file systems, where the performance is constrained by expensive server accesses. This has led to the evolution of cooperative caching, an innovative technique which effectively utilizes the client memories in a distributed file system to reduce the impact of server accesses. This is achieved by adding another layer to the storage hierarchy called the cooperative cache, allowing clients to access and store file blocks in the caches of other clients. The major contribution of this dissertation is to show that a cooperative caching system that relies on local hints to manage the cooperative cache performs better than a more tightly coordinated fact-based system. To evaluate the performance of hint-based cooperative caching, trace-driven simulations are used to show that the hit ratios to the different layers of the storage hierarchy are as good as those of the existing tightly-coordinated algorithms, but with significantly reduced overhead. Following this, a prototype was implemented on a cluster of Linux machines, where the use of hints reduced the average block access time to almost half that of NFS, and incurred minimal overhead.
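A skeletal illustration of the hint technique: each client keeps a possibly stale local map of which peer probably holds a block, tries that peer first, and falls back to the server. The names and hint-update rule below are assumptions chosen for brevity, not the dissertation's algorithm:

```python
class HintClient:
    def __init__(self, name, peers, server):
        self.name = name
        self.cache = {}       # block -> data held locally
        self.hints = {}       # block -> peer that *probably* has it (may be stale)
        self.peers = peers    # peer name -> HintClient
        self.server = server  # block -> data; the expensive fallback

    def read(self, block):
        if block in self.cache:                   # local hit
            return self.cache[block]
        peer = self.hints.get(block)
        if peer and block in self.peers[peer].cache:
            data = self.peers[peer].cache[block]  # cooperative hit via a hint
        else:
            data = self.server[block]             # hint missing or stale
        self.cache[block] = data
        for p in self.peers.values():             # spread the (cheap) hint
            p.hints[block] = self.name
        return data
```

The appeal of hints, as the abstract notes, is that they need not be globally consistent: a wrong hint only costs one extra lookup, so no tightly coordinated bookkeeping is required.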
8

Recayte, Estefania <1988>. "Caching in Heterogeneous Networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8974/1/0_Thesis.pdf.

Abstract:
A promising solution to cope with the massive demand for wireless data traffic consists of having replicas of potentially requested content memorized across the network. In cache-enabled heterogeneous networks, content is pre-fetched close to the users during network off-peak periods in order to directly serve the users when the network is congested. Caching content at the edge of heterogeneous networks not only significantly reduces traffic congestion on the backhaul link but also achieves higher levels of energy efficiency. However, good system performance requires a deep analysis of the possible caching techniques. Due to the physical limitation of the caches' size and the excessive amount of content, the design of caching policies, which define how the content has to be cached and select the likely data to store, is crucial. Within this thesis, caching techniques for storing and delivering content in heterogeneous networks are investigated from two different aspects. The first part of the thesis is focused on the reduction of power consumption when the cached content is delivered over a Gaussian interference channel and per-file rate constraints are imposed. Cooperative approaches between the transmitters to mitigate the interference experienced by the users are analyzed. Based on such approaches, the caching optimization problem for obtaining the best cache allocation solution (in the sense of minimizing the average power consumption) is proposed. The second part of the thesis is focused on caching techniques at the packet level, with the aim of reducing transmissions from the core of a heterogeneous network. Caching schemes based on rateless codes for storing and delivering the cached content are proposed. For each design, the placement optimization problem that minimizes transmissions over the backhaul link is formulated.
9

Ou, Yi. "Caching for flash-based databases and flash-based caching for databases." München: Verlag Dr. Hut, 2012. http://d-nb.info/1028784120/34.

10

Pohl, Christoph. "Adaptive Caching of Distributed Components." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1117701363347-79965.

Abstract:
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand thus tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to caching access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself to an appropriate configuration. Speculative prefetching of data probably queried in the immediate future complements the presented approach.

Books on the topic "Caching"

1

Web caching. Sebastopol, CA: O'Reilly & Associates, 2001.

2

Franklin, Michael J. Client Data Caching. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-1363-2.

3

Phillips, Mark. Caching for image processing. Birmingham: University of Birmingham, 1996.

4

Spatscheck, Oliver, ed. Web caching and replication. Boston: Addison-Wesley, 2002.

5

Chi, Chi-Hung, Maarten van Steen, and Craig Wills, eds. Web Content Caching and Distribution. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/b101692.

6

Douglis, Fred, and Brian D. Davison, eds. Web Content Caching and Distribution. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/1-4020-2258-1.

7

Nagaraj, S. V. Web caching and its applications. Boston: Kluwer Academic Publishers, 2004.

8

Křivánek, Jaroslav. Practical global illumination with irradiance caching. San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool Publishers, 2009.

9

Wu, Huaqing, Feng Lyu, and Xuemin Shen. Mobile Edge Caching in Heterogeneous Vehicular Networks. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-88878-7.

10

International Business Machines Corporation. International Technical Support Organization, ed. Scalable, integrated solutions for elastic caching using WebSphere eXtreme Scale. [Poughkeepsie, N.Y.?]: IBM Corp., International Technical Support Organization, 2011.


Book chapters on the topic "Caching"

1

Krogh, Jesper Wisborg. "Caching." In MySQL 8 Query Performance Tuning, 917–45. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5584-1_27.

2

Wintermeyer, Stefan. "Caching." In Learn Rails 5.2, 363–91. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3489-1_14.

3

Bae, Sammie. "Caching." In JavaScript Data Structures and Algorithms, 193–203. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-3988-9_14.

4

VanDyk, John K. "Caching." In Pro Drupal Development, 349–64. Berkeley, CA: Apress, 2008. http://dx.doi.org/10.1007/978-1-4302-0990-4_15.

5

Kao, Ming-Yang. "Caching." In Encyclopedia of Algorithms, 129. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-30162-4_64.

6

Romer, Michael. "Caching." In PHP Persistence, 97–100. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-2559-2_10.

7

Shekhar, Shashi, and Hui Xiong. "Caching." In Encyclopedia of GIS, 65. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-35973-1_110.

8

Holovaty, Adrian, and Jacob Kaplan-Moss. "Caching." In The Definitive Guide to Django, 197–208. Berkeley, CA: Apress, 2008. http://dx.doi.org/10.1007/978-1-4302-0331-5_13.

9

So, Preston. "Caching." In Decoupled Drupal in Practice, 455–64. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4072-4_25.

10

Kumar, Pranish, Jasjit Singh Grewal, Bogdan Crivat, and Eric Lee. "Caching." In ATL Server: High Performance C++ on .NET, 177–96. Berkeley, CA: Apress, 2003. http://dx.doi.org/10.1007/978-1-4302-0768-9_12.


Conference papers on the topic "Caching"

1

Drolia, Utsav, Katherine Guo, Jiaqi Tan, Rajeev Gandhi, and Priya Narasimhan. "Cachier: Edge-Caching for Recognition Applications." In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2017. http://dx.doi.org/10.1109/icdcs.2017.94.

2

Mertz, Jhonny, and Ingrid Nunes. "Understanding and Automating Application-level Caching." In XXXI Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/ctd.2018.3666.

Abstract:
Application-level caching has increasingly been adopted to improve the performance and scalability of web applications. It consists of an additional caching layer that is manually added to the application code in selected locations. Because it requires manual application analysis and selection of cacheable points, as well as implementation, it is a time-consuming and error-prone activity. In this paper, we introduce our key contributions in the context of application-level caching: (i) a comprehensive survey and taxonomy of work on this topic; (ii) a qualitative study that captures the state of practice of application-level caching, complemented by proposed guidelines and patterns; (iii) an adaptive component that autonomously manages admission of cache content; (iv) a framework that implements our proposal; and finally (v) an evaluation that provides evidence of the effectiveness of our proposal.
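In practice, application-level caching often takes the form of a small wrapper around an expensive call site. The decorator below is a generic sketch of that pattern (the authors' adaptive admission component is considerably more sophisticated); it caches results with a TTL and tracks the hit statistics an admission policy could consult:

```python
import functools
import time

def app_cache(ttl_seconds=60):
    """Minimal application-level caching decorator with TTL expiry."""
    def decorator(fn):
        store = {}                     # args -> (expires_at, result)
        stats = {"hits": 0, "misses": 0}

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            entry = store.get(args)
            if entry and entry[0] > now:
                stats["hits"] += 1
                return entry[1]
            stats["misses"] += 1       # an adaptive policy could inspect stats
            result = fn(*args)         # the expensive, cacheable computation
            store[args] = (now + ttl_seconds, result)
            return result

        wrapper.cache_stats = stats
        return wrapper
    return decorator

@app_cache(ttl_seconds=30)
def load_user_profile(user_id):
    ...                                # e.g., a database or service call
```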
3

Li, Zhe, Gwendal Simon, and Annie Gravey. "Caching Policies for In-Network Caching." In 2012 21st International Conference on Computer Communications and Networks - ICCCN 2012. IEEE, 2012. http://dx.doi.org/10.1109/icccn.2012.6289289.

4

Khoshkholgh, M. G., Keivan Navaie, Kang G. Shin, V. C. M. Leung, and Halim Yanikomeroglu. "Caching or No Caching in Dense HetNets?" In 2019 IEEE Wireless Communications and Networking Conference (WCNC). IEEE, 2019. http://dx.doi.org/10.1109/wcnc.2019.8885724.

5

Zhang, Haibo, Prasanna Venkatesh Rengasamy, Shulin Zhao, Nachiappan Chidambaram Nachiappan, Anand Sivasubramaniam, Mahmut T. Kandemir, Ravi Iyer, and Chita R. Das. "Race-to-sleep + content caching + display caching." In MICRO-50: The 50th Annual IEEE/ACM International Symposium on Microarchitecture. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123939.3123948.

6

Chae, Seong Ho, and Wan Choi. "Optimal probabilistic caching with wireless caching helpers." In 2016 IEEE 17th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). IEEE, 2016. http://dx.doi.org/10.1109/spawc.2016.7536891.

7

Musoll, Enric, and Mario Nemirovsky. "A study on the performance of two-level exclusive caching." In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1999. http://dx.doi.org/10.5753/sbac-pad.1999.19771.

Abstract:
This work presents a study on the performance of a level-two cache configured as a victim storage for the evicted lines of the level-one cache. This two-level cache configuration, known as exclusive caching, is evaluated for a wide range of level-one and level-two sizes and associativity degrees, and the miss ratios of both levels are compared to those of two-level inclusive caching. Although the two-level exclusive strategy has lower miss ratios than the inclusive one by increasing the effective associativity and capacity, the replacement policy of the exclusive caching organization forces the invalidation of entries in the level-two cache, which reduces the benefits of having a victim level-two cache. The effect of these invalidations on the overall performance of a level-two exclusive caching organization is evaluated. For typical two-level cache configurations in which the level-two cache is direct-mapped, the performance of exclusive caching is as much as 60% better for code fetches and as much as 75% for data accesses.
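The mechanism being measured can be sketched directly: in an exclusive hierarchy the level-two cache holds only lines evicted from level one, and a level-two hit moves the line back up while invalidating the level-two copy, which is precisely the invalidation overhead the abstract quantifies. A simplified, fully associative LRU illustration with invented names:

```python
from collections import OrderedDict

class ExclusiveTwoLevel:
    """L2 acts as a victim cache for L1; a line lives in at most one level."""
    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()   # address -> line, in LRU order
        self.l2 = OrderedDict()
        self.l1_size, self.l2_size = l1_size, l2_size

    def _fill_l1(self, addr):
        self.l1[addr] = True
        self.l1.move_to_end(addr)
        if len(self.l1) > self.l1_size:
            victim, _ = self.l1.popitem(last=False)
            self.l2[victim] = True       # evicted L1 line becomes an L2 entry
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)

    def access(self, addr):
        if addr in self.l1:
            self.l1.move_to_end(addr)
            return "L1 hit"
        if addr in self.l2:
            del self.l2[addr]            # invalidate in L2 to keep exclusivity
            self._fill_l1(addr)
            return "L2 hit"
        self._fill_l1(addr)              # fetched from memory straight into L1
        return "miss"
```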
8

Sanadhya, Shruti, Raghupathy Sivakumar, Kyu-Han Kim, Paul Congdon, Sriram Lakshmanan, and Jatinder Pal Singh. "Asymmetric caching." In the 18th annual international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2348543.2348565.

9

Arens, Yigal, and Craig A. Knoblock. "Intelligent caching." In the third international conference. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/191246.191318.

10

Chierichetti, Flavio, Ravi Kumar, and Sergei Vassilvitskii. "Similarity caching." In the twenty-eighth ACM SIGMOD-SIGACT-SIGART symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1559795.1559815.


Reports on the topic "Caching"

1

Fielding, R., M. Nottingham, and J. Reschke, eds. HTTP Caching. RFC Editor, June 2022. http://dx.doi.org/10.17487/rfc9111.

2

Cooper, I., and J. Dilley. Known HTTP Proxy/Caching Problems. RFC Editor, June 2001. http://dx.doi.org/10.17487/rfc3143.

3

Vixie, P., and D. Wessels. Hyper Text Caching Protocol (HTCP/0.0). RFC Editor, January 2000. http://dx.doi.org/10.17487/rfc2756.

4

Cooper, I., I. Melve, and G. Tomlinson. Internet Web Replication and Caching Taxonomy. RFC Editor, January 2001. http://dx.doi.org/10.17487/rfc3040.

5

Fielding, R., M. Nottingham, and J. Reschke, eds. Hypertext Transfer Protocol (HTTP/1.1): Caching. RFC Editor, June 2014. http://dx.doi.org/10.17487/rfc7234.

6

DeMarle, David, and Andrew Bauer. In situ visualization with temporal caching. Engineer Research and Development Center (U.S.), January 2022. http://dx.doi.org/10.21079/11681/43042.

Abstract:
In situ visualization is a technique in which plots and other visual analyses are performed in tandem with numerical simulation processes in order to better utilize HPC machine resources. Especially with unattended exploratory engineering simulation analyses, events may occur during the run which justify supplemental processing. Sometimes, though, when the events do occur, the phenomena of interest include the physics that precipitated the events, and this may be the key insight into understanding the phenomena being simulated. In situ temporal caching is the temporary storing of produced data in memory for possible later analysis, including time-varying visualization. The later analysis and visualization still occur during the simulation run, but not until after the significant events have been detected. In this article, we demonstrate how temporal caching can be used with in-line in situ visualization to reduce simulation run time while still capturing essential simulation results.
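The mechanism is essentially a bounded in-memory history: keep the last N timesteps, and only when a triggering event fires hand the cached window to the visualization pipeline. A toy sketch under assumed names:

```python
from collections import deque

class TemporalCache:
    """Keeps the most recent `window` timesteps in memory so that, when an
    event is detected, the steps leading up to it can still be analyzed
    and visualized."""
    def __init__(self, window):
        self.buffer = deque(maxlen=window)   # old steps fall off automatically

    def record(self, step, data):
        self.buffer.append((step, data))

    def flush_on_event(self, visualize):
        for step, data in self.buffer:       # replay the cached history
            visualize(step, data)
        self.buffer.clear()

# Simulation-loop sketch: cache every step; visualize only around events.
cache = TemporalCache(window=50)
for step in range(1000):
    data = {"step": step}                    # stand-in for simulation state
    cache.record(step, data)
    if step > 0 and step % 250 == 0:         # stand-in event detector
        cache.flush_on_event(lambda s, d: print("viz step", s))
```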
7

Wessels, D., W. Carroll, and M. Thomas. Negative Caching of DNS Resolution Failures. RFC Editor, December 2023. http://dx.doi.org/10.17487/rfc9520.

8

Andrews, M. Negative Caching of DNS Queries (DNS NCACHE). RFC Editor, March 1998. http://dx.doi.org/10.17487/rfc2308.

9

Nelson, Michael, Brent Welch, and John Ousterhout. Caching in the Sprite Network File System. Fort Belvoir, VA: Defense Technical Information Center, March 1987. http://dx.doi.org/10.21236/ada619418.

10

Danzig, Peter B., Richard S. Hall, and Michael F. Schwartz. A Case for Caching File Objects Inside Internetworks. Fort Belvoir, VA: Defense Technical Information Center, March 1993. http://dx.doi.org/10.21236/ada458004.
