Dissertations / Theses on the topic 'Caching'

Consult the top 50 dissertations / theses for your research on the topic 'Caching.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1. Miller, Jason Eric. "Software instruction caching." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40317.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 185-193).
As microprocessor complexities and costs skyrocket, designers are looking for ways to simplify their designs to reduce costs, improve energy efficiency, or squeeze more computational elements on each chip. This is particularly true for the embedded domain where cost and energy consumption are paramount. Software instruction caches have the potential to provide the required performance while using simpler, more efficient hardware. A software cache consists of a simple array memory (such as a scratchpad) and a software system that is capable of automatically managing that memory as a cache. Software caches have several advantages over traditional hardware caches. Without complex cache-management logic, the processor hardware is cheaper and easier to design, verify and manufacture. The reduced access energy of simple memories can result in a net energy savings if management overhead is kept low. Software caches can also be customized to each individual program's needs, improving performance or eliminating unpredictable timing for real-time embedded applications. The greatest challenge for a software cache is providing good performance using general-purpose instructions for cache management rather than specially-designed hardware. This thesis designs and implements a working system (Flexicache) on an actual embedded processor and uses it to investigate the strengths and weaknesses of software instruction caches. Although both data and instruction caches can be implemented in software, very different techniques are used to optimize performance; this work focuses exclusively on software instruction caches. The Flexicache system consists of two software components: a static off-line preprocessor to add caching to an application and a dynamic runtime system to manage memory during execution. Key interfaces and optimizations are identified and characterized. The system is evaluated in detail from the standpoints of both performance and energy consumption. The results indicate that software instruction caches can perform comparably to hardware caches in embedded processors. On most benchmarks, the overhead relative to a hardware cache is less than 12% and can be as low as 2.4%. At the same time, the software cache uses up to 6% less energy. This is achieved using a simple, directly-addressed memory and without requiring any complex, specialized hardware structures.
by Jason Eric Miller.
Ph.D.
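A toy sketch of the idea follows: a directly-addressed array memory managed entirely by software, where the miss check and refill are ordinary instructions. This is an invented illustration of the general mechanism, not Flexicache itself; the block size, slot count, and backing-store interface are assumptions.

```python
# Toy software instruction cache: a plain array memory plus a software miss
# handler. Sizes and the backing-store interface are invented for illustration.
BLOCK_SIZE = 16    # instructions per cache block
NUM_BLOCKS = 64    # slots in the scratchpad-like array memory

class SoftwareCache:
    def __init__(self, backing_store):
        self.backing = backing_store         # block_id -> list of instructions
        self.array = [None] * NUM_BLOCKS     # the directly-addressed memory
        self.tags = [None] * NUM_BLOCKS      # which block occupies each slot

    def fetch(self, address):
        block_id = address // BLOCK_SIZE
        slot = block_id % NUM_BLOCKS         # direct mapping: slot chosen by arithmetic
        if self.tags[slot] != block_id:      # miss: the runtime system loads the block
            self.array[slot] = self.backing[block_id]
            self.tags[slot] = block_id
        return self.array[slot][address % BLOCK_SIZE]
```

Because the tag check runs as general-purpose code on every fetch, keeping this management overhead low is exactly the challenge the thesis addresses.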

2. Gu, Wenzheng. "Ubiquitous Web caching." [Gainesville, Fla.]: University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002406.

3. Logren Dély, Tobias. "Caching HTTP: A comparative study of caching reverse proxies Varnish and Nginx." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9679.

Abstract:
With the number of users on the web steadily increasing, websites must at times endure heavy loads and risk grinding to a halt beneath the flood of visitors. One solution to this problem is HTTP reverse proxy caching, in which a cache acts as an intermediary between the web application and the user. Content from the application is stored and passed on, avoiding the need for the application to produce it anew for every request. One popular application designed solely for this task is Varnish; another interesting candidate is Nginx, which is primarily designed as a web server. This thesis compares the performance of the two applications in terms of the number of requests served in relation to response time, as well as system load and free memory. With both applications using their default configuration, the experiments find that Nginx performs better in the majority of the tests; the difference is, however, very slight in tests with a low request rate.

4. Caheny, Paul. "Runtime-assisted coherent caching." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670564.

Abstract:
In the middle of the 2000s a fundamental change of course occurred in computer architecture because techniques such as frequency scaling and instruction level parallelism were providing rapidly diminishing returns. Since then, scaling up thread-level parallelism through increasingly parallel multicore processors has become the primary driver of performance gains, exacerbating the pre-existing problem of the Memory Wall. In response to this, cache and memory architecture have become more complex, while still maintaining a shared view of memory to software. As trends such as increasing parallelism and heterogeneity continue apace, the contribution of the memory hierarchy as a proportion of the overall system performance profile will continue to grow. Since the middle of the 2000s, thread-level parallelism has increased across almost all computer processor designs, bringing the problem of programmability into sharper focus. One of the most promising developments in programming models in the past fifteen years has been task-based programming models. Such programming models provide ease of programmability for the user, at a level which is abstract enough to allow the runtime system layer to expertly optimise execution for the underlying hardware. The main goal of this thesis is to exploit information available in task-based programming models to drive optimisations in the memory hierarchy, through a hardware/software co-design approach. Data movement becomes the primary factor affecting power and performance as shared memory system architectures scale up in core count and therefore network diameter. The first contribution of this thesis studies the ability of a task-based programming model to constrain data movement in a real, very large shared memory system. It characterises directly and in detail the effectiveness of the programming model's runtime system at minimising data traffic in the hardware. The analysis demonstrates that the runtime system can maximise locality between tasks and the data they use, thus minimising the traffic in the cache coherent interconnect. The second and third contributions of this thesis investigate hardware/software co-design proposals to increase efficiency within the on-chip memory hierarchy. These two contributions exploit information already captured in existing task-based programming models. They communicate this information from the runtime system to the hardware and use it there to drive power, performance and area improvements in the memory hierarchy. A simulator-based approach is used to model and analyse both the second and third contributions. Scaling cache coherence among growing numbers of private caches is a crucial issue in computer architecture as core counts continue to increase. Improving the scalability of cache coherence is the topic of the second contribution. It demonstrates the ability of a runtime system and hardware co-design approach to dramatically reduce capacity demand on the coherence directory, which is a central issue in scaling cache coherence among private caches. Non-uniform cache access (NUCA) shared caches are also increasing in size and share of on-chip resources, as they are the last line of defence against costly off-chip memory accesses. The third proposal focuses on optimising such NUCA caches to increase their effectiveness at dealing with the bottleneck between computation and memory. It shows a runtime system and hardware co-design approach can successfully reduce the network distance costs in a shared NUCA cache.
Together the three contributions of this thesis demonstrate the potential for task-based programming models to address key elements of scalability in the memory hierarchy of future systems at the level of private caches, shared caches and main memory.

5. Irwin, James Patrick John. "Systems with predictable caching." Thesis, University of Bristol, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288213.

6. Kimbrel, Tracy. "Parallel prefetching and caching." Thesis, University of Washington, 1997. http://hdl.handle.net/1773/6943.

7. Sarkar, Prasenjit. "Hint-based cooperative caching." Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/288892.

Abstract:
This dissertation focuses on caching in distributed file systems, where the performance is constrained by expensive server accesses. This has led to the evolution of cooperative caching, an innovative technique which effectively utilizes the client memories in a distributed file system to reduce the impact of server accesses. This is achieved by adding another layer to the storage hierarchy called the cooperative cache, allowing clients to access and store file blocks in the caches of other clients. The major contribution of this dissertation is to show that a cooperative caching system that relies on local hints to manage the cooperative cache performs better than a more tightly coordinated fact-based system. To evaluate the performance of hint-based cooperative caching, trace-driven simulations are used to show that the hit ratios to the different layers of the storage hierarchy are as good as those of the existing tightly-coordinated algorithms, but with significantly reduced overhead. Following this, a prototype was implemented on a cluster of Linux machines, where the use of hints reduced the average block access time to almost half that of NFS, and incurred minimal overhead.
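A minimal sketch of the hint-based idea (data structures and names invented here, not Sarkar's implementation): each client keeps a local, possibly stale table of which peer probably caches a block, tries that peer first, and falls back to the server when the hint is wrong.

```python
class Client:
    """Cooperative-cache client that locates blocks via local hints."""
    def __init__(self, server):
        self.server = server   # dict: block_id -> data; the expensive fallback
        self.cache = {}        # blocks held in this client's memory
        self.hints = {}        # block_id -> Client believed to hold it (may be stale)

    def read(self, block_id, peers):
        if block_id in self.cache:                        # local hit
            return self.cache[block_id]
        peer = self.hints.get(block_id)
        if peer is not None and block_id in peer.cache:   # hint correct: cooperative hit
            data = peer.cache[block_id]
        else:                                             # stale or missing hint:
            data = self.server[block_id]                  # fall back to the server
        self.cache[block_id] = data
        for p in peers:                                   # hints spread lazily, with no
            p.hints[block_id] = self                      # central coordination
        return data
```

The point of the sketch is that a wrong hint costs one extra probe rather than requiring tight global coordination, which is where the reduced overhead comes from.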

8. Recayte, Estefania. "Caching in Heterogeneous Networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8974/1/0_Thesis.pdf.

Abstract:
A promising solution for coping with the massive demand for wireless data traffic is to keep replicas of potentially requested content memorized across the network. In cache-enabled heterogeneous networks, content is pre-fetched close to the users during network off-peak periods so that users can be served directly when the network is congested. Caching content at the edge of heterogeneous networks not only significantly reduces traffic congestion on the backhaul link but also achieves higher levels of energy efficiency. However, good system performance requires a careful analysis of the possible caching techniques. Due to the physical limitation of cache sizes and the excessive amount of content, the design of caching policies, which define how content is to be cached and select the likely data to store, is crucial. Within this thesis, caching techniques for storing and delivering content in heterogeneous networks are investigated from two different aspects. The first part of the thesis focuses on reducing power consumption when the cached content is delivered over a Gaussian interference channel and per-file rate constraints are imposed. Cooperative approaches between the transmitters to mitigate the interference experienced by the users are analyzed. Based on such approaches, a caching optimization problem for obtaining the best cache allocation (in the sense of minimizing the average power consumption) is proposed. The second part of the thesis focuses on caching techniques at the packet level, with the aim of reducing transmissions from the core of a heterogeneous network. Caching schemes based on rateless codes for storing and delivering the cached content are proposed. For each design, a placement optimization problem which minimizes the transmissions over the backhaul link is formulated.

9. Ou, Yi. "Caching for flash-based databases and flash-based caching for databases." München: Verlag Dr. Hut, 2012. http://d-nb.info/1028784120/34.

10. Pohl, Christoph. "Adaptive Caching of Distributed Components." Doctoral thesis, Sächsische Landesbibliothek - Staats- und Universitätsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1117701363347-79965.

Abstract:
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand thus tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself to an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.

11. Liu, Wei. "Distributed Collaborative Caching for WWW." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0014/MQ53180.pdf.

12. Dev, Kashinath. "Concurrency control in distributed caching." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-10112005-172329/.

Abstract:
Replication and caching strategies are increasingly being used to improve performance and reduce delays in distributed environments. A query can be answered more quickly by accessing a cached copy than by making a database round trip. Numerous techniques have been proposed to achieve caching and replication in various contexts. In our context of flat cluster-based networks, we have observed that none of the schemes prove to be optimal for all scenarios. In this thesis we look at concurrency control techniques for achieving consistency in distributed caching in flat cluster-based networks. We then derive heuristics for choosing some concurrency control mechanisms over others, depending on parameters such as the number of data requests and the ratio of read to write requests.

13. Fineman, Jeremy T. "Algorithms incorporating concurrency and caching." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55110.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 189-203).
This thesis describes provably good algorithms for modern large-scale computer systems, including today's multicores. Designing efficient algorithms for these systems involves overcoming many challenges, including concurrency (dealing with parallel accesses to the same data) and caching (achieving good memory performance). This thesis includes two parallel algorithms that focus on testing for atomicity violations in a parallel fork-join program. These algorithms augment a parallel program with a data structure that answers queries about the program's structure, on the fly. Specifically, one data structure, called SP-ordered-bags, maintains the series-parallel relationships among threads, which is vital for uncovering race conditions (bugs) in the program. Another data structure, called XConflict, aids in detecting conflicts in a transactional-memory system with nested parallel transactions. For a program with work T1 and span T∞, maintaining either data structure adds an overhead of PT∞ to the running time of the parallel program when executed on P processors using an efficient scheduler, yielding a total runtime of O(T1/P + PT∞). For each of these data structures, queries can be answered in O(1) time. This thesis also introduces the compressed sparse blocks (CSB) storage format for sparse matrices, which allows both Ax and A^T x to be computed efficiently in parallel, where A is an n x n sparse matrix with nnz > n nonzeros and x is a dense n-vector. The parallel multiplication algorithm uses Θ(nnz) work and ... span, yielding a parallelism of ..., which is amply high for virtually any large matrix.
Also addressing concurrency, this thesis considers two scheduling problems. The first scheduling problem, motivated by transactional memory, considers randomized backoff when jobs have different lengths. I give an analysis showing that binary exponential backoff achieves makespan ... with high probability, where V is the total length of all n contending jobs. This bound is significantly larger than when jobs are all the same size. A variant of exponential backoff, however, achieves makespan of ... with high probability. I also present the size-hashed backoff protocol, specifically designed for jobs having different lengths, that achieves makespan ... with high probability. The second scheduling problem considers scheduling n unit-length jobs on m unrelated machines, where each job may fail probabilistically. Specifically, an input consists of a set of n jobs, a directed acyclic graph G describing the precedence constraints among jobs, and a failure probability qij for each job j and machine i. The goal is to find a schedule that minimizes the expected makespan. I give an O(log log(min{m, n}))-approximation for the case of independent jobs (when there are no precedence constraints) and an O(log(n + m) log log(min{m, n}))-approximation algorithm when precedence constraints form disjoint chains. This chain algorithm can be extended into one that supports precedence constraints that are trees, which worsens the approximation by another log(n) factor. To address caching, this thesis includes several new variants of cache-oblivious dynamic dictionaries.
A cache-oblivious dictionary fills the same niche as a classic B-tree, but it does so without tuning for particular memory parameters. Thus, cache-oblivious dictionaries optimize for all levels of a multilevel hierarchy and are more portable than traditional B-trees. I describe how to add concurrency to several previously existing cache-oblivious dictionaries. I also describe two new data structures that achieve significantly cheaper insertions with a small overhead on searches. The cache-oblivious lookahead array (COLA) supports insertions/deletions and searches in O((1/B) log N) and O(log N) memory transfers, respectively, where B is the block size, M is the memory size, and N is the number of elements in the data structure. The xDict supports these operations in O((1/(εB^(1-ε))) log_B(N/M)) and O((1/ε) log_B(N/M)) memory transfers, respectively, where 0 < ε < 1 is a tunable parameter. Also on caching, this thesis answers the question: what is the worst possible page-replacement strategy? The goal of this whimsical chapter is to devise an online strategy that achieves the highest possible fraction of page faults / cache misses as compared to the worst offline strategy. I show that there is no deterministic strategy that is competitive with the worst offline. I also give a randomized strategy based on the most recently used heuristic and show that it is the worst possible page-replacement policy. On a more serious note, I also show that direct mapping is, in some sense, a worst possible page-replacement policy. Finally, this thesis includes a new algorithm, following a new approach, for the problem of maintaining a topological ordering of a dag as edges are dynamically inserted.
The main result included here is an O(n^2 log n) algorithm for maintaining a topological ordering in the presence of up to m < n(n-1)/2 edge insertions. In contrast, the previously best algorithm has a total running time of O(min{m^(3/2), n^(5/2)}). Although these algorithms are not parallel and do not exhibit particularly good locality, some of the data structural techniques employed in my solution are similar to others in this thesis.
by Jeremy T. Fineman.
Ph.D.
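As a concrete illustration of the backoff setting analyzed above, a minimal simulation of binary exponential backoff is sketched below. The model is deliberately simplified and assumed here (unit-length jobs, synchronous slots, success only when a job attempts alone); the thesis's analysis concerns jobs of different lengths.

```python
import random

def exp_backoff_makespan(num_jobs, seed=0):
    """Simulate binary exponential backoff; return the slot when the last job finishes."""
    rng = random.Random(seed)
    window = {j: 1 for j in range(num_jobs)}    # current backoff window per job
    next_try = {j: 0 for j in range(num_jobs)}  # slot of each job's next attempt
    done, slot = set(), 0
    while len(done) < num_jobs:
        attempts = [j for j in next_try if j not in done and next_try[j] == slot]
        if len(attempts) == 1:                  # success only without contention
            done.add(attempts[0])
        else:
            for j in attempts:                  # collision: double window, retry later
                window[j] *= 2
                next_try[j] = slot + 1 + rng.randrange(window[j])
        slot += 1
    return slot
```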

14. Stafford, Matthew. "Online searching and connecting caching." Digital version, 2000. http://wwwlib.umi.com/cr/utexas/main.

15. Chen, Li. "Semantic caching for XML queries." Electronic thesis, Worcester Polytechnic Institute, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0129104-174457.

Abstract:
Thesis (Ph. D.)--Worcester Polytechnic Institute.
Keywords: Replacement strategy; Query rewriting; Query containment; Semantic caching; Query; XML. Includes bibliographical references (p. 210-222).

16. Buerli, Michael. "Radiance Caching with Environment Maps." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/991.

Abstract:
The growing demand for realistic renderings in both film and games has led to a number of proposed solutions to the Global Illumination problem. In order to imitate natural lighting, it is necessary to gather indirect illumination of the surrounding environment for lighting computations. This is a computationally expensive problem, requiring the sampling or rasterization of the hemisphere surrounding each ray intersection, for which there is no standardized solution. In this thesis we propose a new method of approximation using environment maps for caching radiance. The proposed method leverages a voxelized scene representation for storing direct illumination and a cache of environment maps for integrating indirect illumination. By using a voxelized scene to gather indirect lighting contributions and caching these contributions spatially, we are able to achieve fast and convincing renders of large complex scenes. The result of our implementation produces images comparable to those of existing Monte Carlo integration methods with render speeds an order of magnitude or more faster.

17. Herber, Robert. "Distributed Caching in a Multi-Server Environment: A study of Distributed Caching mechanisms and an evaluation of Distributed Caching Platforms available for the .NET Framework." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-10394.

Abstract:
This paper discusses the problems Distributed Caching can be used to solve and evaluates a couple of Distributed Caching Platforms targeting the .NET Framework. Basic concepts and functionality that are general to all distributed caching platforms are covered in chapter 2. We discuss how Distributed Caching can resolve synchronization problems when using multiple local caches, how a caching tier can relieve the database and improve the scalability of the system, and also how memory consumption can be reduced by storing data in a distributed fashion. A couple of .NET-based caching platforms are evaluated and tested: Microsoft AppFabric Caching, ScaleOut StateServer and Alachisoft NCache. For a quick overview see the feature comparison table in chapter 3, and for the main advantages and disadvantages of each platform see section 6.1. The benchmark results show the difference in read performance, between local caching and distributed caching as well as distributed caching with a coherent local cache, for each evaluated Caching Platform. Local caching frameworks and database read times are included for comparison. These benchmark results are in chapter 5.

18. Mahdavi, Mehregan. "Caching dynamic data for web applications." Thesis, University of New South Wales, Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/32316.

Abstract:
Web portals are a rapidly growing class of applications, providing a single interface to access different sources (providers). The results from the providers are typically obtained by each provider querying a database and returning an HTML or XML document. Performance, and in particular providing fast response times, is one of the critical issues in such applications. Dissatisfaction of users dramatically increases with increasing response time, resulting in abandonment of Web sites, which in turn could result in loss of revenue by the providers and the portal. Caching is one of the key techniques that address the performance of such applications. In this work we focus on improving the performance of portal applications via caching. We discuss the limitations of existing caching solutions in such applications and introduce a caching strategy based on collaboration between the portal and its providers. Providers trace their logs, extract information to identify good candidates for caching and notify the portal. Caching at the portal is decided based on scores calculated by providers and associated with objects. We evaluate the performance of the collaborative caching strategy using simulation data. We show how providers can trace their logs and calculate cache-worthiness scores for their objects and notify the portal. We also address the issue of heterogeneous scoring policies by different providers and introduce mechanisms to regulate caching scores. We also show how the portal and providers can synchronize their metadata in order to minimize the overhead associated with collaboration for caching.
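The provider-side scoring step can be sketched as follows; the concrete formula below (access frequency times generation cost, normalized by object size) is an invented stand-in, since the thesis defines its own cache-worthiness metric.

```python
from collections import Counter

def cache_worthiness(log, gen_cost, size):
    """Score a provider's objects from its access log; higher = better cache candidate.

    log      -- iterable of object ids, one per request served
    gen_cost -- dict: object id -> cost of generating the object (e.g., query time)
    size     -- dict: object id -> object size in bytes
    """
    freq = Counter(log)
    return {obj: freq[obj] * gen_cost[obj] / size[obj] for obj in freq}

# The provider would notify the portal of its top-scoring objects:
scores = cache_worthiness(["a", "b", "a", "c", "a"],
                          gen_cost={"a": 40, "b": 5, "c": 90},
                          size={"a": 2048, "b": 512, "c": 8192})
candidates = sorted(scores, key=scores.get, reverse=True)
```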

19. Chupisanyarote, Sanpetch. "Content Caching in Opportunistic Wireless Networks." Thesis, KTH, Kommunikationsnät, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91875.

Abstract:
Wireless networks have become popular in the last two decades and their use has since grown significantly. There is a concern that the existing resources of centralized networks may not sufficiently serve the enormous demand of customers. This thesis proposes one solution that has the potential to improve the network. We introduce decentralized networks, particularly wireless ad-hoc networks, where users communicate and exchange information only with their neighbors. Thus, our main focus is to enhance the performance of data dissemination in wireless ad-hoc networks. In this thesis, we first examine a content distribution concept in which nodes only focus on downloading and sharing the contents that are of their own interest. We call this private content, and it is stored in a private cache. Then, we design and implement a relay-request caching strategy, where a node will generously help to fetch contents that another node asks for, although the contents are not of its own interest. The node is not interested in these contents but fetches them on behalf of others; they are considered public contents. These public contents are stored in a public cache. We also propose three public caching options for optimizing network resources: relay request on demand, hop-limit, and greedy relay request. The proposed strategies are implemented in the OMNeT++ simulator and evaluated on mobility traces from Legion Studio. We also compare our novel caching strategy with an optimal channel choice strategy. The results are analyzed, and they show that the use of a public cache in the relay-request strategy enhances performance only marginally while overhead increases significantly.
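A minimal sketch of the relay-request idea with a hop limit (the structure and names are invented for illustration): a node first checks its private and public caches, and a helpful node that fetches on another's behalf stores the result as public content.

```python
class Node:
    def __init__(self, interests):
        self.interests = set(interests)  # content ids this node wants for itself
        self.private = {}                # private cache: content of own interest
        self.public = {}                 # public cache: fetched on others' behalf

    def lookup(self, content_id):
        return self.private.get(content_id) or self.public.get(content_id)

def relay_request(node, neighbors, content_id, hop_limit=2):
    """Ask neighbors for content, relaying the request at most hop_limit hops."""
    if hop_limit == 0:
        return None
    for peer in neighbors(node):
        found = peer.lookup(content_id)
        if found is None:                             # peer relays on our behalf
            found = relay_request(peer, neighbors, content_id, hop_limit - 1)
            if found is not None and content_id not in peer.interests:
                peer.public[content_id] = found       # stored as public content
        if found is not None:
            return found
    return None
```

The hop limit caps the extra traffic, which matches the trade-off the thesis observes: the public cache helps only marginally while overhead grows with how aggressively requests are relayed.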

20. Gaspar, Cristian. "Variations on the Theme of Caching." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1048.

Abstract:
This thesis is concerned with caching algorithms. We investigate three variations of the caching problem: web caching in the Torng framework, relative competitiveness and caching with request reordering.

In the first variation we define different cost models involving page sizes and page costs. We also present the Torng cost framework introduced by Torng in [29]. Next we analyze the competitive ratio of online deterministic marking algorithms in the BIT cost model combined with the Torng framework. We show that given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive.

The second variation consists in using the relative competitiveness ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin [11] to define our own concept of relative competitive ratio. We demonstrate results regarding the relative competitiveness of two cache eviction policies in both the basic and the Torng framework combined with the CLASSICAL cost model.

The third variation is caching with request reordering. Two reordering models are defined. We prove some important results about the value of a move and number of orderings, then demonstrate results about the approximation factor and competitive ratio of offline and online reordering schemes, respectively.
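For reference, the generic shape of a marking algorithm, the class analyzed in the first variation, looks as follows. This is the standard phase-based scheme with random eviction of unmarked pages, sketched for illustration, not the thesis's specific BIT/Torng analysis.

```python
import random

def marking_cache(requests, k):
    """Serve a request sequence with a size-k cache using a generic marking algorithm."""
    cache, marked, faults = set(), set(), 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:                  # cache full: must evict
                if marked == cache:              # every page marked: a new phase begins
                    marked.clear()
                victim = random.choice(sorted(cache - marked))  # evict an unmarked page
                cache.remove(victim)
            cache.add(page)
        marked.add(page)                         # mark the page on every access
    return faults
```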

Arteaga, Clavijo Dulcardo Ariel. "Flash Caching for Cloud Computing Systems." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2496.

Abstract:
As the size of cloud systems and the number of hosted virtual machines (VMs) rapidly grow, the scalability of shared VM storage systems becomes a serious issue. Client-side flash-based caching has the potential to improve the performance of cloud VM storage by employing flash storage available on the VM hosts to exploit the locality inherent in VM IOs. However, there are several challenges to the effective use of flash caching in cloud systems. First, cache configurations such as size, write policy, metadata persistency and RAID level have significant impacts on flash caching. Second, the typical capacity of flash devices is limited compared to the dataset size of consolidated VMs. Finally, flash devices wear out and face serious endurance issues which are aggravated by the use for caching. This dissertation presents the research for addressing these problems of cloud flash caching in the following three aspects. First, it presents a thorough study of different cache configurations including a new cache-optimized RAID configuration using a large amount of long-term traces collected from real-world public and private clouds. Second, it studies an on-demand flash cache management solution for meeting VM cache demands and minimizing device wear-out. It uses a new cache demand model Reuse Working Set (RWS) to capture the data with good temporal locality, and uses the RWS size (RWSS) to model a workload's cache demand. Finally, to handle situations where a cache is insufficient for VMs' demands, it employs dynamic cache migration to balance cache load across hosts by live migrating cached data along with the VMs. The results show that the cache-optimized RAID improves performance by 137% without sacrificing reliability, compared to traditional RAID. The RWSS-based on-demand cache allocation reduces workload's cache usage by 78% and lowers the amount of writes sent to cache device by 40%, compared to traditional working set based cache allocation. Combining on-demand cache allocation with dynamic cache migration for 12 concurrent VMs, results show 28% higher hit ratio and 28% lower 90th percentile IO latency, compared to the case without cache allocation.
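The Reuse Working Set idea reduces to a small computation, sketched here under a simplified fixed-window model (the dissertation's full RWSS estimation is more involved):

```python
def reuse_working_set(trace, window):
    """Blocks referenced at least twice within the last `window` accesses.

    Unlike the classic working set, single-touch blocks are excluded, so cache
    space is only claimed for data with actual temporal locality.
    """
    seen, reused = set(), set()
    for block in trace[-window:]:
        if block in seen:
            reused.add(block)    # second touch inside the window: block has reuse
        seen.add(block)
    return reused

trace = [1, 2, 3, 1, 4, 2, 5, 5, 6]
rws = reuse_working_set(trace, window=8)   # {2, 5}; the RWSS is len(rws)
```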

22. Ho, Henry, and Axel Odelberg. "Efficient caching of rich data sets." Thesis, KTH, Data- och elektroteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145765.

Abstract:
The importance of a smooth user experience in applications is increasing. To achieve more performance when interacting with resource-intensive data it is important to implement an efficient caching method. The goal of this thesis is to investigate how to implement an efficient cache in an Android application. The use case is to download metadata and images of movies from a WebAPI provided by June AB. In order to investigate which caching method is the most efficient, a pre-study was done on some of the most common caching methods in use today. Based on the results of the pre-study, two different caching algorithms were tested and evaluated: First-In First-Out (FIFO) and Least Recently Used (LRU). These two algorithms were then implemented in an Android application. The resulting prototype has a responsive user interface capable of caching large amounts of data without noticeable performance loss compared to a non-cached version. The results from the prototype showed that LRU is the better strategy in our use case; however, we discovered that the buffer size of the cache has the biggest impact on performance, not the cache eviction strategy.
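The two policies compared in the thesis differ only in whether a cache hit refreshes an entry's position in the eviction order; the compact sketch below (illustrative, not the thesis code) makes that single difference visible.

```python
from collections import OrderedDict

class Cache:
    def __init__(self, capacity, lru=True):
        self.capacity, self.lru = capacity, lru
        self.data = OrderedDict()                # keeps insertion/recency order

    def get(self, key):
        if key not in self.data:
            return None                          # miss
        if self.lru:
            self.data.move_to_end(key)           # LRU: a hit refreshes recency;
        return self.data[key]                    # FIFO: insertion order is kept

    def put(self, key, value):
        if key in self.data and self.lru:
            self.data.move_to_end(key)
        elif key not in self.data and len(self.data) >= self.capacity:
            self.data.popitem(last=False)        # evict from the front (oldest entry)
        self.data[key] = value
```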

23. Liang, Zhengang. "Transparent Web caching with load balancing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59383.pdf.

24. Gupta, Priya. "Providing caching abstractions for web applications." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62453.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 99-101).
Web-based applications are used by millions of users daily, and as a result a key challenge facing web application designers is scaling their applications to handle this load. A crucial component of this challenge is scaling the data storage layer, especially for the newer class of social networking applications that have huge amounts of shared data. Caching is an important scaling technique and is a critical part of the storage layer for such high-traffic web applications. Usually, building caching mechanisms involves significant effort from the application developer to maintain and invalidate data in the cache. In this work we present CacheGenie, a system which aims to make it easy for web application developers to build caching mechanisms in their applications. It achieves this by proposing high-level caching abstractions for frequently observed query patterns in web applications. These abstractions take the form of declarative query objects, and once the developer defines them, she does not have to worry about managing the cache (i.e., insertion and deletion) or maintaining consistency (e.g., invalidation or updates) when writing application code. We designed and implemented CacheGenie in the popular Django web application framework, with PostgreSQL as the database backend and memcached as the caching layer. We use triggers inside the database to automatically invalidate or keep the cache synchronized, as desired by the developer. We have not made any modifications to PostgreSQL or memcached. To evaluate our prototype, we ported several Pinax web applications to use our caching abstractions and performed several experiments. Our results show that it takes little effort for application developers to use CacheGenie, and that caching provides a throughput improvement by a factor of 2-2.5 for read-mostly workloads.
by Priya Gupta.
S.M.
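The flavor of such a declarative abstraction can be sketched with a Python decorator. This is an invented analogue, not CacheGenie's implementation: in CacheGenie the developer declares query objects in Django, memcached holds the results, and database triggers perform the invalidation automatically.

```python
import functools

_cache = {}                                     # stands in for the memcached layer

def cacheable(func):
    """Declare a function cacheable; results are keyed by function name and args."""
    @functools.wraps(func)
    def wrapper(*args):
        key = (func.__name__, args)
        if key not in _cache:
            _cache[key] = func(*args)           # miss: compute and fill
        return _cache[key]
    wrapper.invalidate = lambda *args: _cache.pop((func.__name__, args), None)
    return wrapper

def query_friends(user_id):                     # stub standing in for a database query
    return ["alice", "bob"]

@cacheable
def friend_list(user_id):
    return query_friends(user_id)

friend_list(42)             # first call computes; repeated calls hit the cache
friend_list.invalidate(42)  # in CacheGenie, a database trigger would drive this
```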

25. Ports, Dan R. K. (Dan Robert Kenneth). "Application-level caching with transactional consistency." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75448.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 147-159).
Distributed in-memory application data caches like memcached are a popular solution for scaling database-driven web sites. These systems increase performance significantly by reducing load on both the database and application servers. Unfortunately, such caches present two challenges for application developers. First, they cannot ensure that the application sees a consistent view of the data within a transaction, violating the isolation properties of the underlying database. Second, they leave the application responsible for locating data in the cache and keeping it up to date, a frequent source of application complexity and programming errors. This thesis addresses both of these problems in a new cache called TxCache. TxCache is a transactional cache: it ensures that any data seen within a transaction, whether from the cache or the database, reflects a slightly stale but consistent snapshot of the database. TxCache also offers a simple programming model. Application developers simply designate certain functions as cacheable, and the system automatically caches their results and invalidates the cached data as the underlying database changes. Our experiments found that TxCache can substantially increase the performance of a web application: on the RUBiS benchmark, it increases throughput by up to 5.2x relative to a system without caching. More importantly, on this application, TxCache achieves performance comparable (within 5%) to that of a non-transactional cache, showing that consistency does not have to come at the price of performance.
by Dan R. K. Ports.
Ph.D.

26. Johnson, Edwin N. (Edwin Neil). "A protocol for network level caching." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9860.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 44-45).
by Edwin N. Johnson.
S.B. and M.Eng.

27. Mertz, Jhonny Marcos Acordi. "Understanding and automating application-level caching." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/156813.

Abstract:
Latency and cost of Internet-based services are encouraging the use of application-level caching to continue satisfying users' demands and improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad-hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern, which is unrelated to the application business logic. As a result, application-level caching is a time-consuming and error-prone task, becoming a common source of bugs. This dissertation advances work on application-level caching by providing an understanding of its state-of-practice and automating the decision regarding cacheable content, thus providing developers with substantial support to design, implement and maintain application-level caching solutions. More specifically, we provide three key contributions: structured knowledge derived from a qualitative study, a survey of the state-of-the-art on static and adaptive caching approaches, and a technique and framework that automate the challenging task of identifying cache opportunities. The qualitative study, which involved the investigation of ten web applications (open-source and commercial) with different characteristics, allowed us to determine the state-of-practice of application-level caching, along with practical guidance to developers in the form of patterns and guidelines to be followed. Based on such patterns and guidelines, we also propose an approach to automate the identification of cacheable methods, which is often done manually and is not supported by existing approaches to implement application-level caching. We implemented a caching framework that can be seamlessly integrated into web applications to automatically identify and exploit caching opportunities at runtime, by monitoring system execution and adaptively managing caching decisions. We evaluated our approach empirically with three open-source web applications, and results indicate that we can identify adequate caching opportunities, improving application throughput by up to 12.16%. Furthermore, our approach can prevent code tangling and raise the abstraction level of caching.
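The runtime identification step can be sketched as follows. The statistics and threshold are invented stand-ins for the richer criteria used by the dissertation's framework, and the sketch assumes methods are deterministic.

```python
from collections import defaultdict

class CacheAdvisor:
    """Observe method calls and flag methods whose inputs repeat often."""
    def __init__(self, min_calls=100, min_repeat_ratio=0.5):
        self.calls = defaultdict(int)      # method -> total observed calls
        self.repeats = defaultdict(int)    # method -> calls with previously seen args
        self.seen = defaultdict(set)       # method -> distinct argument tuples
        self.min_calls = min_calls
        self.min_repeat_ratio = min_repeat_ratio

    def observe(self, method, args):
        self.calls[method] += 1
        if args in self.seen[method]:
            self.repeats[method] += 1      # same input seen before: result was reusable
        self.seen[method].add(args)

    def cacheable_methods(self):
        return [m for m in self.calls
                if self.calls[m] >= self.min_calls
                and self.repeats[m] / self.calls[m] >= self.min_repeat_ratio]
```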

28. Chiang, Cho-Yu. "On building dynamic web caching hierarchies." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488199501403111.

29. Arshinov, Alex. "Building high-performance web-caching servers." Thesis, De Montfort University, 2004. http://hdl.handle.net/2086/13257.

30. Xu, Ji. "Data caching in wireless mobile networks." Thesis, Hong Kong University of Science and Technology, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20XU.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 57-60). Also available in electronic version. Access restricted to campus users.

31. Rajani, Meena. "Application Server Caching with Freshness Guarantees." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9730.

Abstract:
Dynamic websites rely on caching and clustering to achieve high performance and scalability. But while queries benefit from such middle-tier caching, updates introduce a distributed cache consistency problem. In addition, even a high cache hit ratio is not beneficial if the cached data is either not fresh or not consistent enough. We have proposed Multi-Object Freshness-Aware Caching (MOFAC): MOFAC tracks the freshness of the cached data and allows clients to explicitly trade data freshness for faster response times. In the first part of this thesis, we introduce the core MOFAC caching algorithm. MOFAC provides clients a consistent snapshot of data with reduced response time if the client agrees to lower its data freshness expectation. The MOFAC algorithm has been implemented in the Java EE application server cache, and the evaluation with an exemplary bookshop application shows that client requests for complex objects can clearly benefit from this approach. For example, for a 20% update ratio, MOFAC can almost double the throughput if clients agree to access data up to 90 seconds old. The second part of the thesis deals with queries which access several complex objects in a transaction. These queries, which touch more than one complex object in a request, can also benefit from reduced latency by agreeing to receive responses with relaxed consistency. To cater for these queries we have extended MOFAC to work with relaxed consistency for distributed queries. This thesis introduces both the corresponding theoretical concepts of group valid interval and group consistency, and also devises concrete distributed algorithms for MOFAC and MOFAC with relaxed consistency. These algorithms are implemented in a distributed caching system based on the Java EE standard. The evaluation shows that these new freshness and consistency guarantees come at only a reasonable overhead compared to standard cache invalidation.
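The core freshness trade-off can be sketched in a few lines (the API and field names are invented; MOFAC additionally guarantees a consistent multi-object snapshot, which this sketch omits):

```python
import time

class FreshnessCache:
    def __init__(self):
        self.store = {}                    # key -> (value, time when cached)

    def put(self, key, value):
        self.store[key] = (value, time.time())

    def get(self, key, max_age, fetch):
        """Serve from cache if the entry is younger than the client's freshness
        bound max_age (seconds); otherwise re-fetch from the backend."""
        entry = self.store.get(key)
        if entry is not None and time.time() - entry[1] <= max_age:
            return entry[0]                # stale-but-acceptable: fast path
        value = fetch(key)                 # slow path: backend access
        self.put(key, value)
        return value

cache = FreshnessCache()
# A client willing to read data up to 90 seconds old (as in the evaluation above):
# price = cache.get("book:42", max_age=90, fetch=load_price_from_db)  # hypothetical fetch
```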

32. Pervej, Md Ferdous. "Edge Caching for Small Cell Networks." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7580.

Abstract:
The idea of storing contents such as media files, music files, movie clips, etc. is simple, yet challenging in terms of the effort required to make it count. Some of the benefits of pre-storing contents are a reduced delay in accessing or downloading a content, a reduced load on the centralized servers and, of course, a higher data rate. However, several challenges need to be addressed to achieve these benefits. Among many, some of the fundamental ones are limited storage capacity, storing the right content and minimizing the costs. This thesis aims to address these challenges. First, a framework for predicting the proper contents that need to be stored in the limited storage capacity is presented. Then, the cost is minimized considering several real-world scenarios. While doing that, all possible collaborations among the local nodes are performed to ensure high performance. Therefore, the goal of this thesis is to come up with a solution to the content storing problems so that the network cost is minimized.

33. Jurk, Steffen. "A simultaneous execution scheme for database caching." [S.l.]: [s.n.], 2005. http://se6.kobv.de:8000/btu/volltexte/2006/22.

34. Zou, Qing. "Transparent Web caching with minimum response time." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ65661.pdf.

35. Thadani, Prakash. "Local caching architecture for RFID tag resolution." Thesis, Wichita State University, 2007. http://hdl.handle.net/10057/1559.

Abstract:
Radio Frequency Identification (RFID) is set to become the standard in identification and tracking of items worldwide with projected numbers in tens of billions in a few years. With the reducing size and cost of RFID equipment, the use of RFIDs is being pushed well beyond just replacement of barcodes. Even though the deployment of RFIDs is growing at an exponential rate, the only global standard that currently exists for RFID networks is the EPCglobal Network proposed by EPCglobal, Inc. This thesis reviews the EPCglobal Network Architecture and provides a detailed explanation and critique of the proposed framework. Although this architecture has been designed as a very robust and scalable architecture, there are some limitations related to delay and reliability. The thesis proposes an enhancement to the EPCglobal Architecture to mitigate these issues. The new architecture suggests that RFID users maintain a local database that synchronizes with the Master database, so that under certain conditions, data can be retrieved locally instead of going over the network for identification of every single item. The thesis compares the two architectures using a cost function analysis. Also, as a means to showcase the advantages of the proposed enhancement as well as to enable future users and researchers to have access to a RFID test environment, a complete, scalable RFID simulator has been built from the ground up. The results show that the proposed solution demonstrates distinct performance improvements over the original architecture and also increases the reliability of the system.
Thesis (M.S)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering
"December 2007."
APA, Harvard, Vancouver, ISO, and other styles
36

Race, Nicholas John Paul. "Support for video distribution through multimedia caching." Thesis, Lancaster University, 2000. http://eprints.lancs.ac.uk/11801/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Larsson, Carl-Johan. "User-Based Predictive Caching of Streaming Media." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246044.

Full text
Abstract:
Streaming media is a growing market all over the world, which places strict requirements on mobile connectivity. The foundation for a good user experience when supplying a streaming media service on a mobile device is to ensure that the user can access the requested content. Because mobile connectivity varies, measures have to be taken to remove as much dependency as possible on the quality of the connection. This thesis investigates the use of a Long Short-Term Memory machine learning model for predicting a future geographical location of a mobile device. The predicted location, combined with information about cellular connectivity in that geographical area, is used to schedule prefetching of media content in order to improve user experience and reduce mobile data usage. The Long Short-Term Memory model suggested in this thesis achieves an accuracy of 85.15% averaged over 20000 routes, and the predictive caching retained user experience while decreasing the amount of data consumed.
Streaming media is a globally growing market that places strict demands on mobile networks. The foundation of a good user experience when providing a streaming media service is to guarantee access to the requested content. Because of the varying availability of mobile networks, it is important that such services eliminate the strict dependence on the mobile network connection. This master's thesis investigates the Long Short-Term Memory machine learning model for predicting future geographical positions of a mobile device. The predicted position is then used, in combination with information about the connection speed in the predicted area, to fetch streaming media in advance, before the mobile device enters an area with a low connection speed. This serves two purposes: to improve the user experience, but also to reduce the amount of data stored ahead of time without risking interruption of the device's media stream. The evaluated model achieved an average accuracy of 85.15% measured over 20000 different geographical routes. The prefetching of streaming media maintained the user experience while the data consumed decreased.
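Leaving the LSTM itself aside, the prefetch-scheduling decision described in this abstract might look roughly as follows. The connectivity map, thresholds, and function names are invented for illustration.

```python
def plan_prefetch(predicted_cells, coverage_mbps, playback_mbps, horizon_s):
    """For each predicted cell on the route, check whether the expected
    bandwidth sustains playback; if not, schedule enough media (in seconds)
    to bridge the coverage gap before the device enters it."""
    to_prefetch_s = 0.0
    for cell, dwell_s in predicted_cells:
        if coverage_mbps.get(cell, 0.0) < playback_mbps:
            to_prefetch_s += dwell_s        # stream would stall here: buffer it
    return min(to_prefetch_s, horizon_s)    # cap at the prefetch horizon

# Hypothetical route of (cell, dwell time in seconds) and a coverage map.
route = [("cell_a", 30), ("tunnel_b", 45), ("cell_c", 20)]
coverage = {"cell_a": 12.0, "tunnel_b": 0.5, "cell_c": 8.0}
print(plan_prefetch(route, coverage, playback_mbps=4.0, horizon_s=120))  # 45.0
```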
APA, Harvard, Vancouver, ISO, and other styles
38

Law, Ching 1975. "A new competitive analysis of randomized caching." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jing, Yuxin. "Evaluating caching mechanisms in future Internet architectures." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106120.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 54-57).
This thesis tests and evaluates the performance effects of in-network storage in newly proposed Internet architectures. In a world where more and more people are mobile while connected to the Internet, we examine how the added variable of user mobility affects how these architectures perform under different loads. Evaluating the effects of in-network storage and caching in these novel architectures adds another facet to understanding how viable an alternative they would be to the TCP/IP paradigm of today's Internet. In Named Data Networking, where storage is used to cache content directly, we observe how caching affects where content resides in the network; in MobilityFirst, where storage is used to cache chunks for robust delivery, we examine how its layers work together during a mobility event.
by Yuxin Jing.
M. Eng.
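For context, the NDN-style in-network caching evaluated in this thesis works roughly as in the toy Content Store below: an LRU sketch of ours, not the actual NDN forwarding daemon.

```python
from collections import OrderedDict

class ContentStore:
    """Toy sketch of an NDN router's Content Store: Data packets are
    cached by name on the way back to the consumer, so later Interests
    for the same name can be answered locally (LRU eviction)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()           # name -> data

    def on_interest(self, name):
        if name in self.store:
            self.store.move_to_end(name)     # refresh LRU position
            return self.store[name]          # cache hit: serve locally
        return None                          # miss: forward upstream via FIB

    def on_data(self, name, data):
        self.store[name] = data              # cache the returning Data packet
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cs = ContentStore()
cs.on_data("/videos/demo/seg1", b"...")
print(cs.on_interest("/videos/demo/seg1") is not None)  # True
```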
APA, Harvard, Vancouver, ISO, and other styles
40

Håkansson, Fredrik, and Carl-Johan Larsson. "User-Based Predictive Caching of Streaming Media." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151008.

Full text
Abstract:
Streaming media is a growing market all over the world, which places strict requirements on mobile connectivity. The foundation for a good user experience when supplying a streaming media service on a mobile device is to ensure that the user can access the requested content. Because mobile connectivity varies, measures have to be taken to remove as much dependency as possible on the quality of the connection. This thesis investigates the use of a Long Short-Term Memory machine learning model for predicting a future geographical location of a mobile device. The predicted location, combined with information about cellular connectivity in that geographical area, is used to schedule prefetching of media content in order to improve user experience and reduce mobile data usage. The Long Short-Term Memory model suggested in this thesis achieves an accuracy of 85.15% averaged over 20000 routes, and the predictive caching retained user experience while decreasing the amount of data consumed.

This thesis was written jointly by two students from different universities, so the exact same thesis is published at two universities (LiU and KTH), with different style templates. The other report has identification number TRITA-EECS-EX-2018:403.

APA, Harvard, Vancouver, ISO, and other styles
41

Sherman, Alexander 1975. "Distributed web caching system with consistent hashing." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80121.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 63-64).
by Alexander Sherman.
APA, Harvard, Vancouver, ISO, and other styles
42

Marlow, Gregory. "Week 11, Video 06: Caching The Simulation." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/digital-animation-videos-oer/75.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

AlHassoun, Yousef. "TOWARDS EFFICIENT CODED CACHING FOR WIRELESS NETWORKS." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1575632062797432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mohamed, Rozlina. "Implications of query caching for JXTA peers." Thesis, Aston University, 2014. http://publications.aston.ac.uk/24370/.

Full text
Abstract:
This dissertation studies the caching of queries and how to cache them efficiently, so that retrieving previously accessed data needs no intermediary nodes between the data-source peer and the querying peer in a super-peer P2P network. A precise algorithm was devised that demonstrates how queries can be deconstructed to provide greater flexibility for reusing their constituent elements. It shows how a subsequent query can make use of more than one previous query, and of any part of those queries, to re-establish direct data communication with one or more source peers that supplied data previously. In effect, a new query can search the entire cached list of queries to construct the list of data locations it requires, matching any locations previously accessed. The new method increases the likelihood that repeat queries can reuse earlier queries, and it provides a viable way of bypassing shared data indexes in structured networks. It could also increase the efficiency of unstructured networks by reducing traffic and the propensity for network flooding. In addition, a method for predicting query-routing performance using a UML sequence diagram is introduced. This form of performance evaluation provides designers with information about when caching is most beneficial and how peer connections can optimize its exploitation.
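The query-deconstruction idea might be sketched as a cache from query elements to the peers that served them, so the overlapping parts of a new query can be sent straight to known source peers. The structure below is our illustration, not the dissertation's algorithm.

```python
class QueryCache:
    """Sketch: remember which source peers answered each query element,
    so a later query whose elements overlap can contact those peers
    directly, bypassing the super-peer index."""

    def __init__(self):
        self.element_sources = {}            # query element -> set of peers

    def record(self, elements, source_peers):
        for el in elements:
            self.element_sources.setdefault(el, set()).update(source_peers)

    def plan(self, elements):
        """Split a new query into known elements (direct peer contact)
        and unknown elements (must be routed through the index)."""
        known = {el: self.element_sources[el]
                 for el in elements if el in self.element_sources}
        unknown = [el for el in elements if el not in self.element_sources]
        return known, unknown

qc = QueryCache()
qc.record(["genre='jazz'", "year>1990"], {"peerA"})
print(qc.plan(["genre='jazz'", "bitrate>192"]))
# known: {"genre='jazz'": {'peerA'}}; unknown: ["bitrate>192"]
```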
APA, Harvard, Vancouver, ISO, and other styles
45

Thadani, Prakash Pendse Ravindra. "Local caching architecture for RFID tag resolution /." Thesis, A link to full text of this thesis in SOAR, 2007. http://hdl.handle.net/10057/1559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kaplan, Scott Frederick. "Compressed caching and modern virtual memory simulation /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Loukopoulos, Athanasios. "Caching and replication schemes on the Internet /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20LOUKOP.

Full text
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 149-163). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
48

Wolman, Alastair. "Sharing and caching characteristics of Internet content /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Obalappa, Dinesh Tretiak Oleh J. "Optimal caching of large multi-dimensional datasets /." Philadelphia, Pa. : Drexel University, 2004. http://dspace.library.drexel.edu/handle/1860/307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Mahmood, Ahsan. "Caching Techniques in Next Generation Cellular Networks." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2711202.

Full text
Abstract:
Content caching will be an essential feature of the next generations of cellular networks. A network equipped with caching capabilities allows users to retrieve content with reduced access delay and consequently reduces the traffic passing through the network backhaul. However, the deployment of caching nodes is hindered by two challenges. First, the storage space of a cache is limited as well as expensive, so it is not possible to store every content that users might request; this calls for efficient techniques to determine which contents must be cached. Second, efficient ways are needed to implement and control the caching node. This thesis investigates caching techniques that address these challenges so as to increase overall system performance.

To tackle the limited storage capacity, smart proactive caching strategies are needed. In the context of vehicular users served by edge nodes, we believe a caching strategy should be adapted to the mobility characteristics of the cars. We therefore propose a scheme called RICH (RoadsIde CacHe), which optimally caches content at the edge nodes where connected vehicles require it most, and which is designed to ensure in-order delivery of content chunks to end users. Unlike blind popularity-based decisions, the probabilistic caching used by RICH considers vehicular trajectory predictions as well as the content service time of the edge nodes. We evaluate our approach on realistic mobility datasets against a popularity-based edge approach called POP and a mobility-aware caching strategy known as netPredict. In terms of content availability, RICH provides an enhancement of up to 33% and 190% compared with netPredict and POP, respectively, while the backhaul bandwidth penalty is reduced by a factor ranging between 57% and 70%.

The caching node is also a key component of Named Data Networking (NDN), an innovative paradigm for providing content-based services in future networks. Compared to legacy networks, the naming of network packets and the in-network caching of content make NDN well suited to content dissemination, but implementing NDN requires drastic changes to the existing network infrastructure. One feasible approach is Software Defined Networking (SDN), in which control of the network is delegated to a centralized controller that configures the forwarding data plane; this approach, however, entails large signaling overhead as well as large end-to-end (e2e) delays. To overcome these issues, we provide an efficient way to implement and control an NDN node: we enable NDN using a stateful data plane in the SDN network, realizing the functionality of an NDN node with a stateful SDN switch attached to a local cache for content storage, and we use OpenState to implement this approach. In our solution, no involvement of the controller is required once the OpenState switch has been configured. We benchmark our solution against the traditional SDN approach on several relevant metrics. Experimental results highlight the benefits of a stateful approach and of our implementation, which avoids signaling overhead and significantly reduces e2e delays.
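As a rough illustration of trajectory-aware probabilistic caching in the spirit of RICH (a simplification of ours, not the thesis's algorithm): an edge node caches each chunk with a probability driven by the demand predicted from vehicle trajectories.

```python
import random

def cache_decision(chunk_demand, service_capacity, rng=random.random):
    """An edge node caches a chunk with probability proportional to the
    demand predicted from vehicle trajectories, scaled by how much the
    node can actually serve; hot chunks are cached deterministically."""
    cached = {}
    total = sum(chunk_demand.values()) or 1.0
    for chunk, demand in chunk_demand.items():
        p = min(1.0, (demand / total) * service_capacity)
        cached[chunk] = rng() < p            # probabilistic, not popularity-blind
    return cached

# Hypothetical predicted chunk requests at one roadside node for a time slot.
demand = {"seg1": 120, "seg2": 80, "seg3": 10}
print(cache_decision(demand, service_capacity=2.0))
```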
APA, Harvard, Vancouver, ISO, and other styles