Doctoral dissertations on the topic "Web Caching"

Browse the 50 best doctoral dissertations on the topic "Web Caching".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when such details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.

1

Gu, Wenzheng. "Ubiquitous Web caching". [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002406.

2

Mahdavi, Mehregan (Computer Science & Engineering, Faculty of Engineering, UNSW). "Caching dynamic data for web applications". Awarded by: University of New South Wales, Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/32316.

Abstract:
Web portals are one of the rapidly growing applications, providing a single interface to access different sources (providers). The results from the providers are typically obtained by each provider querying a database and returning an HTML or XML document. Performance and in particular providing fast response time is one of the critical issues in such applications. Dissatisfaction of users dramatically increases with increasing response time, resulting in abandonment of Web sites, which in turn could result in loss of revenue by the providers and the portal. Caching is one of the key techniques that address the performance of such applications. In this work we focus on improving the performance of portal applications via caching. We discuss the limitations of existing caching solutions in such applications and introduce a caching strategy based on collaboration between the portal and its providers. Providers trace their logs, extract information to identify good candidates for caching and notify the portal. Caching at the portal is decided based on scores calculated by providers and associated with objects. We evaluate the performance of the collaborative caching strategy using simulation data. We show how providers can trace their logs and calculate cache-worthiness scores for their objects and notify the portal. We also address the issue of heterogeneous scoring policies by different providers and introduce mechanisms to regulate caching scores. We also show how portal and providers can synchronize their meta-data in order to minimize the overhead associated with collaboration for caching.
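The provider-side half of this scheme is easy to picture in code. Below is a hypothetical sketch, not the thesis's algorithm: the score (request frequency weighted by generation cost) and all names are illustrative. Each provider scores its objects from its access log and notifies the portal, which caches the highest-scoring objects.

    from collections import defaultdict

    def score_objects(access_log, gen_cost):
        """Compute a cache-worthiness score per object from a provider's log.
        access_log: iterable of object ids, one per request.
        gen_cost: object id -> cost of regenerating that object."""
        hits = defaultdict(int)
        for obj in access_log:
            hits[obj] += 1
        # Frequently requested, expensive-to-generate objects score highest.
        return {obj: n * gen_cost.get(obj, 1.0) for obj, n in hits.items()}

    def portal_cache_decision(provider_scores, capacity):
        """Portal keeps the highest-scoring objects that fit in its cache."""
        ranked = sorted(provider_scores.items(), key=lambda kv: kv[1], reverse=True)
        return {obj for obj, _ in ranked[:capacity]}

    # One provider traces its log and notifies the portal of its scores.
    scores = score_objects(["a", "b", "a", "c", "a", "b"],
                           {"a": 2.0, "b": 5.0, "c": 1.0})
    print(portal_cache_decision(scores, capacity=2))  # e.g. {'a', 'b'}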
3

Liang, Zhengang. "Transparent Web caching with load balancing". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59383.pdf.

4

Gupta, Priya (S.M., Massachusetts Institute of Technology). "Providing caching abstractions for web applications". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62453.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 99-101).
Web-based applications are used by millions of users daily, and as a result a key challenge facing web application designers is scaling their applications to handle this load. A crucial component of this challenge is scaling the data storage layer, especially for the newer class of social networking applications that have huge amounts of shared data. Caching is an important scaling technique and is a critical part of the storage layer for such high-traffic web applications. Usually, building caching mechanisms involves significant effort from the application developer to maintain and invalidate data in the cache. In this work we present CacheGenie, a system which aims to make it easy for web application developers to build caching mechanisms in their applications. It achieves this by proposing high-level caching abstractions for frequently observed query patterns in web applications. These abstractions take the form of declarative query objects, and once the developer defines them, she does not have to worry about managing the cache (i.e., insertion and deletion) or maintaining consistency (e.g., invalidation or updates) when writing application code. We designed and implemented CacheGenie in the popular Django web application framework, with PostgreSQL as the database backend and memcached as the caching layer. We use triggers inside the database to automatically invalidate or keep the cache synchronized, as desired by the developer. We have not made any modifications to PostgreSQL or memcached. To evaluate our prototype, we ported several Pinax web applications to use our caching abstractions and performed several experiments. Our results show that it takes little effort for application developers to use CacheGenie, and that caching provides a throughput improvement by a factor of 2-2.5 for read-mostly workloads.
by Priya Gupta.
S.M.
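As a rough, framework-free illustration of what such a caching abstraction buys the developer (the names below are invented; CacheGenie's real interface is a set of Django query objects backed by database triggers and memcached): declare the query once, and reads and invalidations need no hand-written cache code.

    cache = {}  # stand-in for memcached

    class CachedQuery:
        """Hedged sketch of a declarative caching abstraction: the developer
        declares the query once; lookups and invalidation happen behind it."""
        def __init__(self, name, fetch):
            self.name, self.fetch = name, fetch

        def get(self, key):
            ck = (self.name, key)
            if ck not in cache:                 # miss: run the query, fill cache
                cache[ck] = self.fetch(key)
            return cache[ck]

        def invalidate(self, key):
            # In CacheGenie, a database trigger fires this on writes.
            cache.pop((self.name, key), None)

    DB = {"alice": ["bob", "carol"]}            # stand-in for the real database
    friends_of = CachedQuery("friends", lambda uid: list(DB.get(uid, [])))

    print(friends_of.get("alice"))              # miss: reads the "database"
    DB["alice"].append("dave")                  # a write the cache must not ignore
    friends_of.invalidate("alice")              # trigger-driven in the real system
    print(friends_of.get("alice"))              # fresh: ['bob', 'carol', 'dave']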
5

Chiang, Cho-Yu. "On building dynamic web caching hierarchies". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488199501403111.

6

Arshinov, Alex. "Building high-performance web-caching servers". Thesis, De Montfort University, 2004. http://hdl.handle.net/2086/13257.

7

Logren Dély, Tobias. "Caching HTTP: A comparative study of caching reverse proxies Varnish and Nginx". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9679.

Abstract:
With the number of users on the web steadily increasing, websites must at times endure heavy loads and risk grinding to a halt beneath the flood of visitors. One solution to this problem is HTTP reverse proxy caching, in which a proxy acts as an intermediary between web application and user. Content from the application is stored and passed on, avoiding the need for the application to produce it anew for every request. One popular application designed solely for this task is Varnish; another interesting application for the task is Nginx, which is primarily designed as a web server. This thesis compares the performance of the two applications in terms of number of requests served in relation to response time, as well as system load and free memory. With both applications using their default configuration, the experiments find that Nginx performs better in the majority of tests performed. The difference is, however, very slight in tests with a low request rate.
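A minimal load-test harness in the spirit of this experiment might look as follows; it is illustrative only (the ports and request counts are assumptions, and the thesis used its own test setup): aim an identical workload at each proxy and compare response times.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def timed_get(url):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - t0

    def measure(url, requests=200, concurrency=20):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            times = list(pool.map(timed_get, [url] * requests))
        return statistics.mean(times), statistics.quantiles(times, n=100)[94]

    # Same workload against each reverse proxy; the ports are assumptions.
    for name, url in [("varnish", "http://localhost:6081/"),
                      ("nginx", "http://localhost:8080/")]:
        mean, p95 = measure(url)
        print(f"{name}: mean={mean:.4f}s p95={p95:.4f}s")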
8

Zou, Qing. "Transparent Web caching with minimum response time". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ65661.pdf.

9

Sherman, Alexander 1975. "Distributed web caching system with consistent hashing". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80121.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 63-64).
by Alexander Sherman.
10

Acharjee, Utpal. "Personalized and artificial intelligence Web caching and prefetching". Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27215.

Abstract:
Web caching and prefetching are the most popular and widely used solutions to remedy Internet performance problems. Performance is increased if a combination of caching and prefetching is used rather than either technique individually. Web caching reduces bandwidth consumption and network latency by serving the user's request from its own cache instead of the original Internet source. Prefetching is a technique that preloads and caches web objects that the user has not yet requested but can be expected to request in the near future. It provides low retrieval latency for users as well as high hit ratios. Existing methods for caching and prefetching are mostly traditional sharable proxy cache servers. In our personalized caching and prefetching approach, the system builds up a user profile of the user's web behaviour by parsing the keywords from HTML pages browsed by the user. The keywords of a user profile are updated by adding a new keyword or incrementing its associated weight if it is already in the profile. This user profile reflects the user's web behaviour and interests. In the cache and prefetch prediction module we considered both static and dynamic web behaviour. We designed and implemented an artificial-intelligence, multilayer neural network-based caching and prediction algorithm and personalized the Proteus proxy server with this mechanism. Enhanced Proteus is a multilingual, internationally supported proxy system and can work in both mobile and traditional shared proxy-server environments. In the prefetch option of Proteus, we also implemented a unique content-filtering feature that blocks the downloading of unwanted web objects.
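The profile update rule described above (add a keyword, or increment its weight if already present) amounts to a few lines of bookkeeping. The sketch below is an illustrative guess at that bookkeeping, not the Proteus implementation:

    import re
    from collections import Counter

    def update_profile(profile: Counter, html: str,
                       stopwords=frozenset({"the", "and", "a"})):
        """Add each keyword from a browsed page, or bump its weight if present."""
        words = re.findall(r"[a-z]+", html.lower())
        for w in words:
            if w not in stopwords and len(w) > 3:
                profile[w] += 1   # existing keyword: +1 weight; new keyword: 1
        return profile

    profile = Counter()
    update_profile(profile,
                   "<p>Web caching reduces latency; caching also saves bandwidth.</p>")
    print(profile.most_common(3))  # e.g. [('caching', 2), ('reduces', 1), ...]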
11

Rude, Howard Nathan. "Intelligent Caching to Mitigate the Impact of Web Robots on Web Servers". Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1482416834896541.

12

Liu, Binzhang (M.S.). "Characterizing Web Response Time". Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36741.

Abstract:
It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or Log-logistic distributions. We also characterize the effect of a user's network bandwidth on response time. Average connection time from a client via a 33.6K modem is two times longer than that from a client via switched Ethernet. We estimate the elapsed-time savings from using persistent connections for a page to vary from about a quarter to a half. Response times display strong daily and weekly patterns. This study finds that a proxy caching server is sensitive to traffic loads. Contrary to conventional wisdom about Web proxy caching, this study also finds that a single stand-alone Squid proxy cache does not always reduce response time for our workloads. Implications of these results for future versions of the HTTP protocol and for Web application design are also discussed.
Master of Science
13

Mertz, Jhonny Marcos Acordi. "Understanding and automating application-level caching". Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/156813.

Abstract:
Latency and cost of Internet-based services are encouraging the use of application-level caching to continue satisfying users' demands and to improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern that is unrelated to the application's business logic. As a result, application-level caching is a time-consuming and error-prone task, and a common source of bugs. This dissertation advances work on application-level caching by providing an understanding of its state of practice and by automating the decision regarding cacheable content, thus providing developers with substantial support to design, implement and maintain application-level caching solutions. More specifically, we provide three key contributions: structured knowledge derived from a qualitative study, a survey of the state of the art in static and adaptive caching approaches, and a technique and framework that automate the challenging task of identifying caching opportunities. The qualitative study, which involved the investigation of ten web applications (open-source and commercial) with different characteristics, allowed us to determine the state of practice of application-level caching, along with practical guidance to developers in the form of patterns and guidelines. Based on the derived patterns and guidelines, we also propose an approach to automate the identification of cacheable methods, which is often done manually and is not supported by existing approaches to application-level caching. We implemented a caching framework that can be seamlessly integrated into web applications to automatically identify and exploit caching opportunities at runtime, by monitoring system execution and adaptively managing caching decisions. We evaluated our approach empirically with three open-source web applications, and the results indicate that we can identify adequate caching opportunities, improving application throughput by up to 12.16%. Furthermore, our approach can prevent code tangling and raise the abstraction level of caching.
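A toy rendition of the runtime idea, under obvious simplifying assumptions (the thesis framework monitors far more than call counts): watch how often a method is invoked with repeated arguments and start caching it once it looks profitable.

    import time
    from functools import wraps

    def adaptive_cache(min_calls=3):
        """Sketch: cache a function only after monitoring suggests it pays off."""
        def deco(fn):
            stats, memo = {}, {}
            @wraps(fn)
            def wrapper(*args):
                if args in memo:
                    return memo[args]
                n = stats.get(args, 0) + 1
                stats[args] = n
                result = fn(*args)
                if n >= min_calls:        # seen often enough: treat as cacheable
                    memo[args] = result
                return result
            return wrapper
        return deco

    @adaptive_cache(min_calls=2)
    def render_sidebar(user_id):
        time.sleep(0.01)                  # stand-in for an expensive DB-backed render
        return f"<aside>sidebar for {user_id}</aside>"

    for _ in range(4):
        render_sidebar(42)                # third and fourth calls hit the cache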
14

Ramaswamy, Lakshmish Macheeri. "Towards Efficient Delivery of Dynamic Web Content". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7646.

Abstract:
Advantages of cache cooperation in edge cache networks serving dynamic web content were studied. The design of the cooperative edge cache grid, a large-scale cooperative edge cache network for delivering highly dynamic web content with varying server update frequencies, was presented. A cache-clouds-based architecture was proposed to promote low-cost cache cooperation in the cooperative edge cache grid. An Internet-landmarks-based scheme, called the selective landmarks-based server-distance-sensitive clustering scheme, for grouping edge caches into cooperative clouds was presented. A dynamic hashing technique for efficient, load-balanced, and reliable document lookups and updates was presented. A utility-based scheme for cooperative document placement in cache clouds was proposed. The proposed architecture and techniques were evaluated through trace-based simulations using both real-world and synthetic traces. Results showed that the proposed techniques provide significant performance benefits. A framework for automatically detecting cache-effective fragments in dynamic web pages was presented. Two types of fragments in web pages, namely shared fragments and lifetime-personalization fragments, were identified and formally defined. A hierarchical fragment-aware web page model, called the augmented-fragment tree model, was proposed. An efficient algorithm to detect maximal fragments that are shared among multiple documents was proposed. A practical algorithm for detecting fragments based on their lifetime and personalization characteristics was designed. The proposed framework and algorithms were evaluated through experiments on real web sites, and the effect of adopting the detected fragments on web caches and origin servers was experimentally studied.
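A crude stand-in for the shared-fragment idea (the thesis detects maximal shared fragments over an augmented-fragment tree; this sketch merely intersects coarse <div> blocks) conveys the core notion that content identical across pages is a good unit to cache once:

    def fragments(html: str, min_len: int = 20):
        """Very crude fragment extraction: split on top-level <div> boundaries."""
        return {p.strip() for p in html.split("</div>") if len(p.strip()) >= min_len}

    def shared_fragments(doc_a: str, doc_b: str):
        """Candidates to cache once and reuse across pages; the rest is the
        personalized remainder of each page."""
        return fragments(doc_a) & fragments(doc_b)

    page1 = "<div>global nav bar here</div><div>weather for Atlanta today</div>"
    page2 = "<div>global nav bar here</div><div>weather for Boston today</div>"
    print(shared_fragments(page1, page2))  # {'<div>global nav bar here'}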
15

De La Rosa Rivera, Eugenio. "RBQ: Congestion-adaptive cooperative caching for the World Wide Web". Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/298775.

Abstract:
The topic of this dissertation is the study of coordination strategies between Web caches, with special attention paid to the effects of cross-traffic congestion. Coordination can be in the form of Web caching cooperation or Content Delivery Networks. There are three major contributions made by this dissertation. First, a new coordination scheme, Restricted Broadcast Queries (RBQ), is proposed to address the problems observed in other coordination strategies in the presence of cross-traffic. RBQ's performance is evaluated using analytic and simulation tools, both in the presence and absence of cross-traffic. Secondly, an analytic model is developed that describes the basic behavior of Web cache coordination. Finally, Web cache coordination in the presence of cross-traffic is investigated using trace-driven simulation with the help of real proxy-based traces and synthetic traffic generators. RBQ lowers retrieval times by nearly 50% in the presence of intense cross-traffic congestion (up to 90% reduction in link bandwidth). In the absence of congestion, and for large number of cooperating caches, RBQ yields retrieval latencies 25% lower than other congestion-aware algorithms.
16

Wolman, Alastair. "Sharing and caching characteristics of Internet content". Thesis, University of Washington (UW restricted), 2002. http://hdl.handle.net/1773/6918.

17

Cheng, Kai. "Using Database Technology to Improve Performance of the Web : Caching and Beyond". 京都大学 (Kyoto University), 2002. http://hdl.handle.net/2433/149745.

18

Lee, David Chunglin. "Pre-fetch document caching to improve World-Wide Web user response time". Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/44951.

Abstract:

The World-Wide Web, or the Web, is currently one of the most heavily used network services. Because of this, improvements and new technologies are rapidly being developed and deployed. One important area of study is improving user response time through the use of caching mechanisms. Most prior work considered multiple user caches running on cache relay systems. These are mostly post-caching systems; they perform no "look ahead," or pre-fetch, functions. This research studies a pre-fetch caching scheme based on Web server access statistics. The scheme employs a least-recently-used replacement policy and allows multiple simultaneous document retrievals to occur. The scheme is based on a combined statistical and locality-of-reference model associated with the links in hypertext systems. Results show that cache hit rates are doubled relative to schemes that use only post-caching, while user response time improvements are mixed. The conclusion is that pre-fetch caching of Web documents offers an improvement over post-caching methods and should be studied in detail for both single-user and multi-user systems.


Master of Science
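In the spirit of the scheme (a hedged sketch with invented names, using link-follow statistics as the access model): an LRU cache that, on each request, also pre-fetches the documents most often requested next.

    from collections import OrderedDict

    class PrefetchLRUCache:
        """Sketch of pre-fetch caching with LRU replacement."""
        def __init__(self, capacity, fetch, next_doc_stats):
            self.capacity, self.fetch = capacity, fetch
            self.next_doc_stats = next_doc_stats  # url -> likely next urls
            self.cache = OrderedDict()

        def _store(self, url, body):
            self.cache[url] = body
            self.cache.move_to_end(url)
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used

        def get(self, url):
            if url in self.cache:
                self.cache.move_to_end(url)       # refresh recency on a hit
            else:
                self._store(url, self.fetch(url))
            # Pre-fetch documents that server statistics say usually follow.
            for nxt in self.next_doc_stats.get(url, []):
                if nxt not in self.cache:
                    self._store(nxt, self.fetch(nxt))
            return self.cache[url]

    stats = {"/index": ["/news", "/about"]}
    c = PrefetchLRUCache(8, fetch=lambda u: f"<html>{u}</html>",
                         next_doc_stats=stats)
    c.get("/index")            # also warms /news and /about
    print("/news" in c.cache)  # True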
19

Zych, Marc E. "An Analysis of Generational Caching Implemented in a Production Website". DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1002.

Abstract:
Website scaling has been an issue since the inception of the web. The demand for user generated content and personalized web pages requires the use of a database for a storage engine. Unfortunately, scaling the database to handle large amounts of traffic is still a problem many companies face. One such company is iFixit, a provider of free, publicly-editable, online repair manuals. Like many websites, iFixit uses Memcached to decrease database load and improve response time. However, the caching strategy used is a very ad hoc one and therefore can be greatly improved. Most research regarding web application caching focuses on cache invalidation, the process of keeping cached content consistent with the permanent data store. Generational caching is a technique that involves including the object’s last modified date in the cache key. This ensures that cache keys change whenever the object is modified, which effectively invalidates all relevant cache entries. Fine-grained caching is now very simple because the developer doesn’t need to explicitly delete all possible cache keys that might need to be invalidated when an object is modified. This is particularly useful for caching arbitrary fragments of HTML without increasing the complexity of cache invalidation. In this work, we describe the process of implementing a caching strategy based on generational caching concepts in iFixit’s website. Our implementation yielded a 20% improvement in page response time by caching fragments of HTML and results of database queries.
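The core trick can be shown in a few lines. This is a hedged sketch with illustrative names, not iFixit's code: embedding the object's last-modified timestamp in the cache key means any update changes the key, so stale entries are never referenced again.

    import time

    cache = {}  # stand-in for Memcached

    class Guide:
        def __init__(self, guide_id):
            self.id = guide_id
            self.modified_at = time.time()
        def touch(self):
            self.modified_at = time.time()   # any write bumps the timestamp

    def generational_key(obj, fragment):
        # Key changes whenever the object changes; old entries go unreferenced.
        return f"guide:{obj.id}:{obj.modified_at}:{fragment}"

    def render_title(guide):
        key = generational_key(guide, "title_html")
        if key not in cache:
            cache[key] = f"<h1>Guide {guide.id}</h1>"  # expensive render stand-in
        return cache[key]

    g = Guide(7)
    render_title(g)   # cached under the current generation
    g.touch()         # "edit": new timestamp => new key, old entry is dead weight
    render_title(g)   # re-rendered and cached under the new key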
20

Wu, Haoming. "Least relative benefit algorithm for caching continuous media data at the Web proxy". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59413.pdf.

21

Suh, Peter. "Performance study of web caching algorithms in a home network with video traffic". Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1595795.

Abstract:

Internet traffic is growing at a significant pace, especially video traffic, which already accounts for more than 50% of total internet traffic. The reasons for this trend are the increasing number of mobile devices, the shifting focus from traditional television media to online content, and advances in internet technology. Internet infrastructure is facing challenges as it tries to meet the demand for video traffic. One solution is web caching, an area of research with many different proposed algorithms and implementations. This thesis examines the performance of a set of algorithms using the popular web caching software Squid. The tests are done using a web automation tool, Selenium, to browse the web from a home network in three different scenarios: checking web content, videos, and a mix of web content and videos.

22

Vishwasrao, Saket Dilip. "Performance Evaluation of Web Archiving Through In-Memory Page Cache". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78252.

Abstract:
This study proposes and evaluates a new method for Web archiving. We leverage the caching infrastructure in Web servers for archiving. Redis is used as the page cache and its persistence mechanism is exploited for archiving. We experimentally evaluate the performance of our archival technique using the Greek version of Wikipedia deployed on Amazon cloud infrastructure. We show that there is a slight increase in latencies of the rendered pages due to archiving. Though the server performance is comparable at larger page cache sizes, the maximum throughput the server can handle decreases significantly at lower cache sizes due to more disk write operations as a result of archiving. Since pages are dynamically rendered and the technology stack of Wikipedia is extensively used in a number of Web applications, our results should have broad impact.
Master of Science
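A minimal sketch of the mechanism, assuming a local Redis server with snapshot or AOF persistence enabled (the archive key layout below is an invention, not the thesis's schema): rendered pages go into the Redis page cache, and its persistence doubles as the archive.

    import time

    import redis  # pip install redis; assumes a local server with persistence on

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def render(page_id: str) -> str:
        return f"<html>rendered {page_id} at {time.time()}</html>"  # stand-in

    def get_page(page_id: str) -> str:
        """Serve from the Redis page cache; every cached copy is also archived,
        because Redis persistence writes it to disk."""
        key = f"page:{page_id}"
        body = r.get(key)
        if body is None:
            body = render(page_id)
            r.set(key, body)
            # Keep a timestamped copy so old versions survive for the archive.
            r.set(f"archive:{page_id}:{int(time.time())}", body)
        return body

    print(get_page("Main_Page"))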
23

Doswell, Felicia. "Improving Network Performance and Document Dissemination by Enhancing Cache Consistency on the Web Using Proxy and Server Negotiation". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28735.

Abstract:
Use of proxy caches in the World Wide Web is beneficial to the end user, network administrator, and server administrator, since it reduces the amount of redundant traffic that circulates through the network. In addition, end users get quicker access to documents that are cached. However, the use of proxies introduces additional issues that need to be addressed. In particular, there is growing concern over how to maintain cache consistency and coherency among cached versions of documents. The existing consistency protocols used in the Web are proving insufficient to meet the growing needs of the Internet population. For example, too many messages sent over the network are due to caches guessing when their copy is inconsistent. One option is to apply the cache coherence strategies already in use for many other distributed systems, such as parallel computers. However, these methods are not satisfactory for the World Wide Web due to its larger size and more diverse access patterns. Many decisions must be made when exploring World Wide Web coherency, such as whether to provide consistency at the proxy level (client pull) or to allow the server to handle it (server push). What trade-offs are inherent in each of these decisions? The suitability of any method strongly depends upon the conditions of the network (e.g., document types that are frequently requested or the state of the network load) and the resources available (e.g., disk space and type of cache available). Version 1.1 of HTTP is the first protocol version to give explicit rules for consistency on the Web. Many proposed algorithms require changes to HTTP/1.1; however, this is not necessary to provide a suitable solution. One goal of this dissertation is to study the characteristics of document retrieval and modification to determine their effect on proposed consistency mechanisms. A set of effective consistency policies is identified from the investigation. The main objective of this dissertation is to use these findings to design and implement a consistency algorithm that provides improved performance over the current mechanisms proposed in the literature. Ideally, we want an algorithm that provides strong consistency; however, we do not want to further degrade the network or place undue burden on the server to gain this advantage. We propose a system based on the notion of soft state and on server push, in which the proxy has some influence on what state information is maintained at the server (spatial consideration) as well as how long the information is maintained (temporal consideration). We perform a benchmark study of the performance of the new algorithm in comparison with existing proposed algorithms. Our results show that the Synchronous Nodes for Consistency (SINC) framework provides an average of 20% control-message savings by limiting how much polling occurs compared with the current Web cache consistency mechanism, Adaptive Client Polling. In addition, the algorithm shows 30% savings on state-space overhead at the server by limiting the amount of per-proxy and per-document state information required at the server.
Ph. D.
24

Guerrero, Tomé Carlos. "Adaptive fragment designs to reduce the latency of web caching in content aggregation systems". Doctoral thesis, Universitat de les Illes Balears, 2012. http://hdl.handle.net/10803/84072.

25

Balamash, Abdullah S. "Web traffic modeling and its application in the design of caching and prefetching systems". Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280586.

Abstract:
Network congestion remains one of the main barriers to the continuing success of the Internet. For web users, congestion manifests itself in unacceptably long response times. One possible remedy to the latency problem is to use caching at the client, at the proxy server, or even within the Internet. However, documents on the World Wide Web (WWW) are becoming increasingly dynamic (i.e., have short lifetimes), which limits the potential benefit of caching. The performance of a WWW caching system can be dramatically increased by integrating document prefetching (a.k.a., "proactive caching") into its design. Prefetching reduces the perceived user response time, but it also increases the network load, which in turn may increase the response time. One main goal of this dissertation is to investigate this tradeoff through a mathematical model of a WWW caching/prefetching system, and to demonstrate how such a model can be used in building a real prefetching system. In our model, the client cache consists of a "regular" cache for on-demand requests and a "prefetching cache" for prefetched requests. A pool of clients connect to a proxy server through bandwidth-limited dedicated lines (e.g., dialup phone lines). The proxy server implements its own caching system. Forecasting of future documents is performed at the client based on the client's access profile and using hints from servers. Our analysis sheds light on the interesting tradeoff between aggressive and conservative prefetching, and can be used to optimize the parameters of a combined caching/prefetching system. We validate our model through simulation. From the analysis and/or simulation, we find that: (1) prefetching all documents whose access probabilities exceed a given threshold value may, surprisingly, degrade the delay performance, (2) the variability of WWW file sizes has a detrimental impact on the effectiveness of prefetching, and (3) coexistence between caching and prefetching is, in general, beneficial for the overall performance of the system, especially under heavy load. Ideally, a caching/prefetching system should account for the intrinsic characteristics of WWW traffic, which include temporal locality, spatial locality, and popularity. A second contribution of this dissertation is in constructing a stochastic model that accurately captures these three characteristics. Such a model can be used to generate synthetic WWW traces and assess WWW caching/prefetching designs. To capture temporal and spatial localities, we use a modified version of Riedi et al.'s multifractal model, where we reduce the complexity of the original model from O(N) to O(1); N being the length of the synthetic trace. Our model has the attractiveness of being parsimonious (characterized by few parameters) and that it avoids the need to apply a transformation to a self-similar model (as often done in previously proposed models), thus retaining the temporal locality of the fitted traffic. Furthermore, because of the scale-dependent nature of multifractal processes, the proposed model is more flexible than monofractal (self-similar) models in describing irregularities in the traffic at various time scales.
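The first finding has a compact intuition: prefetching inflates traffic, and the resulting queueing delay can outweigh the hit-ratio gain. The toy M/M/1 model below (not the dissertation's model; all parameters are invented) reproduces that qualitative behavior.

    def mean_delay(service_time, utilization):
        """M/M/1 mean response time; blows up as utilization approaches 1."""
        assert 0 <= utilization < 1
        return service_time / (1.0 - utilization)

    def delay_with_prefetch(base_rate, prefetch_fraction, hit_gain, service_time):
        """Toy model: prefetching converts a share of misses into hits
        (hit_gain per unit of prefetching) but inflates traffic, and hence
        link utilization, by prefetch_fraction."""
        utilization = base_rate * service_time * (1.0 + prefetch_fraction)
        miss_ratio = max(0.0, 1.0 - hit_gain * prefetch_fraction)
        return miss_ratio * mean_delay(service_time, utilization)

    # With these (invented) numbers, more prefetching only makes delay worse:
    # the utilization growth outpaces the hit-ratio gain.
    for f in (0.0, 0.25, 0.5, 1.0):
        print(f, round(delay_with_prefetch(base_rate=20, prefetch_fraction=f,
                                           hit_gain=0.3, service_time=0.02), 4))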
26

Abdulla, Ghaleb. "Analysis and Modeling of World Wide Web Traffic". Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30470.

Abstract:
This dissertation deals with monitoring, collecting, analyzing, and modeling World Wide Web (WWW) traffic and client interactions. The rapid growth of WWW usage has not been accompanied by an overall understanding of models of information resources and their deployment strategies. Consequently, the current Web architecture often faces performance and reliability problems. Scalability, latency, bandwidth, and disconnected operation are some of the important issues that should be considered when attempting to adjust for the growth in Web usage. The WWW Consortium launched an effort to design a new protocol that will be able to support future demands. Before doing that, however, we need to characterize current users' interactions with the WWW and understand how it is being used. We focus on proxies since they provide a good medium for caching, filtering information, payment methods, and copyright management. We collected proxy data from our environment over a period of more than two years. We also collected data from other sources such as schools, information service providers, and commercial sites. Sampling times range from days to years. We analyzed the collected data looking for important characteristics that can help in designing a better HTTP protocol. We developed a modeling approach that considers Web traffic characteristics such as self-similarity and long-range dependency. We developed an algorithm to characterize users' sessions. Finally, we developed a high-level Web traffic model suitable for sensitivity analysis. As a result of this work we develop statistical models of parameters such as arrival times, file sizes, file types, and locality of reference. We describe an approach to model long-range-dependent Web traffic, and we characterize the activities of users accessing a digital library courseware server or Web search tools. Temporal and spatial locality of reference within the examined user communities is high, so caching can be an effective tool to help reduce network traffic and to help solve the scalability problem. We recommend utilizing our findings to promote a smart distribution or push model to cache documents when there is a likelihood of repeat accesses.
Ph. D.
27

Vellanki, Vivekanand. "Extending caching for two applications : disseminating live data and accessing data from disks". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/9243.

28

Zhang, Zhen. "An integrated prefetching and caching algorithm for Web proxies using a correlation-based prediction model". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0010/MQ61522.pdf.

29

John, Nitin Abraham. "A study of replicated and distributed web content". Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-0810102-160719.

30

Evans, David. "Resource Management for Delivery of Dynamic Information". Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1056.

Abstract:
Information delivery via the web has become very popular. Along with a growing user population, systems increasingly are supporting content that changes frequently, personalised information, and differentiation and choice. This thesis is concerned with the design and evaluation of resource management strategies for such systems. An architecture that provides scalability through caching proxies is considered. When a cached page is updated at the server, the cached copy may become stale if the server is not able to transmit the update to the proxies immediately. From the perspective of the server, resources are required to transmit updates for cached pages and to process requests for pages that are not cached. Analytic results on how the available resources should be managed in order to minimise staleness-related cost are presented. An efficient algorithm that the server can use to determine the set of pages that should be cached and a policy for transmitting updates for these pages are also presented. We then apply these results to page fragments, a technique that can provide increased efficiency for delivery of personalised pages.
31

Callahan, Thomas Richard. "A Longitudinal Evaluation of HTTP Traffic". Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1333637889.

32

Handfast, Benny. "Utvärdering av cachningsalgoritm för dynamiskt genererade webbsidor". Thesis, University of Skövde, School of Humanities and Informatics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-951.

Abstract:

Web servers on the Internet today serve their users dynamic web pages generated with the help of database systems. This has led to a heavy load on web servers, and one method of reducing that load is caching. This work implements and tests a specific caching algorithm, called Online View Selection, in a web-game scenario. A potential problem is identified in the algorithm that can lead to stale information being delivered to the client, and the algorithm is modified to handle this problem. The test results show that the modified algorithm and the original deliver equivalent performance. The modified algorithm is shown to work, but the problem with the original algorithm rarely arises in the web-game scenario.

33

Dickinson, Luke Austin. "Certificate Revocation Table: Leveraging Locality of Reference in Web Requests to Improve TLS Certificate Revocation". BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7010.

Abstract:
X.509 certificate revocation defends against man-in-the-middle attacks involving a compromised certificate. Certificate revocation strategies face scalability, effectiveness, and deployment challenges as HTTPS adoption rates have soared. We propose Certificate Revocation Table (CRT), a new revocation strategy that is competitive with or exceeds alternative state-of-the-art solutions in effectiveness, efficiency, certificate growth scalability, mass revocation event scalability, revocation timeliness, privacy, and deployment requirements. The CRT periodically checks the revocation status of X.509 certificates recently used by an organization, such as clients on a university's private network. By prechecking the revocation status of each certificate the client is likely to use, the client can avoid the security problems of on-demand certificate revocation checking. To validate both the effectiveness and efficiency of using a CRT, we used 60 days of TLS traffic logs from Brigham Young University to measure the effects of actively refreshing certificates for various certificate working set window lengths. Using a certificate working set window size of 45 days, an average of 99.86% of the TLS handshakes from BYU would have revocation information cached in advance using our approach. Revocation status information can be initially downloaded by clients with a 6.7 MB file and then subsequently updated using only 205.1 KB of bandwidth daily. Updates to this CRT that only include revoked certificates require just 215 bytes of bandwidth per day.
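The working-set idea reduces to a small amount of bookkeeping. The sketch below uses invented structures (the thesis's CRT format and update protocol are richer): remember each certificate seen within the window and refresh the revocation status of exactly that set daily.

    import time

    WINDOW = 45 * 24 * 3600  # 45-day certificate working set

    last_seen = {}    # cert fingerprint -> unix time of last TLS handshake
    revoked = set()   # prechecked revocation status for the working set

    def observe_handshake(fingerprint: str, now=None):
        last_seen[fingerprint] = now if now is not None else time.time()

    def daily_refresh(check_revoked, now=None):
        """Drop certs outside the window, then recheck the rest (e.g. CRL/OCSP)."""
        now = now if now is not None else time.time()
        for fp in [fp for fp, t in last_seen.items() if now - t > WINDOW]:
            del last_seen[fp]
            revoked.discard(fp)
        for fp in last_seen:
            if check_revoked(fp):
                revoked.add(fp)

    def is_revoked(fingerprint: str) -> bool:
        # Hit: answer instantly from the prechecked table, no on-demand lookup.
        return fingerprint in revoked

    observe_handshake("ab:cd:ef")
    daily_refresh(check_revoked=lambda fp: fp == "ab:cd:ef")  # pretend CA revoked it
    print(is_revoked("ab:cd:ef"))  # True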
34

Deshayes, Dan, and Simon Sedvallsson. "Nätverksoptimering med öppen källkod : En studie om nätverksoptimering för sjöfarten". Thesis, Linnéuniversitetet, Sjöfartshögskolan (SJÖ), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-43379.

Abstract:
The thesis describes how network traffic transmitted via a satellite link can be optimized in order to reduce loading times and the amount of transmitted data. The purpose of this study has been to determine which methods are available to control and reduce the amount of data transmitted through a network and how this data is affected. By applying DNS caching, web caching and ad blocking, with pfSense as a platform, the study performed experiments targeting different web sites and measured the loading times and the amount of transmitted data. The results showed good possibilities for optimizing the network traffic: the measured values indicated a reduction of network traffic by up to 94% and of loading times by 67%.
35

Mehamel, Sarra. "New intelligent caching and mobility strategies for MEC /ICN based architectures". Electronic Thesis or Diss., Paris, CNAM, 2020. http://www.theses.fr/2020CNAM1284.

Abstract:
Mobile edge computing (MEC) proposes to bring computing and storage resources into close proximity to end users by placing these resources at the network edge. The motivation is to alleviate the mobile core and to reduce latency for mobile users. MEC servers are candidates to host mobile applications and serve web content. Edge caching is one of the most prominent emerging technologies recognized as a content retrieval solution at the edge of the network. It has also been considered an enabling technology of mobile edge computing, since it presents an interesting opportunity to perform caching services. In particular, MEC servers are implemented directly at the base stations, which enables edge caching and ensures deployment in close proximity to mobile users. However, the integration of servers in a mobile edge computing environment (base stations) complicates the energy-saving issue, because the power consumed by MEC servers is costly, especially when the load changes dynamically over time. Furthermore, users with mobile devices are raising their demands, introducing the challenge of handling such mobile content requests within a limited cache size. It is therefore necessary and crucial for caching mechanisms to consider context-aware factors, whereas most existing studies focus on cache allocation, content popularity and cache design. In this thesis, we present a novel energy-efficient fuzzy caching strategy for edge devices that takes into consideration four influencing features of the mobile environment, while introducing a hardware implementation using a Field-Programmable Gate Array (FPGA) to cut the overall energy requirements. Performing an adequate caching strategy on MEC servers opens the possibility of employing artificial intelligence (AI) and machine learning techniques at mobile network edges. Exploiting users' context information intelligently makes it possible to design an intelligent context-aware mobile edge cache. Context awareness enables the cache to be aware of its environment, while intelligence enables each cache to make the right decisions in selecting appropriate contents to cache so as to maximize the caching performance. Inspired by the success of reinforcement learning (RL), which uses agents to deal with decision-making problems, we extended our fuzzy caching system into a modified reinforcement learning model. The proposed framework aims to maximize the cache hit rate and requires awareness of both web conditions and the end user. The modified RL differs from other RL algorithms in its learning rate, which uses stochastic gradient descent, besides taking advantage of learning from the optimal caching decision obtained from the fuzzy rules.
36

Marang, Ah Zau. "Analysis of web performance optimization and its impact on user experience". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231445.

Abstract:
User experience (UX) is one of the most popular subjects in the industry nowadays and plays a significant role in business success. As the growth of a business depends on customers, it is essential to emphasize UX, which can help to enhance customer satisfaction. It has been claimed that the overall end-user experience is to a great extent influenced by page load time, and that UX is primarily associated with the performance of applications. This paper analyzes the effectiveness of performance optimization techniques and their impact on user experience. The web performance optimization techniques used in this study were caching data, issuing fewer HTTP requests, Web Workers, and prioritizing content. A profiling method, Manual Logging, was utilized to measure performance improvements. A UX survey consisting of the User Experience Questionnaire (UEQ) and three qualitative questions was conducted for UX testing before and after the performance improvements. Quantitative and qualitative methods were used to analyze the collected data. Implementations and experiments in this study are based on an existing tool, a web-based application. Evaluation results show an improvement of 45% in app load time but no significant impact on user experience after the performance optimizations, which suggests that web performance does not really matter for the user experience in this setting. Limitations of the performance techniques and other factors that influence performance were found during the study.
37

Congo, Faïçal Yannick Palingwendé. "Proposition d'un modèle pour la représentation de contexte d'exécution de simulations informatiques à des fins de reproductibilité". Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC093/document.

Abstract:
Computational reproducibility is an unavoidable concept in the 21st century. Computer hardware evolutions have driven a growing interest into the concept of reproducibility within the scientificcommunity. Simulation experts press that this concept is strongly correlated to the one ofverification, confirmation and validation either may it be for research results credibility or for theestablishment of new knowledge. Reproducibility is a very large domain. Within the area ofnumerical and computational Science, we aim to ensure the verification of research dataprovenance and integrity. Furthermore, we show interest on the precise identification ofoperating systems parameters, compilation options and simulation models parameterizationwith the goal of obtaining reliable and reproducible results on modern computer architectures.To be able to consistently reproduce a software, some basic information must be collected.Among those we can cite the operating system, virtualization environment, the softwarepackages used with their versions, the hardware used (CPU, GPU, many core architectures suchas the former Intel Xeon Phi, Memory, …), the level of parallelism and eventually the threadsidentifiers, the status of pseudo-random number generators, etc. In the context of scientificcomputing, even obvious, it is currently not possible to consistently gather all this informationdue to the lack of a common model and standard to define what we call here execution context.A scientific software that runs in a computer or a computing node, either as a cluster node, a gridcluster or a supercomputer possesses a unique state and execution context. Gatheringinformation about the latter must be complete enough that it can be hypothetically used toreconstruct an execution context that will at best be identical to the original. This of course whileconsidering the execution environment and the execution mode of the software. Our effortduring this journey can be summarized as seeking an optimal way to both ease genuine access toreproducibility methods to scientists and aim to deliver a method that will provide a strictscientific numerical reproducibility. Moreover, our journey can be laid out around three aspects.The first aspect involves spontaneous efforts in collaborating either to bring awareness or toimplement approaches to better reproducibility of research projects. The second aspect focusesin delivering a unifying execution context model and a mechanism to federate existingreproducibility tools behind a web platform for World Wide access. Furthermore, we investigateapplying the outcome of the second aspect to research projects. Finally, the third aspect focusesin completing the previous one with an approach that guarantees an exact numerical reproducibility of research results
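As a rough illustration of what such an execution-context record could contain, the sketch below captures a few of the fields the abstract enumerates (operating system, package versions, hardware, PRNG state) in Python; the field set and names are hypothetical, not the unifying model proposed in the thesis.

```python
# Illustrative sketch of an execution-context record in Python; the field
# set and names are hypothetical, not the thesis's model.
import hashlib
import json
import os
import platform
import random
import sys
from dataclasses import asdict, dataclass, field


@dataclass
class ExecutionContext:
    os_name: str = field(default_factory=platform.platform)
    python_version: str = field(default_factory=platform.python_version)
    machine: str = field(default_factory=platform.machine)
    cpu_count: int = 0
    packages: dict = field(default_factory=dict)  # package name -> version
    rng_state_digest: str = ""                    # hash of the PRNG state

    @staticmethod
    def capture() -> "ExecutionContext":
        digest = hashlib.sha256(repr(random.getstate()).encode()).hexdigest()
        return ExecutionContext(
            cpu_count=os.cpu_count() or 1,
            packages={"python": sys.version.split()[0]},
            rng_state_digest=digest,
        )


if __name__ == "__main__":
    # Serialize the captured context so it can be archived with the results.
    print(json.dumps(asdict(ExecutionContext.capture()), indent=2))
```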
Style APA, Harvard, Vancouver, ISO itp.
38

Holsby, Isak. "The Installation Process of a Progressive Web App : Studying the Impact of "Add to Home screen"". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-44415.

Pełny tekst źródła
Streszczenie:
Progressive Web Apps (PWA) are a concept of enhanced web apps that aim to erase the difference between web and native apps. The concept introduces several benefits, including simpler distribution and development, which make web apps a viable option for many businesses. The installation process of a PWA differs from that of native apps, and this study aims to understand whether it affects users' willingness to install the PWA. PWAs are installed directly from the browser rather than from an app marketplace, and this installation process is suspected to be unfamiliar to many users. In this study, several papers on the topic are reviewed. A PWA is developed from scratch and used as a platform for a user test, hosting a brief introduction to the topic as well as guiding participants through the installation process. In conjunction with the user test, a survey is conducted to collect participants' impressions of their experience. The survey results indicate that the suspected lack of knowledge and experience was real, and that the installation process is not too complicated. Additionally, the results show that many users probably will not bother to install a PWA even if it is available. Therefore, I argue that the installation process does have an impact in its current form. Alternatives to the installation process used in this study exist and are discussed in this paper.
Style APA, Harvard, Vancouver, ISO itp.
39

Molina, Moreno Benjamin. "Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming". Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/31637.

Pełny tekst źródła
Streszczenie:
This thesis was created within the research line on content distribution mechanisms in IP networks, which has carried out its activity in various research projects, in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the doctoral program "Telecomunicaciones" taught by the Communications Department of the UPV and, currently, in the Master's program in Communication Technologies, Systems and Networks. The growth of the Internet is widely known, both in number of clients and in generated traffic. This brings clients a multimedia interface where data, voice, video, music, etc. can converge. While this represents a business opportunity along multiple dimensions, scalability must be addressed seriously: the average performance of a system should not degrade as the number of clients or the volume of requested information increases. The study and analysis of web and streaming content distribution using CDNs is the object of this work. The approach is a general one, ignoring network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to the introduction of the application layer as the coordinating framework for content distribution. Among such networks, also called overlay networks, a Content Delivery Network (CDN) was chosen. These application-level networks are highly scalable and allow full control over the resources and functionality of all elements of their architecture. This makes it possible to evaluate the performance of a CDN that distributes multimedia content in terms of required bandwidth, response time experienced by clients, perceived quality, distribution mechanisms, time-to-live when caching is used, etc. CDNs were born at the end of the 1990s with the main objective of eliminating or attenuating the so-called flash-crowd effect caused by a massive influx of clients. Currently, this type of network directs most of its efforts toward offering streaming media over the Internet. For a detailed analysis, this thesis proposes an initial simplified CDN model, both theoretical and practical. On the theoretical side, a mathematical model that allows a CDN to be evaluated analytically is presented. This model becomes considerably more complex as new functionality is introduced, so a simulation model is also proposed and developed that allows, on the one hand, verifying the validity of the mathematical framework and, on the other hand, establishing a comparative framework for the practical implementation of the CDN, a task carried out in the final phase of the thesis. In this way, the results obtained cover theory, simulation and practice.
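To make the redirection idea concrete, here is a minimal sketch of one possible user-redirection rule: send the client to the surrogate minimizing a weighted mix of measured latency and current load. The cost model, weights and node names are illustrative assumptions, not the redirection algorithm developed in the thesis.

```python
# Minimal sketch of a CDN redirection decision: pick the surrogate with
# the lowest weighted latency/load cost. Parameters are illustrative.
from dataclasses import dataclass


@dataclass
class Surrogate:
    name: str
    rtt_ms: float   # measured client <-> surrogate round-trip time
    load: float     # current utilization, 0.0 (idle) .. 1.0 (saturated)


def redirect(surrogates, alpha=0.7):
    """Pick the surrogate with the lowest weighted latency/load cost."""
    def cost(s: Surrogate) -> float:
        return alpha * s.rtt_ms + (1 - alpha) * 100.0 * s.load
    return min(surrogates, key=cost)


if __name__ == "__main__":
    pool = [Surrogate("madrid", 12.0, 0.9),
            Surrogate("valencia", 18.0, 0.2),
            Surrogate("paris", 35.0, 0.1)]
    print(redirect(pool).name)  # -> "valencia" with these numbers
```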
Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
Style APA, Harvard, Vancouver, ISO itp.
40

Fabbrini, Nicola. "EventO: applicazione mobile per la gestione di eventi". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Znajdź pełny tekst źródła
Streszczenie:
This thesis proposes the design and development of a mobile application that lets users manage their group events. In large groups it is often difficult to find a day on which all members are available, so an application was devised to help determine the days on which everyone, or most members, are free. In addition, the system lets users manage their circle of friends and their friend groups, keeps event information synchronized among all participants, and improves the user experience by using local caching to show data to the user as quickly as possible.
Style APA, Harvard, Vancouver, ISO itp.
41

Zhang, Ruiyang. "Cache Design for Massive Heterogeneous Data of Mobile Social Media". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175759.

Pełny tekst źródła
Streszczenie:
As social media gains ever-increasing popularity, Online Social Networks (OSNs) have become important repositories for information retrieval. The concept of social search is therefore gradually being recognized as the next breakthrough in this field, and it is expected to dominate topics in industry. However, retrieving information from OSNs with a high Quality of Experience is non-trivial, given the prevalence of mobile applications for social networking services. To shorten user-perceived latency, Web caching was introduced and has been studied extensively for years. Nevertheless, previous works seldom focus on Web caching solutions for social search. In this master's thesis project, emphasis is given to the design of a Web caching system for public data from social media, with the objective of improving user experience in terms of data freshness and perceived service latency. Specifically, a Web caching strategy named the Staleness-Bounded LRU (SB-LRU) algorithm is proposed to limit the term of validity of cached data, and a Two-Level Web Caching System that adopts SB-LRU is proposed to shorten user-perceived latency. Results of trace-driven simulations and performance evaluations demonstrate that serving clients with stale data is avoided and user-perceived latencies are significantly shortened when the proposed system is used for unauthenticated social search. The design idea is also believed to be applicable to a Web caching system for social search that caches user-specific data for different clients.
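A minimal sketch of how a staleness-bounded LRU might work, assuming SB-LRU means an LRU cache whose entries stop being served once they exceed a staleness bound; this is one plausible reading of the abstract, not necessarily the thesis's exact algorithm.

```python
# Minimal sketch of a staleness-bounded LRU: a cached entry older than
# `max_age` seconds is treated as a miss and evicted, so clients are
# never served data staler than the bound.
import time
from collections import OrderedDict


class SBLRUCache:
    def __init__(self, capacity: int, max_age: float):
        self.capacity = capacity
        self.max_age = max_age
        self._data = OrderedDict()   # key -> (value, fetched_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, fetched_at = item
        if time.monotonic() - fetched_at > self.max_age:
            del self._data[key]      # too stale: evict and report a miss
            return None
        self._data.move_to_end(key)  # refresh LRU recency
        return value

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = (value, time.monotonic())
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```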
Style APA, Harvard, Vancouver, ISO itp.
42

Ye, Zakaria. "Analyse de Performance des Services de Vidéo Streaming Adaptatif dans les Réseaux Mobiles". Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0219/document.

Pełny tekst źródła
Streszczenie:
Due to the growth of video traffic over the Internet in recent years, HTTP Adaptive Streaming (HAS) has become the most popular streaming technology, having been successfully adopted by the different actors in the Internet video ecosystem. It allows service providers to use traditional stateless web servers and mobile edge caches for streaming videos, and it allows users to access media content from behind firewalls and NATs. In this thesis we focus on the design of a novel video streaming delivery solution called Backward-Shifted Coding (BSC), a complementary solution to Dynamic Adaptive Streaming over HTTP (DASH), the standard version of HAS. We first describe the Backward-Shifted Coding architecture, based on multi-layer Scalable Video Coding (SVC), and discuss the implementation of the BSC protocol in a DASH environment. We then perform an analytical evaluation of Backward-Shifted Coding using results from queueing theory. The analytical results show that BSC considerably decreases video playback interruption, which is the worst event users can experience during a video session. We therefore design bitrate adaptation algorithms to enhance the users' Quality of Experience (QoE) in a DASH/BSC system. The results of the proposed adaptation algorithms show that the flexibility of BSC allows us to improve both the video quality and the variations of quality during the streaming session, even when the user's bandwidth is highly unstable. Finally, we propose new caching policies for video content encoded with SVC. Indeed, in a DASH/BSC system, cache servers are deployed to bring content close to users in order to reduce network latency and improve user-perceived experience. We use Linear Programming to obtain the optimal static cache composition and compare it with the results of our proposed algorithms. We show that these algorithms increase the system's overall hit ratio and offload the backhaul links by decreasing the content fetched from the origin web servers.
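As an illustration of the cache-composition problem, the greedy sketch below fills a cache with SVC layers by popularity density, respecting the constraint that an enhancement layer is useless without its lower layers; it is a simplification for intuition, not the Linear Programming formulation or the algorithms from the thesis.

```python
# Greedy sketch of layered (SVC) cache composition: fill a cache of size
# `capacity` with the video layers of highest expected hits per byte,
# admitting an enhancement layer only once all lower layers of the same
# video are already cached.

def compose_cache(layers, capacity):
    """layers: list of (video_id, layer_no, size, popularity) tuples."""
    ranked = sorted(layers, key=lambda x: x[3] / x[2], reverse=True)
    cached, used = set(), 0
    progress = True
    while progress:            # repeat: a base layer admitted late can
        progress = False       # unlock its enhancement layers
        for vid, layer, size, pop in ranked:
            if (vid, layer) in cached:
                continue
            if all((vid, l) in cached for l in range(layer)) \
                    and used + size <= capacity:
                cached.add((vid, layer))
                used += size
                progress = True
    return sorted(cached)


if __name__ == "__main__":
    catalog = [("v1", 0, 10, 100), ("v1", 1, 10, 60),
               ("v2", 0, 10, 90), ("v2", 1, 10, 20)]
    # -> [('v1', 0), ('v1', 1), ('v2', 0)] with a capacity of 30
    print(compose_cache(catalog, capacity=30))
```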
Style APA, Harvard, Vancouver, ISO itp.
43

Mikhailov, Mikhail. "Deterministic object management in large distributed systems". Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0305103-092907.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Worcester Polytechnic Institute.
Keywords: server invalidation; distributed object management; object relationships; web caching; change characteristics; object composition; cache consistency. Includes bibliographical references (p. 156-163).
Style APA, Harvard, Vancouver, ISO itp.
44

Suresha, *. "Caching Techniques For Dynamic Web Servers". Thesis, 2006. https://etd.iisc.ac.in/handle/2005/438.

Pełny tekst źródła
Streszczenie:
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost – each request requires computation as well as communication across multiple components within the website and across the Internet. In fact, dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load, and bandwidth consumption, compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study of combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code. We start by presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches without suffering from their individual limitations; this technique concentrates on reducing the bandwidth consumption due to dynamic web pages. We then present mechanisms for reducing dynamic page construction times: during normal loading, through a hybrid technique of fragment caching and page pre-generation that utilizes the excess capacity with which web servers are typically provisioned to handle peak loads; during peak loading, by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of reduced bandwidth consumption from the web infrastructure perspective and reduced page construction times from the user perspective.
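The core of fragment caching can be sketched in a few lines: each fragment is cached and regenerated independently, and the page is assembled per request, so a personalized page can still reuse its shared pieces. The fragment names, TTLs and page layout below are illustrative, not the thesis's implementation.

```python
# Sketch of fragment caching: the page is assembled per request from
# fragments that are cached and regenerated independently.
import time

fragment_cache = {}  # fragment_id -> (html, generated_at, ttl)


def get_fragment(fragment_id, generate, ttl):
    """Serve a fragment from cache, regenerating it only when expired."""
    entry = fragment_cache.get(fragment_id)
    if entry is not None and time.monotonic() - entry[1] < entry[2]:
        return entry[0]                        # fresh: no recomputation
    html = generate()                          # missing or stale: rebuild
    fragment_cache[fragment_id] = (html, time.monotonic(), ttl)
    return html


def render_page(user):
    header = get_fragment("header", lambda: "<h1>Shop</h1>", ttl=3600)
    listing = get_fragment("listing", lambda: "<ul>...products...</ul>", ttl=60)
    greeting = f"<p>Hello, {user}</p>"         # personalized: never cached
    return header + greeting + listing
```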
Style APA, Harvard, Vancouver, ISO itp.
45

Suresha, *. "Caching Techniques For Dynamic Web Servers". Thesis, 2006. http://hdl.handle.net/2005/438.

Pełny tekst źródła
Streszczenie:
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost – each request requires computation as well as communication across multiple components within the website and across the Internet. In fact, dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load, and bandwidth consumption, compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study of combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code. We start by presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches without suffering from their individual limitations; this technique concentrates on reducing the bandwidth consumption due to dynamic web pages. We then present mechanisms for reducing dynamic page construction times: during normal loading, through a hybrid technique of fragment caching and page pre-generation that utilizes the excess capacity with which web servers are typically provisioned to handle peak loads; during peak loading, by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of reduced bandwidth consumption from the web infrastructure perspective and reduced page construction times from the user perspective.
Style APA, Harvard, Vancouver, ISO itp.
46

Gao, Mulong. "Proxy-based adaptive push-pull web caching". Thesis, 2004. http://hdl.handle.net/2429/15503.

Pełny tekst źródła
Streszczenie:
Although the volume of Web traffic on the Internet is staggering, a large percentage of the traffic is redundant: multiple users at any given site request much of the same content. This means that a significant percentage of the WAN infrastructure carries identical content (and identical requests for it) day after day. Web caching performs the local storage of Web content to serve these redundant user requests more quickly, without sending the requests and the resulting content over the WAN. There are two major categories of Web caching mechanisms: client pull and origin server or parent proxy push. This thesis investigates a proxy-based, adaptive push-pull mechanism that enhances user experience by selecting some of the most frequently accessed objects to push to a proxy instead of letting the proxy request them later. Web servers or parent proxies collect object access frequencies for specific proxies rather than on a global scope, and adopt a push policy to distribute hot objects to cooperating proxies; other objects are pulled by the proxies as usual. As time passes, frequently requested objects may become cold in one region and hot in another; Web servers and caching proxies can learn this change pattern and adjust their distribution policy accordingly, avoiding pushing objects to proxies that may not request them in the near future. By using the adaptive push-pull distribution mechanism, the most frequently updated and accessed objects, which form the major chunk of Internet traffic, are pushed to proxies, saving many If-Modified-Since GET requests. Hence, Web traffic is more productive and user-perceived latency is reduced.
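A minimal sketch of the adaptive push decision described above: the server or parent proxy counts requests per (proxy, object) over a window and, when an object changes, pushes it only to the proxies where it is hot. The threshold and bookkeeping are illustrative assumptions, not the thesis's parameters.

```python
# Sketch of an adaptive push-pull distributor: per-proxy (not global)
# access counts decide which proxies receive a pushed update.
from collections import defaultdict


class PushPullDistributor:
    def __init__(self, push_threshold=10):
        self.push_threshold = push_threshold
        self.counts = defaultdict(int)  # (proxy, url) -> hits this window
        self.pushed = set()             # (proxy, url) pairs pushed so far

    def record_request(self, proxy, url):
        self.counts[(proxy, url)] += 1  # per-proxy, not global, frequency

    def on_object_update(self, url, proxies):
        """When `url` changes, push it to the proxies where it is hot;
        cold proxies will simply pull it on their next request."""
        hot = [p for p in proxies
               if self.counts[(p, url)] >= self.push_threshold]
        for p in hot:
            self.pushed.add((p, url))   # stand-in for the actual transfer
        return hot

    def end_window(self):
        """Reset counters so hotness can shift between regions over time."""
        self.counts.clear()
```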
Style APA, Harvard, Vancouver, ISO itp.
47

Chen, Kuei-Hui, i 陳桂慧. "Partial Caching Replacement Policies for Web Proxy". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/95988729777781586978.

Pełny tekst źródła
Streszczenie:
Master's thesis
Yuan Ze University
Graduate Institute of Computer Science and Engineering
88 (ROC academic year)
The performance of information access is crucial to the success of the Web. The Web proxy server plays an important role in improving performance and quality of service: access latency can be reduced if users get objects from nearby proxy servers, and the load on Web servers and network traffic can be reduced as well. From our studies, we find that the cache replacement algorithm is a key issue in Web proxy server design. It decides which object will be evicted from the cache to make enough space for a new object, and its design influences the re-usability of the cached information. In this thesis, two novel cache replacement policies are proposed: the Partial Caching LRU (PC-LRU) and the Partial Replacement LRU-THOLD (PR-LRU-THOLD). Trace-driven simulations are also performed. The experimental results show that our schemes improve the cache hit rate and the byte hit rate and reduce the access latency, while the complexity of the schemes is near O(1) on average. Compared with LRU, PC-LRU improves the hit rate by 26%, the byte hit rate by 32%, and the reduced-latency rate by 50% in the respective best cases, while PR-LRU-THOLD improves the hit rate by 12%, the byte hit rate by 18%, and the reduced-latency rate by 19% in the best case. Compared with LRU-THOLD, PC-LRU improves the hit rate by 17%, the byte hit rate by 114%, and the reduced-latency rate by 48% in the best case, while PR-LRU-THOLD improves the hit rate by 12%, the byte hit rate by 30%, and the reduced-latency rate by 20% in the best case. We conclude that the partial caching replacement policies indeed improve Web proxy performance. Furthermore, the concept of our schemes can potentially be applied to other categories of replacement algorithms.
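To give an intuition for partial caching, the sketch below keeps only a bounded prefix of each object in an LRU cache, so more objects fit and a hit can serve the prefix while the remainder is fetched; the fixed-prefix rule is an illustrative simplification, not the PC-LRU or PR-LRU-THOLD policies themselves.

```python
# Sketch of partial caching on top of LRU: only a bounded prefix of each
# object is kept; a hit serves the prefix while the rest is fetched.
from collections import OrderedDict


class PartialLRUCache:
    def __init__(self, capacity_bytes, prefix_bytes=64 * 1024):
        self.capacity = capacity_bytes
        self.prefix_bytes = prefix_bytes
        self.used = 0
        self._data = OrderedDict()  # url -> (cached_prefix, full_size)

    def put(self, url, body: bytes):
        if url in self._data:                 # replacing: release old bytes
            old, _ = self._data.pop(url)
            self.used -= len(old)
        kept = body[: self.prefix_bytes]      # keep at most a prefix
        self._data[url] = (kept, len(body))
        self.used += len(kept)
        while self.used > self.capacity and self._data:
            _, (evicted, _) = self._data.popitem(last=False)  # evict LRU
            self.used -= len(evicted)

    def get(self, url):
        """Return (prefix, bytes_still_to_fetch) on a hit, None on a miss."""
        item = self._data.get(url)
        if item is None:
            return None
        self._data.move_to_end(url)           # refresh recency
        prefix, full_size = item
        return prefix, full_size - len(prefix)
```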
Style APA, Harvard, Vancouver, ISO itp.
48

Jung, Keng Hui, i 耿匯融. "Construct Dynamic Web Caching using AJAX Technology". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/07965889196275933919.

Pełny tekst źródła
Streszczenie:
Master's thesis
Shih Chien University
Master's Program, Department of Information Technology and Management
96 (ROC academic year)
Web caches are widely used in the current Internet infrastructure to reduce network traffic and browsing latency. Since most dynamic web pages are derived from database query results, they cannot be cached like static web pages. In this thesis, we propose a dynamic web page design approach that partially caches query results, together with an Ajax programming mechanism that keeps the dynamic data consistent. Based on the temporal behavior of data-field updates, a table can be split into two kinds of columns: cacheable and non-cacheable. A cacheable column either has a very long update cycle, or has a shorter update cycle combined with a very high browsing rate; the data of such columns can be cached at the website. Cache consistency is maintained via a timestamp on the cache file, which keeps the dynamic data up to date. The proposed approach focuses on reducing the network load on a campus network, where, for example, many students request their class lists and choose the classes they want to enroll in. Since the data of cacheable columns is served as a static web page, it can be efficiently cached by current web browsers or web proxy servers. Compared with a traditional dynamic web page, after the first page view we can reduce the network traffic to the size of the non-cacheable columns' data, and the browsing latency to the time needed to query the non-cacheable columns.
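The column-split idea can be sketched as follows: the cacheable columns are rendered into a static, hence browser- and proxy-cacheable, table, while an Ajax endpoint returns only the volatile columns keyed by row. The course-list fields below are illustrative, inspired by the class-selection example in the abstract.

```python
# Sketch of the cacheable / non-cacheable column split; table and column
# names are illustrative.
COURSES = [
    {"id": "CS101", "title": "Intro to CS", "credits": 3, "seats_left": 12},
    {"id": "CS202", "title": "Databases", "credits": 3, "seats_left": 0},
]
CACHEABLE = ("id", "title", "credits")   # very long update cycle
NON_CACHEABLE = ("seats_left",)          # changes with every enrollment


def render_static_part():
    """Emit the cacheable columns as HTML; this output can be stored as a
    static cache file and served without touching the database."""
    body = ""
    for row in COURSES:
        cells = "".join(f"<td>{row[c]}</td>" for c in CACHEABLE)
        body += f"<tr id='{row['id']}'>{cells}</tr>"
    return f"<table>{body}</table>"


def ajax_dynamic_part():
    """Emit only the volatile columns keyed by row id, as the Ajax payload
    that the client merges into the cached table."""
    return {row["id"]: {c: row[c] for c in NON_CACHEABLE} for row in COURSES}


if __name__ == "__main__":
    print(render_static_part())
    print(ajax_dynamic_part())  # {'CS101': {'seats_left': 12}, ...}
```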
Style APA, Harvard, Vancouver, ISO itp.
49

Huang, Chia-Jung, i 黃嘉榮. "Path Selection and Data Replication in Web Caching". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/68944985963740842317.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
88 (ROC academic year)
This thesis deals with path selection and data replication in Web caching, with the aim of minimizing transmission and replication costs in the network. Taking into account users' viewing times and the costs of transmission and replication in Web caching, the goal is to choose a solution that decreases the transmission and replication costs while meeting the viewing-time needs of individual users at different local servers. We must therefore find an efficient on-line scheduling and network service control mechanism, and construct and design efficient transmission and replication algorithms for Web caching. The techniques consider the individual storage and transmission costs of object programs and compute caching schedules by determining when, where and for how long object programs must be stored at strategic locations in the network. We design different network transmission models to construct different network topologies for simulation. We also deal with what is known as the Steiner problem in our network model. Since the Steiner problem in networks is NP-complete, it is practical to develop heuristic algorithms whose costs are close to optimal. In this thesis, we survey previous multicast routing algorithms and include heuristic algorithms based on the concepts of the minimal-cost path and the distance-network heuristic for Steiner trees. Keywords: Web caching, the Steiner problem in networks, multicast tree, on-line algorithm
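For reference, the distance-network heuristic mentioned above (often called the KMB heuristic) can be sketched with standard-library tools: compute shortest paths from each terminal, take a minimum spanning tree of the implied complete "distance network" on the terminals, then expand its edges back into real network paths. This is a generic textbook sketch, not the thesis's algorithm.

```python
# Pure-stdlib sketch of the distance-network (KMB) Steiner-tree heuristic.
import heapq


def dijkstra(adj, src):
    """adj: node -> list of (neighbor, weight). Returns (dist, prev)."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev


def steiner_kmb(adj, terminals):
    paths = {t: dijkstra(adj, t) for t in terminals}
    tree_edges, in_tree = set(), {terminals[0]}
    while len(in_tree) < len(terminals):
        # Prim step on the distance network: cheapest terminal to attach.
        u, v = min(((a, b) for a in in_tree
                    for b in terminals if b not in in_tree),
                   key=lambda e: paths[e[0]][0][e[1]])
        in_tree.add(v)
        node = v
        while node != u:               # expand the MST edge into real edges
            p = paths[u][1][node]
            tree_edges.add(tuple(sorted((p, node))))
            node = p
    return tree_edges


if __name__ == "__main__":
    g = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 1), ("d", 3)],
         "c": [("a", 4), ("b", 1), ("d", 1)], "d": [("b", 3), ("c", 1)]}
    print(steiner_kmb(g, ["a", "d"]))  # {('a','b'), ('b','c'), ('c','d')}
```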
Style APA, Harvard, Vancouver, ISO itp.
50

Lin, Yu-Ren, i 林育任. "Caching Personalized and Database-related Dynamic Web Pages". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/72399481758440703931.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's/Ph.D. program)
94 (ROC academic year)
In recent years, as e-commerce and personalized web pages have become popular, the use of session objects and database-related dynamic web pages has increased. When a web server serves a request, it has to query the database again and generate the dynamic web page from information in the session objects. As the number of users grows, the load becomes too heavy for the web server to handle, so caching dynamic web pages becomes an inevitable trend. This thesis focuses on caching dynamic web pages that use session objects and database content. We show how to build the dependency between dynamic web pages and the underlying database fields and session objects, and we implement the dynamic web cache system in the Tomcat web server. Our experiments show that the dynamic web cache greatly improves web server performance and increases web server stability, improving performance by up to 208%.
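A minimal sketch of the dependency idea, assuming each cached page records the database fields and session attributes it was generated from, so that an update invalidates exactly the dependent pages; the names are illustrative, not the Tomcat-based implementation from the thesis.

```python
# Sketch of dependency-tracked page caching: updates to a dependency key
# invalidate exactly the pages built from it.
from collections import defaultdict


class DependencyCache:
    def __init__(self):
        self.pages = {}                     # url -> rendered HTML
        self.dependents = defaultdict(set)  # dependency key -> set of urls

    def store(self, url, html, deps):
        """deps: e.g. {'db:products.price', 'session:user42.cart'}"""
        self.pages[url] = html
        for d in deps:
            self.dependents[d].add(url)

    def invalidate(self, dep):
        """Call when a database field or session attribute changes."""
        for url in self.dependents.pop(dep, set()):
            self.pages.pop(url, None)

    def get(self, url):
        return self.pages.get(url)          # None means regenerate


if __name__ == "__main__":
    cache = DependencyCache()
    cache.store("/cart", "<html>2 items</html>", {"session:user42.cart"})
    cache.invalidate("session:user42.cart")  # cart changed: page is stale
    print(cache.get("/cart"))                # -> None
```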
Style APA, Harvard, Vancouver, ISO itp.