Dissertations / Theses on the topic 'Web page'
Consult the top 50 dissertations / theses for your research on the topic 'Web page.'
Krupp, Brian. "Exploration of Dynamic Web Page Partitioning for Increased Web Page Delivery Performance." Cleveland State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=csu1290629377.
Chiew, Thiam Kian. "Web page performance analysis." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/658/.
Sanoja Vargas, Andrés. "Segmentation de pages web, évaluation et applications." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066004/document.
Web pages are becoming more complex than ever, as they are generated by Content Management Systems (CMS). Thus, analyzing them, i.e. automatically identifying and classifying different elements from Web pages, such as main content, menus, among others, becomes difficult. A solution to this issue is provided by Web page segmentation, which refers to the process of dividing a Web page into visually and semantically coherent segments called blocks. The quality of a Web page segmenter is measured by its correctness and its genericity, i.e. the variety of Web page types it is able to segment. Our research focuses on enhancing this quality and measuring it in a fair and accurate way. We first propose a conceptual model for segmentation, as well as Block-o-Matic (BoM), our Web page segmenter. We propose an evaluation model that takes the content as well as the geometry of blocks into account in order to measure the correctness of a segmentation algorithm against a predefined ground truth. The quality of four state-of-the-art algorithms is experimentally tested on four types of pages. Our evaluation framework allows testing any segmenter, i.e. measuring its quality. The results show that BoM presents the best performance among the four segmentation algorithms tested, and also that the performance of segmenters depends on the type of page to segment. We present two applications of BoM. Pagelyzer uses BoM to compare two versions of a Web page and decide whether they are similar or not. It is the main contribution of our team to the European project Scape (FP7-IP). We also developed a tool for migrating Web pages from HTML4 to HTML5 in the context of Web archives.
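For readers who want a concrete picture of the geometry-aware correctness measure this abstract alludes to, here is a minimal illustrative sketch in Python. The Block type, the 0.5 IoU threshold and the greedy matching rule are assumptions made for the example, not the evaluation model defined in the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    """An axis-aligned rectangle (x, y, width, height) produced by a segmenter."""
    x: float
    y: float
    w: float
    h: float

def iou(a: Block, b: Block) -> float:
    """Intersection-over-union of two blocks; 1.0 means identical geometry."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union > 0 else 0.0

def segmentation_correctness(predicted, ground_truth, threshold=0.5):
    """Greedy one-to-one matching: a predicted block counts as correct if it
    overlaps an unmatched ground-truth block with IoU above the threshold.
    Returns precision, recall and F1 for the page."""
    unmatched = list(ground_truth)
    matched = 0
    for p in predicted:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= threshold:
            matched += 1
            unmatched.remove(best)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: two predicted blocks against a two-block ground truth.
pred = [Block(0, 0, 100, 40), Block(0, 50, 100, 200)]
truth = [Block(0, 0, 100, 45), Block(0, 50, 100, 190)]
print(segmentation_correctness(pred, truth))
```

A content-aware variant would additionally compare the text contained in matched blocks, which is the other half of the evaluation model the abstract describes.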
SOUZA, CRISTON PEREIRA DE. "EFFICIENT WEB PAGE REFRESH POLICIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=15893@1.
A search engine needs to continuously revisit web pages in order to keep its local repository up to date. A revisiting policy must be employed to build a schedule of revisits that keeps the repository as fresh as possible using the available resources. In order to avoid web server overload, the revisiting policy must respect a minimum amount of time between consecutive requests to the same server. This rule is called the politeness constraint. Due to the large number of web pages, we consider a revisiting policy efficient when the mean time to schedule a revisit is sublinear in the number of pages in the repository. When the politeness constraint is considered, however, no efficient policy with a theoretical quality guarantee is known. We investigate three efficient policies that respect the politeness constraint, called MERGE, RANDOM and DELAYED. We provide approximation factors for the repository's up-to-date level under the MERGE and RANDOM policies. Based on these approximation factors, we derive a 0.77 lower bound for the approximation factor of the RANDOM policy and present a conjecture that 0.927 is a lower bound for the approximation factor of the MERGE policy. We evaluate these policies through simulation experiments that try to keep a repository of 14.5 million web pages up to date. Additional experiments based on a repository of Wikipedia articles show that the MERGE policy provides better results than a natural greedy strategy. The main conclusion of this research is that there are simple and efficient policies for this problem that lose little in terms of the repository's up-to-date level even when the politeness constraint must be respected.
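To make the setting concrete, the sketch below simulates a RANDOM-style revisiting baseline that respects a per-server minimum delay. It is an illustration of the problem only; the MERGE, RANDOM and DELAYED policies analyzed in the thesis are not reproduced here, and the data structures and delays are assumptions.

```python
import heapq
import random

def random_revisit_schedule(pages, min_delay=1.0, horizon=100.0):
    """Toy politeness-respecting scheduler: a server becomes available again
    min_delay seconds after its last request; when a server is free, one of
    its pages is revisited uniformly at random.
    `pages` maps a server name to the list of its page ids.
    Returns the list of (time, server, page) revisits up to `horizon`."""
    ready = [(0.0, server) for server in pages]  # (next_available_time, server)
    heapq.heapify(ready)
    schedule = []
    while ready:
        t, server = heapq.heappop(ready)
        if t > horizon:
            break
        page = random.choice(pages[server])
        schedule.append((t, server, page))
        heapq.heappush(ready, (t + min_delay, server))  # politeness constraint
    return schedule

repo = {"a.example": ["a/1", "a/2", "a/3"], "b.example": ["b/1"]}
for event in random_revisit_schedule(repo, min_delay=2.0, horizon=6.0):
    print(event)
```

Each scheduling decision costs a couple of heap operations, i.e. O(log n) per revisit, which is the kind of sublinear per-decision cost the abstract uses as its efficiency criterion.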
Hou, Jingyu. "Discovering web page communities for web-based data management." University of Southern Queensland, Faculty of Sciences, 2002. http://eprints.usq.edu.au/archive/00001447/.
Myers, Paul Thomas. "The Cucamonga Middle School web page: Using parent input to redesign an existing school web page." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/2008.
Metikurke, Seema Sreenivasamurthy. "Grid-Enabled Automatic Web Page Classification." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/23.
Sanoja Vargas, Andrés. "Segmentation de pages web, évaluation et applications." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066004.
Khalil, Faten. "Combining web data mining techniques for web page access prediction." University of Southern Queensland, Faculty of Sciences, 2008. http://eprints.usq.edu.au/archive/00004341/.
Eriksson, Tobias. "Automatic web page categorization using text classification methods." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142424.
In recent years the Web has exploded in size, with millions of web pages of widely varying content. The sheer size of the Web makes it unmanageable to index and categorize all of this content manually; automatic methods for categorizing web pages are clearly needed. This study examines how automatic text classification methods can be used for web page categorization. The results obtained in this report are comparable to those in other literature in the field, but do not reach the results of studies on pure text classification.
Das, Somak R. "Evaluation of QUIC on web page performance." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91444.
Title as it appears in MIT commencement exercises program, June 6, 2014: Designing a better transport protocol for the web. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 53-54).
This work presents the first study of a new protocol, QUIC, on Web page performance. Our experiments test the HTTP/1.1, SPDY, and QUIC multiplexing protocols on the Alexa U.S. Top 500 websites, across 100+ network configurations of bandwidth and round-trip time (both static links and cellular networks). To do so, we design and implement QuicShell, a tool for measuring QUIC's Web page performance accurately and reproducibly. Using QuicShell, we evaluate the strengths and weaknesses of QUIC. Due to its design of stream multiplexing over UDP, QUIC outperforms its predecessors over low-bandwidth links and high-delay links by 10 - 60%. It also helps Web pages with small objects and HTTPS-enabled Web pages. To improve QUIC's performance on cellular networks, we implement the Sprout-EWMA congestion control protocol and find that it improves QUIC's performance by > 10% on high-delay links.
by Somak R. Das.
M. Eng.
Mereuta, Alina. "Smart web accessibility platform : dichromacy compensation and web page structure improvement." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4032/document.
This thesis focuses on enhancing web accessibility for users with visual disabilities using tools integrated within the SmartWeb Accessibility Platform (SWAP). After a synthesis on accessibility, SWAP is presented. Our first contribution consists in reducing the contrast loss for textual information in web pages for dichromat users while maintaining the author's intentions conveyed by colors. The contrast compensation problem is reduced to minimizing a fitness function which depends on the original colors and the relationships between them. The interest and efficiency of three methods (mass-spring system, CMA-ES, API) are assessed on two datasets (real and artificial). The second contribution focuses on enhancing web page structure for screen reader users in order to overcome the effects of content linearization. Using heuristics and machine learning techniques, the main zones of the page are identified. The page structure can then be enhanced using ARIA statements and access links to improve zone identification by screen readers.
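The "minimize a fitness function over colors" idea can be pictured with a small sketch, assuming a WCAG-style contrast ratio, a quadratic color-drift penalty and plain random search in place of the mass-spring, CMA-ES and API optimizers named in the abstract; the weights and target values are illustrative only.

```python
import random

def luminance(rgb):
    """Relative luminance of an sRGB color (r, g, b) in 0..255 (WCAG formula)."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between a text color and its background (1..21)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def fitness(candidate_fg, original_fg, bg, target=4.5, alpha=1.0, beta=0.01):
    """Penalty = contrast shortfall + distance from the author's original color.
    alpha/beta are illustrative weights, not values from the thesis."""
    shortfall = max(0.0, target - contrast_ratio(candidate_fg, bg))
    drift = sum((a - b) ** 2 for a, b in zip(candidate_fg, original_fg)) ** 0.5
    return alpha * shortfall + beta * drift

def compensate(original_fg, bg, iterations=5000):
    """Naive random search standing in for the optimizers compared in the thesis."""
    best = original_fg
    best_score = fitness(original_fg, original_fg, bg)
    for _ in range(iterations):
        cand = tuple(min(255, max(0, c + random.randint(-40, 40))) for c in best)
        score = fitness(cand, original_fg, bg)
        if score < best_score:
            best, best_score = cand, score
    return best

print(compensate((120, 110, 100), (128, 128, 128)))
```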
Xu, Jingqian. "Full similarity-based page ranking." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/5773.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 19, 2009). Includes bibliographical references.
Mortazavi-Asl, Behzad. "Discovering and mining user Web-page traversal patterns." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ61594.pdf.
RODRIGUES, THORAN ARAGUEZ. "A COMPARATIVE STUDY OF WEB PAGE CLASSIFICATION STRATEGIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13890@1.
The amount of information on the Internet increases every day. Even though this proliferation increases the chances that the subject being searched for by a user is on the Web, it also makes finding the desired information much harder. The automated classification of pages is, therefore, an important tool for organizing Web content, with specific applications in improving the results displayed by search engines. In this dissertation, a comparative study of different attribute sets and classification methods for the functional classification of web pages was made, focusing on four classes: Blogs, Blog Posts, News Portals and News. Throughout the experiments, it became evident that the best approach for this task is to employ attributes that come both from the structure and from the text of the web pages. We also present a new strategy for extracting and building text attribute sets that takes into account the different writing styles of each page class.
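A minimal sketch of the "structure plus text" idea, assuming scikit-learn and scipy are available: TF-IDF text features are concatenated with simple structural counts and fed to a linear classifier. The feature names, example pages and class labels are placeholders, not the attribute sets studied in the dissertation.

```python
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pages = [
    {"text": "breaking news today politics economy", "links": 40, "images": 10, "forms": 0},
    {"text": "my weekend trip photos and thoughts",   "links": 5,  "images": 3,  "forms": 1},
]
labels = ["news_portal", "blog_post"]

texts = [p["text"] for p in pages]
structure = csr_matrix([[p["links"], p["images"], p["forms"]] for p in pages], dtype=float)

vectorizer = TfidfVectorizer()
X = hstack([vectorizer.fit_transform(texts), structure])  # text + structural attributes

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```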
Derryberry, Jonathan C. (Jonathan Carlyle) 1979. "Creating a web page recommendation system for Haystack." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28472.
Includes bibliographical references (p. 105).
The driving goal of this thesis was to create a web page recommendation system for Haystack, capable of tracking a user's browsing behavior and suggesting new, interesting web pages to read based on the past behavior. However, during the course of this thesis, 3 salient subgoals were met. First, Haystack's learning framework was unified so that, for example, different types of binary classifiers could be used with black box access under a single interface, regardless of whether they were text learning algorithms or image classifiers. Second, a tree learning module, capable of using hierarchical descriptions of objects and their labels to classify new objects, was designed and implemented. Third, Haystack's learning framework and existing user history faculties were leveraged to create a web page recommendation system that uses the history of a user's visits to web pages to produce recommendations of unvisited links from user-specified web pages. Testing of the recommendation system suggests that using tree learners with both the URL and tabular location of a web page's link as taxonomic descriptions yields a recommender that significantly outperforms traditional, text-based systems.
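The tree-learning idea of treating a link's URL as a hierarchical description can be illustrated with a toy recommender that scores unvisited links by how often the user's history shares their URL prefixes. This is a hedged sketch under those assumptions, not Haystack's learning framework or the thesis's tree learner.

```python
from collections import Counter
from urllib.parse import urlparse

def path_prefixes(url):
    """Hierarchical description of a URL: host, host/a, host/a/b, ..."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    prefixes = [parts.netloc]
    for seg in segments:
        prefixes.append(prefixes[-1] + "/" + seg)
    return prefixes

def recommend(history, candidates, top_k=3):
    """Score each candidate by how much of its URL hierarchy the user's
    browsing history covers, weighting deeper (more specific) prefixes more."""
    seen = Counter()
    for url in history:
        for depth, prefix in enumerate(path_prefixes(url), start=1):
            seen[prefix] += depth
    def score(url):
        return sum(seen[p] for p in path_prefixes(url))
    return sorted(candidates, key=score, reverse=True)[:top_k]

history = ["http://example.org/sports/hockey/game1", "http://example.org/sports/hockey/game2"]
candidates = ["http://example.org/sports/hockey/game3",
              "http://example.org/politics/vote",
              "http://other.net/sports"]
print(recommend(history, candidates))
```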
by Jonathan C. Derryberry.
M.Eng.
Yu, Chen-Hsiang. "Web page enhancement on desktop and mobile browsers." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/79216.
Full text"February 2013." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 154-165).
The Web is a convenient platform to deliver information, but reading web pages is not as easy as it was in the 1990s. This thesis investigates techniques to enhance web pages on desktop and mobile browsers for two specific populations: non-native English readers and mobile users. Three issues are addressed: web page readability, web page skimmability and continuous reading support on mobile devices. On today's primarily English-language Web, non-native readers encounter problems even if they have some fluency in English. This thesis focuses on content presentation and proposes a new transformation method, Jenga Format, to enhance web page readability. A user study with 30 non-native users showed that the Jenga transformation not only improved reading comprehension, but also made web page reading easier. On the other hand, readability research has found that average reading times for non-native readers remain the same or even worsen. This thesis studies this issue and proposes Froggy GX (Generation neXt) to improve reading under time constraints. A user study with 20 non-native users showed that Froggy GX not only enhanced reading comprehension under time constraints, but also provided higher user satisfaction than reading unaided. When using the Web on mobile devices, the reading situation becomes challenging. Even worse, context switches, such as from walking to sitting, static standing, or hands-free situations like driving, occur while reading on the go, a scenario not adequately addressed in previous studies. This thesis investigates this scenario and proposes a new mobile browser, Read4Me, to support continuous reading on a mobile device. A user study with 10 mobile users showed that auto-switching not only resulted in significantly fewer dangerous encounters than visual reading, but also provided the best reading experience.
by Chen-Hsiang Yu.
Ph.D.
Andr, Ondřej. "Srovnání on-page SEO faktorů pro mobilní web." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-204028.
Jackson, Lance Douglas. "Introduction to the Internet and Web page design." [Cedar City, Utah : Southern Utah University], 2009. http://unicorn.li.suu.edu/ScholarArchive/Communication/JacksonLanceD/IntrototheInternet&WebPageDesign.pdf.
A workbook CD accompanies this text. For more information contact the author, Lance Jackson, Southern Utah University, 351 W. University Blvd., Cedar City, UT 84720. E-mail: jackson@suu.edu. Telephone: (435) 586-7867. Title from PDF title page. "April 2009." "In partial fulfillment of the requirements for the degree [of] Master of Arts in Professional Communication." "A project presented to the faculty of the Communication Department at Southern Utah University." Dr. Jon Smith, Project Supervisor. Includes bibliographical references (p. 14, 33, 49, 69, 85, 104, 135, 155, 174).
Goodrich, Brian S. "Extending Web Application Development to the User-Editable Space." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2289.pdf.
Xiao, Xiangye. "Slicing*-tree based Web page transformation for small displays." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20XIAO.
Lu, Zhengyang. "Web Page Classification Using Features from Titles and Snippets." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/33177.
Salameh, Lynne. "Towards faster web page loads over multiple network paths." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10046643/.
Santos, Aécio Solano Rodrigues. "Learning to schedule web page updates using genetic programming." Universidade Federal de Minas Gerais, 2013. http://hdl.handle.net/1843/ESBF-97GJSQ.
One of the main challenges faced when developing scheduling policies for web page updates is estimating the probability that a previously crawled page has been modified on the Web. This information can be used by a crawler's scheduler to determine the order in which pages should be recrawled, allowing the system to reduce the total cost of monitoring the crawled pages to keep them up to date. This thesis presents a new approach that uses machine learning to generate score functions that produce rankings of pages with respect to the probability that they have been modified on the Web since the last crawled version. A flexible framework that uses Genetic Programming to evolve functions estimating the probability that a page has been modified is proposed. An experimental evaluation of the benefits of using the proposed framework against five state-of-the-art approaches is also presented. Considering the Change Ratio metric, the values produced by the best function generated by the proposed framework show an improvement from 0.52 to 0.71, on average, over the baselines.
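As a rough illustration of searching a space of score functions, the sketch below builds random arithmetic expressions over synthetic per-page features and keeps the one with the best Change-Ratio-like value on a crawl budget. It substitutes random regeneration for genuine genetic operators (crossover, mutation, selection), so it is only a stand-in for the GP framework described above; all features, data and parameters are assumptions.

```python
import random

FEATURES = ["age", "change_freq", "size_delta"]  # synthetic per-page signals

def random_expr(depth=3):
    """Build a random arithmetic expression tree over the features."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES + [round(random.uniform(0, 2), 2)])
    op = random.choice(["+", "*", "max"])
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, page):
    if isinstance(expr, str):
        return page[expr]
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    x, y = evaluate(a, page), evaluate(b, page)
    return x + y if op == "+" else x * y if op == "*" else max(x, y)

def change_ratio(expr, pages, budget):
    """Fraction of actually-changed pages among the `budget` pages ranked
    highest by the candidate score function."""
    ranked = sorted(pages, key=lambda p: evaluate(expr, p), reverse=True)[:budget]
    return sum(p["changed"] for p in ranked) / budget

def search(pages, generations=200, budget=10):
    best = random_expr()
    best_fit = change_ratio(best, pages, budget)
    for _ in range(generations):
        cand = random_expr()  # full regeneration instead of GP operators, for brevity
        fit = change_ratio(cand, pages, budget)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best, best_fit

random.seed(0)
pages = [{"age": random.random(), "change_freq": random.random(),
          "size_delta": random.random(), "changed": random.random() < 0.3}
         for _ in range(200)]
print(search(pages))
```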
Knowlton, Corey Lamoin. "Web page design class curriculum for the secondary level." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2108.
Namoune, Abdallah. "Investigating visual attention on the web and the development of a web page analyser." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500473.
Annadi, Ramakanth Reddy. "Adapting Web Page Tables on Mobile Web Browsers: Results from Two Controlled Empirical Studies." Thesis, North Dakota State University, 2014. https://hdl.handle.net/10365/27281.
Costa, José Henrique Calenzo. "Filtered-page ranking." Repositório Institucional da UFSC, 2016. https://repositorio.ufsc.br/xmlui/handle/123456789/167840.
Web page ranking algorithms can be created using content-based, structure-based or user search-based techniques. This research addresses a user search-based approach applied over previously filtered documents, which relies on a segmentation process to extract irrelevant content from documents before ranking. The process splits the document into three categories of blocks in order to fragment the document and eliminate irrelevant content. The ranking method, called Filtered-Page Ranking, has two main steps: (i) irrelevant content extraction; and (ii) document ranking. The focus of the extraction step is to eliminate irrelevant content from the document, by means of the Query-Based Blocks Mining algorithm, creating a tree that is evaluated in the ranking process. During the ranking step, the focus is to calculate the relevance of each document for a given query, using criteria that give importance to specific parts of the document and to the highlighted features of some HTML elements. Our proposal is compared to two baselines, the classic vectorial model and the CETR noise removal algorithm, and the results demonstrate that our irrelevant content removal algorithm improves the results and that our relevance criteria are relevant to the process.
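The two-step "filter blocks, then rank" idea can be pictured with a short sketch: blocks sharing no terms with the query are dropped, and the surviving text is ranked with a plain vector-space model. The filtering rule below is a crude stand-in for the Query-Based Blocks Mining algorithm, and the example pages are invented; scikit-learn is assumed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_blocks(blocks, query):
    """Keep only the blocks of a page that share at least one term with the query."""
    q = set(query.lower().split())
    return [b for b in blocks if q & set(b.lower().split())]

def rank(pages, query):
    """pages: {name: [block texts]}. Filter each page, then rank the remaining
    text against the query with a TF-IDF / cosine-similarity vector model."""
    filtered = {name: " ".join(filter_blocks(blocks, query))
                for name, blocks in pages.items()}
    names = [n for n, text in filtered.items() if text]
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([filtered[n] for n in names] + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sorted(zip(names, scores), key=lambda x: x[1], reverse=True)

pages = {
    "p1": ["site navigation menu", "tutorial on web page ranking algorithms"],
    "p2": ["advertisement banner", "cooking recipes for the weekend"],
}
print(rank(pages, "web page ranking"))
```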
Visser, Eugene Bourbon. "Fusing website usability variables and on-page search engine optimisation elements." Thesis, Cape Peninsula University of Technology, 2011. http://hdl.handle.net/20.500.11838/1407.
It was concluded in the literature review that small- to medium-sized enterprises (SMMEs) should prioritise utilising websites on the Internet, as they provide a low-cost infrastructure, unlocking opportunities and allowing small- to medium-sized enterprises to market to the international customer, promoting business activities in a low-risk environment. However, visitors do not know that they do not know, meaning a need for facilitation exists between the Internet user, in terms of the information required, and the information available on the Internet. Search engines (governed by their organic ranking algorithms) were created for this very purpose: to facilitate users in finding relevant information on the Internet in the shortest time possible. Search engines interpret and evaluate any given indexed web page from a targeted-keywords perspective, indicating that web pages must be optimised from a search engine perspective. However, the elements search engines perceive to be important may not always be aligned with what website visitors perceive to be important. Anything on the web page that may remotely impede the visitors' experience could be detrimental, as alternative website options are but a click away. An example would be the excessive use of content on a given web page. The search engine may find the excessive content useful, as it may provide contextual interpretation of the web page. However, the excessive content may impede a visitor's website interaction, as it is estimated that the average visitor will view a web page for 45-60 seconds and read a maximum of 200 words only. During the process of identifying the contradictory search engine optimisation (SEO) elements and website usability (WU) attributes, three journal articles were written, with two journal articles following their own research methodologies and the third journal article utilising all the research results in order to create the fused SEO and WU model. Journal Article 1: Two websites were used as part of the experiment: a Control Website (CW), http://www.copywriters.co.za, and an Experimental Website (EW), http://www.copywriters.co.za/ppc/. The CW is an existing website with no special emphasis applied to SEO and/or WU. The EW was developed by implementing the WU attributes and ignoring all contradictory SEO elements. In order to ensure the integrity of the experiment, search engines were denied access to the EW. The traffic sources for the CW were search engine (organic) traffic, as well as direct and referrer traffic.
Sundin, Albin. "Word Space Models for Web User Clustering and Page Prefetching." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-82012.
Netravali, Ravi Arun. "Understanding and improving Web page load times on modern networks." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97765.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 77-80).
This thesis first presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi improves on prior record-and-replay frameworks by emulating the multi-origin nature of Web pages, isolating its network traffic, and enabling evaluations of a larger set of target applications beyond browsers. Using Mahimahi, we perform a case study comparing current multiplexing protocols, HTTP/1.1 and SPDY, and a protocol in development, QUIC, to a hypothetical optimal protocol. We find that all three protocols are significantly suboptimal and their gaps from the optimal only increase with higher link speeds and RTTs. The reason for these trends is the same for each protocol: inherent source-level dependencies between objects on a Web page and browser limits on the number of parallel flows lead to serialized HTTP requests and prevent links from being fully occupied. To mitigate the effect of these dependencies, we built Cumulus, a user-deployable combination of a content-distribution network and a cloud browser that improves page load times when the user is at a significant delay from a Web page's servers. Cumulus contains a "Mini-CDN"-a transparent proxy running on the user's machine-and a "Puppet": a headless browser run by the user on a well-connected public cloud. When the user loads a Web page, the Mini-CDN forwards the user's request to the Puppet, which loads the entire page and pushes all of the page's objects to the Mini-CDN, which caches them locally. Cumulus benefits from the finding that dependency resolution, the process of learning which objects make up a Web page, accounts for a considerable amount of user-perceived wait time. By moving this task to the Puppet, Cumulus can accelerate page loads without modifying existing Web browsers or servers. We find that on cellular, in-flight Wi-Fi, and transcontinental networks, Cumulus accelerated the page loads of Google's Chrome browser by 1.13-2.36×. Performance was 1.19-2.13× faster than Opera Turbo, and 0.99-1.66× faster than Chrome with Google's Data Compression Proxy.
by Ravi Arun Netravali.
S.M.
Vishwasrao, Saket Dilip. "Performance Evaluation of Web Archiving Through In-Memory Page Cache." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78252.
Master of Science
Veis, Richard. "Web page analysis of selected airlines on the Czech market." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-16701.
Williams, Rewa Colette. "Patterns Of 4th Graders' Literacy Events In Web Page Development." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000203.
Wei, Chenjie. "Using Automated Extraction of the Page Component Hierarchy to Customize and Adapt Web Pages to Mobile Devices." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338348757.
Grace, Phillip Eulon. "Full-page versus partial-page screen designs in web-based training : their effects on learner satisfaction and performance." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/usf/dc/et/SFE0001520.
Tian, Ran. "Examining the Complexity of Popular Websites." Thesis, University of Oregon, 2015. http://hdl.handle.net/1794/19347.
Siva, Sahithi Pokala. "Design and delivery : functional colour web pages." Thesis, University of Liverpool, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343620.
Abreu, Luís Pedro Borges. "Morphing Web Pages to Preclude Web Page Tampering Threats." Master's thesis, 2016. https://repositorio-aberto.up.pt/handle/10216/90184.
The number of Internet users keeps growing every year. Moreover, the Internet is becoming a daily tool that impacts individuals' lives, used either for work or for entertainment purposes. However, by using it, people become possible targets for cyber attacks as they keep exchanging data, sometimes sensitive and private data, with remote servers. Among all the different attack types, MitB is the reason behind the genesis of this thesis subject. MitB attacks are performed by a computer program running on the user's computer, commonly known as malware, which has access to what happens inside a browser window. It can be a system library or even a browser extension programmed to automatically misrepresent the source code of the client-side server response, and other information stored in the user's browser. Such attacks rely on markup and DOM anchors to identify the sections of a web page to attack, and the end result of an attack is dictated by the malware's ability to successfully identify the right location on the web page. Polymorphism is a broad concept that can be applied to web pages as a tool to both neutralize and defeat this kind of attack, as documented by Shape Security, Inc. in 2014. Applying polymorphic techniques to web pages, the server response will be textually different between requests, but the visual display to the user will always be the same. That is, the values of static attributes and the structure of HTML documents may be modified on the server immediately before responses are sent off, creating a polymorphic version of the web page, or new versions may be pre-built on the server to decrease the real-time computational cost. Therefore, no two HTML documents will be textually the same, turning web pages into something of a moving target against MitB attacks. This level of protection is necessary since all changes are made locally, on the client side, making their detection difficult for the control and security structures implemented on the service provider's servers. In this thesis, we aim to develop a tool based on polymorphism to protect web pages and users from MitB attacks that rely on markup and DOM anchors. The tool will be evaluated by accuracy and efficiency: accuracy by recording and comparing the lists of errors and warnings generated by original web pages and by their polymorphic versions created with our tool, and efficiency by running automated attempts at tampering with web pages protected by our tool.
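A toy illustration of the polymorphic idea, assuming BeautifulSoup is available: id and class values get a fresh random suffix on every response and harmless wrapper elements perturb the DOM shape, so markup anchors differ between requests while the rendering stays the same. This is not the thesis's tool; a production system would also have to rewrite CSS selectors and scripts that reference the renamed attributes, which this sketch deliberately skips.

```python
import random
import secrets
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def morph(html: str) -> str:
    """Return a per-request variant of the page: id/class values are renamed
    consistently via a mapping, and some text nodes are wrapped in neutral
    <span>s, so the served markup is textually different on every call."""
    soup = BeautifulSoup(html, "html.parser")
    rename = {}

    def new_name(old):
        if old not in rename:
            rename[old] = f"{old}-{secrets.token_hex(4)}"
        return rename[old]

    for tag in soup.find_all(True):
        if tag.has_attr("id"):
            tag["id"] = new_name(tag["id"])
        if tag.has_attr("class"):
            tag["class"] = [new_name(c) for c in tag["class"]]
        # Occasionally wrap a pure text node to perturb the DOM structure.
        if tag.string and random.random() < 0.5:
            tag.string.wrap(soup.new_tag("span"))
    return str(soup)

page = '<div id="login"><form class="form"><input name="user"></form>Welcome</div>'
print(morph(page))
print(morph(page))  # textually different, visually identical variant
```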
Abreu, Luís Pedro Borges. "Morphing Web Pages to Preclude Web Page Tampering Threats." Dissertação, 2016. https://repositorio-aberto.up.pt/handle/10216/90184.
Lienhard, John. "Rohsenow Symposium web page." 2004. http://hdl.handle.net/1721.1/7307.
Tsai, Ming-yung, and 蔡明原. "Related Web Page Retrieval Based on Semantic Concepts and Features of Web Pages." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/fj4qej.
Chaoyang University of Technology
Master's Program, Department of Information Management
93
Using search engines to find information on the Internet often fails to satisfy user requirements. Previous search methodologies extended the domain of query keywords with the corresponding domain ontology to find related web pages, but they typically omitted the semantic content of the web pages, resulting in ineffective searches. In this paper, we present a related web page retrieval method that not only considers the corresponding domain ontology but also analyzes the semantic content of web pages. First, the method embeds the corresponding domain ontology of the search keyword in order to find web pages from the Internet. Next, the method considers the location of each concept in the web pages, and the relationships between concepts in the domain ontology, when clustering the web pages. Finally, an RDF structure is used to describe the relationships between keywords and web pages. We also use a latent semantic analysis (LSA) algorithm to find relevant words in order to extend the information in the RDF. Experimental results show that our method makes queries more effective.
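The LSA step, finding terms related to a keyword through a low-rank decomposition of a term-document matrix, can be sketched as follows. The toy corpus and the choice of two latent dimensions are assumptions for illustration; scikit-learn and NumPy are assumed available.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "jaguar speed engine car racing",
    "jaguar habitat jungle predator cat",
    "car engine horsepower racing track",
    "jungle cat predator prey habitat",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                      # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)
term_vectors = svd.components_.T                 # terms x latent concepts

def related_terms(word, top_k=3):
    """Terms whose latent-concept vectors are most similar to `word`'s."""
    vocab = vec.vocabulary_
    target = term_vectors[vocab[word]]
    sims = term_vectors @ target / (
        np.linalg.norm(term_vectors, axis=1) * np.linalg.norm(target) + 1e-12)
    order = np.argsort(-sims)
    inv = {i: t for t, i in vocab.items()}
    return [inv[i] for i in order if inv[i] != word][:top_k]

print(related_terms("engine"))
```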
Marath, Sathi. "Large-Scale Web Page Classification." Thesis, 2010. http://hdl.handle.net/10222/13130.
Hu, Yony-Yi, and 胡永毅. "SharePoint Responsive Web Page Design." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/06091112537820073844.
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
103
As various handheld devices such as mobile phones and tablets become more and more popular, how to design a web page that displays properly on the screens of all devices becomes a very important question. If we were to design the same web page separately for each possible device, the maintenance afterwards would become very troublesome and costly. By using Responsive Web Design, we can design a web page once and have it displayed properly on various devices. In this study, we experiment with delivering Responsive Web Design pages on the SharePoint platform using Bootstrap. SharePoint Server is one of the Microsoft products most commonly encountered in enterprises. It can be used as a collaboration platform to promote internal or external communication within an enterprise, and by using the web pages that SharePoint provides, users can communicate from any device they have. Among the numerous functions SharePoint provides, one lets users edit a web page much as they would type a Word document, easily configure different ways to view it, and set permissions for it. Users can also design workflows on SharePoint without writing a single line of code. A frequently mentioned feature of Responsive Web Design is that when a web page is displayed at different screen resolutions, it adapts its layout and content to the resolution. Bootstrap is a set of tools that can be applied to web sites and web applications. Its content includes frameworks for HTML, CSS and JavaScript, providing typography, web page controls and navigation components. For web sites and web applications that need to serve various devices and browsers, Bootstrap provides CSS media queries that can save web designers a huge amount of time and work, since they no longer have to maintain a separate version for each client device.
許烘祥. "Bidirectional Integrated Web Page System." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/77750544223640859097.
Chean, Chao-Nan, and 陳昭男. "Detection of Page Type, Time, and Key Terms of Web Pages." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/49481238888740466198.
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
91
With the rapid growth of the WWW, the amount of online resources is getting richer. Modern search engines not only provide general search services for web pages, but also domain-specific or type-specific search services to meet users' needs. To be able to provide a type-specific search service, one needs to build an automatic mechanism for type detection. By statistical analysis of web pages, we find features which are appropriate for type detection, and we propose a scoring method to evaluate which type a web page belongs to. Sometimes, the time information described in the content of a web page may differ from the last-modified time of the web page; we define rules to detect the time information from the page content. When extracting key terms, three features are calculated for each term in the web page: location, the term's first appearance; emphatic tag, whether the term is emphasized by some kind of HTML tag or not; and TFIDF, a generality measure of a term's frequency in a web page.
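The three-feature key-term scoring can be pictured with a small sketch that combines first-appearance position, tag emphasis and TF-IDF into one score. The 0.3/0.3/0.4 weights, the tokenizer and the tiny corpus are assumptions for illustration, not the thesis's parameters.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def key_terms(page_text, emphasized_terms, corpus, top_k=5):
    """Score each term by (a) how early it first appears, (b) whether it occurs
    inside an emphasizing HTML tag, and (c) TF-IDF against a small corpus."""
    tokens = tokenize(page_text)
    tf = Counter(tokens)
    n_docs = len(corpus) + 1
    scores = {}
    for term in tf:
        first = tokens.index(term) / max(1, len(tokens))      # 0.0 = very early
        location = 1.0 - first
        emphasis = 1.0 if term in emphasized_terms else 0.0
        df = 1 + sum(term in tokenize(doc) for doc in corpus)
        tfidf = (tf[term] / len(tokens)) * math.log(n_docs / df)
        scores[term] = 0.3 * location + 0.3 * emphasis + 0.4 * tfidf
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

corpus = ["generic web page about many things", "another page with common words"]
page = "<h1>Quantum computing</h1> news about quantum hardware and qubits"
print(key_terms(page, emphasized_terms={"quantum", "computing"}, corpus=corpus))
```

In practice the emphasized-term set would be extracted from the page's own markup (headings, bold, title tags) rather than passed in by hand as it is here.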
"Sequence-based Web Page Template Detection." Master's thesis, 2011. http://hdl.handle.net/2286/R.I.9268.
Dissertation/Thesis
M.S. Computer Science 2011
Videira, António Miguel Baptista. "Web Page Classification using Visual Features." Master's thesis, 2013. http://hdl.handle.net/10316/40388.
Dai, Shyh-Ming, and 戴世明. "Link-based Automatic Web Page Classification." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/72672279859286395513.
Yuan Ze University
Department of Computer Science and Engineering
90
As the Internet rapidly develops, the amount of information accumulates vastly. Web search engines and web page categories help users find important information quickly and effectively, and have therefore become two important services on the Internet. However, both search engines and web page categories need support mechanisms for precisely classifying web pages in order to improve their effectiveness; automatic web page classification is one such mechanism. Because the amount of Internet information is far too large to be classified manually, automatic web page classification is becoming the mainstream of web page classification. However, two problems need further discussion: how to improve classification accuracy, and how to reduce the ratio of pages that cannot be classified at all. This thesis proposes a new approach, called link-based automatic web page classification, to relieve these problems. We improve a tag-weighted approach (Jenkins & Inman) by incorporating link analysis, which picks out the authority links from the web page being classified and analyzes the content pointed to by those authority links. We have conducted experiments to compare our approach with the Jenkins & Inman approach, using a set of classified Yahoo! web pages for training and verification. The experimental results show that link-based automatic web page classification indeed improves the classification correctness rate and reduces the number of web pages that cannot be classified under Jenkins' approach.
Pi-Hsien, Chang, and 張碧顯. "Web Structure and Page Relationship Discovery from Web Server Log." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/20161066534968523651.
National Chi Nan University
Department of Computer Science and Information Engineering
96
Web usage mining, which extracts knowledge from Web server logs, is an application of data mining methods. The mining results can be used for improving Web design, predicting user behavior and personalizing Web sites. Web usage mining has three major stages: data preprocessing, pattern discovery and pattern analysis. Data preprocessing, which normally takes more than 60% of the whole mining process, is the most time-consuming stage. Cooley divided data preprocessing into four steps plus one optional step: data cleaning, user/session identification, path completion, page view identification, and the optional transaction identification. Until now, the preprocessing stage of Web usage mining has had to gather external domain knowledge, such as the Web structure and a Web content classification, which greatly limits the application of Web usage mining. It takes extra time for the analyst to become familiar with the Web structure and content, and the Web administrator may have confidentiality concerns about handing the detailed Web structure to the analyst. Thus, we want to solve this problem by creating a platform between analysts and Web administrators to help them communicate better during the Web usage mining process. In this thesis, we propose a framework that can reconstruct the Web structure and discover page relationships from the implicit information in the Web server log. The experimental results showed that Web site reconstruction and page relationship discovery achieve a precision of more than 90%. The method can easily be embedded in the usual preprocessing stage and is a workable, practical substitute.
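The data cleaning and user/session identification steps mentioned above can be sketched in a few lines: failed requests and non-page assets are dropped, requests are grouped by IP, and a session is cut whenever the gap between two requests exceeds a timeout. The tiny synthetic log and the 30-minute timeout are assumptions for illustration, not the thesis's framework.

```python
from datetime import datetime, timedelta

RAW_LOG = [
    # (ip, timestamp, request path, status) -- a tiny synthetic access log
    ("10.0.0.1", "2008-05-01 10:00:02", "/index.html", 200),
    ("10.0.0.1", "2008-05-01 10:00:03", "/style.css", 200),   # asset, cleaned out
    ("10.0.0.1", "2008-05-01 10:05:10", "/products.html", 200),
    ("10.0.0.1", "2008-05-01 11:40:00", "/index.html", 200),  # new session (>30 min gap)
    ("10.0.0.2", "2008-05-01 10:01:00", "/index.html", 404),  # error, cleaned out
]

def clean(entries):
    """Data cleaning: drop failed requests and non-page assets."""
    skip = (".css", ".js", ".png", ".gif", ".jpg", ".ico")
    return [e for e in entries if e[3] == 200 and not e[2].endswith(skip)]

def sessionize(entries, timeout=timedelta(minutes=30)):
    """User/session identification: group requests by IP, split on long gaps."""
    sessions = {}
    last_seen = {}
    for ip, ts, path, _ in sorted(entries, key=lambda e: (e[0], e[1])):
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if ip not in sessions or t - last_seen[ip] > timeout:
            sessions.setdefault(ip, []).append([])   # start a new session for this user
        sessions[ip][-1].append(path)
        last_seen[ip] = t
    return sessions

print(sessionize(clean(RAW_LOG)))
```

The page sequences produced this way are the input from which link structure and page relationships can then be inferred in the pattern discovery stage.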