Theses on the topic "Web of document"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 50 best theses for your research on the topic "Web of document".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Tandon, Seema Amit. "Web Texturizer: Exploring intra web document dependencies". CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2539.
Martins, Bruno. "Inter-Document Similarity in Web Searches". Master's thesis, Department of Informatics, University of Lisbon, 2004. http://hdl.handle.net/10451/14045.
Arocena, Gustavo O. "WebOQL, exploiting document structure in Web queries". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq29235.pdf.
Tang, Bo. "WEBDOC: AN AUTOMATED WEB DOCUMENT INDEXING SYSTEM". MSSTATE, 2002. http://sun.library.msstate.edu/ETD-db/theses/available/etd-11052002-213723/.
伍頌斌 and Chung-pun Ng. "Document distribution algorithms for distributed web servers". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31227703.
Lecarpentier, Jean-Marc. "Sydonie : modèle de document et ingénierie du Web". PhD thesis, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-01070899.
Lecarpentier, Jean-Marc. "Sydonie : architecture de document et ingénierie du WEB". Caen, 2011. http://www.theses.fr/2011CAEN2044.
The Web has evolved, in the past few years, from a document-centered approach to become a web of applications. In this regard, multilingual composite document management has become a focal point for Content Management Systems (CMS). This thesis offers a new approach, inspired by the Functional Requirements for Bibliographic Records (FRBR) report. We propose a tree-based model to describe the relations between a digital document's various versions, translations and formats. The proposed approach allows composite documents to be rendered according to a user's preferences, using content negotiation and the relationships between documents at the highest level of the tree. We created a web development framework called Sydonie (SYstème de gestion de DOcuments Numériques pour l'Internet et l'Édition) as part of a joint research and industrial project. The proposed model has been implemented and validated within the Sydonie framework. Drawing on both industrial and academic work in the field of web engineering, Sydonie offers new ways to develop web applications. Finally, we propose a model for web applications to interact with documents' metadata. The architecture we propose helps web developers implement metadata management in web applications more easily.
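The content-negotiation idea in the abstract above can be illustrated with a minimal sketch: walk a work's version tree (work, then translations, then formats) and return the first leaf matching the user's preferences. The tree layout, names and file paths are hypothetical, not Sydonie's actual model:

```python
def negotiate(work, preferences):
    """Return the first variant of `work` matching the user's ordered
    language and format preferences, or None if nothing matches.
    `work` maps language -> {format: resource} (an FRBR-inspired
    layout assumed for this sketch)."""
    for lang in preferences["languages"]:
        translations = work.get(lang)
        if not translations:
            continue
        for fmt in preferences["formats"]:
            if fmt in translations:
                return translations[fmt]
    return None

# Hypothetical document: one work, two translations, several formats.
article = {
    "en": {"html": "article.en.html", "pdf": "article.en.pdf"},
    "fr": {"html": "article.fr.html"},
}
prefs = {"languages": ["fr", "en"], "formats": ["pdf", "html"]}
print(negotiate(article, prefs))  # → article.fr.html
```

Language preference wins over format preference here; the real model negotiates over a richer relationship tree.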
Cloran, Russell Andrew. "Trust on the semantic web". Thesis, Rhodes University, 2006. http://eprints.ru.ac.za/852/.
Dongo Escalante, Irvin Franco Benito. "Anonymisation de documents RDF". Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3045/document.
With the advance of the Semantic Web and the Linked Open Data initiatives, a huge quantity of RDF data is available on the Internet. The goal is to make this data readable by humans and machines, adopting special formats and connecting datasets by using International Resource Identifiers (IRIs), which are abstractions of real resources of the world. As more data is published and shared, sensitive information is also provided. In consequence, the privacy of entities of interest (e.g., people, companies) is a real challenge, requiring special techniques to ensure privacy and adequate security over data available in an environment in which every user has access to the information without any restriction (the Web). Three main aspects are considered to ensure entity protection: (i) preserving privacy, by identifying and treating the data that can compromise the privacy of the entities (e.g., identifiers, quasi-identifiers); (ii) identifying the utility of the public data for diverse applications (e.g., statistics, testing, research); and (iii) modeling the background knowledge that can be used by adversaries (e.g., number of relationships, a specific relationship, information about a node). Anonymization is one privacy-protection technique that has been successfully applied in practice to databases and graph structures. However, studies about anonymization in the context of RDF data are very limited. These studies are initial works for protecting individuals in RDF data, since they show a practical anonymization approach for simple scenarios, such as the use of generalization and suppression operations based on hierarchies. For complex scenarios, where a diversity of data is present, the existing anonymization approaches do not ensure sufficient privacy. Thus, in this context, we propose an anonymization framework, which analyzes the neighbors according to the background knowledge, focused on the privacy of entities represented as nodes in the RDF data.
Our anonymization approach is able to provide better privacy, since it takes into account the l-diversity condition as well as the neighbors (nodes and edges) of the entities of interest. Also, an automatic anonymization process is provided by the use of anonymization operations associated with the datatypes.
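The l-diversity condition mentioned above can be sketched in a few lines: every group of records sharing the same (generalized) quasi-identifier values must contain at least l distinct sensitive values. The toy table and attribute names are hypothetical; this is not the framework's actual algorithm:

```python
from collections import defaultdict

def satisfies_l_diversity(records, quasi_ids, sensitive, l):
    """Check the l-diversity condition: each group of records that
    share the same quasi-identifier values must exhibit at least
    l distinct values of the sensitive attribute."""
    groups = defaultdict(set)
    for rec in records:
        key = tuple(rec[a] for a in quasi_ids)
        groups[key].add(rec[sensitive])
    return all(len(values) >= l for values in groups.values())

# Toy anonymized table: age and zip generalized (hypothetical data).
table = [
    {"age": "20-30", "zip": "75*", "disease": "flu"},
    {"age": "20-30", "zip": "75*", "disease": "cold"},
    {"age": "30-40", "zip": "69*", "disease": "flu"},
    {"age": "30-40", "zip": "69*", "disease": "flu"},
]

# The second group has only one sensitive value, so 2-diversity fails.
print(satisfies_l_diversity(table, ["age", "zip"], "disease", 2))  # → False
```

An anonymizer would keep generalizing or suppressing until this check passes for the chosen l.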
Stankovic, Milan. "Convergence entre Web Social et Web Sémantique. Application à l'innovation à l'aide du Web". Thesis, Paris 4, 2012. http://www.theses.fr/2012PA040247/document.
This thesis builds upon work on the Social Semantic Web, a research perspective on the complementarity and coevolution of two aspects of the Web, the social and the semantic one. Web development in recent years has given rise to a huge graph of semantically structured data, partly resulting from user activity. We are particularly interested in the use of this graph in order to facilitate access to information found on the Web in a useful, informative manner. This problem is particularly studied in scenarios related to innovation on the Web: practices that use Web technologies to contribute to the emergence of innovation. A notable specificity of this context, so far little discussed in the literature, is the need to encourage serendipity and discovery. Beyond the simple relevance sought in any search and recommendation situation on the Web, the context of innovation requires a certain openness to allow the user to access relevant yet unexpected information, and should also open opportunities to learn and translate ideas from one domain to another. The work presented in this thesis therefore aims to assist, directly or indirectly, innovators online (e.g., companies seeking to innovate, experts and carriers of ideas) in making discoveries. We address each of these challenges in different parts of the thesis. This vision is principally implemented through the construction of an expert search system, Hy.SemEx; a keyword recommendation system allowing the discovery of unknown relevant keywords, HyProximity; and an approach for recommending collaborators to experts in order to help them face multidisciplinary problems.
Elza, Dethe. "Browser evolution: document access on the World Wide Web". Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176833339.
Immaneni, Trivikram. "A HYBRID APPROACH TO RETRIEVING WEB DOCUMENTS AND SEMANTIC WEB DATA". Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1199923822.
Huang, Yuzhou. "Duplicate detection in XML Web data /". View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20HUANG.
Faessel, Nicolas. "Indexation et interrogation de pages web décomposées en blocs visuels". Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX30014/document.
This thesis is about indexing and querying Web pages. We propose a new model called BlockWeb, based on the decomposition of Web pages into a hierarchy of visual blocks. This model takes into account the visual importance of each block as well as the permeability of a block's content to its neighbor blocks on the page. Splitting up a page into blocks has several advantages in terms of indexing and querying. It allows querying the system with a finer granularity than the whole page: the blocks most similar to the query can be returned instead of the whole page. A page is modeled as a directed acyclic graph, the IP graph, where each node is associated with a block and is labeled by the coefficient of importance of this block, and each arc is labeled by the coefficient of permeability of the target node's content to the source node's content. In order to build this graph from the block tree representation of a page, we propose a new language: XIML (an acronym for XML Indexing Management Language), a rule-based language similar to XSLT. The model has been assessed on two distinct datasets: finding the best entry point in a dataset of electronic newspaper articles, and image indexing and querying in a dataset drawn from web pages of the ImagEval 2006 campaign. We present the results of these experiments.
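A minimal sketch of how importance and permeability coefficients could combine when indexing blocks: each block's index gets its own terms scaled by its importance, plus the terms of blocks whose content permeates into it, scaled by the arc's permeability. The weighting scheme, direction convention and data are illustrative assumptions, not the exact BlockWeb/XIML semantics:

```python
def block_index(blocks, arcs):
    """Compute per-block term weights.

    blocks: {name: (importance, {term: weight})}
    arcs:   [(source, target, permeability)] -- the target block
            receives the source block's terms scaled by the
            permeability coefficient (an assumption of this sketch).
    """
    index = {}
    for name, (importance, terms) in blocks.items():
        index[name] = {t: importance * w for t, w in terms.items()}
    for src, dst, perm in arcs:
        for t, w in blocks[src][1].items():
            index[dst][t] = index[dst].get(t, 0.0) + perm * w
    return index

blocks = {
    "title":   (1.0, {"python": 2.0}),
    "caption": (0.5, {"snake": 1.0}),
    "image":   (0.8, {}),          # an image block has no text of its own
}
# The caption's content permeates into the image block.
arcs = [("caption", "image", 0.9)]

idx = block_index(blocks, arcs)
print(idx["image"])  # → {'snake': 0.9}
```

This is why block-level retrieval can return an image block for a textual query: neighboring text leaks into its index with a controlled weight.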
Vidal, Colin. "Programmation web réactive". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4049/document.
The Web is a universal platform used to develop applications interacting with users and remote services. These interactions are implemented as asynchronous events that can be fired at any time. JavaScript, the mainstream language of the Web, handles asynchronous events using low-level abstractions that make it difficult to write, verify, and maintain interactive applications. We have addressed this problem by designing and implementing a new domain-specific language called Hiphop.js. It offers an alternative to the JavaScript event-handling mechanism by reusing temporal constructions coming from the synchronous programming language Esterel. These constructions make the control flow of the program explicit and deterministic. Hiphop.js is embedded in JavaScript and suits the traditional dynamic programming style of the Web. It is tightly coupled to JavaScript, with which it can exchange values and access any data structure. It can also support the dynamic modification of existing programs needed to support on-demand download on the Web. It can run on both ends of Web applications, namely on servers and on clients. In this thesis, we present Hiphop.js, its design and its implementation. We give an overview of its programming environment and we present the prototypical web applications we have implemented to validate the approach.
Mereuta, Alina. "Smart web accessibility platform : dichromacy compensation and web page structure improvement". Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4032/document.
This thesis focuses on enhancing web accessibility for users with visual disabilities using tools integrated within the SmartWeb Accessibility Platform (SWAP). After a synthesis on accessibility, SWAP is presented. Our first contribution consists in reducing the contrast loss for textual information in web pages for dichromat users while maintaining the author's intentions conveyed by colors. The contrast compensation problem is reduced to minimizing a fitness function which depends on the original colors and the relationships between them. The interest and efficiency of three methods (mass-spring system, CMA-ES, API) are assessed on two datasets (real and artificial). The second contribution focuses on enhancing web page structure for screen reader users in order to overcome the effects of content linearization. Using heuristics and machine learning techniques, the main zones of the page are identified. The page structure can then be enhanced using ARIA statements and access links to improve zone identification by screen readers.
Mull, Randall Franklin. "Teaching web design at the higher education level". Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=1954.
Title from document title page. Document formatted into pages; contains iii, 47 p. Vita. Includes abstract. Includes bibliographical references (p. 36-37).
Do, Tuan Anh. "A quality-centered approach for web application engineering". Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1201/document.
Web application developers are not all experts. Even if they use methods such as UWE (UML web engineering) and CASE tools, they are not always able to make good decisions regarding the content of the web application, the navigation schema, and/or the presentation of information. The literature provides them with many guidelines for these tasks. However, this knowledge is disseminated across many sources and is not structured. In this dissertation, we perform a knowledge capitalization of all these guidelines. The contribution is threefold: (i) we propose a meta-model allowing a rich representation of these guidelines, (ii) we propose a grammar enabling the description of existing guidelines, and (iii) based on this grammar, we developed a guideline management tool. We enrich the UWE method with this knowledge base, leading to a quality-based approach. Thus, our tool enriches existing UWE-based Computer-Aided Software Engineering prototypes with ad hoc guidance.
Ghenname, Mérième. "Le web social et le web sémantique pour la recommandation de ressources pédagogiques". Thesis, Saint-Etienne, 2015. http://www.theses.fr/2015STET4015/document.
This work has been jointly supervised by U. Jean Monnet Saint Etienne, in the Hubert Curien Lab (Frederique Laforest, Christophe Gravier, Julien Subercaze), and U. Mohamed V Rabat, LeRMA ENSIAS (Rachida Ahjoun, Mounia Abik). Knowledge, education and learning are major concerns in today's society. Technologies for human learning aim to promote, stimulate, support and validate the learning process. Our approach explores the opportunities raised by mixing Social Web and Semantic Web technologies for e-learning. More precisely, we work on discovering learner profiles from their activities on the Social Web. The Social Web can be a source of information, as it involves users in the information world and gives them the ability to participate in the construction and dissemination of knowledge. We focused our attention on tracking the different types of contributions, activities and conversations in learners' spontaneous collaborative activities on social networks. The learner profile is not only based on the knowledge extracted from his/her activities on the e-learning system, but also on his/her many activities on social networks. We propose a methodology for exploiting the hashtags contained in users' writings for the automatic generation of learners' semantic profiles. Hashtags require some processing before becoming a source of knowledge about user interests. We have defined a method to identify the semantics of hashtags and the semantic relationships between the meanings of different hashtags. Along the way, we have defined the concept of the Folksionary, a hashtag dictionary that, for each hashtag, clusters its definitions into meanings. Semantized hashtags are then used to feed the learner's profile so as to personalize recommendations on learning material. The goal is to build a semantic representation of the activities and interests of learners on social networks in order to enrich their profiles.
We also discuss our recommendation approach based on three types of filtering (personalized, social, and statistical interactions with the system). We focus on the personalized recommendation of pedagogical resources to the learner according to his/her expectations and profile.
Tolomei, Gabriele <1980>. "Enhancing web search user experience : from document retrieval to task recommendation". Doctoral thesis, Università Ca' Foscari Venezia, 2011. http://hdl.handle.net/10579/1231.
The World Wide Web is the biggest and most heterogeneous database that humans have ever built, making it the place of choice where people search for any sort of information through Web search engines. Indeed, users are increasingly asking Web search engines to support their daily tasks (e.g., "planning holidays", "obtaining a visa", "organizing a birthday party", etc.), instead of simply looking for Web pages. In this Ph.D. dissertation, we sketch and address two core research challenges that we claim next-generation Web search engines should tackle for enhancing the user search experience, i.e., Web task discovery and Web task recommendation. Both these challenges rely on an actual understanding of user search behaviors, which can be achieved by mining knowledge from query logs. The search processes of many users are analyzed at a higher level of abstraction, namely from a "task-by-task" instead of a "query-by-query" perspective, thereby producing a model of user search tasks, which in turn can be used to support people during their daily "Web lives".
Yang, Jingtao. "Document flow model : a formal notation for modelling asynchronous web services". Thesis, University of Southampton, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427364.
Roussinov, Dmitri G. and Hsinchun Chen. "Document clustering for electronic meetings: an experimental comparison of two techniques". Elsevier, 1999. http://hdl.handle.net/10150/105091.
In this article, we report our implementation and comparison of two text clustering techniques. One is based on Ward's clustering and the other on Kohonen's Self-Organizing Maps. We have evaluated how closely clusters produced by a computer resemble those created by human experts. We have also measured the time that it takes for an expert to "clean up" the automatically produced clusters. The technique based on Ward's clustering was found to be more precise. Both techniques worked equally well in detecting associations between text documents. We used text messages obtained from group brainstorming meetings.
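Ward's merge criterion, which the comparison above relies on, can be sketched as a greedy agglomerative loop: repeatedly merge the pair of clusters whose union least increases within-cluster variance. Toy data, not the authors' implementation:

```python
def ward_cluster(points, k):
    """Agglomerative clustering down to k clusters using Ward's
    criterion: merge the pair with the smallest variance increase
    delta = |A||B| / (|A| + |B|) * ||centroid_A - centroid_B||^2."""
    clusters = [[p] for p in points]

    def centroid(c):
        dims = len(c[0])
        return [sum(p[d] for p in c) / len(c) for d in range(dims)]

    def ward_delta(a, b):
        ca, cb = centroid(a), centroid(b)
        dist2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return len(a) * len(b) / (len(a) + len(b)) * dist2

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: ward_delta(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two obvious groups of 2-D "document vectors" (toy data).
docs = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
groups = ward_cluster(docs, 2)
print(sorted(len(g) for g in groups))  # → [2, 2]
```

Real document clustering would first turn texts into term-weight vectors; the merge loop itself is unchanged.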
Harmon, Trev R. "On-Line Electronic Document Collaboration and Annotation". Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1589.pdf.
Han, Wei. "Wrapper application generation for semantic web". Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5407.
Texto completoOita, Marilena. "Inférer des objets sémantiques du Web structuré". Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0060/document.
This thesis focuses on the extraction and analysis of Web data objects, investigated from different points of view: temporal, structural, semantic. We first survey different strategies and best practices for deriving temporal aspects of Web pages, together with a more in-depth study of Web feeds for this particular purpose, and other statistics. Next, in the context of Web pages dynamically generated by content management systems, we present two keyword-based techniques that perform article extraction from such pages. Keywords, automatically acquired, guide the process of object identification, either at the level of a single Web page (SIGFEED) or across different pages sharing the same template (FOREST). Finally, in the context of the deep Web, we present a generic framework that aims at discovering the semantic model of a Web object (here, a data record) by, first, using FOREST for the extraction of objects and, second, representing the implicit rdf:type similarities between the object attributes and the entity of the form as relationships that, together with the instances extracted from the objects, form a labeled graph. This graph is further aligned with an ontology, such as YAGO, for the discovery of the unknown types and relations.
Sanoja, Vargas Andrés. "Segmentation de pages web, évaluation et applications". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066004/document.
Web pages are becoming more complex than ever, as they are generated by Content Management Systems (CMS). Thus, analyzing them, i.e. automatically identifying and classifying the different elements of Web pages, such as the main content and menus, becomes difficult. A solution to this issue is provided by Web page segmentation, which refers to the process of dividing a Web page into visually and semantically coherent segments called blocks. The quality of a Web page segmenter is measured by its correctness and its genericity, i.e. the variety of Web page types it is able to segment. Our research focuses on enhancing this quality and measuring it in a fair and accurate way. We first propose a conceptual model for segmentation, as well as Block-o-Matic (BoM), our Web page segmenter. We propose an evaluation model that takes the content as well as the geometry of blocks into account in order to measure the correctness of a segmentation algorithm according to a predefined ground truth. The quality of four state-of-the-art algorithms is experimentally tested on four types of pages. Our evaluation framework allows testing any segmenter, i.e. measuring its quality. The results show that BoM presents the best performance among the four segmentation algorithms tested, and also that the performance of segmenters depends on the type of page to segment. We present two applications of BoM. Pagelyzer uses BoM for comparing two versions of a Web page and decides whether they are similar or not. It is the main contribution of our team to the European project SCAPE (FP7-IP). We also developed a tool for migrating Web pages from the HTML4 format to the HTML5 format in the context of Web archives.
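A geometric correctness measure in the spirit of the evaluation model above can be sketched with rectangle intersection-over-union between predicted and ground-truth blocks; the threshold and scoring rule are illustrative assumptions, not BoM's actual metric:

```python
def iou(a, b):
    """Intersection-over-union of two blocks given as
    (x, y, width, height) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def segmentation_score(predicted, ground_truth, threshold=0.5):
    """Fraction of ground-truth blocks that some predicted block
    covers with IoU above the threshold: a simple geometric proxy
    for segmentation correctness."""
    matched = sum(
        1 for g in ground_truth if any(iou(p, g) > threshold for p in predicted)
    )
    return matched / len(ground_truth)

truth = [(0, 0, 100, 20), (0, 20, 100, 80)]    # header + body
pred  = [(0, 0, 100, 22), (0, 25, 100, 75)]    # slightly misaligned
print(segmentation_score(pred, truth))  # → 1.0
```

A content-aware metric would additionally compare the text inside matched blocks, as the evaluation model above does.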
Chen, Benfeng. "Transforming Web pages to become standard-compliant through reverse engineering /". View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?COMP%202006%20CHEN.
Cui, Heng. "Analyse et diagnostic des performances du web du point de vue de l'utilisateur". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0017/document.
In recent years, the research community's interest in the performance of Web browsing has grown steadily. In order to reveal the end-user perceived performance of Web browsing, in this thesis we address multiple issues of Web browsing performance from the perspective of the end-user. The thesis is composed of three parts: the first part introduces our initial platform, which is based on browser-level measurements. We explain the measurement metrics that can be easily acquired from the browser and the indicators for end-user experience. Then, we use clustering techniques to correlate higher-level performance metrics with lower-level metrics. In the second part, we present our diagnosis tool called FireLog. We first discuss the different possible causes that can prevent a Web page from achieving fast rendering; then, we describe the details of the tool's components and its measurements. Based on the measured metrics, we illustrate our model for performance diagnosis in an automatic fashion. In the last part, we propose a new methodology named the Critical Path Method for Web performance analysis. We first explain the details of the Web browser's intrinsic features during page rendering and then we formalize the methodology.
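The critical-path idea above amounts to a longest-path computation over a DAG of page-load activities: the chain of dependent activities with the largest total duration bounds the page-load time. The timings and dependency graph below are hypothetical:

```python
def critical_path(durations, deps):
    """Longest path through a DAG of page-load activities.
    durations: {activity: time}; deps: {activity: [prerequisites]}.
    Returns (total_time, activities_on_the_critical_path)."""
    finish = {}   # activity -> (earliest finish time, predecessor)

    def visit(act):
        if act not in finish:
            pres = deps.get(act, [])
            best = max(((visit(p)[0], p) for p in pres), default=(0, None))
            finish[act] = (best[0] + durations[act], best[1])
        return finish[act]

    for act in durations:
        visit(act)
    end = max(finish, key=lambda a: finish[a][0])
    path, node = [], end
    while node is not None:
        path.append(node)
        node = finish[node][1]
    return finish[end][0], list(reversed(path))

# Hypothetical timings (ms) for one page load.
durations = {"dns": 20, "tcp": 30, "html": 100, "css": 60, "js": 90, "render": 40}
deps = {"tcp": ["dns"], "html": ["tcp"], "css": ["html"], "js": ["html"],
        "render": ["css", "js"]}

total, path = critical_path(durations, deps)
print(total, path)  # → 280 ['dns', 'tcp', 'html', 'js', 'render']
```

Speeding up an activity off this path (here, CSS) would not improve the total load time, which is exactly why the critical path is the right diagnostic object.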
Matsubara, Shigeki, Tomohiro Ohno and Masashi Ito. "Text-Style Conversion of Speech Transcript into Web Document for Lecture Archive". Fuji Technology Press, 2009. http://hdl.handle.net/2237/15083.
Texto completoLee, David Chunglin. "Pre-fetch document caching to improve World-Wide Web user response time". Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/44951.
The World-Wide Web, or the Web, is currently one of the most heavily used network services. Because of this, improvements and new technologies are rapidly being developed and deployed. One important area of study is improving user response time through the use of caching mechanisms. Most prior work considered multiple user caches running on cache relay systems. These are mostly post-caching systems; they perform no "look ahead", or pre-fetch, functions. This research studies a pre-fetch caching scheme based on Web server access statistics. The scheme employs a least-recently-used replacement policy and allows multiple simultaneous document retrievals to occur. It is based on a combined statistical and locality-of-reference model associated with the links in hypertext systems. Results show that cache hit rates double compared with schemes that use only post-caching, while results for user response time improvements are mixed. The conclusion is that pre-fetch caching of Web documents offers an improvement over post-caching methods and should be studied in detail for both single-user and multiple-user systems.
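A minimal sketch of pre-fetch caching driven by server access statistics with LRU replacement, in the spirit of the scheme described above; the statistics table and fetch function are hypothetical stand-ins:

```python
from collections import OrderedDict

class PrefetchCache:
    """LRU document cache that, after each request, pre-fetches the
    documents most often requested next according to server access
    statistics (a sketch, not the thesis implementation)."""

    def __init__(self, capacity, next_doc_stats, fetch):
        self.capacity = capacity
        self.stats = next_doc_stats      # {url: [(next_url, count), ...]}
        self.fetch = fetch               # callable: url -> document
        self.cache = OrderedDict()       # url -> document, in LRU order
        self.hits = self.misses = 0

    def _store(self, url, doc):
        self.cache[url] = doc
        self.cache.move_to_end(url)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def get(self, url, prefetch=1):
        if url in self.cache:
            self.hits += 1
            self.cache.move_to_end(url)
            doc = self.cache[url]
        else:
            self.misses += 1
            doc = self.fetch(url)
            self._store(url, doc)
        # Pre-fetch the documents most likely to be requested next.
        likely = sorted(self.stats.get(url, []), key=lambda x: -x[1])
        for nxt, _ in likely[:prefetch]:
            if nxt not in self.cache:
                self._store(nxt, self.fetch(nxt))
        return doc

stats = {"/index": [("/news", 9), ("/about", 1)]}
cache = PrefetchCache(3, stats, fetch=lambda u: "<doc %s>" % u)
cache.get("/index")   # miss; /news is pre-fetched in the background
cache.get("/news")    # hit thanks to the pre-fetch
print(cache.hits, cache.misses)  # → 1 1
```

Without pre-fetching, both requests would have missed; the hit on `/news` is exactly the effect the thesis measures.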
Master of Science
Everts, TJ. "Using Formal Concept Analysis with a Push-based Web Document Management System". Thesis, Honours thesis, University of Tasmania, 2004. https://eprints.utas.edu.au/116/1/EvertsT_Hons_Thesis2004.pdf.
Rocco, Daniel J. (Daniel John). "Discovering and Tracking Interesting Web Services". Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4889.
Texto completoCao, Tien Dung. "Test and Validation of Web Services". Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14122/document.
In this thesis, we propose testing approaches for web service composition. We focus on unit and integration testing of an orchestration of web services, and also on the runtime verification aspect. We define a unit testing framework for an orchestration that is composed of a test architecture, a conformance relation and two proposed testing approaches based on the Timed Extended Finite State Machine (TEFSM) model: an offline approach, in which test activities such as timed test case generation, test execution and verdict assignment are applied sequentially, and an online approach, in which test activities are applied in parallel. For the integration testing of an orchestration, we combine two approaches: active and passive. First, the active approach is used to start a new session of the orchestration by sending a SOAP request. Then all the messages communicated among the services are collected and analyzed by a passive approach. On the runtime verification aspect, we are interested in the correctness of an execution trace with respect to a set of defined constraints, called rules. We have proposed to extend the Nomad language by defining constraints on each atomic action (fixed conditions) and a set of data correlations between the actions in order to define rules for web services. This language allows us to define a rule over future and past time, and to use the operators NOT, AND and OR to combine conditions into the context of a rule. Afterwards, we propose an algorithm to check the correctness of a message sequence in parallel with the trace collection engine. Specifically, this algorithm verifies messages one by one without storing them.
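Checking a trace message by message against a past-time rule, as described above, can be sketched with a streaming checker that keeps only minimal state rather than the full trace. The rule and message format are invented for illustration; this is not the Nomad extension itself:

```python
def make_checker():
    """Streaming checker for a past-time rule on a service trace:
    'a Cancel(id) message is permitted only if an Order(id) message
    was observed earlier'. Messages are verified one by one; only
    the set of seen order ids is kept, never the full trace."""
    seen_orders = set()

    def check(message):
        kind, order_id = message
        if kind == "Order":
            seen_orders.add(order_id)
            return True
        if kind == "Cancel":
            return order_id in seen_orders   # past-time condition
        return True                          # other messages unconstrained

    return check

check = make_checker()
trace = [("Order", 1), ("Cancel", 1), ("Cancel", 2)]
print([check(m) for m in trace])  # → [True, True, False]
```

Running such a checker alongside the trace collection engine gives a verdict per message while keeping memory bounded.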
Kolafa, Lukáš. "Generování výstupních sestav v prostředí webu". Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-75494.
Texto completoTerdjimi, Mehdi. "Adaptation Contextuelle Multi-Préoccupations Orientée Sémantique dans le Web des Objets". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1315/document.
The Web of Things (WoT) takes place in a variety of application domains (e.g. homes, enterprises, industry, healthcare, cities, agriculture). It builds a Web-based uniform layer on top of the Internet of Things (IoT) to overcome the heterogeneity of the protocols present in IoT networks. WoT applications provide added value by combining access to connected objects and external data sources, as well as standards-based reasoning (RDFS, OWL 2), to allow for the interpretation and manipulation of gathered data as contextual information. Contextual information is then exploited to allow these applications to adapt their components to changes in their environment. Yet, contextual adaptation is a major challenge for the WoT. Existing adaptation solutions are either tightly coupled with their application domains (as they rely on domain-specific context models) or offered as standalone software components that hardly fit into Web-based and semantic architectures. This leads to integration, performance and maintainability problems. In this thesis, we propose a multi-purpose contextual adaptation solution for WoT applications that addresses usability, flexibility, relevance, and performance issues in such applications. Our work is based on a smart agriculture scenario running inside the avatar-based platform ASAWoO. First, we provide a generic context meta-model to build standard, interoperable and reusable context models. Second, we present a context lifecycle and a contextual adaptation workflow that provide parallel raw data semantization and contextualization at runtime, using heterogeneous sources (expert knowledge, device documentation, sensors, Web services, etc.). Third, we present situation-driven adaptation rule design and generation at design time that eases the work of experts and WoT application designers.
Fourth, we provide two optimizations of contextual reasoning for the Web: the first adapts the location of reasoning tasks depending on the context, and the second improves the incremental maintenance of contextual information.
Lowery, David S. "Utilization of Web services to improve communication of operational information". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FLowery.pdf.
Zhuo, Ling and 卓玲. "Document replication and distribution algorithms for load balancing in geographically distributed web server systems". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31228148.
Qumsiyeh, Rani Majed. "Easy to Find: Creating Query-Based Multi-Document Summaries to Enhance Web Search". BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2713.
Texto completoPassant, Alexandre. "Technologies du Web Sémantique pour l’Entreprise 2.0". Thesis, Paris 4, 2009. http://www.theses.fr/2009PA040077/document.
The work described in this thesis provides different methods, thoughts and implementations combining Web 2.0 and the Semantic Web. After introducing those terms, we present the current shortcomings of tools such as blogs and wikis, as well as of tagging practices, in an Enterprise 2.0 context. We define the SemSLATES methodology and the global vision of a middleware architecture based on Semantic Web technologies (languages, models, tools and protocols) to solve these issues. Then, we detail the various ontologies (in the computer science sense) that we built to achieve this goal: on the one hand, models dedicated to socio-structural metadata, actively contributing to SIOC (Semantically-Interlinked Online Communities), and on the other hand, models extending public ontologies for domain data. Moreover, the MOAT (Meaning Of A Tag) ontology allows us to combine the flexibility of tagging with the power of ontology-based indexing. We then describe several software implementations, at EDF R&D, dedicated to easily producing and using semantic annotations to enrich the original tools: semantic wikis, advanced visualization interfaces (faceted browsing, semantic mash-ups, etc.) and a semantic search engine. Several contributions have been published as public ontologies or open-source software, contributing more generally to this convergence between Web 2.0 and the Semantic Web, not only in the enterprise but on the Web as a whole.
Cao, Hanyang. "Développement d'applications Web avec des composants tiers". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0017/document.
Web applications are highly popular and using some of them (e.g., Facebook, Google) is becoming part of our lives. Developers are eager to create various web applications to meet people's increasing demands. To build a web application, developers need to know some basic programming technologies. Moreover, they prefer to use third-party components (such as server-side libraries, client-side libraries, and REST services) in their web applications. By including those components, they benefit in terms of maintainability, reusability, readability, and efficiency. In this thesis, we propose to help developers use third-party components when they create web applications. We identify three impediments developers face when using third-party components: What are the best JavaScript libraries to use? How to obtain the standard specifications of REST services? How to adapt to data changes in REST services? We then present three approaches to solve these problems. These approaches have been validated through several case studies and industrial data. We also describe future work to improve our solutions, and some research problems that our approaches can target.
Benouaret, Karim. "Advanced techniques for Web service query optimization". Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10177/document.
As we move from a Web of data to a Web of services, enhancing the capabilities of current Web search engines with effective and efficient techniques for Web service retrieval and selection becomes an important issue. In this dissertation, we present a framework that identifies the top-k Web service compositions according to the user's fuzzy preferences, based on a fuzzification of the Pareto dominance relationship. We also provide a method to improve the diversity of the top-k compositions. An efficient algorithm is proposed for each method. We evaluate our approach through a set of thorough experiments. After that, we consider the problem of Web service selection under multiple users' preferences. We introduce a novel concept for this problem, called the majority service skyline, based on the majority rule. This allows users to make a "democratic" decision on which Web services are the most appropriate. We develop a suitable algorithm for computing the majority service skyline, and conduct a set of thorough experiments to evaluate its effectiveness and the efficiency of our algorithm. We then propose the notion of the α-dominant service skyline, based on a fuzzification of the Pareto dominance relationship, which allows the inclusion of Web services with a good compromise between QoS parameters and the exclusion of Web services with a bad compromise between QoS parameters. We develop an efficient algorithm, based on the R-Tree index structure, for computing the α-dominant service skyline, and evaluate its effectiveness and efficiency through a set of experiments. Finally, we consider the uncertainty of the QoS delivered by Web services. We model each uncertain QoS attribute using a possibility distribution, and we introduce the notions of the pos-dominant service skyline and the nec-dominant service skyline, which allow users to select their desired Web services in the presence of uncertainty in their QoS. We then develop appropriate algorithms to efficiently compute both the pos-dominant and the nec-dominant service skyline, and conduct extensive sets of experiments to evaluate the proposed service skyline extensions and algorithms.
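The skyline notions in this abstract all rest on Pareto dominance over QoS vectors. As a rough illustration (a sketch, not code from the thesis; the naive algorithm and the sample QoS values are assumptions), a crisp service skyline over two QoS attributes can be computed as follows:

```python
# Sketch: computing a crisp service skyline (the Pareto-optimal set) over
# QoS vectors. Assumes lower is better for every attribute (e.g., latency, cost).

def dominates(a, b):
    """a dominates b: no worse on every attribute, strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Naive O(n^2) skyline: keep services that no other service dominates."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]

# Hypothetical QoS vectors: (response time in ms, cost per call)
candidates = [(120, 0.05), (80, 0.10), (200, 0.02), (90, 0.12)]
print(skyline(candidates))  # (90, 0.12) drops out: (80, 0.10) dominates it
```

The fuzzified variants the thesis proposes replace the crisp comparisons above with degrees of dominance; the overall shape of the computation stays the same.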
Petit, Albin. "Introducing privacy in current web search engines". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI016/document.
During the last few years, technological progress in collecting, storing and processing large quantities of data at a reasonable cost has raised serious privacy issues. Privacy concerns many areas, but is especially important in frequently used services like search engines (e.g., Google, Bing, Yahoo!). These services allow users to retrieve relevant content on the Internet by exploiting their personal data. In this context, developing solutions that enable users to use these services in a privacy-preserving way is becoming increasingly important. In this thesis, we introduce SimAttack, an attack against existing mechanisms for querying search engines in a privacy-preserving way. This attack aims at retrieving the original user query. With this attack, we show that three representative state-of-the-art solutions do not protect user privacy in a satisfactory manner. We therefore develop PEAS, a new protection mechanism that better protects user privacy. This solution leverages two types of protection: hiding the user's identity (with a succession of two nodes) and masking users' queries (by combining them with several fake queries). To generate realistic fake queries, PEAS exploits previous queries sent by the users in the system. Finally, we present mechanisms to identify sensitive queries. Our goal is to adapt existing protection mechanisms to protect sensitive queries only, and thus save user resources (e.g., CPU, RAM). We design two modules to identify sensitive queries. By deploying these modules on real protection mechanisms, we establish empirically that they dramatically improve the performance of the protection mechanisms.
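As a rough sketch of the query-masking idea described in this abstract (not the actual PEAS implementation; the function name and the sample query log are hypothetical), the real query can be hidden among fake queries drawn from past traffic:

```python
# Sketch: hiding a real query among fake queries drawn from a log of past
# queries, so an adversary cannot tell which query in the bundle is real.
import random

def obfuscate(real_query, past_queries, k=3, rng=None):
    """Return the real query mixed with k distinct fake queries, shuffled."""
    rng = rng or random.Random()
    fakes = rng.sample([q for q in past_queries if q != real_query], k)
    bundle = fakes + [real_query]
    rng.shuffle(bundle)  # hide the real query's position in the bundle
    return bundle

log = ["weather paris", "python tutorial", "train schedule", "news today"]
bundle = obfuscate("flu symptoms", log, k=2, rng=random.Random(7))
print(bundle)  # three queries, exactly one of which is the real one
```

Sampling fakes from previously observed queries, rather than generating random strings, is what keeps the decoys plausible to an adversary running a SimAttack-style classifier.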
Mekki, Mohamed-Anis. "Synthèse et compilation de services web sécurisés". Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10123/document.
Automatic composition of web services is a challenging task. Many works have considered simplified automata models that abstract away from the structure of the messages exchanged by the services. For the domain of secured services, we propose a novel approach to automated composition of services based on their security policies. Given a community of services and a goal service, we reduce the problem of composing the goal from services in the community to a security problem where an intruder, called the mediator, should intercept and redirect messages between the service community and a client service until reaching a satisfying state. We have implemented the algorithm in the AVANTSSAR Platform and applied the tool to several case studies. We then present a tool that compiles the obtained trace, which describes the execution of the mediator, into corresponding runnable code. For that, we first compute an executable specification, as prudent as possible, of the mediator's role in the orchestration. This specification is expressed in the ASLan language, a formal language designed for modeling Web services tied to security policies. We can then check with automatic tools that this ASLan specification satisfies required security properties such as secrecy and authentication. If no flaw is found, we compile the specification into a Java servlet that the mediator can use to run the orchestration.
Sun, Hua. "Telephone directory web service". CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2421.
Bonnenfant, Bruno Veyron Thierry. "Définir une politique d'archivage du web régional en bibliothèque municipale : l'exemple du web forézien". [S.l.] : [s.n.], 2008. http://www.enssib.fr/bibliotheque-numerique/document-2042.
Texto completoRichard-Foy, Julien. "Ingénierie des applications Web : réduire la complexité sans diminuer le contrôle". Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S110/document.
Thanks to information technologies, some tasks or information processing can be automated, thus saving a significant amount of money. The web platform provides many such digital tools. These are hosted on web servers that centralize information and coordinate users, who can use the tools from several kinds of devices (desktop computer, laptop, smartphone, etc.) through a web browser, without installing anything. Nevertheless, developing such web applications is challenging. The difficulty mainly comes from the distance between client and server devices. First, the physical distance between these machines requires them to be networked. This raises several issues. How to manage latency? How to provide a good quality of service even when the network is down? How to choose on which side (client or server) to execute a computation? How to free developers from addressing these problems without hiding the distributed nature of web applications, so that they can still benefit from their advantages? Second, the execution environment differs between clients and servers. Indeed, on the client side the program is executed within a web browser, whose API provides means of reacting to user actions and updating the page. On the server side, the program is executed on a web server that processes HTTP requests. Some aspects of web applications can be shared between the client and server sides (e.g., content display, form validation, navigation, or even some business computations). However, since the APIs and environments differ between clients and servers, how to share the same code while keeping the same execution performance as with native APIs? How to keep the opportunity to leverage the specificities of a given platform? This work aims at shortening this distance while keeping the opportunity to leverage it, that is, while giving developers as much expressive power as possible.
Zhang, Tuo. "Vers une médiation de composition dynamique de Services Web dans des environnements ubiquitaires". Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132042/document.
Nowadays, high market pressure pushes service providers to be more competitive in order to attract more subscribers. The user-centric approach, which aims to provide services adapted to users' needs, is attracting great attention thanks to the emergence of ubiquitous environments. Interoperability, whether between users and services or among services, is favored by the adoption of SOA (Service Oriented Architecture) as a development model, as well as by Web services, which combine the advantages of this model with the languages and development technologies devoted to Internet-based applications. In particular, dynamic Web service composition is currently the main practice for achieving enhanced services, as an answer to increasingly complex user requests for various types of services, by combining the functionalities of multiple services within a single, personalized service session. Indeed, the services already available are numerous and of various natures, and similar services can be provided by heterogeneous platforms. In a ubiquitous environment, users are mobile, whether by changing the access network, changing the terminal, or both. This in turn leads to potential needs for mobility of services, both in terms of the (physical) server and in terms of the (equivalent) services. It is in this dynamic and ubiquitous context that we have conducted our research. In particular, we focused on the mediation of dynamic composition of Web services. We proposed a mediation approach that consists in identifying and organizing various concrete services (both SOAP and RESTful) to form a set of abstract services and, from this knowledge base, in giving users the possibility to realize personalized service sessions according to their needs, through dynamic composition of some of the abstract services and their mapping to the best-suited concrete services. We considered three types of service composition (SOAP/SOAP, SOAP/RESTful, RESTful/RESTful) in our mediation. Depending on the user's wishes, this composition (a mashup on the mediator's side) can be returned to them, so that they can invoke it autonomously; or the mediator can carry out the composed services and provide only the final result to the user. In the latter case, the mediator can handle the different kinds of mobility mentioned above. This feature is achieved by exploiting the virtual community mechanism to select the concrete service best matching the abstract service and to maintain continuity of service while respecting its requested QoS. The virtual community was developed within the ANR/UBIS project (to which part of this thesis is related).
Manabe, Tomohiro. "Web Search Based on Hierarchical Heading-Block Structure Analysis". 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/215681.
Lobbe, Quentin. "Archives, fragments Web et diasporas : pour une exploration désagrégée de corpus d'archives Web liées aux représentations en ligne des diasporas". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT017/document.
The Web is an unsteady environment. While new Web sites emerge every day, whole communities may fade away over time, leaving only sparse or incomplete traces on the living Web. Facing this phenomenon, several archiving initiatives try to preserve the memory of the Web. But today a mystery remains: while Web archives have never been so vast and numerous, why are they not already the subject of more historical research? In reality, Web archives should not be considered a faithful representation of the living Web. They are, in fact, direct traces of the archiving tools that tear them away from their original temporality. Thus, this thesis aims to give researchers the theoretical and technical means for greater manageability of Web archives, by defining a new unit of exploration: the Web fragment, a coherent and self-sufficient subset of a Web page. To that end, we follow the pioneering work of the e-Diasporas Atlas, which made it possible, in the 2000s, to map and archive thousands of migrant Web sites. It is through this main source of data, and the particular angle of the online representations of diasporas, that we explore the Web archives of the Atlas.
Othman, Abdallah Mohamad. "MELQART : un système d'exécution de mashups avec disponibilité de données". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM010/document.
This thesis presents MELQART: a mashup execution system that ensures data availability. A mashup is a Web application that combines data from heterogeneous providers (Web services). Data are aggregated to build a homogeneous result visualized by components named mashlets. Previous work has mainly focused on the definition of mashups and associated tools, and on their use and interaction with users. In this thesis, we focus on mashup data management, and more specifically on the availability of fresh mashup data. Improving data availability takes into account the dynamic aspect of mashup data. It ensures (1) access to the required data even if the provider is unavailable, (2) the freshness of these data, and (3) data sharing between mashups to avoid retrieving the same data multiple times. For this purpose, we have defined a formal mashup description model that allows the specification of data availability features. The mashup execution schema is defined according to this model, with functionalities that improve the availability and freshness of mashed-up data. These functionalities are orthogonal to the mashup execution process. The MELQART system implements our contribution and validates it by executing mashup instances under unpredictable situations where communication with data providers breaks.
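The availability features this abstract describes can be loosely illustrated by a shared, freshness-aware cache (a minimal sketch under assumed names and a simple TTL policy, not MELQART's actual architecture):

```python
# Sketch: a shared, freshness-aware cache for mashup data. Fresh entries are
# served directly and shared across mashups; on expiry the provider is called
# again; if the provider is unavailable, the stale copy keeps data available.
import time

class FreshCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, fetch_time)

    def get(self, key, fetch):
        entry = self.store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]  # fresh hit, shared by every mashup using this key
        try:
            value = fetch()  # call the data provider
            self.store[key] = (value, time.monotonic())
            return value
        except Exception:
            if entry:
                return entry[0]  # provider down: fall back to the stale copy
            raise

cache = FreshCache(ttl_seconds=60)
print(cache.get("weather", lambda: {"temp": 21}))
```

Serving a stale copy on provider failure trades some freshness for availability, which matches the thesis' goal of keeping mashups usable under broken communication with data providers.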