Dissertations / Theses on the topic 'Web page data extraction'

Consult the top 50 dissertations / theses for your research on the topic 'Web page data extraction.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Alves, Ricardo João de Freitas. "Declarative approach to data extraction of web pages." Master's thesis, Faculdade de Ciências e Tecnologia, 2009. http://hdl.handle.net/10362/5822.

Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfilment of the requirements for the degree of Master in Computer Science
In the last few years, we have been witnessing a noticeable Web evolution, with the introduction of significant improvements at the technological level, such as the emergence of XHTML, CSS, JavaScript, and Web 2.0, to name just a few. This, combined with other factors such as the physical expansion of the Web and its low cost, has been a great motivator for organizations and the general public to join, with a consequent growth in the number of users, thus increasing the volume of the largest global data repository. In consequence, there was an increasing need for regular data acquisition from the Web, which, because of its frequency, scale or complexity, is only viable through automatic extractors. However, two main difficulties are inherent to automatic extractors. First, much of the Web's information is presented in visual formats mainly directed at human reading. Second, dynamic webpages are assembled in local memory from different sources, so some pages do not have a single source file. Therefore, this thesis proposes a new and more modern extractor, capable of supporting the Web evolution, generic enough to be used in any situation, and capable of being extended and easily adapted to more particular uses. This project is an extension of an earlier one which was capable of extractions over semi-structured text files. It has evolved into a modular extraction system capable of extracting data from webpages and semi-structured text files, and of being expanded to support other data source types. It also contains a more complete and generic validation system and a new data delivery system capable of performing the earlier deliveries as well as new generic ones. A graphical editor was also developed to support the extraction system features and to allow a domain expert without computer knowledge to create extractions with only a few simple and intuitive interactions on the rendered webpage.
2

Cheng, Wang. "AMBER : a domain-aware template based system for data extraction." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:ff49d786-bfd8-4cd4-a69c-19e81cb95920.

Abstract:
The web is the greatest information source in human history, yet finding all offers for flats with gardens in London, Paris, and Berlin, or all restaurants open after a screening of the latest blockbuster, remain hard tasks, as that data is not easily amenable to processing. Extracting web data into databases for easier processing has been a resource-intensive process, requiring human supervision for every source from which to extract. This has been changing with approaches that replace human annotators with automated annotations. Such approaches have been successfully applied to restricted settings such as single-attribute extraction or domains with significant redundancy among sources. Multi-attribute objects are often presented on (i) result pages, where multiple objects appear on a single page as lists, tables or grids, with the most important attributes and a summary description, and (ii) detail pages, where each page provides a detailed list of attributes and a long description for a single entity, often in rich format. Both result and detail pages have their own advantages. Extracting objects from result pages is orders of magnitude faster than from detail pages, and the links to detail pages are often only accessible through result pages. Detail pages have a complete list of attributes and a full description of the entity. Early web data extraction approaches required manual annotations for each web site to reach high accuracy, while a number of domain-independent approaches focus only on unsupervised repeated-structure segmentation. The former are limited in scaling and automation, while the latter lack accuracy. Recent automated data extraction systems are often informed with an ontology and a set of object and attribute recognizers; however, they have focused on extracting simple objects with few attributes from single-entity pages and avoided result pages. We present AMBER, an automatic ontology-based multi-attribute object extraction system which deals with both result and detail pages, achieves very high accuracy (>96%) with zero site-specific supervision, and is able to solve practical issues that arise in real-life data extraction tasks. AMBER is also applied as an important component of DIADEM, the first automatic full-site extraction system able to extract structured data from different domains without site-specific supervision, which has been tested through a large-scale evaluation of more than 10,000 sites. On the result page side, AMBER achieves high accuracy through a novel domain-aware, path-based template discovery algorithm, and integrates annotations for all parts of the extraction, from identifying the primary list of objects, over segmenting the individual objects, to aligning the attributes. Yet AMBER is able to tolerate significant noise in the annotations, by combining these annotations with a novel algorithm for finding regular structures based on XPath expressions that capture regular tree structures. On the detail page side, AMBER seamlessly integrates boilerplate removal, dynamic list identification and page dissimilarity calculation to identify the data region, then employs a set of fairly simple and cheaply computable features for attribute extraction. Besides, AMBER is the first approach that combines result page extraction and detail page extraction by integrating attributes extracted from result pages with the attributes found on corresponding detail pages.
AMBER is able to identify attributes of objects with near-perfect accuracy and to extract dozens of attributes with accuracy above 96% across several domains, even in the presence of significant noise. It outperforms uninformed, automated approaches by a wide margin if given an ontology. Even in the absence of an ontology, AMBER outperforms most previous systems on record segmentation.
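The path-based template discovery idea can be illustrated with a minimal sketch (my illustration, not AMBER's actual algorithm; assumes Python with lxml): records on a result page tend to share the same root-to-node tag path, so the most frequent, sufficiently deep path over all element nodes approximates the record template.

    from collections import Counter
    from lxml import html

    def tag_path(node):
        # Build the root-to-node path of tag names, e.g. "html/body/div/ul/li".
        path = []
        while node is not None and isinstance(node.tag, str):
            path.append(node.tag)
            node = node.getparent()
        return "/".join(reversed(path))

    def discover_record_template(page_source):
        tree = html.fromstring(page_source)
        # Count how often each tag path occurs across all element nodes.
        counts = Counter(tag_path(n) for n in tree.iter() if isinstance(n.tag, str))
        # The most repeated path (deepest on ties) is a good record candidate.
        path, freq = max(counts.items(), key=lambda kv: (kv[1], kv[0].count("/")))
        return path, freq

On a realistic result page the winning path would look like html/body/div/ul/li, with one occurrence per listed object.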
3

Anderson, Neil David Alan. "Data extraction & semantic annotation from web query result pages." Thesis, Queen's University Belfast, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.705642.

Abstract:
Our unquenchable thirst for knowledge is one of the few things that really defines our humanity. Yet the Information Age, which we have created, has left us floating aimlessly in a vast ocean of unintelligible data. Hidden Web databases are one massive source of structured data. The contents of these databases are, however, often only accessible through a query posed by a user. The data returned in these Query Result Pages is intended for human consumption and, as such, has nothing more than an implicit semantic structure which can be understood visually by a human reader, but not by a computer. This thesis presents an investigation into the processes of extraction and semantic understanding of data from Query Result Pages. The work is multi-faceted and includes, at the outset, the development of a vision-based data extraction tool. This work is followed by the development of a number of algorithms which make use of machine learning-based techniques, first to align the extracted data into semantically similar groups and then to assign a meaningful label to each group. Part of the work undertaken in fulfilment of this thesis has also addressed the lack of large, modern datasets containing a wide range of result pages representative of those typically found online today. In particular, a new, innovative crowdsourced dataset is presented. Finally, the work concludes by examining techniques from the complementary research field of Information Extraction. An initial, critical assessment of how these mature techniques could be applied to this research area is provided.
4

Wu, Yongliang. "Aggregating product reviews for the Chinese market." Thesis, KTH, Kommunikationssystem, CoS, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91484.

Abstract:
As of December 2007, the number of Internet users in China had increased to 210 million people. The annual growth rate reached 53.3 percent in 2008, with the average number of Internet users increasing every day by 200,000 people. Currently, China's Internet population is slightly lower than the 215 million internet users in the United States. [1] Despite the rapid growth of the Chinese economy in the global Internet market, China's e-commerce is not following the traditional pattern of commerce, but instead has developed based on user demand. This growth has extended into every area of the Internet. In the west, expert product reviews have been shown to be an important element in a user's purchase decision. The higher the quality of product reviews that customers receive, the more products they buy from on-line shops. As the number of products and options increases, Chinese customers need impersonal, impartial, and detailed product reviews. This thesis focuses on on-line product reviews and how they affect Chinese customers' purchase decisions. E-commerce is a complex system. As a typical model of e-commerce, we examine a Business to Consumer (B2C) on-line retail site and consider a number of factors, including some seemingly subtle factors that may influence a customer's eventual decision to shop on a website. Specifically, this thesis project examines aggregated product reviews from different on-line sources by analyzing some existing western companies. Following this, the thesis demonstrates how to aggregate product reviews for an e-business website. During this thesis project we found that existing data mining techniques made it straightforward to collect reviews. These reviews were stored in a database, and web applications can query this database to provide a user with a set of relevant product reviews. One of the important issues, just as with search engines, is providing the relevant product reviews and determining in what order they should be presented. In our work we selected the reviews by matching the product (although in some cases there are ambiguities concerning whether two products are actually identical or not) and ordered the matching reviews by date, with the most recent reviews presented first. Some of the open questions that remain for the future are: (1) improving the matching, to avoid the ambiguity concerning whether the reviews are about the same product or not, and (2) determining whether the availability of product reviews actually affects a Chinese user's decision to purchase a product.
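A minimal sketch of the aggregation step described above, assuming a hypothetical Review record shape: matching is simplified to exact product-name equality (which the thesis notes is ambiguous in practice), and matched reviews are ordered by date with the most recent first.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Review:
        product: str
        source: str
        posted: date
        text: str

    def reviews_for(product_name, reviews):
        # Naive matching: exact product-name equality; real matching must
        # resolve whether two listings denote the same product.
        matched = [r for r in reviews if r.product == product_name]
        # Most recent reviews are presented first.
        return sorted(matched, key=lambda r: r.posted, reverse=True)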
5

Malchik, Alexander 1975. "An aggregator tool for extraction and collection of data from web pages." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86522.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 54-56).
by Alexander Malchik.
M.Eng.
6

Kolečkář, David. "Systém pro integraci webových datových zdrojů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417239.

Abstract:
The thesis aims at designing and implementing a web application to be used for the integration of web data sources. For data integration, a method using the domain model of the target information system was applied. The work describes the individual methods used for extracting information from web pages. The text describes the process of designing the system architecture, including a description of the chosen technologies and tools. The main part of the work is the implementation and testing of the final web application, which is written in Java and the Angular framework. The outcome of the work is a web application that allows its users to define web data sources and save data in the target database.
7

Mazal, Zdeněk. "Extrakce textových dat z internetových stránek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219347.

Abstract:
This work focuses on data and especially text mining from Web pages, giving an overview of programs for downloading text and of ways of extracting it. It also contains an overview of the programs most frequently used for extracting data from the internet. The output of this thesis is a Java program that can download text from a selection of servers and save it into an XML file.
8

Weng, Daiyue. "Extracting structured data from Web query result pages." Thesis, Queen's University Belfast, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709858.

Abstract:
A rapidly increasing number of Web databases have now become accessible only via their HTML form-based query interfaces. Comparing various services or products from a number of web sites in a specific domain is time-consuming and tedious. There is a demand for value-added Web applications that integrate data from multiple sources. To facilitate the development of such applications, we need to develop techniques for automating the process of providing integrated access to a multitude of database-driven Web sites, and for integrating data from their underlying databases. This presents three challenges, namely query form extraction, query form matching and translation, and Web query result extraction. In this thesis, I focus on Web query result extraction, which aims to extract structured data encoded in semi-structured HTML pages and return the extracted data in relational tables. I begin by reviewing the existing approaches for Web query result extraction. I categorize them based on their degree of automation, i.e. manual, semi-automatic and fully automatic approaches. For each category, every approach is described in terms of its technical features, followed by an analysis listing its advantages and limitations. The literature review leads to my proposed approaches, which address the Web data extraction problem, i.e. Web data record extraction, Web data alignment and Web data annotation. Each approach is presented in a chapter which includes the methodology, experiments and related work. The last chapter concludes the thesis.
9

Смілянець, Федір Андрійович. "Екстракція структурованої інформації з множини веб-сторінок." Master's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/39926.

Abstract:
Relevance of the research topic. The modern Web is a considerable source of data to be used in scientific and business applications. The ability to extract up-to-date data is frequently crucial for reaching necessary goals; however, modern quality solutions to this problem, which use computer vision and other technologies, may be financially demanding to acquire or develop, so solutions that are simple and cheap to develop, maintain and use are necessary. The purpose of the study is to create a software instrument for the extraction of structured data from news websites, for use in news trustworthiness classification. The following tasks were outlined and implemented to achieve this goal: outline existing approaches and analogues in the areas of data extraction and news classification; design and develop extraction, preparation and classification algorithms; compare the results achieved by the developed extraction algorithm with an existing software solution, including comparing machine learning accuracies on the data from both extractors. The object of the study is the process of text data extraction with subsequent machine learning analysis. The subjects of the study are methods and tools for the extraction and analysis of structured text data. Scientific novelty of the obtained results. A simple greedy algorithm was created, combining the processes of link discovery and data extraction, and the expediency of using simple web data extraction algorithms for composing machine learning datasets was demonstrated. It was also shown that classical machine learning algorithms can achieve results comparable to neural networks such as LSTMs, and that such models can function efficiently on a bilingual dataset. Publications. Materials related to this study were published in the Fifth All-Ukrainian Scientific and Practical Conference of Young Scientists and Students "Information Systems and Management Technologies" (ISTU-2020): "News trustworthiness classification with machine learning".
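A minimal sketch of such a greedy crawl-and-extract loop (my reconstruction, not the thesis code; assumes requests and lxml, and the h1/article selectors are guesses):

    import requests
    from lxml import html

    def greedy_crawl(seed_url, max_pages=100):
        # A single greedy loop combines link discovery and data extraction:
        # every fetched page is mined both for text and for further links.
        queue, seen, articles = [seed_url], {seed_url}, []
        while queue and len(articles) < max_pages:
            url = queue.pop(0)
            tree = html.fromstring(requests.get(url, timeout=10).text)
            tree.make_links_absolute(url)
            # Extraction step: headline plus paragraph text.
            title = " ".join(tree.xpath("//h1[1]//text()")).strip()
            body = " ".join(t.strip() for t in tree.xpath("//article//p//text()"))
            if title and body:
                articles.append({"url": url, "title": title, "text": body})
            # Discovery step: enqueue unseen same-site links.
            for link in tree.xpath("//a/@href"):
                if link.startswith(seed_url) and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return articles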
10

Hou, Jingyu. "Discovering web page communities for web-based data management." University of Southern Queensland, Faculty of Sciences, 2002. http://eprints.usq.edu.au/archive/00001447/.

Abstract:
The World Wide Web is a rich source of information and continues to expand in size and complexity. Mainly because data on the web lacks rigid and uniform data models or schemas, managing web data and retrieving information effectively and efficiently is becoming a challenging problem. Discovering web page communities, which capture the features of the web and web-based data to find intrinsic relationships among the data, is one of the effective ways to solve this problem. A web page community is a set of web pages that has its own logical and semantic structures. In this work, we concentrate on web data in web page format and exploit hyperlink information to discover (construct) web page communities. Three main kinds of web page community are studied in this work: the first consists of hub and authority pages, the second is composed of relevant web pages with respect to a given page (URL), and the last is a community with hierarchical cluster structures. For analysing hyperlinks, we establish a mathematical framework, especially a matrix-based framework, to model hyperlinks. Within this mathematical framework, hyperlink analysis is placed on a solid mathematical base and the results are reliable. For the web page community consisting of hub and authority pages, we focus on eliminating noise pages from the concerned page source to obtain a better quality page source, and in turn improve the quality of web page communities. We propose an innovative noise page elimination algorithm based on the hyperlink matrix model and matrix operations, especially the singular value decomposition (SVD) of a matrix. The proposed algorithm exploits hyperlink information among the web pages, reveals page relationships at a deeper level, and numerically defines thresholds for noise page elimination. The experimental results show the effectiveness and feasibility of the algorithm. This algorithm could also be used on its own in web-based data management systems to filter unnecessary web pages and reduce the management cost. In order to construct a web page community consisting of relevant pages with respect to a given page (URL), we propose two hyperlink-based relevant page finding algorithms. The first algorithm comes from the extended co-citation analysis of web pages. It is intuitive and easy to implement. The second one takes advantage of linear algebra theories to reveal deeper relationships among the web pages and to identify relevant pages more precisely and effectively. The corresponding page source construction for these two algorithms prevents the results from being affected by malicious hyperlinks on the web. The experimental results show the feasibility and effectiveness of the algorithms. The research results could be used to enhance web search by caching the relevant pages for certain searched pages. For the purpose of clustering web pages to construct a community with hierarchical cluster structures, we propose an innovative web page similarity measurement that incorporates hyperlink transitivity and page importance (weight). Based on this similarity measurement, two types of hierarchical web page clustering algorithms are proposed. The first is an improvement of the conventional K-means algorithms. It is effective in improving page clustering, but is sensitive to the predefined similarity thresholds for clustering. The other type is the matrix-based hierarchical algorithm. Two algorithms of this type are proposed in this work:
one takes cluster overlapping into consideration, the other does not. The matrix-based algorithms do not require predefined similarity thresholds for clustering, are independent of the order in which the pages are presented, and produce stable clustering results. The matrix-based algorithms exploit intrinsic relationships among web pages within a uniform matrix framework, avoid much influence of human interference in the clustering procedure, and are easy to implement for applications. The experiments show the effectiveness of the new similarity measurement and the proposed algorithms in improving web page clustering. For applying the above mathematical algorithms better in practice, we generalize web page discovery as a special case of information retrieval and present a visualization system prototype, as well as technical details on visualization algorithm design, to support information retrieval based on linear algebra. The visualization algorithms could be smoothly applied to web applications. XML is a new standard for data representation and exchange on the Internet. In order to extend our research to cover this important web data, we propose an object representation model (ORM) for XML data. A set of transformation rules and algorithms is established to transform XML data (DTDs and XML documents with or without a DTD) into this model. This model encapsulates elements of XML data and data manipulation methods. A DTD-Tree is also defined to describe the logical structure of a DTD. It can also be used as an application program interface (API) for processing DTDs, such as transforming a DTD document into the ORM. With this data model, the semantic meanings of the tags (elements) in XML data can be used for further research in XML data management and information retrieval, such as community construction for XML data.
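As a minimal illustration of the matrix-based framework (a simplified stand-in for the thesis's SVD-based noise page elimination, with numpy; the threshold is an arbitrary placeholder, not the thesis's numerically defined one): the hyperlink structure is modelled as an adjacency matrix whose leading singular vectors expose the dominant hub/authority structure, and pages projecting weakly onto it are flagged as noise candidates.

    import numpy as np

    def noise_pages(adjacency, k=2, threshold=0.1):
        # adjacency[i, j] = 1 if page i links to page j.
        u, s, vt = np.linalg.svd(adjacency.astype(float))
        # Hub scores live in the leading left singular vectors,
        # authority scores in the leading right singular vectors.
        hub_strength = np.linalg.norm(u[:, :k] * s[:k], axis=1)
        auth_strength = np.linalg.norm(vt[:k, :].T * s[:k], axis=1)
        combined = np.maximum(hub_strength, auth_strength)
        # Pages projecting weakly onto the dominant structure are noise candidates.
        return np.where(combined < threshold * combined.max())[0]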
11

Khalil, Faten. "Combining web data mining techniques for web page access prediction." University of Southern Queensland, Faculty of Sciences, 2008. http://eprints.usq.edu.au/archive/00004341/.

Abstract:
Web page access prediction gained its importance from the ever-increasing number of e-commerce Web information systems and e-businesses. Web page prediction, which involves personalising Web users' browsing experiences, assists Web masters in improving the Web site structure and helps Web users navigate the site and access the information they need. The most widely used approach for this purpose is the pattern discovery process of Web usage mining, which entails many techniques like Markov models, association rules and clustering. Implementing such pattern discovery techniques helps predict the next page to be accessed by the Web user based on the user's previous browsing patterns. However, each of the aforementioned techniques has its own limitations, especially when it comes to accuracy and space complexity. This dissertation achieves better accuracy as well as less state space complexity and fewer generated rules by performing the following combinations. First, we combine a low-order Markov model and association rules. Markov model analysis is performed on the data sets. If the Markov model prediction results in a tie or no state, association rules are used for prediction. The outcome of this integration is better accuracy, less Markov model state space complexity and fewer generated rules than using each of the methods individually. Second, we integrate a low-order Markov model and clustering. The data sets are clustered and Markov model analysis is performed on each cluster instead of the whole data set. The outcome of the integration is better accuracy than the first combination, with less state space complexity than a higher-order Markov model. The last integration model involves combining all three techniques together: clustering, association rules and a low-order Markov model. The data sets are clustered and Markov model analysis is performed on each cluster. If the Markov model prediction results in close accuracies for the same item, association rules are used for prediction. This integration model achieves better Web page access prediction accuracy, less Markov model state space complexity and fewer generated rules than the previous two models.
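A minimal sketch of the first combination, first-order Markov prediction with an association-rule fallback on ties or unseen states (toy data structures, not the dissertation's implementation):

    from collections import Counter, defaultdict

    def train_markov(sessions):
        # sessions: lists of page ids in visit order.
        transitions = defaultdict(Counter)
        for s in sessions:
            for cur, nxt in zip(s, s[1:]):
                transitions[cur][nxt] += 1
        return transitions

    def predict(current_page, recent_pages, transitions, assoc_rules):
        counts = transitions.get(current_page, Counter())
        best = counts.most_common(2)
        # Fall back to association rules on an unseen state or a tie.
        if not best or (len(best) == 2 and best[0][1] == best[1][1]):
            for antecedent, consequent in assoc_rules:
                if antecedent.issubset(set(recent_pages)):
                    return consequent
            return None
        return best[0][0]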
12

Guo, Jinsong. "Reducing human effort in web data extraction." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:04bd39dd-bfec-4c07-91db-980fcbc745ba.

Abstract:
The human effort in large-scale web data extraction significantly affects both the extraction flexibility and the economic cost. Our work aims to reduce the human effort required by web data extraction tasks in three specific scenarios. (I) The data demand is unclear, and the user has to guide the wrapper induction by annotations. To maximally save human effort in the annotation process, wrappers should be robust, i.e., immune to changes in the webpage, to avoid wrapper re-generation, which requires re-annotation. Existing approaches primarily aim at generating accurate wrappers but barely generate robust wrappers. We prove that the XPath wrapper induction problem is NP-hard, and propose an approximate solution estimating a set of top-k robust wrappers in polynomial time. Our method also meets one additional requirement: the induction process should be noise resistant, i.e., tolerate slightly erroneous examples. (II) The data demand is clear, and the user's guidance should be avoided, i.e., the wrapper generation should be fully unsupervised. Existing unsupervised methods relying purely on the repeated patterns of HTML structures or visual information are far from practical. Partially supervised methods, such as the state-of-the-art system DIADEM, can work well for tasks involving only a small number of domains. However, the human effort in the annotator preparation process becomes a heavier burden as the number of domains increases. We propose a new approach, called RED (abbreviation for 'redundancy'), an automatic approach exploiting content redundancy between the result page and its corresponding detail pages. RED requires no annotation (and thus no human effort), and its wrapper accuracy is significantly higher than that of previous unsupervised methods. (III) The data quality is unknown, and the user's related decisions are blind. Without knowing the error types and the number of errors of each type in the extracted data, the extraction effort could be wasted on useless websites and, even worse, human effort could be wasted on an unnecessary or wrongly targeted data cleaning process. Despite the importance of error estimation, no methods have addressed it sufficiently. We focus on two types of common errors in web data, namely duplicates and violations of integrity constraints. We propose a series of error estimation approaches by adapting, extending, and synthesizing some recent innovations in diverse areas such as active learning, classifier calibration, F-measure estimation, and interactive training.
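The content-redundancy idea behind RED can be sketched minimally (my illustration, not the thesis code; assumes lxml): values that a candidate record on the result page shares with its linked detail page are likely true attributes of the object, while navigation text and ads rarely repeat.

    from lxml import html

    def redundant_values(record_node, detail_page_source):
        # Text fragments of one result-page record.
        fragments = {t.strip() for t in record_node.itertext() if t.strip()}
        detail_text = html.fromstring(detail_page_source).text_content()
        # Attribute values repeated on the corresponding detail page are
        # strong extraction candidates.
        return {f for f in fragments if f in detail_text}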
13

Ouahid, Hicham. "Data extraction from the Web using XML." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9260.

Abstract:
This thesis presents a mechanism based on the eXtensible Markup Language (XML) to extract data from HTML-based Web pages and populate relational databases. This task is performed by a system called the XML-based Web Agent (XWA). The data extraction is done in three phases. First, the Web pages are converted to well-formed XML documents to facilitate their processing. Second, the data is extracted from the well-formed XML documents and formatted into valid XML documents. Finally, the valid XML documents are mapped into tables to be stored in a relational database. To extract specific data from the Web, the XWA requires information about the Web pages from which to extract the data, the location of the data within the Web pages, and how the extracted data should be formatted. This information is stored in Web Site Ontologies, which are built using a language called the Web Ontology Description Language (WONDEL). WONDEL is based on XML and the XML Pointer Language. It has been defined as part of this work to allow users to specify the data they want and let the XWA work offline to extract it and store it in a database. This has the advantage of saving users the time spent waiting for Web pages to download, and of benefiting from the powerful query mechanisms offered by database management systems.
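A minimal sketch of the first two phases (my illustration, assuming lxml; the row and field XPaths are hypothetical, and phase three would insert the returned dicts into relational tables):

    from lxml import etree, html

    def html_to_xml(page_source):
        # Phase 1: parse forgiving HTML and serialize as well-formed XML.
        return etree.tostring(html.fromstring(page_source), method="xml")

    def extract_rows(xml_bytes, row_xpath, field_xpaths):
        # Phase 2: pull records out of the well-formed document.
        root = etree.fromstring(xml_bytes)
        rows = []
        for node in root.xpath(row_xpath):
            rows.append({name: node.xpath(f"string({xp})").strip()
                         for name, xp in field_xpaths.items()})
        return rows

    # Hypothetical usage:
    # extract_rows(html_to_xml(src), "//table[@id='cars']//tr[td]",
    #              {"make": "td[1]", "price": "td[2]"})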
14

Gottlieb, Matthew. "Understanding malware autostart techniques with web data extraction /." Online version of thesis, 2009. http://hdl.handle.net/1850/10632.

15

Lalithsena, Sarasi. "Domain-specific Knowledge Extraction from the Web of Data." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527202092744638.

16

Popescu, Ana-Maria. "Information extraction from unstructured web text /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6935.

17

Chartrand, Timothy Adam. "Ontology-Based Extraction of RDF Data from the World Wide Web." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/56.

Abstract:
The simplicity and proliferation of the World Wide Web (WWW) has taken the availability of information to an unprecedented level. The next generation of the Web, the Semantic Web, seeks to make information more usable by machines by introducing a more rigorous structure based on ontologies. One hindrance to the Semantic Web is the lack of existing semantically marked-up data. Until there is a critical mass of Semantic Web data, few people will develop and use Semantic Web applications. This project helps promote the Semantic Web by providing content. We apply existing information-extraction techniques, in particular the BYU ontology-based data-extraction system, to extract information from the WWW based on a Semantic Web ontology, producing Semantic Web data with respect to that ontology. As an example of how the generated Semantic Web data can be used, we provide an application to browse the extracted data and the source documents together. In this sense, the extracted data is superimposed over, or is an index over, the source documents. Our experiments with ontologies in four application domains show that our approach can indeed extract Semantic Web data from the WWW with precision and recall similar to that achieved by the underlying information extraction system, and make that data accessible to Semantic Web applications.
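As a minimal sketch of producing Semantic Web data from extracted records (assuming rdflib; the namespace, property names, and record shape with 'url' and 'fields' keys are invented for illustration, not BYU's ontology vocabulary):

    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/ontology#")  # hypothetical ontology

    def records_to_rdf(records):
        g = Graph()
        g.bind("ex", EX)
        for i, rec in enumerate(records):
            subject = URIRef(f"http://example.org/item/{i}")
            g.add((subject, EX.sourcePage, Literal(rec["url"])))
            # Each extracted field becomes a triple under an ontology property.
            for field, value in rec["fields"].items():
                g.add((subject, EX[field], Literal(value)))
        return g.serialize(format="turtle")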
18

Zhou, Yuanqiu. "Generating Data-Extraction Ontologies By Example." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd1115.pdf.

19

Ortona, Stefano. "Easing information extraction on the web through automated rules discovery." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a5a7a070-338a-4afc-8be5-a38b486cf526.

Abstract:
The advent of the era of big data on the Web has made automatic web information extraction an essential tool in data acquisition processes. Unfortunately, automated solutions are in most cases more error-prone than those created by humans, resulting in dirty and erroneous data. Automatic repair and cleaning of the extracted data is thus a necessary complement to information extraction on the Web. This thesis investigates the problem of inducing cleaning rules on web-extracted data in order to (i) repair and align the data w.r.t. an original target schema, and (ii) produce repairs that are as generic as possible, such that different instances can benefit from them. The problem is addressed from three different angles: replace cross-site redundancy with an ensemble of entity recognisers; produce general repairs that can be encoded in the extraction process; and exploit entity-wide relations to infer common knowledge on extracted data. First, we present ROSeAnn, an unsupervised approach to integrate semantic annotators and produce a unified and consistent annotation layer on top of them. Both the diversity in vocabulary and the widely varying accuracy justify the need for middleware that reconciles different annotator opinions. Considering annotators as "black boxes" that do not require per-domain supervision allows us to recognise semantically related content in web-extracted data in a scalable way. Second, we show in WADaR how annotators can be used to discover rules to repair web-extracted data. We study the problem of computing joint repairs for web data extraction programs and their extracted data, providing an approximate solution that requires no per-source supervision and proves effective across a wide variety of domains and sources. The proposed solution is effective not only in repairing the extracted data, but also in encoding such repairs in the original extraction process. Third, we investigate how relationships among entities can be exploited to discover inconsistencies and additional information. We present RuDiK, a disk-based, scalable solution to discover first-order logic rules over RDF knowledge bases built from web sources. We present an approach that does not limit its search space to rules that rely on "positive" relationships between entities, as is the case with traditional mining of constraints. On the contrary, it extends the search space to also discover negative rules, i.e., patterns that lead to contradictions in the data.
20

Xhemali, Daniela. "Automated retrieval and extraction of training course information from unstructured web pages." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/7022.

Abstract:
Web Information Extraction (WIE) is the discipline dealing with the discovery, processing and extraction of specific pieces of information from semi-structured or unstructured web pages. The World Wide Web comprises billions of web pages, and there is much need for systems that will locate, extract and integrate the acquired knowledge into organisations' practices. There are some commercial, automated web extraction software packages; however, their success comes from heavily involving their users in the process of finding the relevant web pages, preparing the system to recognise items of interest on these pages, and manually dealing with the evaluation and storage of the extracted results. This research has explored WIE, specifically with regard to the automation of the extraction and validation of online training information. The work also includes research and development in the area of automated Web Information Retrieval (WIR), more specifically in Web Searching (or Crawling) and Web Classification. Different technologies were considered; however, after much consideration, Naïve Bayes Networks were chosen as the most suitable for the development of the classification system. The extraction part of the system used Genetic Programming (GP) for the generation of web extraction solutions. Specifically, GP was used to evolve Regular Expressions, which were then used to extract specific training course information from the web, such as course names, prices, dates and locations. The experimental results indicate that all three aspects of this research perform very well, with the Web Crawler outperforming existing crawling systems, the Web Classifier performing with an accuracy of over 95% and a precision of over 98%, and the Web Extractor achieving an accuracy of over 94% for the extraction of course titles and an accuracy of just under 67% for the extraction of other course attributes such as dates, prices and locations. Furthermore, the overall work is of great significance to the sponsoring company, as it simplifies and improves the existing time-consuming, labour-intensive and error-prone manual techniques, as will be discussed in this thesis. The prototype developed in this research works in the background and requires very little, often no, human assistance.
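A minimal sketch of how a GP individual, an evolved regular expression, might be scored against labeled pages (the candidate pattern and the exact-match fitness are my invention, not the thesis's fitness function; it assumes each individual captures the target value in group 1):

    import re

    def regex_fitness(pattern, examples):
        # examples: (page_text, expected_value) pairs, e.g. course prices.
        try:
            compiled = re.compile(pattern)
        except re.error:
            return 0.0  # syntactically invalid individuals score zero
        hits = sum(
            1 for text, expected in examples
            if (m := compiled.search(text)) and m.group(1) == expected
        )
        return hits / len(examples)

    # One hypothetical individual from the population:
    candidate = r"(?:Price|Fee)[:\s]*(£\s?\d+(?:\.\d{2})?)"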
21

Chartrand, Tim. "Ontology-based extraction of RDF data from the World Wide Web /." Diss., CLICK HERE for online access, 2003. http://contentdm.lib.byu.edu/ETD/image/etd168.pdf.

22

Wei, Chenjie. "Using Automated Extraction of the Page Component Hierarchy to Customize and Adapt Web Pages to Mobile Devices." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338348757.

23

Palma, Michael, and Shidi Zhou. "A Web Scraper For Forums : Navigation and text extraction methods." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219903.

Abstract:
Web forums are a popular way of exchanging information and discussing various topics. These websites usually have a special structure, divided into boards, threads and posts. Although the structure might be consistent across forums, the layout of each forum is different. The way a web forum presents user posts is also very different from how a news website presents a single piece of information. All of this makes navigation and text extraction a hard task for web scrapers. The focus of this thesis is the development of a web scraper specialized in forums. Three different methods for text extraction are implemented and tested before choosing the most appropriate method for the task. The methods are Word Count, Text-Detection Framework and Text-to-Tag Ratio. The handling of duplicate links is also considered and solved by implementing a multi-layer bloom filter. The thesis is conducted applying a qualitative methodology. The results indicate that the Text-to-Tag Ratio has the best overall performance and gives the most desirable result on web forums. Thus, this was the method selected for the final version of the web scraper.
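A minimal sketch of the Text-to-Tag Ratio idea (per-line ratio of visible characters to tag count; the smoothing and clustering steps from the literature, and any thesis-specific tuning, are omitted):

    import re

    def text_to_tag_ratios(html_lines):
        ratios = []
        for line in html_lines:
            tags = len(re.findall(r"<[^>]+>", line))
            text = len(re.sub(r"<[^>]+>", "", line).strip())
            # Content-rich lines have much text and few tags, so a high ratio;
            # navigation and boilerplate lines score low.
            ratios.append(text / max(tags, 1))
        return ratios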
24

Gerber, Daniel. "Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208759.

Abstract:
The Data Web has undergone a tremendous growth period. It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as life sciences, government or geography, with over 89 billion facts. In the same way, the Document Web has grown to the state where approximately 4.55 billion websites exist, 300 million photos are uploaded on Facebook and 3.5 billion Google searches are performed on average every day. However, there is a gap between the Document Web and the Data Web, since, for example, knowledge bases available on the Data Web are most commonly extracted from structured or semi-structured sources, while the majority of information available on the Web is contained in unstructured sources such as news articles, blog posts, photos, forum discussions, etc. As a result, data on the Data Web not only misses a significant fragment of information but also suffers from a lack of actuality, since typical extraction methods are time-consuming and can only be carried out periodically. Furthermore, provenance information is rarely taken into consideration and therefore gets lost in the transformation process. In addition, users are accustomed to entering keyword queries to satisfy their information needs. With the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers. In this thesis, we address the problem of Relation Extraction, one of the key challenges pertaining to closing the gap between the Document Web and the Data Web, by four means. First, we present a distant supervision approach that allows finding multilingual natural language representations of formal relations already contained in the Data Web. We use these natural language representations to find sentences on the Document Web that contain unseen instances of this relation between two entities. Second, we address the problem of data actuality by presenting a real-time data stream RDF extraction framework and utilize this framework to extract RDF from RSS news feeds. Third, we present a novel fact validation algorithm, based on natural language representations, able not only to verify or falsify a given triple, but also to find trustworthy sources for it on the Web and to estimate a time scope in which the triple holds true. The features used by this algorithm to determine whether a website is indeed trustworthy are used as provenance information and thereby help to create metadata for facts in the Data Web. Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to make use of the large amounts of data available on the Data Web to satisfy their information needs.
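A minimal sketch of the distant-supervision step (my illustration: a sentence mentioning both entities of a known Data Web fact yields the intervening text as a candidate natural-language pattern for that relation; the example sentence is invented):

    def harvest_pattern(sentence, subject_label, object_label):
        s, o = sentence.find(subject_label), sentence.find(object_label)
        if s == -1 or o == -1 or s == o:
            return None
        # Text between the two entity mentions becomes the candidate pattern.
        if s < o:
            inner = sentence[s + len(subject_label):o].strip()
            return f"?subject {inner} ?object" if inner else None
        inner = sentence[o + len(object_label):s].strip()
        return f"?object {inner} ?subject" if inner else None

    # Invented sentence for a birth-place relation:
    harvest_pattern("Ada Lovelace was born in London.", "Ada Lovelace", "London")
    # -> '?subject was born in ?object'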
25

Usbeck, Ricardo. "Knowledge Extraction for Hybrid Question Answering." Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-225097.

Abstract:
Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and still grows. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information, such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge base. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results; hence, most Web users expect only Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural language queries with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to, and the combination of, knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work. First, bridging the Semantic Gap between the unstructured Document Web and the Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. Data preparation being such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways. First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability to overcome the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentral architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as depicted by current mobile applications, requires systems to deeply understand the underlying user information need. In conclusion, a natural language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA based on combining structured RDF and unstructured full-text data sources.
26

Zhao, Hongkun. "Automatic wrapper generation for the extraction of search result records from search engines." Diss., Online access via UMI, 2007.

27

Logofatu, Cristina. "Improving communication in a transportation company by using a Web page." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2520.

Abstract:
The Internet has become a very powerful tool for improving communication, making it easier, more convenient, and faster to access or exchange information. This project takes advantage of the strengths the Internet provides by improving communication through the development of a web site for a transportation company.
28

Sellers, Andrew. "OXPath : a scalable, memory-efficient formalism for data extraction from modern web applications." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555325.

Abstract:
The evolution of the web has outpaced itself: the growing wealth of information and the increasing sophistication of interfaces necessitate automated processing. Web automation and extraction technologies have been overwhelmed by this very growth. To address this trend, we identify four key requirements of web extraction: (1) interact with sophisticated web application interfaces, (2) precisely capture the relevant data for most web extraction tasks, (3) scale with the number of visited pages, and (4) readily embed into existing web technologies. This dissertation discusses OXPath, an extension of XPath for interacting with web applications and for extracting information thus revealed. It addresses all the above requirements. OXPath's page-at-a-time evaluation guarantees memory use independent of the number of visited pages, yet remains polynomial in time. We validate experimentally the theoretical complexity and demonstrate that its evaluation is dominated by technical aspects such as the page rendering of the underlying browser. We also present OXPath host languages, including OxLatin. OxLatin extends the well-known Pig Latin language and can run on a standard Hadoop cluster. The OxLatin language facilitates distributed expression evaluation in a cloud computing paradigm, providing support for common web extraction scenarios that include expression composition, aggregation, and integration. OxLatin adds support for continuations within its programs, which increases its efficiency by eliminating unneeded page fetches. Our experiments confirm the scalability of OXPath and OxLatin. We further show that OXPath outperforms existing commercial and academic data extraction tools by a wide margin. OXPath is available under an open source license. We also discuss applications and ongoing tool development that establish OXPath as a data extraction tool that advances the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
29

Abdulrahman, Ruqayya. "Multi agent system for web database processing, on data extraction from online social networks." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5502.

Full text
Abstract:
In recent years, there has been a flood of continuously changing information from a variety of web resources such as web databases, web sites, web services and programs. Online Social Networks (OSNs) represent such a field, where huge amounts of information are being posted online over time. Because OSNs offer a productive source of qualitative and quantitative personal information, researchers from various disciplines contribute to developing methods for extracting data from OSNs. However, there is limited research which addresses extracting data automatically. To the best of the author's knowledge, there is no research which focuses on tracking the real-time changes of information retrieved from OSN profiles over time, and this motivated the present work.

This thesis presents different approaches for automated Data Extraction (DE) from OSNs: crawler, parser, Multi Agent System (MAS) and Application Programming Interface (API). Initially, a parser was implemented as a centralized system to traverse the OSN graph and extract each profile's attributes and list of friends from Myspace, the top OSN at that time, by parsing the Myspace profiles and extracting the relevant tokens from the parsed HTML source files. A Breadth First Search (BFS) algorithm was used to travel across the generated OSN friendship graph in order to select the next profile for parsing. The approach was implemented and tested on two types of friends: top friends and all friends. In the case of top friends, 500 seed profiles were visited; 298 public profiles were parsed to get 2197 top friends' profiles and 2747 friendship edges, while in the case of all friends, 250 public profiles were parsed to extract 10,196 friends' profiles and 17,223 friendship edges.

This approach has two main limitations. First, the system is designed as a centralized system that controls and retrieves the information of each user's profile just once. This means that the extraction process will stop if the system fails to process one of the profiles, either the seed profile (the first profile to be crawled) or one of its friends. To overcome this problem, an Online Social Network Retrieval System (OSNRS) is proposed to decentralize the DE process from OSNs through the use of MAS. The novelty of OSNRS is its ability to monitor profiles continuously over time. The second challenge is that the parser had to be modified to cope with changes in the profiles' structure. To overcome this problem, the proposed OSNRS is improved through the use of an API tool, enabling OSNRS agents to obtain the required fields of an OSN profile despite modifications in the representation of the profile's source web pages.

The experimental work shows that using API and MAS simplifies and speeds up the process of tracking a profile's history. It also helps security personnel, parents, guardians, social workers and marketers in understanding the dynamic behaviour of OSN users. This thesis proposes solutions for web database processing on data extraction from OSNs by the use of parser and MAS, and discusses the limitations and improvements.
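The BFS traversal described above can be sketched in a few lines of Python (a simplified illustration; fetch_profile is a hypothetical helper returning a profile's attributes and friend list, or None for private profiles):

from collections import deque

def bfs_crawl(seed_id, fetch_profile, max_profiles=500):
    """Breadth-first traversal of a friendship graph from a seed profile."""
    visited, queue = set(), deque([seed_id])
    profiles, edges = {}, []
    while queue and len(profiles) < max_profiles:
        pid = queue.popleft()
        if pid in visited:
            continue
        visited.add(pid)
        profile = fetch_profile(pid)          # hypothetical: None for private profiles
        if profile is None:
            continue
        profiles[pid] = profile["attributes"]
        for friend in profile["friends"]:
            edges.append((pid, friend))       # record a friendship edge
            if friend not in visited:
                queue.append(friend)          # enqueue for later parsing
    return profiles, edges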
APA, Harvard, Vancouver, ISO, and other styles
30

Cooper, Erica L. "Automatic repair and recovery for Omnibase : robust extraction of data from diverse Web sources." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61157.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 31).
In order to make the best use of the multitude of diverse, semi-structured sources of data available on the internet, information retrieval systems need to reliably access the data on these different sites in a manner that is robust to changes in format or structure that these sites might undergo. An interface that gives a system uniform, programmatic access to the data on some web site is called a web wrapper, and the process of inferring a wrapper for a given website based on a few examples of its pages is known as wrapper induction. A challenge of using wrappers for online information extraction arises from the dynamic nature of the web: even the slightest of changes to the format of a web page may be enough to invalidate a wrapper. Thus, it is important to be able to detect when a wrapper no longer extracts the correct information, and also for the system to be able to recover from this type of failure. This thesis demonstrates improved error detection as well as methods of recovery and repair for broken wrappers for START, a natural-language question-answering system developed by Infolab at MIT.
by Erica L. Cooper.
M.Eng.
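A toy example of the failure-detection idea (a sketch under simplified assumptions, not Omnibase's actual mechanism; the selector and validation rule are made up): apply a stored extraction rule and sanity-check its output, so that a format change on the page surfaces as a detectable wrapper failure.

import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def run_wrapper(html, selector="span.price"):
    """Apply a stored wrapper rule (here, a CSS selector) to a page."""
    node = BeautifulSoup(html, "html.parser").select_one(selector)
    return node.get_text(strip=True) if node else None

def wrapper_ok(value):
    """Sanity-check the extracted value; failure suggests the page format changed."""
    return value is not None and re.fullmatch(r"\$\d+(\.\d{2})?", value)

html = '<div><span class="price">$19.99</span></div>'
value = run_wrapper(html)
if not wrapper_ok(value):
    print("wrapper broken; trigger repair/recovery")
else:
    print("extracted:", value)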
APA, Harvard, Vancouver, ISO, and other styles
31

Gollapally, Devender R. "Multi-Agent Architecture for Internet Information Extraction and Visualization." Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2575/.

Full text
Abstract:
The World Wide Web is one of the largest sources of information; more and more applications are being developed daily to make use of this information. This thesis presents a multi-agent architecture that deals with some of the issues related to Internet data extraction. The primary issue is the reliable, efficient and quick extraction of data through the use of HTTP performance monitoring agents. A second issue focuses on how to make use of available data to make decisions and alert the user when the data changes; this is done with the help of user agents that are equipped with a defeasible reasoning interpreter. An additional issue is the visualization of extracted data; this is done with the aid of VRML visualization agents. These issues are discussed using stock portfolio management as an example application.
APA, Harvard, Vancouver, ISO, and other styles
32

Sernadela, Pedro Miguel Lopes. "Data integration services for biomedical applications." Doctoral thesis, Universidade de Aveiro, 2018. http://hdl.handle.net/10773/23511.

Full text
Abstract:
Doctorate in Informatics (MAP-i)
In the last decades, the field of biomedical science has fostered unprecedented scientific advances. Research is stimulated by the constant evolution of information technology, delivering novel and diverse bioinformatics tools. Nevertheless, the proliferation of new and disconnected solutions has resulted in massive amounts of resources spread over heterogeneous and distributed platforms. Distinct data types and formats are generated and stored in miscellaneous repositories posing data interoperability challenges and delays in discoveries. Data sharing and integrated access to these resources are key features for successful knowledge extraction. In this context, this thesis makes contributions towards accelerating the semantic integration, linkage and reuse of biomedical resources. The first contribution addresses the connection of distributed and heterogeneous registries. The proposed methodology creates a holistic view over the different registries, supporting semantic data representation, integrated access and querying. The second contribution addresses the integration of heterogeneous information across scientific research, aiming to enable adequate data-sharing services. The third contribution presents a modular architecture to support the extraction and integration of textual information, enabling the full exploitation of curated data. The last contribution lies in providing a platform to accelerate the deployment of enhanced semantic information systems. All the proposed solutions were deployed and validated in the scope of rare diseases.
APA, Harvard, Vancouver, ISO, and other styles
33

Le, Grand Bénédicte. "Extraction d'information et visualisation de systèmes complexes sémantiquement structurés." Paris 6, 2001. http://www.theses.fr/2001PA066508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hansson, Andreas. "Relational Database Web Application : Web administration interface for visualizing and predicting relationships to manage relational databases." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25879.

Full text
Abstract:
There is a need for storing and keeping track of things. As the amount of information increases, so does the demand for suitable applications that can manage the data. This thesis has focused on developing a web administration interface for relational databases, with emphasis on managing and visualizing the data, where relationships between data within the database can be predicted through an algorithm. During the thesis work, it was revealed that administrators can utilize naming conventions for databases, a property which can be used to predict their relationships. Furthermore, existing applications for managing databases have been compared with the thesis' implementation. A notable difference is that existing solutions focus on the structure of the data rather than the data itself. To accomplish all this, an agile method was chosen for fast results within the deadline, together with standardized web development tools and JavaScript frameworks. The resulting implementation consists of a frontend and a backend. The frontend was developed using the Ember.js framework for making web applications, and the backend was implemented using Node.js together with Sequelize, a component for handling different database dialects. It has been concluded that the prototype this thesis has resulted in works as a proof of concept, complete with a prediction algorithm that can suggest relationships within databases that utilize convenient and consistent naming conventions. In the future, further research and tests could be conducted to evaluate the security, reliability and usability of the application, to ensure its production quality.
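As an illustration of convention-based relationship prediction (a simplified sketch, not the thesis' actual algorithm), foreign keys can be guessed from column names of the form <entity>_id:

def predict_relationships(schema):
    """Guess foreign keys from '<singular>_id' column naming conventions."""
    tables = set(schema)
    links = []
    for table, columns in schema.items():
        for col in columns:
            if col.endswith("_id"):
                base = col[:-3]                       # e.g. 'user' from 'user_id'
                for candidate in (base, base + "s"):  # naive pluralisation
                    if candidate in tables and candidate != table:
                        links.append((table, col, candidate))
    return links

schema = {
    "users": ["id", "name"],
    "orders": ["id", "user_id", "total"],
}
print(predict_relationships(schema))  # [('orders', 'user_id', 'users')]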
APA, Harvard, Vancouver, ISO, and other styles
35

Lutz, João Adolfo Froede. "Descoberta de ruído em páginas da web oculta através de uma abordagem de aprendizagem supervisionada." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/94625.

Full text
Abstract:
One of the problems of data extraction from web pages is the identification of noise in pages. This task aims at identifying non-informative elements in pages, such as headers, menus, or advertisements. The presence of noise may hinder the performance of search engines and web mining tasks. In this work we tackle the problem of discovering noise in web pages found in the hidden web, i.e., the part of the web that is only accessible by filling web forms. In hidden web processing, data extraction is usually preceded by a form filling step, in which the query forms that give access to the hidden web pages are automatically or semi-automatically filled. During form filling, relevant data about the queried domain are collected, such as field names and field values. Our proposal combines this type of data with syntactic information about the nodes that compose the page. We show empirically that this combination achieves better results than an approach based solely on syntactic information.
APA, Harvard, Vancouver, ISO, and other styles
36

Maillot, Pierre. "Nouvelles méthodes pour l'évaluation, l'évolution et l'interrogation des bases du Web des données." Thesis, Angers, 2015. http://www.theses.fr/2015ANGE0007/document.

Full text
Abstract:
The Web of Data offers an environment for sharing and disseminating data, following principles that allow data to be exploited both by humans and by machines. To this end, the RDF framework formats data as elementary statements of the form (subject, relation, object), called triples. Bases of the Web of Data, called RDF bases, are sets of triples. In an RDF base, the ontology (the structural data) organizes the description of the factual data. The number and size of bases of the Web of Data have grown constantly since its creation in 2001, and this growth has accelerated since the emergence of the Linked Data movement in 2008, which encourages the sharing and interlinking of publicly accessible bases on the Internet. These bases cover varied domains such as encyclopedic (e.g., Wikipedia), governmental or bibliographic data. The data in these bases are used and updated by communities of users linked by a common domain of interest, with the support of tools that are insufficiently mature for diagnosing the content of a base or for querying the bases of the Web of Data together. This thesis proposes three methods to support the development, both factual and ontological, and to improve the querying of bases of the Web of Data. We first propose a method to evaluate the quality of modifications of factual data made by a contributor during an update. We then propose a method to ease the examination of a base by highlighting groups of factual data in conflict with the ontology, so that the expert guiding the evolution of the base can modify either the ontology or the data. Finally, we propose a querying method for a distributed environment that queries only the bases likely to provide an answer.
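To make the triple model concrete, here is a minimal example with the Python rdflib library (the namespace and facts are invented):

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Angers, RDF.type, EX.City))                # (subject, relation, object)
g.add((EX.Angers, EX.population, Literal(155000)))   # a factual triple

# The ontology side would constrain such facts, e.g. the domain of ex:population.
for row in g.query("SELECT ?s WHERE { ?s a <http://example.org/City> }"):
    print(row.s)  # http://example.org/Angers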
APA, Harvard, Vancouver, ISO, and other styles
37

Musaraj, Kreshnik. "Extraction automatique de protocoles de communication pour la composition de services Web." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10288/document.

Full text
Abstract:
Business process management, service-oriented architectures and their reverse engineering heavily rely on the fundamental endeavor of mining business process models and Web service business protocols from log files. Model extraction and mining aim at the (re)discovery of the behavior of a running model implementation using solely its interaction and activity traces, and no a priori information on the target model. Our preliminary study shows that: (i) a minority of interaction data is recorded by process- and service-aware architectures, (ii) a limited number of methods achieve model extraction without knowledge of either positive process and protocol instances or the information to infer them, and (iii) the existing approaches rely on restrictive assumptions that only a fraction of real-world Web services satisfy. Enabling the extraction of these interaction models from activity logs under realistic hypotheses necessitates: (i) approaches that abstract away the business context in order to allow their extended and generic usage, and (ii) tools for assessing the mining result through implementation of the process and service life-cycle. Moreover, since interaction logs are often incomplete, uncertain and contain errors, the mining approaches proposed in this work need to be capable of handling these imperfections properly. We propose a set of mathematical models that encompass the different aspects of process and protocol mining. The extraction approaches that we present, drawn from linear algebra, allow us to extract the business protocol while merging the classic process mining stages. On the other hand, our protocol representation, based on time series of flow density variations, makes it possible to recover the temporal order of execution of events and messages in the process. In addition, we propose the concept of proper timeouts to refer to timed transitions, and provide a method for extracting them despite their property of being invisible in logs. In the end, we present a multitask framework aimed at supporting all the steps of the process workflow and business protocol life-cycle, from design to optimization. The approaches presented in this manuscript have been implemented in prototype tools, and experimentally validated on scalable datasets and real-world process and web service models. The discovered business protocols can thus be used to perform a multitude of tasks in an organization or enterprise.
APA, Harvard, Vancouver, ISO, and other styles
38

Khalil, Jacob, and Gustaf Edlund. "Building backlinks with Web 2.0 : Designing, implementing and evaluating a costless off-site SEO strategy with backlinks originating from Web 2.0 blogs." Thesis, Jönköping University, Tekniska Högskolan, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50352.

Full text
Abstract:
Purpose – The purpose of this thesis is to contribute to the research on the efficacy of backlinks originating from Web 2.0 blogs by designing a costless method for creating controllable backlinks to a website, solely with Web 2.0 blogs as a source of backlinks. The objective is to find out if such links can affect a website's positions in the Google SERPs in 2020, and to contribute a controllable link strategy that is available to any SEO practitioner regardless of their economic circumstances. The thesis provides answers to two research questions: 1. What positions in the SERPs can an already existing website claim as a result of creating and implementing a link strategy that utilizes Web 2.0 blogs? 2. In the context of implementing a link strategy, what practices must be considered for it to remain unpunished by Google in 2020?

Method – The choice of research method, due to the nature of the project, is Design Science Research (DSR), in which the designed artefact is observationally evaluated by conducting a field study. The artefact consists of four unique Web 2.0 blogs that each sent a backlink to the target website through qualitative blog posts following Google's guidelines. Quantitative data was collected using SERPWatcher by Mangools, which tracked 29 keywords for 52 days, and was qualitatively analysed.

Conclusions – There is a distinct relation between the improvement in keyword positions and the implementation of the artefact, leading to the conclusion that it is reasonable to believe that Web 2.0 blog backlinks can affect a website's positions in the SERPs in modern Google Search. More research experimenting with Web 2.0 blogs as the origin of backlinks must be conducted in order to truly affirm or deny this claim, as an evaluation on only one website is insufficient. It can be concluded that the target website was not punished by Google after implementation. While Google's search algorithm may be complex and intelligent, it was not intelligent enough to punish our intentions of manipulating another website's keyword positions via a link scheme. Passing as legitimate may have been due to following E-A-T practices and acting naturally, but this is mere speculation without comparisons with similar strategies that disregard these practices.

Limitations – Rigorous testing and evaluation of the designed artefact and its components is very important when conducting research that employs DSR as a method. Due to time constraints, the lack of data points in the form of websites the artefact has been tested on, as well as the absence of iterative design, partially limits the validity of the artefact, since it does not meet the criterion of being rigorously tested. The data collected would be more impactful if keyword data were gathered many days before executing the artefact, as a pre-implementation period longer than 7 days would act as a reference point when evaluating the effect. It would also be ideal to track the effects post-implementation for a longer time period, due to the slow nature of SEO.

Keywords – SEO, search engine optimization, off-page optimization, Google Search, Web 2.0, backlinks.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Xueqi. "Query Rewriting for Extracting Data behind HTML Forms." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd406.Chen.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

De, Wilde Max. "From Information Extraction to Knowledge Discovery: Semantic Enrichment of Multilingual Content with Linked Open Data." Doctoral thesis, Université Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/218774.

Full text
Abstract:
Discovering relevant knowledge out of unstructured text is not a trivial task. Search engines relying on full-text indexing of content reach their limits when confronted with poor quality, ambiguity, or multiple languages. Some of these shortcomings can be addressed by information extraction and related natural language processing techniques, but these still fall short of adequate knowledge representation. In this thesis, we defend a generic approach striving to be as language-independent, domain-independent, and content-independent as possible. To reach this goal, we propose to disambiguate terms with their corresponding identifiers in Linked Data knowledge bases, paving the way for full-scale semantic enrichment of textual content. The added value of our approach is illustrated with a comprehensive case study based on a trilingual historical archive, addressing constraints of data quality, multilingualism, and language evolution. A proof-of-concept implementation is also proposed in the form of a Multilingual Entity/Resource Combiner & Knowledge eXtractor (MERCKX), demonstrating to a certain extent the general applicability of our methodology to any language, domain, and type of content.
Doctorate in Information and Communication
APA, Harvard, Vancouver, ISO, and other styles
41

Kilic, Sefa. "Clustering Frequent Navigation Patterns From Website Logs Using Ontology And Temporal Information." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12613979/index.pdf.

Full text
Abstract:
Given a set of web pages labeled with ontological items, the level of similarity between two web pages is measured using the level of similarity between the ontological items the pages are labeled with. Using this similarity measure between two pages, the degree of similarity between two sequences of web page visits can be calculated as well. Using clustering algorithms, similar frequent sequences are grouped and representative sequences are selected from these groups. A new sequence is compared with all clusters and assigned to the most similar one. Representatives of the most similar cluster can be used in several real-world cases. They can be used for predicting and prefetching the next page a user will visit, or for helping the user navigate the website. They can also be used to improve the structure of the website for easier navigation. In this study, the effect of time spent on each web page during the session is also analyzed.
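One simple way to lift a page-to-page similarity up to sequences of visits (a sketch of the general idea, not the thesis' exact measure): score each page of one session against its best match in the other session and average.

def sequence_similarity(seq_a, seq_b, page_sim):
    """Average best-match similarity between two sequences of page visits.

    page_sim(p, q) -> float in [0, 1], e.g. derived from ontology labels.
    """
    if not seq_a or not seq_b:
        return 0.0
    score_ab = sum(max(page_sim(a, b) for b in seq_b) for a in seq_a) / len(seq_a)
    score_ba = sum(max(page_sim(b, a) for a in seq_a) for b in seq_b) / len(seq_b)
    return (score_ab + score_ba) / 2  # symmetrise

# Toy page similarity: 1.0 for identical labels, 0.5 for a shared category prefix
def page_sim(p, q):
    if p == q:
        return 1.0
    return 0.5 if p.split("/")[0] == q.split("/")[0] else 0.0

print(sequence_similarity(["news/sport", "news/tech"], ["news/tech"], page_sim))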
APA, Harvard, Vancouver, ISO, and other styles
42

Porto, André Luiz Lopes. "Extração não supervisionada de dados da web utilizando abordagem independente de formato." Universidade Federal do Amazonas, 2015. http://tede.ufam.edu.br/handle/tede/5113.

Full text
Abstract:
In this thesis we propose a new method for extracting data from data-rich Web pages that uses only the textual content of these pages. Our method, called FIEX (Format Independent Web Data Extraction), is based on information extraction techniques for text segmentation, and can extract data from Web pages where state-of-the-art methods based on data alignment techniques fail due to inconsistency between the logical structure of Web pages and the conceptual structure of the data represented in them. FIEX, unlike the methods previously proposed in the literature, is able to extract data using only the textual content of a Web page in challenging scenarios such as severe cases of compound textual elements, in which several values of interest for extraction are represented by a single HTML element. To perform the extraction of data from Web pages, FIEX relies on techniques for noise elimination based on information redundancy, and on an information extraction method for text segmentation known in the literature as ONDUX (On-Demand Unsupervised Learning for Information Extraction). In our experiments, we used several collections of Web pages from different product domains and e-commerce stores, with the goal of extracting data from product descriptions. This type of Web page was chosen because a large amount of its data is contained in severe cases of compound textual elements. According to the results obtained in our experiments on various product domains and e-commerce stores, we validate the hypothesis that extraction based only on textual features is possible and effective.
APA, Harvard, Vancouver, ISO, and other styles
43

Mikita, Tibor. "Portál pro agregaci dat z webových zdrojů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403123.

Full text
Abstract:
This thesis deals with data extraction and data aggregation from heterogeneous web sources. The goal is to create a platform and a functional web application using appropriate technologies. The main focus of the thesis is on the application design and implementation. The application domain is accommodation, specifically the lease of apartments. For the data extraction, we use the portal API or a wrapper. The obtained data is stored in a document database. In this thesis, we designed and implemented a system that obtains rental ads from multiple web sources at the same time and presents them in a uniform way.
APA, Harvard, Vancouver, ISO, and other styles
44

Toda, Guilherme Alves. "Um método probabilístico para o preenchimento automático de formulários Web a partir de textos ricos em dados." Universidade Federal do Amazonas, 2010. http://tede.ufam.edu.br/handle/tede/2892.

Full text
Abstract:
On the Web of today, the most prevalent solution for users to interact with data-intensive applications is the use of form-based interfaces composed of several data input fields, such as text boxes, radio buttons, pull-down lists and check boxes. Although these interfaces are popular and effective, in many cases free-text interfaces are preferred over form-based ones. In this work we present the proposal, implementation and evaluation of a novel method for automatically filling form-based input interfaces using data-rich text. Our solution takes a data-rich free text as input (e.g., an ad), extracts implicit data values from it and fills the appropriate fields using them. For this task, we rely on knowledge obtained from the values of previous submissions for each field, which are freely obtained from the usage of the interfaces. Our approach, called iForm, exploits features related to the content and the style of these values, which are combined through a Bayesian framework. Through extensive experimentation, we show that our approach is feasible and effective, and that it works well even when only a few previous submissions to the input interface are available.
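A toy sketch of the Bayesian idea described above (not iForm's actual model): score each form field for a text segment by combining naive-Bayes likelihoods of the segment's words, learned from previous submissions.

import math
from collections import Counter, defaultdict

class FieldScorer:
    """Naive-Bayes scoring of which form field a text segment belongs to."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # field -> word frequencies
        self.field_totals = Counter()

    def train(self, field, value):
        for w in value.lower().split():
            self.word_counts[field][w] += 1
            self.field_totals[field] += 1

    def best_field(self, segment):
        words = segment.lower().split()
        def log_prob(field):
            total = self.field_totals[field]
            vocab = len(self.word_counts[field]) + 1
            return sum(math.log((self.word_counts[field][w] + 1) / (total + vocab))
                       for w in words)  # Laplace smoothing
        return max(self.field_totals, key=log_prob)

scorer = FieldScorer()
scorer.train("brand", "honda civic")
scorer.train("color", "silver gray")
print(scorer.best_field("honda"))  # -> 'brand'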
APA, Harvard, Vancouver, ISO, and other styles
45

Issa, Subhi. "Linked data quality : completeness and conciseness." Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1274.

Full text
Abstract:
The widespread adoption of Semantic Web technologies such as the Resource Description Framework (RDF) enables individuals to build their databases on the Web, to write vocabularies, and to define rules to arrange and explain the relationships between data according to the Linked Data principles. As a consequence, a large amount of structured and interlinked data is being generated daily. A close examination of the quality of this data can be very critical, especially if important research and professional decisions depend on it. The quality of Linked Data is an important aspect of its fitness for use in applications. Several dimensions for assessing the quality of Linked Data have been identified, such as accuracy, completeness, provenance, and conciseness. This thesis focuses on assessing the completeness and enhancing the conciseness of Linked Data. In particular, we first propose a completeness calculation approach based on a generated schema. Indeed, as a reference schema is required to assess completeness, we propose a mining-based approach to derive a suitable schema (i.e., a set of properties) from data. This approach distinguishes between essential properties and marginal ones to generate, for a given dataset, a conceptual schema that meets the user's expectations regarding data completeness constraints. We implemented a prototype called "LOD-CM" to illustrate the process of deriving a conceptual schema of a dataset based on the user's requirements. We further propose an approach to discover equivalent predicates to improve the conciseness of Linked Data. This approach is based, in addition to a statistical analysis, on a deep semantic analysis of data and on learning algorithms. We argue that studying the meaning of predicates can help to improve the accuracy of results. Finally, a set of experiments was conducted on real-world datasets to evaluate our proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
46

Ågren, Ola. "Finding, extracting and exploiting structure in text and hypertext." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22352.

Full text
Abstract:
Data mining is a fast-developing field of study, using computations to either predict or describe large amounts of data. The increase in data produced each year goes hand in hand with this, requiring algorithms that are more and more efficient in order to find interesting information within a given time. In this thesis, we study methods for extracting information from semi-structured data, for finding structure within large sets of discrete data, and for efficiently ranking web pages in a topic-sensitive way.

The information extraction research focuses on support for keeping both documentation and source code up to date at the same time. Our approach to this problem is to embed parts of the documentation within strategic comments of the source code and then extract them by using a specific tool.

The structures that our structure mining algorithms are able to find among crisp data (such as keywords) are in the form of subsumptions, i.e. one keyword is a more general form of the other. We can use these subsumptions to build larger structures in the form of hierarchies or lattices, since subsumptions are transitive. Our tool has been used mainly as input to data mining systems and for visualisation of data-sets.

The main part of the research has been on ranking web pages in such a way that both the link structure between pages and the content of each page matter. We have created a number of algorithms and compared them to other algorithms in use today. Our focus in these comparisons has been on convergence rate, algorithm stability and how relevant the answer sets from the algorithms are according to real-world users.

The research has focused on the development of efficient algorithms for gathering and handling large data-sets of discrete and textual data. A proposed system of tools is described, all operating on a common database containing "fingerprints" and meta-data about items. This data could be searched by various algorithms to increase its usefulness or to find the real data more efficiently. All of the methods described handle data in a crisp manner, i.e. a word or a hyper-link either is or is not a part of a record or web page. This means that we can model their existence in a very efficient way. The methods and algorithms that we describe all make use of this fact.
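The subsumption structure described here has a classic set-based reading: keyword A subsumes keyword B when every document containing B also contains A. A small sketch under that assumption (not necessarily the thesis' exact criterion):

def find_subsumptions(doc_sets):
    """Return (general, specific) pairs where docs(specific) is a proper subset of docs(general)."""
    pairs = []
    for a, docs_a in doc_sets.items():
        for b, docs_b in doc_sets.items():
            if a != b and docs_b and docs_b < docs_a:  # proper subset: b is more specific
                pairs.append((a, b))
    return pairs

doc_sets = {
    "animal": {1, 2, 3, 4},
    "dog":    {1, 2},
    "poodle": {1},
}
# Transitivity yields a hierarchy: animal > dog, dog > poodle, animal > poodle
print(find_subsumptions(doc_sets))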
AlgExt, CHiC, ProT
APA, Harvard, Vancouver, ISO, and other styles
47

Pires, Julio Cesar Batista. "Extração e mineração de informação independente de domínios da web na língua portuguesa." Universidade Federal de Goiás, 2015. http://repositorio.bc.ufg.br/tede/handle/tede/4723.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Many people are constantly connected to the Web, looking for all kinds of things. The Web is a huge source of information, so they can find almost everything they want. However, Web information is disorganized and has no formal structure. This hampers machine processing and consequently makes information access more difficult. Bringing structure to the Web can be one of the key points for facilitating user searching and navigation. A recent technique, Open Information Extraction, has been successfully applied to extract structured information from the Web. This technique has mostly been applied to pages written in English. This work focuses specifically on information extraction for Portuguese. The techniques used here can be applied to other languages as well.
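As a toy illustration of the Open Information Extraction idea for Portuguese (a deliberately crude sketch; real systems rely on POS tagging and dependency parsing rather than a fixed verb list), a pattern extractor for (arg1, relation, arg2) triples might look like:

import re

# Crude pattern: "<capitalised noun phrase> <verb from a small list> <rest>"
PATTERN = re.compile(
    r"^(?P<arg1>[A-ZÀ-Ú][\w\s]+?)\s+(?P<rel>é|foi|tem|possui)\s+(?P<arg2>.+?)\.?$"
)

def extract_triple(sentence):
    m = PATTERN.match(sentence.strip())
    return (m["arg1"], m["rel"], m["arg2"]) if m else None

print(extract_triple("Goiás é um estado do Brasil."))
# ('Goiás', 'é', 'um estado do Brasil')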
APA, Harvard, Vancouver, ISO, and other styles
48

Tang, Wei. "Internet-Scale Information Monitoring: A Continual Query Approach." Diss., Georgia Institute of Technology, 2003. http://etd.gatech.edu/theses/available/etd-12042003-173321/unrestricted/tangwei200312.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2004.
Thomas E. Potok, Committee Member; Calton Pu, Committee Member; Edward Omiecinski, Committee Member; Leo Mark, Committee Member; Constantinos Dovrolis, Committee Member; Ling Liu, Committee Chair. Includes bibliography.
APA, Harvard, Vancouver, ISO, and other styles
49

Oucif, Kadday. "Evaluation of web scraping methods : Different automation approaches regarding web scraping using desktop tools." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188418.

Full text
Abstract:
A lot of information can be found and extracted from the semantic web in different forms through web scraping, with many techniques having emerged over time. This thesis is written with the objective of evaluating different web scraping methods in order to develop an automated, performance-reliable, easily implemented and solid extraction process. A number of parameters are set to better evaluate and compare existing techniques. A matrix of desktop tools is examined, and two are chosen for evaluation. The evaluation also includes learning to set up the scraping process with so-called agents. A number of links are scraped using the presented techniques, with and without executing JavaScript from the web sources. Prototypes with the chosen techniques are presented, with Content Grabber as a final solution. The result is a better understanding of the subject along with a cost-effective extraction process consisting of different techniques and methods, where a good understanding of the web sources' structure facilitates data collection. Finally, the result is discussed and presented with regard to the chosen parameters.
APA, Harvard, Vancouver, ISO, and other styles
50

Ammari, Ahmad N. "Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns : the development and evaluation of new Web mining methods that enhance information retrieval and improve the understanding of users' Web behavior in websites and social blogs." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5269.

Full text
Abstract:
The rapid growth of the World Wide Web in the last decade has made it the largest publicly accessible data source in the world, and one of the most significant and influential information revolutions of modern times. The influence of the Web has impacted almost every aspect of human life and activity, causing paradigm shifts and transformational changes in business, governance, and education. Moreover, the rapid evolution of Web 2.0 and the Social Web in the past few years, such as social blogs and friendship networking sites, has dramatically transformed the Web from a raw environment for information consumption to a dynamic and rich platform for information production and sharing worldwide. However, this growth and transformation of the Web has resulted in an uncontrollable explosion and abundance of textual content, creating a serious challenge for any user to find and retrieve the relevant information that they truly seek on the Web. The process of finding a relevant Web page in a website easily and efficiently has become very difficult to achieve. This has created many challenges for researchers to develop new mining techniques to improve the user experience on the Web, as well as for organizations to understand the true informational interests and needs of their customers in order to improve their targeted services accordingly, by providing the products, services and information that truly match the requirements of every online customer.

With these challenges in mind, Web mining aims to extract hidden patterns and discover useful knowledge from Web page contents, Web hyperlinks, and Web usage logs. Based on the primary kinds of Web data used in the mining process, Web mining tasks can be categorized into three main types: Web content mining, which extracts knowledge from Web page contents using text mining techniques; Web structure mining, which extracts patterns from the hyperlinks that represent the structure of the website; and Web usage mining, which mines users' Web navigational patterns from Web server logs that record the Web page accesses made by every user, representing the interactional activities between the users and the Web pages in a website.

The main goal of this thesis is to contribute toward addressing the challenges that have resulted from the information explosion and overload on the Web, by proposing and developing novel Web mining-based approaches. Toward achieving this goal, the thesis presents, analyzes, and evaluates three major contributions: first, the development of an integrated Web structure and usage mining approach that recommends a collection of hyperlinks to be placed at the homepage of a website for its surfers; second, the development of an integrated Web content and usage mining approach to improve the understanding of users' Web behavior and discover the user group interests in a website; and third, the development of a supervised classification model based on recent Social Web concepts, such as tag clouds, in order to improve the retrieval of relevant articles and posts from Web social blogs.
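As a small illustration of the raw input to Web usage mining (a sketch; the log line is invented), server logs in Common Log Format can be parsed into (user, page) access records before pattern mining:

import re

LOG = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def parse_access(line):
    """Return (host, path) for successful GET requests, else None."""
    m = LOG.match(line)
    if m and m["status"] == "200" and m["method"] == "GET":
        return m["host"], m["path"]      # who accessed which page
    return None

line = '203.0.113.7 - - [10/Oct/2010:13:55:36 -0700] "GET /products.html HTTP/1.0" 200 2326'
print(parse_access(line))  # ('203.0.113.7', '/products.html')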
APA, Harvard, Vancouver, ISO, and other styles