Dissertations / Theses on the topic 'Web page data extraction'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Web page data extraction.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Alves, Ricardo João de Freitas. "Declarative approach to data extraction of web pages." Master's thesis, Faculdade de Ciências e Tecnologia, 2009. http://hdl.handle.net/10362/5822.
In the last few years, we have been witnessing a noticeable Web evolution with the introduction of significant improvements at the technological level, such as the emergence of XHTML, CSS, JavaScript, and Web 2.0, just to name a few. This, combined with other factors such as the physical expansion of the Web and its low cost, has been a great motivator for organizations and the general public to join, with a consequent growth in the number of users, thus increasing the volume of the largest global data repository. In consequence, there is an increasing need for regular data acquisition from the Web, which, because of its frequency, scale or complexity, is only viable through automatic extractors. However, two main difficulties are inherent to automatic extractors. First, much of the Web's information is presented in visual formats mainly intended for human reading. Second, dynamic web pages are assembled in local memory from different sources, so some pages have no single source file. Therefore, this thesis proposes a new, more modern extractor capable of keeping up with the Web's evolution, generic enough to be used in any situation, and extensible and easily adaptable to more particular uses. This project extends an earlier one that could extract data from semi-structured text files; it evolved into a modular extraction system capable of extracting data from web pages and semi-structured text files, and expandable to support other types of data source. It also contains a more complete and generic validation system and a new data delivery system capable of performing the earlier deliveries as well as new generic ones.
A graphical editor was also developed to support the extraction system features and to allow a domain expert without computer knowledge to create extractions with only a few simple and intuitive interactions on the rendered webpage.
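As a loose illustration of the declarative idea described above, an extraction specification can state *what* to capture (field names mapped to patterns) separately from the generic engine that applies it. The spec format, field names, and patterns below are invented for illustration; they are not the thesis's actual specification language.

```python
import re

# Hypothetical declarative spec: each field names a capture pattern.
# A domain expert edits only this mapping, never the engine below.
SPEC = {
    "title": r"<h1[^>]*>(.*?)</h1>",
    "price": r'<span class="price">(.*?)</span>',
}

def extract(page, spec):
    """Apply a declarative spec to a page, returning one record."""
    record = {}
    for field, pattern in spec.items():
        match = re.search(pattern, page, re.S)
        record[field] = match.group(1).strip() if match else None
    return record
```

The engine stays unchanged across sites; only the specification varies, which is the essence of a declarative approach.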
Cheng, Wang. "AMBER : a domain-aware template based system for data extraction." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:ff49d786-bfd8-4cd4-a69c-19e81cb95920.
Anderson, Neil David Alan. "Data extraction & semantic annotation from web query result pages." Thesis, Queen's University Belfast, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.705642.
Wu, Yongliang. "Aggregating product reviews for the Chinese market." Thesis, KTH, Kommunikationssystem, CoS, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91484.
By December 2007, the number of Internet users in China had grown to 210 million people. The annual growth rate reached 53.3 percent in 2008, with the average number of Internet users increasing by 200,000 people every day. At present, China's Internet population is slightly smaller than the 215 million Internet users in the United States. [1] Despite the rapid growth of the Chinese economy in the global Internet market, China's e-commerce does not follow the traditional pattern of commerce, but has instead developed based on user demand. This growth has extended to all areas of the Internet. In the West, expert reviews have proven to be an important part of users' purchase decisions. The higher the quality of the product reviews that customers receive, the more products they buy from online stores. As the number of products and alternatives increases, Chinese customers need impersonal, unbiased, and detailed product reviews. This thesis focuses on online reviews and how they affect Chinese customers' purchase decisions. E-commerce is a complex system. As a typical model of e-commerce, we examine a business-to-consumer (B2C) online sales site and consider a number of factors, including some seemingly subtle factors that may influence the customer's eventual decision to shop on the site. Specifically, this thesis examines the aggregation of reviews from different online sources by analyzing some existing Western companies. The thesis then shows how to aggregate product reviews for an e-commerce website. During this work we found that existing data mining techniques made it straightforward to collect reviews. These reviews were stored in a database, and a web application can search this database to provide a user with a set of relevant product reviews.
One of the important issues, just as with search engines, is providing relevant product reviews and deciding the order in which to present them. In our work we selected reviews by matching the product (though in some cases there are ambiguities as to whether two products are really identical or not) and ordered the matching reviews by date, with the most recent reviews presented first. Some of the open questions that remain for the future are: (1) improving the matching, to avoid ambiguity about whether reviews concern the same product or not, and (2) determining whether the reviews actually influence a Chinese user's choice to buy a product.
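The matching-then-ordering scheme sketched in the abstract above might look like the following in outline. The normalization rule, field names, and sample data are assumptions for illustration, not the thesis's implementation.

```python
from datetime import date

def normalize(name):
    """Lower-case and collapse whitespace so near-identical product names match."""
    return " ".join(name.lower().split())

def matching_reviews(product_name, reviews):
    """Return reviews whose product matches, most recent first."""
    key = normalize(product_name)
    matched = [r for r in reviews if normalize(r["product"]) == key]
    return sorted(matched, key=lambda r: r["date"], reverse=True)

# Illustrative review records aggregated from different sources.
reviews = [
    {"product": "Nokia N95", "date": date(2008, 5, 1), "text": "Good camera"},
    {"product": "nokia  n95", "date": date(2009, 1, 10), "text": "Still solid"},
    {"product": "Nokia N82", "date": date(2008, 7, 3), "text": "Different model"},
]

ordered = matching_reviews("Nokia N95", reviews)
```

Exact-name matching like this is precisely where the ambiguity noted in point (1) arises; a real system would need a fuzzier product-identity test.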
Malchik, Alexander 1975. "An aggregator tool for extraction and collection of data from web pages." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86522.
Includes bibliographical references (p. 54-56).
by Alexander Malchik.
M.Eng.
Kolečkář, David. "Systém pro integraci webových datových zdrojů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417239.
Mazal, Zdeněk. "Extrakce textových dat z internetových stránek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219347.
Weng, Daiyue. "Extracting structured data from Web query result pages." Thesis, Queen's University Belfast, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709858.
Full textСмілянець, Федір Андрійович. "Екстракція структурованої інформації з множини веб-сторінок." Master's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/39926.
Relevance of the research topic. The modern Web is a considerable source of data for scientific and business applications. The ability to extract up-to-date data is frequently crucial for reaching necessary goals; however, modern high-quality solutions to this problem, which use computer vision and other technologies, may be financially demanding to acquire or develop, so solutions that are simple and cheap to develop, maintain and use are necessary. The purpose of the study is to create a software instrument for extracting structured data from news websites, for use in news trustworthiness classification. The following tasks were outlined and implemented to achieve this goal: outline existing approaches and analogues in the areas of data extraction and news classification; design and develop extraction, preparation and classification algorithms; compare the results achieved with the developed extraction algorithm against an existing software solution, including comparing machine learning accuracy on both extractors. The object of the study is the process of text data extraction with subsequent machine learning analysis. The subjects of the study are methods and tools for the extraction and analysis of text data. Scientific novelty of the obtained results. A simple greedy algorithm was created, combining the processes of link discovery and data extraction. The expediency of using simple web data extraction algorithms for composing machine learning datasets was demonstrated. It was also shown that classical machine learning algorithms can achieve results similar to neural networks such as LSTM, and that machine learning systems can function efficiently in a bilingual context. Publications.
Materials related to this study were published in the All-Ukrainian Scientific and Practical Conference of Young Scientists and Students "Information Systems and Management Technologies" (ISTU-2019) as "News trustworthiness classification with machine learning".
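The simple greedy algorithm mentioned above, combining link discovery with data extraction, can be sketched roughly as follows. The fetch function, URL handling, and page limit are illustrative assumptions, not the thesis's code.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class PageParser(HTMLParser):
    """Collect outgoing links and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def greedy_crawl(seed, fetch, limit=10):
    """Visit pages in discovery order, extracting text and new links as we go.

    fetch is a caller-supplied function mapping a URL to its HTML string.
    """
    seen, queue, articles = {seed}, [seed], {}
    while queue and len(articles) < limit:
        url = queue.pop(0)
        parser = PageParser()
        parser.feed(fetch(url))
        articles[url] = " ".join(parser.text)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return articles
```

Interleaving discovery and extraction in one loop keeps the tool simple and cheap to maintain, which is the trade-off the abstract argues for over heavier computer-vision extractors.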
Hou, Jingyu. "Discovering web page communities for web-based data management." University of Southern Queensland, Faculty of Sciences, 2002. http://eprints.usq.edu.au/archive/00001447/.
Khalil, Faten. "Combining web data mining techniques for web page access prediction." University of Southern Queensland, Faculty of Sciences, 2008. http://eprints.usq.edu.au/archive/00004341/.
Guo, Jinsong. "Reducing human effort in web data extraction." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:04bd39dd-bfec-4c07-91db-980fcbc745ba.
Ouahid, Hicham. "Data extraction from the Web using XML." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9260.
Gottlieb, Matthew. "Understanding malware autostart techniques with web data extraction." Online version of thesis, 2009. http://hdl.handle.net/1850/10632.
Lalithsena, Sarasi. "Domain-specific Knowledge Extraction from the Web of Data." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527202092744638.
Popescu, Ana-Maria. "Information extraction from unstructured web text." Thesis, UW restricted, 2007. http://hdl.handle.net/1773/6935.
Chartrand, Timothy Adam. "Ontology-Based Extraction of RDF Data from the World Wide Web." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/56.
Zhou, Yuanqiu. "Generating Data-Extraction Ontologies By Example." Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd1115.pdf.
Ortona, Stefano. "Easing information extraction on the web through automated rules discovery." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a5a7a070-338a-4afc-8be5-a38b486cf526.
Xhemali, Daniela. "Automated retrieval and extraction of training course information from unstructured web pages." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/7022.
Chartrand, Tim. "Ontology-based extraction of RDF data from the World Wide Web." Diss., 2003. http://contentdm.lib.byu.edu/ETD/image/etd168.pdf.
Wei, Chenjie. "Using Automated Extraction of the Page Component Hierarchy to Customize and Adapt Web Pages to Mobile Devices." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338348757.
Palma, Michael, and Shidi Zhou. "A Web Scraper For Forums : Navigation and text extraction methods." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219903.
Full textWebforum är ett populärt sätt att utbyta information och diskutera olika ämnen. Dessa webbplatser har vanligtvis en särskild struktur, uppdelad i startsida, trådar och inlägg. Även om strukturen kan vara konsekvent bland olika forum är layouten av varje forum annorlunda. Det sätt på vilket ett webbforum presenterar användarinläggen är också väldigt annorlunda än hur en nyhet webbplats presenterar en enda informationsinlägg. Allt detta gör navigering och extrahering av text en svår uppgift för webbskrapor. Fokuset av detta examensarbete är utvecklingen av en webbskrapa specialiserad på forum. Tre olika metoder för textutvinning implementeras och testas innan man väljer den lämpligaste metoden för uppgiften. Metoderna är Word Count, Text Detection Framework och Text-to-Tag Ratio. Hanteringen av länk dubbleringar noga övervägd och löses genom att implementera ett flerlagers bloom filter. Examensarbetet genomförs med tillämpning av en kvalitativ metodik. Resultaten indikerar att Text-to-Tag Ratio har den bästa övergripande prestandan och ger det mest önskvärda resultatet i webbforum. Således var detta den valda metoden att behålla i den slutliga versionen av webbskrapan.
Gerber, Daniel. "Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208759.
Full textUsbeck, Ricardo. "Knowledge Extraction for Hybrid Question Answering." Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-225097.
Full textZhao, Hongkun. "Automatic wrapper generation for the extraction of search result records from search engines." Diss., Online access via UMI:, 2007.
Find full textLogofatu, Cristina. "Improving communication in a transportation company by using a Web page." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2520.
Full textSellers, Andrew. "OXPath : a scalable, memory-efficient formalism for data extraction from modern web applications." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555325.
Full textAbdulrahman, Ruqayya. "Multi agent system for web database processing, on data extraction from online social networks." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5502.
Full textCooper, Erica L. "Automatic repair and recovery for Omnibase : robust extraction of data from diverse Web sources." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61157.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 31).
In order to make the best use of the multitude of diverse, semi-structured sources of data available on the internet, information retrieval systems need to reliably access the data on these different sites in a manner that is robust to changes in format or structure that these sites might undergo. An interface that gives a system uniform, programmatic access to the data on some web site is called a web wrapper, and the process of inferring a wrapper for a given website based on a few examples of its pages is known as wrapper induction. A challenge of using wrappers for online information extraction arises from the dynamic nature of the web: even the slightest change to the format of a web page may be enough to invalidate a wrapper. Thus, it is important to be able to detect when a wrapper no longer extracts the correct information, and for the system to be able to recover from this type of failure. This thesis demonstrates improved error detection as well as methods of recovery and repair for broken wrappers for START, a natural-language question-answering system developed by Infolab at MIT.
by Erica L. Cooper.
M.Eng.
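In outline, a wrapper as described above pairs an extraction procedure with a validity check, so that a format change which silently breaks extraction is detected rather than producing garbage. The page structure, pattern, and validator below are invented for illustration; this is not Omnibase or START code.

```python
import re

def make_wrapper(pattern, validate):
    """Build a wrapper: extract with a pattern, then sanity-check the result."""
    regex = re.compile(pattern, re.S)
    def wrapper(page):
        match = regex.search(page)
        if match is None:
            # The page no longer matches at all: likely a format change.
            raise ValueError("wrapper no longer matches; page format changed?")
        value = match.group(1).strip()
        if not validate(value):
            # The pattern matched, but the value looks wrong: also a failure.
            raise ValueError("extracted value failed validation: %r" % value)
        return value
    return wrapper

# A wrapper for a hypothetical weather page; the digit check guards against
# extracting navigation text after a layout change.
temp_wrapper = make_wrapper(r'<span class="temp">(.*?)</span>',
                            validate=lambda v: v.rstrip("°FC").lstrip("-").isdigit())
```

Raising on both failure modes gives the surrounding system a clear signal to trigger recovery, such as re-inducing the wrapper from fresh example pages.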
Gollapally, Devender R. "Multi-Agent Architecture for Internet Information Extraction and Visualization." Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2575/.
Full textSernadela, Pedro Miguel Lopes. "Data integration services for biomedical applications." Doctoral thesis, Universidade de Aveiro, 2018. http://hdl.handle.net/10773/23511.
Full textIn the last decades, the field of biomedical science has fostered unprecedented scientific advances. Research is stimulated by the constant evolution of information technology, delivering novel and diverse bioinformatics tools. Nevertheless, the proliferation of new and disconnected solutions has resulted in massive amounts of resources spread over heterogeneous and distributed platforms. Distinct data types and formats are generated and stored in miscellaneous repositories posing data interoperability challenges and delays in discoveries. Data sharing and integrated access to these resources are key features for successful knowledge extraction. In this context, this thesis makes contributions towards accelerating the semantic integration, linkage and reuse of biomedical resources. The first contribution addresses the connection of distributed and heterogeneous registries. The proposed methodology creates a holistic view over the different registries, supporting semantic data representation, integrated access and querying. The second contribution addresses the integration of heterogeneous information across scientific research, aiming to enable adequate data-sharing services. The third contribution presents a modular architecture to support the extraction and integration of textual information, enabling the full exploitation of curated data. The last contribution lies in providing a platform to accelerate the deployment of enhanced semantic information systems. All the proposed solutions were deployed and validated in the scope of rare diseases.
Le Grand, Bénédicte. "Extraction d'information et visualisation de systèmes complexes sémantiquement structurés." Paris 6, 2001. http://www.theses.fr/2001PA066508.
Hansson, Andreas. "Relational Database Web Application : Web administration interface for visualizing and predicting relationships to manage relational databases." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25879.
Lutz, João Adolfo Froede. "Descoberta de ruído em páginas da web oculta através de uma abordagem de aprendizagem supervisionada." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/94625.
Full textOne of the problems of data extraction from web pages is the identification of noise in pages. This task aims at identifying non-informative elements in pages, such as headers, menus, or advertisement. The presence of noise may hinder the performance of search engines and web mining tasks. In this paper we tackle the problem of discovering noise in web pages found in the hidden web, i.e., that part of the web that is only accessible by filling web forms. In hidden web processing, data extraction is usually preceeded by a form filling step, in which the query forms that give access to the hidden web pages are automatically or semi-automatically filled. During form filling relevant data about the queried domain are collected, as field names and field values. Our proposal combines this type of data with syntactic information about the nodes that compose the page. We show empirically that this combination achieves better results than an approach that is based solely on syntactic information. Keywords:
Maillot, Pierre. "Nouvelles méthodes pour l'évaluation, l'évolution et l'interrogation des bases du Web des données." Thesis, Angers, 2015. http://www.theses.fr/2015ANGE0007/document.
The Web of Data is a means to share and broadcast both user-readable and machine-readable data. This is possible thanks to RDF, which proposes formatting data into short sentences (subject, relation, object) called triples. Bases from the Web of Data, called RDF bases, are sets of triples. In an RDF base, the ontology (structural data) organizes the description of factual data. Since the creation of the Web of Data in 2001, the number and size of RDF bases have been constantly rising. This increase has accelerated since the appearance of Linked Data, which promotes the sharing and interlinking of publicly available bases by user communities. These communities exploit (query and edit) the bases without adequate solutions to evaluate the quality of new data, check the current state of the bases, or query a set of bases together. This thesis proposes three methods to help the expansion, at the factual and ontological levels, and the querying of bases from the Web of Data. We propose a method designed to help an expert check factual data in conflict with the ontology. Finally, we propose a method for distributed querying that limits the sending of queries to bases that may contain answers.
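The (subject, relation, object) triple model described above can be illustrated with plain tuples, where None acts as a wildcard in a pattern query. This is a toy sketch of the data model only, not an RDF library, and the example facts are illustrative.

```python
# An RDF base in miniature: a set of (subject, relation, object) triples.
triples = {
    ("Paris", "capitalOf", "France"),
    ("Berlin", "capitalOf", "Germany"),
    ("France", "memberOf", "EU"),
}

def query(pattern, store):
    """Return triples matching an (s, p, o) pattern; None matches anything."""
    return [t for t in store
            if all(q is None or q == v for q, v in zip(pattern, t))]

capitals = query((None, "capitalOf", None), triples)
```

Real RDF stores answer the same kind of triple-pattern queries (via SPARQL) at much larger scale; the distributed-querying method in the thesis decides which stores are worth asking at all.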
Musaraj, Kreshnik. "Extraction automatique de protocoles de communication pour la composition de services Web." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10288/document.
Business process management, service-oriented architectures and their reverse engineering rely heavily on the fundamental task of mining business process models and Web service business protocols from log files. Model extraction and mining aim at the (re)discovery of the behavior of a running model implementation using solely its interaction and activity traces, and no a priori information on the target model. Our preliminary study shows that: (i) only a minority of interaction data is recorded by process- and service-aware architectures, (ii) a limited number of methods achieve model extraction without knowledge of either positive process and protocol instances or the information to infer them, and (iii) the existing approaches rely on restrictive assumptions that only a fraction of real-world Web services satisfy. Enabling the extraction of these interaction models from activity logs under realistic hypotheses necessitates: (i) approaches that abstract away the business context in order to allow extended and generic usage, and (ii) tools for assessing the mining result through implementation of the process and service life cycle. Moreover, since interaction logs are often incomplete, uncertain and contain errors, the mining approaches proposed in this work need to be capable of handling these imperfections properly. We propose a set of mathematical models that encompass the different aspects of process and protocol mining. The extraction approaches that we present, based on linear algebra, allow us to extract the business protocol while merging the classic process mining stages. On the other hand, our protocol representation, based on time series of flow density variations, makes it possible to recover the temporal order of execution of events and messages in the process.
In addition, we propose the concept of proper timeouts to refer to timed transitions, and provide a method for extracting them despite their being invisible in logs. Finally, we present a multitask framework aimed at supporting all the steps of the process workflow and business protocol life cycle, from design to optimization. The approaches presented in this manuscript have been implemented in prototype tools and experimentally validated on scalable datasets and real-world process and Web service models. The discovered business protocols can thus be used to perform a multitude of tasks in an organization or enterprise.
Khalil, Jacob, and Gustaf Edlund. "Building backlinks with Web 2.0 : Designing, implementing and evaluating a costless off-site SEO strategy with backlinks originating from Web 2.0 blogs." Thesis, Jönköping University, Tekniska Högskolan, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50352.
Chen, Xueqi. "Query Rewriting for Extracting Data behind HTML Forms." Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd406.Chen.
De Wilde, Max. "From Information Extraction to Knowledge Discovery: Semantic Enrichment of Multilingual Content with Linked Open Data." Doctoral thesis, Université Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/218774.
Full textDécouvrir de nouveaux savoirs dans du texte non-structuré n'est pas une tâche aisée. Les moteurs de recherche basés sur l'indexation complète des contenus montrent leur limites quand ils se voient confrontés à des textes de mauvaise qualité, ambigus et/ou multilingues. L'extraction d'information et d'autres techniques issues du traitement automatique des langues permettent de répondre partiellement à cette problématique, mais sans pour autant atteindre l'idéal d'une représentation adéquate de la connaissance. Dans cette thèse, nous défendons une approche générique qui se veut la plus indépendante possible des langues, domaines et types de contenus traités. Pour ce faire, nous proposons de désambiguïser les termes à l'aide d'identifiants issus de bases de connaissances du Web des données, facilitant ainsi l'enrichissement sémantique des contenus. La valeur ajoutée de cette approche est illustrée par une étude de cas basée sur une archive historique trilingue, en mettant un accent particulier sur les contraintes de qualité, de multilinguisme et d'évolution dans le temps. Un prototype d'outil est également développé sous le nom de Multilingual Entity/Resource Combiner & Knowledge eXtractor (MERCKX), démontrant ainsi le caractère généralisable de notre approche, dans un certaine mesure, à n'importe quelle langue, domaine ou type de contenu.
Doctorate in Information and Communication
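The disambiguation step described above, which maps multilingual surface forms to shared knowledge-base identifiers, can be sketched minimally as follows. The lookup table and Wikidata-style identifiers are assumptions for illustration, not MERCKX's implementation.

```python
# Toy multilingual lookup table: different surface forms of the same entity
# resolve to one identifier, regardless of the source language.
KB = {
    "london": "wd:Q84",    # English form
    "londres": "wd:Q84",   # French form of the same entity
    "paris": "wd:Q90",
}

def link_entities(tokens):
    """Map each recognizable token to its knowledge-base identifier."""
    return {t: KB[t.lower()] for t in tokens if t.lower() in KB}
```

Because both language variants share one identifier, downstream processing becomes language-independent, which is the property the thesis aims for; a real system would of course query a knowledge base rather than a hand-built dictionary.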
Kilic, Sefa. "Clustering Frequent Navigation Patterns From Website Logs Using Ontology And Temporal Information." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12613979/index.pdf.
Porto, André Luiz Lopes. "Extração não supervisionada de dados da web utilizando abordagem independente de formato." Universidade Federal do Amazonas, 2015. http://tede.ufam.edu.br/handle/tede/5113.
Full textApproved for entry into archive by Divisão de Documentação/BC Biblioteca Central (ddbc@ufam.edu.br) on 2016-07-28T13:48:47Z (GMT) No. of bitstreams: 1 Dissertação - André Luiz Lopes Porto.pdf: 14791950 bytes, checksum: be2de076023a64a02a6a43c99e9977d8 (MD5)
Approved for entry into archive by Divisão de Documentação/BC Biblioteca Central (ddbc@ufam.edu.br) on 2016-07-28T13:50:19Z (GMT) No. of bitstreams: 1 Dissertação - André Luiz Lopes Porto.pdf: 14791950 bytes, checksum: be2de076023a64a02a6a43c99e9977d8 (MD5)
Made available in DSpace on 2016-07-28T13:50:19Z (GMT). No. of bitstreams: 1 Dissertação - André Luiz Lopes Porto.pdf: 14791950 bytes, checksum: be2de076023a64a02a6a43c99e9977d8 (MD5) Previous issue date: 2015-11-17
In this thesis we propose a new method for extracting data from data-rich Web pages that uses only the textual content of these pages. Our method, called FIEX (Format Independent Web Data Extraction), is based on information extraction techniques for text segmentation, and can extract data from Web pages where state-of-the-art methods based on data alignment techniques fail due to inconsistency between the logical structure of Web pages and the conceptual structure of the data represented in them. FIEX, unlike the methods previously proposed in the literature, is able to extract data using only the textual content of a Web page in challenging scenarios, such as severe cases of compound textual elements, in which several values of interest for extraction are represented by a single HTML element. To perform the extraction, FIEX relies on noise elimination techniques based on information redundancy and on an information extraction method for text segmentation known in the literature as ONDUX (On-Demand Unsupervised Learning for Information Extraction). In our experiments, we used several collections of Web pages from different product domains and e-commerce stores with the goal of extracting data from product descriptions. This type of Web page was chosen because a large amount of its data occurs in severe cases of compound textual elements. The results obtained in our experiments across different product domains and e-commerce stores validate the hypothesis that extraction based only on textual features is possible and effective.
Mikita, Tibor. "Portál pro agregaci dat z webových zdrojů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403123.
Toda, Guilherme Alves. "Um método probabilístico para o preenchimento automático de formulários Web a partir de textos ricos em dados." Universidade Federal do Amazonas, 2010. http://tede.ufam.edu.br/handle/tede/2892.
On the Web of today, the most prevalent solution for users to interact with data-intensive applications is the use of form-based interfaces composed of several data input fields, such as text boxes, radio buttons, pull-down lists, and check boxes. Although these interfaces are popular and effective, in many cases free-text interfaces are preferred over form-based ones. In this work we present the proposal, the implementation, and the evaluation of a novel method for automatically filling form-based input interfaces using data-rich text. Our solution takes a data-rich free text as input (e.g., an ad), extracts implicit data values from it, and fills the appropriate fields with them. For this task, we rely on knowledge obtained from the values of previous submissions to each field, which are freely obtained from the usage of the interfaces. Our approach, called iForm, exploits features related to the content and the style of these values, which are combined through a Bayesian framework. Through extensive experimentation, we show that our approach is feasible and effective, and that it works well even when only a few previous submissions to the input interface are available.
The most common solution today for users to interact with database-backed Web applications is the use of forms composed of several input fields, such as text boxes, selection lists, and check boxes. Although these forms are effective and popular, in many cases users prefer applications in which information is supplied as free text. In this work we present the proposal, the implementation, and the evaluation of a new method for automatically filling Web forms from a data-rich text. Our solution takes a data-rich free text as input (for example, an ad), extracts its implicit data values, and fills the appropriate form fields with them. For this task, we use knowledge obtained from the values previously entered by users when filling the forms. Our approach, called iForm, exploits features related to the content and the style of these values, which are combined through a Bayesian network. In our experiments, we show that our approach is feasible and effective, working well even when only a few submissions have been made to the form.
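The core idea described in this abstract — routing a free-text value to the form field whose previous submissions make it most likely — can be sketched with a naive Bayesian score. The field names and data below are hypothetical, not taken from iForm:

```python
# Word statistics from previous submissions per field give P(word | field);
# a new free-text segment is assigned to the highest-scoring field.
from collections import Counter
from math import log

past = {  # invented previous submissions for two form fields
    "city": ["manaus", "sao paulo", "manaus"],
    "job":  ["teacher", "engineer", "driver"],
}

counts = {f: Counter(" ".join(vals).split()) for f, vals in past.items()}

def best_field(text):
    """Pick the field whose previous values best explain `text`."""
    vocab = len({w for c in counts.values() for w in c})
    def score(field):
        total = sum(counts[field].values())
        # log-likelihood with add-one smoothing
        return sum(log((counts[field][w] + 1) / (total + vocab))
                   for w in text.lower().split())
    return max(counts, key=score)

print(best_field("engineer"))  # → 'job'
print(best_field("manaus"))    # → 'city'
```

iForm additionally combines such content features with style features (e.g. value length and formatting); this sketch keeps only the content side.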
Issa, Subhi. "Linked data quality : completeness and conciseness." Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1274.
The widespread use of Semantic Web technologies such as the Resource Description Framework (RDF) enables individuals to build their databases on the Web, to write vocabularies, and to define rules to arrange and explain the relationships between data according to the Linked Data principles. As a consequence, a large amount of structured and interlinked data is generated daily. A close examination of the quality of this data can be critical, especially if important research and professional decisions depend on it. The quality of Linked Data is an important indicator of its fitness for use in applications. Several dimensions for assessing the quality of Linked Data have been identified, such as accuracy, completeness, provenance, and conciseness. This thesis focuses on assessing the completeness and enhancing the conciseness of Linked Data. In particular, we first proposed a completeness calculation approach based on a generated schema. Indeed, since a reference schema is required to assess completeness, we proposed a mining-based approach to derive a suitable schema (i.e., a set of properties) from the data. This approach distinguishes essential properties from marginal ones in order to generate, for a given dataset, a conceptual schema that meets the user's expectations regarding data completeness constraints. We implemented a prototype called "LOD-CM" to illustrate the process of deriving a conceptual schema of a dataset based on the user's requirements. We further proposed an approach to discover equivalent predicates to improve the conciseness of Linked Data. This approach is based, in addition to a statistical analysis, on a deep semantic analysis of the data and on learning algorithms. We argue that studying the meaning of predicates can help to improve the accuracy of the results. Finally, a set of experiments was conducted on real-world datasets to evaluate our proposed approaches.
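The completeness measure this abstract builds on can be sketched in a few lines: given a reference schema (a set of properties, e.g. one mined from the data itself), an entity's completeness is the fraction of schema properties it fills, and the dataset score is the average. The schema and entities below are invented; this is not the LOD-CM implementation:

```python
# Schema-based completeness: per-entity ratio of filled schema properties,
# averaged over the dataset.
schema = {"name", "birthDate", "nationality"}

entities = [
    {"name": "Ada Lovelace", "birthDate": "1815-12-10",
     "nationality": "British"},
    {"name": "Alan Turing", "birthDate": "1912-06-23"},  # nationality missing
]

def completeness(entity, schema):
    """Fraction of the reference schema's properties present in the entity."""
    return len(schema & entity.keys()) / len(schema)

dataset_score = sum(completeness(e, schema) for e in entities) / len(entities)
print(round(dataset_score, 3))  # (1.0 + 2/3) / 2 ≈ 0.833
```

The interesting part of the thesis is how the reference `schema` is derived: a fixed ontology would penalize properties users do not care about, which is why a mined, user-constrained schema is used instead.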
Ågren, Ola. "Finding, extracting and exploiting structure in text and hypertext." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22352.
Data mining is a research area in constant development. It is about using computers to find patterns in large amounts of data, or to predict future data from data already available. Since more and more data is produced every year, this places ever higher demands on the efficiency of the algorithms used to find or use the information within a reasonable time. This thesis is about extracting information from semi-structured data, finding structures in large discrete data sets, and efficiently ranking web pages from a topic-based perspective. The information extraction described concerns support for keeping both the documentation and the source code up to date at the same time. Our solution to this problem is to keep parts of the documentation (mainly the algorithm descriptions) as block comments in the source code and to extract them automatically with a tool. The structures found by our structure-extraction algorithms take the form of subsumptions, for example that one keyword is more general than another. Since subsumption is transitive, these relationships can be used to build larger structures in the form of hierarchies or directed graphs. The tool we have developed has mainly been used to create input for a data mining system and to visualize that input. The main part of the research described in this thesis, however, has concerned ranking web pages based on both their content and the links between them. We have created a number of algorithms and shown how they behave in comparison with other algorithms in use today. These comparisons have mainly concerned convergence speed, the stability of the algorithms given uncertain data, and finally how relevant the algorithms' answer sets have been judged to be from the users' perspective.
The research has focused on efficient algorithms for acquiring and managing large data sets of discrete or text-based data. In the thesis we also present a proposal for a system of tools that work together on a database consisting of "fingerprints" and other metadata about the items indexed in the database. This data can then be used by various algorithms to increase the value of what is in the database or to find the right information efficiently.
AlgExt, CHiC, ProT
Pires, Julio Cesar Batista. "Extração e mineração de informação independente de domínios da web na língua portuguesa." Universidade Federal de Goiás, 2015. http://repositorio.bc.ufg.br/tede/handle/tede/4723.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Many people are constantly connected to the Web, looking for all kinds of things. The Web is a huge source of information, so they can find almost everything they want. However, Web information is disorganized and has no formal structure. This hampers machine processing and consequently makes information access more difficult. Bringing structure to the Web can be one of the key points for facilitating user searching and navigation. A recent technique, Open Information Extraction, has been successfully applied to extract structured information from the Web, mostly from pages written in English. This work focuses specifically on information extraction for Portuguese. The techniques used here can also be applied to other languages.
Many people are constantly connected to the Web, searching for all kinds of things. The Web is a huge source of information, so people can find practically everything they need. However, Web information is disorganized and lacks a formal structure. This hampers machine processing and consequently makes access to information more difficult. Bringing structure to the Web may be one of the key points for easing users' search and navigation. A recent technique, Open Information Extraction, has been successfully applied to extract information from the Web, mainly from pages in English. This work focuses specifically on information extraction in Portuguese. The techniques used here can also be applied to other languages.
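The Open Information Extraction idea referred to above — pulling (argument, relation, argument) triples out of unrestricted sentences with shallow verb-centered patterns — can be shown with a toy example. This is not the method evaluated in the dissertation; the pattern and the sample Portuguese sentence are invented:

```python
# Toy Open IE: a capitalized noun phrase, one of a few Portuguese verbs,
# and the remainder of the clause form a (arg1, relation, arg2) triple.
import re

PATTERN = re.compile(r"([A-ZÁÉÍÓÚÂÊÔÃÕ][\w ]*?) (é|foi|fundou|criou) ([\w ]+)")

def extract_triples(sentence):
    """Return all (arg1, relation, arg2) matches in the sentence."""
    return [(m.group(1).strip(), m.group(2), m.group(3).strip())
            for m in PATTERN.finditer(sentence)]

print(extract_triples("Machado de Assis fundou a Academia Brasileira de Letras"))
# → [('Machado de Assis', 'fundou', 'a Academia Brasileira de Letras')]
```

Real Open IE systems replace the hard-coded verb list with part-of-speech patterns and confidence scoring, which is exactly what makes porting them from English to Portuguese non-trivial.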
Tang, Wei. "Internet-Scale Information Monitoring: A Continual Query Approach." Diss., Available online, Georgia Institute of Technology, 2003:, 2003. http://etd.gatech.edu/theses/available/etd-12042003-173321/unrestricted/tangwei200312.pdf.
Thomas E. Potok, Committee Member; Calton Pu, Committee Member; Edward Omiecinski, Committee Member; Leo Mark, Committee Member; Constantinos Dovrolis, Committee Member; Ling Liu, Committee Chair. Includes bibliography.
Oucif, Kadday. "Evaluation of web scraping methods : Different automation approaches regarding web scraping using desktop tools." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188418.
A great deal of information can be found and extracted in various formats from the semantic Web with the help of web scraping, and many techniques have emerged over time. This report is written with the goal of evaluating different web scraping methods in order to develop an automated, performance-reliable, easily implemented, and solid extraction process. A number of parameters are defined to evaluate and compare existing web scraping techniques. A range of desktop tools is explored and two are selected for evaluation. The evaluation also covers the process of learning to set up different web scraping workflows with so-called agents. A number of links are scraped for data, both with and without execution of JavaScript on the web pages. Prototypes using the selected techniques are tested and presented, with the web scraping tool Content Grabber as the final solution. The overall result is a better understanding of the subject as well as a cost-effective extraction process consisting of mixed techniques and methods, where good knowledge of how the web pages are built facilitates data collection. Finally, the result is presented and discussed with respect to the chosen parameters.
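The with/without-JavaScript distinction evaluated in this thesis comes down to what is present in the raw page source. A minimal sketch using only the Python standard library (the sample page and CSS class are invented): content in the static HTML can be scraped with a plain parser, while content injected client-side never appears in the source and requires a browser engine, such as the agent-based desktop tools compared in the report.

```python
# Static scraping: collect text inside elements whose class is 'price'.
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []
    def handle_starttag(self, tag, attrs):
        self.in_price = ("class", "price") in attrs
    def handle_endtag(self, tag):
        self.in_price = False
    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

html = '<div class="price">199 kr</div><div id="js-slot"></div>'
scraper = PriceScraper()
scraper.feed(html)
print(scraper.prices)  # → ['199 kr']; #js-slot would only be filled client-side
```

In practice the fetched document replaces the inline `html` string; whether the empty `#js-slot` ever needs to be read is precisely what decides between a lightweight parser and a full browser-driven agent.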
Ammari, Ahmad N. "Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns : the development and evaluation of new Web mining methods that enhance information retrieval and improve the understanding of users' Web behavior in websites and social blogs." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5269.