Theses on the topic "Linked Data Quality"
Consult the top 20 theses for your research on the topic "Linked Data Quality".
Spahiu, Blerina. "Profiling Linked Data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/151645.
Recently, the increasing diffusion of Linked Data (LD) as a standard way to publish and structure data on the Web has received growing attention from researchers and data publishers. LD adoption spans domains such as government, media, and the life sciences, building a powerful Web available to anyone. Despite the high number of datasets published as LD, their usage remains limited because they lack comprehensive metadata. Data consumers need information about dataset content in a fast, summarized form to decide whether a dataset is useful for the use case at hand. Data profiling techniques offer an efficient solution to this problem, as they generate metadata and statistics that describe the content of a dataset. Existing profiling techniques do not cover a wide range of use cases, and many challenges due to the heterogeneous nature of Linked Data remain to be overcome. This thesis presents doctoral research that tackles the problems related to profiling Linked Data. Even though data profiling is an umbrella term for diverse descriptive information about a dataset, this thesis covers three aspects of profiling: topic-based, schema-based, and linkage-based. The profiles provided are fundamental for the decision-making process and a basic requirement for dataset understanding. The thesis presents an approach to automatically classify datasets into one of the topical categories used in the LD cloud, and further investigates the problem of multi-topic profiling. For schema-based profiling, it proposes a summarization approach that provides an overview of the relations in the data; the summaries are concise yet informative enough to describe the whole dataset, reveal quality issues, and help users formulate queries. Finally, many datasets in the LD cloud contain similar information about the same entities, and to fully exploit its potential LD should make this information explicit: linkage profiling provides information about the number of equivalent entities across datasets and reveals possible errors. The profiling techniques developed in this work are automatic and can be applied to different datasets independently of the domain.
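As an illustration of the schema-based side of such profiling, the sketch below counts class and property usage in an RDF dataset using rdflib; the input file name and the choice of statistics are assumptions made for the example, not details taken from the thesis.

```python
# Minimal schema-level profiling sketch: which classes and properties occur,
# and how often. "dataset.ttl" is a hypothetical input file.
from collections import Counter

from rdflib import Graph, RDF

g = Graph()
g.parse("dataset.ttl", format="turtle")

# Class usage: count objects of rdf:type statements.
class_counts = Counter(o for _, _, o in g.triples((None, RDF.type, None)))
# Property usage: count predicates across all triples.
property_counts = Counter(p for _, p, _ in g)

print("Most frequent classes:", class_counts.most_common(5))
print("Most frequent properties:", property_counts.most_common(5))
```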
Issa, Subhi. "Linked Data Quality: Completeness and Conciseness." Electronic thesis or dissertation, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1274.
The widespread adoption of Semantic Web technologies such as the Resource Description Framework (RDF) enables individuals to build their databases on the Web, to write vocabularies, and to define rules that arrange and explain the relationships between data according to the Linked Data principles. As a consequence, a large amount of structured, interlinked data is generated daily. A close examination of the quality of this data can be critical, especially when important research and professional decisions depend on it. The quality of Linked Data is an important indicator of its fitness for use in applications. Several dimensions for assessing the quality of Linked Data have been identified, such as accuracy, completeness, provenance, and conciseness. This thesis focuses on assessing completeness and enhancing conciseness of Linked Data. In particular, we first propose a completeness calculation approach based on a generated schema: since a reference schema is required to assess completeness, we propose a mining-based approach that derives a suitable schema (i.e., a set of properties) from the data. This approach distinguishes essential properties from marginal ones to generate, for a given dataset, a conceptual schema that meets the user's expectations regarding data completeness constraints. We implemented a prototype called "LOD-CM" to illustrate the process of deriving a conceptual schema of a dataset based on the user's requirements. We further propose an approach for discovering equivalent predicates to improve the conciseness of Linked Data, based on a statistical analysis, a deep semantic analysis of the data, and learning algorithms; we argue that studying the meaning of predicates can improve the accuracy of the results. Finally, a set of experiments was conducted on real-world datasets to evaluate the proposed approaches.
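A minimal sketch of a schema-based completeness measure in the spirit of the approach above, assuming completeness is the average fraction of reference-schema properties each entity instantiates; the property sets and entity names are illustrative, not LOD-CM's actual code.

```python
# Completeness of a dataset relative to a mined reference schema:
# per entity, the fraction of schema properties it uses, averaged over entities.
from typing import Dict, Set


def completeness(entities: Dict[str, Set[str]], schema: Set[str]) -> float:
    """Average fraction of reference-schema properties present per entity."""
    if not entities or not schema:
        return 0.0
    per_entity = [len(props & schema) / len(schema) for props in entities.values()]
    return sum(per_entity) / len(per_entity)


# Hypothetical film entities and a three-property reference schema.
films = {
    "ex:film1": {"dbo:director", "dbo:starring", "rdfs:label"},
    "ex:film2": {"dbo:director", "rdfs:label"},
}
print(completeness(films, {"dbo:director", "dbo:starring", "rdfs:label"}))
# (1.0 + 2/3) / 2 ≈ 0.83
```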
Rula, Anisa. "Time-Related Quality Dimensions in Linked Data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/81717.
Debattista, Jeremy. "Scalable Quality Assessment of Linked Data." Bonn: Universitäts- und Landesbibliothek Bonn, 2017. http://d-nb.info/1135663440/34.
Baillie, Chris. "Reasoning about Quality in the Web of Linked Data." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227177.
Zaveri, Amrapali. "Linked Data Quality Assessment and its Application to Societal Progress Measurement." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-167021.
Yaman, Beyza. "Exploiting Context-Dependent Quality Metadata for Linked Data Source Selection." Doctoral thesis, Università degli Studi di Genova, 2018. http://hdl.handle.net/11567/930633.
Texte intégralSalibekyan, Zinaida. « Trends in job quality : evidence from French and British linked employer-employee data ». Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM2001.
Texte intégralThe contribution of this thesis is to examine the evolution of job quality from the perspective of the workplace using the British Workplace Employment Relations Surveys (WERS 2004 and 2011) and the French Enquête Relations Professionnelles et Négociations d’Entreprises (REPONSE 2005 and 2011). The thesis consists of three chapters and complements three main strands of the existing literature. The first chapter explores the impact of workplace adjustment practices on job quality in France during the recession. The second chapter examines the role of institutional regimes in Great Britain and France in explaining the cross-national variation in job quality. Finally, the third chapter investigates the strategies employees adopt in order to cope with their pay and working conditions
Melo, Jessica Oliveira de Souza Ferreira. "Metodologia de avaliação de qualidade de dados no contexto do linked data" [Data quality assessment methodology in the context of Linked Data]. Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/150870.
The Semantic Web suggests the use of standards and technologies that assign structure and semantics to data, so that computational agents can perform intelligent, automatic processing to accomplish specific tasks. In this context, the Linked Open Data (LOD) project was created, an initiative to promote the publication of Linked Data. With the evident growth of data published as Linked Data, quality has become essential for such datasets to meet the basic goals of the Semantic Web: quality problems in published datasets hinder not only their use but also the applications that rely on them. Considering that data made available as Linked Data enables a favorable environment for intelligent applications, quality problems can also hinder or prevent the integration of data coming from different datasets. The literature applies several quality dimensions in the context of Linked Data, yet the applicability of these dimensions to the quality assessment of linked data remains open to question. This research therefore proposes a methodology for quality assessment of Linked Data datasets, and establishes a model of what can be considered data quality in the Semantic Web and Linked Data context. An exploratory and descriptive approach was adopted to establish quality problems, dimensions, and requirements, and quantitative methods were used in the assessment methodology to assign quality indexes. The work resulted in the definition of seven quality dimensions applicable to the Linked Data domain and 14 different formulas for quantifying the quality of datasets about scientific publications. Finally, a proof of concept was carried out in which the proposed quality assessment methodology was applied to a dataset included in the LOD cloud. The results indicate that the proposed methodology is a viable means of quantifying quality problems in Linked Data datasets, and that, despite the many requirements for publishing this type of data, there may be datasets that do not meet certain quality requirements and, in turn, should not be included in the LOD project diagram.
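To make the idea of quality indexes concrete, the sketch below shows two illustrative formulas of the general kind such a methodology quantifies; both formulas and the aggregation rule are assumptions for the example, not the thesis's actual 14 formulas.

```python
# Two illustrative per-dimension quality indexes and a naive overall index.
def conciseness(total_instances: int, duplicate_instances: int) -> float:
    """Share of instances that are not redundant duplicates."""
    if total_instances == 0:
        return 0.0
    return 1.0 - duplicate_instances / total_instances


def overall_index(dimension_scores: dict) -> float:
    """Unweighted mean of per-dimension scores in [0, 1]."""
    return sum(dimension_scores.values()) / len(dimension_scores)


scores = {"conciseness": conciseness(1000, 50), "completeness": 0.83}
print(overall_index(scores))  # (0.95 + 0.83) / 2 = 0.89
```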
Zaveri, Amrapali. "Linked Data Quality Assessment and its Application to Societal Progress Measurement." Reviewer: Felix Naumann. Leipzig: Universitätsbibliothek Leipzig, 2015. http://d-nb.info/1239565844/34.
Tomčová, Lucie. "Datová kvalita v prostředí otevřených a propojitelných dat" [Data quality in the environment of open and linked data]. Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-192414.
Texte intégralBeretta, Valentina. « évaluation de la véracité des données : améliorer la découverte de la vérité en utilisant des connaissances a priori ». Thesis, IMT Mines Alès, 2018. http://www.theses.fr/2018EMAL0002/document.
The notion of data veracity is receiving increasing attention due to the problem of misinformation and fake news. With more and more information published online, it is becoming essential to develop models that automatically evaluate information veracity. Indeed, evaluating data veracity is very difficult for humans: they are affected by confirmation bias, which prevents them from objectively evaluating the reliability of information, and the amount of information available nowadays makes the task time-consuming. The computational power of computers is required, and it is critical to develop methods able to automate this task. This thesis focuses on Truth Discovery models. These approaches address the data veracity problem when conflicting values about the same properties of real-world entities are provided by multiple sources, and aim to identify the true claims among the set of conflicting ones. More precisely, they are unsupervised models based on the rationale that true information is provided by reliable sources and reliable sources provide true information. The main contribution of this thesis is to improve Truth Discovery models by considering a priori knowledge expressed in ontologies, which may facilitate the identification of true claims. Two particular aspects of ontologies are considered. First, we explore the semantic dependencies that may exist among different values, i.e., the ordering of values through certain conceptual relationships: two different values are not necessarily conflicting, as they may represent the same concept at different levels of detail. To integrate this kind of knowledge into existing approaches, we use the mathematical model of partial orders. Second, we consider recurrent patterns that can be derived from ontologies; this additional information reinforces the confidence in certain values when those patterns are observed, and we model recurrent patterns using rules. Experiments conducted on both synthetic and real-world datasets show that a priori knowledge enhances existing models and paves the way towards a more reliable information world. The source code as well as the synthetic and real-world datasets are freely available.
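The mutual-reinforcement loop underlying such Truth Discovery models can be sketched as below. This is a generic unsupervised baseline built on the stated rationale (reliable sources provide true claims, true claims come from reliable sources), not Beretta's ontology-aware model; all data are illustrative.

```python
# Generic truth-discovery iteration: source trust and claim confidence
# reinforce each other until they stabilize.
claims = {  # (entity, value) -> sources asserting it (illustrative)
    ("obama", "born:1961"): {"s1", "s2"},
    ("obama", "born:1954"): {"s3"},
}
sources = {"s1", "s2", "s3"}
trust = {s: 0.5 for s in sources}  # uniform initial trust

for _ in range(20):  # iterate to (approximate) convergence
    # Claim confidence: normalized sum of the trust of supporting sources.
    conf = {c: sum(trust[s] for s in srcs) for c, srcs in claims.items()}
    top = max(conf.values()) or 1.0
    conf = {c: v / top for c, v in conf.items()}
    # Source trust: mean confidence of the claims each source supports.
    for s in sources:
        supported = [conf[c] for c, srcs in claims.items() if s in srcs]
        trust[s] = sum(supported) / len(supported)

best = max(conf, key=conf.get)
print(best, conf[best])  # the claim backed by the most reliable sources
```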
Arndt, Natanael, Kurt Junghanns, Roy Meissner, Philipp Frischmuth, Norman Radtke, Marvin Frommhold, and Michael Martin. "Structured Feedback: A Distributed Protocol for Feedback and Patches on the Web of Data." Universität Leipzig, 2016. https://ul.qucosa.de/id/qucosa%3A15779.
Maillot, Pierre. "Nouvelles méthodes pour l'évaluation, l'évolution et l'interrogation des bases du Web des données" [New methods for evaluating, evolving, and querying Web of Data bases]. Thesis, Angers, 2015. http://www.theses.fr/2015ANGE0007/document.
The Web of Data is a means to share and broadcast data that is both human-readable and machine-readable. This is possible thanks to RDF, which formats data as short statements (subject, relation, object) called triples. Bases from the Web of Data, called RDF bases, are sets of triples. In an RDF base, the ontology – the structural data – organizes the description of the factual data. Since the creation of the Web of Data in 2001, the number and size of RDF bases have been constantly rising. This increase has accelerated since the appearance of Linked Data, which promotes the sharing and interlinking of publicly available bases by user communities. These communities query and edit the bases without adequate solutions for evaluating the quality of new data, checking the current state of a base, or querying several bases together. This thesis proposes three methods to help the expansion, at the factual and ontological levels, and the querying of bases from the Web of Data. We propose a method designed to help an expert check factual data that conflicts with the ontology. Finally, we propose a method for distributed querying that limits the sending of queries to bases that may contain answers.
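A minimal rdflib sketch of the triple model described above: statements of the form (subject, relation, object) stored in an RDF base and then queried with SPARQL. The example resources and properties are assumptions made for illustration.

```python
# Build a tiny RDF base of triples and query it.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.mona_lisa, EX.creator, EX.da_vinci))        # (subject, relation, object)
g.add((EX.mona_lisa, EX.title, Literal("Mona Lisa")))

# Interrogate the base with SPARQL, the standard query language for RDF.
query = """
SELECT ?s WHERE { ?s <http://example.org/creator> <http://example.org/da_vinci> }
"""
for row in g.query(query):
    print(row.s)  # http://example.org/mona_lisa
```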
Gängler, Thomas. "Semantic Federation of Musical and Music-Related Information for Establishing a Personal Music Knowledge Base." Master's thesis, Sächsische Landesbibliothek – Staats- und Universitätsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-72434.
Texte intégralMichelfeit, Jan. « Integrade Linked Data ». Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-321378.
Texte intégralZaveri, Amrapali. « Linked Data Quality Assessment and its Application to Societal Progress Measurement ». Doctoral thesis, 2014. https://ul.qucosa.de/id/qucosa%3A13295.
Texte intégralKadleček, Rastislav. « Transformace HTML dat o produktech do Linked Data formátu ». Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-387272.
Texte intégral« MOOCLink : Linking and Maintaining Quality of Data Provided by Various MOOC Providers ». Master's thesis, 2016. http://hdl.handle.net/2286/R.I.39445.
Kontokostas, Dimitrios. "Large-Scale Multilingual Knowledge Extraction, Publishing and Quality Assessment: The Case of DBpedia." 2017. https://ul.qucosa.de/id/qucosa%3A31447.