To see other types of publications on this topic, follow this link: Linked Data Quality.

Journal articles on the topic "Linked Data Quality"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 journal articles for your research on the topic "Linked Data Quality".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online, whenever this information is included in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1. Zaveri, Amrapali, Anisa Rula, Andrea Maurino, Ricardo Pietrobon, Jens Lehmann, and Sören Auer. "Quality assessment for Linked Data: A Survey". Semantic Web 7, no. 1 (March 17, 2015): 63–93. http://dx.doi.org/10.3233/sw-150175.

2. Radulovic, Filip, Nandana Mihindukulasooriya, Raúl García-Castro, and Asunción Gómez-Pérez. "A comprehensive quality model for Linked Data". Semantic Web 9, no. 1 (November 30, 2017): 3–24. http://dx.doi.org/10.3233/sw-170267.

3. Batini, Carlo, Anisa Rula, Monica Scannapieco, and Gianluigi Viscusi. "From Data Quality to Big Data Quality". Journal of Database Management 26, no. 1 (January 2015): 60–82. http://dx.doi.org/10.4018/jdm.2015010103.

Abstract:
This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensor & sensor networks and official statistics. Consequently a set of structural characteristics is identified and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable to map the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. Thus, the framework allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon, through an integrative and theoretical literature review.
4. Hadhiatma, A. "Improving data quality in the linked open data: a survey". Journal of Physics: Conference Series 978 (March 2018): 012026. http://dx.doi.org/10.1088/1742-6596/978/1/012026.

5. Kovacs, Adam Tamas, and Andras Micsik. "BIM quality control based on requirement linked data". International Journal of Architectural Computing 19, no. 3 (May 13, 2021): 431–48. http://dx.doi.org/10.1177/14780771211012175.

Abstract:
This article discusses a BIM Quality Control Ecosystem that is based on Requirement Linked Data in order to create a framework where automated BIM compliance checking methods can be widely used. The meaning of requirements is analyzed in a building project context as a basis for data flow analysis: what are the main types of requirements, how they are handled, and what sources they originate from. A literature review has been conducted to find the present development directions in quality checking, besides a market research on present, already widely used solutions. With the conclusions of these research and modern data management theory, the principles of a holistic approach have been defined for quality checking in the Architecture, Engineering and Construction (AEC) industry. A comparative analysis has been made on current BIM compliance checking solutions according to our review principles. Based on current practice and ongoing research, a state-of-the-art BIM quality control ecosystem is proposed that is open, enables automation, promotes interoperability, and leaves the data governing responsibility at the sources of the requirements. In order to facilitate the flow of requirement and quality data, we propose a model for requirements as Linked Data and provide example for quality checking using Shapes Constraint Language (SHACL). As a result, an opportunity is given for better quality and cheaper BIM design methods to be implemented in the industry.
6. Zaveri, Amrapali, Andrea Maurino, and Laure Berti-Équille. "Web Data Quality". International Journal on Semantic Web and Information Systems 10, no. 2 (April 2014): 1–6. http://dx.doi.org/10.4018/ijswis.2014040101.

Abstract:
The standardization and adoption of Semantic Web technologies has resulted in an unprecedented volume of data being published as Linked Data (LD). However, the “publish first, refine later” philosophy leads to various quality problems arising in the underlying data such as incompleteness, inconsistency and semantic ambiguities. In this article, we describe the current state of Data Quality in the Web of Data along with details of the three papers accepted for the International Journal on Semantic Web and Information Systems' (IJSWIS) Special Issue on Web Data Quality. Additionally, we identify new challenges that are specific to the Web of Data and provide insights into the current progress and future directions for each of those challenges.
7. Baillie, Chris, Peter Edwards, and Edoardo Pignotti. "Assessing Quality in the Web of Linked Sensor Data". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1750–51. http://dx.doi.org/10.1609/aaai.v25i1.8044.

Abstract:
Assessing the quality of sensor data available on the Web is essential in order to identify reliable information for decision-making. This paper discusses how provenance of sensor observations and previous quality ratings can influence quality assessment decisions.
8. Paulheim, Heiko, and Christian Bizer. "Improving the Quality of Linked Data Using Statistical Distributions". International Journal on Semantic Web and Information Systems 10, no. 2 (April 2014): 63–86. http://dx.doi.org/10.4018/ijswis.2014040104.

Abstract:
Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate as well as scalable. Both algorithms have been used for building the DBpedia 3.9 release: With SDType, 3.4 million missing type statements have been added, while using SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
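The type-inference idea behind SDType can be illustrated with a toy sketch: for each predicate, record the distribution of types observed among its typed subjects, then infer a missing type by averaging the distributions of the predicates a resource uses. A drastically simplified illustration in Python; the triples, type labels, and function names are invented for the example, and the paper's implementation additionally weights predicates by how discriminative they are and applies confidence thresholds:

```python
from collections import Counter, defaultdict

# Toy RDF-like triples: (subject, predicate, object)
triples = [
    ("Berlin", "locatedIn", "Germany"),
    ("Paris", "locatedIn", "France"),
    ("Berlin", "population", "3600000"),
    ("Paris", "population", "2100000"),
    ("Germany", "capital", "Berlin"),
]
# Known rdf:type statements (the statistical evidence)
known_types = {"Berlin": "City", "Paris": "City", "Germany": "Country"}

def type_distribution(triples, known_types):
    """For each predicate, the distribution of types among its typed subjects."""
    counts = defaultdict(Counter)
    for s, p, _ in triples:
        if s in known_types:
            counts[p][known_types[s]] += 1
    return {p: {t: n / sum(c.values()) for t, n in c.items()}
            for p, c in counts.items()}

def infer_type(subject, triples, dist):
    """Average the per-predicate type distributions over the subject's predicates."""
    preds = [p for s, p, _ in triples if s == subject and p in dist]
    votes = Counter()
    for p in preds:
        for t, weight in dist[p].items():
            votes[t] += weight / len(preds)
    return max(votes, key=votes.get) if votes else None
```

In the toy data, a resource with locatedIn and population statements is voted a City; applied at DBpedia scale, this statistical voting is what allows missing type statements to be added without external knowledge.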
9. Assaf, Ahmad, Aline Senart, and Raphaël Troncy. "Towards An Objective Assessment Framework for Linked Data Quality". International Journal on Semantic Web and Information Systems 12, no. 3 (July 2016): 111–33. http://dx.doi.org/10.4018/ijswis.2016070104.

Abstract:
Ensuring data quality in Linked Open Data is a complex process as it consists of structured information supported by models, ontologies and vocabularies and contains queryable endpoints and links. In this paper, the authors first propose an objective assessment framework for Linked Data quality. The authors build upon previous efforts that have identified potential quality issues but focus only on objective quality indicators that can be measured regardless of the underlying use case. Secondly, the authors present an extensible quality measurement tool that helps on one hand data owners to rate the quality of their datasets, and on the other hand data consumers to choose their data sources from a ranked set. The authors evaluate this tool by measuring the quality of the LOD cloud. The results demonstrate that the general state of the datasets needs attention as they mostly have low completeness, provenance, licensing and comprehensibility quality scores.
10. Yang, Lu, Li Huang, and Zhenzhen Liu. "Linked Data Crowdsourcing Quality Assessment based on Domain Professionalism". Journal of Physics: Conference Series 1187, no. 5 (April 2019): 052085. http://dx.doi.org/10.1088/1742-6596/1187/5/052085.

11. Zahedi Nooghabi, Mahdi, and Akram Fathian Dastgerdi. "Proposed metrics for data accessibility in the context of linked open data". Program 50, no. 2 (April 4, 2016): 184–94. http://dx.doi.org/10.1108/prog-01-2015-0007.

Abstract:
Purpose – One of the most important categories in linked open data (LOD) quality models is “data accessibility.” The purpose of this paper is to propose some metrics and indicators for assessing data accessibility in LOD and the semantic web context. Design/methodology/approach – In this paper, at first the authors consider some data quality and LOD quality models to review proposed subcategories for data accessibility dimension in related texts. Then, based on goal question metric (GQM) approach, the authors specify the project goals, main issues and some questions. Finally, the authors propose some metrics for assessing the data accessibility in the context of the semantic web. Findings – Based on GQM approach, the authors determined three main issues for data accessibility, including data availability, data performance, and data security policy. Then the authors created four main questions related to these issues. As a conclusion, the authors proposed 27 metrics for measuring these questions. Originality/value – Nowadays, one of the main challenges regarding data quality is the lack of agreement on widespread quality metrics and practical instruments for evaluating quality. Accessibility is an important aspect of data quality. However, few researches have been done to provide metrics and indicators for assessing data accessibility in the context of the semantic web. So, in this research, the authors consider the data accessibility dimension and propose a comparatively comprehensive set of metrics.
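One of the simplest metrics in this family, dataset availability, can be sketched as the share of URI dereference attempts that succeed. A minimal sketch in Python; the probe log and function name are invented for the example, and the paper's 27 metrics cover availability, performance, and security policy in far more detail:

```python
def availability_score(statuses):
    """Availability metric: fraction of dereference attempts returning HTTP 200."""
    if not statuses:
        return 0.0
    return sum(s == 200 for s in statuses) / len(statuses)

# Hypothetical probe log: status codes from dereferencing six dataset URIs
probes = [200, 200, 404, 200, 503, 200]
score = availability_score(probes)  # 4 of 6 probes succeeded
```

In a goal-question-metric setup, a score like this answers a question ("Are the dataset's URIs dereferenceable?") that in turn serves the broader accessibility goal.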
12. De Souza, Jessica Oliveira, and Jose Eduardo Santarem Segundo. "Mapeamento de Problemas de Qualidade no Linked Data". Journal on Advances in Theoretical and Applied Informatics 1, no. 1 (October 6, 2015): 38. http://dx.doi.org/10.26729/jadi.v1i1.1043.

Abstract:
Since the Semantic Web was created in order to improve the current web user experience, the Linked Data is the primary means in which semantic web application is theoretically full, respecting appropriate criteria and requirements. Therefore, the quality of data and information stored on the linked data sets is essential to meet the basic semantic web objectives. Hence, this article aims to describe and present specific dimensions and their related quality issues.
13. Acosta, Maribel, Amrapali Zaveri, Elena Simperl, Dimitris Kontokostas, Fabian Flöck, and Jens Lehmann. "Detecting Linked Data quality issues via crowdsourcing: A DBpedia study". Semantic Web 9, no. 3 (April 12, 2018): 303–35. http://dx.doi.org/10.3233/sw-160239.

14. Färber, Michael, Frederic Bartscherer, Carsten Menne, and Achim Rettinger. "Linked data quality of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO". Semantic Web 9, no. 1 (November 30, 2017): 77–129. http://dx.doi.org/10.3233/sw-170275.

15. Debattista, Jeremy, Sören Auer, and Christoph Lange. "Luzzu—A Methodology and Framework for Linked Data Quality Assessment". Journal of Data and Information Quality 8, no. 1 (November 29, 2016): 1–32. http://dx.doi.org/10.1145/2992786.

16. Prins, H., H. Buller, and J. Zwetsloot-Schonk. "Effect of discharge letter-linked diagnosis registration on data quality". International Journal for Quality in Health Care 12, no. 1 (February 1, 2000): 47–57. http://dx.doi.org/10.1093/intqhc/12.1.47.

17. Gürdür, Didem, Jad El-khoury, and Mattias Nyberg. "Methodology for linked enterprise data quality assessment through information visualizations". Journal of Industrial Information Integration 15 (September 2019): 191–200. http://dx.doi.org/10.1016/j.jii.2018.11.002.

18. Behkamal, Behshid, Mohsen Kahani, Ebrahim Bagheri, and Zoran Jeremic. "A Metrics-Driven Approach for Quality Assessment of Linked Open Data". Journal of Theoretical and Applied Electronic Commerce Research 9, no. 2 (August 2014): 11–12. http://dx.doi.org/10.4067/s0718-18762014000200006.

19. Kricker, Anne. "Using linked data to explore quality of care for breast cancer". New South Wales Public Health Bulletin 12, no. 4 (2001): 110. http://dx.doi.org/10.1071/nb01033.

20. Al-khatib, Bassel, and Ali Ahmad Ali. "Linked Data: A Framework for Publishing FiveStar Open Government Data". International Journal of Information Technology and Computer Science 13, no. 6 (December 8, 2021): 1–15. http://dx.doi.org/10.5815/ijitcs.2021.06.01.

Abstract:
With the increased adoption of open government initiatives around the world, a huge amount of governmental raw datasets was released. However, the data was published in heterogeneous formats and vocabularies and in many cases in bad quality due to inconsistency, messy, and maybe incorrectness as it has been collected by practicalities within the source organization, which makes it inefficient for reusing and integrating it for serving citizens and third-party apps. This research introduces the LDOG (Linked Data for Open Government) experimental framework, which aims to provide a modular architecture that can be integrated into the open government hierarchy, allowing huge amounts of data to be gathered in a fine-grained manner from source and directly publishing them as linked data based on Tim Berners-Lee's five-star deployment scheme with a validation layer using SHACL, which results in high quality data. The general idea is to model the hierarchy of government and classify government organizations into two types, the modeling organizations at higher levels and data source organizations at lower levels. Modeling organizations' experts in linked data have the responsibility to design data templates, ontologies, SHACL shapes, and linkage specifications, whereas non-experts can be incorporated in data source organizations to utilize their knowledge in data to do mapping, reconciliation, and correcting data. This approach lowers the needed experts that represent a problem of linked data adoption. To test the functionality of our framework in action, we developed the LDOG platform which utilizes the different modules of the framework to power a set of user interfaces that can be used to publish government datasets. We used this platform to convert some of UAE's government datasets into linked data. Finally, on top of the converted data, we built a proof-of-concept app to show the power of five-star linked data for integrating datasets from disparate organizations and to promote the governments' adoption. Our work has defined a clear path to integrate the linked data into open governments and solid steps to publishing and enhancing it in a fine-grained and practical manner with a lower number of experts in linked data. It extends SHACL to define data shapes and convert CSV to RDF.
21. Penteado, Bruno, Juan Carlos Maldonado, and Seiji Isotani. "Process Model with Quality Control for the Production of High Quality Linked Open Government Data". IEEE Latin America Transactions 19, no. 3 (March 2021): 421–29. http://dx.doi.org/10.1109/tla.2021.9447691.

22. Yanagihara, Dolores, Mahil Senathirajah, and Marilyn Novich. "Cancer claims and registry data: A California linkage for quality measurement". Journal of Clinical Oncology 34, no. 7_suppl (March 1, 2016): 296. http://dx.doi.org/10.1200/jco.2016.34.7_suppl.296.

Abstract:
296 Background: Breast and colon are 2 of the most prevalent cancers in California according to the California Cancer Registry (CCR). Health claims data contains a wealth of information on diagnoses and treatment but lacks clinical information critical for evaluating care. Registry data contains key clinical information, but often does not contain the complete treatment regimen. This project aims to assess the feasibility of linking commercial claims data to population-based cancer registry data, and to use the linked data to examine variation in quality measures at the regional and physician organization (PO) levels. Methods: Nine NQF endorsed breast and colon cancer measures were selected and measure specifications with code sets were developed. Data on members identified with a diagnosis of breast and/or colon cancer in claims (2009-2012) from 4 commercial HMO health plans was linked with CCR data. Results were generated at the regional and PO levels. Results: The feasibility test was a success; CCR and claims data was linked for 8,757 individuals, and the data sets proved complementary. Rates relying on the linked dataset were typically higher than when either data source was used alone. For example, see the Table. Performance was strong across 8 of 9 measures with scores ranging from 80-98%. The 9th measure, which relied only on CCR data, was 51%. PO level measurement was limited by small sample sizes, but where sample size was adequate, a significant amount of variation in performance across POs was found. Conclusions: The project showed that linkage of claims and registry data is feasible and that the linked dataset supports more robust assessment of the quality of cancer care. Further, the data dictionary and programmable code sets that were developed are available to other entities interested in creating linkages. With larger sample size, robust benchmarks could be established. 
Linked registry and claims data could be used for additional research in areas such as compliance with clinical guidelines and examining patterns of treatment. [Table: see text]
23. Friedemann, Marko, Ken Wenzel, and Adrian Singer. "Linked Data Architecture for Assistance and Traceability in Smart Manufacturing". MATEC Web of Conferences 304 (2019): 04006. http://dx.doi.org/10.1051/matecconf/201930404006.

Abstract:
Traceability systems and digital assistance solutions are becoming increasingly vital parts of modern manufacturing environments. They help tracking quality-related information throughout the production process and support workers and maintenance personnel to cope with the increasing complexity of manufacturing technologies. In order to support these use cases, the integration of information from different data sources is required to create the necessary insights into processes, equipment and production quality. Common challenges for such integration scenarios are the various data formats, encodings and software interfaces that are involved in the acquisition, transmission, management and retrieval of relevant product and process data. This paper proposes a Linked Data based system architecture for modular and decoupled assistance software. Its web-oriented approach allows to connect two usually disparate data sets: semantic descriptions of complex production systems on the one hand and high-volume and high-velocity production data on the other hand. The proposed concept is illustrated with a typical example from the manufacturing domain. The described End-of-Line quality assessment on forming machines is used for traceability and product monitoring.
24. H. S., Shrisha, and Uma Boregowda. "Quality-of-Service-Linked Privileged Content-Caching Mechanism for Named Data Networks". Future Internet 14, no. 5 (May 20, 2022): 157. http://dx.doi.org/10.3390/fi14050157.

Abstract:
The domain of information-centric networking (ICN) is expanding as more devices are becoming a part of connected technologies. New methods for serving content from a producer to a consumer are being explored, and Named Data Networking (NDN) is one of them. The NDN protocol routes the content from a producer to a consumer in a network using content names, instead of IP addresses. This facility, combined with content caching, efficiently serves content for very large networks consisting of a hybrid and ad hoc topology with both wired and wireless media. This paper addresses the issue of the quality-of-service (QoS) dimension for content delivery in NDN-based networks. The Internet Engineering Task Force (IETF) classifies QoS traffic as (prompt, reliable), prompt, reliable, and regular, and assigns corresponding priorities for managing the content. QoS-linked privileged content caching (QLPCC) proposes strategies for Pending Interest Table (PIT) and content store (CS) management in dedicated QoS nodes for handling priority content. QoS nodes are intermediately resourceful NDN nodes between content producers and consumers which specifically manage QoS traffic. The results of this study are compared with EQPR, PRR probability cache, and Least Frequently Used (LFU) and Least Fresh First (LFF) schemes, and QLPCC outperformed the latter-mentioned schemes in terms of QoS-node CS size vs. hit rate (6% to 47%), response time vs. QoS-node CS size (65% to 90%), and hop count vs. QoS-node CS size (60% to 84%) from the perspectives of priority traffic and overall traffic. QLPCC performed predictably when the NDN node count was increased from 500 to 1000, showing that the strategy is scalable.
25. Harron, Katie L., James C. Doidge, Hannah E. Knight, Ruth E. Gilbert, Harvey Goldstein, David A. Cromwell, and Jan H. van der Meulen. "A guide to evaluating linkage quality for the analysis of linked data". International Journal of Epidemiology 46, no. 5 (September 7, 2017): 1699–710. http://dx.doi.org/10.1093/ije/dyx177.

26. Albertoni, Riccardo, Monica De Martino, and Paola Podestà. "Quality measures for skos". Data Technologies and Applications 52, no. 3 (July 2, 2018): 405–23. http://dx.doi.org/10.1108/dta-05-2017-0037.

Abstract:
Purpose The purpose of this paper is to focus on the quality of the connections (linkset) among thesauri published as Linked Data on the Web. It extends the cross-walking measures with two new measures able to evaluate the enrichment brought by the information reached through the linkset (lexical enrichment, browsing space enrichment). It fosters the adoption of cross-walking linkset quality measures besides the well-known and deployed cardinality-based measures (linkset cardinality and linkset coverage). Design/methodology/approach The paper applies the linkset measures to the Linked Thesaurus fRamework for Environment (LusTRE). LusTRE is selected as testbed as it is encoded using a Simple Knowledge Organisation System (SKOS) published as Linked Data, and it explicitly exploits the cross-walking measures on its validated linksets. Findings The application on LusTRE offers an insight of the complementarities among the considered linkset measures. In particular, it shows that the cross-walking measures deepen the cardinality-based measures analysing quality facets that were not previously considered. The actual value of LusTRE’s linksets regarding the improvement of multilingualism and concept spaces is assessed. Research limitations/implications The paper considers skos:exactMatch linksets, which belong to a rather specific but a quite common kind of linkset. The cross-walking measures explicitly assume correctness and completeness of linksets. Third party approaches and tools can help to meet the above assumptions. Originality/value This paper fulfils an identified need to study the quality of linksets. Several approaches formalise and evaluate Linked Data quality focusing on data set quality but disregarding the other essential component: the connection among data.
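The cardinality-based measures discussed in the abstract are straightforward to compute over a linkset of (source concept, target concept) pairs. A minimal sketch in Python with invented data and function names; the paper's cross-walking measures go further by inspecting what the links make reachable (extra labels, extra browsing space):

```python
from collections import Counter

# Hypothetical skos:exactMatch linkset between two thesauri
linkset = {("a", "x"), ("a", "y"), ("b", "z")}
source_concepts = {"a", "b", "c"}

def linkset_coverage(linkset, source_concepts):
    """Share of source concepts that take part in at least one link."""
    linked = {s for s, _ in linkset}
    return len(linked & source_concepts) / len(source_concepts)

def linkset_cardinality(linkset):
    """Average number of target matches per linked source concept."""
    per_source = Counter(s for s, _ in linkset)
    return sum(per_source.values()) / len(per_source)
```

Here coverage is 2/3 (concept "c" is unlinked) and cardinality is 1.5; the two numbers answer different quality questions, which is why the paper argues for using enrichment measures alongside them.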
27. Feeney, Kevin Chekov, Declan O'Sullivan, Wei Tai, and Rob Brennan. "Improving Curated Web-Data Quality with Structured Harvesting and Assessment". International Journal on Semantic Web and Information Systems 10, no. 2 (April 2014): 35–62. http://dx.doi.org/10.4018/ijswis.2014040103.

Abstract:
This paper describes a semi-automated process, framework and tools for harvesting, assessing, improving and maintaining high-quality linked-data. The framework, known as DaCura, provides dataset curators, who may not be knowledge engineers, with tools to collect and curate evolving linked data datasets that maintain quality over time. The framework encompasses a novel process, workflow and architecture. A working implementation has been produced and applied firstly to the publication of an existing social-sciences dataset, then to the harvesting and curation of a related dataset from an unstructured data-source. The framework's performance is evaluated using data quality measures that have been developed to measure existing published datasets. An analysis of the framework against these dimensions demonstrates that it addresses a broad range of real-world data quality concerns. Experimental results quantify the impact of the DaCura process and tools on data quality through an assessment framework and methodology which combines automated and human data quality controls.
28. Zemmouchi-Ghomari, Leila, Kaouther Mezaache, and Mounia Oumessad. "Ontology assessment based on linked data principles". International Journal of Web Information Systems 14, no. 4 (November 5, 2018): 453–79. http://dx.doi.org/10.1108/ijwis-01-2018-0003.

Abstract:
Purpose The purpose of this paper is to evaluate ontologies with respect to the linked data principles. This paper presents a concrete interpretation of the four linked data principles applied to ontologies, along with an implementation that automatically detects violations of these principles and fixes them (semi-automatically). The implementation is applied to a number of state-of-the-art ontologies. Design/methodology/approach Based on a precise and detailed interpretation of the linked data principles in the context of ontologies (to become as reusable as possible), the authors propose a set of algorithms to assess ontologies according to the four linked data principles along with means to implement them using a Java/Jena framework. All ontology elements are extracted and examined taking into account particular cases, such as blank nodes and literals. The authors also provide propositions to fix some of the detected anomalies. Findings The experimental results are consistent with the proven quality of popular ontologies of the linked data cloud because these ontologies obtained good scores from the linked data validator tool. Originality/value The proposed approach and its implementation takes into account the assessment of the four linked data principles and propose means to correct the detected anomalies in the assessed data sets, whereas most LD validator tools focus on the evaluation of principle 2 (URI dereferenceability) and principle 3 (RDF validation); additionally, they do not tackle the issue of fixing detected errors.
29. Huang, Li, Zhenzhen Liu, Fangfang Xu, and Jinguang Gu. "An RDF Data Set Quality Assessment Mechanism for Decentralized Systems". Data Intelligence 2, no. 4 (October 2020): 529–53. http://dx.doi.org/10.1162/dint_a_00059.

Abstract:
With the rapid growth of the linked data on the Web, the quality assessment of the RDF data set becomes particularly important, especially for the quality and accessibility of the linked data. In most cases, RDF data sets are shared online, leading to a high maintenance cost for the quality assessment. This also potentially pollutes Internet data. Recently blockchain technology has shown the potential in many applications. Using the blockchain storage quality assessment results can reduce the centralization of the authority, and the quality assessment results have characteristics such as non-tampering. To this end, we propose an RDF data quality assessment model in a decentralized environment, pointing out a new dimension of RDF data quality. We use the blockchain to record the data quality assessment results and design a detailed update strategy for the quality assessment results. We have implemented a system DCQA to test and verify the feasibility of the quality assessment model. The proposed method can provide users with better cost-effective results when knowledge is independently protected.
30. Nguyen, Khai, and Ryutaro Ichise. "Automatic Schema-Independent Linked Data Instance Matching System". International Journal on Semantic Web and Information Systems 13, no. 1 (January 2017): 82–103. http://dx.doi.org/10.4018/ijswis.2017010106.

Abstract:
The goal of linked data instance matching is to detect all instances that co-refer to the same objects in two linked data repositories, the source and the target. Since the amount of linked data is rapidly growing, it is important to automate this task. However, the difference between the schemata of source and target repositories remains a challenging barrier. This barrier reduces the portability, accuracy, and scalability of many proposed approaches. The authors present automatic schema-independent interlinking (ASL), which is a schema-independent system that performs instance matching on repositories with different schemata, without prior knowledge about the schemata. The key improvements of ASL compared to previous systems are the detection of useful attribute pairs for comparing instances, an attribute-driven token-based blocking scheme, and an effective modification of existing string similarities. To verify the performance of ASL, the authors conducted experiments on a large dataset containing 246 subsets with different schemata. The results show that ASL obtains high accuracy and significantly improves the quality of discovered coreferences against recently proposed complex systems.
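The token-based blocking step mentioned in the abstract can be sketched as: index instances by the tokens occurring in their attribute values, then only compare instances that share at least one token. A simplified illustration in Python with invented data and function names; ASL's actual scheme is attribute-driven and also learns which attribute pairs are worth comparing:

```python
from collections import defaultdict

def token_blocks(instances):
    """Map each token to the set of instance ids whose attribute values contain it."""
    blocks = defaultdict(set)
    for inst_id, values in instances.items():
        for value in values:
            for token in value.lower().split():
                blocks[token].add(inst_id)
    return blocks

def candidate_pairs(source, target):
    """Blocking step: only pair up instances that share at least one token."""
    src_blocks, tgt_blocks = token_blocks(source), token_blocks(target)
    pairs = set()
    for token, src_ids in src_blocks.items():
        for s in src_ids:
            for t in tgt_blocks.get(token, ()):
                pairs.add((s, t))
    return pairs

# Toy repositories with different schemata but overlapping value tokens
source = {"s1": ["Barack Obama"], "s2": ["Angela Merkel"]}
target = {"t1": ["Obama, Barack (politician)"], "t2": ["Elbe river"]}
```

Because blocking works on attribute values rather than attribute names, it needs no prior knowledge of either schema, which is the portability property the paper emphasizes; string similarity is then applied only within the surviving candidate pairs.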
31. Henson, Katherine E., Rachael Brock, Brian Shand, Victoria Coupland, Lucy Elliss-Brookes, Georgios Lyratzopoulos, Martin McCabe, Thomas Round, Kwok Wong, and Jem Rashbass. "The value of linked data: prescriptions dispensed in the community linked to the national cancer registration data in England". British Journal of General Practice 68, suppl 1 (June 2018): bjgp18X696761. http://dx.doi.org/10.3399/bjgp18x696761.

Texte intégral
Résumé :
Background: Improvements in cancer survival have resulted in an increasing population of cancer survivors who are managed in primary care. A partnership was established between the National Cancer Registration and Analysis Service and the NHS Business Services Authority to link national cancer registration data to community dispensed prescriptions data. Aim: We describe the linkage between these two datasets and the potential value of the resulting data resource. Method: Community prescriptions data was initially collected for April–July 2015. Pseudonymised prescriptions data was supplied to NCRAS for linkage at an individual patient level. Results: 1.68 million individuals with a history of cancer who received a prescription in April–July 2015 were identified in both datasets. This was 6% of all individuals prescribed medication in that time. 90,840 patients were newly diagnosed with cancer and had prescriptions in this time period: 90% of all patients diagnosed April–July 2015. Comparison of the two datasets identified data quality issues which must be considered, and these will also be presented. Conclusion: This linked resource has the potential to become the largest of its kind and is thus crucial to primary care research. Prescribed medication and its correlation with symptom profiles or co-morbidity at time points relative to diagnosis will offer unique insights into prescribing patterns and potential associations with earlier diagnosis. Full exploitation of this linked data offers the potential for an evidence base for empowering survivors, updating clinical follow-up guidelines and educating primary care physicians who manage the long-term care of cancer survivors.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Yang, F. P., Y. Z. Ou, C. W. Yu, J. Su, S. W. Bai, J. M. Ho et J. W. S. Liu. « A virtual repository for linked-data-based disaster management applications ». International Journal of Safety and Security Engineering 5, no 1 (31 mars 2015) : 1–12. http://dx.doi.org/10.2495/safe-v5-n1-1-12.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Espuny Pujol, Ferran, Christina Pagel, Katherine L. Brown, James C. Doidge, Richard G. Feltbower, Rodney C. Franklin, Arturo Gonzalez-Izquierdo et al. « Linkage of National Congenital Heart Disease Audit data to hospital, critical care and mortality national data sets to enable research focused on quality improvement ». BMJ Open 12, no 5 (mai 2022) : e057343. http://dx.doi.org/10.1136/bmjopen-2021-057343.

Texte intégral
Résumé :
Objectives: To link five national data sets (three registries, two administrative) and create longitudinal healthcare trajectories for patients with congenital heart disease (CHD), describing the quality and the summary statistics of the linked data set. Design: Bespoke linkage of record-level patient identifiers across five national data sets, and generation of spells of care defined as periods of time-overlapping events across the data sets. Setting: National Congenital Heart Disease Audit (NCHDA) procedures in public (National Health Service; NHS) hospitals in England and Wales, paediatric and adult intensive care data sets (Paediatric Intensive Care Audit Network; PICANet and the Case Mix Programme from the Intensive Care National Audit & Research Centre; ICNARC-CMP), administrative hospital episodes (hospital episode statistics; HES inpatient, outpatient, accident and emergency; A&E) and mortality registry data. Participants: Patients with any CHD procedure recorded in NCHDA between April 2000 and March 2017 from public hospitals. Primary and secondary outcome measures: Primary: number of linked records, number of unique patients and number of generated spells of care. Secondary: quality and completeness of linkage. Results: There were 143 862 records in NCHDA relating to 96 041 unique patients. We identified 65 797 linked PICANet patient admissions, 4664 linked ICNARC-CMP admissions and over 6 million linked HES episodes of care (1.1M inpatient, 4.7M outpatient). The linked data set had 4 908 153 spells of care after quality checks, with a median (IQR) of 3.4 (1.8–6.3) spells per patient-year. Where linkage was feasible (in terms of year and centre), 95.6% of surgical procedure records were linked to a corresponding HES record, 93.9% of paediatric (cardiac) surgery procedure records to a corresponding PICANet admission and 76.8% of adult surgery procedure records to a corresponding ICNARC-CMP record. Conclusions: We successfully linked four national data sets to the core data set of all CHD procedures performed between 2000 and 2017. This will enable a much richer analysis of longitudinal patient journeys and outcomes. We hope that our detailed description of the linkage process will be useful to others looking to link national data sets to address important research priorities.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Aljumaili, Mustafa, Karina Wandt, Ramin Karim et Phillip Tretten. « eMaintenance ontologies for data quality support ». Journal of Quality in Maintenance Engineering 21, no 3 (10 août 2015) : 358–74. http://dx.doi.org/10.1108/jqme-09-2014-0048.

Texte intégral
Résumé :
Purpose – The purpose of this paper is to explore the main ontologies related to eMaintenance solutions and to study their application areas. The advantages of using these ontologies to improve and control data quality are investigated. Design/methodology/approach – A literature study was conducted to explore eMaintenance ontologies in different areas. These ontologies mainly relate to content structure and communication interfaces. The ontologies are then linked to each step of the maintenance data production process. Findings – The findings suggest that eMaintenance ontologies can help to produce high-quality data in maintenance, and that the suggested maintenance data production process may help to control data quality. Applying these ontologies at every step of the process can provide management tools for producing high-quality data. Research limitations/implications – Based on this study, further research could broaden the investigation to identify more eMaintenance ontologies. Moreover, studying these ontologies in more technical detail may increase the understandability and use of these standards. Practical implications – Applying eMaintenance ontologies requires additional cost and time from companies; the lack or ineffective use of eMaintenance tools in many enterprises also limits the use of these ontologies. Originality/value – Investigating eMaintenance ontologies and connecting them to maintenance data production is important for controlling and managing data quality in maintenance.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Färber, Michael, et David Lamprecht. « The data set knowledge graph : Creating a linked open data source for data sets ». Quantitative Science Studies 2, no 4 (2021) : 1324–55. http://dx.doi.org/10.1162/qss_a_00161.

Texte intégral
Résumé :
Several scholarly knowledge graphs have been proposed to model and analyze the academic landscape. However, although the number of data sets has increased remarkably in recent years, these knowledge graphs do not primarily focus on data sets but rather on associated entities such as publications. Moreover, publicly available data set knowledge graphs do not systematically contain links to the publications in which the data sets are mentioned. In this paper, we present an approach for constructing an RDF knowledge graph that fulfills these mentioned criteria. Our data set knowledge graph, DSKG, is publicly available at http://dskg.org and contains metadata of data sets for all scientific disciplines. To ensure high data quality of the DSKG, we first identify suitable raw data set collections for creating the DSKG. We then establish links between the data sets and publications modeled in the Microsoft Academic Knowledge Graph that mention these data sets. As the author names of data sets can be ambiguous, we develop and evaluate a method for author name disambiguation and enrich the knowledge graph with links to ORCID. Overall, our knowledge graph contains more than 2,000 data sets with associated properties, as well as 814,000 links to 635,000 scientific publications. It can be used for a variety of scenarios, facilitating advanced data set search systems and new ways of measuring and awarding the provisioning of data sets.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Shaon, Arif, Sarah Callaghan, Bryan Lawrence, Brian Matthews, Timothy Osborn, Colin Harpham et Andrew Woolf. « Opening Up Climate Research : A Linked Data Approach to Publishing Data Provenance ». International Journal of Digital Curation 7, no 1 (12 mars 2012) : 163–73. http://dx.doi.org/10.2218/ijdc.v7i1.223.

Texte intégral
Résumé :
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of the data provenance to the confines of the related publications. Detailed knowledge of a dataset’s provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. It is increasingly important for open-access data to determine their authenticity and quality, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Perera, Roly, Minh Nguyen, Tomas Petricek et Meng Wang. « Linked visualisations via Galois dependencies ». Proceedings of the ACM on Programming Languages 6, POPL (16 janvier 2022) : 1–29. http://dx.doi.org/10.1145/3498668.

Texte intégral
Résumé :
We present new language-based dynamic analysis techniques for linking visualisations and other structured outputs to data in a fine-grained way, allowing users to explore how data attributes and visual or other output elements are related by selecting (focusing on) substructures of interest. Our approach builds on bidirectional program slicing techniques based on Galois connections, which provide desirable round-tripping properties. Unlike prior work, our approach allows selections to be negated, equipping the bidirectional analysis with a De Morgan dual that can be used to link different outputs generated from the same input. This offers a principled language-based foundation for a popular view coordination feature called brushing and linking, where selections in one chart automatically select corresponding elements in another related chart.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Meguerditchian, Ari-Nareg, Andrew Stewart, James Roistacher, Nancy Watroba, Michael Cropp et Stephen B. Edge. « Claims data linked to hospital registry data enhance evaluation of the quality of care of breast cancer ». Journal of Surgical Oncology 101, no 7 (6 mai 2010) : 593–99. http://dx.doi.org/10.1002/jso.21528.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
39

Dimou, Anastasia, Sahar Vahdati, Angelo Di Iorio, Christoph Lange, Ruben Verborgh et Erik Mannens. « Challenges as enablers for high quality Linked Data : insights from the Semantic Publishing Challenge ». PeerJ Computer Science 3 (30 janvier 2017) : e105. http://dx.doi.org/10.7717/peerj-cs.105.

Texte intégral
Résumé :
While most challenges organized so far in the Semantic Web domain are focused on comparing tools with respect to different criteria such as their features and competencies, or exploiting semantically enriched data, the Semantic Web Evaluation Challenges series, co-located with the ESWC Semantic Web Conference, aims to compare them based on their output, namely the produced dataset. The Semantic Publishing Challenge is one of these challenges. Its goal is to involve participants in extracting data from heterogeneous sources on scholarly publications, and producing Linked Data that can be exploited by the community itself. This paper reviews lessons learned from both (i) the overall organization of the Semantic Publishing Challenge, regarding the definition of the tasks, building the input dataset and forming the evaluation, and (ii) the results produced by the participants, regarding the proposed approaches, the used tools, the preferred vocabularies and the results produced in the three editions of 2014, 2015 and 2016. We compared these lessons to other Semantic Web Evaluation Challenges. In this paper, we (i) distill best practices for organizing such challenges that could be applied to similar events, and (ii) report observations on Linked Data publishing derived from the submitted solutions. We conclude that higher quality may be achieved when Linked Data is produced as a result of a challenge, because the competition becomes an incentive, while solutions become better with respect to Linked Data publishing best practices when they are evaluated against the rules of the challenge.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Nevzorova, Olga Avenirovna. « Methods and Algorithms for Increasing Linked Data Expressiveness (Overview) ». Russian Digital Libraries Journal 23, no 4 (28 mai 2020) : 808–34. http://dx.doi.org/10.26907/1562-5419-2020-23-4-808-834.

Texte intégral
Résumé :
This review discusses methods and algorithms for increasing the expressiveness of linked data prepared for publication on the Web. The main approaches to the enrichment of ontologies are considered, and the methods on which they are based and the tools for implementing them are described. The main stage in the general life cycle of linked data in the Linked Open Data cloud is the construction of a set of linked RDF triples. To improve the classification of data and the analysis of their quality, various methods are used to increase the expressiveness of linked data. The main ideas of these methods concern the enrichment of existing ontologies (an expansion of the basic knowledge schema) by adding or improving terminological axioms. Enrichment methods draw on techniques from various fields, such as knowledge representation, machine learning, statistics, natural language processing, formal concept analysis, and game theory.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Gao, Shan. « ODF : An Efficient OWL-Based Linked Course Data Generating Framework ». Advanced Materials Research 886 (janvier 2014) : 613–16. http://dx.doi.org/10.4028/www.scientific.net/amr.886.613.

Texte intégral
Résumé :
Although the intention of OWL is to provide an open, minimally constraining way to represent rich and complex knowledge about things, there is an increasing demand for efficient course data generation. Addressing this issue, we present ODF, a new OWL-based Linked Course Data generating framework that makes it possible to specify semantic data directly. Generating such data directly not only helps maintain course data quality, but also opens up new optimization opportunities for link sources and, most importantly, makes the generation process easier for users and system developers. We present the OWL-based Linked Course Data generating framework and discuss its impact on Linked Data.
Styles APA, Harvard, Vancouver, ISO, etc.
42

McKay, Douglas R., Paul Nguyen, Ami Wang et Timothy P. Hanna. « A population-based study of administrative data linkage to measure melanoma surgical and pathology quality ». PLOS ONE 17, no 2 (18 février 2022) : e0263713. http://dx.doi.org/10.1371/journal.pone.0263713.

Texte intégral
Résumé :
Background Continuous quality improvement is important for cancer systems. However, collecting and compiling quality indicator data can be time-consuming and resource-intensive. Here we explore the utility and feasibility of linked routinely collected health data to capture key elements of quality of care for melanoma in a single-payer, universal health care setting. Method This pilot study utilized a retrospective population-based cohort from a previously developed linked administrative data set, with a 65% random sample of all invasive cutaneous melanoma cases diagnosed 2007–2012 in the province of Ontario. Data from the Ontario Cancer Registry was utilized, supplemented with linked pathology report data from Cancer Care Ontario, and other linked administrative data describing health care utilization. Quality indicators identified through provincial guidelines and international consensus were evaluated for potential collection with administrative data and measured where possible. Results A total of 7,654 cases of melanoma were evaluated. Ten of 25 (40%) candidate quality indicators were feasible to be collected with the available administrative data. Many indicators (8/25) could not be measured due to unavailable clinical information (e.g. width of clinical margins). Insufficient pathology information (6/25) or health structure information (1/25) were less common reasons. Reporting of recommended variables in pathology reports varied from 65.2% (satellitosis) to 99.6% (body location). For stage IB-II or T1b-T4a melanoma patients where SLNB should be discussed, approximately two-thirds met with a surgeon experienced in SLNB. Of patients undergoing full lymph node dissection, 76.2% had adequate evaluation of the basin. Conclusions We found that use of linked administrative data sources is feasible for measurement of melanoma quality in some cases. In those cases, findings suggest opportunities for quality improvement. 
Consultation with surgeons offering SLNB was limited, and pathology report completeness was sub-optimal, but was prior to routine synoptic reporting. However, to measure more quality indicators, text-based data sources will require alternative approaches to manual collection such as natural language processing or standardized collection. We recommend development of robust data platforms to support continuous re-evaluation of melanoma quality indicators, with the goal of optimizing quality of care for melanoma patients on an ongoing basis.
Styles APA, Harvard, Vancouver, ISO, etc.
43

von Hoffen, Moritz, et Abdulbaki Uzun. « Linked Open Data for Context-aware Services : Analysis, Classification and Context Data Discovery ». International Journal of Semantic Computing 08, no 04 (décembre 2014) : 389–413. http://dx.doi.org/10.1142/s1793351x14400121.

Texte intégral
Résumé :
The amount of data within the Linking Open Data (LOD) Cloud is steadily increasing and represents a rich source of information. Since Context-aware Services (CAS) are based on the correlation of heterogeneous data sources for deriving the contextual situation of a target, it makes sense to leverage the enormous amount of data already present in the LOD Cloud to enhance the quality of these services. Within this work, the applicability of the LOD Cloud as a context provider for enriching CAS is investigated. For this purpose, a deep analysis of the discoverability and availability of datasets is performed. Furthermore, in order to ease the process of finding a dataset that matches the information needs of a CAS developer, techniques for retrieving the contents of LOD datasets are discussed and different approaches to condense a dataset to its most important concepts are shown. Finally, a Context Data Lookup Service is introduced that enables context data discovery within the LOD Cloud, and its applicability is highlighted with an example.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Bassier, M., J. Vermandere et H. De Winter. « LINKED BUILDING DATA FOR CONSTRUCTION SITE MONITORING : A TEST CASE ». ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2022 (17 mai 2022) : 159–65. http://dx.doi.org/10.5194/isprs-annals-v-2-2022-159-2022.

Texte intégral
Résumé :
The automation of construction site monitoring is long overdue. One of the key challenges in tracking progress, quality and quantities is integrating the observations that make these analyses possible. Research has shown that semantic web technologies can overcome the data heterogeneity issues that currently hold back automated monitoring, but this technology is largely unexplored in the construction industry. In this paper, we therefore present a tentative framework for Linked Data usage on construction sites. Concretely, we combine observations from lidar scans, UAV and hand-held cameras with the as-designed BIM through RDF graphs to establish a holistic analysis of the site. In the experiments, a proof of concept is presented for the structural building phase of a residential project, showing how remote sensing data can be managed during project execution.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Tallerås, Kim. « Quality of Linked Bibliographic Data : The Models, Vocabularies, and Links of Data Sets Published by Four National Libraries ». Journal of Library Metadata 17, no 2 (3 avril 2017) : 126–55. http://dx.doi.org/10.1080/19386389.2017.1355166.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Buyle, Raf, Brecht Van de Vyvere, Julián Rojas Meléndez, Dwight Van Lancker, Eveline Vlassenroot, Mathias Van Compernolle, Stefan Lefever, Pieter Colpaert, Peter Mechant et Erik Mannens. « A Sustainable Method for Publishing Interoperable Open Data on the Web ». Data 6, no 8 (19 août 2021) : 93. http://dx.doi.org/10.3390/data6080093.

Texte intégral
Résumé :
Smart cities need (sensor) data for better decision-making. However, while there are vast amounts of data available about and from cities, an intermediary is needed that connects and interprets (sensor) data on a Web-scale. Today, governments in Europe are struggling to publish open data in a sustainable, predictable and cost-effective way. Our research question considers what methods for publishing Linked Open Data time series, in particular air quality data, are suitable in a sustainable and cost-effective way. Furthermore, we demonstrate the cross-domain applicability of our data publishing approach through a different use case on railway infrastructure—Linked Open Data. Based on scenarios co-created with various governmental stakeholders, we researched methods to promote data interoperability, scalability and flexibility. The results show that applying a Linked Data Fragments-based approach on public endpoints for air quality and railway infrastructure data, lowers the cost of publishing and increases availability due to better Web caching strategies.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Raza, Zahid, Khalid Mahmood et Nosheen Fatima Warraich. « Application of linked data technologies in digital libraries : a review of literature ». Library Hi Tech News 36, no 3 (7 mai 2019) : 9–12. http://dx.doi.org/10.1108/lhtn-10-2018-0067.

Texte intégral
Résumé :
Purpose: This paper describes how linked data technologies can change digital library collections, the benefits of linked data applications in digital libraries, and the challenges digital libraries face in a linked data environment. Design/methodology/approach: The study is based on a substantial literature review on the applications of linked data technologies in digital libraries. Search engines such as Google, Yahoo and Google Scholar were used to find relevant literature, as were online databases such as ProQuest, Science Direct, Emerald and JSTOR, and the Library and Information Science and Technology Abstracts and Library and Information Science Abstracts databases. Library, linked data technologies, Semantic Web, digital library and digital collections were the main keywords used to find relevant literature. Findings: The evolution of linked data technologies and the Semantic Web has changed the traditional role of libraries. Traditional libraries are converting into digital libraries, and digital libraries are striving to publish their resources on the Web using XML-based metadata standards. This enables digital collections to be read by machines on the Web just as they are by humans. With the emergence of linked data applications in digital libraries, the Web visibility of libraries has been enhanced, giving users the opportunity to find the quality library information they require on the Web round the clock. The National Library of France, the National Library of Spain, Europeana, the Digital Public Library of America, the Library of Congress and The British Library have taken initiatives to publish their resources on the Web using linked data technologies. Originality/value: This study presents several key issues for policy makers, software developers, decision makers and library administrators concerning linked data technologies and their implementation in digital libraries. It may help facilitate Web users who are eager to exploit quality, authentic library resources on the Web round the clock. Search engines will also advance their longstanding goal of exploiting quality library resources for their users, making their Web results more credible and trustworthy.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Harper, Gillian. « Linkage of Maternity Hospital Episode Statistics data to birth registration and notification records for births in England 2005–2014 : Quality assurance of linkage of routine data for singleton and multiple births ». BMJ Open 8, no 3 (mars 2018) : e017898. http://dx.doi.org/10.1136/bmjopen-2017-017898.

Texte intégral
Résumé :
Objectives: To quality assure a Trusted Third Party linked data set to prepare it for analysis. Setting: Birth registration and notification records from the Office for National Statistics for all births in England 2005–2014, linked to Maternity Hospital Episode Statistics (HES) delivery records by NHS Digital using mothers' identifiers. Participants: All 6 676 912 births that occurred in England from 1 January 2005 to 31 December 2014. Primary and secondary outcome measures: Every link between a registered birth and an HES delivery record for the study period was categorised as either the same baby or a different baby to the same mother, or as a wrong link, by comparing common baby data items and valid values in key fields with stepwise deterministic rules. Rates of preserved and discarded links were calculated, and the features more common in each group were assessed. Results: Ninety-eight per cent of births originally linked to HES were left with one preserved link. The majority of discarded links were due to duplicate HES delivery records. Of the 4854 discarded links categorised as wrong links, clerical checks found 85% were false-positive links, 13% were quality assurance false negatives and 2% were undeterminable. Births linked using a less reliable stage of the linkage algorithm, births at home and in the London region, and births with birth weight or gestational age values missing in HES were more likely to have all links discarded. Conclusions: Linkage error, data quality issues and false negatives in the quality assurance procedure were uncovered. The procedure could be improved by allowing for transposition in date fields and by discriminating more clearly between missing and differing values. The availability of identifiers in the datasets supported clerical checking. Other research using Trusted Third Party linkage should not assume the linked dataset is error-free or optimised for a given analysis, and should allow sufficient resources for quality assurance.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Conrad, Zach, Nicole Blackstone et Eric Roy. « Diet Quality and Environmental Sustainability Are Linked, But in Unexpected Ways ». Current Developments in Nutrition 4, Supplement_2 (29 mai 2020) : 137. http://dx.doi.org/10.1093/cdn/nzaa042_002.

Texte intégral
Résumé :
Objectives Studies linking diet quality with environmental impacts in the US have generally not accounted for the additional burden associated with retail losses, inedible portions, and consumer waste. Moreover, there is a need to assess the environmental impacts of shifts in diet quality using data collected directly from individuals, rather than assessing the impacts of nutritionally perfect theoretical diets. This study fills these important research gaps by assessing the relationship between observed diet quality among a nationally-representative sample and the amount of agricultural resources used to produce food. Methods Dietary data from 50,014 individuals ≥2 y were collected from the National Health and Nutrition Examination Survey (NHANES, 2005–2016), and diet quality was measured using the Healthy Eating Index-2015 (HEI) and Alternate Healthy Eating Index-2010 (AHEI). Food retail losses, inedible portions, and consumer waste were estimated by linking data from the USDA Loss-adjusted Food Availability data series with dietary data from NHANES. These data were input into the US Foodprint Model, which was modified to estimate the amount of agricultural resources needed to meet food demand. Results Daily per capita food demand represented nearly four pounds (1673 grams) of food, including 7% retail loss, 15% inedible, 24% consumer waste, and 54% consumption. Higher diet quality (HEI and AHEI) was associated with greater retail loss, inedible portions, consumer waste, and consumption (P < 0.001 for all). Higher diet quality was associated (P < 0.05) with lower use of agricultural land (HEI and AHEI), greater use of irrigation water and pesticides (HEI), and lower use of fertilizers (AHEI). Conclusions Among a nationally-representative sample of over 50 thousand Americans, higher diet quality was associated with greater food retail loss, inedible portions, consumer waste, and consumption.
Higher diet quality was also associated with lower use of some agricultural resources (land and fertilizers), but greater use of others (irrigation water and pesticides). By combining robust measures of diet quality with an advanced food system modeling framework, this study reveals that the link between diet quality and environmental sustainability is more nuanced than previously understood. Funding Sources None.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Nakamura, Fumiaki, Masato Masuda, Norihiro Teramoto, Kazuhiro Mizumoto, Eiji Mekata, Shunichi Higashide, Mikinobu Ohtani et Takahiro Higashi. « Implementing quality indicators using health insurance claims data linked to the hospital-based cancer registry. » Journal of Clinical Oncology 31, no 31_suppl (1 novembre 2013) : 94. http://dx.doi.org/10.1200/jco.2013.31.31_suppl.94.

Texte intégral
Résumé :
Background: To establish systematic monitoring of cancer care quality, we measured the quality of cancer care in several facilities through chart reviews by tumor registrars. However, this method required extensive effort and skill from registrars. To explore less labor-intensive methods of measuring care quality, we assessed quality measurement using health insurance claims data linked to the Hospital-Based Cancer Registry (HBCR). Methods: We previously developed 206 quality indicators (QIs) to assess cancer care processes in collaboration with clinical experts. Ten of these (stomach cancer, 1; colorectal cancer, 1; lung cancer, 2; breast cancer, 3; liver cancer, 1; and supportive care, 2) could be measured using health insurance claims data linked to the HBCR. Patients treated at 7 designated cancer hospitals in Japan in 2010 were included. Their characteristics and tumor stages were obtained from the HBCR, and the processes of care administered to the patients in 2010–2011 were obtained from health insurance claims data. We calculated a score for each QI as the proportion of patients receiving the indicated care among those eligible for the QI. Results: Data of 4,785 patients were analyzed (stomach cancer, 1,181; colorectal cancer, 1,077; lung cancer, 1,091; breast cancer, 1,184; and liver cancer, 252). Quality scores for essential laboratory tests were high: 91% of patients underwent the HER2 test for invasive breast cancer and 95% underwent the liver function test using indocyanine green clearance before liver cancer surgery. However, indicator scores for adjuvant chemotherapy were relatively lower, at only 59% for stomach cancer, 57% for colorectal cancer, and 56% for lung cancer patients receiving adjuvant chemotherapy. The supportive care scores had even more scope for improvement, as only 43% of patients received antiemetics for highly emetic chemotherapy and 66% received laxatives along with narcotics. Conclusions: These QIs can be implemented on health insurance claims data linked to the HBCR and used to identify potential target areas for improvement. In the future, such electronic systems will enable rapid cycles of quality measurement and feedback.
Styles APA, Harvard, Vancouver, ISO, etc.