Academic literature on the topic "Linked Data Quality"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other citation styles.

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Linked Data Quality".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Linked Data Quality"

1

Zaveri, Amrapali, Anisa Rula, Andrea Maurino, Ricardo Pietrobon, Jens Lehmann, and Sören Auer. "Quality assessment for Linked Data: A Survey." Semantic Web 7, no. 1 (March 17, 2015): 63–93. http://dx.doi.org/10.3233/sw-150175.

2

Radulovic, Filip, Nandana Mihindukulasooriya, Raúl García-Castro, and Asunción Gómez-Pérez. "A comprehensive quality model for Linked Data." Semantic Web 9, no. 1 (November 30, 2017): 3–24. http://dx.doi.org/10.3233/sw-170267.

3

Batini, Carlo, Anisa Rula, Monica Scannapieco, and Gianluigi Viscusi. "From Data Quality to Big Data Quality." Journal of Database Management 26, no. 1 (January 2015): 60–82. http://dx.doi.org/10.4018/jdm.2015010103.

Abstract:
This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensors and sensor networks, and official statistics. Consequently, a set of structural characteristics is identified, and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable to map the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. The framework thus allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon, through an integrative and theoretical literature review.
4

Hadhiatma, A. "Improving data quality in the linked open data: a survey." Journal of Physics: Conference Series 978 (March 2018): 012026. http://dx.doi.org/10.1088/1742-6596/978/1/012026.

5

Kovacs, Adam Tamas, and Andras Micsik. "BIM quality control based on requirement linked data." International Journal of Architectural Computing 19, no. 3 (May 13, 2021): 431–48. http://dx.doi.org/10.1177/14780771211012175.

Abstract:
This article discusses a BIM Quality Control Ecosystem based on Requirement Linked Data, intended to create a framework in which automated BIM compliance checking methods can be widely used. The meaning of requirements is analyzed in a building project context as a basis for data flow analysis: what the main types of requirements are, how they are handled, and what sources they originate from. A literature review was conducted to identify current development directions in quality checking, alongside market research on widely used existing solutions. From the conclusions of this research and modern data management theory, the principles of a holistic approach to quality checking in the Architecture, Engineering and Construction (AEC) industry are defined. A comparative analysis of current BIM compliance checking solutions was made according to these review principles. Based on current practice and ongoing research, a state-of-the-art BIM quality control ecosystem is proposed that is open, enables automation, promotes interoperability, and leaves the responsibility for governing data at the sources of the requirements. To facilitate the flow of requirement and quality data, a model for requirements as Linked Data is proposed, with an example of quality checking using the Shapes Constraint Language (SHACL). As a result, an opportunity is created for better-quality and cheaper BIM design methods to be implemented in the industry.
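
To make the SHACL step concrete, here is a minimal sketch of requirement checking with the rdflib and pySHACL Python libraries. The requirement shape, the ex: namespace and the door data are invented for illustration; they are not the requirement model from the paper.

```python
# A minimal sketch of SHACL-based requirement checking (hypothetical shape
# and data; not the authors' requirement model).
from rdflib import Graph
from pyshacl import validate

# Assumed requirement: every door must declare a fire rating.
shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/bim#> .

    ex:DoorShape a sh:NodeShape ;
        sh:targetClass ex:Door ;
        sh:property [ sh:path ex:fireRating ; sh:minCount 1 ] .
""", format="turtle")

data = Graph().parse(data="""
    @prefix ex: <http://example.org/bim#> .

    ex:door1 a ex:Door .                        # no fire rating -> violation
    ex:door2 a ex:Door ; ex:fireRating "EI30" .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # False: ex:door1 fails the requirement shape
print(report)
```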
6

Zaveri, Amrapali, Andrea Maurino, and Laure Berti-Equille. "Web Data Quality." International Journal on Semantic Web and Information Systems 10, no. 2 (April 2014): 1–6. http://dx.doi.org/10.4018/ijswis.2014040101.

Abstract:
The standardization and adoption of Semantic Web technologies has resulted in an unprecedented volume of data being published as Linked Data (LD). However, the “publish first, refine later” philosophy leads to various quality problems arising in the underlying data such as incompleteness, inconsistency and semantic ambiguities. In this article, we describe the current state of Data Quality in the Web of Data along with details of the three papers accepted for the International Journal on Semantic Web and Information Systems' (IJSWIS) Special Issue on Web Data Quality. Additionally, we identify new challenges that are specific to the Web of Data and provide insights into the current progress and future directions for each of those challenges.
7

Baillie, Chris, Peter Edwards, and Edoardo Pignotti. "Assessing Quality in the Web of Linked Sensor Data." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1750–51. http://dx.doi.org/10.1609/aaai.v25i1.8044.

Abstract:
Assessing the quality of sensor data available on the Web is essential in order to identify reliable information for decision-making. This paper discusses how provenance of sensor observations and previous quality ratings can influence quality assessment decisions.
8

Paulheim, Heiko, and Christian Bizer. "Improving the Quality of Linked Data Using Statistical Distributions." International Journal on Semantic Web and Information Systems 10, no. 2 (April 2014): 63–86. http://dx.doi.org/10.4018/ijswis.2014040104.

Abstract:
Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate as well as scalable. Both algorithms have been used for building the DBpedia 3.9 release: With SDType, 3.4 million missing type statements have been added, while using SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
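
The statistical intuition behind SDType can be sketched in a few lines: each property a subject uses casts a vote for the subject's likely types, weighted by the type distribution observed for that property among typed entities. The toy triples below are invented; this is not the authors' implementation and omits, among other things, the weighting of properties by their predictive power described in the paper.

```python
# Toy sketch of the idea behind SDType: infer missing type statements from
# per-property type distributions (invented data, not the authors' code).
from collections import Counter, defaultdict

# (subject, predicate, object) triples; "a" stands for rdf:type.
triples = [
    ("Berlin", "a", "City"),
    ("Berlin", "mayor", "Wowereit"),
    ("Paris", "a", "City"),
    ("Paris", "mayor", "Hidalgo"),
    ("Rome", "mayor", "Raggi"),          # type statement missing
]

# Estimate P(type | subject uses predicate p) from the typed entities.
entity_type = {s: o for s, p, o in triples if p == "a"}
type_dist = defaultdict(Counter)
for s, p, o in triples:
    if p != "a" and s in entity_type:
        type_dist[p][entity_type[s]] += 1

def sd_type(entity):
    """Let every predicate of the entity cast a weighted vote for its type."""
    votes = Counter()
    for s, p, o in triples:
        if s == entity and p != "a":
            total = sum(type_dist[p].values())
            for t, n in type_dist[p].items():
                votes[t] += n / total
    return votes.most_common()

print(sd_type("Rome"))   # [('City', 1.0)] -> candidate type statement to add
```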
9

Assaf, Ahmad, Aline Senart, and Raphaël Troncy. "Towards An Objective Assessment Framework for Linked Data Quality." International Journal on Semantic Web and Information Systems 12, no. 3 (July 2016): 111–33. http://dx.doi.org/10.4018/ijswis.2016070104.

Abstract:
Ensuring data quality in Linked Open Data is a complex process, as such data consists of structured information supported by models, ontologies and vocabularies and contains queryable endpoints and links. In this paper, the authors first propose an objective assessment framework for Linked Data quality. The authors build upon previous efforts that have identified potential quality issues but focus only on objective quality indicators that can be measured regardless of the underlying use case. Secondly, the authors present an extensible quality measurement tool that helps data owners to rate the quality of their datasets, on the one hand, and data consumers to choose their data sources from a ranked set, on the other. The authors evaluate this tool by measuring the quality of the LOD cloud. The results demonstrate that the general state of the datasets needs attention, as they mostly have low completeness, provenance, licensing and comprehensibility quality scores.
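
As an illustration of what an objective indicator looks like, the sketch below scores a toy dataset description on three indicators named after the low-scoring dimensions in the abstract. The indicator definitions, the unweighted aggregation and the dataset fields are assumptions made for this example, not the framework's actual metrics.

```python
# Toy objective-indicator scorer (invented indicators and aggregation; the
# paper defines its own metrics and applies them to the LOD cloud).
dataset = {
    "license": "http://creativecommons.org/licenses/by/4.0/",
    "provenance": None,          # e.g. no dcterms:creator statement
    "described_properties": 45,  # properties carrying rdfs:label/comment
    "total_properties": 60,
}

indicators = {
    "licensing": 1.0 if dataset["license"] else 0.0,
    "provenance": 1.0 if dataset["provenance"] else 0.0,
    "comprehensibility": dataset["described_properties"]
                         / dataset["total_properties"],
}

score = sum(indicators.values()) / len(indicators)  # unweighted mean
print(indicators)
print(round(score, 2))   # 0.58 for this toy description
```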
10

Yang, Lu, Li Huang, and Zhenzhen Liu. "Linked Data Crowdsourcing Quality Assessment based on Domain Professionalism." Journal of Physics: Conference Series 1187, no. 5 (April 2019): 052085. http://dx.doi.org/10.1088/1742-6596/1187/5/052085.


Theses on the topic "Linked Data Quality"

1

Spahiu, Blerina. "Profiling Linked Data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/151645.

Abstract:
Recently, the increasing diffusion of Linked Data (LD) as a standard way to publish and structure data on the Web has received growing attention from researchers and data publishers. LD adoption is reflected in different domains such as government, media and life science, building a powerful Web available to anyone. Despite the high number of datasets published as LD, their usage is still not fully exploited, as they lack comprehensive metadata. Data consumers need to obtain information about dataset content in a fast and summarized form to decide whether it is useful for their use case at hand. Data profiling techniques offer an efficient solution to this problem, as they are used to generate metadata and statistics that describe the content of a dataset. Existing profiling techniques do not cover a wide range of use cases, and many challenges due to the heterogeneous nature of Linked Data remain to be overcome. This thesis presents doctoral research that tackles the problems related to profiling Linked Data. Even though data profiling is an umbrella term for the diverse descriptive information that characterizes a dataset, this thesis covers three aspects of profiling: topic-based, schema-based and linkage-based. The profile provided in this thesis is fundamental for the decision-making process and is the basic requirement towards dataset understanding. The thesis presents an approach to automatically classify datasets into one of the topical categories used in the LD cloud and investigates the problem of multi-topic profiling. For schema-based profiling, it proposes a schema-based summarization approach that provides an overview of the relations in the data; the summaries are concise and informative enough to summarize the whole dataset, reveal quality issues, and can help users in query formulation tasks. Many datasets in the LD cloud contain similar information about the same entity; in order to fully exploit its potential, LD should make this information explicit. Linkage profiling provides information about the number of equivalent entities between datasets and reveals possible errors. The profiling techniques developed during this work are automatic and can be applied to different datasets independently of the domain.
2

Issa, Subhi. "Linked data quality: completeness and conciseness." Electronic thesis or dissertation, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1274.

Abstract:
The widespread adoption of Semantic Web technologies such as the Resource Description Framework (RDF) enables individuals to build their databases on the Web, to write vocabularies, and to define rules to arrange and explain the relationships between data according to the Linked Data principles. As a consequence, a large amount of structured and interlinked data is generated daily. A close examination of the quality of this data can be very critical, especially if important research and professional decisions depend on it. The quality of Linked Data is an important aspect that indicates its fitness for use in applications. Several dimensions for assessing the quality of Linked Data have been identified, such as accuracy, completeness, provenance, and conciseness. This thesis focuses on assessing completeness and enhancing conciseness of Linked Data. In particular, we first proposed a completeness calculation approach based on a generated schema. Indeed, as a reference schema is required to assess completeness, we proposed a mining-based approach to derive a suitable schema (i.e., a set of properties) from the data. This approach distinguishes between essential properties and marginal ones to generate, for a given dataset, a conceptual schema that meets the user's expectations regarding data completeness constraints. We implemented a prototype called "LOD-CM" to illustrate the process of deriving a conceptual schema of a dataset based on the user's requirements. We further proposed an approach to discover equivalent predicates to improve the conciseness of Linked Data. This approach is based, in addition to a statistical analysis, on a deep semantic analysis of data and on learning algorithms. We argue that studying the meaning of predicates can help to improve the accuracy of results. Finally, a set of experiments was conducted on real-world datasets to evaluate our proposed approaches.
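
The mining-based completeness idea reads naturally as a two-step computation: derive a reference schema from property supports, then score each entity against it. The sketch below is a minimal interpretation with an invented support threshold and toy data; the thesis's LOD-CM additionally tailors the schema to user-stated completeness constraints, which are not modeled here.

```python
# Sketch: mine a reference schema from property supports, then score each
# entity against it (threshold and data are illustrative).
entities = {
    "person1": {"name", "birthDate", "birthPlace"},
    "person2": {"name", "birthDate"},
    "person3": {"name"},
}

support = {}
for props in entities.values():
    for p in props:
        support[p] = support.get(p, 0) + 1

n = len(entities)
# Properties used by at least half the entities count as "essential".
schema = {p for p, c in support.items() if c / n >= 0.5}

for entity, props in sorted(entities.items()):
    completeness = len(props & schema) / len(schema)
    print(entity, round(completeness, 2))   # person3 scores 0.5
```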
3

Rula, Anisa. "Time-related quality dimensions in linked data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/81717.

Abstract:
Over the last few years, there has been an increasing diffusion of Linked Data as a standard way to publish interlinked structured data on the Web, which allows users and public and private organizations to fully exploit a large amount of data from several domains that were not available in the past. Although gathering and publishing such a massive amount of structured data is certainly a step in the right direction, quality still poses a significant obstacle to the uptake of data consumption applications at large scale. A crucial aspect of quality concerns the dynamic nature of Linked Data, where information can change rapidly and fail to reflect changes in the real world, thus becoming out-of-date. Quality is characterised by different dimensions that capture several aspects of quality, such as accuracy, currency, consistency or completeness. In particular, the aspects of Linked Data dynamicity are captured by time-related quality dimensions such as data currency. The assessment of time-related quality dimensions, which is the task of measuring the quality, is based on temporal information whose collection poses several challenges regarding its availability, representation and diversity in Linked Data. The assessment of time-related quality dimensions supports data consumers in deciding whether information is valid or not. The main goal of this thesis is to develop techniques for assessing time-related quality dimensions in Linked Data, which must overcome several challenges posed by Linked Data such as third-party applications, variety of data, high volume of data or velocity of data. The major contributions of this thesis can be summarized as follows: it presents a general setting of definitions for quality dimensions and measures adopted in Linked Data; it provides a large-scale analysis of approaches for representing temporal information in Linked Data; it provides a sharable and interoperable conceptual model which integrates vocabularies used to represent the temporal information required for the assessment of time-related quality dimensions; it proposes two domain-independent techniques to assess data currency that work with incomplete or inaccurate temporal information; and finally it provides an approach that enriches information with time intervals representing its temporal validity.
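
A simple reading of data currency can be written down directly. The sketch below uses one common age-based normalization (the age of the last modification relative to the observation window); this formulation is an assumption for illustration, not necessarily the exact measure defined in the thesis, which notably also handles missing or inaccurate timestamps.

```python
# Age-based currency sketch: 1.0 for a value updated just now, decaying
# towards 0.0 for one never updated since first publication.
from datetime import datetime

def currency(last_modified, first_published, now=None):
    now = now or datetime.utcnow()
    age = (now - last_modified).total_seconds()
    window = (now - first_published).total_seconds()
    return max(0.0, 1.0 - age / window)

# A fact last touched in January 2024, first published ten years earlier:
print(round(currency(datetime(2024, 1, 1), datetime(2014, 1, 1),
                     now=datetime(2024, 6, 1)), 2))   # ~0.96
```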
4

Debattista, Jeremy. "Scalable Quality Assessment of Linked Data." Bonn: Universitäts- und Landesbibliothek Bonn, 2017. http://d-nb.info/1135663440/34.

5

Baillie, Chris. "Reasoning about quality in the Web of Linked Data." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227177.

Abstract:
In recent years the Web has evolved from a collection of hyperlinked documents to a vast ecosystem of interconnected documents, devices, services, and agents. However, the open nature of the Web enables anyone or anything to publish any content they choose, so poor-quality data can quickly propagate, and an appropriate mechanism to assess the quality of such data is essential if agents are to identify reliable information for use in decision-making. Existing assessment frameworks investigate the context around data (additional information that describes the situation in which a datum was created). Such metadata can be made available by publishing information to the Web of Linked Data. However, there are situations in which examining context alone is not sufficient, such as when one must identify the agent responsible for data creation or the transformational processes applied to data. In these situations, examining data provenance is critical to identifying quality issues. Moreover, there will be situations in which an agent is unable to perform a quality assessment of its own, for example if the original contextual metadata is no longer available. Here, it may be possible for agents to explore the provenance of previous quality assessments and make decisions about re-using quality results. This thesis explores issues around quality assessment and provenance in the Web of Linked Data. It contributes a formal model of quality assessment designed to align with emerging standards for provenance on the Web. This model is then realised as an OWL ontology, which can be used as part of a software framework to perform data quality assessment. Through a number of real-world examples, spanning environmental sensing, invasive species monitoring, and passenger information domains, the thesis establishes the importance of examining provenance as part of quality assessment. Moreover, it demonstrates that by examining quality assessment provenance, agents can make re-use decisions about existing quality assessment results. Included in these implementations are sets of example quality metrics that demonstrate how these can be encoded using the SPARQL Inferencing Notation (SPIN).
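
A quality metric of the kind described can be expressed declaratively over RDF. The sketch below uses a plain SPARQL query via rdflib as a stand-in for the SPIN encoding used in the thesis; the ex: vocabulary and the observation data are invented.

```python
# Sketch of a provenance-oriented quality metric as a SPARQL query
# (plain SPARQL via rdflib stands in for the thesis's SPIN encoding).
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:obs1 ex:value 21.5 ; ex:observedBy ex:sensor1 .
    ex:obs2 ex:value 19.0 .
""", format="turtle")

# Metric: every observation must carry a provenance link to its sensor.
violations = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?obs WHERE {
        ?obs ex:value ?v .
        FILTER NOT EXISTS { ?obs ex:observedBy ?agent }
    }
""")
for row in violations:
    print(f"provenance missing for {row.obs}")   # ex:obs2 fails the metric
```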
6

Zaveri, Amrapali. "Linked Data Quality Assessment and its Application to Societal Progress Measurement." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-167021.

Abstract:
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration, where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; there are cases when datasets containing quality problems are still useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused either by the LD publication process or be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is particularly challenging in LD, as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes assessing quality crucial for measuring how accurately the real world is represented. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness or consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology includes the employment of LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia.
The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowds, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand the quality issues and fix them. Finally, we consider a domain-specific use case that consumes LD and leverages data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
7

Yaman, Beyza. "Exploiting Context-Dependent Quality Metadata for Linked Data Source Selection." Doctoral thesis, Università degli studi di Genova, 2018. http://hdl.handle.net/11567/930633.

Abstract:
The traditional Web is evolving into the Web of Data, which consists of huge collections of structured data over poorly controlled distributed data sources. Live queries are needed to get current information out of this global data space. In live query processing, source selection deserves attention since it allows us to identify the sources that are likely to contain the relevant data. The thesis proposes a source selection technique in the context of live query processing on Linked Open Data that takes into account the context of the request and the quality of the data contained in the sources, in order to enhance both the relevance of the answers (since the context enables a better interpretation of the request) and their quality (obtained by processing the request on the selected sources). Specifically, the thesis proposes an extension of the QTree indexing structure, which had been proposed as a data summary to support source selection based on source content, so that it also takes quality and contextual information into account. With reference to a specific case study, the thesis also contributes an approach, relying on the Luzzu framework, to assess the quality of a source with respect to a given context (according to different quality dimensions). An experimental evaluation of the proposed techniques is also provided.
8

Salibekyan, Zinaida. "Trends in job quality: evidence from French and British linked employer-employee data." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM2001.

Abstract:
The contribution of this thesis is to examine the evolution of job quality from the perspective of the workplace, using the British Workplace Employment Relations Surveys (WERS 2004 and 2011) and the French Enquête Relations Professionnelles et Négociations d’Entreprises (REPONSE 2005 and 2011). The thesis consists of three chapters and complements three main strands of the existing literature. The first chapter explores the impact of workplace adjustment practices on job quality in France during the recession. The second chapter examines the role of institutional regimes in Great Britain and France in explaining the cross-national variation in job quality. Finally, the third chapter investigates the strategies employees adopt in order to cope with their pay and working conditions.
9

Melo, Jessica Oliveira de Souza Ferreira. "Metodologia de avaliação de qualidade de dados no contexto do linked data." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/150870.

Abstract:
The Semantic Web suggests the use of standards and technologies that assign structure and semantics to data, so that computational agents can perform intelligent, automatic processing to accomplish specific tasks. In this context, the Linked Open Data (LOD) project was created, an initiative to promote the publication of Linked Data. With the evident growth of data published as Linked Data, quality has become essential for such datasets to meet the basic goals of the Semantic Web, because quality problems in published datasets are a hindrance not only to their use but also to the applications that make use of such data. Considering that data made available as Linked Data enables a favorable environment for intelligent applications, quality problems can also hinder or prevent the integration of data coming from different datasets. The literature applies several quality dimensions in the context of Linked Data; however, the applicability of such dimensions to the quality evaluation of linked data is open to question. Thus, this research aims to propose a methodology for quality evaluation of Linked Data datasets, as well as to establish a model of what can be considered data quality in the Semantic Web and Linked Data context. To this end, an exploratory and descriptive approach was adopted in order to establish quality problems, dimensions and requirements, and quantitative methods were used in the evaluation methodology in order to assign quality indexes. The work resulted in the definition of seven quality dimensions applicable to the Linked Data domain and 14 different formulas for quantifying the quality of datasets about scientific publications. Finally, a proof of concept was carried out in which the proposed quality assessment methodology was applied to a dataset promoted by the LOD. From the results of the proof of concept, it is concluded that the proposed methodology is a viable means for quantifying quality problems in Linked Data datasets, and that despite the various requirements for publishing this type of data, there may be datasets that do not meet certain quality requirements and, in turn, should not be included in the LOD project diagram.
10

Zaveri, Amrapali. "Linked Data Quality Assessment and its Application to Societal Progress Measurement." Doctoral thesis (examiner: Felix Naumann). Leipzig: Universitätsbibliothek Leipzig, 2015. http://d-nb.info/1239565844/34.


Books on the topic "Linked Data Quality"

1

Maugeri, Giuseppe, and Graziano Serragiotto. L’insegnamento della lingua italiana in Giappone. Uno studio di caso sul Kansai. Venice: Fondazione Università Ca’ Foscari, 2021. http://dx.doi.org/10.30687/978-88-6969-525-4.

Abstract:
This research stems from the need of the Italian Cultural Institute to map the institutions involved in teaching Italian in the area considered and to analyse the quality of the teaching and learning process of the Italian language. The objectives are multiple and linked to the importance of finding the causes that slow the growth of the study of Italian in Japanese Kansai. The first part of this action research therefore outlines the cultural and linguistic education coordinates that characterize the Japanese context; in the second part, the research data are interpreted in order to trace new methodological development trajectories to increase the quality of the Italian teaching process in Kansai.

Part 1 focuses on the situation of foreign language teaching in Japan and on the strategies used to promote the teaching of Italian in Japan from 1980 to the present. Chapter 1 describes the language policy for the promotion of foreign languages in Japan by the Ministry of Education (MEXT); chapter 2 presents the cultural, pedagogical and linguistic education characteristics of the context under investigation; chapter 3 outlines the general frame of the spread of the Italian cultural model in a traditional Japanese context.

Part 2 describes the action research and the design of the training project. Chapter 4 presents the overall design of the research and the research questions that inspired the investigation, whose aim is to understand whether there is a link between the methodological choices of the teachers and the difficulties Japanese students face in learning Italian.

Part 3 examines the teaching of Italian in different learning contexts in Japanese Kansai. Chapter 5 analyses the problems of teaching Italian at the Italian Culture Institute (IIC) in Osaka and suggests methodological improvement paths for its teachers; chapter 6 uses informant data to analyse the teaching of Italian at the Department of Italian language of Osaka University and suggests curricular and methodological improvements; chapter 7 outlines the methodological and technical characteristics of Italian teaching at Kyoto Sangyo University and suggests strategies aimed at enhancing students’ language learning.
2

Gold, Robert Louis. Low-flow water-quality and discharge data for lined channels in northwest Albuquerque, New Mexico, 1990 to 1994. Albuquerque, N.M.: U.S. Dept. of the Interior, U.S. Geological Survey, 1997.

3

McBreen, Robert, Albuquerque Metropolitan Arroyo Flood Control Authority, and Geological Survey (U.S.), eds. Low-flow water-quality and discharge data for lined channels in northwest Albuquerque, New Mexico, 1990 to 1994. Albuquerque, N.M.: U.S. Dept. of the Interior, U.S. Geological Survey, 1997.

4

Nakov, Svetlin. Fundamentals of Computer Programming with C#: The Bulgarian C# Book. Sofia, Bulgaria: Svetlin Nakov, 2013.


Book chapters on the topic "Linked Data Quality"

1

Acosta, Maribel, Amrapali Zaveri, Elena Simperl, Dimitris Kontokostas, Sören Auer, and Jens Lehmann. "Crowdsourcing Linked Data Quality Assessment." In Advanced Information Systems Engineering, 260–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41338-4_17.

2

Rula, Anisa, Andrea Maurino, and Carlo Batini. "Data Quality Issues in Linked Open Data." In Data-Centric Systems and Applications, 87–112. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24106-7_4.

3

Ruckhaus, Edna, Oriana Baldizán, and María-Esther Vidal. "Analyzing Linked Data Quality with LiQuate." In Lecture Notes in Computer Science, 629–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41033-8_80.

4

Behkamal, Behshid, Mohsen Kahani, and Ebrahim Bagheri. "Quality Metrics for Linked Open Data." In Lecture Notes in Computer Science, 144–52. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22849-5_11.

5

Ruckhaus, Edna, Maria-Esther Vidal, Simón Castillo, Oscar Burguillos, and Oriana Baldizan. "Analyzing Linked Data Quality with LiQuate." In Lecture Notes in Computer Science, 488–93. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11955-7_72.

6

Nayak, Aparna, Bojan Božić, and Luca Longo. "Linked Data Quality Assessment: A Survey." In Web Services – ICWS 2021, 63–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96140-4_5.

7

Lu, Yuqing, Lei Zhang, and Juanzi Li. "Evaluating Article Quality and Editor Reputation in Wikipedia." In Linked Data and Knowledge Graph, 215–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-54025-7_19.

8

Ma, Yanfang, and Guilin Qi. "An Analysis of Data Quality in DBpedia and Zhishi.me." In Linked Data and Knowledge Graph, 106–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-54025-7_10.

9

Cappiello, Cinzia, Tommaso Di Noia, Bogdan Alexandru Marcu, and Maristella Matera. "A Quality Model for Linked Data Exploration." In Lecture Notes in Computer Science, 397–404. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-38791-8_25.

10

Kiryakos, Senan, and Shigeo Sugimoto. "A Linked Data Model to Aggregate Serialized Manga from Multiple Data Providers." In Digital Libraries: Providing Quality Information, 120–31. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27974-9_12.


Conference papers on the topic "Linked Data Quality"

1

Kontokostas, Dimitris, Patrick Westphal, Sören Auer, Sebastian Hellmann, Jens Lehmann, Roland Cornelissen, and Amrapali Zaveri. "Test-driven evaluation of linked data quality." In Proceedings of the 23rd International Conference on World Wide Web. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2566486.2568002.

2

To, Alex, Rouzbeh Meymandpour, Joseph G. Davis, Guillaume Jourjon, and Jonathan Chan. "A Linked Data Quality Assessment Framework for Network Data." In Proceedings of the 2nd Joint International Workshop. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3327964.3328493.

3

Debattista, Jeremy, Sören Auer, and Christoph Lange. "Luzzu – A Framework for Linked Data Quality Assessment." In 2016 IEEE Tenth International Conference on Semantic Computing (ICSC). IEEE, 2016. http://dx.doi.org/10.1109/icsc.2016.48.

4

Tang, Zhenhao, Hanfei Wang, Bin Li, Juan Zhai, Jianhua Zhao, and Xuandong Li. "Node-Set Analysis for Linked Recursive Data Structures." In 2015 IEEE International Conference on Software Quality, Reliability and Security (QRS). IEEE, 2015. http://dx.doi.org/10.1109/qrs.2015.19.

5

Lorey, Johannes. "SPARQL Endpoint Metrics for Quality-Aware Linked Data Consumption." In International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2539150.2539240.

6

Catania, Barbara, Giovanna Guerrini, and Beyza Yaman. "Exploiting context and quality for linked data source selection." In SAC '19: The 34th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3297280.3297503.

7

Nahari, Mohammad Khodizadeh, Nasser Ghadiri, Zahra Jafarifard, Ahmad Baraani Dastjerdi, and Joerg R. Sack. "A framework for linked data fusion and quality assessment." In 2017 3rd International Conference on Web Research (ICWR). IEEE, 2017. http://dx.doi.org/10.1109/icwr.2017.7959307.

8

Knap, Tomas, Jan Michelfeit, and Martin Necasky. "Linked Open Data Aggregation: Conflict Resolution and Aggregate Quality." In 2012 IEEE 36th Annual Computer Software and Applications Conference Workshops (COMPSACW). IEEE, 2012. http://dx.doi.org/10.1109/compsacw.2012.29.

9

Dorobat, Ilie Cristian, Octavian Rinciog, George Cristian Muraru, and Vlad Posea. "Improving the Quality of Linked Data Using String Suggestions." In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-133.

Abstract:
The standardization of Semantic Web technologies and their growing usage in professional communities, both governmental and non-governmental, have naturally accelerated the growth of the volume of data published in the virtual space, from 422 published datasets in 2011 to 9,960 in 2019, totaling 192,230,648 triples from areas as diverse as medicine, education, art, history, technology and public administration. This growth of the semantic datasets published in the virtual space leads to a new challenge: ensuring data quality. A first step in this direction was made by Tim Berners-Lee in 2010, when he defined a set of criteria that data scientists are encouraged to use to ensure the highest quality level of datasets. However, one important aspect is not mentioned there: data accuracy, a feature not strictly specific to semantic data but applicable to any type of data representation. The paper starts with a brief presentation of the most important metrics used to determine the level of data quality, along with a brief introduction to the most used string similarity algorithms. After that, the paper presents a new feature for an existing open data integration tool, called Karma, that allows users, such as data analysts and scientists, to improve their time management by reducing the time needed to clean their data. This feature has been implemented as a string suggestion for miswritten strings using the presented string similarity metrics, while preserving the framework's design and workflow.
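
The suggestion mechanism can be approximated with a stock similarity measure from the Python standard library: compare each suspect value against a list of known-good values and propose the closest matches above a cutoff. The vocabulary and cutoff below are invented, and the paper's integration into Karma's workflow is not reproduced.

```python
# Sketch of string suggestions over a controlled vocabulary using the
# standard library (difflib); vocabulary and cutoff are invented.
import difflib

vocabulary = ["Bucharest", "Budapest", "Berlin"]   # known-good values

def suggest(value, cutoff=0.8):
    """Return up to three close matches to offer as corrections."""
    return difflib.get_close_matches(value, vocabulary, n=3, cutoff=cutoff)

print(suggest("Bucharst"))   # ['Bucharest'] -> proposed fix for the typo
```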
10

Ahmed, Hana Haj. "Data Quality Assessment in the Integration Process of Linked Open Data (LOD)." In 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA). IEEE, 2017. http://dx.doi.org/10.1109/aiccsa.2017.178.


Reports of organizations on the topic "Linked Data Quality"

1

Johnson, Billy, and Zhonglong Zhang. The demonstration and validation of a linked watershed-riverine modeling system for DoD installations: user guidance report version 2.0. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40425.

Abstract:
A linked watershed model was evaluated on three watersheds within the U.S.: (1) House Creek Watershed, Fort Hood, TX; (2) Calleguas Creek Watershed, Ventura County, CA; and (3) Patuxent River Watershed, MD. The goal of this demonstration study was to show the utility of such a model in addressing water quality issues facing DoD installations across a variety of climate zones. In performing the demonstration study, evaluations of model output with regards to accuracy, predictability and meeting regulatory drivers were completed. Data availability, level of modeling expertise, and costs for model setup, validation, scenario analysis, and maintenance were evaluated in order to inform installation managers on the time and cost investment needed to use a linked watershed modeling system. Final conclusions were that the system evaluated in this study would be useful for answering a variety of questions posed by installation managers and could be useful in developing management scenarios to better control pollutant runoff from installations.
2

Friedler, Haley S., Michelle B. Leavy, Eric Bickelman, Barbara Casanova, Diana Clarke, Danielle Cooke, Andy DeMayo, et al. Outcome Measure Harmonization and Data Infrastructure for Patient-Centered Outcomes Research in Depression: Data Use and Governance Toolkit. Agency for Healthcare Research and Quality (AHRQ), October 2021. http://dx.doi.org/10.23970/ahrqepcwhitepaperdepressiontoolkit.

Abstract:
Executive Summary Patient registries are important tools for advancing research, improving healthcare quality, and supporting health policy. Registries contain vast amounts of data that could be used for new purposes when linked with other sources or shared with researchers. This toolkit was developed to summarize current best practices and provide information to assist registries interested in sharing data. The contents of this toolkit were developed based on review of the literature, existing registry practices, interviews with registries, and input from key stakeholders involved in the sharing of registry data. While some information in this toolkit may be relevant in other countries, this toolkit focuses on best practices for sharing data within the United States. Considerations related to data sharing differ across registries depending on the type of registry, registry purpose, funding source(s), and other factors; as such, this toolkit describes general best practices and considerations rather than providing specific recommendations. Finally, data sharing raises complex legal, regulatory, operational, and technical questions, and none of the information contained herein should be substituted for legal advice. The toolkit is organized into three sections: “Preparing to Share Data,” “Governance,” and “Procedures for Reviewing and Responding to Data Requests.” The section on “Preparing to Share Data” discusses the role of appropriate legal rights to further share the data and the need to follow all applicable ethical regulations. Registries should also prepare for data sharing activities by ensuring data are maintained appropriately and developing policies and procedures for governance and data sharing. The “Governance” section describes the role of governance in data sharing and outlines key governance tasks, including defining and staffing relevant oversight bodies; developing a data request process; reviewing data requests; and overseeing access to data by the requesting party. Governance structures vary based on the scope of data shared and registry resources. Lastly, the section on “Procedures for Reviewing and Responding to Data Requests” discusses the operational steps involved in sharing data. Policies and procedures for sharing data may depend on what types of data are available for sharing and with whom the data can be shared. Many registries develop a data request form for external researchers interested in using registry data. When reviewing requests, registries may consider whether the request aligns with the registry’s mission/purpose, the feasibility and merit of the proposed research, the qualifications of the requestor, and the necessary ethical and regulatory approvals, as well as administrative factors such as costs and timelines. Registries may require researchers to sign a data use agreement or other such contract to clearly define the terms and conditions of data use before providing access to the data in a secure manner. The toolkit concludes with a list of resources and appendices with supporting materials that registries may find helpful.
APA, Harvard, Vancouver, ISO, and other styles
3

Bennett, Alan B., Arthur Schaffer, and David Granot. Genetic and Biochemical Characterization of Fructose Accumulation: A Strategy to Improve Fruit Quality. United States Department of Agriculture, June 2000. http://dx.doi.org/10.32747/2000.7571353.bard.

Full text
Abstract:
The goal of the research project was to evaluate the potential to genetically modify or engineer carbohydrate metabolism in tomato fruit to enhance levels of fructose, a sugar with nearly twice the sweetness value of other sugars. The specific research objectives to achieve that goal were to: 1. Establish the inheritance of a fructose-accumulating trait identified in F1 hybrids of an interspecific cross of L. hirsutum × L. esculentum and identify linked molecular markers to facilitate its introgression into tomato cultivars. This objective was completed, with the genetic data indicating a single major gene, termed Fgr (Fructose glucose ratio), that controlled the partitioning of hexose in the mature fruit. Molecular markers for the gene were developed to aid introgression of this gene into cultivated tomato. In addition, a second major gene encoding fructokinase 2 (FK2) was found to be a determinant of the fructose to glucose ratio in fruit. The relationship between FK2 and Fgr is epistatic, with a combined synergistic effect of the two hirsutum-derived genes on fructose/glucose ratios. 2. Characterize the metabolic and transport properties responsible for high fructose/glucose ratios in fructose-accumulating genotypes. The effect of both the Fgr and FK2 genes on the developmental accumulation of hexoses was studied in a wide range of genetic backgrounds. In all backgrounds the trait proved developmental, with the increase in the fructose to glucose ratio occurring at the breaker stage of fruit development. The following enzymes were assayed, none of which showed differences between genotypes at either the breaker or ripe stage: invertase, sucrose synthase, FK1, FK2, hexokinase, PGI and PGM. The lack of effect of the FK2 gene on fructokinase activity is surprising, and at present we have no explanation for the phenomenon. However, the hirsutum-derived Fgr allele was associated with significantly lower levels of the phosphorylated glucoses, Glc-1-P and Glc-6-P, and concomitantly higher levels of the phosphorylated fructose, Fru-6-P, at both the breaker and ripe stages. This suggests a significant role for the isomerase reaction. 3. Develop and implement molecular genetic strategies for the production of transgenic plants with altered levels of enzymes that potentially control fructose/glucose ratios in fruit. This objective focused on manipulating hexokinase and fructokinase expression in transgenic plants. Two highly divergent cDNA clones (Frk1 and Frk2), encoding fructokinase (EC 2.7.1.4), were isolated from tomato (Lycopersicon esculentum), and a potato fructokinase cDNA clone was obtained from Dr. Howard Davies. Following expression in yeast, each fructokinase was identified as coding for one of the tomato or potato fructokinase isoforms. Transgenic tomato plants were generated with the fructokinase cDNA clone in both sense and antisense orientations, and the effect of the gene on tomato plants is currently being studied.
APA, Harvard, Vancouver, ISO, and other styles
4

Chapman, Ray, Phu Luong, Sung-Chan Kim, and Earl Hayter. Development of three-dimensional wetting and drying algorithm for the Geophysical Scale Transport Multi-Block Hydrodynamic Sediment and Water Quality Transport Modeling System (GSMB). Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41085.

Full text
Abstract:
The Environmental Laboratory (EL) and the Coastal and Hydraulics Laboratory (CHL) have jointly completed a number of large-scale hydrodynamic, sediment and water quality transport studies. EL and CHL have successfully executed these studies utilizing the Geophysical Scale Transport Modeling System (GSMB). The model framework of GSMB is composed of multiple process models, as shown in Figure 1, in which the wave, hydrodynamic, sediment and water quality transport models accepted by the United States Army Corps of Engineers (USACE) are directly and indirectly linked within the GSMB framework. The components of GSMB are the two-dimensional (2D) deep-water wave action model (WAM) (Komen et al. 1994, Jensen et al. 2012), data from a meteorological model (MET) (e.g., Saha et al. 2010 - http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3001.1), shallow-water wave models (STWAVE) (Smith et al. 1999), the Coastal Modeling System wave model (CMS-WAVE) (Lin et al. 2008), the large-scale, unstructured two-dimensional Advanced Circulation (2D ADCIRC) hydrodynamic model (http://www.adcirc.org), and the regional-scale models: Curvilinear Hydrodynamics in Three Dimensions-Multi-Block (CH3D-MB) (Luong and Chapman 2009), which is the multi-block (MB) version of Curvilinear Hydrodynamics in Three Dimensions-Waterways Experiment Station (CH3D-WES) (Chapman et al. 1996, Chapman et al. 2009), the MB CH3D-SEDZLJ sediment transport model (Hayter et al. 2012), and the CE-QUAL-ICM water quality model (Bunch et al. 2003, Cerco and Cole 1994). Task 1 of the DOER project, “Modeling Transport in Wetting/Drying and Vegetated Regions,” is to implement and test three-dimensional (3D) wetting and drying (W/D) within GSMB. This technical note describes the methods and results of Task 1. The original W/D routines were restricted to a single vertical layer or depth-averaged simulations. In order to retain the required 3D or multi-layer capability of MB-CH3D, a multi-block version with variable block layers was developed (Chapman and Luong 2009). This approach requires a combination of grid decomposition, MB, and Message Passing Interface (MPI) communication (Snir et al. 1998). The MB single-layer W/D has demonstrated itself as an effective tool in hyper-tidal environments such as Cook Inlet, Alaska (Hayter et al. 2012). The code modifications, implementation, and testing of a fully 3D W/D are described in the following sections of this technical note.
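For readers unfamiliar with wetting/drying schemes, the sketch below illustrates the basic masking idea the abstract describes, on a single layer: cells whose total water depth falls below a critical threshold are flagged dry and dropped from the transport update. This is a minimal illustration only, not the CH3D-MB code; the array names and the 0.05 m threshold are assumptions.

```python
# Minimal single-layer wetting/drying mask sketch (illustrative, not CH3D-MB).
import numpy as np

H_CRIT = 0.05  # critical depth [m] below which a cell is treated as dry (assumed value)

def update_wet_dry_mask(eta, bathymetry):
    """Return a boolean mask of wet cells.

    eta        : free-surface elevation [m], shape (ny, nx)
    bathymetry : still-water depth [m], positive down, shape (ny, nx)
    """
    total_depth = eta + bathymetry      # instantaneous water-column depth
    return total_depth > H_CRIT         # True where the cell is wet

def masked_update(u, wet):
    """Zero velocities in dry cells so they drop out of the transport step."""
    return np.where(wet, u, 0.0)

# Toy example: a bed sloping from above water (negative depth) to 1 m deep.
ny, nx = 4, 5
bathy = np.tile(np.linspace(-0.2, 1.0, nx), (ny, 1))
eta = np.zeros((ny, nx))
wet = update_wet_dry_mask(eta, bathy)
u = masked_update(np.ones((ny, nx)), wet)
print(wet[0])  # shore cells (depth < H_CRIT) are flagged dry
```

In the full 3D multi-block scheme the technical note describes, such masking must additionally be kept consistent across block boundaries via MPI communication, which is what Task 1 implements.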
APA, Harvard, Vancouver, ISO, and other styles
5

Macker, Joseph P. Controlled Link Sharing and Quality of Service Data Transfer for Military Internetworking. Fort Belvoir, VA: Defense Technical Information Center, January 1996. http://dx.doi.org/10.21236/ada464902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ogwuike, Clinton Obinna, and Chimere Iheonu. Stakeholder Perspectives on Improving Educational Outcomes in Enugu State. Research on Improving Systems of Education (RISE), November 2021. http://dx.doi.org/10.35489/bsg-rise-ri_2021/034.

Full text
Abstract:
Education remains crucial for socioeconomic development and is linked to improved quality of life. In Nigeria, basic education has remained poor and is characterised by unhealthy attributes, including low-quality infrastructure and a lack of effective management of primary and secondary schools. Access to education is a massive issue: according to the United Nations, there are currently about 10.5 million out-of-school children in Nigeria, and 1 in every 5 of the world’s out-of-school children lives in Nigeria, despite the fact that primary education in Nigeria is free. A considerable divide exists between the northern and southern regions of Nigeria, with the southern region performing better across most education metrics. That said, many children in southern Nigeria also do not go to school. In Nigeria’s South West Zone, 2016 data from the Nigerian Federal Ministry of Education reveal that Lagos State has the highest number of out-of-school children, with more than 560,000 children aged 6-11 not going to school. In the South South Zone, Rivers State has the highest number of out-of-school children; more than 900,000 children aged 6-11 are not able to access education in this state. In Enugu State in the South East Zone, there are more than 340,000 children who do not have access to schooling (2016 is the most recent year for which high-quality data are available; these numbers have likely increased due to the impacts of COVID-19). As part of its political economy research project, the RISE Nigeria team conducted surveys of education stakeholders in Enugu State, including teachers, parents, school administrators, youth leaders, religious leaders, and others, in December 2020. The team also visited 10 schools in Nkanu West Local Government Area (LGA), Nsukka LGA, and Udi LGA to speak to administrators and teachers and assess conditions. It then held three RISE Education Summits, in which RISE team members facilitated dialogues between stakeholders and political leaders about improving education policies and outcomes in Enugu. These types of interactions are rare in Nigeria and have the potential to impact the education sector by increasing local demand for quality education and government accountability in providing it. Inputs from the surveys in the LGAs determined the education sector issues included in the agenda for the meetings, which political leaders were able to see in advance. The Summits culminated with the presentation of a social contract, which the team hopes will aid stakeholders in the education sector in monitoring the government’s progress on education priorities. This article draws on stakeholder surveys and conversations, insights from the Education Summits, school visits, and secondary data to provide an overview of educational challenges in Enugu State, with a focus on basic education. It then seeks to highlight potential solutions to these problems based on local stakeholders’ insights from the surveys and the outcomes of the Education Summits.
APA, Harvard, Vancouver, ISO, and other styles
7

Luo, Yan, Shu Tian, and Hao Yang. Green Bonds, Air Quality, and Mortality: Evidence from the People’s Republic of China. Asian Development Bank, December 2021. http://dx.doi.org/10.22617/wps210435-2.

Full text
Abstract:
This study uses city-level data from the People’s Republic of China to examine links between green bond market development and air quality as well as mortality rates. It finds that cities with more green bond financing as a share of total bond financing tend to have better air quality. The effect is stronger when certified green bonds are examined and in cities with higher gross domestic product growth. Further, local green bond issuance is also negatively related to mortality rates. The findings support the argument that green bond issuance is a credible signal of corporate commitment to environmental responsibility.
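The study’s exact econometric specification is not reproduced in the abstract; the sketch below only shows the general shape of such a city-level regression (air quality on the green bond financing share, with a GDP growth interaction) on synthetic data. All variable names, coefficients, and the simulated data are illustrative assumptions, not the paper’s.

```python
# Illustrative city-level regression sketch on synthetic data (not the study's code).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                  # hypothetical city-year observations
green_share = rng.uniform(0, 0.3, n)     # green bonds / total bond financing
gdp_growth = rng.normal(0.06, 0.02, n)
# Synthetic outcome: higher green share -> lower pollution, more so with growth
aqi = 80 - 40 * green_share - 100 * green_share * gdp_growth + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([green_share, gdp_growth, green_share * gdp_growth]))
model = sm.OLS(aqi, X).fit(cov_type="HC1")  # heteroskedasticity-robust errors
print(model.params)  # expect a negative coefficient on green_share
```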
APA, Harvard, Vancouver, ISO, and other styles
8

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll fluorescence, a color sensor, an electronic sniffer for odor detection, a refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll fluorescence was found to give the best estimate of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or a neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back-propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification, and general color and color homogeneity. An unsupervised method was developed to extract the necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruit’s orientation. A new algorithm for on-line clustering of data was developed. The algorithm’s adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data, and it preserves information even under memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real time. The methodology developed was tested to determine the external quality of tomatoes based on visual information. An improved, stable color-sorting model that does not require recalibration each season was developed. Excellent results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using drop-impact and vision sensors, in order to predict the storability and marketability of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh-market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements, in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, consistent with human graders and inspectors.
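As a rough illustration of the multi-sensor classification idea summarized above, the sketch below trains a small back-propagation network to grade produce from fused sensor features. It is not the project’s code; the feature names, the synthetic grade rule, and the data are assumed purely for demonstration.

```python
# Multi-sensor grading sketch with a small back-propagation network
# (illustrative only; features, grade rule, and data are assumed).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
firmness = rng.normal(50, 10, n)        # impact firmness (arbitrary units)
hue = rng.normal(0.5, 0.1, n)           # mean color hue from vision
fluorescence = rng.normal(1.0, 0.3, n)  # chlorophyll fluorescence
X = np.column_stack([firmness, hue, fluorescence])
y = ((firmness > 48) & (hue > 0.45)).astype(int)  # synthetic grade rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(
    StandardScaler(),  # put sensor channels on a common scale
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```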
APA, Harvard, Vancouver, ISO, and other styles
9

Galili, Naftali, Roger P. Rohrbach, Itzhak Shmulevich, Yoram Fuchs, and Giora Zauberman. Non-Destructive Quality Sensing of High-Value Agricultural Commodities Through Response Analysis. United States Department of Agriculture, October 1994. http://dx.doi.org/10.32747/1994.7570549.bard.

Full text
Abstract:
The objectives of this project were to develop nondestructive methods for detection of internal properties and firmness of fruits and vegetables. One method was based on a soft piezoelectric film transducer developed at the Technion for analysis of fruit response to low-energy excitation. The second method was a dot-matrix piezoelectric transducer of North Carolina State University, developed for contact-pressure analysis of fruit during impact. Two research teams, one in Israel and the other in North Carolina, coordinated their research effort according to the specific objectives of the project, to develop and apply the two complementary methods for quality control of agricultural commodities. In Israel: An improved firmness testing system was developed and tested with tropical fruits. The new system included an instrumented fruit-bed of three flexible piezoelectric sensors and miniature electromagnetic hammers, which served as fruit support and low-energy excitation device, respectively. Resonant frequencies were detected for determination of a firmness index. Two new acoustic parameters were developed for evaluation of fruit firmness and maturity: a damping ratio and a centroid of the frequency response. Experiments were performed with avocado and mango fruits. The internal damping ratio, which may indicate fruit ripeness, increased monotonically with time, while resonant frequencies and firmness indices decreased with time. Fruit samples were tested daily by a destructive penetration test. A fairly high correlation was found in tropical fruits between the penetration force and the new acoustic parameters; a lower correlation was found between this parameter and the conventional firmness index. Improved table-top firmness testing units, Firmalon, with a data-logging system and on-line data analysis capacity, were built. The new device was used for the full-scale experiments in the next two years, ahead of the original program and BARD timetable. Close cooperation was initiated with local industry for development of both off-line and on-line sorting and quality control of more agricultural commodities. Firmalon units were produced and operated in major packaging houses in Israel, Belgium and Washington State, on mango, avocado, apples, pears, tomatoes, melons and some other fruits, to gain field experience with the new method. The accumulated experimental data from all these activities are still being analyzed, to improve firmness sorting criteria and shelf-life prediction curves for the different fruits. The test program in commercial CA storage facilities in Washington State included seven apple varieties: Fuji, Braeburn, Gala, Granny Smith, Jonagold, Red Delicious and Golden Delicious, plus the D’Anjou pear variety. FI master-curves could be developed for the Braeburn, Gala, Granny Smith and Jonagold apples. These fruits showed a steady ripening process during the test period. Yet more work should be conducted to reduce scatter in the data and to determine the confidence limits of the method. The nearly constant FI of Red Delicious and the fluctuating FI of Fuji apples should be re-examined. Three sets of experiments were performed with Flandria tomatoes. Despite the complex structure of the tomatoes, the acoustic method could be used for firmness evaluation and to follow the ripening evolution with time. Close agreement was achieved between the auction expert evaluation and that of the nondestructive acoustic test, where a firmness index of 4.0 or more indicated grade-A tomatoes.
More work is being performed to refine the sorting algorithm and to develop a general ripening scale for automatic grading of tomatoes for the fresh fruit market. Galia melons were tested in Israel under simulated export conditions. It was concluded that the Firmalon is capable of detecting the ripening of melons nondestructively and of sorting out the defective fruits from the export shipment. The cooperation with local industry resulted in development of an automatic on-line prototype of the acoustic sensor that may be incorporated with the export quality control system for melons. More interesting is the development of the remote firmness sensing method for sealed CA cool-rooms, where most of the full-year fruit yield is stored for off-season consumption. Hundreds of ripening monitor systems have been installed in major fruit storage facilities and are now being evaluated by the consumers. If successful, the new method may cause a major change in long-term fruit storage technology. More uses of the acoustic test method have been considered: monitoring fruit maturity and harvest time, testing fruit samples or each individual fruit when entering the storage facilities, packaging house and auction, and in the supermarket. This approach may result in a full line of equipment for nondestructive quality control of fruits and vegetables, from the orchard or the greenhouse, through the entire sorting, grading and storage process, up to the consumer table. The developed technology offers a tool to determine the maturity of the fruits nondestructively by monitoring their acoustic response to mechanical impulse on the tree. A special device was built and preliminarily tested on mango fruit. Further development is needed to produce a portable, hand-operated sensing method for this purpose. In North Carolina: An analysis method based on an auto-regressive (AR) model was developed for detecting the first resonance of fruit from their response to mechanical impulse. The algorithm included a routine that detects the first resonant frequency from as many sensors as possible. Experiments on Red Delicious apples were performed and their firmness was determined. The AR method allowed the detection of the first resonance and could be fast enough to be utilized in a real-time sorting machine. Yet further study is needed to improve the method’s search algorithm. An impact contact-pressure measurement system and a neural network (NN) identification method were developed to investigate the relationships between surface pressure distributions on selected fruits and their respective internal textural qualities. A piezoelectric dot-matrix pressure transducer was developed for the purpose of acquiring time-sampled pressure profiles during impact. The acquired data were transferred into a personal computer, and accurate visualization of the animated data was presented. A preliminary test with 10 apples was performed. Measurements were made by the contact-pressure transducer in two different positions. Complementary measurements were made on the same apples using the Firmalon and Magness-Taylor (MT) testers. A three-layer neural network was designed. Two-thirds of the contact-pressure data were used as training input, with the corresponding MT data as training targets. The remaining data were used as NN checking data. Six samples randomly chosen from the ten measured samples and their corresponding Firmalon values were used as the NN training and target data, respectively.
The remaining four samples’ data were input to the NN. The NN results were consistent with the Firmness Tester values, so if more training data were obtained, the output should be more accurate. In addition, the Firmness Tester values were not consistent with the MT firmness tester values. The NN method developed in this study appears to be a useful tool to emulate the MT firmness test results without destroying the apple samples. To get a more accurate estimation of MT firmness, a much larger training data set is required. When the larger sensitive area of the pressure sensor being developed in this project becomes available, the entire contact 'shape' will provide additional information and the neural network results would be more accurate. It has been shown that the impact information can be utilized in the determination of internal quality factors of fruit. Until now,
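The acoustic firmness measurements described above rest on extracting a resonant frequency and a damping ratio from the fruit’s impulse response. The sketch below shows one textbook way to obtain both (FFT peak plus logarithmic decrement) on a synthetic damped sinusoid; it is not the Firmalon or AR-model code, and all signal parameters are assumed for demonstration.

```python
# Resonance and damping-ratio estimation from a synthetic impulse response
# (illustrative only; not the project's AR-model implementation).
import numpy as np

fs = 8000                      # sampling rate [Hz] (assumed)
t = np.arange(0, 0.2, 1 / fs)
f0, zeta = 400.0, 0.03         # "true" resonance and damping (assumed)
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

# First resonance: dominant bin of the magnitude spectrum
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_res = freqs[np.argmax(spectrum)]

# Damping ratio via logarithmic decrement between successive positive peaks
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
peak_vals = x[peaks]
delta = np.mean(np.log(peak_vals[:-1] / peak_vals[1:]))   # decrement per period
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)

print(f"resonance ~ {f_res:.0f} Hz, damping ratio ~ {zeta_est:.3f}")
```

A rising damping ratio with a falling resonant frequency is exactly the ripening signature the abstract reports for avocado and mango.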
APA, Harvard, Vancouver, ISO, and other styles