Selection of scientific literature on the topic "RDF Data"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "RDF Data".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "RDF Data"

1

Jun, Hee-Gook, and Dong-Hyuk Im. "Semantics-Preserving RDB2RDF Data Transformation Using Hierarchical Direct Mapping". Applied Sciences 10, no. 20 (October 12, 2020): 7070. http://dx.doi.org/10.3390/app10207070.

Abstract:
Direct mapping is an automatic transformation method used to generate resource description framework (RDF) data from relational data. In the field of direct mapping, semantics preservation is critical to ensure that the mapping method outputs RDF data without information loss or incorrect semantic data generation. However, existing direct-mapping methods have problems that prevent semantics preservation in specific cases. For this reason, a mapping method is developed to perform a semantics-preserving transformation of relational databases (RDB) into RDF data without semantic information loss and to reduce the volume of incorrect RDF data. This research reviews cases that do not generate semantics-preserving results and arranges the corresponding problems into categories. The paper defines lemmas that represent the features of RDF data transformation to resolve those problems. Based on the lemmas, this work develops a hierarchical direct-mapping method that strictly abides by the definition of semantics preservation and prevents semantic information loss, reducing the volume of incorrect RDF data generated. Experiments demonstrate the capability of the proposed method to perform semantics-preserving RDB2RDF data transformation, generating semantically accurate results. Future studies should involve the development of synchronization methods to achieve RDF data consistency when the original RDB data are modified.
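
To make the direct-mapping idea described in this abstract concrete, the following minimal sketch shows how a single relational row might be turned into RDF triples. It is an illustration only, not the authors' hierarchical method; the table name, column names, and base IRI are invented.

```python
# Minimal direct-mapping sketch: one relational row becomes RDF triples.
# Table name, column names, and the base IRI are illustrative assumptions.
BASE = "http://example.org/"

def direct_map(table, pk_col, row):
    """Map a row (dict of column -> value) to (subject, predicate, object) triples."""
    subject = f"<{BASE}{table}/{row[pk_col]}>"
    triples = []
    for column, value in row.items():
        if column == pk_col:
            continue  # the key only identifies the subject
        predicate = f"<{BASE}{table}#{column}>"
        triples.append((subject, predicate, f'"{value}"'))
    return triples

for s, p, o in direct_map("book", "id", {"id": 7, "title": "RDF Basics", "year": 2020}):
    print(s, p, o, ".")
```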
2

Natarajan, Senthilselvan, Subramaniyaswamy Vairavasundaram, Yuvaraja Teekaraman, Ramya Kuppusamy, and Arun Radhakrishnan. "Schema-Based Mapping Approach for Data Transformation to Enrich Semantic Web". Wireless Communications and Mobile Computing 2021 (November 10, 2021): 1–15. http://dx.doi.org/10.1155/2021/8567894.

Abstract:
The modern web expects data in Resource Description Framework (RDF) format, a machine-readable form that is easy to share and reuse without human intervention. However, most information is still available in relational form. Existing conventional methods transform data from RDB to RDF using instance-level mapping, which has not yielded the expected results because of poor mapping. Hence, in this paper, a novel schema-based RDB-RDF mapping method (relational database to Resource Description Framework) is proposed, an improved approach for transforming a relational database into the Resource Description Framework. It provides both data materialization and on-demand mapping. RDB-RDF reduces the data retrieval time for non-primary-key searches by using schema-level mapping. The resultant mapped RDF graph presents the relational database as a conceptual schema and maintains the instance triples as a data graph. This mechanism, known as data materialization, suits static datasets well. To get data in a dynamic environment, query translation (on-demand mapping) is preferable to whole-data conversion. The proposed approach directly converts a SPARQL query into an SQL query using the mapping descriptions available in the proposed system. The mapping description is the key component of the proposed system and is responsible for quick data retrieval and query translation. The join expression introduced in the proposed RDB-RDF mapping method efficiently handles all complex operations with primary and foreign keys. Experimental evaluation is done on the graphics designer database. The results show that the proposed schema-based RDB-RDF mapping method accomplishes a more comprehensible mapping than conventional methods by dissolving structural and operational differences.
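
The "mapping description" driving such a schema-level approach can be pictured as a small lookup structure that ties RDF classes and predicates to tables and columns. The sketch below is a guess at its general shape for illustration; it is not the paper's actual format.

```python
# Hypothetical schema-level mapping description: RDF vocabulary terms are
# tied to relational schema elements, so queries can be rewritten on demand
# instead of materializing all the data as triples.
MAPPING = {
    "classes": {"ex:Designer": "designer"},      # class -> table
    "predicates": {
        "ex:name": ("designer", "name"),         # predicate -> (table, column)
        "ex:worksOn": ("designer", "project_id"),
    },
}

def table_for(rdf_class):
    return MAPPING["classes"][rdf_class]

def column_for(predicate):
    return MAPPING["predicates"][predicate]

print(table_for("ex:Designer"))  # designer
print(column_for("ex:name"))     # ('designer', 'name')
```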
3

Chystiakova, I. S. "Implementation of mappings between the description logic and the binary relational data model on the RDF level". Problems in Programming, no. 4 (December 2020): 41–54. http://dx.doi.org/10.15407/pp2020.04.041.

Abstract:
This paper is dedicated to the data integration problem. It discusses the practical implementation of mappings between description logic and a binary relational data model, a method formulated earlier at a theoretical level. A practical technique to test mapping engines using RDF is provided. To transform the constructs of the description logic ALC and its main extensions into RDF triples, the OWL 2-to-RDF mappings are used. To convert an RDB to an RDF graph, the R2R Mapping Language (R2R ML) was chosen. The mappings of DL ALC and its main extensions to RDF triples are described, as is the mapping of DL axioms into RDF triples. The main difficulties in describing DL-to-RDF transformations are given in the corresponding section. For each constructor of concepts and roles, a corresponding expression in OWL 2 and its mapping into an RDF triple are given. A schematic representation of the resulting RDF graph for each mapping is created. The paper also provides an overview of existing methods that relate to the use of RDF when mapping RDB to ontology and vice versa.
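
As one concrete instance of such a mapping, a DL concept inclusion C ⊑ D corresponds to an rdfs:subClassOf triple under the standard OWL 2 RDF mapping. The sketch below assumes the Python rdflib library and invented IRIs.

```python
# The DL axiom Manager ⊑ Employee rendered as an RDF triple via the
# OWL 2 RDF mapping (rdfs:subClassOf). Requires rdflib; IRIs are invented.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)
g.add((EX.Manager, RDFS.subClassOf, EX.Employee))
print(g.serialize(format="turtle"))
```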
4

Ri Kim, Ju, Zhanfang Zhao, and Sung Kook Han. "SPARQL query processing in relational databases". International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 84. http://dx.doi.org/10.14419/ijet.v7i2.33.13860.

Abstract:
Background/Objectives: Mapping RDB to RDF has become important for populating Linked Data more efficiently. This paper shows how to implement a SPARQL endpoint in an RDB using a conceptual-level mapping approach. Methods/Statistical analysis: Many diverse approaches and related languages for mapping RDB to RDF have been proposed. The prominent achievements are the two draft standards, Direct Mapping and R2RML, proposed by the W3C RDB2RDF Working Group. This paper analyzes these conventional mapping approaches and proposes a new approach based on schema mapping. The paper also presents SPARQL query processing in RDB. Findings: There are distinct differences between instance-level mapping and conceptual-level mapping for RDB2RDF. Data redundancy in instance-level mapping causes many inevitable problems during the mapping procedure, whereas conceptual-level mapping provides a straightforward and efficient way. The ER model in RDB and the RDF model in Linked Data have an obvious similarity. The ER model describes entities and relationships and is the conceptual schema of an RDB. The RDF model consists of three parts, subject, predicate, and object, and is the standard model for data interchange on the Web. Entities in the ER model and subjects in the RDF model both stand for things in the real world, and both relationships in the ER model and predicates in the RDF model describe the relations between things. Since RDB and RDF share a similar modeling approach at the schema level, it is reasonable to base the mapping approach on the RDB schema. This kind of conceptual-level mapping also enables efficient SPARQL query processing in RDB. Improvements/Applications: The paper realizes SPARQL query processing in RDB based on conceptual-level mapping. The query experiments show that it is a concise and efficient way to populate Linked Data.
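
Continuing the hypothetical mapping description sketched after entry 2, a single SPARQL triple pattern with a known predicate could be rewritten into SQL roughly as follows. This toy translator only illustrates the principle of conceptual-level query translation; it is not the paper's system.

```python
# Toy SPARQL-to-SQL rewriting of one triple pattern such as "?s ex:name ?o",
# reusing the idea of a predicate -> (table, column) mapping description.
MAPPING = {"ex:name": ("designer", "name")}

def triple_pattern_to_sql(subject_var, predicate, object_var):
    table, column = MAPPING[predicate]
    # The row key stands in for the subject; the mapped column for the object.
    return (f"SELECT id AS {subject_var.lstrip('?')}, "
            f"{column} AS {object_var.lstrip('?')} FROM {table}")

print(triple_pattern_to_sql("?s", "ex:name", "?o"))
# SELECT id AS s, name AS o FROM designer
```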
5

Gayo, Jose Emilio Labra, Eric Prud'hommeaux, Iovka Boneva, and Dimitris Kontokostas. "Validating RDF Data". Synthesis Lectures on the Semantic Web: Theory and Technology 7, no. 1 (September 28, 2017): 1–328. http://dx.doi.org/10.2200/s00786ed1v01y201707wbe016.

6

Ri Kim, Ju, and Sung Kook Han. "R2RS: schema-based relational databases mapping to linked datasets". International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 119. http://dx.doi.org/10.14419/ijet.v7i2.33.13868.

Abstract:
Background/Objectives: The vast amount of high-quality data stored in relational databases (RDB) is the primary resource for Linked Open Data (LOD) datasets. This paper proposes a schema-based mapping approach from RDB to RDF, which provides succinct and efficient mapping. Methods/Statistical analysis: Various approaches, languages, and tools for mapping RDB to LOD have been proposed in recent years. This paper surveys and analyzes classic mapping approaches and languages such as Direct Mapping and R2RML. Mapping approaches can be categorized by their data modeling. After analyzing the conventional RDB-RDF mapping methods, this paper proposes a new mapping method and discusses its typical features and applications. Findings: There are two types of mapping approaches for the translation of RDB to RDF: instance-based and schema-based. Instance-based mapping approaches generate large amounts of RDF graphs by means of mapping rules. These approaches cause data redundancy, since the same data is stored both as RDB and as RDF, and data inconsistency problems easily arise when update operations occur. Schema-based mapping approaches can effectively avoid data redundancy, since the mapping is accomplished at the conceptual schema level. The architecture of a SPARQL endpoint based on the schema mapping approach consists of five phases: (1) generation of a mapping description based on mapping rules; (2) SPARQL query statements for RDF graph patterns; (3) translation of the SPARQL query into an SQL query; (4) execution of the SQL query in the RDB; (5) interpretation of the SQL query result into JSON-LD format. Experiments show that the schema-based mapping approach is a straightforward, succinct, and efficient mapping method for RDB2RDF. Improvements/Applications: This paper proposes a schema-based mapping approach called R2RS, which shows better performance than conventional mapping methods. In addition, R2RS provides an efficient implementation of a SPARQL endpoint in RDB.
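
Phase (5), interpreting an SQL result as JSON-LD, might look roughly like the sketch below; the @context, property names, and IRIs are invented for illustration.

```python
# Sketch of the final phase: packaging SQL result rows as JSON-LD.
# The @context and IRIs are illustrative assumptions.
import json

rows = [{"id": 1, "name": "Ada"}]  # stand-in for an SQL result set
doc = {
    "@context": {"name": "http://example.org/name"},
    "@graph": [
        {"@id": f"http://example.org/designer/{r['id']}", "name": r["name"]}
        for r in rows
    ],
}
print(json.dumps(doc, indent=2))
```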
7

Soliman, Hatem, Izhar Ahmed Khan, and Yasir Hussain. "Global Sensitivity Analysis for Fuzzy RDF Data". International Journal of Software Engineering and Knowledge Engineering 31, no. 8 (August 2021): 1119–44. http://dx.doi.org/10.1142/s0218194021500352.

Abstract:
The resource description framework (RDF) was adopted by the World Wide Web Consortium (W3C) as an essential semantic web standard, together with the RDF schema. It imposes hard semantics on descriptions and handles crisp metadata; however, real-world information is often vague or ambiguous. Consequently, fuzzy RDF helps deal with such data by transforming crisp values into a fuzzy set. A method for analyzing fuzzy RDF data is proposed in this paper. To this end, we first decompose the RDF into fuzzy RDF variables. Second, we design a model for global sensitivity analysis based on the decomposition of fuzzy RDF, which works out the ambiguities of fuzzy RDF data. The proposed global sensitivity analysis model provides the importance of fuzzy RDF data by considering the response function's structure and reselecting it to a certain degree. A practical tool for sensitivity analysis of fuzzy RDF data has also been implemented based on the proposed model.
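
A fuzzy RDF statement is commonly pictured as an ordinary triple annotated with a membership degree in [0, 1]. The sketch below illustrates that generic idea (plus a crisp "alpha-cut" view of it); it is not the paper's decomposition model.

```python
# Generic fuzzy RDF sketch: each statement carries a membership degree.
data = [
    ("ex:photo42", "ex:depicts", "ex:Sunset", 0.80),
    ("ex:photo42", "ex:depicts", "ex:Beach", 0.35),
]

def alpha_cut(fuzzy_triples, alpha):
    """Crisp view: keep statements whose degree is at least alpha."""
    return [(s, p, o) for s, p, o, degree in fuzzy_triples if degree >= alpha]

print(alpha_cut(data, 0.5))  # [('ex:photo42', 'ex:depicts', 'ex:Sunset')]
```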
8

Fernández, Javier D., Miguel A. Martínez-Prieto, Pablo de la Fuente Redondo, and Claudio Gutiérrez. "Characterising RDF data sets". Journal of Information Science 44, no. 2 (January 9, 2017): 203–29. http://dx.doi.org/10.1177/0165551516677945.

Abstract:
The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
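
Simple structural metrics of the kind studied here, such as predicate usage counts and per-subject out-degree, can be computed directly from a triple list; the sketch below uses toy data for illustration.

```python
# Toy structural metrics over an RDF triple list, in the spirit of
# dataset characterisation: predicate frequencies and subject out-degrees.
from collections import Counter

triples = [
    ("ex:a", "ex:knows", "ex:b"),
    ("ex:a", "ex:knows", "ex:c"),
    ("ex:b", "ex:name", '"Bea"'),
]

predicate_counts = Counter(p for _, p, _ in triples)
out_degrees = Counter(s for s, _, _ in triples)
print(predicate_counts)  # Counter({'ex:knows': 2, 'ex:name': 1})
print(out_degrees)       # Counter({'ex:a': 2, 'ex:b': 1})
```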
9

Meng, Xiangfu, Lin Zhu, Qing Li, and Xiaoyan Zhang. "Spatiotemporal RDF Data Query Based on Subgraph Matching". ISPRS International Journal of Geo-Information 10, no. 12 (December 12, 2021): 832. http://dx.doi.org/10.3390/ijgi10120832.

Abstract:
Resource Description Framework (RDF), a standard metadata description framework proposed by the World Wide Web Consortium (W3C), is suitable for modeling and querying Web data. With the growing importance of RDF data in Web data management, there is an increasing need to model and query RDF data. Previous approaches mainly focus on querying plain RDF, yet a large amount of RDF data has spatial and temporal features. It is therefore important to study spatiotemporal RDF data query approaches. In this paper, we first formally define spatiotemporal RDF data and construct a spatiotemporal RDF model, st-RDF, that is used to represent and manipulate spatiotemporal RDF data. Second, we present a spatiotemporal RDF query algorithm, stQuery, based on subgraph matching. This algorithm can quickly determine whether the query result is empty for queries whose temporal or spatial range exceeds a specific range, by adopting a preliminary query-filtering mechanism in the query process. Third, we propose a sorting strategy that calculates the matching order of query nodes to speed up the subgraph matching. Finally, we conduct experiments in terms of effectiveness and query efficiency. The experimental results show the performance advantages of our approach.
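
The preliminary filtering idea, rejecting a query whose temporal window cannot overlap the dataset's time range before any subgraph matching is attempted, can be sketched as follows; the interval representation is an assumption.

```python
# Sketch of a preliminary temporal filter: if the query window does not
# overlap the dataset's time range, the result is empty and the costly
# subgraph matching step can be skipped.
DATASET_RANGE = (2000, 2015)  # assumed min/max years in the data

def may_have_answers(query_range, dataset_range=DATASET_RANGE):
    q_start, q_end = query_range
    d_start, d_end = dataset_range
    return q_start <= d_end and d_start <= q_end  # interval overlap test

print(may_have_answers((2016, 2020)))  # False -> skip matching
print(may_have_answers((2010, 2012)))  # True  -> run subgraph matching
```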
10

Permatasari, Ayu Novira Shinta, and Herlina Jayadianti. "Direct Mapping and Turtle Ontology for Management of Indonesian Movies Knowledge". MATEC Web of Conferences 372 (2022): 04011. http://dx.doi.org/10.1051/matecconf/202237204011.

Abstract:
Web 2.0, the conventional web, has developed into Web 3.0, known as the semantic web. Semantic web technology requires an ontology as the backbone for understanding a concept of knowledge. In ontology computing, the Resource Description Framework (RDF) is used as a framework to define web resources in triple form (subject-predicate-object) so that they can form metadata and describe the information contained on the web. The data used in this study is Indonesian movie data obtained from Kaggle in Comma Separated Values (.csv) format, with a total of 242 lines. The data processing is carried out by direct mapping with the help of DB2Triples to generate data from MySQL into RDF in Turtle (.ttl) format. The results show that direct mapping can be used to map data from RDB to RDF semi-automatically. The data is mapped into RDF according to the schema of the RDB without input from the user, so the result cannot be adjusted to the needs or wishes of the user. Furthermore, the RDF generated in the Turtle file format has formed classes and individuals automatically, but to be usable as a semantic web resource, the RDF needs to be processed manually to form data properties and object properties, as well as to assign instance values.

Dissertations on the topic "RDF Data"

1

Abedjan, Ziawasch. "Improving RDF data with data mining". PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7133/.

Abstract:
Linked Open Data (LOD) comprises many and often large public data sets and knowledge bases. Those datasets are mostly presented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied for sales analysis on transactional databases, is a promising and novel technique to explore such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of "mining configurations", which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage, providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another way to improve the usage of RDF data is to improve existing ontologies. Here, we present an association rule based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
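
One simple mining configuration of the kind described, treating each subject as a "transaction" whose "items" are its predicates, can be sketched with plain support counting; the data and thresholds are invented.

```python
# Mining-configuration sketch: subjects as transactions, predicates as
# items; frequently co-occurring predicate pairs hint at schema patterns.
from collections import defaultdict
from itertools import combinations

triples = [
    ("ex:a", "ex:name", '"A"'), ("ex:a", "ex:email", '"a@x"'),
    ("ex:b", "ex:name", '"B"'), ("ex:b", "ex:email", '"b@x"'),
    ("ex:c", "ex:name", '"C"'),
]

transactions = defaultdict(set)
for s, p, _ in triples:
    transactions[s].add(p)

support = defaultdict(int)
for predicates in transactions.values():
    for pair in combinations(sorted(predicates), 2):
        support[pair] += 1

print(dict(support))  # {('ex:email', 'ex:name'): 2}
```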
2

Qiao, Shi. "Querying Graph Structured RDF Data". Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1447198654.

3

Frommhold, Marvin, Piris Rubén Navarro, Natanael Arndt, Sebastian Tramp, Niklas Petersen, and Michael Martin. "Towards versioning of arbitrary RDF data". Universität Leipzig, 2016. https://ul.qucosa.de/id/qucosa%3A15777.

Abstract:
Coherent and consistent tracking of provenance data and in particular update history information is a crucial building block for any serious information system architecture. Version Control Systems can be a part of such an architecture enabling users to query and manipulate versioning information as well as content revisions. In this paper, we introduce an RDF versioning approach as a foundation for a full featured RDF Version Control System. We argue that such a system needs support for all concepts of the RDF specification including support for RDF datasets and blank nodes. Furthermore, we placed special emphasis on the protection against unperceived history manipulation by hashing the resulting patches. In addition to the conceptual analysis and an RDF vocabulary for representing versioning information, we present a mature implementation which captures versioning information for changes to arbitrary RDF datasets.
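
The core of such a scheme, a patch of added and removed triples whose serialization is hashed so that later history manipulation becomes detectable, can be sketched as follows; the canonicalization is deliberately naive (real RDF versioning must handle blank nodes and datasets).

```python
# Naive RDF patch sketch: diff two versions of a triple set and hash the
# patch. A real system needs a canonical serialization (blank nodes!).
import hashlib

def make_patch(old, new):
    added = sorted(new - old)
    removed = sorted(old - new)
    body = "\n".join(["A " + " ".join(t) for t in added] +
                     ["D " + " ".join(t) for t in removed])
    return body, hashlib.sha256(body.encode("utf-8")).hexdigest()

v1 = {("ex:a", "ex:name", '"A"')}
v2 = {("ex:a", "ex:name", '"A2"')}
patch, digest = make_patch(v1, v2)
print(patch)
print("patch hash:", digest[:16], "...")
```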
4

Herrera, Jose Eduardo Talavera. "An Architecture for RDF Data Sources Recommendation". Pontifícia Universidade Católica do Rio de Janeiro, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21367@1.

Abstract:
In the Web publishing process of data it is recommended to link the data from different sources using similar resources that describe a domain in common. However, the growing number of published data sets on the Web has made the data discovery and data selection tasks increasingly complex. Moreover, the distributed and interconnected nature of the data makes its understanding and analysis time-consuming. In this context, this work aims to provide a Web architecture for identifying RDF data sources with the goal of improving the publishing, interconnection, and data exploration processes within the Linked Open Data. Our approach utilizes the MapReduce computing model on top of the cloud computing paradigm. In this manner, we are able to make parallel keyword searches over existing semantic data indexes available on the web, which allows us to identify candidate sources for linking the data. Through this approach, it was possible to integrate different semantic web tools and relevant data sources in a search process, and also to relate topics of interest defined by the user. In order to achieve our objectives it was necessary to index and analyze text to improve the search for resources in the Linked Open Data. To show the effectiveness of our approach we developed a case study using a subset of data from a source in the Linked Open Data through its SPARQL endpoint service. The results of our work reveal that the generation and usage of data source statistics make a great difference within the search process. These statistics help the user in the process of choosing individuals. Furthermore, a specialized keyword extraction process is run for each individual in order to create different search processes using the semantic index. We show the scalability of our RDF source recommendation process by sampling several individuals.
5

Kaithi, Bhargavacharan Reddy. "Knowledge Graph Reasoning over Unseen RDF Data". Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1571955816559707.

6

Espinola, Roger Humberto Castillo. "Indexing RDF data using materialized SPARQL queries". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16582.

Abstract:
In this thesis, we propose to use materialized queries as a special index structure for RDF data. We strive to reduce the query processing time by minimizing the number of comparisons between the query and the RDF dataset. We also emphasize the role of cost models in the selection of execution plans as well as index sets for a given workload. We provide an overview of the materialized view selection problem in relational databases and discuss its application to the optimization of query processing. We introduce RDFMatView, a framework for answering SPARQL queries using materialized views as indexes. We provide algorithms to discover those indexes that can be used to process a given query and we develop different strategies to integrate these views in query execution plans. The selection of an efficient execution plan is the topic of our second major contribution. We introduce three different cost models designed for SPARQL query processing with materialized views. A detailed comparison of these models reveals that a model based on index and predicate statistics provides the most accurate cost estimation. We show that selecting an execution plan using this cost model reduces processing time by several orders of magnitude compared to standard SPARQL query processing. Finally, we propose a simple yet effective strategy for the materialized view selection problem applied to RDF data. Based on a given workload of SPARQL queries we provide algorithms for selecting a set of indexes that minimizes the workload processing time. We create candidate indexes by retrieving all connected components from query patterns. Our evaluation shows that using the set of suggested indexes usually achieves larger runtime savings than other index sets for the given workload.
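
The basic mechanism, answering a query from a stored result when it matches a materialized view instead of scanning the dataset, can be caricatured as a cache keyed by the query pattern. Matching real SPARQL queries against views is of course far more involved than this sketch.

```python
# Toy materialized-view index: precomputed results for frequent triple
# patterns are reused, avoiding a scan of the full RDF dataset.
materialized = {
    ("?s", "ex:name", "?o"): [("ex:a", '"A"'), ("ex:b", '"B"')],
}

def answer(pattern, dataset):
    if pattern in materialized:           # index hit: no scan needed
        return materialized[pattern]
    _, p, _ = pattern                     # fallback: scan the dataset
    return [(s, o) for s, tp, o in dataset if tp == p]

print(answer(("?s", "ex:name", "?o"), dataset=[]))
```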
7

Sherif, Mohamed Ahmed Mohamed. "Automating Geospatial RDF Dataset Integration and Enrichment". Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-215708.

Abstract:
Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media, and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data. The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growth of the number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering, and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches, and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web, with a focus on geo-spatial data. The first challenge that motivates our work is the lack of measures that use geographic data for linking geo-spatial knowledge bases. This is partly due to geo-spatial resources being described by means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by evaluating the existing literature for point-set measures that can be used to measure the similarity of vector geometries. Then, we present and evaluate the ten measures that we derived from the literature on samples of three real knowledge bases. The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geo-spatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that allows discovering links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. Then, the approach makes use of these links to detect resources with probably erroneous or missing information, which is finally corrected or added. The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load-balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for automatically updating the links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets. The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples. Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches are peer-reviewed and published in a conference or journal paper. Throughout this thesis, we detail the ideas, implementation, and evaluation of each of the approaches, discuss each approach, and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
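
A typical point-set measure of the kind evaluated here is the symmetric Hausdorff distance between two vector geometries; the sketch below implements it naively for small point sets (the coordinates are invented).

```python
# Naive symmetric Hausdorff distance between two point sets, a classic
# measure for comparing vector geometries. O(n*m); fine for a sketch.
from math import dist  # Python 3.8+

def hausdorff(a, b):
    def directed(xs, ys):
        return max(min(dist(x, y) for y in ys) for x in xs)
    return max(directed(a, b), directed(b, a))

poly1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
poly2 = [(0.1, 0.0), (1.0, 0.2), (0.9, 1.1)]
print(round(hausdorff(poly1, poly2), 3))
```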
8

Abedjan, Ziawasch. "Improving RDF data with data mining". Supervised by Felix Naumann. Potsdam: Universitätsbibliothek der Universität Potsdam, 2014. http://d-nb.info/1059014122/34.

9

Morgan, Juston. "Visual language for exploring massive RDF data sets". Pullman, Wash.: Washington State University, 2010. http://www.dissertations.wsu.edu/Thesis/Spring2010/J_Morgan_041210.pdf.

Abstract:
Thesis (M.S. in computer science)--Washington State University, May 2010.
Title from PDF title page (viewed on July 12, 2010). "School of Engineering and Computer Science." Includes bibliographical references (p. 33-34).
10

Fan, Zhengjie. "Concise Pattern Learning for RDF Data Sets Interlinking". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM013/document.

Abstract:
There are many data sets being published on the web with Semantic Web technology. The data sets usually contain analogous data which represent similar resources in the world. If these data sets are linked together by correctly identifying the similar instances, users can conveniently query data through a uniform interface, as if they were querying a single database. However, finding correct links is very challenging because web data sources usually have heterogeneous ontologies maintained by different organizations. Many solutions have been proposed for this problem. (1) One straightforward idea is to compare the attribute values of instances for identifying links, yet it is impossible to compare all possible pairs of attribute values. (2) Another common strategy is to compare instances with correspondences found by instance-based ontology matching, which can generate attribute correspondences based on overlapping ranges between two attributes; however, this easily produces incomparable attribute correspondences or misses comparable ones. (3) Many existing solutions leverage Genetic Programming to construct interlinking patterns for comparing instances, but the running times of these interlinking methods are usually long. In this thesis, an interlinking method is proposed to interlink instances across different data sets, based on both statistical learning and symbolic learning. On the one hand, the method discovers potential comparable attribute correspondences of each class correspondence via a K-medoids clustering algorithm with instance value statistics. We adopt K-medoids because of its high working efficiency and its high tolerance of irregular and even incorrect data. K-medoids classifies the attributes of each class into several groups according to their statistical value features. Groups from different classes are mapped when they have similar statistical value features, to determine potential comparable attribute correspondences. The clustering procedure effectively narrows the range of candidate attribute correspondences. On the other hand, our solution also leverages a symbolic learning method called Version Space. Version Space is an iterative learning model that searches for the interlinking pattern from two directions. Our design can solve the interlinking task even when there is no single compatible conjunctive interlinking pattern that covers all assessed correct links in a concise format. The interlinking solution is evaluated with large-scale real-world data from IM@OAEI and CKAN. Experiments confirm that the solution with only 1% of sample links already reaches a high accuracy (up to 0.94-0.99 F-measure). The F-measure quickly converges, improving on other state-of-the-art approaches by nearly 10 percent.
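
The statistical half of the method, grouping attributes by simple value statistics so that only plausibly comparable attribute pairs are considered for interlinking, can be caricatured as follows; the two features used here are assumptions, not the thesis's actual feature set.

```python
# Caricature of statistics-based attribute profiling: attributes with
# similar value profiles become candidate correspondences for linking.
def profile(values):
    """Crude feature vector: share of numeric values, mean string length."""
    numeric = sum(v.replace(".", "", 1).isdigit() for v in values) / len(values)
    mean_len = sum(len(v) for v in values) / len(values)
    return (numeric, mean_len)

ds1_name = ["Alice Smith", "Bob Jones"]
ds2_label = ["A. Smith", "R. Jones"]
ds2_year = ["1984", "1990"]

print(profile(ds1_name), profile(ds2_label), profile(ds2_year))
# name/label profiles are close; the year attribute is clearly not comparable.
```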

Books on the topic "RDF Data"

1

Gayo, Jose Emilio Labra, Eric Prud'hommeaux, Iovka Boneva, and Dimitris Kontokostas. Validating RDF Data. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-031-79478-0.

2

Kaoudi, Zoi, Ioana Manolescu, and Stamatis Zampetakis. Cloud-Based RDF Data Management. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01875-6.

3

Ma, Zongmin, Guanfeng Li, and Ruizhe Ma. Modeling and Management of Fuzzy Semantic RDF Data. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11669-8.

4

Evans, Colin. Programming the Semantic Web: Build Flexible Applications with Graph Data. Sebastopol, USA: O'Reilly, 2009.

5

RDA vocabularies for a twenty-first-century data environment. Chicago, IL: ALATechSource, 2010.

6

Wang, Shenghui, Jeffrey Mixter, and OCLC Research, eds. Library linked data in the cloud: OCLC's experiments with new models of resource description. San Rafael, California: Morgan & Claypool Publishers, 2015.

7

Kopitz, Dietmar. RDS, the radio data system. Boston: Artech House, 1999.

8

White, Norman W., ed. Broadcast data systems: Teletext and RDS. London: Butterworths, 1990.

9

Mothersole, Peter L. Broadcast data systems: Teletext and RDS. Oxford: Focal Press, 1992.

10

Mothersole, Peter L. Broadcast data systems: Teletext and RDS. Oxford: Focal Press, 1990.


Book chapters on the topic "RDF Data"

1

Martínez-Prieto, Miguel A., Javier D. Fernández, Antonio Hernández-Illera, and Claudio Gutiérrez. "RDF Compression". In Encyclopedia of Big Data Technologies, 1–11. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_62-1.

2

Martínez-Prieto, Miguel A., Javier D. Fernández, Antonio Hernández-Illera, and Claudio Gutiérrez. "RDF Compression". In Encyclopedia of Big Data Technologies, 1368–78. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_62.

3

Giannini, Silvia. "RDF Data Clustering". In Business Information Systems Workshops, 220–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41687-3_21.

4

Ioannidis, Theofilos. "Geospatial RDF Stores". In Geospatial Data Science, 221–40. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581906.3581920.

5

Sakr, Sherif, Marcin Wylot, Raghava Mutharaju, Danh Le Phuoc, and Irini Fundulaki. "Centralized RDF Query Processing". In Linked Data, 33–49. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73515-3_3.

6

Sakr, Sherif, Marcin Wylot, Raghava Mutharaju, Danh Le Phuoc, and Irini Fundulaki. "Distributed RDF Query Processing". In Linked Data, 51–83. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73515-3_4.

7

Gayo, Jose Emilio Labra, Eric Prud'hommeaux, Iovka Boneva, and Dimitris Kontokostas. "Data Quality". In Validating RDF Data, 27–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-031-79478-0_3.

8

Futrelle, Joe. "Harvesting RDF Triples". In Provenance and Annotation of Data, 64–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11890850_8.

9

Dietze, Stefan, Elena Demidova, and Konstantin Todorov. "RDF Dataset Profiling". In Encyclopedia of Big Data Technologies, 1–8. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_288-1.

10

Dietze, Stefan, Elena Demidova, and Konstantin Todorov. "RDF Dataset Profiling". In Encyclopedia of Big Data Technologies, 1378–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_288.


Conference papers on the topic "RDF Data"

1

Huajun Chen, Zhaohui Wu, Heng Wang, and Yuxin Mao. "RDF/RDFS-based Relational Database Integration". In 22nd International Conference on Data Engineering (ICDE'06). IEEE, 2006. http://dx.doi.org/10.1109/icde.2006.127.

2

Albahli, Saleh, and Austin Melton. "RDF Data Management". In WIMS '16: International Conference on Web Intelligence, Mining and Semantics. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2912845.2912878.

3

Cerdeira-Pena, Ana, Antonio Farina, Javier D. Fernandez, and Miguel A. Martinez-Prieto. "Self-Indexing RDF Archives". In 2016 Data Compression Conference (DCC). IEEE, 2016. http://dx.doi.org/10.1109/dcc.2016.40.

4

Tang, Nan. "Big RDF data cleaning". In 2015 31st IEEE International Conference on Data Engineering Workshops (ICDEW). IEEE, 2015. http://dx.doi.org/10.1109/icdew.2015.7129549.

5

Dokulil, Jiri, and Jana Katreniakova. "Navigation in RDF Data". In 2008 12th International Conference Information Visualisation (IV). IEEE, 2008. http://dx.doi.org/10.1109/iv.2008.12.

6

Levandoski, Justin J., and Mohamed F. Mokbel. "RDF Data-Centric Storage". In 2009 IEEE International Conference on Web Services (ICWS). IEEE, 2009. http://dx.doi.org/10.1109/icws.2009.49.

7

Lin, Harris T., Ngot Bui, and Vasant Honavar. "Learning classifiers from remote RDF data stores augmented with RDFS subclass hierarchies". In 2015 IEEE International Conference on Big Data (Big Data). IEEE, 2015. http://dx.doi.org/10.1109/bigdata.2015.7363953.

8

Papailiou, Nikolaos, Dimitrios Tsoumakos, Ioannis Konstantinou, Panagiotis Karras, and Nectarios Koziris. "H2RDF+". In SIGMOD/PODS'14: International Conference on Management of Data. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2588555.2594535.

9

LiLi Xu, SangWon Lee, and Seokhyun Kim. "E-R model based RDF data storage in RDB". In 2010 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccsit.2010.5565036.

10

Azzam, Amr, Sabrina Kirrane, and Axel Polleres. "Towards Making Distributed RDF Processing FLINKer". In 2018 4th International Conference on Big Data Innovations and Applications (Innovate-Data). IEEE, 2018. http://dx.doi.org/10.1109/innovate-data.2018.00009.


Organizational reports on the topic "RDF Data"

1

González-Montaña, Luis Antonio. Semantic-based methods for morphological descriptions: An applied example for Neotropical species of genus Lepidocyrtus Bourlet, 1839 (Collembola: Entomobryidae). Verlag der Österreichischen Akademie der Wissenschaften, November 2021. http://dx.doi.org/10.1553/biosystecol.1.e71620.

Abstract:
The production of semantic annotations has gained renewed attention due to the development of anatomical ontologies and the documentation of morphological data. Two methods have been proposed for producing such annotations, differing in their methodological and philosophical approaches: a class-based method and an instance-based method. In the first, semantic annotations are established as class expressions, while in the second, the annotations incorporate individuals. An empirical evaluation of the above methods was applied in the morphological description of Neotropical species of the genus Lepidocyrtus (Collembola: Entomobryidae: Lepidocyrtinae). The semantic annotations are expressed as RDF triples, a representation more flexible than the Entity-Quality syntax commonly used in the description of phenotypes. The morphological descriptions were built in Protégé 5.4.0 and stored in an RDF store created with Jena Fuseki. Semantic annotations based on RDF triples increase the interoperability and integration of data from diverse sources, e.g., museum data. However, computational challenges remain, related to the development of semi-automatic methods for generating RDF triples, converting between text and RDF triples, and providing access for non-expert users.
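
An instance-based annotation of the kind discussed, one morphological statement about one specimen expressed as an RDF triple, might look like the sketch below using the Python rdflib library; the vocabulary IRIs are invented, where real work would draw on anatomy ontologies.

```python
# Instance-based morphological annotation as an RDF triple (rdflib).
# The ex: vocabulary is invented for illustration.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/collembola/")
g = Graph()
g.bind("ex", EX)
g.add((EX.specimen_001, EX.scaleCovering, Literal("present")))
print(g.serialize(format="turtle"))
```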
2

Author, Not Given. Data summary of municipal solid waste management alternatives. Volume 4, Appendix B: RDF technologies. Office of Scientific and Technical Information (OSTI), October 1992. http://dx.doi.org/10.2172/10138540.

3

Kramer, Stefan, Amber Leahey, Humphrey Southall, Johanna Vampras, and Joachim Wackerow. Using RDF to Describe and Link Social Science Data to Related Resources on the Web. Inter-university Consortium for Political and Social Research (ICPSR), 2012. http://dx.doi.org/10.3886/ddisemanticweb01.

4

Wherry, Robert J., Jr., Estrella M. Forster, and Jeffery Morrison. The Rotated Diagonal Factors (RDF) Approach: A Substitute for MANOVA When Analyzing Multi-Task and Multi-Criterion Data. Fort Belvoir, VA: Defense Technical Information Center, April 1997. http://dx.doi.org/10.21236/ada328049.

5

Borchmann, Daniel, Felix Distel, and Francesco Kriegel. Axiomatization of General Concept Inclusions from Finite Interpretations. Technische Universität Dresden, 2015. http://dx.doi.org/10.25368/2022.219.

Annotation:
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming, and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we investigate in this work an approach that allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility of extracting terminological knowledge from the Linked Open Data Cloud. We also present first experimental results showing that our approach has the potential to be useful for practical applications.
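
As a rough illustration of the setting, not the paper's axiomatization algorithm, the sketch below checks whether a single candidate general concept inclusion of the form ∃r.B ⊑ A holds in a finite interpretation given as vertex- and edge-labeled graph data; all concept names, role names, and data are invented.

    # Illustration of the setting only, not the paper's algorithm: check
    # whether a candidate GCI of the form  ∃r.B ⊑ A  holds in a finite
    # interpretation given as vertex- and edge-labeled graph data.
    # All concept names, role names, and data are invented.

    labels = {  # vertex -> set of concept names that hold there
        "v1": {"Person", "Parent"},
        "v2": {"Person"},
        "v3": {"Person", "Parent"},
    }
    edges = {  # role name -> set of (source, target) vertex pairs
        "hasChild": {("v1", "v2"), ("v3", "v2")},
    }

    def gci_holds(role: str, b: str, a: str) -> bool:
        """True iff every vertex with a `role`-edge to a b-vertex is labeled a."""
        return all(
            a in labels.get(src, set())
            for src, tgt in edges.get(role, set())
            if b in labels.get(tgt, set())
        )

    # "Everything with a hasChild-successor that is a Person is a Parent."
    print(gci_holds("hasChild", "Person", "Parent"))  # True on this data

The paper's contribution is the systematic construction of a complete, finite base of such GCIs from the data, rather than checking candidates one by one.
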
6

Turgeon, Mathieu. Causal Modeling with Regression Discontinuity Designs (RDD). Instats Inc., 2023. http://dx.doi.org/10.61700/s3nl5lfnmruqw469.

Annotation:
This seminar introduces the use of the regression discontinuity design (RDD) to estimate treatment effects from observational data. Day 1 topics include directed acyclic graphs (DAGs), the potential outcomes framework and associated assumptions, the sharp regression discontinuity design, and tools for visualizing discontinuities. Day 2 topics focus on the estimation of values of interest from a regression discontinuity design, adopting a continuity-based approach to RD analysis. Day 2 will also cover issues related to the validation and falsification of the regression discontinuity design. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
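
For readers unfamiliar with the technique, a minimal sharp-RDD sketch in Python follows, using simulated data, an arbitrary illustrative bandwidth, and a plain local-linear fit; a real analysis would use dedicated RDD tooling with data-driven bandwidth selection and robust inference.

    # Minimal sharp regression discontinuity sketch on simulated data.
    # Bandwidth and data are illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, cutoff, tau = 2000, 0.0, 1.5          # tau is the true treatment effect
    x = rng.uniform(-1, 1, n)                # running variable
    d = (x >= cutoff).astype(float)          # sharp assignment rule
    y = 2.0 + 0.8 * x + tau * d + rng.normal(0, 1, n)

    h = 0.25                                 # illustrative bandwidth
    w = np.abs(x - cutoff) <= h              # keep observations near the cutoff
    # Local-linear fit with separate slopes on each side of the cutoff.
    X = sm.add_constant(
        np.column_stack([d[w], x[w] - cutoff, d[w] * (x[w] - cutoff)])
    )
    fit = sm.OLS(y[w], X).fit()
    print(f"estimated effect at the cutoff: {fit.params[1]:.3f}")  # close to tau
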
7

Turgeon, Mathieu. Causal Modeling with Regression Discontinuity Designs (RDD). Instats Inc., 2023. http://dx.doi.org/10.61700/o6e22r1sh4h7m469.

Annotation:
This seminar introduces the use of the regression discontinuity design (RDD) to estimate treatment effects from observational data. Sessions 1-3 cover directed acyclic graphs (DAGs), the potential outcomes framework and associated assumptions, the sharp regression discontinuity design, and tools for visualizing discontinuities. Sessions 4-6 focus on the estimation of values of interest from a regression discontinuity design, adopting a continuity-based approach to RD analysis, and also cover issues related to the validation and falsification of the regression discontinuity design. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
8

Ali, Ibraheem, Thea Atwood, Renata Curty, Jimmy Ghaphery, Tim McGeary, Jennifer Muilenburg, and Judy Ruttenberg. Research Data Services: Partnerships. Association of Research Libraries and Canadian Association of Research Libraries, January 2022. http://dx.doi.org/10.29242/report.rdspartnerships2022.

Annotation:
The Association of Research Libraries (ARL)/Canadian Association of Research Libraries (CARL) Joint Task Force on Research Data Services (RDS) formed in 2020 with a two-fold purpose: (1) to demonstrate and commit to the roles research libraries have in stewarding research data and as part of institution-wide research support services, and (2) to guide the development of resources for the ARL and CARL memberships in advancing their organizations as collaborative partners with respect to research data services in the context of FAIR (findable, accessible, interoperable, and reusable) data principles and the US National Academies' Open Science by Design framework. Research libraries will be successful in meeting these objectives if they act collectively and are deeply engaged with disciplinary communities. The task force formed three working groups of data practitioners, representing a wealth of expertise, to research the institutional landscape and policy environment in both the US and Canada. This report of the ARL/CARL RDS task force's working group on partnerships highlights library RDS programs' work with partners and stakeholders. The report provides a set of tools for libraries to use when assessing their RDS partnerships: evaluating each partnership against a partnership life cycle, defining the continuum of possible partnerships, and creating a catalog of partnerships. Not all partnerships will last the entirety of a librarian's career, and having clear parameters for when to continue or sunset a partnership can reduce ambiguity and free up resources. Recognizing the continuum of possible partnerships gives librarians a framework for understanding the nature of each group: from cyclical to seasonal to sporadic, understanding the needs of each type of partnership can help libraries meet a group where it is. Finally, creating a catalog of partnerships can help libraries see the landscape of the organization, as well as areas for growth. This approach also aligns with OCLC's 2020 report on Social Interoperability in Research Support: Cross-Campus Partnerships and the University Research Enterprise, which highlights the necessity of building and stewarding partnerships. Developing and providing services in a decentralized organization relies on the ability to build trusted relationships. These tools will help libraries achieve sustainable growth that is in concert with their partners, generating robust, clearly aligned initiatives that benefit all parties, their campuses, and their communities.
9

Hanisch, Robert. NIST Research Data Framework (RDaF). Gaithersburg, MD: National Institute of Standards and Technology, 2023. http://dx.doi.org/10.6028/nist.sp.1500-18r1.

10

Zanoni, Wladimir, Jimena Romero, Nicolás Chuquimarca, and Emmanuel Abuelafia. Dealing with Hard-to-Reach Populations in Panel Data: Respondent-Driven Survey (RDS) and Attrition. Inter-American Development Bank, October 2023. http://dx.doi.org/10.18235/0005194.

Annotation:
Hidden populations, such as irregular migrants, often elude traditional probabilistic sampling methods. In situations like these, chain-referral sampling techniques like Respondent-Driven Surveys (RDS) offer an effective solution. RDS, a variant of network sampling sometimes referred to as “snowball” sampling, estimates weights based on the network structures of friends and acquaintances formed during the sampling process. This ensures the samples are representative of the larger population. However, one significant limitation of these methods is the rigidity of the weights. When faced with participant attrition, recalibrating these weights to ensure continued representation poses a challenge. This technical note introduces a straightforward methodology to account for such attrition. Its applicability is demonstrated through a survey targeting Venezuelan migrants in Ecuador and Peru.
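
The note's central idea, recalibrating weights when panel respondents drop out, can be sketched generically as an inverse-probability-of-retention adjustment to the baseline RDS weights; the variables and model below are invented for illustration and are not the authors' exact estimator.

    # Generic sketch: adjust baseline RDS weights for panel attrition via
    # inverse probability of retention. Data and model are invented for
    # illustration; this is not the note's exact estimator.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "rds_weight": rng.uniform(0.5, 2.0, n),  # baseline RDS weights
        "network_size": rng.integers(1, 30, n),  # degree used by RDS estimators
        "age": rng.integers(18, 60, n),
    })
    # Simulate attrition: retention is more likely for older, better-connected
    # respondents.
    p_true = 1 / (1 + np.exp(-(-1.0 + 0.03 * df["age"] + 0.05 * df["network_size"])))
    df["retained"] = rng.random(n) < p_true

    # Model retention, then upweight retained respondents by 1 / P(retained),
    # so the retained sample again represents the baseline sample.
    X = sm.add_constant(df[["age", "network_size"]])
    p_hat = np.asarray(sm.Logit(df["retained"].astype(int), X).fit(disp=0).predict(X))
    df["adj_weight"] = np.where(df["retained"], df["rds_weight"] / p_hat, np.nan)
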