Dissertations on the topic "RDF Data"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Get acquainted with the top 50 dissertations for research on the topic "RDF Data".
Next to every work in the bibliography, an "Add to bibliography" option is available. If you use it, your bibliographic reference for the selected work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Browse dissertations from various fields of specialization and compile your bibliography correctly.
Abedjan, Ziawasch. „Improving RDF data with data mining“. PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7133/.
Linked Open Data (LOD) comprises many, often very large, public data sets and knowledge bases, mostly available in the RDF triple structure of subject, predicate, and object, where each triple represents a fact. Unfortunately, the heterogeneity of the available public data requires significant integration steps before the data can be used in applications. Metadata such as ontological structures and range definitions of predicates are desirable and ideally provided by a knowledge base; however, in the context of LOD, knowledge bases are often incomplete or simply unavailable. It is therefore useful to be able to automatically generate meta-information such as ontological dependencies, range and domain definitions, and thematic associations of resources. A new and promising technique for analyzing such data is based on the discovery of association rules, which were originally applied to market-basket analysis in transactional databases. We have designed an adaptation of this technique to RDF data and introduce the concept of mining configurations, which enables us to detect patterns in RDF data in different ways. Different configurations allow us to identify schema and value relationships that can be exploited for interesting applications. In this vein, we present association-rule-based approaches for predicate suggestion, data completion, ontology improvement, and query facilitation. Predicate suggestion addresses the problem of inconsistent ontology usage by presenting a user who wants to add a new fact to an RDF data set with a ranked list of suitable predicates. Combining different configurations extends this approach so that entirely new facts can be generated automatically for a knowledge base. Here we present two approaches: a user-driven one, in which the user selects the entity to be extended, and a data-driven one, in which an algorithm itself selects the entities to be extended with missing facts. Since knowledge bases constantly grow and change, another way to facilitate the use of RDF data is to improve ontologies. Here we present an association-rule-based approach that reconciles the data with the underlying ontologies. By interleaving different configurations, we derive a new algorithm that discovers synonymous predicates. These predicates can be used to expand query results or to assist a user while formulating a query. For each of the applications presented, we report a large set of experiments on real-world data sets. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our configuration-based methodology for deriving such rules.
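Illustrative sketch only (toy data and invented names, not code from the thesis): one of the mining configurations described above can be approximated by treating each subject as a transaction and its predicates as items, then ranking predicate suggestions by association-rule confidence.

    from itertools import combinations
    from collections import Counter

    # Toy RDF triples (subject, predicate, object); in this configuration,
    # each subject is a "transaction" and its predicates are the "items".
    triples = [
        ("ex:Berlin", "rdf:type", "ex:City"),
        ("ex:Berlin", "ex:population", "3600000"),
        ("ex:Berlin", "ex:country", "ex:Germany"),
        ("ex:Paris", "rdf:type", "ex:City"),
        ("ex:Paris", "ex:population", "2100000"),
        ("ex:Paris", "ex:country", "ex:France"),
        ("ex:Goethe", "rdf:type", "ex:Person"),
        ("ex:Goethe", "ex:birthPlace", "ex:Frankfurt"),
    ]

    # Group predicates per subject (one basket per subject).
    baskets = {}
    for s, p, o in triples:
        baskets.setdefault(s, set()).add(p)

    # Count single predicates and co-occurring predicate pairs.
    single, pair = Counter(), Counter()
    for items in baskets.values():
        single.update(items)
        pair.update(frozenset(c) for c in combinations(sorted(items), 2))

    # Association rule p -> q with confidence = support(p, q) / support(p).
    def suggest(known_predicate, min_conf=0.5):
        out = []
        for ps, n in pair.items():
            if known_predicate in ps:
                other = next(iter(ps - {known_predicate}))
                conf = n / single[known_predicate]
                if conf >= min_conf:
                    out.append((other, conf))
        return sorted(out, key=lambda x: -x[1])

    # A subject that already has ex:population likely needs ex:country too.
    print(suggest("ex:population"))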
Qiao, Shi. „QUERYING GRAPH STRUCTURED RDF DATA“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1447198654.
Frommhold, Marvin, Piris Rubén Navarro, Natanael Arndt, Sebastian Tramp, Niklas Petersen und Michael Martin. „Towards versioning of arbitrary RDF data“. Universität Leipzig, 2016. https://ul.qucosa.de/id/qucosa%3A15777.
HERRERA, JOSE EDUARDO TALAVERA. „AN ARCHITECTURE FOR RDF DATA SOURCES RECOMMENDATION“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21367@1.
In the Web data publishing process, it is recommended to link data from different sources through similar resources that describe a common domain. However, the growing number of data sets published on the Web of Data has made the tasks of data discovery and data selection increasingly complex. Moreover, the distributed and interconnected nature of the data makes its analysis and understanding very time-consuming. In this context, this work aims to provide a Web architecture for identifying RDF data sources, with the goal of improving the publishing, interlinking, and exploration processes within the Linked Open Data. Our approach uses the MapReduce computing model on top of the cloud computing paradigm. In this manner, we can perform parallel keyword searches over existing semantic data indexes available on the Web, which identify candidate sources for linking the data. Through this approach, it was possible to integrate different Semantic Web tools and relevant data sources into a single search process, relating topics of interest defined by the user. To achieve our objectives, text indexing and analysis were necessary to improve the search for resources in the Linked Open Data. To show the effectiveness of our approach, we developed a case study using a subset of data from a Linked Open Data source, accessed through its SPARQL endpoint service. The results of our work reveal that generating statistics about the source data makes a great difference in the search process: these statistics help the user choose individuals. A specialized keyword-extraction process is then run for each individual in order to generate different searches over the semantic index. We show the scalability of our RDF source recommendation process by sampling several individuals.
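For illustration, a minimal sketch of the parallel keyword-search pattern described above (map over indexed documents, reduce hits per keyword); the index contents and names are invented for the example:

    from collections import defaultdict

    # Toy "semantic index": resource URI -> text of its neighborhood labels.
    index = {
        "ex:Berlin": "berlin capital city germany population",
        "ex:Paris": "paris capital city france seine",
        "ex:Rhine": "rhine river germany france",
    }

    def map_phase(uri, text, keywords):
        # Emit (keyword, uri) for every keyword contained in the document.
        tokens = set(text.split())
        return [(kw, uri) for kw in keywords if kw in tokens]

    def reduce_phase(pairs):
        # Group candidate source URIs per keyword.
        hits = defaultdict(list)
        for kw, uri in pairs:
            hits[kw].append(uri)
        return dict(hits)

    keywords = {"germany", "river"}
    emitted = [kv for uri, text in index.items()
               for kv in map_phase(uri, text, keywords)]
    print(reduce_phase(emitted))
    # {'germany': ['ex:Berlin', 'ex:Rhine'], 'river': ['ex:Rhine']}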
Kaithi, Bhargavacharan Reddy. „Knowledge Graph Reasoning over Unseen RDF Data“. Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1571955816559707.
Espinola, Roger Humberto Castillo. „Indexing RDF data using materialized SPARQL queries“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16582.
In this thesis, we propose to use materialized queries as a special index structure for RDF data. We strive to reduce query processing time by minimizing the number of comparisons between the query and the RDF data set. We also emphasize the role of cost models in the selection of execution plans, as well as of index sets for a given workload. We provide an overview of the materialized view selection problem in relational databases and discuss its application to the optimization of query processing. We introduce RDFMatView, a framework for answering SPARQL queries using materialized views as indexes. We provide algorithms to discover those indexes that can be used to process a given query, and we develop different strategies to integrate these views into query execution plans. The selection of an efficient execution plan is the topic of our second major contribution. We introduce three different cost models designed for SPARQL query processing with materialized views. A detailed comparison of these models reveals that a model based on index and predicate statistics provides the most accurate cost estimation. We show that selecting an execution plan using this cost model reduces processing time by several orders of magnitude compared to standard SPARQL query processing. Finally, we propose a simple yet effective strategy for the materialized view selection problem applied to RDF data. Based on a given workload of SPARQL queries, we provide algorithms for selecting a set of indexes that minimizes the workload processing time. We create candidate indexes by retrieving all connected components from query patterns. Our evaluation shows that the set of suggested indexes usually achieves larger runtime savings on the given workload than other index sets.
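A rough sketch of the idea (invented data, not RDFMatView code): a "materialized view" caches the bindings of a triple pattern, so queries subsumed by the pattern avoid a full scan of the triple set.

    triples = [
        ("ex:Berlin", "rdf:type", "ex:City"),
        ("ex:Paris", "rdf:type", "ex:City"),
        ("ex:Goethe", "rdf:type", "ex:Person"),
        ("ex:Berlin", "ex:country", "ex:Germany"),
    ]

    def scan(pattern):
        # Baseline evaluation: compare the pattern against every triple.
        s, p, o = pattern
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # Materialize the view (?s, rdf:type, ?o) once.
    view_pattern = (None, "rdf:type", None)
    materialized = scan(view_pattern)

    def answer(pattern):
        # If the query fixes the same predicate, answer from the view:
        # only len(materialized) comparisons instead of len(triples).
        if pattern[1] == view_pattern[1]:
            s, _, o = pattern
            return [t for t in materialized
                    if (s is None or t[0] == s) and (o is None or t[2] == o)]
        return scan(pattern)

    print(answer((None, "rdf:type", "ex:City")))  # served from the view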
Sherif, Mohamed Ahmed Mohamed. „Automating Geospatial RDF Dataset Integration and Enrichment“. Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-215708.
Abedjan, Ziawasch [Verfasser], und Felix [Akademischer Betreuer] Naumann. „Improving RDF data with data mining / Ziawasch Abedjan. Betreuer: Felix Naumann“. Potsdam : Universitätsbibliothek der Universität Potsdam, 2014. http://d-nb.info/1059014122/34.
Morgan, Juston. „Visual language for exploring massive RDF data sets“. Pullman, Wash. : Washington State University, 2010. http://www.dissertations.wsu.edu/Thesis/Spring2010/J_Morgan_041210.pdf.
Title from PDF title page (viewed on July 12, 2010). "School of Engineering and Computer Science." Includes bibliographical references (p. 33-34).
Fan, Zhengjie. „Concise Pattern Learning for RDF Data Sets Interlinking“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM013/document.
There are many data sets published on the Web with Semantic Web technology. These data sets usually contain analogous data that represent similar resources in the world. If the data sets are linked together by correctly identifying the similar instances, users can conveniently query the data through a uniform interface, as if they were querying a single database. However, finding correct links is very challenging because Web data sources usually have heterogeneous ontologies maintained by different organizations. Many solutions have been proposed for this problem. (1) One straightforward idea is to compare the attribute values of instances to identify links, yet it is impossible to compare all possible pairs of attribute values. (2) Another common strategy is to compare instances using correspondences found by instance-based ontology matching, which can generate attribute correspondences based on overlapping ranges between two attributes; however, this easily produces incomparable attribute correspondences or misses comparable ones. (3) Many existing solutions leverage Genetic Programming to construct interlinking patterns for comparing instances, but the running times of these methods are usually long. In this thesis, an interlinking method is proposed that links instances across different data sets based on both statistical learning and symbolic learning. On the one hand, the method discovers potential comparable attribute correspondences of each class correspondence via a K-medoids clustering algorithm over instance value statistics. We adopt K-medoids because of its efficiency and its high tolerance of irregular and even incorrect data. K-medoids classifies the attributes of each class into several groups according to their statistical value features. Groups from different classes are mapped when they have similar statistical value features, to determine potential comparable attribute correspondences. The clustering procedure effectively narrows the range of candidate attribute correspondences. On the other hand, our solution leverages a symbolic learning method called Version Space. Version Space is an iterative learning model that searches for the interlinking pattern from two directions. Our design can solve interlinking tasks for which no single compatible conjunctive interlinking pattern covers all assessed correct links in a concise format. The interlinking solution is evaluated with large-scale real-world data from IM@OAEI and CKAN. Experiments confirm that the solution reaches high accuracy (up to 0.94-0.99 F-measure) with only 1% of sample links. The F-measure converges quickly, improving on other state-of-the-art approaches by nearly 10 percent.
Bang, Ole Petter, und Tormod Fjeldskår. „Storing and Querying RDF in Mars“. Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8971.
As part of the Semantic Web movement, the Resource Description Framework (RDF) is gaining momentum as a format for storing data, particularly metadata. The SPARQL Protocol and RDF Query Language is a SQL-like query language recommended by the W3C for querying RDF data. FAST is exploring the possibilities of supporting storage and querying of RDF data in their Mars search engine. To facilitate this, a SPARQL parser has been created for the Microsoft .NET Framework, using the MPLex and MPPG tools from Microsoft's Managed Babel package. This thesis proposes a solution for efficiently storing and retrieving RDF data in Mars, based on decomposition and B+ tree indexing. Further, a method for transforming SPARQL queries into Mars operator graphs is described. Finally, the implementation of a prototype is discussed. The prototype has been developed in collaboration with FAST and has required customized indexing in Mars. Some deviations from the proposed solution were made in order to create a working prototype within the available time frame. The focus has been on exploring possibilities, and performance has thus not been a priority, neither in indexing nor in evaluation.
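To make the decomposition-and-indexing idea concrete, here is a minimal sketch in which sorted lists stand in for the B+ trees such an engine would use; each triple permutation supports prefix lookups for a different triple-pattern shape (data invented):

    import bisect

    triples = [
        ("ex:Berlin", "ex:country", "ex:Germany"),
        ("ex:Berlin", "rdf:type", "ex:City"),
        ("ex:Paris", "rdf:type", "ex:City"),
    ]

    # Three index permutations over the same triples.
    spo = sorted(triples)
    pos = sorted((p, o, s) for s, p, o in triples)
    osp = sorted((o, s, p) for s, p, o in triples)

    def prefix_scan(index, prefix):
        # Binary-search the first entry >= prefix, then scan while it matches.
        lo = bisect.bisect_left(index, prefix)
        out = []
        for entry in index[lo:]:
            if entry[:len(prefix)] != prefix:
                break
            out.append(entry)
        return out

    # Pattern (?s, rdf:type, ex:City): bound P and O -> use the POS index.
    print(prefix_scan(pos, ("rdf:type", "ex:City")))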
Fernandez Garcia, Javier David, Jürgen Umbrich, Axel Polleres und Magnus Knuth. „Evaluating Query and Storage Strategies for RDF Archives“. IOS Press, 2018. http://epub.wu.ac.at/6488/1/BEAR%2DSWJ.pdf.
Pomykacz, Michal. „Hromadná extrakce dat veřejné správy do RDF“. Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-197443.
Meissner, Roy, und Kurt Junghanns. „Using DevOps principles to continuously monitor RDF data quality“. Universität Leipzig, 2016. https://ul.qucosa.de/id/qucosa%3A15940.
Darari, Fariz. „Managing and Consuming Completeness Information for RDF Data Sources“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-226655.
Chartrand, Timothy Adam. „Ontology-Based Extraction of RDF Data from the World Wide Web“. BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/56.
Ren, Xiangnan. „Traitement et raisonnement distribués des flux RDF“. Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1139/document.
Real-time processing of data streams emanating from sensors is becoming a common task in industrial scenarios. In an Internet of Things (IoT) context, data are emitted by heterogeneous stream sources, i.e., sources from different domains and with different data models. This requires IoT applications to handle data integration mechanisms efficiently. The processing of RDF data streams has hence become an important research field. This trend enables a wide range of innovative applications where the real-time and reasoning aspects are pervasive. The key implementation goal of such applications is to handle massive incoming data streams efficiently and to support advanced data analytics services like anomaly detection. However, a modern RSP engine has to address the volume and velocity characteristics encountered in the Big Data era. In an ongoing industrial project, we found that a 24/7 available stream processing engine usually faces massive data volume, dynamically changing data structures, and varying workload characteristics. These facts impact the engine's performance and reliability. To address these issues, we propose Strider, a hybrid adaptive distributed RDF stream processing engine that optimizes the logical query plan according to the state of the data streams. Strider has been designed to guarantee important industrial properties such as scalability, high availability, fault tolerance, high throughput, and acceptable latency. These guarantees are obtained by designing the engine's architecture with state-of-the-art Apache components such as Spark and Kafka. Moreover, an increasing number of processing jobs executed over RSP engines require reasoning mechanisms, which usually come at the cost of a trade-off between data throughput, latency, and the computational cost of expressive inferences. Therefore, we extend Strider to support real-time RDFS+ (i.e., RDFS + owl:sameAs) reasoning. We combine Strider with a query rewriting approach for SPARQL that benefits from an intelligent encoding of the knowledge base. The system is evaluated along different dimensions and over multiple datasets to emphasize its performance. Finally, we take a further step towards exploratory RDF stream reasoning with a fragment of Answer Set Programming. This part of our research work is mainly motivated by the fact that more and more streaming applications require more expressive and complex reasoning tasks. The main challenge is to cope with the large volume and high velocity in a scalable and inference-enabled manner. Recent efforts in this area still miss the aspect of system scalability for stream reasoning. Thus, we aim to explore the ability of modern distributed computing frameworks to process highly expressive knowledge inference queries over Big Data streams. To do so, we consider queries expressed as a positive fragment of LARS (a temporal logic framework based on Answer Set Programming) and propose solutions to process such queries based on the two main execution models adopted by major parallel and distributed execution frameworks: Bulk Synchronous Parallel (BSP) and Record-at-A-Time (RAT). We implement our solution, named BigSR, and conduct a series of evaluations. Our experiments show that BigSR achieves high throughput beyond a million triples per second using a rather small cluster of machines.
Görlitz, Olaf [Verfasser]. „Distributed query processing for federated RDF data management / Olaf Görlitz“. Koblenz : Universitätsbibliothek Koblenz, 2015. http://d-nb.info/1065246986/34.
ARAUJO, SAMUR FELIPE CARDOSO DE. „EXPLORATOR: A TOOL FOR EXPLORING RDF DATA THROUGH DIRECT MANIPULATION“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13792@1.
In this dissertation we propose a tool for Semantic Web data exploration. We developed an exploration model that allows users without any prior knowledge of the data domain to explore an RDF database. To that end, we present an operation model that, supported by an interface based on the direct-manipulation and query-by-example paradigms, allows users to explore an RDF base to gain knowledge and answer questions about a domain through navigation, search, and other exploration mechanisms. We also developed a facet specification model and a mechanism for automatic facet extraction that can be used in the development of faceted navigation systems over RDF. The final product of this work is the tool Explorator, which we propose as an environment for Semantic Web data exploration.
PESCE, MARCIA LUCAS. „RDXEL: A TOOLKIT FOR RDF STATISTICAL DATA MANIPULATION THROUGH SPREADSHEETS“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26259@1.
Statistical data represent one of the most important sources of information for humans and organizations alike. However, accessing, querying, and correlating statistical data demand a great deal of effort, especially in situations that involve different organizations. Solutions that facilitate the manipulation and integration of large statistical databases therefore add great value to this scenario. In this dissertation we propose a framework that allows statistical data to be efficiently transformed into and represented as RDF triples. Based on the Data Cube Vocabulary, the W3C standard for representing statistical information as triples, the proposed solution makes it easy to query, analyze, and reuse statistical data in RDF format. The reverse process, RDF to Excel, is also supported, so as to offer a solution for the integration and consumption of RDF data from spreadsheets.
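An illustrative sketch of the spreadsheet-to-RDF direction described above: one spreadsheet row becomes one qb:Observation (the qb: terms come from the W3C RDF Data Cube Vocabulary; the dataset, dimensions, and values are invented for the example, and this is not RDXEL code):

    rows = [
        {"year": 2010, "region": "ex:North", "population": 5200},
        {"year": 2011, "region": "ex:North", "population": 5350},
    ]

    def row_to_triples(i, row):
        # Each row yields one observation node with its dimension/measure values.
        obs = f"ex:obs{i}"
        return [
            (obs, "rdf:type", "qb:Observation"),
            (obs, "qb:dataSet", "ex:populationDataset"),
            (obs, "ex:refYear", str(row["year"])),
            (obs, "ex:refRegion", row["region"]),
            (obs, "ex:population", str(row["population"])),
        ]

    triples = [t for i, row in enumerate(rows) for t in row_to_triples(i, row)]
    for s, p, o in triples:
        print(f"{s} {p} {o} .")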
Li, Sheng. „Query Suggestion for Keyword Search over XML and RDF Data“. Thesis, Griffith University, 2014. http://hdl.handle.net/10072/367139.
Huang, Xin. „Querying big RDF data : semantic heterogeneity and rule-based inconsistency“. Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB124/document.
The Semantic Web is the vision of the next generation of the Web proposed by Tim Berners-Lee in 2001. With the rapid development of Semantic Web technologies, large-scale RDF data already exist as linked open data, and their number is growing rapidly. Traditional Semantic Web querying and reasoning tools are designed to run in a stand-alone environment; processing large-scale bulk data with such tools inevitably runs into bottlenecks of memory space and computational performance. Large volumes of heterogeneous data are collected from different data sources by different organizations, and in this context inconsistencies and uncertainties among sources are difficult to identify and evaluate. To address these challenges of the Semantic Web, the following contributions are proposed. First, we developed an inference-based semantic entity resolution approach and a linking mechanism for the case where the same entity is provided in multiple RDF resources described using different semantics and URI identifiers. We also developed a MapReduce-based rewriting engine for SPARQL queries over big RDF data that handles, during query evaluation, the implicit data described intensionally by inference rules. The rewriting approach also deals with transitive closure and cyclic rules, providing a rich inference language such as RDFS and OWL. The second contribution concerns distributed inconsistency processing. We extend the first contribution by taking inconsistency in the data into account, including (1) rule-based inconsistency detection with the help of our query rewriting engine, and (2) consistent query evaluation under three different semantics. The third contribution concerns reasoning and querying over large-scale uncertain RDF data. We propose a MapReduce-based approach to deal with large-scale reasoning under uncertainty. Unlike the possible-worlds semantics, we propose an algorithm for generating an intensional SPARQL query plan over a probabilistic RDF graph for computing the probabilities of the results within the query.
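A minimal sketch (invented data, not the thesis's engine) of the kind of rule-based inference such a rewriting or reasoning engine has to account for: forward chaining of rdfs:subClassOf transitivity and type propagation until a fixpoint is reached.

    triples = {
        ("ex:Poodle", "rdfs:subClassOf", "ex:Dog"),
        ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
        ("ex:Rex", "rdf:type", "ex:Poodle"),
    }

    def closure(kb):
        kb = set(kb)
        while True:
            new = set()
            for s, p, o in kb:
                if p == "rdfs:subClassOf":
                    for s2, p2, o2 in kb:
                        # subClassOf is transitive.
                        if p2 == "rdfs:subClassOf" and s2 == o:
                            new.add((s, "rdfs:subClassOf", o2))
                        # Instances of a subclass are instances of the class.
                        if p2 == "rdf:type" and o2 == s:
                            new.add((s2, "rdf:type", o))
            if new <= kb:
                return kb
            kb |= new

    for t in sorted(closure(triples)):
        print(t)
    # Derives, among others, ("ex:Rex", "rdf:type", "ex:Animal").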
Fernandez Garcia, Javier David, Sabrina Kirrane, Axel Polleres und Simon Steyskal. „HDT crypt: Compression and Encryption of RDF Datasets“. IOS Press, 2018. http://epub.wu.ac.at/6489/1/HDTCrypt%2DSWJ.pdf.
Hazuza, Petr. „Ontologie přístupnosti budov“. Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-202106.
Abicht, Konrad, Georges Alkhouri, Natanael Arndt, Roy Meissner und Michael Martin. „CubeViz.js: A Lightweight Framework for Discovering and Visualizing RDF Data Cubes“. Gesellschaft für Informatik, 2017. https://ul.qucosa.de/id/qucosa%3A32064.
Lozano Aparicio, Jose Martin. „Data exchange from relational databases to RDF with target shape schemas“. Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I063.
Resource Description Framework (RDF) is a graph data model that has recently found use for publishing relational database content on the Web. We investigate data exchange from relational databases to RDF graphs with target shapes schemas. Essentially, data exchange models a process of transforming an instance of a relational schema, called the source schema, into an RDF graph constrained by a target schema, according to a set of rules called source-to-target tuple-generating dependencies. The output RDF graph is called a solution. Because the tuple-generating dependencies define this process in a declarative fashion, there might be many possible solutions or no solution at all. We study a constructive relational-to-RDF data exchange setting with target shapes schemas, which is composed of a relational source schema, a shapes schema for the target schema, and a set of mappings that uses IRI constructors. Furthermore, we assume that any two IRI constructors are non-overlapping. We propose a visual mapping language (VML) that helps non-expert users specify mappings in this setting. Moreover, we develop a tool called ShERML that performs data exchange with the use of VML, and for users who want to understand the model behind VML mappings, we define R2VML, a text-based mapping language that captures VML and presents a succinct syntax for defining mappings. We investigate the problem of checking consistency: a data exchange setting is consistent if for every input source instance there is at least one solution. We show that the consistency problem is coNP-complete and provide a static analysis algorithm that decides whether a setting is consistent. We also study the problem of computing certain answers. An answer is certain if it holds in every solution. Typically, certain answers are computed using a universal solution; however, in our setting a universal solution might not exist. Thus, we introduce the notion of a universal simulation solution, which always exists and allows computing certain answers to any class of queries that is robust under simulation. One such class is nested regular expressions (NREs) that are forward, i.e., do not use the inverse operator. Using a universal simulation solution renders the computation of certain answers to forward NREs tractable (in data complexity). Finally, we investigate the shapes schema elicitation problem, which consists of constructing a target shapes schema from a constructive relational-to-RDF data exchange setting without a target shapes schema. We identify two desirable properties of a good target schema: soundness, i.e., every produced RDF graph is accepted by the target schema; and completeness, i.e., every RDF graph accepted by the target schema can be produced. We propose an elicitation algorithm that is sound for any schema-less data exchange setting and complete for a large practical class of schema-less settings.
Chartrand, Tim. „Ontology-based extraction of RDF data from the World Wide Web /“. Diss., Brigham Young University, 2003. http://contentdm.lib.byu.edu/ETD/image/etd168.pdf.
Pérez de Laborda Schwankhart, Cristian. „Incorporating relational data into the Semantic Web“. [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982420390.
Domin, Annika. „Konzeption eines RDF-Vokabulars für die Darstellung von COUNTER-Nutzungsstatistiken“. Master's thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-179416.
Neto, Luis Eufrasio Teixeira. „Uma abordagem para publicação de visões RDF de dados relacionais“. Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=12676.
The Linked Data initiative brought new opportunities for building the next generation of Web applications. However, the full potential of linked data depends on how easy it is to transform data stored in conventional relational databases into RDF triples. Recently, the W3C RDB2RDF Working Group proposed a standard mapping language, called R2RML, to specify customized mappings between relational schemas and target RDF vocabularies. However, the generation of customized R2RML mappings is not an easy task. Thus, it is mandatory to define: (a) a solution that maps concepts from a relational schema to terms of an RDF schema; (b) a process to support the publication of relational data as RDF; and (c) a tool that implements this process. Correspondence assertions are proposed to formalize the mappings between relational schemas and RDF schemas. Views are created to publish data from a database in a new structure or schema. The definition of RDF views over relational data allows this data to be provided in terms of an OWL ontology structure without having to change the database schema. In this work, we propose a three-tier architecture – database, SQL views, and RDF views – where the SQL views layer maps the database concepts into RDF terms. The creation of this intermediate layer facilitates the generation of R2RML mappings and prevents changes in the data layer from resulting in changes to the R2RML mappings. Additionally, we define a three-step process to generate the RDF views of relational data. First, the user defines the schema of the relational database and the target OWL ontology, and then defines correspondence assertions that formally specify the relational database in terms of the target ontology. Using these assertions, an exported ontology is generated automatically. The second step produces the SQL views that perform the mapping defined by the assertions and an R2RML mapping between these views and the exported ontology. This dissertation describes a formalization of the correspondence assertions, the three-tier architecture, the publishing process steps, the necessary algorithms, a tool that supports the entire process, and a case study to validate the results obtained.
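A rough sketch of the intermediate-view idea discussed above: a SQL view renames relational concepts, and each view row is emitted as RDF triples. Table, view, and vocabulary names are invented; this only illustrates the layering, not the R2RML machinery itself.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE person (id INTEGER, full_name TEXT)")
    con.execute("INSERT INTO person VALUES (1, 'Ada Lovelace'), (2, 'Alan Turing')")

    # Intermediate SQL view: maps relational columns to ontology terms.
    con.execute("""CREATE VIEW v_person AS
                   SELECT 'ex:person/' || id AS iri, full_name AS name
                   FROM person""")

    for iri, name in con.execute("SELECT iri, name FROM v_person"):
        # Each view row yields rdf:type and foaf:name triples.
        print(f"<{iri}> rdf:type foaf:Person .")
        print(f'<{iri}> foaf:name "{name}" .')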
Nohejl, Petr. „Transformace a publikace otevřených a propojitelných dat“. Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-198076.
SALAS, PERCY ENRIQUE RIVERA. „OLAP2DATACUBE: AN ON-DEMAND TRANSFORMATION FRAMEWORK FROM OLAP TO RDF DATA CUBES“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26120@1.
Statistical data is one of the most important sources of information, relevant to a large number of stakeholders in the governmental, scientific and business domains alike. A statistical data set comprises a collection of observations made at some points across a logical space and is often organized as what is called a data cube. The proper definition of the data cubes, especially of their dimensions, helps processing the observations and, more importantly, helps combining observations from different data cubes. In this context, the Linked Data principles can be profitably applied to the definition of data cubes, in the sense that the principles offer a strategy to provide the missing semantics of the dimensions, including their values. In this thesis we describe the process and the implementation of a mediation architecture, called OLAP2DataCube On Demand, which helps describe and consume statistical data, exposed as RDF triples, but stored in relational databases. The tool features a catalogue of Linked Data Cube descriptions, created according to the Linked Data principles. The catalogue has a standardized description for each data cube actually stored in each statistical (relational) database known to the tool. The tool offers an interface to browse the linked data cube descriptions and to export the data cubes as RDF triples, generated on demand from the underlying data sources. We also discuss the implementation of sophisticated metadata search operations, OLAP data cube operations, such as slice and dice, and data cube mashup operations that create new cubes by combining other cubes.
Bongiovanni, Francesco. „Design, formalization and implementation of overlay networks : application to RDF data storage“. Nice, 2012. http://www.theses.fr/2012NICE4021.
Structured Overlay Networks (SONs) are a class of Peer-to-Peer (P2P) systems widely used for large-scale applications such as file sharing, information dissemination, and storage and retrieval of different resources. Many different SONs co-exist on the Web, yet they do not cooperate with each other. In order to promote cooperation, we propose two protocols, Babelchord and Synapse, whose goal is to enable the interconnection of structured and heterogeneous overlay networks through meta-protocols. Babelchord aims to aggregate small structured overlay networks in an unstructured fashion, while Synapse generalizes this concept and provides flexible mechanisms relying on co-located nodes, i.e., nodes that belong to multiple overlays at the same time. We provide the algorithms behind both protocols, as well as simulation results showing their behavior in the context of information retrieval. We have also implemented and experimented with a prototype of JSynapse on the Grid'5000 platform, confirming the simulation results and giving a proof of concept of our protocol. A novel generation of SONs was created in order to store and retrieve semantic data at large scale. The Semantic Web community is in need of scalable solutions able to store and retrieve RDF data, the core data model of the Semantic Web. The first generation of these systems was too monolithic and provided limited support for expressive queries. We propose the design and implementation of a new modular P2P-based system for these purposes. We built the system with RDF in mind and used a three-dimensional CAN overlay network, mimicking the nature of an RDF triple. We made specific design choices that preserve data locality but raise interesting technical challenges. Our modular design reduces the coupling between the underlying components, allowing them to be interchanged with others. We also ran micro-benchmarks on Grid'5000, which are discussed. SONs have a specific geometrical topology that can be leveraged to increase the overall performance of the system. In this regard, we propose a new broadcast-efficient algorithm for CAN, developed in response to the experiments run on the RDF data store we built, which used too many messages. Along with this algorithm, we also propose a reasoning framework, developed with the Isabelle/HOL proof assistant, for proving correctness properties of dissemination algorithms for CAN-like P2P systems. We focus on providing the minimal set of abstractions needed to devise efficient, correct-by-construction dissemination algorithms on top of such overlays.
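A toy sketch of the three-dimensional CAN idea described above: each RDF triple is hashed to a point in the unit cube, one axis per triple component, so that each peer owns a sub-cube (all names and zone bounds are invented for the example):

    import hashlib

    def coord(term):
        # Map a term to [0, 1) deterministically.
        h = hashlib.sha1(term.encode("utf-8")).digest()
        return int.from_bytes(h[:4], "big") / 2**32

    def triple_to_point(s, p, o):
        return (coord(s), coord(p), coord(o))

    # A peer's zone: ([x0, x1), [y0, y1), [z0, z1)).
    zone = ((0.0, 0.5), (0.0, 1.0), (0.5, 1.0))

    def owns(zone, point):
        return all(lo <= c < hi for (lo, hi), c in zip(zone, point))

    pt = triple_to_point("ex:Berlin", "ex:country", "ex:Germany")
    print(pt, owns(zone, pt))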
Mixter, Jeffrey. „Linked Data in VRA Core 4.0: Converting VRA XML Records into RDF/XML“. Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1366642050.
Frommhold, Marvin, Sebastian Tramp, Natanael Arndt und Niklas Petersen. „Publish and subscribe for RDF in enterprise value networks“. Universität Leipzig, 2016. https://ul.qucosa.de/id/qucosa%3A15776.
Ali, Liaquat [Verfasser], und Georg [Akademischer Betreuer] Lausen. „Efficient management and querying of RDF data in a P2P framework = Effiziente Verwaltung und Abfrage von RDF-Daten in einem P2P-Rahmen“. Freiburg : Universität, 2014. http://d-nb.info/1123481075/34.
Albahli, Saleh Mohammad. „Ontology-based approaches to improve RDF Triple Store“. Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1456063330.
Lesnikova, Tatiana. „Liage de données RDF : évaluation d'approches interlingues“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM011/document.
The Semantic Web extends the Web by publishing structured and interlinked data using RDF. An RDF data set is a graph where resources are nodes labelled in natural languages. One of the key challenges of linked data is to be able to discover links across RDF data sets: given two data sets, equivalent resources should be identified and linked by owl:sameAs links. This problem is particularly difficult when resources are described in different natural languages. This thesis investigates the effectiveness of linguistic resources for interlinking RDF data sets. For this purpose, we introduce a general framework in which each RDF resource is represented as a virtual document containing the text information of neighboring nodes; the context of a resource consists of the labels of its neighbors. Once virtual documents are created, they are projected into the same space in order to be compared, which can be achieved by using machine translation or multilingual lexical resources. Once the documents are in the same space, similarity measures are applied to find identical resources; similarity between elements of this space is taken as similarity between RDF resources. We performed an evaluation of cross-lingual techniques within the proposed framework, experimentally evaluating different methods for linking RDF data. In particular, two strategies are explored: applying machine translation, or using references to multilingual resources. Overall, the evaluation shows the effectiveness of cross-lingual string-based approaches for linking RDF resources expressed in different languages. The methods have been evaluated on resources in English, Chinese, French, and German. The best performance (over 0.90 F-measure) was obtained by the machine translation approach. This shows that the similarity-based method can be successfully applied to RDF resources independently of their type (named entities or thesaurus concepts). The best experimental results involving just a pair of languages demonstrate the usefulness of such techniques for interlinking RDF resources cross-lingually.
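A toy sketch of the virtual-document comparison described above: each resource becomes a bag of words from its own and neighboring labels (assumed already projected into one language), and resources are matched by cosine similarity. Data and threshold are invented for the example.

    import math
    from collections import Counter

    docs = {
        "en:Berlin": "berlin capital germany city",
        "fr:Berlin": "berlin capital germany town",   # translated labels
        "fr:Lyon": "lyon city france rhone",
    }

    def cosine(a, b):
        va, vb = Counter(a.split()), Counter(b.split())
        dot = sum(va[w] * vb[w] for w in va)
        na = math.sqrt(sum(v * v for v in va.values()))
        nb = math.sqrt(sum(v * v for v in vb.values()))
        return dot / (na * nb)

    # Propose owl:sameAs links between resources of different data sets.
    for a in ("en:Berlin",):
        for b in ("fr:Berlin", "fr:Lyon"):
            sim = cosine(docs[a], docs[b])
            if sim > 0.5:
                print(f"{a} owl:sameAs {b}  (cosine={sim:.2f})")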
Joshi, Amit Krishna. „Exploiting Alignments in Linked Data for Compression and Query Answering“. Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1496142816700187.
Ianniello, Raffaele. „Linked Open Data per la pubblica amministrazione: conversione e utilizzo dei dati“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5776/.
Garrido García, Camilo Fernando. „Resúmenes semiautomáticos de conocimiento : caso de RDF“. Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/113509.
Nowadays, the amount of information generated in the world is immense. In science, for example, we have astronomical data with images of the stars, weather forecast data, biological and genetic information, and so on. This phenomenon is not limited to the scientific world: a user browsing the Internet also produces large amounts of information, such as comments in forums, activity on social networks, or simply communication over the Web. Managing and analyzing this amount of information entails great problems and costs. Before performing an analysis, it is therefore convenient to determine whether the available data set is adequate for the intended purpose, or whether it covers the topics of interest. These questions could be answered if a summary of the data set were available. This is the problem addressed in this thesis: creating semi-automatic summaries of formalized knowledge. We designed and implemented a method for obtaining semi-automatic summaries of RDF data sets. Given an RDF graph, a set of nodes can be obtained, whose size is determined by the user, that represents the data set and conveys its most important topics. The method was designed on the basis of the data sets provided by DBpedia. The selection of resources within the data set was made using two metrics widely used in other scenarios: betweenness centrality and degree. With these, the most important resources were detected both globally and locally. The experiments, which included user evaluation and automatic evaluation, indicated that the work fulfills the objective of producing summaries that represent and convey the data set. The experiments also showed that the summaries achieve a good balance of the general topics, the popular topics, and the distribution with respect to the complete data set.
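A toy sketch of the degree-based selection mentioned above: rank RDF nodes by (in + out) degree and keep the top k as a crude graph summary (betweenness centrality, the other metric, is omitted here; the data is invented):

    from collections import Counter

    triples = [
        ("ex:Berlin", "ex:country", "ex:Germany"),
        ("ex:Hamburg", "ex:country", "ex:Germany"),
        ("ex:Munich", "ex:country", "ex:Germany"),
        ("ex:Berlin", "rdf:type", "ex:City"),
        ("ex:Goethe", "ex:birthPlace", "ex:Frankfurt"),
    ]

    # Count how often each node appears as subject or object.
    degree = Counter()
    for s, _, o in triples:
        degree[s] += 1
        degree[o] += 1

    def summarize(k):
        return [node for node, _ in degree.most_common(k)]

    print(summarize(2))  # ['ex:Germany', 'ex:Berlin']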
Teixeira, Neto Luis Eufrasio. „Uma abordagem para publicação de visões RDF de dados relacionais“. reponame:Repositório Institucional da UFC, 2014. http://www.repositorio.ufc.br/handle/riufc/12694.
Escobar, Esteban María Pilar. „Un enfoque multidimensional basado en RDF para la publicación de Linked Open Data“. Doctoral thesis, Universidad de Alicante, 2020. http://hdl.handle.net/10045/109950.
Felix, Juan Manuel. „Esplorando i Linked Open Data con RSLT“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20925/.
Zamagni, Luca. „Analisi comparativa di DB a grafo in particolare RDF“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Gerber, Daniel. „Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications“. Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208759.
Ansell, Peter. „A context sensitive model for querying linked scientific data“. Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49777/1/Peter_Ansell_Thesis.pdf.
Arndt, Natanael, und Norman Radtke. „Quit diff: calculating the delta between RDF datasets under version control“. Universität Leipzig, 2016. https://ul.qucosa.de/id/qucosa%3A15780.
Perry, Matthew Steven. „A Framework to Support Spatial, Temporal and Thematic Analytics over Semantic Web Data“. Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1219267560.