Academic literature on the topic 'RDF-To-Text'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'RDF-To-Text.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "RDF-To-Text"

1

Chellali, Mustapha, and Nader Jafari Rad. "Trees with independent Roman domination number twice the independent domination number." Discrete Mathematics, Algorithms and Applications 07, no. 04 (December 2015): 1550048. http://dx.doi.org/10.1142/s1793830915500482.

Full text
Abstract:
A Roman dominating function (RDF) on a graph G = (V, E) is a function f : V → {0, 1, 2} satisfying the condition that every vertex u for which f(u) = 0 is adjacent to at least one vertex v for which f(v) = 2. The weight of an RDF f is the value f(V) = ∑_{u∈V} f(u). The Roman domination number, γ_R(G), of G is the minimum weight of an RDF on G. An RDF f is called an independent Roman dominating function (IRDF) if the set of vertices with f(v) ≥ 1 is an independent set. The independent Roman domination number, i_R(G), is the minimum weight of an IRDF on G. In this paper, we study trees with independent Roman domination number twice their independent domination number, answering an open question.
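As an aside for readers unfamiliar with this graph-theoretic use of the acronym RDF, the definition in the abstract can be sketched in a few lines of Python; the graph, vertex names, and brute-force search are purely illustrative (the search is exponential and only feasible for tiny graphs):

```python
from itertools import product

def is_rdf(adj, f):
    """Roman domination condition: every vertex labeled 0
    must have at least one neighbor labeled 2."""
    return all(
        any(f[u] == 2 for u in adj[v])
        for v in adj if f[v] == 0
    )

def roman_domination_number(adj):
    """Brute-force the minimum weight over all labelings V -> {0, 1, 2}."""
    vertices = list(adj)
    best = None
    for labels in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, labels))
        if is_rdf(adj, f):
            w = sum(labels)
            best = w if best is None else min(best, w)
    return best

# Path on 3 vertices a-b-c: labeling the center 2 dominates both ends,
# so the minimum weight (the Roman domination number) is 2.
path3 = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```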
APA, Harvard, Vancouver, ISO, and other styles
2

Gryaznov, Yevgeny, and Pavel Rusakov. "Analysis of RDF Syntaxes for Semantic Web Development." Applied Computer Systems 18, no. 1 (December 1, 2015): 33–42. http://dx.doi.org/10.1515/acss-2015-0017.

Full text
Abstract:
In this paper, the authors investigate the possibilities of using RDF (Resource Description Framework) syntaxes to represent information on the Semantic Web. They explain why pure XML cannot be used effectively for this purpose and how the RDF framework solves the problem. Information is represented in the form of a directed graph. RDF itself is only an abstract formal model for representing information, and additional tools are required to write that information down. These tools are RDF syntaxes: concrete text or binary formats that prescribe rules for serializing RDF data. Text-based RDF syntaxes can be built on an existing format (XML, JSON) or can be RDF-specific, designed from scratch for the sole purpose of serializing RDF graphs. The authors briefly describe several RDF syntaxes (both XML and non-XML) and compare them to identify the strengths and weaknesses of each. Serialization and deserialization speed tests are performed using the Jena library. The results of both the analytical and experimental parts of this research are used to develop recommendations for RDF syntax usage and to design an RDF/XML syntax subset intended to simplify development and improve the compatibility of information serialized with this syntax.
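The difference between text-based RDF syntaxes discussed here can be illustrated with a toy serializer that writes the same triples as N-Triples (one triple per line) and as a Turtle-like form that groups triples sharing a subject. The URIs, the literal, and both functions are illustrative sketches, not spec-complete implementations:

```python
from collections import defaultdict

# Two triples about the same (invented) subject.
triples = [
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/name", '"Alice"'),
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://ex.org/bob"),
]

def _term(t):
    # Literals are kept as-is; IRIs are wrapped in angle brackets.
    return t if t.startswith('"') else f"<{t}>"

def to_ntriples(triples):
    # N-Triples: flat, one full triple per line, trivially streamable.
    return "\n".join(f"{_term(s)} {_term(p)} {_term(o)} ." for s, p, o in triples)

def to_turtle(triples):
    # Turtle-like: triples sharing a subject are grouped with ';',
    # which is more compact and readable for humans.
    by_subject = defaultdict(list)
    for s, p, o in triples:
        by_subject[s].append((p, o))
    blocks = []
    for s, pairs in by_subject.items():
        body = " ;\n    ".join(f"{_term(p)} {_term(o)}" for p, o in pairs)
        blocks.append(f"{_term(s)} {body} .")
    return "\n".join(blocks)
```

The same graph thus yields two N-Triples lines but a single Turtle-like block, which is the kind of trade-off (verbosity vs. readability) the paper compares.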
APA, Harvard, Vancouver, ISO, and other styles
3

Meddah, Nacéra, and Mustapha Chellali. "Roman domination and 2-independence in trees." Discrete Mathematics, Algorithms and Applications 09, no. 02 (April 2017): 1750023. http://dx.doi.org/10.1142/s1793830917500239.

Full text
Abstract:
A Roman dominating function (RDF) on a graph G = (V, E) is a function f : V → {0, 1, 2} satisfying the condition that every vertex u with f(u) = 0 is adjacent to at least one vertex v of G for which f(v) = 2. The weight of an RDF is the sum ∑_{v∈V} f(v), and the minimum weight of an RDF f is the Roman domination number γ_R(G). A subset S of V is a 2-independent set of G if every vertex of S has at most one neighbor in S. The maximum cardinality of a 2-independent set of G is the 2-independence number β_2(G). Both parameters are incomparable in general; however, we show that if T is a tree, then γ_R(T) ≤ (3/2)β_2(T). Moreover, all extremal trees attaining equality are characterized.
APA, Harvard, Vancouver, ISO, and other styles
4

Samodivkin, Vladimir. "Roman domination in graphs: The class ℛUVR." Discrete Mathematics, Algorithms and Applications 08, no. 03 (August 2016): 1650049. http://dx.doi.org/10.1142/s179383091650049x.

Full text
Abstract:
For a graph G, a Roman dominating function (RDF) f : V(G) → {0, 1, 2} has the property that every vertex v with f(v) = 0 has a neighbor u with f(u) = 2. The weight of an RDF f is the sum ∑_{v∈V(G)} f(v), and the minimum weight of an RDF on G is the Roman domination number γ_R(G) of G. The Roman bondage number b_R(G) of G is the minimum cardinality of all edge sets E′ ⊆ E(G) for which γ_R(G − E′) > γ_R(G). A graph G is in the class ℛUVR if the Roman domination number remains unchanged when a vertex is deleted. In this paper, we obtain tight upper bounds on the Roman domination and Roman bondage numbers of graphs in ℛUVR. We present necessary and sufficient conditions for a tree to be in the class ℛUVR. We give a constructive characterization of ℛUVR-trees using labelings.
APA, Harvard, Vancouver, ISO, and other styles
5

Cui, Hong, Kenneth Yang Jiang, and Partha Pratim Sanyal. "From text to RDF triple store: An application for biodiversity literature." Proceedings of the American Society for Information Science and Technology 47, no. 1 (November 2010): 1–2. http://dx.doi.org/10.1002/meet.14504701415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Khoeilar, R., and S. M. Sheikholeslami. "Rainbow reinforcement numbers in digraphs." Asian-European Journal of Mathematics 10, no. 01 (March 2017): 1750004. http://dx.doi.org/10.1142/s1793557117500048.

Full text
Abstract:
Let D = (V, A) be a finite and simple digraph. A k-rainbow dominating function (kRDF) of a digraph D is a function f from the vertex set V(D) to the set of all subsets of the set {1, 2, …, k} such that for any vertex v with f(v) = ∅ the condition ⋃_{u∈N⁻(v)} f(u) = {1, 2, …, k} is fulfilled, where N⁻(v) is the set of in-neighbors of v. The weight of a kRDF f is the value ω(f) = ∑_{v∈V(D)} |f(v)|. The k-rainbow domination number of a digraph D, denoted by γ_rk(D), is the minimum weight of a kRDF of D. The k-rainbow reinforcement number r_rk(D) of a digraph D is the minimum number of arcs that must be added to D in order to decrease the k-rainbow domination number. In this paper, we initiate the study of the k-rainbow reinforcement number in digraphs and present some sharp bounds for r_rk(D). In particular, we determine the k-rainbow reinforcement number of some classes of digraphs.
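The rainbow-domination condition in this abstract (another graph-theoretic "RDF") can likewise be sketched in a few lines; the digraph and labelings below are invented for illustration:

```python
def is_krdf(in_neighbors, f, k):
    """k-rainbow domination condition on a digraph: every vertex whose
    label set is empty must see every color 1..k among the labels
    of its in-neighbors."""
    colors = set(range(1, k + 1))
    return all(
        set().union(*(f[u] for u in in_neighbors[v])) == colors
        for v in in_neighbors if not f[v]
    )

def weight(f):
    # Weight of a kRDF: total number of colors assigned over all vertices.
    return sum(len(s) for s in f.values())

# Digraph with arcs a -> c and b -> c, written as in-neighbor lists:
# c's empty label set is covered because its in-neighbors carry {1} and {2}.
in_nbrs = {"a": [], "b": [], "c": ["a", "b"]}
```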
APA, Harvard, Vancouver, ISO, and other styles
7

Dosso, Dennis, and Gianmaria Silvello. "Search Text to Retrieve Graphs: A Scalable RDF Keyword-Based Search System." IEEE Access 8 (2020): 14089–111. http://dx.doi.org/10.1109/access.2020.2966823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dong, Ngan T., and Lawrence B. Holder. "Natural Language Generation from Graphs." International Journal of Semantic Computing 08, no. 03 (September 2014): 335–84. http://dx.doi.org/10.1142/s1793351x14500068.

Full text
Abstract:
The Resource Description Framework (RDF) is the primary language to describe information on the Semantic Web. The deployment of semantic web search from Google and Microsoft, the Linked Open Data Community project along with the announcement of schema.org by Yahoo, Bing and Google have significantly fostered the generation of data available in RDF format. Yet the RDF is a computer representation of data and thus is hard for the non-expert user to understand. We propose a Natural Language Generation (NLG) engine to generate English text from a small RDF graph. The Natural Language Generation from Graphs (NLGG) system uses an ontology skeleton, which contains hierarchies of concepts, relationships and attributes, along with handcrafted template information as the knowledge base. We performed two experiments to evaluate NLGG. First, NLGG is tested with RDF graphs extracted from four ontologies in different domains. A Simple Verbalizer is used to compare the results. NLGG consistently outperforms the Simple Verbalizer in all the test cases. In the second experiment, we compare the effort spent to make NLGG and NaturalOWL work with the M-PIRO ontology. Results show that NLGG generates acceptable text with much smaller effort.
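The template-based generation strategy described in this abstract can be sketched as follows; the predicate names and English templates are invented for illustration, and NLGG's actual knowledge base (ontology skeleton plus handcrafted templates) is far richer:

```python
# Handcrafted predicate-to-sentence templates (illustrative only).
TEMPLATES = {
    "birthPlace": "{s} was born in {o}.",
    "occupation": "{s} works as {o}.",
}

def verbalize(triples, templates=TEMPLATES):
    """Render each (subject, predicate, object) triple as an English
    sentence, falling back to a flat 's p o.' verbalization when no
    template is known for the predicate."""
    sentences = []
    for s, p, o in triples:
        pattern = templates.get(p, "{s} {p} {o}.")
        sentences.append(pattern.format(s=s, p=p, o=o))
    return " ".join(sentences)
```

The fallback branch is roughly what a "Simple Verbalizer" baseline does for every triple, which is why template coverage matters for output quality.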
APA, Harvard, Vancouver, ISO, and other styles
9

Devi, Runumi, Deepti Mehrotra, and Hajer Baazaoui-Zghal. "RDF Model Generation for Unstructured Dengue Patients' Clinical and Pathological Data." International Journal of Information System Modeling and Design 10, no. 4 (October 2019): 71–89. http://dx.doi.org/10.4018/ijismd.2019100104.

Full text
Abstract:
The automatic extraction of triplets from unstructured patient records and their transformation into resource description framework (RDF) models has remained a huge challenge so far, and would provide significant benefits to potential applications such as knowledge discovery, machine interoperability, and ontology design in the health care domain. This article describes an approach that extracts semantics (triplets) from dengue patient case-sheets and clinical reports and transforms them into an RDF model. A Text2Ontology framework is used for extracting relations from text and was found to have limited capability. A TypedDependency-parsing-based algorithm is designed for extracting RDF facts from patients' case-sheets and subsequently converting them into RDF models. A mapping-driven semantifying approach is also designed for mapping the clinical details extracted from patients' reports to the corresponding triplet components and subsequently generating RDF models. The exhaustiveness of the generated RDF models is measured by the number of axioms generated with respect to the facts available.
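A minimal sketch of a mapping-driven semantifying step, assuming semi-structured "field: value" report lines; the field names, predicate URIs, and patient URI are all illustrative, not taken from the article:

```python
# Illustrative mapping from report field names to predicate URIs.
FIELD_MAP = {
    "platelet count": "ex:plateletCount",
    "temperature": "ex:bodyTemperature",
}

def semantify(patient_uri, report_lines):
    """Turn 'field: value' lines into (subject, predicate, object)
    triples, skipping fields with no known mapping."""
    triples = []
    for line in report_lines:
        field, _, value = line.partition(":")
        pred = FIELD_MAP.get(field.strip().lower())
        if pred:
            triples.append((patient_uri, pred, value.strip()))
    return triples
```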
APA, Harvard, Vancouver, ISO, and other styles
10

Mountantonakis, Michalis, and Yannis Tzitzikas. "Linking Entities from Text to Hundreds of RDF Datasets for Enabling Large Scale Entity Enrichment." Knowledge 2, no. 1 (December 24, 2021): 1–25. http://dx.doi.org/10.3390/knowledge2010001.

Full text
Abstract:
There has been a sharp increase in approaches that take a text as input and perform named entity recognition (or extraction) to link the recognized entities of the given text to RDF Knowledge Bases (or datasets). In this way, it is feasible to retrieve more information about these entities, which can be of primary importance for several tasks, e.g., facilitating manual annotation, hyperlink creation, content enrichment, improving data veracity, and others. However, current approaches link the extracted entities to one or only a few knowledge bases; therefore, it is not feasible to retrieve the URIs and facts of each recognized entity from multiple datasets or to discover the most relevant datasets for one or more extracted entities. To enable this functionality, we introduce a research prototype, called LODsyndesisIE, which exploits three widely used Named Entity Recognition and Disambiguation tools (i.e., DBpedia Spotlight, WAT, and Stanford CoreNLP) for recognizing the entities of a given text. Afterwards, it links these entities to the LODsyndesis knowledge base, which offers data enrichment and discovery services for millions of entities over hundreds of RDF datasets. We describe all the steps of LODsyndesisIE and provide information on how to exploit its services through its online application and its REST API. For the evaluation, we use three collections of texts: (i) to compare the effectiveness of combining different Named Entity Recognition tools, (ii) to measure the gain in terms of enrichment from linking the extracted entities to LODsyndesis instead of a single or a few RDF datasets, and (iii) to evaluate the efficiency of LODsyndesisIE.
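One simple way to combine the output of several NER tools is a majority vote over recognized mentions; this is a sketch of the general idea only, and LODsyndesisIE's actual combination logic may differ:

```python
from collections import Counter

def majority_entities(tool_outputs):
    """tool_outputs: one set of recognized entity strings per NER tool.
    Keep an entity only when more than half of the tools found it."""
    counts = Counter(e for out in tool_outputs for e in set(out))
    threshold = len(tool_outputs) / 2
    return {e for e, c in counts.items() if c > threshold}
```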
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "RDF-To-Text"

1

Faille, Juliette. "Data-Based Natural Language Generation : Evaluation and Explainability." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0305.

Full text
Abstract:
Recent Natural Language Generation (NLG) models achieve very high average performance. Their output texts are generally grammatically and syntactically correct which makes them sound natural. Though the semantics of the texts are right in most cases, even the state-of-the-art NLG models still produce texts with partially incorrect meanings. In this thesis, we propose evaluating and analyzing content-related issues of models used in the NLG tasks of Resource Description Framework (RDF) graphs verbalization and conversational question generation. First, we focus on the task of RDF verbalization and the omissions and hallucinations of RDF entities, i.e. when an automatically generated text does not mention all the input RDF entities or mentions other entities than those in the input. We evaluate 25 RDF verbalization models on the WebNLG dataset. We develop a method to automatically detect omissions and hallucinations of RDF entities in the outputs of these models. We propose a metric based on omissions or hallucination counts to quantify the semantic adequacy of the NLG models. We find that this metric correlates well with what human annotators consider to be semantically correct and show that even state-of-the-art models are subject to omissions and hallucinations. Following this observation about the tendency of RDF verbalization models to generate texts with content-related issues, we propose to analyze the encoder of two such state-of-the-art models, BART and T5. We use the probing explainability method and introduce two probing classifiers (one parametric and one non-parametric) to detect omissions and distortions of RDF input entities in the embeddings of the encoder-decoder models. We find that such probing classifiers are able to detect these mistakes in the encodings, suggesting that the encoder of the models is responsible for some loss of information about omitted and distorted entities. 
Finally, we propose a T5-based conversational question generation model that in addition to generating a question based on an input RDF graph and a conversational context, generates both a question and its corresponding RDF triples. This setting allows us to introduce a fine-grained evaluation procedure automatically assessing coherence with the conversation context and the semantic adequacy with the input RDF. Our contributions belong to the fields of NLG evaluation and explainability and use techniques and methodologies from these two research fields in order to work towards providing more reliable NLG models
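The omission-counting idea behind the thesis's semantic-adequacy metric can be sketched as follows; entity matching here is naive substring search, whereas the thesis develops a more careful automatic detection method:

```python
def omissions(input_entities, generated_text):
    """Input RDF entities that the generated text never mentions
    (naive case-insensitive substring matching)."""
    text = generated_text.lower()
    return [e for e in input_entities if e.lower() not in text]

def semantic_adequacy(input_entities, generated_text):
    """Fraction of input entities that are mentioned: 1.0 means no
    omissions; lower values flag content-related errors."""
    if not input_entities:
        return 1.0
    missed = omissions(input_entities, generated_text)
    return 1 - len(missed) / len(input_entities)
```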
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "RDF-To-Text"

1

Rezk, Martín, Jungyeul Park, Yoon Yongun, Kyungtae Lim, John Larsen, YoungGyun Hahm, and Key-Sun Choi. "Korean Linked Data on the Web: Text to RDF." In Semantic Technology, 368–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37996-3_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Draicchio, Francesco, Aldo Gangemi, Valentina Presutti, and Andrea Giovanni Nuzzolese. "FRED: From Natural Language Text to RDF and OWL in One Click." In Advanced Information Systems Engineering, 263–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41242-4_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Razia and Tanwir Uddin Haider. "Natural Language Text to RDF Schema Conversion and OWL Mapping for an e-Recruitment Domain." In Algorithms for Intelligent Systems, 49–57. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4862-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zheng, Yuan, Olli Seppänen, Sebastian Seiß, and Jürgen Melzner. "Testing ChatGPT-Aided SPARQL Generation for Semantic Construction Information Retrieval." In CONVR 2023 - Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality, 751–60. Florence: Firenze University Press, 2023. http://dx.doi.org/10.36253/979-12-215-0289-3.75.

Full text
Abstract:
Recently there has been strong interest in using semantic technologies to improve information management in the construction domain. Ontologies provide a formalized representation of domain knowledge and a structured information model that facilitates tasks such as the formalization and integration of construction workflow information and data, and they enable further applications such as information retrieval and reasoning. SPARQL (SPARQL Protocol and RDF Query Language) queries are the main approach to retrieving information from data in Resource Description Framework (RDF) format. However, there is a barrier for end users in developing SPARQL queries, since writing them requires proficient coding skills. This challenge hinders the practical application of ontology-based approaches on construction sites. As a generative language model, ChatGPT has already demonstrated its capability to process and generate human-like text, including generating SPARQL for domain-specific tasks. However, there are no specific tests evaluating the SPARQL-generating capability of ChatGPT within the construction domain. Therefore, this paper explores the use of ChatGPT in a case study that imports the Digital Construction Ontologies (DiCon) and generates SPARQL queries for retrieving specific construction workflow information. We evaluate the generated queries with metrics including syntactic correctness, plausible query structure, and coverage of correct answers.
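A sketch of what such an evaluation might check, using an invented SPARQL query; the dicon: namespace URI is a placeholder, and real syntactic validation would use a SPARQL parser rather than this naive plausibility test:

```python
# Hypothetical generated query for a workflow-style question.
query = """
PREFIX dicon: <https://example.org/dicon#>
SELECT ?task ?worker WHERE {
  ?task dicon:assignedTo ?worker .
}
"""

def looks_like_select(q):
    """Naive plausibility check: a SELECT query with a WHERE clause
    and balanced braces. A stand-in for a real SPARQL parser."""
    q_up = q.upper()
    return ("SELECT" in q_up and "WHERE" in q_up
            and q.count("{") == q.count("}"))
```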
APA, Harvard, Vancouver, ISO, and other styles
6

"Path-Based Approximate Matching of Spatiotemporal RDF Data." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 81–101. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-9108-9.ch005.

Full text
Abstract:
Due to the ever-increasing amount of RDF data with temporal and spatial features, efficiently querying spatiotemporal RDF data over RDF datasets is an important task. In this chapter, spatiotemporal RDF data contains time features, space features, and text features, which are processed separately to facilitate querying. The authors propose an algorithm for path-based approximate matching of spatiotemporal RDF data, which includes a graph-decomposition algorithm and a query-path-combination algorithm. The query graph with spatiotemporal features is split into multiple paths, and each path in the query graph is used to search for the best matching path among the path sets contained in the data graph. Because exact matches may not exist, approximate matching is performed according to an evaluation function to find the best matching path. Finally, all the best paths are combined to generate a matching result graph.
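The path-matching idea can be sketched with a toy evaluation function that scores how much of a query path appears, in order, in a candidate data path; the chapter's actual evaluation function over spatiotemporal features is more elaborate:

```python
def score_path(query_path, data_path):
    """Fraction of query-path elements found, in order, in the data path
    (a crude stand-in for an approximate-matching evaluation function)."""
    hits, i = 0, 0
    for edge in query_path:
        while i < len(data_path) and data_path[i] != edge:
            i += 1
        if i < len(data_path):
            hits += 1
            i += 1
    return hits / len(query_path)

def best_match(query_path, candidate_paths):
    """Pick the candidate data path with the highest score."""
    return max(candidate_paths, key=lambda p: score_path(query_path, p))
```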
APA, Harvard, Vancouver, ISO, and other styles
7

Nunes, Ronnie Carlos Tavares, Márcio Clemes, and Rogério Cid Bastos. "Use of public domain knowledge bases: A case study on the RDF language." In UNITING KNOWLEDGE INTEGRATED SCIENTIFIC RESEARCH FOR GLOBAL DEVELOPMENT. Seven Editora, 2023. http://dx.doi.org/10.56238/uniknowindevolp-048.

Full text
Abstract:
The purpose of this paper is to analyze Semantic Web and Linked Data technologies through the development of a web application that performs knowledge modeling using ontologies and the RDF (Resource Description Framework) language. A public knowledge ontology/vocabulary was used and populated with data from a Brazilian relational database. The aggregated data were converted to RDF format and made available in an RDF database. The Virtuoso© universal server was used for this task: a hybrid database platform that combines, in one system, the functionalities of a traditional relational database management system, an object-relational database, a virtual database, RDF, XML, and free-text storage, a web application server, and a file server. As a result, a web application capable of retrieving information using the basic principles of the Semantic Web was obtained.
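The relational-to-RDF conversion step can be sketched as follows; the table name, keys, and URIs are illustrative, and the paper relies on Virtuoso's own tooling rather than hand-written code like this:

```python
def rows_to_ntriples(table, rows, base="http://example.org"):
    """Turn {primary_key: {column: value}} rows into N-Triples lines:
    the row key becomes the subject URI and each column becomes a
    predicate in an illustrative vocab namespace."""
    lines = []
    for pk, record in rows.items():
        subj = f"<{base}/{table}/{pk}>"
        for col, val in record.items():
            lines.append(f'{subj} <{base}/vocab#{col}> "{val}" .')
    return "\n".join(lines)
```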
APA, Harvard, Vancouver, ISO, and other styles
8

Corcoglioniti, Francesco, Marco Rospocher, Roldano Cattoni, Bernardo Magnini, and Luciano Serafini. "Managing Large Volumes of Interlinked Text and Knowledge With the KnowledgeStore." In Innovations, Developments, and Applications of Semantic Web and Information Systems, 32–61. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5042-6.ch002.

Full text
Abstract:
This chapter describes the KnowledgeStore, a scalable, fault-tolerant, and Semantic Web grounded open-source storage system to jointly store, manage, retrieve, and query interlinked structured and unstructured data, especially designed to manage all the data involved in Knowledge Extraction applications. The chapter presents the concept, design, function and implementation of the KnowledgeStore, and reports on its concrete usage in four application scenarios within the NewsReader EU project, where it has been successfully used to store and support the querying of millions of news articles interlinked with billions of RDF triples, both extracted from text and imported from Linked Open Data sources.
APA, Harvard, Vancouver, ISO, and other styles
9

Martinez-Rodriguez, Jose L., Ivan Lopez-Arevalo, Jaime I. Lopez-Veyna, Ana B. Rios-Alvarado, and Edwin Aldana-Bobadilla. "NLP and the Representation of Data on the Semantic Web." In Handbook of Research on Natural Language Processing and Smart Service Systems, 393–426. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4730-4.ch019.

Full text
Abstract:
One of the goals of data scientists and curators is to get information (contained in text) organized and integrated in a way that can be easily consumed by people and machines. A starting point for such a goal is to get a model to represent the information. This model should ease to obtain knowledge semantically (e.g., using reasoners and inferencing rules). In this sense, the Semantic Web is focused on representing the information through the Resource Description Framework (RDF) model, in which the triple (subject, predicate, object) is the basic unit of information. In this context, the natural language processing (NLP) field has been a cornerstone in the identification of elements that can be represented by triples of the Semantic Web. However, existing approaches for the representation of RDF triples from texts use diverse techniques and tasks for such purpose, which complicate the understanding of the process by non-expert users. This chapter aims to discuss the main concepts involved in the representation of the information through the Semantic Web and the NLP fields.
APA, Harvard, Vancouver, ISO, and other styles
10

Stöhr, Mark R., Andreas Günther, and Raphael W. Majeed. "ISO 21526 Conform Metadata Editor for FAIR Unicode SKOS Thesauri." In German Medical Data Sciences: Bringing Data to Life. IOS Press, 2021. http://dx.doi.org/10.3233/shti210056.

Full text
Abstract:
Metadata repositories are an indispensable component of data integration infrastructures and support semantic interoperability between knowledge organization systems. Standards for metadata representation such as ISO/IEC 11179, as well as the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS) by the World Wide Web Consortium, were published to ensure metadata interoperability, maintainability, and sustainability. The FAIR guidelines were composed to explicate these aspects in four principles divided into fifteen sub-principles. The ISO/IEC 21526 standard extends the 11179 standard for the health care domain and mandates that SKOS be used in certain scenarios. In medical informatics, the composition of health care SKOS classification schemes is often managed by documentalists and data scientists. They use editors that support them in producing comprehensive and valid metadata. Current metadata editors either do not properly support SKOS resource annotations, require server applications, or rely on additional databases for metadata storage. These characteristics run contrary to the application independence and versatility of raw Unicode SKOS files, e.g., custom text arrangement, extensibility, or copy-and-paste editing. We provide an application that adds navigation, auto-completion, and validity-check capabilities on top of a regular Unicode text editor.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "RDF-To-Text"

1

Xiaoyue, Wang, and Bai Rujiang. "Applying RDF Ontologies to Improve Text Classification." In 2009 International Conference on Computational Intelligence and Natural Computing (CINC). IEEE, 2009. http://dx.doi.org/10.1109/cinc.2009.115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Hanning, Lingfei Wu, Po Hu, and Fangli Xu. "RDF-to-Text Generation with Graph-augmented Structural Neural Encoders." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/419.

Full text
Abstract:
The task of RDF-to-text generation is to generate a corresponding descriptive text given a set of RDF triples. Most of the previous approaches either cast this task as a sequence-to-sequence problem or employ graph-based encoder for modeling RDF triples and decode a text sequence. However, none of these methods can explicitly model both local and global structure information between and within the triples. To address these issues, we propose to jointly learn local and global structure information via combining two new graph-augmented structural neural encoders (i.e., a bidirectional graph encoder and a bidirectional graph-based meta-paths encoder) for the input triples. Experimental results on two different WebNLG datasets show that our proposed model outperforms the state-of-the-art baselines. Furthermore, we perform a human evaluation that demonstrates the effectiveness of the proposed method by evaluating generated text quality using various subjective metrics.
3

Li, Ziran, Zibo Lin, Ning Ding, Hai-Tao Zheng, and Ying Shen. "Triple-to-Text Generation with an Anchor-to-Prototype Framework." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/523.

Abstract:
Generating a textual description from a set of RDF triplets is a challenging task in natural language generation. Recent neural methods have become the mainstream for this task, and they often generate sentences from scratch. However, due to the huge gap between the structured input and the unstructured output, the input triples alone are insufficient to determine an expressive and specific description. In this paper, we propose a novel anchor-to-prototype framework to bridge the gap between structured RDF triples and natural text. The model retrieves a set of prototype descriptions from the training data and extracts writing patterns from them to guide the generation process. Furthermore, to make more precise use of the retrieved prototypes, we employ a triple anchor that aligns the input triples into groups so as to better match the prototypes. Experimental results on both English and Chinese datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both automatic and manual evaluation, demonstrating the benefit of learning guidance from retrieved prototypes to facilitate triple-to-text generation.
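The retrieval step of such a framework can be sketched in a few lines. This is our own simplification, not the authors' code: it scores training examples by the Jaccard overlap of their triple sets with the input and returns the best match's description as a writing pattern. All data is invented.

```python
# Toy prototype retrieval by triple-set overlap (illustrative, not the paper's model).

def jaccard(a, b):
    """Similarity between two triple sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

training_data = [
    ({("Paris", "capitalOf", "France")},
     "Paris is the capital of France."),
    ({("Berlin", "capitalOf", "Germany"), ("Berlin", "population", "3.6M")},
     "Berlin, the capital of Germany, has 3.6 million inhabitants."),
]

def retrieve_prototype(input_triples):
    """Return the training description whose triple set best matches the input."""
    return max(training_data,
               key=lambda pair: jaccard(input_triples, pair[0]))[1]

query = {("Berlin", "capitalOf", "Germany")}
print(retrieve_prototype(query))
```

The actual paper goes further: the triple anchor groups the input triples before matching, so that multi-triple inputs align with prototypes at group rather than set granularity.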
4

Salgueiro, Mariana D. A., Veronica dos Santos, André L. C. Rêgo, Daniel S. Guimarães, Edward H. Haeusler, Jefferson B. dos Santos, Marcos V. Villas, and Sérgio Lifschitz. "Quem@PUC - A tool to find researchers at PUC-Rio." In Anais Estendidos do Simpósio Brasileiro de Banco de Dados. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbbd_estendido.2021.18169.

Abstract:
Quem@PUC is an information retrieval system available on the Web that allows searching for researchers and professors based on a keyword list of research-related terms. It publicizes research and teaching activities from the PUC-Rio community to society in general. The idea is to integrate information about professors from administrative systems, courses offered, and researchers’ Lattes CVs. Data sources are converted to RDF format using domain ontologies, then stored in a NoSQL database that supports native free-text indexing on triple objects. Search results include names, academic papers, teaching activities, and contact links.
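Free-text indexing on triple objects, as the abstract describes, amounts to an inverted index from keywords to subjects. The sketch below is an assumption-laden illustration, not the Quem@PUC implementation; the researcher names and predicates are invented.

```python
# Toy inverted index over the object position of RDF triples
# (illustrative data; not the Quem@PUC codebase).
from collections import defaultdict

triples = [
    ("prof:Silva", "hasResearchArea", "database systems and query processing"),
    ("prof:Souza", "hasResearchArea", "ontology engineering and RDF stores"),
    ("prof:Silva", "teaches", "introduction to databases"),
]

def build_index(triples):
    """Map each lowercased token in a triple's object to its subject."""
    index = defaultdict(set)
    for subject, _, text in triples:
        for token in text.lower().split():
            index[token].add(subject)
    return index

def search(index, keyword):
    """Return the sorted subjects whose triple objects mention the keyword."""
    return sorted(index.get(keyword.lower(), set()))

index = build_index(triples)
print(search(index, "RDF"))        # matches prof:Souza
print(search(index, "databases"))  # matches prof:Silva
```

A production system would delegate this to the NoSQL store's native text index and add stemming and ranking, but the subject-via-object lookup is the same idea.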
5

Kanuri, Neelima, Ian R. Grosse, Jack C. Wileden, and Wei-Shan Chiang. "Ontologies and Fine-Grained Control Over Sharing of Engineering Modeling Knowledge in a Web Based Engineering Environment." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-81373.

Abstract:
Within the knowledge modeling community, the use of ontologies in the construction of knowledge-intensive systems is now widespread. Ontologies are used to facilitate knowledge sharing, reuse, agent interoperability, and knowledge acquisition. We have developed an ontology for representing and sharing engineering analysis modeling (EAM) knowledge in a web-based environment and implemented these ontologies in a computational knowledge base system, called ON-TEAM, using Protégé. In this paper we present new object-oriented methods that operate on the EAM knowledge base to perform specific tasks. One such method is the creation of a flat technical report that describes the properties or class relationships of an engineering analysis modeling class and/or the modeling knowledge involved in the development of a specific engineering analysis model. This method is a Java application that accesses the EAM knowledge base through the Protégé application programming interface. It presents the user with a graphical user interface for selecting the EAM class or a specific analysis model instance and then exports the appropriate knowledge to a text file to form the basis of a technical report. Secondly, a method for controlling knowledge access and sharing is under development, which restricts access to portions of the knowledge base according to assigned permissions. This method provides fine-grained control over knowledge sharing as efficiently as possible. Both methods acting together enable automatic generation of recipient-specific technical reports based on the recipient’s security permissions, customized knowledge viewing, and customized knowledge exporting through various knowledge exchange formats such as XML (Walsh [1]) and RDF (Klyne [2]). Finally, the implementation of these methods and our EAM knowledge base application as components within a commercial web-based distributed software architecture is presented.
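The recipient-specific report generation the abstract describes can be reduced to filtering knowledge-base entries by access level before export. The sketch below is ours, not the ON-TEAM code; the levels, slot names, and values are all illustrative.

```python
# Toy permission-filtered report export (illustrative, not ON-TEAM).

LEVELS = {"public": 0, "internal": 1, "confidential": 2}

knowledge_base = [
    {"slot": "model_purpose", "value": "thermal stress analysis",  "access": "public"},
    {"slot": "mesh_density",  "value": "0.5 mm near welds",        "access": "internal"},
    {"slot": "material_data", "value": "proprietary alloy curve",  "access": "confidential"},
]

def generate_report(kb, recipient_level):
    """Render a flat text report containing only the entries the
    recipient's permission level covers."""
    allowed = LEVELS[recipient_level]
    lines = [f"{entry['slot']}: {entry['value']}"
             for entry in kb
             if LEVELS[entry["access"]] <= allowed]
    return "\n".join(lines)

print(generate_report(knowledge_base, "internal"))
```

An "internal" recipient sees the purpose and mesh entries but not the confidential material data; the same filter, applied before serialization, yields the customized XML or RDF exports the paper mentions.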
