Journal articles on the topic 'RDF-To-Text'




Consult the top 50 journal articles for your research on the topic 'RDF-To-Text.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Chellali, Mustapha, and Nader Jafari Rad. "Trees with independent Roman domination number twice the independent domination number." Discrete Mathematics, Algorithms and Applications 07, no. 04 (December 2015): 1550048. http://dx.doi.org/10.1142/s1793830915500482.

Full text
Abstract:
A Roman dominating function (RDF) on a graph G = (V, E) is a function f : V → {0, 1, 2} satisfying the condition that every vertex u for which f(u) = 0 is adjacent to at least one vertex v for which f(v) = 2. The weight of a RDF f is the value w(f) = ∑v∈V f(v). The Roman domination number, γR(G), of G is the minimum weight of a RDF on G. An RDF f is called an independent Roman dominating function (IRDF) if the set {v ∈ V : f(v) ≥ 1} is an independent set. The independent Roman domination number, iR(G), is the minimum weight of an IRDF on G. In this paper, we study trees with independent Roman domination number twice their independent domination number, answering an open question.
APA, Harvard, Vancouver, ISO, and other styles
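The Roman domination definitions in the abstract above lend themselves to a small worked example. The following Python sketch (not from the paper; the graph and helper names are illustrative) brute-forces the Roman domination number of a 4-vertex path by enumerating all functions f : V → {0, 1, 2} and keeping the minimum-weight valid one.

```python
from itertools import product

# Path P4: vertices 0-1-2-3, given as an adjacency list (illustrative example graph).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def is_rdf(f, adj):
    """Check the Roman domination condition: every vertex with f(v) = 0
    must have a neighbour assigned the value 2."""
    return all(f[v] != 0 or any(f[u] == 2 for u in adj[v]) for v in adj)

def roman_domination_number(adj):
    """Brute-force the minimum weight over all functions f : V -> {0, 1, 2}."""
    verts = sorted(adj)
    best = None
    for values in product((0, 1, 2), repeat=len(verts)):
        f = dict(zip(verts, values))
        if is_rdf(f, adj):
            weight = sum(values)
            best = weight if best is None else min(best, weight)
    return best

print(roman_domination_number(adj))  # prints 3 for the path P4 (e.g. f = 0, 2, 0, 1)
```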
2

Gryaznov, Yevgeny, and Pavel Rusakov. "Analysis of RDF Syntaxes for Semantic Web Development." Applied Computer Systems 18, no. 1 (December 1, 2015): 33–42. http://dx.doi.org/10.1515/acss-2015-0017.

Full text
Abstract:
In this paper, the authors investigate how RDF (Resource Description Framework) syntaxes can be used to represent information in the Semantic Web. They describe why pure XML cannot be used effectively for this purpose and how the RDF framework solves the problem. Information is represented in the form of a directed graph. RDF itself is only an abstract formal model for representing information, and additional tools are required to write that information down. Such tools are RDF syntaxes: concrete text or binary formats that prescribe rules for serializing RDF data. Text-based RDF syntaxes can be built on top of existing formats (XML, JSON) or can be RDF-specific, designed from scratch for the sole purpose of serializing RDF graphs. The authors briefly describe several RDF syntaxes (both XML and non-XML) and compare them in order to identify the strengths and weaknesses of each. Serialization and deserialization speed tests are performed using the Jena library. The results of both the analytical and experimental parts of the research are used to develop recommendations for RDF syntax usage and to design an RDF/XML syntax subset intended to simplify development and improve the compatibility of information serialized with this syntax.
APA, Harvard, Vancouver, ISO, and other styles
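As a hedged illustration of the syntax comparison discussed above: the paper's speed tests used the Jena library (Java), but the same idea of serializing one abstract RDF graph into several concrete text syntaxes can be sketched in Python with rdflib. The namespace and triples below are made up for the example.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # illustrative namespace, not from the paper

g = Graph()
g.bind("ex", EX)
g.add((EX.article, EX.title, Literal("Analysis of RDF Syntaxes")))
g.add((EX.article, EX.year, Literal(2015)))

# One abstract graph, three concrete text syntaxes.
# "json-ld" is built into rdflib >= 6; older versions need the rdflib-jsonld plugin.
for fmt in ("xml", "turtle", "json-ld"):
    print(f"--- {fmt} ---")
    print(g.serialize(format=fmt))
```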
3

Meddah, Nacéra, and Mustapha Chellali. "Roman domination and 2-independence in trees." Discrete Mathematics, Algorithms and Applications 09, no. 02 (April 2017): 1750023. http://dx.doi.org/10.1142/s1793830917500239.

Full text
Abstract:
A Roman dominating function (RDF) on a graph G = (V, E) is a function f : V → {0, 1, 2} satisfying the condition that every vertex u with f(u) = 0 is adjacent to at least one vertex v of G for which f(v) = 2. The weight of a RDF is the sum f(V) = ∑v∈V f(v), and the minimum weight of a RDF f is the Roman domination number γR(G). A subset S of V is a 2-independent set of G if every vertex of S has at most one neighbor in S. The maximum cardinality of a 2-independent set of G is the 2-independence number β2(G). Both parameters are incomparable in general, however, we show that if T is a tree, then γR(T) ≤ β2(T). Moreover, all extremal trees attaining equality are characterized.
APA, Harvard, Vancouver, ISO, and other styles
4

Samodivkin, Vladimir. "Roman domination in graphs: The class ℛUVR." Discrete Mathematics, Algorithms and Applications 08, no. 03 (August 2016): 1650049. http://dx.doi.org/10.1142/s179383091650049x.

Full text
Abstract:
For a graph G = (V, E), a Roman dominating function (RDF) f : V → {0, 1, 2} has the property that every vertex v with f(v) = 0 has a neighbor u with f(u) = 2. The weight of a RDF f is the sum f(V) = ∑v∈V f(v), and the minimum weight of a RDF on G is the Roman domination number γR(G) of G. The Roman bondage number bR(G) of G is the minimum cardinality of all sets E′ ⊆ E for which γR(G − E′) > γR(G). A graph G is in the class ℛUVR if the Roman domination number remains unchanged when a vertex is deleted. In this paper, we obtain tight upper bounds for γR(G) and bR(G) provided a graph G is in ℛUVR. We present necessary and sufficient conditions for a tree to be in the class ℛUVR. We give a constructive characterization of ℛUVR-trees using labelings.
APA, Harvard, Vancouver, ISO, and other styles
5

Cui, Hong, Kenneth Yang Jiang, and Partha Pratim Sanyal. "From text to RDF triple store: An application for biodiversity literature." Proceedings of the American Society for Information Science and Technology 47, no. 1 (November 2010): 1–2. http://dx.doi.org/10.1002/meet.14504701415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Khoeilar, R., and S. M. Sheikholeslami. "Rainbow reinforcement numbers in digraphs." Asian-European Journal of Mathematics 10, no. 01 (March 2017): 1750004. http://dx.doi.org/10.1142/s1793557117500048.

Full text
Abstract:
Let D = (V, A) be a finite and simple digraph. A k-rainbow dominating function (kRDF) of a digraph D is a function f from the vertex set V(D) to the set of all subsets of the set {1, 2, …, k} such that for any vertex v with f(v) = ∅ the condition ⋃u∈N−(v) f(u) = {1, 2, …, k} is fulfilled, where N−(v) is the set of in-neighbors of v. The weight of a kRDF f is the value ω(f) = ∑v∈V(D) |f(v)|. The k-rainbow domination number of a digraph D, denoted by γrk(D), is the minimum weight of a kRDF of D. The k-rainbow reinforcement number rrk(D) of a digraph D is the minimum number of arcs that must be added to D in order to decrease the k-rainbow domination number. In this paper, we initiate the study of the k-rainbow reinforcement number in digraphs and we present some sharp bounds for rrk(D). In particular, we determine the k-rainbow reinforcement number of some classes of digraphs.
APA, Harvard, Vancouver, ISO, and other styles
7

Dosso, Dennis, and Gianmaria Silvello. "Search Text to Retrieve Graphs: A Scalable RDF Keyword-Based Search System." IEEE Access 8 (2020): 14089–111. http://dx.doi.org/10.1109/access.2020.2966823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dong, Ngan T., and Lawrence B. Holder. "Natural Language Generation from Graphs." International Journal of Semantic Computing 08, no. 03 (September 2014): 335–84. http://dx.doi.org/10.1142/s1793351x14500068.

Full text
Abstract:
The Resource Description Framework (RDF) is the primary language to describe information on the Semantic Web. The deployment of semantic web search from Google and Microsoft and the Linked Open Data Community project, along with the announcement of schema.org by Yahoo, Bing and Google, have significantly fostered the generation of data available in RDF format. Yet RDF is a computer representation of data and is thus hard for the non-expert user to understand. We propose a Natural Language Generation (NLG) engine to generate English text from a small RDF graph. The Natural Language Generation from Graphs (NLGG) system uses an ontology skeleton, which contains hierarchies of concepts, relationships and attributes, along with handcrafted template information as the knowledge base. We performed two experiments to evaluate NLGG. First, NLGG is tested with RDF graphs extracted from four ontologies in different domains. A Simple Verbalizer is used to compare the results. NLGG consistently outperforms the Simple Verbalizer in all the test cases. In the second experiment, we compare the effort spent to make NLGG and NaturalOWL work with the M-PIRO ontology. Results show that NLGG generates acceptable text with much less effort.
APA, Harvard, Vancouver, ISO, and other styles
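NLGG itself relies on an ontology skeleton and handcrafted templates that are not reproduced in this listing; the sketch below only illustrates the general template-based idea of turning RDF-style triples into English sentences. All predicates and templates here are hypothetical.

```python
# Hypothetical predicate-to-template mapping; a real system such as NLGG
# would derive these from an ontology skeleton and handcrafted templates.
TEMPLATES = {
    "birthPlace": "{s} was born in {o}.",
    "author":     "{o} wrote {s}.",
    "capital":    "{o} is the capital of {s}.",
}

def verbalize(triples):
    """Turn (subject, predicate, object) triples into English sentences,
    falling back to a generic pattern for unknown predicates."""
    sentences = []
    for s, p, o in triples:
        template = TEMPLATES.get(p, "{s} has {p} {o}.")
        sentences.append(template.format(s=s, p=p, o=o))
    return " ".join(sentences)

print(verbalize([
    ("Ada Lovelace", "birthPlace", "London"),
    ("Frankenstein", "author", "Mary Shelley"),
]))
```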
9

Devi, Runumi, Deepti Mehrotra, and Hajer Baazaoui-Zghal. "RDF Model Generation for Unstructured Dengue Patients' Clinical and Pathological Data." International Journal of Information System Modeling and Design 10, no. 4 (October 2019): 71–89. http://dx.doi.org/10.4018/ijismd.2019100104.

Full text
Abstract:
The automatic extraction of triplets from unstructured patient records and transforming them into resource description framework (RDF) models has remained a huge challenge so far, and would provide significant benefit to potential applications like knowledge discovery, machine interoperability, and ontology design in the health care domain. This article describes an approach that extracts semantics (triplets) from dengue patient case-sheets and clinical reports and transforms them into an RDF model. A Text2Ontology framework is used for extracting relations from text and was found to have limited capability. A typed-dependency-parsing-based algorithm is designed for extracting RDF facts from patients' case-sheets and subsequently converting them into RDF models. A mapping-driven semantifying approach is also designed for mapping clinical details extracted from patients' reports to the corresponding triplet components and subsequently generating RDF models. The exhaustiveness of the RDF models generated is measured based on the number of axioms generated with respect to the facts available.
APA, Harvard, Vancouver, ISO, and other styles
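The authors' typed-dependency algorithm for clinical text is not given in this listing; as a rough, hedged sketch of the general approach, the snippet below uses spaCy's dependency parse to pull simple subject-verb-object triples out of a sentence and load them into an rdflib graph. The model name and namespace are assumptions.

```python
import spacy
from rdflib import Graph, Literal, Namespace

nlp = spacy.load("en_core_web_sm")  # assumes this small English model is installed
EX = Namespace("http://example.org/clinical/")  # illustrative namespace

def extract_svo(doc):
    """Very rough subject-verb-object extraction from a dependency parse."""
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "pobj")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

g = Graph()
doc = nlp("The patient developed a high fever.")
for s, p, o in extract_svo(doc):
    g.add((EX[s.replace(" ", "_")], EX[p], Literal(o)))

print(g.serialize(format="turtle"))
```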
10

Mountantonakis, Michalis, and Yannis Tzitzikas. "Linking Entities from Text to Hundreds of RDF Datasets for Enabling Large Scale Entity Enrichment." Knowledge 2, no. 1 (December 24, 2021): 1–25. http://dx.doi.org/10.3390/knowledge2010001.

Full text
Abstract:
There has been a sharp increase in approaches that receive a text as input and perform named entity recognition (or extraction) for linking the recognized entities of the given text to RDF Knowledge Bases (or datasets). In this way, it is feasible to retrieve more information for these entities, which can be of primary importance for several tasks, e.g., for facilitating manual annotation, hyperlink creation, content enrichment, for improving data veracity and others. However, current approaches link the extracted entities to one or a few knowledge bases; therefore, it is not feasible to retrieve the URIs and facts of each recognized entity from multiple datasets and to discover the most relevant datasets for one or more extracted entities. For enabling this functionality, we introduce a research prototype, called LODsyndesisIE, which exploits three widely used Named Entity Recognition and Disambiguation tools (i.e., DBpedia Spotlight, WAT and Stanford CoreNLP) for recognizing the entities of a given text. Afterwards, it links these entities to the LODsyndesis knowledge base, which offers data enrichment and discovery services for millions of entities over hundreds of RDF datasets. We introduce all the steps of LODsyndesisIE, and we provide information on how to exploit its services through its online application and its REST API. Concerning the evaluation, we use three evaluation collections of texts: (i) for comparing the effectiveness of combining different Named Entity Recognition tools, (ii) for measuring the gain in terms of enrichment by linking the extracted entities to LODsyndesis instead of using a single or a few RDF datasets and (iii) for evaluating the efficiency of LODsyndesisIE.
APA, Harvard, Vancouver, ISO, and other styles
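One of the NER tools named above, DBpedia Spotlight, exposes a public REST endpoint; a minimal, hedged example of annotating a sentence with it is shown below. The endpoint URL and its availability are assumptions, and LODsyndesisIE's own pipeline does considerably more than this.

```python
import requests

# Public DBpedia Spotlight annotation endpoint (availability not guaranteed).
URL = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate(text, confidence=0.5):
    """Return DBpedia URIs recognised in the given text."""
    resp = requests.get(
        URL,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [r["@URI"] for r in resp.json().get("Resources", [])]

print(annotate("Aristotle was a student of Plato in Athens."))
# e.g. ['http://dbpedia.org/resource/Aristotle', ...]
```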
11

Dung, Nguyen Trong, Nguyen Chinh Cuong, and Duong Quoc Van. "Molecular dynamics studies the effect of structure MgSiO3 bulk on formation process geology of the Earth." International Journal of Computational Materials Science and Engineering 08, no. 03 (September 2019): 1950011. http://dx.doi.org/10.1142/s2047684119500118.

Full text
Abstract:
The paper studies the effect of temperature ([Formula: see text], 3200, 4000, 5000, 6000, 7000 K) at pressure [Formula: see text] GPa, of pressure ([Formula: see text], 100, 200, 300, 350, 400 GPa) at [Formula: see text] K, and of thermal annealing time ([Formula: see text] ps, after 10⁵ steps) at [Formula: see text] K and [Formula: see text] GPa on the structure of an MgSiO3 bulk of 3000 atoms by Molecular Dynamics (MD) simulation, using the Born–Mayer (BM) pair interaction potential and periodic boundary conditions. The structural results are analyzed through the Radial Distribution Function (RDF), the Coordination Number (CN), the angle distributions, the size, the total energy of the system and the bond lengths. The results show that temperature and pressure influence the structural properties of the MgSiO3 bulk and the formation process of the Earth's geology. In addition, at the conditions of the Earth's center ([Formula: see text] K and [Formula: see text] GPa) there is appearance and disappearance of the Si–Si, Si–O, O–O, Si–Mg, O–Mg and Mg–Mg bonds and of the SiO4, SiO5, SiO6, MgO3, MgO4, MgO5, MgO6, MgO7, MgO8, MgO9, MgO10, MgO11 and MgO12 angle distributions. With increasing depth below the Earth's surface, the size of MgSiO3 decreases, the total energy of the system increases, and the position and height of the first RDF peak vary greatly with depth from [Formula: see text] km to [Formula: see text] km, decrease gradually from [Formula: see text] km to [Formula: see text] km, and show the smallest structural change at [Formula: see text] km, which indicates an influence on the geological formation of the Earth.
APA, Harvard, Vancouver, ISO, and other styles
12

Narayana Bhat, Talapady, and John Barkley. "Development of a use Case for Chemical Resource Description Framework for Acquired Immune Deficiency Syndrome Drug Discovery." Open Bioinformatics Journal 2, no. 1 (July 2, 2008): 20–27. http://dx.doi.org/10.2174/1875036200802010020.

Full text
Abstract:
There is considerable interest in RDF (Resource Description Framework) as a data representation standard for the growing information technology needs of drug discovery. Though several efforts towards this goal have been reported, most of them have focused on text-based data. Structural data of chemicals are a key component of drug discovery, and molecular images may offer certain advantages over text-based representations for them. Here we discuss the steps that we used to develop and search chemical Resource Description Framework (RDF) using text and images for structures relevant to Acquired Immune Deficiency Syndrome (AIDS). These steps are (a) acquisition of the data on drugs, (b) definition of the framework to establish RDF on drugs using commonly asked questions during a drug discovery effort, (c) annotation of the structural data on drugs into RDF using the framework established in step (b), (d) validation of the annotation methods using Semantic Web concepts and tools, (e) design and development of a public Web site to distribute data to the public, (f) generation and distribution of data using OWL (Web Ontology Language). This paper describes this effort, discusses our observations and announces the availability of the OWL model at the W3C Web site (http://esw.w3.org/topic/HCLS/ChemicalTaxonomiesUseCase). The style of this paper is chosen so as to reach a broad audience, including structural biologists, medicinal chemists, and information technologists, and may at times appear to state the obvious to certain experts. A full discussion of our method and its comparison to other published methods is beyond the scope of this publication.
APA, Harvard, Vancouver, ISO, and other styles
13

Katayama, Toshiaki, Shuichi Kawashima, Gos Micklem, Shin Kawano, Jin-Dong Kim, Simon Kocbek, Shinobu Okamoto, et al. "BioHackathon series in 2013 and 2014: improvements of semantic interoperability in life science data and services." F1000Research 8 (September 23, 2019): 1677. http://dx.doi.org/10.12688/f1000research.18238.1.

Full text
Abstract:
Publishing databases in the Resource Description Framework (RDF) model is becoming widely accepted to maximize the syntactic and semantic interoperability of open data in life sciences. Here we report advancements made in the 6th and 7th annual BioHackathons which were held in Tokyo and Miyagi respectively. This review consists of two major sections covering: 1) improvement and utilization of RDF data in various domains of the life sciences and 2) meta-data about these RDF data, the resources that store them, and the service quality of SPARQL Protocol and RDF Query Language (SPARQL) endpoints. The first section describes how we developed RDF data, ontologies and tools in genomics, proteomics, metabolomics, glycomics and by literature text mining. The second section describes how we defined descriptions of datasets, the provenance of data, and quality assessment of services and service discovery. By enhancing the harmonization of these two layers of machine-readable data and knowledge, we improve the way community wide resources are developed and published. Moreover, we outline best practices for the future, and prepare ourselves for an exciting and unanticipatable variety of real world applications in coming years.
APA, Harvard, Vancouver, ISO, and other styles
14

Zimina, Elizaveta, Kalervo Järvelin, Jaakko Peltonen, Aarne Ranta, Kostas Stefanidis, and Jyrki Nummenmaa. "Linguistic summarisation of multiple entities in RDF graphs." Applied Computing and Intelligence 4, no. 1 (2024): 1–18. http://dx.doi.org/10.3934/aci.2024001.

Full text
Abstract:
Methods for producing summaries from structured data have gained interest due to the huge volume of available data in the Web. Simultaneously, there have been advances in natural language generation from Resource Description Framework (RDF) data. However, no efforts have been made to generate natural language summaries for groups of multiple RDF entities. This paper describes the first algorithm for summarising the information of a set of RDF entities in the form of human-readable text. The paper also proposes an experimental design for the evaluation of the summaries in a human task context. Experiments were carried out comparing machine-made summaries and summaries written by humans, with and without the help of machine-made summaries. We develop criteria for evaluating the content and text quality of summaries of both types, as well as a function measuring the agreement between machine-made and human-written summaries. The experiments indicated that machine-made natural language summaries can substantially help humans in writing their own textual descriptions of entity sets within a limited time.
APA, Harvard, Vancouver, ISO, and other styles
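The summarisation algorithm itself is not reproduced in this listing; as a loose illustration of the underlying idea of describing what a set of RDF entities has in common, the sketch below collects property-value pairs shared by every entity in a set and renders them as one sentence. The data and property names are invented.

```python
# Invented mini-dataset: entity -> {property: value}.
entities = {
    "Alice": {"occupation": "author", "country": "Finland", "genre": "mystery"},
    "Bob":   {"occupation": "author", "country": "Finland", "genre": "sci-fi"},
    "Carol": {"occupation": "author", "country": "Finland"},
}

def summarise(entities):
    """One-sentence summary of the property-value pairs shared by all entities."""
    names = sorted(entities)
    shared = set(entities[names[0]].items())
    for name in names[1:]:
        shared &= set(entities[name].items())
    if not shared:
        return f"{', '.join(names)} have no properties in common."
    facts = " and ".join(f"{p} '{v}'" for p, v in sorted(shared))
    return f"{', '.join(names)} all share {facts}."

print(summarise(entities))
# Alice, Bob, Carol all share country 'Finland' and occupation 'author'.
```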
15

AYDIN, HAKAN, and FURKAN BOSTANCI. "IMPROVEMENT OF WEAR RESISTANCE OF SHREDDER BLADES USED IN A REFUSE-DERIVED FUEL (RDF) FACILITY BY PLASMA NITRIDING." Surface Review and Letters 27, no. 04 (July 29, 2019): 1950131. http://dx.doi.org/10.1142/s0218625x19501312.

Full text
Abstract:
Refuse-derived fuel (RDF) is a kind of renewable energy source used to produce energy as a replacement for fossil fuels. Aggressive working conditions in RDF facilities cause the shredder blades to wear out quickly. So, the purpose of this paper was to study the effect of the plasma-nitriding process on the wear resistance of shredder blades made of AISI D2 tool steel in the service conditions of an RDF facility. Shredder blades were commercially available from two different suppliers (A and B suppliers). These hardened shredder blades were plasma-nitrided in a mixed nitrogen and hydrogen atmosphere at a volume ratio of 3:1 at 450 °C for 12, 18 and 24 h at a total pressure of 250 Pa. Characterisation of the plasma-nitrided layers on the shredder blades was carried out by means of microstructure and microhardness measurements. Wear tests of the plasma-nitrided shredder blades were performed under actual working conditions in the RDF facility. Wear analysis of these shredder blades was conducted using the three-dimensional (3D) optical measuring instrument GOM ATOS II. The compositional difference between the shredder blades provided by the A and B suppliers played an important role in the nitrided layer. The case depth of the A-blades significantly increased with increasing plasma-nitriding time. However, the case depth of the B-blades was somewhat lower at the same nitriding time and only slightly increased with increasing plasma-nitriding time. The plasma-nitriding process significantly improved the surface hardness of the shredder blades. Maximum surface hardness values were achieved at a nitriding time of 18 h for both blades. In this case, the increase in surface hardness values was above 100%. At a nitriding time of 24 h, the maximum surface hardness of the A-blades significantly decreased, whereas the decrease in surface hardness of the B-blades was negligible. The wear test results showed that the plasma-nitriding process significantly decreased the wear of the shredder blades; 18 h of nitriding for the A-blades and 24 h of nitriding for the B-blades had better wear-reducing ability in the service conditions of the RDF facility. In these cases, the decreases in the total volume wear loss for the A- and B-blades were 53% and 60%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
16

Tan, Guoyu, Jiaxin Zheng, and Feng Pan. "Molecular dynamics study on the microstructure of CH3COOLi solutions with different concentrations." Functional Materials Letters 11, no. 04 (August 2018): 1850075. http://dx.doi.org/10.1142/s1793604718500753.

Full text
Abstract:
Due to the toxicity and flammability problems of organic electrolytes, the study of concentrated aqueous systems for lithium-ion batteries (LIBs) has attracted wide attention. In this paper, by molecular dynamics simulations, the CH3COOLi aqueous system is considered as a potential concentrated aqueous system for LIBs, and all the variations of the microstructure of the aqueous system from dilution to concentration are analyzed. The details of the microstructure are discussed, especially the interactions concerning anions. Among them, the first peak of the RDF (radial distribution function) between the Li+ ion and the oxygen atom in CH3COOLi is at 2.9 Å, which does not change from dilution to concentration. This RDF information further indicates that when the concentration increases, the microstructures of small components formed by any two clusters do not change much, but at the same time, the spatial structures constructed by many small components are gradually built up from a broader perspective.
APA, Harvard, Vancouver, ISO, and other styles
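In this and several neighbouring abstracts, RDF refers to the radial distribution function g(r) rather than the Resource Description Framework. Below is a generic, hedged NumPy sketch of how such a g(r) histogram is computed from particle coordinates in a cubic periodic box; it is not the authors' simulation code, and the random coordinates stand in for real MD output.

```python
import numpy as np

def radial_distribution(positions, box, dr=0.1, r_max=None):
    """Histogram-based g(r) for N particles in a cubic periodic box of side `box`."""
    n = len(positions)
    r_max = r_max or box / 2
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=bins)[0]
    shell = 4 / 3 * np.pi * (bins[1:] ** 3 - bins[:-1] ** 3)
    density = n / box ** 3
    # Normalise pair counts by the ideal-gas expectation in each shell.
    return bins[:-1] + dr / 2, 2 * counts / (n * shell * density)

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10.0, size=(200, 3))     # stand-in for real MD coordinates
r, g = radial_distribution(pos, box=10.0)
print(r[:3], g[:3])
```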
17

Sari, Bengisu, Humberto Batiz, Chunsong Zhao, Ali Javey, D. C. Chrzan, and Mary C. Scott. "Structural heterogeneity in non-crystalline Tex Se1−x thin films." Applied Physics Letters 121, no. 1 (July 4, 2022): 012101. http://dx.doi.org/10.1063/5.0094600.

Full text
Abstract:
Rapid crystallization behavior of amorphous TexSe1−x thin films limits the use of these alloys as coatings and in optoelectronic devices. Understanding the short- and medium-range ordering of the amorphous structure and the fundamental physics governing the crystallization of the films is crucial. Although the lack of long-range crystalline order restricts the characterization of the amorphous films, electron microscopy offers a way to extract information about the nanoscale ordering. In this paper, the local ordering of amorphous TexSe1−x thin films with [Formula: see text] grown by thermal evaporation is investigated using radial distribution function (RDF) and fluctuation electron microscopy (FEM) analysis. RDF results show that the nearest-neighbor distances of selenium (Se) and tellurium (Te) in their crystalline structure are preserved, and their bond lengths increase with the addition of Te. Density functional theory (DFT) calculations predict structures with interatomic distances similar to those measured experimentally. Additionally, fluctuations in atomic coordination are analyzed. Medium range order (MRO) analysis obtained from FEM and DFT calculations suggests that there are at least two populations within the chain network structure, which are close to the Se–Se and Te–Te intrachain distances. For the binary alloy with x > 0.61, TexSe1−x, Te–Te-like populations increase and Te fragments might form, suggesting that the glass forming ability decreases rapidly.
APA, Harvard, Vancouver, ISO, and other styles
18

Petersen, Niklas, Alexandra Similea, Christoph Lange, and Steffen Lohmann. "TurtleEditor: A Web-Based RDF Editor to Support Distributed Ontology Development on Repository Hosting Platforms." International Journal of Semantic Computing 11, no. 03 (September 2017): 311–23. http://dx.doi.org/10.1142/s1793351x17400128.

Full text
Abstract:
Ontologies are increasingly being developed on web-based repository hosting platforms such as GitHub. Accordingly, there is a demand for ontology editors which can be easily connected to the hosted repositories. TurtleEditor is a web-based RDF editor that provides this capability and supports the distributed development of ontologies on repository hosting platforms. It offers features such as syntax checking, syntax highlighting, and auto completion, along with a SPARQL endpoint to query the ontology. Furthermore, TurtleEditor integrates a visual editing view that allows for the graphical manipulation of the RDF graph and includes some basic clustering functionality. The text and graph views are constantly synchronized so that all changes to the ontology are immediately propagated and the views are updated accordingly. The results of a user study and performance tests show that TurtleEditor can indeed be effectively used to support the distributed development of ontologies on repository hosting platforms.
APA, Harvard, Vancouver, ISO, and other styles
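TurtleEditor's syntax checking runs in the browser; a minimal server-side analogue can be sketched in Python with rdflib, which reports a parse error when a Turtle document is malformed. The example document below is made up, with its final statement terminator deliberately missing.

```python
from rdflib import Graph

# Deliberately malformed Turtle (missing the final '.') to trigger the checker.
DOC = """
@prefix ex: <http://example.org/> .
ex:TurtleEditor ex:supports "syntax checking"
"""

def check_turtle(text):
    """Parse the document; return (ok, message) with the first syntax error if any."""
    try:
        g = Graph()
        g.parse(data=text, format="turtle")
        return True, f"OK, {len(g)} triples"
    except Exception as err:  # rdflib raises a BadSyntax error for malformed Turtle
        return False, str(err)

print(check_turtle(DOC))
```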
19

Fang, Hong. "pSPARQL: A Querying Language for Probabilistic RDF Data." Complexity 2019 (March 26, 2019): 1–7. http://dx.doi.org/10.1155/2019/8258197.

Full text
Abstract:
More and more linked data (taken as knowledge) can be automatically generated from unstructured data such as text and images via learning, and such data are often uncertain in practice. On the other hand, most of the existing approaches to processing linked data are mainly designed for certain data. It therefore becomes more and more important to process uncertain linked data from a theoretical perspective. In this paper, we present a querying language framework for probabilistic RDF data (an important form of uncertain linked data), where each triple has a probability, called pSPARQL, built on SPARQL, recommended by W3C as a querying language for RDF databases. pSPARQL can support the full SPARQL and satisfies some important properties such as well-definedness, uniqueness, and some equivalences. Finally, we illustrate that pSPARQL is feasible for expressing practical queries in the real world.
APA, Harvard, Vancouver, ISO, and other styles
20

Vainikainen, Sari, Antti Nummiaho, Asta Bäck, and Timo Laakko. "Collecting and Sharing Observations with Semantic Support." Proceedings of the International AAAI Conference on Web and Social Media 3, no. 1 (March 20, 2009): 338–41. http://dx.doi.org/10.1609/icwsm.v3i1.13968.

Full text
Abstract:
We present two applications that can be used to store and share ideas, bookmarks and observations from the web and on the move. These applications utilize semantic web technologies both to support users in tagging and to store and integrate data. The core of the system is a social bookmarking application, Tilkut, complemented with a mobile application TagIt, which can be used to send photo and text entries from a mobile device. Tag suggestions are given from external ontologies, and from earlier tags. TagIt stores its data in an RDF database, which is also used in integrating these applications. The ontology for the RDF database combines existing social media ontologies, and its key structures are presented. The paper shares our experiences from linking and using external ontologies, and its special challenges on mobile applications.
APA, Harvard, Vancouver, ISO, and other styles
21

Houssein, Essam H., Nahed Ibrahem, Alaa M. Zaki, and Awny Sayed. "Semantic Protocol and Resource Description Framework Query Language: A Comprehensive Review." Mathematics 10, no. 17 (September 5, 2022): 3203. http://dx.doi.org/10.3390/math10173203.

Full text
Abstract:
This review presents various perspectives on converting user keywords into a formal query. Without understanding the dataset's underlying structure, how can a user input a text-based query and then convert this text into semantic protocol and resource description framework query language (SPARQL), which deals with the resource description framework (RDF) knowledge base? The user may not know the structure and syntax of SPARQL, a formal query language and a sophisticated tool for the semantic web (SEW) and its vast and growing collection of interconnected open data repositories. As a result, this study examines various strategies for turning natural language into formal queries, their workings, and their results. A single query to an Internet search engine such as Google returns numerous matching documents, several related to the inquiry while others are not. Since a considerable percentage of the information retrieved is likely unrelated, sophisticated information retrieval systems based on SEW technologies, such as RDF and the web ontology language (OWL), can help end users organize vast amounts of data to address this issue. This study reviews this research field and discusses two different approaches to show how users with no knowledge of the syntax of semantic web technologies deal with queries.
APA, Harvard, Vancouver, ISO, and other styles
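As a hedged, toy illustration of the keyword-to-SPARQL idea the review surveys: a user keyword is slotted into a fixed SPARQL template and sent to the public DBpedia endpoint with SPARQLWrapper. Real systems perform far more analysis; the endpoint's availability and the template are assumptions.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # public endpoint, availability not guaranteed

def keyword_to_sparql(keyword):
    """Naive translation: look up resources whose English label matches the keyword."""
    return f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT DISTINCT ?s WHERE {{
        ?s rdfs:label "{keyword}"@en .
    }} LIMIT 5
    """

def run(keyword):
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(keyword_to_sparql(keyword))
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["s"]["value"] for b in results["results"]["bindings"]]

print(run("Semantic Web"))
# e.g. ['http://dbpedia.org/resource/Semantic_Web', ...]
```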
22

SCHLICHT, ANNE, and HEINER STUCKENSCHMIDT. "PEER-TO-PEER REASONING FOR INTERLINKED ONTOLOGIES." International Journal of Semantic Computing 04, no. 01 (March 2010): 27–58. http://dx.doi.org/10.1142/s1793351x10000948.

Full text
Abstract:
The Semantic Web is commonly perceived as a web of partially-interlinked machine readable data. This data is inherently distributed and resembles the structure of the web in terms of resources being provided by different parties at different physical locations. A number of infrastructures for storing and querying distributed semantic web data, primarily encoded in RDF have been developed. While there are first attempts for integrating RDF Schema reasoning into distributed query processing, almost all the work on description logic reasoning as a basis for implementing inference in the Web Ontology Language OWL still assumes a centralized approach where the complete terminology has to be present on a single system and all inference steps are carried out on this system. We have designed and implemented a distributed reasoning method that preserves soundness and completeness of reasoning under the original OWL import semantics and has beneficial properties regarding parallel computation and overhead caused by communication effort and additional derivations. The method is based on sound and complete resolution methods for the description logic [Formula: see text] that we modify to work in a distributed setting.
APA, Harvard, Vancouver, ISO, and other styles
23

Perera, Rivindu, Parma Nand, and Gisela Klette. "RealText-lex: A Lexicalization Framework for RDF Triples." Prague Bulletin of Mathematical Linguistics 106, no. 1 (October 1, 2016): 45–68. http://dx.doi.org/10.1515/pralin-2016-0011.

Full text
Abstract:
The online era has made available almost cosmic amounts of information in the public and semi-restricted domains, prompting development of a corresponding host of technologies to organize and navigate this information. One of these developing technologies deals with encoding information from free-form natural language into a structured form as RDF triples. This representation enables machine processing of the data; however, the processed information cannot be directly converted back to human language. This has created a need to be able to lexicalize machine-processed data existing as triples into a natural language, so that there is a seamless transition between the machine representation of information and information meant for human consumption. This paper presents a framework to lexicalize RDF triples extracted from DBpedia, a central interlinking hub for the emerging Web of Data. The framework comprises four pattern-mining modules which generate lexicalization patterns to transform triples into natural language sentences. Among these modules, three are based on lexicons and the other works by extracting relations from unstructured text to generate lexicalization patterns. A linguistic accuracy evaluation and a human evaluation on a sub-sample showed that the framework can produce patterns which are accurate and exhibit human-generated qualities.
APA, Harvard, Vancouver, ISO, and other styles
24

KUMAR, M. KAMAL, and L. SUDERSHAN REDDY. "INVERSE ROMAN DOMINATION IN GRAPHS." Discrete Mathematics, Algorithms and Applications 05, no. 03 (September 2013): 1350011. http://dx.doi.org/10.1142/s1793830913500110.

Full text
Abstract:
Motivated by the article in Scientific American [7], Michael A. Henning and Stephen T. Hedetniemi explored the strategy of defending the Roman Empire. Cockayne defined a Roman dominating function (RDF) on a graph G = (V, E) to be a function f : V → {0, 1, 2} satisfying the condition that every vertex u for which f(u) = 0 is adjacent to at least one vertex v for which f(v) = 2. For a real-valued function f : V → R the weight of f is w(f) = ∑v∈V f(v). The Roman domination number (RDN), denoted by γR(G), is the minimum weight among all RDFs in G. Let D be the set of vertices v for which f(v) > 0. If V − D contains a Roman dominating function f1 : V → {0, 1, 2}, then f1 is called an inverse Roman dominating function (IRDF) on the graph G with respect to f. The inverse Roman domination number (IRDN), denoted by γ′R(G), is the minimum weight among all IRDFs in G. In this paper we present a few results on the IRDN.
APA, Harvard, Vancouver, ISO, and other styles
25

Matsson, Arild, and Olov Kriström. "Building and Serving the Queerlit Thesaurus as Linked Open Data." Digital Humanities in the Nordic and Baltic Countries Publications 5, no. 1 (October 10, 2023): 29–39. http://dx.doi.org/10.5617/dhnbpub.10648.

Full text
Abstract:
This paper describes the creation of the Queer Literature Indexing Thesaurus (QLIT) as well as the digital infrastructure supporting the workflow for editing and publishing it. The purpose of QLIT is to adequately catalogue Swedish fiction with LGBTQI themes. It is continually edited in plain-text RDF and automatically processed for correctness and storage. Finally, it is published online as Linked Open Data and used with external systems. The technical approach relies on scripts and applications developed ad hoc, rather than existing solutions. Code is available on https://github.com/gu-gridh/queerlit-terms.
APA, Harvard, Vancouver, ISO, and other styles
26

Goncharov, M. V., and K. A. Kolosov. "On interoperability of metadata within RNPLS&T’s Single Open Information Archive." Scientific and Technical Libraries, no. 10 (November 12, 2021): 45–62. http://dx.doi.org/10.33186/1027-3689-2021-10-45-62.

Full text
Abstract:
The Russian National Public Library for Science and Technology has been developing the Single Open Information Archive (SOIA) to merge all digital full-text resources created or acquired by the Library. The authors examine the issues of interoperability that arise when exchanging metadata between the SOIA, built on library automation software, and open archives using OAI-PMH technology for metadata acquisition. Interoperability in information exchange between different ALIS is provided, for example, through applying the SRU/SRW protocol and metadata scheme, while metadata exchange between OA repositories is provided mainly within the Dublin Core (DC) scheme. ALIS-to-OA metadata transmission with transformation into DC results in information loss and prevents unambiguous reverse transformation. For a long time, DSpace has been the most popular software for open digital repositories. This product enables OAI-PMH metadata acquisition in DC and Qualified DC (QDC) formats, and supports the Object Reuse and Exchange (ORE) standard, which makes it possible to describe aggregated resources. ORE in DSpace makes it possible to collect not only metadata but also connected files and to receive other connected data provided by the importing source. DSpace uses a rather simple ORE format based on Atom XML that allows binding several files of different functionality with RDF triples. The OAI-PMH software connector designed for the RNPLS&T SOIA can present metadata in DC, QDC, MARC21, and ORE formats, which supports interoperability in information exchange with OA repositories running DSpace software. Besides metadata, various data types can be transmitted, e.g. document text or license information. Further development will expand the format structure to represent associated data, in particular using RDF.
APA, Harvard, Vancouver, ISO, and other styles
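OAI-PMH itself is a simple HTTP protocol, so a minimal harvesting request can be sketched with requests and ElementTree; the repository base URL below is a placeholder, and production harvesters also handle resumption tokens, which this sketch only hints at.

```python
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://example.org/oai"  # placeholder OAI-PMH base URL, not a real repository
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_records(base_url, metadata_prefix="oai_dc"):
    """Fetch one page of ListRecords and yield (identifier, title) pairs."""
    resp = requests.get(
        base_url,
        params={"verb": "ListRecords", "metadataPrefix": metadata_prefix},
        timeout=30,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for record in root.iter(f"{OAI}record"):
        identifier = record.findtext(f"{OAI}header/{OAI}identifier")
        title = record.findtext(f".//{DC}title")
        yield identifier, title
    # A full harvester would follow root.findtext(f".//{OAI}resumptionToken") here.

for ident, title in list_records(BASE_URL):
    print(ident, title)
```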
27

Lakhal, L., Y. Brunet, and T. Kanit. "Evaluation of second-order correlations adjusted with simulated annealing on physical properties of unidirectional nonoverlapping fiber-reinforced materials (UD Composites)." International Journal of Modern Physics C 30, no. 02n03 (February 2019): 1950017. http://dx.doi.org/10.1142/s0129183119500177.

Full text
Abstract:
The focus of this paper is on aligned fiber-reinforced composites, where fiber centers were randomly distributed in their cross-sections. The volume fractions of fibers were [Formula: see text]% and [Formula: see text]%. Samples were built with the help of the simulated annealing technique according to the chosen Radial Distribution Functions (RDFs). For each sample, the fields of local stresses and heat fluxes were simulated by finite element method. Then, homogenization by volume averaging was performed in order to investigate both the effective mechanical and thermal properties. The effect of RDF shape on elastic and thermal properties was quantified along with the influence of the probability of near neighbors of fibers on the physical properties. The more the fiber distributions deviate from Poisson’s Law, the higher the results compared to the lower bound of Hashin–Shtrikman.
APA, Harvard, Vancouver, ISO, and other styles
28

Penev, Lyubomir, Mariya Dimitrova, Viktor Senderov, Georgi Zhelezov, Teodor Georgiev, Pavel Stoev, and Kiril Simov. "OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science." Publications 7, no. 2 (May 29, 2019): 38. http://dx.doi.org/10.3390/publications7020038.

Full text
Abstract:
Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.
APA, Harvard, Vancouver, ISO, and other styles
29

Rusek, Marian, Waldemar Karwowski, and Jakub Maguza. "INTEGRATION OF QUERIES TO HETEROGENEOUS DATA SOURCES USING LINQ TECHNOLOGY." Information System in Management 7, no. 3 (September 30, 2018): 180–89. http://dx.doi.org/10.22630/isim.2018.7.3.16.

Full text
Abstract:
Nowadays, data are available in a variety of formats such as relational database tables, XML files, RDF files or simply text files. Database systems have their own query languages and tools for data manipulation. On the other hand, most of today's applications are created in languages based on the object-oriented paradigm. From the level of the programming language it is important to use different sources of data in a uniform manner. The paper discusses elements of various query languages such as SQL, XQuery and SPARQL, and then shows the capabilities of LINQ and its role in the creation of an abstract data access layer. The possibilities of extending LINQ are then discussed. As an example, the design and implementation of a LINQ provider for Allegro is presented.
APA, Harvard, Vancouver, ISO, and other styles
30

Tuan, Tran Quoc, and Nguyen Trong Dung. "Effect of heating rate, impurity concentration of Cu, atomic number, temperatures, time annealing temperature on the structure, crystallization temperature and crystallization process of Ni1−xCux bulk; x = 0.1, 0.3, 0.5, 0.7." International Journal of Modern Physics B 32, no. 26 (October 18, 2018): 1830009. http://dx.doi.org/10.1142/s0217979218300098.

Full text
Abstract:
This paper studies the effects of heating rate (4 × 10[Formula: see text] K/s, 4 × 10[Formula: see text] K/s, 4 × 10[Formula: see text] K/s), impurity concentration of Cu in Ni1−xCux bulk (x = 0.1, 0.3, 0.5, 0.7), atom number (N = 4000, 5324, 6912 and 8788 atoms at temperature T = 300 K; N = 6912 atoms at T = 300, 400, 500, 600, 700 and 800 K) and annealing time (t = 500 ps for N = 6912 atoms at T = 600 K) on the structure, crystallization temperature and crystallization process of Ni1−xCux bulk by the molecular dynamics (MD) method with the Sutton–Chen (SC) embedded interaction potential and periodic boundary conditions. The structural characteristics were analyzed through the radial distribution function (RDF), the total energy, the size (l) and the common neighborhood analysis (CNA) method; the crystallization temperature and the crystallization process were analyzed through the relationship between the total energy and T. The results show that the Ni1−xCux bulk and the Ni–Ni, Ni–Cu and Cu–Cu links always exist in three types of structures: FCC, HCP and amorphous. When the annealing time increases, the Ni1−xCux bulk moves from a crystalline state to an amorphous state. When the impurity concentration of Cu in the Ni1−xCux bulk increases, the number of FCC and HCP structural units decreases and then increases, while the number of amorphous structural units increases and then decreases. Increasing the atom number N, decreasing T and increasing the annealing time lead to an increase in the number of FCC and HCP structural units, a decrease in amorphous units, and changes in the structure, crystallization temperature and crystallization process of the Ni1−xCux bulk.
APA, Harvard, Vancouver, ISO, and other styles
31

ZHOU, SUQIN, DENGHAO LI, WEI ZHOU, XUEHAI JU, and DINGYUN JIANG. "DIFFUSION OF NH2NO2 ON Al (111) SURFACE: MOLECULAR DYNAMICS STUDY." Surface Review and Letters 23, no. 06 (November 17, 2016): 1650048. http://dx.doi.org/10.1142/s0218625x16500487.

Full text
Abstract:
The diffusion of NH2NO2 molecules on the Al (111) surface has been investigated by the molecular dynamics (MD) method. The influence of temperature and pressure on their diffusion was studied. The binding energies decrease obviously with the temperature increasing because the non-bonding interaction between Al atoms and NH2NO2 molecules weakens. As the temperature increases, the value of the first peak of the radial distribution function (RDF) of the Al–N and Al–O bonds decreases. Diffusion rates increase with temperature increasing whereas they first decrease, then increase with pressure increasing below 450 K. The diffusion activation energy of NH2NO2 molecules on the Al surface is in the range 13.8–18.1 kJ·mol−1 at pressures from 0 GPa to 10.0 GPa, which indicates that NH2NO2 molecules diffuse easily on the Al surface and the influence of pressure on the diffusion of NH2NO2 molecules on the Al surface is small. At low temperature (i.e. below 300 K), the NH2NO2 molecules are mainly adsorbed on the Al surface by intramolecular and intermolecular interactions, while NH2NO2 molecules diffuse on the Al surface and surface Al atoms deviate a little from their original positions above moderate temperature (i.e. above 350 K). These results indicate that the influence of temperature on the diffusion of NH2NO2 molecules on the Al surface is large whereas the influence of pressure on the diffusion is relatively small.
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Chunlei, Shuhai Zhang, Fude Ren, Ruijun Gou, and Gang Han. "Theoretical insight into the cocrystal explosive of 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20)/1-Methyl-4,5-dinitro-1H-imidazole (MDNI)." Journal of Theoretical and Computational Chemistry 16, no. 07 (November 2017): 1750061. http://dx.doi.org/10.1142/s0219633617500614.

Full text
Abstract:
Cocrystal explosives are getting more and more attention in the high energy density materials field. Different molar ratios of the 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20)/1-Methyl-4,5-dinitro-1H-imidazole (MDNI) cocrystal were studied by molecular dynamics (MD) simulation and quantum-chemical density functional theory (DFT) calculation. The binding energy of the CL-20/MDNI cocrystal and the radial distribution function (RDF) were used to estimate the interaction. Mechanical properties were calculated to predict the elasticity and ductility. The length and bond dissociation energy of the trigger bond and the surface electrostatic potentials (ESP) of the CL-20/MDNI framework were calculated at the B3LYP/6-311[Formula: see text]G(d,p) level. The results indicate that the CL-20/MDNI cocrystal explosive might have better mechanical properties and stability at a molar ratio of 3:2. The N–NO2 bond becomes stronger upon the formation of intermolecular H-bonding interactions. The surface electrostatic potential further confirms that the sensitivity decreases in the cocrystal explosive in comparison with that in isolated CL-20. The oxygen balance (OB), heat of detonation (Q), detonation velocity (D) and detonation pressure (P) of CL-20/MDNI suggest that the CL-20/MDNI cocrystal possesses excellent detonation performance and low sensitivity.
APA, Harvard, Vancouver, ISO, and other styles
33

Park, Eun G., and Matthew Milner. "Building Social Interactions as a Creation of Networks in an RDF Repository." Journal of Arts and Humanities 7, no. 2 (February 28, 2018): 59. http://dx.doi.org/10.18533/journal.v7i2.1342.

Full text
Abstract:
Humanities scholars are not likely to be thinking about their research findings as data, and the predominant models of organizing documents remain generally archival or bibliographic in nature for text-based documents. Although the linked data movement has greatly influenced information organization and search queries on the Web, in comparison to other fields, the adoption of the linked data approach to humanities collections is unequally paced. This study intends to explain how people or actors make social interactions, and how social interactions are formed in a type of network through the example of the Making Publics (MaPs) project. The objective of the MaPs project is to build collaborative common environments for tracing social interactions between people, things, places and times. To build social interactions, the Networked Event Model was designed in a collaborative environment. Events were defined as six types of nodes (e.g., people, organizations, places, things, events, and literals) in the RDF (Resource Description Framework) triple statements. The interaction vocabulary list is made of 173 verbs and predicates, offering 510 traceable events. The RDF repository runs on a Sesame server and MySQL architecture. Users can use digital tools to select and document events and visually present the selected events in interactive social web forms. The MaPs project sought to extract the network extant in the works of prose in large collaborative humanities documents. In this way, the dissemination of and access to humanities data can be made more connectable, available and accessible to both academic and non-academic communities.
APA, Harvard, Vancouver, ISO, and other styles
34

Sateli, Bahar, Felicitas Löffler, Birgitta König-Ries, and René Witte. "ScholarLens: extracting competences from research publications for the automatic generation of semantic user profiles." PeerJ Computer Science 3 (July 3, 2017): e121. http://dx.doi.org/10.7717/peerj-cs.121.

Full text
Abstract:
Motivation: Scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. The relatively young research field of Semantic Publishing has been addressing the question how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. To complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users’ interests, tasks, and competences, by incorporating these user profiles in their design. To make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the Resource Description Framework (RDF) for representing user profiles and Linked Open Data (LOD) sources for representing competence topics. To avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-)authored by users, which we hypothesize reflect their research competences. Results: We developed a novel approach, ScholarLens, which can automatically generate semantic user profiles for authors of scholarly literature. For modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. In accordance with the LOD best practices, we propose an RDF Schema (RDFS) based model for competence records that reuses existing vocabularies where appropriate. To automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (NLP) techniques. In our method, we start by processing a set of research articles for a given user. Competences are derived by text mining the articles, including syntactic, semantic, and LOD entity linking steps. We then populate a knowledge base in RDF format with user profiles containing the extracted competences. We implemented our approach as an open source library and evaluated our system through two user studies, resulting in mean average precision (MAP) of up to 95%. As part of the evaluation, we also analyze the impact of semantic zoning of research articles on the accuracy of the resulting profiles. Finally, we demonstrate how these semantic user profiles can be applied in a number of use cases, including article ranking for personalized search and finding scientists competent in a topic, e.g., to find reviewers for a paper. Availability: All software and datasets presented in this paper are available under open source licenses in the supplements and documented at http://www.semanticsoftware.info/semantic-user-profiling-peerj-2016-supplements. Additionally, development releases of ScholarLens are available on our GitHub page: https://github.com/SemanticSoftwareLab/ScholarLens.
APA, Harvard, Vancouver, ISO, and other styles
35

Matsumura, Y., N. Mihara, Y. Kawakami, K. Sasai, H. Takeda, H. Nakamura, and Y. Hasegawa. "Development of a System that Generates Structured Reports for Chest X-ray Radiography." Methods of Information in Medicine 49, no. 04 (2010): 360–70. http://dx.doi.org/10.3414/me09-01-0014.

Full text
Abstract:
Summary Objectives: Radiology reports are typically made in narrative form; this is a barrier to the implementation of advanced applications for data analysis or a decision support. We developed a system that generates structured reports for chest x-ray radiography. Methods: Based on analyzing existing reports, we determined the fundamental sentence structure of findings as compositions of procedure, region, finding, and diagnosis. We categorized the observation objects into lung, mediastinum, bone, soft tissue, and pleura and chest wall. The terms of region, finding, and diagnosis were associated with each other. We expressed the terms and the relations between the terms using a resource description framework (RDF) and developed a reporting system based on it. The system shows a list of terms in each category, and modifiers can be entered using templates that are linked to each term. This system guides users to select terms by highlighting associated terms. Fifty chest x-rays with abnormal findings were interpreted by five radiologists and reports were made either by the system or by the free-text method. Results: The system decreased the time needed to make a report by 12.5% compared with the free-text method, and the sentences generated by the system were well concordant with those made by free-text method (F-measure = 90%). The results of the questionnaire showed that our system is applicable to radiology reports of chest x-rays in daily clinical practice. Conclusions: The method of generating structured reports for chest x-rays was feasible, because it generated almost concordant reports in shorter time compared with the free-text method.
APA, Harvard, Vancouver, ISO, and other styles
36

Binding, Ceri, Douglas Tudhope, and Andreas Vlachidis. "A study of semantic integration across archaeological data and reports in different languages." Journal of Information Science 45, no. 3 (July 31, 2018): 364–86. http://dx.doi.org/10.1177/0165551518789874.

Full text
Abstract:
This study investigates the semantic integration of data extracted from archaeological datasets with information extracted via natural language processing (NLP) across different languages. The investigation follows a broad theme relating to wooden objects and their dating via dendrochronological techniques, including types of wooden material, samples taken and wooden objects including shipwrecks. The outcomes are an integrated RDF dataset coupled with an associated interactive research demonstrator query builder application. The semantic framework combines the CIDOC Conceptual Reference Model (CRM) with the Getty Art and Architecture Thesaurus (AAT). The NLP, data cleansing and integration methods are described in detail together with illustrative scenarios from the web application Demonstrator. Reflections and recommendations from the study are discussed. The Demonstrator is a novel SPARQL web application, with CRM/AAT-based data integration. Functionality includes the combination of free text and semantic search with browsing on semantic links, hierarchical and associative relationship thesaurus query expansion. Queries concern wooden objects (e.g. samples of beech wood keels), optionally from a given date range, with automatic expansion over AAT hierarchies of wood types and specialised associative relationships. Following a ‘mapping pattern’ approach (via the STELETO tool) ensured validity and consistency of all RDF output. The user is shielded from the complexity of the underlying semantic framework by a query builder user interface. The study demonstrates the feasibility of connecting information extracted from datasets and grey literature reports in different languages and semantic cross-searching of the integrated information. The semantic linking of textual reports and datasets opens new possibilities for integrative research across diverse resources.
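The hierarchical query expansion described here can be illustrated with a SPARQL 1.1 property path. The sketch below uses Python's SPARQLWrapper against a hypothetical endpoint and assumes a simplified SKOS-style hierarchy; the study's actual Demonstrator uses CIDOC CRM and the Getty AAT (with its own ontology), which are not reproduced here.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and namespaces for illustration only.
endpoint = SPARQLWrapper("http://example.org/dendro/sparql")

# Query expansion over a SKOS-style hierarchy: find objects made of beech
# or any narrower wood type, via the property path skos:narrower*.
endpoint.setQuery("""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX ex:   <http://example.org/dendro/>
SELECT ?object ?material WHERE {
  ex:beech_wood skos:narrower* ?material .
  ?object ex:madeOf ?material .
}
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["object"]["value"], binding["material"]["value"])
```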
APA, Harvard, Vancouver, ISO, and other styles
37

Yu, Feiyan, Erik Champion, and David McMeekin. "Exploring Historical Australian Expeditions with Time-Layered Cultural Maps." ISPRS International Journal of Geo-Information 12, no. 3 (March 2, 2023): 104. http://dx.doi.org/10.3390/ijgi12030104.

Full text
Abstract:
The Australian Time Layered Cultural Map platform was created to help digital humanities scholars investigate how online geospatial tools could provide exemplars to their humanities colleagues on how historical collections and cultural data could be extended and re-examined with geospatial tools. The project discussed here investigated how Recogito/TMT could effectively extract spatial and temporal data from pure text-based historical information and generate time-layered interactive maps of that spatio-temporal data using accessible and user-friendly software. The target audience was humanities scholars relatively new to geospatial technologies and relevant programming systems. The interactive maps were created with two free, open-source web applications and one commercial GIS (Geographic Information System) mapping application. The relative pros and cons of each application are discussed. This paper also investigates simple workflows for extracting spatiotemporal data into RDF (Resource Description Framework) format to be used as Linked Open Data.
APA, Harvard, Vancouver, ISO, and other styles
38

Saude, Mohammad J. I. A., and Bashir I. Morshed. "Electrostatics Study of a Single-Stranded DNA: A Prospective for Single Molecule Sequencing." Biophysical Reviews and Letters 09, no. 01 (March 2014): 105–14. http://dx.doi.org/10.1142/s1793048013500100.

Full text
Abstract:
Single molecule DNA sequencing requires new approaches to identify nucleotide bases. Using molecular dynamics simulations, we investigate the intrinsic electrostatics of single-stranded DNA by solving the nonlinear Poisson–Boltzmann equation. The results show variations of the molecular electrostatic potential (MEP) within 3 nm from the center of the sugar backbone, with suitably differentiable variations at 1.4 nm distance. MEP variations among four nucleotide bases are the most significant near ~33.7° and ~326.3° angular orientation, while the influences of the neighboring bases on MEPs become insignificant after the 3rd-nearest neighbors. This analysis shows potential for direct electronic sequencing of individual DNA molecules. [Formula: see text] Special Issue Comment: This paper about DNA sequencing based on molecular electrostatic potential maps of the DNA in the channel is related to the Special Issue articles about: measuring enzymes [32], and solving single molecules' trajectories with the RDF approach [33] and with the QuB software [34].
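For readers unfamiliar with the method, the nonlinear Poisson–Boltzmann equation referred to in this abstract takes, in a common textbook form for an electrolyte solution, roughly the following shape; this is a standard formulation and not necessarily the exact variant the authors solved.

```latex
\nabla \cdot \left[ \varepsilon(\mathbf{r}) \, \nabla \psi(\mathbf{r}) \right]
  = -\rho_{\mathrm{fixed}}(\mathbf{r})
    - \sum_{i} c_i^{\infty} z_i e \,
      \exp\!\left( -\frac{z_i e \, \psi(\mathbf{r})}{k_B T} \right)
```

Here \(\psi\) is the electrostatic potential, \(\varepsilon\) the position-dependent permittivity, \(\rho_{\mathrm{fixed}}\) the fixed charge density of the DNA, and the sum runs over mobile ion species with bulk concentrations \(c_i^{\infty}\) and valences \(z_i\).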
APA, Harvard, Vancouver, ISO, and other styles
39

Gradmann, Stefan. "From containers to content to context." Journal of Documentation 70, no. 2 (March 4, 2014): 241–60. http://dx.doi.org/10.1108/jd-05-2013-0058.

Full text
Abstract:
Purpose – The aim of this paper is to reposition the research library in the context of the changing information and knowledge architecture at the end of the “Gutenberg Parenthesis” and as part of the rapidly emerging “semantic” environment of the Linked Open Data paradigm. Understanding this process requires a good understanding of the evolution of the “document” notion in the passage from print based culture to the distributed hypertextual and RDF based information architecture of the WWW. Design/methodology/approach – These objectives are reached using literature study and a descriptive historical approach as well as text mining techniques using Google nGrams as a data source. Findings – The paper presents a proposal for effectively repositioning research libraries in the context of eScience and eScholarship as well as clear indications of the proposed repositioning already taking place. Furthermore, a new perspective of the “document” notion is provided. Practical implications – The evolution described in the contribution creates opportunities for libraries to reposition themselves as aggregators and selectors of content and as contextualising agents as part of future Linked Data based scholarly research environments provided they are able and ready to operate the related cultural changes. Originality/value – The paper will be useful for practitioners in search of strategic guidance for repositioning their librarian institutions in a context of ever increasing competition for scarce funding resources.
APA, Harvard, Vancouver, ISO, and other styles
40

Vileiniskis, Tomas, and Rita Butkiene. "Applying Semantic Role Labeling and Spreading Activation Techniques for Semantic Information Retrieval." Information Technology And Control 49, no. 2 (June 16, 2020): 275–88. http://dx.doi.org/10.5755/j01.itc.49.2.24985.

Full text
Abstract:
Semantically enhanced information retrieval (IR) is aimed at improving classical IR methods and goes way beyond plain Boolean keyword matching with the main goal of better serving implicit and ambiguous information needs. As a de-facto pre-requisite to semantic IR, different information extraction (IE) techniques are used to mine unstructured text for underlying knowledge. In this paper we present a method that combines both IE and IR to enable semantic search in natural language texts. First, we apply semantic role labeling (SRL) to automatically extract event-oriented information found in natural language texts to an RDF knowledge graph leveraging semantic web technology. Second, we investigate how a custom flavored graph traversal spreading activation algorithm can be employed to interpret user’s information needs on top of the prior-extracted knowledge base. Finally, we present an assessment on the applicability of our method for semantically enhanced IR. An experimental evaluation on partial WikiQA dataset shows the strengths of our approach and also unveils common pitfalls that we use as guidelines to draw further work directions in the open-domain semantic search field.
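The abstract mentions a custom graph-traversal spreading activation algorithm; a generic spreading activation sketch in Python is shown below to illustrate the basic idea. The decay factor, threshold, and toy graph are assumptions, not the authors' custom-flavored variant.

```python
from collections import defaultdict

def spreading_activation(graph, seeds, decay=0.5, threshold=0.05, max_hops=3):
    """Propagate activation from seed nodes over a weighted graph.

    graph: dict mapping node -> list of (neighbour, edge_weight) pairs.
    seeds: dict mapping seed node -> initial activation.
    Returns a dict of accumulated activation per node.
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            for neighbour, weight in graph.get(node, []):
                spread = energy * weight * decay
                if spread >= threshold:
                    activation[neighbour] += spread
                    next_frontier[neighbour] += spread
        if not next_frontier:
            break
        frontier = next_frontier
    return dict(activation)

# Toy event-oriented knowledge graph extracted from text; weights are relation strengths.
kg = {
    "earthquake_2011": [("Japan", 0.9), ("tsunami", 0.8)],
    "tsunami": [("coastal_damage", 0.7)],
}
print(spreading_activation(kg, {"earthquake_2011": 1.0}))
```

Nodes with the highest accumulated activation would then be ranked as the most relevant answers to the interpreted information need.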
APA, Harvard, Vancouver, ISO, and other styles
41

Podobriy, Aleksandr N., Vadim V. Timirzyanov, and Andrey A. Pertsev. "The Architecture of Knowledge Base Construction for a Design Organization." Автоматизация процессов управления 4, no. 66 (2021): 28–38. http://dx.doi.org/10.35752/1991-2927-2021-4-66-28-38.

Full text
Abstract:
The article presents the results of developing a software package that creates a knowledge base for a design organization. A definition of the concept of knowledge is derived from existing definitions in open sources. The architecture of the software package is described, and its main modules are listed, along with the third-party software systems used in its development. The article considers the organization of the data storage subsystem of the software package and describes its data model. An approach to organizing a repository of the package's ontologies on top of an RDF database is described. The software structure for implementing connectors that integrate with the design organization's existing systems is presented. The article also describes a method for expanding a search query using the ontology together with full-text search, so that domain-specific features are taken into account during retrieval. An information and digital model of a user in the design organization is described with reference to the organization's design artifacts.
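The ontology-driven query expansion step described here can be sketched in a few lines of Python. The toy synonym/narrower-term mapping below is an invented placeholder; the actual system stores its ontologies in an RDF repository.

```python
# Minimal sketch of ontology-driven query expansion before full-text search.
ontology = {
    "valve": {"synonyms": ["gate valve", "check valve"], "narrower": ["ball valve"]},
    "pump": {"synonyms": ["centrifugal pump"], "narrower": []},
}

def expand_query(terms, ontology):
    """Return the original terms plus their ontology synonyms and narrower terms."""
    expanded = set(terms)
    for term in terms:
        entry = ontology.get(term, {})
        expanded.update(entry.get("synonyms", []))
        expanded.update(entry.get("narrower", []))
    return expanded

# The expanded term set would then be handed to a full-text engine
# (e.g. as an OR-query) instead of the raw user input.
print(expand_query(["valve"], ontology))
```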
APA, Harvard, Vancouver, ISO, and other styles
42

Oshita, Kazuki, Masaru Tomita, and Kazuharu Arakawa. "G-Links: a gene-centric link acquisition service." F1000Research 3 (November 18, 2015): 285. http://dx.doi.org/10.12688/f1000research.5754.2.

Full text
Abstract:
With the availability of numerous curated databases, researchers are now able to efficiently use the multitude of biological data by integrating these resources via hyperlinks and cross-references. A large proportion of bioinformatics research tasks, however, may include labor-intensive tasks such as fetching, parsing, and merging datasets and functional annotations from distributed multi-domain databases. This data integration issue is one of the key challenges in bioinformatics. We aim to provide an identifier conversion and data aggregation system as part of a solution to this problem with a service named G-Links, 1) by gathering resource URI information from 130 databases and 30 web services in a gene-centric manner so that users can retrieve all available links about a given gene, 2) by providing a RESTful API for easy retrieval of links including facet searching based on keywords and/or predicate types, and 3) by producing a variety of outputs, such as a visual HTML page, tab-delimited text, and Semantic Web formats such as Notation3 and RDF. G-Links as well as other relevant documentation are available at http://link.g-language.org/
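A rough sketch of consuming such a REST service and loading its Notation3 output into a local RDF graph follows. The URL pattern, the format parameter, and the example gene identifier are assumptions made for illustration; they are not taken from the G-Links documentation, which should be consulted for the real API.

```python
import requests
from rdflib import Graph

# Hypothetical request shape; only the base URL comes from the abstract above.
gene_id = "eco:b0002"  # placeholder gene identifier
response = requests.get(f"http://link.g-language.org/{gene_id}",
                        params={"format": "n3"}, timeout=30)
response.raise_for_status()

# If the service returns Notation3, it can be parsed into an RDF graph for local querying.
g = Graph()
g.parse(data=response.text, format="n3")
for subj, pred, obj in g:
    print(subj, pred, obj)
```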
APA, Harvard, Vancouver, ISO, and other styles
43

Oshita, Kazuki, Masaru Tomita, and Kazuharu Arakawa. "G-Links: a gene-centric link acquisition service." F1000Research 3 (November 19, 2014): 285. http://dx.doi.org/10.12688/f1000research.5754.1.

Full text
Abstract:
With the availability of numerous curated databases, researchers are now able to efficiently use the multitude of biological data by integrating these resources via hyperlinks and cross-references. A large proportion of bioinformatics research tasks, however, may include labor-intensive tasks such as fetching, parsing, and merging datasets and functional annotations from distributed multi-domain databases. This data integration issue is one of the key challenges in bioinformatics. We aim to solve this problem with a service named G-Links, 1) by gathering resource URI information from 130 databases and 30 web services in a gene-centric manner so that users can retrieve all available links about a given gene, 2) by providing a RESTful API for easy retrieval of links including facet searching based on keywords and/or predicate types, and 3) by producing a variety of outputs, such as a visual HTML page, tab-delimited text, and Semantic Web formats such as Notation3 and RDF. G-Links as well as other relevant documentation are available at http://link.g-language.org/
APA, Harvard, Vancouver, ISO, and other styles
44

Ghazal, Rubina, Ahmad Malik, Basit Raza, Nauman Qadeer, Nafees Qamar, and Sajal Bhatia. "Agent-Based Semantic Role Mining for Intelligent Access Control in Multi-Domain Collaborative Applications of Smart Cities." Sensors 21, no. 13 (June 22, 2021): 4253. http://dx.doi.org/10.3390/s21134253.

Full text
Abstract:
The significance and popularity of Role-Based Access Control (RBAC) are undeniable; however, its application is highly challenging in multi-domain collaborative smart city environments. The reason is its limited ability to adapt to the dynamically changing information of users, tasks, access policies and resources in such applications. It also does not incorporate semantically meaningful business roles, which could have a diverse impact upon access decisions in such multi-domain collaborative business environments. We propose an Intelligent Role-based Access Control (I-RBAC) model that uses intelligent software agents for achieving intelligent access control in such highly dynamic multi-domain environments. The novelty of this model lies in using a core I-RBAC ontology that is developed using real-world semantic business roles as occupational roles provided by the Standard Occupational Classification (SOC), USA. It contains around 1400 business roles, from nearly all domains, along with their detailed task descriptions as well as hierarchical relationships among them. The semantic role mining process is performed through intelligent agents that use word embedding and a bidirectional LSTM deep neural network for the automated population of an organizational ontology from its unstructured text policies and, subsequently, matching this ontology with the core I-RBAC ontology to extract unified business roles. The experimentation was performed on a large number of collaboration case scenarios of five multi-domain organizations, and promising results were obtained regarding the accuracy of the RDF triples (Subject, Predicate, Object) automatically derived from organizational text policies as well as the accuracy of the extracted semantically meaningful roles.
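The bidirectional LSTM component mentioned in the abstract can be sketched with tf.keras as below. The vocabulary size, sequence length, and number of SOC roles are placeholder values, and the real system additionally combines this classifier with agents and ontology matching that are not shown here.

```python
import tensorflow as tf

# Placeholder dimensions; not the authors' configuration.
VOCAB_SIZE, MAX_LEN, NUM_ROLES = 20000, 100, 1400

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,), dtype="int32"),        # tokenised policy text
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),              # word embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)), # bidirectional LSTM encoder
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_ROLES, activation="softmax"),  # one class per occupational role
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```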
APA, Harvard, Vancouver, ISO, and other styles
45

Seydou, Sangare, Konan Marcellin Brou, Kouame Appoh, and Kouadio Prosper Kimou. "Hybrid Model for the Classification of Questions Expressed in Natural Language." International Journal of Advanced Research 10, no. 09 (September 30, 2022): 202–12. http://dx.doi.org/10.21474/ijar01/15343.

Full text
Abstract:
Question-answering systems rely on unstructured text corpora or a knowledge base to answer user questions. Most of these systems store knowledge in multiple repositories, including RDF. SPARQL is the most convenient formal language for accessing this type of repository, but it is complex, so several approaches have been proposed to transform questions expressed in natural language by users into SPARQL queries. However, identifying the question type remains a serious problem, and question classification plays a key role at this stage. Machine learning algorithms, including neural networks, are used for this classification. As the volume of data increases, neural networks generally perform better than classical machine learning algorithms; nevertheless, classical machine learning algorithms remain good classifiers. For greater efficiency, this paper proposes combining a convolutional neural network with these algorithms. The BICNN-SVM combination obtained good scores not only on a small dataset, with a precision of 96.60%, but also on a large dataset, with 94.05%.
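The general idea behind such a CNN-plus-SVM hybrid is to use the convolutional network as a feature extractor and the SVM as the final classifier. The sketch below illustrates that pattern with tf.keras and scikit-learn; all sizes, the random toy data, and the untrained CNN are placeholders, not the authors' BICNN-SVM configuration (in practice the CNN would be trained or fine-tuned before its features are used).

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

VOCAB_SIZE, MAX_LEN, FEATURE_DIM = 5000, 40, 64  # placeholder dimensions

cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB_SIZE, 100),
    tf.keras.layers.Conv1D(128, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(FEATURE_DIM, activation="relu"),
])

# Toy data: 200 tokenised questions and binary question-type labels.
X = np.random.randint(0, VOCAB_SIZE, size=(200, MAX_LEN))
y = np.random.randint(0, 2, size=200)

features = cnn.predict(X, verbose=0)          # CNN output used as question features
svm = SVC(kernel="rbf").fit(features, y)      # SVM does the final classification
print("training accuracy:", svm.score(features, y))
```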
APA, Harvard, Vancouver, ISO, and other styles
46

Gao, Mingxia, Jianguo Lu, and Furong Chen. "Medical Knowledge Graph Completion Based on Word Embeddings." Information 13, no. 4 (April 18, 2022): 205. http://dx.doi.org/10.3390/info13040205.

Full text
Abstract:
The aim of Medical Knowledge Graph Completion is to automatically predict one of the three parts (head entity, relationship, and tail entity) of RDF triples from medical data, mainly text data. Following their introduction, the use of pretrained language models, such as Word2vec, BERT, and XLNET, to complete Medical Knowledge Graphs has become a popular research topic. Existing work focuses mainly on relationship completion and rarely addresses the completion of entities and whole triples. In this paper, a framework to predict RDF triples for Medical Knowledge Graphs based on word embeddings (named PTMKG-WE) is proposed, specifically for the completion of entities and triples. The framework first formalizes existing samples for a given relationship from the Medical Knowledge Graph as prior knowledge. Second, it trains word embeddings from big medical data according to prior knowledge through Word2vec. Third, it can acquire candidate triples from word embeddings based on analogies from existing samples. In this framework, the paper proposes two strategies to improve the relation features. One is used to refine the relational semantics by clustering existing triple samples. The other is used to embed the expression of the relationship more accurately through the means (averages) of existing samples. These two strategies can be used separately (called PTMKG-WE-C and PTMKG-WE-M, respectively), and can also be superimposed (called PTMKG-WE-C-M) in the framework. Finally, in the current study, PubMed data and the National Drug File-Reference Terminology (NDF-RT) were collected, and a series of experiments was conducted. The experimental results show that the framework proposed in this paper and the two improvement strategies can be used to predict new triples for Medical Knowledge Graphs, when medical data are sufficiently abundant and the Knowledge Graph has appropriate prior knowledge. The two strategies designed to improve the relation features have a significant effect on improving precision, and the effect is even more pronounced when they are superimposed. Another conclusion is that, under the same parameter settings, the semantic precision of word embeddings can be improved by extending the breadth and depth of the data, and the precision of the prediction framework in this paper can be further improved in most cases. Thus, collecting big medical data and training on it is a viable way to learn more useful knowledge.
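The analogy-based candidate acquisition step can be illustrated with gensim's Word2Vec. The toy corpus and triple below are placeholders standing in for large PubMed-derived text and real prior-knowledge samples; the arithmetic, however, is the standard word-vector analogy the framework relies on.

```python
from gensim.models import Word2Vec

# Toy corpus standing in for tokenised medical text.
sentences = [
    ["aspirin", "treats", "headache"],
    ["metformin", "treats", "diabetes"],
    ["ibuprofen", "treats", "inflammation"],
]
model = Word2Vec(sentences, vector_size=50, min_count=1, window=2, epochs=50)

# Analogy-style triple prediction: given a known sample (aspirin, treats, headache),
# predict the tail for a new head entity via vector arithmetic
#   tail_new ≈ tail_known - head_known + head_new.
candidates = model.wv.most_similar(positive=["headache", "metformin"],
                                   negative=["aspirin"], topn=3)
print(candidates)  # candidate tail entities for (metformin, treats, ?)
```

With a corpus this small the ranking is meaningless; the point is only the shape of the computation, which scales to embeddings trained on abundant medical text.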
APA, Harvard, Vancouver, ISO, and other styles
47

Castillo, Luis F., Narmer Galeano, Gustavo A. Isaza, and Alvaro Gaitan. "Construction of coffee transcriptome networks based on gene annotation semantics." Journal of Integrative Bioinformatics 9, no. 3 (December 1, 2012): 80–92. http://dx.doi.org/10.1515/jib-2012-205.

Full text
Abstract:
Summary Gene annotation is a process that encompasses multiple approaches to the analysis of nucleic acid or protein sequences in order to assign structural and functional characteristics to gene models. When thousands of gene models are being described in an organism genome, construction and visualization of gene networks impose novel challenges in the understanding of complex expression patterns and the generation of new knowledge in genomics research. In order to take advantage of accumulated text data after conventional gene sequence analysis, this work applied semantics in combination with visualization tools to build transcriptome networks from a set of coffee gene annotations. A set of selected coffee transcriptome sequences, chosen by the quality of the sequence comparison reported by the Basic Local Alignment Search Tool (BLAST) and Interproscan, were filtered by coverage, identity, query length, and e-values. Meanwhile, term descriptors for molecular biology and biochemistry were obtained from the WordNet dictionary in order to construct a Resource Description Framework (RDF) graph, using Ruby scripts and Methontology, to find associations between concepts. Relationships between sequence annotations and semantic concepts were graphically represented through a total of 6845 oriented vectors, which were reduced to 745 non-redundant associations. A large gene network connecting transcripts by way of relational concepts was created, where detailed connections remain to be validated for biological significance based on current biochemical and genetics frameworks. Besides reusing text information in the generation of gene connections and for data mining purposes, this tool development opens the possibility to visualize complex and abundant transcriptome data, and triggers the formulation of new hypotheses in metabolic pathway analysis.
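A small sketch of linking annotation terms to dictionary concepts, in the spirit of the WordNet-based concept association described above; it uses Python's NLTK rather than the authors' Ruby/Methontology pipeline, and the annotation terms are placeholders.

```python
from nltk.corpus import wordnet as wn  # requires a one-time nltk.download("wordnet")

# Placeholder annotation terms extracted from BLAST/InterProScan descriptions.
annotation_terms = ["kinase", "membrane", "transport"]

for term in annotation_terms:
    for synset in wn.synsets(term)[:2]:
        # Hypernyms act as the broader concepts through which two annotations
        # can be connected in a concept-mediated gene network.
        hypernyms = [h.name() for h in synset.hypernyms()]
        print(term, "->", synset.name(), "| broader concepts:", hypernyms)
```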
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Gang, Man Yuan, Chun-Sheng Li, and Yong-he Wei. "Personalized Knowledge Recommendation Based on Knowledge Graph in Petroleum Exploration and Development." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 10 (January 28, 2020): 2059033. http://dx.doi.org/10.1142/s0218001420590338.

Full text
Abstract:
First, this paper designs the process of a personalized recommendation method based on a knowledge graph and constructs a user interest model. Second, traditional personalized recommendation algorithms are studied, and their advantages and disadvantages are analyzed. Finally, this paper focuses on combining the knowledge graph with a collaborative filtering recommendation algorithm. By exploiting the rich semantic relations in the knowledge graph, this combination effectively addresses the difficulty of determining the [Formula: see text] value in the clustering step of traditional collaborative filtering, as well as data sparsity and cold start. When RDF data distributed by the petroleum E and P (Exploration and Development) database is used to verify the validity of the algorithm, the results show that the knowledge-graph-based collaborative filtering algorithm can model users' potential intentions through the knowledge graph. This is informative for querying user information and broadens users' scope of interest, accomplishing the goal of recommendation. In summary, this paper proposes a collaborative filtering algorithm based on a domain knowledge graph. By using the knowledge graph to effectively classify and describe domain knowledge, the clustering and cold start problems of traditional collaborative filtering recommendation algorithms are solved, and a better recommendation effect is achieved.
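The way a knowledge graph can back up collaborative filtering where rating overlap is sparse can be sketched as follows: item-item similarity is derived from shared graph attributes and used to weight known ratings. The data, attribute sets, and similarity measure below are toy placeholders, not the paper's E and P database or algorithm.

```python
import numpy as np

ratings = np.array([          # users x items (0 = unrated)
    [5, 0, 3],
    [4, 2, 0],
])
item_attrs = [                # toy knowledge-graph attributes per item
    {"reservoir:A", "well_type:horizontal"},
    {"reservoir:A", "well_type:vertical"},
    {"reservoir:B", "well_type:horizontal"},
]

def kg_similarity(i, j):
    """Jaccard similarity over knowledge-graph attributes of two items."""
    a, b = item_attrs[i], item_attrs[j]
    return len(a & b) / len(a | b)

def predict(user, item):
    """Rating prediction weighted by KG-based item similarity."""
    num = den = 0.0
    for other, rating in enumerate(ratings[user]):
        if rating > 0 and other != item:
            sim = kg_similarity(item, other)
            num += sim * rating
            den += sim
    return num / den if den else 0.0

print(predict(user=0, item=1))   # predict user 0's rating for item 1
```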
APA, Harvard, Vancouver, ISO, and other styles
49

Ngootip, Treepidok, Paiboon Manorom, Wirapong Chansanam, and Marut Buranarach. "Developing a Linked Open Data Platform for Folktales in the Greater Mekong Subregion." Emerging Science Journal 7, no. 6 (December 1, 2023): 1937–53. http://dx.doi.org/10.28991/esj-2023-07-06-06.

Full text
Abstract:
This research paper presents the development of a linked open data (LOD) platform that aims to organize and facilitate access to valuable knowledge about folktales and ethnic groups in the Greater Mekong Subregion countries. The study’s methodology involved the creation of a linked open data platform, structuring folktales’ knowledge, and evaluating its performance through expert assessment. The LOD platform was constructed with Google OpenRefine to establish connections with external data sources, and the RDF files (N-Triples) were deployed on Fuseki Server (Apache Jena) to serve as the SPARQL endpoint for querying the linked open data. The Pubby web app was chosen for further development to provide a user-friendly interface, which was customized with the Bootstrap framework and features an intuitive homepage and a search box for simplified data retrieval. In the expert evaluation, the study confirmed that the platform shows high suitability in terms of congruence, reliability, integrity, understandability, collaboration, accessibility, and connectedness. The developed LOD platform exhibits significant potential for expanding its application to various content domains, offering a valuable resource for accessing and exploring the rich cultural heritage of folktales in the Greater Mekong Subregion countries.
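Once N-Triples are loaded into a Fuseki server, the resulting SPARQL endpoint can be queried programmatically, as in the sketch below. The localhost URL, dataset name, and property used are assumptions for illustration; the platform's actual endpoint and vocabulary are not reproduced here.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical local Fuseki dataset named "folktales".
sparql = SPARQLWrapper("http://localhost:3030/folktales/sparql")
sparql.setQuery("""
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?tale ?title WHERE {
  ?tale dcterms:title ?title .
} LIMIT 10
""")
sparql.setReturnFormat(JSON)
for b in sparql.query().convert()["results"]["bindings"]:
    print(b["tale"]["value"], "-", b["title"]["value"])
```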
APA, Harvard, Vancouver, ISO, and other styles
50

Hachey, B., C. Grover, and R. Tobin. "Datasets for generic relation extraction." Natural Language Engineering 18, no. 1 (March 9, 2011): 21–59. http://dx.doi.org/10.1017/s1351324911000106.

Full text
Abstract:
A vast amount of usable electronic data is in the form of unstructured text. The relation extraction task aims to identify useful information in text (e.g. PersonW works for OrganisationX, GeneY encodes ProteinZ) and recode it in a format such as a relational database or RDF triplestore that can be more effectively used for querying and automated reasoning. A number of resources have been developed for training and evaluating automatic systems for relation extraction in different domains. However, comparative evaluation is impeded by the fact that these corpora use different markup formats and notions of what constitutes a relation. We describe the preparation of corpora for comparative evaluation of relation extraction across domains based on the publicly available ACE 2004, ACE 2005 and BioInfer data sets. We present a common document type using token standoff and including detailed linguistic markup, while maintaining all information in the original annotation. The subsequent reannotation process normalises the two data sets so that they comply with a notion of relation that is intuitive, simple and informed by the semantic web. For the ACE data, we describe an automatic process that converts many relations involving nested, nominal entity mentions to relations involving non-nested, named or pronominal entity mentions. For example, the first entity is mapped from ‘one’ to ‘Amidu Berry’ in the membership relation described in ‘Amidu Berry, one half of PBS’. Moreover, we describe a comparably reannotated version of the BioInfer corpus that flattens nested relations, maps part-whole to part-part relations and maps n-ary to binary relations. Finally, we summarise experiments that compare approaches to generic relation extraction, a knowledge discovery task that uses minimally supervised techniques to achieve maximally portable extractors. These experiments illustrate the utility of the corpora.
APA, Harvard, Vancouver, ISO, and other styles