Journal articles on the topic 'Knowledge Graph Evaluation'


Consult the top 50 journal articles for your research on the topic 'Knowledge Graph Evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gao, Junyang, Xian Li, Yifan Ethan Xu, Bunyamin Sisman, Xin Luna Dong, and Jun Yang. "Efficient knowledge graph accuracy evaluation." Proceedings of the VLDB Endowment 12, no. 11 (July 2019): 1679–91. http://dx.doi.org/10.14778/3342263.3342642.

2

Wang, Wenguang, Yonglin Xu, Chunhui Du, Yunwen Chen, Yijie Wang, and Hui Wen. "Data Set and Evaluation of Automated Construction of Financial Knowledge Graph." Data Intelligence 3, no. 3 (2021): 418–43. http://dx.doi.org/10.1162/dint_a_00108.

Abstract:
With the technological development of entity extraction, relationship extraction, knowledge reasoning, and entity linking, research on knowledge graphs has advanced rapidly in recent years. To better promote the development of knowledge graphs, especially in the Chinese language and in the financial industry, we built a high-quality data set named the financial research report knowledge graph (FR2KG) and organized an evaluation task on the automated construction of financial knowledge graphs at the 2020 China Knowledge Graph and Semantic Computing Conference (CCKS2020). FR2KG consists of 17,799 entities, 26,798 relationship triples, and 1,328 attribute triples covering 10 entity types, 19 relationship types, and 6 attributes. Participants were required to develop a constructor that automatically builds a financial knowledge graph based on FR2KG. In addition, we summarize the technologies for automatically constructing knowledge graphs and introduce the methods used by the winners and the results of this evaluation.
3

Alshahrani, Mona, Maha A. Thafar, and Magbubah Essack. "Application and evaluation of knowledge graph embeddings in biomedical data." PeerJ Computer Science 7 (February 18, 2021): e341. http://dx.doi.org/10.7717/peerj-cs.341.

Abstract:
Linked data and bio-ontologies enabling knowledge representation, standardization, and dissemination are an integral part of developing biological and biomedical databases. That is, linked data and bio-ontologies are employed in databases to maintain data integrity, data organization, and to empower search capabilities. However, linked data and bio-ontologies are more recently being used to represent information as multi-relational heterogeneous graphs, “knowledge graphs”. The reason being, entities and relations in the knowledge graph can be represented as embedding vectors in semantic space, and these embedding vectors have been used to predict relationships between entities. Such knowledge graph embedding methods provide a practical approach to data analytics and increase chances of building machine learning models with high prediction accuracy that can enhance decision support systems. Here, we present a comparative assessment and a standard benchmark for knowledge graph-based representation learning methods focused on the link prediction task for biological relations. We systematically investigated and compared state-of-the-art embedding methods based on the design settings used for training and evaluation. We further tested various strategies aimed at controlling the amount of information related to each relation in the knowledge graph and its effects on the final performance. We also assessed the quality of the knowledge graph features through clustering and visualization and employed several evaluation metrics to examine their uses and differences. Based on this systematic comparison and assessments, we identify and discuss the limitations of knowledge graph-based representation learning methods and suggest some guidelines for the development of more improved methods.
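Benchmarks of this kind typically score link prediction with rank-based metrics. As a quick illustration (a sketch for orientation, not code from the paper), the snippet below computes MRR and Hits@k from the ranks an embedding model assigns to the true entities of test triples:

```python
# Illustrative only: standard rank-based metrics for knowledge graph link prediction.

def rank_metrics(ranks, ks=(1, 3, 10)):
    """Compute MRR and Hits@k from the 1-based ranks of the true entities."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mrr, hits

# Example: ranks assigned to the correct tail entity for five test triples.
mrr, hits = rank_metrics([1, 4, 2, 15, 7])
print(f"MRR={mrr:.3f}, Hits@10={hits[10]:.2f}")
```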
4

Mao, Yanmei. "Summary and Evaluation of the Application of Knowledge Graphs in Education 2007–2020." Discrete Dynamics in Nature and Society 2021 (September 28, 2021): 1–10. http://dx.doi.org/10.1155/2021/6304109.

Abstract:
Since 2007, knowledge graphs, an important research tool, have been applied to education and many other disciplines. This paper firstly overviews the application of knowledge graphs in education and then samples the knowledge graph applications in CSSCI- (Chinese Social Sciences Citation Index-) indexed journals in the past two years. These samples were classified and analyzed in terms of research institute, data source, visualization software, and analysis perspective. Next, the situation of knowledge graph applications in education was summarized and evaluated in detail. Furthermore, the authors discussed and assessed the normalization of knowledge graph applications in education. The results show that in the past 15 years, knowledge graphs have been widely used in education. The academia has reached a consensus on the paradigm of the research tool: examining the hotspots, topics, and trends in the related fields from the angles of keyword cooccurrence network (KCN), time zone map, clustering network, and literature/author cocitation, with the aid of CiteSpace and other visualization software and text analysis. However, there is not yet a thorough understanding of the limitations of the visualization software. The relevant research should be improved in terms of scientific level, normalization level, and quality.
5

Malaviya, Chaitanya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. "Commonsense Knowledge Base Completion with Structural and Semantic Context." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2925–33. http://dx.doi.org/10.1609/aaai.v34i03.5684.

Abstract:
Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (∼18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures — a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. In this paper, we present novel KB completion models that can address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method to incorporate information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis on model predictions shines light on the types of commonsense knowledge that language models capture well.
6

Sekkal, Houda, Naïla Amrous, and Samir Bennani. "Knowledge graph-based method for solutions detection and evaluation in an online problem-solving community." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6350. http://dx.doi.org/10.11591/ijece.v12i6.pp6350-6362.

Abstract:
<span lang="EN-US">Online communities are a real medium for human experiences sharing. They contain rich knowledge of lived situations and experiences that can be used to support decision-making process and problem-solving. This work presents an approach for extracting, representing, and evaluating components of problem-solving knowledge shared in online communities. Few studies have tackled the issue of knowledge extraction and its usefulness evaluation in online communities. In this study, we propose a new approach to detect and evaluate best solutions to problems discussed by members of online communities. Our approach is based on knowledge graph technology and graphs theory enabling the representation of knowledge shared by the community and facilitating its reuse. Our process of problem-solving knowledge extraction in online communities (PSKEOC) consists of three phases: problems and solutions detection and classification, knowledge graph constitution and finally best solutions evaluation. The experimental results are compared to the World Health Organization (WHO) model chapter about Infant and young child feeding and show that our approach succeed to extract and reveal important problem-solving knowledge contained in online community’s conversations. Our proposed approach leads to the construction of an experiential knowledge graph as a representation of the constructed knowledge base in the community studied in this paper.</span>
7

Monka, Sebastian, Lavdim Halilaj, and Achim Rettinger. "A survey on visual transfer learning using knowledge graphs." Semantic Web 13, no. 3 (April 6, 2022): 477–510. http://dx.doi.org/10.3233/sw-212959.

Abstract:
The information perceived via visual observations of real-world phenomena is unstructured and complex. Computer vision (CV) is the field of research that attempts to make use of that information. Recent approaches of CV utilize deep learning (DL) methods as they perform quite well if training and testing domains follow the same underlying data distribution. However, it has been shown that minor variations in the images that occur when these methods are used in the real world can lead to unpredictable and catastrophic errors. Transfer learning is the area of machine learning that tries to prevent these errors. Especially, approaches that augment image data using auxiliary knowledge encoded in language embeddings or knowledge graphs (KGs) have achieved promising results in recent years. This survey focuses on visual transfer learning approaches using KGs, as we believe that KGs are well suited to store and represent any kind of auxiliary knowledge. KGs can represent auxiliary knowledge either in an underlying graph-structured schema or in a vector-based knowledge graph embedding. Intending to enable the reader to solve visual transfer learning problems with the help of specific KG-DL configurations we start with a description of relevant modeling structures of a KG of various expressions, such as directed labeled graphs, hypergraphs, and hyper-relational graphs. We explain the notion of feature extractor, while specifically referring to visual and semantic features. We provide a broad overview of knowledge graph embedding methods and describe several joint training objectives suitable to combine them with high dimensional visual embeddings. The main section introduces four different categories on how a KG can be combined with a DL pipeline: 1) Knowledge Graph as a Reviewer; 2) Knowledge Graph as a Trainee; 3) Knowledge Graph as a Trainer; and 4) Knowledge Graph as a Peer. To help researchers find meaningful evaluation benchmarks, we provide an overview of generic KGs and a set of image processing datasets and benchmarks that include various types of auxiliary knowledge. Last, we summarize related surveys and give an outlook about challenges and open issues for future research.
8

Li, Pu, Tianci Li, Xin Wang, Suzhi Zhang, Yuncheng Jiang, and Yong Tang. "Scholar Recommendation Based on High-Order Propagation of Knowledge Graphs." International Journal on Semantic Web and Information Systems 18, no. 1 (January 2022): 1–19. http://dx.doi.org/10.4018/ijswis.297146.

Abstract:
In a big data environment, traditional recommendation methods have limitations such as data sparseness and cold start, etc. In view of the rich semantics, excellent quality, and good structure of knowledge graphs, many researchers have introduced knowledge graphs into the research about recommendation systems, and studied interpretable recommendations based on knowledge graphs. Along this line, this paper proposes a scholar recommendation method based on the high-order propagation of knowledge graph (HoPKG), which analyzes the high-order semantic information in the knowledge graph, and generates richer entity representations to obtain users’ potential interest by distinguishing the importance of different entities. On this basis, a dual aggregation method of high-order propagation is proposed to enable entity information to be propagated more effectively. Through experimental analysis, compared with some baselines, such as Ripplenet, RKGE and CKE, our method has certain advantages in the evaluation indicators AUC and F1.
9

Grundspenkis, Janis, and Maija Strautmane. "Usage of Graph Patterns for Knowledge Assessment Based on Concept Maps." Scientific Journal of Riga Technical University. Computer Sciences 38, no. 38 (January 1, 2009): 60–71. http://dx.doi.org/10.2478/v10143-009-0005-y.

Abstract:
Usage of Graph Patterns for Knowledge Assessment Based on Concept MapsThe paper discusses application of concepts maps (CMs) for knowledge assessment. CMs are graphs which nodes represent concepts and arcs represent relationships between them. CMs reveal learners' knowledge structure and allow assessing their knowledge level. Step-by-step construction and use of CMs is easy. However, mere comparison of expert constructed and learners' completed CMs forces students to construct their knowledge exactly in the same way as experts. At the same time it is known that individuals construct their knowledge structures in different ways. The developed adaptive knowledge assessment system which is implemented as multiagent system includes the knowledge evaluation agent which carries out the comparison of CMs. The paper presents a novel approach to comparison of CMs using graph patterns. Graph patterns are subgraphs, i.e., paths with limited length. Graph patterns are given for both fill-in-the-map tasks where CM structure is predefined and construct-the-map tasks. The corresponding production rules of graph patterns allow to expand the expert's constructed CM and in this way to promote more flexible and adaptive knowledge assessment.
10

Zhang, Yixiao, Xiaosong Wang, Ziyue Xu, Qihang Yu, Alan Yuille, and Daguang Xu. "When Radiology Report Generation Meets Knowledge Graph." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12910–17. http://dx.doi.org/10.1609/aaai.v34i07.6989.

Abstract:
Automatic radiology report generation has attracted increasing research attention in recent years as a path toward computer-aided diagnosis that alleviates the workload of doctors. Deep learning techniques for natural image captioning have been successfully adapted to generating radiology reports. However, radiology image reporting differs from the natural image captioning task in two aspects: 1) the accuracy of positive disease keyword mentions is critical in radiology image reporting, in comparison to the equivalent importance of every single word in a natural image caption; 2) the evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes than on counting the occurrence of N-grams. Based on these concerns, we propose to utilize a pre-constructed graph embedding module (modeled with a graph convolutional neural network) on multiple disease findings to assist the generation of reports in this work. The incorporation of the knowledge graph allows for dedicated feature learning for each disease finding and the modeling of relationships between them. In addition, we propose a new evaluation metric for radiology image reporting with the assistance of the same composed graph. Experimental results demonstrate the superior performance of the methods integrated with the proposed graph embedding module on a publicly accessible dataset (IU-RR) of chest radiographs compared with previous approaches, using both the conventional evaluation metrics commonly adopted for image captioning and our proposed ones.
11

Zhang, Peng, Yi Bu, Peng Jiang, Xiaowen Shi, Bing Lun, Chongyan Chen, Arida Ferti Syafiandini, Ying Ding, and Min Song. "Toward a Coronavirus Knowledge Graph." Genes 12, no. 7 (June 29, 2021): 998. http://dx.doi.org/10.3390/genes12070998.

Abstract:
This study builds a coronavirus knowledge graph (KG) by merging two information sources. The first source is Analytical Graph (AG), which integrates more than 20 different public datasets related to drug discovery. The second source is CORD-19, a collection of published scientific articles related to COVID-19. We combined both chemogenomic entities in AG with entities extracted from CORD-19 to expand knowledge in the COVID-19 domain. Before populating KG with those entities, we perform entity disambiguation on CORD-19 collections using Wikidata. Our newly built KG contains at least 21,700 genes, 2500 diseases, 94,000 phenotypes, and other biological entities (e.g., compound, species, and cell lines). We define 27 relationship types and use them to label each edge in our KG. This research presents two cases to evaluate the KG’s usability: analyzing a subgraph (ego-centered network) from the angiotensin-converting enzyme (ACE) and revealing paths between biological entities (hydroxychloroquine and IL-6 receptor; chloroquine and STAT1). The ego-centered network captured information related to COVID-19. We also found significant COVID-19-related information in top-ranked paths with a depth of three based on our path evaluation.
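The path analysis described above (paths of depth three between entity pairs) amounts to bounded path enumeration over the graph. The sketch below is illustrative only: the edges are invented stand-ins rather than triples from the actual coronavirus KG, and the ranking step is omitted:

```python
# Illustrative sketch: enumerate paths of at most three edges between two entities.
import networkx as nx

kg = nx.Graph()  # toy stand-in for the coronavirus knowledge graph
kg.add_edges_from([
    ("hydroxychloroquine", "TNF"),        # hypothetical edges for illustration
    ("TNF", "IL-6 receptor"),
    ("hydroxychloroquine", "autophagy"),
    ("autophagy", "IL-6 receptor"),
])

paths = nx.all_simple_paths(kg, "hydroxychloroquine", "IL-6 receptor", cutoff=3)
for p in paths:
    print(" -> ".join(p))
```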
12

Kurniawan, Kabul, Andreas Ekelhart, Elmar Kiesling, Dietmar Winkler, Gerald Quirchmayr, and A. Min Tjoa. "VloGraph: A Virtual Knowledge Graph Framework for Distributed Security Log Analysis." Machine Learning and Knowledge Extraction 4, no. 2 (April 11, 2022): 371–96. http://dx.doi.org/10.3390/make4020016.

Abstract:
The integration of heterogeneous and weakly linked log data poses a major challenge in many log-analytic applications. Knowledge graphs (KGs) can facilitate such integration by providing a versatile representation that can interlink objects of interest and enrich log events with background knowledge. Furthermore, graph-pattern based query languages, such as SPARQL, can support rich log analyses by leveraging semantic relationships between objects in heterogeneous log streams. Constructing, materializing, and maintaining centralized log knowledge graphs, however, poses significant challenges. To tackle this issue, we propose VloGraph—a distributed and virtualized alternative to centralized log knowledge graph construction. The proposed approach does not involve any a priori parsing, aggregation, and processing of log data, but dynamically constructs a virtual log KG from heterogeneous raw log sources across multiple hosts. To explore the feasibility of this approach, we developed a prototype and demonstrate its applicability to three scenarios. Furthermore, we evaluate the approach in various experimental settings with multiple heterogeneous log sources and machines; the encouraging results from this evaluation suggest that the approach can enable efficient graph-based ad-hoc log analyses in federated settings.
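To make the graph-pattern idea concrete, here is a minimal Python/rdflib sketch of a SPARQL query over log events. The vocabulary (ex:LogEntry, ex:host, ex:message) is invented for illustration and is not VloGraph's actual schema:

```python
# Hypothetical log vocabulary; only the general SPARQL graph-pattern mechanism is shown.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/log#")
g = Graph()
g.add((EX.e1, RDF.type, EX.LogEntry))
g.add((EX.e1, EX.host, Literal("web01")))
g.add((EX.e1, EX.message, Literal("Failed password for root")))

query = """
PREFIX ex: <http://example.org/log#>
SELECT ?host ?msg WHERE {
    ?e a ex:LogEntry ;
       ex:host ?host ;
       ex:message ?msg .
    FILTER CONTAINS(?msg, "Failed password")
}
"""
for host, msg in g.query(query):
    print(host, msg)
```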
13

Zhang, Yongmei, Zhirong Du, and Lei Hu. "A construction method of urban road risky vehicles based on dynamic knowledge graph." Electronic Research Archive 31, no. 7 (2023): 3776–90. http://dx.doi.org/10.3934/era.2023192.

Abstract:
The growth of the Internet of Things makes it possible to share information on risky vehicles openly and freely. How to create dynamic knowledge graphs of continually changing risky vehicles has emerged as a crucial technology for identifying risky vehicles, as well as a research hotspot in both artificial intelligence and domain knowledge graphs. The node information of the risky vehicle knowledge graph is not rich, and the graph structure plays a major role in its dynamic changes. The paper presents a fusion algorithm based on a relational graph convolutional network (R-GCN) and Long Short-Term Memory (LSTM) to build the dynamic knowledge graph of risky vehicles and conducts a comparative experiment on the link prediction task. The results showed that the fusion algorithm based on R-GCN and LSTM performed better than other methods such as GCN, DynGEM, ROLAND, and RE-GCN, with a MAP value of 0.2746 and an MRR value of 0.1075. To further verify the proposed algorithm, classification experiments were carried out on the risky vehicle dataset. Accuracy, precision, recall, and F-values were used as evaluation indexes in the classification experiments; the values were 0.667, 0.034, 0.422, and 0.52, respectively.
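A rough sketch of the fusion idea (an illustration under assumed shapes and libraries, not the authors' implementation) is to encode each graph snapshot with an R-GCN and smooth the node representations over time with an LSTM:

```python
# Sketch only: R-GCN per snapshot, LSTM across snapshots (assumes torch_geometric).
import torch
import torch.nn as nn
from torch_geometric.nn import RGCNConv

class RGCNLSTMFusion(nn.Module):
    def __init__(self, num_nodes, num_relations, dim=32):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, dim)      # initial node features
        self.rgcn = RGCNConv(dim, dim, num_relations)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, snapshots):
        # snapshots: list of (edge_index, edge_type) tensors, one per time step.
        states = []
        for edge_index, edge_type in snapshots:
            states.append(self.rgcn(self.emb.weight, edge_index, edge_type))
        seq = torch.stack(states, dim=1)   # [num_nodes, T, dim]
        out, _ = self.lstm(seq)            # temporal smoothing per node
        return out[:, -1]                  # node embeddings at the latest step
```

Link scores for candidate edges could then be computed from the returned node embeddings, for example with a dot product.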
14

Li, Pu, Guohao Zhou, Zhilei Yin, Rui Chen, and Suzhi Zhang. "A Semantically Enhanced Knowledge Discovery Method for Knowledge Graph Based on Adjacency Fuzzy Predicates Reasoning." International Journal on Semantic Web and Information Systems 19, no. 1 (June 1, 2023): 1–24. http://dx.doi.org/10.4018/ijswis.323921.

Abstract:
Discovering deep semantics from the massive structured data in knowledge graphs and providing reasonable explanations are important foundational research issues in artificial intelligence. However, the deep semantics hidden between entities in a knowledge graph cannot be well expressed. Moreover, considering that many predicates express fuzzy relationships, the existing reasoning methods cannot effectively deal with these fuzzy semantics or interpret the corresponding reasoning process. To counter the above problems, this article proposes a new interpretable reasoning schema by introducing fuzzy theory. The presented method focuses on analyzing the fuzzy semantics between related entities in a knowledge graph. By annotating the fuzzy semantic features of adjacency predicates, a novel semantic reasoning model is designed to realize fuzzy semantic extension over the knowledge graph. The evaluation, based on both visualization and query experiments, shows that this proposal has advantages over the initial knowledge graph and can discover more valid semantic information.
15

Wang, Shu, Xueying Zhang, Peng Ye, Mi Du, Yanxu Lu, and Haonan Xue. "Geographic Knowledge Graph (GeoKG): A Formalized Geographic Knowledge Representation." ISPRS International Journal of Geo-Information 8, no. 4 (April 8, 2019): 184. http://dx.doi.org/10.3390/ijgi8040184.

Abstract:
Formalized knowledge representation is the foundation of Big Data computing, mining and visualization. Current knowledge representations regard information as items linked to relevant objects or concepts by tree or graph structures. However, geographic knowledge differs from general knowledge in that it is more focused on temporal, spatial, and changing knowledge. Thus, it is difficult for discrete knowledge items to represent geographic states, evolutions, and mechanisms, e.g., the processes of a storm “{9:30-60 mm-precipitation}-{12:00-80 mm-precipitation}-…”. The underlying problem lies in the constructors of the logical foundation (the ALC description language) of current geographic knowledge representations, which cannot provide such descriptions. To address this issue, this study designed a formalized geographic knowledge representation called GeoKG and supplemented the constructors of the ALC description language. Then, an evolution case of the administrative divisions of Nanjing was represented with GeoKG. In order to evaluate the capabilities of our formalized model, two knowledge graphs were constructed from the administrative division case, one using GeoKG and one using YAGO. Then, a set of geographic questions was defined and translated into queries. The query results show that GeoKG's results are more accurate and complete than YAGO's, owing to the enhanced state information. Additionally, the user evaluation verified these improvements, which indicates that GeoKG is a promising and powerful model for geographic knowledge representation.
16

Li, Christy Y., Xiaodan Liang, Zhiting Hu, and Eric P. Xing. "Knowledge-Driven Encode, Retrieve, Paraphrase for Medical Image Report Generation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6666–73. http://dx.doi.org/10.1609/aaai.v33i01.33016666.

Abstract:
Generating long and semantic-coherent reports to describe medical images poses great challenges towards bridging visual and linguistic modalities, incorporating medical domain knowledge, and generating realistic and accurate descriptions. We propose a novel Knowledge-driven Encode, Retrieve, Paraphrase (KERP) approach which reconciles traditional knowledge- and retrieval-based methods with modern learning-based methods for accurate and robust medical report generation. Specifically, KERP decomposes medical report generation into explicit medical abnormality graph learning and subsequent natural language modeling. KERP first employs an Encode module that transforms visual features into a structured abnormality graph by incorporating prior medical knowledge; then a Retrieve module that retrieves text templates based on the detected abnormalities; and lastly, a Paraphrase module that rewrites the templates according to specific cases. The core of KERP is a proposed generic implementation unit—Graph Transformer (GTR) that dynamically transforms high-level semantics between graph-structured data of multiple domains such as knowledge graphs, images and sequences. Experiments show that the proposed approach generates structured and robust reports supported with accurate abnormality description and explainable attentive regions, achieving the state-of-the-art results on two medical report benchmarks, with the best medical abnormality and disease classification accuracy and improved human evaluation performance.
17

Hoffmann, Maximilian, and Ralph Bergmann. "Using Graph Embedding Techniques in Process-Oriented Case-Based Reasoning." Algorithms 15, no. 2 (January 18, 2022): 27. http://dx.doi.org/10.3390/a15020027.

Abstract:
Similarity-based retrieval of semantic graphs is a core task of Process-Oriented Case-Based Reasoning (POCBR) with applications in real-world scenarios, e.g., in smart manufacturing. The involved similarity computation is usually complex and time-consuming, as it requires some kind of inexact graph matching. To tackle these problems, we present an approach to modeling similarity measures based on embedding semantic graphs via Graph Neural Networks (GNNs). Therefore, we first examine how arbitrary semantic graphs, including node and edge types and their knowledge-rich semantic annotations, can be encoded in a numeric format that is usable by GNNs. Given this, the architecture of two generic graph embedding models from the literature is adapted to enable their usage as a similarity measure for similarity-based retrieval. Thereby, one of the two models is more optimized towards fast similarity prediction, while the other model is optimized towards knowledge-intensive, more expressive predictions. The evaluation examines the quality and performance of these models in preselecting retrieval candidates and in approximating the ground-truth similarities of a graph-matching-based similarity measure for two semantic graph domains. The results show the great potential of the approach for use in a retrieval scenario, either as a preselection model or as an approximation of a graph similarity measure.
18

Zhu, Yongjun, Chao Che, Bo Jin, Ningrui Zhang, Chang Su, and Fei Wang. "Knowledge-driven drug repurposing using a comprehensive drug knowledge graph." Health Informatics Journal 26, no. 4 (July 17, 2020): 2737–50. http://dx.doi.org/10.1177/1460458220937101.

Abstract:
Due to the huge costs associated with new drug discovery and development, drug repurposing has become an important complement to the traditional de novo approach. With the increasing number of public databases and the rapid development of analytical methodologies, computational approaches have gained great momentum in the field of drug repurposing. In this study, we introduce an approach to knowledge-driven drug repurposing based on a comprehensive drug knowledge graph. We design and develop a drug knowledge graph by systematically integrating multiple drug knowledge bases. We describe path- and embedding-based data representation methods of transforming information in the drug knowledge graph into valuable inputs to allow machine learning models to predict drug repurposing candidates. The evaluation demonstrates that the knowledge-driven approach can produce high predictive results for known diabetes mellitus treatments by only using treatment information on other diseases. In addition, this approach supports exploratory investigation through the review of meta paths that connect drugs with diseases. This knowledge-driven approach is an effective drug repurposing strategy supporting large-scale prediction and the investigation of case studies.
19

Búr, Márton, Gábor Szilágyi, András Vörös, and Dániel Varró. "Distributed graph queries over models@run.time for runtime monitoring of cyber-physical systems." International Journal on Software Tools for Technology Transfer 22, no. 1 (September 26, 2019): 79–102. http://dx.doi.org/10.1007/s10009-019-00531-5.

Abstract:
Smart cyber-physical systems (CPSs) have complex interaction with their environment which is rarely known in advance, and they heavily depend on intelligent data processing carried out over a heterogeneous and distributed computation platform with resource-constrained devices to monitor, manage and control autonomous behavior. First, we propose a distributed runtime model to capture the operational state and the context information of a smart CPS using directed, typed and attributed graphs as high-level knowledge representation. The runtime model is distributed among the participating nodes, and it is consistently kept up to date in a continuously evolving environment by a time-triggered model management protocol. Our runtime models offer a (domain-specific) model query and manipulation interface over the reliable communication middleware of the Data Distribution Service (DDS) standard widely used in the CPS domain. Then, we propose to carry out distributed runtime monitoring by capturing critical properties of interest in the form of graph queries, and design a distributed graph query evaluation algorithm for evaluating such graph queries over the distributed runtime model. As the key innovation, our (1) distributed runtime model extends existing publish–subscribe middleware (like DDS) used in real-time CPS applications by enabling the dynamic creation and deletion of graph nodes (without compile time limits). Moreover, (2) our distributed query evaluation extends existing graph query techniques by enabling query evaluation in a real-time, resource-constrained environment while still providing scalable performance. Our approach is illustrated, and an initial scalability evaluation is carried out on the MoDeS3 CPS demonstrator and the open Train Benchmark for graph queries.
20

Kim, Jongmo, Kunyoung Kim, Mye Sohn, and Gyudong Park. "Deep Model-Based Security-Aware Entity Alignment Method for Edge-Specific Knowledge Graphs." Sustainability 14, no. 14 (July 20, 2022): 8877. http://dx.doi.org/10.3390/su14148877.

Abstract:
This paper proposes a deep model-based entity alignment method for the edge-specific knowledge graphs (KGs) to resolve the semantic heterogeneity between the edge systems’ data. To do so, this paper first analyzes the edge-specific knowledge graphs (KGs) to find unique characteristics. The deep model-based entity alignment method is developed based on their unique characteristics. The proposed method performs the entity alignment using a graph which is not topological but data-centric, to reflect the characteristics of the edge-specific KGs, which are mainly composed of the instance entities rather than the conceptual entities. In addition, two deep models, namely BERT (bidirectional encoder representations from transformers) for the concept entities and GAN (generative adversarial networks) for the instance entities, are applied to model learning. By utilizing the deep models, neural network models that humans cannot interpret, it is possible to secure data on the edge systems. The two learning models trained separately are integrated using a graph-based deep learning model GCN (graph convolution network). Finally, the integrated deep model is utilized to align the entities in the edge-specific KGs. To demonstrate the superiority of the proposed method, we perform the experiment and evaluation compared to the state-of-the-art entity alignment methods with the two experimental datasets from DBpedia, YAGO, and wikidata. In the evaluation metrics of Hits@k, mean rank (MR), and mean reciprocal rank (MRR), the proposed method shows the best predictive and generalization performance for the KG entity alignment.
21

Mežnar, Sebastian, Matej Bevec, Nada Lavrač, and Blaž Škrlj. "Ontology Completion with Graph-Based Machine Learning: A Comprehensive Evaluation." Machine Learning and Knowledge Extraction 4, no. 4 (December 1, 2022): 1107–23. http://dx.doi.org/10.3390/make4040056.

Abstract:
Increasing quantities of semantic resources offer a wealth of human knowledge, but their growth also increases the probability of wrong knowledge base entries. The development of approaches that identify potentially spurious parts of a given knowledge base is therefore highly relevant. We propose an approach for ontology completion that transforms an ontology into a graph and recommends missing edges using structure-only link analysis methods. By systematically evaluating thirteen methods (some for knowledge graphs) on eight different semantic resources, including Gene Ontology, Food Ontology, Marine Ontology, and similar ontologies, we demonstrate that a structure-only link analysis can offer a scalable and computationally efficient ontology completion approach for a subset of analyzed data sets. To the best of our knowledge, this is currently the most extensive systematic study of the applicability of different types of link analysis methods across semantic resources from different domains. It demonstrates that by considering symbolic node embeddings, explanations of the predictions (links) can be obtained, making this branch of methods potentially more valuable than black-box methods.
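The structure-only idea can be pictured with a classic neighborhood heuristic. The toy sketch below scores candidate missing edges in a small ontology-like graph with the Adamic-Adar index; the graph and candidates are invented, and the heuristic merely stands in for the thirteen methods actually benchmarked:

```python
# Toy illustration of structure-only link analysis for ontology completion.
import networkx as nx

onto = nx.Graph()  # ontology skeleton as an undirected graph (edges are illustrative)
onto.add_edges_from([
    ("food", "fruit"), ("food", "vegetable"),
    ("fruit", "apple"), ("fruit", "pear"), ("vegetable", "carrot"),
])

candidates = [("apple", "pear"), ("apple", "carrot")]
for u, v, score in nx.adamic_adar_index(onto, candidates):
    print(f"candidate edge ({u}, {v}): score {score:.3f}")
```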
22

Zhang, Yuxin, Bohan Li, Han Gao, Ye Ji, Han Yang, Meng Wang, and Weitong Chen. "Fine-Grained Evaluation of Knowledge Graph Embedding Model in Knowledge Enhancement Downstream Tasks." Big Data Research 25 (July 2021): 100218. http://dx.doi.org/10.1016/j.bdr.2021.100218.

23

Cox, Steven, Stanley C. Ahalt, James Balhoff, Chris Bizon, Karamarie Fecho, Yaphet Kebede, Kenneth Morton, Alexander Tropsha, Patrick Wang, and Hao Xu. "Visualization Environment for Federated Knowledge Graphs: Development of an Interactive Biomedical Query Language and Web Application Interface." JMIR Medical Informatics 8, no. 11 (November 23, 2020): e17964. http://dx.doi.org/10.2196/17964.

Abstract:
Background Efforts are underway to semantically integrate large biomedical knowledge graphs using common upper-level ontologies to federate graph-oriented application programming interfaces (APIs) to the data. However, federation poses several challenges, including query routing to appropriate knowledge sources, generation and evaluation of answer subsets, semantic merger of those answer subsets, and visualization and exploration of results. Objective We aimed to develop an interactive environment for query, visualization, and deep exploration of federated knowledge graphs. Methods We developed a biomedical query language and web application interface—termed Translator Query Language (TranQL)—to query semantically federated knowledge graphs and explore query results. TranQL uses the Biolink data model as an upper-level biomedical ontology and an API standard that has been adopted by the Biomedical Data Translator Consortium to specify a protocol for expressing a query as a graph of Biolink data elements compiled from statements in the TranQL query language. Queries are mapped to federated knowledge sources, and answers are merged into a knowledge graph, with mappings between the knowledge graph and specific elements of the query. The TranQL interactive web application includes a user interface to support user exploration of the federated knowledge graph. Results We developed 2 real-world use cases to validate TranQL and address biomedical questions of relevance to translational science. The use cases posed questions that traversed 2 federated Translator API endpoints: Integrated Clinical and Environmental Exposures Service (ICEES) and Reasoning Over Biomedical Objects linked in Knowledge Oriented Pathways (ROBOKOP). ICEES provides open access to observational clinical and environmental data, and ROBOKOP provides access to linked biomedical entities, such as “gene,” “chemical substance,” and “disease,” that are derived largely from curated public data sources. We successfully posed queries to TranQL that traversed these endpoints and retrieved answers that we visualized and evaluated. Conclusions TranQL can be used to ask questions of relevance to translational science, rapidly obtain answers that require assertions from a federation of knowledge sources, and provide valuable insights for translational research and clinical practice.
24

Paulheim, Heiko. "Knowledge graph refinement: A survey of approaches and evaluation methods." Semantic Web 8, no. 3 (December 6, 2016): 489–508. http://dx.doi.org/10.3233/sw-160218.

25

Sun, Haixia, Jin Xiao, Wei Zhu, Yilong He, Sheng Zhang, Xiaowei Xu, Li Hou, Jiao Li, Yuan Ni, and Guotong Xie. "Medical Knowledge Graph to Enhance Fraud, Waste, and Abuse Detection on Claim Data: Model Development and Performance Evaluation." JMIR Medical Informatics 8, no. 7 (July 23, 2020): e17653. http://dx.doi.org/10.2196/17653.

Abstract:
Background Fraud, Waste, and Abuse (FWA) detection is a significant yet challenging problem in the health insurance industry. An essential step in FWA detection is to check whether the medication is clinically reasonable with respect to the diagnosis. Currently, human experts with sufficient medical knowledge are required to perform this task. To reduce the cost, insurance inspectors tend to build an intelligent system to detect suspicious claims with inappropriate diagnoses/medications automatically. Objective The aim of this study was to develop an automated method for making use of a medical knowledge graph to identify clinically suspected claims for FWA detection. Methods First, we identified the medical knowledge that is required to assess the clinical rationality of the claims. We then searched for data sources that contain information to build such knowledge. In this study, we focused on Chinese medical knowledge. Second, we constructed a medical knowledge graph using unstructured knowledge. We used a deep learning–based method to extract the entities and relationships from the knowledge sources and developed a multilevel similarity matching approach to conduct the entity linking. To guarantee the quality of the medical knowledge graph, we involved human experts to review the entity and relationships with lower confidence. These reviewed results could be used to further improve the machine-learning models. Finally, we developed the rules to identify the suspected claims by reasoning according to the medical knowledge graph. Results We collected 185,796 drug labels from the China Food and Drug Administration, 3390 types of disease information from medical textbooks (eg, symptoms, diagnosis, treatment, and prognosis), and information from 5272 examinations as the knowledge sources. The final medical knowledge graph includes 1,616,549 nodes and 5,963,444 edges. We designed three knowledge graph reasoning rules to identify three kinds of inappropriate diagnosis/medications. The experimental results showed that the medical knowledge graph helps to detect 70% of the suspected claims. Conclusions The medical knowledge graph–based method successfully identified suspected cases of FWA (such as fraud diagnosis, excess prescription, and irrational prescription) from the claim documents, which helped to improve the efficiency of claim processing.
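The rule-based step can be pictured with a toy check: a claim is flagged when a prescribed drug has no treatment link to the claimed diagnosis in the knowledge graph. The triples and rule below are invented for illustration and are far simpler than the graph of 1.6 million nodes described above:

```python
# Toy "treats" relation standing in for the medical knowledge graph.
kg_treats = {
    ("metformin", "type 2 diabetes"),
    ("amoxicillin", "acute bronchitis"),
}

def flag_suspected(claim):
    """Return the drugs in a claim that are not linked to its diagnosis."""
    diagnosis = claim["diagnosis"]
    return [d for d in claim["drugs"] if (d, diagnosis) not in kg_treats]

claim = {"diagnosis": "type 2 diabetes", "drugs": ["metformin", "amoxicillin"]}
print(flag_suspected(claim))  # ['amoxicillin'] -> clinically suspected, sent for review
```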
26

Varitimiadis, Savvas, Konstantinos Kotis, Dimitra Pittou, and Georgios Konstantakis. "Graph-Based Conversational AI: Towards a Distributed and Collaborative Multi-Chatbot Approach for Museums." Applied Sciences 11, no. 19 (October 1, 2021): 9160. http://dx.doi.org/10.3390/app11199160.

Abstract:
Nowadays, museums are developing chatbots to assist their visitors and to provide an enhanced visiting experience. Most of these chatbots do not provide a human-like conversation and fail to deliver the complete knowledge requested by visitors. There are plenty of stand-alone museum chatbots, developed using a chatbot platform, that provide predefined dialog routes. However, as chatbot platforms evolve and AI technologies mature, new architectural approaches arise. Museums are already designing chatbots that are trained using machine learning techniques or connected to knowledge graphs, delivering more intelligent chatbots. This paper surveys a representative set of developed museum chatbots and platforms for implementing them. More importantly, it presents the result of a systematic approach for evaluating both chatbots and platforms. Furthermore, the paper introduces a novel approach to developing intelligent chatbots for museums. This approach emphasizes graph-based, distributed, and collaborative multi-chatbot conversational AI systems for museums. The paper accentuates the use of knowledge graphs as the key technology for potentially providing unlimited knowledge to chatbot users, satisfying conversational AI’s need for rich machine-understandable content. In addition, the proposed architecture is designed to deliver an efficient deployment solution where knowledge can be distributed (distributed knowledge graphs) and shared among different chatbots that collaborate when needed.
27

Portisch, Jan, Nicolas Heist, and Heiko Paulheim. "Knowledge graph embedding for data mining vs. knowledge graph embedding for link prediction – two sides of the same coin?" Semantic Web 13, no. 3 (April 6, 2022): 399–422. http://dx.doi.org/10.3233/sw-212892.

Abstract:
Knowledge Graph Embeddings, i.e., projections of entities and relations to lower dimensional spaces, have been proposed for two purposes: (1) providing an encoding for data mining tasks, and (2) predicting links in a knowledge graph. Both lines of research have been pursued rather in isolation from each other so far, each with their own benchmarks and evaluation methodologies. In this paper, we argue that both tasks are actually related, and we show that the first family of approaches can also be used for the second task and vice versa. In two series of experiments, we provide a comparison of both families of approaches on both tasks, which, to the best of our knowledge, has not been done so far. Furthermore, we discuss the differences in the similarity functions evoked by the different embedding approaches.
28

Li, Linfeng, Peng Wang, Yao Wang, Shenghui Wang, Jun Yan, Jinpeng Jiang, Buzhou Tang, Chengliang Wang, and Yuting Liu. "A Method to Learn Embedding of a Probabilistic Medical Knowledge Graph: Algorithm Development." JMIR Medical Informatics 8, no. 5 (May 21, 2020): e17645. http://dx.doi.org/10.2196/17645.

Abstract:
Background Knowledge graph embedding is an effective semantic representation method for entities and relations in knowledge graphs. Several translation-based algorithms, including TransE, TransH, TransR, TransD, and TranSparse, have been proposed to learn effective embedding vectors from typical knowledge graphs in which the relations between head and tail entities are deterministic. However, in medical knowledge graphs, the relations between head and tail entities are inherently probabilistic. This difference introduces a challenge in embedding medical knowledge graphs. Objective We aimed to address the challenge of how to learn the probability values of triplets into representation vectors by making enhancements to existing TransX (where X is E, H, R, D, or Sparse) algorithms, including the following: (1) constructing a mapping function between the score value and the probability, and (2) introducing probability-based loss of triplets into the original margin-based loss function. Methods We performed the proposed PrTransX algorithm on a medical knowledge graph that we built from large-scale real-world electronic medical records data. We evaluated the embeddings using link prediction task. Results Compared with the corresponding TransX algorithms, the proposed PrTransX performed better than the TransX model in all evaluation indicators, achieving a higher proportion of corrected entities ranked in the top 10 and normalized discounted cumulative gain of the top 10 predicted tail entities, and lower mean rank. Conclusions The proposed PrTransX successfully incorporated the uncertainty of the knowledge triplets into the embedding vectors.
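A rough sketch of the two enhancements listed in the abstract, grafted onto a TransE-style score (the mapping function and weighting below are assumptions about the general shape, not the paper's exact PrTransX formulation):

```python
# Sketch: TransE-style score, a score-to-probability mapping, and a
# probability-aware margin loss; all formulations here are illustrative.
import torch
import torch.nn.functional as F

def transe_score(h, r, t):
    # Lower is better: ||h + r - t||_1
    return torch.norm(h + r - t, p=1, dim=-1)

def score_to_prob(score):
    # Monotone mapping from a score to a pseudo-probability in (0, 1).
    return torch.sigmoid(-score)

def probabilistic_margin_loss(pos_score, neg_score, triple_prob, margin=1.0):
    # Margin ranking term weighted by the observed triple probability, plus a
    # term pulling the mapped probability toward the observed one.
    ranking = triple_prob * F.relu(margin + pos_score - neg_score)
    calibration = (score_to_prob(pos_score) - triple_prob).pow(2)
    return (ranking + calibration).mean()
```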
29

Choi, Junho. "Graph Embedding-Based Domain-Specific Knowledge Graph Expansion Using Research Literature Summary." Sustainability 14, no. 19 (September 27, 2022): 12299. http://dx.doi.org/10.3390/su141912299.

Abstract:
Knowledge bases built in the knowledge processing field have a problem in that experts have to add rules or update them through modifications. To solve this problem, research has been conducted on knowledge graph expansion methods using deep learning technology, and in recent years, many studies have been conducted on methods of generating knowledge bases by embedding the knowledge graph’s triple information in a continuous vector space. In this paper, using a research literature summary, we propose a domain-specific knowledge graph expansion method based on graph embedding. To this end, we perform pre-processing and text summarization on the collected research literature data. Furthermore, we propose a method of generating a knowledge graph by extracting the entity and relation information and a method of expanding the knowledge graph using web data. To this end, we summarize research literature using the Bidirectional Encoder Representations from Transformers for Summarization (BERTSUM) model based on domain-specific research literature data and design a Research-BERT (RE-BERT) model that extracts entities and relation information, which are components of the knowledge graph, from the summarized research literature. Moreover, we propose a method of expanding related entities based on Google News after extracting related entities through the web for the entities in the generated knowledge graph. In the experiment, we measured the performance of summarizing research literature using the BERTSUM model and the accuracy of the knowledge graph relation extraction model. In the experiment of removing unnecessary sentences from the research literature text and summarizing them in key sentences, the result shows that the BERTSUM Classifier model’s ROUGE-1 precision is 57.86%. The knowledge graph extraction performance was measured using the mean reciprocal rank (MRR), mean rank (MR), and HIT@N rank-based evaluation metrics. The knowledge graph extraction method using summarized text showed superior performance in terms of speed and knowledge graph quality.
30

Dorodnykh, N. O., and A. Yu Yurin. "An approach for automated knowledge graph filling with entities based on table analysis." Ontology of Designing 12, no. 3 (September 27, 2022): 336–52. http://dx.doi.org/10.18287/2223-9537-2022-12-3-336-352.

Abstract:
The use of Semantic Web technologies including ontologies and knowledge graphs is a widespread practice in the development of modern intelligent systems for information retrieval, recommendation and question-answering. The process of developing ontologies and knowledge graphs involves the use of various information sources, for example, databases, documents, conceptual models. Tables are one of the most accessible and widely used ways of storing and presenting information, as well as a valuable source of domain knowledge. In this paper, it is proposed to automate the extraction process of specific entities (facts) from tabular data for the subsequent filling of a target knowledge graph. A new approach is proposed for this purpose. A key feature of this approach is the semantic interpretation (annotation) of individual table elements. A description of its main stages is given, the application of the approach is shown in solving practical problems of creating subject knowledge graphs, including in the field of industrial safety expertise of petrochemical equipment and technological complexes. An experimental quantitative evaluation of the proposed approach was also obtained on a test set of tabular data. The obtained results showed the feasibility of using the proposed approach and the developed software to solve the problem of extracting facts from tabular data for the subsequent filling of the target knowledge graph.
31

Ferrari, Ilaria, Giacomo Frisoni, Paolo Italiani, Gianluca Moro, and Claudio Sartori. "Comprehensive Analysis of Knowledge Graph Embedding Techniques Benchmarked on Link Prediction." Electronics 11, no. 23 (November 23, 2022): 3866. http://dx.doi.org/10.3390/electronics11233866.

Abstract:
In knowledge graph representation learning, link prediction is among the most popular and influential tasks. Its surge in popularity has resulted in a panoply of orthogonal embedding-based methods projecting entities and relations into low-dimensional continuous vectors. To further enrich the research space, the community witnessed a prolific development of evaluation benchmarks with a variety of structures and domains. Therefore, researchers and practitioners face an unprecedented challenge in effectively identifying the best solution to their needs. To this end, we propose the most comprehensive and up-to-date study to systematically assess the effectiveness and efficiency of embedding models for knowledge graph completion. We compare 13 models on six datasets with different sizes, domains, and relational properties, covering translational, semantic matching, and neural network-based encoders. A fine-grained evaluation is conducted to compare each technique head-to-head in terms of standard metrics, training and evaluation times, memory consumption, carbon footprint, and space geometry. Our results demonstrate the high dependence between performance and graph types, identifying the best options for each scenario. Among all the encoding strategies, the new generation of translational models emerges as the most promising, bringing out the best and most consistent results across all the datasets and evaluation criteria.
32

Kochsiek, Adrian, and Rainer Gemulla. "Parallel training of knowledge graph embedding models." Proceedings of the VLDB Endowment 15, no. 3 (November 2021): 633–45. http://dx.doi.org/10.14778/3494124.3494144.

Abstract:
Knowledge graph embedding (KGE) models represent the entities and relations of a knowledge graph (KG) using dense continuous representations called embeddings. KGE methods have recently gained traction for tasks such as knowledge graph completion and reasoning as well as to provide suitable entity representations for downstream learning tasks. While a large part of the available literature focuses on small KGs, a number of frameworks that are able to train KGE models for large-scale KGs by parallelization across multiple GPUs or machines have recently been proposed. So far, the benefits and drawbacks of the various parallelization techniques have not been studied comprehensively. In this paper, we report on an experimental study in which we presented, re-implemented in a common computational framework, investigated, and improved the available techniques. We found that the evaluation methodologies used in prior work are often not comparable and can be misleading, and that most of currently implemented training methods tend to have a negative impact on embedding quality. We propose a simple but effective variation of the stratification technique used by PyTorch BigGraph for mitigation. Moreover, basic random partitioning can be an effective or even the best-performing choice when combined with suitable sampling techniques. Ultimately, we found that efficient and effective parallel training of large-scale KGE models is indeed achievable but requires a careful choice of techniques.
33

Kim, Kuekyeng, Yuna Hur, Gyeongmin Kim, and Heuiseok Lim. "GREG: A Global Level Relation Extraction with Knowledge Graph Embedding." Applied Sciences 10, no. 3 (February 10, 2020): 1181. http://dx.doi.org/10.3390/app10031181.

Abstract:
In an age overflowing with information, converting unstructured data into structured data is a vital task. Currently, most relation extraction modules focus on the extraction of local mention-level relations, usually from short volumes of text. However, in most cases, the most vital and important relations are those that are described at length and in detail. In this research, we propose GREG: a Global level Relation Extractor model using knowledge graph embeddings for document-level inputs. The model uses vector representations of mention-level ‘local’ relations to construct knowledge graphs that can represent the input document. The knowledge graph is then used to predict global level relations from documents or large bodies of text. The proposed model is largely divided into two modules which are synchronized during their training. Thus, each of the model’s modules is designed to deal with local relations and global relations separately. This allows the model to avoid the loss of information that occurs when too much information is compressed into smaller representations during global level relation extraction. Through evaluation, we show that the proposed model consistently yields high performance in predicting both global level and local level relations.
APA, Harvard, Vancouver, ISO, and other styles
34

Xiong, Wangping, Jun Cao, Xian Zhou, Jianqiang Du, Bin Nie, Zhijun Zeng, and Tianci Li. "Design and Evaluation of a Prescription Drug Monitoring Program for Chinese Patent Medicine based on Knowledge Graph." Evidence-Based Complementary and Alternative Medicine 2021 (July 16, 2021): 1–8. http://dx.doi.org/10.1155/2021/9970063.

Full text
Abstract:
Background. Chinese patent medicines are increasingly used clinically, and a prescription drug monitoring program is an effective tool to promote drug safety and maintain health. Methods. We constructed a prescription drug monitoring program for Chinese patent medicines based on a knowledge graph. First, we extracted the key information on Chinese patent medicines, diseases, and symptoms from a domain-specific corpus by information extraction. Second, based on the extracted entities and relationships, a knowledge graph was constructed to form a rule base for data monitoring. Then, a named entity recognition model extracted the key information from the electronic medical record to be monitored and matched it against the knowledge graph to realize the monitoring of the Chinese patent medicines in the prescription. Results. Named entity recognition based on the pretrained model achieved an F1 value of 83.3% on the Chinese patent medicines dataset. On the basis of entity recognition technology and the knowledge graph, we implemented a prescription drug monitoring program for Chinese patent medicines. The accuracy of combined medication monitoring of three or more drugs increased from 68% to 86.4%, and the accuracy of drug control monitoring increased from 70% to 97%. The response time for conflicting prescriptions with two drugs was shortened from 1.3 s to 0.8 s, and for conflicting prescriptions with three or more drugs from 5.2 s to 1.4 s. Conclusions. The program constructed in this study can respond quickly and improves the efficiency of monitoring prescriptions. It is of great significance for ensuring the safety of patients' medication.
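A hypothetical sketch of the rule-base matching step described above, with invented drug names and conflict rules; the paper's actual knowledge graph and monitoring logic are far richer.

```python
# Hypothetical sketch: check a prescription against conflict rules derived
# from a knowledge graph. Drug names and rules below are invented examples.

conflict_rules = {
    frozenset({"drug_a", "drug_b"}),            # pairwise incompatibility
    frozenset({"drug_c", "drug_d", "drug_e"}),  # multi-drug incompatibility
}

def check_prescription(extracted_drugs, rules=conflict_rules):
    """Return the conflict rules triggered by the drugs found in a prescription."""
    drugs = set(extracted_drugs)
    return [rule for rule in rules if rule <= drugs]

violations = check_prescription(["drug_a", "drug_b", "drug_f"])
print(violations)  # -> [frozenset({'drug_a', 'drug_b'})]
```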
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Taejin, Yeoil Yun, and Namgyu Kim. "Deep Learning-Based Knowledge Graph Generation for COVID-19." Sustainability 13, no. 4 (February 19, 2021): 2276. http://dx.doi.org/10.3390/su13042276.

Full text
Abstract:
Many attempts have been made to construct new domain-specific knowledge graphs using existing knowledge bases of various domains. However, traditional "dictionary-based" or "supervised" knowledge graph building methods rely on predefined human-annotated resources of entities and their relationships. The cost of creating human-annotated resources is high in terms of both time and effort. This means that relying on human-annotated resources does not allow rapid adaptability in describing new knowledge when domain-specific information is added or updated very frequently, as in the recent coronavirus disease-19 (COVID-19) pandemic. Therefore, in this study, we propose an Open Information Extraction (OpenIE) system based on unsupervised learning without a pre-built dataset. The proposed method obtains knowledge from a large collection of text documents about COVID-19 rather than a general knowledge base and adds it to the existing knowledge graph. First, we constructed a COVID-19 entity dictionary, and then we scraped a large text dataset related to COVID-19. Next, we constructed a COVID-19 perspective language model by fine-tuning the bidirectional encoder representations from transformers (BERT) pre-trained language model. Finally, we defined a new COVID-19-specific knowledge base by extracting connecting words between COVID-19 entities using the BERT self-attention weights from COVID-19 sentences. Experimental results demonstrate that the proposed Co-BERT model outperforms the original BERT in terms of mask prediction accuracy and the metric for evaluation of translation with explicit ordering (METEOR) score.
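A minimal sketch, assuming the Hugging Face Transformers library, of how BERT self-attention weights between two token positions can be read out; this is only illustrative and is not the Co-BERT model described above.

```python
# Minimal sketch: inspect BERT self-attention between two token positions.
# Not the paper's Co-BERT model; it only shows how attention weights can be
# obtained with Hugging Face Transformers.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The virus binds to the ACE2 receptor on host cells."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq_len, seq_len) tensor per layer.
avg_attention = outputs.attentions[-1][0].mean(dim=0)   # average heads of the last layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
i, j = 2, 7  # illustrative token positions; a real system would take entity spans from NER
print(tokens[i], "->", tokens[j], float(avg_attention[i, j]))
```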
APA, Harvard, Vancouver, ISO, and other styles
36

Liao, Danlu. "Construction of Knowledge Graph English Online Homework Evaluation System Based on Multimodal Neural Network Feature Extraction." Computational Intelligence and Neuroscience 2022 (May 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7941414.

Full text
Abstract:
This paper defines the data schema of a multimodal knowledge graph, that is, the entity types and the relationships between entities. Knowledge-point entities are divided into three types (structures, algorithms, and related terms), speech is defined as an additional entity type, and six semantic relationships are defined between entities. The paper adopts a named entity recognition model that combines a bidirectional long short-term memory network with a convolutional neural network to capture both local and global textual information, uses a conditional random field to label the feature sequences, and incorporates a domain dictionary. A knowledge evaluation method based on triple context information is designed: through knowledge representation learning it combines the internal relationship-path information in the knowledge graph with external text information related to the entities in the triple, and uses this to evaluate the credibility of triples. The knowledge evaluation ability of the English online homework evaluation system was assessed on the knowledge graph noise detection task, the knowledge graph completion task (entity link prediction), and the triple classification task. The experimental results show that the system has good noise-handling and knowledge-credibility calculation abilities, and a stronger evaluation ability for low-noise data. Using the online homework platform to implement personalized English homework helps improve students' mood towards homework, with the "happy" homework mood significantly improved, and increases students' initiative. With the platform, students' homework time was reduced while the homework was still completed well, achieving the purpose of "reducing burden and increasing efficiency."
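A rough PyTorch skeleton of a BiLSTM-plus-CNN sequence tagger in the spirit of the named entity recognition model described above; hyperparameters are placeholders and the conditional random field layer is omitted for brevity.

```python
# Skeleton of a BiLSTM + CNN sequence tagger, roughly in the spirit of the
# architecture described above. Hyperparameters are placeholders; a real model
# would add a CRF decoding layer on top of the per-token scores.
import torch
import torch.nn as nn

class BiLSTMCNNTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # CNN captures local n-gram features, BiLSTM captures global context.
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel, padding=kernel // 2)
        self.lstm = nn.LSTM(emb_dim * 2, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(hidden * 2, num_tags)

    def forward(self, token_ids):                              # (batch, seq_len)
        emb = self.embed(token_ids)                            # (batch, seq_len, emb)
        local = self.conv(emb.transpose(1, 2)).transpose(1, 2) # local features
        features = torch.cat([emb, local], dim=-1)
        context, _ = self.lstm(features)                       # global context
        return self.out(context)                               # per-token tag scores
```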
APA, Harvard, Vancouver, ISO, and other styles
37

Yan, Jianzhuo, Tiantian Lv, and Yongchuan Yu. "Construction and Recommendation of a Water Affair Knowledge Graph." Sustainability 10, no. 10 (September 26, 2018): 3429. http://dx.doi.org/10.3390/su10103429.

Full text
Abstract:
Water affair data mainly consist of structured and unstructured data, and the storage methods are diverse and heterogeneous. To meet the needs of water affair information integration, a method of constructing a knowledge graph from a combination of structured and unstructured water affair data is proposed. To meet the needs of water information search, an information recommendation system based on the constructed water affair knowledge graph is proposed. In this paper, the edit distance algorithm and the latent Dirichlet allocation (LDA) algorithm are used to construct the combined knowledge graph, and the graph is validated based on a semantic distance algorithm. Finally, this paper uses recall, accuracy, and the F-measure to compare the algorithms. The evaluation results of the edit distance algorithm and the LDA algorithm exceed 90%, which is higher than the comparison algorithms, confirming the validity and accuracy of the constructed water affair knowledge graph. Furthermore, a water affair verification set is used to verify the recommendation method, demonstrating its effectiveness.
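A standard dynamic-programming implementation of the edit (Levenshtein) distance used in the construction step above; this is a generic sketch, not the paper's code.

```python
# Minimal sketch of the edit (Levenshtein) distance used for aligning entity
# mentions when building the knowledge graph.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

print(edit_distance("reservoir level", "reservoir levels"))  # -> 1
```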
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Jiakang, Wolfgang Mayer, Hongyu Zhang, Keqing He, and Zaiwen Feng. "Automatic Semantic Modeling for Structural Data Source with the Prior Knowledge from Knowledge Base." Mathematics 10, no. 24 (December 15, 2022): 4778. http://dx.doi.org/10.3390/math10244778.

Full text
Abstract:
A critical step in sharing semantic content online is to map the structural data source to a public domain ontology. This problem is denoted as the Relational-To-Ontology Mapping Problem (Rel2Onto). Considerable effort and expertise are required for manually modeling the semantics of data. Therefore, an automatic approach for learning the semantics of a data source is desirable. Most of the existing work studies the semantic annotation of source attributes. However, although critical, research on automatically inferring the relationships between attributes is very limited. In this paper, we propose a novel method for semantically annotating structured data sources using machine learning, graph matching, and modified frequent subgraph mining to amend the candidate model. In our work, a knowledge graph is used as prior knowledge. Our evaluation shows that our approach outperforms two state-of-the-art solutions in tricky cases where only a few semantic models are known.
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Guoyan, Qirui Zhang, Du Yu, Sijun Lu, and Yuwei Lu. "JKRL: Joint Knowledge Representation Learning of Text Description and Knowledge Graph." Symmetry 15, no. 5 (May 10, 2023): 1056. http://dx.doi.org/10.3390/sym15051056.

Full text
Abstract:
The purpose of knowledge representation learning is to learn vector representations of research objects in a low-dimensional space and to explore the relationships between the embedded objects. However, most methods only consider the triple structure in the knowledge graph and ignore the additional information related to the triple, especially the text description information. In this paper, we propose a knowledge graph representation model with a symmetric architecture called Joint Knowledge Representation Learning of Text Description and Knowledge Graph (JKRL), which models the entity and relation descriptions of the triple structure for joint representation learning and balances the contributions of the triple structure and the text descriptions during vector learning. First, we adopt the TransE model to learn structural vector representations of entities and relations, and then use a CNN model to encode the entity description and obtain the textual representation of the entity. To semantically encode the relation descriptions, we designed an Attention-Bi-LSTM text encoder, which introduces an attention mechanism into the Bi-LSTM model to calculate the semantic relevance between each word in the sentence and the different relations. In addition, we introduce position features alongside word features to better encode word-order information. Finally, we define a joint evaluation function to learn the joint representation of the structural and textual representations. The experiments show that, compared with the baseline methods, our model achieves the best performance on both the Mean Rank and Hits@10 metrics. The accuracy of the triple classification task on the FB15K dataset reached 93.2%.
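A small NumPy sketch of the TransE structural score and margin-based loss adopted in the first step above; the toy embeddings and dimensionality are assumptions for illustration.

```python
# Sketch of the TransE structural score mentioned above: h + r should be close
# to t for true triples. Written with NumPy for clarity; not the JKRL code.
import numpy as np

def transe_score(h, r, t, norm=1):
    """Lower is better: distance between the translated head and the tail embedding."""
    return np.linalg.norm(h + r - t, ord=norm)

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    """Standard margin-based loss pushing true triples below corrupted ones."""
    return max(0.0, margin + pos_score - neg_score)

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))     # toy 50-dimensional embeddings
t_corrupt = rng.normal(size=50)        # corrupted tail for a negative sample
loss = margin_ranking_loss(transe_score(h, r, t), transe_score(h, r, t_corrupt))
print(loss)
```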
APA, Harvard, Vancouver, ISO, and other styles
40

Welivita, Anuradha, and Pearl Pu. "HEAL: A Knowledge Graph for Distress Management Conversations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11459–67. http://dx.doi.org/10.1609/aaai.v36i10.21398.

Full text
Abstract:
The demands of the modern world are increasingly responsible for causing psychological burdens and having adverse impacts on our mental health. As a result, neural conversational agents with empathetic responding and distress management capabilities have recently gained popularity. However, existing end-to-end empathetic conversational agents often generate generic and repetitive empathetic statements such as "I am sorry to hear that", which fail to convey specificity to a given situation. Due to the lack of controllability in such models, they also impose the risk of generating toxic responses. Chatbots leveraging reasoning over knowledge graphs are seen as an efficient and fail-safe solution compared to end-to-end models. However, such resources are limited in the context of emotional distress. To address this, we introduce HEAL, a knowledge graph developed based on 1M distress narratives and their corresponding consoling responses curated from Reddit. It consists of 22K nodes identifying different types of stressors, speaker expectations, responses, and feedback types associated with distress dialogues, and forms 104K connections between different types of nodes. Each node is associated with one of 41 affective states. Statistical and visual analysis conducted on HEAL reveals emotional dynamics between speakers and listeners in distress-oriented conversations and identifies useful response patterns leading to emotional relief. Automatic and human evaluation experiments show that HEAL's responses are more diverse, empathetic, and reliable compared to the baselines.
APA, Harvard, Vancouver, ISO, and other styles
41

Alam, Mehwish, Aldo Gangemi, Valentina Presutti, and Diego Reforgiato Recupero. "Semantic role labeling for knowledge graph extraction from text." Progress in Artificial Intelligence 10, no. 3 (April 5, 2021): 309–20. http://dx.doi.org/10.1007/s13748-021-00241-7.

Full text
Abstract:
This paper introduces TakeFive, a new semantic role labeling method that transforms a text into a frame-oriented knowledge graph. It performs dependency parsing, identifies the words that evoke lexical frames, locates the roles and fillers for each frame, runs coercion techniques, and formalizes the results as a knowledge graph. This formal representation complies with the frame semantics used in Framester, a factual-linguistic linked data resource. We tested our method on the WSJ section of the Penn Treebank annotated with VerbNet and PropBank labels and on the Brown corpus. The evaluation was performed according to the CoNLL Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. The obtained precision, recall, and F1 values indicate that TakeFive is competitive with other existing methods such as SEMAFOR, Pikes, PathLSTM, and FRED. We finally discuss how to combine TakeFive and FRED, obtaining higher values of precision, recall, and F1 measure.
APA, Harvard, Vancouver, ISO, and other styles
42

Luo, Zhiwei, Rong Xie, Wen Chen, and Zetao Ye. "Automatic domain terminology extraction and its evaluation for domain knowledge graph construction." Web Intelligence 16, no. 3 (September 11, 2018): 173–85. http://dx.doi.org/10.3233/web-180385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ahmad, Zishan, Asif Ekbal, Shubhashis Sengupta, and Pushpak Bhattacharyya. "Neural response generation for task completion using conversational knowledge graph." PLOS ONE 18, no. 2 (February 9, 2023): e0269856. http://dx.doi.org/10.1371/journal.pone.0269856.

Full text
Abstract:
Effective dialogue generation for task completion is challenging to build. The task requires the response generation system to produce responses consistent with the intent and slot values, to exhibit diversity in responses, and to handle multiple domains. The response also needs to be contextually relevant with respect to the previous utterances in the conversation. In this paper, we build six different models containing Bi-directional Long Short Term Memory (Bi-LSTM) and Bidirectional Encoder Representations from Transformers (BERT) based encoders. To effectively generate the correct slot values, we implement a copy mechanism at the decoder side. To capture the conversation context and the current state of the conversation, we introduce a simple heuristic to build a conversational knowledge graph. Using this novel algorithm we are able to capture important aspects of a conversation. This conversational knowledge graph is then used by our response generation model to generate more relevant and consistent responses. Using this knowledge graph, we do not need the entire utterance history, but only the last utterance, to capture the conversational context. We conduct experiments showing the effectiveness of the knowledge graph in capturing the context and generating good responses. We compare these results against hierarchical encoder-decoder models and show that using triples from the conversational knowledge graph is an effective method to capture context and the user requirement. Using this knowledge graph, we show an average performance gain of 0.75 BLEU score across different models. Similar results also hold across different manual evaluation metrics.
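A hypothetical sketch of keeping a conversational knowledge graph as (intent, slot, value) triples updated from the latest utterance only, in the spirit of the heuristic described above; the intent and slot names are invented.

```python
# Hypothetical sketch of a conversational knowledge graph kept as triples,
# updated from the slots of the latest utterance only (as the paper argues,
# the full utterance history is then unnecessary). Slot names are invented.

class ConversationGraph:
    def __init__(self):
        self.triples = set()

    def update(self, intent, slots):
        """Add (intent, slot, value) triples extracted from the last utterance."""
        for slot, value in slots.items():
            self.triples.add((intent, slot, value))

    def context(self):
        """Triples handed to the response generator as conversational context."""
        return sorted(self.triples)

graph = ConversationGraph()
graph.update("book_restaurant", {"cuisine": "italian", "people": "4"})
graph.update("book_restaurant", {"time": "19:00"})
print(graph.context())
```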
APA, Harvard, Vancouver, ISO, and other styles
44

Kazimianec, Michail, and Nikolaus Augsten. "Clustering with Proximity Graphs." International Journal of Knowledge-Based Organizations 3, no. 4 (October 2013): 84–104. http://dx.doi.org/10.4018/ijkbo.2013100105.

Full text
Abstract:
Graph Proximity Cleansing (GPC) is a string clustering algorithm that automatically detects cluster borders and has been successfully used for string cleansing. For each potential cluster a so-called proximity graph is computed, and the cluster border is detected based on the proximity graph. However, the computation of the proximity graph is expensive, and the state-of-the-art GPC algorithms only approximate the proximity graph using a sampling technique. Further, the quality of GPC clusters has never been compared to standard clustering techniques like k-means, density-based, or hierarchical clustering. In this article the authors propose two efficient algorithms, PG-DS and PG-SM, for the exact computation of proximity graphs. They experimentally show that their solutions are faster even if the sampling-based algorithms use very small sample sizes. The authors provide a thorough experimental evaluation of GPC and conclude that it is very efficient and shows good clustering quality in comparison to the standard techniques. These results open a new perspective on string clustering in settings where no knowledge about the input data is available.
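A rough sketch of the proximity-graph idea for one candidate cluster centre, with a deliberately naive border guess; GPC's exact construction and border detection are more involved, so this only illustrates the data structure.

```python
# Rough sketch of the proximity graph for one candidate cluster centre: the
# number of neighbouring strings within each edit-distance radius. GPC's actual
# border detection is more involved; this only illustrates the structure.

def proximity_graph(center, strings, distance, max_radius=5):
    """Return [number of strings within radius k of `center`] for k = 0..max_radius."""
    dists = [distance(center, s) for s in strings]
    return [sum(d <= k for d in dists) for k in range(max_radius + 1)]

def detect_border(counts):
    """Naive border guess: the first radius at which the neighbourhood stops growing."""
    for k in range(1, len(counts)):
        if counts[k] == counts[k - 1]:
            return k - 1
    return len(counts) - 1
```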
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Yangrui, Yaping Zhu, and Pengpeng Jian. "Application of Knowledge Graph in Water Conservancy Education Resource Organization under the Background of Big Data." Electronics 11, no. 23 (November 26, 2022): 3913. http://dx.doi.org/10.3390/electronics11233913.

Full text
Abstract:
The key to improving the readability and usage of educational resources is their orderly arrangement and integration. Knowledge graphs, a large-scale form of knowledge engineering, are an effective tool for managing and organizing educational resources. The water conservancy educational big data is organized into three tiers of objectives–courses–knowledge units based on the connotation level of self-directed learning. Combined with the idea of Outcome-based Education (OBE), a goal-oriented knowledge graph structure for water conservancy disciplines and a graph creation method are proposed. The focus is the error accumulation problem of traditional relation extraction pipelines, in which named entity recognition based on rules or sequence labeling is completed first and relation classification is performed afterwards. To address this, a water conservancy disciplines entity and relation joint extraction (WDERJE) model built on a prompt mechanism is designed: the entity-relation extraction task is treated as a sequence-to-sequence generation task, and a structured extraction language is used to unify the encoding of entity extraction and relation extraction. The evaluation results of the WDERJE model show that the F_0.5 value of each entity extraction is above 0.76, and nearly 180,000 relation triples are extracted in total. The graph fully optimizes the organization and management of water conservancy education resources and effectively improves the readability and utilization rate of water conservancy teaching resources.
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Zhi-Qing, Zi-Xuan Fu, Wen-Jun Li, Hao Fan, Shu-Nan Li, Xi-Mo Wang, and Peng Zhou. "Prediction of Diabetic Macular Edema Using Knowledge Graph." Diagnostics 13, no. 11 (May 26, 2023): 1858. http://dx.doi.org/10.3390/diagnostics13111858.

Full text
Abstract:
Diabetic macular edema (DME) is a significant complication of diabetes that affects the eye and is a primary contributor to vision loss in individuals with diabetes. Early control of the related risk factors is crucial to reduce the incidence of DME. Artificial intelligence (AI) clinical decision-making tools can construct disease prediction models to aid in the clinical screening of the high-risk population for early disease intervention. However, conventional machine learning and data mining techniques have limitations in predicting diseases when dealing with missing feature values. To solve this problem, a knowledge graph displays the connection relationships of multi-source and multi-domain data in the form of a semantic network, enabling cross-domain modeling and queries. This approach can facilitate the personalized prediction of diseases using any number of known feature values. In this study, we proposed an improved correlation enhancement algorithm based on knowledge graph reasoning to comprehensively evaluate the factors that influence DME and achieve disease prediction. We constructed a knowledge graph based on Neo4j by preprocessing the collected clinical data and analyzing the statistical rules. Based on reasoning over the statistical rules of the knowledge graph, we used the correlation enhancement coefficient and the generalized closeness degree method to enhance the model. Meanwhile, we analyzed and verified the models' results using link prediction evaluation indicators. The disease prediction model proposed in this study achieved a precision of 86.21%, making it more accurate and efficient in predicting DME. Furthermore, the clinical decision support system developed using this model can facilitate personalized disease risk prediction, making it convenient for the clinical screening of a high-risk population and early disease intervention.
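A hypothetical sketch of querying a Neo4j graph such as the one described above with the official Python driver; the connection details, node labels, and relationship type are invented for illustration and are not the paper's schema.

```python
# Hypothetical sketch of querying a Neo4j knowledge graph like the one the
# paper builds, here counting how often a risk factor co-occurs with DME.
# The URI, credentials, labels, and relationship type are all invented.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (f:RiskFactor {name: $factor})-[:ASSOCIATED_WITH]->(d:Disease {name: 'DME'})
RETURN count(*) AS cooccurrence
"""

with driver.session() as session:
    record = session.run(query, factor="HbA1c_high").single()
    print(record["cooccurrence"])

driver.close()
```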
APA, Harvard, Vancouver, ISO, and other styles
47

Hernández-Almazán, Jorge Arturo, Juan Diego Lumbreras-Vega, Arturo Amaya Amaya, and Rubén Machucho-Cadena. "Knowledge Graph to determine the domain of learning in Higher Education." Apertura 13, no. 1 (March 26, 2021): 118–33. http://dx.doi.org/10.32870/ap.v13n1.1937.

Full text
Abstract:
Representing a student's knowledge in an academic discipline plays an important role in boosting the student's skills. To support stakeholders in the educational domain, it is necessary to provide them with robust assessment strategies that facilitate the teaching-learning process. A student's mastery is determined by the degree of knowledge they objectively demonstrate on the topics included in the different areas that make up an academic discipline. Although there is a wide variety of techniques to represent knowledge, the Knowledge Graph technique in particular is becoming relevant due to the structured approach and benefits it offers. This paper proposes a method that classifies and weights the nodes (topics) of a Knowledge Graph of a disciplinary area. The method has two aims: to avoid exhaustive evaluation of the nodes and to weight the nodes with adequate precision. Its application is illustrated by a case study. As a result, applying the proposed method yields a Knowledge Graph with classified and weighted nodes, in which 100% of the topics are covered through the objective evaluation of only 20.8% of the nodes (10 nodes). It is concluded that the proposed method has the potential to be used in the representation and management of knowledge, although the iteration between phases still needs to be improved to control the number of objectively evaluated nodes.
APA, Harvard, Vancouver, ISO, and other styles
48

Le, Tiep, Francesco Fabiano, Tran Son, and Enrico Pontelli. "EFP and PG-EFP: Epistemic Forward Search Planners in Multi-Agent Domains." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 161–70. http://dx.doi.org/10.1609/icaps.v28i1.13881.

Full text
Abstract:
This paper presents two prototypical epistemic forward planners, called EFP and PG-EFP, for generating plans in multi-agent environments. These planners differ from recently developed epistemic planners in that they can deal with unlimited nested beliefs and common knowledge, and are capable of generating plans with both knowledge and belief goals. EFP is a breadth-first search planner, while PG-EFP is a heuristic search based system. To generate heuristics in PG-EFP, the paper introduces the notion of an epistemic planning graph. The paper includes an evaluation of the planners using benchmarks collected from the literature and discusses the issues that affect their scalability and efficiency, thus identifying potential directions for future work. It also includes an experimental evaluation that demonstrates the usefulness of epistemic planning graphs.
APA, Harvard, Vancouver, ISO, and other styles
49

Tauqeer, Amar, Anelia Kurteva, Tek Raj Chhetri, Albin Ahmeti, and Anna Fensel. "Automated GDPR Contract Compliance Verification Using Knowledge Graphs." Information 13, no. 10 (September 24, 2022): 447. http://dx.doi.org/10.3390/info13100447.

Full text
Abstract:
In the past few years, the main research efforts regarding General Data Protection Regulation (GDPR)-compliant data sharing have been focused primarily on informed consent (one of the six GDPR lawful bases for data processing). In cases such as Business-to-Business (B2B) and Business-to-Consumer (B2C) data sharing, when consent might not be enough, many small and medium enterprises (SMEs) still depend on contracts—a GDPR basis that is often overlooked due to its complexity. The contract's lifecycle comprises many stages (e.g., drafting, negotiation, and signing) that must be executed in compliance with GDPR. Despite the active research efforts on digital contracts, contract-based GDPR compliance and challenges such as contract interoperability have not been sufficiently elaborated on yet. Since knowledge graphs and ontologies provide interoperability and support knowledge discovery, we propose and develop a knowledge graph-based tool for GDPR contract compliance verification (CCV). It binds GDPR's legal basis to data sharing contracts. In addition, we conducted a performance evaluation in terms of execution time, together with test cases to validate CCV's correctness, in order to determine the overhead and applicability of the proposed tool in smart city and insurance application scenarios. The evaluation results and the correctness of the CCV tool demonstrate the tool's practicability for deployment in the real world with minimum overhead.
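An illustrative rdflib sketch, not the CCV tool itself, of checking whether contracts in a small RDF graph state a GDPR legal basis; the namespace and property names are invented for the example.

```python
# Illustrative sketch: check with rdflib whether every contract in a small RDF
# graph states a legal basis. The namespace and properties below are invented.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/contract#")
g = Graph()
g.add((EX.contract1, EX.hasLegalBasis, Literal("GDPR Art. 6(1)(b) contract")))
g.add((EX.contract2, EX.hasParty, Literal("ACME Ltd.")))  # no legal basis stated

query = """
PREFIX ex: <http://example.org/contract#>
SELECT DISTINCT ?c WHERE {
    ?c ?p ?o .
    FILTER NOT EXISTS { ?c ex:hasLegalBasis ?basis }
}
"""
for row in g.query(query):
    print("Missing legal basis:", row.c)
```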
APA, Harvard, Vancouver, ISO, and other styles
50

He, Qiang. "Human-computer interaction based on background knowledge and emotion certainty." PeerJ Computer Science 9 (May 31, 2023): e1418. http://dx.doi.org/10.7717/peerj-cs.1418.

Full text
Abstract:
Aiming at the problems of the lack of background knowledge and the inconsistent responses of robots in current human-computer interaction systems, we propose a human-computer interaction model based on a knowledge graph ripple network. The model simulates the natural human communication process to realize a more natural and intelligent human-computer interaction system. This study makes three contributions: first, the affective friendliness of human-computer interaction is obtained by calculating the affective evaluation value and the emotional measurement of human-computer interaction. Then, an external knowledge graph is introduced as the background knowledge of the robot, and the conversation entity is embedded into the ripple network of the knowledge graph to obtain the potential entity content of interest to the participant. Finally, the robot replies based on emotional friendliness and content friendliness. The experimental results show that, compared with the comparison models, robots with background knowledge and emotional measurement improve response accuracy by at least 5.5% during human-computer interaction, with better emotional friendliness and coherence.
APA, Harvard, Vancouver, ISO, and other styles