Journal articles on the topic 'Domain specific knowledge graph'

To see the other types of publications on this topic, follow the link: Domain specific knowledge graph.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Domain specific knowledge graph.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Barai, Mohit Kumar, and Subhasis Sanyal. "Domain Specific Key Feature Extraction Using Knowledge Graph Mining." Multiple Criteria Decision Making 15 (2020): 1–22. http://dx.doi.org/10.22367/mcdm.2020.15.01.

Abstract:
In the field of text mining, many novel feature extraction approaches have been propounded. This research paper is based on a novel feature extraction algorithm. To formulate the approach, weighted graph mining has been used to ensure the effectiveness of the feature extraction and computational efficiency; only the most effective graphs, those representing the maximum number of triangles based on a predefined relational criterion, have been considered. The proposed technique is an amalgamation of the relations between words surrounding an aspect of the product and the lexicon-based connections among those words, which create a relational triangle. The maximum number of triangles covering an element has been counted as a prime feature. In analyses based on domain-specific data, the proposed algorithm performs more than three times better than TF-IDF on a limited dataset. Keywords: feature extraction, natural language processing, product review, text processing, knowledge graph.
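The triangle-counting idea above can be illustrated with a small sketch. This is not the authors' code: it assumes a word co-occurrence graph built with networkx, in which an edge marks a pair of words satisfying the predefined relational criterion, and it simply ranks candidate aspect words by the number of triangles that cover them.

import networkx as nx

def rank_aspects_by_triangles(edges, candidate_aspects):
    # Score each candidate aspect word by the number of relational triangles covering it.
    G = nx.Graph()
    G.add_edges_from(edges)                      # edges: iterable of (word, word) pairs
    triangle_counts = nx.triangles(G)            # node -> number of triangles containing it
    scores = {a: triangle_counts.get(a, 0) for a in candidate_aspects}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy product-review graph: "battery" sits in two triangles, "screen" in none.
edges = [("battery", "life"), ("life", "long"), ("battery", "long"),
         ("battery", "charge"), ("charge", "fast"), ("battery", "fast"),
         ("screen", "bright")]
print(rank_aspects_by_triangles(edges, ["battery", "screen"]))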
2

Tong, Peihao, Qifan Zhang, and Junjie Yao. "Leveraging Domain Context for Question Answering Over Knowledge Graph." Data Science and Engineering 4, no. 4 (November 4, 2019): 323–35. http://dx.doi.org/10.1007/s41019-019-00109-w.

Abstract:
With the growing availability of different knowledge graphs in a variety of domains, question answering over knowledge graph (KG-QA) becomes a prevalent information retrieval approach. Current KG-QA methods usually resort to semantic parsing, search or neural matching models. However, they cannot well tackle increasingly long input questions and complex information needs. In this work, we propose a new KG-QA approach, leveraging the rich domain context in the knowledge graph. We incorporate the new approach with question and answer domain context descriptions. Specifically, for questions, we enrich them with users’ subsequent input questions within a session and expand the input question representation. For the candidate answers, we equip them with surrounding context structures, i.e., meta-paths within the targeting knowledge graph. On top of these, we design a cross-attention mechanism to improve the question and answer matching performance. An experimental study on real datasets verifies these improvements. The new approach is especially beneficial for specific knowledge graphs with complex questions.
3

Wu, Jiajing, Zhiqiang Wei, Dongning Jia, Xin Dou, Huo Tang, and Nannan Li. "Constructing marine expert management knowledge graph based on Trellisnet-CRF." PeerJ Computer Science 8 (September 5, 2022): e1083. http://dx.doi.org/10.7717/peerj-cs.1083.

Abstract:
Creating and maintaining a domain-specific database of research institutions, academic experts and scholarly literature is essential to expanding national marine science and technology. Knowledge graphs (KGs) have now been widely used in both industry and academia to address real-world problems. Despite the abundance of generic KGs, there is a vital need to build domain-specific knowledge graphs in the marine sciences domain. In addition, there is still not an effective method for named entity recognition when constructing a knowledge graph, especially when including data from both scientific and social media sources. This article presents a novel marine science domain-based knowledge graph framework. This framework involves capturing marine domain data into KG representations. The proposed approach utilizes various entity information based on marine domain experts to enrich the semantic content of the knowledge graph. To enhance named entity recognition accuracy, we propose a novel TrellisNet-CRF model. Our experiment results demonstrate that the TrellisNet-CRF model reached a 96.99% accuracy rate for marine domain named entity recognition, which outperforms the current state-of-the-art baseline. The effectiveness of the TrellisNet-CRF module was then further demonstrated and confirmed on entity recognition and visualization tasks.
4

Choi, Junho. "Graph Embedding-Based Domain-Specific Knowledge Graph Expansion Using Research Literature Summary." Sustainability 14, no. 19 (September 27, 2022): 12299. http://dx.doi.org/10.3390/su141912299.

Abstract:
Knowledge bases built in the knowledge processing field have a problem in that experts have to add rules or update them through modifications. To solve this problem, research has been conducted on knowledge graph expansion methods using deep learning technology, and in recent years, many studies have been conducted on methods of generating knowledge bases by embedding the knowledge graph’s triple information in a continuous vector space. In this paper, using a research literature summary, we propose a domain-specific knowledge graph expansion method based on graph embedding. To this end, we perform pre-processing and text summarization on the collected research literature data. Furthermore, we propose a method of generating a knowledge graph by extracting the entity and relation information and a method of expanding the knowledge graph using web data. To this end, we summarize research literature using the Bidirectional Encoder Representations from Transformers for Summarization (BERTSUM) model based on domain-specific research literature data and design a Research-BERT (RE-BERT) model that extracts entities and relation information, which are components of the knowledge graph, from the summarized research literature. Moreover, we propose a method of expanding related entities based on Google News after extracting related entities through the web for the entities in the generated knowledge graph. In the experiment, we measured the performance of summarizing research literature using the BERTSUM model and the accuracy of the knowledge graph relation extraction model. In the experiment of removing unnecessary sentences from the research literature text and summarizing them in key sentences, the result shows that the BERTSUM Classifier model’s ROUGE-1 precision is 57.86%. The knowledge graph extraction performance was measured using the mean reciprocal rank (MRR), mean rank (MR), and Hits@N rank-based evaluation metrics. The knowledge graph extraction method using summarized text showed superior performance in terms of speed and knowledge graph quality.
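The rank-based metrics named at the end of the abstract (MRR, MR, and Hits@N) can be computed directly from the rank of each correct entity among the scored candidates. The sketch below is illustrative only; the rank list is invented.

def rank_metrics(ranks, n=10):
    # ranks: 1-based rank of the correct entity for each test triple
    mr = sum(ranks) / len(ranks)                      # mean rank
    mrr = sum(1.0 / r for r in ranks) / len(ranks)    # mean reciprocal rank
    hits_at_n = sum(1 for r in ranks if r <= n) / len(ranks)
    return {"MR": mr, "MRR": mrr, f"Hits@{n}": hits_at_n}

print(rank_metrics([1, 3, 12, 2, 50]))   # MR = 13.6, Hits@10 = 0.6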
5

Yuan, Jianbo, Zhiwei Jin, Han Guo, Hongxia Jin, Xianchao Zhang, Tristram Smith, and Jiebo Luo. "Constructing biomedical domain-specific knowledge graph with minimum supervision." Knowledge and Information Systems 62, no. 1 (March 23, 2019): 317–36. http://dx.doi.org/10.1007/s10115-019-01351-4.

6

Hassani, Kaveh. "Cross-Domain Few-Shot Graph Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6856–64. http://dx.doi.org/10.1609/aaai.v36i6.20642.

Abstract:
We study the problem of few-shot graph classification across domains with nonequivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological views, to learn representations of task-specific information for fast adaptation, and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks.
7

Kim, Taejin, Yeoil Yun, and Namgyu Kim. "Deep Learning-Based Knowledge Graph Generation for COVID-19." Sustainability 13, no. 4 (February 19, 2021): 2276. http://dx.doi.org/10.3390/su13042276.

Abstract:
Many attempts have been made to construct new domain-specific knowledge graphs using the existing knowledge base of various domains. However, traditional “dictionary-based” or “supervised” knowledge graph building methods rely on predefined human-annotated resources of entities and their relationships. The cost of creating human-annotated resources is high in terms of both time and effort. This means that relying on human-annotated resources will not allow rapid adaptability in describing new knowledge when domain-specific information is added or updated very frequently, such as with the recent coronavirus disease-19 (COVID-19) pandemic situation. Therefore, in this study, we propose an Open Information Extraction (OpenIE) system based on unsupervised learning without a pre-built dataset. The proposed method obtains knowledge from a vast amount of text documents about COVID-19 rather than a general knowledge base and adds this to the existing knowledge graph. First, we constructed a COVID-19 entity dictionary, and then we scraped a large text dataset related to COVID-19. Next, we constructed a COVID-19 perspective language model by fine-tuning the bidirectional encoder representations from transformer (BERT) pre-trained language model. Finally, we defined a new COVID-19-specific knowledge base by extracting connecting words between COVID-19 entities using the BERT self-attention weight from COVID-19 sentences. Experimental results demonstrated that the proposed Co-BERT model outperforms the original BERT in terms of mask prediction accuracy and metric for evaluation of translation with explicit ordering (METEOR) score.
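A rough sketch of the attention-reading step is shown below. It is not the Co-BERT pipeline itself: it loads a generic pre-trained BERT with Hugging Face Transformers, averages self-attention over layers and heads, and surfaces the tokens that two entity tokens attend to most strongly; the sentence and the entity words are illustrative assumptions.

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The virus often causes fever and severe cough in older patients."
enc = tok(sentence, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# Average attention over layers, batch, and heads -> (seq_len, seq_len) matrix.
att = torch.stack(out.attentions).mean(dim=(0, 1, 2))
tokens = tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())
i, j = tokens.index("virus"), tokens.index("fever")
scores = (att[i] + att[j]) / 2                 # tokens both entity tokens attend to
top = sorted(zip(tokens, scores.tolist()), key=lambda t: t[1], reverse=True)[:5]
print(top)                                     # candidate "connecting" words between the entities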
8

Sharma, Bhuvan, Van C. Willis, Claudia S. Huettner, Kirk Beaty, Jane L. Snowdon, Shang Xue, Brett R. South, Gretchen P. Jackson, Dilhan Weeraratne, and Vanessa Michelini. "Predictive article recommendation using natural language processing and machine learning to support evidence updates in domain-specific knowledge graphs." JAMIA Open 3, no. 3 (September 29, 2020): 332–37. http://dx.doi.org/10.1093/jamiaopen/ooaa028.

Abstract:
Objectives: Describe an augmented intelligence approach to facilitate the update of evidence for associations in knowledge graphs. Methods: New publications are filtered through multiple machine learning study classifiers, and filtered publications are combined with articles already included as evidence in the knowledge graph. The corpus is then subjected to named entity recognition, semantic dictionary mapping, term vector space modeling, pairwise similarity, and focal entity match to identify highly related publications. Subject matter experts review recommended articles to assess inclusion in the knowledge graph; discrepancies are resolved by consensus. Results: Study classifiers achieved F-scores from 0.88 to 0.94, and similarity thresholds for each study type were determined by experimentation. Our approach reduces human literature review load by 99%, and over the past 12 months, 41% of recommendations were accepted to update the knowledge graph. Conclusion: Integrated search and recommendation exploiting current evidence in a knowledge graph is useful for reducing human cognition load.
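The "term vector space modeling, pairwise similarity" step can be pictured with scikit-learn; the corpus, article texts, and similarity threshold below are assumptions for illustration, not the study's data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

evidence = ["BRAF V600E mutation predicts response to vemurafenib in melanoma",
            "EGFR exon 19 deletion and erlotinib outcomes in lung cancer"]
candidates = ["Vemurafenib efficacy in BRAF V600E mutated melanoma: a cohort study",
              "A survey of hospital scheduling software"]

vec = TfidfVectorizer().fit(evidence + candidates)
sims = cosine_similarity(vec.transform(candidates), vec.transform(evidence))

threshold = 0.3   # the study tuned a separate threshold per study type by experimentation
for cand, row in zip(candidates, sims):
    if row.max() >= threshold:
        print("recommend for expert review:", cand)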
9

Liu, Weijie, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. "K-BERT: Enabling Language Representation with Knowledge Graph." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2901–8. http://dx.doi.org/10.1609/aaai.v34i03.5681.

Abstract:
Pre-trained language representation models, such as BERT, capture a general language representation from large-scale corpora, but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. However, too much knowledge incorporation may divert the sentence from its correct meaning, which is called knowledge noise (KN) issue. To overcome KN, K-BERT introduces soft-position and visible matrix to limit the impact of knowledge. K-BERT can easily inject domain knowledge into the models by being equipped with a KG without pre-training by itself because it is capable of loading model parameters from the pre-trained BERT. Our investigation reveals promising results in twelve NLP tasks. Especially in domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for solving the knowledge-driven problems that require experts.
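A toy sketch of the visible-matrix idea follows; it is a simplification for illustration, not the K-BERT implementation. Tokens injected from a KG triple are made visible only to the entity token they attach to (and to each other), so the injected knowledge cannot distort the rest of the sentence.

import numpy as np

sentence = ["tim", "cook", "is", "visiting", "beijing", "now"]
injected = ["ceo", "apple"]      # tokens from an injected triple (cook, CEO, Apple); illustrative
anchor = 1                       # index of the entity token "cook"

tokens = sentence[:anchor + 1] + injected + sentence[anchor + 1:]
n = len(tokens)
visible = np.ones((n, n), dtype=bool)
inj = range(anchor + 1, anchor + 1 + len(injected))
for k in inj:
    visible[k, :] = False                            # hide injected tokens from the sentence
    visible[:, k] = False
    visible[k, anchor] = visible[anchor, k] = True   # ...except from their anchor entity
    for m in inj:
        visible[k, m] = visible[m, k] = True         # ...and from the rest of the triple
print(tokens)
print(visible.astype(int))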
10

Huang, Lan, Yuanwei Zhao, Bo Wang, Dongxu Zhang, Rui Zhang, Subhashis Das, Simone Bocca, and Fausto Giunchiglia. "Property-Based Semantic Similarity Criteria to Evaluate the Overlaps of Schemas." Algorithms 14, no. 8 (August 17, 2021): 241. http://dx.doi.org/10.3390/a14080241.

Abstract:
Knowledge graph-based data integration is a practical methodology for heterogeneous legacy database-integrated service construction. However, it is neither efficient nor economical to build a new cross-domain knowledge graph on top of the schemas of each legacy database for the specific integration application rather than reusing the existing high-quality knowledge graphs. Consequently, a question arises as to whether the existing knowledge graph is compatible with cross-domain queries and with heterogeneous schemas of the legacy systems. An effective criterion is urgently needed in order to evaluate such compatibility, as it limits the upper bound on the quality of the integration. This research studies the semantic similarity of the schemas from the aspect of properties. It provides a set of in-depth criteria, namely coverage and flexibility, to evaluate the pairwise compatibility between the schemas. It takes advantage of the properties of knowledge graphs to evaluate the overlaps between schemas and defines the weights of entity types in order to perform precise compatibility computation. The effectiveness of the criteria obtained to evaluate the compatibility between knowledge graphs and cross-domain queries is demonstrated using a case study.
11

Li, Chuanyou, Xinhang Yang, Shance Luo, Mingzhe Song, and Wei Li. "Towards Domain-Specific Knowledge Graph Construction for Flight Control Aided Maintenance." Applied Sciences 12, no. 24 (December 12, 2022): 12736. http://dx.doi.org/10.3390/app122412736.

Abstract:
Flight control is a key system of modern aircraft. During each flight, pilots use flight control to control the forces of flight and also the aircraft’s direction and attitude. Whether flight control can work properly is closely related to safety such that daily maintenance is an essential task of airlines. Flight control maintenance heavily relies on expert knowledge. To facilitate knowledge achievement, aircraft manufacturers and airlines normally provide structural manuals for consulting. On the other hand, computer-aided maintenance systems are adopted for improving daily maintenance efficiency. However, we find that grass-roots engineers of airlines still inevitably consult unstructured technical manuals from time to time, for example, when meeting an unusual problem or an unfamiliar type of aircraft. Achieving effective knowledge from unstructured data is inefficient and inconvenient. Aiming at the problem, we propose a knowledge-graph-based maintenance prototype system as a complementary solution. The knowledge graph we built is dedicated for unstructured manuals referring to flight control. We first build ontology to represent key concepts and relation types and then perform entity-relation extraction adopting a pipeline paradigm with natural language processing techniques. To fully utilize domain-specific features, we present a hybrid method consisting of dedicated rules and a machine learning model for entity recognition. As for relation extraction, we leverage a two-stage Bi-LSTM (bi-directional long short-term memory networks) based method to improve the extraction precision by solving a sample imbalanced problem. We conduct comprehensive experiments to study the technical feasibility on real manuals from airlines. The average precision of entity recognition reaches 85%, and the average precision of relation extraction comes to 61%. Finally, we design a flight control maintenance prototype system based on the knowledge graph constructed and a graph database Neo4j. The prototype system takes alarm messages represented in natural language as the input and returns maintenance suggestions to serve grass-roots engineers.
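Since the prototype stores its graph in Neo4j, the loading step might look like the sketch below, using the official Neo4j Python driver (v5+). The connection URI, credentials, node label, and the sample triple are illustrative assumptions, not details from the paper.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_triple(tx, head, relation, tail):
    # MERGE keeps the graph free of duplicate nodes and edges when triples repeat across manuals.
    tx.run(
        "MERGE (h:Component {name: $head}) "
        "MERGE (t:Component {name: $tail}) "
        "MERGE (h)-[:REL {type: $relation}]->(t)",
        head=head, tail=tail, relation=relation,
    )

with driver.session() as session:
    session.execute_write(add_triple, "elevator actuator", "part_of", "pitch control channel")
driver.close()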
12

Kejriwal, Mayank, and Pedro Szekely. "myDIG: Personalized Illicit Domain-Specific Knowledge Discovery with No Programming." Future Internet 11, no. 3 (March 4, 2019): 59. http://dx.doi.org/10.3390/fi11030059.

Abstract:
With advances in machine learning, knowledge discovery systems have become very complicated to set up, requiring extensive tuning and programming effort. Democratizing such technology so that non-technical domain experts can avail themselves of these advances in an interactive and personalized way is an important problem. We describe myDIG, a highly modular, open source pipeline-construction system that is specifically geared towards investigative users (e.g., law enforcement) with no programming abilities. The myDIG system allows users both to build a knowledge graph of entities, relationships, and attributes for illicit domains from a raw HTML corpus and also to set up a personalized search interface for analyzing the structured knowledge. We use qualitative and quantitative data from five case studies involving investigative experts from illicit domains such as securities fraud and illegal firearms sales to illustrate the potential of myDIG.
13

Trappey, Amy J. C., Chih-Ping Liang, and Hsin-Jung Lin. "Using Machine Learning Language Models to Generate Innovation Knowledge Graphs for Patent Mining." Applied Sciences 12, no. 19 (September 29, 2022): 9818. http://dx.doi.org/10.3390/app12199818.

Abstract:
To explore and understand the state-of-the-art innovations in any given domain, researchers often need to study many domain patents and synthesize their knowledge content. This study provides a smart patent knowledge graph generation system, adopting a machine learning (ML) natural language modeling approach, to help researchers grasp the patent knowledge by generating deep knowledge graphs. This research focuses on converting chemical utility patents, consisting of chemistries and chemical processes, into summarized knowledge graphs. The research methods are in two parts, i.e., the visualization of the chemical processes in the chemical patents’ most relevant paragraphs and a knowledge graph of any domain-specific collection of patent texts. The ML language modeling algorithms, including ALBERT for text vectorization, Sentence-BERT for sentence classification, and KeyBERT for keyword extraction, are adopted. These models are trained and tested in the case study using 879 chemical patents in the carbon capture domain. The results demonstrate that the average retention rate of the summary graphs for five clustered patent texts exceeds 80%. The proposed approach is novel and proven to be reliable in graphical deep knowledge representation.
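Of the models named above, KeyBERT is the keyword-extraction component; a minimal usage sketch is shown below with an invented sentence from the carbon-capture domain.

from keybert import KeyBERT

doc = ("The sorbent is regenerated by heating the amine solution, releasing the captured "
       "carbon dioxide for compression and storage.")
kw_model = KeyBERT()
keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 2),
                                     stop_words="english", top_n=5)
print(keywords)   # list of (keyphrase, similarity-to-document) pairs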
14

Abu-Salih, Bilal. "Domain-specific knowledge graphs: A survey." Journal of Network and Computer Applications 185 (July 2021): 103076. http://dx.doi.org/10.1016/j.jnca.2021.103076.

15

Abu-Rasheed, Hasan, Christian Weber, Johannes Zenkert, Mareike Dornhöfer, and Madjid Fathi. "Transferrable Framework Based on Knowledge Graphs for Generating Explainable Results in Domain-Specific, Intelligent Information Retrieval." Informatics 9, no. 1 (January 19, 2022): 6. http://dx.doi.org/10.3390/informatics9010006.

Abstract:
In modern industrial systems, collected textual data accumulates over time, offering an important source of information for enhancing present and future industrial practices. Although many AI-based solutions have been developed in the literature for domain-specific information retrieval (IR) from this data, the explainability of these systems has rarely been investigated in such domain-specific environments. In addition to considering the domain requirements within an explainable intelligent IR, transferring the explainable IR algorithm to other domains remains an open-ended challenge. This is due to the high costs associated with intensive customization and the required knowledge modelling when developing new explainable solutions for each industrial domain. In this article, we present a transferable framework for generating domain-specific explanations for intelligent IR systems. The aim of our work is to provide a comprehensive approach for constructing explainable IR and recommendation algorithms, which are capable of adapting to domain requirements and are usable in multiple domains at the same time. Our method utilizes knowledge graphs (KG) for modeling the domain knowledge. The KG provides a solid foundation for developing intelligent IR solutions. Utilizing the same KG, we develop graph-based components for generating textual and visual explanations of the retrieved information, taking into account the domain requirements and supporting the transferability to other domain-specific environments through the structured approach. The use of the KG resulted in minimum-to-zero adjustments when creating explanations for multiple intelligent IR algorithms in multiple domains. We test our method within two different use cases, a semiconductor manufacturing centered use case and a job-to-applicant matching one. Our quantitative results show a high capability of our approach to generate high-level explanations for the end users. In addition, the developed explanation components were highly adaptable to both industrial domains without sacrificing the overall accuracy of the intelligent IR algorithm. Furthermore, a qualitative user study was conducted. We recorded a high level of acceptance from the users, who reported an enhanced overall experience with the explainable IR system.
16

Li, Christy Y., Xiaodan Liang, Zhiting Hu, and Eric P. Xing. "Knowledge-Driven Encode, Retrieve, Paraphrase for Medical Image Report Generation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6666–73. http://dx.doi.org/10.1609/aaai.v33i01.33016666.

Abstract:
Generating long and semantic-coherent reports to describe medical images poses great challenges towards bridging visual and linguistic modalities, incorporating medical domain knowledge, and generating realistic and accurate descriptions. We propose a novel Knowledge-driven Encode, Retrieve, Paraphrase (KERP) approach which reconciles traditional knowledge- and retrieval-based methods with modern learning-based methods for accurate and robust medical report generation. Specifically, KERP decomposes medical report generation into explicit medical abnormality graph learning and subsequent natural language modeling. KERP first employs an Encode module that transforms visual features into a structured abnormality graph by incorporating prior medical knowledge; then a Retrieve module that retrieves text templates based on the detected abnormalities; and lastly, a Paraphrase module that rewrites the templates according to specific cases. The core of KERP is a proposed generic implementation unit—Graph Transformer (GTR) that dynamically transforms high-level semantics between graph-structured data of multiple domains such as knowledge graphs, images and sequences. Experiments show that the proposed approach generates structured and robust reports supported with accurate abnormality description and explainable attentive regions, achieving the state-of-the-art results on two medical report benchmarks, with the best medical abnormality and disease classification accuracy and improved human evaluation performance.
17

Zhang, Xiaoming, Xiaoling Sun, Chunjie Xie, and Bing Lun. "From Vision to Content: Construction of Domain-Specific Multi-Modal Knowledge Graph." IEEE Access 7 (2019): 108278–94. http://dx.doi.org/10.1109/access.2019.2933370.

18

Malik, Khalid Mahmood, Madan Krishnamurthy, Mazen Alobaidi, Maqbool Hussain, Fakhare Alam, and Ghaus Malik. "Automated domain-specific healthcare knowledge graph curation framework: Subarachnoid hemorrhage as phenotype." Expert Systems with Applications 145 (May 2020): 113120. http://dx.doi.org/10.1016/j.eswa.2019.113120.

19

Zhang, Xiaoming, Mingming Meng, Xiaoling Sun, and Yu Bai. "FactQA: question answering over domain knowledge graph based on two-level query expansion." Data Technologies and Applications 54, no. 1 (November 22, 2019): 34–63. http://dx.doi.org/10.1108/dta-02-2019-0029.

Abstract:
Purpose: With the advent of the era of Big Data, the scale of knowledge graph (KG) in various domains is growing rapidly, which holds huge amount of knowledge surely benefiting the question answering (QA) research. However, the KG, which is always constituted of entities and relations, is structurally inconsistent with the natural language query. Thus, the QA system based on KG is still faced with difficulties. The purpose of this paper is to propose a method to answer the domain-specific questions based on KG, providing conveniences for the information query over domain KG. Design/methodology/approach: The authors propose a method FactQA to answer the factual questions about specific domain. A series of logical rules are designed to transform the factual questions into the triples, in order to solve the structural inconsistency between the user’s question and the domain knowledge. Then, the query expansion strategies and filtering strategies are proposed from two levels (i.e. words and triples in the question). For matching the question with domain knowledge, not only the similarity values between the words in the question and the resources in the domain knowledge but also the tag information of these words is considered. And the tag information is obtained by parsing the question using Stanford CoreNLP. In this paper, the KG in metallic materials domain is used to illustrate the FactQA method. Findings: The designed logical rules have time stability for transforming the factual questions into the triples. Additionally, after filtering the synonym expansion results of the words in the question, the expansion quality of the triple representation of the question is improved. The tag information of the words in the question is considered in the process of data matching, which could help to filter out the wrong matches. Originality/value: Although the FactQA is proposed for domain-specific QA, it can also be applied to any other domain besides metallic materials domain. For a question that cannot be answered, FactQA would generate a new related question to answer, providing as much as possible the user with the information they probably need. The FactQA could facilitate the user’s information query based on the emerging KG.
20

Xu, Wenxia, Lei Feng, and Jun Ma. "Understanding the domain of driving distraction with knowledge graphs." PLOS ONE 17, no. 12 (December 9, 2022): e0278822. http://dx.doi.org/10.1371/journal.pone.0278822.

Abstract:
This paper aims to provide insight into the driving distraction domain systematically on the basis of scientific knowledge graphs. For this purpose, 3,790 documents were taken into consideration after retrieving from Web of Science Core Collection and screening, and two types of knowledge graphs were constructed to demonstrate bibliometric information and domain-specific research content respectively. In terms of bibliometric analysis, the evolution of publication and citation numbers reveals the accelerated development of this domain, and trends of multidisciplinary and global participation could be identified according to knowledge graphs from Vosviewer. In terms of research content analysis, a new framework consisting of five dimensions was clarified, including “objective factors”, “human factors”, “research methods”, “data” and “data science”. The main entities of this domain were identified and relations between entities were extracted using Natural Language Processing methods with Python 3.9. In addition to the knowledge graph composed of all the keywords and relationships, entities and relations under each dimension were visualized, and relations between relevant dimensions were demonstrated in the form of heat maps. Furthermore, the trend and significance of driving distraction research were discussed, and special attention was given to future directions of this domain.
21

Vaghani, Dev. "An Approch for Representation of Node Using Graph Transformer Networks." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 27–37. http://dx.doi.org/10.22214/ijraset.2023.48485.

Abstract:
In representation learning on graphs, graph neural networks (GNNs) have been widely employed and have attained cutting-edge performance in tasks like node categorization and link prediction. However, the majority of GNNs now in use are made to learn node representations on homogeneous and fixed graphs. The limits are particularly significant when learning representations on a network that has been incorrectly described or one that is heterogeneous, or made up of different kinds of nodes and edges. This study proposes Graph Transformer Networks (GTNs), which may generate new network structures by finding valuable connections between disconnected nodes in the original graph and learning efficient node representation on the new graphs end-to-end. A basic layer of GTNs called the Graph Transformer layer learns a soft selection of edge types and composite relations to produce meaningful multi-hop connections known as meta-paths. This research demonstrates that GTNs can learn new graph structures from data and tasks without any prior domain expertise and that they can then use convolution on the new graphs to provide effective node representation. GTNs outperformed state-of-the-art approaches that need predefined meta-paths from domain knowledge in all three benchmark node classification tasks without the use of domain-specific graph pre-processing.
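The "soft selection of edge types" can be pictured with a toy numpy sketch: each hop is a softmax-weighted mixture of edge-type adjacency matrices, and multiplying the hops composes a candidate meta-path. The graph sizes, edge types, and weights below are invented for illustration.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n = 4
A_writes = rng.integers(0, 2, (n, n)).astype(float)   # edge type 1 (e.g., author-writes-paper)
A_cites = rng.integers(0, 2, (n, n)).astype(float)    # edge type 2 (e.g., paper-cites-paper)
adjacencies = np.stack([A_writes, A_cites])

w1, w2 = rng.normal(size=2), rng.normal(size=2)        # learnable edge-type selection scores
Q1 = np.tensordot(softmax(w1), adjacencies, axes=1)    # soft-selected adjacency for hop 1
Q2 = np.tensordot(softmax(w2), adjacencies, axes=1)    # soft-selected adjacency for hop 2
meta_path_adj = Q1 @ Q2                                # composite 2-hop relation (a meta-path)
print(meta_path_adj)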
22

Hong, Liang, Wenjun Hou, and Lina Zhou. "KnowPoetry: A Knowledge Service Platform for Tang Poetry Research Based on Domain-Specific Knowledge Graph." Library Trends 69, no. 1 (2020): 101–24. http://dx.doi.org/10.1353/lib.2020.0025.

23

Sukmana, Husni Teja, JM Muslimin, Asep Fajar Firmansyah, and Lee Kyung Oh. "Building the Knowledge Graph for Zakat (KGZ) in Indonesian Language." ASM Science Journal 16 (July 26, 2021): 1–10. http://dx.doi.org/10.32802/asmscj.2021.758.

Abstract:
In Indonesia, philanthropy is identical to Zakat. Zakat belongs to a specific domain because it has its own characteristics of knowledge. This research studies a knowledge graph for the Zakat domain, called KGZ, built in Indonesia. Work in this area is still rare; thus, KGZ becomes the first knowledge graph for Zakat in Indonesia. It is designed to provide basic knowledge on Zakat and on managing Zakat in Indonesia. There are some issues with building KGZ: first, existing Indonesian named entity recognition (NER) models are unrestricted and general-purpose, with data obtained from general sources such as news. Second, there is no dataset for NER in the Zakat domain. We define four steps to build KGZ, involving data acquisition, extracting entities and their relationships, mapping to an ontology, and deploying knowledge graphs and visualizations. This research contributes a knowledge graph for Zakat (KGZ) and an NER model for Zakat, called KGZ-NER. We defined 17 new named entity classes related to Zakat, with 272 entities and 169 relationships, and provided labelled datasets for KGZ-NER that are publicly accessible. We applied the Indonesian Open Domain Information Extractor framework to identify entities’ relationships. We then modelled the information using the Resource Description Framework (RDF) to build the knowledge base for KGZ and stored it in GraphDB, a product from Ontotext. The NER model has a precision of 0.7641, a recall of 0.4544, and an F1-score of 0.5655. Increasing the data size of KGZ is required to cover all of the knowledge of Zakat and of managing Zakat in Indonesia, and sufficient resources are required for future work.
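As a sketch of the RDF modelling step, the snippet below builds a few KGZ-style triples with rdflib and serializes them to Turtle, which could then be loaded into a triple store such as GraphDB. The namespace, class names, and facts are illustrative assumptions, not the published schema.

from rdflib import Graph, Literal, Namespace, RDF

ZKT = Namespace("http://example.org/kgz/")   # hypothetical namespace for illustration
g = Graph()
g.bind("zkt", ZKT)

g.add((ZKT.ZakatFitrah, RDF.type, ZKT.ZakatType))
g.add((ZKT.ZakatFitrah, ZKT.paidDuring, ZKT.Ramadan))
g.add((ZKT.ZakatFitrah, ZKT.hasRate, Literal("2.5 kg of staple food")))

print(g.serialize(format="turtle"))          # Turtle text ready for import into a triple store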
24

Chen, Zhongliang, Feng Yuan, Xiaohui Li, Xiang Wang, He Li, Bangcai Wu, and Yuheng Chen. "Knowledge Extraction and Quality Inspection of Chinese Petrographic Description Texts with Complex Entities and Relations Using Machine Reading and Knowledge Graph: A Preliminary Research Study." Minerals 12, no. 9 (August 26, 2022): 1080. http://dx.doi.org/10.3390/min12091080.

Abstract:
(1) Background: Geological surveying is undergoing a digital transformation process towards the adoption of intelligent methods in China. Cognitive intelligence methods, such as those based on knowledge graphs and machine reading, have made progress in many domains and also provide a technical basis for quality detection in unstructured lithographic description texts. (2) Methods: First, the named entities and the relations of the domain-specific knowledge graph of petrography were defined based on the petrographic theory. Second, research was carried out based on a manually annotated corpus of petrographic description. The extraction of N-ary and single-entity overlapping relations and the separation of complex entities are key steps in this process. Third, a petrographic knowledge graph was formulated based on prior knowledge. Finally, the consistency between knowledge triples extracted from the corpus and the petrographic knowledge graph was calculated. The 1:50,000 sheet of Fengxiangyi located in the Dabie orogenic belt was selected for the empirical research. (3) Results: Using machine reading and the knowledge graph, petrographic knowledge can be extracted and the knowledge consistency calculation can quickly detect description errors about textures, structures and mineral components in petrographic description. (4) Conclusions: The proposed framework can be used to realise the intelligent inspection of petrographic knowledge with complex entities and relations and to improve the quality of petrographic description texts effectively.
25

Shanshan, Zhu. "Research on We-Media Information Retrieval Technology of Knowledge Map." Journal of Education, Teaching and Social Studies 1, no. 2 (December 30, 2019): p130. http://dx.doi.org/10.22158/jetss.v1n2p130.

Abstract:
As information retrieval technology gradually evolves toward modelling the relationships between search entities, traditional relational databases struggle to meet this need, whereas graph databases are created specifically to handle the relationships between data. This article explains the basic concept of the graph database, takes domain-specific database information retrieval as an example, analyzes its advantages and disadvantages, and discusses the challenges faced by graph databases in full-text information retrieval.
26

Boroghina, Gabriel, Dragos Georgian Corlatescu, and Mihai Dascalu. "Multi-Microworld Conversational Agent with RDF Knowledge Graph Integration." Information 13, no. 11 (November 15, 2022): 539. http://dx.doi.org/10.3390/info13110539.

Abstract:
We live in an era where time is a scarce resource and people enjoy the benefits of technological innovations to ensure prompt and smooth access to information required for our daily activities. In this context, conversational agents start to play a remarkable role by mediating the interaction between humans and computers in specific contexts. However, they turn out to be laborious for cross-domain use cases or when they are expected to automatically adapt throughout user dialogues. This paper introduces a method to plug in multiple domains of knowledge for a conversational agent localized in Romanian in order to facilitate the extension of the agent’s area of expertise. Furthermore, the agent is intended to become more domain-aware and learn new information dynamically from user conversations by means of a knowledge graph acting as a network of facts and information. We ensure high capabilities for natural language understanding by proposing a novel architecture that takes into account RoBERT-contextualized embeddings alongside syntactic features. Our approach leads to improved intent classification performance (F1 score = 82.6) when compared with a basic pipeline relying only on features extracted from the agent’s training data. Moreover, the proposed RDF knowledge representation is confirmed to provide flexibility in storing and retrieving natural language entities, values, and factoid relations between them in the context of each microworld.
27

Auer, Sören, Allard Oelen, Muhammad Haris, Markus Stocker, Jennifer D’Souza, Kheir Eddine Farfar, Lars Vogt, Manuel Prinz, Vitalis Wiens, and Mohamad Yaser Jaradeh. "Improving Access to Scientific Literature with Knowledge Graphs." Bibliothek Forschung und Praxis 44, no. 3 (November 30, 2020): 516–29. http://dx.doi.org/10.1515/bfp-2020-2042.

Abstract:
The transfer of knowledge has not changed fundamentally for many hundreds of years: it is usually document-based, formerly printed on paper as a classic essay and nowadays distributed as a PDF. With around 2.5 million new research contributions every year, researchers drown in a flood of pseudo-digitized PDF publications. As a result, research is seriously weakened. In this article, we argue for representing scholarly contributions in a structured and semantic way as a knowledge graph. The advantage is that information represented in a knowledge graph is readable by machines and humans. As an example, we give an overview of the Open Research Knowledge Graph (ORKG), a service implementing this approach. For creating the knowledge graph representation, we rely on a mixture of manual (crowd/expert sourcing) and (semi-)automated techniques. Only with such a combination of human and machine intelligence can we achieve the required quality of representation to allow for novel exploration and assistance services for researchers. As a result, a scholarly knowledge graph such as the ORKG can be used to give a condensed overview of the state of the art addressing a particular research question, for example as a tabular comparison of contributions according to various characteristics of the approaches. Further possible intuitive access interfaces to such scholarly knowledge graphs include domain-specific (chart) visualizations or answering of natural language questions.
28

Liu, Xingli, Wei Zhao, and Haiqun Ma. "Research on Domain-Specific Knowledge Graph Based on the RoBERTa-wwm-ext Pretraining Model." Computational Intelligence and Neuroscience 2022 (October 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/8656013.

Abstract:
The purpose of this study is to find an effective way to construct a domain-specific knowledge graph, moving from information to knowledge. We propose a deep learning algorithm that extracts entities and relationships from open-source intelligence using the RoBERTa-wwm-ext pretrained model, and a knowledge fusion framework based on longest-common-attribute entity alignment, bringing in different text similarity algorithms and classification algorithms for verification. The experimental research shows, first, that the named entity recognition model using the RoBERTa-wwm-ext pretrained model achieves the best results in terms of recall and F1 value, with the F1 value of RoBERTa-wwm-ext + BiLSTM + CRF reaching 83.07%. Second, the RoBERTa-wwm-ext relation extraction model achieves the best results; compared with a relation extraction model based on recurrent neural networks, it improves by about 20% to 30%. Finally, the entity alignment algorithm based on the attribute similarity of the longest common subsequence achieves the best results overall. The findings of this study provide an effective way to complete knowledge graph construction from domain-specific texts. The research serves as a first step for future work, for example, domain-specific intelligent Q&A.
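The attribute-similarity signal based on the longest common subsequence can be sketched as below; the normalisation and the sample attribute values are assumptions for illustration, not the paper's implementation.

def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def attribute_similarity(v1, v2):
    return 2 * lcs_len(v1, v2) / (len(v1) + len(v2))   # normalised to [0, 1]

# Two attribute strings describing what may be the same entity in different sources.
print(attribute_similarity("People's Liberation Army Navy", "PLA Navy"))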
29

Chicaiza, Janneth, and Priscila Valdiviezo-Diaz. "A Comprehensive Survey of Knowledge Graph-Based Recommender Systems: Technologies, Development, and Contributions." Information 12, no. 6 (May 28, 2021): 232. http://dx.doi.org/10.3390/info12060232.

Abstract:
In recent years, the use of recommender systems has become popular on the web. To improve recommendation performance, usage, and scalability, the research has evolved by producing several generations of recommender systems. There is much literature about it, although most proposals focus on traditional methods’ theories and applications. Recently, knowledge graph-based recommendations have attracted attention in academia and the industry because they can alleviate information sparsity and performance problems. We found only two studies that analyze the recommendation system’s role over graphs, but they focus on specific recommendation methods. This survey attempts to cover a broader analysis from a set of selected papers. In summary, the contributions of this paper are as follows: (1) we explore traditional and more recent developments of filtering methods for a recommender system, (2) we identify and analyze proposals related to knowledge graph-based recommender systems, (3) we present the most relevant contributions using an application domain, and (4) we outline future directions of research in the domain of recommender systems. As the main survey result, we found that the use of knowledge graphs for recommendations is an efficient way to leverage and connect a user’s and an item’s knowledge, thus providing more precise results for users.
30

Yang, Chun, Chang Liu, and Xu-Cheng Yin. "Weakly Correlated Knowledge Integration for Few-shot Image Classification." Machine Intelligence Research 19, no. 1 (January 21, 2022): 24–37. http://dx.doi.org/10.1007/s11633-022-1320-9.

Abstract:
Various few-shot image classification methods indicate that transferring knowledge from other sources can improve the accuracy of the classification. However, most of these methods work with one single source or use only closely correlated knowledge sources. In this paper, we propose a novel weakly correlated knowledge integration (WCKI) framework to address these issues. More specifically, we propose a unified knowledge graph (UKG) to integrate knowledge transferred from different sources (i.e., visual domain and textual domain). Moreover, a graph attention module is proposed to sample the subgraph from the UKG with low complexity. To avoid explicitly aligning the visual features to the potentially biased and weakly correlated knowledge space, we sample a task-specific subgraph from UKG and append it as latent variables. Our framework demonstrates significant improvements on multiple few-shot image classification datasets.
31

Einsfeld, Katja, Achim Ebert, Andreas Kerren, and Matthias Deller. "Knowledge Generation through Human-Centered Information Visualization." Information Visualization 8, no. 3 (June 25, 2009): 180–96. http://dx.doi.org/10.1057/ivs.2009.15.

Abstract:
One important intention of human-centered information visualization is to represent huge amounts of abstract data in a visual representation that allows even users from foreign application domains to interact with the visualization, to understand the underlying data, and finally, to gain new, application-related knowledge. The visualization will help experts as well as non-experts to link previously or isolated knowledge-items in their mental map with new insights. Our approach explicitly supports the process of linking knowledge-items with three concepts. At first, the representation of data items in an ontology categorizes and relates them. Secondly, the use of various visualization techniques visually correlates isolated items by graph-structures, layout, attachment, integration or hyperlink techniques. Thirdly, the intensive use of visual metaphors relates a known source domain to a less known target domain. In order to realize a scenario of these concepts, we developed a visual interface for non-experts to maintain complex wastewater treatment plants. This domain-specific application is used to give our concepts a meaningful background.
32

Monnin, Pierre, Chedy Raïssi, Amedeo Napoli, and Adrien Coulet. "Discovering alignment relations with Graph Convolutional Networks: A biomedical case study." Semantic Web 13, no. 3 (April 6, 2022): 379–98. http://dx.doi.org/10.3233/sw-210452.

Abstract:
Knowledge graphs are freely aggregated, published, and edited in the Web of data, and thus may overlap. Hence, a key task resides in aligning (or matching) their content. This task encompasses the identification, within an aggregated knowledge graph, of nodes that are equivalent, more specific, or weakly related. In this article, we propose to match nodes within a knowledge graph by (i) learning node embeddings with Graph Convolutional Networks such that similar nodes have low distances in the embedding space, and (ii) clustering nodes based on their embeddings, in order to suggest alignment relations between nodes of a same cluster. We conducted experiments with this approach on the real world application of aligning knowledge in the field of pharmacogenomics, which motivated our study. We particularly investigated the interplay between domain knowledge and GCN models with the two following focuses. First, we applied inference rules associated with domain knowledge, independently or combined, before learning node embeddings, and we measured the improvements in matching results. Second, while our GCN model is agnostic to the exact alignment relations (e.g., equivalence, weak similarity), we observed that distances in the embedding space are coherent with the “strength” of these different relations (e.g., smaller distances for equivalences), letting us considering clustering and distances in the embedding space as a means to suggest alignment relations in our case study.
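The clustering step (ii) can be sketched with scikit-learn; here the GCN node embeddings are replaced by random vectors purely for illustration, and the node names and cluster count are invented.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

node_ids = ["warfarin", "Coumadin", "CYP2C9", "CYP2C9*3", "vitamin K pathway"]
embeddings = np.random.rand(len(node_ids), 16)     # stand-in for learned GCN node embeddings

labels = AgglomerativeClustering(n_clusters=3).fit_predict(embeddings)
clusters = {}
for node, label in zip(node_ids, labels):
    clusters.setdefault(label, []).append(node)

# Nodes falling in the same cluster become candidate alignment relations for review.
for members in clusters.values():
    if len(members) > 1:
        print("candidate alignment:", members)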
33

Stylianou, Nikolaos, Danai Vlachava, Ioannis Konstantinidis, Nick Bassiliades, and Vassilios Peristeras. "Doc2KG." International Journal on Semantic Web and Information Systems 18, no. 1 (January 2022): 1–20. http://dx.doi.org/10.4018/ijswis.295552.

Abstract:
Document Management Systems (DMS) are used for decades to store large amounts of information in textual form. Their technology paradigm is based on storing vast quantities of textual information enriched with metadata to support searchability. However, this exhibits limitations as it treats textual information as black box and is based exclusively on user-created metadata, a process that suffers from quality and completeness shortcomings. The use of knowledge graphs in DMS can substantially improve searchability, providing the ability to link data and enabling semantic searching. Recent approaches focus on either creating knowledge graphs from document collections or updating existing ones. In this paper, we introduce Doc2KG (Document-to-Knowledge-Graph), an intelligent framework that handles both creation and real-time updating of a knowledge graph, while also exploiting domain-specific ontology standards. We use DIAVGEIA (clarity), an award winning Greek open government portal, as our case-study and discuss new capabilities for the portal by implementing Doc2KG.
34

Ahmad, Jawad, Abdur Rehman, Hafiz Tayyab Rauf, Kashif Javed, Maram Abdullah Alkhayyal, and Abeer Ali Alnuaim. "Service Recommendations Using a Hybrid Approach in Knowledge Graph with Keyword Acceptance Criteria." Applied Sciences 12, no. 7 (March 31, 2022): 3544. http://dx.doi.org/10.3390/app12073544.

Abstract:
Businesses are growing rapidly worldwide; people work hard on their businesses and startups in almost every field of life, whether industrial or academic. These businesses or services have multiple income streams with which they generate revenue. Most companies use different marketing and advertisement strategies to engage their customers and spread their services worldwide. Service recommendation systems are gaining popularity for recommending the best services and products to customers. In recent years, the development of service-oriented computing has had a significant impact on the growth of businesses. Knowledge graphs are commonly used data structures for describing the relations among data entities in recommendation systems. The domain-oriented user and service interaction knowledge graph (DUSKG) is a framework for keyword extraction in recommendation systems. This paper proposes a novel method of chunking-based keyword extraction for hybrid recommendations to extract domain-specific keywords in DUSKG. We further show that the performance of the hybrid approach is better than that of other techniques. The proposed chunking method for keyword extraction outperforms the existing value feature entity extraction (VF2E) approach while extracting fewer keywords.
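Chunking-based keyword extraction in this spirit can be sketched with NLTK noun-phrase chunks; the chunk grammar, the sample sentence, and treating every chunk as a keyword are simplifying assumptions, not the paper's method.

import nltk
# One-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

text = "The hotel offers airport shuttle service and late checkout for business travellers."
tagged = nltk.pos_tag(nltk.word_tokenize(text))
grammar = "NP: {<JJ>*<NN.*>+}"                     # adjectives followed by nouns form a chunk
tree = nltk.RegexpParser(grammar).parse(tagged)

keywords = [" ".join(word for word, tag in subtree.leaves())
            for subtree in tree.subtrees() if subtree.label() == "NP"]
print(keywords)   # e.g. ['hotel', 'airport shuttle service', 'late checkout', 'business travellers']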
35

Frawley, S. J., S. M. Powsner, R. N. Shiftman, P. L. Miller, and C. A. Brandt. "Visualizing the Logic of a Clinical Guideline: A Case Study in Childhood Immunization." Methods of Information in Medicine 36, no. 03 (July 1997): 179–83. http://dx.doi.org/10.1055/s-0038-1636845.

Abstract:
IMM/Graph is a visual model designed to help knowledge-base developers understand and refine the guideline logic for childhood immunization. The IMM/Graph model is domain-specific and was developed to help build a knowledge-based system that makes patient-specific immunization recommendations. A “visual vocabulary” models issues specific to the immunization domain, such as (1) the age a child is first eligible for each vaccination dose, (2) recommended, “past due” and maximum ages, (3) minimum waiting periods between doses, (4) the vaccine brand or preparation to be given, and (5) the various factors affecting the time course of vaccination. Several lessons learned in the course of developing IMM/Graph include the following: (1) The intended use of the model may influence the choice of visual presentation; (2) There is a potentially interesting interplay between the use of visual and textual information in creating the visual model; (3) Visualization may help a development team better understand a complex clinical guideline and may also help highlight areas of incompleteness.
36

Geng, Yuxia, Jiaoyan Chen, Zhiquan Ye, Zonggang Yuan, Wei Zhang, and Huajun Chen. "Explainable zero-shot learning via attentive graph convolutional network and knowledge graphs." Semantic Web 12, no. 5 (August 27, 2021): 741–65. http://dx.doi.org/10.3233/sw-210435.

Abstract:
Zero-shot learning (ZSL) which aims to deal with new classes that have never appeared in the training data (i.e., unseen classes) has attracted massive research interests recently. Transferring of deep features learned from training classes (i.e., seen classes) are often used, but most current methods are black-box models without any explanations, especially textual explanations that are more acceptable to not only machine learning specialists but also common people without artificial intelligence expertise. In this paper, we focus on explainable ZSL, and present a knowledge graph (KG) based framework that can explain the transferability of features in ZSL in a human understandable manner. The framework has two modules: an attentive ZSL learner and an explanation generator. The former utilizes an Attentive Graph Convolutional Network (AGCN) to match class knowledge from WordNet with deep features learned from CNNs (i.e., encode inter-class relationship to predict classifiers), in which the features of unseen classes are transferred from seen classes to predict the samples of unseen classes, with impressive (important) seen classes detected, while the latter generates human understandable explanations for the transferability of features with class knowledge that are enriched by external KGs, including a domain-specific Attribute Graph and DBpedia. We evaluate our method on two benchmarks of animal recognition. Augmented by class knowledge from KGs, our framework generates promising explanations for the transferability of features, and at the same time improves the recognition accuracy.
37

Dorodnykh, N. O., and A. Yu Yurin. "An approach for automated knowledge graph filling with entities based on table analysis." Ontology of Designing 12, no. 3 (September 27, 2022): 336–52. http://dx.doi.org/10.18287/2223-9537-2022-12-3-336-352.

Abstract:
The use of Semantic Web technologies including ontologies and knowledge graphs is a widespread practice in the development of modern intelligent systems for information retrieval, recommendation and question-answering. The process of developing ontologies and knowledge graphs involves the use of various information sources, for example, databases, documents, conceptual models. Tables are one of the most accessible and widely used ways of storing and presenting information, as well as a valuable source of domain knowledge. In this paper, it is proposed to automate the extraction process of specific entities (facts) from tabular data for the subsequent filling of a target knowledge graph. A new approach is proposed for this purpose. A key feature of this approach is the semantic interpretation (annotation) of individual table elements. A description of its main stages is given, and the application of the approach is shown in solving practical problems of creating subject knowledge graphs, including in the field of industrial safety expertise of petrochemical equipment and technological complexes. An experimental quantitative evaluation of the proposed approach was also obtained on a test set of tabular data. The obtained results showed the feasibility of using the proposed approach and the developed software to solve the problem of extracting facts from tabular data for the subsequent filling of the target knowledge graph.
APA, Harvard, Vancouver, ISO, and other styles
38

Nie, Binling, Ruixue Ding, Pengjun Xie, Fei Huang, Chen Qian, and Luo Si. "Knowledge-aware Named Entity Recognition with Alleviating Heterogeneity." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13595–603. http://dx.doi.org/10.1609/aaai.v35i15.17603.

Full text
Abstract:
Named Entity Recognition (NER) is a fundamental and important research topic for many downstream NLP tasks, aiming at detecting and classifying named entities (NEs) mentioned in unstructured text into pre-defined categories. Learning from labeled data alone is far from enough when it comes to domain-specific or temporally-evolving entities (e.g., medical terminologies or restaurant names). Luckily, open-source Knowledge Bases (KBs) such as Wikidata and Freebase contain NEs that are manually labeled with predefined types in different domains, which is potentially beneficial for identifying entity boundaries and recognizing entity types more accurately. However, the type system of a domain-specific NER task is typically independent of that of current KBs and thus inevitably exhibits a heterogeneity issue, which makes matching between the original NER and KB types (e.g., Person in NER potentially matches President in KBs) less likely, or introduces unintended noise when domain-specific knowledge is not considered (e.g., Band in NER should be mapped to Out_of_Entity_Types in a restaurant-related task). To better incorporate and denoise the abundant knowledge in KBs, we propose a new KB-aware NER framework (KaNa), which utilizes type-heterogeneous knowledge to improve NER. Specifically, for an entity mention along with a set of candidate entities linked from KBs, KaNa first uses a type projection mechanism that maps the mention type and entity types into a shared space to homogenize the heterogeneous entity types. Then, based on the projected types, a noise detector filters out certain less-confident candidate entities in an unsupervised manner. Finally, the filtered mention-entity pairs are injected into an NER model as a graph to predict answers. The experimental results demonstrate KaNa's state-of-the-art performance on five public benchmark datasets from different domains.
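The following numpy sketch illustrates the two mechanisms the abstract names, type projection into a shared space and unsupervised filtering of low-confidence candidates; the projection matrices are random stand-ins for learned parameters and the threshold is arbitrary, so this is not the KaNa implementation.

```python
# Toy numpy sketch (not KaNa): project heterogeneous NER and KB type
# embeddings into a shared space, then drop candidate entities whose
# projected type matches the mention type poorly.
import numpy as np

rng = np.random.default_rng(1)
d_ner, d_kb, d_shared = 6, 8, 4
W_ner = rng.normal(size=(d_shared, d_ner))   # stand-in for a learned projection
W_kb = rng.normal(size=(d_shared, d_kb))     # stand-in for a learned projection

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_candidates(mention_type_vec, candidates, threshold=0.2):
    """candidates: list of (entity_id, kb_type_vec); keep confident matches only."""
    projected_mention = W_ner @ mention_type_vec
    kept = []
    for entity_id, kb_type_vec in candidates:
        score = cosine(projected_mention, W_kb @ kb_type_vec)
        if score >= threshold:            # unsupervised noise filtering by threshold
            kept.append((entity_id, round(score, 3)))
    return kept

mention = rng.normal(size=d_ner)
candidates = [("Q76", rng.normal(size=d_kb)), ("Q30", rng.normal(size=d_kb))]
print(filter_candidates(mention, candidates))
```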
APA, Harvard, Vancouver, ISO, and other styles
39

Monka, Sebastian, Lavdim Halilaj, and Achim Rettinger. "A survey on visual transfer learning using knowledge graphs." Semantic Web 13, no. 3 (April 6, 2022): 477–510. http://dx.doi.org/10.3233/sw-212959.

Full text
Abstract:
The information perceived via visual observations of real-world phenomena is unstructured and complex. Computer vision (CV) is the field of research that attempts to make use of that information. Recent approaches in CV utilize deep learning (DL) methods, as they perform quite well if training and testing domains follow the same underlying data distribution. However, it has been shown that minor variations in the images that occur when these methods are used in the real world can lead to unpredictable and catastrophic errors. Transfer learning is the area of machine learning that tries to prevent these errors. In particular, approaches that augment image data using auxiliary knowledge encoded in language embeddings or knowledge graphs (KGs) have achieved promising results in recent years. This survey focuses on visual transfer learning approaches using KGs, as we believe that KGs are well suited to store and represent any kind of auxiliary knowledge. KGs can represent auxiliary knowledge either in an underlying graph-structured schema or in a vector-based knowledge graph embedding. To enable the reader to solve visual transfer learning problems with the help of specific KG-DL configurations, we start with a description of relevant KG modeling structures of varying expressivity, such as directed labeled graphs, hypergraphs, and hyper-relational graphs. We explain the notion of a feature extractor, referring specifically to visual and semantic features. We provide a broad overview of knowledge graph embedding methods and describe several joint training objectives suitable to combine them with high-dimensional visual embeddings. The main section introduces four different categories of how a KG can be combined with a DL pipeline: 1) Knowledge Graph as a Reviewer; 2) Knowledge Graph as a Trainee; 3) Knowledge Graph as a Trainer; and 4) Knowledge Graph as a Peer. To help researchers find meaningful evaluation benchmarks, we provide an overview of generic KGs and a set of image processing datasets and benchmarks that include various types of auxiliary knowledge. Last, we summarize related surveys and give an outlook on challenges and open issues for future research.
APA, Harvard, Vancouver, ISO, and other styles
40

Yan, Muheng, Yu-Ru Lin, Rebecca Hwa, Ali Mert Ertugrul, Meiqi Guo, and Wen-Ting Chung. "MimicProp: Learning to Incorporate Lexicon Knowledge into Distributed Word Representation for Social Media Analysis." Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 738–49. http://dx.doi.org/10.1609/icwsm.v14i1.7339.

Full text
Abstract:
Lexicon-based methods and word embeddings are the two widely used approaches for analyzing texts in social media. The choice of an approach can have a significant impact on the reliability of the text analysis. For example, lexicons provide manually curated, domain-specific attributes about a limited set of words, while word embeddings learn to encode some loose semantic interpretations for a much broader set of words. Text analysis can benefit from a representation that offers both the broad coverage of word embeddings and the domain knowledge of lexicons. This paper presents MimicProp, a new graph-based method that learns a lexicon-aligned word embedding. Our approach improves over prior graph-based methods in terms of its interpretability (i.e., lexicon attributes can be recovered) and generalizability (i.e., new words can be learned to incorporate lexicon knowledge). It also effectively improves the performance of downstream analysis applications, such as text classification.
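A toy sketch of the underlying idea of propagating lexicon values over a word-similarity graph is shown below; it uses plain label propagation with clamping, which is a simplification and not the MimicProp algorithm itself, and the similarity matrix and scores are invented.

```python
# Simplified label-propagation sketch (not MimicProp): out-of-lexicon words
# inherit lexicon scores from their neighbours in a word-similarity graph,
# while curated lexicon entries stay clamped to their original values.
import numpy as np

def propagate_lexicon(similarity, lexicon_scores, known_mask, alpha=0.8, iters=50):
    """similarity: (n, n) non-negative word-similarity matrix;
    lexicon_scores: length-n vector, meaningful only where known_mask is True."""
    P = similarity / similarity.sum(axis=1, keepdims=True)   # row-normalised graph
    base = np.where(known_mask, lexicon_scores, 0.0)
    scores = base.copy()
    for _ in range(iters):
        scores = alpha * (P @ scores) + (1 - alpha) * base
        scores[known_mask] = lexicon_scores[known_mask]       # clamp lexicon words
    return scores

similarity = np.array([[1.0, 0.9, 0.1],
                       [0.9, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
lexicon = np.array([0.8, 0.0, -0.6])          # "great", unknown word, "awful"
known = np.array([True, False, True])
print(propagate_lexicon(similarity, lexicon, known))  # the unknown word inherits a positive score
```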
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Linqing, Bo Liu, Youpei Huang, and Xiaozhuo Li. "Research on Entity Label Value Assignment Method in Knowledge Graph." Computer and Information Science 14, no. 2 (April 13, 2021): 63. http://dx.doi.org/10.5539/cis.v14n2p63.

Full text
Abstract:
The lack of entity label values is one of the problems faced in the application of Knowledge Graphs. Existing methods for automatically assigning entity label values still have shortcomings, such as consuming more resources during training and producing inaccurate label values because entity semantics are not captured. In this paper, oriented to domain-specific Knowledge Graphs and addressing the situation in which the initial entity label values of all triples are completely unknown, an Entity Label Value Assignment Method (ELVAM) based on external resources and entropy is proposed. ELVAM first constructs Relationship Triples Clusters according to relationship type and randomly extracts triples from each cluster to form a Relationship Triples Subset; it then collects the extended semantic text of the entities in the subset from external resources to obtain nouns. The Information Entropy and Conditional Entropy of the nouns are calculated through an Ontology Category Hierarchy Graph, so as to obtain entity label values of moderate granularity. Finally, the Label Triples Pattern of each Relationship Triples Cluster is summarized, and the corresponding entities are assigned label values according to the pattern. The experimental results verify the effectiveness of ELVAM in assigning entity label values in a Knowledge Graph.
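The entropy computations mentioned in the abstract can be illustrated with a short Python sketch over hypothetical noun counts; the ontology category hierarchy and the collection of nouns from external resources are not reproduced here.

```python
# Sketch of the entropy computations only; the counts below are hypothetical
# and the ontology-based steps of ELVAM are not reproduced.
import math
from collections import Counter

def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def conditional_entropy(joint_counts):
    """joint_counts[(category, noun)] = count; returns H(noun | category)."""
    total = sum(joint_counts.values())
    per_category = Counter()
    for (category, _), count in joint_counts.items():
        per_category[category] += count
    h = 0.0
    for (category, _), count in joint_counts.items():
        h -= (count / total) * math.log2(count / per_category[category])
    return h

joint = {("Device", "pump"): 8, ("Device", "valve"): 4,
         ("Process", "pump"): 1, ("Process", "inspection"): 7}
noun_counts = Counter()
for (_, noun), count in joint.items():
    noun_counts[noun] += count
total = sum(noun_counts.values())
print("H(noun) =", round(entropy([c / total for c in noun_counts.values()]), 3))
print("H(noun | category) =", round(conditional_entropy(joint), 3))
```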
APA, Harvard, Vancouver, ISO, and other styles
42

Heverin, Thomas, Elsa Deitz, Eve Cohen, and Jordana Wilkes. "Development and Analysis of a Reconnaissance-Technique Knowledge Graph." International Conference on Cyber Warfare and Security 18, no. 1 (February 28, 2023): 128–36. http://dx.doi.org/10.34190/iccws.18.1.1041.

Full text
Abstract:
Penetration testing involves the use of many tools and techniques. The first stage of penetration testing involves conducting reconnaissance on a target organization. In the reconnaissance phase, adversaries use tools to find network data, people data, company/organization data, and attack data to generate a risk assessment about a target and determine where initial weaknesses may be. Although a small number of tools can be used to conduct many reconnaissance tasks, including Shodan, Nmap, Recon-ng, Maltego, Metasploit, Google and more, each tool holds an abundance of specific techniques that can be used, and each technique uses unique syntax. For example, Nmap holds over 600 scripts that make up its Nmap Scripting Engine. Depending on the type of device targeted, Nmap scripts can scan for ports, operating systems, IP addresses, hostnames and more. As another example, Maltego operates over 150 transforms, or modules, that collect data on organizations, files and people. Understanding which reconnaissance tools to use, the techniques within those tools, and the syntax for each technique is a highly complex task. MITRE ATT&CK, a widely accepted framework, models tactics and the techniques within those tactics to help users make sense of adversarial behaviours. The tactic of reconnaissance is modelled in ATT&CK, as are its techniques. However, the explicit links between reconnaissance techniques are not modelled. Our research focused on the development of an ontology called Recontology to model the domain of reconnaissance. Recontology was then used to form the Reconnaissance-Technique Graph (RT-Graph), which models 102 reconnaissance techniques and the directional links between them. We used exploratory data analysis (EDA) methods, including a graph spatial-layout algorithm and several graph-statistical algorithms, to examine RT-Graph, and we also used EDA to find critical techniques within the graph. Patterns across the results are discussed, as are implications for real-world uses of RT-Graph.
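A small networkx sketch of the kind of graph-statistical exploration described above is given below; the technique names and directional links are hypothetical placeholders rather than RT-Graph data.

```python
# Exploratory analysis sketch with networkx; the technique names and the
# directional links below are hypothetical placeholders, not RT-Graph data.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("whois lookup", "subdomain enumeration"),
    ("subdomain enumeration", "port scan"),
    ("port scan", "service version detection"),
    ("service version detection", "vulnerability lookup"),
    ("employee search", "email harvesting"),
    ("email harvesting", "credential exposure check"),
])

# graph-statistical measures of the kind used to flag critical techniques
betweenness = nx.betweenness_centrality(G)
in_degree = dict(G.in_degree())
for technique in sorted(G, key=betweenness.get, reverse=True)[:3]:
    print(technique, round(betweenness[technique], 3), "in-degree:", in_degree[technique])
```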
APA, Harvard, Vancouver, ISO, and other styles
43

Búr, Márton, Gábor Szilágyi, András Vörös, and Dániel Varró. "Distributed graph queries over models@run.time for runtime monitoring of cyber-physical systems." International Journal on Software Tools for Technology Transfer 22, no. 1 (September 26, 2019): 79–102. http://dx.doi.org/10.1007/s10009-019-00531-5.

Full text
Abstract:
Smart cyber-physical systems (CPSs) have complex interaction with their environment which is rarely known in advance, and they heavily depend on intelligent data processing carried out over a heterogeneous and distributed computation platform with resource-constrained devices to monitor, manage and control autonomous behavior. First, we propose a distributed runtime model to capture the operational state and the context information of a smart CPS using directed, typed and attributed graphs as high-level knowledge representation. The runtime model is distributed among the participating nodes, and it is consistently kept up to date in a continuously evolving environment by a time-triggered model management protocol. Our runtime models offer a (domain-specific) model query and manipulation interface over the reliable communication middleware of the Data Distribution Service (DDS) standard widely used in the CPS domain. Then, we propose to carry out distributed runtime monitoring by capturing critical properties of interest in the form of graph queries, and design a distributed graph query evaluation algorithm for evaluating such graph queries over the distributed runtime model. As the key innovation, our (1) distributed runtime model extends existing publish–subscribe middleware (like DDS) used in real-time CPS applications by enabling the dynamic creation and deletion of graph nodes (without compile time limits). Moreover, (2) our distributed query evaluation extends existing graph query techniques by enabling query evaluation in a real-time, resource-constrained environment while still providing scalable performance. Our approach is illustrated, and an initial scalability evaluation is carried out on the MoDeS3 CPS demonstrator and the open Train Benchmark for graph queries.
APA, Harvard, Vancouver, ISO, and other styles
44

Banerjee, Suman, and Mitesh M. Khapra. "Graph Convolutional Network with Sequential Attention for Goal-Oriented Dialogue Systems." Transactions of the Association for Computational Linguistics 7 (November 2019): 485–500. http://dx.doi.org/10.1162/tacl_a_00284.

Full text
Abstract:
Domain-specific goal-oriented dialogue systems typically require modeling three types of inputs, namely, (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances, and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling, and document dating, we propose a memory-augmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as when the conversation is in a code-mixed language, dependency parsers may not be available. We show that in such situations we could use the global word co-occurrence graph to enrich the representations of utterances. We experiment with four datasets: (i) the modified DSTC2 dataset, (ii) recently released code-mixed versions of DSTC2 dataset in four languages, (iii) Wizard-of-Oz style CAM676 dataset, and (iv) Wizard-of-Oz style MultiWOZ dataset. On all four datasets our method outperforms existing methods, on a wide range of evaluation metrics.
APA, Harvard, Vancouver, ISO, and other styles
45

Ge, Xingtong, Yi Yang, Ling Peng, Luanjie Chen, Weichao Li, Wenyue Zhang, and Jiahui Chen. "Spatio-Temporal Knowledge Graph Based Forest Fire Prediction with Multi Source Heterogeneous Data." Remote Sensing 14, no. 14 (July 21, 2022): 3496. http://dx.doi.org/10.3390/rs14143496.

Full text
Abstract:
Forest fires occur frequently and cause great harm to people's lives. Many researchers use machine learning techniques to predict forest fires by considering spatio-temporal data features. However, it is difficult to efficiently obtain such features from large-scale, multi-source, heterogeneous data, and there is a lack of methods that can effectively extract the features required by machine learning-based forest fire prediction from multi-source spatio-temporal data. This paper proposes a forest fire prediction method that integrates spatio-temporal knowledge graphs and machine learning models. The method can fuse multi-source heterogeneous spatio-temporal forest fire data by constructing a forest fire semantic ontology and a knowledge graph-based spatio-temporal framework. The domain expertise of forest fire analysis is encoded as semantic rules of the knowledge graph, and a rule-based reasoning method is proposed to obtain the data required by specific machine learning-based forest fire prediction methods, dedicated to tackling real-time prediction scenarios. Experiments on forest fire prediction are performed with real-world data from the experimental areas of Xichang and Yanyuan in Sichuan province. The results show that the proposed method is beneficial for the fusion of multi-source spatio-temporal data and substantially improves prediction performance in real forest fire prediction scenarios.
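The rule-based extraction of machine learning features from a spatio-temporal knowledge graph can be sketched as follows; the predicate names, observation values, and the danger rule are invented for illustration and are not the paper's ontology or rules.

```python
# Hypothetical sketch of rule-based feature assembly from a spatio-temporal
# knowledge graph; the predicate names, values and the danger rule are
# invented for illustration.
triples = [
    ("region:Xichang", "hasObservation", "obs:1"),
    ("obs:1", "observedAt", "2020-04-01"),
    ("obs:1", "temperature", 31.0),
    ("obs:1", "humidity", 18.0),
    ("obs:1", "windSpeed", 7.5),
]

def value_of(subject, predicate):
    return next(o for s, p, o in triples if s == subject and p == predicate)

def features_for(region, date):
    observations = [o for s, p, o in triples
                    if s == region and p == "hasObservation" and value_of(o, "observedAt") == date]
    feats = {}
    for obs in observations:
        feats["temperature"] = value_of(obs, "temperature")
        feats["humidity"] = value_of(obs, "humidity")
        feats["wind_speed"] = value_of(obs, "windSpeed")
    # example semantic rule: hot, dry and windy conditions raise the danger flag
    feats["danger_rule_fired"] = int(
        feats["temperature"] > 30 and feats["humidity"] < 20 and feats["wind_speed"] > 5)
    return feats

print(features_for("region:Xichang", "2020-04-01"))
```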
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Wuyang, Xinyu Liu, Xiwen Yao, and Yixuan Yuan. "SCAN: Cross Domain Object Detection with Semantic Conditioned Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1421–28. http://dx.doi.org/10.1609/aaai.v36i2.20031.

Full text
Abstract:
The domain gap severely limits the transferability and scalability of object detectors trained in a specific domain when applied to a novel one. Most existing works bridge the domain gap by minimizing the domain discrepancy in the category space and aligning category-agnostic global features. Despite their great success, these methods model domain discrepancy with prototypes within a batch, yielding a biased estimation of the domain-level distribution. Besides, the category-agnostic alignment leads to the disagreement of class-specific distributions in the two domains, further causing inevitable classification errors. To overcome these two challenges, we propose a novel Semantic Conditioned AdaptatioN (SCAN) framework such that well-modeled unbiased semantics can support semantic conditioned adaptation for precise domain adaptive object detection. Specifically, class-specific semantics across different images in the source domain are graphically aggregated as the input to learn an unbiased semantic paradigm incrementally. The paradigm is then sent to a lightweight manifestation module to obtain conditional kernels that serve the role of extracting semantics from the target domain for better adaptation. Subsequently, conditional kernels are integrated into global alignment to support class-specific adaptation in a well-designed Conditional Kernel guided Alignment (CKA) module. Meanwhile, rich knowledge of the unbiased paradigm is transferred to the target domain with a novel Graph-based Semantic Transfer (GST) mechanism, yielding adaptation in the category-based feature space. Comprehensive experiments conducted on three adaptation benchmarks demonstrate that SCAN outperforms existing works by a large margin.
APA, Harvard, Vancouver, ISO, and other styles
47

Mouromtsev, Dmitry. "Semantic Reference Model for Individualization of Information Processes in IoT Heterogeneous Environment." Electronics 10, no. 20 (October 16, 2021): 2523. http://dx.doi.org/10.3390/electronics10202523.

Full text
Abstract:
The individualization of information processes based on artificial intelligence (AI), especially in the context of industrial tasks, requires new, hybrid approaches to process modeling that take into account novel methods and technologies both in the field of semantic knowledge representation and in machine learning. The combination of both AI techniques imposes several requirements and restrictions on the types of data and object properties and on the structure of ontologies for representing data and knowledge about processes. The conceptual reference model for effective individualization of information processes (IIP CRM) proposed in this work takes these requirements and restrictions into account. The model is based on such well-known standard upper ontologies as BFO, GFO and MASON. The proposed model is evaluated on a practical use case in the field of precision agriculture, where IoT-enabled processes are widely used. It is shown that IIP CRM allows the construction of a knowledge graph about processes that are surrounded by unstructured data in soft and heterogeneous domains. The CRM also provides the ability to answer specific questions in the domain using queries written with the CRM vocabulary, which makes it easier to develop applications based on knowledge graphs.
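As an illustration of answering domain questions with queries written against a process vocabulary, the following rdflib sketch builds a tiny process graph and runs a SPARQL query over it; the namespace and property names are hypothetical stand-ins, not the actual IIP CRM terms.

```python
# Tiny rdflib sketch: a process knowledge graph queried with a CRM-style
# vocabulary. The namespace and property names are hypothetical stand-ins,
# not the actual IIP CRM terms.
from rdflib import Graph, Literal, Namespace, RDF

CRM = Namespace("http://example.org/iip-crm#")
g = Graph()
g.add((CRM.Irrigation_42, RDF.type, CRM.Process))
g.add((CRM.Irrigation_42, CRM.hasParticipant, CRM.SoilSensor_7))
g.add((CRM.SoilSensor_7, CRM.hasMeasurement, Literal(14.2)))

query = """
PREFIX crm: <http://example.org/iip-crm#>
SELECT ?process ?value WHERE {
  ?process a crm:Process ;
           crm:hasParticipant ?sensor .
  ?sensor crm:hasMeasurement ?value .
}"""
for row in g.query(query):
    print(row.process, row.value)   # which process observed which measurement
```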
APA, Harvard, Vancouver, ISO, and other styles
48

Gordon, Sallie E. "Cognitive Task Analysis Using Complementary Elicitation Methods." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 39, no. 9 (October 1995): 525–29. http://dx.doi.org/10.1177/154193129503900919.

Full text
Abstract:
Cognitive task analysis is accomplished using a wide variety of methodologies, and we have previously argued that different methods will tend to elicit qualitatively different types of knowledge and skills. Because of this, many practitioners use complementary methods for a given project. We have developed such a complementary package of knowledge elicitation techniques, along with a specific representational method, which together are termed conceptual graph analysis. Conceptual graph analysis is domain-independent and can be used to evaluate complex cognitive tasks or subtasks. It relies on the successive use of document analysis, interviews, task observation, and induction based on review of task performance. The information from these elicitation techniques is represented as a set of interrelated conceptual graphs, but can be represented in other formats also. There are several issues relevant to cognitive task analysis that are currently being faced, including when to perform this type of analysis, and what methods to use. One answer is to perform cognitive task analysis when the task has an inherently high degree of cognitive complexity.
APA, Harvard, Vancouver, ISO, and other styles
49

Xie, Bo, Guowei Shen, Chun Guo, and Yunhe Cui. "The Named Entity Recognition of Chinese Cybersecurity Using an Active Learning Strategy." Wireless Communications and Mobile Computing 2021 (April 21, 2021): 1–11. http://dx.doi.org/10.1155/2021/6629591.

Full text
Abstract:
In data-driven big data security analysis, knowledge graph-based organization, association mining, and inference analysis of multi-source heterogeneous threat data have attracted increasing interest in the field of cybersecurity. Although the construction of knowledge graphs based on deep learning has achieved great success, constructing a large-scale, high-quality, domain-specific knowledge graph requires the manual annotation of large corpora, which is very difficult. To tackle this problem, we present a straightforward active learning strategy for cybersecurity entity recognition utilizing deep learning technology. The BERT pre-trained model and residual dilation convolutional neural networks (RDCNN) are introduced to learn entity context features, and a conditional random field (CRF) layer is employed as the tag decoder. Then, taking advantage of the output results and the distribution of cybersecurity entities, we propose an active learning strategy named TPCL that considers uncertainty, confidence, and diversity. We evaluated TPCL on general-domain datasets and cybersecurity datasets, respectively. The experimental results show that TPCL performs better than traditional strategies in terms of accuracy and F1. Moreover, compared with the general field, it performs better in the cybersecurity field and is more suitable for the Chinese entity recognition task in this field.
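The uncertainty component of such an active learning strategy can be sketched in a few lines: score each unlabeled sentence by the mean entropy of its per-token label distributions and pick the most uncertain ones for annotation. This covers only the uncertainty criterion under simplified assumptions, not TPCL, which additionally weighs confidence and diversity.

```python
# Sketch of the uncertainty criterion only (not TPCL): rank unlabeled
# sentences by the mean entropy of their per-token label distributions.
import numpy as np

def token_entropy(prob_matrix):
    """prob_matrix: (num_tokens, num_labels) per-token label probabilities."""
    p = np.clip(prob_matrix, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_annotation(sentence_probs, k=2):
    """Return the indices of the k sentences the tagger is least sure about."""
    scores = [token_entropy(p).mean() for p in sentence_probs]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]

rng = np.random.default_rng(2)
# hypothetical pool: 6 sentences of varying length, 5 possible labels per token
pool = [rng.dirichlet(np.ones(5), size=int(rng.integers(4, 9))) for _ in range(6)]
print(select_for_annotation(pool, k=2))
```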
APA, Harvard, Vancouver, ISO, and other styles
50

Nguyen, Ba Xuan, Jesse David Dinneen, and Markus Luczak-Roesch. "A Novel Method for Resolving and Completing Authors’ Country Affiliation Data in Bibliographic Records." Journal of Data and Information Science 5, no. 3 (July 9, 2020): 97–115. http://dx.doi.org/10.2478/jdis-2020-0020.

Full text
Abstract:
Purpose: Our work seeks to overcome data quality issues related to incomplete author affiliation data in bibliographic records in order to support accurate and reliable measurement of international research collaboration (IRC). Design/methodology/approach: We propose, implement, and evaluate a method that leverages the Web-based knowledge graph Wikidata to resolve publication affiliation data to particular countries. The method is tested with general and domain-specific data sets. Findings: Our evaluation covers the magnitude of improvement, accuracy, and consistency. Results suggest the method is beneficial, reliable, and consistent, and thus a viable and improved approach to measuring IRC. Research limitations: Though our evaluation suggests the method works with both general and domain-specific bibliographic data sets, it may perform differently with data sets not tested here. Further limitations stem from the use of the R programming language and R libraries for country identification, as well as imbalanced data coverage and quality in Wikidata, which may also change over time. Practical implications: The new method helps to increase accuracy in IRC studies and provides a basis for further development into a general tool that enriches bibliographic data using the Wikidata knowledge graph. Originality: This is the first attempt to enrich bibliographic data using a peer-produced, Web-based knowledge graph like Wikidata.
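A hedged sketch of the core lookup, resolving an organisation label to a country via the public Wikidata SPARQL endpoint, is shown below; it uses Python and exact English label matching for brevity, whereas the authors' method is implemented in R and works on real bibliographic records.

```python
# Hedged sketch (the authors' method is implemented in R): resolve an
# organisation label to a country via the public Wikidata SPARQL endpoint.
# Exact English label matching is a simplification for brevity.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

def country_of(org_label, lang="en"):
    query = f"""
    SELECT ?countryLabel WHERE {{
      ?org rdfs:label "{org_label}"@{lang} ;
           wdt:P17 ?country .
      ?country rdfs:label ?countryLabel .
      FILTER(LANG(?countryLabel) = "{lang}")
    }} LIMIT 1"""
    response = requests.get(ENDPOINT,
                            params={"query": query, "format": "json"},
                            headers={"User-Agent": "affiliation-resolver-sketch/0.1"})
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    return bindings[0]["countryLabel"]["value"] if bindings else None

print(country_of("Victoria University of Wellington"))  # expected: New Zealand
```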
APA, Harvard, Vancouver, ISO, and other styles
