To view the other types of publications on this topic, follow this link: Graph embeddings.

Journal articles on the topic "Graph embeddings"

Create a source citation in APA, MLA, Chicago, Harvard, and other styles

Select a type of source:

Consult the top 50 journal articles for research on the topic "Graph embeddings".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a PDF and read its online annotation, whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Zhou, Houquan, Shenghua Liu, Danai Koutra, Huawei Shen, and Xueqi Cheng. "A Provable Framework of Learning Graph Embeddings via Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4946–53. http://dx.doi.org/10.1609/aaai.v37i4.25621.

Full text of the source
Annotation:
Given a large graph, can we learn its node embeddings from a smaller summary graph? What is the relationship between embeddings learned from original graphs and their summary graphs? Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to alleviate it via graph summarization, which typically includes three steps: reducing the graph size by combining nodes and edges into supernodes and superedges, learning the supernode embedding on the summary graph, and then restoring the embeddings of the original nodes. However, the justification behind those steps is still unknown. In this work, we propose GELSUMM, a well-formulated graph embedding learning framework based on graph summarization, in which we show the theoretical ground of learning from summary graphs and the restoration with the three well-known graph embedding approaches in a closed form. Through extensive experiments on real-world datasets, we demonstrate that our methods can learn graph embeddings with matching or better performance on downstream tasks. This work provides theoretical analysis for learning node embeddings via summarization and helps explain and understand the mechanism of the existing works.
APA, Harvard, Vancouver, ISO, and other citation styles
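The summarize-then-embed-then-restore pipeline this annotation describes can be illustrated with a toy sketch. This is not GELSUMM itself: the partition into supernodes is fixed by hand, the summary "embedding" is a stand-in (supernode degree) rather than a learned vector, and restoration simply copies each supernode's vector to its member nodes.

```python
# Toy sketch of the summarize -> embed -> restore pipeline.
# Assumption: the supernode partition is given; the summary "embedding"
# is a placeholder (degree), not a learned representation.

def summarize(edges, partition):
    """Collapse nodes into supernodes; return the summary's superedges."""
    super_edges = set()
    for u, v in edges:
        su, sv = partition[u], partition[v]
        if su != sv:
            super_edges.add((min(su, sv), max(su, sv)))
    return super_edges

def embed_summary(super_edges, supernodes):
    """Stand-in embedding: a 1-d vector holding each supernode's degree."""
    deg = {s: 0 for s in supernodes}
    for su, sv in super_edges:
        deg[su] += 1
        deg[sv] += 1
    return {s: [deg[s]] for s in supernodes}

def restore(summary_emb, partition):
    """Restore original-node embeddings by copying the supernode's vector."""
    return {u: summary_emb[s] for u, s in partition.items()}

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
node_emb = restore(embed_summary(summarize(edges, partition), {"A", "B"}), partition)
```

In the real setting the summary is embedded by a learned method (e.g., a random-walk model) and the restoration step carries theoretical guarantees; this sketch only shows the data flow.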
2

Mohar, Bojan. "Combinatorial Local Planarity and the Width of Graph Embeddings." Canadian Journal of Mathematics 44, no. 6 (December 1, 1992): 1272–88. http://dx.doi.org/10.4153/cjm-1992-076-8.

Full text of the source
Annotation:
Let G be a graph embedded in a closed surface. The embedding is “locally planar” if for each face, a “large” neighbourhood of this face is simply connected. This notion is formalized, following [RV], by introducing the width ρ(ψ) of the embedding ψ. It is shown that embeddings with ρ(ψ) ≥ 3 behave very much like the embeddings of planar graphs in the 2-sphere. Another notion, “combinatorial local planarity”, is introduced. The criterion is independent of embeddings of the graph, but it guarantees that a given cycle in a graph G must be contractible in any minimal genus embedding of G (either orientable, or non-orientable). It generalizes the width introduced before. As application, short proofs of some important recently discovered results about embeddings of graphs are given and generalized or improved. Uniqueness and switching equivalence of graphs embedded in a fixed surface are also considered.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Makarov, Ilya, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Subelj. "Survey on graph embeddings and their applications to machine learning problems on graphs." PeerJ Computer Science 7 (February 4, 2021): e357. http://dx.doi.org/10.7717/peerj-cs.357.

Full text of the source
Annotation:
Dealing with relational data always required significant computational resources, domain expertise and task-dependent feature engineering to incorporate structural information into a predictive model. Nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. So-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges and subgraphs under preserving inner graph properties. Using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation. Our survey aims to describe the core concepts of graph embeddings and provide several taxonomies for their description. First, we start with the methodological approach and extract three types of graph embedding models based on matrix factorization, random-walks and deep learning approaches. Next, we describe how different types of networks impact the ability of models to incorporate structural and attributed data into a unified embedding. Going further, we perform a thorough evaluation of graph embedding applications to machine learning problems on graphs, among which are node classification, link prediction, clustering, visualization, compression, and a family of the whole graph embedding algorithms suitable for graph classification, similarity and alignment problems. Finally, we overview the existing applications of graph embeddings to computer science domains, formulate open problems and provide experiment results, explaining how different networks properties result in graph embeddings quality in the four classic machine learning problems on graphs, such as node classification, link prediction, clustering and graph visualization. 
As a result, our survey covers a new rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (October 1, 2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.

Full text of the source
Annotation:
Objective: The study sought to explore the use of deep learning techniques to measure the semantic relatedness between Unified Medical Language System (UMLS) concepts. Materials and Methods: Concept sentence embeddings were generated for UMLS concepts by applying the word embedding models BioWordVec and various flavors of BERT to concept sentences formed by concatenating UMLS terms. Graph embeddings were generated by graph convolutional networks and 4 knowledge graph embedding models, using graphs built from UMLS hierarchical relations. Semantic relatedness was measured by the cosine between the concepts’ embedding vectors. Performance was compared with 2 traditional path-based (shortest path and Leacock-Chodorow) measurements and the publicly available concept embeddings, cui2vec, generated from large biomedical corpora. The concept sentence embeddings were also evaluated on a word sense disambiguation (WSD) task. Reference standards used included the semantic relatedness and semantic similarity datasets from the University of Minnesota, concept pairs generated from the Standardized MedDRA Queries, and the MeSH (Medical Subject Headings) WSD corpus. Results: Sentence embeddings generated by BioWordVec outperformed all other methods used individually in semantic relatedness measurements. Graph convolutional network graph embedding uniformly outperformed path-based measurements and was better than some word embeddings for the Standardized MedDRA Queries dataset. When used together, combined word and graph embedding achieved the best performance in all datasets. For WSD, the enhanced versions of BERT outperformed BioWordVec. Conclusions: Word and graph embedding techniques can be used to harness terms and relations in the UMLS to measure semantic relatedness between concepts. Concept sentence embedding outperforms path-based measurements and cui2vec, and can be further enhanced by combining with graph embedding.
APA, Harvard, Vancouver, ISO, and other citation styles
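The cosine measure used in this study to score relatedness between concept vectors is straightforward to reproduce. A minimal sketch with made-up 3-dimensional vectors; the real concept embeddings are learned by BioWordVec, BERT variants, or the graph models, not hand-crafted like these:

```python
import math

def cosine(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d concept embeddings (illustration only, not real UMLS vectors):
heart_attack = [0.9, 0.1, 0.2]
myocardial_infarction = [0.8, 0.2, 0.3]
fracture = [0.1, 0.9, 0.1]

# Related concepts point in similar directions, so their cosine is higher.
related = cosine(heart_attack, myocardial_infarction)
unrelated = cosine(heart_attack, fracture)
```

The same one-line measure applies whether the vectors come from word, sentence, or graph embeddings, which is what makes the head-to-head comparison in the paper possible.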
5

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Full text of the source
Annotation:
Graph embedding techniques allow to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed knowledge graph triples. We leverage the idea of line graph of a graph and extend it to the context of knowledge graphs. We introduce an edge weighting mechanism for the line graph based on semantic proximity. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world knowledge graphs and compared it with related work. We also show an application of triple embeddings in the context of user-item recommendations.
APA, Harvard, Vancouver, ISO, and other citation styles
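The line-graph idea that Triple2Vec extends can be illustrated on a plain undirected graph: each edge becomes a node of the line graph, and two such nodes are adjacent when the original edges share an endpoint. A minimal sketch; the paper's knowledge-graph version additionally weights these adjacencies by semantic proximity, which is omitted here:

```python
from itertools import combinations

def line_graph(edges):
    """Each original edge becomes a node; two are adjacent iff they share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    adjacency = set()
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # the two original edges share a vertex
            adjacency.add((e, f))
    return adjacency

# Path graph 0-1-2: its line graph has one adjacency, between edges (0,1) and (1,2).
lg_path = line_graph([(0, 1), (1, 2)])
# Triangle: every pair of its three edges shares a vertex.
lg_triangle = line_graph([(0, 1), (1, 2), (0, 2)])
```

Walks over such a line graph visit edges (triples, in the knowledge-graph setting) rather than nodes, which is what lets a SkipGram model learn triple embeddings directly.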
6

FRIESEN, TYLER, and VASSILY OLEGOVICH MANTUROV. "EMBEDDINGS OF *-GRAPHS INTO 2-SURFACES." Journal of Knot Theory and Its Ramifications 22, no. 12 (October 2013): 1341005. http://dx.doi.org/10.1142/s0218216513410058.

Full text of the source
Annotation:
This paper considers *-graphs in which all vertices have degree 4 or 6, and studies the question of calculating the genus of orientable 2-surfaces into which such graphs may be embedded. A *-graph is a graph endowed with a formal adjacency structure on the half-edges around each vertex, and an embedding of a *-graph is an embedding under which the formal adjacency relation on half-edges corresponds to the adjacency relation induced by the embedding. *-graphs are a natural generalization of four-valent framed graphs, which are four-valent graphs with an opposite half-edge structure. In [Embeddings of four-valent framed graphs into 2-surfaces, Dokl. Akad. Nauk 424(3) (2009) 308–310], the question of whether a four-valent framed graph admits a ℤ2-homologically trivial embedding into a given surface was shown to be equivalent to a problem on matrices. We show that a similar result holds for *-graphs in which all vertices have degree 4 or 6. This gives an algorithm in quadratic time to determine whether a *-graph admits an embedding into the plane.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Chen, Mingyang, Wen Zhang, Zhen Yao, Yushan Zhu, Yang Gao, Jeff Z. Pan, and Huajun Chen. "Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4182–90. http://dx.doi.org/10.1609/aaai.v37i4.25535.

Full text of the source
Annotation:
We propose an entity-agnostic representation learning method for handling the problem of inefficient parameter storage costs brought by embedding knowledge graphs. Conventional knowledge graph embedding methods map elements in a knowledge graph, including entities and relations, into continuous vector spaces by assigning them one or multiple specific embeddings (i.e., vector representations). Thus the number of embedding parameters increases linearly as the growth of knowledge graphs. In our proposed model, Entity-Agnostic Representation Learning (EARL), we only learn the embeddings for a small set of entities and refer to them as reserved entities. To obtain the embeddings for the full set of entities, we encode their distinguishable information from their connected relations, k-nearest reserved entities, and multi-hop neighbors. We learn universal and entity-agnostic encoders for transforming distinguishable information into entity embeddings. This approach allows our proposed EARL to have a static, efficient, and lower parameter count than conventional knowledge graph embedding methods. Experimental results show that EARL uses fewer parameters and performs better on link prediction tasks than baselines, reflecting its parameter efficiency.
APA, Harvard, Vancouver, ISO, and other citation styles
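The core idea of composing an entity's embedding from a small set of reserved entities can be sketched very roughly. This is a toy stand-in, not EARL's learned encoders: the vectors are made up, and the "encoder" is just the mean of the k nearest reserved entities' vectors, ignoring relations and multi-hop neighbours entirely.

```python
def mean_vector(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Hypothetical embeddings for a small set of "reserved" entities.
reserved = {"e1": [1.0, 0.0], "e2": [0.0, 1.0], "e3": [1.0, 1.0]}

def encode(nearest_reserved):
    """Toy stand-in for an entity-agnostic encoder: average the reserved neighbours."""
    return mean_vector([reserved[r] for r in nearest_reserved])

# An unseen entity gets a vector without ever storing its own parameters.
new_entity = encode(["e1", "e3"])
```

The point the sketch preserves is the parameter accounting: only the reserved entities carry stored vectors, so the parameter count no longer grows linearly with the number of entities.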
8

Fang, Peng, Arijit Khan, Siqiang Luo, Fang Wang, Dan Feng, Zhenli Li, Wei Yin, and Yuchao Cao. "Distributed Graph Embedding with Information-Oriented Random Walks." Proceedings of the VLDB Endowment 16, no. 7 (March 2023): 1643–56. http://dx.doi.org/10.14778/3587136.3587140.

Full text of the source
Annotation:
Graph embedding maps graph nodes to low-dimensional vectors, and is widely adopted in machine learning tasks. The increasing availability of billion-edge graphs underscores the importance of learning efficient and effective embeddings on large graphs, such as link prediction on Twitter with over one billion edges. Most existing graph embedding methods fall short of reaching high data scalability. In this paper, we present a general-purpose, distributed, information-centric random walk-based graph embedding framework, DistGER, which can scale to embed billion-edge graphs. DistGER incrementally computes information-centric random walks. It further leverages a multi-proximity-aware, streaming, parallel graph partitioning strategy, simultaneously achieving high local partition quality and excellent workload balancing across machines. DistGER also improves the distributed Skip-Gram learning model to generate node embeddings by optimizing the access locality, CPU throughput, and synchronization efficiency. Experiments on real-world graphs demonstrate that compared to state-of-the-art distributed graph embedding frameworks, including KnightKing, DistDGL, and Pytorch-BigGraph, DistGER exhibits 2.33×--129× acceleration, 45% reduction in cross-machines communication, and >10% effectiveness improvement in downstream tasks.
APA, Harvard, Vancouver, ISO, and other citation styles
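Random-walk based frameworks like this one start from a corpus of walks that is then fed to a Skip-Gram learner. A minimal, framework-agnostic sketch of uniform walks over an adjacency list; DistGER's walks are information-centric and distributed across machines, which this single-process sketch does not model:

```python
import random

def random_walk(adj, start, length, rng):
    """Uniform random walk of the given length over an adjacency list."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

# A small undirected graph as an adjacency list.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

rng = random.Random(42)  # seeded for reproducibility
# A walk corpus: several walks per start node, as walk-based embedders use.
walks = [random_walk(adj, n, 5, rng) for n in adj for _ in range(10)]
```

Each walk is then treated like a sentence by the Skip-Gram model, with nodes in place of words.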
9

NIKKUNI, RYO. "THE SECOND SKEW-SYMMETRIC COHOMOLOGY GROUP AND SPATIAL EMBEDDINGS OF GRAPHS." Journal of Knot Theory and Its Ramifications 09, no. 03 (May 2000): 387–411. http://dx.doi.org/10.1142/s0218216500000189.

Full text of the source
Annotation:
Let L(G) be the second skew-symmetric cohomology group of the residual space of a graph G. We determine L(G) in the case G is a 3-connected simple graph, and give the structure of L(G) in the case of G is a complete graph and a complete bipartite graph. By using these results, we determine the Wu invariants in L(G) of the spatial embeddings of the complete graph and those of the complete bipartite graph, respectively. Since the Wu invariant of a spatial embedding is a complete invariant up to homology which is an equivalence relation on spatial embeddings introduced in [12], we give a homology classification of the spatial embeddings of such graphs.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Duong, Chi Thang, Trung Dung Hoang, Hongzhi Yin, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. "Scalable robust graph embedding with Spark." Proceedings of the VLDB Endowment 15, no. 4 (December 2021): 914–22. http://dx.doi.org/10.14778/3503585.3503599.

Full text of the source
Annotation:
Graph embedding aims at learning a vector-based representation of vertices that incorporates the structure of the graph. This representation then enables inference of graph properties. Existing graph embedding techniques, however, do not scale well to large graphs. While several techniques to scale graph embedding using compute clusters have been proposed, they require continuous communication between the compute nodes and cannot handle node failure. We therefore propose a framework for scalable and robust graph embedding based on the MapReduce model, which can distribute any existing embedding technique. Our method splits a graph into subgraphs to learn their embeddings in isolation and subsequently reconciles the embedding spaces derived for the subgraphs. We realize this idea through a novel distributed graph decomposition algorithm. In addition, we show how to implement our framework in Spark to enable efficient learning of effective embeddings. Experimental results illustrate that our approach scales well, while largely maintaining the embedding quality.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Pietrasik, Marcin, and Marek Z. Reformat. "Probabilistic Coarsening for Knowledge Graph Embeddings." Axioms 12, no. 3 (March 6, 2023): 275. http://dx.doi.org/10.3390/axioms12030275.

Full text of the source
Annotation:
Knowledge graphs have risen in popularity in recent years, demonstrating their utility in applications across the spectrum of computer science. Finding their embedded representations is thus highly desirable as it makes them easily operated on and reasoned with by machines. With this in mind, we propose a simple meta-strategy for embedding knowledge graphs using probabilistic coarsening. In this approach, a knowledge graph is first coarsened before being embedded by an arbitrary embedding method. The resulting coarse embeddings are then extended down as those of the initial knowledge graph. Although straightforward, this allows for faster training by reducing knowledge graph complexity while revealing its higher-order structures. We demonstrate this empirically on four real-world datasets, which show that coarse embeddings are learned faster and are often of higher quality. We conclude that coarsening is a recommended preprocessing step regardless of the underlying embedding method used.
APA, Harvard, Vancouver, ISO, and other citation styles
12

Trisedya, Bayu Distiawan, Jianzhong Qi, and Rui Zhang. "Entity Alignment between Knowledge Graphs Using Attribute Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 297–304. http://dx.doi.org/10.1609/aaai.v33i01.3301297.

Full text of the source
Annotation:
The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models are proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs, and hence enables the integration of multiple knowledge graphs. Our model exploits large numbers of attribute triples existing in the knowledge graphs and generates attribute character embeddings. The attribute character embedding shifts the entity embeddings from two knowledge graphs into the same space by computing the similarity between entities based on their attributes. We use a transitivity rule to further enrich the number of attributes of an entity to enhance the attribute character embedding. Experiments using real-world knowledge bases show that our proposed model achieves consistent improvements over the baseline models by over 50% in terms of hits@1 on the entity alignment task.
APA, Harvard, Vancouver, ISO, and other citation styles
13

Hu, Ganglin, and Jun Pang. "Relation-Aware Weighted Embedding for Heterogeneous Graphs." Information Technology and Control 52, no. 1 (March 28, 2023): 199–214. http://dx.doi.org/10.5755/j01.itc.52.1.32390.

Full text of the source
Annotation:
Heterogeneous graph embedding, aiming to learn the low-dimensional representations of nodes, is effective in many tasks, such as link prediction, node classification, and community detection. Most existing graph embedding methods conducted on heterogeneous graphs treat the heterogeneous neighbours equally. Although it is possible to get node weights through attention mechanisms mainly developed using expensive recursive message-passing, they are difficult to deal with large-scale networks. In this paper, we propose R-WHGE, a relation-aware weighted embedding model for heterogeneous graphs, to resolve this issue. R-WHGE comprehensively considers structural information, semantic information, meta-paths of nodes and meta-path-based node weights to learn effective node embeddings. More specifically, we first extract the feature importance of each node and then take the nodes’ importance as node weights. A weighted random walks-based embedding learning model is proposed to generate the initial weighted node embeddings according to each meta-path. Finally, we feed these embeddings to a relation-aware heterogeneous graph neural network to generate compact embeddings of nodes, which captures relation-aware characteristics. Extensive experiments on real-world datasets demonstrate that our model is competitive against various state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other citation styles
14

Liang, Jiongqian, Saket Gurukar, and Srinivasan Parthasarathy. "MILE: A Multi-Level Framework for Scalable Graph Embedding." Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 361–72. http://dx.doi.org/10.1609/icwsm.v15i1.18067.

Full text of the source
Annotation:
Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while generating embeddings of better quality, for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. Our code and data are publicly available with detailed instructions for adding new base embedding methods: https://github.com/jiongqian/MILE.
APA, Harvard, Vancouver, ISO, and other citation styles
15

Liu, Xin, Chenyi Zhuang, Tsuyoshi Murata, Kyoung-Sook Kim, and Natthawut Kertkeidkachorn. "How much topological structure is preserved by graph embeddings?" Computer Science and Information Systems 16, no. 2 (2019): 597–614. http://dx.doi.org/10.2298/csis181001011l.

Full text of the source
Annotation:
Graph embedding aims at learning representations of nodes in a low dimensional vector space. Good embeddings should preserve the graph topological structure. To study how much such structure can be preserved, we propose evaluation methods from four aspects: 1) How well the graph can be reconstructed based on the embeddings, 2) The divergence of the original link distribution and the embedding-derived distribution, 3) The consistency of communities discovered from the graph and embeddings, and 4) To what extent we can employ embeddings to facilitate link prediction. We find that it is insufficient to rely on the embeddings to reconstruct the original graph, to discover communities, and to predict links at a high precision. Thus, the embeddings by the state-of-the-art approaches can only preserve part of the topological structure.
APA, Harvard, Vancouver, ISO, and other citation styles
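The first evaluation aspect mentioned in this annotation, graph reconstruction, is commonly measured by ranking all node pairs by embedding similarity and checking how many of the top-|E| pairs are true edges. A minimal sketch with toy 2-d embeddings; any trained embeddings could be plugged in for `emb`:

```python
from itertools import combinations

def reconstruction_precision(embeddings, edges):
    """Precision@|E|: fraction of the |E| highest-dot-product pairs that are real edges."""
    edge_set = {tuple(sorted(e)) for e in edges}
    pairs = combinations(sorted(embeddings), 2)
    # Rank every node pair by the dot product of its embedding vectors.
    scored = sorted(
        pairs,
        key=lambda p: -sum(a * b for a, b in zip(embeddings[p[0]], embeddings[p[1]])),
    )
    top = scored[:len(edge_set)]
    return sum(tuple(sorted(p)) in edge_set for p in top) / len(edge_set)

# Toy embeddings forming two tight clusters, so within-cluster pairs score highest.
emb = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0], 3: [0.1, 0.9]}
prec = reconstruction_precision(emb, edges=[(0, 1), (2, 3)])
```

On real graphs this precision is typically well below 1, which is exactly the paper's point: embeddings preserve only part of the topological structure.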
16

Kalogeropoulos, Nikitas-Rigas, Dimitris Ioannou, Dionysios Stathopoulos, and Christos Makris. "On Embedding Implementations in Text Ranking and Classification Employing Graphs." Electronics 13, no. 10 (May 12, 2024): 1897. http://dx.doi.org/10.3390/electronics13101897.

Full text of the source
Annotation:
This paper aims to enhance the Graphical Set-based model (GSB) for ranking and classification tasks by incorporating node and word embeddings. The model integrates a textual graph representation with a set-based model for information retrieval. Initially, each document in a collection is transformed into a graph representation. The proposed enhancement involves augmenting the edges of these graphs with embeddings, which can be pretrained or generated using Word2Vec and GloVe models. Additionally, an alternative aspect of our proposed model consists of the Node2Vec embedding technique, which is applied to a graph created at the collection level through the extension of the set-based model, providing edges based on the graph’s structural information. Core decomposition is utilized as a method for pruning the graph. As a byproduct of our information retrieval model, we explore text classification techniques based on our approach. Node2Vec embeddings are generated by our graphs and are applied in order to represent the different documents in our collections that have undergone various preprocessing methods. We compare the graph-based embeddings with the Doc2Vec and Word2Vec representations to elaborate on whether our approach can be implemented on topic classification problems. For that reason, we then train popular classifiers on the document embeddings obtained from each model.
APA, Harvard, Vancouver, ISO, and other citation styles
17

BOZKURT, ILKER NADI, HAI HUANG, BRUCE MAGGS, ANDRÉA RICHA, and MAVERICK WOO. "Mutual Embeddings." Journal of Interconnection Networks 15, no. 01n02 (March 2015): 1550001. http://dx.doi.org/10.1142/s0219265915500012.

Full text of the source
Annotation:
This paper introduces a type of graph embedding called a mutual embedding. A mutual embedding between two n-node graphs G1 = (V1, E1) and G2 = (V2, E2) is an identification of the vertices of V1 and V2, i.e., a bijection π : V1 → V2, together with an embedding of G1 into G2 and an embedding of G2 into G1 where in the embedding of G1 into G2, each node u of G1 is mapped to π(u) in G2 and in the embedding of G2 into G1 each node v of G2 is mapped to π⁻¹(v) in G1. The identification of vertices in G1 and G2 constrains the two embeddings so that it is not always possible for both to exhibit small congestion and dilation, even if there are traditional one-way embeddings in both directions with small congestion and dilation. Mutual embeddings arise in the context of finding preconditioners for accelerating the convergence of iterative methods for solving systems of linear equations. We present mutual embeddings between several types of graphs such as linear arrays, cycles, trees, and meshes, prove lower bounds on mutual embeddings between several classes of graphs, and present some open problems related to optimal mutual embeddings.
APA, Harvard, Vancouver, ISO, and other citation styles
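The dilation tension the paper studies can already be seen for a linear array and a cycle under the identity identification of vertices. A small sketch computing dilation (the maximum stretch of any guest edge in the host graph), assuming each graph is embedded into the other along the identity map; congestion is left out for brevity:

```python
from collections import deque

def bfs_distances(adj, source):
    """BFS distances from source over an adjacency list."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def dilation(guest_edges, host_adj):
    """Maximum host distance between the images of a guest edge's endpoints."""
    return max(bfs_distances(host_adj, u)[v] for u, v in guest_edges)

def to_adj(edges, n):
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

n = 6
path_edges = [(i, i + 1) for i in range(n - 1)]     # linear array on n nodes
cycle_edges = path_edges + [(n - 1, 0)]             # cycle on the same nodes

# Path into cycle (identity map): every path edge is also a cycle edge.
d_path_in_cycle = dilation(path_edges, to_adj(cycle_edges, n))
# Cycle into path (identity map): the wrap-around edge stretches across the whole path.
d_cycle_in_path = dilation(cycle_edges, to_adj(path_edges, n))
```

The asymmetry (dilation 1 one way, n − 1 the other) under a single shared identification is the kind of constraint that makes mutual embeddings harder than two independent one-way embeddings.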
18

Cruceru, Calin, Gary Becigneul, and Octavian-Eugen Ganea. "Computationally Tractable Riemannian Manifolds for Graph Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7133–41. http://dx.doi.org/10.1609/aaai.v35i8.16877.

Full text of the source
Annotation:
Representing graphs as sets of node embeddings in certain curved Riemannian manifolds has recently gained momentum in machine learning due to their desirable geometric inductive biases (e.g., hierarchical structures benefit from hyperbolic geometry). However, going beyond embedding spaces of constant sectional curvature, while potentially more representationally powerful, proves to be challenging as one can easily lose the appeal of computationally tractable tools such as geodesic distances or Riemannian gradients. Here, we explore two computationally efficient matrix manifolds, showcasing how to learn and optimize graph embeddings in these Riemannian spaces. Empirically, we demonstrate consistent improvements over Euclidean geometry while often outperforming hyperbolic and elliptical embeddings based on various metrics that capture different graph properties. Our results serve as new evidence for the benefits of non-Euclidean embeddings in machine learning pipelines.
APA, Harvard, Vancouver, ISO, and other citation styles
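As a concrete instance of the non-Euclidean geometries this annotation refers to, the distance in the Poincaré ball model of hyperbolic space has a simple closed form, and hierarchies benefit because distances grow rapidly near the boundary of the ball. A small sketch of that formula; the paper's matrix manifolds use different, more involved metrics:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit ball (hyperbolic space)."""
    sq_norm = lambda x: sum(a * a for a in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1 - sq_norm(u)) * (1 - sq_norm(v))
    return math.acosh(1 + 2 * diff / denom)

origin = [0.0, 0.0]
midway = [0.5, 0.0]
near_boundary = [0.95, 0.0]

# Moving the same Euclidean step closer to the boundary costs far more hyperbolic distance.
d_mid = poincare_distance(origin, midway)
d_far = poincare_distance(origin, near_boundary)
```

This blow-up near the boundary is the geometric inductive bias that lets trees embed with low distortion, the effect the paper generalizes beyond constant-curvature spaces.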
19

Merchant, Arpit, Aristides Gionis, and Michael Mathioudakis. "Succinct graph representations as distance oracles." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2297–306. http://dx.doi.org/10.14778/3551793.3551794.

Full text of the source
Annotation:
Distance oracles answer shortest-path queries between any pair of nodes in a graph. They are often built using succinct graph representations such as spanners, sketches, and compressors to minimize oracle size and query answering latency. Node embeddings, in particular, offer graph representations that place adjacent nodes nearby each other in a low-rank space. However, their use in the design of distance oracles has not been sufficiently studied. In this paper, we empirically compare exact distance oracles constructed based on a variety of node embeddings and other succinct representations. We evaluate twelve such oracles along three measures of efficiency: construction time, memory requirements, and query-processing time over fourteen real datasets and four synthetic graphs. We show that distances between embedding vectors are excellent estimators of graph distances when graphs are well-structured, but less so for more unstructured graphs. Overall, our findings suggest that exact oracles based on embeddings can be constructed faster than multi-dimensional scaling (MDS) but slower than compressed adjacency indexes, require less memory than landmark oracles but more than sparsifiers or indexes, can answer queries faster than indexes but slower than MDS, and are exact more often with a smaller additive error than spanners (that have multiplicative error) while not being lossless like adjacency lists. Finally, while the exactness of such oracles is infeasible to maintain for huge graphs even under large amounts of resources, we empirically demonstrate that approximate oracles based on GOSH embeddings can efficiently scale to graphs of 100M+ nodes with only small additive errors in distance estimations.
APA, Harvard, Vancouver, ISO, and other citation styles
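The core comparison in this study, embedding-space distance as an estimator of true graph distance, can be sketched with BFS ground truth and hand-placed 1-d "embeddings". On this deliberately well-structured toy graph the estimator is exact; real oracles use learned multi-dimensional vectors and incur the additive errors the paper measures:

```python
from collections import deque

def bfs_distance(adj, s, t):
    """Exact shortest-path length between s and t (None if unreachable)."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

# Path graph 0-1-2-3 with hand-placed 1-d embeddings along a line.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
emb = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}

estimate = abs(emb[0] - emb[3])   # oracle-style answer from the embedding
exact = bfs_distance(adj, 0, 3)   # ground truth via BFS
```

The oracle answers in constant time from two stored vectors, while BFS touches the graph; trading that speed against estimation error is exactly the design space the paper surveys.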
20

Cheng, Pengyu, Yitong Li, Xinyuan Zhang, Liqun Chen, David Carlson, and Lawrence Carin. "Dynamic Embedding on Textual Networks via a Gaussian Process." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7562–69. http://dx.doi.org/10.1609/aaai.v34i05.6255.

Full text of the source
Annotation:
Textual network embedding aims to learn low-dimensional representations of text-annotated nodes in a graph. Prior work in this area has typically focused on fixed graph structures; however, real-world networks are often dynamic. We address this challenge with a novel end-to-end node-embedding model, called Dynamic Embedding for Textual Networks with a Gaussian Process (DetGP). After training, DetGP can be applied efficiently to dynamic graphs without re-training or backpropagation. The learned representation of each node is a combination of textual and structural embeddings. Because the structure is allowed to be dynamic, our method uses the Gaussian process to take advantage of its non-parametric properties. To use both local and global graph structures, diffusion is used to model multiple hops between neighbors. The relative importance of global versus local structure for the embeddings is learned automatically. With the non-parametric nature of the Gaussian process, updating the embeddings for a changed graph structure requires only a forward pass through the learned model. Considering link prediction and node classification, experiments demonstrate the empirical effectiveness of our method compared to baseline approaches. We further show that DetGP can be straightforwardly and efficiently applied to dynamic textual networks.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Wu, Xueyi, Yuanyuan Xu, Wenjie Zhang und Ying Zhang. „Billion-Scale Bipartite Graph Embedding: A Global-Local Induced Approach“. Proceedings of the VLDB Endowment 17, Nr. 2 (Oktober 2023): 175–83. http://dx.doi.org/10.14778/3626292.3626300.

Der volle Inhalt der Quelle
Annotation:
Bipartite graph embedding (BGE), as the fundamental task in bipartite network analysis, is to map each node to compact low-dimensional vectors that preserve intrinsic properties. The existing solutions towards BGE fall into two groups: metric-based methods and graph neural network-based (GNN-based) methods. The latter typically generates higher-quality embeddings than the former due to the strong representation ability of deep learning. Nevertheless, none of the existing GNN-based methods can handle billion-scale bipartite graphs due to the expensive message passing or complex modelling choices. Hence, existing solutions face a challenge in achieving both embedding quality and model scalability. Motivated by this, we propose a novel graph neural network named AnchorGNN based on a global-local learning framework, which can generate high-quality BGE and scale to billion-scale bipartite graphs. Concretely, AnchorGNN leverages a novel anchor-based message passing schema for global learning, which enables global knowledge to be incorporated to generate node embeddings. Meanwhile, AnchorGNN offers an efficient one-hop local structure modelling using maximum likelihood estimation for bipartite graphs with rational analysis, avoiding large adjacency matrix construction. Both global information and local structure are integrated to generate distinguishable node embeddings. Extensive experiments demonstrate that AnchorGNN outperforms the best competitor by up to 36% in accuracy and achieves up to 28 times speed-up against the only metric-based baseline on billion-scale bipartite graphs.
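The anchor idea, aggregating over a small set of anchor nodes instead of the full edge set, can be illustrated with a dependency-free toy sketch (the names, softmax scoring, and 0.5 mixing constant are our assumptions, not AnchorGNN's actual formulation):

```python
import math

def anchor_message_passing(feats, anchors, temp=1.0):
    """One round of anchor-based aggregation: every node attends to a small
    set of anchor nodes rather than its full neighbourhood, so the cost is
    O(n * k) for k anchors instead of O(edges)."""
    out = {}
    for u, fu in feats.items():
        # Softmax attention over anchors, scored by negative feature distance.
        scores = [math.exp(-math.dist(fu, feats[a]) / temp) for a in anchors]
        z = sum(scores)
        agg = [0.0] * len(fu)
        for a, s in zip(anchors, scores):
            for i, x in enumerate(feats[a]):
                agg[i] += (s / z) * x
        # Mix the node's own features with the anchor aggregate.
        out[u] = tuple(0.5 * x + 0.5 * y for x, y in zip(fu, agg))
    return out

# Two loose "communities"; nodes 0 and 2 serve as anchors.
feats = {0: (1.0, 0.0), 1: (0.9, 0.1), 2: (0.0, 1.0), 3: (0.1, 0.9)}
new_feats = anchor_message_passing(feats, anchors=[0, 2])
```

Each output is a convex combination of unit-sum inputs, so the feature mass is preserved while nodes drift toward their nearest anchor, which is the intuition behind injecting global knowledge through anchors.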
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Peng, Yanhui, Jing Zhang, Cangqi Zhou und Shunmei Meng. „Knowledge Graph Entity Alignment Using Relation Structural Similarity“. Journal of Database Management 33, Nr. 1 (01.01.2022): 1–19. http://dx.doi.org/10.4018/jdm.305733.

Der volle Inhalt der Quelle
Annotation:
Embedding-based entity alignment, which represents knowledge graphs as low-dimensional embeddings and finds entities in different knowledge graphs that semantically represent the same real-world entity by measuring the similarities between entity embeddings, has achieved promising results. However, existing methods are still challenged by the error accumulation of embeddings along multi-step paths and the semantic information loss. This paper proposes a novel embedding-based entity alignment method that iteratively aligns both entities and relations with high similarities as training data. Newly-aligned entities and relations are used to calibrate the corresponding embeddings in the unified embedding space, which reduces the error accumulation. To reduce the negative impact of semantic information loss, the authors propose to use relation structural similarity instead of embedding similarity to align relations. Experimental results on five widely used real-world datasets show that the proposed method significantly outperforms several state-of-the-art methods for entity alignment.
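One round of the bootstrapping loop described above, accepting high-similarity pairs as new training data, can be sketched as follows (the mutual-best check, threshold, and toy embeddings are our assumptions, not the paper's exact procedure):

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def align_round(emb1, emb2, threshold=0.95):
    """One round of iterative alignment: accept a cross-graph entity pair only
    if each side is the other's best match and the similarity clears a high
    threshold. Accepted pairs would then calibrate the shared embedding space."""
    aligned = {}
    for e1, v1 in emb1.items():
        best2 = max(emb2, key=lambda e2: cosine(v1, emb2[e2]))
        # Mutual-best check reduces error accumulation from one-sided matches.
        back = max(emb1, key=lambda e1b: cosine(emb2[best2], emb1[e1b]))
        if back == e1 and cosine(v1, emb2[best2]) >= threshold:
            aligned[e1] = best2
    return aligned

# Toy 2-d embeddings of two knowledge graphs (invented names).
kg1 = {"Paris": (1.0, 0.1), "Berlin": (0.1, 1.0)}
kg2 = {"paris_fr": (0.98, 0.12), "berlin_de": (0.05, 0.99), "oslo_no": (0.5, 0.5)}
pairs = align_round(kg1, kg2)
```

Here `oslo_no` is left unaligned because no kg1 entity is sufficiently close, which is exactly the conservative behaviour needed when aligned pairs feed back into training.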
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Friesen, Tyler, und Vassily Olegovich Manturov. „Checkerboard embeddings of *-graphs into nonorientable surfaces“. Journal of Knot Theory and Its Ramifications 23, Nr. 07 (Juni 2014): 1460004. http://dx.doi.org/10.1142/s0218216514600049.

Der volle Inhalt der Quelle
Annotation:
This paper considers *-graphs in which all vertices have degree 4 or 6, and studies the question of calculating the genus of nonorientable surfaces into which such graphs may be embedded. In a previous paper [Embeddings of *-graphs into 2-surfaces, preprint (2012), arXiv:1212.5646] by the authors, the problem of calculating whether a given *-graph in which all vertices have degree 4 or 6 admits a ℤ2-homologically trivial embedding into a given orientable surface was shown to be equivalent to a problem on matrices. Here we extend those results to nonorientable surfaces. The embeddability condition that we obtain yields quadratic-time algorithms to determine whether a *-graph with all vertices of degree 4 or 6 admits a ℤ2-homologically trivial embedding into the projective plane or into the Klein bottle.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Peng, Yun, Byron Choi und Jianliang Xu. „Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art“. Data Science and Engineering 6, Nr. 2 (28.04.2021): 119–41. http://dx.doi.org/10.1007/s41019-021-00155-3.

Der volle Inhalt der Quelle
Annotation:
Graphs have been widely used to represent complex data in many applications, such as e-commerce, social networks, and bioinformatics. Efficient and effective analysis of graph data is important for graph-based applications. However, most graph analysis tasks are combinatorial optimization (CO) problems, which are NP-hard. Recent studies have focused extensively on the potential of using machine learning (ML) to solve graph-based CO problems. Most recent methods follow the two-stage framework. The first stage is graph representation learning, which embeds the graphs into low-dimensional vectors. The second stage uses machine learning to solve the CO problems using the embeddings of the graphs learned in the first stage. The works for the first stage can be classified into two categories, graph embedding methods and end-to-end learning methods. For graph embedding methods, the learning of the embeddings of the graphs has its own objective, which may not rely on the CO problems to be solved. The CO problems are solved by independent downstream tasks. For end-to-end learning methods, the learning of the embeddings of the graphs does not have its own objective and is an intermediate step of the learning procedure of solving the CO problems. The works for the second stage can also be classified into two categories, non-autoregressive methods and autoregressive methods. Non-autoregressive methods predict a solution for a CO problem in one shot. A non-autoregressive method predicts a matrix that denotes the probability of each node/edge being a part of a solution of the CO problem. The solution can be computed from the matrix using search heuristics such as beam search. Autoregressive methods iteratively extend a partial solution step by step. At each step, an autoregressive method predicts a node/edge conditioned on the current partial solution, which is then used to extend it.
In this survey, we provide a thorough overview of recent studies of graph learning-based CO methods. The survey ends with several remarks on future research directions.
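The non-autoregressive versus autoregressive distinction can be made concrete on a toy vertex-cover-style instance (the scoring functions and the instance are invented for illustration):

```python
def one_shot_decode(probs, k):
    """Non-autoregressive: a model predicts all node probabilities at once;
    a solution is read off in one shot (here: the top-k nodes)."""
    return sorted(sorted(probs, key=probs.get, reverse=True)[:k])

def autoregressive_decode(score_step, nodes, k):
    """Autoregressive: extend a partial solution one node at a time;
    each step re-scores candidates conditioned on the partial solution."""
    partial = []
    for _ in range(k):
        candidates = [u for u in nodes if u not in partial]
        partial.append(max(candidates, key=lambda u: score_step(u, partial)))
    return sorted(partial)

# Toy instance: cover the edges of the path 0-1-2-3 with 2 nodes.
edges = [(0, 1), (1, 2), (2, 3)]
probs = {0: 0.2, 1: 0.9, 2: 0.8, 3: 0.1}  # stand-in for model output

def coverage_gain(u, partial):
    # Conditioned scoring: how many *new* edges node u would cover.
    covered = {e for e in edges if e[0] in partial or e[1] in partial}
    return sum(1 for e in edges if u in e and e not in covered)

greedy = one_shot_decode(probs, 2)
stepwise = autoregressive_decode(coverage_gain, [0, 1, 2, 3], 2)
```

On this instance both decoders return the cover {1, 2}; the difference is that the autoregressive scorer can react to the partial solution, which the one-shot decoder cannot.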
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

FÜRER, MARTIN, und SHIVA PRASAD KASIVISWANATHAN. „Approximately Counting Embeddings into Random Graphs“. Combinatorics, Probability and Computing 23, Nr. 6 (09.07.2014): 1028–56. http://dx.doi.org/10.1017/s0963548314000339.

Der volle Inhalt der Quelle
Annotation:
Let H be a graph, and let C_H(G) be the number of (subgraph isomorphic) copies of H contained in a graph G. We investigate the fundamental problem of estimating C_H(G). Previous results cover only a few specific instances of this general problem, for example the case when H has degree at most one (the monomer-dimer problem). In this paper we present the first general subcase of the subgraph isomorphism counting problem which is almost always efficiently approximable. The results rely on a new graph decomposition technique. Informally, the decomposition is a labelling of the vertices such that every edge is between vertices with different labels, and for every vertex all neighbours with a higher label have identical labels. The labelling implicitly generates a sequence of bipartite graphs, which permits us to break the problem of counting embeddings of large subgraphs into that of counting embeddings of small subgraphs. Using this method, we present a simple randomized algorithm for the counting problem. For all decomposable graphs H and all graphs G, the algorithm is an unbiased estimator. Furthermore, for all graphs H having a decomposition where each of the bipartite graphs generated is small and almost all graphs G, the algorithm is a fully polynomial randomized approximation scheme. We show that the graph classes of H for which we obtain a fully polynomial randomized approximation scheme for almost all G include graphs of degree at most two, bounded-degree forests, bounded-width grid graphs, subdivisions of bounded-degree graphs, and major subclasses of outerplanar graphs, series-parallel graphs and planar graphs of large girth, whereas unbounded-width grid graphs are excluded. Moreover, our general technique can easily be applied to prove many more similar results.
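The paper's decomposition-based estimator is far more sample-efficient than naive sampling, but the generic unbiased Monte Carlo baseline it improves upon is easy to sketch (triangles as the pattern H; all names are illustrative):

```python
import random
from math import comb

def triangle_count_estimate(adj, samples, rng):
    """Unbiased estimator of C_H(G) for H = triangle: sample vertex triples
    uniformly and rescale the hit rate by C(n, 3). Unbiased because each
    triple is a Bernoulli trial with success probability C_H(G) / C(n, 3)."""
    nodes = list(adj)
    hits = 0
    for _ in range(samples):
        u, v, w = rng.sample(nodes, 3)
        if v in adj[u] and w in adj[u] and w in adj[v]:
            hits += 1
    return hits / samples * comb(len(nodes), 3)

# The complete graph K4 contains exactly 4 triangles.
adj = {u: {v for v in range(4) if v != u} for u in range(4)}
est = triangle_count_estimate(adj, samples=1000, rng=random.Random(0))
```

The weakness this baseline shares with all naive sampling, vanishing hit rates for sparse G or larger H, is precisely what the paper's bipartite decomposition addresses.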
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Barthel, Senja. „On chirality of toroidal embeddings of polyhedral graphs“. Journal of Knot Theory and Its Ramifications 26, Nr. 08 (22.05.2017): 1750050. http://dx.doi.org/10.1142/s021821651750050x.

Der volle Inhalt der Quelle
Annotation:
We investigate properties of spatial graphs on the standard torus. It is known that nontrivial embeddings of planar graphs in the torus contain a nontrivial knot or a nonsplit link due to [2, 3]. Building on this and using the chirality of torus knots and links [9, 10], we prove that the nontrivial embeddings of simple 3-connected planar graphs in the standard torus are chiral. For the case that the spatial graph contains a nontrivial knot, the statement was shown by Castle et al. [5]. We give an alternative proof using minors instead of the Euler characteristic. To prove the case in which the graph embedding contains a nonsplit link, we show the chirality of Hopf ladders with at least three rungs, thus generalizing a theorem of Simon [12].
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Shang, Chao, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He und Bowen Zhou. „End-to-End Structure-Aware Convolutional Networks for Knowledge Base Completion“. Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 3060–67. http://dx.doi.org/10.1609/aaai.v33i01.33013060.

Der volle Inhalt der Quelle
Annotation:
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al. to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
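The role of WGCN's learnable per-relation weights in neighbor aggregation can be sketched with fixed weights standing in for learned ones (a simplification of the actual layer, which also applies a feature transformation and nonlinearity):

```python
def wgcn_layer(node_feats, edges, rel_weight):
    """One weighted-GCN aggregation step in the spirit described above: each
    edge type t carries a scalar weight alpha_t that scales how much its
    neighbours contribute. (In WGCN the alphas are learned; here they are
    fixed constants for illustration.)"""
    out = {}
    for u, fu in node_feats.items():
        agg = list(fu)  # self-connection keeps the node's own features
        for src, rel, dst in edges:
            if dst == u:
                a = rel_weight[rel]
                for i, x in enumerate(node_feats[src]):
                    agg[i] += a * x
        out[u] = tuple(agg)
    return out

# Toy 1-d features; "cites" edges are down-weighted relative to "friend".
feats = {0: (1.0,), 1: (2.0,), 2: (4.0,)}
edges = [(1, "friend", 0), (2, "cites", 0)]
out = wgcn_layer(feats, edges, rel_weight={"friend": 1.0, "cites": 0.25})
```

Node 0 receives 1.0 + 1.0·2.0 + 0.25·4.0 = 4.0, showing how the relation weights adapt the amount of information taken from each neighbour type.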
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Song, Yumeng, Xiaohua Li, Fangfang Li und Ge Yu. „Learning from Feature and Global Topologies: Adaptive Multi-View Parallel Graph Contrastive Learning“. Mathematics 12, Nr. 14 (21.07.2024): 2277. http://dx.doi.org/10.3390/math12142277.

Der volle Inhalt der Quelle
Annotation:
To address the limitations of existing graph contrastive learning methods, which fail to adaptively integrate feature and topological information and struggle to efficiently capture multi-hop information, we propose an adaptive multi-view parallel graph contrastive learning framework (AMPGCL). It is an unsupervised graph representation learning method designed to generate task-agnostic node embeddings. AMPGCL constructs and encodes feature and topological views to mine feature and global topological information. To encode global topological information, we introduce an H-Transformer to decouple multi-hop neighbor aggregations, capturing global topology from node subgraphs. AMPGCL learns embedding consistency among feature, topology, and original graph encodings through a multi-view contrastive loss, generating semantically rich embeddings while avoiding information redundancy. Experiments on nine real datasets demonstrate that AMPGCL consistently outperforms thirteen state-of-the-art graph representation learning models in classification accuracy, whether in homophilous or non-homophilous graphs.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

KOMLÓS, JÁNOS. „The Blow-up Lemma“. Combinatorics, Probability and Computing 8, Nr. 1-2 (Januar 1999): 161–76. http://dx.doi.org/10.1017/s0963548398003502.

Der volle Inhalt der Quelle
Annotation:
Extremal graph theory has a great number of conjectures concerning the embedding of large sparse graphs into dense graphs. Szemerédi's Regularity Lemma is a valuable tool in finding embeddings of small graphs. The Blow-up Lemma, proved recently by Komlós, Sárközy and Szemerédi, can be applied to obtain approximate versions of many of the embedding conjectures. In this paper we review recent developments in the area.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Wang, Bin, Yu Chen, Jinfang Sheng und Zhengkun He. „Attributed Graph Embedding Based on Attention with Cluster“. Mathematics 10, Nr. 23 (01.12.2022): 4563. http://dx.doi.org/10.3390/math10234563.

Der volle Inhalt der Quelle
Annotation:
Graph embedding is of great significance for the research and analysis of graphs. Graph embedding aims to map the nodes of a network to low-dimensional vectors while preserving the node information of the original graph. In recent years, the appearance of graph neural networks has significantly improved the accuracy of graph embedding. However, the influence of clusters was not considered in existing graph neural network (GNN)-based methods, so this paper proposes a new method to incorporate the influence of clusters into the generation of graph embedding. We use the attention mechanism to pass the message of the cluster pooled result and integrate the whole process into the graph autoencoder as the third layer of the encoder. The experimental results show that our model has made great improvement over the baseline methods in the node clustering and link prediction tasks, demonstrating that the embeddings generated by our model have excellent expressiveness.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Ye, Yutong, Xiang Lian und Mingsong Chen. „Efficient Exact Subgraph Matching via GNN-Based Path Dominance Embedding“. Proceedings of the VLDB Endowment 17, Nr. 7 (März 2024): 1628–41. http://dx.doi.org/10.14778/3654621.3654630.

Der volle Inhalt der Quelle
Annotation:
The classic problem of exact subgraph matching returns those subgraphs in a large-scale data graph that are isomorphic to a given query graph, which has gained increasing importance in many real-world applications such as social network analysis, knowledge graph discovery in the Semantic Web, bibliographical network mining, and so on. In this paper, we propose a novel and effective graph neural network (GNN)-based path embedding framework (GNN-PE), which allows efficient exact subgraph matching without introducing false dismissals. Unlike traditional GNN-based graph embeddings that only produce approximate subgraph matching results, in this paper, we carefully devise GNN-based embeddings for paths, such that: if two paths (and 1-hop neighbors of vertices on them) have the subgraph relationship, their corresponding GNN-based embedding vectors will strictly follow the dominance relationship. With such a newly designed property of path dominance embeddings, we are able to propose effective pruning strategies based on path label/dominance embeddings and guarantee no false dismissals for subgraph matching. We build multidimensional indexes over path embedding vectors, and develop an efficient subgraph matching algorithm by traversing indexes over graph partitions in parallel and applying our pruning methods. We also propose a cost-model-based query plan that obtains query paths from the query graph with low query cost. Through extensive experiments, we confirm the efficiency and effectiveness of our proposed GNN-PE approach for exact subgraph matching on both real and synthetic graph data.
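The dominance relationship at the heart of the pruning strategy is a coordinate-wise comparison of embedding vectors; a minimal sketch (toy vectors and names invented for illustration):

```python
def dominates(a, b):
    """Vector a dominates b when every coordinate of a is <= the matching
    coordinate of b. In the scheme described above, a query path can only be
    a subgraph of a data path whose embedding it dominates, so failing this
    check prunes the candidate with no false dismissals."""
    return all(x <= y for x, y in zip(a, b))

def prune_candidates(query_emb, data_embs):
    # Keep only data paths that could still contain the query path.
    return [pid for pid, emb in data_embs.items() if dominates(query_emb, emb)]

# Toy 2-d path embeddings for three data paths.
data_embs = {"p1": (2.0, 3.0), "p2": (1.0, 1.5), "p3": (0.5, 0.5)}
survivors = prune_candidates((1.0, 2.0), data_embs)
```

Only `p1` survives: `p2` fails on the second coordinate and `p3` on the first, so both are pruned before any expensive isomorphism test. Multidimensional indexes (as in the paper) make this filter sublinear rather than a scan.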
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas und Shivaram Venkataraman. „Demo of marius“. Proceedings of the VLDB Endowment 14, Nr. 12 (Juli 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Der volle Inhalt der Quelle
Annotation:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently-introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link-prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Myklebust, Erik B., Ernesto Jiménez-Ruiz, Jiaoyan Chen, Raoul Wolf und Knut Erik Tollefsen. „Prediction of adverse biological effects of chemicals using knowledge graph embeddings“. Semantic Web 13, Nr. 3 (06.04.2022): 299–338. http://dx.doi.org/10.3233/sw-222804.

Der volle Inhalt der Quelle
Annotation:
We have created a knowledge graph based on major data sources used in ecotoxicological risk assessment. We have applied this knowledge graph to an important task in risk assessment, namely chemical effect prediction. We have evaluated nine knowledge graph embedding models from a selection of geometric, decomposition, and convolutional models on this prediction task. We show that using knowledge graph embeddings can increase the accuracy of effect prediction with neural networks. Furthermore, we have implemented a fine-tuning architecture which adapts the knowledge graph embeddings to the effect prediction task and leads to a better performance. Finally, we evaluate certain characteristics of the knowledge graph embedding models to shed light on the individual model performance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Sheng, Jinfang, Zili Yang, Bin Wang und Yu Chen. „Attribute Graph Embedding Based on Multi-Order Adjacency Views and Attention Mechanisms“. Mathematics 12, Nr. 5 (27.02.2024): 697. http://dx.doi.org/10.3390/math12050697.

Der volle Inhalt der Quelle
Annotation:
Graph embedding plays an important role in the analysis and study of typical non-Euclidean data, such as graphs. Graph embedding aims to transform complex graph structures into vector representations for further machine learning or data mining tasks. It helps capture relationships and similarities between nodes, providing better representations for various tasks on graphs. Different orders of neighbors have different impacts on the generation of node embedding vectors. Therefore, this paper proposes a multi-order adjacency view encoder to fuse the feature information of neighbors at different orders. We generate different node views for different orders of neighbor information, consider different orders of neighbor information through different views, and then use attention mechanisms to integrate node embeddings from different views. Finally, we evaluate the effectiveness of our model through downstream tasks on the graph. Experimental results demonstrate that our model achieves improvements in attributed graph clustering and link prediction tasks compared to existing methods, indicating that the generated embedding representations have higher expressiveness.
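The attention-based fusion of per-view embeddings can be sketched as a softmax-weighted sum (in the model the scores are learned; here they are fixed constants, and all names are illustrative):

```python
import math

def attend_views(view_embs, scores):
    """Fuse one node's embeddings from several adjacency views with softmax
    attention weights, so more informative views contribute more."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(view_embs[0])
    return tuple(
        sum(w * v[i] for w, v in zip(weights, view_embs)) for i in range(dim)
    )

# Toy views built from 1-hop and 2-hop neighbour information of one node.
one_hop = (1.0, 0.0)
two_hop = (0.0, 1.0)
# Score chosen so the 1-hop view gets weight 0.75 and the 2-hop view 0.25.
fused = attend_views([one_hop, two_hop], scores=[math.log(3), 0.0])
```

The fused embedding is a convex combination of the views, which is how different orders of neighbour information end up blended into one node representation.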
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Jing, Baoyu, Yuchen Yan, Kaize Ding, Chanyoung Park, Yada Zhu, Huan Liu und Hanghang Tong. „Sterling: Synergistic Representation Learning on Bipartite Graphs“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 12 (24.03.2024): 12976–84. http://dx.doi.org/10.1609/aaai.v38i12.29195.

Der volle Inhalt der Quelle
Annotation:
A fundamental challenge of bipartite graph representation learning is how to extract informative node embeddings. Self-Supervised Learning (SSL) is a promising paradigm to address this challenge. Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs. Contrastive learning usually requires a large number of negative node pairs, which could lead to computational burden and semantic errors. In this paper, we introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs. STERLING preserves the unique local and global synergies in bipartite graphs. The local synergies are captured by maximizing the similarity of the inter-type and intra-type positive node pairs, and the global synergies are captured by maximizing the mutual information of co-clusters. Theoretical analysis demonstrates that STERLING could improve the connectivity between different node types in the embedding space. Extensive empirical evaluation on various benchmark datasets and tasks demonstrates the effectiveness of STERLING for extracting node embeddings.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Naimi, Ramin, und Elena Pavelescu. „Linear embeddings of K9 are triple linked“. Journal of Knot Theory and Its Ramifications 23, Nr. 03 (März 2014): 1420001. http://dx.doi.org/10.1142/s0218216514200016.

Der volle Inhalt der Quelle
Annotation:
We use the theory of oriented matroids to show that any linear embedding of K9, the complete graph on nine vertices, into 3-space contains a non-split link with three components. This shows that Sachs' conjecture on linear, linkless embeddings of graphs, whether true or false, does not extend to 3-links.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Suo, Xinhua, Bing Guo, Yan Shen, Wei Wang, Yaosen Chen und Zhen Zhang. „Embodying the Number of an Entity’s Relations for Knowledge Representation Learning“. International Journal of Software Engineering and Knowledge Engineering 31, Nr. 10 (Oktober 2021): 1495–515. http://dx.doi.org/10.1142/s0218194021500509.

Der volle Inhalt der Quelle
Annotation:
Knowledge representation learning (knowledge graph embedding) plays a critical role in the application of knowledge graph construction. Multi-source information knowledge representation learning, one of the most promising classes of knowledge representation learning at present, mainly focuses on learning a large amount of useful additional information about entities and relations in the knowledge graph into their embeddings, such as text description information, entity type information, visual information, graph structure information, etc. However, a simple but very common kind of information has been ignored: the number of an entity’s relations, which reflects the number of an entity’s semantic types. This work proposes a multi-source knowledge representation learning model, KRL-NER, which embodies information on the number of an entity’s relations into the entities’ embeddings through an attention mechanism. Specifically, we first design a submodel of KRL-NER, called LearnNER, which learns an embedding that includes information on the number of an entity’s relations; then we obtain a new embedding by applying attention, together with this embedding, to the embeddings learned by models such as TransE; finally, we perform the translation based on the new embedding. Experiments on knowledge graph tasks, such as entity prediction, entity prediction under different relation types, and triple classification, are carried out to verify our model. The results show that our model is effective on large-scale knowledge graphs, e.g. FB15K.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Kong, Fanshuang, Richong Zhang, Yongyi Mao und Ting Deng. „LENA: Locality-Expanded Neural Embedding for Knowledge Base Completion“. Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 2895–902. http://dx.doi.org/10.1609/aaai.v33i01.33012895.

Der volle Inhalt der Quelle
Annotation:
Embedding based models for knowledge base completion have demonstrated great successes and attracted significant research interest. In this work, we observe that existing embedding models all have their loss functions decomposed into atomic loss functions, each on a triple or a postulated edge in the knowledge graph. Such an approach essentially implies that conditioned on the embeddings of the triple, whether the triple is factual is independent of the structure of the knowledge graph. Although arguably the embeddings of the entities and relation in the triple contain certain structural information of the knowledge base, we believe that the global information contained in the embeddings of the triple can be insufficient and such an assumption is overly optimistic in heterogeneous knowledge bases. Motivated by this understanding, in this work we propose a new embedding model in which we discard the assumption that the embeddings of the entities and relation in a triple are a sufficient statistic for the triple’s factual existence. More specifically, the proposed model assumes that whether a triple is factual depends not only on the embedding of the triple but also on the embeddings of the entities and relations in a larger graph neighbourhood. In this model, attention mechanisms are constructed to select the relevant information in the graph neighbourhood so that irrelevant signals in the neighbourhood are suppressed. Termed locality-expanded neural embedding with attention (LENA), this model is tested on four standard datasets and compared with several state-of-the-art models for knowledge base completion. Extensive experiments suggest that LENA outperforms the existing models in virtually every metric.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Monnin, Pierre, Chedy Raïssi, Amedeo Napoli und Adrien Coulet. „Discovering alignment relations with Graph Convolutional Networks: A biomedical case study“. Semantic Web 13, Nr. 3 (06.04.2022): 379–98. http://dx.doi.org/10.3233/sw-210452.

Der volle Inhalt der Quelle
Annotation:
Knowledge graphs are freely aggregated, published, and edited in the Web of data, and thus may overlap. Hence, a key task resides in aligning (or matching) their content. This task encompasses the identification, within an aggregated knowledge graph, of nodes that are equivalent, more specific, or weakly related. In this article, we propose to match nodes within a knowledge graph by (i) learning node embeddings with Graph Convolutional Networks such that similar nodes have low distances in the embedding space, and (ii) clustering nodes based on their embeddings, in order to suggest alignment relations between nodes of a same cluster. We conducted experiments with this approach on the real world application of aligning knowledge in the field of pharmacogenomics, which motivated our study. We particularly investigated the interplay between domain knowledge and GCN models with the following two focuses. First, we applied inference rules associated with domain knowledge, independently or combined, before learning node embeddings, and we measured the improvements in matching results. Second, while our GCN model is agnostic to the exact alignment relations (e.g., equivalence, weak similarity), we observed that distances in the embedding space are coherent with the “strength” of these different relations (e.g., smaller distances for equivalences), letting us consider clustering and distances in the embedding space as a means to suggest alignment relations in our case study.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Liu, Dianting, Danling Wu und Shan Wu. „A Graph Matching Model for Designer Team Selection for Collaborative Design Crowdsourcing Tasks in Social Manufacturing“. Machines 10, Nr. 9 (06.09.2022): 776. http://dx.doi.org/10.3390/machines10090776.

Der volle Inhalt der Quelle
Annotation:
In order to find a suitable designer team for the collaborative design crowdsourcing task of a product, we consider the matching problem between collaborative design crowdsourcing task network graph and the designer network graph. Due to the difference in the nodes and edges of the two types of graphs, we propose a graph matching model based on a similar structure. The model first uses the Graph Convolutional Network to extract features of the graph structure to obtain the node-level embeddings. Secondly, an attention mechanism considering the differences in the importance of different nodes in the graph assigns different weights to different nodes to aggregate node-level embeddings into graph-level embeddings. Finally, the graph-level embeddings of the two graphs to be matched are concatenated and input into a multi-layer fully connected neural network to obtain the similarity score of the graph pair. We compare our model with the basic model based on four evaluation metrics in two datasets. The experimental results show that our model can more accurately find graph pairs based on a similar structure. The crankshaft linkage mechanism produced by the enterprise is taken as an example to verify the practicality and applicability of our model and method.
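The pooling-and-scoring step, aggregating node-level embeddings into a graph-level embedding and comparing two graphs, can be sketched as follows (softmax pooling plus cosine scoring; the paper instead feeds the concatenated graph embeddings through a fully connected network, so this is a simplified stand-in):

```python
import math

def graph_embedding(node_embs, node_scores):
    """Pool node-level embeddings into one graph-level embedding using
    softmax attention weights (scores would be learned; fixed here)."""
    exps = [math.exp(s) for s in node_scores]
    z = sum(exps)
    dim = len(node_embs[0])
    return [
        sum((e / z) * v[i] for e, v in zip(exps, node_embs)) for i in range(dim)
    ]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

# Toy node embeddings for a task graph and a designer-team graph.
task_graph = graph_embedding([(1.0, 0.0), (0.8, 0.2)], node_scores=[0.0, 0.0])
team_graph = graph_embedding([(0.9, 0.1), (0.9, 0.1)], node_scores=[1.0, 0.0])
similarity = cosine(task_graph, team_graph)
```

Both toy graphs pool to (0.9, 0.1), so the score is maximal; structurally dissimilar graphs would pool to divergent vectors and score lower, which is the signal used to rank candidate designer teams.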
APA, Harvard, Vancouver, ISO, and other citation styles
41

Xiang, Xintao, Tiancheng Huang and Donglin Wang. "Learning to Evolve on Dynamic Graphs (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 36, No. 11 (28 June 2022): 13091–92. http://dx.doi.org/10.1609/aaai.v36i11.21682.

The full text of the source
Annotation:
Representation learning in dynamic graphs is a challenging problem because the topology of the graph and the node features vary over time. This requires the model to effectively capture both graph topology information and temporal information. Most existing works are built on recurrent neural networks (RNNs), which are used to extract the temporal information of dynamic graphs, and thus they inherit the drawbacks of RNNs. In this paper, we propose Learning to Evolve on Dynamic Graphs (LEDG) - a novel algorithm that jointly learns graph information and time information. Specifically, our approach utilizes gradient-based meta-learning to learn updating strategies that generalize better than RNNs across snapshots. It is model-agnostic and can thus train any message-passing-based graph neural network (GNN) on dynamic graphs. To enhance the representation power, we disentangle the embeddings into time embeddings and graph intrinsic embeddings. We conduct experiments on various datasets and downstream tasks, and the experimental results validate the effectiveness of our method.
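The disentangling step mentioned above can be pictured as keeping two separate lookup tables, one indexed by time step and one by node, whose entries are combined when a representation is needed. The tables, dimensions, and concatenation shown here are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_steps, d_graph, d_time = 10, 4, 6, 2

# Separate lookup tables: one embedding per time step, one per node.
time_table = rng.normal(size=(n_steps, d_time))     # time embeddings
graph_table = rng.normal(size=(n_nodes, d_graph))   # graph intrinsic embeddings

def node_repr(node, t):
    """Representation of a node at time t: the graph-intrinsic part and
    the time part are learned separately, then concatenated."""
    return np.concatenate([graph_table[node], time_table[t]])

print(node_repr(3, 2).shape)  # (8,)
```

Keeping the two tables separate means temporal drift can be modeled without rewriting what is intrinsic to each node.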
APA, Harvard, Vancouver, ISO, and other citation styles
42

Chen, Jianer. "Algorithmic graph embeddings". Theoretical Computer Science 181, No. 2 (July 1997): 247–66. http://dx.doi.org/10.1016/s0304-3975(96)00273-3.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
43

Shah, Haseeb, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke and Faisal Shafait. "An Open-World Extension to Knowledge Graph Completion Models". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 July 2019): 3044–51. http://dx.doi.org/10.1609/aaai.v33i01.33013044.

The full text of the source
Annotation:
We present a novel extension to embedding-based knowledge graph completion models which enables them to perform open-world link prediction, i.e. to predict facts for entities unseen in training based on their textual description. Our model combines a regular link prediction model learned from a knowledge graph with word embeddings learned from a textual corpus. After training both independently, we learn a transformation to map the embeddings of an entity’s name and description to the graph-based embedding space. In experiments on several datasets, including FB20k, DBPedia50k and our new dataset FB15k-237-OWE, we demonstrate competitive results. In particular, our approach exploits the full knowledge graph structure even when textual descriptions are scarce, does not require joint training on graph and text, and can be applied to any embedding-based link prediction model, such as TransE, ComplEx and DistMult.
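The transformation step described here, mapping independently trained text embeddings into the graph embedding space, can be sketched with a linear map fitted by least squares over the entities seen in training. The linear form, dimensions, and synthetic data are illustrative assumptions; the paper's actual transformation may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n_seen, d_text, d_graph = 50, 12, 8

# Independently trained embeddings for entities seen in training.
text_emb = rng.normal(size=(n_seen, d_text))   # from names/descriptions
true_map = rng.normal(size=(d_text, d_graph))
graph_emb = text_emb @ true_map                # stand-in graph embeddings

# Fit W so that text_emb @ W approximates graph_emb (least squares).
W, *_ = np.linalg.lstsq(text_emb, graph_emb, rcond=None)

# An unseen entity only has a text embedding; map it into graph space,
# where any embedding-based link predictor can then score it.
unseen_text = rng.normal(size=d_text)
projected = unseen_text @ W
print(projected.shape)  # (8,)
```

Because the two spaces are trained independently, only this small mapping needs to be learned, which is what makes the extension applicable to TransE, ComplEx, DistMult, and similar models without retraining them.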
APA, Harvard, Vancouver, ISO, and other citation styles
44

Wang, Xiaojie, Haijun Zhao and Huayue Chen. "Improved Skip-Gram Based on Graph Structure Information". Sensors 23, No. 14 (19 July 2023): 6527. http://dx.doi.org/10.3390/s23146527.

The full text of the source
Annotation:
Applying the Skip-gram model to graph representation learning has become a widely researched topic in recent years. Prior works usually focus on migrating the Skip-gram model to new applications, while the model itself, initially developed for word embedding, remains insufficiently explored in graph representation learning. To compensate for this shortcoming, we analyze the difference between word embedding and graph embedding and reveal the principle of graph representation learning through a case study that explains the essential idea of graph embedding intuitively. Through the case study and an in-depth understanding of graph embeddings, we propose Graph Skip-gram, an extension of the Skip-gram model that uses graph structure information. Graph Skip-gram can be combined with a variety of algorithms, giving it excellent adaptability. Inspired by word embeddings in natural language processing, we design a novel feature fusion algorithm that fuses node vectors based on node vector similarity. We fully articulate the ideas of our approach on a small network and provide extensive experimental comparisons, including multiple classification tasks and link prediction tasks, demonstrating that our proposed approach is well suited to graph representation learning.
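A similarity-based fusion of node vectors, as mentioned in this abstract, can be sketched as below. The cosine-weighted averaging and the 50/50 mixing are illustrative assumptions, not the paper's exact fusion algorithm.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def fuse(node_vecs, i, neighbors):
    """Fuse node i's vector with its neighbors' vectors, weighting each
    neighbor by its (clipped) cosine similarity to node i."""
    weights = np.array([max(cosine(node_vecs[i], node_vecs[j]), 0.0)
                        for j in neighbors])
    if weights.sum() == 0.0:
        return node_vecs[i]          # no similar neighbor: keep as is
    weights /= weights.sum()
    return 0.5 * node_vecs[i] + 0.5 * (weights @ node_vecs[neighbors])

rng = np.random.default_rng(3)
vecs = rng.normal(size=(6, 4))
print(fuse(vecs, 0, [1, 2, 3]).shape)  # (4,)
```

Clipping negative similarities to zero keeps dissimilar neighbors from pulling a node's vector toward them, one plausible way to make the fusion similarity-driven.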
APA, Harvard, Vancouver, ISO, and other citation styles
45

Shim, Sooyeon, Junghun Kim, Kahyun Park and U. Kang. "Accurate graph classification via two-staged contrastive curriculum learning". PLOS ONE 19, No. 1 (3 January 2024): e0296171. http://dx.doi.org/10.1371/journal.pone.0296171.

The full text of the source
Annotation:
Given a graph dataset, how can we generate meaningful graph representations that maximize classification accuracy? Learning representative graph embeddings is important for solving various real-world graph-based tasks. Graph contrastive learning aims to learn representations of graphs by capturing the relationship between the original graph and the augmented graph. However, previous contrastive learning methods neither capture semantic information within graphs nor consider both nodes and graphs while learning graph embeddings. We propose TAG (Two-staged contrAstive curriculum learning for Graphs), a two-staged contrastive learning method for graph classification. TAG learns graph representations at two levels, the node level and the graph level, by exploiting six degree-based, model-agnostic augmentation algorithms. Experiments show that TAG outperforms both unsupervised and supervised methods in classification accuracy, achieving up to 4.08 and 4.76 percentage points higher than the second-best unsupervised and supervised methods on average, respectively.
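As one way to picture what a degree-based augmentation might look like, the sketch below keeps edges between high-degree nodes with higher probability. This particular rule is purely illustrative; the paper defines its own six augmentation algorithms.

```python
import numpy as np

def degree_based_edge_drop(edges, n_nodes, keep_ratio=0.8, seed=0):
    """Augment a graph by preferentially dropping edges between
    low-degree nodes, keeping the high-degree 'core' mostly intact."""
    rng = np.random.default_rng(seed)
    deg = np.zeros(n_nodes, dtype=int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Score each edge by the degrees of its endpoints; sample by score.
    scores = np.array([deg[u] + deg[v] for u, v in edges], dtype=float)
    probs = scores / scores.sum()
    keep = rng.choice(len(edges), size=int(keep_ratio * len(edges)),
                      replace=False, p=probs)
    return [edges[i] for i in sorted(keep)]

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
aug = degree_based_edge_drop(edges, 6)
print(len(aug))  # 4
```

Because such a rule only reads the edge list, it is model-agnostic in the sense used above: any GNN can be trained on the original and augmented views.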
APA, Harvard, Vancouver, ISO, and other citation styles
46

O’Keeffe, Michael, and Michael M. J. Treacy. "Embeddings of Graphs: Tessellate and Decussate Structures". International Journal of Topology 1, No. 1 (29 March 2024): 1–10. http://dx.doi.org/10.3390/ijt1010001.

The full text of the source
Annotation:
We address the problem of finding a unique graph embedding that best describes a graph’s “topology”, i.e., a canonical embedding (spatial graph). This question is of particular interest in the chemistry of materials. Graphs that admit a tiling of 3-dimensional Euclidean space are termed tessellate; those that do not are termed decussate. We give examples of decussate and tessellate graphs that are finite and 3-periodic. We conjecture that a graph has at most one tessellate embedding. We give reasons for considering this the default “topology” of periodic graphs.
APA, Harvard, Vancouver, ISO, and other citation styles
47

Gurukar, Saket, Nikil Pancha, Andrew Zhai, Eric Kim, Samson Hu, Srinivasan Parthasarathy, Charles Rosenberg and Jure Leskovec. "MultiBiSage". Proceedings of the VLDB Endowment 16, No. 4 (December 2022): 781–89. http://dx.doi.org/10.14778/3574245.3574262.

The full text of the source
Annotation:
Graph Convolutional Networks (GCNs) can efficiently integrate graph structure and node features to learn high-quality node embeddings. At Pinterest, we have developed and deployed PinSage, a data-efficient GCN that learns pin embeddings from the Pin-Board graph. Pinterest relies heavily on PinSage, which in turn leverages only the Pin-Board graph. However, several entities exist at Pinterest, with heterogeneous interactions among them. These diverse entities and interactions provide important signals for recommendations and modeling. In this work, we show that training deep learning models on graphs that capture these diverse interactions can result in higher-quality pin embeddings than training PinSage on the Pin-Board graph alone. However, a large-scale heterogeneous graph engine that can process the entire Pinterest-sized dataset has not yet been built. In this work, we present an effective solution in which we break the heterogeneous graph into multiple disjoint bipartite graphs and then develop a novel, data-efficient model, MultiBiSage, that combines the signals from them. MultiBiSage can capture the graph structure of multiple bipartite graphs to learn high-quality pin embeddings. The benefit of our approach is that the individual bipartite graphs can be processed with minimal changes to Pinterest's current infrastructure, while information from all the graphs is still combined at high performance. We train MultiBiSage on six bipartite graphs, including our Pin-Board graph, and show that it significantly outperforms the deployed latest version of PinSage on multiple user engagement metrics. We also perform experiments on two public datasets to show that MultiBiSage is generalizable and can be applied to datasets outside of Pinterest.
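The decomposition step described above, breaking a heterogeneous graph into disjoint bipartite graphs by entity-type pair, can be sketched as follows. The toy entity types and edges are illustrative assumptions and do not reflect Pinterest's actual schema.

```python
from collections import defaultdict

def split_into_bipartite(edges, node_type):
    """Break a heterogeneous edge list into disjoint bipartite graphs,
    one per unordered (type, type) pair of endpoint entity types."""
    graphs = defaultdict(list)
    for u, v in edges:
        key = tuple(sorted((node_type[u], node_type[v])))
        graphs[key].append((u, v))
    return dict(graphs)

# Toy heterogeneous graph with three entity types.
node_type = {"p1": "pin", "p2": "pin", "b1": "board", "u1": "user"}
edges = [("p1", "b1"), ("p2", "b1"), ("u1", "p1"), ("u1", "b1")]
graphs = split_into_bipartite(edges, node_type)
for key, es in sorted(graphs.items()):
    print(key, es)
```

Each resulting bipartite graph can then be processed by existing single-graph infrastructure, with a downstream model combining the per-graph signals.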
APA, Harvard, Vancouver, ISO, and other citation styles
48

DI GIACOMO, EMILIO, and GIUSEPPE LIOTTA. "SIMULTANEOUS EMBEDDING OF OUTERPLANAR GRAPHS, PATHS, AND CYCLES". International Journal of Computational Geometry & Applications 17, No. 02 (April 2007): 139–60. http://dx.doi.org/10.1142/s0218195907002276.

The full text of the source
Annotation:
Let G1 and G2 be two planar graphs having some vertices in common. A simultaneous embedding of G1 and G2 is a pair of crossing-free drawings of G1 and G2 such that each vertex in common is represented by the same point in both drawings. In this paper we show that an outerplanar graph and a simple path can be simultaneously embedded with fixed edges such that the edges in common are straight-line segments while the other edges of the outerplanar graph can have at most one bend per edge. We then exploit the technique for outerplanar graphs and paths to study simultaneous embeddings of other pairs of graphs. Namely, we study simultaneous embedding with fixed edges of: (i) two outerplanar graphs sharing a forest of paths and (ii) an outerplanar graph and a cycle.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Park, Chanyoung, Donghyun Kim, Jiawei Han and Hwanjo Yu. "Unsupervised Attributed Multiplex Network Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 04 (3 April 2020): 5371–78. http://dx.doi.org/10.1609/aaai.v34i04.5985.

The full text of the source
Annotation:
Nodes in a multiplex network are connected by multiple types of relations. However, most existing network embedding methods assume that only a single type of relation exists between nodes. Even those that consider the multiplexity of a network overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph. We present a simple yet effective unsupervised network embedding method for attributed multiplex networks called DMGI, inspired by Deep Graph Infomax (DGI), which maximizes the mutual information between local patches of a graph and the global representation of the entire graph. We devise a systematic way to jointly integrate the node embeddings from multiple graphs by introducing 1) a consensus regularization framework that minimizes the disagreement among the relation-type-specific node embeddings, and 2) a universal discriminator that discriminates true samples regardless of the relation type. We also show that the attention mechanism infers the importance of each relation type, and thus can be useful for filtering out unnecessary relation types as a preprocessing step. Extensive experiments on various downstream tasks demonstrate that DMGI outperforms the state-of-the-art methods, even though DMGI is fully unsupervised.
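The consensus regularization idea, minimizing disagreement among relation-type-specific embeddings, can be sketched with a squared-distance penalty against a shared consensus. This simplified form is an illustrative assumption; DMGI's actual objective includes further terms and a learned consensus.

```python
import numpy as np

def consensus_loss(relation_embs):
    """Consensus regularization: penalize disagreement between each
    relation-type-specific embedding matrix and their mean consensus."""
    consensus = np.mean(relation_embs, axis=0)      # (n_nodes, dim)
    return sum(float(np.sum((Z - consensus) ** 2)) for Z in relation_embs)

rng = np.random.default_rng(4)
# Node embeddings from three relation types (e.g., three edge types).
Z = rng.normal(size=(3, 5, 4))
print(consensus_loss(Z))

# If all relation types agree, the penalty vanishes (up to rounding).
Z_same = np.stack([Z[0], Z[0], Z[0]])
print(consensus_loss(Z_same))
```

Minimizing this term pulls the per-relation views of each node toward a single integrated representation.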
APA, Harvard, Vancouver, ISO, and other citation styles
50

Cheng, Kewei, Xian Li, Yifan Ethan Xu, Xin Luna Dong and Yizhou Sun. "PGE". Proceedings of the VLDB Endowment 15, No. 6 (February 2022): 1288–96. http://dx.doi.org/10.14778/3514061.3514074.

The full text of the source
Annotation:
Although product graphs (PGs) have gained increasing attention in recent years for their successful applications in product search and recommendation, the extensive power of PGs can be limited by the inevitable involvement of various kinds of errors. Thus, it is critical to validate the correctness of triples in PGs to improve their reliability. Knowledge graph (KG) embedding methods have strong error detection abilities. Yet, existing KG embedding methods may not be directly applicable to a PG due to its distinct characteristics: (1) a PG contains rich textual signals, which necessitates a joint exploration of both text information and graph structure; (2) a PG contains a large number of attribute triples, in which attribute values are represented by free text. Since free text is too flexible to define entities in KGs, the traditional way of mapping entities to their embeddings using ids is no longer appropriate for attribute value representation; (3) noisy triples in a PG mislead the embedding learning and significantly hurt the performance of error detection. To address the aforementioned challenges, we propose an end-to-end noise-tolerant embedding learning framework, PGE, which jointly leverages both text information and graph structure in a PG to learn embeddings for error detection. Experimental results on a real-world product graph demonstrate the effectiveness of the proposed framework compared with state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other citation styles
