
Journal articles on the topic "Graphes embeddings"


Review the top-50 journal articles for your research on the topic "Graphes embeddings".

Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when this information is available in the source metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

BOZKURT, ILKER NADI, HAI HUANG, BRUCE MAGGS, ANDRÉA RICHA, and MAVERICK WOO. "Mutual Embeddings." Journal of Interconnection Networks 15, no. 01n02 (March 2015): 1550001. http://dx.doi.org/10.1142/s0219265915500012.

Full text of the source
Abstract:
This paper introduces a type of graph embedding called a mutual embedding. A mutual embedding between two n-node graphs G1 = (V1, E1) and G2 = (V2, E2) is an identification of the vertices of V1 and V2, i.e., a bijection π: V1 → V2, together with an embedding of G1 into G2 and an embedding of G2 into G1, where in the embedding of G1 into G2 each node u of G1 is mapped to π(u) in G2, and in the embedding of G2 into G1 each node v of G2 is mapped to π⁻¹(v) in G1. The identification of vertices in G1 and G2 constrains the two embeddings so that it is not always possible for both to exhibit small congestion and dilation, even if there are traditional one-way embeddings in both directions with small congestion and dilation. Mutual embeddings arise in the context of finding preconditioners for accelerating the convergence of iterative methods for solving systems of linear equations. We present mutual embeddings between several types of graphs such as linear arrays, cycles, trees, and meshes, prove lower bounds on mutual embeddings between several classes of graphs, and present some open problems related to optimal mutual embeddings.
APA, Harvard, Vancouver, ISO, and other styles
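
The congestion/dilation tension described above is easy to see on small graphs. Below is a toy Python sketch (an illustration, not the paper's constructions) that measures the dilation of the two embeddings induced by a single identity bijection between a 6-node path and a 6-node cycle: the path embeds into the cycle with dilation 1, while the same bijection forces dilation n − 1 in the reverse direction.

```python
from collections import deque

def bfs_dist(adj, s):
    """Unit-cost shortest-path distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dilation(adj_src, adj_dst, pi):
    """Max host-graph distance between images of adjacent source nodes."""
    worst = 0
    for u in adj_src:
        d = bfs_dist(adj_dst, pi[u])
        for v in adj_src[u]:
            worst = max(worst, d[pi[v]])
    return worst

n = 6
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
pi = {i: i for i in range(n)}          # identity bijection pi: V1 -> V2
inv = {v: u for u, v in pi.items()}

d12 = dilation(path, cycle, pi)   # embed the path into the cycle
d21 = dilation(cycle, path, inv)  # embed the cycle back via pi^-1
print(d12, d21)                   # the "wrap-around" cycle edge stretches
```

Because the same bijection constrains both directions, the cycle edge (0, n−1) must map to the two path endpoints, which is exactly the mutual-embedding obstruction the abstract describes.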
2

Trisedya, Bayu Distiawan, Jianzhong Qi, and Rui Zhang. "Entity Alignment between Knowledge Graphs Using Attribute Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 297–304. http://dx.doi.org/10.1609/aaai.v33i01.3301297.

Full text of the source
Abstract:
The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models have been proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs, and hence enables the integration of multiple knowledge graphs. Our model exploits large numbers of attribute triples existing in the knowledge graphs and generates attribute character embeddings. The attribute character embedding shifts the entity embeddings from two knowledge graphs into the same space by computing the similarity between entities based on their attributes. We use a transitivity rule to further enrich the number of attributes of an entity to enhance the attribute character embedding. Experiments using real-world knowledge bases show that our proposed model achieves consistent improvements over the baseline models by over 50% in terms of hits@1 on the entity alignment task.
APA, Harvard, Vancouver, ISO, and other styles
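
As a rough illustration of attribute-based similarity, one can compare entities across graphs by embedding their attribute strings character by character. Everything below (the hashed bag-of-characters "embedding", the function names, the example strings) is invented for this sketch and is not the paper's model, which learns character embeddings jointly with the structure embeddings.

```python
import math, hashlib

def char_vec(s, dim=16):
    """Toy character 'embedding': a deterministic hashed bag of characters."""
    v = [0.0] * dim
    for ch in s.lower():
        h = int(hashlib.md5(ch.encode()).hexdigest(), 16)
        v[h % dim] += 1.0
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# attribute values describing the "same" real-world entity in two KGs
sim_same = cosine(char_vec("Barack Obama"), char_vec("barack obama"))
sim_diff = cosine(char_vec("Barack Obama"), char_vec("Eiffel Tower"))
print(sim_same, sim_diff)
```

Character-level similarity of attribute values is what lets the two entity-embedding spaces be pulled into a common space, since it is robust to the surface variations (casing, minor spelling differences) typical across knowledge graphs.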
3

GUPTA, AJAY K., and GARRISON W. GREENWOOD. "APPLICATIONS OF EVOLUTIONARY STRATEGIES TO FINE-GRAINED TASK SCHEDULING." Parallel Processing Letters 06, no. 04 (December 1996): 551–61. http://dx.doi.org/10.1142/s0129626496000492.

Full text of the source
Abstract:
Embedding task graphs onto hypercubes is a difficult problem. When the embedding is one-to-one, schedule length is strongly influenced by dilation. Therefore, it is desirable to find low dilation embeddings. This paper describes a heuristic embedding technique based upon evolutionary strategies. The technique has been extensively investigated using task graphs which are trees, forests, and butterflies. In all cases the technique has found low dilation embeddings.
APA, Harvard, Vancouver, ISO, and other styles
4

Mohar, Bojan. "Combinatorial Local Planarity and the Width of Graph Embeddings." Canadian Journal of Mathematics 44, no. 6 (December 1, 1992): 1272–88. http://dx.doi.org/10.4153/cjm-1992-076-8.

Full text of the source
Abstract:
Let G be a graph embedded in a closed surface. The embedding is “locally planar” if for each face, a “large” neighbourhood of this face is simply connected. This notion is formalized, following [RV], by introducing the width ρ(ψ) of the embedding ψ. It is shown that embeddings with ρ(ψ) ≥ 3 behave very much like the embeddings of planar graphs in the 2-sphere. Another notion, “combinatorial local planarity”, is introduced. The criterion is independent of embeddings of the graph, but it guarantees that a given cycle in a graph G must be contractible in any minimal genus embedding of G (either orientable, or non-orientable). It generalizes the width introduced before. As an application, short proofs of some important recently discovered results about embeddings of graphs are given and generalized or improved. Uniqueness and switching equivalence of graphs embedded in a fixed surface are also considered.
APA, Harvard, Vancouver, ISO, and other styles
5

Kalogeropoulos, Nikitas-Rigas, Dimitris Ioannou, Dionysios Stathopoulos, and Christos Makris. "On Embedding Implementations in Text Ranking and Classification Employing Graphs." Electronics 13, no. 10 (May 12, 2024): 1897. http://dx.doi.org/10.3390/electronics13101897.

Full text of the source
Abstract:
This paper aims to enhance the Graphical Set-based model (GSB) for ranking and classification tasks by incorporating node and word embeddings. The model integrates a textual graph representation with a set-based model for information retrieval. Initially, each document in a collection is transformed into a graph representation. The proposed enhancement involves augmenting the edges of these graphs with embeddings, which can be pretrained or generated using Word2Vec and GloVe models. Additionally, an alternative aspect of our proposed model consists of the Node2Vec embedding technique, which is applied to a graph created at the collection level through the extension of the set-based model, providing edges based on the graph’s structural information. Core decomposition is utilized as a method for pruning the graph. As a byproduct of our information retrieval model, we explore text classification techniques based on our approach. Node2Vec embeddings are generated by our graphs and are applied in order to represent the different documents in our collections that have undergone various preprocessing methods. We compare the graph-based embeddings with the Doc2Vec and Word2Vec representations to elaborate on whether our approach can be implemented on topic classification problems. For that reason, we then train popular classifiers on the document embeddings obtained from each model.
APA, Harvard, Vancouver, ISO, and other styles
6

Hu, Ganglin, and Jun Pang. "Relation-Aware Weighted Embedding for Heterogeneous Graphs." Information Technology and Control 52, no. 1 (March 28, 2023): 199–214. http://dx.doi.org/10.5755/j01.itc.52.1.32390.

Full text of the source
Abstract:
Heterogeneous graph embedding, aiming to learn the low-dimensional representations of nodes, is effective in many tasks, such as link prediction, node classification, and community detection. Most existing graph embedding methods conducted on heterogeneous graphs treat the heterogeneous neighbours equally. Although it is possible to get node weights through attention mechanisms, these are mainly developed using expensive recursive message-passing and are difficult to apply to large-scale networks. In this paper, we propose R-WHGE, a relation-aware weighted embedding model for heterogeneous graphs, to resolve this issue. R-WHGE comprehensively considers structural information, semantic information, meta-paths of nodes and meta-path-based node weights to learn effective node embeddings. More specifically, we first extract the feature importance of each node and then take the nodes’ importance as node weights. A weighted random walks-based embedding learning model is proposed to generate the initial weighted node embeddings according to each meta-path. Finally, we feed these embeddings to a relation-aware heterogeneous graph neural network to generate compact embeddings of nodes, which captures relation-aware characteristics. Extensive experiments on real-world datasets demonstrate that our model is competitive against various state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
7

Peng, Yanhui, Jing Zhang, Cangqi Zhou, and Shunmei Meng. "Knowledge Graph Entity Alignment Using Relation Structural Similarity." Journal of Database Management 33, no. 1 (January 1, 2022): 1–19. http://dx.doi.org/10.4018/jdm.305733.

Full text of the source
Abstract:
Embedding-based entity alignment, which represents knowledge graphs as low-dimensional embeddings and finds entities in different knowledge graphs that semantically represent the same real-world entity by measuring the similarities between entity embeddings, has achieved promising results. However, existing methods are still challenged by the error accumulation of embeddings along multi-step paths and the semantic information loss. This paper proposes a novel embedding-based entity alignment method that iteratively aligns both entities and relations with high similarities as training data. Newly-aligned entities and relations are used to calibrate the corresponding embeddings in the unified embedding space, which reduces the error accumulation. To reduce the negative impact of semantic information loss, the authors propose to use relation structural similarity instead of embedding similarity to align relations. Experimental results on five widely used real-world datasets show that the proposed method significantly outperforms several state-of-the-art methods for entity alignment.
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Mingyang, Wen Zhang, Zhen Yao, Yushan Zhu, Yang Gao, Jeff Z. Pan, and Huajun Chen. "Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4182–90. http://dx.doi.org/10.1609/aaai.v37i4.25535.

Full text of the source
Abstract:
We propose an entity-agnostic representation learning method for handling the problem of inefficient parameter storage costs brought by embedding knowledge graphs. Conventional knowledge graph embedding methods map elements in a knowledge graph, including entities and relations, into continuous vector spaces by assigning them one or multiple specific embeddings (i.e., vector representations). Thus the number of embedding parameters increases linearly with the growth of the knowledge graph. In our proposed model, Entity-Agnostic Representation Learning (EARL), we only learn the embeddings for a small set of entities and refer to them as reserved entities. To obtain the embeddings for the full set of entities, we encode their distinguishable information from their connected relations, k-nearest reserved entities, and multi-hop neighbors. We learn universal and entity-agnostic encoders for transforming distinguishable information into entity embeddings. This approach gives our proposed EARL a static, efficient parameter count that is lower than that of conventional knowledge graph embedding methods. Experimental results show that EARL uses fewer parameters and performs better on link prediction tasks than baselines, reflecting its parameter efficiency.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Houquan, Shenghua Liu, Danai Koutra, Huawei Shen, and Xueqi Cheng. "A Provable Framework of Learning Graph Embeddings via Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4946–53. http://dx.doi.org/10.1609/aaai.v37i4.25621.

Full text of the source
Abstract:
Given a large graph, can we learn its node embeddings from a smaller summary graph? What is the relationship between embeddings learned from original graphs and their summary graphs? Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to alleviate it via graph summarization, which typically includes three steps: reducing the graph size by combining nodes and edges into supernodes and superedges, learning the supernode embedding on the summary graph, and then restoring the embeddings of the original nodes. However, the justification behind those steps is still unknown. In this work, we propose GELSUMM, a well-formulated graph embedding learning framework based on graph summarization, in which we show the theoretical ground of learning from summary graphs and the restoration with three well-known graph embedding approaches in a closed form. Through extensive experiments on real-world datasets, we demonstrate that our methods can learn graph embeddings with matching or better performance on downstream tasks. This work provides theoretical analysis for learning node embeddings via summarization and helps explain and understand the mechanism of the existing works.
APA, Harvard, Vancouver, ISO, and other styles
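
The three-step summarize–embed–restore pipeline the abstract describes can be sketched as follows. This is a minimal NumPy version with a hand-picked partition and a plain eigenvector embedding of the summary graph, not GELSUMM itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, groups = 8, [0, 0, 0, 0, 1, 1, 1, 1]   # two supernodes

# block-structured adjacency: dense within groups, sparse across
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        p = 0.9 if groups[i] == groups[j] else 0.05
        if rng.random() < p:
            A[i, j] = A[j, i] = 1.0

# 1) summarize: P[i, g] = 1 if node i belongs to supernode g
P = np.zeros((n, 2))
for i, g in enumerate(groups):
    P[i, g] = 1.0
A_sum = P.T @ A @ P                       # superedge weight matrix

# 2) embed the tiny summary graph (top eigenvectors of A_sum)
w, V = np.linalg.eigh(A_sum)
emb_sum = V[:, ::-1][:, :2]               # d = 2

# 3) restore: each original node inherits its supernode's embedding
emb = P @ emb_sum
print(emb.shape)
```

The point of the paper is precisely to justify when step 3 (here a plain copy-down) recovers embeddings close to those learned on the full graph.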
10

Cui, Yuanning, Yuxin Wang, Zequn Sun, Wenqiang Liu, Yiqiao Jiang, Kexin Han, and Wei Hu. "Lifelong Embedding Learning and Transfer for Growing Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4217–24. http://dx.doi.org/10.1609/aaai.v37i4.25539.

Full text of the source
Abstract:
Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge through growth. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention of the learning on growing snapshots of a KG without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, with an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
APA, Harvard, Vancouver, ISO, and other styles
11

Fang, Peng, Arijit Khan, Siqiang Luo, Fang Wang, Dan Feng, Zhenli Li, Wei Yin, and Yuchao Cao. "Distributed Graph Embedding with Information-Oriented Random Walks." Proceedings of the VLDB Endowment 16, no. 7 (March 2023): 1643–56. http://dx.doi.org/10.14778/3587136.3587140.

Full text of the source
Abstract:
Graph embedding maps graph nodes to low-dimensional vectors, and is widely adopted in machine learning tasks. The increasing availability of billion-edge graphs underscores the importance of learning efficient and effective embeddings on large graphs, such as link prediction on Twitter with over one billion edges. Most existing graph embedding methods fall short of reaching high data scalability. In this paper, we present a general-purpose, distributed, information-centric random walk-based graph embedding framework, DistGER, which can scale to embed billion-edge graphs. DistGER incrementally computes information-centric random walks. It further leverages a multi-proximity-aware, streaming, parallel graph partitioning strategy, simultaneously achieving high local partition quality and excellent workload balancing across machines. DistGER also improves the distributed Skip-Gram learning model to generate node embeddings by optimizing the access locality, CPU throughput, and synchronization efficiency. Experiments on real-world graphs demonstrate that compared to state-of-the-art distributed graph embedding frameworks, including KnightKing, DistDGL, and Pytorch-BigGraph, DistGER exhibits 2.33×–129× acceleration, 45% reduction in cross-machine communication, and >10% effectiveness improvement in downstream tasks.
APA, Harvard, Vancouver, ISO, and other styles
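
Walk-based frameworks of this kind start by turning the graph into a corpus of random walks on which a Skip-Gram model is then trained. Below is a minimal sketch of that first stage using plain uniform walks; DistGER's information-centric variant additionally decides, per walk, when enough new context has been collected (the stub here is ours, not the paper's algorithm).

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=42):
    """Generate uniform random walks as a Skip-Gram training corpus --
    the common first stage of walk-based graph embedding."""
    rng = random.Random(seed)
    corpus = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, u = [start], start
            for _ in range(walk_len - 1):
                u = rng.choice(adj[u])   # uniform step to a neighbour
                walk.append(u)
            corpus.append(walk)
    return corpus

# toy graph: a triangle (0, 1, 2) plus a pendant node 3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
corpus = random_walks(adj)
print(len(corpus), corpus[0])
```

In a distributed setting each machine generates walks for its partition, which is why partition quality and workload balance dominate the end-to-end cost.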
12

KOMLÓS, JÁNOS. "The Blow-up Lemma." Combinatorics, Probability and Computing 8, no. 1-2 (January 1999): 161–76. http://dx.doi.org/10.1017/s0963548398003502.

Full text of the source
Abstract:
Extremal graph theory has a great number of conjectures concerning the embedding of large sparse graphs into dense graphs. Szemerédi's Regularity Lemma is a valuable tool in finding embeddings of small graphs. The Blow-up Lemma, proved recently by Komlós, Sárközy and Szemerédi, can be applied to obtain approximate versions of many of the embedding conjectures. In this paper we review recent developments in the area.
APA, Harvard, Vancouver, ISO, and other styles
13

Makarov, Ilya, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Subelj. "Survey on graph embeddings and their applications to machine learning problems on graphs." PeerJ Computer Science 7 (February 4, 2021): e357. http://dx.doi.org/10.7717/peerj-cs.357.

Full text of the source
Abstract:
Dealing with relational data has always required significant computational resources, domain expertise and task-dependent feature engineering to incorporate structural information into a predictive model. Nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. So-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges and subgraphs, while preserving inner graph properties. Using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation. Our survey aims to describe the core concepts of graph embeddings and provide several taxonomies for their description. First, we start with the methodological approach and extract three types of graph embedding models, based on matrix factorization, random walks and deep learning approaches. Next, we describe how different types of networks impact the ability of models to incorporate structural and attributed data into a unified embedding. Going further, we perform a thorough evaluation of graph embedding applications to machine learning problems on graphs, among which are node classification, link prediction, clustering, visualization, compression, and a family of whole-graph embedding algorithms suitable for graph classification, similarity and alignment problems. Finally, we overview the existing applications of graph embeddings to computer science domains, formulate open problems and provide experimental results, explaining how different network properties affect graph embedding quality in four classic machine learning problems on graphs: node classification, link prediction, clustering and graph visualization.
As a result, our survey covers the new, rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs.
APA, Harvard, Vancouver, ISO, and other styles
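
The matrix-factorization family in the survey's taxonomy can be shown in a few lines of NumPy: a rank-d truncated SVD of the adjacency matrix yields node embeddings whose inner products with a "context" factor approximate the original adjacency. Real methods in this family (e.g. HOPE, GraRep) factorize higher-order proximity matrices instead; this is the one-matrix version of the same idea.

```python
import numpy as np

# small undirected graph as an adjacency matrix
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

d = 2
U, S, Vt = np.linalg.svd(A)
emb = U[:, :d] * np.sqrt(S[:d])          # node embeddings, one row per node
ctx = Vt[:d].T * np.sqrt(S[:d])          # "context" factor

# rank-d reconstruction error: shrinks as d grows
err = float(np.linalg.norm(A - emb @ ctx.T))
print(emb.shape, round(err, 3))
```

Once `emb` is computed, the downstream tasks the survey lists (node classification, link prediction, clustering) reduce to standard vector-space machine learning on its rows.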
14

Soteros, C. E., D. W. Sumners, and S. G. Whittington. "Entanglement complexity of graphs in Z3." Mathematical Proceedings of the Cambridge Philosophical Society 111, no. 1 (January 1992): 75–91. http://dx.doi.org/10.1017/s0305004100075174.

Full text of the source
Abstract:
In this paper we are concerned with questions about the knottedness of a closed curve of given length embedded in Z3. What is the probability that such a randomly chosen embedding is knotted? What is the probability that the embedding contains a particular knot? What is the expected complexity of the knot? To what extent can these questions also be answered for a graph of a given homeomorphism type? We use a pattern theorem due to Kesten [12] to prove that almost all embeddings in Z3 of a sufficiently long closed curve contain any given knot. We introduce the idea of a good measure of knot complexity. This is a function F which maps the set of equivalence classes of embeddings into [0, ∞). The F measure of the unknot is zero, and, generally speaking, the more complex the prime knot decomposition of a given knot type, the greater its F measure. We prove that the average value of F diverges to infinity as the length (n) of the embedding goes to infinity, at least linearly in n. One example of a good measure of knot complexity is crossing number. Finally, we consider similar questions for embeddings of graphs. We show that for a fixed homeomorphism type, as the number of edges n goes to infinity, almost all embeddings are knotted if the homeomorphism type does not contain a cut edge. We prove a weaker result in the case that the homeomorphism type contains at least one cut edge and at least one cycle.
APA, Harvard, Vancouver, ISO, and other styles
15

Cape, Joshua, Minh Tang, and Carey E. Priebe. "On spectral embedding performance and elucidating network structure in stochastic blockmodel graphs." Network Science 7, no. 3 (September 2019): 269–91. http://dx.doi.org/10.1017/nws.2019.23.

Full text of the source
Abstract:
Statistical inference on graphs often proceeds via spectral methods involving low-dimensional embeddings of matrix-valued graph representations such as the graph Laplacian or adjacency matrix. In this paper, we analyze the asymptotic information-theoretic relative performance of Laplacian spectral embedding and adjacency spectral embedding for block assignment recovery in stochastic blockmodel graphs by way of Chernoff information. We investigate the relationship between spectral embedding performance and underlying network structure (e.g., homogeneity, affinity, core-periphery, and (un)balancedness) via a comprehensive treatment of the two-block stochastic blockmodel and the class of K-blockmodels exhibiting homogeneous balanced affinity structure. Our findings support the claim that, for a particular notion of sparsity, loosely speaking, “Laplacian spectral embedding favors relatively sparse graphs, whereas adjacency spectral embedding favors not-too-sparse graphs.” We also provide evidence in support of the claim that “adjacency spectral embedding favors core-periphery network structure.”
APA, Harvard, Vancouver, ISO, and other styles
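
The two embeddings being compared can be sketched on a small two-block SBM sample. In this NumPy sketch (ours, not the paper's code) LSE is computed from the normalized adjacency D^{-1/2} A D^{-1/2}, whose top eigenvectors coincide with the bottom eigenvectors of the normalized Laplacian.

```python
import numpy as np

# sample a two-block stochastic blockmodel
rng = np.random.default_rng(1)
n, B = 20, np.array([[0.6, 0.1], [0.1, 0.6]])
z = np.array([0] * 10 + [1] * 10)                 # block assignments
prob = B[z][:, z]
A = (rng.random((n, n)) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T                                       # symmetric, no self-loops

def top_eigvecs(M, d=2):
    """Scaled eigenvectors for the d largest-magnitude eigenvalues."""
    w, V = np.linalg.eigh(M)
    idx = np.argsort(np.abs(w))[::-1][:d]
    return V[:, idx] * np.sqrt(np.abs(w[idx]))

ase = top_eigvecs(A)                              # adjacency spectral embedding
deg = A.sum(1)
Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1)))
lse = top_eigvecs(Dinv @ A @ Dinv)                # Laplacian-based embedding
print(ase.shape, lse.shape)
```

Clustering the rows of either embedding (e.g. with k-means) recovers the block assignments; the paper's contribution is quantifying, via Chernoff information, which of the two does so more efficiently for a given network structure.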
16

Chartrand, Gary, Donald W. Vanderjagt, and Ping Zhang. "Homogeneously embedding stratified graphs in stratified graphs." Mathematica Bohemica 130, no. 1 (2005): 35–48. http://dx.doi.org/10.21136/mb.2005.134221.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
17

FRIESEN, TYLER, and VASSILY OLEGOVICH MANTUROV. "EMBEDDINGS OF *-GRAPHS INTO 2-SURFACES." Journal of Knot Theory and Its Ramifications 22, no. 12 (October 2013): 1341005. http://dx.doi.org/10.1142/s0218216513410058.

Full text of the source
Abstract:
This paper considers *-graphs in which all vertices have degree 4 or 6, and studies the question of calculating the genus of orientable 2-surfaces into which such graphs may be embedded. A *-graph is a graph endowed with a formal adjacency structure on the half-edges around each vertex, and an embedding of a *-graph is an embedding under which the formal adjacency relation on half-edges corresponds to the adjacency relation induced by the embedding. *-graphs are a natural generalization of four-valent framed graphs, which are four-valent graphs with an opposite half-edge structure. In [Embeddings of four-valent framed graphs into 2-surfaces, Dokl. Akad. Nauk 424(3) (2009) 308–310], the question of whether a four-valent framed graph admits a ℤ2-homologically trivial embedding into a given surface was shown to be equivalent to a problem on matrices. We show that a similar result holds for *-graphs in which all vertices have degree 4 or 6. This gives an algorithm in quadratic time to determine whether a *-graph admits an embedding into the plane.
APA, Harvard, Vancouver, ISO, and other styles
18

O’Keeffe, Michael, and Michael M. J. Treacy. "Embeddings of Graphs: Tessellate and Decussate Structures." International Journal of Topology 1, no. 1 (March 29, 2024): 1–10. http://dx.doi.org/10.3390/ijt1010001.

Full text of the source
Abstract:
We address the problem of finding a unique graph embedding that best describes a graph’s “topology”, i.e., a canonical embedding (spatial graph). This question is of particular interest in the chemistry of materials. Graphs that admit a tiling in 3-dimensional Euclidean space are termed tessellate; those that do not, decussate. We give examples of decussate and tessellate graphs that are finite and 3-periodic. We conjecture that a graph has at most one tessellate embedding. We give reasons for considering this the default “topology” of periodic graphs.
APA, Harvard, Vancouver, ISO, and other styles
19

Pietrasik, Marcin, and Marek Z. Reformat. "Probabilistic Coarsening for Knowledge Graph Embeddings." Axioms 12, no. 3 (March 6, 2023): 275. http://dx.doi.org/10.3390/axioms12030275.

Full text of the source
Abstract:
Knowledge graphs have risen in popularity in recent years, demonstrating their utility in applications across the spectrum of computer science. Finding their embedded representations is thus highly desirable, as it makes them easily operated on and reasoned with by machines. With this in mind, we propose a simple meta-strategy for embedding knowledge graphs using probabilistic coarsening. In this approach, a knowledge graph is first coarsened before being embedded by an arbitrary embedding method. The resulting coarse embeddings are then extended down as those of the initial knowledge graph. Although straightforward, this allows for faster training by reducing knowledge graph complexity while revealing its higher-order structures. We demonstrate this empirically on four real-world datasets, which show that coarse embeddings are learned faster and are often of higher quality. We conclude that coarsening is a recommended preprocessing step regardless of the underlying embedding method used.
APA, Harvard, Vancouver, ISO, and other styles
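
The meta-strategy is agnostic to the underlying embedder, which the sketch below emphasizes by plugging in a random stub in place of a real KG embedding model (TransE, DistMult, etc.). The merge map here is hand-picked rather than probabilistic, so this only illustrates the coarsen–embed–extend flow, not the paper's coarsening rule.

```python
import random

def coarsen(triples, merge):
    """Rewrite each (head, relation, tail) triple onto supernodes."""
    return {(merge.get(h, h), r, merge.get(t, t)) for h, r, t in triples}

def embed_stub(entities, dim=4, seed=0):
    """Stand-in for *any* KG embedding method."""
    rng = random.Random(seed)
    return {e: [rng.uniform(-1, 1) for _ in range(dim)] for e in entities}

triples = {("a1", "r", "b"), ("a2", "r", "b"), ("b", "s", "c")}
merge = {"a2": "a1"}                     # a1 and a2 grouped into one supernode

coarse = coarsen(triples, merge)         # duplicate triples collapse
ents = {x for h, _, t in coarse for x in (h, t)}
coarse_emb = embed_stub(ents)

# extend coarse embeddings back down to the original entities
emb = {e: coarse_emb[merge.get(e, e)] for e in ("a1", "a2", "b", "c")}
print(emb["a1"] == emb["a2"])
```

Collapsing duplicate triples is where the speed-up comes from; merged entities simply share their supernode's embedding (or use it as an initialization for fine-tuning).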
20

Gopalakrishnan, Sriram, Liron Cohen, Sven Koenig, and T. K. Kumar. "Embedding Directed Graphs in Potential Fields Using FastMap-D." Proceedings of the International Symposium on Combinatorial Search 11, no. 1 (September 1, 2021): 48–56. http://dx.doi.org/10.1609/socs.v11i1.18534.

Full text of the source
Abstract:
Embedding undirected graphs in a Euclidean space has many computational benefits. FastMap is an efficient embedding algorithm that facilitates a geometric interpretation of problems posed on undirected graphs. However, Euclidean distances are inherently symmetric and, thus, Euclidean embeddings cannot be used for directed graphs. In this paper, we present FastMap-D, an efficient generalization of FastMap to directed graphs. FastMap-D embeds vertices using a potential field to capture the asymmetry between the to-and-fro pairwise distances in directed graphs. FastMap-D learns a potential function to define the potential field using a machine learning module. In experiments on various kinds of directed graphs, we demonstrate the advantage of FastMap-D over other approaches.
APA, Harvard, Vancouver, ISO, and other styles
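
The potential-field idea can be seen on a toy directed line graph where moving right costs 1 per hop and moving left costs 3: the antisymmetric part of the distance matrix is exactly a potential difference p(v) − p(u), so a single potential vector recovers the asymmetry that no Euclidean embedding can represent. In this contrived example the recovery is exact; FastMap-D learns the potential with a machine learning module and need not be.

```python
import numpy as np

# asymmetric shortest-path distances on a 5-node directed line
n = 5
D = np.zeros((n, n))
for u in range(n):
    for v in range(n):
        D[u, v] = (v - u) if v >= u else 3 * (u - v)

# FastMap-D-style model: D[u, v] ~ symmetric part + p[v] - p[u]
antisym = (D - D.T) / 2.0      # the part no symmetric embedding can capture
p = antisym[0]                 # fix p[0] = 0, read potentials off row 0
recon = p[None, :] - p[:, None]
err = float(np.abs(antisym - recon).max())
print(p, err)
```

The symmetric remainder (D + Dᵀ)/2 is what a standard FastMap Euclidean embedding then handles.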
21

Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (October 1, 2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.

Full text of the source
Abstract:
Objective: The study sought to explore the use of deep learning techniques to measure the semantic relatedness between Unified Medical Language System (UMLS) concepts. Materials and Methods: Concept sentence embeddings were generated for UMLS concepts by applying the word embedding models BioWordVec and various flavors of BERT to concept sentences formed by concatenating UMLS terms. Graph embeddings were generated by graph convolutional networks and 4 knowledge graph embedding models, using graphs built from UMLS hierarchical relations. Semantic relatedness was measured by the cosine between the concepts’ embedding vectors. Performance was compared with 2 traditional path-based (shortest path and Leacock-Chodorow) measurements and the publicly available concept embeddings, cui2vec, generated from large biomedical corpora. The concept sentence embeddings were also evaluated on a word sense disambiguation (WSD) task. Reference standards used included the semantic relatedness and semantic similarity datasets from the University of Minnesota, concept pairs generated from the Standardized MedDRA Queries, and the MeSH (Medical Subject Headings) WSD corpus. Results: Sentence embeddings generated by BioWordVec outperformed all other methods used individually in semantic relatedness measurements. Graph convolutional network graph embedding uniformly outperformed path-based measurements and was better than some word embeddings for the Standardized MedDRA Queries dataset. When used together, combined word and graph embedding achieved the best performance in all datasets. For WSD, the enhanced versions of BERT outperformed BioWordVec. Conclusions: Word and graph embedding techniques can be used to harness terms and relations in the UMLS to measure semantic relatedness between concepts. Concept sentence embedding outperforms path-based measurements and cui2vec, and can be further enhanced by combining with graph embedding.
APA, Harvard, Vancouver, ISO, and other styles
22

Liang, Jiongqian, Saket Gurukar, and Srinivasan Parthasarathy. "MILE: A Multi-Level Framework for Scalable Graph Embedding." Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 361–72. http://dx.doi.org/10.1609/icwsm.v15i1.18067.

Full text of the source
Abstract:
Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while generating embeddings of better quality, for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. Our code and data are publicly available with detailed instructions for adding new base embedding methods: https://github.com/jiongqian/MILE.
23

Duong, Chi Thang, Trung Dung Hoang, Hongzhi Yin, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. "Scalable robust graph embedding with Spark." Proceedings of the VLDB Endowment 15, no. 4 (December 2021): 914–22. http://dx.doi.org/10.14778/3503585.3503599.

Abstract:
Graph embedding aims at learning a vector-based representation of vertices that incorporates the structure of the graph. This representation then enables inference of graph properties. Existing graph embedding techniques, however, do not scale well to large graphs. While several techniques to scale graph embedding using compute clusters have been proposed, they require continuous communication between the compute nodes and cannot handle node failure. We therefore propose a framework for scalable and robust graph embedding based on the MapReduce model, which can distribute any existing embedding technique. Our method splits a graph into subgraphs to learn their embeddings in isolation and subsequently reconciles the embedding spaces derived for the subgraphs. We realize this idea through a novel distributed graph decomposition algorithm. In addition, we show how to implement our framework in Spark to enable efficient learning of effective embeddings. Experimental results illustrate that our approach scales well, while largely maintaining the embedding quality.
24

Wu, Xueyi, Yuanyuan Xu, Wenjie Zhang, and Ying Zhang. "Billion-Scale Bipartite Graph Embedding: A Global-Local Induced Approach." Proceedings of the VLDB Endowment 17, no. 2 (October 2023): 175–83. http://dx.doi.org/10.14778/3626292.3626300.

Abstract:
Bipartite graph embedding (BGE), as the fundamental task in bipartite network analysis, is to map each node to compact low-dimensional vectors that preserve intrinsic properties. The existing solutions towards BGE fall into two groups: metric-based methods and graph neural network-based (GNN-based) methods. The latter typically generates higher-quality embeddings than the former due to the strong representation ability of deep learning. Nevertheless, none of the existing GNN-based methods can handle billion-scale bipartite graphs due to the expensive message passing or complex modelling choices. Hence, existing solutions face a challenge in achieving both embedding quality and model scalability. Motivated by this, we propose a novel graph neural network named AnchorGNN based on global-local learning framework, which can generate high-quality BGE and scale to billion-scale bipartite graphs. Concretely, AnchorGNN leverages a novel anchor-based message passing schema for global learning, which enables global knowledge to be incorporated to generate node embeddings. Meanwhile, AnchorGNN offers an efficient one-hop local structure modelling using maximum likelihood estimation for bipartite graphs with rational analysis, avoiding large adjacency matrix construction. Both global information and local structure are integrated to generate distinguishable node embeddings. Extensive experiments demonstrate that AnchorGNN outperforms the best competitor by up to 36% in accuracy and achieves up to 28 times speed-up against the only metric-based baseline on billion-scale bipartite graphs.
25

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Abstract:
Graph embedding techniques make it possible to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed knowledge graph triples. We leverage the idea of the line graph of a graph and extend it to the context of knowledge graphs. We introduce an edge weighting mechanism for the line graph based on semantic proximity. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world knowledge graphs and compare it with related work. We also show an application of triple embeddings in the context of user-item recommendations.
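Triple2Vec builds on the classical line graph, extended to knowledge graphs with semantic-proximity edge weights. The unweighted construction it starts from can be sketched as follows (plain-Python illustration, not the authors' code):

```python
from itertools import combinations

def line_graph(edges):
    """Classical line graph: every edge of the original graph becomes a
    node, and two such nodes are adjacent iff the edges share an endpoint."""
    nodes = [frozenset(e) for e in edges]
    adjacency = {n: set() for n in nodes}
    for a, b in combinations(nodes, 2):
        if a & b:  # the two edges share a vertex
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

# Path 1-2-3: its two edges meet at vertex 2, so they are
# adjacent in the line graph.
lg = line_graph([(1, 2), (2, 3)])
```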
26

Kurlin, V. A. "Basic embeddings of graphs and Dynnikov's three-page embedding method." Russian Mathematical Surveys 58, no. 2 (April 30, 2003): 372–74. http://dx.doi.org/10.1070/rm2003v058n02abeh000617.

27

Jing, Baoyu, Yuchen Yan, Kaize Ding, Chanyoung Park, Yada Zhu, Huan Liu, and Hanghang Tong. "Sterling: Synergistic Representation Learning on Bipartite Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 12976–84. http://dx.doi.org/10.1609/aaai.v38i12.29195.

Abstract:
A fundamental challenge of bipartite graph representation learning is how to extract informative node embeddings. Self-Supervised Learning (SSL) is a promising paradigm to address this challenge. Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs. Contrastive learning usually requires a large number of negative node pairs, which could lead to computational burden and semantic errors. In this paper, we introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs. STERLING preserves the unique local and global synergies in bipartite graphs. The local synergies are captured by maximizing the similarity of the inter-type and intra-type positive node pairs, and the global synergies are captured by maximizing the mutual information of co-clusters. Theoretical analysis demonstrates that STERLING could improve the connectivity between different node types in the embedding space. Extensive empirical evaluation on various benchmark datasets and tasks demonstrates the effectiveness of STERLING for extracting node embeddings.
28

Chen, Fukun, Guisheng Yin, Yuxin Dong, Gesu Li, and Weiqi Zhang. "KHGCN: Knowledge-Enhanced Recommendation with Hierarchical Graph Capsule Network." Entropy 25, no. 4 (April 20, 2023): 697. http://dx.doi.org/10.3390/e25040697.

Abstract:
Knowledge graphs used as external information have become one of the mainstream directions of current recommendation systems. Various knowledge-graph-representation methods have been proposed to promote the development of knowledge graphs in related fields. Knowledge-graph-embedding methods can learn entity information and complex relationships between the entities in knowledge graphs. Furthermore, recently proposed graph neural networks can learn higher-order representations of entities and relationships in knowledge graphs. The complete representation in the knowledge graph therefore enriches item information and alleviates the cold-start problem and overly sparse data in recommendation. However, using the knowledge graph's entire entity and relation representation in personalized recommendation tasks will introduce unnecessary noise for different users. To learn the entity-relationship representation in the knowledge graph while effectively removing noise, we propose a model named knowledge-enhanced hierarchical graph capsule network (KHGCN), which can extract node embeddings in graphs while learning the hierarchical structure of graphs. Our model eliminates noisy entity and relationship representations in the knowledge graph through entity disentangling for recommendation and introduces an attentive mechanism to strengthen knowledge-graph aggregation. Our model learns the representation of entity relationships via an original graph capsule network. The capsule neural networks represent the structured information between the entities more completely. We validate the proposed model on real-world datasets, and the validation results demonstrate the model's effectiveness.
29

Sun, Ke, Shuo Yu, Ciyuan Peng, Yueru Wang, Osama Alfarraj, Amr Tolba, and Feng Xia. "Relational Structure-Aware Knowledge Graph Representation in Complex Space." Mathematics 10, no. 11 (June 4, 2022): 1930. http://dx.doi.org/10.3390/math10111930.

Abstract:
Relations in knowledge graphs have rich relational structures and various binary relational patterns. Various relation modelling strategies are proposed for embedding knowledge graphs, but they fail to fully capture both features of relations, rich relational structures and various binary relational patterns. To address the problem of insufficient embedding due to the complexity of the relations, we propose a novel knowledge graph representation model in complex space, namely MARS, to exploit complex relations to embed knowledge graphs. MARS takes the mechanisms of complex numbers and message-passing and then embeds triplets into relation-specific complex hyperplanes. Thus, MARS can well preserve various relation patterns, as well as structural information in knowledge graphs. In addition, we find that the scores generated from the score function approximate a Gaussian distribution. The scores in the tail cannot effectively represent triplets. To address this particular issue and improve the precision of embeddings, we use the standard deviation to limit the dispersion of the score distribution, resulting in more accurate embeddings of triplets. Comprehensive experiments on multiple benchmarks demonstrate that our model significantly outperforms existing state-of-the-art models for link prediction and triple classification.
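The abstract does not give the MARS score function; as a hedged illustration of relation-specific operations in complex space, the sketch below scores a triple by rotating the head embedding (in the spirit of rotation-based models such as RotatE) and measuring its distance to the tail. All names and values here are assumptions for illustration:

```python
import cmath

def rotation_score(head, phases, tail):
    """Distance between the relation-rotated head and the tail in
    complex space; lower means the triple is more plausible.
    Illustrative only: the actual MARS scoring is more involved."""
    rotated = [h * cmath.exp(1j * p) for h, p in zip(head, phases)]
    return sum(abs(r - t) for r, t in zip(rotated, tail))

head = [1 + 0j, 0 + 1j]
phases = [cmath.pi / 2, 0.0]   # relation: rotate dimension 0 by 90 degrees
good_tail = [0 + 1j, 0 + 1j]   # exactly the rotated head
bad_tail = [1 + 0j, 0 - 1j]
```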
30

SANKI, BIDYUT. "EMBEDDING OF METRIC GRAPHS ON HYPERBOLIC SURFACES." Bulletin of the Australian Mathematical Society 99, no. 03 (February 13, 2019): 508–20. http://dx.doi.org/10.1017/s0004972719000145.

Abstract:
An embedding of a metric graph $(G,d)$ on a closed hyperbolic surface is essential if each complementary region has a negative Euler characteristic. We show, by construction, that given any metric graph, its metric can be rescaled so that it admits an essential and isometric embedding on a closed hyperbolic surface. The essential genus $g_{e}(G)$ of $(G,d)$ is the lowest genus of a surface on which such an embedding is possible. We establish a formula to compute $g_{e}(G)$ and show that, for every integer $g\geq g_{e}(G)$, there is an embedding of $(G,d)$ (possibly after a rescaling of $d$) on a surface of genus $g$. Next, we study minimal embeddings where each complementary region has Euler characteristic $-1$. The maximum essential genus $g_{e}^{\max}(G)$ of $(G,d)$ is the largest genus of a surface on which the graph is minimally embedded. We describe a method for an essential embedding of $(G,d)$, where $g_{e}(G)$ and $g_{e}^{\max}(G)$ are realised.
31

Barros, Claudio D. T., Matheus R. F. Mendonça, Alex B. Vieira, and Artur Ziviani. "A Survey on Embedding Dynamic Graphs." ACM Computing Surveys 55, no. 1 (January 31, 2023): 1–37. http://dx.doi.org/10.1145/3483595.

Abstract:
Embedding static graphs in low-dimensional vector spaces plays a key role in network analytics and inference, supporting applications like node classification, link prediction, and graph visualization. However, many real-world networks present dynamic behavior, including topological evolution, feature evolution, and diffusion. Therefore, several methods for embedding dynamic graphs have been proposed to learn network representations over time, facing novel challenges, such as time-domain modeling, temporal features to be captured, and the temporal granularity to be embedded. In this survey, we overview dynamic graph embedding, discussing its fundamentals and the recent advances developed so far. We introduce the formal definition of dynamic graph embedding, focusing on the problem setting and introducing a novel taxonomy for dynamic graph embedding input and output. We further explore different dynamic behaviors that may be encompassed by embeddings, classifying by topological evolution, feature evolution, and processes on networks. Afterward, we describe existing techniques and propose a taxonomy for dynamic graph embedding techniques based on algorithmic approaches, from matrix and tensor factorization to deep learning, random walks, and temporal point processes. We also elucidate main applications, including dynamic link prediction, anomaly detection, and diffusion prediction, and we further state some promising research directions in the area.
32

Kainen, Paul C., Samuel Joslin, and Shannon Overbay. "On Dispersability of Some Circulant Graphs." Journal of Graph Algorithms and Applications 28, no. 1 (June 11, 2024): 225–41. http://dx.doi.org/10.7155/jgaa.v28i1.2941.

Abstract:
The matching book thickness of a graph is the least number of pages in a book embedding such that each page is a matching. A graph is dispersable if its matching book thickness equals its maximum degree. Minimum-page matching book embeddings are given for bipartite and for most non-bipartite circulants contained in the (Harary) cube of a cycle, and for various higher powers.
33

NIKKUNI, RYO. "THE SECOND SKEW-SYMMETRIC COHOMOLOGY GROUP AND SPATIAL EMBEDDINGS OF GRAPHS." Journal of Knot Theory and Its Ramifications 09, no. 03 (May 2000): 387–411. http://dx.doi.org/10.1142/s0218216500000189.

Abstract:
Let L(G) be the second skew-symmetric cohomology group of the residual space of a graph G. We determine L(G) in the case G is a 3-connected simple graph, and give the structure of L(G) in the case of G is a complete graph and a complete bipartite graph. By using these results, we determine the Wu invariants in L(G) of the spatial embeddings of the complete graph and those of the complete bipartite graph, respectively. Since the Wu invariant of a spatial embedding is a complete invariant up to homology which is an equivalence relation on spatial embeddings introduced in [12], we give a homology classification of the spatial embeddings of such graphs.
34

Li, Tongxin, Weiping Wang, Xiaobo Li, Tao Wang, Xin Zhou, and Meigen Huang. "Embedding Uncertain Temporal Knowledge Graphs." Mathematics 11, no. 3 (February 3, 2023): 775. http://dx.doi.org/10.3390/math11030775.

Abstract:
Knowledge graph (KG) embedding for predicting missing relation facts in incomplete knowledge graphs (KGs) has been widely explored. In addition to the benchmark triple structural information such as head entities, tail entities, and the relations between them, there is a large amount of uncertain and temporal information, which is difficult to exploit in KG embeddings, and there are some embedding models specifically for uncertain KGs and temporal KGs. However, these models either only utilize uncertain information or only temporal information, without integrating both kinds of information into the underlying model that utilizes triple structural information. In this paper, we propose an embedding model for uncertain temporal KGs called the confidence score, time, and ranking information embedded jointly model (CTRIEJ), which aims to preserve the uncertainty, temporal and structural information of relation facts in the embedding space. To further enhance the precision of the CTRIEJ model, we also introduce a self-adversarial negative sampling technique to generate negative samples. We use the embedding vectors obtained from our model to complete the missing relation facts and predict their corresponding confidence scores. Experiments are conducted on an uncertain temporal KG extracted from Wikidata via three tasks, i.e., confidence prediction, link prediction, and relation fact classification. The CTRIEJ model shows effectiveness in capturing uncertain and temporal knowledge by achieving promising results, and it consistently outperforms baselines on the three downstream experimental tasks.
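The self-adversarial negative sampling mentioned here (a technique popularized for KG embedding) weights negative samples by a softmax over their scores, so harder negatives contribute more to the loss. A minimal sketch of the weighting step, with an assumed temperature parameter `alpha`:

```python
import math

def self_adversarial_weights(neg_scores, alpha=1.0):
    """Softmax over negative-sample scores with temperature `alpha`:
    harder negatives (higher plausibility scores) receive more weight
    in the training loss."""
    exps = [math.exp(alpha * s) for s in neg_scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical negative triples, scored by some KG embedding model.
weights = self_adversarial_weights([2.0, 0.5, -1.0])
```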
35

Barthel, Senja. "On chirality of toroidal embeddings of polyhedral graphs." Journal of Knot Theory and Its Ramifications 26, no. 08 (May 22, 2017): 1750050. http://dx.doi.org/10.1142/s021821651750050x.

Abstract:
We investigate properties of spatial graphs on the standard torus. It is known that nontrivial embeddings of planar graphs in the torus contain a nontrivial knot or a nonsplit link due to [2, 3]. Building on this and using the chirality of torus knots and links [9, 10], we prove that the nontrivial embeddings of simple 3-connected planar graphs in the standard torus are chiral. For the case that the spatial graph contains a nontrivial knot, the statement was shown by Castle et al. [5]. We give an alternative proof using minors instead of the Euler characteristic. To prove the case in which the graph embedding contains a nonsplit link, we show the chirality of Hopf ladders with at least three rungs, thus generalizing a theorem of Simon [12].
36

Zhang, Kainan, Zhipeng Cai, and Daehee Seo. "Privacy-Preserving Federated Graph Neural Network Learning on Non-IID Graph Data." Wireless Communications and Mobile Computing 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/8545101.

Abstract:
Since the concept of federated learning (FL) was proposed by Google in 2017, many applications have been combined with FL technology due to its outstanding performance in data integration, computing performance, privacy protection, etc. However, most traditional federated learning-based applications focus on image processing and natural language processing, with few achievements in graph neural networks due to the non-independent identically distributed (non-IID) nature of graph data. Representation learning on graph-structured data generates graph embeddings, which help machines understand graphs effectively. Meanwhile, privacy protection plays a more meaningful role in analyzing graph-structured data such as social networks. Hence, this paper proposes PPFL-GNN, a novel privacy-preserving federated graph neural network framework for node representation learning, which is a pioneer work for graph neural network-based federated learning. In PPFL-GNN, clients utilize a local graph dataset to generate graph embeddings and integrate information from other collaborative clients to utilize federated learning to produce more accurate representation results. More importantly, by integrating embedding alignment techniques in PPFL-GNN, we overcome the obstacles of federated learning on non-IID graph data and can further reduce privacy exposure by sharing preferred information.
37

DI GIACOMO, EMILIO, and GIUSEPPE LIOTTA. "SIMULTANEOUS EMBEDDING OF OUTERPLANAR GRAPHS, PATHS, AND CYCLES." International Journal of Computational Geometry & Applications 17, no. 02 (April 2007): 139–60. http://dx.doi.org/10.1142/s0218195907002276.

Abstract:
Let G1 and G2 be two planar graphs having some vertices in common. A simultaneous embedding of G1 and G2 is a pair of crossing-free drawings of G1 and G2 such that each vertex in common is represented by the same point in both drawings. In this paper we show that an outerplanar graph and a simple path can be simultaneously embedded with fixed edges such that the edges in common are straight-line segments while the other edges of the outerplanar graph can have at most one bend per edge. We then exploit the technique for outerplanar graphs and paths to study simultaneous embeddings of other pairs of graphs. Namely, we study simultaneous embedding with fixed edges of: (i) two outerplanar graphs sharing a forest of paths and (ii) an outerplanar graph and a cycle.
38

Chen, Xuelu, Muhao Chen, Weijia Shi, Yizhou Sun, and Carlo Zaniolo. "Embedding Uncertain Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3363–70. http://dx.doi.org/10.1609/aaai.v33i01.33013363.

Abstract:
Embedding models for deterministic Knowledge Graphs (KG) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating the structured knowledge they contain into machine learning. However, there are many KGs that model uncertain knowledge, which typically model the inherent uncertainty of relation facts with a confidence score, and embedding such uncertain knowledge represents an unresolved challenge. The capturing of uncertain knowledge will benefit many knowledge-driven applications such as question answering and semantic search by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model UKGE, which aims to preserve both structural and uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different confidence score modeling strategies. Experiments are conducted on three real-world uncertain KGs via three tasks, i.e. confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge by achieving promising results, and it consistently outperforms baselines on these tasks.
39

BENBATATA, Sabrina, and Bilal SAOUD. "Network Embedding Methods: Study and Comparison." Electrotehnica, Electronica, Automatica 72, no. 4 (December 18, 2024): 72–78. https://doi.org/10.46904/eea.24.72.4.1108008.

Abstract:
Graphs are a powerful language that can model many systems in different fields, such as information science, social science, biology, mathematics, and physics. Graphs capture very well the relationships between nodes and their structure. However, graph-structured data is difficult to use directly as input to machine learning and deep learning models. This challenge can be overcome by network embedding, which represents a network in a low-dimensional vector space. Several methods have been proposed to embed networks; methods like DeepWalk, Node2Vec, and GraphSAGE have gained traction. DeepWalk employs random walks to capture local neighborhood information, generating embeddings using skip-gram models. Node2Vec extends this by incorporating biased random walks, balancing exploration and exploitation of the network. GraphSAGE adopts a neighborhood aggregation strategy, aggregating features from a node's local graph structure through different convolutional layers. These methods facilitate downstream tasks like node classification, link prediction, and clustering. Challenges include scalability to large networks, handling heterogeneous information, and preserving structural characteristics. Recent advancements integrate techniques like attention mechanisms and reinforcement learning for improved embeddings. Network embedding methods continue to evolve, catering to diverse applications in social networks, biology, and recommendation systems, offering insights into complex network structures. In this paper, we introduce network embedding concepts and several methods, and present a comparison between embedding methods. To evaluate the embedding methods, we run them on node classification tasks on different benchmark datasets. Through rigorous evaluation, we assess their effectiveness in capturing latent features and improving classification accuracy across various domains.
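The random-walk step of DeepWalk summarized above can be sketched in a few lines; the walks then serve as "sentences" for a skip-gram model. A toy adjacency list with uniform transition probabilities (illustrative, not the library implementation):

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=42):
    """Uniform random walks over an adjacency list, as in DeepWalk.

    The returned node sequences play the role of sentences for a
    skip-gram model in the downstream embedding step."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                neighbors = adj[walk[-1]]
                if not neighbors:  # dead end: stop the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy path graph 0-1-2.
adj = {0: [1], 1: [0, 2], 2: [1]}
walks = random_walks(adj)
```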
40

Friesen, Tyler, and Vassily Olegovich Manturov. "Checkerboard embeddings of *-graphs into nonorientable surfaces." Journal of Knot Theory and Its Ramifications 23, no. 07 (June 2014): 1460004. http://dx.doi.org/10.1142/s0218216514600049.

Abstract:
This paper considers *-graphs in which all vertices have degree 4 or 6, and studies the question of calculating the genus of nonorientable surfaces into which such graphs may be embedded. In a previous paper [Embeddings of *-graphs into 2-surfaces, preprint (2012), arXiv:1212.5646] by the authors, the problem of calculating whether a given *-graph in which all vertices have degree 4 or 6 admits a ℤ2-homologically trivial embedding into a given orientable surface was shown to be equivalent to a problem on matrices. Here we extend those results to nonorientable surfaces. The embeddability condition that we obtain yields quadratic-time algorithms to determine whether a *-graph with all vertices of degree 4 or 6 admits a ℤ2-homologically trivial embedding into the projective plane or into the Klein bottle.
41

Tsitsulin, Anton, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Ivan Oseledets, and Emmanuel Müller. "FREDE." Proceedings of the VLDB Endowment 14, no. 6 (February 2021): 1102–10. http://dx.doi.org/10.14778/3447689.3447713.

Abstract:
Low-dimensional representations, or embeddings, of a graph's nodes facilitate several practical data science and data engineering tasks. As such embeddings rely, explicitly or implicitly, on a similarity measure among nodes, they require the computation of a quadratic similarity matrix, inducing a tradeoff between space complexity and embedding quality. To date, no graph embedding work combines (i) linear space complexity, (ii) a nonlinear transform as its basis, and (iii) nontrivial quality guarantees. In this paper we introduce FREDE (FREquent Directions Embedding), a graph embedding based on matrix sketching that combines those three desiderata. Starting out from the observation that embedding methods aim to preserve the covariance among the rows of a similarity matrix, FREDE iteratively improves on quality while individually processing rows of a nonlinearly transformed PPR similarity matrix derived from a state-of-the-art graph embedding method and provides, at any iteration, column-covariance approximation guarantees in due course almost indistinguishable from those of the optimal approximation by SVD. Our experimental evaluation on variably sized networks shows that FREDE performs almost as well as SVD and competitively against state-of-the-art embedding methods in diverse data science tasks, even when it is based on as little as 10% of node similarities.
42

Xie, Cunxiang, Limin Zhang, and Zhaogen Zhong. "Entity Alignment Method Based on Joint Learning of Entity and Attribute Representations." Applied Sciences 13, no. 9 (May 6, 2023): 5748. http://dx.doi.org/10.3390/app13095748.

Abstract:
Entity alignment helps discover and link entities from different knowledge graphs (KGs) that refer to the same real-world entity, making it a critical technique for KG fusion. Most entity alignment methods are based on knowledge representation learning, which uses a mapping function to project entities from different KGs into a unified vector space and align them based on calculated similarities. However, this process requires sufficient pre-aligned entity pairs. To address this problem, this study proposes an entity alignment method based on joint learning of entity and attribute representations. Structural embeddings are learned using the triples modeling method based on TransE and PTransE and extracted from the embedding vector space utilizing semantic information from direct and multi-step relation paths. Simultaneously, attribute character embeddings are learned using the N-gram-based compositional function to encode a character sequence for the attribute values, followed by TransE to model attribute triples in the embedding vector space to obtain attribute character embedding vectors. By learning the structural and attribute character embeddings simultaneously, the structural embeddings of entities from different KGs can be transferred into a unified vector space. Lastly, the similarities in the structural embedding of different entities were calculated to perform entity alignment. The experimental results showed that the proposed method performed well on the DBP15K and DWK100K datasets, and it outperformed currently available entity alignment methods by 16.8, 27.5, and 24.0% in precision, recall, and F1 measure, respectively.
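The N-gram-based compositional function mentioned here starts from the character n-grams of an attribute value. A minimal sketch with assumed `#` boundary markers (the paper's exact composition over n-gram embeddings is not specified in the abstract):

```python
def char_ngrams(value, n=3):
    """Character n-grams of an attribute value, with assumed '#'
    boundary markers; summing per-n-gram embeddings is one simple way
    to compose a character-level attribute embedding."""
    padded = f"#{value}#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

grams = char_ngrams("Paris")
```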
43

Naimi, Ramin, and Elena Pavelescu. "Linear embeddings of K9 are triple linked." Journal of Knot Theory and Its Ramifications 23, no. 03 (March 2014): 1420001. http://dx.doi.org/10.1142/s0218216514200016.

Abstract:
We use the theory of oriented matroids to show that any linear embedding of K9, the complete graph on nine vertices, into 3-space contains a non-split link with three components. This shows that Sachs' conjecture on linear, linkless embeddings of graphs, whether true or false, does not extend to 3-links.
44

Sheng, Jinfang, Zili Yang, Bin Wang, and Yu Chen. "Attribute Graph Embedding Based on Multi-Order Adjacency Views and Attention Mechanisms." Mathematics 12, no. 5 (February 27, 2024): 697. http://dx.doi.org/10.3390/math12050697.

Abstract:
Graph embedding plays an important role in the analysis and study of typical non-Euclidean data, such as graphs. Graph embedding aims to transform complex graph structures into vector representations for further machine learning or data mining tasks. It helps capture relationships and similarities between nodes, providing better representations for various tasks on graphs. Different orders of neighbors have different impacts on the generation of node embedding vectors. Therefore, this paper proposes a multi-order adjacency view encoder to fuse the feature information of neighbors at different orders. We generate different node views for different orders of neighbor information, consider different orders of neighbor information through different views, and then use attention mechanisms to integrate node embeddings from different views. Finally, we evaluate the effectiveness of our model through downstream tasks on the graph. Experimental results demonstrate that our model achieves improvements in attributed graph clustering and link prediction tasks compared to existing methods, indicating that the generated embedding representations have higher expressiveness.
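A k-th order adjacency "view" as described above collects the neighbors at exactly k hops from a node; a small BFS-layer sketch (illustrative, not the authors' implementation):

```python
def k_hop_neighbors(adj, node, k):
    """Nodes at shortest-path distance exactly k from `node`
    (the k-th order adjacency view of that node)."""
    frontier, seen = {node}, {node}
    for _ in range(k):
        frontier = {nbr for u in frontier for nbr in adj[u]} - seen
        seen |= frontier
    return frontier

# Toy path graph 0-1-2-3: each order exposes a different neighborhood view.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
first_order = k_hop_neighbors(adj, 0, 1)
second_order = k_hop_neighbors(adj, 0, 2)
```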
45

Gnezdilova, V. A., and Z. V. Apanovich. "Russian-English dataset and comparative analysis of algorithms for cross-language embedding-based entity alignment." Journal of Physics: Conference Series 2099, no. 1 (November 1, 2021): 012023. http://dx.doi.org/10.1088/1742-6596/2099/1/012023.

Full text source
Abstract:
Abstract. The problem of fusing data from databases and knowledge graphs in different languages is becoming increasingly important. The main step of such a fusion is identifying equivalent entities in different knowledge graphs and merging their descriptions. This problem is known as identity resolution, or entity alignment. Recently, a large group of new entity alignment methods has emerged. They compute so-called "embeddings" of entities and establish the equivalence of entities by comparing their embeddings. This paper presents experiments with embedding-based entity alignment algorithms on a Russian-English dataset. The purpose of this work is to identify language-specific features of the entity alignment algorithms. Future directions of research are also outlined.
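Establishing equivalence by comparing embeddings, as the abstract describes, often reduces to a nearest-neighbor search in a shared vector space. The toy data and the mutual-nearest-neighbor rule below are assumptions for illustration, not the algorithms evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
kg_a = rng.normal(size=(6, 8))                    # entity embeddings of KG 1
kg_b = kg_a + 0.05 * rng.normal(size=(6, 8))      # noisy counterparts in KG 2

def mutual_nearest(A, B):
    """Pairs (i, j) that are each other's nearest neighbor by cosine similarity."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sim = A @ B.T
    a2b = sim.argmax(axis=1)    # best match in B for each entity of A
    b2a = sim.argmax(axis=0)    # best match in A for each entity of B
    return [(i, int(a2b[i])) for i in range(len(A)) if b2a[a2b[i]] == i]

pairs = mutual_nearest(kg_a, kg_b)
print(all(i == j for i, j in pairs) and len(pairs) == 6)  # True: low noise preserves alignment
```

Real systems first map the two graphs into a shared space (e.g. via translation-based or GNN encoders) before this matching step; the mutual-NN filter simply discards one-sided matches.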
46

Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings." Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.

Full text source
Abstract:
Embeddings, i.e., vector representations of objects, such as texts, images, or graphs, play a key role in deep learning methodologies nowadays. Prior research has shown the importance of analyzing the isotropy of textual embeddings for transformer-based text encoders, such as the BERT model. Anisotropic word embeddings do not use the entire space, instead concentrating on a narrow cone in such a pretrained vector space, negatively affecting the performance of applications, such as textual semantic similarity. Transforming a vector space to optimize isotropy has been shown to be beneficial for improving performance in text processing tasks. This paper is the first comprehensive investigation of the distribution of multimodal embeddings using the example of OpenAI’s CLIP pretrained model. We aimed to deepen the understanding of the embedding space of multimodal embeddings, which has previously been unexplored in this respect, and study the impact on various end tasks. Our initial efforts were focused on measuring the alignment of image and text embedding distributions, with an emphasis on their isotropic properties. In addition, we evaluated several gradient-free approaches to enhance these properties, establishing their efficiency in improving the isotropy/alignment of the embeddings and, in certain cases, the zero-shot classification accuracy. Significantly, our analysis revealed that both CLIP and BERT models yielded embeddings situated within a cone immediately after initialization and preceding training. However, they were mostly isotropic in the local sense. We further extended our investigation to the structure of multilingual CLIP text embeddings, confirming that the observed characteristics were language-independent. By computing the few-shot classification accuracy and point-cloud metrics, we provide evidence of a strong correlation among multilingual embeddings. 
Transforming embeddings using the methods described in this article makes them easier to visualize. At the same time, multiple experiments showed that downstream task performance does not drop substantially for the transformed embeddings (and in some cases even improves). This means that one can obtain an easily visualizable embedding space without substantially losing quality on downstream tasks.
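The "narrow cone" phenomenon the abstract refers to has a simple diagnostic: the average pairwise cosine similarity of the embeddings, which is near 0 for isotropic vectors and approaches 1 for vectors sharing a dominant direction. The sketch below uses synthetic data, and mean-centering stands in as one simple gradient-free correction; both are assumptions, not the paper's exact procedure:

```python
import numpy as np

def avg_cosine(E):
    """Mean pairwise cosine similarity of the rows of E (self-pairs excluded)."""
    U = E / np.linalg.norm(E, axis=1, keepdims=True)
    S = U @ U.T
    n = len(E)
    return (S.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
isotropic = rng.normal(size=(500, 64))          # directions spread uniformly
cone = rng.normal(size=(500, 64)) + 5.0         # shared offset -> narrow cone

print(round(avg_cosine(isotropic), 2))          # near 0.0
print(round(avg_cosine(cone), 2))               # close to 1.0
centered = cone - cone.mean(axis=0)             # remove the common direction
print(round(avg_cosine(centered), 2))           # near 0.0 again
```

Measures of this kind, applied separately to the image and text towers of a model like CLIP, are one way to quantify the alignment and isotropy properties the paper studies.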
47

SOMA, TERUHIKO. "SYMMETRIES OF MINIMUM GRAPHS IN THE 3-SPHERE." Journal of Knot Theory and Its Ramifications 07, no. 05 (August 1998): 651–57. http://dx.doi.org/10.1142/s0218216598000346.

Full text source
Abstract:
In this paper, a new equivalence relation on spatial graphs, weak isotopy, is introduced. Using a rearrangement theorem (Theorem 1), it is shown that, for any spatial embedding Γ of a connected graph, the weak isotopy class [Γ]_{w.i.} contains a unique minimum element Γ_min up to ambient isotopy. We will see that this minimum element plays an important role in investigating symmetries of embeddings in [Γ]_{w.i.}.
48

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas, and Shivaram Venkataraman. "Demo of marius." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Full text source
Abstract:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource-intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently-introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link-prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
49

Mavromatis, Costas, Prasanna Lakkur Subramanyam, Vassilis N. Ioannidis, Adesoji Adeshina, Phillip R. Howard, Tetiana Grinberg, Nagib Hakim, and George Karypis. "TempoQR: Temporal Question Reasoning over Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5825–33. http://dx.doi.org/10.1609/aaai.v36i5.20526.

Full text source
Abstract:
Knowledge Graph Question Answering (KGQA) involves retrieving facts from a Knowledge Graph (KG) using natural language queries. A KG is a curated set of facts consisting of entities linked by relations. Certain facts also include temporal information, forming a Temporal KG (TKG). Although many natural questions involve explicit or implicit time constraints, question answering (QA) over TKGs has been a relatively unexplored area. Existing solutions are mainly designed for simple temporal questions that can be answered directly by a single TKG fact. This paper puts forth a comprehensive embedding-based framework for answering complex questions over TKGs. Our method, termed temporal question reasoning (TempoQR), exploits TKG embeddings to ground the question to the specific entities and time scope it refers to. It does so by augmenting the question embeddings with context, entity and time-aware information by employing three specialized modules. The first computes a textual representation of a given question, the second combines it with the entity embeddings for entities involved in the question, and the third generates question-specific time embeddings. Finally, a transformer-based encoder learns to fuse the generated temporal information with the question representation, which is used for answer predictions. Extensive experiments show that TempoQR improves accuracy by 25--45 percentage points on complex temporal questions over state-of-the-art approaches and it generalizes better to unseen question types.
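The grounding step the abstract describes — augmenting a question embedding with entity-aware and time-aware information, then scoring candidate answers — can be illustrated with toy tensors. All arrays, dimensions, and the simple additive fusion below are illustrative assumptions; TempoQR itself uses learned modules and a transformer-based encoder for the fusion:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 16
entity_emb = rng.normal(size=(10, dim))    # pretrained TKG entity embeddings
time_emb = rng.normal(size=(5, dim))       # one embedding per timestamp

def ground_question(q_text_emb, entity_ids, time_id):
    """Fuse textual, entity-aware, and time-aware information (toy additive fusion)."""
    e = entity_emb[entity_ids].mean(axis=0)  # entities mentioned in the question
    t = time_emb[time_id]                    # time scope of the question
    return q_text_emb + e + t

def answer_scores(q_vec):
    """Rank all entities as candidate answers by dot-product similarity."""
    return entity_emb @ q_vec

q = ground_question(rng.normal(size=dim), entity_ids=[2, 7], time_id=3)
scores = answer_scores(q)
best = int(np.argmax(scores))
print(0 <= best < 10)  # True: the top-scoring entity is returned as the answer
```

The point of the sketch is the pipeline shape: question text, question entities, and question time each contribute a vector, and answer prediction is a similarity ranking over entity embeddings.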
50

Merchant, Arpit, Aristides Gionis, and Michael Mathioudakis. "Succinct graph representations as distance oracles." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2297–306. http://dx.doi.org/10.14778/3551793.3551794.

Full text source
Abstract:
Distance oracles answer shortest-path queries between any pair of nodes in a graph. They are often built using succinct graph representations such as spanners, sketches, and compressors to minimize oracle size and query answering latency. Node embeddings, in particular, offer graph representations that place adjacent nodes near each other in a low-rank space. However, their use in the design of distance oracles has not been sufficiently studied. In this paper, we empirically compare exact distance oracles constructed based on a variety of node embeddings and other succinct representations. We evaluate twelve such oracles along three measures of efficiency: construction time, memory requirements, and query-processing time over fourteen real datasets and four synthetic graphs. We show that distances between embedding vectors are excellent estimators of graph distances when graphs are well-structured, but less so for more unstructured graphs. Overall, our findings suggest that exact oracles based on embeddings can be constructed faster than multi-dimensional scaling (MDS) but slower than compressed adjacency indexes, require less memory than landmark oracles but more than sparsifiers or indexes, can answer queries faster than indexes but slower than MDS, and are exact more often with a smaller additive error than spanners (that have multiplicative error) while not being lossless like adjacency lists. Finally, while the exactness of such oracles is infeasible to maintain for huge graphs even under large amounts of resources, we empirically demonstrate that approximate oracles based on GOSH embeddings can efficiently scale to graphs of 100M+ nodes with only small additive errors in distance estimations.
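The basic mechanism behind an embedding-based distance oracle — answer shortest-path queries from vectors alone, without touching the graph — can be sketched in a few lines. Classical MDS on the exact distance matrix stands in here for the learned embeddings the paper evaluates; on a small well-structured graph (a path) the recovery is exact:

```python
import numpy as np

def bfs_distances(adj):
    """All-pairs shortest paths by BFS on an unweighted graph (dict of lists)."""
    n = len(adj)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s], frontier = 0, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if D[s, v] == np.inf:
                        D[s, v] = D[s, u] + 1
                        nxt.append(v)
            frontier = nxt
    return D

def mds_embed(D, k=2):
    """Classical MDS: embed nodes so vector distances approximate D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph 0-1-2-3
D = bfs_distances(adj)
E = mds_embed(D)
est = float(np.linalg.norm(E[0] - E[3]))      # oracle answer from vectors only
print(round(est, 3))  # 3.0, matching the true shortest-path distance
```

The oracle trades BFS per query for a single vector-norm computation; as the paper shows, how well this estimate holds up depends strongly on how well-structured the graph is.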
