A selection of scholarly literature on the topic "Graph embeddings"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Graph embeddings".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever these are available in the source's metadata.

Journal articles on the topic "Graph embeddings"

1

Zhou, Houquan, Shenghua Liu, Danai Koutra, Huawei Shen, and Xueqi Cheng. "A Provable Framework of Learning Graph Embeddings via Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4946–53. http://dx.doi.org/10.1609/aaai.v37i4.25621.

Abstract:
Given a large graph, can we learn its node embeddings from a smaller summary graph? What is the relationship between embeddings learned from original graphs and their summary graphs? Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to alleviate it via graph summarization, which typically includes the three steps: reducing the graph size by combining nodes and edges into supernodes and superedges, learning the supernode embedding on the summary graph, and then restoring the embeddings of the original nodes. However, the justification behind those steps is still unknown. In this work, we propose GELSUMM, a well-formulated graph embedding learning framework based on graph summarization, in which we show the theoretical ground of learning from summary graphs and the restoration with the three well-known graph embedding approaches in a closed form. Through extensive experiments on real-world datasets, we demonstrate that our methods can learn graph embeddings with matching or better performance on downstream tasks. This work provides theoretical analysis for learning node embeddings via summarization and helps explain and understand the mechanism of the existing works.
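The three-step pipeline this abstract describes (coarsen the graph, embed the summary, restore original-node embeddings) is easy to prototype. A minimal sketch follows; the label-propagation coarsening, the spectral base embedding, and the copy-back restoration are stand-in assumptions for illustration, not the paper's GELSUMM.

```python
# Hedged sketch of the summarize -> embed -> restore pipeline described above.
import networkx as nx
import numpy as np
from networkx.algorithms import community

def summarize(G):
    """Merge nodes into supernodes via a cheap community heuristic."""
    comms = community.label_propagation_communities(G)
    node_to_super = {v: sid for sid, comm in enumerate(comms) for v in comm}
    S = nx.Graph()
    S.add_nodes_from(set(node_to_super.values()))
    S.add_edges_from((node_to_super[u], node_to_super[v])
                     for u, v in G.edges() if node_to_super[u] != node_to_super[v])
    return S, node_to_super

def embed_summary(S, dim=4):
    """Spectral embedding of the (small) summary graph."""
    nodes = list(S.nodes())
    L = nx.normalized_laplacian_matrix(S, nodelist=nodes).toarray()
    _, vecs = np.linalg.eigh(L)
    k = min(dim, len(nodes) - 1)
    return {n: vecs[i, 1:k + 1] for i, n in enumerate(nodes)}

def restore(G, super_emb, node_to_super):
    """Simplest restoration: each original node inherits its supernode's vector."""
    return {v: super_emb[node_to_super[v]] for v in G.nodes()}

G = nx.karate_club_graph()
S, mapping = summarize(G)
node_emb = restore(G, embed_summary(S), mapping)
```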
2

Mohar, Bojan. "Combinatorial Local Planarity and the Width of Graph Embeddings." Canadian Journal of Mathematics 44, no. 6 (December 1, 1992): 1272–88. http://dx.doi.org/10.4153/cjm-1992-076-8.

Abstract:
Let G be a graph embedded in a closed surface. The embedding is "locally planar" if for each face, a "large" neighbourhood of this face is simply connected. This notion is formalized, following [RV], by introducing the width ρ(ψ) of the embedding ψ. It is shown that embeddings with ρ(ψ) ≥ 3 behave very much like the embeddings of planar graphs in the 2-sphere. Another notion, "combinatorial local planarity", is introduced. The criterion is independent of embeddings of the graph, but it guarantees that a given cycle in a graph G must be contractible in any minimal genus embedding of G (either orientable, or non-orientable). It generalizes the width introduced before. As an application, short proofs of some important recently discovered results about embeddings of graphs are given and generalized or improved. Uniqueness and switching equivalence of graphs embedded in a fixed surface are also considered.
3

Makarov, Ilya, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Subelj. "Survey on graph embeddings and their applications to machine learning problems on graphs." PeerJ Computer Science 7 (February 4, 2021): e357. http://dx.doi.org/10.7717/peerj-cs.357.

Abstract:
Dealing with relational data has always required significant computational resources, domain expertise, and task-dependent feature engineering to incorporate structural information into a predictive model. Nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. So-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges, and subgraphs, while preserving inner graph properties. Using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation. Our survey aims to describe the core concepts of graph embeddings and provide several taxonomies for their description. First, we start with the methodological approach and extract three types of graph embedding models based on matrix factorization, random walks, and deep learning approaches. Next, we describe how different types of networks impact the ability of models to incorporate structural and attributed data into a unified embedding. Going further, we perform a thorough evaluation of graph embedding applications to machine learning problems on graphs, among which are node classification, link prediction, clustering, visualization, compression, and a family of whole-graph embedding algorithms suitable for graph classification, similarity, and alignment problems. Finally, we overview the existing applications of graph embeddings to computer science domains, formulate open problems, and provide experimental results, explaining how different network properties affect graph embedding quality in four classic machine learning problems on graphs: node classification, link prediction, clustering, and graph visualization. As a result, our survey covers a new, rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs.
4

Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (October 1, 2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.

Abstract:
Objective: The study sought to explore the use of deep learning techniques to measure the semantic relatedness between Unified Medical Language System (UMLS) concepts. Materials and Methods: Concept sentence embeddings were generated for UMLS concepts by applying the word embedding models BioWordVec and various flavors of BERT to concept sentences formed by concatenating UMLS terms. Graph embeddings were generated by graph convolutional networks and 4 knowledge graph embedding models, using graphs built from UMLS hierarchical relations. Semantic relatedness was measured by the cosine between the concepts' embedding vectors. Performance was compared with 2 traditional path-based (shortest path and Leacock-Chodorow) measurements and the publicly available concept embeddings, cui2vec, generated from large biomedical corpora. The concept sentence embeddings were also evaluated on a word sense disambiguation (WSD) task. Reference standards used included the semantic relatedness and semantic similarity datasets from the University of Minnesota, concept pairs generated from the Standardized MedDRA Queries, and the MeSH (Medical Subject Headings) WSD corpus. Results: Sentence embeddings generated by BioWordVec outperformed all other methods used individually in semantic relatedness measurements. Graph convolutional network graph embedding uniformly outperformed path-based measurements and was better than some word embeddings for the Standardized MedDRA Queries dataset. When used together, combined word and graph embedding achieved the best performance in all datasets. For WSD, the enhanced versions of BERT outperformed BioWordVec. Conclusions: Word and graph embedding techniques can be used to harness terms and relations in the UMLS to measure semantic relatedness between concepts. Concept sentence embedding outperforms path-based measurements and cui2vec, and can be further enhanced by combining with graph embedding.
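The study's core measurement, relatedness as the cosine between two concepts' embedding vectors, is a one-liner, and the combined variant can be sketched as concatenation. The CUIs and toy vectors below are made-up assumptions for illustration.

```python
# Hedged sketch: relatedness as the cosine between concept vectors, with word
# and graph embeddings combined by concatenation (toy data, assumed shapes).
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

word_emb  = {"C0011849": np.array([0.2, 0.9, 0.1]),
             "C0020538": np.array([0.3, 0.8, 0.2])}
graph_emb = {"C0011849": np.array([0.7, 0.1]),
             "C0020538": np.array([0.6, 0.2])}

def combined(cui):
    return np.concatenate([word_emb[cui], graph_emb[cui]])

print(cosine(combined("C0011849"), combined("C0020538")))  # ~0.98 on this toy data
```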
5

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 4 (April 3, 2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Abstract:
Graph embedding techniques make it possible to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches closest to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed knowledge graph triples. We leverage the idea of the line graph of a graph and extend it to the context of knowledge graphs. We introduce an edge weighting mechanism for the line graph based on semantic proximity. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world knowledge graphs and compare it with related work. We also show an application of triple embeddings in the context of user-item recommendations.
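The line-graph construction at the core of Triple2Vec is easy to prototype: every triple becomes a node of the line graph, walks over that graph become "sentences", and SkipGram embeds them. In this sketch the walks are uniform, which is an assumption; the paper weights line-graph edges by semantic proximity.

```python
# Hedged sketch of triple embeddings via a line graph and SkipGram.
import random
import networkx as nx
from gensim.models import Word2Vec

KG = nx.MultiDiGraph()
KG.add_edge("alice", "bob", key="knows")
KG.add_edge("bob", "acme", key="worksFor")
KG.add_edge("alice", "acme", key="worksFor")

L = nx.line_graph(KG)   # one node per (head, tail, relation) triple

def walk(G, start, length=8):
    path = [start]
    for _ in range(length - 1):
        nbrs = list(G.neighbors(path[-1]))
        if not nbrs:
            break
        path.append(random.choice(nbrs))
    return [str(n) for n in path]

walks = [walk(L, n) for n in L.nodes() for _ in range(10)]
model = Word2Vec(walks, vector_size=32, window=3, min_count=1, sg=1, epochs=5)
triple_vec = model.wv[str(("alice", "bob", "knows"))]   # embedding of one triple
```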
6

Friesen, Tyler, and Vassily Olegovich Manturov. "Embeddings of *-Graphs into 2-Surfaces." Journal of Knot Theory and Its Ramifications 22, no. 12 (October 2013): 1341005. http://dx.doi.org/10.1142/s0218216513410058.

Abstract:
This paper considers *-graphs in which all vertices have degree 4 or 6, and studies the question of calculating the genus of orientable 2-surfaces into which such graphs may be embedded. A *-graph is a graph endowed with a formal adjacency structure on the half-edges around each vertex, and an embedding of a *-graph is an embedding under which the formal adjacency relation on half-edges corresponds to the adjacency relation induced by the embedding. *-graphs are a natural generalization of four-valent framed graphs, which are four-valent graphs with an opposite half-edge structure. In [Embeddings of four-valent framed graphs into 2-surfaces, Dokl. Akad. Nauk 424(3) (2009) 308–310], the question of whether a four-valent framed graph admits a ℤ2-homologically trivial embedding into a given surface was shown to be equivalent to a problem on matrices. We show that a similar result holds for *-graphs in which all vertices have degree 4 or 6. This gives an algorithm in quadratic time to determine whether a *-graph admits an embedding into the plane.
7

Chen, Mingyang, Wen Zhang, Zhen Yao, Yushan Zhu, Yang Gao, Jeff Z. Pan, and Huajun Chen. "Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4182–90. http://dx.doi.org/10.1609/aaai.v37i4.25535.

Abstract:
We propose an entity-agnostic representation learning method for handling the problem of inefficient parameter storage costs brought by embedding knowledge graphs. Conventional knowledge graph embedding methods map elements in a knowledge graph, including entities and relations, into continuous vector spaces by assigning them one or multiple specific embeddings (i.e., vector representations). Thus, the number of embedding parameters increases linearly with the growth of knowledge graphs. In our proposed model, Entity-Agnostic Representation Learning (EARL), we only learn the embeddings for a small set of entities and refer to them as reserved entities. To obtain the embeddings for the full set of entities, we encode their distinguishable information from their connected relations, k-nearest reserved entities, and multi-hop neighbors. We learn universal and entity-agnostic encoders for transforming distinguishable information into entity embeddings. This approach allows our proposed EARL to have a static, efficient parameter count that is lower than that of conventional knowledge graph embedding methods. Experimental results show that EARL uses fewer parameters and performs better on link prediction tasks than baselines, reflecting its parameter efficiency.
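The parameter-saving trick described here can be sketched in a few lines: only reserved entities store vectors, and every other entity is encoded on the fly. EARL learns its encoders; the mean-pooling below is a stand-in assumption.

```python
# Hedged sketch of entity-agnostic encoding with a fixed parameter budget.
import numpy as np

rng = np.random.default_rng(3)
reserved  = {f"e{i}": rng.normal(size=8) for i in range(10)}   # stored parameters
relations = {"r0": rng.normal(size=8), "r1": rng.normal(size=8)}

def encode_entity(connected_rels, nearest_reserved):
    """Build an embedding from connected relations and k-nearest reserved entities."""
    rel_part = np.mean([relations[r] for r in connected_rels], axis=0)
    res_part = np.mean([reserved[e] for e in nearest_reserved], axis=0)
    return 0.5 * (rel_part + res_part)   # parameter count stays fixed as the KG grows

vec = encode_entity(["r0", "r1"], ["e2", "e7"])
```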
8

Fang, Peng, Arijit Khan, Siqiang Luo, Fang Wang, Dan Feng, Zhenli Li, Wei Yin, and Yuchao Cao. "Distributed Graph Embedding with Information-Oriented Random Walks." Proceedings of the VLDB Endowment 16, no. 7 (March 2023): 1643–56. http://dx.doi.org/10.14778/3587136.3587140.

Abstract:
Graph embedding maps graph nodes to low-dimensional vectors, and is widely adopted in machine learning tasks. The increasing availability of billion-edge graphs underscores the importance of learning efficient and effective embeddings on large graphs, such as link prediction on Twitter with over one billion edges. Most existing graph embedding methods fall short of reaching high data scalability. In this paper, we present a general-purpose, distributed, information-centric random walk-based graph embedding framework, DistGER, which can scale to embed billion-edge graphs. DistGER incrementally computes information-centric random walks. It further leverages a multi-proximity-aware, streaming, parallel graph partitioning strategy, simultaneously achieving high local partition quality and excellent workload balancing across machines. DistGER also improves the distributed Skip-Gram learning model to generate node embeddings by optimizing the access locality, CPU throughput, and synchronization efficiency. Experiments on real-world graphs demonstrate that compared to state-of-the-art distributed graph embedding frameworks, including KnightKing, DistDGL, and Pytorch-BigGraph, DistGER exhibits 2.33×–129× acceleration, 45% reduction in cross-machine communication, and >10% effectiveness improvement in downstream tasks.
9

Nikkuni, Ryo. "The Second Skew-Symmetric Cohomology Group and Spatial Embeddings of Graphs." Journal of Knot Theory and Its Ramifications 9, no. 3 (May 2000): 387–411. http://dx.doi.org/10.1142/s0218216500000189.

Abstract:
Let L(G) be the second skew-symmetric cohomology group of the residual space of a graph G. We determine L(G) in the case where G is a 3-connected simple graph, and give the structure of L(G) in the cases where G is a complete graph or a complete bipartite graph. By using these results, we determine the Wu invariants in L(G) of the spatial embeddings of the complete graph and those of the complete bipartite graph, respectively. Since the Wu invariant of a spatial embedding is a complete invariant up to homology, an equivalence relation on spatial embeddings introduced in [12], we give a homology classification of the spatial embeddings of such graphs.
10

Duong, Chi Thang, Trung Dung Hoang, Hongzhi Yin, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. "Scalable robust graph embedding with Spark." Proceedings of the VLDB Endowment 15, no. 4 (December 2021): 914–22. http://dx.doi.org/10.14778/3503585.3503599.

Abstract:
Graph embedding aims at learning a vector-based representation of vertices that incorporates the structure of the graph. This representation then enables inference of graph properties. Existing graph embedding techniques, however, do not scale well to large graphs. While several techniques to scale graph embedding using compute clusters have been proposed, they require continuous communication between the compute nodes and cannot handle node failure. We therefore propose a framework for scalable and robust graph embedding based on the MapReduce model, which can distribute any existing embedding technique. Our method splits a graph into subgraphs to learn their embeddings in isolation and subsequently reconciles the embedding spaces derived for the subgraphs. We realize this idea through a novel distributed graph decomposition algorithm. In addition, we show how to implement our framework in Spark to enable efficient learning of effective embeddings. Experimental results illustrate that our approach scales well, while largely maintaining the embedding quality.
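The reconciliation of per-subgraph embedding spaces can be illustrated with orthogonal Procrustes on nodes shared between partitions; treating the alignment as a pure rotation is an assumption of this sketch, not necessarily the paper's exact reconciliation step.

```python
# Hedged sketch: align subgraph B's embedding space onto subgraph A's using
# shared anchor nodes and orthogonal Procrustes.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
anchors_a = rng.normal(size=(50, 16))       # anchor embeddings in A's space
R_true, _ = np.linalg.qr(rng.normal(size=(16, 16)))
anchors_b = anchors_a @ R_true              # the same anchors in B's space

R, _ = orthogonal_procrustes(anchors_b, anchors_a)
aligned_b = anchors_b @ R                   # B's vectors mapped into A's space
print(np.allclose(aligned_b, anchors_a))    # True on this toy data
```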

Dissertations on the topic "Graph embeddings"

1

Labelle, François. "Graph embeddings and approximate graph coloring." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0031/MQ64386.pdf.

2

Djuphammar, Felix. "Efficient graph embeddings with community detection." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185134.

Abstract:
Networks are useful when modeling interactions in real-world systems based on relational data. Since networks often contain thousands or millions of nodes and links, analyzing and exploring them requires powerful visualizations. Presenting the network nodes in a map-like fashion provides a large-scale overview of the data while also providing specific details. A suite of algorithms can compute an appropriate layout of all nodes for the visualization. However, these algorithms are computationally expensive when applied to large networks because they must repeatedly derive relations between every node and every other node, leading to quadratic scaling. Also, the available implementations compute the layout from the raw data instead of the network, making customization difficult. In this thesis, I introduce a modular algorithm that removes the need to consider all node pairs by approximating groups of pairwise relations. The groups are determined by clustering the network into densely connected groups of nodes with a community-detection algorithm. The implementation accepts a network as input and returns the layout coordinates, enabling modular and straightforward integration in a data analysis pipeline. The approximations improve the new algorithm's scaling to an order of 2N^1.5 compared to the original N^2. For a network with one million nodes, this scaling improvement gives a 500-fold performance boost such that a computation that previously took one week now completes in about 20 minutes.
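The quoted 500-fold speedup follows directly from the two stated scalings at N = 10^6 nodes:

```latex
\frac{N^2}{2N^{1.5}} = \frac{(10^6)^2}{2\,(10^6)^{1.5}} = \frac{10^{12}}{2 \times 10^{9}} = 500
```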
3

Palumbo, Enrico. "Knowledge Graph Embeddings for Recommender Systems." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2850588.

4

Jenkins, Peter. "Partial graph design embeddings and related problems." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18945.pdf.

5

Turner, Bethany. "Embeddings of Product Graphs Where One Factor is a Hypercube." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2455.

Abstract:
Voltage graph theory can be used to describe embeddings of product graphs if one factor is a Cayley graph. We use voltage graphs to explore embeddings of various products where one factor is a hypercube, describing some minimal and symmetrical embeddings. We then define a graph product, the weak symmetric difference, and illustrate a voltage graph construction useful for obtaining an embedding of the weak symmetric difference of an arbitrary graph with a hypercube.
6

Zhang, Zheng. "Explorations in Word Embeddings: graph-based word embedding learning and cross-lingual contextual word embedding learning." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS369/document.

Abstract:
Word embeddings are a standard component of modern natural language processing architectures. Every time there is a breakthrough in word embedding learning, the vast majority of natural language processing tasks, such as POS-tagging, named entity recognition (NER), question answering, and natural language inference, can benefit from it. This work addresses the question of how to improve the quality of monolingual word embeddings learned by prediction-based models and how to map contextual word embeddings generated by pretrained language representation models like ELMo or BERT across different languages. For monolingual word embedding learning, I take into account global, corpus-level information and generate a different noise distribution for negative sampling in word2vec. For this purpose I pre-compute word co-occurrence statistics with corpus2graph, an open-source NLP-application-oriented Python package that I developed: it efficiently generates a word co-occurrence network from a large corpus, and applies to it network algorithms such as random walks. For cross-lingual contextual word embedding mapping, I link contextual word embeddings to word sense embeddings. The improved anchor generation algorithm that I propose also expands the scope of word embedding mapping algorithms from context-independent to contextual word embeddings.
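The corpus-level noise distribution the thesis describes can be sketched by building a small co-occurrence graph and using each word's weighted degree, raised to word2vec's customary 3/4 power, as its negative-sampling weight. The degree-based weighting is an illustrative assumption, not necessarily the thesis's exact formula.

```python
# Hedged sketch: a negative-sampling noise distribution derived from a word
# co-occurrence graph rather than raw word frequency.
from collections import Counter
import numpy as np

corpus = [["graph", "embeddings", "map", "nodes"],
          ["random", "walks", "sample", "graph", "contexts"]]

cooc = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        for c in sent[max(0, i - 2):i]:          # symmetric window of size 2
            cooc[tuple(sorted((w, c)))] += 1

vocab = sorted({w for s in corpus for w in s})
degree = {w: 0.0 for w in vocab}
for (u, v), n in cooc.items():                   # weighted degree per word
    degree[u] += n
    degree[v] += n

weights = np.array([degree[w] ** 0.75 for w in vocab])
noise_dist = weights / weights.sum()             # plug into the negative sampler
```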
7

Krombholz, Martin Rudolf Arne. "Proof theory of graph minors and tree embeddings." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/20834/.

Abstract:
This thesis explores metamathematical properties of theorems appearing in the Graph Minors series. A number of these theorems have been known to have very high proof-theoretic strength, but an upper bound on many of them, including the graph minor theorem, had never been proved. We give such upper bounds, by showing that any proofs in the Graph Minors series can be carried out within a system of Pi^1_1-comprehension augmented with induction and bar-induction principles for certain classes of formulas. This establishes a narrow corridor for the possible proof-theoretic strength of many strong combinatorial principles, including the graph minor theorem, immersion theorem, theorems about patchwork containment, and various restrictions, extensions and labelled versions of these theorems. We also determine the precise proof-theoretic strength of some restrictions of the graph minor theorem, and show that they are equivalent to other restricted versions that had been considered before. Finally, we present a combinatorial theorem employing ordinal labelled trees ordered by embedding with gap-condition that may additionally have well-quasi-ordered labels on the vertices, which turns out not to be provable in the theory Pi^1_1-CA. This result suggests a potential for raising the lower bounds of the immersion theorem, and the thesis concludes by outlining this possibility and other avenues for further research.
8

Brunet, Richard. "On the homotopy of polygons in graph embeddings." Dissertation, Carleton University, Ottawa, 1993.

9

Potter, John R. "Pseudo-triangulations on closed surfaces." Worcester, Mass.: Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-021408-102227/.

10

Brooks, Eric B. "On the Embeddings of the Semi-Strong Product Graph." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3811.

Abstract:
Over the years, a lot has been written about the three more common graph products (the Cartesian product, the Direct product, and the Strong product), as all three of these are commutative products. This thesis investigates a non-commutative product of graphs H and G, which we call the Semi-Strong graph product, also referred to in the literature as the Augmented Tensor and/or the Strong Tensor. We start by discussing its basic properties and then focus on embeddings where the second factor, G, is a regular graph. We use permutation voltage graphs and their graph coverings to compute the minimum genus for several families of graphs. The results follow work started by A. T. White [12], extended by Ghidewon Abay Asmerom [1], [2], and follow the lead of Pisanski [9]. The strategy we use starts with an embedding of a graph H and then modifies H, creating a pseudograph H*. H* is a voltage graph whose covering is H×G. Given the graph product H×G, where G is a regular graph and H meets certain conditions, we use the embedding of H to study topological properties, particularly the surface on which H×G is minimally embedded.

Books on the topic "Graph embeddings"

1

Miller, Gary L., and Langley Research Center, eds. Graph embeddings and Laplacian eigenvalues. Hampton, VA: National Aeronautics and Space Administration, Langley Research Center, 1998.

2

Liu, Yanpei. Embeddability in graphs. Beijing, China: Science Press, 1995.

3

Institute for Computer Applications in Science and Engineering, ed. Graph embeddings, symmetric real matrices, and generalized inverses. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1998.

4

Alfaqeeh, Mosab, and David B. Skillicorn. Finding Communities in Social Networks Using Graph Embeddings. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60916-9.

5

Guattery, Stephen. Graph embedding techniques for bounding condition numbers of incomplete factor preconditioners. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1997.

6

Langley Research Center, ed. Graph embedding techniques for bounding condition numbers of incomplete factor preconditioners. Hampton, VA: National Aeronautics and Space Administration, Langley Research Center, 1997.

7

Deza, Michel Marie, and Monique Laurent. Geometry of cuts and metrics. New York: Springer-Verlag, 1997.

8

Sun, Timothy. Testing Convexity and Acyclicity, and New Constructions for Dense Graph Embeddings. [New York, N.Y.?]: [publisher not identified], 2019.


Book chapters on the topic "Graph embeddings"

1

Jones, Gareth A., and Jürgen Wolfart. "Graph Embeddings." In Springer Monographs in Mathematics, 41–58. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24711-3_2.

2

Kamiński, Bogumił, Paweł Prałat, and François Théberge. "Graph Embeddings." In Mining Complex Networks, 161–200. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003218869-6.

3

Di Giacomo, Emilio, Walter Didimo, Giuseppe Liotta, and Stephen K. Wismath. "Book Embeddings and Point-Set Embeddings of Series-Parallel Digraphs." In Graph Drawing, 162–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36151-0_16.

4

Chen, Jianer. "Algorithmic graph embeddings." In Lecture Notes in Computer Science, 151–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0030829.

5

Rosso, Paolo, Dingqi Yang, and Philippe Cudré-Mauroux. "Knowledge Graph Embeddings." In Encyclopedia of Big Data Technologies, 1–7. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_284-1.

6

Rosso, Paolo, Dingqi Yang, and Philippe Cudré-Mauroux. "Knowledge Graph Embeddings." In Encyclopedia of Big Data Technologies, 1073–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_284.

7

Rosso, Paolo, Dingqi Yang, and Philippe Cudré-Mauroux. "Knowledge Graph Embeddings." In Encyclopedia of Big Data Technologies, 1–8. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-63962-8_284-2.

8

Krause, Franz, Kabul Kurniawan, Elmar Kiesling, Jorge Martinez-Gil, Thomas Hoch, Mario Pichler, Bernhard Heinzl, and Bernhard Moser. "Leveraging Semantic Representations via Knowledge Graph Embeddings." In Artificial Intelligence in Manufacturing, 71–85. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46452-2_5.

Abstract:
The representation and exploitation of semantics has been gaining popularity in recent research, as exemplified by the uptake of large language models in the field of Natural Language Processing (NLP) and knowledge graphs (KGs) in the Semantic Web. Although KGs are already employed in manufacturing to integrate and standardize domain knowledge, the generation and application of corresponding KG embeddings as lean feature representations of graph elements have yet to be extensively explored in this domain. Existing KGs in manufacturing often focus on top-level domain knowledge and thus ignore domain dynamics, or they lack interconnectedness, i.e., nodes primarily represent non-contextual data values with single adjacent edges, such as sensor measurements. Consequently, context-dependent KG embedding algorithms are either restricted to non-dynamic use cases or cannot be applied at all due to the given KG characteristics. Therefore, this work provides an overview of state-of-the-art KG embedding methods and their functionalities, identifying the lack of dynamic embedding formalisms and application scenarios as the key obstacles that hinder their implementation in manufacturing. Accordingly, we introduce an approach for dynamizing existing KG embeddings based on local embedding reconstructions. Furthermore, we address the utilization of KG embeddings in the Horizon 2020 project Teaming.AI (www.teamingai-project.eu), focusing on their respective benefits.
9

Babilon, Robert, Jiří Matoušek, Jana Maxová, and Pavel Valtr. "Low-Distortion Embeddings of Trees." In Graph Drawing, 343–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45848-4_27.

10

Schaefer, Marcus, and Daniel Štefankovič. "Block Additivity of ℤ2-Embeddings." In Graph Drawing, 185–95. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03841-4_17.


Conference papers on the topic "Graph embeddings"

1

Bai, Yunsheng, Hao Ding, Yang Qiao, Agustin Marinovic, Ken Gu, Ting Chen, Yizhou Sun, and Wei Wang. "Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity." Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/275.

Abstract:
We introduce a novel approach to graph-level representation learning, which is to embed an entire graph into a vector space where the embeddings of two graphs preserve their graph-graph proximity. Our approach, UGraphEmb, is a general framework that provides a novel means of performing graph-level embedding in a completely unsupervised and inductive manner. The learned neural network can be considered as a function that receives any graph as input, either seen or unseen in the training set, and transforms it into an embedding. A novel graph-level embedding generation mechanism called Multi-Scale Node Attention (MSNA) is proposed. Experiments on five real graph datasets show that UGraphEmb achieves competitive accuracy in the tasks of graph classification, similarity ranking, and graph visualization.
2

Chen, Muhao, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. "Multilingual Knowledge Graph Embeddings for Cross-lingual Knowledge Alignment." Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/209.

Abstract:
Many recent works have demonstrated the benefits of knowledge graph embeddings in completing monolingual knowledge graphs. Inasmuch as related knowledge bases are built in several different languages, achieving cross-lingual knowledge alignment will help people in constructing a coherent knowledge base, and assist machines in dealing with different expressions of entity relationships across diverse human languages. Unfortunately, achieving this highly desirable cross-lingual alignment by human labor is very costly and error-prone. Thus, we propose MTransE, a translation-based model for multilingual knowledge graph embeddings, to provide a simple and automated solution. By encoding entities and relations of each language in a separated embedding space, MTransE provides transitions for each embedding vector to its cross-lingual counterparts in other spaces, while preserving the functionalities of monolingual embeddings. We deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants for MTransE using different loss functions. Our models can be trained on partially aligned graphs, where just a small portion of triples are aligned with their cross-lingual counterparts. The experiments on cross-lingual entity matching and triple-wise alignment verification show promising results, with some variants consistently outperforming others on different tasks. We also explore how MTransE preserves the key properties of its monolingual counterpart.
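Of the three transition techniques mentioned, the linear transformation is the easiest to sketch: learn a matrix that maps one language's entity space onto another from seed alignments, then use it to match unseen entities. The toy data and the least-squares fit below are assumptions, not the paper's training procedure.

```python
# Hedged sketch: a cross-lingual transition as a learned linear map.
import numpy as np

rng = np.random.default_rng(1)
E_en = rng.normal(size=(100, 20))         # English entity embeddings
M_true = rng.normal(size=(20, 20))
E_fr = E_en @ M_true                      # aligned French counterparts (toy)

seed = slice(0, 30)                       # a small portion of aligned pairs
M, *_ = np.linalg.lstsq(E_en[seed], E_fr[seed], rcond=None)

pred_fr = E_en[40] @ M                    # map an unseen English entity
nearest = np.argmax(E_fr @ pred_fr / np.linalg.norm(E_fr, axis=1))
print(nearest)                            # 40: the correct counterpart
```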
3

Luo, Gongxu, Jianxin Li, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, and Lifang He. "Graph Entropy Guided Node Embedding Dimension Selection for Graph Neural Networks." Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/381.

Abstract:
Graph representation learning has achieved great success in many areas, including e-commerce, chemistry, biology, etc. However, the fundamental problem of choosing the appropriate dimension of node embedding for a given graph still remains unsolved. The commonly used strategies for Node Embedding Dimension Selection (NEDS) based on grid search or empirical knowledge suffer from heavy computation and poor model performance. In this paper, we revisit NEDS from the perspective of the minimum entropy principle. Subsequently, we propose a novel Minimum Graph Entropy (MinGE) algorithm for NEDS with graph data. To be specific, MinGE considers both feature entropy and structure entropy on graphs, which are carefully designed according to the characteristics of the rich information in them. The feature entropy, which assumes the embeddings of adjacent nodes to be more similar, connects node features and link topology on graphs. The structure entropy takes the normalized degree as its basic unit to further measure the higher-order structure of graphs. Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph. Finally, comprehensive experiments with popular Graph Neural Networks (GNNs) on benchmark datasets demonstrate the effectiveness and generalizability of our proposed MinGE.
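A heavily hedged sketch of the flavor of this idea: compute a Shannon entropy over normalized degrees and map it to a dimension. The exp(H) rule below is an illustrative assumption only; MinGE's actual objective combines feature entropy and structure entropy.

```python
# Hedged sketch in the spirit of entropy-guided dimension selection.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
deg = np.array([d for _, d in G.degree()], dtype=float)
p = deg / deg.sum()                      # normalized degree as the basic unit
H = float(-(p * np.log(p)).sum())        # entropy of the degree distribution
dim = int(np.ceil(np.exp(H)))            # assumed mapping from entropy to dimension
print(H, dim)
```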
4

Sun, Zequn, Wei Hu, Qingheng Zhang, and Yuzhong Qu. "Bootstrapping Entity Alignment with Knowledge Graph Embedding." Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/611.

Abstract:
Embedding-based entity alignment represents different knowledge graphs (KGs) as low-dimensional embeddings and finds entity alignment by measuring the similarities between entity embeddings. Existing approaches have achieved promising results; however, they are still challenged by the lack of enough prior alignment to serve as labeled training data. In this paper, we propose a bootstrapping approach to embedding-based entity alignment. It iteratively labels likely entity alignment as training data for learning alignment-oriented KG embeddings. Furthermore, it employs an alignment editing method to reduce error accumulation during iterations. Our experiments on real-world datasets showed that the proposed approach significantly outperformed the state-of-the-art embedding-based ones for entity alignment. The proposed alignment-oriented KG embedding, bootstrapping process, and alignment editing method all contributed to the performance improvement.
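The bootstrapping loop itself can be sketched on toy data: map KG1's embeddings into KG2's space, label high-confidence matches as new training pairs, and refit. The linear map, cosine confidence, and 0.95 threshold are assumptions; the paper learns alignment-oriented embeddings rather than a post-hoc map.

```python
# Hedged sketch of iterative self-labeling for entity alignment.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 8))                  # KG1 entity embeddings
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
B = A @ Q                                     # KG2 embeddings (true alignment: i -> i)

pairs = [(i, i) for i in range(10)]           # scarce prior alignment (seed pairs)
for _ in range(4):                            # bootstrapping iterations
    ia, ib = map(list, zip(*pairs))
    M, *_ = np.linalg.lstsq(A[ia], B[ib], rcond=None)
    mapped = A @ M
    sims = mapped @ B.T / (np.linalg.norm(mapped, axis=1, keepdims=True)
                           * np.linalg.norm(B, axis=1))
    best, conf = sims.argmax(axis=1), sims.max(axis=1)
    # label likely alignments as extra training data; real "alignment editing"
    # would also revoke pairs whose confidence later drops
    pairs = sorted(set(pairs) | {(i, int(best[i])) for i in np.flatnonzero(conf > 0.95)})
```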
5

Peng, Jingshu, Zhao Chen, Yingxia Shao, Yanyan Shen, Lei Chen, and Jiannong Cao. "Sancus: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks (Extended Abstract)." Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/724.

Abstract:
Graph neural networks (GNNs) have emerged due to their success at modeling graph data. Yet, it is challenging for GNNs to efficiently scale to large graphs. Thus, distributed GNNs come into play. To avoid communication caused by expensive data movement between workers, we propose SANCUS, a staleness-aware communication-avoiding decentralized GNN system. By introducing a set of novel bounded embedding staleness metrics and adaptively skipping broadcasts, SANCUS abstracts decentralized GNN processing as sequential matrix multiplication and uses historical embeddings via cache. Theoretically, we show bounded approximation errors of embeddings and gradients with a convergence guarantee. Empirically, we evaluate SANCUS with common GNN models via different system setups on large-scale benchmark datasets. Compared to state-of-the-art works, SANCUS can avoid up to 74% communication with at least 1.86× faster throughput on average without accuracy loss.
6

Wan, Hai, Yonghao Luo, Bo Peng, and Wei-Shi Zheng. "Representation Learning for Scene Graph Completion via Jointly Structural and Visual Embedding." Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/132.

Abstract:
This paper focuses on scene graph completion, which aims at predicting new relations between two entities utilizing existing scene graphs and images. By comparing with the well-known knowledge graph, we first identify that each scene graph is associated with an image and each entity of a visual triple in a scene graph is composed of its entity type with attributes and grounded with a bounding box in its corresponding image. We then propose an end-to-end model named Representation Learning via Jointly Structural and Visual Embedding (RLSV) to take advantage of structural and visual information in scene graphs. In the RLSV model, we provide a fully-convolutional module to extract the visual embeddings of a visual triple and apply hierarchical projection to combine the structural and visual embeddings of a visual triple. In experiments, we evaluate our model on two scene graph completion tasks: link prediction and visual triple classification, and provide further analysis through case studies. Experimental results demonstrate that our model outperforms all baselines in both tasks, which justifies the significance of combining structural and visual information for scene graph completion.
7

Gupta, Rajeev, Madhusudhanan Krishnamoorthy, and Vipindeep Vangala. "Better Graph Embeddings for Enterprise Graphs." In CODS-COMAD 2024: 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD). New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3632410.3632412.

8

Modak, Sudipta, Aakarsh Malhotra, Sarthak Malik, Anil Surisetty, and Esam Abdel-Raheem. "CPa-WAC: Constellation Partitioning-based Scalable Weighted Aggregation Composition for Knowledge Graph Embedding." Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/388.

Abstract:
Scalability and training time are crucial for any graph neural network model processing a knowledge graph (KG). While partitioning knowledge graphs helps reduce the training time, the prediction accuracy reduces significantly compared to training the model on the whole graph. In this paper, we propose CPa-WAC: a lightweight architecture that incorporates graph convolutional networks and modularity maximization-based constellation partitioning to harness the power of local graph topology. The proposed CPa-WAC method reduces the training time and memory cost of knowledge graph embedding, making the learning model scalable. The results from our experiments on standard databases, such as Wordnet and Freebase, show that by achieving meaningful partitioning, any knowledge graph can be broken down into subgraphs and processed separately to learn embeddings. Furthermore, these learned embeddings can be used for knowledge graph completion, retaining similar performance compared to training a GCN on the whole KG, while speeding up the training process by up to five times. Additionally, the proposed CPa-WAC method outperforms several other state-of-the-art KG embedding methods in terms of prediction accuracy.
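The partitioning step can be illustrated with off-the-shelf modularity maximization: split the graph into dense communities and embed each subgraph separately. Louvain is a stand-in for the paper's constellation partitioner, and the per-subgraph trainer is left abstract.

```python
# Hedged sketch: modularity-based partitioning before per-subgraph embedding.
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.les_miserables_graph()
parts = louvain_communities(G, seed=0)         # modularity-maximizing split
subgraphs = [G.subgraph(c).copy() for c in parts]
# each subgraph can now be embedded independently and the results recombined
print(len(parts), sorted(len(c) for c in parts))
```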
9

Veira, Neil, Brian Keng, Kanchana Padmanabhan, and Andreas Veneris. "Unsupervised Embedding Enhancements of Knowledge Graphs using Textual Associations." Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/725.

Abstract:
Knowledge graph embeddings are instrumental for representing and learning from multi-relational data, with recent embedding models showing high effectiveness for inferring new facts from existing databases. However, such precisely structured data is usually limited in quantity and in scope. Therefore, to fully optimize the embeddings it is important to also consider more widely available sources of information such as text. This paper describes an unsupervised approach to incorporate textual information by augmenting entity embeddings with embeddings of associated words. The approach does not modify the optimization objective for the knowledge graph embedding, which allows it to be integrated with existing embedding models. Two distinct forms of textual data are considered, with different embedding enhancements proposed for each case. In the first case, each entity has an associated text document that describes it. In the second case, a text document is not available, and instead entities occur as words or phrases in an unstructured corpus of text fragments. Experiments show that both methods can offer improvement on the link prediction task when applied to many different knowledge graph embedding models.
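The augmentation itself is simple to sketch: extend each entity vector with an aggregate of its associated word vectors, leaving the KG training objective untouched. The mean aggregator and all toy vectors below are assumptions.

```python
# Hedged sketch: augmenting entity embeddings with associated word embeddings.
import numpy as np

word_vecs = {"diabetes": np.array([0.9, 0.1]),    # toy word embeddings
             "insulin":  np.array([0.8, 0.3]),
             "hormone":  np.array([0.2, 0.7])}
entity_vecs = {"Q1": np.array([0.5, 0.5, 0.1]),   # toy KG embeddings
               "Q2": np.array([0.1, 0.6, 0.8])}
entity_text = {"Q1": ["diabetes", "insulin"],     # words associated with each entity
               "Q2": ["insulin", "hormone"]}

def augmented(e):
    words = np.mean([word_vecs[w] for w in entity_text[e]], axis=0)
    return np.concatenate([entity_vecs[e], words])

print(augmented("Q1"))   # 5-d vector: KG part + text part
```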
10

Tu, Wenxuan, Sihang Zhou, Xinwang Liu, Yue Liu, Zhiping Cai, En Zhu, Changwang Zhang, and Jieren Cheng. "Initializing Then Refining: A Simple Graph Attribute Imputation Network." Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/485.

Abstract:
Representation learning on attribute-missing graphs, whose connection information is complete while the attribute information of some nodes is missing, is an important yet challenging task. To impute the missing attributes, existing methods isolate the learning processes of attribute and structure information embeddings, and force both resultant representations to align with a common, indiscriminative normal distribution, leading to inaccurate imputation. To tackle these issues, we propose a novel graph-oriented imputation framework called initializing then refining (ITR), where we first employ the structure information for initial imputation, and then leverage observed attribute and structure information to adaptively refine the imputed latent variables. Specifically, we first adopt the structure embeddings of attribute-missing samples as the embedding initialization, and then refine these initial values by aggregating the reliable and informative embeddings of attribute-observed samples according to the affinity structure. In particular, in our refining process, the affinity structure is adaptively updated through iterations by calculating the sample-wise correlations upon the recomposed embeddings. Extensive experiments on four benchmark datasets verify the superiority of ITR against state-of-the-art methods.
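The initialize-then-refine idea can be sketched in a few lines: start a missing node's attribute vector from structure information, then iteratively refine it by averaging observed neighbors. The mean aggregator and the fixed number of refinement steps are assumptions; ITR additionally updates an affinity structure during refinement.

```python
# Hedged sketch of initialize-then-refine attribute imputation.
import networkx as nx
import numpy as np

G = nx.cycle_graph(6)
X = {0: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0]),
     3: np.array([0.5, 0.5]), 5: np.array([1.0, 1.0])}   # nodes 1 and 4 missing
missing = [n for n in G if n not in X]

# initialization from structure: average observed attributes in the 2-hop ball
est = {n: np.mean([X[m]
                   for m in nx.single_source_shortest_path_length(G, n, cutoff=2)
                   if m in X], axis=0)
       for n in missing}

for _ in range(10):   # refinement: pull estimates toward observed neighbors
    est = {n: np.mean([X.get(m, est.get(m)) for m in G.neighbors(n)], axis=0)
           for n in missing}
```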