Journal articles on the topic 'Node representations'

To see the other types of publications on this topic, follow the link: Node representations.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Node representations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Zeng, Liang, Lanqing Li, Ziqi Gao, Peilin Zhao, and Jian Li. "ImGCL: Revisiting Graph Contrastive Learning on Imbalanced Node Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11138–46. http://dx.doi.org/10.1609/aaai.v37i9.26319.

Abstract:
Graph contrastive learning (GCL) has attracted a surge of attention due to its superior performance for learning node/graph representations without labels. However, in practice, the underlying class distribution of unlabeled nodes for the given graph is usually imbalanced. This highly imbalanced class distribution inevitably deteriorates the quality of learned node representations in GCL. Indeed, we empirically find that most state-of-the-art GCL methods cannot obtain discriminative representations and exhibit poor performance on imbalanced node classification. Motivated by this observation, we propose a principled GCL framework on Imbalanced node classification (ImGCL), which automatically and adaptively balances the representations learned from GCL without labels. Specifically, we first introduce the online clustering based progressively balanced sampling (PBS) method with theoretical rationale, which balances the training sets based on pseudo-labels obtained from learned representations in GCL. We then develop the node centrality based PBS method to better preserve the intrinsic structure of graphs, by upweighting the important nodes of the given graph. Extensive experiments on multiple imbalanced graph datasets and imbalanced settings demonstrate the effectiveness of our proposed framework, which significantly improves the performance of the recent state-of-the-art GCL methods. Further experimental ablations and analyses show that the ImGCL framework consistently improves the representation quality of nodes in under-represented (tail) classes.
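To make the sampling idea above concrete, here is a minimal Python sketch of pseudo-label-based progressively balanced sampling, assuming embeddings from any GCL encoder and precomputed centrality scores (e.g., PageRank); the interpolation schedule, weighting, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def progressively_balanced_sample(embeddings, centrality, n_clusters,
                                  epoch, total_epochs, n_samples, seed=0):
    """Illustrative pseudo-label-based balanced sampling (not the ImGCL code).

    embeddings: (N, d) node representations from a GCL encoder.
    centrality: (N,) importance scores (e.g., PageRank) used to upweight key nodes.
    """
    rng = np.random.default_rng(seed)
    # Pseudo-labels from clustering the current representations.
    pseudo = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(embeddings)
    counts = np.bincount(pseudo, minlength=n_clusters).astype(float)

    # Progressively move from the empirical class distribution (early epochs,
    # when pseudo-labels are noisy) towards a fully balanced one (late epochs).
    alpha = epoch / max(total_epochs, 1)
    class_prob = (1 - alpha) * (counts / counts.sum()) + alpha * (1.0 / n_clusters)

    # Per-node probability: class mass spread over its members, scaled by centrality.
    node_prob = class_prob[pseudo] / counts[pseudo]
    node_prob = node_prob * centrality
    node_prob = node_prob / node_prob.sum()
    return rng.choice(len(embeddings), size=n_samples, replace=False, p=node_prob)
```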
2

Chen, Junhui, Feihu Huang, and Jian Peng. "MSGCN: Multi-Subgraph Based Heterogeneous Graph Convolution Network Embedding." Applied Sciences 11, no. 21 (October 21, 2021): 9832. http://dx.doi.org/10.3390/app11219832.

Abstract:
Heterogeneous graph embedding has become a hot topic in network embedding in recent years and is widely used in many practical scenarios. However, most existing heterogeneous graph embedding methods cannot make full use of all the auxiliary information. We therefore propose a new method called Multi-Subgraph based Graph Convolution Network (MSGCN), which uses topology information, semantic information, and node feature information to learn node embedding vectors. In MSGCN, the graph is first decomposed into multiple subgraphs according to edge type. A convolution operation is then applied to each subgraph to obtain the node representations of that subgraph. Finally, the node representations are obtained by aggregating the representation vectors of nodes across the subgraphs. Furthermore, we discuss the application of MSGCN to transductive and inductive learning tasks, respectively, and propose a node sampling method for inductive learning tasks that obtains representations of new nodes. This sampling method uses an attention mechanism to find important nodes and then assigns different weights to different nodes during aggregation. We conducted experiments on three datasets, and the results indicate that MSGCN outperforms state-of-the-art methods on multi-class node classification tasks.
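As a rough illustration of the decompose-convolve-aggregate pipeline described above, the following PyTorch sketch applies a separately parameterized graph convolution to each edge-type subgraph and averages the results; the symmetric normalization and mean aggregation are assumed choices, not the MSGCN code.

```python
import torch
import torch.nn as nn

class MultiSubgraphConv(nn.Module):
    """One multi-subgraph convolution layer (assumed form, not the MSGCN code).

    adjs: list of dense (N, N) adjacency matrices, one per edge type.
    """
    def __init__(self, in_dim, out_dim, n_edge_types):
        super().__init__()
        self.linears = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False)
                                      for _ in range(n_edge_types)])

    def forward(self, x, adjs):
        outs = []
        for adj, lin in zip(adjs, self.linears):
            # Per-subgraph symmetric normalization: D^-1/2 (A + I) D^-1/2.
            a = adj + torch.eye(adj.size(0), device=adj.device)
            d_inv_sqrt = a.sum(dim=1).pow(-0.5)
            a_hat = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
            outs.append(torch.relu(a_hat @ lin(x)))
        # Aggregate the per-subgraph node representations (mean is one simple choice).
        return torch.stack(outs, dim=0).mean(dim=0)
```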
3

Liu, Dong, Yan Ru, Qinpeng Li, Shibin Wang, and Jianwei Niu. "Semisupervised Community Preserving Network Embedding with Pairwise Constraints." Complexity 2020 (November 10, 2020): 1–14. http://dx.doi.org/10.1155/2020/7953758.

Abstract:
Network embedding aims to learn the low-dimensional representations of nodes in networks. It preserves the structure and internal attributes of the networks while representing nodes as low-dimensional dense real-valued vectors. These vectors are used as inputs of machine learning algorithms for network analysis tasks such as node clustering, classification, link prediction, and network visualization. Network embedding algorithms that consider the community structure impose a higher level of constraint on the similarity of nodes, making the learned node embeddings more discriminative. However, existing network representation learning algorithms are mostly unsupervised models; the pairwise constraint information, which represents community membership, is not effectively utilized to obtain node embeddings that are more consistent with prior knowledge. This paper proposes a semisupervised modularized nonnegative matrix factorization model, SMNMF, that preserves the community structure for network embedding; the pairwise constraint (must-link and cannot-link) information is effectively fused with the adjacency matrix and the node similarity matrix of the network so that the node representations learned by the model are more interpretable. Experimental results on eight real network datasets show that, compared with representative network embedding methods, the node representations learned after incorporating the pairwise constraints achieve higher accuracy in the node clustering task, and the results of the link prediction and network visualization tasks indicate that the semisupervised SMNMF model is more discriminative than unsupervised ones.
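A minimal sketch of how must-link and cannot-link constraints could be fused with the adjacency and similarity matrices before factorization, assuming a simple additive scheme with weights alpha and beta; these choices are illustrative and not taken from SMNMF.

```python
import numpy as np

def fuse_pairwise_constraints(adj, sim, must_link, cannot_link, alpha=1.0, beta=1.0):
    """Fuse semi-supervision into a similarity matrix before factorization.

    must_link / cannot_link: lists of (i, j) node index pairs.
    The additive scheme and the weights alpha/beta are assumptions, not SMNMF itself.
    """
    fused = adj.astype(float) + sim.astype(float)
    for i, j in must_link:
        fused[i, j] += alpha
        fused[j, i] += alpha
    for i, j in cannot_link:
        fused[i, j] -= beta
        fused[j, i] -= beta
    # Keep the matrix nonnegative so it remains a valid NMF input.
    return np.clip(fused, 0.0, None)
```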
4

Liu, Li, Weixin Zeng, Zhen Tan, Weidong Xiao, and Xiang Zhao. "Node Importance Estimation with Multiview Contrastive Representation Learning." International Journal of Intelligent Systems 2023 (September 6, 2023): 1–13. http://dx.doi.org/10.1155/2023/5917750.

Abstract:
Node importance estimation is a fundamental task in graph analysis, which can be applied to various downstream applications such as recommendation and resource allocation. However, existing studies merely work under a single view, which neglects the rich information hidden in other aspects of the graph. Hence, in this work, we propose a Multiview Contrastive Representation Learning (MCRL) model to obtain representations of nodes from multiple perspectives and then infer the node importance. Specifically, we are the first to apply the contrastive learning technique to the node importance analysis task, which enhances the expressiveness of graph representations and lays the foundation for importance estimation. Moreover, based on the improved representations, we generate the entity importance score by attentively aggregating the scores from two different views, i.e., node view and node-edge interaction view. We conduct extensive experiments on real-world datasets, and the experimental results show that MCRL outperforms existing methods on all evaluation metrics.
5

Han, Liangzhe, Ruixing Zhang, Leilei Sun, Bowen Du, Yanjie Fu, and Tongyu Zhu. "Generic and Dynamic Graph Representation Learning for Crowd Flow Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4293–301. http://dx.doi.org/10.1609/aaai.v37i4.25548.

Abstract:
Many deep spatio-temporal learning methods have been proposed for crowd flow modeling in recent years. However, most of them focus on designing a spatial and temporal convolution mechanism to aggregate information from nearby nodes and historical observations for a pre-defined prediction task. Different from the existing research, this paper aims to provide a generic and dynamic representation learning method for crowd flow modeling. The main idea of our method is to maintain a continuous-time representation for each node, and update the representations of all nodes continuously according to the streaming observed data. Along this line, a particular encoder-decoder architecture is proposed, where the encoder converts newly occurring transactions into a timestamped message, and the representations of related nodes are then updated according to the generated message. The role of the decoder is to guide the representation learning process by reconstructing the observed transactions based on the most recent node representations. Moreover, a number of virtual nodes are added to discover macro-level spatial patterns and to share representations among spatially interacting stations. Experiments have been conducted on two real-world datasets for four popular prediction tasks in crowd flow modeling. The results demonstrate that our method achieves better prediction performance than baseline methods on all tasks.
6

Vaghani, Dev. "An Approch for Representation of Node Using Graph Transformer Networks." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 27–37. http://dx.doi.org/10.22214/ijraset.2023.48485.

Abstract:
In representation learning on graphs, graph neural networks (GNNs) have been widely employed and have attained cutting-edge performance in tasks like node categorization and link prediction. However, the majority of GNNs now in use are made to learn node representations on homogeneous and fixed graphs. The limits are particularly significant when learning representations on a graph that has been incorrectly described, or on a heterogeneous graph made up of different kinds of nodes and edges. This study proposes Graph Transformer Networks (GTNs), which may generate new network structures by finding valuable connections between disconnected nodes in the original graph and learning efficient node representations on the new graphs end-to-end. A basic layer of GTNs called the Graph Transformer layer learns a soft selection of edge types and composite relations to produce meaningful multi-hop connections known as meta-paths. This research demonstrates that GTNs can learn new graph structures from data and tasks without any prior domain expertise and that they can then use convolution on the new graphs to provide effective node representations. GTNs outperformed state-of-the-art approaches that need predefined meta-paths from domain knowledge in all three benchmark node classification tasks without the use of domain-specific graph pre-processing.
7

Li, Haoyang, and Lei Chen. "EARLY: Efficient and Reliable Graph Neural Network for Dynamic Graphs." Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–28. http://dx.doi.org/10.1145/3589308.

Abstract:
Graph neural networks have been widely used to learn node representations for many real-world static graphs. In general, they learn node representations by recursively aggregating information from neighbors. However, graphs in many applications are dynamic, evolving with continuous graph events, such as node feature and graph structure updates. These events require the node representations to be updated accordingly. Currently, due to the real-time requirement, how to efficiently and reliably update node representations under continuous graph events is still an open problem. Recent studies propose two solutions to partially address this problem, but their performance is still limited. First, local-based GNNs only update the nodes directly involved in events, suffering from the quality-deficit issue, since they neglect the other nodes affected by these events. Second, neighbor-sampling GNNs propose to sample neighbors to accelerate neighbor aggregation computations, encountering the neighbor-redundant issue. These sampled neighbors may be similar and cannot reflect the distribution of all neighbors, so node representations aggregated over these redundant neighbors may differ from those aggregated over all neighbors. In this paper, we propose an efficient and reliable graph neural network, namely EARLY, to update node representations for dynamic graphs. We first identify the top-k influential nodes that are most affected by graph events. Then, to sample neighbors diversely, we propose a diversity-aware layer-wise sampling technique. We theoretically demonstrate that this technique can decrease the sampling expectation error and learn more reliable node representations. Therefore, the top-k node selection and diversity-aware sampling enable EARLY to efficiently update node representations in a reliable way. Extensive experiments on five real-world graphs demonstrate the effectiveness and efficiency of our proposed EARLY.
8

Liu, Yixin, Yizhen Zheng, Daokun Zhang, Vincent CS Lee, and Shirui Pan. "Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4516–24. http://dx.doi.org/10.1609/aaai.v37i4.25573.

Abstract:
Unsupervised graph representation learning (UGRL) has drawn increasing research attention and achieved promising results in several graph analytic tasks. Relying on the homophily assumption, existing UGRL methods tend to smooth the learned node representations along all edges, ignoring the existence of heterophilic edges that connect nodes with distinct attributes. As a result, current methods struggle to generalize to heterophilic graphs where dissimilar nodes are widely connected, and are also vulnerable to adversarial attacks. To address this issue, we propose a novel unsupervised Graph Representation learning method with Edge hEterophily discriminaTing (GREET), which learns representations by discriminating and leveraging homophilic edges and heterophilic edges. To distinguish the two types of edges, we build an edge discriminator that infers edge homophily/heterophily from feature and structure information. We train the edge discriminator in an unsupervised way by minimizing a crafted pivot-anchored ranking loss, with randomly sampled node pairs acting as pivots. Node representations are learned by contrasting the dual-channel encodings obtained from the discriminated homophilic and heterophilic edges. With an effective interplay scheme, edge discrimination and representation learning can mutually boost each other during the training phase. We conducted extensive experiments on 14 benchmark datasets and multiple learning scenarios to demonstrate the superiority of GREET.
9

Duan, Wei, Junyu Xuan, Maoying Qiao, and Jie Lu. "Learning from the Dark: Boosting Graph Convolutional Neural Networks with Diverse Negative Samples." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6550–58. http://dx.doi.org/10.1609/aaai.v36i6.20608.

Abstract:
Graph Convolutional Neural Networks (GCNs) have been generally accepted to be an effective tool for node representation learning. An interesting way to understand GCNs is to think of them as a message passing mechanism where each node updates its representation by accepting information from its neighbours (also known as positive samples). However, beyond these neighbouring nodes, graphs have a large, dark, all-but-forgotten world in which we find the non-neighbouring nodes (negative samples). In this paper, we show that this great dark world holds a substantial amount of information that might be useful for representation learning. More specifically, it can provide negative information about the node representations. Our overall idea is to select appropriate negative samples for each node and incorporate the negative information contained in these samples into the representation updates. Moreover, we show that the process of selecting the negative samples is not trivial. We therefore begin by describing the criteria for a good negative sample, followed by a determinantal point process algorithm for efficiently obtaining such samples. A GCN, boosted by diverse negative samples, then jointly considers the positive and negative information when passing messages. Experimental evaluations show that this idea not only improves the overall performance of standard representation learning but also significantly alleviates over-smoothing problems.
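As a simplified stand-in for the determinantal point process sampler described above, the sketch below selects diverse non-neighbouring negatives by greedy farthest-point selection over node features; the selection rule and the function name are assumptions for illustration only.

```python
import numpy as np

def greedy_diverse_negatives(node, features, adj, k=5):
    """Pick k diverse non-neighbours of `node` as negative samples.

    Greedy farthest-point selection over node features, used here as a simple
    stand-in for the determinantal point process sampler described above.
    """
    n = adj.shape[0]
    candidates = np.where((adj[node] == 0) & (np.arange(n) != node))[0]
    # Start from the candidate farthest from the target node.
    first = candidates[np.argmax(np.linalg.norm(features[candidates] - features[node], axis=1))]
    chosen = [first]
    while len(chosen) < min(k, len(candidates)):
        # Distance of every candidate to its nearest already-chosen negative.
        diff = features[candidates, None, :] - features[np.array(chosen)][None, :, :]
        d = np.linalg.norm(diff, axis=2).min(axis=1)
        d[np.isin(candidates, chosen)] = -np.inf  # do not pick a node twice
        chosen.append(candidates[np.argmax(d)])
    return np.array(chosen)
```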
10

Wang, Renbiao, Fengtai Li, Shuwei Liu, Weihao Li, Shizhan Chen, Bin Feng, and Di Jin. "Adaptive Multi-Channel Deep Graph Neural Networks." Symmetry 16, no. 4 (April 1, 2024): 406. http://dx.doi.org/10.3390/sym16040406.

Abstract:
Graph neural networks (GNNs) have shown significant success in graph representation learning. However, the performance of existing GNNs degrades seriously when their layers deepen due to the over-smoothing issue. The node embeddings tend to converge to similar values as GNNs repeatedly aggregate the representations in the receptive field. The main reason for over-smoothing is that the receptive field of each node tends to become similar as the layers increase, which leads to different nodes aggregating similar information. To solve this problem, we propose an adaptive multi-channel deep graph neural network (AMD-GNN) to adaptively and symmetrically aggregate information from the deep receptive field. The proposed model ensures that the receptive field of each node in the deep layer is different so that the node representations are distinguishable. The experimental results demonstrate that AMD-GNN achieves state-of-the-art performance on node classification tasks with deep models.
11

Yoon, Jisung, Kai-Cheng Yang, Woo-Sung Jung, and Yong-Yeol Ahn. "Persona2vec: a flexible multi-role representations learning framework for graphs." PeerJ Computer Science 7 (March 30, 2021): e439. http://dx.doi.org/10.7717/peerj-cs.439.

Abstract:
Graph embedding techniques, which learn low-dimensional representations of a graph, are achieving state-of-the-art performance in many graph mining tasks. Most existing embedding algorithms assign a single vector to each node, implicitly assuming that a single representation is enough to capture all characteristics of the node. However, across many domains, it is common to observe pervasively overlapping community structure, where most nodes belong to multiple communities, playing different roles depending on the contexts. Here, we propose persona2vec, a graph embedding framework that efficiently learns multiple representations of nodes based on their structural contexts. Using link prediction-based evaluation, we show that our framework is significantly faster than the existing state-of-the-art model while achieving better performance.
12

Shen, Xiao, Quanyu Dai, Fu-lai Chung, Wei Lu, and Kup-Sze Choi. "Adversarial Deep Network Embedding for Cross-Network Node Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2991–99. http://dx.doi.org/10.1609/aaai.v34i03.5692.

Abstract:
In this paper, the task of cross-network node classification, which leverages the abundant labeled nodes from a source network to help classify unlabeled nodes in a target network, is studied. The existing domain adaptation algorithms generally fail to model the network structural information, and the current network embedding models mainly focus on single-network applications. Thus, both of them cannot be directly applied to solve the cross-network node classification problem. This motivates us to propose an adversarial cross-network deep network embedding (ACDNE) model to integrate adversarial domain adaptation with deep network embedding so as to learn network-invariant node representations that can also well preserve the network structural information. In ACDNE, the deep network embedding module utilizes two feature extractors to jointly preserve attributed affinity and topological proximities between nodes. In addition, a node classifier is incorporated to make node representations label-discriminative. Moreover, an adversarial domain adaptation technique is employed to make node representations network-invariant. Extensive experimental results demonstrate that the proposed ACDNE model achieves the state-of-the-art performance in cross-network node classification.
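Adversarial domain adaptation of this kind is commonly implemented with a gradient reversal layer placed between the embedding module and the domain classifier; a generic PyTorch version is sketched below (a standard construction, not the ACDNE source code).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, negated (scaled)
    gradients in the backward pass. A generic building block for adversarial
    domain adaptation, not the ACDNE implementation."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing from the domain classifier into the encoder,
        # pushing the encoder towards network-invariant node representations.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Typical use (names hypothetical):
# domain_logits = domain_classifier(grad_reverse(node_embeddings, lam))
```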
13

Wang, Yu, Liang Hu, Yang Wu, and Wanfu Gao. "Graph Multihead Attention Pooling with Self-Supervised Learning." Entropy 24, no. 12 (November 29, 2022): 1745. http://dx.doi.org/10.3390/e24121745.

Abstract:
Graph neural networks (GNNs), which work with graph-structured data, have attracted considerable attention and achieved promising performance on graph-related tasks. While the majority of existing GNN methods focus on the convolutional operation for encoding the node representations, the graph pooling operation, which maps the set of nodes into a coarsened graph, is crucial for graph-level tasks. We argue that a well-defined graph pooling operation should avoid the information loss of the local node features and global graph structure. In this paper, we propose a hierarchical graph pooling method based on the multihead attention mechanism, namely GMAPS, which compresses both node features and graph structure into the coarsened graph. Specifically, a multihead attention mechanism is adopted to arrange nodes into a coarsened graph based on their features and structural dependencies between nodes. In addition, to enhance the expressiveness of the cluster representations, a self-supervised mechanism is introduced to maximize the mutual information between the cluster representations and the global representation of the hierarchical graph. Our experimental results show that the proposed GMAPS obtains significant and consistent performance improvements compared with state-of-the-art baselines on six benchmarks from the biological and social domains of graph classification and reconstruction tasks.
14

Park, Jinyoung, Sungdong Yoo, Jihwan Park, and Hyunwoo J. Kim. "Deformable Graph Convolutional Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7949–56. http://dx.doi.org/10.1609/aaai.v36i7.20765.

Abstract:
Graph neural networks (GNNs) have significantly improved the representation power for graph-structured data. Despite the recent success of GNNs, the graph convolution in most GNNs has two limitations. Since the graph convolution is performed in a small local neighborhood on the input graph, it is inherently incapable of capturing long-range dependencies between distant nodes. In addition, when a node has neighbors that belong to different classes, i.e., heterophily, the aggregated messages from them often negatively affect representation learning. To address the two common problems of graph convolution, in this paper, we propose Deformable Graph Convolutional Networks (Deformable GCNs) that adaptively perform convolution in multiple latent spaces and capture short/long-range dependencies between nodes. Separated from node representations (features), our framework simultaneously learns the node positional embeddings (coordinates) to determine the relations between nodes in an end-to-end fashion. Depending on node position, the convolution kernels are deformed by deformation vectors and apply different transformations to each node's neighbors. Our extensive experiments demonstrate that Deformable GCNs flexibly handle heterophily and achieve the best performance in node classification tasks on six heterophilic graph datasets. Our code is publicly available at https://github.com/mlvlab/DeformableGCN.
15

Lin, Mugang, Kunhui Wen, Xuanying Zhu, Huihuang Zhao, and Xianfang Sun. "Graph Autoencoder with Preserving Node Attribute Similarity." Entropy 25, no. 4 (March 26, 2023): 567. http://dx.doi.org/10.3390/e25040567.

Abstract:
The graph autoencoder (GAE) is a powerful tool for unsupervised graph representation learning. However, most existing GAE-based methods typically focus on preserving the graph topological structure by reconstructing the adjacency matrix while ignoring the preservation of the attribute information of nodes. Thus, the node attributes cannot be fully learned and the ability of the GAE to learn higher-quality representations is weakened. To address this issue, this paper proposes a novel GAE model that preserves node attribute similarity. The structural graph and the attribute neighbor graph, which is constructed based on the attribute similarity between nodes, are integrated as the encoder input using an effective fusion strategy. In the encoder, the attributes of the nodes can be aggregated both in their structural neighborhood and by their attribute similarity in their attribute neighborhood. This allows the structural and node attribute information to be fused in the node representation through the shared encoder. In the decoder module, the adjacency matrix and the attribute similarity matrix of the nodes are reconstructed using dual decoders. The cross-entropy loss of the reconstructed adjacency matrix and the mean-squared error loss of the reconstructed node attribute similarity matrix are used to update the model parameters and ensure that the node representation preserves the original structural and node attribute similarity information. Extensive experiments on three citation networks show that the proposed method outperforms state-of-the-art algorithms in link prediction and node clustering tasks.
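A minimal PyTorch sketch of the dual-reconstruction objective described above, assuming inner-product decoders on top of two lightweight projection heads and a weighting factor lam; both are illustrative choices rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualDecoderLoss(nn.Module):
    """Dual-reconstruction objective sketched from the description above (assumed form).

    z: (N, d) node embeddings from the shared encoder.
    adj: (N, N) float tensor with 0/1 entries; attr_sim: (N, N) attribute-similarity matrix.
    """
    def __init__(self, dim, lam=1.0):
        super().__init__()
        self.struct_head = nn.Linear(dim, dim)
        self.attr_head = nn.Linear(dim, dim)
        self.lam = lam

    def forward(self, z, adj, attr_sim):
        zs, za = self.struct_head(z), self.attr_head(z)
        # Cross-entropy on the reconstructed adjacency matrix.
        loss_structure = F.binary_cross_entropy_with_logits(zs @ zs.t(), adj)
        # Mean-squared error on the reconstructed attribute-similarity matrix.
        loss_attribute = F.mse_loss(torch.sigmoid(za @ za.t()), attr_sim)
        return loss_structure + self.lam * loss_attribute
```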
16

Jha, Kishlay, Guangxu Xun, and Aidong Zhang. "Continual representation learning for evolving biomedical bipartite networks." Bioinformatics 37, no. 15 (February 3, 2021): 2190–97. http://dx.doi.org/10.1093/bioinformatics/btab067.

Abstract:
Motivation: Many real-world biomedical interactions such as ‘gene-disease’, ‘disease-symptom’ and ‘drug-target’ are modeled as a bipartite network structure. Learning meaningful representations for such networks is a fundamental problem in the research area of Network Representation Learning (NRL). NRL approaches aim to translate the network structure into low-dimensional vector representations that are useful to a variety of biomedical applications. Despite significant advances, the existing approaches still have certain limitations. First, a majority of these approaches do not model the unique topological properties of bipartite networks. Consequently, their straightforward application to the bipartite graphs yields unsatisfactory results. Second, the existing approaches typically learn representations from static networks. This is limiting for the biomedical bipartite networks that evolve at a rapid pace, and thus necessitate the development of approaches that can update the representations in an online fashion. Results: In this research, we propose a novel representation learning approach that accurately preserves the intricate bipartite structure, and efficiently updates the node representations. Specifically, we design a customized autoencoder that captures the proximity relationship between nodes participating in the bipartite bicliques (2 × 2 sub-graph), while preserving both the global and local structures. Moreover, the proposed structure-preserving technique is carefully interleaved with the central tenets of continual machine learning to design an incremental learning strategy that updates the node representations in an online manner. Taken together, the proposed approach produces meaningful representations with high fidelity and computational efficiency. Extensive experiments conducted on several biomedical bipartite networks validate the effectiveness and rationality of the proposed approach.
17

Chen, Dongming, Mingshuo Nie, Jiarui Yan, Jiangnan Meng, and Dongqi Wang. "Network Representation Learning Algorithm Based on Community Folding." Journal of Internet Technology 23, no. 2 (March 2022): 415–23. http://dx.doi.org/10.53106/160792642022032302020.

Abstract:
Network representation learning is a machine learning method that maps network topology and node information into a low-dimensional vector space, which can reduce the temporal and spatial complexity of downstream network data mining tasks such as node classification and graph clustering. This paper addresses the problem that neighborhood-information-based network representation learning algorithms ignore the global topological information of the network. We propose the Network Representation Learning Algorithm Based on Community Folding (CF-NRL), which considers the influence of community structure on the global topology of the network. Each community of the target network is regarded as a folding unit; the same network representation learning algorithm is used to learn the vector representations of the nodes on the folded network and on the target network, and the corresponding vector representations are then concatenated to obtain the final vector representation of each node. Experimental results show the excellent performance of the proposed algorithm.
18

Ghoniem, Mohammad, Jean-Daniel Fekete, and Philippe Castagliola. "On the Readability of Graphs Using Node-Link and Matrix-Based Representations: A Controlled Experiment and Statistical Analysis." Information Visualization 4, no. 2 (May 12, 2005): 114–35. http://dx.doi.org/10.1057/palgrave.ivs.9500092.

Abstract:
In this article, we describe a taxonomy of generic graph related tasks along with a computer-based evaluation designed to assess the readability of two representations of graphs: matrix-based representations and node-link diagrams. This evaluation encompasses seven generic tasks and leads to insightful recommendations for the representation of graphs according to their size and density. Typically, we show that when graphs are bigger than twenty vertices, the matrix-based visualization outperforms node-link diagrams on most tasks. Only path finding is consistently in favor of node-link diagrams throughout the evaluation.
19

Zhang, Daokun, Jie Yin, Xingquan Zhu, and Chengqi Zhang. "Search Efficient Binary Network Embedding." ACM Transactions on Knowledge Discovery from Data 15, no. 4 (June 2021): 1–27. http://dx.doi.org/10.1145/3436892.

Abstract:
Traditional network embedding primarily focuses on learning a continuous vector representation for each node, preserving network structure and/or node content information, such that off-the-shelf machine learning algorithms can be easily applied to the vector-format node representations for network analysis. However, the learned continuous vector representations are inefficient for large-scale similarity search, which often involves finding nearest neighbors measured by distance or similarity in a continuous vector space. In this article, we propose a search efficient binary network embedding algorithm called BinaryNE to learn a binary code for each node, by simultaneously modeling node context relations and node attribute relations through a three-layer neural network. BinaryNE learns binary node representations using a stochastic gradient descent-based online learning algorithm. The learned binary encoding not only reduces memory usage to represent each node, but also allows fast bit-wise comparisons to support faster node similarity search than using Euclidean or other distance measures. Extensive experiments and comparisons demonstrate that BinaryNE not only delivers more than 25 times faster search speed, but also provides comparable or better search quality than traditional continuous vector based network embedding methods. The binary codes learned by BinaryNE also render competitive performance on node classification and node clustering tasks. The source code of the BinaryNE algorithm is available at https://github.com/daokunzhang/BinaryNE.
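The speed advantage of binary codes comes from replacing continuous distance computations with bit-wise Hamming distance; the NumPy sketch below packs sign-based codes into bytes and searches with XOR, purely to illustrate the idea rather than to reproduce BinaryNE.

```python
import numpy as np

def hamming_top_k(query_code, code_db, k=10):
    """Bit-wise nearest-neighbour search over packed binary node codes.

    query_code: (B,) uint8 array of packed bits; code_db: (N, B) packed codes.
    Illustrates the fast-search property of binary embeddings; not BinaryNE itself.
    """
    xor = np.bitwise_xor(code_db, query_code)         # differing bits, byte by byte
    dist = np.unpackbits(xor, axis=1).sum(axis=1)     # Hamming distance to every node
    return np.argsort(dist)[:k]

# Example: binarize (hypothetical) continuous embeddings by sign, pack to bytes, search.
embeddings = np.random.randn(1000, 128)
codes = np.packbits(embeddings > 0, axis=1)           # (1000, 16) uint8
nearest = hamming_top_k(codes[0], codes, k=5)
```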
20

He, Dongxiao, Shuwei Liu, Meng Ge, Zhizhi Yu, Guangquan Xu, and Zhiyong Feng. "Improving Distinguishability of Class for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12349–57. http://dx.doi.org/10.1609/aaai.v38i11.29126.

Abstract:
Graph Neural Networks (GNNs) have received widespread attention and application due to their excellent performance in graph representation learning. Most existing GNNs can only aggregate 1-hop neighbors in a GNN layer, so they usually stack multiple GNN layers to obtain more information from larger neighborhoods. However, many studies have shown that model performance degrades significantly as the number of GNN layers increases. In this paper, we first introduce the concept of distinguishability of class to indirectly evaluate the learned node representations, and verify the positive correlation between distinguishability of class and model performance. Then, we propose a Graph Neural Network guided by Distinguishability of class (Disc-GNN) to monitor the representation learning, so as to learn better node representations and improve model performance. Specifically, we first perform inter-layer filtering and initial compensation based on Local Distinguishability of Class (LDC) in each layer, so that the learned node representations have the ability to distinguish different classes. Furthermore, we add a regularization term based on Global Distinguishability of Class (GDC) to achieve global optimization of model performance. Extensive experiments on six real-world datasets show the competitive performance of Disc-GNN against state-of-the-art methods on node classification and node clustering tasks.
21

Qiu, Yuanyuan, Zhijie Xu, and Jianqin Zhang. "Patch-Based Auxiliary Node Classification for Domain Adaptive Object Detection." Electronics 13, no. 7 (March 27, 2024): 1239. http://dx.doi.org/10.3390/electronics13071239.

Abstract:
Domain adaptive object detection (DAOD) aims to leverage labeled source domain data to train object detection models that can generalize well to unlabeled target domains. Recently, many researchers have considered implementing fine-grained pixel-level domain adaptation using graph representations. Existing methods construct semantically complete graphs and align them across domains via graph matching. This work introduced an auxiliary node classification task before domain alignment through graph matching, which utilizes the inherent information of graph nodes to classify them, in order to avoid suboptimal graph matching results caused by node class confusion. However, previous methods neglected the contextual information of graph nodes, leading to biased node classification and suboptimal graph matching. To solve this issue, we propose a novel patch-based auxiliary node classification method for DAOD. Unlike existing methods that use only the inherent information of nodes for node classification, our method exploits the local region information of nodes and employs multi-layer convolutional neural networks to learn the local region feature representation of nodes, enriching the node context information. Thus, accurate and robust node classification results are produced and the risk of class confusion is reduced. Moreover, we propose a progressive strategy to fuse the inherent features and the learned local region features of nodes, which ensures that the network can stably and reliably utilize local region features for accurate node classification. In this paper, we conduct abundant experiments on various DAOD scenarios and demonstrate that our proposed model outperforms existing works.
22

Keller, René, Claudia M. Eckert, and P. John Clarkson. "Matrices or Node-Link Diagrams: Which Visual Representation is Better for Visualising Connectivity Models?" Information Visualization 5, no. 1 (March 2006): 62–76. http://dx.doi.org/10.1057/palgrave.ivs.9500116.

Abstract:
Adjacency matrices or DSMs (design structure matrices) and node-link diagrams are both visual representations of graphs, which are a common form of data in many disciplines. DSMs are used throughout the engineering community for various applications, such as process modelling or change prediction. However, outside this community, DSMs (and other matrix-based representations of graphs) are rarely applied and node-link diagrams are very popular. This paper will examine, which representation is more suitable for visualising graphs. For this purpose, several user experiments were conducted that aimed to answer this research question in the context of product models used, for example in engineering, but the results can be generalised to other applications. These experiments identify key factors on the readability of graph visualisations and confirm work on comparisons of different representations. This study widens the scope of readability comparisons between node-link and matrix-based representations by introducing new user tasks and replacing simulated, undirected graphs with directed ones employing real-world semantics.
23

Hoang, Van Thuy, and O.-Joun Lee. "Transitivity-Preserving Graph Representation Learning for Bridging Local Connectivity and Role-Based Similarity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12456–65. http://dx.doi.org/10.1609/aaai.v38i11.29138.

Abstract:
Graph representation learning (GRL) methods, such as graph neural networks and graph transformer models, have been successfully used to analyze graph-structured data, mainly focusing on node classification and link prediction tasks. However, the existing studies mostly only consider local connectivity while ignoring long-range connectivity and the roles of nodes. In this paper, we propose Unified Graph Transformer Networks (UGT) that effectively integrate local and global structural information into fixed-length vector representations. First, UGT learns local structure by identifying the local sub-structures and aggregating features of the k-hop neighborhoods of each node. Second, we construct virtual edges, bridging distant nodes with structural similarity to capture the long-range dependencies. Third, UGT learns unified representations through self-attention, encoding structural distance and p-step transition probability between node pairs. Furthermore, we propose a self-supervised learning task that effectively learns transition probability to fuse local and global structural features, which could then be transferred to other downstream tasks. Experimental results on real-world benchmark datasets over various downstream tasks showed that UGT significantly outperformed baselines that consist of state-of-the-art models. In addition, UGT reaches the third-order Weisfeiler-Lehman power to distinguish non-isomorphic graph pairs.
24

Chen, Zhen, Jia Huang, Shengzheng Liu, and Haixia Long. "Multiscale Feature Fusion and Graph Convolutional Network for Detecting Ethereum Phishing Scams." Electronics 13, no. 6 (March 7, 2024): 1012. http://dx.doi.org/10.3390/electronics13061012.

Abstract:
With the emergence of blockchain technology, the cryptocurrency market has experienced significant growth in recent years, simultaneously fostering environments conducive to cybercrimes such as phishing scams. Phishing scams on blockchain platforms like Ethereum have become a grave economic threat. Consequently, there is a pressing demand for effective detection mechanisms for these phishing activities to establish a secure financial transaction environment. However, existing methods typically utilize only the most recent transaction record when constructing features, resulting in the loss of vast amounts of transaction data and failing to adequately reflect the characteristics of nodes. Addressing this need, this study introduces a multiscale feature fusion approach integrated with a graph convolutional network model to detect phishing scams on Ethereum. A node basic feature set comprising 12 features is initially designed based on the Ethereum transaction dataset in the basic feature module. Subsequently, in the edge embedding representation module, all transaction times and amounts between two nodes are sorted, and a gate recurrent unit (GRU) neural network is employed to capture the temporal features within this transaction sequence, generating a fixed-length edge embedding representation from variable-length input. In the time trading feature module, attention weights are allocated to all embedding representations surrounding a node, aggregating the edge embedding representations and structural relationships into the node. Finally, combining basic and time trading features of the node, graph convolutional networks (GCNs), SAGEConv, and graph attention networks (GATs) are utilized to classify phishing nodes. The performance of these three graph convolution-based deep learning models is validated on a real Ethereum phishing scam dataset, demonstrating commendable efficiency. Among these, SAGEConv achieves an F1-score of 0.958, an AUC-ROC value of 0.956, and an AUC-PR value of 0.949, outperforming existing methods and baseline models.
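To illustrate the edge embedding step described above, here is a small PyTorch sketch that encodes a time-sorted sequence of (timestamp, amount) transactions between two nodes into a fixed-length vector with a GRU; the feature layout, dimensions, and class name are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EdgeSequenceEncoder(nn.Module):
    """Encode a variable-length transaction history between two nodes into a
    fixed-length edge embedding with a GRU (illustrative sketch only)."""
    def __init__(self, in_dim=2, hidden_dim=32):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True)

    def forward(self, transactions):
        # transactions: (T, 2) sequence of (timestamp, amount), sorted by time.
        seq = transactions.unsqueeze(0)          # add batch dimension -> (1, T, 2)
        _, h_last = self.gru(seq)                # h_last: (1, 1, hidden_dim)
        return h_last.squeeze(0).squeeze(0)      # fixed-length edge embedding

encoder = EdgeSequenceEncoder()
txs = torch.tensor([[1.0, 0.5], [2.5, 1.2], [4.0, 0.1]])   # three transactions
edge_embedding = encoder(txs)                               # shape: (32,)
```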
25

Roy, Indradyumna, Abir De, and Soumen Chakrabarti. "Adversarial Permutation Guided Node Representations for Link Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9445–53. http://dx.doi.org/10.1609/aaai.v35i11.17138.

Abstract:
After observing a snapshot of a social network, a link prediction (LP) algorithm identifies node pairs between which new edges will likely materialize in future. Most LP algorithms estimate a score for currently non-neighboring node pairs, and rank them by this score. Recent LP systems compute this score by comparing dense, low dimensional vector representations of nodes. Graph neural networks (GNNs), in particular graph convolutional networks (GCNs), are popular examples. For two nodes to be meaningfully compared, their embeddings should be indifferent to reordering of their neighbors. GNNs typically use simple, symmetric set aggregators to ensure this property, but this design decision has been shown to produce representations with limited expressive power. Sequence encoders are more expressive, but are permutation sensitive by design. Recent efforts to overcome this dilemma turn out to be unsatisfactory for LP tasks. In response, we propose PermGNN, which aggregates neighbor features using a recurrent, order-sensitive aggregator and directly minimizes an LP loss while it is `attacked' by adversarial generator of neighbor permutations. PermGNN has superior expressive power compared to earlier GNNs. Next, we devise an optimization framework to map PermGNN's node embeddings to a suitable locality-sensitive hash, which speeds up reporting the top-K most likely edges for the LP task. Our experiments on diverse datasets show that PermGNN outperforms several state-of-the-art link predictors by a significant margin, and can predict the most likely edges fast.
26

Wang, Qizhou, Guansong Pang, Mahsa Salehi, Wray Buntine, and Christopher Leckie. "Cross-Domain Graph Anomaly Detection via Anomaly-Aware Contrastive Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4676–84. http://dx.doi.org/10.1609/aaai.v37i4.25591.

Abstract:
Cross-domain graph anomaly detection (CD-GAD) describes the problem of detecting anomalous nodes in an unlabelled target graph using auxiliary, related source graphs with labelled anomalous and normal nodes. Although it presents a promising approach to address the notoriously high false positive issue in anomaly detection, little work has been done in this line of research. There are numerous domain adaptation methods in the literature, but it is difficult to adapt them for GAD due to the unknown distributions of the anomalies and the complex node relations embedded in graph data. To this end, we introduce a novel domain adaptation approach, namely Anomaly-aware Contrastive alignmenT (ACT), for GAD. ACT is designed to jointly optimise: (i) unsupervised contrastive learning of normal representations of nodes in the target graph, and (ii) anomaly-aware one-class alignment that aligns these contrastive node representations and the representations of labelled normal nodes in the source graph, while enforcing significant deviation of the representations of the normal nodes from the labelled anomalous nodes in the source graph. In doing so, ACT effectively transfers anomaly-informed knowledge from the source graph to learn the complex node relations of the normal class for GAD on the target graph without any specification of the anomaly distributions. Extensive experiments on eight CD-GAD settings demonstrate that our approach ACT achieves substantially improved detection performance over 10 state-of-the-art GAD methods. Code is available at https://github.com/QZ-WANG/ACT.
27

Hush, Don R. "Training a Sigmoidal Node Is Hard." Neural Computation 11, no. 5 (July 1, 1999): 1249–60. http://dx.doi.org/10.1162/089976699300016449.

Abstract:
This article proves that the task of computing near-optimal weights for sigmoidal nodes under the L1 regression norm is NP-Hard. For the special case where the sigmoid is piecewise linear, we prove a slightly stronger result: that computing the optimal weights is NP-Hard. These results parallel those for the one-node pattern recognition problem, namely that determining the optimal weights for a threshold logic node is also intractable. Our results have important consequences for constructive algorithms that build a regression model one node at a time. They suggest that although such methods are (in principle) capable of producing efficiently sized representations (Barron, 1993; Jones, 1992), finding such representations may be computationally intractable. These results hold only in the deterministic sense; that is, they do not exclude the possibility that such representations may be found efficiently with high probability. In fact, this motivates the use of heuristic or randomized algorithms for this problem.
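For reference, the single-node L1 regression problem discussed here can be stated as the following optimization (a standard formulation written out for context, not quoted from the article), where (x_i, y_i) are the m training pairs:

```latex
\min_{\mathbf{w},\, b}\ \sum_{i=1}^{m} \left|\, y_i - \sigma\!\left(\mathbf{w}^{\top}\mathbf{x}_i + b\right) \right|,
\qquad \sigma(t) = \frac{1}{1 + e^{-t}}
```

The article's result is that finding near-optimal weights (w, b) for this objective is NP-Hard.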
28

Zhang, Wei, Baoyang Cui, Zhonglin Ye, and Zhen Liu. "A Network Representation Learning Model Based on Multiple Remodeling of Node Attributes." Mathematics 11, no. 23 (November 27, 2023): 4788. http://dx.doi.org/10.3390/math11234788.

Abstract:
Current network representation learning models mainly use matrix factorization-based and neural network-based approaches, and most models still focus only on local neighbor features of nodes. Knowledge representation learning aims to learn low-dimensional dense representations of entities and relations from structured knowledge graphs, and most models use the triplets to capture semantic, logical, and topological features between entities and relations. In order to extend the generalization capability of the network representation learning models, this paper proposes a network representation learning algorithm based on multiple remodeling of node attributes named MRNR. The model constructs the knowledge triplets through the textual association relationships between nodes. Meanwhile, a novel co-occurrence word training method has been proposed. Multiple remodeling of node attributes can significantly improve the effectiveness of network representation learning. At the same time, MRNR introduces the attention mechanism to achieve the weight information for key co-occurrence words and triplets, which further models the semantic and topological features between entities and relations, and it makes the network embedding more accurate and has better generalization ability.
29

Wang, Yinghui, Wenjun Wang, Minglai Shao, and Yueheng Sun. "Deep Cross-Network Alignment with Anchor Node Pair Diverse Local Structure." Algorithms 16, no. 5 (April 28, 2023): 234. http://dx.doi.org/10.3390/a16050234.

Abstract:
Network alignment (NA) offers a comprehensive way to build associations between different networks by identifying shared nodes. While the majority of current NA methods rely on the topological consistency assumption, which posits that shared nodes across different networks typically have similar local structures or neighbors, we argue that anchor nodes, which play a pivotal role in NA, face a more challenging scenario that is often overlooked. In this paper, we conduct extensive statistical analysis across networks to investigate the connection status of labeled anchor node pairs and categorize them into four situations. Based on our analysis, we propose an end-to-end network alignment framework that uses node representations as a distribution rather than a point vector to better handle the structural diversity of networks. To mitigate the influence of specific nodes, we introduce a mask mechanism during the representation learning process. In addition, we utilize meta-learning to generalize the learned information on labeled anchor node pairs to other node pairs. Finally, we perform comprehensive experiments on both real-world and synthetic datasets to confirm the efficacy of our proposed method. The experimental results demonstrate that the proposed model outperforms the state-of-the-art methods significantly.
30

Ding, Kaize, Yancheng Wang, Yingzhen Yang, and Huan Liu. "Eliciting Structural and Semantic Global Knowledge in Unsupervised Graph Contrastive Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7378–86. http://dx.doi.org/10.1609/aaai.v37i6.25898.

Abstract:
Graph Contrastive Learning (GCL) has recently drawn much research interest for learning generalizable node representations in a self-supervised manner. In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates the node contextual information based on its local neighborhoods. However, nodes sharing similar characteristics may not always be geographically close, which poses a great challenge for unsupervised GCL efforts due to their inherent limitations in capturing such global graph knowledge. In this work, we address their inherent limitations by proposing a simple yet effective framework, Simple Neural Networks with Structural and Semantic Contrastive Learning (S^3-CL). Notably, by virtue of the proposed structural and semantic contrastive learning algorithms, even a simple neural network can learn expressive node representations that preserve valuable global structural and semantic patterns. Our experiments demonstrate that the node representations learned by S^3-CL achieve superior performance on different downstream tasks compared with the state-of-the-art unsupervised GCL methods. Implementation and more experimental details are publicly available at https://github.com/kaize0409/S-3-CL.
31

Jung, Jinhong, Jaemin Yoo, and U. Kang. "Signed random walk diffusion for effective representation learning in signed graphs." PLOS ONE 17, no. 3 (March 17, 2022): e0265001. http://dx.doi.org/10.1371/journal.pone.0265001.

Abstract:
How can we model node representations to accurately infer the signs of missing edges in a signed social graph? Signed social graphs have attracted considerable attention to model trust relationships between people. Various representation learning methods such as network embedding and graph convolutional network (GCN) have been proposed to analyze signed graphs. However, existing network embedding models are not end-to-end for a specific task, and GCN-based models exhibit a performance degradation issue when their depth increases. In this paper, we propose Signed Diffusion Network (SidNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a new random walk based feature aggregation, which is specially designed for signed graphs, so that SidNet effectively diffuses hidden node features and uses more information from neighboring nodes. Through extensive experiments, we show that SidNet significantly outperforms state-of-the-art models in terms of link sign prediction accuracy.
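The sketch below illustrates one plausible form of signed random-walk feature diffusion, where positive edges propagate neighbour features, negative edges propagate their negation, and a restart term retains the original features; this is a simplified, assumption-laden illustration, not SidNet itself.

```python
import numpy as np

def signed_diffusion(pos_adj, neg_adj, features, c=0.15, iters=10):
    """Signed random-walk style diffusion of node features (illustrative only).

    Positive edges propagate neighbour features as-is, negative edges flip their
    sign, and a restart term keeps part of the original features at every step.
    """
    deg = (pos_adj.sum(axis=1) + neg_adj.sum(axis=1)).astype(float)
    deg[deg == 0] = 1.0                      # avoid division by zero for isolated nodes
    p_pos = pos_adj / deg[:, None]
    p_neg = neg_adj / deg[:, None]
    h = features.astype(float).copy()
    for _ in range(iters):
        h = (1 - c) * (p_pos @ h - p_neg @ h) + c * features
    return h
```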
32

Najafi, Bahareh, Saeedeh Parsaeefard, and Alberto Leon-Garcia. "Entropy-Aware Time-Varying Graph Neural Networks with Generalized Temporal Hawkes Process: Dynamic Link Prediction in the Presence of Node Addition and Deletion." Machine Learning and Knowledge Extraction 5, no. 4 (October 4, 2023): 1359–81. http://dx.doi.org/10.3390/make5040069.

Abstract:
This paper addresses the problem of learning temporal graph representations, which capture the changing nature of complex evolving networks. Existing approaches mainly focus on adding new nodes and edges to capture dynamic graph structures. However, to achieve more accurate representation of graph evolution, we consider both the addition and deletion of nodes and edges as events. These events occur at irregular time scales and are modeled using temporal point processes. Our goal is to learn the conditional intensity function of the temporal point process to investigate the influence of deletion events on node representation learning for link-level prediction. We incorporate network entropy, a measure of node and edge significance, to capture the effect of node deletion and edge removal in our framework. Additionally, we leveraged the characteristics of a generalized temporal Hawkes process, which considers the inhibitory effects of events where past occurrences can reduce future intensity. This framework enables dynamic representation learning by effectively modeling both addition and deletion events in the temporal graph. To evaluate our approach, we utilize autonomous system graphs, a family of inhomogeneous sparse graphs with instances of node and edge additions and deletions, in a link prediction task. By integrating these enhancements into our framework, we improve the accuracy of dynamic link prediction and enable better understanding of the dynamic evolution of complex networks.
33

Zhan, Huixin, Kun Zhang, Keyi Lu, and Victor S. Sheng. "Measuring the Privacy Leakage via Graph Reconstruction Attacks on Simplicial Neural Networks (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16380–81. http://dx.doi.org/10.1609/aaai.v37i13.27050.

Abstract:
In this paper, we measure privacy leakage by studying whether graph representations can be inverted to recover the graph used to generate them via a graph reconstruction attack (GRA). We propose a GRA that recovers a graph's adjacency matrix from the representations via a graph decoder that minimizes the reconstruction loss between the partial graph and the reconstructed graph. We study three types of representations that are trained on the graph, i.e., representations output by a graph convolutional network (GCN), a graph attention network (GAT), and our proposed simplicial neural network (SNN) based on a higher-order combinatorial Laplacian. Unlike the first two types of representations, which only encode pairwise relationships, the third type, i.e., the SNN outputs, encodes higher-order interactions (e.g., homological features) between nodes. We find that the SNN outputs show the lowest privacy-preserving ability against the GRA, followed by those of GATs and GCNs, which indicates the importance of building more private representations with higher-order node information that can defend against potential threats such as GRAs.
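As a toy version of the graph reconstruction attack described above, the following PyTorch sketch fits a bilinear decoder on released node representations against a known partial adjacency and then scores all node pairs; the decoder form, mask-based supervision, and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def graph_reconstruction_attack(node_reprs, known_adj, known_mask, steps=200, lr=0.01):
    """Toy graph reconstruction attack (simplified stand-in for the one above).

    node_reprs: (N, d) released node representations.
    known_adj:  (N, N) float tensor with 0/1 entries for the partial graph.
    known_mask: (N, N) boolean tensor marking which pairs the attacker knows.
    """
    d = node_reprs.size(1)
    w = torch.nn.Parameter(torch.eye(d))                  # bilinear decoder weights
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        logits = node_reprs @ w @ node_reprs.t()
        # Fit the decoder so it reproduces the known part of the adjacency matrix.
        loss = F.binary_cross_entropy_with_logits(logits[known_mask], known_adj[known_mask])
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Score every node pair, including the unknown ones, to recover the graph.
        return torch.sigmoid(node_reprs @ w @ node_reprs.t())
```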
APA, Harvard, Vancouver, ISO, and other styles
34

Peng, Hao, Ruitong Zhang, Yingtong Dou, Renyu Yang, Jingyi Zhang, and Philip S. Yu. "Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks." ACM Transactions on Information Systems 40, no. 4 (October 31, 2022): 1–46. http://dx.doi.org/10.1145/3490181.

Full text
Abstract:
Graph Neural Networks (GNNs) have been widely used for representation learning of various structured graph data, typically through message passing among nodes by aggregating their neighborhood information via different operations. While promising, most existing GNNs oversimplify the complexity and diversity of the edges in the graph and thus are inefficient at coping with ubiquitous heterogeneous graphs, which typically take the form of multi-relational graph representations. In this article, we propose RioGNN, a novel Reinforced, recursive, and flexible neighborhood selection guided multi-relational Graph Neural Network architecture, to navigate the complexity of neural network structures whilst maintaining relation-dependent representations. We first construct a multi-relational graph, according to the practical task, to reflect the heterogeneity of nodes, edges, attributes, and labels. To avoid embedding over-assimilation among different types of nodes, we employ a label-aware neural similarity measure to ascertain the most similar neighbors based on node attributes. A reinforced relation-aware neighbor selection mechanism is developed to choose the most similar neighbors of a target node within each relation before aggregating all neighborhood information from different relations to obtain the eventual node embedding. In particular, to improve the efficiency of neighbor selection, we propose a new recursive and scalable reinforcement learning framework with estimable depth and width for different scales of multi-relational graphs. RioGNN can learn more discriminative node embeddings with enhanced explainability, owing to the recognition of the individual importance of each relation via the filtering threshold mechanism. Comprehensive experiments on real-world graph data and practical tasks demonstrate advances in effectiveness, efficiency, and model explainability compared with other GNN models.
APA, Harvard, Vancouver, ISO, and other styles
35

Song, Anping, Ruyi Ji, Wendong Qi, and Chenbei Zhang. "RGCLN: Relational Graph Convolutional Ladder-Shaped Networks for Signed Network Clustering." Applied Sciences 13, no. 3 (January 19, 2023): 1367. http://dx.doi.org/10.3390/app13031367.

Full text
Abstract:
Node embeddings are increasingly used in various analysis tasks of networks due to their excellent dimensional compression and feature representation capabilities. However, most researchers’ priorities have always been link prediction, which leads to signed network clustering being under-explored. Therefore, we propose an asymmetric ladder-shaped architecture called RGCLN based on multi-relational graph convolution that can fuse deep node features to generate node representations with great representational power. RGCLN adopts a deep framework to capture and convey information instead of using the common method in signed networks—balance theory. In addition, RGCLN adds a size constraint to the loss function to prevent image-like overfitting during the unsupervised learning process. Based on the node features learned by this end-to-end trained model, RGCLN performs community detection in a large number of real-world networks and generative networks, and the results indicate that our model has an advantage over state-of-the-art network embedding algorithms.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Yanbei, Xiao Wang, Shu Wu, and Zhitao Xiao. "Independence Promoted Graph Disentangled Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4916–23. http://dx.doi.org/10.1609/aaai.v34i04.5929.

Full text
Abstract:
We address the problem of disentangled representation learning with independent latent factors in graph convolutional networks (GCNs). Current methods usually learn a node representation by describing its neighborhood as a perceptual whole in a holistic manner while ignoring the entanglement of the latent factors. However, a real-world graph is formed by the complex interaction of many latent factors (e.g., the same hobby, education, or work in a social network), and little effort has been made toward exploring disentangled representations in GCNs. In this paper, we propose a novel Independence Promoted Graph Disentangled Network (IPGDN) to learn disentangled node representations while enhancing the independence among them. In particular, we first present disentangled representation learning via a neighborhood routing mechanism, and then employ the Hilbert-Schmidt Independence Criterion (HSIC) to enforce independence between the latent representations, which is effectively integrated into a graph convolutional framework as a regularizer at the output layer. Experimental studies on real-world graphs validate our model and demonstrate that our algorithms outperform the state of the art by a wide margin in different network applications, including semi-supervised graph classification, graph clustering, and graph visualization.
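The HSIC regularizer mentioned above has a standard empirical estimator, tr(KHLH)/(n-1)^2 with a centering matrix H and kernel matrices K, L; the sketch below computes it with an RBF kernel, where the kernel choice and bandwidth are assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix for rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC between two representation blocks X, Y of shape (n, d)."""
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
Z1 = rng.normal(size=(64, 16))   # representation of one latent factor
Z2 = rng.normal(size=(64, 16))   # representation of another latent factor
print(hsic(Z1, Z2))              # added as a penalty to push the factors apart
```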
APA, Harvard, Vancouver, ISO, and other styles
37

Ren, Min, Yunlong Wang, Zhenan Sun, and Tieniu Tan. "Dynamic Graph Representation for Occlusion Handling in Biometrics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11940–47. http://dx.doi.org/10.1609/aaai.v34i07.6869.

Full text
Abstract:
The generalization ability of convolutional neural networks (CNNs) for biometrics drops greatly due to the adverse effects of various occlusions. To this end, we propose a novel unified framework that integrates the merits of both CNNs and graphical models to learn dynamic graph representations for occlusion problems in biometrics, called Dynamic Graph Representation (DGR). Convolutional features of certain regions are re-crafted by a graph generator to establish the connections among the spatial parts of biometrics and to build feature graphs based on these node representations. Each node of a feature graph corresponds to a specific part of the input image, and the edges express the spatial relationships between parts. By analyzing the similarities between the nodes, the framework is able to adaptively remove the nodes representing the occluded parts. During dynamic graph matching, we propose a novel strategy to measure the distances of both nodes and adjacency matrices. In this way, the proposed method is more convincing than CNN-based methods because the dynamic graph method implies a more illustrative and reasonable inference of the biometric decision. Experiments conducted on iris and face data demonstrate the superiority of the proposed framework, which boosts the accuracy of occluded biometric recognition by a large margin compared with baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
38

Merchant, Arpit, Aristides Gionis, and Michael Mathioudakis. "Succinct graph representations as distance oracles." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2297–306. http://dx.doi.org/10.14778/3551793.3551794.

Full text
Abstract:
Distance oracles answer shortest-path queries between any pair of nodes in a graph. They are often built using succinct graph representations such as spanners, sketches, and compressors to minimize oracle size and query answering latency. Node embeddings, in particular, offer graph representations that place adjacent nodes nearby each other in a low-rank space. However, their use in the design of distance oracles has not been sufficiently studied. In this paper, we empirically compare exact distance oracles constructed based on a variety of node embeddings and other succinct representations. We evaluate twelve such oracles along three measures of efficiency: construction time, memory requirements, and query-processing time over fourteen real datasets and four synthetic graphs. We show that distances between embedding vectors are excellent estimators of graph distances when graphs are well-structured, but less so for more unstructured graphs. Overall, our findings suggest that exact oracles based on embeddings can be constructed faster than multi-dimensional scaling (MDS) but slower than compressed adjacency indexes, require less memory than landmark oracles but more than sparsifiers or indexes, can answer queries faster than indexes but slower than MDS, and are exact more often with a smaller additive error than spanners (that have multiplicative error) while not being lossless like adjacency lists. Finally, while the exactness of such oracles is infeasible to maintain for huge graphs even under large amounts of resources, we empirically demonstrate that approximate oracles based on GOSH embeddings can efficiently scale to graphs of 100M+ nodes with only small additive errors in distance estimations.
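The basic embedding-as-oracle idea can be illustrated by comparing Euclidean distances between embedding vectors with exact BFS hop distances; the toy path graph and the hand-picked one-dimensional embedding below are assumptions for illustration only.

```python
import numpy as np
from collections import deque

def bfs_distances(adj, source):
    """Exact shortest-path (hop) distances from source via breadth-first search."""
    n = len(adj)
    dist = [-1] * n
    dist[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Path graph 0-1-2-3-4 as an adjacency list.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

# Hypothetical 1-D embedding that places adjacent nodes near each other.
emb = np.array([[0.0], [1.1], [2.0], [3.2], [4.1]])

exact = bfs_distances(adj, 0)
estimated = [float(np.linalg.norm(emb[0] - emb[v])) for v in range(5)]
print(list(zip(exact, estimated)))   # oracle query: distance(0, v)
```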
APA, Harvard, Vancouver, ISO, and other styles
39

Ma, Chuang, and Helong Xia. "A one-step graph clustering method on heterogeneous graphs via variational graph embedding." Electronic Research Archive 32, no. 4 (2024): 2772–88. http://dx.doi.org/10.3934/era.2024125.

Full text
Abstract:
Graph clustering is one of the fundamental tasks in graph data mining, with significant applications in social networks and recommendation systems. Traditional methods for clustering heterogeneous graphs typically involve obtaining node representations as a preliminary step, followed by the application of clustering algorithms to achieve the final clustering results. However, this two-step approach leads to a disconnection between the optimization of node representation and the clustering process, making it challenging to achieve optimal results. In this paper, we propose a graph clustering approach specifically designed for heterogeneous graphs that unifies the optimization of node representation and the clustering process for nodes in a heterogeneous graph. We assume that the relationships between different meta-paths in the heterogeneous graph are mutually independent. By maximizing the joint probability of meta-paths and nodes, we derive the optimization objective through variational methods. Finally, we employ backpropagation and reparameterization techniques to optimize this objective and thereby achieve the desired clustering results. Experiments conducted on multiple real heterogeneous datasets demonstrate that the proposed method is competitive with existing methods.
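The reparameterization step the authors rely on is the standard Gaussian trick, sketched below; the variable names and shapes are illustrative assumptions.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(exp(log_var))) while keeping the path differentiable.

    z = mu + sigma * eps with eps from a standard normal, so the randomness is
    isolated in eps and gradients can flow through mu and log_var.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))              # per-node means from the encoder
log_var = np.full((4, 2), -1.0)    # per-node log-variances from the encoder
z = reparameterize(mu, log_var, rng)   # node embeddings fed to the clustering objective
print(z)
```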
APA, Harvard, Vancouver, ISO, and other styles
40

DICKINSON, P. J., M. KRAETZL, H. BUNKE, M. NEUHAUS, and A. DADEJ. "SIMILARITY MEASURES FOR HIERARCHICAL REPRESENTATIONS OF GRAPHS WITH UNIQUE NODE LABELS." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 03 (May 2004): 425–42. http://dx.doi.org/10.1142/s021800140400323x.

Full text
Abstract:
A hierarchical abstraction scheme based on node contraction and two related similarity measures for graphs with unique node labels are proposed in this paper. The contraction scheme reduces the number of nodes in a graph and leads to a speed-up in the computation of graph similarity. Theoretical properties of the new graph similarity measures are derived and experimentally verified. A potential application of the proposed graph abstraction scheme in the domain of computer network monitoring is discussed.
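For graphs with unique node labels, similarity computation reduces to set operations, and contraction merges labels under a mapping; the sketch below illustrates that general idea, not the paper's specific hierarchical measures.

```python
def graph_distance(nodes1, edges1, nodes2, edges2):
    """Edit-style distance for graphs whose node labels are unique.

    With unique labels, matching nodes across graphs is trivial, so the distance
    reduces to counting non-shared nodes and edges via set symmetric differences.
    """
    node_diff = len(nodes1 ^ nodes2)
    edge_diff = len(edges1 ^ edges2)
    return node_diff + edge_diff

def contract(nodes, edges, mapping):
    """Contract nodes according to a label mapping (e.g., IP address -> subnet)."""
    new_nodes = {mapping.get(n, n) for n in nodes}
    new_edges = {(mapping.get(u, u), mapping.get(v, v)) for u, v in edges}
    new_edges = {(u, v) for u, v in new_edges if u != v}   # drop self-loops
    return new_nodes, new_edges

g1 = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})
g2 = ({"a", "b", "d"}, {("a", "b"), ("b", "d")})
print(graph_distance(*g1, *g2))          # distance between the two labeled graphs
print(contract(*g1, {"b": "a"}))         # coarser view of g1 after contracting b into a
```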
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Siyi, Jie Hong, Bo Lang, and Lin Huang. "DAG: Dual Attention Graph Representation Learning for Node Classification." Mathematics 11, no. 17 (August 28, 2023): 3691. http://dx.doi.org/10.3390/math11173691.

Full text
Abstract:
Transformer-based graph neural networks have accomplished notable achievements by utilizing the self-attention mechanism for message passing in various domains. However, traditional methods overlook the diverse significance of intra-node representations, focusing solely on inter-node interactions. To overcome this limitation, we propose DAG (Dual Attention Graph), a novel approach that integrates both intra-node and inter-node dynamics for node classification tasks. By considering the information exchange process between nodes from dual branches, DAG provides a holistic understanding of information propagation within graphs, enhancing the interpretability of graph-based machine learning applications. Experimental evaluations demonstrate that DAG excels in node classification tasks, outperforming current benchmark models across ten datasets.
APA, Harvard, Vancouver, ISO, and other styles
42

Kefato, Zekarias, and Sarunas Girdzijauskas. "Gossip and Attend: Context-Sensitive Graph Representation Learning." Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 351–59. http://dx.doi.org/10.1609/icwsm.v14i1.7305.

Full text
Abstract:
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representations of high-dimensional and often sparse graphs. Most studies explore the structure and metadata associated with the graph using random walks and employ unsupervised or semi-supervised learning schemes. Learning in these methods is context-free, resulting in only a single representation per node. Recent studies have questioned the adequacy of a single representation and proposed context-sensitive approaches, which are capable of extracting multiple node representations for different contexts. This has proved to be highly effective in applications such as link prediction and ranking. However, most of these methods rely on additional textual features that require complex and expensive RNNs or CNNs to capture high-level features, or rely on a community detection algorithm to identify multiple contexts of a node. In this study we show that, in order to extract high-quality context-sensitive node representations, it is not necessary to rely on supplementary node features, nor to employ computationally heavy and complex models. We propose Goat, a context-sensitive algorithm inspired by gossip communication and a mutual attention mechanism operating simply over the structure of the graph. We show the efficacy of Goat using 6 real-world datasets on link prediction and node clustering tasks and compare it against 12 popular and state-of-the-art (SOTA) baselines. Goat consistently outperforms them and achieves up to 12% and 19% gains over the best performing methods on link prediction and clustering tasks, respectively.
APA, Harvard, Vancouver, ISO, and other styles
43

Huang, Zhaoci, Wenzhe Xu, and Xinjian Zhuo. "Community-CL: An Enhanced Community Detection Algorithm Based on Contrastive Learning." Entropy 25, no. 6 (May 29, 2023): 864. http://dx.doi.org/10.3390/e25060864.

Full text
Abstract:
Graph contrastive learning (GCL) has gained considerable attention as a self-supervised learning technique that has been successfully employed in various applications, such as node classification, node clustering, and link prediction. Despite its achievements, GCL has limited exploration of the community structure of graphs. This paper presents a novel online framework called Community Contrastive Learning (Community-CL) for simultaneously learning node representations and detecting communities in a network. The proposed method employs contrastive learning to minimize the difference in the latent representations of nodes and communities in different graph views. To achieve this, learnable graph augmentation views using a graph auto-encoder (GAE) are proposed, followed by a shared encoder that learns the feature matrix of the original graph and augmentation views. This joint contrastive framework enables more accurate representation learning of the network and results in more expressive embeddings than traditional community detection algorithms that solely optimize for community structure. Experimental results demonstrate that Community-CL achieves superior performance compared to state-of-the-art baselines in community detection. Specifically, the NMI of Community-CL is reported to be 0.714 (0.551) on the Amazon-Photo (Amazon-Computers) dataset, which represents a performance improvement of up to 16% compared with the best baseline.
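The node-level contrastive objective shared by GCL-style methods is typically an NT-Xent-like loss between two graph views; the sketch below is a generic version with an assumed temperature, not necessarily the exact loss used in Community-CL.

```python
import numpy as np

def nt_xent(Z1, Z2, tau=0.5):
    """Contrast node i's embedding in view 1 against all node embeddings in view 2.

    The positive pair is (Z1[i], Z2[i]); every other node in view 2 is a negative.
    """
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / tau                       # cosine similarities between views
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # pull matching nodes together

rng = np.random.default_rng(0)
Z_view1 = rng.normal(size=(32, 16))   # embeddings from one augmented view
Z_view2 = rng.normal(size=(32, 16))   # embeddings from the other view
print(nt_xent(Z_view1, Z_view2))
```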
APA, Harvard, Vancouver, ISO, and other styles
44

Sheng, Jinfang, Zili Yang, Bin Wang, and Yu Chen. "Attribute Graph Embedding Based on Multi-Order Adjacency Views and Attention Mechanisms." Mathematics 12, no. 5 (February 27, 2024): 697. http://dx.doi.org/10.3390/math12050697.

Full text
Abstract:
Graph embedding plays an important role in the analysis and study of typical non-Euclidean data, such as graphs. Graph embedding aims to transform complex graph structures into vector representations for further machine learning or data mining tasks. It helps capture relationships and similarities between nodes, providing better representations for various tasks on graphs. Different orders of neighbors have different impacts on the generation of node embedding vectors. Therefore, this paper proposes a multi-order adjacency view encoder to fuse the feature information of neighbors at different orders. We generate different node views for different orders of neighbor information, consider different orders of neighbor information through different views, and then use attention mechanisms to integrate node embeddings from different views. Finally, we evaluate the effectiveness of our model through downstream tasks on the graph. Experimental results demonstrate that our model achieves improvements in attributed graph clustering and link prediction tasks compared to existing methods, indicating that the generated embedding representations have higher expressiveness.
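One simple way to realize multi-order adjacency views is to propagate features through successive applications of the normalized adjacency matrix and fuse the resulting views with softmax attention weights; the sketch below follows that recipe, with the fixed attention scores standing in for weights that would normally be learned.

```python
import numpy as np

def normalize(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_order_views(A, X, orders=3):
    """One propagated feature matrix per neighborhood order (1-hop, 2-hop, ...)."""
    P = normalize(A)
    views, H = [], X
    for _ in range(orders):
        H = P @ H
        views.append(H)
    return views

def attention_fuse(views, scores):
    """Weight each order's view by a softmax over (assumed, normally learned) scores."""
    w = np.exp(scores) / np.exp(scores).sum()
    return sum(wk * Vk for wk, Vk in zip(w, views))

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
views = multi_order_views(A, X)
print(attention_fuse(views, scores=np.array([0.2, 0.5, 0.3])))
```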
APA, Harvard, Vancouver, ISO, and other styles
45

Mežnar, Sebastian, Nada Lavrač, and Blaž Škrlj. "Transfer Learning for Node Regression Applied to Spreading Prediction." Complex Systems 30, no. 4 (December 15, 2021): 457–81. http://dx.doi.org/10.25088/complexsystems.30.4.457.

Full text
Abstract:
Understanding how information propagates in real-life complex networks yields a better understanding of dynamic processes such as misinformation or epidemic spreading. The recently introduced branch of machine learning methods for learning node representations offers many novel applications, one of them being the task of spreading prediction addressed in this paper. We explore the utility of the state-of-the-art node representation learners when used to assess the effects of spreading from a given node, estimated via extensive simulations. Further, as many real-life networks are topologically similar, we systematically investigate whether the learned models generalize to previously unseen networks, showing that in some cases very good model transfer can be obtained. This paper is one of the first to explore transferability of the learned representations for the task of node regression; we show there exist pairs of networks with similar structure between which the trained models can be transferred (zero-shot) and demonstrate their competitive performance. To our knowledge, this is one of the first attempts to evaluate the utility of zero-shot transfer for the task of node regression.
APA, Harvard, Vancouver, ISO, and other styles
46

Najafi, Bahareh, Saeedeh Parsaeefard, and Alberto Leon-Garcia. "Missing Data Estimation in Temporal Multilayer Position-Aware Graph Neural Network (TMP-GNN)." Machine Learning and Knowledge Extraction 4, no. 2 (April 30, 2022): 397–417. http://dx.doi.org/10.3390/make4020017.

Full text
Abstract:
GNNs have been proven to perform highly effectively in various node-level, edge-level, and graph-level prediction tasks in several domains. Existing approaches mainly focus on static graphs. However, many graphs change over time: edges may disappear, and node or edge attributes may change from one time to another. It is essential to consider such evolution in the representation learning of nodes in time-varying graphs. In this paper, we propose a Temporal Multilayer Position-Aware Graph Neural Network (TMP-GNN), a node embedding approach for dynamic graphs that incorporates the interdependence of temporal relations into embedding computation. We evaluate the performance of TMP-GNN on two different representations of temporal multilayered graphs. The performance is assessed against the most popular GNNs on a node-level prediction task. Then, we incorporate TMP-GNN into a deep learning framework to estimate missing data and compare the performance with the corresponding competent GNNs from our former experiment and a baseline method. Experimental results on four real-world datasets yield up to 58% lower ROC AUC for the pair-wise node classification task and 96% lower MAE in missing feature estimation, particularly for graphs with a relatively high number of nodes and a lower mean degree of connectivity.
APA, Harvard, Vancouver, ISO, and other styles
47

Sun, Li, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, and Philip S. Yu. "Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4375–83. http://dx.doi.org/10.1609/aaai.v35i5.16563.

Full text
Abstract:
Learning representations for graphs plays a critical role in a wide spectrum of downstream applications. In this paper, we summarize the limitations of prior works in three respects: representation space, modeling dynamics, and modeling uncertainty. To bridge this gap, we propose, for the first time, to learn dynamic graph representations in hyperbolic space, with the aim of inferring stochastic node representations. Working in hyperbolic space, we present a novel Hyperbolic Variational Graph Neural Network, referred to as HVGNN. In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach. To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations from hyperbolic normal distributions. Furthermore, we introduce a reparameterisable sampling algorithm for the hyperbolic normal distribution to enable gradient-based learning of HVGNN. Extensive experiments show that HVGNN outperforms state-of-the-art baselines on real-world datasets.
APA, Harvard, Vancouver, ISO, and other styles
48

Shi, Lei, Bin Hu, Deng Zhao, Jianshan He, Zhiqiang Zhang, and Jun Zhou. "Structural Information Enhanced Graph Representation for Link Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14964–72. http://dx.doi.org/10.1609/aaai.v38i13.29417.

Full text
Abstract:
Link prediction is a fundamental task of graph machine learning, and Graph Neural Network (GNN) based methods have become the mainstream approach due to their good performance. However, the typical practice learns node representations through neighborhood aggregation, lacking awareness of the structural relationships between target nodes. Recently, some methods have attempted to address this issue with node labeling tricks. However, they still rely on the node-centric neighborhood message passing of GNNs, which we believe involves two limitations in terms of information perception and transmission for link prediction. First, it cannot perceive long-range structural information due to the restricted receptive fields. Second, there may be information loss when a node-centric model is applied to a link-centric task. In addition, we empirically find that neighbor node features can introduce noise for link prediction. To address these issues, we propose a structural information enhanced link prediction framework, which removes the neighbor node features while fitting neighborhood graph structures more closely through the GNN. Furthermore, we introduce the Binary Structural Transformer (BST) to encode the structural relationships between target nodes, complementing the deficiency of the GNN. Our approach achieves remarkable results on multiple popular benchmarks, including ranking first on ogbl-ppa, ogbl-citation2, and Pubmed.
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Wang, Siwei Wang, Xifeng Guo, Zhenyu Zhou, and En Zhu. "Auxiliary Graph for Attribute Graph Clustering." Entropy 24, no. 10 (October 2, 2022): 1409. http://dx.doi.org/10.3390/e24101409.

Full text
Abstract:
Attribute graph clustering algorithms that incorporate topological structure information into node characteristics to build robust representations have proven to have promising efficacy in a variety of applications. However, the given topological structure emphasizes local links between linked nodes but fails to convey relationships between nodes that are not directly linked, limiting the potential for further improvement of clustering performance. To solve this issue, we offer the Auxiliary Graph for Attribute Graph Clustering technique (AGAGC). Specifically, we construct an additional graph as a supervisor based on the node attributes. The additional graph can serve as an auxiliary supervisor that aids the given one. To generate a trustworthy auxiliary graph, we offer a noise-filtering approach. Under the supervision of both the pre-defined graph and the auxiliary graph, a more effective clustering model is trained. Additionally, the embeddings of multiple layers are merged to improve the discriminative power of the representations. We also include a self-supervised clustering module to make the learned representations more clustering-aware. Finally, our model is trained using a triplet loss. Experiments are conducted on four available benchmark datasets, and the findings demonstrate that the proposed model outperforms or is comparable to state-of-the-art graph clustering models.
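A common way to build such an auxiliary graph is to link each node to its most attribute-similar peers; the cosine-similarity kNN construction below is an assumption used for illustration, not necessarily the paper's noise-filtered construction.

```python
import numpy as np

def knn_auxiliary_graph(X, k=2):
    """Build an auxiliary adjacency matrix from node attributes X of shape (n, d).

    Each node is linked to its k most cosine-similar nodes; this supplements the
    given topology with relations between nodes that are not directly linked.
    """
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-9)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # ignore self-similarity
    n = X.shape[0]
    A_aux = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(S[i])[-k:]:   # indices of the top-k most similar nodes
            A_aux[i, j] = A_aux[j, i] = 1.0
    return A_aux

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
print(knn_auxiliary_graph(X, k=2))
```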
APA, Harvard, Vancouver, ISO, and other styles
50

Lawley, Lane, Will Frey, Patrick Mullen, and Alexander D. Wissner-Gross. "Joint sparsity-biased variational graph autoencoders." Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 18, no. 3 (March 9, 2021): 239–46. http://dx.doi.org/10.1177/1548512921996828.

Full text
Abstract:
To bring the full benefits of machine learning to defense modeling and simulation, it is essential to first learn useful representations for sparse graphs consisting of both key entities (vertices) and their relationships (edges). Here, we present a new model, the Joint Sparsity-Biased Variational Graph AutoEncoder (JSBVGAE), capable of learning embedded representations of nodes from which both sparse network topologies and node features can be jointly and accurately reconstructed. We show that our model outperforms the previous state of the art on standard link-prediction and node-classification tasks, and achieves significantly higher whole-network reconstruction accuracy, while reducing the number of trained parameters.
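A generic way to bias a variational graph autoencoder toward sparse graphs is to upweight the rare positive (edge) entries in the reconstruction term and add the usual KL penalty; the weighting scheme below is an assumption for illustration, not necessarily the authors' joint sparsity bias.

```python
import numpy as np

def vgae_loss(A, probs, mu, log_var, pos_weight=None):
    """Weighted reconstruction + KL term for a variational graph autoencoder.

    A: (n, n) binary adjacency; probs: predicted edge probabilities;
    mu, log_var: (n, d) Gaussian parameters of the latent node embeddings.
    pos_weight upweights the rare edges so a sparse graph is not reconstructed as empty.
    """
    eps = 1e-9
    if pos_weight is None:
        pos_weight = (A.size - A.sum()) / max(A.sum(), 1.0)
    recon = -np.mean(pos_weight * A * np.log(probs + eps)
                     + (1 - A) * np.log(1 - probs + eps))
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
    return recon + kl

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.8).astype(float)        # toy sparse adjacency
mu, log_var = rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
Z = mu                                              # mean embedding used by the decoder here
probs = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))            # inner-product decoder
print(vgae_loss(A, probs, mu, log_var))
```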
APA, Harvard, Vancouver, ISO, and other styles
